Post subject: BetaArchive FTP Server: The Old vs The New 2018    Posted: Tue Apr 24, 2018 8:30 pm
Author: mrpijey (Administrator)
Well, here it is again: some info about my previous server setup vs the new one I just installed... I decided to make it in the same format as the previous one I made in early 2017. So enjoy!
--------

---------------------------------------------------------------------------

The Old

---------------------------------------------------------------------------


[Image: the old server setup (click to enlarge)]


I am not going to go into detail about each component here since it's covered in the previous version I made. Most of the components are the same, so the new stuff is more interesting.


---------------------------------------------------------------------------

The New

---------------------------------------------------------------------------


[Image: the new server setup (click to enlarge)]


  1. D-Link DGS-1216T, a 16-port L2 Ethernet switch. The backbone of my network. 16 gigabit ports, fully managed and used to separate my external network from my internal one using VLAN segmentation. This switch is really old (some 15+ years old) but it works great.
  2. 32-port KVM switch. This is an old Raritan KVM that uses Ethernet cables to connect to dongles that connect to your PC. It has an old Java interface which allows you to access each computer remotely, but due to its old Java version and security issues I use it only locally. It works great.
  3. External 4-bay drive enclosure. Connected with an older SAS SFF-8470 to SFF-8088 cable to one of the SAS controllers in my primary storage enclosure (4). I use this for personal backups.
  4. Primary storage enclosure. This is the main harddrive storage unit running FreeNAS and sharing all the drives to the primary server using the iSCSI protocol.

    Specifications:

    • Intel Xeon E5-2609 running at 2.40GHz (4c/4t).
    • ASUS Z9PE-D16 server motherboard supporting dual CPUs and ECC DDR3 RAM (one CPU occupied, half of the memory and PCI-express slots populated).
    • 32GB ECC DDR3 RAM.
    • 4x 1Gbit network interfaces (onboard, not used).
    • 1x Mellanox ConnectX-2 10Gbit network adapter (connected to primary server).
    • 1x 16-port LSI SAS-9300-16i SAS HBA controller connected to the harddrive backplane using mini-SAS cables.
    • 1x 16-port LSI SAS-9300-16e SAS HBA controller connected to the external 4-bay enclosure (3).
    • 16x 3.5" SAS/SATA storage bays for storage expansion.

    The server runs FreeNAS off an internal USB stick and shares all the harddrives directly through iSCSI (iSCSI Target mode) over one 10Gbit link. This storage enclosure holds all my private data, including the four BetaArchive harddrives you get access to through the FTP.
  5. Secondary storage enclosure. This enclosure is going to be used for all the backups of the primary enclosure. It's not yet fully installed but the idea is to either hold a full server motherboard setup like with the primary enclosure, or simply connect this bay to the primary enclosure through external SAS connectors. I have an extra 10Gbit NIC I can use for this to double the speed to 20Gbit if needed.
  6. Primary server. The center of the entire server system.

    Specifications:

    • Intel Xeon E5-1650 v4 @ 3.60GHz (6c/12t).
    • SuperMicro X10SRH-CLN4F server motherboard.
    • 128GB DDR4 ECC RAM.
    • 4x 1Gbit network interfaces.
    • 2x Mellanox ConnectX-3 10Gbit QDR network adapters (can optionally run at 40Gbit if needed), connected to the primary storage enclosure. Only one adapter is used at the moment, but the second may be used for the secondary storage enclosure.
    • 3x LSI SAS-9300-4i SAS HBA controllers (one onboard the motherboard, two others as PCI-express adapters) feeding the enclosure harddrive modules.
    • 24x 2.5" SAS/SATA storage bays for harddrive expansion. These drives hold the operating system, virtual machine images, cache disks and other data. I only use 6 bays at the moment, but they are all equipped with fast Samsung 850 Pro SSDs (250GB and 500GB models). Plenty of room for future expansion.

    The server runs Windows Server 2016 Datacenter and hosts several Hyper-V virtual machines running the services I need, such as my router (pfSense), email server (Kerio Connect), game servers, various lab OSes, file sharing server (DFS, FTP), backup server (Veeam), web server (IIS) etc. It does the grunt work in my network environment and acts as my domain controller, DNS and DHCP server. It also makes sure deduplication is running on the BetaArchive harddrives (see the rough sketch of the dedup idea right after this list).
  7. Testing server. HP ProLiant DL360 G8 1U server. This was the old primary server, but it has now been demoted to my lab server and emergency server in case the primary server breaks down. At the moment it doesn't run anything as I have not yet had the need to configure it.

    Specifications:

    • 2x Intel Xeon E5-2630 3.2GHz CPUs (6c/12t).
    • 288GB of ECC DDR3 RAM.
    • 8x 2.5" SAS/SATA storage bays.
    • Two power supplies that work in tandem to provide stable power.
    • 4x 1000Mbit built-in network interfaces with the option to upgrade them to 2x 10Gbit.
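As a side note on the deduplication mentioned under item 6: purely as a toy illustration of the fixed-chunk dedup idea (this is not how Windows Server 2016 deduplication is actually implemented, and the 64 KiB chunk size is an arbitrary assumption of mine), here is a small Python sketch of it:

Code:
import hashlib
import sys

CHUNK_SIZE = 64 * 1024  # arbitrary 64 KiB chunks, just for the sketch

def dedup_stats(paths):
    """Return (total_bytes, unique_bytes) across the given files."""
    seen = set()
    total = unique = 0
    for path in paths:
        with open(path, "rb") as f:
            while True:
                chunk = f.read(CHUNK_SIZE)
                if not chunk:
                    break
                total += len(chunk)
                digest = hashlib.sha256(chunk).digest()
                if digest not in seen:   # only count each unique chunk once
                    seen.add(digest)
                    unique += len(chunk)
    return total, unique

if __name__ == "__main__":
    total, unique = dedup_stats(sys.argv[1:])
    if total:
        print(f"{total} bytes scanned, {unique} unique "
              f"({100 * (1 - unique / total):.1f}% would dedupe away)")

Run it over a couple of files to get a rough feel for how much duplicate data they share.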

What's been removed from previous setups is the Lenovo tower server as well as several external USB harddrives, including a USB-connected 8-bay harddrive tower. I will most likely sell the Lenovo server along with several rack servers I have (3x HP ProLiant DL360 G7, 1x HP ProLiant DL380 G7, Apple Xserve), and all the USB harddrives will be cracked open and installed into the primary or secondary storage enclosure once I get all that sorted. The USB drives were mostly for backups and will continue to be used as such, but directly installed into the storage enclosures.

All in all I spent around 2000 euros for the primary server, the primary storage enclosure and four 10Gbit Mellanox network cards. I got bits and pieces from work and other sources as well to complete it all. What remains now is getting the backup enclosure up and running, and I also plan on pulling a 10Gbit fibre cable from my server to my workstation as I always max out my data transfers to the server (exactly 112MB/s due to the gigabit limitation) and need to remove that bottleneck.

I have not yet maxed out the 10Gbit connection to the storage drives since even when running SSDs you peak at around 4Gbit, so I got plenty of room for expansion. General disk I/O is of course slower across iSCSI than directly connected to a SATA controller, but since I don't push that much random data it won't matter for me.
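For reference, the rough arithmetic behind those figures (a back-of-the-envelope sketch; the ~94% usable payload at a 1500-byte MTU and ~550MB/s per SATA SSD are assumptions of mine, not measurements):

Code:
# Back-of-the-envelope numbers only, not a benchmark.
def payload_mib_per_s(link_gbit, efficiency=0.94):
    # ~94% usable payload after Ethernet/IP/TCP overhead at a 1500-byte MTU
    return link_gbit * 1e9 * efficiency / 8 / 2**20

print(f"1 Gbit link : ~{payload_mib_per_s(1):.0f} MiB/s (the familiar 112MB/s ceiling)")
print(f"10 Gbit link: ~{payload_mib_per_s(10):.0f} MiB/s")
print(f"one ~550MB/s SATA SSD: ~{550 * 8 / 1000:.1f} Gbit/s of a 10Gbit link")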

If anyone has experience with iSCSI and knows good tips and tricks to improve performance in Windows let me know.

--------

Now time for some very important credits!

  • Again a very big thanks to BA member dw5304 for all the tips, setup ideas and licenses for the server setup.
  • To all the BetaArchive members helping this community grow and also making sure that I need to run to the store to get more and larger harddrives :).

Post subject: Re: BetaArchive FTP Server: The Old vs The New 2018    Posted: Tue Apr 24, 2018 9:19 pm
Author: soulman (Donator)
mrpijey wrote:
I have not yet maxed out the 10Gbit connection to the storage drives since even when running SSDs you peak at around 4Gbit, so I got plenty of room for expansion. General disk I/O is of course slower across iSCSI than directly connected to a SATA controller, but since I don't push that much random data it won't matter for me.

If anyone has experience with iSCSI and knows good tips and tricks to improve performance in Windows let me know.


iSCSI performance can be very subjective, especially with how it's presented from the initiator. Depending on the workload, dataset and what your initiator is, there's a lot of tuning you can do that's not just the age-old network bodge. Though - next point...

I don't spot a 10G switch in there - you're not just bridging on the interfaces, are you?

Drop me a line on Discord sometime - I consult on and construct these kinds of architectures (and more) for a living, so I'm happy to help if you ever need any advice or want some tips on improving performance. FS.com is also a great shout if you want to keep your fibre run from your switch/server to desktop as cheap as possible without really skimping on the quality. I've not had a problem with any of their transceivers, SMF rolls or MMF patches.

EDIT:
Also had to do a double take at device #3, it looked a lot like those friendly little Xserves :)

Post subject: Re: BetaArchive FTP Server: The Old vs The New 2018    Posted: Tue Apr 24, 2018 9:30 pm
Author: mrpijey (Administrator)
I don't need a switch since it's directly connected, NIC <> NIC. The iSCSI network sits on its own subnet and IP address range and isn't affected at all by any other network. My switch can handle two 10Gbit GBICs, but I didn't see much point in running it through the switch since the server talks directly to the storage units anyway, as if it were a SAN.

I already got some high-end GBICs and cables since I work with these things myself. I just don't tweak much on the software side, nor do I do much with iSCSI since we run FC at work. And I also don't connect $20k units at home, so it's a bit different :).

dw5304 loaded me with some optimization documents I will pore through as well. Frankly I don't know how many more optimizations I can do without changing hardware to improve speeds, since saturating the 10Gbit connection would require a workload much higher than I've ever needed (remember that everything before was gigabit and that worked fine), but if I can improve latency etc. that would be nice. My current limitation is the workstation, but I can easily chuck a 10Gbit card into it, pull a fibre optic cable to the switch or server and get full speed.

Post subject: Re: BetaArchive FTP Server: The Old vs The New 2018    Posted: Tue Apr 24, 2018 9:52 pm
Author: soulman (Donator)
Are the interfaces bridged at all, is what I'm asking? If they are, that will induce latency and hurt the packet throughput, as this is largely a CPU-bound task unless you involve DPDK and some bleeding-edge 10G cards (which is still somewhat experimental), where this forwarding can be kept on the card. That was why I asked about the absence of a 10G switch. This is something I've been testing with x86 routers (i.e. control plane and data plane are both x86 controlled) for a certain Eastern-American-based ISP (mostly for low-cost edge applications). There are also various sync options you can use for your actual iSCSI target or VMFS stores or OS mount depending on where you use it, which affect performance (sync against file I/O or block I/O) - but if you already have this in hand then that's great :)

I'm still around to PM if you need assistance. Drop me some details and I'm happy to help if I'm available. If the latency is 10G to 10G, see above; if it's outside of your 10G loop, check the MTU of the workstation and also check for packet fragmentation. If you're still having trouble after your docs and after basic troubleshooting, you know where to find me :)

Post subject: Re: BetaArchive FTP Server: The Old vs The New 2018    Posted: Tue Apr 24, 2018 11:55 pm
Author: Darkstar (Donator)
For optimum iSCSI performance:
* run the two 10G interfaces standalone (no NLB or similar crap) and on two separate subnets
* enable jumbo frames
* enable LSO/TSO and LRO if the Mellanox cards' drivers support it properly (some older cards have buggy implementations that actually hurt performance... just try with TCP offloading enabled and disabled and see which works better)
* use the MS Software Initiator (you probably already do)
* use MPIO (Multipathing) with the "least queue depth" policy
* use multiple sessions (not the MCS/Multiple Connections per Session crap) to your storage target
* Sometimes, disabling Receive Side Scaling (RSS, called autotuning in netsh.exe) can also help performance a bit; try it with and without to see which works better

That should give you good performance for iSCSI. There might be some more tweaks that you can do, but don't expect any huge increases (most of the stuff above is already pretty close to noise if you do benchmark measurements).
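To put rough numbers on the jumbo frames point (a quick sketch that assumes plain iSCSI over TCP/IPv4 on Ethernet and ignores iSCSI PDU headers and TCP options):

Code:
# Rough per-frame efficiency and packet rate for a 10Gbit link.
ETH_OVERHEAD = 14 + 4 + 20   # Ethernet header + FCS + preamble/interframe gap
IP_TCP = 20 + 20             # IPv4 + TCP headers, no options

for mtu in (1500, 9000):
    payload = mtu - IP_TCP
    on_wire = mtu + ETH_OVERHEAD
    pps = 10e9 / 8 / on_wire           # packets/s needed to fill 10Gbit
    print(f"MTU {mtu}: {payload / on_wire:.1%} payload, "
          f"~{pps / 1e6:.2f} Mpps to saturate the link")

So jumbo frames buy a few percent of extra payload and, more importantly, roughly a sixth of the packet rate (and therefore CPU/interrupt load) for the same throughput.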

Post subject: Re: BetaArchive FTP Server: The Old vs The New 2018    Posted: Wed Apr 25, 2018 7:26 am
Author: mrpijey (Administrator)
soulman wrote:
Are the interfaces bridged at all, is what I'm asking? If they are, that will induce latency and hurt the packet throughput, as this is largely a CPU-bound task unless you involve DPDK and some bleeding-edge 10G cards (which is still somewhat experimental), where this forwarding can be kept on the card. That was why I asked about the absence of a 10G switch. This is something I've been testing with x86 routers (i.e. control plane and data plane are both x86 controlled) for a certain Eastern-American-based ISP (mostly for low-cost edge applications). There are also various sync options you can use for your actual iSCSI target or VMFS stores or OS mount depending on where you use it, which affect performance (sync against file I/O or block I/O) - but if you already have this in hand then that's great :)

I'm still around to PM if you need assistance. Drop me some details and I'm happy to help if I'm available. If the latency is 10G to 10G, see above; if it's outside of your 10G loop, check the MTU of the workstation and also check for packet fragmentation. If you're still having trouble after your docs and after basic troubleshooting, you know where to find me :)

No, it's not bridged, just a simple point-to-point connection. And the cards use RDMA so the CPU is hardly affected. I don't use VMFS stores since I don't use ESX; it's a simple iSCSI target/initiator setup tied directly to the harddrives in FreeNAS.

Yeah, the basic stuff like MTU etc. has already been applied.

Darkstar wrote:
For optimum iSCSI performance:
* run the two 10G interfaces standalone (no NLB or similar crap) and on two separate subnets
* enable jumbo frames
* enable LSO/TSO and LRO if the Mellanox cards' drivers support it properly (some older cards have buggy implementations that actually hurt performance... just try with TCP offloading enabled and disabled and see which works better)
* use the MS Software Initiator (you probably already do)
* use MPIO (Multipathing) with the "least queue depth" policy
* use multiple sessions (not the MCS/Multiple Connections per Session crap) to your storage target
* Sometimes, disabling Receive Side Scaling (RSS, called autotuning in netsh.exe) can also help performance a bit; try it with and without to see which works better

Yeah, I've done most of this. MPIO is not used since I don't use multipath at all, nor do I need it. It seems I already did most of the stuff to get the most performance out of this card. The Mellanox cards come with built-in tweaking and benchmarking tools, but that of course requires that everything is shut down, which I can't be bothered with at this moment. But everything works, nothing really bottlenecks anything (except for the harddrives themselves of course, due to their mechanical nature) and any other performance issues are to be expected with this kind of network.

Post subject: Re: BetaArchive FTP Server: The Old vs The New 2018    Posted: Wed Apr 25, 2018 10:28 am
Author: Darkstar (Donator)
mrpijey wrote:
MPIO is not used since I don't use multipath at all, nor do I need it.


Well, it increases the transfer speed to 20Gbit instead of 10Gbit, and can also reduce latency if SCSI CDBs are getting queued up in the host. But I guess transferring ~1 gigabyte per second is more than enough for your use case, yes :)

Post subject: Re: BetaArchive FTP Server: The Old vs The New 2018    Posted: Wed Apr 25, 2018 12:16 pm
Author: mrpijey (Administrator)
MPIO only works when you have multiple physical paths to your storage network, i.e. for redundancy purposes, so the storage system doesn't go down because one of the connections dies. What you describe isn't really MPIO but more like NIC teaming, where two NICs are tied together to double the bandwidth. MPIO does more load balancing etc., which won't affect me a bit since everything comes from a single source and goes to a single source, i.e. server<>fileserver. I could however connect two network cards at both ends, put them on different subnets or IPs and then have MPIO functionality, but that would be useless for me.

We use MPIO at work for our systems but it doesn't increase speed, it just load balances and makes sure that no one loses connectivity if one of the connections fails or mirrored file hosts die.

To really max out my connection I would need to max out 2-3 workstation SSDs at full speed, which I never do except for very short bursts when I stream video etc., and I never do that on the server. However, I do max out at exactly 112MB/s on file transfers from the workstation, but that's the gigabit connection topping out; that's why I will consider moving the other 10Gbit card to my workstation instead and having a direct 10Gbit connection to the main server. I never maxed it out before, but that was probably because I bottlenecked the storage I/O on the old system, whereas now I have three independent 12Gbit/s-per-channel SAS adapters that handle the load... Tech is sweet :).

Post subject: Re: BetaArchive FTP Server: The Old vs The New 2018    Posted: Wed Apr 25, 2018 3:49 pm
Author: dw5304 (Donator)
For everyone who is giving mrpijey advice on his iSCSI setup: please remember that every disk is a single disk and he does not use RAID, thus he will max out at whatever bus speed the SAS disks are at.


Post subject: Re: BetaArchive FTP Server: The Old vs The New 2018    Posted: Wed Apr 25, 2018 4:08 pm
Author: Darkstar (Donator)
mrpijey wrote:
MPIO only works when you have multiple physical paths to your storage network, i.e. for redundancy purposes, so the storage system doesn't go down because one of the connections dies. What you describe isn't really MPIO but more like NIC teaming, where two NICs are tied together to double the bandwidth. MPIO does more load balancing etc., which won't affect me a bit since everything comes from a single source and goes to a single source, i.e. server<>fileserver. I could however connect two network cards at both ends, put them on different subnets or IPs and then have MPIO functionality, but that would be useless for me.

We use MPIO at work for our systems but it doesn't increase speed, it just load balances and makes sure that no one loses connectivity if one of the connections fails or mirrored file hosts die.


Wrong. NIC teaming is not the same as multipathing. And redundancy is only one aspect of multipathing; increased performance is another.

Let the storage guy (with over 10 years of experience in that area ;-) ) explain:

Basically, what you do is tell your system that there are two independent physical paths to its LUN. Then you create a session (in iSCSI terminology) through each of the paths. And then you tell the multipath driver how to handle the multiple paths. If you tell it to use an "active-standby" policy, all you get is fault tolerance, i.e. all data traffic goes through one path until that path dies, then everything switches to the other path. This is apparently what you're doing at your company, judging from what you say you're seeing in terms of performance.

If you tell your MPIO driver to use "round-robin", "least queue depth" or any other "active-active" policy, then the reads and writes are distributed over both paths according to a certain heuristic. This means you actively use both paths and can saturate them both, in the best case getting twice the bandwidth. It looks like this in the iSCSI initiator:
[Screenshot: iSCSI initiator showing two active paths and two standby paths]
In this case we have two active paths that are both transferring data at the same time, and two standby paths which only get used when both active paths fail (this is because of the particular type of SAN we use, which has dual controllers; two of the paths are faster, so the slower ones get set to "standby" by default).
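As a toy illustration of what the "least queue depth" active-active policy does (a simulation sketch only, not the actual MPIO driver logic; the random completion rate is an arbitrary assumption):

Code:
import random

# Each new I/O goes to whichever path currently has the fewest outstanding
# commands; completions are simulated randomly just to keep the queues moving.
outstanding = {"path-A": 0, "path-B": 0}
issued = {"path-A": 0, "path-B": 0}

random.seed(1)
for _ in range(10_000):
    target = min(outstanding, key=outstanding.get)  # least queue depth
    outstanding[target] += 1
    issued[target] += 1
    for path in outstanding:
        if outstanding[path] and random.random() < 0.5:
            outstanding[path] -= 1  # an I/O on this path completed

print(issued)  # both paths end up carrying roughly half the I/Os

With an "active-standby" policy the second counter would simply stay at zero until the first path failed.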

NIC teaming on the other hand works similarly, but on the network level instead of the storage layer. First, NIC teaming (aka link aggregation) doesn't magically increase the bandwidth, because a single data flow from one source IP to a single destination IP is still limited to one of the two paths; the bandwidth only increases statistically if you have many connections from many IP addresses flowing through your NIC team, and even then an even distribution is not guaranteed. (Depending on the distribution function you use, you can replace "IP" with "MAC" in the above paragraph, but the core issue still remains.)

From the "outside", both look the same (i.e. you have two cables going from your server to your storage), but what happens on the "inside" is totally different....

Theoretically, you could also use NIC teaming to increase the bandwidth to your storage, but this has multiple disadvantages, mainly:
  • you still need two IP addresses on your "teamed" NIC on either your server or your storage, or both, to be able to use the increased throughput in all cases.
  • the whole failover and buffering etc. happens in the network stack, out of reach of the storage stack, which increases error recovery latency. The storage driver cannot see packets being dropped or ports going down, for example (because the network stack hides that while it's trying to recover the link), and just waits for them excessively long instead of switching to the other path quickly.

That's why everyone agrees that MPIO is better than teaming for iSCSI traffic.
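To make the "a single flow is still limited to one link" point concrete, here is a sketch of the kind of hash-based link selection a typical LAG uses (the exact hash varies by switch and NIC, so this is only illustrative):

Code:
import zlib

def pick_link(src_ip, dst_ip, n_links=2):
    # A flow is hashed on its addresses and always lands on the same link,
    # so one iSCSI session can never use more than one link's bandwidth.
    return zlib.crc32(f"{src_ip}-{dst_ip}".encode()) % n_links

print("single server<->storage flow ->", pick_link("10.0.0.1", "10.0.0.2"))

# Many different clients spread out statistically (no guarantee of evenness).
links = [pick_link(f"10.0.1.{i}", "10.0.0.2") for i in range(1, 21)]
print("20 different clients ->", {link: links.count(link) for link in set(links)})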

dw5304 wrote:
For everyone who is giving mrpijey advice on his iSCSI setup: please remember that every disk is a single disk and he does not use RAID, thus he will max out at whatever bus speed the SAS disks are at.

This changes nothing. This is about the traffic between server and storage, which can be much higher than what a single disk supports. It actually reinforces the (possible) advantages of having >10Gbit of bandwidth, because you can stream data more quickly to multiple disks all at once (in theory, of course, depending on other factors like CPU power and PCI bandwidth, etc.).


EDIT:

This is a simple test in a small lab (it's not geared towards performance so don't worry about the low numbers here ... ;-) )
[Screenshot: network graphs of the two storage interfaces during a single file copy]
In this screenshot you can see that the graphs of both network interfaces (called "Storage 1" and "Storage 2") show activity just from copying a single file to an iSCSI-attached LUN with MPIO and the "Least Queue Depth" setting. Storage 1 does a bit less I/O, but that is due to other circumstances; in theory you should be able to saturate them rather evenly. This proves that MPIO can indeed increase performance and do more than "just" provide failover protection.

Post subject: Re: BetaArchive FTP Server: The Old vs The New 2018    Posted: Thu Apr 26, 2018 12:42 am
Author: mrpijey (Administrator)
You seem to have misunderstood what I asked for. I asked how to make my current setup faster with the hardware I currently have. As I mentioned already, MPIO requires an additional physical connection (that's what the MP stands for), which is not what I wanted since I am not after an HA solution, and it would be far more complicated for me to add another NIC at both ends than to just replace them with faster 40Gbit adapters. If I wanted higher throughput I would have done that already.

I also never claimed that MPIO is the same as NIC teaming; I said that what you described sounded more like NIC teaming, because I thought it was understood that I asked how to get the best speed out of my iSCSI setup, not that I wanted redundancy with the added benefit of increased speed (which is what MPIO offers if configured as such). I have over 20 years of experience with this and used to set up HA solutions with old coax-based Ethernet, so this is nothing new, but I've had little experience with iSCSI itself and needed tips on how to tweak the right settings for optimum performance.

But based on the answers I've got so far (which are the same answers I got by doing a simple Google search), I seem to already know what I needed to know to get the good performance I'm getting. Thanks anyway.

Post subject: Re: BetaArchive FTP Server: The Old vs The New 2018    Posted: Fri Apr 27, 2018 8:27 am
Author: soulman (Donator)
mrpijey wrote:
You seem to have misunderstood what I asked for. I asked how to make my current setup faster with the hardware I currently have. As I mentioned already, MPIO requires an additional physical connection (that's what the MP stands for), which is not what I wanted since I am not after an HA solution, and it would be far more complicated for me to add another NIC at both ends than to just replace them with faster 40Gbit adapters. If I wanted higher throughput I would have done that already.

I also never claimed that MPIO is the same as NIC teaming; I said that what you described sounded more like NIC teaming, because I thought it was understood that I asked how to get the best speed out of my iSCSI setup, not that I wanted redundancy with the added benefit of increased speed (which is what MPIO offers if configured as such). I have over 20 years of experience with this and used to set up HA solutions with old coax-based Ethernet, so this is nothing new, but I've had little experience with iSCSI itself and needed tips on how to tweak the right settings for optimum performance.

But based on the answers I've got so far (which are the same answers I got by doing a simple Google search), I seem to already know what I needed to know to get the good performance I'm getting. Thanks anyway.


I'll be honest - I'm not sure why you posted this question then, and on BA of all places, if BA is not an IT support forum? It might be best for you to share your experience and setup on Reddit or ServeTheHome or whatever forum people use for education labs these days. They'll probably give you the same answers, because they're common sanity checks to make, no matter how much experience you have.

You're using ZFS as your storage filesystem and backend, are you not? I will say again: look at the workload you are using your storage system for, and then check your disk sync options against the best-practice tuning guides for your workload as a starting point. If you've made sure you've not bodged a direct-to-direct connection, which I see very often with people who start out with FreeNAS, then you've got some FS and initiator tuning to do. RDMA isn't the cure-all for PEBKAC - but it is if every packet is 9000 bytes (~3 million PPS), and a bridge, a disabled driver etc. isn't hampering that. Whilst I haven't actively used FreeNAS in a few years, I can easily tell you that it and BSD have not always been plug and play with optimum NIC settings for 10G or IB, and it is quite common to see folks bodge a dual-port 10G NIC into a bridge to make up for the lack of a switch in a lab, then wonder why the packet throughput suffers drastically or why they get higher CPU load than normal even with RDMA enabled.

And technically, Darkstar is right: you could increase the throughput with the hardware you currently have, for the sake of a £10 DAC and MPIO. Obviously that won't fix sync latency, which I'm guessing is what you're getting at here, but that's also where 10, 20 or 30 years of storage-system experience won't help you if you just jump straight into the ZFS or off-the-shelf ZFS rabbit hole without reading the docs! Your $20k or $200k SAN typically comes with an account manager you can complain to if you haven't RTFM'd and still get support, but ZFS-based solutions do not - and that's the big difference.

Post subject: Re: BetaArchive FTP Server: The Old vs The New 2018    Posted: Fri Apr 27, 2018 9:39 am
Author: mrpijey (Administrator)
I didn't post any question; all I said was that if someone had any tips it would be appreciated. This post was about giving info about the new server setup, not asking how to tweak network settings. Darkstar and you replied to it with tips, so I didn't ask anything.

I am not using ZFS at all; I am using iSCSI with direct passthrough to the physical harddrives (i.e. sharing the drives on block level, not sharing a virtual drive image) so I can use the NTFS and ReFS filesystems I have on the drives. I don't use any switches or anything between the NICs.

If I needed to buy extra stuff to increase throughput then it wouldn't be with the hardware I've got, would it? As I mentioned, if I had issues with raw throughput I would have opted for 40Gbit from the get-go. The throughput is fine; it's the smaller stuff I look to improve, such as latency and IOPS.

Post subject: Re: BetaArchive FTP Server: The Old vs The New 2018    Posted: Fri Apr 27, 2018 7:11 pm
Author: Darkstar (Donator)
Yeah, I thought the cards you had were dual-port cards. It's been so long since I last saw single-port 10GbE cards that I kind of forgot they were ever a thing ;-)

If you ever need/want the increased throughput, drop me a line; I have a couple of spare dual-port 10Gb NICs (Chelsio T320 I think, full height, with SFPs if you need them) that I could send you.
