colinstu - Thursday, August 9, 2018 - link
If only cheap 5-8 port 10GbE switches existed...
Railgun - Thursday, August 9, 2018 - link
They’re coming. Ubiquity should hopefully have a 6-port maybe PoE soon.
Railgun - Thursday, August 9, 2018 - link
Ubiquiti that is.
Samus - Thursday, August 9, 2018 - link
I use the Netgear GS110MX (2-port 10GbE + 8-port 1GbE) and can't say enough good things about it, especially after my disastrous experience with the Asus XG-U2008 (it wouldn't negotiate at 10GbE with either of my NICs). Asus's official stance is that it is only certified for use with their own XG-C100C (or whatever their NIC is), which is ridiculous because one of my NICs is an onboard Quanta that uses the SAME X550 chip as the Asus PCIe NIC (I think?).
Valantar - Friday, August 10, 2018 - link
$200 for a 2-port switch (the GbE ports barely count, given that you can get 8-port GbE switches for $15) is by no means cheap. I need at least three ports (my NAS, my desktop and my GF's workstation), and given that each of those will need ~$100 NICs, the switch really can't be much more expensive than that, and of course with the requisite number of ports. Granted, there is some benefit to having your GbE ports linked directly to your 10GbE ports if multiple simultaneous accesses are required, but my only reason for wanting this is increasing bandwidth for single devices. I'd be happy with 5GbE, really (or 2.5 if it was cheap enough!), but $200 for two ports is out of the question. $100 for five 5GbE ports? Sure, I'd take that.
DanNeely - Friday, August 10, 2018 - link
You're not going to see a 4-port switch for $100ish until single-port NICs get down around $25 or so; currently 10GbE hardware costs about $80/port. If you need more than 2 ports now, Netgear's XS505M has 4x 10GbE Ethernet ports (and 1x 10GbE SFP port) for $380, or there's the MS510TX, which has 1x 10GbE Ethernet, 1x 10GbE SFP, 2x 5GbE, 2x 2.5GbE, and 4x 1GbE ports.
azazel1024 - Friday, August 10, 2018 - link
I hadn't seen the MS510TX before. That looks like it could be a somewhat future-proof solution for me. I really only need 5GbE between my desktop and my server, but I'd also like at least a couple of 2.5GbE ports for supporting future high-speed wireless access points (I assume we will see some 802.11ax WAPs/routers with 2.5GbE ports). As it stands, with 802.11ac 80MHz 2:2 I can see 60MiB/sec or a little better between my wireless network and my laptop with an Intel 7265ac in it. With 802.11ax, especially with better client-side MU-MIMO, I can imagine many scenarios exceeding a single 1GbE link.

Looks like some places have it for $270 new. I could probably spring for that and a couple of low-cost 10/5/2.5/1 GbE capable cards. That makes my cost under $500 with some future proofing. Just two 10/5/2.5GbE ports isn't enough for my needs, but at least a pair at 5GbE and a pair at 2.5GbE should be enough for a few years. And a 10GbE port means I could always add a second, similar switch someday, and then I've got 4x 5GbE and 4x 2.5GbE.
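Rough math on the wireless point above, as a quick Python back-of-the-envelope (the 60% efficiency factor is my own assumption, roughly matching what I measure):

# Convert Wi-Fi PHY rates to expected real-world MiB/s (efficiency assumed).
PHY_RATES_MBPS = {
    "802.11ac 2x2 80MHz": 866.7,
    "802.11ax 2x2 80MHz": 1201.0,
}
EFFICIENCY = 0.6  # assumed fraction of PHY rate delivered as real throughput

for name, phy in PHY_RATES_MBPS.items():
    mib_s = phy * EFFICIENCY / 8 * 1000**2 / 1024**2  # Mbit/s -> MiB/s
    print(f"{name}: ~{mib_s:.0f} MiB/s real-world")

# Prints ~62 MiB/s for 11ac (matching the ~60 I see) and ~86 MiB/s for 11ax,
# which is already crowding a single GbE link's ~117 MiB/s practical ceiling.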
Valantar - Friday, August 10, 2018 - link
Are switches usually built from single-port controllers? What we need are more affordable multi-port controllers. With more ports integrated into a single chip, prices should scale far better than 1:1 per port compared to single-port NICs.
Maltz - Thursday, August 9, 2018 - link
Did they actually say something along those lines, or is that speculation? Because I'd be all over that, but I wasn't exactly holding my breath.
Valantar - Friday, August 10, 2018 - link
When has anything made by Ubiquiti ever been cheap? Sure, there are pricier options, but cheap? No.
pixelstuff - Thursday, August 9, 2018 - link
If only cheap 2.5 GbE switches existed... I think I'd be happy enough.
CaedenV - Friday, August 10, 2018 - link
Yep, 2.5gig Ethernet would be just fine for home use... It just needs to be faster than my spinning drives that I still have.
iwod - Friday, August 10, 2018 - link
Well, your spinning HDD would still overflow your 2.5Gbps connection. I wish for 5Gbps.
But from the looks of things, the home NAS market isn't getting any bigger.
azazel1024 - Friday, August 10, 2018 - link
One spinning HDD is usually only pushing about 160MiB/sec and faster 7200rpm drives aren't really getting past 200MiB/sec. A single 2.5GbE link should be able to push about 300MiB/sec.
So even a RAID0 array of a couple of hard drives might not fully saturate a 2.5GbE link.
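Quick math on that in Python (the ~7% protocol-overhead figure is my ballpark assumption, not a spec number):

# Spinning-disk throughput vs. usable 2.5GbE payload rate.
LINK_GBPS = 2.5
OVERHEAD = 0.07  # assumed TCP/IP + SMB framing overhead

payload = LINK_GBPS * (1 - OVERHEAD) * 1e9 / 8 / 2**20  # bits/s -> MiB/s
print(f"2.5GbE usable payload: ~{payload:.0f} MiB/s")   # ~277 MiB/s
print(f"single HDD, typical: 160 MiB/s = {160/payload:.0%} of the link")
print(f"single HDD, fast 7200rpm: 200 MiB/s = {200/payload:.0%} of the link")

# A 2-drive RAID0 pair peaks around 320-400 MiB/s on outer tracks, but
# sequential rates fall by roughly half on inner tracks, so averaged over
# the whole disk the pair sits near (or below) the link's payload rate.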
iwod - Sunday, August 12, 2018 - link
I stand corrected. Somewhere in the back of my mind I remember they were doing 2xxMB/s already. Turns out only those newest 12TB drives manage to just touch 200MB/s read and write.
Then yes, 2.5Gbps is good right now.
abufrejoval - Friday, August 10, 2018 - link
Has anyone had any experience with the Netgear MS510TX?
It's NBASE-T and supports 2x 2.5, 2x 5, 1x 10 and 4x 1 Gbit at 26 Watts max power.
If that fan is well designed and variable, it might have acceptable noise levels...
It seems it's actually being phased out already, because I only managed to find it when looking explicitly for that product, not by navigating their site top-down.
mode_13h - Monday, August 13, 2018 - link
I'd be happy with 4 fast ports. 2 isn't enough, but I don't need 8, or even 5.
AdrianB1 - Thursday, August 9, 2018 - link
Fifteen years after Gigabit Ethernet arrived, 2.5Gb and 5Gb are still not affordable, and 10Gb requires different cables, so except for new deployments there is no way to do it. With fiber optics becoming cheaper and more energy efficient than copper, why even bother?
edzieba - Friday, August 10, 2018 - link
Because copper is easy to install and terminate with pretty basic tools (even your ultra-fancy shielded Class FA is fine with hand crimpers as long as you use the right dies). To do a fibre install with anything other than pre-made, pre-terminated fibre, you need to do the cut-and-clean-and-cleave-and-clean dance (with quality cleavers), and then either muck about with resin and manual insertion and polishing, or rent a fusion splicer for pre-made pigtails.
Skill level required is higher, tool cost is dramatically higher, and individual component parts are all more expensive. This is why even in commercial installs fibre-to-the-desk died a quick death and stayed dead in favour of copper to the desk and fibre backbones.
nathanddrews - Friday, August 10, 2018 - link
QFT
The main reason 10GbE RJ45 hasn't dropped in price quickly is because for these higher speeds, the enterprise world doesn't use it much compared to SFP. With 1GbE, RJ45 benefited from economies of scale to drive prices down, but we aren't getting that with 10GbE+ copper. What's kind of cool is that even though 10GbE RJ45 isn't mainstream, the cost of SFP equipment has been dropping to the point of being reasonable (relatively speaking) if you look for deals. I've seen 10GbE NICs under $70, 15m cables (complete) for $20, and 8-port+ switches for under $500 on the regular.
SFP simply allows for:
Much greater port density
Longer runs (literally miles)
Greater bandwidth
Greater security
Lower latency
Greater reliability
Negligible EMI
Lower power usage (<1W/port vs 2-4W/port)
Once you get into the realm of QSFP+ breakout cables, the value proposition is absolutely undeniable (at scale, of course). While the "last meter" for client connections and PoE will continue to use RJ45 for some time, everything else is evolving. As I wire up my house with Cat6A, I'm also running conduit for the inevitable need (need?) for optical HDMI, CFP, SFP, and whatever else is coming down the line in the next 20+ years.
AdrianB1 - Friday, August 10, 2018 - link
You are absolutely right, but I was thinking about a different target for this: not enterprises or SMEs, not office users, but home users that want a faster way to transfer their (porn/pirated) movies from their main computer to the NAS in the same room, or to the player next to the big TV screen. In that case they can buy pre-built cables in either fiber or Twinax and link the stuff via small-port-count SFP+ switches. For example, my NAS is next to my workstation; the stuff I record with the camera is transferred over a 1Gbps connection, and I have a few TB of that. In this case I would go for Twinax. I am not considering laying out fiber across my home, I don't need it, but where I need more speed I would go for Twinax, fiber and copper, in that order.
nathanddrews - Friday, August 10, 2018 - link
Twinax is badass.
genzai - Thursday, August 9, 2018 - link
Cards with the same hardware have been available at retail and online for over a year from companies like Akitio and StarTech. This lower price point is new, but Akitio has been in the ~$120 range when on sale for a long time already.
abufrejoval - Thursday, August 9, 2018 - link
I can well understand the pain and frustration of waiting for affordable 10Gbit.
But I've played with 10GBase-T for many years now, thanks to some employer sponsorship and the odd affordable 10GBase-T NIC, which can, after all, always be *directly connected* if there's no arm or leg left to pay for the switch.
In the office lab, I have had 48-port 10GBase-T switches to play with for some years. But even in the home lab, an Asus XG-U2008 switch will connect two 10Gbit ports with eight 1Gbit ports *and* be completely silent. If you consider that 10 Watts/port is where 10GBase-T started, that's quite a feat. And considering that the 48-port HP switch has fan capacity to push out 500 Watts of heat, I can assure you that it is very noisy starting up.
You and I want 16 ports at $160, completely silent, and that won't happen at 10 Watts/port. Green, power-reduced Ethernet as well as NBase-T have a good chance of changing that, because 3 Watts may be good enough for 10Gbit, and 2.5Gbit can be had in tranquility.
Netgear does sell some switches offering NBase-T at 20 Watts or so for the whole switch (not quite passive, but perhaps acceptable noise levels during quiet hours), which give you something like 1x 10Gbit, 2x 5Gbit, 2x 2.5Gbit, and the remaining ports at 1Gbit.
Turns out, 10Gbit on the wire won't get you 10x performance in most workloads I have measured, so this may be a better deal than you think: at least with Windows, I haven't been able to get much better than 250MByte/s across a 10Gbit link anyway.
I operate one 14TB primary server in my home lab, running Windows Server, and a backup with the same OS and capacity. Every now and then, I switch on the backup and have it synchronize the files, some of which are quite small.
And because they are so small, what really determines the speed of synchronization is latency, not bandwidth. Sometimes the effective data transfer rate drops to kilobytes/s; on bigger files it will go to 250MByte/s, but I've never seen 1GByte/s or even half that.
There are quality LSI/Avago/Broadcom hardware RAID arrays on both sides, and they certainly do >500MB/s sequential: I've measured that, copying data to RAM disks. I have also copied RAM disk to RAM disk and been quite disappointed, because it's nowhere near the 70GB/s my 4-channel Haswell Xeon is supposed to be able to do, nor even the 25GB/s it should be able to do per channel.
Linux is quite a different story: at least with iperf3 there is absolutely no problem pushing 960MByte/s at idle CPU clocks, even with a single thread. I did some iSCSI testing some years back, and while I don't have the details in my head any more, I know that even with Linux at both ends, the difference between the theoretical max and what got delivered was quite big.
So if you really want to get better than 1Gbit, get a cheapo NBase-T card like this or the Aquantia 107 and either do direct connects, use an ordinary Linux PC as a switch, or get one of those entry-level Netgear boxes which support 2.5, 5 and 10Gbit, and see if you are actually able to take advantage of 10Gbit speeds.
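If you want a quick sanity check of the raw link before blaming the file system, a few lines of Python will do roughly what iperf3 does (a minimal sketch; the port number and the single-connection design are my choices, not iperf3's internals):

# Raw-TCP throughput probe. Run "python probe.py server" on one machine
# and "python probe.py <server-ip>" on the other.
import socket, sys, time

PORT = 5201        # arbitrary; happens to match iperf3's default
CHUNK = 1 << 20    # 1 MiB buffer
DURATION = 10      # seconds to transmit

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        total, start = 0, time.time()
        while (data := conn.recv(CHUNK)):
            total += len(data)
        elapsed = time.time() - start
        print(f"{total / 2**20:.0f} MiB at {total / elapsed / 2**20:.0f} MiB/s from {addr[0]}")

def client(host):
    payload = b"\x00" * CHUNK
    with socket.create_connection((host, PORT)) as conn:
        end = time.time() + DURATION
        while time.time() < end:
            conn.sendall(payload)

server() if sys.argv[1] == "server" else client(sys.argv[1])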
No need to complain any more, time to tune!
oRAirwolf - Friday, August 10, 2018 - link
I have a direct connection between my Windows 10 desktop and FreeNAS server, with an Aquantia AQC107 in the desktop and an Intel X540-T1 in the FreeNAS server, which is a Dell T320 with 8x WD Red 8TB drives. I get about 800 megabytes per second sequential both ways, and about 1.1 gigabytes per second if I transfer something from the FreeNAS ARC cache to my desktop SSD array. Not sure how you aren't getting speeds like that from Windows. Are you not using jumbo frames?
AdrianB1 - Friday, August 10, 2018 - link
I've seen several tests on the Internet where 600-800 MB/sec file copies were achieved on Windows. Aquantia was in the 600 MB/sec range, Intel in the 800 MB/sec range.
azazel1024 - Friday, August 10, 2018 - link
Yeah, I wonder if you've got network configuration issues going on, or maybe you're running a stale version of SMB. I've got dual 1GbE links between my desktop and server running through my switch using SMB Multichannel, and I have no trouble pushing 235MiB/sec, which is the max link speed with overhead. Granted, that is slower than what you are talking about...but only barely.

With small files it'll slow down a fair amount, but that is as much my RAID0 array, which is a pair of 3TB Seagate Barracudas in both machines. Smallish files like pictures and MP3s will run at more like 80-120MiB/sec. However, if I put the SSD in my desktop and the one in my server on the network and do a file copy of even small files like that, I can transfer a 2GiB folder of 2-4MiB images at about 200-230MiB/sec (the server has a not-super-fast, rather old first-generation SATA III 60GB SSD as the boot drive). Large files tick along at the link limit of 235MiB/sec.
SMB has a fair amount of overhead per file, which is usually the hit you see with small files, because of how the network file system handles communication. But with a 10GbE link, if drive speed were taken out of the picture, I'd think you'd still see at least 800-900MiB/sec in RAM disk to RAM disk transfers between reasonably fast machines, even with small files.
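To put rough numbers on the per-file hit, here's a toy model in Python (the 2ms per-file cost is an assumed figure for illustration, not a measured SMB constant):

# Effective throughput when every file costs a fixed protocol round-trip
# on top of its transfer time. Per-file cost is assumed, not measured.
LINK_MIB_S = 1100    # ~10GbE payload rate in MiB/s
PER_FILE_S = 0.002   # assumed per-file overhead (2 ms)

for file_mib in (0.1, 1, 4, 100, 1000):
    effective = file_mib / (file_mib / LINK_MIB_S + PER_FILE_S)
    print(f"{file_mib:>6} MiB files: ~{effective:,.0f} MiB/s effective")

# 0.1 MiB -> ~48 MiB/s, 1 MiB -> ~344, 4 MiB -> ~710, 1000 MiB -> ~1,098:
# tiny files crater throughput long before the wire is the bottleneck.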
abufrejoval - Friday, August 10, 2018 - link
I've carefully rechecked everything and I can confirm that I indeed *can* get to 400MB/s, which is what both RAIDs are capable of sustaining for *huge* files like Clonezilla images.
So I'll have to partially retract the Windows 'dissing' :-)
One issue I have with Windows is that iperf3 results are terribly inconsistent: I may get 6Gbit/s on a first run, and it will then drop to 2Gbit/s ever after.
Perhaps Windows is all *too smart* and notices that nothing useful is happening...
And it can't be renegotiating line speeds as the ASUS switch actually doesn't support the NBASE-T intermediate line rates and the LEDs stay blue for 10Gbit all through.
None of that with Linux on both ends: 960MB/s and never lower.
In any case...
Since the vast majority of all files are relatively small, actual copies tend to be vastly slower, as 512MB of write-back cache cannot quite compensate for the fact that the OS will try to protect against metadata corruption by serializing writes, and it's still mechanical disks underneath.
So let me just say that you have to manage your expectations, and that a faster network is more likely to expose other bottlenecks.
DigitalFreak - Thursday, August 9, 2018 - link
You can already buy the Aquantia-based card for $84: https://amzn.to/2KGVqSO It's QNAP-branded, but works just fine in a Windows PC with the drivers from Aquantia's site.
Beaver M. - Friday, August 10, 2018 - link
Uh... so that means 10 Gbit won't work with Cat7 cables on this card??
Valantar - Friday, August 10, 2018 - link
Cat7? Given that Cat7 is a _higher_ rating than the stated Cat6A, it shouldn't be a problem, right? Requirements tend to be _minimum_ requirements, after all. It's not like our Cat5-requiring GbE connections suddenly drop to 10Mb/s if we plug in a Cat6 cable ;)
ERIFNOMI - Friday, August 10, 2018 - link
Cat7 is not recognized by the TIA. As soon as you terminate it for Ethernet, it's 6a at best. Because of this marketing bullshit (like "6e", which also isn't a real thing but you'll find people trying to sell it), they're skipping 7 and going to 8 for the next version, but you probably won't ever be using that in your house.
10GBase-T works fine over Cat6 for shorter distances (55m by spec). You need 6a to get the full 100m. 7 gets you absolutely nothing. 8 is for very high speed, very short runs in data centers.
Beaver M. - Friday, August 10, 2018 - link
Uh, I am running 10 Gbit over Cat7 cables.
azazel1024 - Friday, August 10, 2018 - link
To add, Cat5e can do 10GbE up to 45 meters.
But Cat6 is not going to handle a noisy EMI environment or bundled-cable crosstalk very well, and Cat5e handles them significantly worse than that.
I've seen 10GbE run just fine over 10-15 meters of cable in computer labs where you do have a lot of cabling going on. I've never seen it done over longer distances, but in a home, so long as you didn't run it along power wiring or assemble big bundled runs, you'd probably be able to push 10GbE to anywhere in a reasonably sized house over Cat5e without issues.
Now, if you are cabling from scratch, I'd at least use Cat6. IMHO, I don't see a huge need to run Cat6a, given the greater difficulty of terminating and running it and the extra cost; but Cat6 over Cat5e I do see the reasons for. Then again, almost every drop in my house is accessible enough that I don't have to open walls or ceilings to pull a new cable to the location. So if wiring with Cat6 (I have a few Cat5e runs that I installed when I first moved in 5 years ago, but 80% are Cat6) turns out to be a mistake later with some future standard, it won't take all that much effort to redo things.
PixyMisa - Friday, August 10, 2018 - link
Cat7 is a weird thing that lives in its own weird land. You want Cat6a.
Beaver M. - Friday, August 10, 2018 - link
Everywhere I searched, everyone said Cat7 is the way to go if you want to be future-proof and that Cat6a isn't shielded well enough, especially when there are several cables side by side. Even the professional I hired said so. The cables got installed in the house and they work fine, even over distances >50 meters and sitting right beside 240V power cables.
They are very stiff, so it's better to use single sockets, which is more expensive and takes up more space on the wall. That's the only downside I see. It's not much more expensive either.
ionuts - Friday, August 10, 2018 - link
So why would I buy this instead of a TB3 Ethernet adapter?
Reflex - Friday, August 10, 2018 - link
Is there a 10GbE TB3 Ethernet adapter?
ionuts - Friday, August 10, 2018 - link
Akitio, Sonnet, etc.
CaedenV - Friday, August 10, 2018 - link
I would love to have a SoHo-focused switch with some basic capabilities:
1 VLAN for cameras
1 VLAN for WiFi
1 VLAN for normal traffic
2 10gig ports (1 for my server, 1 for my main computer)
8 1gig ports with PoE for cameras, APs, and normal wired computers
And all that for $250 or less
ajp_anton - Friday, August 10, 2018 - link
Been waiting for these, but my pfSense box isn't upgradable, so I'd have to replace the whole thing. My ISP offers 10Gbit internet, but I have no hardware to take advantage of it yet.
LordConrad - Friday, August 10, 2018 - link
I'm using a Netgear GS110EMX with two Aquantia cards and the speed is excellent. I only need 10G Ethernet between two machines; the rest are single or teamed Gigabit connections. I went with the Netgear after hearing about problems with the Asus consumer 10Gb switch, and I'm glad I did.
abufrejoval - Saturday, August 11, 2018 - link
I've had my first Asus switch 'die' on me (the port would no longer work at 10Gbit, auto-negotiation failed, but it *would* function when I configured the NIC for 1Gbit), and I attributed that to the fact that I was using an older 10GBase-T NIC from Intel that uses a full 10 Watts on the PHY, where the Aquantias will typically try a 'greener' 3 Watts, from what I read.
I had it swapped without problems by the e-tailer and have not tried the Intel NICs since, because they do generate quite a bit of heat and are actually designed for server airflow. Same with the Broadcoms I had: two of those actually died in my tower chassis, because the airflow wasn't good enough for the 20 Watts a dual-port NIC could burn on the PHY alone.
The Asus switch does get pretty warm even with the low power Aquantia NICs connected (and all 1Gbit ports in full action), but zero noise is rather more compelling than the risk of having to buy another in two years.
SultanFaris - Saturday, August 11, 2018 - link
Good information, and very useful.
mode_13h - Monday, August 13, 2018 - link
Please inquire how much power this actually burns @ 10 Gbps.
Though the bigger issue will be switches, as others have already mentioned.
The MaD HaCkER - Sunday, September 8, 2019 - link
You are early adopters. The first to jump with one of those new parachute thingies. We all want to follow... as soon as we see if/how it works out. ;)