DanNeely - Thursday, November 9, 2017 - link
The card's not showing on Amazon at the moment.
And mentioning that switch without pointing out that it only had 2 10G ports is a bit disingenuous IMO. Especially since it looks like you'd need to spend 3x as much for one with more ports (8 of them; there don't appear to be any 4x 10G switches listed) on Newegg.
mervincm - Thursday, November 9, 2017 - link
https://www.ubnt.com/unifi-switching/unifi-switch-... $550-$600
16 ports of 10gig Ethernet: 12 SFP+, 4 RJ45
Samus - Thursday, November 9, 2017 - link
Factor in about $1000 for 12 SFP+ 10G media converters...
But at least it will be reliable. I had to RMA my ASUS XG-U2008 after 6 months: one 10G port failed after a few months, then the other failed a few months later. I only need one of them to work, since the 10G connection is a downstream server feeding 1G connections, but wow... I've put a fan on it this time, hoping it prolongs its lifespan.
pixelstuff - Friday, November 10, 2017 - link
Not necessarily $1,000. http://www.fs.com/c/uniquiti-10g-sfp-plus-3141
Samus - Friday, November 10, 2017 - link
Those are fiber. Copper is around 5x more than that. Nobody is going to run fiber to a bunch of nodes (unless you need the range) because that is even more cost-prohibitive than 10G transceiver-to-copper converters plus CAT6a.
It's a cheap solution for them to offer a switch that is 3/4 SFP ports. That's ridiculous. It should be 1/4 SFP. But I get it: up until now, before copper 10G became widely available, fiber was often the only mainstream option. That is no longer the case.
pixelstuff - Monday, November 20, 2017 - link
Oh, I see what you were saying. And I agree. It doesn't make sense to buy a predominantly SFP switch if you mainly want a bunch of copper ports.
A TP-Link T1700X-16TS or maybe a Netgear XS708E or XS716E would probably be cheaper than trying to convert all the SFP ports to copper.
In the short term, I'd like to see the manufacturers swap out all the 1G ports for 2.5G ports and provide a few 10G backbone ports. Hopefully they could do that without major heat sinks and engineering challenges.
Nevod - Tuesday, November 14, 2017 - link
Where are those 10Gbase-T SFP+ for $80? What I've found costs over $300 per unit.
abufrejoval - Tuesday, November 14, 2017 - link
The thing about that switch is that it is passively cooled.
And it is relatively massive to make that happen: quite a bit of metal all around, and open at the sides to allow natural convection to cool it.
10Gbase-T used to require 10 Watts per port just for the PHY, which is actually quite low when you consider just how complex the modulation techniques are. With green Ethernet and the 28nm process used for the mostly analog PHY and logic, Aquantia is delivering a much lower energy footprint, and whoever makes the chipset for this switch (I haven't dared to tear it apart yet, but would love to know who makes it) seems to have gotten even lower than that.
But eight 10Gbase-T ports must either require active cooling or a much larger passive design, able to dissipate a "desktop CPU class" heat budget.
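In rough numbers (the per-port figures are just the estimates above, nothing measured):
```python
# Back-of-the-envelope PHY heat budget for an 8-port 10Gbase-T switch.
# Per-port figures are rough estimates, not measurements.
ports = 8
legacy_phy_watts = 10.0  # early 10Gbase-T PHYs: roughly 10 W per port
modern_phy_watts = 3.0   # guesstimate for a current 28nm PHY under load

print(f"legacy PHYs: ~{ports * legacy_phy_watts:.0f} W")  # ~80 W, desktop-CPU class
print(f"modern PHYs: ~{ports * modern_phy_watts:.0f} W")  # ~24 W, still a lot for passive cooling
```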
That IMHO is the main attraction of 2.5 or 5G switches: they might be able to operate with much less heat. Ideally they'd even support a variable power ceiling, so you could tune them for cabinet vs. desktop use and have them modulate the speeds of their ports to match.
10Gbit sustained use on all ports may be rather rare; in my case I'd mostly want the network to kick up to higher speeds when I run backups, which typically only involves two nodes at a time--just not the same two nodes throughout.
So I could live with the power budget limitations of this switch design; I'd just want it to be more flexible where the ports are concerned. Typically 5Gbit is sufficient for what my RAIDs (built from low-noise 5K HDDs) can actually absorb, so I could run more ports at 5 or 2.5G.
I have been fantasizing about putting together my own 10Gbit switch using several dual-port 10Gbit cards, and I've done some rough benchmarking using bridges under Linux. I used an older Core2 quad-core motherboard that typically enjoys its retirement in a box and has three PCIe x4 or better slots. Blowing iperf3 loads across ports on Intel dual-port 10Gbit cards, across x16 slots on the northbridge or even x4 northbridge-to-southbridge, never dropped below 900MB/s while the CPU was bored to death.
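The bridge part of that experiment boils down to something like this (a sketch only: the interface names are hypothetical, it needs root, and it assumes iproute2 and iperf3 are installed):
```python
#!/usr/bin/env python3
"""Sketch: turn a Linux box with two 10Gbit ports into a software switch,
then drive traffic from the attached hosts with iperf3.
Interface names below are hypothetical examples."""
import subprocess

def sh(cmd: str) -> None:
    subprocess.run(cmd, shell=True, check=True)

BRIDGE = "br0"
PORTS = ["enp1s0f0", "enp1s0f1"]  # hypothetical dual-port NIC interfaces

sh(f"ip link add {BRIDGE} type bridge")        # create the bridge (needs root)
for port in PORTS:
    sh(f"ip link set {port} master {BRIDGE}")  # enslave each 10Gbit port
    sh(f"ip link set {port} up")
sh(f"ip link set {BRIDGE} up")

# Then measure from the hosts hanging off each port:
#   host A: iperf3 -s
#   host B: iperf3 -c <hostA address> -P 4   # parallel streams to fill the pipe
```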
But that's 100 Watts... so nothing I'd like to keep around with what we pay for power in Germany.
I did not manage to find, say, a 4x PCIe x4 board with an ECC-capable Pentium C200 chipset that could serve as a base for such a software switch (Open vSwitch, here I come).
I also could not find any quad-port 10Gbase-T NICs for a two-slot design that were anywhere near affordable (I'd also take an 8x 10Gbase-T PCIe x16 line card to run my own white-box switch).
I didn't dare to break apart my production setup to test with the Aquantia NICs, but all my other benchmarks did not indicate any measurable difference in throughput or CPU overhead between these and the Intel NICs (software support, DPDK etc. are likely another matter).
pixelstuff - Friday, November 10, 2017 - link
I just want to know when we'll see Mini-ITX motherboards with at least 2.5G Ethernet built in.
Lolimaster - Friday, November 10, 2017 - link
"Home" ethernet should've evolved to 2.5G long ago. Mid to high end with 2.5 or 5G options.Samus - Friday, November 10, 2017 - link
5G is actually quite reasonable. That would saturate a SATA SSD or a RAID array of mass storage drives. Like gigabit 20 years ago when it launched, 10G will be mostly wasted in almost any environment. But I'm not saying we shouldn't do it ;)
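Quick sanity check on the SSD claim (a sketch; the overhead and drive figures are rough assumptions):
```python
# Does a 5GBase-T link saturate a SATA SSD? Rough assumptions throughout.
link_gbps = 5.0
efficiency = 0.94        # rough TCP/IP + Ethernet framing efficiency
net_mb_s = link_gbps * 1000 / 8 * efficiency
sata_ssd_mb_s = 550      # typical SATA III SSD sequential throughput

print(f"5G effective: ~{net_mb_s:.0f} MB/s")   # ~590 MB/s
print(f"SATA SSD:     ~{sata_ssd_mb_s} MB/s")  # link and drive are well matched
```
DanNeely - Friday, November 10, 2017 - link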
Unless they end up being much cheaper, I don't expect 2.5/5G to have any significant home penetration. They were created as stopgaps for enterprise: not because 10G was too expensive, but because pulling Cat 6A is too labor-intensive and the reduced speeds can be achieved at 100m over existing Cat 5e/6 wires. Typical home runs are short enough that 10G over 5e is doable.
pixelstuff - Friday, November 10, 2017 - link
10G seems like it still runs really hot and requires heat sinks on the chips. I would think 2.5 would be reasonable for inclusion on motherboards, though, and would definitely bump the speed of large transfers up noticeably.
mode_13h - Sunday, November 12, 2017 - link
Exactly. 2.5 should be the new default for enthusiast motherboards. First, we need support for it in affordable switches.
Phyllis Hershberger - Friday, July 5, 2019 - link
Chinese SFPs are much cheaper than US ones, as we know; I took modules from sfpcables: https://www.sfpcables.com/10gsfp-transceiver-axs85...
timbotim - Friday, November 10, 2017 - link
For those interested in total system cost for 10GbE in the home, here's my experience; I went fibre in order to learn about optical networking - and because I'm a cheapskate.
1x TP-LINK T1700G-28TQ - this gives you 4x 10GbE ~ £250
4x Mellanox ConnectX-2 PCIe 2.0 x8 NICs - e-bay ~ £30 each
4x 30m LC UPC to LC UPC OM3 patch cable - fs.com ~ £10 each
8x Mellanox MFM1T02A-SR Compatible 10GBASE-SR SFP+ transceivers - fs.com ~ £12 each
With shipping (fs.com shipping is expensive) and duty, the total bill was £636, still cheaper than going copper!
Be wary that there is a lot of unravelling of technical compatibility issues that needs to be done if you go fibre - be prepared to learn - www.servethehome.com is your friend :)
vailr - Friday, November 10, 2017 - link
I could be wrong, but aren't most incoming ISP connections in the U.S. still copper 1 Gb?
Can you even get a copper line 10 Gb cable modem in the U.S.?
East Asian countries such as S. Korea and Japan seem to have the fastest average internet connections, world-wide.
What cable modems are being used in those countries?
Are those areas still mostly copper, or have they mostly switched to fiber?
CaedenV - Sunday, November 12, 2017 - link
Ultra fast ISP connections largely don't matter. Your typical connection to a server is only going to run at 5-10 Mbps, so if you have 3-4 people in the household, then a 50 Mbps connection is the 'fastest practical' connection you can have. Not saying there aren't exceptions, but that is my experience.
The point of having an ultra fast in-home network is that it opens up other possibilities: more low-latency in-home streaming, using a NAS like local storage, etc. But because 99% of people just use the network for internet, high speed networking equipment stays fairly expensive and enterprise-only :(
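Rough math behind that (the per-connection figure is just my estimate above):
```python
# "Fastest practical" home connection, per the estimates above (assumptions).
per_connection_mbps = (5, 10)  # typical single-server download speed
people = 4                     # simultaneous users in the household

low, high = (people * rate for rate in per_connection_mbps)
print(f"~{low}-{high} Mbps covers {people} people")  # 50 Mbps leaves headroom
```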
hitchcock4 - Saturday, November 11, 2017 - link
@vailr: The high speed 10g connections people are talking about are not for the Internet. This is for the internal network in the home. As an example, let's say at home you have a media server and want to be able to stream from the media server to 6 locations in the home. 1gbit may not be enough, so people are starting to use 10gbit in the home, but it can be rather expensive. Hence this thread on how it can be done cheaper.
vailr - Saturday, November 11, 2017 - link
Okay, but the question remains: what speed and type of modems are being used in East Asian high speed internet countries: copper or fiber? If someone was (theoretically) looking to stream 8k video content over the internet, what would be the minimum connection speed required for problem-free playback? That is: from the cable modem directly to a single display device?
CaedenV - Sunday, November 12, 2017 - link
8K won't take much. 4K H.265 streaming is under 10-15 Mbps, and H.265 scales really well with resolution, so moving to 8K should only be 20-30 Mbps if it is compressed right, and probably less. Shouldn't be a big deal to stream on a standard network... the real issue is decompressing it and getting it to a display lol
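Ballpark math on that (the bitrate and scaling factor are assumptions, not measurements):
```python
# HEVC bitrate does not scale linearly with pixel count; ballpark estimate.
uhd4k_mbps = 12.5  # typical 4K HEVC stream, middle of the 10-15 Mbps range
pixel_factor = 4   # 8K carries 4x the pixels of 4K
scaling = 0.5      # HEVC needs well under linear bitrate at higher resolution

print(f"8K estimate: ~{uhd4k_mbps * pixel_factor * scaling:.0f} Mbps")  # ~25 Mbps
```
mode_13h - Sunday, November 12, 2017 - link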
Streaming is a bad example. The main use case is all about large file transfers and fileservers.
pixelstuff - Monday, November 20, 2017 - link
I was thinking the same thing. Most Blu-ray media is compressed down to 30 Mbps or less. You can easily fit 6 of those streams in a 1000 Mbps connection.
The main benefit is when you want to copy those files, or videos from a camcorder. Trying to copy 4-8GB files around the network takes a couple of minutes each over a 1 Gbps connection. A 10 Gbps connection makes the copy time almost a non-issue.
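For a feel of the numbers (simple arithmetic; the ~70% real-world efficiency figure is an assumption):
```python
# Time to copy an 8 GB file at 1 Gbps vs 10 Gbps, assuming roughly 70%
# of line rate is achieved in practice over SMB/NFS (an assumption).
file_gb = 8
efficiency = 0.7

for link_gbps in (1, 10):
    seconds = file_gb * 8 / (link_gbps * efficiency)
    print(f"{link_gbps:>2} Gbps: ~{seconds:.0f} s")  # ~91 s vs ~9 s
```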
Yingste - Saturday, November 11, 2017 - link
This card has been available for months now. I've had mine since mid-September and I ordered it on Amazon for $100. I'm not sure why several news companies are saying this is just launching now.
mode_13h - Sunday, November 12, 2017 - link
Please post power dissipation for 2.5 G, 5 G, and 10 G!
abufrejoval - Tuesday, November 14, 2017 - link
Sorry, but hard numbers are too much work!
What I can tell you, however, is that at 10Gbit they will survive in an ordinary PC with convection cooling at room temperature. Got two running for a month or three now.
I tested 10GBase-T cards from Intel and Broadcom some years ago in my home lab, which is mostly desktop hardware for acoustics and cost reasons, and some of them died (actually most just powered down) because their passive heat-sinks were designed to be actively cooled by server fans.
Same with the Asus 208 switch that has two 10Gbit ports: it's all metal outside and it will get warm, but convection cooling will do it. Eight ports without fans still seems out of range.
AFAIK 10Gbase-T started at 10 Watts/port just for the PHY, but judging from just how hot the heat-sink on the Aquantia feels, I'd guesstimate around 3 Watts under load.
Hope this is better than nothing.
mode_13h - Friday, November 24, 2017 - link
Thanks for that.
I was hoping the author might ask the manufacturer for some hard data (I know it varies - just looking for a ballpark), but your experience is quite valuable.
CaedenV - Sunday, November 12, 2017 - link
Anyone happen to know if/how this works with ESXi or FreeBSD? My home server needs this :D
abufrejoval - Tuesday, November 14, 2017 - link
ESXi isn't likely to happen, and FreeBSD isn't there either.
I use it on Windows and Linux, and even if the Linux drivers need to be compiled from source, they seem to register with DKMS so that kernel updates won't require manual intervention.
Using it on VZ7 (CentOS with OpenVZ containers), Ubuntu 16.04 and Windows 2016.
You can run FreeBSD and even ESXi *inside* a VM if that helps you in any way, and still profit from 10Gbit line speeds. As a matter of fact I ran pfSense like that for a while (Ubuntu host, VirtualBox with paravirtualized drivers for BSD), because BSD wouldn't support a USB Ethernet adapter I wanted to use for the backup Internet connection and the Bay Trail hardware only had two physical Ethernet ports available.
The virtualization overhead was much less than what I remember from early VMware days: Virtual Ethernet has come a long way.
Phyllis Hershberger - Tuesday, October 23, 2018 - link
Common faults and troubleshooting methods for switches.
SFCable - Tuesday, July 2, 2019 - link
yah right
Coin123 - Thursday, June 24, 2021 - link
I am going to try one. Now I am using baudcom.com.cn.