shelbystripes - Monday, October 3, 2016 - link
Since there are already compatible chipsets and devices on the market, and this standard was pushed aggressively to market the advantages of 2.5/5GbE as a Wi-Fi complement, I expect high-end consumer routers to have at least 2.5Gb support in the next year. There will be a Netgear "R9000" or something like it.

StrangerGuy - Monday, October 3, 2016 - link
Copper GbE is like what, 17 years old by now? And we are supposed to be excited over a 2.5x speed increase over 17 years?

Thankfully those EEE rejects aren't working on far more performance critical stuff like CPUs, GPUs, SoCs, USB, LTE, Wi-Fi, SATA, PCIe, SSDs etc.
azazel1024 - Monday, October 3, 2016 - link
Yeah, and GbE took a long time to go mainstream. We have had roughly 8-10 years where GbE ports were pretty standard on motherboards. 10GbE, as pointed out, is still too expensive in cost and power consumption. Even in a laptop, it is hard to stick in a chip that is going to pull down 5-7 W of power just for networking to do 10GbE.

Most networking is still low bandwidth, and 95% of home users are going to be connecting wirelessly to an internet connection that isn't feeding a whole lot either.
That other 5%, and higher-demand businesses, are where this is "needed".
And yeah, I'll be excited about "only" a 2.5x speed increase after years and years of being capped at 1Gbps. Okay, well, I've been capped at 2Gbps since Windows 8 introduced SMB Multichannel.
For very high demand applications or data centers, 10GbE is just fine. The cost isn't too much, nor is the power consumption a serious hindrance. The article points out the issues for wireless gear, and for business it is the same: they aren't going to pay a million bucks to re-cable their office, and they might not be able to anyway, as it might be leased space. So roll in $40,000 worth of networking equipment and they've increased their network performance by 2.5-5x.
I am excited about it for routers and Wi-Fi gear. Sure, my internet connection is only 75Mbps and probably isn't getting much faster anytime soon, but unlike 95% of home users, I DO transfer big files and want them transferred fast. That is why I have dual GbE links between my desktop and server, running Windows 10 with SMB Multichannel, and my network link is still my choke point. My storage is capable of ~300-350MB/sec. So the thought of dual 2.5GbE links in a year or three has me salivating.
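A rough sketch of that bottleneck math in Python (the ~94% wire efficiency and the ~325 MB/s storage figure are assumptions for illustration, not measurements):

```python
# Rough bottleneck check for SMB Multichannel over bonded links.
# Assumptions: ~94% usable wire efficiency after TCP/SMB overhead,
# and ~325 MB/s storage (midpoint of the ~300-350 MB/s figure above).

def link_mb_s(links: int, gbps_each: float, efficiency: float = 0.94) -> float:
    """Aggregate payload throughput in MB/s across `links` NICs."""
    return links * gbps_each * 1e9 * efficiency / 8 / 1e6

STORAGE_MB_S = 325

for label, links, speed in [("2x 1GbE", 2, 1.0), ("2x 2.5GbE", 2, 2.5)]:
    net = link_mb_s(links, speed)
    limiter = "network" if net < STORAGE_MB_S else "storage"
    print(f"{label}: ~{net:.0f} MB/s on the wire -> {limiter}-bound")
# 2x 1GbE: ~235 MB/s (network-bound); 2x 2.5GbE: ~588 MB/s (storage-bound)
```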
Same thing on the wireless side. My current 802.11ac laptops and my routers can hit an easy 60-72MB/sec transfers. Not port limited by any means, but 802.11ax is absolutely going to be limited by port speed. If I increase my internet connection a fair amount more, I could run into bottlenecks when doing a big wireless transfer from a laptop to my server while also downloading something at max speed from the internet.
I'll wait till prices are a little lower, but I'll still hope that Intel jumps on the spec, and that it goes mainstream for everyone else too. I'd happily shell out $500 for a new 16/24-port switch and a couple of dual-port (or 4 single-port) cards to drop into my desktop and server. The router and access points can wait until wave 2 802.11ac is deployed in a serious way or 802.11ax hits.
tomatus270389 - Monday, October 3, 2016 - link
"Sure my internet connection is only 75Mbps"I am sitting on a 3Mbps and my provider doesn't offer more... it sucks living in 3rd world countries and in rural areas...
name99 - Tuesday, October 4, 2016 - link
"Yeah and GbE took a long time to go mainstream."Depends what you consider mainstream and long time. If you consider Apple as defining the high end of mainstream, the PowerBook picked up GigE in October 2001. That's only two years after the spec was ratified, which seems reasonably fast to me.
I think the important point is to compare that with how slow Apple has been (i.e. not yet!) to pick up 10GigE, which tells you something about the cost+size+power of the required chips compared to the expected benefits. If these specs deliver what they claim (i.e. the claimed speed in a chip that's small enough and low-power enough), I expect Apple to pick them up rapidly.
(Which, admittedly, nowadays probably means providing the OS support very rapidly and selling a first-party TB or USB3 to Ethernet dongle. They may not bother to change the ports on new versions of the Mac mini, iMac, and Mac Pro, on the grounds that the whole point of TB [and maybe USB3] is to have fewer different ports on the back of each device.)
damianrobertjones - Friday, October 7, 2016 - link
Back then Apple was FAR from mainstream. When did Dell/HP go for GbE?

saratoga4 - Monday, October 3, 2016 - link
>Copper GbE is like what, 17 years old by now? And we are supposed to be excited over a 2.5x speed increase over 17 years?

Given the requirement that it work over the same ~16-year-old cables, I'd say it's pretty impressive. Most fast new standards tend to require proportionately more advanced interconnects. Getting 2.5Gbit out of a cable spec that is just slightly newer than USB 1.1 is actually pretty surprising.
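One way to see how the trick works: 802.3bz reuses 10GBASE-T's signaling scheme at reduced symbol rates, which is what lets the spectrum fit roughly within what Cat5e was characterized for. A minimal sketch of that arithmetic (the 3.125 effective bits per symbol per pair is the 10GBASE-T coding figure; treat all the numbers as approximations):

```python
# Rough NBASE-T arithmetic: 802.3bz runs the 10GBASE-T PHY at a reduced
# symbol rate, so the signal fits in the bandwidth old cable was rated for.
PAIRS = 4                 # all four twisted pairs carry data
BITS_PER_SYMBOL = 3.125   # effective payload bits/symbol/pair in 10GBASE-T

for name, baud_mhz in [("10GBASE-T", 800), ("5GBASE-T", 400), ("2.5GBASE-T", 200)]:
    rate_gbps = baud_mhz * 1e6 * PAIRS * BITS_PER_SYMBOL / 1e9
    print(f"{name}: {baud_mhz} MBd/pair -> ~{rate_gbps:.1f} Gb/s")
# 2.5GBASE-T's ~200 MBd keeps most signal energy near or below ~100 MHz,
# roughly the bandwidth Cat5e was originally characterized for.
```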
zodiacfml - Thursday, October 6, 2016 - link
If enterprises had used entirely copper wiring since then, more likely we would be using 10G on copper by now. Yet fiber has virtually limitless bandwidth, so a company expecting a location to last more than 10-20 years would future-proof with fiber as much as possible to avoid the cost of laying new cables.

On the topic, this is a pretty interesting niche: a solution for high-bandwidth access points, which doesn't exist yet. At this point, with the latest Wi-Fi ac technology for access points and clients, it is overkill, because ac technology cannot saturate the link and high-density areas often have multiple access points.
A possible application would be if Wi-Fi "ad" proves popular and they increase the range. It seems to be useful in large open areas where there is no obstruction.
azazel1024 - Monday, October 3, 2016 - link
One thing to mention on cabling your home: unless it is a big house or has a TON of RFI, Cat6 is just fine for 10GbE. 10GbE can be deployed up to 55 meters on Cat6 and 45 meters on Cat5e. That won't work for businesses, and both Cat5e and Cat6 are significantly more impacted by alien crosstalk than Cat6a is. Those distances for 5e and 6 are predicated on low RFI and SINGLE-cable installations (to reduce alien crosstalk); Cat6a's 100-meter length holds even in high alien-crosstalk environments (i.e. cable bundles).

That out of the way, I've seen 10GbE work on Cat6 in large bundles in labs and small network rooms without any issues at all over distances of 20-30 meters.
Basically, if you are wiring your house, Cat6 is pretty much going to be future-proof for everything but the largest houses. (25GbE, which is also coming down the pipe, almost certainly will not run on Cat6; as proposed, it is only rated for something like 10 meters even on Cat6a.)
chaos215bar2 - Monday, October 3, 2016 - link
On the other hand, if you're wiring your house, the cabling is probably the least of your costs. Why skimp there?

The only reason I see not to buy the highest quality, most heavily shielded networking cables available is physical limitations such as minimum bend radius. Otherwise, is it really worth saving $0.20 per foot on cable just to risk the massive headache and expense of rewiring 10 years down the line?
It makes me really sad when I see new-construction homes listed at multiple hundreds of thousands of dollars advertising Cat5e wiring. I mean, it's sort of nice that they thought of it, but it doesn't exactly send a positive message that the developer didn't at least spend the nominal extra money on Cat6a. Where else might they have used inferior materials just to save a fraction of a percent of the final sale price?
azazel1024 - Tuesday, October 4, 2016 - link
As mentioned down in the comments some, the smartest way to do it is likely running Cat6 and fiber next to each other. If you are getting the fiber in bulk, the price of Cat6+fiber is actually pretty comparable to just running Cat6a.

I agree it doesn't make a lot of sense to skimp on new construction, but a lot of wiring jobs are for existing construction. Take my current house. I am not paying an electrician either, so it isn't the difference between a $2k job and a $2.5k job. When I priced it out, for all of the drops around my house it is the difference between a $200 job and a $500 job... plus the $200 job got done in about half the time of the $500 job (because terminating Cat6a is a MUCH bigger pain in the butt than Cat6, same with running it).
I can always pull the wiring back out and re-run Cat6a if it ever becomes absolutely vital.
It is possible some future standard will require Cat6a to run in a residential setting, but what is currently on the books or in the near future is:
1GbE and 2.5GbE: 100 meters on Cat5e and better
5GbE: 100 meters on Cat6 (not sure of the length on 5e, but at a guess, 60-80 meters)
10GbE: 45 meters on Cat5e, 55 meters on Cat6, 100 meters on Cat6a
Proposed 25GbE: 10 meters on Cat6a (not enough to cover a house)
40GbE: Cat7a to, I think, 10 meters
I'd guess the proposed 25GbE might be able to manage more than 10 meters on Cat7a, but a lot of it is simply how high in frequency you need to go. To fit in an RJ45, the wire gauge can only be so large, and even if you suppress noise to zero, receivers are only so sensitive and you can only transmit at so high a frequency (distortion and power use, plus higher crosstalk).
Yes, as electronics improve, transmitters get lower distortion, receivers get more sensitive, etc.
That said, copper has its limitations, and with the RJ45 connector you are never likely to see 100GbE on Cat8 running 100 meters or anything like that. You might see 25GbE someday running 30 or 40 meters on something like Cat7a or Cat8. You are also going to be running fire hoses of cabling and spending 20 minutes terminating each one, with all of the individually shielded wires.
You get to a point where fiber is just much easier to run, service, and terminate. You could easily have 100Gbps running on a copper patch cable for the last meter from your fiber media converter to the back of your desktop.
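If it helps, the reach figures from the list above (guesses included) boil down to a simple lookup; a sketch, purely illustrative of this comment's numbers:

```python
# Max usable run length in meters, per the (partly guessed) figures above.
# None = the standard isn't specified to run on that cable class at all.
MAX_RUN_M = {
    "1GbE":   {"Cat5e": 100, "Cat6": 100, "Cat6a": 100},
    "2.5GbE": {"Cat5e": 100, "Cat6": 100, "Cat6a": 100},
    "5GbE":   {"Cat5e": 70,  "Cat6": 100, "Cat6a": 100},  # 5e figure is a guess
    "10GbE":  {"Cat5e": 45,  "Cat6": 55,  "Cat6a": 100},
    "25GbE":  {"Cat5e": None, "Cat6": None, "Cat6a": 10}, # proposed
}

def run_is_ok(standard: str, cable: str, length_m: float) -> bool:
    """True if a planned run fits within the quoted reach for that cable."""
    limit = MAX_RUN_M.get(standard, {}).get(cable)
    return limit is not None and length_m <= limit

print(run_is_ok("10GbE", "Cat6", 30))   # True: fine for most houses
print(run_is_ok("25GbE", "Cat6a", 25))  # False: a 10 m cap won't cover a house
```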
Communism - Monday, October 3, 2016 - link
You should be running Cat 8 cable at this point (Class I for Cat 6a compatibility, or Class II for Cat 7a compatibility).

You're going to need Cat 8 to exceed 100 GbE @ 100m.
BrokenCrayons - Monday, October 3, 2016 - link
Psha, mere Category 8 twisted pair wiring isn't forward looking AT ALL. I'm going straight for the #6 gauge wire they use on 3-phase high-voltage power lines in order to future-proof! It should be good for 100 TbE and handle that 1.21 jigawatts I need to travel through time.

willis936 - Monday, October 3, 2016 - link
Or, you know, a bundle of single mode fiber.

BrokenCrayons - Tuesday, October 4, 2016 - link
And how is single mode fiber going to carry lightning from the top of the tower so that I can go back to the future, Doc? >.< We need PoE or the DeLorean's flux capacitor won't work!

cm2187 - Monday, October 3, 2016 - link
At 10Gbit, we are getting close to the speeds at which data goes in and out of RAM within a PC. And that speed isn't progressing much anymore. So I don't think it's absurd to assume that we probably won't have any use for 100Gbit in a home. There is just nothing that can consume it. And unless you have 4K CCTV in every corner of every room (not sure I'd like to live in that flat...), I don't see how IoT will consume it either.

zepi - Monday, October 3, 2016 - link
10Gb is nowhere near modern memory speeds. Dual-channel DDR4-3000 sticks can reach somewhere around 40GB/s, roughly 30 times more than a 10Gbps network interface...

zodiacfml - Thursday, October 6, 2016 - link
The physical link is not for a single client, which is the reason for these high-speed standards.

DanNeely - Monday, October 3, 2016 - link
What's the deal with 10GBASE-T not supporting PoE?

zepi - Monday, October 3, 2016 - link
Probably the relatively high current, and fluctuations in it, cause interference with the data signals, which are already very difficult to get running even in less noisy environments, not to mention the additional noise caused by the power delivery itself.

markhahn - Monday, October 3, 2016 - link
Most things that can live on the minimal power provided by PoE are not fast enough to drive 10Gb.

zodiacfml - Thursday, October 6, 2016 - link
Right. The most powerful PoE standard barely has enough power for the 10Gb Ethernet interface itself.

boeush - Monday, October 3, 2016 - link
Is there any talk yet, and/or should there be at this point, about finally ditching metal wiring for signal transmission and moving on to optical fiber for consumer/business networks? (A single cable can still include copper wires for power transfer alongside an optical fiber for data transfer...)

With optical cables there's no worry about EMI/crosstalk/noise, the distance limits are at least an order of magnitude greater, the signal travels faster, and bandwidth can scale into Tbps (more limited by transceiver technology than by the cable itself).
Fiber optics have been around for a long time now. Lots of R&D has been done on silicon photonics for SLI/scaling/cost reduction. How close are we to finally making the leap?
boeush - Monday, October 3, 2016 - link
Gah... SLI should have been LSI; sorry, and here's another forlorn call for an editing function on these posts.

Communism - Monday, October 3, 2016 - link
Considering 10 GbE is already priced out of most solutions, going fiber is a pretty laughable suggestion.

The problem is always price; every other problem about all of this is secondary.
Hell, at the snail's pace we're going, within a year it's probably going to be cheaper to network over coax with DOCSIS 3.1 modems than with 10 GbE.
DanNeely - Monday, October 3, 2016 - link
At the consumer end, I think a major problem has always been that fiber is a lot less abuse-tolerant than twisted pair. You can wad the latter up into a bundle with tight 180° bends, drop heavy stuff on it, and roll over it with a chair without any lasting harm. Glass is much more likely to shatter in those circumstances.

boeush - Monday, October 3, 2016 - link
After some digging, I found this: https://www.hpcwire.com/2016/08/18/intel-launches-...
(Weird: I don't recall reading about it in this year's IDF coverage - probably just poor memory on my part.)
Anyway, the upshot is we have first-gen silicon photonics interconnect fabric for data centers out as of today, basically (give or take a couple of months). That might mean we're looking at consumer-level availability/affordability in maybe 10 years or so...
Of course, that would be predicated on a well-established standard. Considering how long *that* normally takes, perhaps now would be a good time to start working on one...
Jaybus - Wednesday, October 5, 2016 - link
Yes. Intel has been working on this for some time. Thunderbolt ("Light Peak") was originally targeted for an optical interconnect, but the silicon photonics was just not there yet. Etching optical components (lenses, waveguides, and such) in Si has been possible for some time, but indirect band gap materials such as Si and Ge are not suitable as laser media. That is why LEDs and laser diodes are made of direct band gap materials like GaAs and InAs. Intel has found a way to embed InP, also a direct band gap material, and so has the first on-chip laser. There is huge potential in this, since it is likely possible to have a normal CMOS SoC sandwiched beneath a photonic layer. This implies much more than an on-chip optical networking PHY. Think chip-to-chip interconnect between CPU and RAM, or core-to-core.

cm2187 - Monday, October 3, 2016 - link
That's fine, but the obstacle is not so much the cable as the cost of switches, NICs and routers capable of pumping 10Gbit on multiple ports.

Also, for home usage, many laptops already don't have an RJ45 port. I don't think any has an optical port. Same for motherboards. Unless you are happy to sacrifice a precious PCIe slot that you could otherwise use for some NVMe SSDs.
boeush - Monday, October 3, 2016 - link
Good point, there. Though, how much more bandwidth can be squeezed out of wireless hubs? OK, maybe we'll all be on gigabit WiFi in a few years - but I'm skeptical about 10-Gb, never mind 100+ Gb - particularly in dense urban environments with lots of devices splitting the spectrum and interfering with each other.

As to what could possibly require that much bandwidth (outside of a data center or a supercomputer) - well, maybe 16K stereo-VR in 16-bpc HDR at 150 FPS, for example...
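That example is easy to ballpark. Assuming "16K" means 15360x8640 per eye (an assumption; all numbers below are illustrative), the uncompressed stream works out to roughly 1.9 Tbps:

```python
# Ballpark the raw bandwidth of the hypothetical display stream above.
width, height = 15360, 8640   # "16K" per eye (assumed resolution)
eyes = 2                      # stereo VR
bits_per_pixel = 3 * 16       # 16 bits per channel, RGB
fps = 150

bps = width * height * eyes * bits_per_pixel * fps
print(f"~{bps / 1e12:.1f} Tbps uncompressed")   # ~1.9 Tbps
# Even with 100:1 compression, that is still ~19 Gbps -- beyond 10GbE.
```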
azazel1024 - Tuesday, October 4, 2016 - link
It is all about tiered wireless performance. 60GHz can easily support ACTUAL transfer speeds up into the dozens of Gbps. The issue is, it is effectively line of sight, and by that I mean: don't accidentally cover the antenna with your hand, or walk between your device and the access point. It can work on multipath/reflection, but that is still much slower than LoS.

5GHz can accommodate some pretty ridiculous speeds with MIMO, dense constellations (like 1024QAM) and wide channels (like 160MHz). That is a combination of wave 2 802.11ac and "next gen" 802.11ax. In theory you can hit 10Gbps with 802.11ax with IIRC 4 streams... which you'll never see even in a laptop.
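For what it's worth, a back-of-envelope with the draft 802.11ax OFDM parameters puts 4 streams at 160MHz closer to ~4.8Gbps, with the nominal ~9.6Gbps peak needing 8 streams; a quick sketch, assuming MCS11 (1024-QAM, 5/6 coding) and the short 0.8µs guard interval:

```python
# Back-of-envelope 802.11ax PHY rate at 160 MHz (HE OFDM parameters).
DATA_SUBCARRIERS_160MHZ = 1960   # data tones in a 160 MHz HE channel
SYMBOL_S = 12.8e-6 + 0.8e-6      # OFDM symbol plus short guard interval

def phy_rate_mbps(streams: int, bits_per_subcarrier: int = 10, coding: float = 5/6) -> float:
    """10 bits/subcarrier = 1024-QAM; 5/6 coding = MCS11."""
    bits = streams * DATA_SUBCARRIERS_160MHZ * bits_per_subcarrier * coding
    return bits / SYMBOL_S / 1e6

print(f"1 stream:  ~{phy_rate_mbps(1):.0f} Mbps")   # ~1201 Mbps
print(f"4 streams: ~{phy_rate_mbps(4):.0f} Mbps")   # ~4804 Mbps
print(f"8 streams: ~{phy_rate_mbps(8):.0f} Mbps")   # ~9608 Mbps: the headline figure
```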
But that is the kind of performance you might expect, again, in the same room. 5GHz doesn't go far, though, so even in an urban environment there isn't going to be much stepping on your neighbors. But figure 802.11ax with a laptop might realistically hit 2-2.5Gbps real speeds, and the router with MU-MIMO might be able to handle 3-4Gbps of actual throughput between multiple devices. We of course need 2.5/5GbE to support that. That would also all be same-room use.
Go a room over and you might realistically see more like 1GbE performance. Two rooms over it might be more like 400Mbps.
Go 3 rooms over and you drop from 5GHz to 2.4GHz, and now you are chugging along at a real 100Mbps. 4 rooms over and you are at 40Mbps. Etc. The real performance improvements, especially in noisy environments or ones with poor signal propagation, are all about performance close to an access point.
My old TP-Link WDR3600, from one side of my house to the other through several walls and about 50ft, can manage roughly 25Mbps on 2.4GHz. My much newer AC1200 is also stuck on 2.4GHz there, and it can only manage about 30Mbps. Not much of an improvement. 5GHz won't even penetrate that far. Get a room closer and I can get 5GHz on both routers... I get about 14Mbps on my WDR3600 and about 26Mbps on my AC1200. Move to the same room and I can get 200Mbps on my WDR3600... I get 490Mbps on my AC1200. Improved electronics and processing can improve long-range performance some, but the biggest gains are only going to be seen close in, where you already have a very strong signal, and only in situations where there is relatively low noise.
I am sure there will be more advances in wireless as time goes on, but again, a lot of the whiz-bang ones are only going to be close in. A lot of the recent cellular advances with LTE have come through bonding channels, so you can have 20MHz of spectrum to work with instead of maybe just 5MHz. Then you add in more streams. This is all stuff that Wi-Fi did a long time ago, but cellular is just starting to do. There are some gains still to be had with cellular for single-client performance, but to get much more you have to move to different frequencies so that you can bond more channels together. Spectral efficiency can only be improved so much, and that generally requires stronger signals (either transmitting louder or being closer).
zodiacfml - Thursday, October 6, 2016 - link
Not too hard at all, as long as there is a need/application for it. A fast and easy solution in the near future is to use wider channels in the 60 GHz band while increasing the number of MIMO antennas.

Another solution is to go optical, transmitting via infrared or visible light, where the only drawback is physical obstruction.
Xajel - Monday, October 3, 2016 - link
The big question is when we can expect consumer products with this standard...

cm2187 - Monday, October 3, 2016 - link
On Ethernet, one thing that would be great is an industry standard for a mini RJ45 port for laptops. It would be much better than having to play with dongles or Wi-Fi. I would think this would be trivial to implement.

Mr Perfect - Tuesday, October 4, 2016 - link
Well, Thunderbolt can handle Ethernet. Maybe they can come up with a way to get that working without a dongle. You'd still have to carry around an RJ45-to-Thunderbolt patch cable, though, which would kind of be the same thing as carrying around the dongle.

azazel1024 - Tuesday, October 4, 2016 - link
This. If you look at an RJ45, there isn't really a good way to make it smaller. You could make the connector flat and do away with the key and some other stuff to make it shorter, which would help... but you are still going to be stuck with either having a patch cable terminated in "mini" RJ45 or a dongle.

Might as well just have a dongle to start with. I like having an RJ45 right on a laptop (my ~4-year-old Ivy Bridge laptop has one of the spring-hinged RJ45s, so it takes up somewhat less space), but I also don't mind carrying around a USB3 GbE adapter. I actually have 2: one I carry in my tablet bag (because it is a 2-in-1) for when I need/want wired on the road, and one plugged into a patch cable at my desk, so I just plug in the USB3 port when I want wired right there.
Of course, it is an older 2-in-1 that only hits about 100Mbps max on wireless. My wife's 2-in-1 has an Intel 7265 in it that can hit about 500Mbps max with a slight tailwind. I don't think I've ever used that one wired.
damianrobertjones - Friday, October 7, 2016 - link
£1009 on Amazon. No thanks.