15 Comments
JoeyJoJo123 - Wednesday, October 12, 2016 - link
Is it just me, or does it sound like some edgy teenagers came up with the name of this consortium? Gen-Z? Come on...

XZerg - Wednesday, October 12, 2016 - link
I presumed it came from their goal to build for a generation of systems where zettabytes of storage are the standard...

ddriver - Wednesday, October 12, 2016 - link
It came from the goal to have a silly moniker for everything that can otherwise be referred to using standard terminology.

Noremacam - Wednesday, October 12, 2016 - link
That's because Generations A-Y were complete failures.

eddieobscurant - Wednesday, October 12, 2016 - link
Weird name coincidence with Samsung's upcoming Z-SSDs.

zodiacfml - Wednesday, October 12, 2016 - link
Interesting. Before posting this comment I went to check the latest news on fiber optics, which turned into an hour of reading. I was surprised by my lack of knowledge of the latest fiber news; AnandTech should cover it.

I discovered that the optical techniques I learned 10 years ago still apply, and that, as I knew back when I was a student, fiber has virtually infinite bandwidth. Still, seeing the bandwidth currently possible on a single fiber amazed me. It also brought me to dark fiber, which is intriguing.
Going back to the article, attached memory makes sense. I have seen comments from users that Ethernet networking is approaching memory speeds. Though that is far from the truth, it does suggest that DRAM can live in a remote location.
My only concern is that HBM might become mainstream (lower cost, higher density and performance), at which point DRAM modules might no longer be cost effective, while non-volatile memory becomes cheaper and faster.
DanNeely - Wednesday, October 12, 2016 - link
Ethernet approaching memory bandwidths is an optimistic claim at best. Dual-channel DDR4 can hit ~30 GB/s (240 Gbit/s) in commodity hardware, while 10 Gbit/s Ethernet is still almost entirely limited to enterprise servers (which probably have quad-channel memory controllers for twice the bandwidth quoted above), and the 100 Gbit/s standard is, AFAIK, almost entirely limited to very high-end data center network backbones and the like at present.

GPUs are an order of magnitude higher still: the Tesla P100 hits 720 GB/s (5,760 Gbit/s), with the consumer-centric GTX 1080 at 320 GB/s (2,560 Gbit/s).
The fastest undersea data cables can hit multiple terabits per second, smoking even GPU memory buses, but at the cost of complexity and prices orders of magnitude beyond Ethernet.
http://www.corsair.com/en-us/blog/2014/september/d...
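For anyone who wants to redo the comparison, here is a quick back-of-the-envelope sketch. It is purely illustrative: it just multiplies the peak GB/s figures quoted above by 8 to get Gbit/s, and the numbers are raw interface rates, not sustained throughput.

```python
# Back-of-the-envelope comparison of the peak figures quoted above.
# These are raw interface rates, not sustained throughput.

def gb_to_gbit(gb_per_s: float) -> float:
    """Convert GB/s to Gbit/s (8 bits per byte)."""
    return gb_per_s * 8

links = {
    "Tesla P100 HBM2 (720 GB/s)":   gb_to_gbit(720),
    "GTX 1080 (320 GB/s)":          gb_to_gbit(320),
    "Dual-channel DDR4 (~30 GB/s)": gb_to_gbit(30),
    "100 Gb Ethernet":              100.0,
    "10 Gb Ethernet":               10.0,
}

# Print fastest to slowest.
for name, gbit in sorted(links.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:30s} {gbit:8,.0f} Gbit/s")
```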
Yojimbo - Wednesday, October 12, 2016 - link
Gen-Z uses an Ethernet physical layer.

http://genzconsortium.org/faq/gen-z-technology/#17
zodiacfml - Thursday, October 13, 2016 - link
Thanks for the link.

zodiacfml - Thursday, October 13, 2016 - link
- I don't know why some users mention that, but the idea of comparing memory bandwidth and networking is still interesting.
- The Tesla P100 uses HBM2. That technology will scale even higher; I feel it might show up in SoCs soon.
- A single fiber has vast bandwidth. If limited by cost and present technology, adding more cables quickly solves the problem.
MrSpadge - Monday, October 17, 2016 - link
> A single fiber has vast bandwidth. If limited by cost and present technology, adding more cables quickly solves the problem.

Besides more fibers, an optical transmitter requires light sources, modulators and detectors, as well as packaging of all those components. There's no fully integrated, reasonably cheap solution in sight yet, despite people having worked on it for at least 15 years. That doesn't mean it won't happen, but it takes a lot more than "simply attach a few more fibers".
Visual - Thursday, October 13, 2016 - link
So what is this thing in human words?

A replacement for FSB/HyperTransport/QuickPath? But it talks about various storage devices and not just CPU and RAM... so is it going to replace PCI Express too? Or is it just a protocol that will work on top of PCIe? A DMA replacement of sorts?
vanilla_gorilla - Thursday, October 13, 2016 - link
My assumption is that this is designed to support the coming storm of storage class memory.

tuxRoller - Saturday, October 15, 2016 - link
It's a bit of everything.

Right now it is a packet-based protocol that uses memory semantics across an 802.3 PHY, with connections between points composed of lanes (up to 256). Each lane has a signalling range of 16-112 GT/s, and lanes are aggregated into links, with the possibility of multiple links per connection. Supported topologies are point-to-point, daisy-chained, or switched, with discrete and integrated packaging possibilities.
Devices are located on subnets (each of which supports up to 4k devices), and a network can be composed of up to 64k subnets.
They are also in the process of specifying a new connector (and the requirements they have for that connector are insane).
So it's really designed for very, very large numbers of processors, with accelerators as first-class citizens, and vast amounts of storage (DRAM, NVM, and cold storage). That means it's competing with the likes of InfiniBand, but, interestingly, it can also be integrated so as to replace the buses you mentioned.
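To put those upper bounds in perspective, here is a tiny sketch of the maximums the figures above imply. It assumes one bit per transfer per lane and ignores encoding and protocol overhead, so real throughput would be lower.

```python
# Rough upper bounds implied by the spec figures above.
# Assumptions: 1 bit per transfer per lane, no encoding/protocol overhead.

MAX_LANES          = 256        # lanes per connection
MAX_RATE_GT_S      = 112        # per-lane signalling, GT/s
DEVICES_PER_SUBNET = 4 * 1024   # "up to 4k devices"
MAX_SUBNETS        = 64 * 1024  # "up to 64k subnets"

peak_link_gbit_s = MAX_LANES * MAX_RATE_GT_S         # raw Gbit/s on a maxed-out connection
peak_link_tb_s   = peak_link_gbit_s / 8 / 1000       # same figure in TB/s
max_devices      = DEVICES_PER_SUBNET * MAX_SUBNETS  # addressable endpoints

print(f"Peak raw rate of a 256-lane connection: {peak_link_gbit_s:,} Gbit/s (~{peak_link_tb_s:.1f} TB/s)")
print(f"Maximum addressable devices:            {max_devices:,}")
```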
lanek81 - Thursday, October 13, 2016 - link
Is there a reason why AMD has been removed completely from every part where it was cited in the original press release? And obviously, it seems Gen-Z is AMD's Global Memory Interconnect (GMI link, or data-coherent fabric).