Sahrin - Thursday, March 5, 2020 - link
I still prefer the Hypertransport braning :)
ballsystemlord - Thursday, March 5, 2020 - link
You mean branding. I liked HyperTransport better too.
Dragonstongue - Thursday, March 5, 2020 - link
HyperTransport is a consortium standard, while Infinity Fabric (now Infinity Architecture) is AMD-exclusive (to the best of my limited knowledge). Maybe next up will be "Hyper Infinity" to marry the two near-and-dear words together...
I love the name HyperTransport as well; however, maybe they decided to stop using it because "HT" reads as Intel, whereas Infinity is their own branding, a way to win "mindshare" among their various customers/consumers...
No "real" limitations, more or less, as it is Infinity (and beyond)... something like that, anyway.
Near-limitless scaling potential, which they have shown and countless reviewers have more or less proven with Zen 2 (Ryzen 3xxx).
I wonder if their "next huge move" at some point in the near future will be the FIRST readily available optical-based chip (or fabric), as that time is approaching for sure. At some point silicon will reach the limits of die shrinks, bajillions of cores or not, whereas with optical they could "go back to the start," as in 200+ nm dies, since optical is a whole new ballgame to be played upon the field.
Maybe at the same time we will also start seeing proper self-driving vehicles that one does not need a license to own, alternative fuels for those vehicles becoming commonplace, universal income for everyone on earth...
Can only dream (^.^)
Threska - Friday, March 6, 2020 - link
Optical, or some other material for their bus.
Lord of the Bored - Friday, March 6, 2020 - link
"Maybe next up will be Hyper Infinity to "marry" the two near and dear words togther.."I would love to go for a ride on the Hyper Infinity Transport Fabric.
jtd871 - Friday, March 6, 2020 - link
"Magic Carpet Ride"Lord of the Bored - Saturday, March 7, 2020 - link
It'll be a whole new world, with new fantastic computing possibilities.
Smell This - Friday, March 6, 2020 - link
**HyperTransport is a consortium standard, while Infinity Fabric (now Infinity Architecture) is AMD-exclusive (to the best of my limited knowledge)**
_______________________________________________
'Infinity Fabric' was purchased by Rory Read in 2012 or so, for $330M. It was based on the SeaMicro 'Freedom' fabric.
Mr Read *pant-eds* Intel. HA!
https://www.extremetech.com/computing/120601-amd-b...
https://www.anandtech.com/show/9170/amd-exits-dens...
Fataliity - Friday, March 6, 2020 - link
No it's not. They are completely different.
https://dvcon-india.org/sites/dvcon-india.org/file...
https://dvcon-india.org/sites/dvcon-india.org/file...
These are the first papers about Infinity Fabric from AMD (and the last, I think). They are using a PHY connection (8g, 16g, 32g), similar to this:
https://ip.cadence.com/ipportfolio/ip-portfolio-ov...
haukionkannel - Friday, March 6, 2020 - link
So in the next gen we will have chiplet GPUs from AMD as well, and chiplet APUs!
basix - Friday, March 6, 2020 - link
About the 6 IF links: isn't it much more likely that it will be 8 links? Then you get direct all-to-all connectivity with 7 links, and the CPU would have a direct link to each GPU as well (the 8th link). This would be true all-to-all connectivity with maximum bandwidth and minimum latency.
With 8 IF links, the HPC/supercomputer configuration for Frontier and El Capitan makes much more sense. Again, with direct all-to-all connectivity you need 4 links for each CPU and GPU. But because it is a supercomputer you want more bandwidth, so just double the IF width of each link and you get there. Again, 8 links makes very much sense.
6 IF links would be possible, that is true. But you get an asymmetry in the network. With 8 links this asymmetry disappears. So I suspect the 6 links shown are just there for artwork-related reasons.
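For anyone who wants to sanity-check the link counts being argued here, a minimal sketch of the arithmetic (the topologies and counts are the commenters' assumptions, not confirmed AMD specs):

```python
# Hypothetical link-count arithmetic for direct all-to-all GPU topologies,
# matching the comment above; none of this is a confirmed AMD configuration.

def links_per_gpu(num_gpus: int, cpu_links: int = 1) -> int:
    """Links each GPU needs: one per peer GPU plus a dedicated CPU link."""
    return (num_gpus - 1) + cpu_links

# 8-GPU box as in the render: 7 GPU-to-GPU links + 1 CPU link = 8 per GPU.
print(links_per_gpu(8))   # -> 8

# Frontier / El Capitan style node (1 CPU + 4 GPUs): 3 + 1 = 4 links per GPU,
# which leaves headroom to double each link's width for extra bandwidth.
print(links_per_gpu(4))   # -> 4
```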
Santoval - Friday, March 6, 2020 - link
6 IF links is probably the maximum they could manage per graphics card. Sure, with 6 IF links instead of an all-to-all interconnect you get a *near* all-to-all. It is not fully symmetric and it's less clean, but it might just be the most they could reach.
basix - Friday, March 6, 2020 - link
Yeah, cost is an issue. But then, something like an NVLink switch would make very much sense as well, because you could reduce the number of links per GPU and at the same time increase overall bandwidth between all GPUs.
Brane2 - Friday, March 6, 2020 - link
IMO it's not about the links, but the cost of the underlying coherence (MOESI, etc.) circuitry.
Brane2 - Friday, March 6, 2020 - link
I wonder what IF will be able to achieve with CPU clusters at that point. Will it be similarly capable of connecting up to 6 dies on one chip?
Or will they unfurl that "hypercube" and we'd get more silicon dies, some requiring an extra hop?
I know about the star topology and the I/O chiplet, but it could be interesting if one could get a chip with more CPU dies (perhaps stacked), with only some in the group connected to the I/O complex directly.
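As a rough illustration of the hop-count trade-off being speculated about here (the die layouts are hypothetical examples, not actual AMD packages), a short BFS sketch:

```python
from collections import deque

def hops(adj, src, dst):
    """Breadth-first search returning the minimum hop count between two dies."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

# Star topology: every CCD talks only to the I/O die (IOD) -> CCD-to-CCD is 2 hops.
star = {"IOD": ["CCD0", "CCD1", "CCD2"],
        "CCD0": ["IOD"], "CCD1": ["IOD"], "CCD2": ["IOD"]}

# Hypothetical stacked layout: CCD2 reaches the IOD only through CCD1 -> 3 hops.
stacked = {"IOD": ["CCD0", "CCD1"],
           "CCD0": ["IOD"], "CCD1": ["IOD", "CCD2"], "CCD2": ["CCD1"]}

print(hops(star, "CCD0", "CCD2"))     # -> 2
print(hops(stacked, "CCD0", "CCD2"))  # -> 3
```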
JasonMZW20 - Saturday, March 7, 2020 - link
6 links is still good for 300 GB/s bidirectionally (each link is 25 GB/s × 2). I think fabric power consumption becomes a larger issue with more links, as do diminishing returns. If we assume each bidirectional link costs 12.5 W (I don't have actual figures, so hypothetically), that's 75 W right off the bat. It makes sense, then, that Arcturus' GPU power consumption is a mere 200 W. With 6 links, that pushes the GPU right up to 275 W, and that's with the display, graphics, and pixel engines completely gutted (rumored) for pure compute use in a server (or supercomputer).
300 GB/s is based on current PCIe 4.0. This could roughly double, via PCIe 5.0, by the time El Capitan is ready. That's quite a bit of bandwidth to keep those workloads moving, but it still lags behind the 1 TB/s+ of HBM2/HBM2e VRAM when using 4 modules and a 4096-bit bus.
We may see 8 links and all-to-all for 8 GPUs if the power consumption can be tamed. That's probably the largest issue right now. I'm not totally sure if this is the case, but logically, it makes sense.
If we count all 6 IF links on all 8 GPUs (48 links × 50 GB/s), that's an aggregate system bandwidth of 2.4 TB/s bidirectionally. Looks pretty decent when examined as a whole cluster.
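A quick sketch to double-check the back-of-envelope numbers in that comment (the 12.5 W per link and 200 W base figures are the commenter's own assumptions, not published specs):

```python
# Back-of-envelope check of the figures above. Per-link power and base GPU power
# are assumptions from the comment; 25 GB/s per direction reflects PCIe 4.0-class links.

LINK_BW_GBPS   = 25 * 2   # 25 GB/s each way -> 50 GB/s bidirectional per link
LINKS_PER_GPU  = 6
GPUS           = 8
WATTS_PER_LINK = 12.5     # hypothetical, not an actual spec
BASE_GPU_WATTS = 200      # rumored Arcturus compute-only power

per_gpu_bw   = LINKS_PER_GPU * LINK_BW_GBPS       # 300 GB/s per GPU
fabric_power = LINKS_PER_GPU * WATTS_PER_LINK     # 75 W of link power per GPU
total_power  = BASE_GPU_WATTS + fabric_power      # 275 W per GPU
aggregate_bw = GPUS * per_gpu_bw / 1000           # 2.4 TB/s across the cluster

print(per_gpu_bw, fabric_power, total_power, aggregate_bw)  # 300 75.0 275.0 2.4
```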
pogsnet - Thursday, March 12, 2020 - link
We need an updated OS, perhaps Windows 11, soon to keep up with these fast innovations.
jaker788 - Saturday, May 23, 2020 - link
Pretty sure Microsoft said Windows 10 was the last version, at least for a long time. 10 is their evolving platform that gets major and incremental updates from time to time.