16 Comments
alphasquadron - Wednesday, August 26, 2015 - link
Can someone tell me how this will affect a normal consumer? Does this allow processors to be faster or something, or is this network related?
mmrezaie - Wednesday, August 26, 2015 - link
It's mostly datacenter or HPC related, and a very important technology too. It can massively increase the performance of IO-heavy applications, usually in the big data area.
zoxo - Wednesday, August 26, 2015 - link
If you are a normal customer, this does not affect you in any way.
JKflipflop98 - Wednesday, August 26, 2015 - link
A very long time from now, the technology used here will lead to products you use at home. As of right now, this is super-spendy "big iron" type stuff.
vision33r - Wednesday, August 26, 2015 - link
I'd like to see what game or app you use at home that can push that much data through the interface.
repoman27 - Wednesday, August 26, 2015 - link
GPUs can. Think CrossFire or SLI, but with the GPUs in separate devices connected by Omni-Path. If you have several PCs, you could tap all of their resources from whichever one you happen to be sitting at, across the network.
frenchy_2001 - Wednesday, August 26, 2015 - link
It won't. This is a technology to link multiple nodes in a "Big Data" server, i.e. a supercomputer.
This will *NOT* come to consumers anytime soon and quite honestly, consumers have no real use for it at the moment.
repoman27 - Wednesday, August 26, 2015 - link
Well, actually, it already has in a way. Although it's not designed as a network fabric, mostly because Intel isn't interested in producing dedicated switches for it and potentially cannibalizing a more lucrative market, Thunderbolt isn't too far off from a consumer version of Omni-Path. Consider the upcoming Alpine Ridge Thunderbolt 3 controllers vs. Omni-Path:
80% of SerDes rate (20.625 Gbit/s vs. 25.78125 Gbit/s)
1/2 the channels per port (2C vs. 4C)
40% of peak port bandwidth
25% of PCIe back-end bandwidth (PCIe 3.0 x2 or x4 vs. PCIe 3.0 x8 or x16)
No dedicated switches
Latency ~3x higher (although it’s hard to say at this point for TB3)
35% of TDP (2.8 W vs. 8 W) or 17.5% if we believe Intel’s claims of “50% reduced power” with TB3
60x smaller receptacle (USB Type-C at 8.34 x 2.56 x 6.2 mm vs. QSFP28 at 19 x 8.8 x 48 mm)
Intel’s recommended customer price is $9.95 vs. ???
Cables cost 2-4x less ($14.50 - $60.00 / m for 0.5 to 60 m active copper and optical cable assemblies vs. $30.00 - $256.00 / m for 0.5 to 50 m passive copper and active optical cable assemblies)
You may be right in that consumers haven't found a real use for Thunderbolt either, but creatives, professionals and academics certainly have.
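For anyone who wants to check the first two ratios in that list, here is a minimal sketch of the arithmetic, assuming only the per-lane signaling rates and lane counts quoted above (raw rates, ignoring encoding and protocol overhead):

```python
# Sanity check of the SerDes and peak port bandwidth ratios quoted above,
# assuming the per-lane rates and lane counts listed are accurate.

tb3_lane_gbps = 20.625     # Thunderbolt 3 per-lane signaling rate (Gbit/s)
opa_lane_gbps = 25.78125   # Omni-Path per-lane signaling rate (Gbit/s)
tb3_lanes = 2              # channels per TB3 port
opa_lanes = 4              # lanes per Omni-Path port

serdes_ratio = tb3_lane_gbps / opa_lane_gbps
port_ratio = (tb3_lane_gbps * tb3_lanes) / (opa_lane_gbps * opa_lanes)

print(f"SerDes rate ratio:         {serdes_ratio:.0%}")  # -> 80%
print(f"Peak port bandwidth ratio: {port_ratio:.0%}")    # -> 40%
```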
extide - Friday, August 28, 2015 - link
x2 ^^^ The real science is in the 20-25 Gbit/s SerDes rates these things are using. No doubt there is a lot of crossover between the development of Omni-Path and TB3 in that regard.
donwilde1 - Saturday, August 29, 2015 - link
Actually, it affects you concretely. Your Siri will be faster. Your weather predictions will come faster. Virtually any app that accesses complex cloud data will -- as these are adopted -- return information to you faster. It is unlikely that these will be used for simple websites, but anything that requires computation, such as voice recognition or facial pattern recognition or data analytics, will operate faster.
TechEagle - Wednesday, August 26, 2015 - link
This is what I call zero value add. You sound like a marketing arm for Intel. Not a hint of any work beyond what Intel told you. Did you ask them how they intend to win in this space (beyond, of course, the integration threat) by pushing technologies which couldn't stand on their own (QLogic, Fulcrum)?
Ryan Smith - Wednesday, August 26, 2015 - link
The short answer is integration. Omni-Path is integrated into Xeon Phi; nothing else is.
tspacie - Thursday, August 27, 2015 - link
Was there any information about addressing or routing? It seems like either the FLIT or the LTP would be routable, but they don't specify. Also, is it PCIe only, or are they making a QPI interface version for Xeons (non-Phi)?
basroil - Thursday, August 27, 2015 - link
Is that a heat sink on the connector???? That must be some serious data power!
toyotabedzrock - Thursday, August 27, 2015 - link
It sounds like you would need the Xeon Phi to actually make use of that kind of bandwidth.
sor - Friday, August 28, 2015 - link
Not to sound snobbish, but having done a fair amount of InfiniBand work, I don't see how this is all that compelling. Sure, their switches have more ports and a slightly different protocol (that seems like an interesting take, but I'm doubtful it amounts to a huge benefit over 64b/66b), but the throughput and latency numbers are essentially the same as EDR (although Mellanox claims 10-20% better switch latency than the chart above), and the general engineering of fabric w/manager seems the same. It seems more like an attempt to control the market by branding their own same-but-different flavor.
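As a rough illustration of why the headline throughput numbers come out essentially the same, here is a minimal sketch. The EDR figures (25.78125 Gbaud per lane, 64b/66b encoding, 4x port) are standard; the Omni-Path framing used here (a 1056-bit LTP carrying 16 flits of 64 payload bits each, with the remaining bits spent on flit type, CRC, and credit fields) is an assumption based on how the presentation describes FLITs and LTPs, so treat those exact numbers as illustrative:

```python
# Back-of-the-envelope link efficiency comparison: EDR InfiniBand vs. Omni-Path.
# EDR: 64b/66b line coding. Omni-Path: assumed 1056-bit LTP with 16 x 64-bit
# payload flits (remaining bits for flit type, CRC, and credit fields).

lane_gbaud = 25.78125   # per-lane signaling rate shared by both fabrics
lanes = 4               # 4x port in both cases

edr_efficiency = 64 / 66            # 64b/66b encoding overhead
opa_efficiency = (16 * 64) / 1056   # payload bits per LTP (assumed layout)

edr_data_gbps = lane_gbaud * lanes * edr_efficiency
opa_data_gbps = lane_gbaud * lanes * opa_efficiency

print(f"EDR 4x:       {edr_data_gbps:.1f} Gbit/s ({edr_efficiency:.2%} efficient)")
print(f"Omni-Path 4x: {opa_data_gbps:.1f} Gbit/s ({opa_efficiency:.2%} efficient)")
# Both land at ~100 Gbit/s, which supports the point that raw throughput is not
# where any differentiation between the two fabrics would come from.
```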