20 Comments
imaheadcase - Monday, April 27, 2020 - link
Every time I hear something like "interconnect provider" I just think "So they run cables". hehe
mentor07825 - Monday, April 27, 2020 - link
Well... yes. But then again, the internet is just a series of tubes.
surt - Monday, April 27, 2020 - link
It always bugged me that that statement got so much criticism, given that it was entirely true.
RadiclDreamer - Monday, April 27, 2020 - link
Sort of... I think it was more the words that came after that made him a laughing stock. Watch the entire speech before making a final judgement on it.
rahvin - Monday, April 27, 2020 - link
It was appropriately ridiculed. I have no issue with using that as a greatly simplified example to explain how the internet works to him given his age, but parroting it on the floor of the Senate and in legislation was the point where he stepped into the path of rightly deserved ridicule.
LiKenun - Monday, April 27, 2020 - link
They lay pipe.
Crazyeyeskillah - Tuesday, April 28, 2020 - link
This guy Networks.
brucethemoose - Monday, April 27, 2020 - link
So... Teslas with built-in InfiniBand ports?
edzieba - Monday, April 27, 2020 - link
Sounds like the idea. If you can just dump your GPUs straight into your network, then why bother buying a bunch of those pesky CPUs just to act as GPU hosts?
brucethemoose - Monday, April 27, 2020 - link
Maybe? I was picturing a port stuck on a regular PCIe accelerator, to conserve PCIe slots and cut inter-node GPU-to-GPU latency.
Most projects need some kind of traditional CPU/OS as a host. IDK about HPC workloads, but those folks seem to like big CPUs too.
I can picture Nvidia eventually selling embedded (or socketed?!) Teslas as full-blown system-on-packages some day, complete with DDRX slots and other I/O. That would be one heck of an undertaking though.
schujj07 - Monday, April 27, 2020 - link
As a person who uses Mellanox equipment in their data center, I wonder if this is going to change things with support.
ZeDestructor - Monday, April 27, 2020 - link
Not in the near term, I think. While the marketing, PR, and sales people are all harping on about synergy, integration, and so on, I personally feel the whole acquisition was entirely a "for the love of god don't give Intel an absolute NIC monopoly" move, especially with nV (and Xilinx) entering negotiations *after* Intel announced that it was going to acquire Mellanox. Then, a few months after being outbid, Intel announced RDMA-capable controllers to compete against Mellanox. Not a shred of anti-competitive behaviour planned there, eh?
schujj07 - Monday, April 27, 2020 - link
A different question will be dealing with their SFPs. A major reason I went Mellanox is that they don't vendor-lock their connectors. This makes it easy to connect a Mellanox 100GbE switch to an Aruba 1GbE switch, as the Mellanox switch doesn't care about the SFP coding. Will nVidia's management want to make things vendor-locked now?
Spunjji - Tuesday, April 28, 2020 - link
Of course they'll *want* to - Nvidia management love them some vendor lock-in - but I doubt they would do so in the short-to-medium term.
ZeDestructor - Tuesday, April 28, 2020 - link
Same as the support question. Personally, seeing how things are running right now, I feel nV is perfectly happy letting the Mellanox team do their own thing, as long as they can remain at the top end of the market performance-wise and profitable.
UltraWide - Monday, April 27, 2020 - link
It looks like they want the GPU to exist in the data center without relying on an x86 processor to control it.
webdoctors - Monday, April 27, 2020 - link
I don't know a lot about this space, but how would a networking chip replace the x86 processor in the data center? I think they're pretty independent. I don't think this will change anything for regular Mellanox customers.
Griffinme - Tuesday, April 28, 2020 - link
Are they going to change their pricing to match their server GPU scheme? Their server GPUs are nice, but the pricing is crazy, to the point where particular AMD GPU server chips can't be found. Certain cloud providers took all of them.
Eliadbu - Tuesday, April 28, 2020 - link
Their high-performance switches and NICs aren't that affordable anyway; I don't think pricing will change much since they need to stay competitive.
syristix - Saturday, May 2, 2020 - link
Israeli-American company - at the risk of sounding somewhat controversial, WTF has it got to do with the Chinese?