4 Comments
mode_13h - Monday, August 14, 2023 - link
So, the external link is what? A CXL/PCIe x16 cable connector? Do the cables have the same PCIe edge connector as the cards?

TomWomack - Tuesday, August 15, 2023 - link
This seems excessively aggressive and complicated engineering when a single dual-Sapphire-Rapids system can hold 2TB of DDR5 for about £7,000 (32 64GB modules at £230 each) and access it much faster. Increase the capacity by a factor of eight at the same price per gigabyte and it becomes more interesting.

Eletriarnation - Wednesday, August 16, 2023 - link
Still early days, so yeah, the cost isn't very competitive, but isn't a key differentiator the fact that all of the systems sharing the CXL memory bank get a coherent view of the contents at any given time? Eight servers sharing a common data set in storage would each have to independently load that data set into memory and copy any changes back to the common storage, but eight servers on a CXL bank can work directly from the common memory. It still needs some kind of system for locking data that is being written, but that's probably no more complicated than it would have been when working from disk.

[email protected] - Tuesday, August 15, 2023 - link
Another step towards separating memory from processing. Nice. But consider that HBM3 is now doing 1-2 TB/s, whereas this solution probably manages at best a tenth of that speed. Maybe a PCIe 5.0 x16 connection can handle the throughput?
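The bandwidth comparison in the last comment can be checked with back-of-envelope arithmetic. This is a rough sketch assuming PCIe 5.0's 32 GT/s per lane and 128b/130b line encoding, ignoring protocol overheads, which would reduce usable throughput further:

```python
# Back-of-envelope: PCIe 5.0 x16 usable bandwidth vs. HBM3 (illustrative figures).
PCIE5_GT_PER_LANE = 32            # GT/s per lane for PCIe 5.0
LANES = 16
ENCODING = 128 / 130              # 128b/130b line-encoding efficiency

# Each transfer carries 1 bit per lane; divide by 8 to get bytes.
pcie5_x16_gbps = PCIE5_GT_PER_LANE * LANES * ENCODING / 8
print(f"PCIe 5.0 x16: ~{pcie5_x16_gbps:.0f} GB/s")          # ~63 GB/s

hbm3_low, hbm3_high = 1000, 2000  # GB/s, per the comment's 1-2 TB/s figure
print(f"Ratio vs HBM3: 1/{hbm3_low / pcie5_x16_gbps:.0f} "
      f"to 1/{hbm3_high / pcie5_x16_gbps:.0f}")              # 1/16 to 1/32
```

So the gap is closer to 1/16-1/32 of HBM3 than 1/10, before accounting for CXL protocol overhead and added latency.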
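Eletriarnation's point about multiple hosts working directly from one coherent memory pool, with locking only around writes, can be illustrated by analogy on a single host. This is a minimal sketch using Python's `multiprocessing` shared memory and a lock; it is not CXL itself, just the same access pattern: many workers, one shared region, writes serialized by a lock:

```python
from multiprocessing import Process, Lock, shared_memory
import struct

# Several workers increment one counter held in a shared memory region.
# The lock plays the role of the "system for locking data that is being
# written" from the comment; reads outside the lock see a coherent view.
def add_to_counter(name, lock, amount):
    shm = shared_memory.SharedMemory(name=name)
    with lock:  # serialize the read-modify-write
        (value,) = struct.unpack_from("q", shm.buf, 0)
        struct.pack_into("q", shm.buf, 0, value + amount)
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(create=True, size=8)
    struct.pack_into("q", shm.buf, 0, 0)        # counter starts at 0
    lock = Lock()
    workers = [Process(target=add_to_counter, args=(shm.name, lock, 1))
               for _ in range(8)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    (total,) = struct.unpack_from("q", shm.buf, 0)
    print(total)  # all eight increments land: no per-worker copy to merge back
    shm.close()
    shm.unlink()
```

The contrast with shared *storage* is that here no worker ever loads a private copy and writes it back; they all mutate the same bytes, which is the access model a shared CXL memory bank offers across servers.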