
  • konbala - Wednesday, August 2, 2023 - link

    Computer audiophiles' dream come true
  • mode_13h - Wednesday, August 2, 2023 - link

    Why? You can already transmit 24/96 to an outboard DAC over toslink. If you need something like Dolby Atmos and want optical isolation, there are optical transceivers you can get for HDMI.
  • edzieba - Wednesday, August 2, 2023 - link

Also, PCI-SIG: OCuLink? What OCuLink?
  • nfriedly - Wednesday, August 2, 2023 - link

    It would be pretty awesome if PCI-SIG made the official cabling standard compatible with OCuLink. But if not, I could see passive adapters as a possibility.
  • speedping - Wednesday, August 2, 2023 - link

    Would this enable cloud computing with modular GPUs, the way NVMe-oF did for SSD storage?
    How far away can something like a GPU be before the added latency makes it unusable? Each meter adds roughly 3.3 nanoseconds of one-way latency at the speed of light in vacuum (closer to 5 ns in fiber).
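
    A quick back-of-the-envelope sketch of that propagation delay (a minimal Python example; the speeds are physics constants, but the distances are illustrative assumptions, not from any spec):

    ```python
    # Rough propagation-delay estimate for a remote GPU link.
    # Assumption: light travels at c in vacuum and at roughly 2/3 c in
    # optical fiber; the distances below are illustrative, not from any spec.

    C_VACUUM_M_PER_NS = 0.2998   # metres per nanosecond in vacuum
    FIBER_FACTOR = 2 / 3         # typical group velocity in glass fiber

    def one_way_latency_ns(distance_m: float, in_fiber: bool = True) -> float:
        """One-way propagation delay in nanoseconds over the given distance."""
        speed = C_VACUUM_M_PER_NS * (FIBER_FACTOR if in_fiber else 1.0)
        return distance_m / speed

    for d in (1, 10, 100):  # metres: same rack, same row, across the hall
        rtt = 2 * one_way_latency_ns(d)
        print(f"{d:>4} m: {one_way_latency_ns(d):6.1f} ns one-way, {rtt:7.1f} ns round-trip (fiber)")
    ```

    Even at 100 m that's around a microsecond of round-trip propagation, so serialization, switching, and protocol overheads would likely dominate long before the speed of light does.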
  • mode_13h - Wednesday, August 2, 2023 - link

    Presumably anything they do would apply equally to CXL, which already supports switch fabrics. It's therefore entirely plausible to see rack-scale disaggregated GPU compute setups.

    I think there are still pragmatic reasons to keep the GPUs in the same rack, however. Otherwise, the amount of bandwidth you'd need between racks would get pretty nuts.
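
    As a rough illustration of why (all figures here are assumptions for the sketch: roughly 128 GB/s per direction for a x16 PCIe 6.0 link, and hypothetical GPU counts):

    ```python
    # Back-of-the-envelope: aggregate bandwidth needed if disaggregated GPUs
    # in one rack are driven from hosts in another rack at full link rate.
    # Assumptions (not from the article): a x16 PCIe 6.0 link carries roughly
    # 128 GB/s per direction; the GPU counts are hypothetical.

    PCIE6_X16_GB_S = 128  # GB/s per direction, x16 PCIe 6.0 (approximate)

    for gpus_per_rack in (8, 32, 64):
        aggregate_gb_s = gpus_per_rack * PCIE6_X16_GB_S
        print(f"{gpus_per_rack:>3} GPUs: {aggregate_gb_s:>5} GB/s "
              f"(~{aggregate_gb_s * 8 / 1000:.0f} Tb/s) of inter-rack bandwidth")
    ```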
  • Jorgp2 - Wednesday, August 2, 2023 - link

    Those have been a thing for at least a decade.

    You could get PCIe expansion systems just like you could with storage.
  • mode_13h - Wednesday, August 2, 2023 - link

    I think PCIe over optical shouldn't just be a pair of optical transceivers inserted at each end; it should revisit fundamental aspects of the signalling to properly adapt it to a fiber-optic medium. I sure hope whatever they do can scale at least half as well as PCIe has so far, and that's unlikely if they make too many short-term-oriented compromises.

    > ranging from x1 to x16 connections.

    I thought it went up to x32, even though we don't often see them in practice.
  • watersb - Wednesday, August 2, 2023 - link

    I don’t work with modern server-class hardware, but I have noticed how servers now use cables for internal connections, like the data link between CPU sockets.

    Short cables where both ends terminate on the same circuit board: replacing long traces with a short run to a high-density plug or socket of some kind, then over a cable to another such connector.

    Sure, for over a decade the industry has used PCIe lanes for the CPU to talk to the motherboard chipset; in theory that traffic could have been carried over a short cable or riser adapter, but such an approach was quite unusual.

    But now PCIe bus traffic has become so high-frequency that circuit board traces can no longer carry the signals reliably.

    Maybe this is obvious to everyone here, but it’s still new to me, and has not yet become necessary for consumer PCIe 5…

    But how long until your gaming rig needs CXL to talk to RAM?
  • mode_13h - Thursday, August 3, 2023 - link

    > how long until your gaming rig needs CXL to talk to RAM?

    Once the main memory turns into a stack sitting on the same interposer as the CPU, then CXL will be the natural option for memory expansion.
  • back2future - Thursday, August 3, 2023 - link

    Unless RAM-like memory gets integrated into the CPU package again (with HBM3 and beyond), optical seems more like an option inside the CPU and for server-market peripherals, or (if it becomes a cheap alternative to copper) attractive for its strong isolation against interference between parallel lanes. Open questions: the price, efficiency, endurance, and power draw of the transmitter diodes (which add to chipset cooling needs 'again' in high-bandwidth setups), and what impact optical Thunderbolt will have on the mass consumer market. It's not necessarily a distance improvement for peripherals (signal propagation speed is about the same), but bandwidth and latency could improve thanks to higher clock speeds and lower signal degradation from noise(?)
    CXL 3.0 is based on PCIe 6.x (a server/workstation market thing for ~2024/25, or several years minimum before arriving on consumer desktops and mobiles?), and for comparison a DDR DIMM's native latency is ~20 ns, while a CXL implementation requires tolerance to ~200 ns.
  • back2future - Thursday, August 3, 2023 - link

    [ DDR5 DIMMs are getting down to ~10 ns and HBM seems to be more towards ~100 ns ]
  • TeXWiller - Thursday, August 3, 2023 - link

    It would be nice to see small, cabled EDSFF backplanes becoming a thing on workstations and gaming systems alike. Let the magic of volume manufacturing bring the price of hot-swappable, large capacity PCIe storage and plug-in accelerators down to the consumer levels.
  • LuxZg - Friday, August 4, 2023 - link

    I wonder if it could be cheaper than tracing on the motherboard... Or am I the only one envisioning a smaller motherboard, with 6-8 optical connectors somewhere near the CPU/chipset, bypassing ATX sizes for ITX, replacing the PCIe x1-x16 slots with something thin and light, and letting us place hardware wherever we want in some new weird cases? Sure, we have x16 ribbons and similar, but I'd gladly trade those for something as thin as a fiber cable, or even something as "thick" as SATA. Then play around with free-form placement of components.

    I know, I know, I'm letting my imagination go wild 😂 These connectors would probably add another $300 to the average motherboard, even if they actually cost cents to manufacture, just because...
