38 Comments

  • Xajel - Wednesday, February 3, 2021 - link

    Damn, I wish these were cheap and not overpriced because of their current image as a server-market product.

    A simple switch with x1-lane bifurcation could be the perfect solution for any consumer-grade motherboard as a true replacement for SATA: the port could be configured as x1, x2 or x4 depending on the speed of the SSD we want. A single x4 port could feed four x1 drives, for example, or two x2 drives, or a single x4 drive (rough numbers at the end of this comment).
    The OCuLink connector seems perfect for such an application, assuming it can be updated for PCIe 5.0.

    But I guess Intel & AMD are forced to have their own PCIe switches integrated in the chipsets because third-party switches are so damn expensive. And we're limited to those, since motherboard manufacturers can't afford to add expensive switches just because "consumer" users want more PCIe lanes.
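    A rough bandwidth sketch for the bifurcation idea above (assuming 8/16/32 GT/s line rates and 128b/130b encoding; protocol overhead ignored):

```python
# Approximate per-port bandwidth for the x1/x2/x4 split idea, vs SATA III.
GT_PER_LANE = {"PCIe 3.0": 8, "PCIe 4.0": 16, "PCIe 5.0": 32}

def usable_gb_per_s(gt_per_s, lanes, encoding=128 / 130):
    """Rough usable bandwidth in GB/s; ignores packet/protocol overhead."""
    return gt_per_s * lanes * encoding / 8

for gen, gt in GT_PER_LANE.items():
    print(gen, [round(usable_gb_per_s(gt, lanes), 2) for lanes in (1, 2, 4)], "GB/s for x1/x2/x4")

# SATA III: 6 Gb/s with 8b/10b encoding -> ~0.6 GB/s
print("SATA III ~", round(6 * (8 / 10) / 8, 2), "GB/s")
```

    Even a single PCIe 5.0 lane (~3.9 GB/s) would be several times faster than SATA III, which is the point of splitting an x4 port across drives.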
  • Deicidium369 - Wednesday, February 3, 2021 - link

    Oculink is dead - Intel cancelled it - so no new Oculink

    Rest of your post is gibberish
  • Billy Tallis - Wednesday, February 3, 2021 - link

    What connector do you think is going to be most popular for PCIe 5.0 cabling between motherboards and SSD backplanes?
  • Deicidium369 - Wednesday, February 3, 2021 - link

    It won't be OCuLink. If we are talking about enabling disaggregation in CXL 2.x, then it could be copper or it could be optical, with optical being most likely. And not just for connecting storage, but also for eventual pools of resources like FPGAs, AI (Habana or TPU), GPUs (Intel, Nvidia or AMD) and memory (volatile and non-volatile)... then optical makes the most sense, with Intel's silicon photonics being a prime candidate, but Nvidia with Mellanox IP would be a possibility as well.

    As far as CXL goes, PCIe5 is a temporary stopgap to PCIe6, with even more issues when using copper runs longer than a few inches (motherboard).
  • edzieba - Friday, February 5, 2021 - link

    Yep, totally dead. So dead that there are products shipping with it and in active use, with recent updates to PCIe 4.0. Very dead indeed. Quite impressive for Intel to be able to kill a standard they did not originate and do not control, too.

    Of course, you may have completely mistaken Oculink for Omnipath, which has no relation.
  • mode_13h - Friday, February 5, 2021 - link

    Yeah, that must've been what he was thinking. I believe Omnipath is indeed discontinued.
  • WaltC - Wednesday, February 3, 2021 - link

    It's going to be a long time before PCIe5 hits the consumer market. It's amusing to note that when AMD was shipping PCIe4 CPUs, GPUs, and chipsets, some people were saying "You don't really 'need' PCIe4--PCIe3 is plenty." The bit about 'need' is an Intel standby for trying to convince people that they don't 'need' a competitor's superior products and tech. Before Intel licensed x86-64 from AMD, Intel's line was "You don't need 64 bits on the desktop," as Intel thought it was "too much" for consumers while encouraging business to go Itanium for 64-bit performance. Intel ran an entire ad campaign based on that slogan! As we all know, it didn't work.

    I mean, $1k Z590 motherboards that don't have system-wide PCIe4 buses? AMD is selling hybrid PCIe4/PCIe3 motherboards for 25% of what Intel wants to charge for its most advanced bus support--AMD's B550 "value" motherboards! And some people think Intel is preparing to release PCIe5 chipsets and CPUs? Not a chance, I'd say. I know AMD is in no hurry for PCIe5, either.
  • eek2121 - Wednesday, February 3, 2021 - link

    By "long time before" you actually mean "months".
  • DanNeely - Wednesday, February 3, 2021 - link

    I'll believe it when I see it. About a year or two before the first AMD PCIe4 boards came out, I read an article on an engineering site (IEEE?) that predicted that the PCB tolerances and/or redrivers needed to support PCIe4 would add about $100 to a full-size mobo, and that PCIe5, with a maximum PCB path length of only ~2" (vs 4" for 4.0 and 8" for 3.0) requiring a ton of redrivers on top of even more stringent PCB manufacturing, would add about $400 to the board.

    The major price increases we've seen, first with AMD's and now Intel's 4.0-capable boards, make it clear the first part of the article's predictions was correct. That leaves me skeptical that we'll see 5.0 show up below the Threadripper/HEDT pseudo-workstation tier of boards anytime soon. (Maybe the top slot and an M.2 riser card next to the DIMMs; but unless they can figure out how to reduce PCB costs and make dirt-cheap redrivers, I'm not expecting 5.0 to be widespread anytime soon.)
  • back2future - Wednesday, February 3, 2021 - link

    The need for PCIe5 instead of PCIe4 should consider how long the system and peripherals sit idle between bursts of that massively increased bandwidth to/from peripherals (GPU, storage, accelerator hardware, maybe networking >400Gb-1Tb), and what the power/temperature increase is for the chipset and the specialized circuits needed to keep the accustomed mainboard layout (maybe even with compatibility for PCIe6 in mind, if that's within a reasonable time scale in faster-moving markets?)
  • Deicidium369 - Wednesday, February 3, 2021 - link

    PCIe5 on servers makes perfect sense, but on workstation class (we use Xeon Scalable for our engineering CAD/CAE workstations) there is no advantage for PCIe5 over PCIe4. I can get NICs at 200GbE/IB EDR that are PCIe4 x16... so until 800GbE or faster drops (my Cisco 93600 switches support 400Gb/s on some ports, yet you can't even find a 400GbE NIC), PCIe4 will suffice (rough numbers at the end of this comment).

    I cannot see the use case for PCIe5 slots on a desktop PC - even PCIe4 is overkill in most cases.

    Sounds like you have been in the industry for a while - remember the near MIL-Spec motherboards that Compaq used to make? PCIe5 on the desktop (presenting as PCIe5 slots) would need to be that level or better.
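    Rough numbers behind the NIC point above (128b/130b encoding only, no protocol overhead):

```python
# Compare x16 slot bandwidth against Ethernet line rates.
def x16_gbps(gt_per_lane, encoding=128 / 130):
    """Approximate usable x16 bandwidth in Gb/s."""
    return gt_per_lane * 16 * encoding

print(f"PCIe 4.0 x16: ~{x16_gbps(16):.0f} Gb/s")  # ~252 Gb/s, enough for a 200G NIC
print(f"PCIe 5.0 x16: ~{x16_gbps(32):.0f} Gb/s")  # ~504 Gb/s, what a 400G NIC would want
```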
  • back2future - Thursday, February 4, 2021 - link

    Saturating a trans-ocean fibre bundle's networking capacity will (then) need a single 16-lane PCIe6 link; 4320p (120Hz) can be provided on 2 lanes of PCIe4 or 1 lane of PCIe5 or later, and Thunderbolt (40Gb/s) on 4 lanes of PCIe4 or 2 lanes of PCIe5 or later. With the arriving technological changes towards PCIe6 (>=2021), that's probably one of the most interesting parts of CPU & PCIe carrier boards these days (considering laser/fibre transmission limits <>40Tb/s and <1Tb/s, combined memory & storage development > 1 lane PCIe5, which you mentioned in a comment before, or maybe also power supply in 24/7 devices and temperature management). These parts spreading into the consumer level provide chances for new development concepts (including, on the user side, interest-related data filtering supported through AI)?
    What's beyond light for data transmission ...
  • Tomatotech - Thursday, February 4, 2021 - link

    "I cannot see the use case for PCIe5 slots on a desktop PC - even PCIe4 is overkill in most cases."

    NVMe drives are already bumping near the top of PCIe4 right now (rough math below). Moving forward will require PCIe5. At least for the marketing numbers - I'm aware there are many other parts of NVMe performance that could be improved.
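    Rough ceiling math for that claim (taking ~7 GB/s as an illustrative top-end Gen4 spec-sheet figure):

```python
# PCIe 4.0 x4 tops out just under 8 GB/s before protocol overhead.
link_ceiling = 16 * 4 * (128 / 130) / 8   # GT/s * lanes * encoding / bits-per-byte ~= 7.88 GB/s
drive_read = 7.0                          # GB/s, roughly what the fastest Gen4 drives advertise
print(f"Headroom left on Gen4 x4: ~{link_ceiling - drive_read:.1f} GB/s")
```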
  • mode_13h - Friday, February 5, 2021 - link

    Not "bumping", as that would imply maxing PCIe 4.0, which they cannot do.

    NAND is about getting cheaper and denser, rather than faster. Higher speeds come either at the expense of density or by adding channels, each of which is expensive. Even the mighty Samsung cannot resist the urge to go denser, as we see in the case of their slightly disappointing 980 Pro.

    The case that SSDs would make for PCIe 5.0 would be to drop down to x2 lanes without sacrificing performance (quick comparison below). But it sounds like it's not going to be a net savings either for them or for the motherboard, so they probably won't.
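    A quick comparison of why the x2 idea would be bandwidth-neutral (same assumptions as above, 128b/130b only):

```python
def link_gb_per_s(gt, lanes):
    """Rough usable bandwidth in GB/s, ignoring protocol overhead."""
    return gt * lanes * (128 / 130) / 8

print(f"PCIe 4.0 x4: ~{link_gb_per_s(16, 4):.2f} GB/s")
print(f"PCIe 5.0 x2: ~{link_gb_per_s(32, 2):.2f} GB/s")  # same throughput with half the lanes
```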
  • calc76 - Wednesday, February 3, 2021 - link

    There are $80 AMD B550 and $130 AMD X570 boards with PCIe 4.0 slots.

    The reason prices are so high right now is primarily due to very high demand combined with supply chain issues.
  • Alexvrb - Sunday, February 7, 2021 - link

    Dan, the price increases aren't that high because of PCIe 4.0. Look at B550 board launch prices. They weren't THAT much more comparing apples to apples to same-class B450 boards, even without considering the rising component costs. I paid $150 for mine, and there were cheaper options at the time. An equivalent full ATX B450 with a similar VRM config and featureset (minus 4.0) wasn't dramatically less.

    The SUBSEQUENT price increases aren't due to PCIe 4.0, but rather market conditions (look at CPU and GPU prices). Demand is stupid high among both gamers and miners (ughh). I got a Ryzen 3600 for $170, now it's hovering around $200 and that's one of the chips that hasn't risen dramatically - I can't even look at current-gen chip prices.
  • Deicidium369 - Wednesday, February 3, 2021 - link

    Well Alder Lake - desktop - is likely to be PCIe5 internally (say x16) that presents as 32 PCIe4 lanes - I seriously doubt that there will be a PCIe5 slot on consumer motherboards.
  • JayNor - Wednesday, February 10, 2021 - link

    what about the Alder Lake PCIE5 leaks?
  • Duncan Macdonald - Wednesday, February 3, 2021 - link

    Given the delays on almost all of Intel's plans - will Intel have PCIe 5.0 CPUs in significant numbers before AMD does? (How many years late are Intel's 7nm offerings?)
  • shabby - Wednesday, February 3, 2021 - link

    Sure... when pigs fly.
  • onewingedangel - Wednesday, February 3, 2021 - link

    Both Sapphire Rapids and Alder Lake are 10nm, and should both release this year, but rate of ramp up is another question.
  • Deicidium369 - Wednesday, February 3, 2021 - link

    "vanilla" 10nm is Ice Lake and Ice Lake SP - ultra high volume
    10nm SF is Tiger Lake
    10nm ESF is the Golden Cove Trio and Xe HP and Xe HPC..

    So 3 different lines, Ice Lake SP is already in HVM and being shipped to OEMs - doesn't get much higher volume than a 2S Xeon.

    10nm ESF is later this year - and will be available in mass quantities - as it is a major long-term platform (Golden Cove brings laptop, desktop and server under the same arch, and brings 1-8S servers under the same arch - unlike Cooper Lake 14nm for 4-8S and Ice Lake SP 10nm for 1-2S)
    So volume will be a given - the AMD fanboy memes of Intel having terrible yields and performance are stale.
  • mode_13h - Friday, February 5, 2021 - link

    So much complaining about "AMD fanboys" and yet you always come to Intel's defense. Never a harsh word for them, as if they can do no wrong.

    I don't care why you do it, your actions speak for themselves. You're an Intel cheerleader, if not an outright superfan.
  • Qasar - Friday, February 5, 2021 - link

    mode_13h, that's what he does best. Look at all the Intel shilling he did last year. He claims to also buy AMD, but considering how hard he shills for Intel, I doubt that.
  • mode_13h - Friday, February 5, 2021 - link

    We should start referring to him as our resident Intel PR rep. Just to alert anyone who doesn't already know about his bias.
  • Deicidium369 - Wednesday, February 3, 2021 - link

    Yes - and Intel only ships in significant numbers - and not by the dozens like AMD....

    7nm was never going to be before 2023. Intel's 7nm is present in Xe HPC, which was its launch product anyway - that was delayed - but the desktop (Meteor Lake) and server (Granite Rapids) parts were never going to be much before 2023.

    Sapphire Rapids - late 2021 - and in late 2022 the 7nm follow up will be available...
  • RU482 - Wednesday, February 3, 2021 - link

    If Sapphire Rapids ships in 2021 I'll buy you lunch.
    /sceptical
  • 29a - Wednesday, February 3, 2021 - link

    Do you really think you could stand being in the same room with him long enough to eat lunch? I couldn't.
  • calc76 - Wednesday, February 3, 2021 - link

    "7nm was never going to be before 2023."

    That's an interesting definition of 'never' as Intel publicly stated 10nm was expected by 2015 and 7nm by 2017.

    Intel should be at 3nm by now based on their own roadmaps from ~ 2013.

    Their 7-year fail is how we ended up with 14nm+++++++ Rocket Lake CPUs that run at 300W+.
  • JayNor - Wednesday, February 10, 2021 - link

    xe-hpc is in the lab already. Some labeled die photos show Intel 7nm compute chiplets. Are these confirmed?
  • Arsenica - Wednesday, February 3, 2021 - link

    Oh, PCIe 5.0, yet another of the technologies that Intel foolishly pursued for the Exascale Aurora supercomputer, just to have AMD deliver a 50% faster supercomputer at least 6 months before Intel even starts assembling it.
  • RogerAndOut - Thursday, February 4, 2021 - link

    AMD has good reason to consider PCIe 5.0 (and 6.0). Infinity Fabric / Infinity Architecture communication takes place over PCIe lanes, so desktop CPUs and GPUs are likely to gain a feature that isn't currently needed there simply because support is added for data center EPYC and GPU-based systems.

    This was first seen in the move from PCIe 3 to PCIe 4 when the EPYC 7002 series shipped. The doubling of the speed of the Infinity Fabric link between the two processors was enough that Dell has a system where only 48 (rather than 64) PCIe lanes from each CPU are allocated to the link, and the remaining 32 (16 from each CPU) are used for other tasks, resulting in a system with 160 lanes for general I/O (lane math below). Why so many? Well, it supports 24x NVMe SSDs, so that's 96 lanes allocated to start.

    The other advantage for AMD is that their off-chip PCIe implementation is a function of the I/O die and not the CPU core chiplets, so if they have a reason to, they can mix and match features depending on what they wish to release.
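    A little lane accounting to make the Dell example concrete (a sketch based on the figures in this comment; the 48-lane link allocation is the board vendor's choice):

```python
# EPYC 7002 exposes 128 PCIe 4.0 lanes per socket; in a 2S system some of
# them are repurposed as the socket-to-socket Infinity Fabric (xGMI) link.
lanes_per_cpu = 128
link_lanes_per_cpu = 48          # instead of the usual 64, per the comment
io_lanes_per_cpu = lanes_per_cpu - link_lanes_per_cpu   # 80
total_io_lanes = 2 * io_lanes_per_cpu                   # 160 lanes for general I/O
nvme_lanes = 24 * 4                                     # 24 NVMe SSDs at x4 each
print(total_io_lanes, nvme_lanes, total_io_lanes - nvme_lanes)  # 160 96 64 left over
```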
  • COtech - Thursday, February 4, 2021 - link

    With distance on the motherboard becoming increasingly important, would it make sense to use both sides of the motherboard to place some resources closer to the CPU?
  • Tomatotech - Thursday, February 4, 2021 - link

    Yes. Many motherboards already have an NVMe slot on the underside. It's not a perfect location, but it makes sense going forward, especially if manufacturers want two PCIe 5 NVMe drive ports but only have 1" of distance to locate them.
  • back2future - Thursday, February 4, 2021 - link

    Evaluating JEDEC and Micron specifications and memory emulation at 3.6Gb/s (1.8GHz clock rate), it was explained that for in-spec functionality, with the high/low signal voltage difference within 700mV, the setup and hold timing for that bit has to fall within a 62+87 ps window from the clock inversion. That corresponds to a distance of around 1.75" at the speed of light (sketch below).
    Insertion losses when the signal changes PCB layers can vary from 65 down to 5dB going from 120-160mil to below 60mil. That's for the DDR4-3600 3.6GT/s 25.6Gb/s standard. DDR5-8400: 525MHz*64bit/8*16 = 2× 33.6 GB/s at a 4.2GHz clock rate. PCIe4 gets 16GT/s, PCIe5 32GT/s, and data signal jitter termination for DDR4 is required to cut jitter to about 1/5 of what it would be without it ... in real numbers, from around 130ps to 35ps at 3.6GT/s.
    There's some kind of art in this business, and a lot of knowledge of materials.
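    A tiny sketch reproducing that 1.75" figure (free-space propagation; on actual FR-4 traces the signal travels at roughly half the speed of light, so the real budget is tighter):

```python
# Convert the 62 + 87 ps setup/hold window into a propagation distance.
C = 299_792_458                      # m/s, speed of light in vacuum
window = (62 + 87) * 1e-12           # s
distance_m = C * window
print(f'{distance_m * 1000:.1f} mm (~{distance_m / 0.0254:.2f} in)')   # ~44.7 mm, ~1.76 in
```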
  • back2future - Thursday, February 4, 2021 - link

    Sorry, meant: DDR4-3600 3.6GT/s 25.6GB/s standards
  • back2future - Thursday, February 4, 2021 - link

    ... considering signaling termination power, have a look at current PCIe4 NVMe heat sinks, almost all of which appear to be over 1/2" in height, https://images.anandtech.com/doci/16458/Q140_678x4... close to the M.2 connector, rev 1.1 (or already up to the theoretical rev 4.0, 12/2020)?
  • Wizard2021 - Wednesday, February 10, 2021 - link

    20 years is a long time to get PCIe 5.0 out.
    20 years of enough SLEEP for you??
    Well, at least you got it out!
    It's about time
    But way too late!
    Good job anyway
    What's your next job, PCIe 6.0??
    Need extra sleep for another 20 years???
