> As always, PCIe 6.0 is backwards compatible with earlier specifications; so older devices will work in newer hosts, and newer devices will work in older hosts.
Correction: newer devices might work in older hosts, depending on the features used in the device and whether the vendor chooses to implement backwards compatibility. Let's not forget that PCIe 2.2+ allows greater power draw than 2.0, and a new packet format, which results in newer devices not always working in older hosts, even if they can work with newer hosts running in lower PCIe version modes.
Like so many have said before, we are hitting a real limit to what can be accomplished with copper interconnects. A look at modern networking is a good demonstration of this fact. Ethernet connections at 25G or higher require optical transmission for any meaningful length beyond a single rack. While DAC (direct-attach copper) cables, using coax or twinax cabling, are possible, they are much more expensive, less modular, and come with severe length restrictions, limiting their use to intra-rack components. When the 10GBASE-T standard (10-gigabit Ethernet over twisted pair) was released, I remember reading predictions that it would be the last stop for BASE-T; although 25GBASE-T does exist, I have never seen actual hardware.
The integration of optical interconnects into PC hardware (on PCBs and motherboards, between CPUs, and even from internal to external links) is a major focus of current research, and many prototypes have been demonstrated; I predict commercialization within 5 years, but likely closer to 3, especially in the data center space.
I think the major push to get to PCIe 6 was to enable cache-coherent technology (CXL, CCIX, etc.), which will completely revolutionize the cloud, data center, and HPC computing landscape. Take CXL as an example: PCIe 5 enables the first version, 1.1, but the major uplift will come with version 2.0. CXL 2.0 allows switching (connecting multiple devices to one host) and pooling (one or multiple devices shared by multiple hosts), in addition to encryption and various integrity/root-of-trust methods. This will allow the major change in computer architecture I mentioned. Memory pooling, both conventional and PCIe-card based (think of it like a JBOD, but for memory instead of storage), will be a major use case.
TL;DR: CXL 2.0 will make all aspects of servers disaggregated. Does your organization need to expand memory? Get one of the devices I describe above. Good on memory, but lacking in compute? Get devices that do only that. Storage? Same. Get the idea? With this disaggregation, manufacturers will be able to specialize in single-task products, bringing greatly improved performance and efficiency from the ability to optimize those products for their sole function instead of delivering moderate performance across multiple functions. One can only imagine the possible ways to take advantage of this technology. Now that we have a roadmap, through PCIe 6, that enables full implementation of CXL, the timeline to the next standard can afford to wait a little longer for optical interconnects to come to market and mature.
BTW, to anyone who thinks it is a conspiracy to make more money: PCI-SIG is a nonprofit, and not a manufacturer or producer of anything aside from the standards themselves. The individuals who design these standards are experts in their particular fields, some at the very top, and you would be hard-pressed to find a tech expert who would intentionally advocate halting progress. In addition to the highly respected individuals who do the day-to-day work, many hundreds of companies (I think it is getting close to 1,000) are members of PCI-SIG. It is simply not possible to get that many diverse organizations cooperating at a level high enough for a plot of this magnitude to take place.
77 Comments
Kamen Rider Blade - Tuesday, January 11, 2022 - link
So, should we expect a longer time gap between PCIe 6.0 & 7.0 like we had between 3.0 & 4.0?
Yojimbo - Tuesday, January 11, 2022 - link
I haven't heard anything about 7.0. I don't know if they've finalized a path to it yet. From what I understand the industry is looking at trying to integrate optical technologies because it's running up against some physical constraints with its current way of doing things. If that happens, I wonder whether it will be through PCIe or not.
Ethos Evoss - Thursday, January 13, 2022 - link
Of course they had it prepared a long time ago, just not for us ordinary end users. They have time, and they don't want to skip speeds straight to the fastest.
Otherwise they wouldn't have anything to offer. They always have to work a few years ahead, to keep the business going.
StevoLincolnite - Thursday, January 13, 2022 - link
It takes a lot of R&D and testing to ratify these standards. They don't have them "sitting in a vault" somewhere so they can profit off the current stuff.
Even the controllers need to be cost-effective, which means waiting for smaller fabrication processes to become viable.
Matthias B V - Tuesday, January 11, 2022 - link
I thought the same and would expect so. By moving to PAM4 they already pulled one of the bigger tricks out of their sleeve. I'm not sure they can significantly increase bandwidth again without an even more significant change in technology, or without at some point even killing backwards compatibility and moving to optics.
They were quite slow going from PCIe 3.0 to PCIe 4.0, but then pushed hard on PCIe 5.0 and PCIe 6.0, even announcing them early. I doubt they can easily double speeds again for PCIe 7.0 within the next 2-3 years. Not without significant changes, in my opinion. I would expect PCIe 5.0 and PCIe 6.0 to have a longer lifecycle again, and we maybe won't see a successor standard finalized until 2026/27.
Kamen Rider Blade - Tuesday, January 11, 2022 - link
So I can expect a 7 year gap between 6.0 & 7.0?
Eliadbu - Wednesday, January 12, 2022 - link
No one here can tell the future, mate; all we can do is speculate. I think that in the transition between 3 and 4 they had time to build up a strategy for future revisions, which proved itself in how quickly they brought the later ones out. The people who might know are the people who work on it for PCI-SIG.
willis936 - Tuesday, January 11, 2022 - link
As a PHY guy: this is so sick. Consumer PAM4 32 GBaud? I hope chipsets and motherboards stay priced for mortals.
Kamen Rider Blade - Tuesday, January 11, 2022 - link
I can see PCIe 6.0 as the backbone connection between future-generation Ryzen CPUs and the chipset they connect to.
Everything attached to the chipset would then be one or two generations behind, at PCIe 5.0 or 4.0.
lightningz71 - Tuesday, January 11, 2022 - link
It'll probably be easier and cheaper to just double the link width than to increase the link level to PCIe 6.0. It'll be a while before it's needed, though. We're not really saturating a 4.0 x8 link to the chipset... yet.
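For scale, a rough sketch of the width-versus-generation math (an illustration added here, not from the original comment; raw per-direction rates, ignoring encoding and protocol overhead):

    # Approximate per-direction link capacity: GT/s per lane x lanes / 8 bits
    def link_gb_per_s(gt_per_lane: int, lanes: int) -> float:
        return gt_per_lane * lanes / 8

    print(link_gb_per_s(16, 8))    # PCIe 4.0 x8 downlink: ~16 GB/s
    print(link_gb_per_s(16, 16))   # doubling the width:   ~32 GB/s
    print(link_gb_per_s(64, 8))    # moving to 6.0 x8:     ~64 GB/s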
ReviewerOfThings - Tuesday, January 11, 2022 - link
Thanks for the article :) The chart at the beginning (from PCI-SIG) seems to use confusing terminology between "actual" and "I/O" bandwidth.
Yojimbo - Tuesday, January 11, 2022 - link
So 19 months between 4 and 5 and 31 months between 5 and 6.
DougMcC - Tuesday, January 11, 2022 - link
Probably unsurprising given the bigger changes in 6, and the whole, you know, global pandemic thing throwing processes into disarray.
Targon - Tuesday, January 11, 2022 - link
From a practical standpoint, video cards are currently BARELY beyond the limits of PCIe 3.0, and nowhere near needing PCIe 5.0 speeds. There are some things that could make use of it, primarily NVMe SSDs, but that's pretty much it. Multi-GPU or linking multiple PCIe devices to each other via the PCIe bus rather than using a bridge connector would be one of the few obvious uses.
Servers are the place where there will be a big benefit.
FreckledTrout - Tuesday, January 11, 2022 - link
In the consumer space. In the data center this will be huge.
willis936 - Tuesday, January 11, 2022 - link
This isn't about frames per second, it's about getting data in and out of CPUs. 8x PCIe 6.0 lanes can be broken out to a lightning-fast NVMe array, two GPUs, and high-speed networking. No need to cut down on bandwidth or lanes for each individual device, because you can turn those 8 CPU lanes into 32 gen 4 lanes in the chipset.
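To sanity-check the lane arithmetic (a sketch added for illustration, assuming only that the per-lane raw rate doubles each generation, per the published specs):

    # Per-lane raw rates, in GT/s
    GT_PER_LANE = {4: 16, 5: 32, 6: 64}

    total = 8 * GT_PER_LANE[6]      # 512 GT/s from 8 gen 6 lanes
    print(total // GT_PER_LANE[4])  # -> 32 equivalent gen 4 lanes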
minde - Tuesday, January 11, 2022 - link
SSDs' real-world speed is only ~2 GB/s, with PCIe 3.0 x4 or 4.0 x4.
Qasar - Tuesday, January 11, 2022 - link
I just would like to see more lanes on the desktop, like X99 had.
TheinsanegamerN - Thursday, January 13, 2022 - link
Or AMD's own 990FX chipset, with 40 PCIe lanes...
Qasar - Thursday, January 13, 2022 - link
Heh, forgot about that one. But I'm looking for something much newer, maybe AM5? I've looked into TR, but even an inexpensive board and a 16-core is easily $2k... not in the cards right now.
mode_13h - Thursday, January 13, 2022 - link
I had an 890FX, I think. Yeah, I loved having so many lanes, but sadly they're only PCIe 2.0.
I used that board for a fileserver. It had 6x SATA 3 and supported ECC RAM. Only cost something like $160, which was about what I paid for the Phenom II I put in it.
ray2ksix - Tuesday, January 11, 2022 - link
I'm still at PCIe 3.0 on my Z390 platform, and we don't even have a PCIe 5.0 compatible consumer SSD on the market. These standards are meaningless to the regular Joe unless there's revolutionary hardware to use them. They're most likely meant for massive data centers and AI research. Maybe for self-driving cars.
minde - Tuesday, January 11, 2022 - link
We need new memory tech to increase speeds beyond 2 GB/s. With PCIe 5.0 x4, SSDs will have similar real-world speeds.
name99 - Wednesday, January 12, 2022 - link
The M1 Pro/Max MacBook Pro has SSDs that clock 7.4 GB/s.
Now we can put all sorts of caveats next to the conditions under which you will actually see that level of performance, but it gives an indication of what *is* possible at the consumer level, giving one possible target.
A second possible target for this sort of performance is 10G Ethernet which, yes, has definitely taken its sweet time getting into the consumer market, but the various pieces required are now available in ways that weren't the case two years ago.
A third use case that remains unclear (but IMHO is becoming more real every day) is CXL, especially as a memory extender. If we can access "fast enough" second-tier DRAM via PCIe, then being able to expand DRAM via slots becomes less of a concern, and we can see more DRAM move onto the package, with consequent power improvements.
Bigos - Tuesday, January 11, 2022 - link
Doesn't FEC imply some amount of overhead, as redundant data is being sent to fill in the blanks of the actual data sent beforehand? In that case, is this overhead estimated? I wonder how much effective bandwidth will be lost; maybe the mechanism is adaptive, based on the history of CRC errors in prior packets?
willis936 - Wednesday, January 12, 2022 - link
PCI SIG has been following 802.3's path. I expect RS coding with 6% overhead. Fixed BER gain. Adaptive FEC is technically possible, but I haven't seen a technology develop it. It hurts very little to make conservative FEC choices so people usually just do that.
mode_13h - Wednesday, January 12, 2022 - link
> Doesn't FEC imply some amount of overhead
Yes, but it sounds like that's more than offset by switching the bit encoding from 128b/130b to 1b/1b. Specifically, slide 4 of the presentation says "Flit ... enables more than double the bandwidth gain".
> maybe the mechanism is adaptive based on the history of crc errors in prior packets?
From the article, it doesn't sound like it. The fixed-size FLIT packets sound like just that. And I presume that size is baked right into the spec.
> maybe the mechanism is adaptive based on the history of crc errors in prior packets?
What happens if you get CRC errors is a retransmit. If you get too many retransmits, then I think either your hardware is faulty or wasn't designed as per the spec. If your signal path meets SNR guidelines, then the rate of CRC errors should be very low. If that ceases to be the case, then I think the expectation is that the customer will replace their hardware.
lightningz71 - Wednesday, January 12, 2022 - link
Existing PCIe uses 128b/130b NRZ encoding, which introduces its own 1-2% overhead at the physical layer. Switching to PAM4 removes that level of overhead and replaces it with CRC and other error checking. The trade-off is a relative wash.
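A rough way to see why it comes out ahead despite the extra checking (a sketch for illustration; the 236-byte payload figure is an assumed flit layout, not something stated in the article):

    nrz_efficiency = 128 / 130        # 128b/130b: ~1.5% line-code overhead

    # PCIe 6.0 flit (assumed): 256-byte flit carrying ~236 bytes of payload,
    # with the remainder assumed spent on CRC, FEC and link-layer fields.
    flit_bytes, assumed_payload = 256, 236
    flit_efficiency = assumed_payload / flit_bytes   # ~0.92

    # PAM4 doubles bits per symbol at the same symbol rate, so throughput
    # relative to a same-clock NRZ link stays close to 2x:
    print(round(2 * flit_efficiency / nrz_efficiency, 2))   # ~1.87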
back2future - Tuesday, January 11, 2022 - link
It seems PCIe 7.0 is understandable as doubling bandwidth by going PAM-16; PCIe 8.0 will get the harder part, if lane length should stay suitable without additional retiming (OTOH, additional retiming for topping ~1 TB/s bandwidth (~256 GT/s data rate) per pin would be a low price for doubling bandwidth?).
Interesting to look at:
https://3s81si1s5ygj3mzby34dq6qf-wpengine.netdna-s... (nextplatform)
back2future - Tuesday, January 11, 2022 - link
Correction: (OTOH, additional retiming for topping ~1 TB/s bandwidth (~256 Gb/s data rate), 2x32 GB/s per pin (nowadays roughly the bandwidth of DDR4 DRAM memory), would be a low price for doubling bandwidth?)
back2future - Tuesday, January 11, 2022 - link
Further: ~1 TB/s per pin would be PCIe 12 (~2040, on a 3-year cadence): one pin on Peripheral Component Interconnect Express with the same bandwidth as today's registers/top-level (L1) cache?
Kamen Rider Blade - Tuesday, January 11, 2022 - link
What if they swapped to a 4-7 year cadence?
PCIe 3.x -> 4.0 was a 7-year gap.
Asking them to innovate every 3 years is pushing it.
I think every 4 years, like the Olympics, is a more reasonable ask, with a 7-year gap every now and then.
mode_13h - Wednesday, January 12, 2022 - link
It seems to me there are two factors driving the cadence. First, there needs to be a demand for more bandwidth, which seems (recently) to have been supplied by ever-faster network links, SSDs, and deep learning. I doubt those will slow much, anytime soon.
However, then you have the technology development that's needed to deliver new speeds in a cost-effective way. And that seems harder to predict. The PCI SIG can only move forward with a new spec, once the enabling technologies are proven.
back2future - Wednesday, January 12, 2022 - link
Guess: the 2020 combined bandwidth of global internet infrastructure capacity was ~75 TB/s; what is that, ~10 PCIe 12 x16 network cards in half-duplex mode (or 1280 PCIe 5 x16)?
(PAM-16 was taken from the link above: each position a 2/4/8/16-level modulated pulse; each step up in modulation seems to halve the clock rate (count), with 16-bit words?)
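Checking that arithmetic (a sketch for illustration; "PCIe 12" is the commenter's hypothetical, extrapolated here as seven more per-lane doublings beyond PCIe 5.0):

    pcie5_x16 = 32 * 16 / 8           # ~64 GB/s per direction
    print(round(75_000 / pcie5_x16))  # ~1172 cards, near the ~1280 quoted

    pcie12_x16 = pcie5_x16 * 2**7     # hypothetical: ~8192 GB/s per direction
    print(round(75_000 / pcie12_x16)) # ~9 cards, matching the "~10" guess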
back2future - Wednesday, January 12, 2022 - link
More or less PCI-SIG's roadmap toward what's possible in today's silicon and copper (within a CPU, at the nm scale), but doubling the bandwidth until then would be necessary for a peripherals interface. Whoo, let engineers surprise us. (L1 cache is today's highest-bandwidth bus, maybe 256 bits wide, enabling ~1+ TB/s bus speeds; 1 or 2 pins would then enable that for peripherals, the way one PCIe 6 pin replaces a 16x PCIe 2.x bus.)
Not saying it's not possible, but it maybe needs light, graphene, or quantum theory.
back2future - Wednesday, January 12, 2022 - link
correction: lane instead of pin (like one lane (x1) = 4 differential pairs for electrical connection)
back2future - Wednesday, January 12, 2022 - link
correction: lane instead of pin (like one lane (x1) = 2 differential pairs for electrical connection)
back2future - Wednesday, January 12, 2022 - link
What's the reason PCI-SIG doesn't show a renewed roadmap (as of 5/1/2022) including PCIe 7?
(Maybe a doubling-bandwidth claim on an average 3-year pace is only valid up to PCIe 6 within their planning, but that would not sound like PCI-SIG?)
back2future - Wednesday, January 12, 2022 - link
Statista.com:
Revenue of the computer hardware market worldwide 2012-2025, by segment
"In 2020, the global computer hardware market generated a total revenue of over 298 billion U.S. dollars. The two segments that generated the majority of the revenue were laptop sales, with around 123 billion U.S. dollars of revenue and tablets with around 57 billion U.S. dollars in revenue. The Statista Consumer Market Outlook estimates that a peak revenue in 2021 will be followed by a period of decline until revenue across all segments will again increase in 2025."
Various mobile equipment is probably not the market for PCIe 6 volume (still ~118bn USD left).
Seen numbers for the global mainboard market: ~13bn USD for 2021 (annual growth ~1.4%).
At the moment PCIe seems to be a ~20-25bn USD global market (expected to grow to ~48bn USD by ~2026, annual growth ~4-5%, with ~40% originating from North America; most from switches (~9bn USD share) and storage (~7bn USD in 2020)).
(2020: dominant PCIe 3 was ~8.6bn USD; dominant geographical region APAC, ~7.9bn USD, ahead of North America and Europe)
Statista.com:
Global IT device spending 2021: ~802bn USD (forecast 2022: ~821bn USD)
Global IT spending forecast 2012-2022, by segment
"The global information technology (IT) spending on devices, including PCs, tablets, mobile phones, printers, as well as data center systems, enterprise software, and communications services came to 3.87 trillion U.S. dollars in 2020 and is expected to increase by approximately 9.5 percent to around 4.24 trillion U.S. dollars in 2021. This is likely due to an increase in demand for technological devices for remotely working employees, which has surged since the outbreak of the COVID-19 pandemic."
datacenter systems ~200bnUSD
enterprise software ~680bnUSD
devices ~821bnUSD
IT services ~1294bnUSD
Communication services ~1482bnUSD
Maybe not (especially) for PCIe, but customers for consumer hardware are a reliable source of production volume for industrial revenue (maybe that's why PCI-SIG won't let the spread in standards and availability between datacenters and consumer desktop/workstation and mobile equipment grow too much).
The gap between PCIe 3 and PCIe 4: lower investment funding and technical delays from supporters?
Global IT spending hit a low around 2015-2016; the first PCIe 4 hardware announcements started in 2016 (PCIe 5 ~Q4/2019-Q2/2020, with controller and CPU integration), and the PCIe retimer market became visible ~2016.
mode_13h - Thursday, January 13, 2022 - link
> The Statista Consumer Market Outlook estimates that a peak revenue in 2021
> will be followed by a period of decline
I'm not sure about that. There's a lot of pent-up demand that's waiting for prices to come down & availability to increase. It's a bold move to be making predictions in these times, but I understand that's their whole business.
> maybe therefore PCI-SIG won't increase a spreading between standards and availability
> between datacenters and consumer desktop/workstation and mobile equipment too much
This makes no sense to me. They're going to try to keep pace with datacenter needs, or risk losing that market almost entirely. PCI SIG members who need faster interconnects for future datacenter products are going to try to push it into that role, and if they meet too much resistance, they'll go elsewhere.
back2future - Thursday, January 13, 2022 - link
Also, there are competing expenses: higher energy demand, the cost of food and living, some level of inflation, environmental-responsibility awareness (requiring attention, time and financial resources), or some kind of saturated infrastructure perfection. Maybe calling it a 'decline' is too much, but it might be true for a short period of stagnation (until there's free capacity for upgrading again, with all the connected rebound effects). Seen with memory prices too, adjusting between DDR3 and DDR4?
Maybe it's more about identifying as a customer who is informed about a company's best efforts at improving its production line, and having some kind of connection to that top line (for PCIe it's compatibility and trust in their announcements). With the highest-speed products, production volume for these parts will probably be lower than for today's top-speed devices, and networks won't double throughput every 3 years through all connected nodes; maybe not yet that much for the PCIe 6 standard, but 2-3 generations later, one should not have lost consumer-level production support for funding research and development of top-tier (?) elite devices.
It's PCI-SIG: if you can put your PCIe 12 device into a PCIe 1 slot (wouldn't that be a great slogan?) ;)
back2future - Tuesday, June 21, 2022 - link
Announcement for PCIe 7.0, staying on PAM-4, release ~2025:
https://www.businesswire.com/news/home/20220621005...
mode_13h - Wednesday, January 12, 2022 - link
> doubling bandwidth going PAM-16
Huh? Wouldn't you only need PAM-8 to double the bit-rate? But that would also mean implementations would need to double their SNR, which seems like it could get expensive.
I guess one thing that makes photonics attractive is the relative lack of interference?
timecop1818 - Wednesday, January 12, 2022 - link
PAM8 would be 3 bits per symbol, PAM16 would be 4. Since PAM4 is 2 bits, you need PAM16 to "double".
This is all theoretically speaking, of course.
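In code form (a trivial sketch of the symbol math, added for illustration):

    import math

    def bits_per_symbol(pam_levels: int) -> int:
        # PAM-N carries log2(N) bits per symbol: PAM4 -> 2, PAM8 -> 3, PAM16 -> 4
        return int(math.log2(pam_levels))

    for levels in (4, 8, 16):
        print(f"PAM{levels}: {bits_per_symbol(levels)} bits/symbol")
    # Doubling PAM4's 2 bits/symbol at the same baud rate therefore needs PAM16.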
mode_13h - Wednesday, January 12, 2022 - link
Right. I shouldn't post when I'm so tired.
Anyway, my main point was about signal-to-noise ratio, and that still stands. I don't know if we can assume the margins exist for a move to PAM-16.
Kamen Rider Blade - Wednesday, January 12, 2022 - link
What about doing signaling like what RAM & memory controllers do with DDR / QDR / ODR?
Is that viable?
mode_13h - Thursday, January 13, 2022 - link
> What about doing signaling like what RAM & Memory Controllers do with DDR / QDR / ODR?
DRAM uses a parallel datapath, whereas PCIe is serial. Serial interconnects have clocking as an intrinsic part of the signal. So, the concept of multiplying the data transmission intervals relative to the clock period doesn't apply.
Oxford Guy - Wednesday, January 12, 2022 - link
Meanwhile, GPUs with 64-bit buses and 4GB of VRAM are actually being written about in gaming contexts.
mode_13h - Wednesday, January 12, 2022 - link
Like all the RX 6000-series AMD GPUs, it uses Infinity Cache to compensate for the narrower link width. So, it's not as if the only thing that changed was the link width. In the end, all the card needs to do is deliver better value than other current offerings, at current market prices. So, we really just need to see how it benchmarks.
And regarding prices, it seems clear the 4 GB memory capacity was targeted at preventing Ethereum mining. It's not a great solution, but also not one that can be easily circumvented.
Oxford Guy - Thursday, January 13, 2022 - link
That’s nice.
Duncan Macdonald - Wednesday, January 12, 2022 - link
Consumer products are unlikely to see PCIe 6.0 for some time, due to the bandwidth limits of main memory. Even dual-DIMM DDR5 memory cannot provide enough bandwidth for a 128 GB/s PCIe link (Corsair DDR5-6400 can manage 51 GB/s per DIMM).
Providing the required bandwidth will require a workstation or server processor with more memory channels (Threadripper, EPYC or Xeon).
The additional costs of a 4-channel memory system mean that there will be little demand for a consumer CPU to have such a capability.
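The arithmetic behind that comparison (a sketch for illustration, treating a 64 GT/s PAM4 lane as carrying roughly 8 GB/s per direction and ignoring protocol overhead):

    pcie6_x16 = 64 * 16 / 8      # ~128 GB/s per direction
    ddr5_dual = 2 * 51           # ~102 GB/s for two DDR5-6400 DIMMs, as quoted

    print(pcie6_x16, ddr5_dual)  # one x16 link alone could outrun dual-channel DDR5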
Tomatotech - Wednesday, January 12, 2022 - link
As other posters have said, this is about:
(1) expanding the gateway between the CPU and everything else. A good set of PCIe 6.0 lanes will make getting data into and out of the CPU much easier;
(2) reducing hardware costs by reducing the number of lanes needed for a hardware item. E.g. an SSD that needed 4x PCIe 4.0 lanes will need only 2x PCIe 5.0 lanes, and only 1x PCIe 6.0 lane, with a reduction in costs at each step (see the sketch after this comment).
(3) enabling higher SSD speeds. PCIe 5.0 has only been out for a few months, and already some 5.0 SSDs are maxing it out. It's embarrassing for the PCI-SIG standards commission that a standard that has only been out a few months is already inadequate. These standards are intended to be sufficient for at least the next few years. Hence there is a bit of a rush to get 6.0 out relatively soon to enable the industry to move forward.
Yes, these high-speed SSDs are unaffordable to the typical person in the street, but there seems to be high demand from datacentres for large ultra-fast SSDs, and the PCI-SIG standards commission is in the unusual position of being the one holding up this work.
I have to say I'm very impressed at the success of flash-based SSDs. I remember when they first came out: they were slower than HDDs, expensive, and low capacity. Now the flash-based tech seems almost infinitely extensible to faster and faster speeds; it's almost like a doubling of speed every two years, combined with staggering capacities (for those with deep pockets).
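On point (2), the per-lane doubling that makes the halving work (a rough sketch added for illustration; per-direction figures, ignoring protocol overhead):

    # Approximate per-lane bandwidth by PCIe generation (GT/s -> GB/s)
    for gen, gt in {3: 8, 4: 16, 5: 32, 6: 64}.items():
        print(f"PCIe {gen}.0: ~{gt / 8:.0f} GB/s per lane")
    # A drive needing ~8 GB/s: x4 at gen 4, x2 at gen 5, x1 at gen 6.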
mode_13h - Wednesday, January 12, 2022 - link
> expanding the gateway between CPU and everything else.
Most communication the CPU does with devices is via memory. Doing otherwise would kill CPU performance, due to the latencies involved in PCIe transactions, and PCIe 6.0 won't fix that. CXL should help, but I'm not aware of a roadmap for bringing it to consumers.
> reducing hardware costs by reducing the number of lanes needed for a hardware item.
Could be, but it depends on what hardware costs you're talking about. If it's the motherboard, then what you're saying is that it can wire up fewer lanes in its various slots; however, that will provide terrible performance on older, wider peripherals. So, that seems unlikely.
The next issue is that you're presuming it's a net-savings for device makers to upgrade to the next PCIe version and halve the width, which won't always be true. I'm sure it's not currently true of PCIe 5.0, and won't be true of PCIe 6.0 for a while, if ever.
> PCIe 5.0 has only been out for a few months, and already some 5.0 SSDs are maxing it out.
> It's embarrassing for the PCI-SIG standards commission
You're talking about the Samsung PM1743? That's a U.2 drive, which means it's only x4. Are you aware there are x8 and I think even x16 PCIe SSDs? All you'd have to do is put one in a 2.5" U.2 form factor with an upgraded PCIe controller.
Also, the PCI-SIG is an industry consortium. It's primarily run through the action of member companies who are building products incorporating PCIe. It's not an independent organization that's like competing with industry in some sort of race. I think they know where industry is at and what's needed, which is why they pushed onward to finalize PCIe 6.0, so soon after 5.0.
Also, I wouldn't say flash is the main use case for PCIe 6.0. It's probably compute accelerators, networking, and CXL-connected DRAM (since CXL piggybacks on part of PCIe and is backed by many of the same companies) and maybe Optane.
Of course, none of those are consumer use cases. I think Intel was over-ambitious in even rolling out PCIe 5.0. Whether it was a smart move or not depends partly on how many issues customers have with PCIe 5.0 peripherals they actually try to use in these motherboards. The other concern is board cost, although supply-chain issues make it hard to know how much PCIe 5.0 is to blame for that mess.
IntelUser2000 - Wednesday, January 12, 2022 - link
Uhh, who cares about sequential bandwidth on an SSD at this point? I think PCIe 4.0 SSDs pretty much cover all the scenarios. Maybe even 3.0.
The sequential bandwidth is only achievable on select code under special circumstances and the real bandwidth falls far short of that. Far, far short.
mode_13h - Thursday, January 13, 2022 - link
I agree, in general. I happen to have a specific use case of editing large video files without transcoding, where saving the edited video clip actually runs at the sequential speed of my NVMe SSD. Granted, it's an older SSD, but it's something that takes long enough that I have to wait for it.
DougMcC - Thursday, January 13, 2022 - link
It seems inevitable to me that SSDs will move to an x16 link standard, it's just a matter of time. Either Intel or AMD will get motherboards with true dual x16 in the next generation, and someone will build an SSD that plugs into that and outclasses the competition in performance, and then all the other makers will push for standardization.
mode_13h - Thursday, January 13, 2022 - link
> It seems inevitable to me that SSDs will move to an x16 link standard
There have been datacenter x8 and x16 SSDs for a long time already.
> Either Intel or AMD will get motherboards with true dual x16 in the next generation
Okay, so I guess you mean for consumers. Well, PCIe SSD cards for consumers are nothing new, nor are carrier-cards for M.2 drives. It's a niche product, with most users probably plugging them into workstation or HEDT motherboards.
DougMcC - Thursday, January 13, 2022 - link
Yes for consumers. I think the tipping point into x16 being standard is close.
mode_13h - Thursday, January 13, 2022 - link
Anyone with a use case for so much I/O can *already* get a workstation or HEDT motherboard and something like this:
https://www.anandtech.com/show/16247/highpoint-upd...
The reason why PCIe x16 SSDs won't go mainstream is that the market for such insane amounts of I/O throughput is too small. It's a niche product and a very expensive one, at that.
Dolda2000 - Wednesday, January 12, 2022 - link
If PCIe 6.0 is 1b/1b, what does that mean for clock recovery? As far as I'm aware, the whole reason for the bit scrambling was to provide enough transitions to recover the clock reliably, so surely there has to be some replacement for it?mode_13h - Thursday, January 13, 2022 - link
> If PCIe 6.0 is 1b/1b, what does that mean for clock recovery?
FLITs surely have a frame structure the recipient can lock onto.
Dolda2000 - Saturday, January 15, 2022 - link
If that's the idea, then surely they must have a minimum guaranteed transition frequency, which makes the "1b/1b" claim a bit disingenuous.
According to the PCIe 6.0 FAQ, Flits are 256 bytes. If you want to talk about protocol overhead, that's something engineers would consider distinct from bit-encoding. The actual overhead added by Flits would depend on the size of the FEC and CRC fields, as well as other fixed protocol structures. Unfortunately, the FAQ doesn't go into that level of detail. However, for PAM4 + Flits to be a net win vs. 128b/130b, they must total < 32 bits.
Source: https://pcisig.com/faq?field_category_value%5B%5D=...
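The "< 32 bits" figure can be reproduced by matching 128b/130b's efficiency across one 256-byte flit (a sketch of the commenter's reasoning, not from the FAQ):

    flit_bits = 256 * 8                  # 2048 bits per flit
    nrz_overhead = 2 / 130               # 128b/130b loses ~1.54%

    print(round(flit_bits * nrz_overhead, 1))   # ~31.5 -> "< 32 bits" of budget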
name99 - Wednesday, January 12, 2022 - link
PCIe seems to perform two roles:
- as a common language for independently designed chip-to-chip communication (think e.g. Ethernet or an SSD that's directly on board, and sometimes on-package)
- as a total connection out to slots or to TB sockets
My question is whether using the same spec for both these roles remains optimal? How much of the extra complexity (and ultimately power expenditure) of PCIe6 is driven by the need to maintain length specs and slot/socket compatibility?
Or, to put it differently, is there scope in PCIe7 for a split between PCIe-short-length (which either simplifies the protocol or doubles the speed -- BUT only promises to work over short, high quality, PCB distances) and a PCIe-external -- which pays the costs (power, voltage, complexity...) of maintaining long distances and external sockets/slots?
The Von Matrices - Wednesday, January 12, 2022 - link
That's basically the way PCIe 4, 5, and 6 are already implemented. The trace lengths allowed are very short, and if you want to go longer you need retransmitters that add power and cost.
mode_13h - Thursday, January 13, 2022 - link
> The trace lengths allowed are very short, and if you want to go longer
I think it's the overhead of FLITs, PAM-4, FEC, etc. that he wants to avoid for in-package applications.
mode_13h - Thursday, January 13, 2022 - link
> PCIe seems to perform two roles:
> - as a common language for independently designed chip-to-chip communication
>
> ...
>
> My question is whether using the same spec for both these roles remains optimal?
Isn't it already superseded in this role? We have CXL, CCIX, Gen-Z (OMG, what a horrible name for an interconnect - just try searching for it!), and then ARM has its own CoreLink/CCI.
So, you no longer need to piggyback atop the full PCIe stack, as there are better-suited solutions (most of which do seem to borrow PCIe's PHY).
eastcoast_pete - Thursday, January 13, 2022 - link
Which of the current consumer CPUs and PCH chipsets could actually support a data rate anywhere close to this speed? I think even most CPU-to-RAM speeds are lower, unless we go to something like Apple's M1 Max SoC with its very wide LPDDR5, or some server CPUs. But, yes, faster PCIe is always a good idea, if it stays affordable. (Big if, I know)
mode_13h - Thursday, January 13, 2022 - link
Intel massively jumped the gun on rolling out PCIe 5.0 to consumers. I think consumers simply don't need that level of I/O bandwidth, unless we start seeing a major resurgence in multi-GPU rendering.
Before worrying about doubling desktop PC bandwidth *again*, let's see how quickly PCIe 5.0 is even adopted by peripheral makers. By the time that happens, the approach of having some in-package memory might've finally taken hold.
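For rough numbers on the CPU-to-RAM comparison (my own arithmetic, approximate, and ignoring protocol overhead above the line encoding), a quick Python sketch of per-direction PCIe link bandwidth against dual-channel DDR5:
```python
def pcie_gb_s(gt_per_s, lanes, encoding_eff):
    """Approximate per-direction bandwidth of a PCIe link, in GB/s."""
    return gt_per_s * lanes * encoding_eff / 8

print(f"PCIe 4.0 x16: {pcie_gb_s(16, 16, 128/130):.1f} GB/s")  # ~31.5
print(f"PCIe 5.0 x16: {pcie_gb_s(32, 16, 128/130):.1f} GB/s")  # ~63.0
print(f"PCIe 6.0 x16: {pcie_gb_s(64, 16, 1.0):.1f} GB/s")      # 128 raw, before Flit overhead

# Dual-channel DDR5-4800: 2 channels x 4800 MT/s x 8 bytes per transfer
print(f"DDR5-4800 x2: {2 * 4.8 * 8:.1f} GB/s")                 # 76.8
```
So a PCIe 6.0 x16 link alone would outrun a typical desktop's entire memory bandwidth, which underlines why it's aimed at servers first.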
mode_13h - Thursday, January 13, 2022 - link
> faster PCIe is always a good idea, if it stays affordable.
It's not just affordability. It's also a question of power & heat.
Ethos Evoss - Thursday, January 13, 2022 - link
Of course it is not new technology; they deliberately didn't implement it earlier, because they're making money and have no interest in letting ordinary customers skip straight to the top speed.
They've already done even PCIe 7 or PCIe 8, but they won't tell you; they're keeping it for later, just to keep business going.
mode_13h - Thursday, January 13, 2022 - link
Do you have any evidence to support those claims, or merely cynicism?
The PCI-SIG is not a secret society. They have many members and are quite public about their progress. Indeed, it's in their interest to do so.
https://pcisig.com/newsroom
Are you aware that technology moves incrementally for *reasons*? Faster interconnect standards depend on numerous technological advancements, most notably in semiconductor fabrication. That stuff takes time. And then, once the technological foundations are in place, engineers have to work out reliable and cost-effective ways to use them. After that, companies need to build and test IP, supporting chips (like retimers and switches), and even the test equipment itself. Finally, end products can come to market that embrace the new standard. All of that takes a lot of time, and you can't really speed it up any further without exponentially increasing development costs while probably only hastening the pace of advancement a little bit.
Finally, PCIe derives its benefit from its incredibly wide adoption. It's an example of the "network-effect" in action. This further complicates the standardization, implementation, and testing phases, but it's more than worth it, in order to have such incredibly wide compatibility across devices and generations. About the only thing I can think of that's in the same league as PCIe is Ethernet.
mode_13h - Thursday, January 13, 2022 - link
I thought of another point, which is that even *if* the technology existed to go straight from PCIe 3.0 to what we have in 5.0 in a cost-effective way, or to skip 5 and go to 6, or to condense the sequence of 4, 5, 6 into 2 iterations, there's a further benefit to moving in a deliberate and step-wise manner.
The rationale is that you can find many *new* products shipping with only PCIe 2.0 or 3.0, for instance. Particularly embedded CPUs and ASICs made on an older process node, or where energy efficiency is at a greater premium. It's therefore fundamentally worthwhile to have a ladder where the gaps between the rungs aren't too wide, so that each product can rather precisely dial in the spec that makes the most sense for it.
Also, a fairly even doubling in speeds makes life simpler when you're dealing with lane aggregation, splitting, step-downs, etc.
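To make the lane math concrete, a toy sketch (hypothetical helper, assuming the clean 2x-per-generation data rates that hold from 3.0 onward):
```python
def equivalent_lanes(lanes, from_gen, to_gen):
    """Lanes at to_gen matching the aggregate bandwidth of lanes at from_gen."""
    return lanes * 2 ** (from_gen - to_gen)

print(equivalent_lanes(8, from_gen=6, to_gen=4))  # 32: 8 gen-6 lanes fan out to 32 gen-4 lanes
print(equivalent_lanes(4, from_gen=5, to_gen=3))  # 16
```
With exact doubling, a switch or chipset can always trade one lane at generation N for exactly two at generation N-1, with no awkward remainders.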
GNUminex_l_cowsay - Monday, February 7, 2022 - link
> As always, PCIe 6.0 is backwards compatible with earlier specifications; so older devices will work in newer hosts, and newer devices will work in older hosts.
Correction: newer devices *might* work in older hosts, depending on the features used in the device and whether the vendor chooses to implement backwards compatibility. Let's not forget that PCIe 2.2+ allows greater power draw than 2.0, and a new packet format results in newer devices not always working in older hosts, even if they can work with newer hosts running in lower PCIe version modes.
pogsnet - Monday, February 7, 2022 - link
We haven't fully utilized PCIe 5.0, and now we have 6. That's too fast. Our hardware is already obsolete before we've even bought it :(
penguinslovebananas - Monday, April 18, 2022 - link
Like so many have said before, we are hitting a real limit to what can be accomplished with copper interconnects. A look at modern networking is a good demonstration of this fact. Ethernet connections at 25G or higher require optical transmission for any meaningful length beyond a single rack. While DAC (direct-attach copper) cables, using coax or twinaxial cabling, are possible, they are much more expensive, less modular, and come with severe length restrictions, limiting their use to intra-rack connections. When the 10GBASE-T (10 gig Ethernet over twisted pair) standard was released, I remember reading predictions that it would be the last stop for BASE-T; although 25GBASE-T does exist, I have never seen actual hardware. Optical interconnects in PC hardware, on PCBs, on motherboards, between CPUs, and even from internal to external, are a major focus of current research, and many prototypes have been demonstrated. I predict commercialization within 5 years, but likely closer to 3, especially in the data center space.
I think the major push to get to PCIe 6 was to enable cache-coherent technology (CXL, CCIX, etc.), which will completely revolutionize the cloud, data center, and HPC computing landscape. Take CXL as an example: PCIe 5 enables the first version, 1.1, but the major uplift will come with version 2.0. CXL 2.0 allows switching (connecting multiple devices to one host) and pooling (one or multiple devices to multiple hosts), in addition to encryption and various integrity/root-of-trust methods. This will allow the major change in computer architecture I mentioned. Memory pooling, both conventional and PCIe-card based (think of it like a JBOD, but for memory instead of storage), will be a major use case.
TLDR: CXL 2.0 will make all aspects of servers disaggregated. Does your organization need to expand memory? Get one of the devices I describe above. Good on memory but lacking in compute? Get devices that do only that. Storage? Same. You get the idea. With this disaggregation, manufacturers will be able to build specialized single-task products, bringing greatly improved performance and efficiency from the ability to optimize those products for their sole function instead of moderate performance across multiple functions. One can only imagine the possible ways to take advantage of this technology. Now that we have a roadmap, through PCIe 6, that enables full implementation of CXL, the timeline to the next standard can afford to wait a little longer for optical interconnects to come to market and mature.
penguinslovebananas - Monday, April 18, 2022 - link
BTW, to anyone who thinks it is a conspiracy to make more money: PCI-SIG is a nonprofit, not a manufacturer or producer of anything aside from the standards themselves. The individuals who design these standards are experts in their particular fields, some at the very top, and you would be hard-pressed to find a tech expert who would intentionally advocate for halting progress. In addition to the highly respected individuals who do the day-to-day work, many hundreds of companies, I think close to 1000, are members of PCI-SIG. It is simply not possible to get that many diverse organizations cooperating at a level high enough for a plot of this magnitude to take place.