PCIe 3.0 still isn't really a bottleneck, in PCs. I think PCIe 4.0 will be fine for quite a while. As I said above, I expect the main difference with PCIe 5.0 is that mainstream CPUs, chipsets, and peripherals will start cutting lanes. This will reduce costs and power consumption, without much impact on performance.
my nominal year 6/7 timeframe is because I want to replace my system on my own time not in a frantic rush because something failed leaving me down. That means I want to replace it before it gets old enough to start climbing the back side of the reliability bathtub curve.
Why do believe 6/7 years is the right number, if you want to avoid hardware failures? Except for some PSUs, nothing has a warranty that long.
If downtime is really an issue, then you should replace stuff before it goes out of warranty. Otherwise, just suck it up and replace stuff when it breaks.
BTW, speaking of power, I recommend using a high-quality PSU and UPS (i.e. with AVR). In my experience, you'll get more life out of your components, that way. Also, don't overclock, and either buy over-spec'd or ECC memory.
Experience with prior systems having elevated failure rates as they approached a decade in age, combined with the increasing difficulty of getting parts. The two-year window fits new product cycles where (pre-10nm faceplant) Intel was issuing significant upgrades every 2 years, a pattern that AMD appears to be copying with Zen. Demoted/retired before 8 years means I start looking around 6, with timing tied to major product refreshes along with time/budget constraints.
I'm one of those people who uses hardware as long as I can before upgrading. Once I start to notice slowdowns in my system I change out things like the hard drive and upgrade to something like an SSD. I always try to max out the memory; in my case 32GB was my board's max, so that is what it got. I overclock everything that can be overclocked, to the max. This has allowed me to keep using my system from 2012 up until today without any speed issues. The only reason I am going to upgrade my system this fall is the next-gen consoles and the market's push toward more cores. I feel that my 4-core/8-thread CPU will become a huge bottleneck once the next-gen consoles come out with their 8-core/16-thread CPUs and the game industry really starts to support CPUs with those core counts; my fake 8-thread CPU will just slowly chug along in games at that point. There is only one game right now that seems to make my CPU feel like it could use more cores, and it does not run at 100% all of the time. I've got a lot of time on this CPU, but it is time to say goodbye to it and demote it to some lesser tasks rather than keep it on my desktop as my gaming station. Now I just have to figure out which new CPU to get in the fall.
I wonder if this will make it possible to implement an inexpensive Optane page file that rivals DRAM without having to use Optane DIMMs? I've got a friend who occasionally needs about 512 GB of RAM, and that's expensive. But PCIe 6.0 x4 will have 32 GB/s, which is almost as much as dual channel DDR4. So I could imagine installing an Optane M.2 x4 SSD and designating it as the location for the page or swap file. This might be easier and cheaper than buying a server board that supports Optane DIMMs or hundreds of GB of RAM.
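A rough sketch of the bandwidth math behind that idea (all figures are approximate theoretical peaks, and the DDR4 speed is just an assumed example):

```python
# Back-of-the-envelope comparison for the "Optane SSD as swap" idea above.

def pcie_lane_gbs(gen):
    """Approximate per-lane, per-direction throughput in GB/s.
    Gen 3 is 8 GT/s with 128b/130b coding; each later generation doubles.
    (PCIe 6.0's FLIT/FEC overhead is ignored here.)"""
    return 8 / 8 * (128 / 130) * 2 ** (gen - 3)

pcie6_x4 = 4 * pcie_lane_gbs(6)            # ~31.5 GB/s
ddr4_3200_dual = 2 * 3200e6 * 8 / 1e9      # 2 channels x 3200 MT/s x 8 bytes ~ 51.2 GB/s

print(f"PCIe 6.0 x4 link      : ~{pcie6_x4:.1f} GB/s per direction")
print(f"DDR4-3200 dual channel: ~{ddr4_3200_dual:.1f} GB/s")
# Bandwidth is in the same ballpark, but latency isn't: DRAM is ~0.1 us per access
# while an Optane SSD is on the order of 10 us, so swap only feels DRAM-like for
# large, mostly-sequential working sets.
```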
I'm not sure Intel will keep making Optane SSDs. I think their long-term goal is to use Optane DIMMs as leverage to help sell Intel CPUs.
Perhaps Micron will continue offering 3D XPoint in SSDs, but probably only for the enterprise market. And those probably won't be much cheaper than RAM.
I've been to a few of the 802.3 meetings where PAM4's development has happened, and I've drunk with some of the people representing big players. I've also tested high-speed Ethernet and seen PCIe PHY testing. I will be shocked if there is PCIe 6.0 hardware ready in the next ten years. PAM4 is not a trivial problem. It isn't a "throw twice as many EQ taps at it, deal with half the SNR, and call it good" scenario. Fortunately IEEE work is all public, but they'll need to actually understand how these lofty models are made and why before they can hope to implement their own PAM4 over a long, nasty channel.
Drinking is the only way humans can get through a week of standards development. If you think it's easier than I'm making it sound, then I invite you to try to even get up to speed.
Are you considering the orders of magnitude shorter distances involved in PCIe than Ethernet over copper? PCIe can also be constrained to shielded PCB layers, whereas Ethernet over copper has to contend with more EMI.
I'd like to believe that the PCIe SIG wouldn't publish such plans without anyone ever conducting lab tests to validate their feasibility.
Calls of interest and feasibility studies are industry standard. Following ethernet demonstrates good feasibility on paper.
Ethernet is a lot more than twisted pair and RJ-45. SFP is good for up to 25 GBaud over a few meters. Going to 50 GBaud and still going a few meters on expensive channels is what the IEEE 802.3ck WG is currently trying to achieve. Most of where these very high baud rates see use is in optical, chip to chip, and chip to module applications. Those are all also covered by ethernet.
Higher order modulation comes with trade-offs, no doubt, which is why the industry has clung to NRZ for as long as possible wherever they can, but that hasn't stopped the 802.11 folks from commercializing 1024-QAM in consumer gear years before 802.11ax was even finalized.
Pretty much the entire industry seems to have stopped pushing NRZ at around 28-and-change Gbaud and switched to PAM4 for future generations. That's why I seriously wonder about PCIe 5.0 using 32 Gbaud NRZ. Have existing transceivers demonstrated enough headroom to do that? And is that really more cost-effective than switching to PAM4 with half the Nyquist rate? Obviously the answers must be "yes", but for some reason I just envisioned the order being reversed, i.e. PCIe Gen5 introducing PAM4 and Gen6 going to 32 Gbaud.
I think it comes down to cost. If your customers are willing to pay for large engineering costs, large power budgets, and lots of silicon, then yeah, it can be done. For electrical signaling you can always try to push the baud rate higher, but you start brushing up against serious engineering and cost challenges past 25 GBaud. It's more reasonable on shorter channels, like die packaging.
"This allows PAM4 to carry twice as much data as NRZ without having to double the transmission bandwidth, which for PCIe 6.0 would have resulted in a frequency around 30GHz(!)." Does this imply that PCIe 5.0 and 6.0 use a 15 GHz clock?
No, because there's nothing that currently implements PCIe 5.0. PCIe 4.0 peripherals are just ramping up.
Even if you had a CPU/mobo with PCIe 5, you've got to have peripherals that can take advantage of it. And if you look at the additional power requirements from adding PCIe 4.0 support, you can see that such features aren't exactly "free". In this light, we don't actually *want* the capability too far in advance of any way to utilize it.
Probably not, because they won't want to miss out on all of those sales of products such as new mainboards and all the other devices that will support PCIe 5.0. It just means your shiny new mainboard won't be at the forefront of technology for very long, just like the boards those of us investing in PCIe 4.0 tech are buying now.
Very unlikely. If anything, the move to PAM4 likely ensures a very long life span for PCIe 5.0 due to the increased costs of moving to PCIe 6.0. High-end stuff will quickly migrate to PCIe 6.0, as this makes 1 Tbit/s Ethernet feasible for host adapters. That spec is also currently going through development, so I would expect the first 1 Tbit Ethernet IO cards* to arrive shortly after PCIe 6.0 is finalized.
*I have a strong suspicion that the first 1 Tbit Ethernet products will directly be integrated into chip packaging. Very probable that silicon photonics will be involved.
I don’t see any compelling reasons to switch from 4.0 to 5.0 in the next 5 years. By the time those reasons appear, 6.0 will be available. So yeah, I think 5.0 is pointless.
PCIe 5.0 will have its spot in embedded and mobile, where the reduction in trace lengths is not as critical. The benefit here is that fewer lanes will be necessary to reach a given bandwidth, which leads to smaller chip packages and fewer traces on boards. The bonus is that PCIe 5.0 is the ceiling; mobile/embedded can throttle down the clocks to 4.0 or 3.0 speeds if/when the extra bandwidth is not necessary, to conserve power. It's more power efficient to run external buses at higher clocks than to go wider with more lanes.
A basic example would be M.2 SSDs of today that are generally 4x PCIe 3.0 which is the same bandwidth as 1x PCIe 5.0. Another example would be that a dual 10 Gbit NIC can run at full bandwidth with a single PCIe 5.0 lane to the host chip. 25GBaseT is also supposed to emerge in this time period which could also leverage narrow PCIe 5.0 support.
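To put rough numbers on those lane equivalences (overhead differences between generations ignored, so treat these as ballpark figures):

```python
# Ballpark per-direction link bandwidth by generation and width.
GT_PER_S = {3: 8, 4: 16, 5: 32, 6: 64}

def link_gbs(gen, lanes):
    return GT_PER_S[gen] / 8 * lanes  # ~1 byte per transfer per lane, coding overhead ignored

print(f"PCIe 3.0 x4: ~{link_gbs(3, 4):.0f} GB/s   (today's typical M.2 SSD link)")
print(f"PCIe 5.0 x1: ~{link_gbs(5, 1):.0f} GB/s   (same bandwidth from a single lane)")
print(f"PCIe 5.0 x1 vs dual 10GbE: {link_gbs(5, 1) * 8:.0f} Gb/s vs 20 Gb/s")
```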
Good point about single-lane use cases, however I'm not sure many manufacturers will want to change their designs so often (most are probably just now starting to think about adopting 4.0).
Perhaps a dumb question, but here goes nonetheless: could this be made somehow backwards-compatible to allow for >PCIe 3.0 speeds on PCIe 3.0-spec boards (with updated controllers, obviously)? Given how the increased signalling frequency seems to be the main driver for shortening trace lengths and requiring redrivers with 4.0 (and even further with 5.0 from what I understand), could PAM4 be implemented at a lower frequency for effectively doubled transfer rates compared to 3.0 without dramatically affecting trace complexity? I get that a PAM4 signal is inherently more vulnerable to interference, but as the article states, it also has FEC to combat that.
In other words, could PCIe 6.0 compatibility bring with it a "piggyback" double-speed protocol that can stack on top of previous standards? Given that any 6.0 host and device should be able to step down to 5.0 (which seems to be roughly "6.0 without PAM4"), 4.0, 3.0 and so on, could they then implement a "3.0+PAM4" or "4.0+PAM4" mode? In my mind it sounds feasible - and like a very good idea - but I'm no engineer.
Unfortunately no. As you may have noticed, PAM4 relies on the receiver and transmitter being able to decipher multiple voltage levels as multiple bits rolled into one. Effectively that means that each lane requires a low-precision ADC and a DAC to read the multiple voltage levels. PCIe 3.0, 4.0 and 5.0 will not have such a requirement, so there's no way they could decipher a multi-bit signal. You could argue that they could just add it as a "0.5" standard, sort of making it PCIe 3.5. So it'd be 3.0 with PAM4. However, the big problem is that using a multi-bit signal incurs a higher cost, adds more latency, and requires more power than just doubling the speed, even with the needed repeaters. So it doesn't make sense to use multi-bit signalling before you've run out of feasible ways to just increase the signalling speed.
That's basically what I described. The poster above asked why we couldn't introduce multi-bit signalling with previous generations of PCIe, and I gave the technical and physical reasons, plus how PAM4 is going to be designed and implemented for PCIe 6.0.
But I’m disagreeing. I don’t think the receivers are implementing multi bit ADCs. They demux the LSB and MSB and use zero crossing slicers. It’s different than typical multi bit ADCs made with resistor ladders or decoders. It’s different encoding and has different signal integrity characteristics.
PAM 4 doesn't multiplex two serial data streams, the SERDES simply encodes 2 bits per symbol. So instead of reading the signal level as either high or low for each transfer (like NRZ), a PAM4 receiver reads it as high, half-high, half-low, or low.
And there is absolutely no chance of previous generation PCIe PHYs being able to adopt PAM4.
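If it helps to see the 2-bits-per-symbol idea concretely, here's a toy sketch (the Gray-coded level mapping is just an assumption for illustration; real PHYs layer equalization, precoding, and FEC on top of this):

```python
# Toy PAM4 symbol mapping: two bits per symbol, four voltage levels.

PAM4_LEVELS = {(0, 0): -1.0, (0, 1): -1/3, (1, 1): 1/3, (1, 0): 1.0}  # assumed Gray coding
LEVEL_TO_BITS = {v: k for k, v in PAM4_LEVELS.items()}

def pam4_encode(bits):
    """Pack a flat, even-length bit list into PAM4 symbol levels."""
    return [PAM4_LEVELS[pair] for pair in zip(bits[0::2], bits[1::2])]

def pam4_decode(levels):
    """Slice each received level to the nearest nominal level and unpack its bits."""
    out = []
    for lvl in levels:
        nearest = min(LEVEL_TO_BITS, key=lambda nominal: abs(nominal - lvl))
        out.extend(LEVEL_TO_BITS[nearest])
    return out

data = [1, 0, 0, 1, 1, 1, 0, 0]
symbols = pam4_encode(data)      # 8 bits -> 4 symbols (same baud as 4 bits of NRZ)
assert pam4_decode(symbols) == data
```

The part an NRZ receiver is missing is the middle: it only has one decision threshold, so it physically cannot tell -1/3 from -1 or +1/3 from +1.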
That's not what I was asking about either - but rather if adopting PAM4 as an add-on (as SaturnusDK said above, as a retroactive "0.5") to previous standards might be a better route to cost-efficient >PCIe 3.0 speeds, as you'd avoid some of the signalling/trace-length issues of 4.0 and 5.0.
I appreciate the great responses though, and it seems I probably underestimated the cost/complexity of implementing PAM4 in silicon. Still, I have to wonder how this will scale as PAM4 adoption rises when PCIe 6.0 hits the market - might economies of scale bring this cost down? Complex PCBs aren't getting any cheaper any time soon, so silicon seems to be the only avenue for cost reduction to avoid the seemingly incessant price creep associated with faster I/O.
I'm not an EE, but it seems to me that PAM4 still requires higher signal integrity than NRZ. It's just not as difficult as a further frequency doubling. The addition of FEC would seem to be an implicit acknowledgement of this. However, I could also imagine potential implications on practical trace length or PCB structure.
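One way to put a number on that "higher signal integrity" point: at the same transmit swing, PAM4 stacks three eyes where NRZ has one, so each eye gets roughly a third of the amplitude (a simplified view that ignores equalization and coding gain):

```python
import math

# Vertical eye penalty of PAM4 vs NRZ at equal transmit swing:
# three stacked eyes => each eye is ~1/3 of the full swing.
penalty_db = 20 * math.log10(3)
print(f"PAM4 eye-height penalty vs NRZ: ~{penalty_db:.1f} dB")  # ~9.5 dB
# Recovering that lost margin is a big part of why PCIe 6.0 bakes in FEC
# instead of relying on the raw channel alone.
```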
"Actual Bandwidth" is the approximate aggregate bandwidth provided by the links described in the various PCI specification releases, aligned with their release date. "I/O Bandwidth" is a plot of a theoretical doubling of link bandwidth every 3 years starting with the 0.13 GB/s of the original PCI spec back in 1992. I guess this shows how well they're doing based on an arbitrary comparison that they selected in order to demonstrate how well they're doing.
It's as if they're saying "hey, we're staying ahead of the curve", except they're the ones who picked "the curve" by which to judge their progress.
IMO, it would've been more informative to fit an exponential curve to their actual schedule, so we can see roughly what rate of improvement they're delivering.
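If anyone wants to do that fit, here's a rough sketch (the spec years and x16 per-direction bandwidth figures are approximate and from memory, so treat the result as illustrative):

```python
import numpy as np

# Approximate spec years and link bandwidth in GB/s: original PCI, then PCIe 1.0-6.0 at x16.
years = np.array([1992, 2003, 2007, 2010, 2017, 2019, 2021])
gbs   = np.array([0.13,  4.0,  8.0, 15.75, 31.5, 63.0, 121.0])

# Fit a straight line in log2 space; the slope is doublings per year.
slope, intercept = np.polyfit(years, np.log2(gbs), 1)
print(f"Implied average doubling time: ~{1 / slope:.1f} years")
```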
Are there any NAND or SSD makers with public (or even leaked) roadmaps? Not many people care much about which storage devices they'll be able to buy in 4-5 years.
Yes, there are roadmaps for NAND. The problem is they focus on cost and capacity, and don't often mention transfer speed. My guess is that they are not sure if transfer speed will continue to sell in the future.
I'm curious if PCIe 6.0 will permit PAM4 data transmission but at the reduced clocks of PCIe 4.0/3.0 etc. in a low-power mode. This would still be additional bandwidth vs. the previous standards and likely saves some complexity by not having to toggle between PAM4 and NRZ as often. Otherwise I'm curious what the turnaround time for a PAM4-to-NRZ transition would be on the bus, and how much energy could be spent thrashing across that transition.
The PCIe specification dictates a maximum 36dB signal loss for PCIe 4.0, 5.0 and 6.0 alike. That alone should tell you that a multi-bit signal, i.e. a signal with several discrete voltage states, uses more power than increasing the frequency, since the loss budget has to be met even for the lowest discrete voltage state, meaning the transmitter needs to run at a higher voltage, which incurs a higher power loss. The reason for even using a multi-bit signal is that with PCIe 5.0 we're already at 16GHz, with a corresponding maximum PCB trace length of about 80mm between repeaters or signal conditioners. Doubling the speed again, and thereby halving the trace length again, isn't a feasible option. That is the only reason to go multi-bit. The disadvantages are too great to implement it at lower speeds.
I think PCIe 5.0+ will be great for eGPUs. Currently there are only 4 PCIe 3.0 lanes available over TB3. With 5.0, even sticking with just 4 lanes would give as much bandwidth as PCIe 3.0 x16. So, apart from a little protocol overhead, near full potential of the GPU could be utilized, as opposed to the current 10%-40% performance hit.
The PCIe improvements don't necessarily translate to Thunderbolt. Thunderbolt isn't just limited by PCIe, Thunderbolt is explicitly designed to get up to PCIe speeds with as little cost as possible.
The main issue is guaranteeing signal integrity at long distances in thin, cheap cables. This means you can't just put current thunderbolt hardware on a faster PCIe connection and it will just get faster.
Over copper, cross-talk at those enormous clock rates is a significant issue - and PAM4 coding makes this much worse. There's a lot of catching up for Thunderbolt to do to achieve PCIe 5.0 signalling over copper, and I don't think the tech will be there for years to come.
Thunderbolt 3 is already severely limited: while the maximum specified copper cable is 3 meters, PCIe 3-speed signalling only works over 0.5 meters.
In all likelihood, even with future technology in active cables, the USB Type-C connector will itself become a major issue, and I find it highly unlikely that it could support PCIe 5 signalling.
Thus, a future, faster Thunderbolt will be an entirely new technology. I don't see the eGPU concept itself surviving into that future.
After all, what's the point? An eGPU box is barely smaller than a full-blown PC that would easily outperform any notebook. You might as well just make it a full-blown computer and do data transfer over Thunderbolt, so all your data is available on the small gaming box you connect to your notebook.
After reading the comments in the article on the DisplayPort 2.0 announcement, I would like to retract this Toslink reference. There's much discussion of optical cabling, in that thread.
Cooper Lake ended up not having PCIe 4.0. Looks like some Tiger Lake chips and Ice Lake server chips will have PCIe 4.0 in the second half of 2020, as will some Optane SSDs.
PCIe 5.0/CXL is on the roadmap for 2021, both in the Xe HPC GPUs and in the Sapphire Rapids Xeon chips being used in the Aurora exascale project.
So, as it is turning out, Intel isn't skipping PCIe 4 or PCIe 5. Both were demonstrated in Agilex FPGA chiplets in 2019, and a Stratix 10 DX chip also had PCIe 4, sampled in 2019.
shabby - Tuesday, June 18, 2019 - link
I guess Intel will skip 4 and 5 because their new desktop products will be out in 2021? Amiright hstewart?
HStewart - Tuesday, June 18, 2019 - link
Your guess is as good as mine. But I believe the previous articles stated Intel is going to 5 only; PCIe 4 is skipped.
Arsenica - Wednesday, June 19, 2019 - link
Cascade Lake in LGA4189 will feature PCIe 4.0.
Santoval - Wednesday, June 19, 2019 - link
No it won't. Intel will debut PCIe 4.0 with Ice Lake Xeon CPUs, probably not with Ice Lake-S/H CPUs for desktop (which I don't think will ever be released; they will probably can that release and launch Tiger Lake CPUs for desktop directly), and the upcoming Ice Lake-U/Y CPUs for laptops and convertibles probably will not have PCIe 4.0 either. Even Cooper Lake, the 14nm-node-based successor of Cascade Lake, will lack PCIe 4.0.
halcyon - Wednesday, June 19, 2019 - link
Source for that?
Arsenica - Wednesday, June 19, 2019 - link
I confused Cascade Lake with Ice Lake. Ice Lake in LGA4189 will have PCIe 4.0.
Korguz - Thursday, June 20, 2019 - link
Arsenica, I'm confused with ALL the "Lake"-named CPUs... can never tell which is which, which is out, coming out, or a future chip... is there no one at Intel with the power to change the names of the chips? Or is it a tactic to confuse people??? Geeze... as if the WAY TOO MANY CPU SKUs aren't bad enough... one of the local stores here, desktop only, has approx 22 CPUs listed!!!
Targon - Thursday, June 20, 2019 - link
Intel loves to confuse consumers. Dual-core i7 laptops; which chips are dual-core, quad-core, hex-core? Why charge extra for HyperThreading at this point, outside of the low-end i3 vs. i5?
mode_13h - Thursday, June 20, 2019 - link
Intel is now a company led by its marketing department. Product naming and differentiation is how those people "add value".
mode_13h - Thursday, June 20, 2019 - link
BTW, I would add to your list the whole bronze/silver/gold/platinum thing. Anyway, at least there's (still) this: https://ark.intel.com/content/www/us/en/ark.html
Korguz - Thursday, June 20, 2019 - link
" bronze/silver/gold/platinum thing" that is with the xeons, correct ?? i was just talking about desktop, and what a local store has on their site for desktop ( non server/workstation ) cpus ..mode_13h - Friday, June 21, 2019 - link
Yeah, I was riffing on the theme of them creating confusing with their branding mumbo jumbo.Korguz - Friday, June 21, 2019 - link
mode_13h, ahhhh :-) Still... WAY too many. It's one of the reasons I went with the 5930K I have; I tried to pick one on 1151, I think it was, back then. While I was able to choose a mobo, trying to choose a CPU gave me a headache... so I said forget it, grabbed an X99 Deluxe to go with the 5930K, and left it at that...
GreenReaper - Sunday, June 23, 2019 - link
Intel started off with canyons; then their 10nm ambitions melted and we ended up with a bunch of lakes.
mode_13h - Tuesday, July 2, 2019 - link
Nice.
Qasar - Wednesday, June 19, 2019 - link
I don't remember reading anywhere that Intel will skip PCIe 4 and go straight to PCIe 5...
Kevin G - Wednesday, June 19, 2019 - link
They might on the desktop, due to how things are aligning with 10 nm production. Right now it is looking like Ice Lake will be mobile (PCIe 3.0) and server (PCIe 4.0?) only. Comet Lake is the next major desktop part and that is expected to have PCIe 3.0 and leverage DDR4 too. The platform after that is expected to bring both DDR5 and PCIe 5.0 to desktops simultaneously, but that is years down the road.
Korguz - Thursday, June 20, 2019 - link
I will believe it when it actually comes out... I'm sick of Intel and their roadmaps that never happen...
dgingeri - Wednesday, June 19, 2019 - link
They said the 6.0 standard specs won't be out until the end of 2021, so Intel will likely not get it implemented until late 2022 or later.
Death666Angel - Tuesday, June 18, 2019 - link
"Intel has already committed to PCIe 5.0-capable CPUs in 2021" And we know how reliable they've been these past several years..... I just had to. :D
Luffy1piece - Tuesday, June 18, 2019 - link
Haha word
ballsystemlord - Tuesday, June 18, 2019 - link
Don't worry, Intel will support PCIe 5.0 on their 14nm+++++++++ process. :D
halcyon - Wednesday, June 19, 2019 - link
In 2022... or 2 or 22 or 222. Add as many years as you want. It'll be a 2 anyway.
Really, I lost ALL hope in Intel roadmaps years ago.
Luffy1piece - Tuesday, June 18, 2019 - link
Shift to Gen-Z already!
It's good to have options, but for now the industry should shift to Gen-Z: 200+ GB/s of bandwidth available now, with future updates up to 400+ GB/s. It also has <100 ns latency, compared to the ~700 ns latency of PCIe that I'm aware of. Best of all, it breaks the fixed CPU-memory structure where one CPU supports only one DDR generation of RAM. This will allow us to upgrade to and mix NVRAM/persistent memory/SCM as they become available in the next few years; some of these have the potential to provide RAM-like bandwidth and latency (or better) with SSD-like non-volatile capacity. Single-thread performance is anyway just growing at around 3% YoY, so I see these new memories as the only big performance upgrade in the next few years.
Of course I don't expect Intel to give us Gen-Z, because it wants to push its own memory products (3D XPoint) and connectivity solutions like CXL (using PCIe), and it will be leveraging its majority market share to do so. Thus my hope is with AMD. It's about time we got some healthy competition in the CPU market and let us consumers benefit from the latest technology, instead of monopolies manipulating markets to maximize their profit and use old tech. Also, can't wait to use an 8K monitor directly with a laptop.
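On the latency point, a quick illustration of why it can matter more than peak bandwidth for small, dependent accesses (the 100 ns and 700 ns figures are the ones quoted above, not measurements, and the link speed is just an assumed example):

```python
# Effective throughput of small, one-at-a-time reads on a latency-bound link.

def effective_gbs(payload_bytes, latency_ns, link_gbs):
    transfer_ns = payload_bytes / link_gbs      # GB/s is numerically bytes per ns
    return payload_bytes / (latency_ns + transfer_ns)

for name, latency in [("Gen-Z-ish, 100 ns", 100), ("PCIe-ish, 700 ns", 700)]:
    rate = effective_gbs(64, latency, 32)       # 64-byte reads on a 32 GB/s link
    print(f"{name}: ~{rate:.2f} GB/s effective")
```

Roughly 0.6 GB/s vs 0.09 GB/s for cache-line-sized reads, no matter how fat the link is.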
TeXWiller - Tuesday, June 18, 2019 - link
I haven't heard of Gen-Z having a solution for coherence yet. This is why the shorter-range interconnects are still needed as well. I too would like an industry-wide, ultra-high-speed standard interconnect to emerge so that we could make those plug-and-play "mainframes" by just throwing boxes of processing, storage/memory and everything else together.
Luffy1piece - Tuesday, June 18, 2019 - link
This is what I found on http://genzconsortium.org/faqs/
Does Gen-Z support cache coherency?
Yes. Gen-Z supports cache coherency in point-to-point, meshed, and switch-based topologies. Cache coherency can be used between processors with accelerators, accelerators with accelerators, or accelerators with memory and storage.
TeXWiller - Wednesday, June 19, 2019 - link
Thanks. Things change so fast these days. You blink and your incoherent neighbor's children are all coherent and well-behaved. ;)
As soon as I saw those 2x10Gb ports in the new Mac Pro, I started to think about clustering. Apple, give the generation z what it deserves and implement Gen-Z for connecting those petabyte flash storage boxes and additional Macs for the cases when one Xeon with GPUs just doesn't cut it. You already have the rack model, after all.
mode_13h - Wednesday, June 19, 2019 - link
It's using an Intel CPU. It doesn't seem to me like Gen-Z would work particularly well as a bolt-on to PCIe.
Maybe, when Apple finally brings their own ARM cores to the desktop, Gen-Z will figure into their plans.
Targon - Wednesday, June 19, 2019 - link
PCI Express would hang off the Gen-Z bus, so the system as a whole could be Gen-Z and still have PCI Express slots. Bandwidth/lanes to each slot would be dynamic as needed, so you could have every slot get 16 lanes.
mode_13h - Wednesday, June 19, 2019 - link
It'd be quite a feat for Apple to wrangle such a mod from Intel. That's all I'm sayin'.
Targon - Thursday, June 20, 2019 - link
Apple needs to get away from Intel, which has shown a complete disregard for security. If I were the sort of person to write malware, I'd have a field day with all the security problems with Intel processors, knowing that too many people don't install updates.
mode_13h - Thursday, June 20, 2019 - link
> Apple needs to get away from Intel
That might be about to happen.
ats - Wednesday, June 19, 2019 - link
That doesn't actually mean anything. Technically, you can say the same thing for PCI-X from before PCIe. Technically, Ethernet also supports cache coherency. There are even such systems in existence, which is more than you can say for Gen-Z. Gen-Z will have its uses, but marketing hype isn't product reality.
ats - Wednesday, June 19, 2019 - link
Gen-Z is pretty much vapor and pretty much useless for main memory. And no one has demonstrated this mythical <100ns latency for Gen-Z.
Luffy1piece - Wednesday, June 19, 2019 - link
Why the dislike for Gen-Z?
I'm basing my opinion on https://www.youtube.com/watch?v=OeJxZMTgCcE&in...
(Gen-Z starts at 8 minutes) and many other sources. Curious how you came to your conclusion.
Also on the topic of latency, I think even a figure around 100ns will be a bottleneck esp. with the upcoming NVRAM technologies. I hope they can improve it
ats - Thursday, June 20, 2019 - link
Gen-Z is largely a solution looking for a problem. Nothing it is really trying to solve is going to be actually useful to solve. Dis-aggregation is just a bad idea and completely counter to the actual trends in technology and power, especially with regard to memory.
I base my opinion on having been down this road multiple times before and having designed actual high-speed interconnects used in hundreds of millions of computers. Gen-Z is trying to push roughly the same Kool-Aid that Future I/O and Next Gen I/O pushed both before and after they merged into InfiniBand. In the end, IB was just lower-latency networking (despite plenty of work to make it work in other areas).
I/O will still connect to whatever PCI SIG goes with, since that is where the installed base will go. Gen-Z might have a life as a replacement for IB in the networking niche, but that's doubtful; Ethernet has learned the lesson well and is now working on latency to counter just that. More likely, Gen-Z might find a niche as a standardized sideband for accelerators, if it is lucky. Everything else basically died when Intel announced CXL as effectively an extension of PCIe.
Real NVRAM technologies will either have a dedicated memory bus interface, share an existing memory bus interface, or not actually be usable as real NVRAM but instead be SSDs.
Targon - Wednesday, June 19, 2019 - link
Intel and NVIDIA decided against joining the Gen-Z Consortium, so I doubt that Intel or NVIDIA will support it for at least 5-7 more years.
Luffy1piece - Wednesday, June 19, 2019 - link
Don't know about Nvidia, but Intel has a conflict of interest because they want to push their own memory and interconnect solutions and tap into that TAM, as shown in their latest investor meeting: https://s21.q4cdn.com/600692695/files/doc_presenta...
Thus my hope is with AMD. It's getting better with the CPUs, and Gen-Z can be a really good USP for it. It would need the support of memory and equipment manufacturers, but then the good news is that other than Intel and Nvidia, pretty much every big-name company is a member of the Gen-Z Consortium.
ats - Thursday, June 20, 2019 - link
And all the big names that actually do anything are also members now of CXL and PCI SIG. AMD will go with what PCI SIG supports, which will be CXL, which runs on PCIe.
It is easy to sign onto a working group; it has nothing to do with whether you actually use the solution the working group is pushing. In most cases, you join just to get the info.
Look at the history. InfiniBand was the previous Gen-Z. It had support from all the big names. EVERYONE at the time: Intel, Compaq, Dell, HP, IBM, Microsoft, Sun. That was quite literally the entirety of the computer universe at the time. Gen-Z is basically a carbon copy and will largely have the same issues.
PCI SIG is VERY VERY good at what it does. It has existed forever. It basically owns I/O. If they aren't pushing an I/O standard, it has zero chance to be mainstream. They know what they are doing.
Luffy1piece - Thursday, June 20, 2019 - link
PCI SIG took 7 years to get to 4.0, so I won't call it "very very good". In fact, that slow progress is one of the main reasons behind alternatives like Gen-Z, especially when you think about the bandwidth requirement growth in data centers.
It is indeed hard to change the status quo, especially when there's a monopoly (like Intel), but the future is not always the same as the past, and technologies change all the time. At the end of the day the market's needs decide everything.
The PC segment is shifting to all SaaS - everything accessed through a browser, even gaming very soon - and as a result the priorities are changing to portability and longer battery life. Therefore there is less (though not zero) chance for my dream of having a Gen-Z-enabled laptop in which I can swap in NVRAM as it arrives in the future. CPU single-thread progress of 3% YoY is disappointing, so de-coupling memory from the CPU and encouraging competition in the memory market would be a good performance boost. Lastly, I also like the idea of one interconnect tech covering all connections, from memory to I/O to networking, and having excess bandwidth for future VR, 16K, or whatever else.
On the other hand, PC sales have been decreasing since 2011 and are projected to dwindle more and more every year. Meanwhile data centers have been growing and will continue to grow; even Intel has officially changed its branding from PC-centric to data-centric. Gen-Z's current bandwidth of 200 GB/s is something PCIe will reach in 6.0, which is due in 2021 if everything goes well, let alone the actual implementation of it. Therefore I doubt data centers will just wait years for PCI SIG to figure it out and ignore a solution already available, which is probably why I've already seen Gen-Z in AMD HPC presentations.
Targon - Thursday, June 20, 2019 - link
The PC market has been boring since the second-generation Core i series of chips, and people saw little reason to upgrade their computers, including enthusiasts. With AMD pushing the industry forward again, there are reasons why people are upgrading computers they built only two years ago. Dual-core on the desktop will finally be dead, and hopefully dual-core laptops will be dead next year (I hate that AMD offered the Ryzen 3 2200U and 3200U).
mode_13h - Friday, June 21, 2019 - link
> I hate that AMD offered the Ryzen 3 2200U and 3200U
So, you'd rather have a quad-core Gemini Lake-based Chromebook?
I won't go as far as "two cores good, four cores bad", but some dual-cores still beat some quad-cores.
mode_13h - Friday, June 21, 2019 - link
Well... kinda hard to overlook NVLink. Not that it will become standard, but it's out there.
Luffy1piece - Saturday, June 22, 2019 - link
NVLink is a proprietary Nvidia technology, so not only will it cost to get the license, it will depend on Nvidia whether and to whom they decide to license it. Gen-Z and even PCIe are free for everyone to implement. Also, I don't think NVLink can act as a universal connection technology like Gen-Z, because it was developed to connect CPU and GPU.
mode_13h - Sunday, June 23, 2019 - link
NVLink is currently used for CPU <-> GPU and GPU <-> GPU. Now that Nvidia owns Mellanox, maybe we'll also see it used to connect NICs.
thomasg - Monday, June 24, 2019 - link
NVLink isn't some magical wonder tech that just nobody else could do. It's not that different from PCIe electrically. The serial-lanes concept of PCIe is also used by NVLink, as is the line coding.
What makes NVLink (and comparable technologies) so much faster is that they don't have to function in a PCI(e)-like environment. Nvidia can specify much shorter lanes and use higher SerDes rates. They can also choose vastly different footprints and routing options that couldn't be used for PCIe.
Lastly, PCIe has to be affordable and universally usable, which does not apply to NVLink. They can spec the most expensive hardware and ignore many use cases, and thus run at much higher rates. Money isn't really all that relevant to NVLink customers; it's quite relevant to the PCIe target group. Supporting cheap clients is something that PCIe has to do, but NVLink can afford not to. Running lanes in a densely populated board without needing an extra PCB layer is what PCIe has to do. Demanding 2 extra board layers is something NVLink can easily do.
Of course, since it's proprietary and Nvidia is secretive as usual, we don't really know, and I can only speculate upon publicly available information.
mode_13h - Tuesday, July 2, 2019 - link
Well, thanks for speculating. That's more than I knew.
sheh - Tuesday, June 18, 2019 - link
The table should say "half duplex" rather than "full duplex" (also in an older article on PCIe).
Ryan Smith - Tuesday, June 18, 2019 - link
Full duplex is correct. It's 128GB/sec in each direction, simultaneously.
boeush - Tuesday, June 18, 2019 - link
Full duplex would be 128 GB/sec + 128 GB/sec = 256 GB/sec: just as shown in the PCI-SIG chart up above. To say 128 GB/sec full-duplex, and then to qualify "128 GB/sec in each direction", makes no sense.
repoman27 - Tuesday, June 18, 2019 - link
Ryan’s correct on this one, AFAICT. If a link can provide 128 GB/s in both directions at the same time, that’s the equivalent of 128 GB/s full-duplex. If you wanted to be super pedantic, PCIe lanes are actually dual-simplex—each transmission direction gets its own dedicated signaling pair. And the sum of the simplex bandwidth, or 256 GB/s, would be the aggregate bandwidth.
Ryan Smith - Wednesday, June 19, 2019 - link
And to be fair to everyone, I struggled with this a bit because the terminology here is so loose. You have things like Ethernet where the quoted speeds are in each direction, and then things like PCIe where the number is the aggregate, and further things like UHS where sometimes the tech is outright half duplex.
So "full duplex" seems like the easiest way to convey the idea that it's not an aggregate measure, but that the link has simultaneous up and down transfers running at the same high speeds. Flipping through some technical definitions, I haven't found anything that's a better fit.
But if someone has a suggestion for what to call it that might work better, I'm all ears.
sheh - Wednesday, June 19, 2019 - link
To me, "full duplex X Gbps" translates to X cumulative for both directions together.Instead of using "full/half duplex" I'd just say "in each direction".
repoman27 - Wednesday, June 19, 2019 - link
Full-duplex means "in both directions at the same time". So saying "n units/s full-duplex" is the same as saying "n units/s in both directions at the same time". Adding both directions together results in the aggregate bandwidth of the link, which is twice what is available in a particular direction.PCI-SIG may be touting that number in their slides because it's the bigger number, but that's not how I/O interfaces are marketed to consumers—it's either the full-duplex rate that's advertised, or the simplex rate in the case of interfaces like DisplayPort or HDMI that only operate in a single direction.
mode_13h - Wednesday, June 19, 2019 - link
What's wrong with just saying "Full-duplex; XYZ GBps per direction."?
When I read some number quoted as the full-duplex rate, I assume that's the sum of both directions. However, because data flows are rarely symmetrical and available bandwidth in one direction can rarely be substituted for the other, the uni-directional rate is usually the most interesting & useful spec.
thomasg - Monday, June 24, 2019 - link
That's redundant information; full-duplex is just the technical term for "X information-rate in both directions".
If that's not the case, it's not full-duplex.
This is a universally accepted definition that has held since the term was defined.
thomasg - Monday, June 24, 2019 - link
Of course, that doesn't mean that marketeers without an understanding of the definition won't in some cases just double the number since it looks better - but then they usually also won't use full-duplex to describe it, just a pointless aggregate data rate.
Full duplex (or dual simplex) is "xx bandwidth in each direction *simultaneously*". Half duplex (or simplex) is *not* simultaneous. The PCIe protocol is the former.
There are minor differences between full duplex & dual simplex and between half duplex & simplex that are not particularly relevant here.
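For anyone who just wants the arithmetic this sub-thread is arguing about, using PCIe 6.0 x16 as the example (raw transfer rates, ignoring encoding and protocol overhead):

```python
lanes, gt_per_s = 16, 64                  # PCIe 6.0 x16
per_direction_gbs = lanes * gt_per_s / 8  # ~128 GB/s each way, simultaneously
aggregate_gbs = 2 * per_direction_gbs     # ~256 GB/s summed over both directions

print(f"Full duplex: {per_direction_gbs:.0f} GB/s per direction")
print(f"Aggregate  : {aggregate_gbs:.0f} GB/s (the bigger number in the slides)")
```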
ballsystemlord - Tuesday, June 18, 2019 - link
Ryan, your spelling and grammar is great, but you're typing too fast and missing the "e" on PCIe a lot:
"...the group made it clear that they were not just going to make up for lost time after PCI 3.0..."
"...but it will be interesting to see where things stand in a few years once PCI 6.0 is in the middle of ramping up."
"As for end users and general availability of PCI 6.0 products,..."
Thanks for the article, as always!
ballsystemlord - Wednesday, June 19, 2019 - link
Yikes, should have used an "are" instead of an "is". Sorry.
PeachNCream - Tuesday, June 18, 2019 - link
Will need 2 tiny, useless cooling fans instead of just one! Double the chances of failure after 6 months of occasional use!
mode_13h - Tuesday, June 18, 2019 - link
You're assuming the lane count stays constant. I think the mainstream CPUs, chipsets, and peripherals will start dropping lane counts, by the time they adopt PCIe 5.0.
shabby - Tuesday, June 18, 2019 - link
Don't give Intel any ideas, the last thing we need is fewer lanes.
AMD on the other hand... more lanes for everyone!
mode_13h - Wednesday, June 19, 2019 - link
Well, it's a legit way to counter the increased power consumption, while also cutting costs.
Honestly, do you think we actually need x4 NVMe @ PCIe 5.0+? I can't see the use case for that, on the desktop. And any drives that can exceed even what PCIe 4.0 provides are going to run too hot.
Same with GPUs. I don't see what's going to change that's going to deliver a measurable benefit for *most* GPUs at > x8 @ PCIe 4.0. You can already find some entry-level GPUs with x8 interfaces - a trend I see continuing to move up the range. By PCIe 5.0, some cards & mobos might even drop the x16 connector.
And 10 Gig NICs will do fine at just x1 @ PCIe 4.0.
So, why do you really need so many lanes, in a desktop? It seems to me that this push for faster connectivity is driven almost 100% by cloud & HPC.
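As a rough check on those claims, here is a small Python sketch estimating how many lanes a few common devices would need at each generation. The per-lane figures are approximate usable rates in GB/s per direction (accounting for 128b/130b encoding), and the device requirements are ballpark assumptions:

import math

per_lane_GBps = {"3.0": 0.985, "4.0": 1.969, "5.0": 3.938}   # approx. usable, one direction

devices_GBps = {
    "10 GbE NIC": 1.25,                            # 10 Gb/s of payload
    "fast NVMe SSD": 3.5,                          # roughly where PCIe 3.0 x4 drives top out
    "GPU needing x8 @ 4.0": 8 * per_lane_GBps["4.0"],
}

for dev, need in devices_GBps.items():
    lanes = {gen: math.ceil(need / rate) for gen, rate in per_lane_GBps.items()}
    print(dev, lanes)   # e.g. the 10 GbE NIC fits in a single PCIe 4.0 lane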
shabby - Wednesday, June 19, 2019 - link
With GPUs no, but with NVMe drives yes. Look how long we've been stuck at 3.5GB/sec with NVMe PCIe 3.0 drives. As soon as PCIe 4.0 came out we're up to 4.8GB/sec already. Give them available bandwidth and they'll use it up.
ats - Wednesday, June 19, 2019 - link
You can't buy an NVMe drive of any type that can hit 3.5GB/s in any meaningful workload. PCIe 4.0 won't change that. You'll just get some bigger, more useless marketing numbers while actual performance remains pretty much unchanged. You can also take something like a 970 Pro and run it at half the width or data rate and not notice any actual performance difference.
AKA, sequential transfer rates are useless.
mode_13h - Wednesday, June 19, 2019 - link
But they'll use it up for *what*? Why do desktop PCs need more than the ~8 GB/sec they'll get with x4 PCIe 4.0?
On a workstation, I could imagine a need for paging in a multi-TB dataset. But I'm specifically talking about mainstream desktop CPUs & chipsets.
DanNeely - Tuesday, June 18, 2019 - link
Other than the fact that it's going to complicate my when-to-build-a-new-PC decision, this is great news.
My nominal plan was to replace my 4790K in 2021/2 at 6/7 years old. At that point I was expecting to jump on the DDR5 bandwagon and either have PCIe5 for future proofing or have it be reasonably clear that its signalling is too expensive to implement on consumer hardware and that 4.0 would be top of the line for anything I'd need for a reasonably long time.
Waiting for PCIe6 would push things out to the 2023/4 timeframe, by the end of which my (Jan 2015) system would be pushing a decade, which is a bit nerve-wracking in terms of reliability even if 4 physical cores haven't become a high-end gaming bottleneck yet.
mode_13h - Tuesday, June 18, 2019 - link
> My nominal plan was to replace my 4790K in 2021/2 at 6/7 years old.
The best time to upgrade is when you're starting to be limited by your current hardware. Beyond that, I'd say only delay if a new standard is right around the corner, at the point when you're almost ready to pull the trigger.
PCIe 3.0 still isn't really a bottleneck, in PCs. I think PCIe 4.0 will be fine for quite a while. As I said above, I expect the main difference with PCIe 5.0 is that mainstream CPUs, chipsets, and peripherals will start cutting lanes. This will reduce costs and power consumption, without much impact on performance.
DanNeely - Tuesday, June 18, 2019 - link
My nominal year 6/7 timeframe is because I want to replace my system on my own time, not in a frantic rush because something failed and left me down. That means I want to replace it before it gets old enough to start climbing the back side of the reliability bathtub curve.
mode_13h - Wednesday, June 19, 2019 - link
Why do you believe 6/7 years is the right number, if you want to avoid hardware failures? Except for some PSUs, nothing has a warranty that long.
If downtime is really an issue, then you should replace stuff before it goes out of warranty. Otherwise, just suck it up and replace stuff when it breaks.
BTW, speaking of power, I recommend using a high-quality PSU and UPS (i.e. with AVR). In my experience, you'll get more life out of your components, that way. Also, don't overclock, and either buy over-spec'd or ECC memory.
DanNeely - Wednesday, June 19, 2019 - link
Experience with prior systems having elevated failure rates as they approached a decade old, combined with the increasing difficulty of getting parts. The 2-year window fits new product cycles where (pre 10nm faceplant) Intel was issuing significant upgrades every 2 years; a pattern that AMD appears to be copying with Zen. Demoted/retired before 8 years means I start looking around 6, with timing being tied to major product refreshes along with time/budget constraints.
rocky12345 - Wednesday, June 19, 2019 - link
I'm one of those that uses the hardware as long as I can before upgrading. Once I start to notice slowdowns I change out things like the hard drive and upgrade to something like an SSD. I always try to max out the memory; in my case 32GB was my board's max, so that is what it got. I overclock everything that can be overclocked to the max. This has allowed me to keep on using my system from 2012 up until today without any speed issues. The only reason I am going to upgrade my system this fall is because of the next-gen consoles and the push in the market for more cores. I feel that my 4/8 CPU will become a huge bottleneck once the next-gen consoles come out with their 8/16 CPUs and the game industry really starts to support CPUs with those core counts, and my fake 8-thread CPU will just slowly chug along in games at that point. There is only one game right now that seems to make my CPU feel like it could use more cores, and it does not run at 100% all of the time. I got a lot of time in on this CPU, but it is time to say goodbye to it and demote it to some lesser tasks rather than keep it on my desktop as my gaming station. Now I just have to figure out which new CPU to get in the fall.
Mikewind Dale - Tuesday, June 18, 2019 - link
I wonder if this will make it possible to implement an inexpensive Optane page file that rivals DRAM without having to use Optane DIMMs? I've got a friend who occasionally needs about 512 GB of RAM, and that's expensive. But PCIe 6.0 x4 will have 32 GB/s, which is almost as much as dual channel DDR4. So I could imagine installing an Optane M.2 x4 SSD and designating it as the location for the page or swap file. This might be easier and cheaper than buying a server board that supports Optane DIMMs or hundreds of GB of RAM.
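For what it's worth, the bandwidth side of that comparison pencils out roughly like the sketch below (assuming DDR4-3200 in dual channel and the headline PCIe 6.0 rate, ignoring protocol overhead; latency rather than bandwidth would likely remain the bigger gap for a swap file):

pcie6_x4 = 64 * 4 / 8          # ~32 GB/s in one direction
ddr4_3200_dual = 3.2 * 8 * 2   # GT/s * 8 bytes per channel * 2 channels = 51.2 GB/s

print(f"PCIe 6.0 x4:            ~{pcie6_x4:.0f} GB/s")
print(f"dual-channel DDR4-3200: ~{ddr4_3200_dual:.1f} GB/s")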
mode_13h - Wednesday, June 19, 2019 - link
I'm not sure Intel will keep making Optane SSDs. I think their long-term goal is to use Optane DIMMs as leverage to help sell Intel CPUs.
Perhaps Micron will continue offering 3D XPoint in SSDs, but probably only for the enterprise market. And those probably won't be much cheaper than RAM.
willis936 - Tuesday, June 18, 2019 - link
I've been to a few of the 802.3 meetings where PAM4's development has happened and I've drunk with some of the people representing big players. I've also tested high-speed Ethernet and seen PCIe PHY testing. I will be shocked if there is PCIe 6.0 hardware ready in the next ten years. PAM4 is not a trivial problem. It isn't a "throw twice as many EQ taps at it, deal with half the SNR, and call it good" scenario. Fortunately IEEE is all public, but they'll need to actually understand how these lofty models are made and why before they can hope to implement their own PAM4 over a long, nasty channel.
Yojimbo - Tuesday, June 18, 2019 - link
Maybe they are better at it when they aren't drinking...
willis936 - Tuesday, June 18, 2019 - link
Drinking is the only way humans can get through a week of standards development. If you think it's easier than I'm making it sound, then I invite you to try to even get up to speed.
http://www.ieee802.org/3/ck/public/18_05/index.htm...
mode_13h - Wednesday, June 19, 2019 - link
Are you considering the orders of magnitude shorter distances involved in PCIe than Ethernet over copper? PCIe can also be constrained to shielded PCB layers, whereas Ethernet over copper has to contend with more EMI.
I'd like to believe that the PCIe SIG wouldn't publish such plans without anyone ever conducting lab tests to validate their feasibility.
willis936 - Wednesday, June 19, 2019 - link
Calls for interest and feasibility studies are industry standard. Following Ethernet demonstrates good feasibility on paper.
Ethernet is a lot more than twisted pair and RJ-45. SFP is good for up to 25 GBaud over a few meters. Going to 50 GBaud while still covering a few meters on expensive channels is what the IEEE 802.3ck WG is currently trying to achieve. Most of where these very high baud rates see use is in optical, chip-to-chip, and chip-to-module applications. Those are all also covered by Ethernet.
repoman27 - Wednesday, June 19, 2019 - link
Higher order modulation comes with trade-offs, no doubt, which is why the industry has clung to NRZ for as long as possible wherever they can, but that hasn't stopped the 802.11 folks from commercializing 1024-QAM in consumer gear years before 802.11ax was even finalized.
Pretty much the entire industry seems to have stopped pushing NRZ at around 28 and change Gbaud and switched to PAM4 for future generations. That's why I seriously wonder about PCIe 5.0 using 32 Gbaud NRZ. Have existing transceivers demonstrated enough headroom to do that? And is that really more cost effective than switching to PAM4 with half the Nyquist rate? Obviously the answers must be "yes", but for some reason I just envisioned the order being reversed, i.e. PCIe Gen5 introducing PAM4 and Gen6 going to 32 Gbaud.
willis936 - Wednesday, June 19, 2019 - link
I think it comes down to cost. If your customers are willing to pay for large engineering costs, large power budgets, and lots of silicon, then yeah, it can be done. For electrical signaling you can always try to push the baud rate higher, but you start brushing up against serious engineering and cost challenges past 25 GBaud. It’s more reasonable on shorter channels, like die packaging.
Sivar - Wednesday, June 19, 2019 - link
"This allows PAM4 to carry twice as much data as NRZ without having to double the transmission bandwidth, which for PCIe 6.0 would have resulted in a frequency around 30GHz(!)."Does this imply that PCIe 5.0 and 6.0 use a 15Ghz clock?
PixyMisa - Wednesday, June 19, 2019 - link
16 GHz, yes.
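The arithmetic behind that answer: the fundamental (Nyquist) frequency of a serial link is half the symbol rate. A quick sketch, assuming the headline transfer rates:

def nyquist_GHz(gbaud):
    """Fundamental frequency (GHz) for a given symbol rate (Gbaud)."""
    return gbaud / 2

print(nyquist_GHz(32))   # PCIe 5.0: 32 GT/s NRZ  -> 32 Gbaud -> 16 GHz
print(nyquist_GHz(32))   # PCIe 6.0: 64 GT/s PAM4 -> 32 Gbaud -> still 16 GHz
print(nyquist_GHz(64))   # hypothetical 64 GT/s NRZ -> 32 GHz, the ~30 GHz the article warns about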
Kastriot - Wednesday, June 19, 2019 - link
So PCIe on X570 is already outdated?
mode_13h - Wednesday, June 19, 2019 - link
No, because there's nothing that currently implements PCIe 5.0. PCIe 4.0 peripherals are just ramping up.
Even if you had a CPU/mobo with PCIe 5, you've got to have peripherals that can take advantage of it. And if you look at the additional power requirements from adding PCIe 4.0 support, you can see that such features aren't exactly "free". In this light, we don't actually *want* the capability too far in advance of any way to utilize it.
InvidiousIgnoramus - Wednesday, June 19, 2019 - link
So uh, are we going to end up completely leapfrogging 5.0?
rocky12345 - Wednesday, June 19, 2019 - link
Probably not, because they won't miss out on all of those sales of products such as new mainboards and all the other devices that will support PCIe 5.0. It just means your shiny new mainboard won't be at the forefront of technology for very long, just like those of us who will be investing in boards that support PCIe 4.0 tech.
Kevin G - Wednesday, June 19, 2019 - link
Very unlikely. If anything, the move to PAM4 likely ensures a very long life span for PCIe 5.0 due to the increased costs of moving to PCIe 6.0. High-end stuff will quickly migrate to PCIe 6.0 as this makes 1 Tbit/s Ethernet feasible for host adapters. That spec is also currently going through development, so I would expect the first 1 Tbit Ethernet IO cards* to arrive shortly after PCIe 6.0 is finalized.
*I have a strong suspicion that the first 1 Tbit Ethernet products will be integrated directly into chip packaging. Very probable that silicon photonics will be involved.
p1esk - Wednesday, June 19, 2019 - link
I don’t see any compelling reasons to switch from 4.0 to 5.0 in the next 5 years. By the time those reasons appear, 6.0 will be available. So yeah, I think 5.0 is pointless.
FreckledTrout - Wednesday, June 19, 2019 - link
Of course there are in the server space. Lots of compelling reasons for IO, networking, and many GPUs for compute.
Kevin G - Wednesday, June 19, 2019 - link
PCIe 5.0 will have its spot in embedded and mobile, where the reduction in trace lengths is not as critical. The benefit here is that fewer lanes will be necessary to reach a given bandwidth, which leads to smaller chip packages and fewer traces on boards. The bonus is that PCIe 5.0 is the ceiling: mobile/embedded can throttle down the clocks to 4.0 or 3.0 speeds if/when the extra bandwidth is not necessary, to conserve power. It is more power efficient to run external buses at higher clocks than to go wider with more lanes.
A basic example would be the M.2 SSDs of today that are generally 4x PCIe 3.0, which is the same bandwidth as 1x PCIe 5.0. Another example would be that a dual 10 Gbit NIC can run at full bandwidth with a single PCIe 5.0 lane to the host chip. 25GBASE-T is also supposed to emerge in this time period, which could also leverage narrow PCIe 5.0 support.
p1esk - Wednesday, June 19, 2019 - link
Good point about single-lane use cases; however, I'm not sure many manufacturers will want to change their designs so often (most are probably just now starting to think about adopting 4.0).
mode_13h - Wednesday, June 19, 2019 - link
Given the consensus that 6.0 will be more expensive and dissipate more power than 5.0, I don't foresee 5.0 being skipped.
Valantar - Wednesday, June 19, 2019 - link
Perhaps a dumb question, but here goes nonetheless: could this be made somehow backwards-compatible to allow for >PCIe 3.0 speeds on PCIe 3.0-spec boards (with updated controllers, obviously)? Given how the increased signalling frequency seems to be the main driver for shortening trace lengths and requiring redrivers with 4.0 (and even further with 5.0 from what I understand), could PAM4 be implemented at a lower frequency for effectively doubled transfer rates compared to 3.0 without dramatically affecting trace complexity? I get that a PAM4 signal is inherently more vulnerable to interference, but as the article states, it also has FEC to combat that.
In other words, could PCIe 6.0 compatibility bring with it a "piggyback" double-speed protocol that can stack on top of previous standards? Given that any 6.0 host and device should be able to step down to 5.0 (which seems to be roughly "6.0 without PAM4"), 4.0, 3.0 and so on, could they then implement a "3.0+PAM4" or "4.0+PAM4" mode? In my mind it sounds feasible - and like a very good idea - but I'm no engineer.
SaturnusDK - Wednesday, June 19, 2019 - link
Unfortunately no. As you may have noticed, PAM4 relies on the receiver and transmitter being able to decipher multiple voltage levels as multiple bits rolled into one. Effectively that means each lane requires a low-precision ADC and a DAC to read the multiple voltage levels. PCIe 3.0, 4.0 and 5.0 will not have such a requirement, so there's no way they could decipher a multi-bit signal.
You could argue that they could just add it as a "0.5" standard, sort of making it PCIe 3.5. So it'd be 3.0 with PAM4. However, the big problem is that using a multi-bit signal incurs a higher cost, adds more latency, and requires more power than just doubling the speed, even with the needed repeaters.
So it doesn't make sense to use multi-bit signalling before you've run out of feasible ways to just increase the signalling speed.
willis936 - Wednesday, June 19, 2019 - link
I think most PAM4 receivers actually demux into a serial NRZ stream and run that like a standard SerDes.
SaturnusDK - Wednesday, June 19, 2019 - link
That's basically what I described. The poster above asked why we couldn't introduce multi-bit signalling with previous generations of PCIe, and I gave the technical and physical reasoning for how we're going to design and implement PAM4 on PCIe 6.0.
willis936 - Wednesday, June 19, 2019 - link
But I’m disagreeing. I don’t think the receivers are implementing multi-bit ADCs. They demux the LSB and MSB and use zero-crossing slicers. It’s different from typical multi-bit ADCs made with resistor ladders or decoders. It’s a different encoding and has different signal integrity characteristics.
repoman27 - Wednesday, June 19, 2019 - link
PAM4 doesn't multiplex two serial data streams; the SerDes simply encodes 2 bits per symbol. So instead of reading the signal level as either high or low for each transfer (like NRZ), a PAM4 receiver reads it as high, half-high, half-low, or low.
And there is absolutely no chance of previous generation PCIe PHYs being able to adopt PAM4.
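To make "2 bits per symbol" concrete, here is a minimal Python sketch of the idea. The Gray-coded level mapping below is typical for PAM4 transceivers, but it's only an illustration and doesn't claim to match the exact mapping PCIe 6.0 will mandate:

PAM4_LEVELS = {            # bit pair -> normalized voltage level
    (0, 0): -1.0,          # low
    (0, 1): -1/3,          # half-low
    (1, 1): +1/3,          # half-high
    (1, 0): +1.0,          # high (Gray order: neighbouring levels differ by one bit)
}

def pam4_encode(bits):
    """Turn a flat bit sequence (even length) into PAM4 levels, 2 bits per symbol."""
    return [PAM4_LEVELS[pair] for pair in zip(bits[0::2], bits[1::2])]

print(pam4_encode([1, 0, 0, 1, 1, 1, 0, 0]))   # 8 bits -> 4 symbols; NRZ would need 8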
Valantar - Wednesday, June 19, 2019 - link
That's not what I was asking about either - but rather whether adopting PAM4 as an add-on (as SaturnusDK said above, as a retroactive "0.5") to previous standards might be a better route to cost-efficient >PCIe 3.0 speeds, as you'd avoid some of the signalling/trace length issues of 4.0 and 5.0.
I appreciate the great responses though, and it seems I probably underestimated the cost/complexity of implementing PAM4 in silicon. Still, I have to wonder how this will scale as PAM4 adoption rises when PCIe 6.0 hits the market - might economies of scale bring this cost down? Complex PCBs aren't getting any cheaper any time soon, so silicon seems to be the only avenue for cost reduction to avoid the seemingly incessant price creep associated with faster I/O.
mode_13h - Wednesday, June 19, 2019 - link
I'm not an EE, but it seems to me that PAM4 still requires higher signal integrity than NRZ. It's just not as difficult as a further frequency doubling. The addition of FEC would seem to be an implicit acknowledgement of this. However, I could also imagine potential implications on practical trace length or PCB structure.
mattkiss - Wednesday, June 19, 2019 - link
What's the difference between "Actual Bandwidth" and "I/O Bandwidth" on the chart?
repoman27 - Wednesday, June 19, 2019 - link
"Actual Bandwidth" is the approximate aggregate bandwidth provided by the links described in the various PCI specification releases, aligned with their release date. "I/O Bandwidth" is a plot of a theoretical doubling of link bandwidth every 3 years starting with the 0.13 GB/s of the original PCI spec back in 1992. I guess this shows how well they're doing based on an arbitrary comparison that they selected in order to demonstrate how well they're doing.mode_13h - Wednesday, June 19, 2019 - link
Yeah, that was my read.
It's as if they're saying "hey, we're staying ahead of the curve", except they're the ones who picked "the curve" by which to judge their progress.
IMO, it would've been more informative to fit an exponential curve to their actual schedule, so we can see roughly what rate of improvement they're delivering.
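Something like the following Python sketch would do it. The release years and x16 one-direction bandwidths are approximate, and the 2021 entry assumes PCIe 6.0 lands on the schedule discussed in the article:

import numpy as np

years   = np.array([2003, 2007, 2010, 2017, 2019, 2021])    # PCIe 1.x through 6.0 (6.0 = target)
bw_GBps = np.array([4.0, 8.0, 15.75, 31.5, 63.0, 126.0])    # ~GB/s, x16, one direction

slope, _ = np.polyfit(years, np.log2(bw_GBps), 1)           # log-linear least-squares fit
print(f"doubling period: ~{1 / slope:.1f} years")            # compare to the chart's 3-year curve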
ksec - Wednesday, June 19, 2019 - link
So are there any NAND makers that have a roadmap for 16GB/s SSDs in 2023? Using eight channels that is 2GB/s per channel, which doesn't seem too far-fetched?
Valantar - Wednesday, June 19, 2019 - link
Are there any NAND or SSD makers with public (or even leaked) roadmaps? Not many people care much about which storage devices they'll be able to buy in 4-5 years.
ksec - Thursday, June 20, 2019 - link
Yes, there are roadmaps for NAND. The problem is they focus on cost and capacity, and don't often mention transfer speed. My guess is that they are not sure if transfer speed will continue to sell in the future.
Kevin G - Wednesday, June 19, 2019 - link
I'm curious if PCIe 6.0 will permit PAM4 data transmission at the reduced clocks of PCIe 4.0/3.0 etc. in a low-power mode. This would still be additional bandwidth vs. the previous standards and likely saves some complexity by not having to toggle between PAM4 and NRZ as often. Otherwise I'm curious what the turnaround time for a PAM4-to-NRZ transition would be on the bus and how much energy could be spent thrashing across that transition.
SaturnusDK - Wednesday, June 19, 2019 - link
The PCIe specification dictates a maximum 36dB signal loss for PCIe 4.0, 5.0 and 6.0 alike. That alone should tell you that a multi-bit signal, i.e. a signal with several discrete voltage states, uses more power than increasing the frequency: the loss budget has to hold for the lowest discrete voltage state, meaning the transmitter needs to run at a higher voltage, which incurs a higher power loss. The reason for even using a multi-bit signal is that with PCIe 5.0 we're already at 16GHz, with a corresponding maximum PCB trace length of about 80mm between repeaters or signal conditioners. Doubling the speed again, and thereby halving the trace length again, isn't a feasible option. That is the only reason to go multi-bit. The disadvantages are too great to implement it at lower speeds.
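The rule-of-thumb arithmetic behind that trade-off: squeezing four levels into the same voltage swing leaves adjacent levels a third as far apart as NRZ's two, which works out to roughly a 9.5 dB hit to eye height (part of why PCIe 6.0 adds FEC). A quick sketch:

import math

def eye_penalty_dB(levels):
    """Reduction in minimum level spacing vs. NRZ (same total swing), in dB."""
    return 20 * math.log10(levels - 1)   # NRZ has 1 gap between levels, PAM-n has n-1

print(f"PAM4 vs NRZ: ~{eye_penalty_dB(4):.1f} dB")   # ~9.5 dB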
mode_13h - Wednesday, June 19, 2019 - link
It would also be interesting to use 6.0 signalling at lower clock rates, for higher-noise environments that might benefit from FEC.
bharatlagali - Monday, June 24, 2019 - link
I think PCIe 5.0+ will be great for eGPUs. Currently there are only 4 PCIe 3.0 lanes available over TB3. With 5.0, even sticking with just 4 lanes would give as much bandwidth as PCIe 3.0 x16. So, apart from a little protocol overhead, the near-full potential of the GPU could be utilized, as opposed to the current 10%-40% performance hit.
The PCIe improvements don't necessarily translate to Thunderbolt.
Thunderbolt isn't just limited by PCIe; it's explicitly designed to get up to PCIe speeds at as little cost as possible.
The main issue is guaranteeing signal integrity at long distances in thin, cheap cables.
This means you can't just put current Thunderbolt hardware on a faster PCIe connection and expect it to get faster.
Over copper, cross-talk at those enormous clock rates is a significant issue - and PAM4 coding makes this much worse.
There's a lot of catching up for Thunderbolt to do to achieve PCIe 5.0 signalling over copper, and I don't think the tech will be there for years to come.
Thunderbolt 3 is already severely limited: while the maximum specified copper cable is 3 meters, PCIe 3.0-speed signalling only works over 0.5 meters.
In all likelihood, even with future technology in active cables, the USB Type-C connector itself will become a major issue, and I find it highly unlikely that it could support PCIe 5.0 signalling.
Thus, a future, faster Thunderbolt will be an entirely new technology.
I don't see the eGPU concept itself surviving into that future.
After all, what's the point? An eGPU box is barely smaller than a full-blown PC that would easily outperform any notebook.
You might as well make it a full-blown computer and just do data transfer over Thunderbolt, so all your data is available on the small gaming box you connect to your notebook.
mode_13h - Tuesday, July 2, 2019 - link
> a future, faster Thunderbolt will be an entirely new technology.
Optical. Is there any reason you couldn't cram that kind of bandwidth over something like a Toslink cable?
mode_13h - Saturday, July 6, 2019 - link
After reading the comments in the article on the DisplayPort 2.0 announcement, I would like to retract this Toslink reference. There's much discussion of optical cabling in that thread.
Maybe someday, optical cabling will return.
JayNor - Sunday, June 7, 2020 - link
Cooper Lake ended up not having PCIe 4.0.
It looks like some Tiger Lake chips and Ice Lake server chips will have PCIe 4.0 in the second half of 2020, as will some Optane SSD drives.
PCIe 5.0/CXL is on the roadmap for 2021, both in the Xe HPC GPUs and in the Sapphire Rapids Xeon chips being used in the Aurora exascale project.
So, as it is turning out, Intel isn't skipping PCIe 4.0 or PCIe 5.0. Both were demonstrated in Agilex FPGA chiplets in 2019, and a Stratix 10 DX chip also had PCIe 4.0, sampled in 2019.