32 Comments
willis936 - Tuesday, May 21, 2019 - link
"PCIe 4.0 specifications state that traces should be under a minimum length"

Shouldn't this say "under a maximum length"?
Alexvrb - Tuesday, May 21, 2019 - link
They were talking about the new PCIe Airwave standard. They refer to the channels as "air traces". Anyway, you can't have the radios too close or it jacks the signal up!
peevee - Tuesday, May 21, 2019 - link
There is no AM4 specification published anywhere. There are far more pins (+400 or so) compared to the previous socket.
I wonder why. Could it include support for 256 lines of data bus to memory, instead of the 128 currently used by all CPUs in this class?
Smell This - Tuesday, May 21, 2019 - link
Funny you brought this up ... I'm wondering if the expansion/doubling of DDR4/DDR5 bank groups effectively means the same, and 'next-gen' memory controllers are in the queue ...
brakdoo - Tuesday, May 21, 2019 - link
Memory banks are internal. They have nothing to do with IO lanes. BTW AM3 didn't support Video Output (for Raven Ridge)./ USB 3 / Audio(I2S interface) directly from CPU so don't expect too much.
AM4 ends in 2020 or '21 (probably '21) because of DDR5, but CPUs might be backwards compatible, like AM2 to AM3 across the change from DDR2 to DDR3.
shing3232 - Tuesday, May 21, 2019 - link
Just like AM2+ and AM3 :p
peevee - Tuesday, May 21, 2019 - link
"Video Output (for Raven Ridge)./ USB 3 / Audio(I2S interface) directly from CPU"These are tiny.
My point is that we now have 8 cores in a mainstream CPU instead of 1, but the memory bus is still 128 bits, like it was 15 years ago or so. And like it is on phones now. And the only tasks where CPU performance matters today are on big data sets which do not fit in caches. Having 4 independent 64-bit channels, or maybe even 8 32-bit ones, would be nice.
peevee - Tuesday, May 21, 2019 - link
But not like in the Threadripper configuration, with memory controllers on different chips, added latencies, etc., using a huge 4000-pin socket supporting 8 channels but using only half of them.
Alexvrb - Tuesday, May 21, 2019 - link
This wouldn't be an issue in a chiplet configuration. The I/O is in the, wait for it, I/O die. The real issues have already been addressed by Valantar. Mainly that outside of edge cases (better suited to a HEDT platform), the only common case that needs it is a fairly large iGPU. But even then, it wouldn't be enough. The first XB1 doubled the width of its DDR3 bus, and if it weren't for the SRAM (in the hands of a good dev, anyway) it would have had totally garbage usable bandwidth.

If you really need massive memory bandwidth for an iGPU, instead of doubling the DDR bus width, add a secondary GDDR or HBM interface for the iGPU. Pin for pin, GDDR and especially HBM beat the absolute snot out of DDR. The latency is higher, but again, for iGPUs it wouldn't matter. Put a socket/slot on the board for HBM or GDDR, like an oldschool optional cache. You wouldn't even need to include it on every board.
peevee - Wednesday, May 22, 2019 - link
Why do you need 16 cores with 32 threads when you have only 128 bits of memory to feed them? WHAT are you going to process over and over and over which all fits into cache? Video does not. RAW pics from 42+ Mpix cameras do not. Large codebases whose builds can be parallelized to 32 threads do not, neither sources nor binaries.

Peak performance numbers mean nothing if you as a user do not benefit. If it takes 10ms after your click on 4 cores, it does not matter if it takes 3ms on 16 cores. If it takes 10 minutes on 4 cores, it does not fit into cache, and it is not going to take 3 minutes on 16 cores limited by the memory bus.
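For a rough sense of the arithmetic behind this complaint, here is a back-of-the-envelope sketch. The dual-channel DDR4-3200 configuration and 16-core count are illustrative assumptions; real sustained bandwidth is lower and workload-dependent:

```python
# Peak theoretical DRAM bandwidth for a 128-bit (dual-channel) interface,
# and how little of it each core gets when 16 cores contend for it.
# DDR4-3200 and the 16-core count are illustrative assumptions.

def dram_bandwidth_gbs(mt_per_s: float, bus_bits: int) -> float:
    """Peak bandwidth in GB/s: mega-transfers/s times bytes per transfer."""
    return mt_per_s * 1e6 * (bus_bits // 8) / 1e9

total = dram_bandwidth_gbs(3200, 128)   # dual-channel DDR4-3200: 51.2 GB/s peak
per_core = total / 16                   # ~3.2 GB/s per core under full contention

print(f"total: {total:.1f} GB/s, per core: {per_core:.1f} GB/s")
```

Once a working set spills out of cache, those ~3.2 GB/s per core are the ceiling, regardless of how fast each core is.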
Qasar - Wednesday, May 22, 2019 - link
peevee, i think you are over thinking this.. there have been tests done, i think even AT has done a few. most of the consumer software we use.. is not memory limited; past a certain ram speed and latency, there is very little difference. so how wide the bus is, 128 bits or 64 bits.. has little effect, except when there is an IGP involved, as seen here: https://www.anandtech.com/show/7364/memory-scaling... im sure you would be able to find more if you looked....
Valantar - Tuesday, May 21, 2019 - link
Adding memory channels dramatically increases motherboard costs (due to more layers being needed to route the traces), which isn't tenable for a mainstream platform. Just compare the cheapest TR4 or 2011 motherboards to the cheapest AM4 or 1151 ones. Also, instead of a wider bus we have dramatically increased memory transfer rates, to which DDR5 will again give a massive boost (2x for the mainstream). It hasn't increased proportionally to CPU performance, of course not, but there are extremely few applications that are memory bound on modern PCs. iGPU gaming is pretty much the only common one.
peevee - Wednesday, May 22, 2019 - link
I don't think the current physical location of DIMM slots is good. I'd prefer to have 4 slots on the 4 sides of the CPU, each as close as possible, to reduce latency and power. The best would be slots literally on the sides of the CPU socket, with contacts not even touching the board. Then the board itself could be 1-layer, simple and cheap.
HStewart - Tuesday, May 21, 2019 - link
I'd be a little concerned about first-generation PCIe 4.0 support. Is it 100% backwards compatible with PCIe 3.0? It might be better to wait until more products are out.
phoenix_rizzen - Tuesday, May 21, 2019 - link
Aww, are you upset that AMD might ship PCIe 4.0 before your pet Intel does?
imaheadcase - Tuesday, May 21, 2019 - link
Are you high right now or something? No one was even mentioning Intel.
GreenReaper - Tuesday, May 21, 2019 - link
No, he's not high, he's just assuming that HStewart is being pro-Intel and casually dissuading purchase of a product related to AMD. Of course, if it turns out that Intel is actually first to market, PCIe 4.0 will suddenly be 100% reliable and there will be no reason to buy obsolete hardware. ;-p
Santoval - Wednesday, May 22, 2019 - link
Intel first to market with what, PCIe 4.0 support? That looks highly unlikely. On the other hand, Intel has not yet announced, to my knowledge, whether Ice Lake-U and -Y CPUs will have PCIe 4.0 controllers (their mid-power -H and high-power -S Ice Lake CPUs are apparently being pushed back deep into 2020 or even 2021...). Could their Sunny Cove cores support PCIe 4.0 from top to bottom? Maybe. Or maybe not, this *is* Intel after all. Still, debuting PCIe 4.0 in laptops and ultraportables seems way too bold and implausible to me.
DanNeely - Wednesday, May 22, 2019 - link
A just-leaked Xeon roadmap has Ice Lake with PCIe 4 in 2020Q2, and Sapphire Rapids with DDR5 and PCIe 5 in 2021Q1.

No idea if the corresponding consumer platforms will maintain I/O parity with the server ones. I'd rank PCIe 4 as most likely, just to mitigate the impact of AMD being first; DDR5 in the middle, with a year's delay plausible if the price premium is too high; and PCIe 5 as least likely. Very short maximum trace lengths without repeaters, or mobo tech that would massively increase board prices, lean towards confirming prior speculation that PCIe 5 will be a server-only tech.
https://wccftech.com/intel-xeon-roadmap-leak-10nm-...
https://www.eetasia.com/news/article/18061502-pcie...
HStewart - Wednesday, May 22, 2019 - link
This would be the same if Intel came out with PCIe 4.0: will it be backwards compatible, and what about PCIe 5.0 coming out a year later?

I have an old Xeon 5160; it was a pain to get a graphics card for, because makers stopped making cards for what I believe was PCIe 2.0 at the time.
HStewart - Wednesday, May 22, 2019 - link
Why do people assume that? Because they are biased themselves. It would not matter who comes out with it.

In some ways this was a test for you biased people who think that if people like Intel, they are biased against anything else. This attitude honestly is why I will never pick a non-Intel product - assuming it's Windows based - this includes the slow Windows on ARM devices. But I have a Galaxy Tab S3 and Galaxy Note 8, which I love.
I even tried an AMD-derived GPU in my Dell XPS 15 2-in-1.
Biggest thing I hate about AMD fans is they keep trying to push makers to come out with AMD-based notebooks - but do any of them actually buy them, or have AMD desktops? Maybe some people have them - but my guess is NO.
I normally would stay out of desktop CPU discussions, because I believe that as CPUs become more and more efficient and use less power, CPU makers will drop the distinction and desktops will have the same CPUs as laptops.
Lord of the Bored - Wednesday, May 22, 2019 - link
All I have to say is ROFL
Korguz - Wednesday, May 22, 2019 - link
you are one to talk Hstewart... calling people on here biased.. when you, yourself.. are blatantly pro-intel, 90% of your comments.. show this FACT. im not biased against intel, OR amd.. but right now.. AMD has more interesting products coming out than your beloved intel has had for the last 5 years... not to mention the ongoing promises about 10nm products being released.. "This attitude honestly is why I will never pick a non Intel product" no.. you will ONLY buy intel.. cause to you.. intel is the god of the cpu.. and there is nothing else out there... nice try.. but blaming those that like amd over your beloved intel, is not a reason...

"Biggest thing I hate about AMD fans is they keep trying to push makers to come out with AMD based notebook - but do any of them actually buy them have desktops. Maybe some people have them also - but my guess is NO." for me at least.. this whole quote is WRONG. i have 2 AMD based notebooks, and 6 comps currently together in my house.. 4 are intel based, 2 are AMD... so nice try with that assumption.
sa666666 - Wednesday, May 22, 2019 - link
> In some ways this was a test for your bias peoplewho think if people like Intel, they are bias against. anything else. This attitude honestly is why I will never pick a non Intel product

So in other words you're clearly (and admittedly) trolling? That is, posting something in such a way as to instigate an argument (aka trolling)?
Qasar - Wednesday, May 22, 2019 - link
um sa666666, from what i have seen on here.. thats exactly what HStewart does.. and that is his ONLY intent.. to troll.. although his posts, and replies from others to his posts, can be quite entertaining... HStewart is getting a little long in the tooth on here....
Reflex - Tuesday, May 21, 2019 - link
This has not been an issue with the PCIe spec: PCIe 2 & 3 devices were always fully backwards and forwards compatible, and there is little reason to expect different for PCIe 4 & 5.
cfineman - Tuesday, May 21, 2019 - link
I too would expect that level of compatibility. As far as what PCIe 4 means on these boards... I had assumed it would just allow for more lanes to be used, thus improving bandwidth for the entire system but not throughput for a particular component. Am I off base here? A GPU could get its firmware upgraded, but would it be able to exploit higher throughput?

Am I thinking about this incorrectly? (Obviously this is not my area of expertise.)
GreenReaper - Tuesday, May 21, 2019 - link
Increased throughput comes from more being transferred with an *existing* number of lanes.

Think of each lane as a road (or better, a rail line). It's possible to configure a lane to accommodate specially-designed vehicles which move at twice the speed (or are double-decker, or a mix of the two). Those designed for looser tolerances remain in a separate lane at the previous speed. Of course this only works if the road/rail is sturdy enough - if it was marginal to begin with, it may crumble under the increased traffic.
The main reason some motherboards may support PCIe 4.0 is that their involvement is relatively simple - providing a flat wire between two components. It's the components at the ends that are complicated - in this case, the CPU and the GPU.
A GPU can only exploit higher speeds if its software *and* hardware are capable - so it's unlikely that we'll see such upgrades via software alone. You'll need a new GPU designed for PCIe 4.0, although it'll work with any version - just maybe not quite as fast.
Likewise, where a motherboard's involvement is greater than a wire (i.e. multi-function bridge chips), there will be no improvement; it'll run at the same speed, and any peripherals behind the bridge will be limited to existing speeds, too.
(In a similar way, Category 5e Ethernet cables can be used to push 2.5 Gbps or even 5 Gbps, rather than the original 1 Gbps; but that doesn't mean your old Ethernet router can now run at 5 Gbps. There are new error-correction encodings and protocols, clocks need to be faster, etc.

On the plus side, if you use PCIe 4.0, you halve the number of lanes needed to support an Ethernet transceiver, which may help increase the number of 2.5 Gbps or 5 Gbps ports...)
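The per-lane numbers behind this analogy are easy to sketch, using the headline transfer rates and line-encoding overheads from the PCIe generations (protocol framing overhead ignored, so these are optimistic):

```python
# Usable bits per second on one PCIe lane, per generation.
# Gen 1/2 use 8b/10b encoding (80% efficient); gen 3+ use 128b/130b (~98.5%).
# Headline GT/s figures only; link-layer protocol overhead is ignored.

RATES_GT = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0}  # GT/s per lane

def lane_gbps(gen: int) -> float:
    """Gbit/s of payload capacity per lane after encoding overhead."""
    eff = 8 / 10 if gen <= 2 else 128 / 130
    return RATES_GT[gen] * eff

for gen in RATES_GT:
    print(f"gen {gen}: {lane_gbps(gen):5.2f} Gb/s per lane")
```

From 3.0 onward each generation exactly doubles the previous one, which is the arithmetic behind the point above: a gen 4 link can feed the same multi-port 2.5/5 Gbps NIC with half the lanes a gen 3 link would need.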
Santoval - Wednesday, May 22, 2019 - link
"Am I off base here?... Am I thinking about this incorrectly?"Short and long answer : yes. PCIe 4.0 will allow the exact opposite of that, just as happened with the transition from PCIe 2.0 to 3.0. Instead of requiring more PCIe lanes for more I/O bandwidth it can allow double the bandwidth -either to individual devices, to the entire system or both- with the same number of lanes.
Alternatively it can provide the same bandwidth with half the number of lanes, four times the bandwidth with double the lanes -an extreme and implausible example- or anything between the above "pairs" (such as three times the bandwidth with +50% more lanes).
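Those "pairs" are straightforward to verify with the same headline rates (128b/130b encoding assumed for gen 3+, protocol overhead ignored):

```python
# Total link bandwidth = per-lane rate * lane count, showing that a
# PCIe 4.0 x8 link matches PCIe 3.0 x16, and x16 gen 4 doubles x16 gen 3.

def link_gbs(gen: int, lanes: int) -> float:
    """Approximate usable GB/s for a gen 3 or gen 4 link (128b/130b)."""
    per_lane_gt = {3: 8.0, 4: 16.0}[gen]
    return per_lane_gt * (128 / 130) * lanes / 8  # bits -> bytes

print(link_gbs(3, 16))  # ~15.75 GB/s
print(link_gbs(4, 8))   # ~15.75 GB/s: same bandwidth, half the lanes
print(link_gbs(4, 16))  # ~31.51 GB/s: double bandwidth, same lanes
```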
Santoval - Wednesday, May 22, 2019 - link
It will 100% be backwards compatible with PCIe 3.0, because devices/slots adhering to the PCIe spec are extensively tested for backwards compatibility of at least one PCIe generation. The backwards compatibility with PCIe 2.0 will of course be less clear, and probably not guaranteed.
Skeptical123 - Wednesday, May 22, 2019 - link
It's nice motherboard manufacturers are adding all the possible new features to old models, but I doubt this will affect anything other than marketing. What I mean is, if the top PCIe slots, which are often x16, are the only slots with short enough traces to support PCIe gen 4, what card could possibly benefit from going from PCIe x16 gen 3 to gen 4? Nothing I know of needs that amount of bandwidth on the consumer side of things. I also don't see a need for more bandwidth on x16 slots for years to come. Sure, it's nice to have, but still food for thought. Where I see this mattering is for things like x2 and x4 slots for NVMe drives.
Dug - Tuesday, May 28, 2019 - link
Multiple NVMe drives on the same slot.
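For a sense of scale on those NVMe slots, a quick hedged estimate (headline rates, 128b/130b encoding, protocol overhead ignored):

```python
# Approximate usable bandwidth of an x4 NVMe link at gen 3 vs gen 4.

def x4_gbs(rate_gt: float) -> float:
    """Usable GB/s for a 4-lane link with 128b/130b encoding."""
    return rate_gt * (128 / 130) * 4 / 8  # bits -> bytes

gen3 = x4_gbs(8.0)    # ~3.94 GB/s: roughly what top gen 3 SSDs saturate
gen4 = x4_gbs(16.0)   # ~7.88 GB/s: headroom for faster drives, or two
                      # gen 3-class drives sharing one x4 uplink
print(f"x4 gen3: {gen3:.2f} GB/s, x4 gen4: {gen4:.2f} GB/s")
```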