25 Comments
rsandru - Monday, December 17, 2018 - link
We're almost in 2019, can we move on beyond those 16 + 4 PCI-E lanes for the CPU please?

I just want my GPU and M.2 storage connected directly to the CPU and not sharing bandwidth and latency with a million USB, SATA or audio ports and other traffic on the DMI uplink...
DanNeely - Monday, December 17, 2018 - link
I'd not hold my breath. Adding more PCIe lanes to the CPU would drive up die sizes and board costs for the >90% of systems that don't have a GPU.

The only way I could see that happening is if Intel takes the CPU-on-chipset stacking concept they showed at Manufacturing Day beyond the mobile demo to the desktop. Even then, I'd expect what they'd do is 16 PCIe + ~8 configurable HSIO lanes so that entry level desktops could have 3-5x USB3, a 4/2 lane PCIe SSD and onboard wifi, either without needing a separate chip at all, or with only a tiny, bare-bones chip to handle all the ultra-legacy and low bandwidth connections needed to control assorted chips on the board behind the scenes.

With that being a new manufacturing process though, I wouldn't expect to see it on the high volume mainstream desktop platform in the next year or two. Far more likely it launches as a premium option for top end laptop makers first, then trickles down to the rest of the market 2 to 4 years later.
DigitalFreak - Monday, December 17, 2018 - link
I agree, but the Intel/AMD response would be that you should look at HEDT / Threadripper if you need more CPU PCI-E lanes.

Ryzen CPUs actually have 32 PCI-E lanes on the CPU, but socket AM4 is only designed for 16 GPU + 4 NVMe + 4 to the SB. The other 8 aren't used. No idea why they didn't design AM4 to use all of them, unless it was for backwards compatibility with the pre-Ryzen CPUs.
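The lane arithmetic in this thread can be double-checked with a quick sketch (the breakdown simply restates the figures discussed above; the labels are illustrative, not an official AMD block diagram):

```python
# Sketch of the AM4 lane budget as described in this thread
# (illustrative labels, not an official AMD allocation).

TOTAL_LANES = 32  # lanes reportedly present on the Ryzen die

allocation = {
    "GPU slot (x16)": 16,
    "NVMe M.2 (x4)": 4,
    "chipset uplink (x4)": 4,
    "unused by AM4 (x8)": 8,
}

# Sanity check: the breakdown accounts for every lane on the die.
assert sum(allocation.values()) == TOTAL_LANES

# Lanes a user can actually populate with devices on a typical board,
# once the chipset takes its 4 for the uplink:
usable = allocation["GPU slot (x16)"] + allocation["NVMe M.2 (x4)"]
print(f"Directly usable lanes: {usable}")  # 20
```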
DanNeely - Monday, December 17, 2018 - link
The problem is that both companies' big socket platforms are a lot more expensive, and 90% of what you pay for is irrelevant to the average enthusiast, while both companies' mainstream sockets fall a little bit short: Intel's by forcing SSDs into the DMI bottleneck, AMD's in that their current chipset is a more or less obsolete piece of junk (e.g. only supporting PCIe 2.0). A combination of AMD's 20 non-chipset lanes and a chipset approaching what Intel's are capable of would cover most of the gap between the mainstream platforms and enthusiast goals without going the budget-busting route of the big sockets.

Dunno that AMD's ever spoken about the unused 8 lanes. Could be cost reasons (would've made boards more expensive for legacy platforms); or even just to limit forward compatibility/confusion issues like the garbage fire Intel created when they had an LGA20xx generation that could have 16, 28, or 44 PCIe lanes, and board makers either had to add a lot of extra complexity, leave large chunks non-operational when using a low lane count chip, or ignore the potential of a number of lanes on the higher end chips.
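To put a rough number on that DMI bottleneck (back-of-the-envelope figures: DMI 3.0 is electrically equivalent to a PCIe 3.0 x4 link, and the ~3.5 GB/s per-drive read speed is an assumed example, not a measurement):

```python
# Approximate usable bandwidth of DMI 3.0 (PCIe 3.0 x4 equivalent).
GT_PER_LANE = 8e9     # PCIe 3.0: 8 GT/s per lane
ENCODING = 128 / 130  # 128b/130b line encoding overhead
LANES = 4

dmi_bytes_per_s = GT_PER_LANE * ENCODING * LANES / 8  # bits -> bytes
print(f"DMI 3.0: ~{dmi_bytes_per_s / 1e9:.2f} GB/s")  # ~3.94 GB/s

# Two fast NVMe drives reading at an assumed ~3.5 GB/s each would want
# 7 GB/s through that single shared uplink.
demand = 2 * 3.5e9
print(f"Oversubscription: {demand / dmi_bytes_per_s:.1f}x")
```

In other words, a single modern NVMe drive can nearly saturate the uplink by itself, before USB, SATA, and network traffic even enter the picture.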
namechamps - Monday, December 17, 2018 - link
It is backwards compatibility. At this point one would think manufacturers would break that backwards compatibility (i.e. 2nd and 3rd M.2 slots not available for non-Ryzen processors).
philehidiot - Monday, December 17, 2018 - link
So, please clarify this for someone who is not a computer scientist and is mildly drunk... if I buy a new Ryzen CPU, thinking I'm going to get 24 PCI-E lanes, will I in truth only be able to access 16, same as Intel? Or is it that I'd be able to access 24 whilst the CPU is designed for 32?
DanNeely - Monday, December 17, 2018 - link
You can effectively use 20 lanes. The last 4 are used to connect the chipset on all but the lowest end boards, where the CPU instead operates in SoC mode (and which will probably ignore those last 4 lanes entirely to save costs).
tvanpeer - Monday, December 17, 2018 - link
Sure you can: get an AMD CPU.
shaolin95 - Monday, December 24, 2018 - link
Sure, and then get a performance hit. No thanks.
The_Assimilator - Monday, December 17, 2018 - link
Congratulations, you're among a tiny minority of users. If you really want or need that feature, pony up the cash to step up to the HEDT segment.
rsandru - Monday, December 17, 2018 - link
Well, that's the problem: I'm already on an HEDT platform, but I'm missing out on the fastest CPUs for gaming in exchange for many PCI-E lanes and quad-channel memory that I don't need.

I suppose I just can't have it all on the consumer platform for the time being...
CheapSushi - Monday, December 17, 2018 - link
Are you talking just about the cheapest platforms? Because it does exist already.
oRAirwolf - Monday, December 17, 2018 - link
Stopped reading as soon as I saw the word "Killer."
The_Assimilator - Monday, December 17, 2018 - link
Ditto; that trash does not belong on a board that costs nearly three hundred dollars.
CheapSushi - Monday, December 17, 2018 - link
Why? Why are people still throwing out this old garbage of a viewpoint? They're just rebadged Qualcomm Atheros chips. The "Killer" part is just a software add-on that makes it function like a router if you want. Do you want router-like function? Already have a dedicated router? Cool. Don't install the software. Are you saying Atheros chips are crap? There's nothing wrong with them. Everyone just says "Intel" because they're bog standard and had better support for Linux, unlike Realtek, but people don't know much about them detail-wise. Hell, other companies are doing better add-on chips now, especially past 1Gbps, like Aquantia. But according to the mindshare... who cares, right?
PeachNCream - Tuesday, December 18, 2018 - link
For a desktop PC with a full ATX motherboard, the presence of a Killer network adapter is a problem that can be overcome by simply adding a cheap Realtek or whatever else in an expansion slot. It's almost a non-issue since you can just disable it and forget it exists. It becomes a more significant issue on a laptop, where you may have motherboard-integrated hardware and no alternative, or a warranty that would discourage a surgical procedure to remove and replace the hardware in question.

As for Rivet Networks, the issue there is that the company is basically rebranding 3rd party hardware, adding worthless traffic prioritization software that, while finally not entirely unstable, still serves no useful purpose, and presenting it as a premium solution, to the point that even journalists like those at Anandtech were, for a time, mindlessly touting them as a desirable feature despite having zero supporting evidence and not a single review for years. Even earlier than that, Killer NIC drivers were awful and thus earned a well-deserved reputation as a pig wearing a lot of lipstick. People picking up Killer NICs in the post "my network adapter runs a Linux OS and has a freakish metal K heatsink on the SoC" era were under the impression they were getting superior hardware, but actually ended up with a rebranded whatever that was festooned with iffy drivers and saddled with a dumpster fire attempt at filtering and prioritizing packets.

In the end, people are bitter. That historic distaste keeps getting reinforced by the number of times the brand has been spun off, bought, and sold. They got sick of being sold something that didn't work and are sick of seeing Killer hardware slung at them as premium or magically better without evidence to verify the software actually does anything more than eat CPU cycles for no reason. That breeds discontent, and no small number of outspoken, fed-up people who hope tech journalists will eventually get feedback up to OEMs like MSI so the company stops sticking buyers with Rivet's ethernet and wireless products, because the companies don't understand the mindset of their potential customers. It's taken us years of railing in comments about Killer NICs to even get to the point where Anandtech doesn't tell us "It's got a primo-uber-tuber-you-have-to-love-it-Killer-NIC lol disclaimer no benchmarkz whatsoever hahaha!" when a product passes through here with Rivet's rebadged gear soldered on it. It's a good change when the Killer NIC just gets noted with a neutral comment on the opening page, but I think a lot of us are waiting with our sour grapes for the day when Rivet Networks goes out of business so we can happily buy a computer with literally any other network adapter in it.
Aikouka - Wednesday, December 19, 2018 - link
Honestly, I've been wary of them ever since I ran into a driver issue that caused some nasty memory leaks. That was a fun one to diagnose... "Hm, why is it that whenever I download anything, my memory usage shoots through the roof?" All I can say is that it's a good thing that I build my main machines with 32GB+ of memory!
sharathc - Monday, December 17, 2018 - link
I hate seeing those Spectre/Meltdown patches in the OS row under Test Setup. Thanks, Intel, for your performance features turned security risks.
hapkiman - Tuesday, December 18, 2018 - link
I recently purchased this mobo, and I have to say I am very pleased with it. It looks great with the grey/gunmetal color and RGB panel (really does look pretty cool). This is a very solid, high quality board. You can just feel the "heft" and quality of it when making your build. I have typically been an ASUS fan over the years, but I am very happy with this MSI board. I'm currently getting a 5GHz OC on all 8 cores of my i9 9900K at 1.3V with no other settings changed. But I'm just starting to play with it. Thanks for the review.
Byte - Thursday, December 20, 2018 - link
Wondering if the ESS Sabre really does anything to help the sound quality. Might get a MEG once they get cheaper just to try it.
Orange_Swan - Sunday, January 13, 2019 - link
Why does every motherboard HAVE to have that stupid PS/2 port, especially on mITX motherboards, where you could have an extra pair of USB sockets?
rickybobby3 - Thursday, November 7, 2019 - link
I have the Z390 Ace and I purchased the H100i Platinum cooler. It has a 3-pin plug for the header, but this board has 4-pin headers. I have seen several people say this cooler is compatible with this board; how are they plugging the 3-pin into the 4-pin fan header?
empleat - Wednesday, January 6, 2021 - link
Thank you for posting DPC latency tests! Buying a motherboard is a shot in the dark. There are so many features, some differ per hardware, and they can cause huge input lag! It is crazy how much mouse movement differs from motherboard to motherboard! Also some $500+ ASUS mobos suffer from high DPC latency! So you can buy even a highly rated mobo which costs $500 and still get a piece of garbage. Also there are 8kHz polling mice now, which most motherboards can't even handle. And 500Hz monitors in 1-2 years. Also the Windows timer resolution only goes down to a 0.5ms interval.

I wish you would test even the most infamous BIOS features which cause input lag. But that's perhaps too much to ask. You would have to test literally every BIOS feature which could, but doesn't have to, lag... Also some can differ per hardware configuration, so...
Thank you for bringing attention to this! And for testing! You are MVP!
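If you want to eyeball the timer granularity mentioned above yourself, here's a minimal sketch (assumes Python is available; results vary with the OS and with whether another app has already raised the system timer resolution):

```python
import time

def measured_sleep_ms(requested_ms: float, samples: int = 20) -> float:
    """Average wall-clock time actually spent in time.sleep(requested_ms)."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        time.sleep(requested_ms / 1000.0)
        total += time.perf_counter() - start
    return total * 1000.0 / samples

# Ask for 1 ms. On Windows with the default ~15.6 ms tick the overshoot is
# large; with a raised (0.5 ms) timer resolution it stays much closer to 1 ms.
actual = measured_sleep_ms(1.0)
print(f"Requested 1.000 ms, got ~{actual:.3f} ms on average")
```

Sleep can only overshoot, never undershoot, so the gap between requested and measured time is a rough lower bound on the scheduler tick you're actually getting.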