27 Comments
DanNeely - Monday, June 8, 2020 - link
Boards like this show one of the reasons why we're never going to get HEDT-level connectivity on mainstream platforms: the jumbo sockets needed to cram in that many IO pins are too big to fit on anything smaller than full ATX without compromises ranging from weird to grotesque.

romrunning - Monday, June 8, 2020 - link
If the PCIe slot nearest the CPU were dropped, could they have added two more DIMM slots for the full 8 channels supported by EPYC?

Kevin G - Monday, June 8, 2020 - link
I'm not sure that PCIe slot is the first limiting factor (it would still likely need to be removed or cut down to, say, x4). Look at the mounting holes for the motherboard: there *might* barely be enough room to fit two more DIMM slots in between them. It looks like some VRM components would have to move too.

bernstein - Monday, June 8, 2020 - link
just a quick look at asrockrack's rome8-2t shows that 8 ddr4-dimm-sockets are possible on epyc with a full pcie x16 slot 7 (the first if counting from the cpu). https://www.asrockrack.com/general/productdetail.a...

ipkh - Monday, June 8, 2020 - link
HEDT by definition will never be mainstream. Mainstream users don't need such high IO, and HEDT motherboards are a mixed bag with regard to PCIe slot layout.

Not to mention that AMD is just as stingy with PCIe bandwidth from the CPU as Intel. PCIe 4.0 doesn't mean much if you're still restricted to 16 lanes and can't trade them for 32 PCIe 3.0 lanes.
Lord of the Bored - Monday, June 8, 2020 - link
Why would you only have 16 lanes, or even 32? This is an Epyc board: 128 lanes at the socket (which I believe is twice what Intel's best offers).

Even if you halve that because of PCIe 4 (skeptical), that's still 64 lanes.
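For reference, a back-of-the-envelope sketch of the lane-count vs. per-lane-bandwidth trade-off being debated in this thread. The per-lane figures below are the commonly published PCIe 3.0/4.0 rates after 128b/130b encoding, not numbers taken from any comment here.

```python
# Approximate usable per-lane throughput (one direction, after 128b/130b encoding).
GB_PER_S_PER_LANE = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969}

def aggregate_gbps(gen: str, lanes: int) -> float:
    """Aggregate one-direction bandwidth in GB/s for a generation and lane count."""
    return GB_PER_S_PER_LANE[gen] * lanes

# EPYC Rome's 128 PCIe 4.0 lanes vs. the hypothetical "halved" configurations:
print(f"128 x PCIe 4.0: ~{aggregate_gbps('PCIe 4.0', 128):.0f} GB/s")  # ~252 GB/s
print(f" 64 x PCIe 4.0: ~{aggregate_gbps('PCIe 4.0', 64):.0f} GB/s")   # ~126 GB/s
print(f"128 x PCIe 3.0: ~{aggregate_gbps('PCIe 3.0', 128):.0f} GB/s")  # ~126 GB/s
```

In aggregate bandwidth, halving the lane count while doubling the per-lane rate is a wash; the remaining difference is how many devices can be attached without resorting to PCIe switches.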
Santoval - Wednesday, June 10, 2020 - link
There was no halving of PCIe lanes due to the switch to PCIe 4.0. Rome has 128 PCIe 4.0 lanes, and the Zen 2 based Threadripper has 64 PCIe 4.0 lanes: the same number as the previous generation, but with double the bandwidth per lane.

Santoval - Wednesday, June 10, 2020 - link
The article is about an EPYC motherboard, which is a server motherboard with 128 PCIe 4.0 lanes. Threadripper also provides 64 PCIe 4.0 (or 3.0) lanes, so it is not "stingy" either. Ryzen could well be considered stingy, with 24 PCIe 4.0/3.0 lanes in total from the CPU, of which only 20 are usable: 4 intended for an M.2 SSD and 16 for a graphics card. What does Ryzen have to do with EPYC, though?

Samus - Monday, June 8, 2020 - link
It isn't just the sockets. A lot of the legacy connections on modern PCs have got to go. I'm looking at you, 24+8-pin ATX power connectors!

DanNeely - Monday, June 8, 2020 - link
ATX12VO (12 volt only) is taking aim at the 24-pin. It removes all the legacy power from the 24-pin connector (the mobo has to make 3.3V for PCIe, 5V for USB, and both for SATA), replacing it with a 10-pin connector that has 50% more 12V capacity than the ancient 24-pin, and removing all legacy voltages from the PSU. It's both less ambitious than it could have been (higher PSU voltages would have meant fewer total wires) and somewhat sloppy in the short term (new power headers on the mobo mean it doesn't free up much, if any, PCB area), and it's not forward/backward compatible with conventional ATX (vs. a 12-pin connector that would've had a 4th ground, one 3.3V and one 5V pin, and kept the SATA power connectors on the PSU). This is because Intel just created a standardized version of what Dell, HP, Lenovo, etc. were already doing for their desktop systems, to simplify supply chains and allow smaller OEMs to join in on the savings.

The issues related to SATA power being a clunky hack should go away over the next 3-8 (guess) years as SATA transitions from being a feature checkbox on almost all boards to being dropped from chipsets, with anyone wanting to build a storage server needing to install a SATA card (presumably a next-generation model designed for the 12VO era that also provides SATA power hookups).
The whole thing is a bit of a mess in the short term, but the blame should go to the people who were asleep at the switch and didn't create a reduced 12/14-pin main ATX connector 10-15 years ago, after both Intel and AMD moved CPU power to the 12V rail and the amount of 3.3/5V available became far in excess of system needs.
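A rough sketch of the "50% more 12V capacity" point, going by the pin counts usually cited for the two connectors (two +12V pins on the 24-pin, three on the ATX12VO 10-pin). The amps-per-pin figure is an assumption, since it depends on the terminals and wire gauge a PSU vendor actually uses.

```python
# 12V capacity comparison: the classic 24-pin ATX connector carries two +12V pins,
# the ATX12VO 10-pin carries three. Amps-per-pin is an assumed Mini-Fit Jr-style
# terminal rating, not a figure from the comment above.
AMPS_PER_PIN = 9.0
VOLTS = 12.0

connectors = {
    "ATX 24-pin (2x +12V pins)": 2,
    "ATX12VO 10-pin (3x +12V pins)": 3,
}

for name, pins in connectors.items():
    print(f"{name}: ~{pins * AMPS_PER_PIN * VOLTS:.0f} W of 12V")  # ~216 W vs ~324 W (+50%)
```

The 50% comes purely from the pin count (3 vs. 2); the absolute wattage scales with whatever terminal rating is assumed.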
Lord of the Bored - Monday, June 8, 2020 - link
Tell ya how I would've done ATX12VO: PCIe power connectors exclusively. Six should cover most use cases, I reckon.

Two eight-pin connectors to the motherboard, a third for dedicated CPU power, a six-pin to a "power hub" next to the drive mounts that carries the 3.3V and 5V regulators and a handful of SATA power connectors, and two of whatever is needed by the video card.
Admittedly the power hub is duplicated effort, since the mainboard already needs to generate the lower voltages for fans, PCIe slots, and external ports, but I think it is worth it to centralize all drive power in a single off-board location, both for cable management and so that it can be omitted if unwanted.
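As a sanity check on "six should cover most use cases", here is a quick budget using the conventional PCIe auxiliary connector ratings (75 W for a 6-pin, 150 W for an 8-pin). The layout is the one proposed above; treating the video card's "two of whatever is needed" as two 8-pins is an assumption.

```python
# Conventional PCIe auxiliary connector ratings: 6-pin = 75 W, 8-pin = 150 W.
RATING_W = {"6-pin": 75, "8-pin": 150}

proposed_layout = [
    ("motherboard #1", "8-pin"),
    ("motherboard #2", "8-pin"),
    ("CPU", "8-pin"),              # dedicated CPU power
    ("drive power hub", "6-pin"),  # off-board 3.3V/5V regulators + SATA power plugs
    ("video card #1", "8-pin"),    # assumption: "two of whatever is needed" = 2x 8-pin
    ("video card #2", "8-pin"),
]

total_w = sum(RATING_W[kind] for _, kind in proposed_layout)
print(f"{len(proposed_layout)} connectors -> ~{total_w} W deliverable")  # 6 connectors -> ~825 W
```

Roughly 825 W over six cables (plus the 75 W a x16 slot can supply on its own) does cover most desktop builds, which supports the "six should cover most use cases" estimate.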
bernstein - Tuesday, June 9, 2020 - link
almost all storage servers already employ backplanes that take 12V and convert it to 5V & 3.3V as needed. this makes redundant PSUs as well as power distribution within a server much simpler.

also a picoPSU-style legacy adapter could easily be shipped with any ATX12VO PSU...
bernstein - Monday, June 8, 2020 - link
huh? full ATX IS the mainstream platform...

also at 6 out of 8 memory channels plus ~100 out of 128 pcie lanes this matx board already exposes roughly three quarters of epyc's full IO.
plus i am certain asrockrack could make a similar matx board fully exploiting epyc's IO with just two additional dimm slots and three to four more slimline x8 plugs.
the tricky part would be the power circuitry, as the topmost part of the board would be quite packed, but this seems achievable by dropping the two 1GbE ports (Intel i210-AT) in favor of a more regular 2x 10GbE setup. also, trading the 6 SATA ports for 3 slimline x8 ports should be relatively easy.
alas that's not what asrockrack's driving customer(s) had in mind, and so we get this board.
bernstein - Monday, June 8, 2020 - link
amendment: by today's standards epyc is an IO monster, so exposing 3/4 of epyc's IO is already great!
romrunning - Monday, June 8, 2020 - link
I know a lot of the "abnormal" comparisons were about mini-ITX boards, but those changes don't really seem abnormal to me. Mini-ITX has some very specific size constraints, so trade-offs like limiting the DIMM slots to two aren't that uncommon. Although I suppose you could say making a board with four SO-DIMM slots instead of two full-size slots could be considered abnormal, as that's not the norm for mini-ITX boards.

I've purchased several ASRockRack mini-ITX boards for Xeon-based mini-servers that were great for portability. I used them for work at expos, where their footprint was small enough to house two servers in the area where usually only one would fit, which then gave me the redundant servers I needed. So for that purpose, I appreciate ASRockRack making some boards in small form factors like mini-ITX that can shoehorn some large CPUs onto them. The power & portability was definitely a win for my use case.
eastcoast_pete - Monday, June 8, 2020 - link
How much space is there between the PCIe 4.0 slots? It looks to be a "normal" amount, but it's hard to tell from a picture. The reason I ask is another use scenario, in which the availability of multiple PCIe 4.0 connections outweighs the downside of only having 6 memory slots.

DanNeely - Monday, June 8, 2020 - link
I'm really curious where this board is going. It's not just the RAM configuration being weird enough to probably turn most customers off; it also doesn't have a standard power hookup. No conventional 24-pin ATX plug, and no next-generation 10-pin ATX12VO plug either.

bernstein - Monday, June 8, 2020 - link
looks like one of the ATX12VO boards. yay! now we just need nice compact ATX12VO PSUs...

jpmomo - Monday, June 8, 2020 - link
I thought ATX12VO was 10 pins. this one appears to be 1x 4-pin and 2x 8-pin (like an old-school 20-pin). does anyone know what PSU is required, and what cable?

bernstein - Tuesday, June 9, 2020 - link
yeah my bad... presumably the board ships with a 2x 8-pin to 24-pin adapter...

eastcoast_pete - Monday, June 8, 2020 - link
After looking at the pricing of 128 GB ECC RAM, Ian's other possible explanation, that this board lets one recycle (expensive) memory bought for a six-bank Xeon board, might well be spot on. Six of those puppies (for 512 GB total) cost a lot more than even the most expensive Rome CPU, so it's a possibility. Has anyone out there actually done that (bought a motherboard specifically to reuse expensive memory but with a different CPU)? Curious.

eastcoast_pete - Monday, June 8, 2020 - link
Of course, that'll be 768 GB total. Not a good idea to do math in my head while listening to a conference call that just won't end.

msroadkill612 - Tuesday, June 9, 2020 - link
Ah, now I geddit... recycle Xeon 3-channel RAM. It does sound a clever move by ASRock to preserve the expensive RAM investment with a switch... and make room for slots & lower board cost.

carcakes - Tuesday, June 9, 2020 - link
It's everything... shutting down BIOS controllers to find a mem reserve... find that mem BW. When you've stopped... look, one more DIMM slot... one more channel distro firmware update sarcasm...

carcakes - Tuesday, June 9, 2020 - link
Or it's just Optane \o/