YB1064 - Thursday, May 20, 2021 - link
A professional board should have 10GbE ethernet. Period.
fmyhr - Thursday, May 20, 2021 - link
Not necessarily. For example, Supermicro builds most of its Xeon E3 boards in multiple flavors: some with multiple 1GbE, some with 10GbE. Evidently there's a market for boards without the built-in 10GbE. Of course you can always add it yourself via PCIe slot. Related: how cool would it be if boards like these had integrated OcuLink ports?
beginner99 - Thursday, May 20, 2021 - link
I mean it's not strictly server, more workstation/hobbyist focused, and 10G has additional costs like the switches etc. I agree however that four 1GbE ports are nonsensical. Really don't get that. What does one do with 4 Ethernet ports? What is missing is a middle ground: 2.5 and 5GbE capable motherboards. 2x 2.5GbE would be completely fine here. You can configure them as needed for fallback or teaming.

I just bought a mini-ITX board and ran into that issue. You get either 1GbE or 10GbE, and the latter with at least a $150 additional price tag. 2.5GbE? Only found it in some LGA1200 Xeon W boards, but those don't have a BMC. Bummer.
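To make "fallback or teaming" concrete: on Linux, both are typically done with a bonding interface. A minimal sketch, assuming a bond named bond0 has already been configured (active-backup for fallback, 802.3ad/LACP for teaming), that reads the kernel's status for it:

    # Minimal sketch: report the state of a Linux bonding interface.
    # Assumes "bond0" already exists; adjust the name for your setup.
    from pathlib import Path

    def bond_status(bond: str = "bond0") -> None:
        text = Path(f"/proc/net/bonding/{bond}").read_text()
        for line in text.splitlines():
            if line.startswith(("Bonding Mode", "Currently Active Slave",
                                "Slave Interface", "MII Status", "Speed")):
                print(line.strip())

    bond_status()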
Drkrieger01 - Thursday, May 20, 2021 - link
Four 1Gb NIC ports would be great for an edge router. Add in a 10Gb card if you need a datacenter link, and it should be great for small enterprise clients (150-500 users). Drop in a Ryzen GE series CPU, and you have a great low-power, high-throughput firewall. The board layout is even great for a 2U chassis; airflow in rack chassis isn't great for the traditional RAM/CPU position layout.
TheinsanegamerN - Thursday, May 20, 2021 - link
The exact same argument could be made the other way around: one 10GbE port is great for any use case involving high traffic, and if you want 4 ports for fallback, buy a 4x1Gb card.
fmyhr - Thursday, May 20, 2021 - link
I _think_ the reason this motherboard exists at all is that some particular LARGE customer (Facebook?) wanted it. Sure, it would be great to have variations, like Supermicro does for their Xeon E3 boards. But I guess the market is not there at the moment to support this... and AMD's refusal to market Ryzen as an E3 competitor isn't helping AT ALL. In the meantime, I'm glad this board... exists? Still can't order one!
BedfordTim - Friday, May 21, 2021 - link
You have hit the nail on the head, but the customer is probably smaller. Most of the oddball industrial boards exist for someone's specific purpose.
mode_13h - Friday, May 21, 2021 - link
> some particular LARGE customer (Facebook?) wanted it.

Not Facebook, given they founded the Open Compute Project, 10 years ago.
Look at ASRock Rack's catalog and you'll see a lot of boards like these.
mode_13h - Friday, May 21, 2021 - link
Speaking of which, their B550D4M model has an OCP 2.0 Mezzanine connector A (PCIe x8).
https://www.asrockrack.com/general/productdetail.a...
bananaforscale - Saturday, May 22, 2021 - link
Single 10G says storage server, not "high traffic" in general. Multiple 1G ports are better for security. You know you can just buy that 10G card if you need it?
mode_13h - Saturday, May 22, 2021 - link
Yeah, but 5x 1 gigabit ports is kinda ridiculous. It's not as if that costs nothing and uses no PCIe lanes.
Spunjji - Monday, May 24, 2021 - link
It costs very little and uses very few lanes, though - depending on how they've done it, it could be as few as one lane for the 4 1GbE ports but is likely no more than 2. The management port will be using another, but that's still plenty left over for whatever the user needs.
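Rough numbers behind that (illustrative, not taken from the board's block diagram): a single PCIe 3.0 lane already carries roughly twice the aggregate bandwidth of four 1GbE ports.

    # Back-of-envelope: can one PCIe 3.0 lane feed four 1GbE ports?
    lane_gbps = 8 * 128 / 130      # 8 GT/s with 128b/130b encoding ~= 7.88 Gb/s
    quad_nic_gbps = 4 * 1.0        # four 1GbE ports, per direction
    print(f"x1 lane: {lane_gbps:.2f} Gb/s vs quad NIC: {quad_nic_gbps:.1f} Gb/s")

Spunjji - Monday, May 24, 2021 - link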
"The exact smae argument could be made the other way around"Only if you ignore cost! It makes sense to integrate the minimum where upgrades are possible, rather than forcing the far higher cost of 10GbE everyone who buys your board.
fmyhr - Thursday, May 20, 2021 - link
Yup! Love that they put GOOD 1Gb NICs in there: i210s. Perfect for an edge router, physically isolating different networks.
Lucky Stripes 99 - Thursday, May 20, 2021 - link
I also agree that dual 2.5 Gbps would be more ideal as the market begins to move away from 1 Gbps. There are niche uses for quad Eth port boards, but the ones I'm most familiar with tend to use smaller form-factor boards.

I get the feeling that this was designed for a specific industrial/embedded customer with a unique use case who didn't mind ASRock releasing it to the general market.
BedfordTim - Friday, May 21, 2021 - link
BedfordTim - Friday, May 21, 2021 - link
You could, for example, hook up 4 GigE cameras. Most can't take advantage of 2.5GbE ports, but they saturate a 1GbE port.
BedfordTim - Friday, May 21, 2021 - link
There are quite a few Atom boards with 2.5GbE ports now.
ZENSolutionsLLC - Friday, May 21, 2021 - link
Because regardless of the bandwidth, a single 10G NIC is a single point of failure, which is a big NO NO in a corporate enterprise IT environment. Multiple 1GbE NICs are still very much used for LACP links spanning multiple switching fabrics. They're also highly used on VMware and Hyper-V hosts to separate out management traffic, vMotion, etc., and for aggregation and link failover.
Jorgp2 - Friday, May 21, 2021 - link
The fuck kind of server would have 2.5G or 5G ethernet?
bananaforscale - Saturday, May 22, 2021 - link
A roll-your-own NAS.
im.thatoneguy - Thursday, May 20, 2021 - link
Please stop putting 10G ports on servers. They're always 10GBase-T, which is useless to me. They take up PCIe lanes. And 25Gb/40Gb/100Gb is imminently supplanting 10Gb.

It's too late for 10G, especially Base-T.
fmyhr - Thursday, May 20, 2021 - link
Heh. I agree with you about 10GBase-T; SFP+ would be preferable if 10GbE *needs* to be present. I don't have the $ or power budget for a 25Gb/40Gb/100Gb network... but understand those are requirements for some. I'm curious how your ideal board would allocate its limited PCIe lanes among PCIe slots, M.2 slots, OcuLink,...?
bananaforscale - Saturday, May 22, 2021 - link
10GBase-T uses the same cabling as 1000Base-T, assuming the network was built with any future-proofing, so you can basically just plug it in. 25GBase-T probably won't happen. 'sides, YOU are not the market. What's useless to you is probably useful for someone. Also, you can disable those integrated NICs.
mode_13h - Saturday, May 22, 2021 - link
> 10GBase-T uses the same cabling as 1000Base-T,
> assuming the network was built with any future-proofing
Depends on when. It might've been built with Cat 6, rather than Cat 6A. And even then, 10GBase-T has shorter length limitations and requires greater power expenditure than we're used to with Gigabit.
BTW, there's no such standard as Cat 6e. If you see someone selling cable as Cat 6e, treat it as plain Cat 6, but with a bit more suspicion.
Samus - Thursday, May 20, 2021 - link
I think 2x2.5G would be more appropriate for the target market of this board. Anybody considering 10GbE is likely on the verge of adopting 25/40/100G anyway, in which case the PCIe slot will be utilized.

The other head-scratcher is why the M.2 slot isn't PCIe 4.0 - the allocation of PCIe lanes to ports on this board is very strange.
fmyhr - Thursday, May 20, 2021 - link
Do you have personal experience running 2.5GbE? I've seen reports of problems using both Intel and Realtek chipsets. Whereas 10GbE is very mature and well-supported. Upside of being "obsolete" :-)

This board runs the M.2 slot from the B550 chipset, which limits its speed to PCIe 3.0. The upside of this choice is an extra PCIe 4.0 x4 slot from the CPU, into which you could install an M.2 carrier board if you need your SSD on PCIe 4.0. Personally I'd try bifurcating the PCIe 4.0 x16 slot and running a quad M.2 card there, and whatever other PCIe card in the x4 slot.
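If you go rearranging drives like that, Linux will tell you what link each one actually negotiated; a minimal sketch, assuming the standard sysfs paths:

    # Minimal sketch (Linux): print the PCIe link each NVMe drive negotiated,
    # e.g. to confirm a chipset M.2 slot runs at PCIe 3.0 (8.0 GT/s) while a
    # CPU-attached carrier card runs at PCIe 4.0 (16.0 GT/s).
    from pathlib import Path

    for dev in sorted(Path("/sys/class/nvme").glob("nvme*")):
        pci = dev / "device"   # symlink to the underlying PCI device
        speed = (pci / "current_link_speed").read_text().strip()
        width = (pci / "current_link_width").read_text().strip()
        print(f"{dev.name}: {speed}, x{width}")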
lightningz71 - Thursday, May 20, 2021 - link
Does this board even support 4-way bifurcation of the PCIe x16 slot?
Samus - Friday, May 21, 2021 - link
The B550 can't bifurcate the x4 slot, but it apparently can bifurcate the x16 slot. In the case of some boards with multiple PCIe 4.0 NVMe M.2 connectors, they start by cutting the x16 slot bandwidth; then, after a third M.2 drive is installed, they either totally disable the x4 slot or run the x16 slot at x4, configurable in the BIOS (in the case of the Gigabyte B550 Aorus Master).
Samus - Friday, May 21, 2021 - link
Personally, no, I'm not running any 2.5G stuff, and based on what you are stating, maybe that's why there hasn't been adoption. I agree with going with a mature solution, but 2.5G isn't exactly new, and by now you'd think the bugs are worked out. 2.5G is, after all, based on a lower handshake of 10GbE; at long distances 10GbE actually negotiates at 2.5G, and I have installed 2.5G cards in the field that connect to 10GbE ports at 2.5G. It's the damn SFP adapters that are all proprietary with their individual standards, so you just need to match those up with whatever chipset the NIC you are connecting has.

Regarding NVMe on B550, I'm not sure what you are getting at. There have been B550 boards on the market for over a year that have not one, not two, but three native PCIe 4.0 NVMe M.2 slots direct from the chipset. Obviously having many M.2 slots impinges on other PCIe x4/x8/x16 slot bandwidth, because the consumer Ryzens don't offer many lanes. But that doesn't mean this board should leave support out entirely, as the M.2 could just cut into the x4 or x16 slot bandwidth.
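For anyone wanting to verify those negotiated rates in the field, a minimal sketch (Linux, standard sysfs):

    # Show the rate each interface actually negotiated, e.g. a 2.5G card
    # that linked up against a 10GBase-T switch port should report 2500.
    from pathlib import Path

    for nic in sorted(Path("/sys/class/net").iterdir()):
        try:
            speed = (nic / "speed").read_text().strip()  # Mb/s; -1 = no link
        except OSError:
            continue  # loopback/virtual interfaces don't report a speed
        print(f"{nic.name}: {speed} Mb/s")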
mode_13h - Friday, May 21, 2021 - link
> Do you have personal experience running 2.5GbE?

Well, the main benefit is cable length and compatibility. If the speed is fast enough for you, then it seems an attractive option for those with legacy cabling.
bananaforscale - Saturday, May 22, 2021 - link
This.
mode_13h - Friday, May 21, 2021 - link
> I think 2x2.5G would be more appropriate for the target market of this board.

Probably the main issue is that support for 2.5 GigE is (still?) uncommon on enterprise switches.
> Anybody considering 10GbE is likely on the verge of adopting 25/40/100G anyway
A lot of people are just starting to move up to 10 GigE. Anything faster doesn't make a lot of sense for SOHO applications.
bananaforscale - Saturday, May 22, 2021 - link
Especially considering how overpriced 10G twisted-pair NICs are.
mode_13h - Saturday, May 22, 2021 - link
Eh, I got a pair 2 years ago for < $100 each. I've spent more on a 3Com 10 Megabit PCI NIC, back in the late 90's. Or maybe it was 100 Mbps.
Samus - Monday, May 24, 2021 - link
Probably 100Mbps if it was PCI. The 100Mbps ISA NICs were pretty damn pricey, because by the time 100Mbps became commonplace, ISA was on its way out and PCI was becoming mainstream (Pentium era).

Even now, a 100Mbps ISA network card is $50+.
PixyMisa - Friday, May 21, 2021 - link
By preference, but some datacenters use Cat6 and others use SFP. Others have already moved up to 25GbE. 10GBase-T is perfect for workstations, but not necessarily so for servers.
mode_13h - Saturday, May 22, 2021 - link
> some datacenters use Cat6

Really? For what? Management? Twisted-pair is very energy-intensive at 10 Gigabits, and can't go much above. So, I'd imagine they just use it for management @ 1 Gbps.
Within racks, I'd expect to see SFP+ over copper. Between racks, it's optical all the way.
Samus - Monday, May 24, 2021 - link
I've toured a lot of datacenters in my lifetime, and I can honestly say I haven't seen copper wiring used for anything but IPMI and, in extreme cases, POTS for telephone backup comms, though even this is mostly dead now as it has been replaced by cellular. Even HP iLO 2 supports fiber for remote management, and you can bet that at the distances and energy profiles data centers are working with, they use fiber wherever they can.
[email protected] - Friday, May 21, 2021 - link
Agree, companies are saving money and customers are paying more.
Spunjji - Monday, May 24, 2021 - link
That's an opinion, for sure.
mode_13h - Monday, May 24, 2021 - link
I can understand the sentiment of wanting 10G to take hold, so that prices will come down.

Some of ASRock Rack's boards do have an option that includes a 10 Gigabit controller. I've had my eye on the X570D4U-2L2T, in fact.
I think the reason this probably lacks 10 Gigabit is that they have two X570-based boards with 10 Gigabit (and one without). The point of the B550 board is to be lower-priced. So, if someone wants it, they can easily just step up to one of the X570s that has it.
spikebike - Tuesday, December 28, 2021 - link
The article mentions a variant of the B550D4 with 2x10G.
fmyhr - Thursday, May 20, 2021 - link
Would many of this board's prospective users really use a GTX 980 and a 1200W, 75%-efficient PSU? Personally, if I'm willing to spend that much electricity on a server, I'd go EPYC. The benefit of this board AFAICT is the ability to construct a true Ryzen server (i.e. ECC RAM and OoB management) that uses less power, and may cost less, than EPYC. The tradeoff is far less I/O bandwidth, and a lower limit on CPU cores. Anyway, back to the point: to lower power use with this board you'd want to match the power supply to the actual requirement; something ~400W, preferably Gold-rated (90%+) or better efficiency, is probably a lot more appropriate.
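The arithmetic behind that, with an illustrative (made-up) load figure:

    # Wall draw for the same DC-side load at two PSU efficiencies.
    dc_load_w = 150  # hypothetical server load on the DC side
    for label, eff in [("75% efficient", 0.75), ("90% efficient (Gold-ish)", 0.90)]:
        print(f"{label}: {dc_load_w / eff:.0f} W at the wall")
    # ~200 W vs ~167 W: the difference is pure waste heat, around the clock.

Einy0 - Thursday, May 20, 2021 - link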
Agreed, I'd much rather see power consumption numbers using just the BMC for video and an appropriately sized power supply.
dsplover - Thursday, May 20, 2021 - link
Excellent niche board, perfect for a pro audio workstation, as well as a 1U build.

Just need the 5700G to get me to jump.
Nice Review
hansmuff - Thursday, May 20, 2021 - link
"The only negative in performance came in our DPC latency testing, with our results showing that this board isn't suitable for DAW systems."dsplover - Thursday, May 20, 2021 - link
DPC latency is of no concern, as I run an external DSP audio/MIDI interface.

Been using ASRock Rack boards for years, even though they "bench" poorly.
It’s the stability that counts for me. They are pricey but then so am I....
hansmuff - Tuesday, May 25, 2021 - link
Fair enough!
edogawaconan - Thursday, May 20, 2021 - link
From the manufacturer's page:

"For AMD Ryzen Desktop Processors with Radeon Graphics, ECC support is only with Processors with PRO technologies"
Pretty sure this means it's only the G series of Ryzen that doesn't have ECC support.
fmyhr - Thursday, May 20, 2021 - link
As best I've been able to determine, Ryzen APUs do not support ECC unless they're of the "Pro" variety. Read somewhere this is because the iGPU must also support ECC, and most of them do not. Don't know why AMD makes this info so difficult to find out, OR why they've so far shipped Pro APUs only to OEMs, none to retail. It's like they don't WANT to compete with Xeon E3. </rant>
bill.rookard - Thursday, May 20, 2021 - link
Wouldn't that all mean that (for example) if the board supported my Ryzen 1700, it would also allow for ECC RAM? I know it doesn't support the 1700; curious to see if it would support my Ryzen 1600 (AF stepping).
mode_13h - Friday, May 21, 2021 - link
> curious to see if it would support my Ryzen 1600 (AF stepping)

No way. I can already guess it won't support anything older than Ryzen 3000-series, but the CPU-support list is here:
https://www.asrockrack.com/general/productdetail.a...
domboy - Thursday, May 20, 2021 - link
I don't understand why more motherboards don't have the memory slots set up this way. Most cases I see for sale have airflow from front to rear, so they would benefit from a memory slot setup like this board has. Maybe I'm missing something...
Linustechtips12#6900xt - Thursday, May 20, 2021 - link
I don't know tons of stuff about servers, but why aren't there basically any VRM heatsinks? Just not necessary because of no OC? But wouldn't you want better power delivery and, in return, lower VRM temps for stability anyway?
bill.rookard - Thursday, May 20, 2021 - link
Not necessary, really. IIRC they're usually rated at a pretty high temp (100+°C), and as long as they have some airflow they're fine, and servers usually have some pretty good sideways airflow going on. If you're going to OC - then yes, you're going to run the VRMs hard and should have a heatsink, but in this application there's no OC facility.
TheinsanegamerN - Thursday, May 20, 2021 - link
No OC is most of it. When running within their power limits, Ryzen chips are very efficient, and the VRMs are just not going to get that hot. Even boards with subpar VRMs only put out 4-5 watts of heat at full load with a 95 watt CPU.

Given these boards are usually put into either server-style chassis with tons of airflow, or have top-down coolers that will blow onto the VRMs, temps should be fine.
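The 4-5 watt figure follows directly from typical conversion efficiency (an assumed number, for illustration):

    # VRM heat at full load, assuming ~95% conversion efficiency.
    cpu_power_w = 95
    vrm_efficiency = 0.95
    loss_w = cpu_power_w * (1 - vrm_efficiency)
    print(f"~{loss_w:.2f} W dissipated across the whole VRM stage")  # ~4.75 W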
mode_13h - Friday, May 21, 2021 - link
I see a heatsink both in front and in back of the CPU socket. What are those, then?
Linustechtips12#6900xt - Monday, May 24, 2021 - link
Yes, but they are basically nothing; that's why I said "why aren't there basically any VRM heatsinks?", not "why aren't there VRM heatsinks?"
docbones - Thursday, May 20, 2021 - link
Weird board. Would have also expected 10Gb Ethernet and also many more SATA ports.
Cooe - Thursday, May 20, 2021 - link
"Users with Ryzen desktop processors can only use non-ECC DDR4, while users with Ryzen Pro models with Radeon Graphics and PRO technologies can use ECC memory."Uhh.. ECC memory works just fine with bog standard (aka "not Pro") Ryzen CPU's and has LITERALLY since their launch in 2017.
fmyhr - Thursday, May 20, 2021 - link
While this is true, AMD and motherboard manufacturers are distressingly cagey about whether ECC and ECC error reporting actually work. If you care about this, you need to do your own searches. There have been cases of ECC support being added or removed on successive motherboard BIOS revisions. The different mainstream manufacturers have different attitudes regarding ECC RAM: MSI pretty much ignores it, Gigabyte says they support ECC on _some_ boards, Asus seems somewhat better, and ASRock appears to be the best bet. If only Supermicro would give us a non-Threadripper Ryzen board...
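One concrete way to do that searching on a running Linux box: if the kernel's EDAC driver registered a memory controller, ECC is actually active, and the error counters show that reporting works. A minimal sketch using the standard EDAC sysfs paths:

    # If /sys/devices/system/edac/mc has no mc* entries, ECC is likely
    # inactive or unsupported on this board/CPU/BIOS combination.
    from pathlib import Path

    mcs = sorted(Path("/sys/devices/system/edac/mc").glob("mc[0-9]*"))
    if not mcs:
        print("no EDAC memory controller: ECC inactive or unsupported")
    for mc in mcs:
        ce = (mc / "ce_count").read_text().strip()  # corrected errors since boot
        ue = (mc / "ue_count").read_text().strip()  # uncorrected errors
        print(f"{mc.name}: corrected={ce}, uncorrected={ue}")

AntonErtl - Friday, May 21, 2021 - link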
AFAIK all ASUS (as well as all ASRock) boards support ECC. We have several servers with (working) ECC with Ryzen CPUs (without Pro): 1600X, 1800X, 3700X, 3900X, 5800X. If AMD sold the Pro models in retail and guaranteed ECC functionality, we would be willing to pay a little extra for that. As for the Pro models, I once compared the specification of one with the corresponding non-Pro model, and wrt ECC they were the same. Can anyone name a Pro model where AMD guarantees more ECC functionality?
mode_13h - Friday, May 21, 2021 - link
The difference between ECC support of Pro and non-Pro CPUs is supposedly that AMD only tests and guarantees it on the Pros. For the non-Pro CPUs, it's up to the motherboard vendor to test and support.

As for APUs, AMD disables ECC support on the non-Pro APUs. I guess that's because the main customers for APUs with ECC are corporations, and so it's like a favor to big OEMs, giving them a lock on the corporate market (since the Pro versions seem to be EOM-only).
mode_13h - Friday, May 21, 2021 - link
> EOM-only

typo: should be OEM-only.
Slash3 - Friday, May 21, 2021 - link
Correct. The board does indeed support ECC in that way. Gavin misinterpreted the specifications; no idea why, as it is quite clear.
https://www.asrockrack.com/general/productdetail.a...
"DDR4 288-pin ECC*/non-ECC UDIMM
* For AMD Ryzen Desktop Processors with Radeon Graphics, ECC support is only with Processors with PRO technologies."
Non-Pro APUs have always been the exception, and do not support ECC on any platform.
Jorgp2 - Friday, May 21, 2021 - link
> Uhh.. ECC memory works just fine with bog-standard (aka "not Pro") Ryzen CPUs and has LITERALLY since their launch in 2017.

I don't think you understand what kind of board this is.
If the data sheet says it only supports ECC for select SKUs, then it only supports ECC for select SKUs.
There is no halfway for the target market.
leexgx - Friday, June 18, 2021 - link
ECC functionality still works even with the non-Pro CPUs; the official stance is just that it doesn't work, even though it does (not like Intel, where if it's an i5 or higher, ECC automatically doesn't work). DDR5 is going to change this problem for Intel, as ECC is baked into DDR5 and can't be disabled and sold as "an enterprise" feature.
MeJ - Friday, May 21, 2021 - link
"The B550D4-4L also doesn't include integrated audio, so users looking to build an audio workstation will need to rely on external audio controllers."With respect, this comment is illogical. I have never heard of any DAWorkstation using on-board audio, and don't ever expect to. NOT having on-board audio to disable is a major advantage for a DAW. Also, the DPC issues here are perhaps characteristic of early drivers. There is no inherent reason I can see for this board to have worse performance than others with the same chipset... Is there? I agree that 10G would be preferred for a DAW.
mode_13h - Friday, May 21, 2021 - link
Lol. Yeah, integrated audio on server boards that even have it tends to be a minimal implementation, with lots of crosstalk and interference.
mode_13h - Friday, May 21, 2021 - link
ASPEED BMCs are such garbage. This has the same ARM11 core as a first-gen Raspberry Pi. Just imagine how slow software rendering is on such a core, and that's the graphics performance you get on these things.

I have an ASRock board with one of these BMCs, and 2D graphics feels slow even at 1024x768 (which is the resolution that the EDID of my analog KVM seems to advertise, even though the monitor is higher).
Ninhalem - Friday, May 21, 2021 - link
I have the EPYCD8-2T board that has the same ASPEED AST2500 BMC. My experience has been very positive, with the network interface GUI being very responsive at 1440p. Just to give a different anecdotal experience.
mode_13h - Saturday, May 22, 2021 - link
For remote GUI stuff, I use X11 over ssh. I'd only use the BMC remotely to access BIOS settings and for web-based admin.

Another interesting benefit of the BMC is its ability to upgrade the BIOS without booting the CPU (i.e. in case you buy the board with a CPU that's newer than its existing BIOS supports).
Frank_M - Saturday, June 19, 2021 - link
Frank_M - Saturday, June 19, 2021 - link
My main reason for choosing a B550 motherboard for a recent build is that all the X570 boards seemed to need a fan on the chipset that would be blocked by any video card.