17 Comments
PCTC2 - Wednesday, December 3, 2014
Coming from the HPC space, seeing 512GB-1TB of RAM was pretty regular, while 1.5TB-2TB was rare but did occur. Now, systems that can take 6TB of RAM in a single 4U rack server are pretty incredible (4P servers with 96 DIMMs, Intel E7 v2 support).
However, there are a few odd things about this board. For one, the QSFP+ is totally unnecessary, as it only supports 2x10GbE and is neither 1) InfiniBand nor 2) 40GbE. Sure, with LACP you could have a bonded 20GbE link, but you either need a splitter cable (QSFP+ to 4x SFP+, with 2 SFP+ unusable) or a switch that supports multiple links over QSFP+ (a 40GbE switch with 10GbE breakout capabilities). Also, the decision to use SFF-8087 connectors for the SATA ports and individual connectors for the SAS ports confounds me: you lose sideband support with individual cables, and the onboard SATA doesn't support the sideband, so you lose some functionality with some backplanes. Finally, the card Gigabyte advertises with this board is an LSI 2308, which is an HBA and not full hardware RAID.
Some of Gigabyte's B2B systems have intrigued me, especially their 8x Tesla/Phi system in 2U, but this board just doesn't seem completely thought out.
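The bonded-20GbE option PCTC2 mentions maps onto the Linux kernel bonding driver in 802.3ad (LACP) mode. Below is a minimal sketch of that setup using iproute2 from Python's subprocess module; the interface names eth0/eth1 are assumptions, and the switch side needs a matching LACP port-channel.

```python
import subprocess

def run(cmd: str) -> None:
    """Run one iproute2 command, raising CalledProcessError on failure."""
    subprocess.run(cmd.split(), check=True)

# Interface names are assumptions; substitute the board's actual 10GbE ports.
SLAVES = ["eth0", "eth1"]

# Create an 802.3ad (LACP) bond with MII link monitoring every 100 ms.
run("ip link add bond0 type bond mode 802.3ad miimon 100")

# Interfaces must be down before they can be enslaved to the bond.
for nic in SLAVES:
    run(f"ip link set {nic} down")
    run(f"ip link set {nic} master bond0")

# Bring everything back up; LACP then hashes traffic across both links.
for nic in SLAVES:
    run(f"ip link set {nic} up")
run("ip link set bond0 up")
```

Note that LACP hashes per flow, so a single stream still tops out at 10GbE; the 20GbE figure is an aggregate across flows, which is part of why the QSFP+ port adds little here.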
jhh - Wednesday, December 3, 2014
I suspect the QSFP was designed to support a Fortville, but they didn't get them qualified in time. That would get them a true 40 Gig port, or 4x10G.
fackamato - Friday, December 5, 2014
What's Fortville?
Cstefan - Friday, December 5, 2014
Intel 40GbE QSFP+. Nothing the consumer need worry over for a long time yet.
Klimax - Sunday, December 7, 2014
With some results already available: http://www.tweaktown.com/reviews/6857/supermicro-a...
Cstefan - Friday, December 5, 2014
I run multiple database servers with 2TB of RAM. My next round is slated for 4TB. And absolutely no joke, they reversed the SAS and SATA connectors in a monumentally stupid move.
ddriver - Wednesday, December 3, 2014
Well, surprisingly no gaming benchmarks this time, but what's with the "professional performance" benches? How many professionals out there make their money running Cinebench? How about some real workstation workloads for a change?
JeffFlanagan - Wednesday, December 3, 2014
This isn't a workstation, or a gaming machine.
ddriver - Wednesday, December 3, 2014
I actually applauded the absence of gaming benchmarks this time. As for whether this is for workstation machines, I'd say it is far more suited to being a workstation than to running WinRAR and image-viewing software.
And just to note, this "review" of a "server" motherboard doesn't have a single server benchmark whatsoever...
mpbrede - Wednesday, December 3, 2014
My usual gripe about acronyms that are not accompanied by an explanation when the term is first used. This time aggravated by a typo, I'm sure.
"The system is based on the C612 chipset, which is similar to the consumer based X99 but with 2P related features, such as MTCP over PCIe."
I'm pretty sure you meant to type MCTP (Management Component Transport Protocol) and not mTCP (microTCP?) or MTCP (the Malaysian Technical Cooperation Programme, or something to do with Transport Layer Support for Highly Available Network Services).
macwhiz - Wednesday, December 3, 2014
I'm not surprised that there's no temperature data in the BIOS. Server admins don't look at the BIOS after they complete initial setup (or a major overhaul). It's accessible from the BMC, where it's useful in a server environment. When a server overheats, the admin is usually not in the same room—and often not in the same building, or even the same state. The important question is how well the BMC firmware does at exposing that data for out-of-band management via IPMI, SNMP, or another standard solution. Does it play well with an Avocent UMG management device, for instance? As a server admin, I couldn't care less about seeing the temperature in the BIOS. What I care about is that my chosen monitoring solution can see if the temperature is going up—or any hardware fault is detected—and page me, even if the operating system isn't running. That's what BMCs are for!
Don't apologize for using 240VAC power. Chances are very good that, even in a U.S. data center, it'll be on 240VAC power. Given the current needs of most servers, it's impractical to use 120VAC power in server racks—you'll run out of available amperage on your 120VAC power-distribution unit (power strip) long before you use all the outlets. Keep going down that road and you waste rack space powering PDUs with two or three cords plugged into them. It's much easier and more efficient all the way around to use 240VAC PDUs and power in the data center. Comparing a 20-amp 120V circuit to a 20-amp 240V circuit, you can plug at least twice as many of a given server model into the 240V circuit. Because the U.S. National Electrical Code restricts you to using no more than 80% of the rated circuit capacity for a constant load, you can plug in 16A of load on that 20A circuit. If the servers draw 6A at 120V or 3A at 240V, you can plug two servers into the 120V power strip, or five servers into the 240V strip, before you overload it. So, once you get beyond a handful of computers, 240V is the way to go in the datacenter (if you're using AC power).
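The circuit math in the comment above is easy to reproduce. The sketch below simply restates it in code: the 6A/3A draw figures come from the comment itself, and the 80% factor is the NEC continuous-load derating it cites.

```python
# Servers per branch circuit under the NEC 80% continuous-load rule,
# using the example figures from the comment above.
CIRCUIT_BREAKER_A = 20        # 20 A branch circuit
NEC_CONTINUOUS_FACTOR = 0.8   # only 80% usable for a continuous load

usable_amps = CIRCUIT_BREAKER_A * NEC_CONTINUOUS_FACTOR  # 16 A

for volts, amps_per_server in [(120, 6), (240, 3)]:
    servers = int(usable_amps // amps_per_server)
    print(f"{volts} V circuit: {servers} servers "
          f"({servers * amps_per_server} A of {usable_amps:.0f} A usable)")

# 120 V circuit: 2 servers (12 A of 16 A usable)
# 240 V circuit: 5 servers (15 A of 16 A usable)
```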
leexgx - Wednesday, December 3, 2014
Mass server racks are pure DC in some cases, or 240V (I would have thought there'd be some very basic temperature monitoring in the BIOS, but I guess most of this is exposed elsewhere), so I agree with this post.
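For readers wondering what "exposed elsewhere" looks like in practice, the out-of-band path described above is usually the BMC's IPMI interface. Here is a minimal sketch that shells out to the standard ipmitool utility to read temperature sensors over the network; the BMC address and credentials are placeholders, and sensor names vary by board.

```python
import subprocess

# Placeholder BMC address and credentials; substitute your own.
BMC_HOST = "10.0.0.42"
BMC_USER = "admin"
BMC_PASS = "changeme"

def bmc_temperatures(host: str, user: str, password: str) -> dict:
    """Return {sensor name: reading} for temperature sensors via IPMI over LAN."""
    cmd = [
        "ipmitool", "-I", "lanplus",
        "-H", host, "-U", user, "-P", password,
        "sdr", "type", "Temperature",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    readings = {}
    for line in out.splitlines():
        # ipmitool prints pipe-separated columns: name | id | status | entity | reading
        fields = [f.strip() for f in line.split("|")]
        if len(fields) >= 5:
            readings[fields[0]] = fields[4]
    return readings

if __name__ == "__main__":
    for name, reading in bmc_temperatures(BMC_HOST, BMC_USER, BMC_PASS).items():
        print(f"{name}: {reading}")
```

In production you would feed this data (or the BMC's SNMP agent) into a monitoring system that pages on thresholds rather than polling by hand, which is exactly the point being made above.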
jhh - Thursday, December 4, 2014
208V 3-phase is probably more popular than 240V, as most electricity is generated as 3-phase, and using all 3 phases is important for efficiently using the power without being charged for a poor power factor.
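As a rough illustration of the three-phase point, the standard line-to-line power formula shows how much more a 208V 3-phase circuit delivers than a 240V single-phase one at the same per-conductor current; the 20A breaker and 80% derating below are assumptions carried over from the earlier comment, and unity power factor is assumed.

```python
import math

BREAKER_A = 20               # assumed branch-circuit rating
USABLE_A = BREAKER_A * 0.8   # NEC 80% continuous-load derating

# Single-phase power: P = V * I
single_phase_kw = 240 * USABLE_A / 1000

# Three-phase power (line-to-line voltage): P = sqrt(3) * V_LL * I
three_phase_kw = math.sqrt(3) * 208 * USABLE_A / 1000

print(f"240 V single-phase, 20 A: {single_phase_kw:.2f} kW usable")
print(f"208 V three-phase, 20 A:  {three_phase_kw:.2f} kW usable")
# ~3.84 kW vs ~5.76 kW for the same breaker rating
```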
mapesdhs - Thursday, December 4, 2014
Ian, you're still using the wrong source link for the C-ray test. The Blinkenlights site is a mirror over which I have no control; I keep the main c-ray page on my SGI site. Google for "sgidepot 'c-ray'"; the 1st hit will be the correct URL.
Apart from that, thanks for the review!
One question: will you ever be able to review any quad-socket systems or higher?
I'd love to know how well some of the other tests scale, especially CB R15.
Ian.
fackamato - Friday, December 5, 2014
No 40Gb benchmarks?
sor - Monday, December 8, 2014
I was excited to see the QSFP, but it seems like it's not put to use. I've been loving our Mellanox switches; they have QSFP, and you can run 40GbE or 4x 10GbE with a breakout cable on each port. It provides absolutely ridiculous port density and great cost. You can find SX1012s (12-port QSFP) for under $5k and have 48 10G ports in 1/2U at about $100/port. No funny business with extra costs to license ports. The twinax cable is much cheaper than buying 10G optics, too, but you have to stay close. Usually you only need fibre on the uplinks anyway.
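The per-port economics described above work out roughly as follows; the list price and breakout ratio are taken from the comment, and cabling costs are not included.

```python
# Rough port-density/cost math for a 12-port QSFP+ switch with 4x10GbE breakouts,
# using the figures quoted in the comment above.
QSFP_PORTS = 12
BREAKOUT = 4               # each QSFP+ port splits into 4x SFP+ 10GbE
SWITCH_PRICE_USD = 5000    # "under $5k" per the comment

ten_gig_ports = QSFP_PORTS * BREAKOUT
print(f"{ten_gig_ports} x 10GbE ports")                      # 48
print(f"~${SWITCH_PRICE_USD / ten_gig_ports:.0f} per port")  # ~$104
```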
dasco - Saturday, March 9, 2019
Does it support UDIMMs? The documentation says it supports only RDIMMs or LRDIMMs. Is the G.Skill RAM used in this test UDIMM or RDIMM ECC RAM?