12 Comments
Jammrock - Tuesday, January 29, 2013 - link
The GA-7PESH3 is a tower chassis board, the GA-7PESH4 is a rack mount chassis board.

On a tower chassis the air comes in and then up, lending to the design of the GA-7PESH3. The SAS connectors on the front edge of the board are another giveaway. On a rack mount the thick SAS cables would block the insertion of the SAS hot swap backplane.

The GA-7PESH4 looks designed for a rack mount case where the air is pushed through the chassis from front to back. Your typical rack mounted chassis has the HDDs and backplane up front, followed by RAM/CPU(s), and finally the peripherals. If the RAM was oriented differently it would block airflow in a 1U or possibly 2U chassis and cause overheating of the backside components. The location of the SAS connectors is the other giveaway, being located out of the way of a potential SAS backplane.
The GA-6PSXV4 is small enough and oriented such that it could be in a tower or a rack mounted chassis.
Or at least that's how I would interpret it based on my data center experience.
IanCutress - Tuesday, January 29, 2013 - link
Makes sense :) I should intern at a data center for a week or two at some point.

Jammrock - Tuesday, January 29, 2013 - link
Buy some earplugs first :)

shogun18 - Tuesday, January 29, 2013 - link
So close, yet such bad choices. WHY!!! do MB makers keep putting consumer crap on server boards? One 3Gx16 lane PCIe slot is understandable (GPU computing or one of those big fat PCIe SSDs). But the rest should have been 3Gx8 and 2Gx8/2Gx4. Instead they wasted !!2!! entire slots on 2G-1 and PCI?!?! If they were going to put a SAS chip on it (and I wouldn't have - people want to use the PCIe lanes for add-in cards, e.g. Fibre Channel) then at least spend the extra $10 and use SAS6. It's not 2009.

NitroWare - Tuesday, January 29, 2013 - link
The slots are not pointless. The ATX/tower boards featured in this article are server/workstation boards.

Excluding the PESH4, which is clearly rack mount and needs to be used in a validated chassis with airflow baffles.

You can use them either as a server in a server or office environment, or as a professional workstation. This is where the audio, USB 3.0 and 'pointless expansion slots' come in. Either for multimedia, science or networking.

As for onboard SAS, not every SI uses the onboard chip, regardless of whether it is real SAS or not.

Some SIs assume the onboard controller is crap/fakeraid and use their standard fleet-deployed add-in card.

Some need battery backup or a validated solution, and motherboards may not offer a BBU option.

Some want the ability to remove the whole card/array subsystem from a failed or redundant server without thinking about rebuilds or compatibility between different sub-chips or ROMs.

On some boards, the non-SAS SKU might not be available in the local channel.
NitroWare - Tuesday, January 29, 2013 - link
"they are the only manufacturer to support a system fully populated with DDR3-1600MHz 16GB RDIMM 1.35V modules. "I am not sure of the validity of this comment, as name brand or even some whitebox servers can take 512/768 in a 2P config with the right RDIMMs. Unless they are reffering to 1600 which makes less sense as the higher end Xeon parts are 1600 anyway.
IanCutress - Wednesday, January 30, 2013 - link
Yeah, the key point there was the 1600 MHz. Standard behaviour is 1600 MHz with one module per channel, dropping to 1333 MHz with two modules per channel to maintain signal coherency. I have had a few emails in the past couple of months with some users who have HPC usage scenarios that cry out for memory density + bandwidth on the CPU, saying that 2400 MHz in a non-ECC environment is great. Moving towards that on the 2P Xeon/ECC side can only be a good thing.
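As a rough illustration of that population rule, here is a minimal sketch; the socket, channel and DIMM counts are generic 2P LGA2011 assumptions rather than figures from Gigabyte's manual.

```python
# Sketch of the DDR3 RDIMM population rule discussed above.
# All values are illustrative assumptions, not board-specific specs.
SOCKETS = 2                # 2P LGA2011 system
CHANNELS_PER_SOCKET = 4    # quad-channel memory controller per CPU
DIMM_GB = 16               # 16GB RDIMMs, as quoted in the article

def effective_speed(dimms_per_channel: int) -> int:
    """One DIMM per channel runs at 1600 MHz; two per channel drops to 1333 MHz."""
    return 1600 if dimms_per_channel == 1 else 1333

for dpc in (1, 2):
    capacity = SOCKETS * CHANNELS_PER_SOCKET * dpc * DIMM_GB
    print(f"{dpc} DIMM(s)/channel: {capacity} GB total at DDR3-{effective_speed(dpc)}")
# 1 DIMM(s)/channel: 128 GB total at DDR3-1600
# 2 DIMM(s)/channel: 256 GB total at DDR3-1333
```

The quoted claim above is essentially the exception to this rule: keeping DDR3-1600 even in the fully populated case, where the usual behaviour would be the drop to 1333 MHz.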
JMC2000 - Tuesday, January 29, 2013 - link

From the information on Gigabyte's page for the 7PESH3, there are 4 PCI-E 3.0 x16 slots and 3 PCI-E 3.0 x8 slots. Knowing that S2011 chips have 32 integrated PCI-E lanes, set up as x16/x16/x8 or x16/x8/x8/x8, which slots split an x16 connection into dual x8?

IanCutress - Wednesday, January 30, 2013 - link
S2011 technically has 40 PCIe lanes, hence the 16+8+8+8 configuration. For ease of use, manufacturers tend to split the x16 to the slot directly below it. However, with two CPUs there are up to 80 lanes available, meaning that two x16s can come from CPU1 and two x16s can come from CPU2 (with all sorts of variants regarding splitting and the spare 8 lanes).

I'll see if I can get hold of a block diagram so we know what is what :)
Ian
IanCutress - Wednesday, January 30, 2013 - link
After talking to GB, the official block diagram is under NDA, but I was told:

CPU 1 controls PCIe 1-4. 1 and 3 are x16, which drop to x8 if 2 and 4 are populated.
CPU 2 controls PCIe 5-7. 5 and 7 are x16, and 5 will drop to x8 if 6 is populated.
So they are only using 32 lanes from each CPU, but with 4-way it is a full x16/x16/x16/x16 without PLX 8747 chips.
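As a rough sketch of how those rules play out, the pairings below are taken from the description above; the data structures and function are illustrative, not Gigabyte's official block diagram.

```python
# Minimal model of the GA-7PESH3 slot-width behaviour described above.
# Slot pairings are as quoted; everything else is an assumption.
LANE_SHARING = {1: 2, 3: 4, 5: 6}        # x16 slot -> companion slot that takes 8 of its lanes
FIXED_WIDTH = {2: 8, 4: 8, 6: 8, 7: 16}  # slots whose electrical width never changes

def slot_widths(populated):
    """Return the electrical width of each populated slot under the rules above."""
    widths = {}
    for slot in sorted(populated):
        if slot in LANE_SHARING:
            # An x16 slot drops to x8 when its companion slot is populated
            widths[slot] = 8 if LANE_SHARING[slot] in populated else 16
        else:
            widths[slot] = FIXED_WIDTH[slot]
    return widths

# Four cards in slots 1/3/5/7 -> x16/x16/x16/x16 with no PLX switch needed
print(slot_widths({1, 3, 5, 7}))           # {1: 16, 3: 16, 5: 16, 7: 16}
# All seven slots populated -> the shared x16 slots drop to x8
print(slot_widths({1, 2, 3, 4, 5, 6, 7}))  # {1: 8, 2: 8, ..., 6: 8, 7: 16}
```

In the four-card case each CPU hands out its full 32 slot lanes, which matches the x16/x16/x16/x16 figure above.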
JMC2000 - Wednesday, January 30, 2013 - link
So, the LSI controller would be using an available PCI-E x8. I kind of figured that's how the board was laid out, but wasn't sure, since Gigabyte didn't go into detail on the board's spec page.

Thanks for finding out, Ian.
Now, if there was a wider version of the 7PESH3, with 8 DIMMs per socket... it would make for a nice 'compact' computing/rendering system.
zer0sum - Wednesday, January 30, 2013 - link
So what sort of price are these boards?

I sure could use a GA-7PESH4 in my home Vx environment.