17 Comments
Gami - Monday, March 14, 2011 - link
Damn, I wish I was limited to 1 TB of RAM right now.
AlExAkE - Tuesday, March 15, 2011 - link
Hahaha, you wish you had such a problem, huh? :) I'd take my 4 GB to the corner and stay there without saying anything.
vol7ron - Tuesday, March 15, 2011 - link
I can hardly imagine what you would need that much RAM in a PC for. Servers I understand, PCs not so much.
Gami - Tuesday, March 15, 2011 - link
For VM host servers, after getting past the first problem of storage space, the next problem is usually not enough real memory. Four CPUs with 24 cores in total can handle a lot of VMs, but you normally run out of storage space first (which can actually be resolved); with memory, you've basically hit a brick wall.
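As a rough illustration of where that wall sits, here is a back-of-the-envelope sizing sketch in Python; the per-VM figures and overcommit ratio are illustrative assumptions, not numbers from this thread:

    # VM-host sizing sketch; all per-VM figures are illustrative assumptions.
    cores = 4 * 6             # 4 CPUs x 6 cores each = 24 cores
    vcpu_overcommit = 4       # assumed vCPU:pCPU ratio (CPU overcommits well)
    host_ram_gb = 256         # assumed host RAM
    ram_per_vm_gb = 8         # assumed RAM per VM (RAM barely overcommits)

    vm_limit_by_cpu = cores * vcpu_overcommit       # 96 VMs
    vm_limit_by_ram = host_ram_gb // ram_per_vm_gb  # 32 VMs
    print(min(vm_limit_by_cpu, vm_limit_by_ram))    # RAM is the wall: 32

CPU tolerates heavy overcommit; RAM effectively doesn't, so the memory limit binds long before the core count does.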
vbrisebo - Wednesday, March 16, 2011 - link
Companies like V3sys.com use Fusion-io cards to break past those barriers. They can run 50 to 100 VMs on one server, as the Fusion-io card can serve as both storage and RAM for the VMs.
oneoho - Tuesday, March 15, 2011 - link
Werd, 1 TB of RAM? Holy shiet, think of the VMs!
Griswold - Monday, March 14, 2011 - link
You spent only a few hours at the largest IT fair there is and came to the conclusion that it's becoming less international every year, just because the few booths you visited mostly presented you with German material?

Did you go there during the press days or the public opening?
JMC2000 - Monday, March 14, 2011 - link
"The HTX slot offers a low latency 6.4 GB/s interface. Together with these kind of "NUMA connected" cards, it is easier to build a very low latency HPC cluster.However, it seems that the HTX slot is at the end of its lifetime. The upcoming Xeons seem to come with a PCI-express 3.0 controller integrated, so they should be able to offer a low latency interface of up to 12.8 GB/s, or twice as much."
I was just looking at the HTX specs/whitepapers, and HTX 3.0 has an aggregate bandwidth of 20.8 GB/s, or 10.4 GB/s in each direction. Are you sure it is just an HTX 1.0/2.0 slot, or is it HTX 3.0? All 6000-series Opterons support up to 3.2 GHz, or 6.4 GT/s.
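For anyone checking those figures, the link math is simple enough to script. A quick sketch; the 800 MHz and 2.6 GHz clocks are the commonly cited HTX 1.0 and HTX 3.0 link speeds, taken as assumptions here:

    # HyperTransport link bandwidth: transfers/s x link width, per direction.
    def ht_bandwidth(clock_ghz, width_bits=16):
        gt_per_s = clock_ghz * 2                 # double data rate
        per_dir = gt_per_s * (width_bits / 8)    # GB/s in each direction
        return per_dir, per_dir * 2              # (per direction, aggregate)

    print(ht_bandwidth(0.8))  # HTX 1.0 @ 800 MHz -> (3.2, 6.4): the article's 6.4 GB/s
    print(ht_bandwidth(2.6))  # HTX 3.0 @ 2.6 GHz -> (10.4, 20.8): the figures above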
mino - Monday, March 14, 2011 - link
Plus lower latency on top of it. PCIe is not really an HTX competitor; a hypothetical QPI slot would be.
MySchizoBuddy - Monday, March 14, 2011 - link
When will we see a Tegra 2-based server?
rahvin - Monday, March 14, 2011 - link
And why exactly would you even want Tegra (regardless of version) in a server? It brings nothing to a server. Atom has a use: low power, x86 compatibility, and it's ideal for low-usage, high-I/O workloads. What niche would Tegra serve in a rackmount server?
zhill - Monday, March 14, 2011 - link
Good discussion of low-power servers, and where they make sense and don't, from James Hamilton at Amazon Web Services (this is his personal blog, not official Amazon): http://perspectives.mvdirona.com/2010/05/18/WhenVe...
"Where very low-power, low-cost servers win is:
1. Very cold storage workloads....The core challenge with cold storage apps is that overall system cost is dominated by disk but the disk needs to be attached to a server. We have to amortize the cost of the server over the attached disk storage. The more disk we attach to a single server, the lower the cost. But, the more disk we attach to a single server, the larger the failure zone. Nobody wants to have to move 64 to 128 TB every time a server fails. The tension is more disk to server ratio drives down costs but explodes the negative impact of server failures. So, if we have a choice of more disks to a given server or, instead, to use a smaller, cheaper server, the conclusion is clear. Smaller wins. This is a wonderful example of where low-power servers are a win.
2. Workloads with good scaling characteristics and non-significant local resource requirements. Web workloads that just accept connections and dispatch can run well on these processors. However, we still need to consider the “and non-significant local resource” clause. If the workload scales perfectly but each interaction needs access to very large memories for example, it may be poor choice for Wimpy nodes. If the workload scales with CPU and local resources are small, Wimpy nodes are a win."
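Hamilton's disk-to-server tension is easy to put numbers on. A small sketch, with all dollar figures made up purely for illustration:

    # Cost per TB vs. failure-zone size as more disks hang off one server.
    # All prices are illustrative assumptions, not real quotes.
    server_cost = 2000.0      # assumed cost of one server
    disk_cost = 100.0         # assumed cost of one 2 TB disk
    disk_tb = 2.0

    for disks in (12, 24, 48, 64):
        total_tb = disks * disk_tb
        usd_per_tb = (server_cost + disks * disk_cost) / total_tb
        print(f"{disks} disks: ${usd_per_tb:.0f}/TB, failure zone {total_tb:.0f} TB")
    # $/TB falls as disks per server rise, but the data you must re-replicate
    # on a server failure grows just as fast -- hence smaller, cheaper servers.

At 64 disks the failure zone is the 128 TB Hamilton mentions, even though that configuration has the best cost per TB.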
rahvin - Monday, March 14, 2011 - link
The other key thing is that there are a few circumstances that make virtualization a very bad choice. Virtual servers handle heavy I/O very poorly, with significant latency, even if the individual workloads are terribly small. This is an area where the Atom servers are big sellers: negligible power load, small form factor (1U, 18" deep), and full dedicated hardware, even if it's crappy hardware. One of the models comes with two mini-ITX Atom boards in the same 1U, and they each have their own hard drive.

This is a niche market, but based on Supermicro producing several models, I bet they are selling quite a few. I'd be curious whether Supermicro would reveal how many of the Atom servers they sell. I know there is quite a bit of discussion around about these things being the perfect custom firewall (one of those heavy I/O loads) on a corporate network.
mino - Monday, March 14, 2011 - link
"However, it seems that the HTX slot is at the end of its lifetime. The upcoming Xeons seem to come with a PCI-express 3.0 controller integrated, so they should be able to offer a low latency interface of up to 12.8 GB/s, or twice as much."AFAIR Hyper Transport has, thanks to the communication protocol itself, about 1/2 the latency of PCI. Regardless the physical layer.
JMC2000 - Tuesday, March 15, 2011 - link
That's what I didn't understand. HT 3.0 has lower latency and higher bandwidth than PCIe 3.0, so how can it be near the end of its life? HTX 3.0 has been in use ever since the launch of C32/G34.
jcandle - Wednesday, March 16, 2011 - link
Actually, the article is quite correct. There are a limited number of HTX expansion cards produced, and since most of them are designed for specialized tasks, there's a considerably small market for these cards to begin with. It's easier to kill off HTX in favor of PCIe, where the same card can be used in both Intel and AMD platforms. With PCIe 3.0, HTX is essentially dead. Now, that doesn't mean HT is dead; only HTX. Even now, with InfiniBand cards there are far better optimizations to be made on the software side to increase performance than eking out the very last drop of latency from HTX over PCIe.
jcandle - Wednesday, March 16, 2011 - link
Anyone notice the incorrect specs? According to their datasheet, the 6F+ has only two slots: the HTX and one PCIe. If it's like the usual Supermicro boards, the remaining slots should be unpopulated. The 6F, as it appears, has only 68 PCIe lanes, not the 80 indicated in the article. Five full-lane PCIe x16 slots on anything less than a 4U/workstation build would be considerably rarer.

H8QGL-6F/H8QGL-iF:
3 PCI-Express 2.0 x16
2 PCI-Express 2.0 x8 (using x16 slot)
1 PCI-Express 2.0 x4 (using x16 slot)
H8QGL-6F+/H8QGL-iF+:
1 HyperTransport slot
1 PCI-Express 2.0 x16
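For what it's worth, summing the electrical lanes in the lists above bears the 68-lane figure out (a quick check):

    # Electrical PCIe lanes per board, from the slot lists above.
    h8qgl_6f = 3 * 16 + 2 * 8 + 1 * 4   # = 68 lanes, not 80 as the article says
    h8qgl_6f_plus = 1 * 16              # = 16 lanes, plus the HTX slot
    print(h8qgl_6f, h8qgl_6f_plus)      # 68 16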