jdavenport608 - Thursday, September 9, 2010 - link
Appears that the pros and cons on the last page are not correct for the SGI server.
Photubias - Thursday, September 9, 2010 - link
If you view the article in 'Print Format' then it shows correctly.
Seems to be an Anandtech issue... :p
Ryan Smith - Thursday, September 9, 2010 - link
Fixed. Thanks for the notice.
yyrkoon - Friday, September 10, 2010 - link
Hey guys, you've got to do better than this. The only thing that drew me to this article was the name "SGI", and your explanation of their system is nothing.
Why not just come out and say, "Hey, look what I've got pictures of"? That's about all the use I have for the "article". Sorry if you do not like that, Johan, but the truth hurts.
JohanAnandtech - Friday, September 10, 2010 - link
It is clear that we do not focus on the typical SGI market. But you will have noticed that from the other competitors, and you know that HPC is not our main expertise; virtualization is. It is not really clear what your complaint is, so I assume that it is the lack of HPC benchmarks. Care to make your complaint a little more constructive?
davegraham - Monday, September 13, 2010 - link
I'll defend Johan here... SGI has basically cornered themselves into the cloud-scale marketplace, where their BTO-style of engagement has really allowed them to prosper. If you wanted a competitive story there, the Dell DCS series of servers (the C6100, for example) would be a better comparison.
Cheers,
Dave
tech6 - Thursday, September 9, 2010 - link
While the R815 is great value where the host is CPU bound, most VM workloads seem to be memory limited rather than CPU limited. Another consideration is server (in particular memory) longevity, which is where the R810 inherits the R910's RAS features while the R815 misses out.
I am not disagreeing with your conclusion that the R815 is great value, but only if your workload is CPU bound and if you are willing to take the risk of not having RAS features in a data center application.
JFAMD - Thursday, September 9, 2010 - link
True that there is a RAS difference, but you do have to weigh the budget differences and power differences to determine whether the RAS levels of the R815 (or even a Xeon 5600 system) are sufficient for your application. Keep in mind that the Xeon 7400 series did not have these RAS features, so if you were comfortable with the RAS levels of the 7400 series for these apps, then you have to question whether the new RAS features are a "must have". I am not saying that people shouldn't want more RAS (everyone should), but it is more a question of whether it is worth paying the extra price up front and the extra price every hour at the wall socket.
For virtualization, the last time I talked to the VM vendors about attach rate, they said that their attach rate to platform matched the market (i.e. ~75% of their software was landing on 2P systems). So in the case of virtualization you can move to the R815 and still enjoy the economics of the 2P world but get the scalability of the 4P products.
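To put rough numbers on that up-front-price-versus-wall-socket trade-off, here is a minimal Python sketch; every figure in it (price premium, extra wattage, electricity rate, outage cost) is a hypothetical placeholder rather than anything from the article:

```python
# Hypothetical numbers for illustration only; substitute your own quotes and measurements.

def extra_cost_of_ras_box(price_premium, extra_watts, kwh_price, years, pue=1.6):
    """Up-front premium plus the extra energy burned at the wall over the service life."""
    hours = years * 365 * 24
    extra_kwh = extra_watts / 1000.0 * hours * pue  # PUE folds in cooling overhead
    return price_premium + extra_kwh * kwh_price

def expected_downtime_savings(outages_avoided_per_year, cost_per_outage, years):
    """Expected saving from outages the extra RAS features would have prevented."""
    return outages_avoided_per_year * cost_per_outage * years

cost = extra_cost_of_ras_box(price_premium=4000, extra_watts=120, kwh_price=0.12, years=4)
benefit = expected_downtime_savings(outages_avoided_per_year=0.25, cost_per_outage=10000, years=4)
print(f"extra cost of the higher-RAS box: ${cost:,.0f}, expected benefit: ${benefit:,.0f}")
```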
tech6 - Thursday, September 9, 2010 - link
I don't disagree, but the RAS issue also dictates the longevity of the platform. I have been in the hosting business for a while, and we see memory errors bring down 2-year+ old HP blades in alarming numbers. If you budget for a 4-year life cycle, then RAS has to be high on your list of features to make that happen.
mino - Thursday, September 9, 2010 - link
Generally I would agree, except that 2-year-old HP blades (G5) are the worst way to ascertain commodity x86 platform reliability.
Reasons:
1) inadequate cooling setup (you'd better keep c7000 input air well below 20C at all costs)
2) FBDIMMs love to overheat
3) G5 blade mobos are a BIG MESS when it comes to memory compatibility => they clearly underestimated the tolerances needed
4) All the points above hold true at least compared to the HS21*, and except for 1) also against the bl465*
I am speaking from about 3 years of operating all three boxen in similar conditions. This became most clear to us when building power got cut off and all our BladeSystems were dead within minutes (long before running out of UPS), while our 5-year-old BladeCenter (hosting all infrastructure services) remained online even at 35C (where the temperature plateaued thanks to the dead HPs).
Ironically, thanks to the dead production boxes we did not have to kill the infrastructure at all, as the UPSes easily lasted the 3 hours needed...
Exelius - Thursday, September 9, 2010 - link
So, the "product differentiators" from HP are because they primarily sell through partner channels. This is the model IBM used for years; and if you buy your ProLiants through an HP partner and not a mere reseller, they will know the product line and can configure the hardware however you want. HP does very well at making their servers extremely modular, but you do have to know how/where to find the part numbers. Often this information is not widely distributed outside of HP's partner sales trainings (which are very good.)If you're used to the Dell model of sales, it doesn't make much sense. But because Dell sells directly, their policies for channel partners are stupid (the wholesale price for a channel partner is often higher than the retail price for a direct customer.) But because partners have more pricing and configuration flexibility from HP, the partner can often beat Dell's direct price with HP hardware (which IMO is higher quality than Dell anyway.) Dell doesn't want their partners to compete with their direct sales and HP doesn't want to jeopardize their partner relationships by pushing direct sales too hard.
There are pros and cons to each approach, and it all depends on how you handle your IT. If most of it is in-house, but you're not quite big enough to have an internal buyer who would take in the HP sales training, Dell makes a lot of sense because, well, it's easy to understand and most HP partners make their money off implementation services, not hardware sales. Dell is willing to work with you a little more. But if you look at companies where IT is not a core competency (regional insurance companies, banks, etc) a lot of them use consultants to do IT projects/maintenance and HP is making an absolute killing in this market.
JohanAnandtech - Thursday, September 9, 2010 - link
Insightful and enlightening comment. Still, there is a point where extreme modularity increases the complexity and price too much. The result is a slightly higher price (which is still acceptable), but sometimes also small configuration mistakes which cause extra delays. The result is a significantly higher cost. And this happens regularly, as even trained people make mistakes. So my first impression is that HP should lower the complexity a bit.
Exelius - Saturday, September 11, 2010 - link
True; but without a "direct sales" option they have no way to offer flexible configurations without having a different part number for every possible configuration. Most HP partners will simply use a sales quote tool to build server configurations (in fact; this is exactly what Dell does if you order through their sales reps, which is how you get the best prices.) Again though; HP partners are unlikely to give you a sweetheart deal unless you're buying implementation services from them as well. They make 5-10% on the hardware and 80-150% on the labor,But I'll tell you now that HP was consistently able to beat Dell on price through channels over the last 5 years. IMO this is because Dell has the same sort of parts system internally; HP cuts costs by not bothering to make sense of it all and just pushing it off onto their resellers. They're pretty much not interested in selling direct to consumers because it's really just a small part of their business.
What's killed Dell's profits over the last few years has been that the economic troubles have pushed small/midsize companies to outsource their IT. The companies they outsource to are probably HP partners. Thus, when these companies need hardware, it's likely to be HP (used to be IBM as well; but IBM's support is pathetic and their prices are in the stratosphere.) Channel resellers are also used to dealing with complicated product lists (last I checked Symantec's product book had something like 25,000 individual SKUs) so it's probably not going to change. If anything, it's likely to get worse. For all the consumerization in IT, the enterprise side is only getting more complicated. I wouldn't try to spec a server from HP without at least being familiar with their product line and the options it offers.
lorribot - Friday, September 10, 2010 - link
Being a bit weird, we buy Dell PCs (and a sprinkling of Macs) and HP servers.
The Dell PCs are cheap and do the job, the initial sales calls are good and they will bust a gut on price, but beyond that Dell are pretty much hopeless at support in the UK, with our sales manager changing 3 or 4 times a year and support calls always being referred to someone else in the team we have never heard of before.
Our HP partner, however, is much more stable, and they are generally knowledgeable and help configure servers accordingly, though for the most part the servers are actually straightforward until some nutty developer wants 16 disks locally in a DL580 G5. And we have a single point of contact for everything.
You pays your money and you takes your choice.
Did I mention the next-day-or-two (HP) versus 1-to-2-weeks (Dell) delivery options?
The drawer full of 2 GB DDR3 RAM I have from our HP blades is very irritating; I wish HP would supply them without any RAM installed, it is such a waste.
AllYourBaseAreBelong2Us - Thursday, September 9, 2010 - link
Nice article, but HP sells either DL380 (Intel) or DL385 (AMD) servers. Please correct all the DL387 references.
JohanAnandtech - Thursday, September 9, 2010 - link
Yes, fixed that one. I always get in trouble with these number codes.
Stuka87 - Thursday, September 9, 2010 - link
Great review, I love it when different platforms are compared to each other. Also happy to see AMD hold their own against the much-lauded 7500 series Xeons in a market that I feel AMD is better suited for (VM servers).
However, it's possible I missed it, but was the price of the SGI system listed anywhere? It would have been nice to see the price of each system as configured side by side.
vol7ron - Thursday, September 9, 2010 - link
Pictures on Page 5-6 look delicious.
Nice article
duploxxx - Thursday, September 9, 2010 - link
"Comparing the dual with the quad Opteron 6174 power numbers, we notice a relatively high increase in power: 244 Watt. So for each Opteron that we add, we measure 122 W at the wall. This 122 W includes a few Watts of PSU losses, VRM and DIMM wiring losses. So the real power consumed by the processor is probably somewhere between 100 and 110W. Which is much closer to the TDP (115W) than the ACP (80W) of this CPU."when the power draw test was done between 2 socket and 4 socket dell 815 did you remain with the same amount of dimms? so you divided the 2 socket amount in the 4 socket?
On the power draw calculation don't forget that you also have an additional SR5690 to account for which is 18W TDP, electronics etc, so I don't think it will be operating close to TDP but neither to ACP :)
btw a lot of mistakes with the HP 387G7 which should be 380G7
eanazag - Thursday, September 9, 2010 - link
This is a strong article. Very helpful and most of us basically need to decide which customer we are and what matches our apps and usage requirements.
JohanAnandtech - Friday, September 10, 2010 - link
Thanks, I appreciate that you took the time to let us know. We went through 5 weeks of intensive testing, and my eyes still hurt from looking at the countless Excel sheets with endless power and response time readings. ;-)
FourthLiver - Thursday, September 9, 2010 - link
At the end of page 12, you allude to a performance-per-watt analysis. Looks like you forgot to put it up. I'm chomping at the bit to see those numbers!
Please disregard me if I failed to RTFA correctly. Anandtech is the best; your (all of you collectively) articles are brilliant and correct down to the smallest details. This is another article that was an absolute joy to read. :]
JohanAnandtech - Thursday, September 9, 2010 - link
Well, you can't really calculate it, as it depends on the situation. At low loads, the system that consumes the least is the winner, on the condition that the response times stay low. But of course, if your systems are running at low load all the time, there might be something wrong: you should have bought more RAM and consolidated more VMs per system.
At higher loads, the power consumption at high load divided by the throughput (vApus Mark) is close to the truth. But it is definitely not the performance/watt number for everyone.
It depends on your workloads. The more critical processing power (think response time SLAs) is, the more the last-mentioned calculation makes sense. The more we are talking about lots of lightly loaded VMs (like authentication servers, file servers, etc.), the more simply looking at the energy consumed on page 12 makes sense.
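As a rough illustration of those two ways of reading the numbers, here is a small Python sketch; the throughput scores and wattages in it are made-up placeholders, not measurements from the review:

```python
# Made-up readings for illustration; substitute your own vApus Mark scores and wall power.
readings = {
    # platform: (throughput score at high load, watts at high load, watts at low load)
    "platform A": (400, 1000, 450),
    "platform B": (430, 1300, 600),
}

for name, (score, watts_high, watts_low) in readings.items():
    perf_per_watt = score / watts_high  # matters most when response-time SLAs dominate
    print(f"{name}: {perf_per_watt:.2f} score/W at high load, "
          f"{watts_low} W at low load (matters for lots of lightly loaded VMs)")
```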
mino - Thursday, September 9, 2010 - link
First, congratulations on a great article!
Now on to the small amount of mess in there:
"the CPUs consume more than the ACP ratings that AMD mentions everywhere"
1) Average CPU Power (ACP) is NOT supposed/marketed to represent 100% load power use
Wikipedia: "The average CPU power (ACP), is a scheme to characterize power consumption of new central processing units under "average" daily usage..."
2) 122W at the wall and 110W at the CPU??? Are you telling us the PSUs are 95% efficient, along with VRM/power/fans at 95% efficiency? (0.95 × 0.95 × 122 W ≈ 110 W)
Sorry to spoil the party, but that is NOT the case. 122W at the wall means 100W at the CPU at most, realistically 95W.
Otherwise, great work. Keep it up!
JohanAnandtech - Friday, September 10, 2010 - link
"1) Avegare CPU Power (ACP) is NOT supposed/marketed to represent 100% load power useWikipedia: "The average CPU power (ACP), is a scheme to characterize power consumption of new central processing units under "average" daily usage...""
You are right. But what value does it have? As an admin I want to know what the maximum could be realistically (TDP is the absolute maximum for non-micro periods) and if you read between the lines that is more or less what AMD communicated (see their white paper). if it is purely "average", it has no meaning, because average power can be a quite a bit lower as some servers will run at 30% on average, others at 60%.
These PSU are supposed to be 92-94% efficient and AFAIK the VRMs are at least 90% efficient. So 122 x 0.92 x 0.90 = 101 W.
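For reference, a minimal Python sketch of that back-of-the-envelope conversion from wall power to CPU power; the efficiency figures are the assumptions debated above, not measured values:

```python
def cpu_power_from_wall(delta_wall_watts, psu_eff=0.92, vrm_eff=0.90):
    """Estimate CPU power from the measured increase at the wall, given conversion losses."""
    return delta_wall_watts * psu_eff * vrm_eff

per_socket_wall = 244 / 2  # 244 W extra at the wall going from two to four Opteron 6174s

# With the 92% PSU / 90% VRM assumptions above: roughly 101 W per CPU.
print(f"{cpu_power_from_wall(per_socket_wall):.0f} W")
# The 95%/95% efficiencies questioned earlier would be needed to land near 110 W.
print(f"{cpu_power_from_wall(per_socket_wall, psu_eff=0.95, vrm_eff=0.95):.0f} W")
```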
mino - Saturday, September 11, 2010 - link
Well, I was a bit sleep-deprived when writing it, but anyway. It got a bit harsher than it should have.
In my experience the ACP values represent your average loaded server (<= 80% load) pretty well. But that is not the point.
AMD created ACP in response to the fact that their TDP numbers are conservative while Intel's are optimistic. That was the main cause, very well known to you as well.
Call me an ass, but I certainly do not remember AT bitching about Intel TDPs not being representative (during the last 6 years at least).
And we all know too well that those NEVER represented the real power use of their boxen, nor did they EVER represent what the "TDP" moniker stands for.
Currently the situation is such that an identical 2P AMD box with 80W ACP has ~the same power requirements as a 2P Intel box with 80W TDP. You have just proven that.
Therefore I believe it would be fair to stop bitching about AMD (or Intel) cheating in marketing (both do) and just say whether the numbers are comparable or not.
Arguing about spin wattage is not really needed.
JohanAnandtech - Monday, September 13, 2010 - link
"Arguing about spin wattage is not really needed. "I have to disagree. The usual slogan is "don't look at TDP, look at measurements". What measurments? The totally unrealistic SPECpower numbers?
It is impossible for review sites to test all CPUs. So it is up to vendors to gives us a number that does not have to be accurate on a few percent, but that let us select CPUs quickly.
Customers should have one number that allows them to calculate worst case numbers which are realistic (heavily load webserver for example, not a thermal virus). So all CPU vendors should agree on a standard. That is not bitching, but is a real need of the sysadmins out there.
mino - Thursday, September 9, 2010 - link
One thing I would love to see is having the lowest-end HP server put through its paces.
So far it seems to us the best option for vCenter hosting in small environments (with FT VMs hosting vCenter).
Maybe even run 1-tile vApus (v1, perhaps?) on it?
m3rdpwr - Thursday, September 9, 2010 - link
I would have preferred to have had the DL385 G7 compared. They can be had with 8- and 12-core CPUs.
We have close to 200 HP servers of all models, rack and blade.
Many are running VMs in our data center.
-Mario
duploxxx - Friday, September 10, 2010 - link
Same here, we also moved to the 385 G7 with the new 8-12 core CPUs. Nice servers with a huge core count, since we never run more vCPUs than pCPUs in a system. The Dell R815 looks like a good solution as well; as mentioned in the review, the BL685 and DL585 are way more expensive.
cgaspar - Friday, September 10, 2010 - link
The word you're looking for is "authentication". Is a simple spell check so much to ask?
JohanAnandtech - Friday, September 10, 2010 - link
Fixed.
ESetter - Friday, September 10, 2010 - link
Great article. I suggest including some HPC benchmarks other than STREAM. For instance, DGEMM performance would be interesting (using MKL and ACML for the Intel and AMD platforms).
mattshwink - Friday, September 10, 2010 - link
One thing I would like to point out is that most of the customers I work with use VMware in an enterprise scenario. Failover/HA is usually a large issue. As such, we usually create (or at least recommend) VMware clusters with 2 or 3 nodes, and each node is limited to roughly 40% usage (memory/CPU) so that if a failure occurs there is minimal/zero service disruption. So we usually don't run highly loaded ESX hosts, and the 40% load numbers are the most interesting. Good article and lots to think about when deploying these systems...
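To make the sizing rule behind that headroom explicit, here is a small Python sketch of the usual N+1 calculation; the cluster sizes are just examples, and the ~40% figure above simply leaves extra margin below these ceilings:

```python
def max_safe_utilization(nodes, failures_tolerated=1):
    """Highest per-node utilization that still lets the surviving hosts absorb the failed ones."""
    surviving = nodes - failures_tolerated
    if surviving <= 0:
        raise ValueError("cluster cannot tolerate that many failures")
    return surviving / nodes

for n in (2, 3, 5):
    print(f"{n}-node cluster, N+1: keep each host below {max_safe_utilization(n):.0%}")
# 2 nodes -> 50%, 3 nodes -> ~67%, 5 nodes -> 80%
```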
lorribot - Friday, September 10, 2010 - link
It would be nice to see some comparisons of blade systems in a similar vein to this article.
Also, you say that one system is better at, say, DBs whilst the other is better at VMware; what about if you are running, say, a SQL database on a VMware platform? Which one would be best for that? How much does the application you are running in the VM affect the comparative performance figures you produce?
spinning rust - Saturday, September 11, 2010 - link
Is it really a question? Anyone who has used both DRAC and iLO knows who wins. Everyone at my current company gets a tear in their eye when we remember iLO. Over 4 years of supporting ProLiants vs. 1 year of Dell, I've had more HW problems with Dell. I've never before seen firmware brick a server, but they did it with a 2850; the answer: a new motherboard. Yay!
pablo906 - Saturday, September 11, 2010 - link
This article should be renamed "Servers clash: finding alternatives to the Intel architecture". Yes, it's slightly overpriced, but it's extremely well put together. Only in the last few months has the 12-core Opteron become an option. It's surprising you can build Dell R815s with four 71xx series CPUs and 10Gb NICs for under a down payment on a house. This was not the case recently. It's a good article, but it's clearly aimed to show that you can have great AMD alternatives for a bit more. The most interesting part of the article was how well AMD competed against a much more expensive 7500 series Xeon server. I enjoyed the article, it was informative, but the showdown-style format was simply wrong for the content. Servers aren't commodity computers like desktops. They are aimed at a different type of user, and I don't think that showdowns of vastly dissimilar hardware, from different price points and performance points, serve to inform IT pros of anything they didn't already know. Spend more money for more power, and spend it wisely...
echtogammut - Saturday, September 11, 2010 - link
First off, I am glad that Anandtech is reviewing server systems; however, I came away with more questions than answers after reading this article.
First off, please test comparable systems. Your system specs were all over the board, and there were way too many variables that can affect performance for any relevant data to be extracted from your tests.
Second, HP, SGI and Dell will configure your system to spec... i.e. use 4GB DIMMs, drives, etcetera, if you call them. However, something that should be noted is that HP memory must be replaced with HP memory, something that is important when making a purchase. HP puts a "thermal sensor" on their DIMMs, which forces you to buy their overpriced memory (also the reason they will use 1GB DIMMs, unless you spec otherwise).
Third, if this is going to be a comparison between three manufacturers' offerings, compare those offerings. I came away feeling I should buy an IBM system (which wasn't even "reviewed").
Lastly, read the critiques others have written here; most are very valid.
JohanAnandtech - Monday, September 13, 2010 - link
"First off, please test comparable systems."I can not agree with this. I have noticed too many times that sysadmins make the decision to go for a certain system too early, relying too much on past experiences. The choice for "quad socket rack" or "dual socket blade" should not be made because you are used to deal with these servers or because your partner pushes you in that direction.
Just imagine that the quad Xeon 7500 would have done very well in the power department. Too many people would never consider them because they are not used to buy higher end systems. So they would populate a rack full of blades and lose the RAS, scalability and performance advantages.
I am not saying that this gutfeeling is wrong most of the time, but I am advocating to keep an open mind. So the comparison of very different servers that can all do the job is definitely relevant.
pablo906 - Saturday, September 11, 2010 - link
These VMware benchmarks are worthless. I've been digesting this for a long, long time and just had a light bulb moment when re-reading the review. You run highly loaded hypervisors. NO ONE does this in the enterprise space. To make sure I'm not crazy, I just called several other IT folks who work in large environments (read: 500+ users minimum, most in the thousands), and they all run at <50% load on each server to allow for failure. I personally run my servers at 60% load and prefer running more servers to distribute I/O rather than running fewer servers to consolidate heavily. With 3-5 servers I can really fine-tune the storage subsystem to remove I/O bottlenecks from both the interface and the disk subsystem.
I understand that testing server hardware is difficult, especially from a virtualization standpoint, and I can't readily offer up better solutions to what you're trying to accomplish. All I can say is that there need to be more hypervisors tested, and some thought about workloads would go a long way. Testing a standard business-on-Windows setup would be informative. This would be an SQL Server, an Exchange Server, a SharePoint server, two DCs, and 100 users. I think every server I've ever seen tested here is complete overkill for that workload, but that's an extremely common workload. A remote environment such as TS or Citrix is another very common use of virtualization. The OS craps out long before the hardware does when running many users concurrently in a remote environment. Spinning up many relatively weak VMs is perfect for this kind of workload. High performance Oracle environments are exactly what's being virtualized in the Server world yet it's one of your premier benchmarks. I've never seen a production high-load Oracle environment that wasn't running on some kind of physical cluster with fancy storage. Just my 2 cents.
pablo906 - Saturday, September 11, 2010 - link
High performance Oracle environments are exactly what's being virtualized in the Server world yet it's one of your premier benchmarks.
/edit should read
High performance Oracle environments are exactly what's not being virtualized in the Server world yet it's one of your premier benchmarks.
JohanAnandtech - Monday, September 13, 2010 - link
"You run highly loaded Hypervisors. NOONE does this in the Enterprise space."I agree. Isn't that what I am saying on page 12:
"In the real world you do not run your virtualized servers at their maximum just to measure the potential performance. Neither do they run idle."
The only reason why we run with highly loaded hypervisors is to measure the peak throughput of the platform. Like VMmark. We know that is not real-world, and it does not give you a complete picture. That is exactly the reason why there are pages 12 and 13 in this article. Did you miss those?
Per Hansson - Sunday, September 12, 2010 - link
Hi, please use a better camera for pictures of servers that cost thousands of dollars.
In full size the pictures look terrible, with way too much grain.
The camera you use is a prime example of how far marketing has managed to take these things:
10MP on a sensor that is 1/2.3" (6.16 x 4.62 mm, 0.28 cm²).
A used DSLR with a decent 50mm prime lens plus a tripod really does not cost that much for a site like this.
I love server pron pictures :D
dodge776 - Friday, September 17, 2010 - link
I may be one of the many "silent" readers of your reviews, Johan, but putting aside all the nasty or not-so-bright comments, I would like to commend you and the AT team for putting up such excellent reviews, and also for using industry-standard benchmarks like SAPS to measure the throughput of the x86 servers.
Great work and looking forward to more of these types of reviews!
lonnys - Monday, September 20, 2010 - link
Johan -
You note for the R815:
Make sure you populate at least 32 DIMMs, as bandwidth takes a dive at lower DIMM counts.
Could you elaborate on this? We have an R815 with 16x2GB and are not seeing the expected performance for our very CPU-intensive app; perhaps adding another 16x2GB might help.
JohanAnandtech - Tuesday, September 21, 2010 - link
The comment you quoted was written in the summary of the quad Xeon box.
16 DIMMs is enough for the R815 on the condition that you have one DIMM in each channel. Maybe you are placing the DIMMs wrongly? (Two DIMMs in one channel, zero DIMMs in the other?)
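To illustrate the arithmetic behind that advice, here is a minimal Python sketch; it assumes the R815's four G34 sockets with four DDR3 channels each (16 channels, 32 slots in total), so check it against your own system's memory population guide:

```python
def dimms_per_channel(total_dimms, sockets=4, channels_per_socket=4):
    """Average DIMMs per memory channel; bandwidth suffers if some channels are left empty."""
    channels = sockets * channels_per_socket
    return total_dimms / channels

for dimms in (8, 16, 32):
    dpc = dimms_per_channel(dimms)
    note = "every channel populated" if dpc >= 1 else "some channels empty -> bandwidth takes a dive"
    print(f"{dimms} DIMMs -> {dpc:.2f} per channel ({note})")
```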
anon1234 - Sunday, October 24, 2010 - link
I've been looking around for some results comparing maxed-out servers, but I am not finding any.
The Xeon 5600 platform clocks the memory down to 800MHz whenever 3 DIMMs per channel are used, and I believe in some/all cases the full 1066/1333MHz speed (depends on the model) is only available when 1 DIMM per channel is used. This could be huge compared with an AMD 6100 solution at 1333MHz all the time, or a Xeon 7560 system at 1066MHz all the time (although some vendors clock down to 978MHz with some systems - the IBM HX5, for example). I don't know if this makes a real-world difference on typical virtualization workloads, but it's hard to say because reviewers rarely try it.
It does make me wonder about your 15-DIMM 5600 system: 3 DIMMs per channel @ 800MHz on one processor with 2 DPC @ full speed on the other. Would it have done even better with a balanced memory config?
I realize you're trying to compare like to like, but if you're going to present price/performance and power/performance ratios, you might want to consider how these numbers are affected if I have to use slower 16GB DIMMs to get the memory density I want, or if I have to buy 2x as many VMware licenses or Windows Datacenter processor licenses because I've purchased 2x as many 5600-series machines.
nightowl - Tuesday, March 29, 2011 - link
The previous post is correct in that the Xeon 5600 memory configuration is flawed. You are running the processor in a degraded state due to the unbalanced memory configuration as well as the differing memory speeds.
The Xeon 5600 processors can run at 1333MHz (with the correct DIMMs) with up to 4 ranks per channel. Going above this results in the memory speed clocking down to 800MHz, which does result in a performance drop for the applications being run.
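A small Python sketch of that rule of thumb follows; the exact speed bins vary by CPU model, DIMM ranks and vendor, so treat the mapping as an assumption to verify against your platform's memory population guidelines rather than a definitive table:

```python
def xeon_5600_mem_speed_mhz(dimms_per_channel, dimm_rated_mhz=1333):
    """Rough DDR3 speed a Xeon 5600 box ends up at as the channels get more heavily loaded."""
    if dimms_per_channel <= 1:
        return min(dimm_rated_mhz, 1333)
    if dimms_per_channel == 2:
        return min(dimm_rated_mhz, 1066)  # many boards drop to 1066 at 2 DPC
    return 800                            # 3 DPC (or too many ranks) forces 800 MHz

for dpc in (1, 2, 3):
    print(f"{dpc} DIMM(s) per channel -> ~{xeon_5600_mem_speed_mhz(dpc)} MHz")
```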
markabs - Friday, June 8, 2012 - link
Hi there,
I know this is an old post, but I'm looking at putting 4 SSDs in a Dell PowerEdge and had a question for you.
What RAID card did you use with the above setup?
Currently a new Dell PowerEdge R510 comes with a PERC H700 RAID card with 1GB cache, and this is connected to a hot-swap chassis. Dell wants £1500 per SSD (crazy!), so I'm looking to buy 4 Intel 520s and set them up in RAID 10.
I just wanted to know what RAID card you used, whether you had any trouble with it, and what RAID setup you used?
Many thanks.
Mark
ian182 - Thursday, June 28, 2012 - link
I recently bought a G7 from www.itinstock.com and, if I am honest, it is perfect for my needs. I don't see the point in the higher-end ones when it works out a lot cheaper to buy the parts you need and add them to the G7.
Chrisrodinis - Thursday, February 27, 2014 - link
This article is about Dell servers in 2010. For comparison purposes here is an overview of a Dell PowerEdge M420 Blade server. This video has cool effects with upbeat production values. Please check it out, thanks: https://www.youtube.com/watch?v=iKIG430z0PI