19 Comments

  • Genx87 - Wednesday, March 15, 2006 - link

    Itanium's market share most likely comes at the expense of PA-RISC, which is what it is replacing on the HP side.

    IBM, Dell, and Sun aren't interested.
    Outside of HP, what big OEM is shipping enough Itanium machines to bother mentioning?
  • logeater - Tuesday, March 14, 2006 - link

    Here's a quick tip: Olives liven up the most jejune of pasta dishes. I prefer Spanish myself, but even Greek Kalamata can give it that zesty flavour for your next party function or Sunday dinner. Be sure to wash and thoroughly pit the brown fruit before slicing them.
  • Phiro - Tuesday, March 14, 2006 - link

    And what's with all the bagging on server virtualization? I think Anandtech's viewpoint on this is too focused on a crappy product like Microsoft's Virtual Server.

    We use ESX 2.5 from VMware where I work, and while management is still too worried about the risks to run tier 1 services on it, we run tons of tier 2 on VMware and most of non-production on it. VMware is poised to reengineer our disaster recovery systems as well.

    Down the road, the global manager in ESX 3.0 looks like an absolutely killer feature, and we will definitely rearchitect our environments to take advantage of it. We're already seeing tremendous cost savings in hardware with ESX 2.5, and 3.0 will only increase that margin.

    If Anandtech can't follow what the market leader in server virtualization is doing, and can't get the product working correctly, they might need to take a clue from what the rest of the world is doing. Normally you guys are pretty in tune with trends - I think you're way out in the rain on this one.
  • Stolly - Wednesday, March 15, 2006 - link

    Totally agree. That assessment of virtualisation displays a lack of real-world experience. We have implemented ESX at a customer who collapsed a 40-server system down to fewer than 10 machines. They have some two-node clusters with one node on real hardware and one node inside ESX, and it's the hardware nodes that remain a problem. Servers inside ESX are NOT more prone to problems.

    Plus, hardware migrations are a thing of the past. Using VMotion they can move a running server from one ESX server to another with zero downtime; the users do not even notice. A multi-week phased hardware migration can now be done in minutes. That's the power of server virtualisation, and I'm surprised that Anandtech is not conversant with the latest state of the art.
  • JustAnAverageGuy - Monday, March 13, 2006 - link

    "no less than" = "up to"

    "No less than" implies that that it is a minimum amount.

    "Up to" implies that the value given is a maximum.

    I doubt a server requires a minimum of 128GB of RAM. :)

    However, another excellent article, as always, Johan.

    - JaAG
  • DSaum - Monday, March 13, 2006 - link

    On the basis of "a few vague benchmarks", you state "Montecito is not only a vast improvement compared to Madison when it comes to running typical database applications, but also the platform has simplified quite a bit too." What happened to your objectivity? LOL

  • dexvx - Tuesday, March 14, 2006 - link

    Here's the HP briefing on Montecito:

    http://www.hp.sk/mediaservis/prezentacie/pdf/7_Mon...

    But let's run through the basics:

    Madison: 1-1.5GHz, 32KB L1, 256KB L2D, 9MB L3
    Montecito: dual core, 1.6GHz+ with HT, 32KB L1, 1MB L2I and 256KB L2D, 2x 12MB L3

    It does not take a genius to conclude that Montecito will perform a LOT better.
  • JohanAnandtech - Tuesday, March 14, 2006 - link

    Maybe because it is very obvious? 1MB L2 instead of 256KB L2, two cores versus one, four threads versus one...that is more than enough to call Montecito a vast improvement over Madison in database and other enterprise applications.

    Would you need benchmarks to know that a Clovertown, which has twice the cores of a Woodcrest but the same architecture, is a vast improvement in these kinds of benchmarks?
  • FreshPrince - Monday, March 13, 2006 - link

    imagine the fps you'd get from that beast... :D

    SAS really isn't that impressive yet...

    The enclosures I've seen are mostly 12-drive external cases...which can't do much.

    The SAS white paper I've read described a much more scalable solution, and you can't find those enclosures anywhere yet...

    I'll stick to my NexSAN SATABeast.... :D

    SCSI backplane + 42 x 500GB SATA 3.0Gb/s drives = 21TB raw in a 4U device.
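    For what it's worth, a minimal sketch of the raw-capacity arithmetic behind that figure (the 42 x 500GB SATABeast numbers are from above; the 12-bay 146GB SAS case is just a hypothetical comparison point):

    ```python
    # Raw enclosure capacity in decimal TB, as drive vendors quote it.
    def raw_capacity_tb(drive_count, drive_size_gb):
        return drive_count * drive_size_gb / 1000.0

    print(raw_capacity_tb(42, 500))   # SATABeast: 42 x 500GB -> 21.0 TB in 4U
    print(raw_capacity_tb(12, 146))   # hypothetical 12-bay SAS: ~1.75 TB
    ```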

    Until they come out with something equally impressive with SAS, don't bore me anymore ;)
  • cornfedone - Monday, March 13, 2006 - link

    Judging from the crap Asus has shipped in the past three years, they can't even deliver a properly functioning mainstream mobo, let alone a high-end product. Their SLI, A8N and ATI 480/580 mobos are all riddled with voltage, BIOS and memory issues that Asus can't or won't fix. Their days are numbered.
  • AkaiRo - Monday, March 13, 2006 - link

    When you talk about SAS you have to clarify whether you are referring to SAS 3.5" or SAS SFF (Small Form Factor). SAS 3.5", which is what the companies you are talking about in the article are using, is only a waypoint on the roadmap. SAS 3.5" and low-end/mid-range SATA enclosures use U320 connectors, while high-end SATA enclosures can use fibre or RJ-45 connectors as well. However, there are SAS (and SATA) SFF enclosures on the market already (HP's Modular Storage Array 50, for example).

    SAS/SATA SFF is the designated target for the majority of storage subsystems in the next few years, because server manufacturers are going to focus increasingly on spindle count as the biggest factor in overall I/O. The SAS SFF drives use the 2.5" platters from 15,000rpm drives, which is why the largest SAS SFF drives for now are 146GB. There is quite an initiative by the biggest players who deal in servers, workstations/desktops AND notebooks to move to a common platform for ALL three classes of machines, but it's a chicken-and-egg thing, with everyone waiting for someone else to provide the incentive to make the switch.
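    To put rough numbers on the spindle-count point, a minimal sketch; the per-drive random-IOPS figure and the bay counts below are ballpark assumptions for illustration, not vendor specs:

    ```python
    # Aggregate random IOPS scales roughly linearly with spindle count,
    # which is why more, smaller drives per U can beat fewer, larger ones.
    def aggregate_iops(spindles, iops_per_spindle=180):
        return spindles * iops_per_spindle

    print(aggregate_iops(8))    # e.g. 8x 3.5" bays in 2U  -> ~1440 IOPS
    print(aggregate_iops(16))   # e.g. 16x SFF bays in 2U  -> ~2880 IOPS
    ```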
  • Calin - Tuesday, March 14, 2006 - link

    The 2.5 inch drives are physically too small to reach high capacities, and many buyers don't know anything about the hard drive they have except its capacity. As a result, a physically smaller, cooler, even supposedly higher-performance drive at a higher price will be at a disadvantage compared to a physically larger, warmer and even lower-performance one at a lower price. Especially considering that you can buy 500GB 3.5 inch drives, but only 120GB 2.5 inch drives.
  • themelon - Monday, March 13, 2006 - link

    This is nothing new. Granted, once you go beyond 4 DIMMs per CPU you have to run them slower....
  • JohanAnandtech - Tuesday, March 14, 2006 - link

    8 DIMMs per CPU was very uncommon and required expensive components and engineering. I have seen it on the HP DL585, but there 8 DIMMs result in DDR266 speed, which is a serious performance penalty. Most DDR boards are still limited to 4 DIMMs per CPU.

    With DDR2, 6-8 DIMMs per CPU is relatively easy to do, at least at DDR2-667 speeds. You'll also see 6-8 DIMMs on affordable solutions, not only on high-end servers. That is new :-)
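    As a quick illustration of why the memory speed matters, a minimal sketch of peak theoretical bandwidth per CPU; the dual-channel, 64-bit-per-channel setup is my assumption, while the DDR266 and DDR2-667 data rates are the ones discussed above:

    ```python
    # Peak theoretical bandwidth: data rate (MT/s) * 8 bytes per 64-bit
    # channel * number of channels, expressed in GB/s.
    def peak_bw_gb_s(data_rate_mt_s, channels=2, bytes_per_transfer=8):
        return data_rate_mt_s * bytes_per_transfer * channels / 1000.0

    print(peak_bw_gb_s(266))   # 8 DDR DIMMs forced down to DDR266: ~4.3 GB/s
    print(peak_bw_gb_s(667))   # 6-8 DDR2 DIMMs still at DDR2-667: ~10.7 GB/s
    ```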
  • Beenthere - Monday, March 13, 2006 - link

    SAS doesn't impress me much at this stage. Yes, it's more reliable than SATA drives, but almost anything is. Drive performance is virtually identical between SAS and SCSI 320. All I see is a lower manufacturing cost that hasn't been passed on yet.
  • ncage - Monday, March 13, 2006 - link

    Improving performance is not the whole point of SAS. SCSI 320 is already fast as it is. Heck, SCSI 160 is fast. Anyways, I digress. It's the ability to use SATA-style cabling in a server that is a big deal when you're dealing with a little 1U case. It's also the ability to mix and match SATA with SCSI, which for some data centers could dramatically save money. If you mixed SATA/SCSI you could have a combination of performance/redundancy/cost all in one package. Granted, "critical" data centers will probably be all SCSI. I wouldn't advise eBay to put SATA drives in their servers :). You can't expect each revision of storage connection technology to provide better performance...sometimes it's not about performance at all.
  • Calin - Tuesday, March 14, 2006 - link

    There are enough servers that don't need hard drive performance, and will run anything mirrored in RAM. As a result, one could use the same boxes, only with different hard drives for different tasks. Makes everything simpler if you have a single basic box.
  • dougSF30 - Monday, March 13, 2006 - link

    Rev E dual-core (DC) Opteron TDPs have also always been 95W. The single-core (SC) Rev E parts were 89W.

    http://www.amdcompare.com/us%2Den/opteron/Default....

    You can look up the Rev E Opteron parts at the above link.

  • dougSF30 - Monday, March 13, 2006 - link

    These are likely not the parts you see at 68W with Rev F, so again, power is not rising (it is actually falling with Rev F).

    There has been a 68W "blade TDP" point that Rev E Opterons have been sold at, in addition to the 55W and 30W points.

    So, I suspect you are simply seeing 95W and 68W TDP families for Rev F, just like Rev E. Rev F will allow for higher frequency parts within those families, in part due to a DDR2 controller taking less power than DDR1, in part due to SiGe strain being incorporated into the 90nm process.
