67 Comments
myfootsmells - Monday, August 30, 2010 - link
can't wait to see the progress
Will Robinson - Monday, August 30, 2010 - link
This stuff WILL in fact run Crysis fully maxed right?
AstroGuardian - Tuesday, August 31, 2010 - link
It will probably say: Your machine doesn't meet the minimum requirements for Crysis
Shloader - Monday, August 30, 2010 - link
Don't suppose this would be an opportunity for a giveaway? Hell, I'd even pay shipping if you're tossing out some Athlon MP chips.
johnsonx - Monday, August 30, 2010 - link
Socket-A Semprons make better Athlon MPs than most Athlon MPs did; you can get your hands on those easily.
NICOXIS - Monday, August 30, 2010 - link
this will sound like the obvious AMD fanboy question, but why weren't AMD's six-core Opterons considered? Or if they were, why did you go with Intel?
Just curious
Guessor - Monday, August 30, 2010 - link
Obvious answer is that Xeons perform better than Opterons.. Less obvious answer is.. well, something along the lines of a "donation".. of some sort? Knowing the disposition of this site.
JarredWalton - Monday, August 30, 2010 - link
I believe the secondary reason is the same reason we're going with the L5640: power requirements. With a hard power cap, you want maximum performance per watt. Right now, Intel is winning in that area, so Xeon makes more sense.
redisnidma - Tuesday, August 31, 2010 - link
Jarred, you know this is a lie. Socket C32 processors are less power consuming than anything else on the market right now, and they would fit the bill perfectly for Anandtech as a whole, but I guess choosing this route would upset your "sponsors", so I understand the true meaning of this. ;)
Voo - Tuesday, August 31, 2010 - link
What? AMD winning in the performance/watt category? Ah yeah, and now please show the review and how they tested to come to THAT conclusion.
I mean come on, at least stay realistic with your claims.
mino - Tuesday, August 31, 2010 - link
Well, being so smart you should be able to at least read AnandTech's own 6100 evaluation. And they said that DESPITE their love affair with Nehalem.
Long story short, the 6164HE and L5640 are the kings and trade blows with each other depending on the load.
Generally, many mid-loaded VMs => better for AMD.
A few heavily optimized VMs => Intel wins.
Voo - Tuesday, August 31, 2010 - link
Ah, you mean that review where they didn't do any power consumption measurements? I'm sure their conclusions concerning that particular metric were great..
http://www.anandtech.com/show/2978/amd-s-12-core-m... oops, nothing.
Ah, but maybe http://www.anandtech.com/show/3648/xeon-7500-dell-... ? Oh, also no numbers?
Maybe they hid their measurements really well, but it doesn't seem so.
mino - Tuesday, August 31, 2010 - link
C32 is in a different performance class. If anything, the 6164HE is the way to go.
However it all depends on workload and, most importantly, on specific mobo/server availability/features.
DigitalFreak - Monday, August 30, 2010 - link
Please don't tell me you're starting that "Intel is buying off Anandtech" crap again.
Googer - Tuesday, August 31, 2010 - link
Please correct me if I am wrong, but if my memory serves me accurately, wasn't the IT section of Anandtech once solely sponsored by Intel?
Mumrik - Monday, August 30, 2010 - link
Riiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiight....
Meaker10 - Monday, August 30, 2010 - link
32nm and the requirement for low power.
Anand Lal Shimpi - Monday, August 30, 2010 - link
This is actually a very good question and something I will touch on in a later post. All of our component choices are our top picks for performance/power reasons. There's one exception and we'll get to that in a later post :)
Take care,
Anand
lwatcdr - Tuesday, August 31, 2010 - link
I am also wondering why so much CPU power? I would also expect you to be I/O bound more than CPU bound.
I would love to know how you handle things like failover, storage, backups, and so on.
Maybe I do not use Anandtech enough, but I have never seen any downtime or performance issues.
SolMiester - Tuesday, August 31, 2010 - link
28 down to 12 servers, so that's 6 servers per farm, and the servers are probably clustered, so that's 3 pairs: 1 for upgrades, 1 for backup and 1 live?....
Does sound like a shitload though....
Casper42 - Wednesday, September 1, 2010 - link
Sounds like a shitload? It always cracks me up how the desktop junkies come into a server thread and are amazed by a mediocre product or setup.
I work for HP and deal mainly with enterprise customers who are buying 16 blade servers with X5670s and 144GB of RAM in a single PO, only to turn around 2 months later and order more. And that's not including the orders from places like Microsoft and Facebook. Those ship out as an entire rack of equipment, pre-cabled.
Before I jumped over to the dark side (Sales), I last worked for a small division of a Fortune 500 and deployed 8 HP c7000 chassis and around 100 blade servers before I left earlier this year.
So 12 VM Hosts - small potatoes
Zorblack1 - Monday, August 30, 2010 - link
Are you serious? You come off stupid saying "...but why weren't AMD Opteron six core considered?" when nothing of the sort was said in the article.
The article was not about why they chose, just what they chose. Additionally, they never said they didn't consider it; go troll elsewhere.
Griswold - Tuesday, August 31, 2010 - link
Anand said it's a good question, which makes you come off stupid and look like a troll.
webdev511 - Tuesday, August 31, 2010 - link
Of course, the Opteron 6000 series really is best suited for situations where per-socket licensing comes into play, e.g. database and application servers.
Suffice it to say I'm looking forward to this series.
mfenn - Monday, August 30, 2010 - link
I'm very interested to hear which cloud infrastructure management software you picked (VMware, Eucalyptus, OpenNebula).
ruusnak - Tuesday, August 31, 2010 - link
Me too... also, I'm interested in hearing how much service automation you actually plan to use and how you implement the automation. After all, when we talk about clouds, that would mean - in addition to just virtualization - implementing the service catalogs, image repositories of those services, workflows performing automated deployments of those services, workflows to modify/remove the services etc.jigglywiggly - Monday, August 30, 2010 - link
virtualbox is THE best
that is all..
Oh, and why the need for so much CPU computing power? I'd be more troubled with DISK I/O performance.
DigitalFreak - Monday, August 30, 2010 - link
Since when is VirtualBox in the same league as VMware ESX, Citrix XenServer, etc. for enterprise server use?
Adul - Tuesday, August 31, 2010 - link
It isn't. Hell, I wouldn't even put XenServer up there, but it looks like a client of ours will want to go that route. It should be interesting. We run ESX and Hyper-V right now.
JHBoricua - Tuesday, August 31, 2010 - link
You don't consider XenServer enterprise ready and yet you follow that by stating you run Hyper-V? Talk about fail.
sublifer - Tuesday, August 31, 2010 - link
^+1
Agreed. XenServer can virtualize any OS and does so better than many others. They've also got the line on virtualizing Citrix farms. I've not gotten into the heavy stuff, but I use XenServer and like it. I liked the OSS version better but I've already migrated to the Citrix version... Last I checked, Hyper-V couldn't virtualize anything but Windows, and they certainly don't mind it if anyone buys more licenses for a given piece of hardware. I use MS where I need to but avoid it if I can. And virtualization lets me avoid a lot of expensive license purchases :) Yay me!
mino - Tuesday, August 31, 2010 - link
And an epic one at that.
Acanthus - Monday, August 30, 2010 - link
Although I think that is more the forum software than a hardware or resource issue.
I don't know apricot one about servers, but this series looks to be quite interesting. Keep it up!
joyufat - Monday, August 30, 2010 - link
Are you building these servers from scratch (i.e. buying the parts separately) or are you buying this from a vendor? Would be interesting to know which way you went and why.
Casper42 - Wednesday, September 1, 2010 - link
I was going to ask the same.
I assume based on this first article that they are rolling their own.
My question was going to be WHY?
If your site is important and by Virtualizing you are putting more eggs in each basket...
I would think you would want one of the big 3 (HP/IBM/Dell) backing you up with a 24x7x4h Warranty uplift (very common across all vendors) in case of a hardware failure.
Another bonus is that some, if not all, of those vendors offer power capping so you can make sure your machines don't exceed that power cap you mentioned before. I know that on the HPs, with either some optional software or by using a c7000 blade chassis, you can cap across all your machines so there is a little wiggle room, rather than each machine having a hard cap.
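To make the "wiggle room" idea concrete: a group-level capper basically keeps shifting headroom from idle machines to busy ones while holding the rack total under one budget. Below is a minimal sketch of that allocation logic; the host names, wattages and the proportional-split policy are all invented for illustration, and real products (HP's capping tools, Intel Node Manager) do this down in the BMC/firmware rather than in a script.

```python
# Toy illustration of group power capping: split one rack-level budget across
# hosts in proportion to their measured draw, instead of giving every host the
# same flat per-machine limit. All names and numbers here are made up.

def allocate_caps(measured_watts, group_cap_watts, floor_watts=100):
    """Return per-host caps that sum to roughly group_cap_watts."""
    total = sum(measured_watts.values())
    caps = {}
    for host, draw in measured_watts.items():
        share = draw / total if total else 1 / len(measured_watts)
        caps[host] = max(floor_watts, round(group_cap_watts * share))
    return caps

if __name__ == "__main__":
    draw = {"esx01": 180, "esx02": 310, "esx03": 95, "esx04": 240}  # watts, hypothetical
    print(allocate_caps(draw, group_cap_watts=900))
    # Busy hosts get more headroom, idle ones give it up; total stays near 900 W.
```

With those made-up numbers, the busiest host gets roughly 338 W of the 900 W budget while the nearly idle one is held to about 104 W, instead of every box being pinned to a flat 225 W.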
dreamlane - Monday, August 30, 2010 - link
This is a good intro post, and I look forward to more info about your server build choices.
drank12quartsstrohsbeer - Monday, August 30, 2010 - link
Are you paying full retail for this equipment?
Casper42 - Wednesday, September 1, 2010 - link
Nobody pays retail in the server market
vol7ron - Monday, August 30, 2010 - link
is this a giveaway? i want :)
Toadster - Tuesday, August 31, 2010 - link
You guys need to check out Intel Node Manager technology, where you can do group power capping at the server, rack, row or whole data center level. This is one of the use cases for Node Manager: capping power across a group of servers. You can run higher-performing CPUs as long as the power cap is sustained across the group (rack).
bryanW1995 - Tuesday, August 31, 2010 - link
why is there a hard power cap? are you running this out of anand's basement or something?
Pedro80 - Tuesday, August 31, 2010 - link
Nice one :-)
Calin - Tuesday, August 31, 2010 - link
Funny :)
They are probably hosted in a large datacenter that is itself running at its peak power draw (they usually can't get any more power from the power utility).
Casper42 - Wednesday, September 1, 2010 - link
Lots of CoLo facilities charge you based on power utilization because of the density that newer servers and virtualization have brought. More power used = more heat produced that must be cooled as well.
In a shared CoLo, if every customer spiked their usage at the same time, you also run the risk of exceeding some choke point within the DC and bringing down a lot more than your own rack. So penalties for exceeding your power cap are motivation for the admins to keep their machines in line.
Not to mention that AT is in Europe somewhere, and they seem to be more power conscious over there than in the US.
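Rough numbers show why colos bill on power. Everything below is a made-up example (12 hosts, 250 W average draw, a PUE of 1.6, $0.12/kWh), not AnandTech's actual figures, but it illustrates how the cooling overhead scales with the IT load.

```python
# Back-of-the-envelope colo power math with invented numbers.
hosts = 12
avg_watts = 250        # hypothetical average draw per host
pue = 1.6              # facility overhead multiplier (cooling, power distribution)
rate_per_kwh = 0.12    # hypothetical $/kWh

it_kw = hosts * avg_watts / 1000.0   # 3.0 kW of IT load
facility_kw = it_kw * pue            # 4.8 kW actually pulled from the grid
monthly_kwh = facility_kw * 24 * 30  # ~3456 kWh per month
print(f"{it_kw:.1f} kW IT load -> {facility_kw:.1f} kW at the meter, "
      f"~${monthly_kwh * rate_per_kwh:,.0f}/month")
```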
Adul - Tuesday, August 31, 2010 - link
I missed these server upgrade articles. I had not realized how many servers AnandTech runs on now. wow.
Anand, can you post your average HW fault rates for the previous infrastructure?
What bugs me most about the recent trend of many-core CPUs is that a failure in a CPU renders more and more resources unavailable with each generation.
If you had a 4x4 config before (4 sockets with 4 cores each), one socket failure was 25% of CPU resources. Now you upgrade to a 2x6 config (2 sockets with 6 cores each), but each socket failure takes you down by 50% ...
Maybe AT is not that critical on this, but quite a few applications are.
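The failure-domain point boils down to a single fraction: one dead socket costs 1/sockets of the box's CPU resources, so fewer, fatter sockets mean a bigger hit per failure. A tiny worked check of the two configs mentioned above:

```python
# Fraction of a host's CPU resources lost when a single socket dies.
for sockets, cores in [(4, 4), (2, 6)]:
    lost = 1 / sockets
    print(f"{sockets}x{cores}: one dead socket = {lost:.0%} of {sockets * cores} cores gone")
# 4x4: one dead socket = 25% of 16 cores gone
# 2x6: one dead socket = 50% of 12 cores gone
```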
JasperJanssen - Tuesday, August 31, 2010 - link
Well, turning off single defective chips isn't a feature that's present in simple dual/quad socket Xeon systems in the first place. That's the sort of thing that gets into HA systems, and would quite simply (especially today) cost more than an n+2 arrangement of servers. Probably even when n is 1.
The only way to run with a chip less is to drive to the datacenter, physically remove the dead chip, and put the server back into production. If you go to the trouble of that journey, you might as well bring along some spare parts and install them immediately.
These sorts of systems are slightly more common for memory, because a) there are typically more modules installed per server, and b) the chances of something going wrong with memory are higher. See for example the Wikipedia page for IBM's Chipkill, among others. Even so, they only use that on medium-range servers and upwards -- and basic Intel quads don't come under that heading.
Anand's redundancy is based on the fact that the hardware pools consist of 12 physical servers, among which the running virtual servers can be (re-)distributed at need. My WAG is they won't even have anything in place to do that automatically, since there won't be all that many hardware failures in a system that's relatively small like this.
This structure is part of what makes virtualisation so attractive. It's not just that resources can be managed by putting multiple services onto a single host machine; it's especially that in the case of hardware failure, or even excessive load on a single virtual service, you can redistribute the services to make better use of your hardware. Multi-core CPUs are one of the biggest driving forces behind virtualisation, that and the fact that many services are getting relatively less resource hungry (i.e. they're not getting bigger at the same pace as the hardware improves -- think DNS, IRC, DHCP, etc.).
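As a rough picture of what that redistribution looks like, here is a minimal greedy sketch that re-places the VMs of a failed host onto whichever surviving hosts have the most free memory. The host names, VM names and sizes are hypothetical, and a real deployment would lean on the hypervisor's own HA/load-balancing features rather than hand-rolled logic like this.

```python
# Toy rebalancing after a host failure: put each orphaned VM on whichever
# surviving host currently has the most free memory. Purely illustrative.
def redistribute(orphans, hosts):
    """orphans: {vm: mem_gb}; hosts: {host: free_mem_gb}. Returns {vm: host}."""
    placement = {}
    for vm, mem in sorted(orphans.items(), key=lambda kv: -kv[1]):  # biggest first
        target = max(hosts, key=hosts.get)
        if hosts[target] < mem:
            raise RuntimeError(f"no surviving host can fit {vm} ({mem} GB)")
        placement[vm] = target
        hosts[target] -= mem
    return placement

if __name__ == "__main__":
    surviving = {"host2": 20, "host3": 28, "host4": 16}  # free RAM in GB, made up
    orphaned = {"web1": 8, "db-replica": 16, "dns": 2}   # VMs from the dead host
    print(redistribute(orphaned, surviving))
```

Placing the largest orphaned VMs first keeps them from being stranded once the remaining free memory gets fragmented across hosts.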
haplo602 - Tuesday, August 31, 2010 - link
Eh, I guess working with HP 9000 class HW biases my view a bit. I really did not know that Xeon/Opteron systems were that dumb.
Anand, it would be great if you could also mention the hardware wear and tear during long-term usage. How long did the cooling fans last? How many have you changed? Power supply breakdowns? Hard disk breakdowns?
These are interesting points for those who like to build servers that will mechanically last as long as possible.
/M
7Enigma - Tuesday, August 31, 2010 - link
I'd also find that very interesting....
Anand Lal Shimpi - Tuesday, August 31, 2010 - link
I'll try and gather that information for our existing infrastructure. It's not too surprising but the number one issue we've had has been HDD failure, followed by PSU failure. We've actually never had a cooling fan go bad surprisingly enough :-)
Take care,
Anand
judasmachine - Tuesday, August 31, 2010 - link
I want one, and by one I mean one set of two 6-cores, totaling 24 threads....
Stuka87 - Tuesday, August 31, 2010 - link
What do you guys use so many servers for? I don't see this site getting *THAT* much traffic to require that kind of horsepower. But maybe I am wrong.
Syran - Tuesday, August 31, 2010 - link
They currently have 3800 active members just on the forums right at this minute; that has to account for some fairly decent database calls just there.
brundlefly77 - Tuesday, August 31, 2010 - link
Just curious as to why you still manage your own physical servers at all, especially for a content website...there are so many options which are more cost and labor effective and which provide services you otherwise need to implement and manage yourself.
It never ceases to amaze me how much more powerful and flexible AWS is than managing bare hardware.
Zero capital expense, no hardware depreciation schedules for tax purposes, instant and virtually unlimited upscaling (and downscaling - when was the last time you saw a colo effectively cost-managed for downscaling?!), CDNs, video streaming, relational and search-optimized databases, point and click image snapshots & backups, unlimited storage, physical region redundancy - all managed better than most dedicated IT teams, and with no sysadmin costs, and all manageable from any web browser - !?
I always look at DropBox - a phenomenal service with extraordinary requirements. If they can run off AWS with that kind of reliability and performance, you have to wonder why you're paying a sysadmin $50+/hr to work a screwdriver in a server room at all.
Syran - Tuesday, August 31, 2010 - link
Personally, it's probably because it is what it is. Anandtech is a hardware website, and one of the things they have enjoyed playing with is the server side of life. If they can afford it, it allows them to get their hands dirty building it as well. Many places don't have the boss working on setting up the servers for the clouds on their network, even in IT-focused shops.
klipprand - Tuesday, August 31, 2010 - link
Hi Anand,
I'm assuming (perhaps wrongly) that you'll be running ESX and consolidating further. Recent benchmarks showed quad socket Xeon systems outperforming dual socket systems by about 2.5x. Wouldn't it work even better then to run say 6, or maybe only 5, quad CPU servers? Just a thought.
Kelly
Syran - Tuesday, August 31, 2010 - link
My guess would be to allow them to maximize memory, which seems to be the true bane of any virtualized system. Our current bladecenter at work has 4 VM blades, each running quad-core Nehalem Xeons in 2 sockets with 48GB of RAM. Our blade CPU usage is normally in the neighborhood of 5-9% per blade, and typically about 55-59% memory usage.
mino - Tuesday, August 31, 2010 - link
For clustering there is a minimum count of servers necessary regardless of their performance.
So basically the lowest amount making any sense is 2xMGMT(FT) + 6xPROD(2x 3 per site).
Until you NEED more than 6 2S servers PER CLUSTER it makes no sense going for the big boxen.
Unless, of course, RAS is not a requirement.
Trinity01 - Wednesday, September 1, 2010 - link
Here's an idea: sweepstake the old parts!
xakor - Wednesday, September 1, 2010 - link
What kind of databases are you using? Any NoSQL database in production?
Also, wouldn't the massive Atom-based 120k server be good for what you are doing? How much pricier/cheaper would it have been?
How is most of the software built? Like .NET, PHP or Ruby? Have you ever gone back over the implementations and thought of changing? Have you looked at and considered F#/WebSharper?
Thanks a lot.
Ammohunt - Wednesday, September 1, 2010 - link
What virtualization software do you plan to use?
theangryintern - Wednesday, September 1, 2010 - link
wow, over $10k just for the CPUs (based on the price in the ad in the article; I'm sure they didn't pay nearly that). I'd be interested, at the end of the whole server build, if you could post what the whole upgrade costs would be.
GullLars - Wednesday, September 1, 2010 - link
Will you be using ioDrive(s) in your I/O-intensive cluster?
Michael McNamara - Wednesday, September 1, 2010 - link
I would have to agree with a few folks here: I would have thought that utilizing a hosting service provider for either a VPS or dedicated hardware would be a much more cost-effective and efficient solution, while at the same time freeing the "staff" from having to tackle the day-to-day hardware issues that can creep up and allowing them to focus on their primary task/goal: writing great articles for their users.
That doesn't mean you can't have a server or two that you use for testing or development, where you can literally pull it apart to your heart's desire.
Cheers!
Kaushalam - Thursday, September 16, 2010 - link
I have been owning an AMD-enabled personal computer for 10 years and never found any issue with the system. It's working well. I also enhanced it by upgrading some devices, and it is still working fine.
Most of my hard drive has been reserved for <a href="http://www.kaushalam.com/application-developement.... Application Development</a>