I work in India for a San Jose-based software company (one Steve Jobs doesn't like too much). Recently we have had a virtual lab service introduced which enables us to deploy virtual machines and carry out short-term development and testing. Now I have a two-pronged question:
1) Will there be a day when all (or most) of the development/testing work across platforms is migrated to the cloud? Will it be secure enough, and do you think the results will be consistent compared to using actual physical PCs with a single OS?
2) What is the future of Mac OS X VM support? Our company at this point doesn't offer Macs for the virtual lab.
1) Yes, and you don't really need to worry about consistency between physical and virtual as long as you run fully virtual (and that may include end-user equipment, yes). Server-side workloads can and should be virtualized anyway, so in that sense it is much easier to replicate a production environment with virtualization, and therefore to do your testing cheaper and better (you can have a perfect 1:1 copy, which is impossible - read: too expensive - in physical environments). "Secure enough"? If it's secure enough for production, it ought to be secure enough for dev. If your question is about test environments impacting production environments, you just need to make sure they run on separate pools / different hosts, including all the network / storage. (As in, if your test environment uses your production network and SAN, you may impact performance in production.)
2) Google it; it took me two minutes to find the answer.
There is already limited support for Mac OS X as a guest VM... but only OS X Server, and only on an OS X host (that's a licensing requirement, not a technical requirement). It's technically possible to run any version of OS X on any host (such as OS X on Windows) using "OSx86" modifications, but that's a EULA violation.
As far as I know, the only virtualization solution that offers official support to do this is Parallels. Virtualbox also has experimental support for doing it.
I think we need to change the title: "Ask the Experts, with answers by our expert readers!" :-)
Why can't you have a "showdown" between VMware and Microsoft's Hyper-V technology? MS has a compelling story and a great price point, but VMware's market share and market history make this a hard decision. With all the reviews at AnandTech, I think it could be the first real documented and public clash of the titans.
There are lots of aspects you need to look at before you can state that one has a more compelling offer than another. You can compare them on technology, roadmap, management, price, and performance. Sure, I think AnandTech could make a very compelling comparison, but that would be one very time-consuming review that could only be delivered in several pieces. I speak from experience.
Second to that, there are always pros and contras for one and the other. For example, from a price point one might look more interesting, but then you are only comparing from a license perspective; compare, say, how many real-life VMs you can actually run on a physical box with each vendor. You might be surprised...
Let's not limit ourselves to just VMware or Hyper-V.
KVM is great technology, and being built into the Linux kernel it takes advantage of developments from the Linux community. Kernel shared memory (KSM) is just one of many examples.
Many large companies are using KVM, including IBM, Red Hat, Dell, HP, and Intel. It is just one of several choices available... but it's free, with open source code available.
My opinion is that KVM may become the leader in virtualization technology simply because of cost. Compare VMware licensing costs ($$$$) to $0 for KVM. That makes a huge difference in a large datacenter where someone is implementing their own private cloud, and even more so for a large Cloud Service Provider with thousands of servers.
Let's also not forget that AWS is built on Xen.
Coming from the acknowledged leader in cloud, that says a lot for Xen in itself.
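On a Linux host with KSM enabled, you can watch this deduplication happening through sysfs. A minimal sketch (the paths are the standard kernel interface; the exact set of counter files varies by kernel version):

```python
# Read KSM counters from sysfs on a Linux KVM host.
from pathlib import Path

KSM = Path("/sys/kernel/mm/ksm")

def ksm_stats():
    """Return all integer KSM counters, e.g. run, pages_shared, pages_sharing."""
    return {f.name: int(f.read_text()) for f in KSM.iterdir() if f.is_file()}

stats = ksm_stats()
print("KSM running:", bool(stats.get("run")))            # run == 1 means active
print("pages currently shared:", stats.get("pages_sharing", 0))
```

The pages_sharing counter is the payoff: it roughly counts how many guest pages the kernel is serving from a single merged copy, i.e. how much memory you are saving.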
What's with people calling it "kernel shared memory" lately? I thought it was "kernel samepage merging".
I use KVM at work specifically because we can run paravirtualized and I don't have to worry about recompiling modules/installing software (VMware) or running special kernel builds (Xen) whenever we upgrade. We can use the same kernel we do on our hardware and everywhere else. Also, performance was all-around better compared to ESX 4.1.
A simple one: can an Atom-based server be good as a VPN server? We need to reduce the cost of breaking Great Firewalls here.
That doesn't really have anything to do with virtualization, though. And the answer is, no, not for a large amount of bandwidth. VPNs require encryption, and the Atom is not ideally suited to encrypting large amounts of data.
Besides, it's probably cheaper just to get a Linode; it'll cost less than a dedicated Atom would.
What I see a lot is virtualization on blade chassis; the problem is that I'm not convinced this is the best way to go. More servers would mean more idle time in the end, and more management.
I would like to know what hardware is best to virtualize on - and not what is better per server, but what is better per X amount of money you spend over the 2-3 years you use the hardware. And better would probably mean something like faster / more reliable / lower power usage.
After that is answered it would be great to have the same kind of insight into the storage part of virtualization.
there is no "best to virtualize on" until you get into specifics. For example, if you need to virtualize 100 servers, you already have a FC SAN, and you only have 10U of space for your CPU and memory in your data center, you'd better believe that a blade chassis will be the correct starting point.
On the other hand, if you just want to test/dev things on a new infrastructure and aren't sure how many servers you will need yet, any old poweredge 2800 with enough drives will do the trick just fine.
"what is better per X amount of money you spend over the 2/3 years you use the hardware"
If you're only going to use it for 2-3 years, don't buy it. It's probably cheaper if you rent it from a cloud or virtual hosting provider.
"And better would probably mean something like faster / more reliable / power useage"
Larger companies pay ridiculous sums of money to have people do this type of TCO calculation for them.
Most of the reliability is built into the software if you're virtualizing, so you actually try to get systems with less hardware redundancy... for example, you may choose a server with only one power supply instead of redundant power supplies if your design includes the ability for all the virtual machines on that physical host to transfer to other hosts in the event of a host failure.
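To make that failover-capacity idea concrete, the sizing rule is simply N+1: enough hosts to carry the load, plus one so the VMs of any single failed host still fit on the survivors. A back-of-the-envelope sketch, with every figure assumed:

```python
import math

# Assumed figures for illustration only.
total_vm_ram_gb = 400   # RAM reserved by all VMs combined
host_ram_gb = 96        # usable RAM per physical host

n = math.ceil(total_vm_ram_gb / host_ram_gb)  # hosts to carry the load: 5
hosts_needed = n + 1                          # plus one to absorb a host failure
print(hosts_needed)                           # 6
```

The same arithmetic applies to CPU or whatever resource you run out of first; memory is just the usual constraint in consolidation designs.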
The choice of faster vs. power-efficient depends on the applications you are delivering. If the apps are business critical, you choose fast. If the apps are nice-to-haves but not critical to the business' success, you choose power efficient. The Xeon L series is the processor equivalent of "slower but more power efficient". Low-voltage memory saves about 10% of power consumption over regular server memory.
If the project is large enough, you can spend extra on the performance virtualization farm for business-critical apps and save more on the efficient virtualization farm for everything else.
As for shared storage, it's exactly the same: different vendors with different products, all aimed at different scenarios. Tiered storage, thin provisioning, high utilization, how many controllers per enclosure, how many LUNs per enclosure, DAS vs SAN vs NAS, FC vs iSCSI, FCoE vs 10GbE, SAS vs SATA vs SSD vs NL-SAS... You just pick the ones you want based on how much budget you have versus how much performance you need. You explain the positives of some systems that are too expensive and the negatives of some systems that are under budget, and let the guys with the money choose their poison of preference.
You mentioned a lot of good points. Others to consider regarding the costs of buying vs renting from a cloud provider:
Insurance costs for facilities, equipment, and liability (in case of loss).
HVAC - it's not just the power for the servers but the AC, and more importantly the backup systems.
Staff and staff expertise - can you hire and retain expert staff? What does that cost?
If you own, you pay for server maintenance agreements with the vendors; if you use the cloud, it's built into the hourly costs.
Hardware renewal costs - if you own, you pay to upgrade the hardware; if you use the cloud, it is their job, and thanks to scale they get larger discounts (and thus better costs/rates) than you probably can.
Utility companies typically give large discounts to big users of power. Microsoft or Amazon will get a huge discount compared to the rates a small company would pay for electricity.
Also, something often overlooked is that cloud SPs can locate their data centers (DCs) almost anywhere they can find the best deals on the power the DC would consume... which passes those savings on to the users of that DC/cloud SP.
As you stated, many people realize this is a complicated question that has different answers depending on the company/situation involved.
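A toy model of that buy-vs-rent calculation, with every figure assumed, just to show how the hidden line items stack up over a 3-year horizon:

```python
# Toy 3-year TCO comparison, owning vs renting. All numbers are assumptions.
years = 3

own = {
    "servers": 15_000,                 # hardware purchase
    "maintenance": years * 2_000,      # vendor support contracts, per year
    "power_and_hvac": years * 4_000,   # electricity plus cooling, per year
    "insurance": years * 1_000,
    "staff_time": years * 10_000,      # fraction of an admin's salary
}
rent = {"instances": years * 12 * 800}  # assumed monthly cloud bill

print("own :", sum(own.values()))    # 66,000
print("rent:", sum(rent.values()))   # 28,800
```

With these made-up numbers renting wins; with a different workload profile or cheaper staff, owning can win. The point is that the comparison is only honest once every one of those line items is on the sheet.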
Some virtualization companies offer free products, such as VirtualBox, while others like VMware charge hefty prices for enterprise customers. I am planning to start a small business in IT, and I'm wondering what benefits virtualization will bring to the organization, especially with small starting capital; I won't be able to afford hefty licenses (I mean I won't be using corporate software such as SQL Server or Oracle, but will fall back on MySQL or some other database). I'm also concerned because some of the licenses are GPL (which would mean any software developed on top must be GPL too). With clients only numbering in the tens (10-30), would it be better to buy a high-end server (a Xeon comes to mind) and virtualize the heck out of it, or separate small servers (multiple Phenom II X6 boxes) designed to balance the load? And what if the number of clients increases to 100+?
Databases also come to mind; I presume you have to put either a router or a switch between the database server and the virtual server then?
Also, how do you use virtualization properly anyway? With a central server it is quite easy to manage things: just run some server software and you are done. With virtualization, how do you divide things? (I am really clueless...)
Thanks for the answers, everyone.
In your case I would suggest getting 2 or 3 smaller servers and a storage unit (maybe something like this http://h18004.www1.hp.com/products/servers/prolian... because virtualization is meaningless without shared storage. On the servers you can then install the hypervisor of your choice (all vendors have free editions) and start creating VMs. If you plan to use as much open source as possible, I would recommend using ESXi (the free VMware hypervisor) because of its wide range of OS support. I don't know what your budget is, but you could also consider licensing to get some features which will make you work and sleep easier - at VMware that would be the "Essentials Plus" licence (http://www.vmware.com/products/vsphere/buy/small_b... while Microsoft offers System Center Essentials (http://www.microsoft.com/systemcenter/en/us/essent...
Hope this helps. GL in business waters.
I think you clearly need to invest some time in the IT area before you start anything. You are mixing OS-hosted virtualization with bare-metal virtualization, and consumer-grade systems with servers; you're new to the networking topology, not to mention the storage part. Lots of work to do.
So to start with: any virtualization vendor these days has free virtualization platforms on offer for each approach (OS-hosted or bare-metal). OS-hosted virtualization is a kids' playground - run dev/test on it if you want, but production is out of the question (before the comments on this: depending on your definition of production, of course...).
You need to know what storage sizing and features you need for your type of business, and the same for networking and whatever application communication you will have.
There are many tools to manage virtualization platforms, some free, some not; it really depends on the platform you intend to take.
That's the thing: I know all about the hardware part, even storage services using NAS, or Active Directory (depends on the system; on the last system I used Novell for an outsourcing company that needed it). Network topology doesn't matter, but if you use a virtualized server (say one server with 10 LAN ports on PCIe), do you need to connect only the main server's LAN port, or does every single LAN port need to be connected to a switch or router to function? I have tried VMware at home, and apparently I can use my single LAN port shared between Windows and a virtual Linux. The question is: can I keep doing this, or should I install something else? I just don't see the benefit of virtualization... probably if you are web hosting it would be great, since every user can have their own dedicated virtual host, but for normal companies... I still have no idea.
Using virtualization for testing is great; for development, I'll die before I do it. My partner already complains about the slowness of his workstation with an Intel quad core (Core 2); giving him virtualization just won't do.
So to clarify my question: what do I gain from virtualization? (The cloud, when mature, I can see the point of, but on virtualization for an organization I'm still in the dark.)
PS: sorry for the lack of information in the above post; I should have clarified that I'm not exactly a newbie - more or less only a newb on virtualization...
"I am planning to start a small business in IT, and im wondering what benefits virtualization will bring to the organization, especially with small starting capital"
You will have very few benefits up front, but if you expect to need 15+ servers in the first 3 years it can be a wise choice to start virtual so you can expand virtually.
The first benefit that small businesses realize with server virtualization is hardware consolidation. You can probably run 15 light-use servers on a single C1100 for about $10k. Without virtualization, you'd be trying to find the cheapest servers possible and hosting each OSE on its own server. At about $1k per server, you'd be spending $15k to run the same number of servers, and wasting lots of energy keeping them powered and cooled.
Later advantages are that you can leverage shared storage to thin-provision your drive space so you don't have to take your servers offline in the future to add storage. Another shared storage benefit is cheap cluster failover for your virtual compute systems. As the business grows, there are many other advantages you can use, from ease of management to I/O shaping or I/O QoS.
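Thin provisioning is easy to demo at the file level, because thin virtual disk formats (qcow2, thin VMDKs, dynamic VHDs) grow on write the same way a sparse file does. A minimal sketch with a plain sparse file standing in for the disk image (Linux semantics; st_blocks is not available on Windows):

```python
# Create a "100 GB" disk image that initially occupies almost no real space.
import os

path = "thin_disk.img"
with open(path, "wb") as f:
    f.seek(100 * 1024**3 - 1)  # seek to the 100 GB mark
    f.write(b"\0")             # one byte; everything before it stays unallocated

st = os.stat(path)
print("apparent size:", st.st_size)          # ~107 billion bytes
print("actual usage :", st.st_blocks * 512)  # a few KB until data is written
os.remove(path)
```

That gap between apparent and actual size is exactly what lets you promise each VM more disk than you physically own, as long as you monitor real consumption.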
"would it be better to buy high end server (Xeon comes to mind), then virtualize the heck of it, or separate small server (multiple Phenom II X6) designed to balance the load?"
Depends on the goal. If your goal is server consolidation, get the Xeon because you will be able to consolidate more systems per physical host. If your goal is business continuity through whatever method (clustering, load balancing, backup restores) then get the Phenoms.
"or if the number client increased to 100+?"
Number of clients is not a good way to measure your business' virtualization needs. You need to determine whether you would even need one extra server in your journey from 10 clients to 100 clients. If you decide that you won't need any extra systems, then the requirements are the same for both scenarios. Some companies will want to add extra capacity for various reasons. Other companies will try to slide by for as long as they can with what they have.
"Database also comes to mind, i presume you have to put either router or switch to the database server connecting with the virtual server then?"
Unless you virtualize the DB server, yes, you will need a switch to connect them. If you do virtualize the DB server, the hypervisor's virtual switch will connect it to the other virtual machines (and to any physical machines).
"Also how do you use virtualization properly anyway? with central server, it is quite easy to manage things there, just run some server software and you are done, with virtualization, how do you divide things?"
You manage virtual servers with the management tools from the vendor you use: the System Center stuff (SCCM, SCOM, VMM) for Hyper-V, the XenCenter stuff for XenServer, and the vCenter stuff for VMware.
I don't know what you mean by divide things, but I'm guessing you mean dividing computing resources. Each vendor has its own way of doing it, but dividing resources is the whole point. Also, any management software that you use to manage individual physical servers can still be used to manage virtual servers so no big difference from that perspective.
This answers my question, at the very least. :D So in effect, I could deploy a web server, a Novell server, repositories and testing, even a file server on a single physical server, divided by virtualization, right? Instead of dividing the servers physically, I just consolidate the services but install them in different VMs.
Got it - question half answered. :D I have now replied with more detailed questions, though... so it's only half answered. But thanks, man, for enlightening me.
If you're developing a purely hosted solution, the use of MySQL and other GPL'd software does not require you to GPL your solution. The GPL only covers distribution, and there's no distribution in a hosted scenario.
In fact, you can even distribute GPL'd software with your proprietary software without GPLing the entire thing as long as you're not directly linking. A client/server relationship (distributing and using MySQL, but connecting through a socket) would not be a problem. Similarly, a lot of libraries are licensed under the LGPL, which *does* allow linking LGPL'd software into non-LGPL software without LGPLing the whole thing.
I think one of the main benefits of virtualization for a small company is disaster recovery. You can take snapshots of a machine's disks and store them off the box/network or even offsite so in the event of a major virus outbreak or some other system failure, restoring your equipment to a working state is as simple as copying the virtual disk images back to the host. Hell, you could even completely rebuild the host and as long as you have the virtual disk images, you can be right back to where you were previously in a matter of minutes.
You'll spend a bit more on the hardware, but the ease of recovering from something catastrophic will make you happy you spent it.
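The restore-from-images workflow above can be as simple as a scheduled copy job. A bare-bones sketch - the paths and the .vhd extension are assumptions, and the images should come from a snapshot or a shut-down VM so they are consistent:

```python
# Copy each VM disk image to an offsite/backup mount with a date stamp.
import shutil
from datetime import date
from pathlib import Path

vm_store = Path("/var/lib/vms")           # assumed location of the disk images
backup = Path("/mnt/offsite/vm-backups")  # assumed offsite or NAS mount
backup.mkdir(parents=True, exist_ok=True)

for img in vm_store.glob("*.vhd"):
    dest = backup / f"{img.stem}-{date.today():%Y%m%d}{img.suffix}"
    shutil.copy2(img, dest)               # copy, preserving metadata
    print("backed up", img.name, "->", dest.name)
```

In practice you would rotate old copies and verify the images boot, but the core of the DR story really is just files that can be copied back.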
Currently, the VM hosts I work with are around $20,000. They're HP DL380s with two 6-core Xeons, 48 GB of RAM, 6 NICs, and two HBAs, and I think these have 6 hard drives in RAID 1+0, but not much local storage is used; all the virtual disks are on the SAN. However, you could easily get 6 disks in RAID 1+0 to house your virtual disks locally. You could even step down to a DL360 and accomplish the same thing.
Lunan - I started and owned an IT consulting company in the Raleigh, NC area until it was bought out by a slightly larger company in 2008, whom I now work for. Most of our clients are in the 10-to-300-user range. I currently maintain about 50 clients in the area. We virtualize all of our customers moving forward, even if they only have one physical server. We made it company policy about two years ago.
Please feel free to contact me outside this forum, and I will help you in any way I can. My company's name is Netsmart INC. My contact info is on our website; just look at my handle and match it up with my contact info on the site. (I don't want to post any emails, etc. here.)
Ever since I learned about virtualization, I have embraced it 100% and it has paid big dividends for our staff and our customers.
There are plenty of articles comparing servers (both discrete and blades), but what I haven't seen (or have I just missed them) are good reviews testing the various interconnect fabrics. For example, I can build a cluster with HP DL380s and Arista 10gig switches, or I can build a cluster with an HP c7000 blade chassis and its integrated switch. (Likewise, LSI's SAS switch vs the SAS switch available for the blade server.)
What are the performance tradeoffs between best of breed and an integrated solution?
Does VMware plan to implement similar functionality to the recently introduced RemoteFX for their virtualization platform? With package-integrated GPUs, having tens of VMs with real GPU capacity doesn't seem too far-fetched.
Does VMware have a focus on any particular storage technology? It seems, from a functionality standpoint, that NFS is king. Going forward, would we be best served purchasing NFS-capable storage devices over block-level? Will block-level storage always be the performance king?
Instances of this family provide general-purpose graphics processing units (GPUs) with proportionally high CPU and increased network performance for applications benefitting from highly parallelized processing, including HPC, rendering and media processing applications. While Cluster Compute Instances provide the ability to create clusters of instances connected by a low latency, high throughput network, Cluster GPU Instances provide an additional option for applications that can benefit from the efficiency gains of the parallel computing power of GPUs over what can be achieved with traditional processors. Learn more about use of this instance type for HPC applications.
Cluster GPU Quadruple Extra Large 22 GB memory, 33.5 EC2 Compute Units, 2 x NVIDIA Tesla “Fermi” M2050 GPUs, 1690 GB of local instance storage, 64-bit platform, 10 Gigabit Ethernet
EC2 Compute Unit (ECU) – One EC2 Compute Unit (ECU) provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor.
I'm not sure I'm qualified to say whether VMware has a "focus on any particular storage technology", but there are definitely advantages and disadvantages to the various technologies. In our case, we looked at 4Gb Fibre Channel versus GigE iSCSI because we had both infrastructures in place. We opted for GigE because it allowed us to set up the hosts in a clustered, hot-failover configuration.
In our setup we've got two dual-socket quad-core Xeons as the host servers, and our storage resides on an EqualLogic iSCSI SAN. We boot our VMs off the SAN, and VMware allows us to easily move running VMs from one head node to another... and to fail over if one goes down. With two GigE switches and port aggregation you can get quite a bit of bandwidth and still retain failover in the network fabric.
The problem with FC is that it's a point-to-point connection. The server wants to 'own' the storage, and it just isn't well suited to a clustered front end. We could boot our VMs off an FC box, but the problem arises when we try to give two different boxes access to the same storage pool.
We're currently running something like 15 servers on our system, with one 5000-series SATA EqualLogic box on the back end, and we're not seeing any bottlenecks. On the CPU side, we've got loads of spare cycles. We run file servers, mail, web... typical university departmental loads (from chatty to OMG Spam Flood).
As for NFS, I'd certainly prefer an iSCSI SAN solution if you can afford it. Block level all the way. We're very happy with EqualLogic, and we get a good discount in edu. In fact, I'm lining up money to buy another device. Promise just came out with a proper SAN but it's still pricey. The advantage there is that it appears to be a lot cheaper to expand; their storage boxes tend to be very inexpensive and they've proven well made to us. I've got a number of them with zero issues over several years. Whatever you do, check VMware's compatibility list for storage.
Using the power numbers in that link, I've done the math, and it shows that a 2U server with 480 ARM cores will consume roughly the same amount of power as a 2U server with 4 quad- (or six-) core Xeons.
So when you put LOTS of small CPUs in the same space, they end up consuming the same power as normal CPUs. What is the advantage of using this hardware for cloud computing, then?
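For concreteness, the trade-off behind that question looks something like this; the per-core throughput ratio and the power figure are pure assumptions:

```python
# Same assumed 2U power envelope, very different core counts.
arm_cores, arm_perf_each = 480, 1.0    # take one ARM core as the unit of work
xeon_cores, xeon_perf_each = 24, 8.0   # assume a Xeon core does ~8x the work
power_watts = 1_200                    # assumed identical draw for both boxes

print("ARM  work/watt:", arm_cores * arm_perf_each / power_watts)    # 0.40
print("Xeon work/watt:", xeon_cores * xeon_perf_each / power_watts)  # 0.16
```

Under those assumptions the many-small-cores box wins on throughput per watt for embarrassingly parallel work, and loses badly on anything single-threaded - which is essentially the answer the replies below give.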
Cost of power is one huge one. HVAC is the largest operating expense of most datacenters: it's not just the power to run the servers but the AC to remove the heat they generate.
Well, it's not advantageous for every "use case".
But remember that not every application/process requires a 3 GHz CPU core.
There are many applications where an ARM is more than powerful enough. On most home computers people only use 10% of their CPU as it is, and they already think they have fast computers.
In today's world we are more I/O-bound than CPU-bound.
Run some Linux OS on the ARMs, load-balance them, and you can do incredible computing.
I meant comparing servers that would have similar amounts of performance, but with fewer high-end CPUs versus a lot of low-end CPUs - like the 480 ARM cores someone posted about in the article I mentioned.
But I agree that, against 16-core servers, using ARM/Atom would make sense if you have a light load - but only if you are sure that your load will stay low for a very long time. If you need to add hundreds of CPUs to keep up with the increase in usage, then it doesn't make sense.
Are there any updates, especially on the open source options, for support of 3D gaming in virtualized systems?
I know VMware Workstation has some support when using the same local machine, but with the server-based products it doesn't seem like there is much out there yet.
I spend a lot of time working with Citrix, and Citrix has some beta releases that support this using a couple of the Nvidia cards. It is coming, and Citrix is THE leader in this type of technology. I figure by the end of this year we'll have something released from them.
The higher-end enterprise solutions from companies like VMWare can address this sort of scenario.
VMWare Infrastructure, for example, can address up to 2000 guests running on up to 200 hosts. VMWare vCenter Server can address up to 3000 guests running on 300 hosts per instance. Beyond that, I'm not sure; VMWare's product lineup is insanely confusing. You can run multiple vCenter instances, but that's multiple clouds at that point.
Eventually, it makes sense just to outsource to a cloud provider like Linode or Amazon rather than trying to do it yourself in-house. These companies don't necessarily provide a centralized management solution (beyond, for example, being able to manage all guests from a single web interface), but they'll take care of the infrastructure for you and let you scale up to as many guest instances as your imagination desires. Netflix has done a number of quite insightful presentations about why they decided to abandon their own infrastructure and move everything over to AWS. Personally, I'd have picked Linode over AWS, but Amazon does have the advantage of a bunch of hosted solutions (SimpleDB, S3), while Linode has concentrated on the core service: affordable, reliable, performant VM instances.
re - VMWare Infrastructure, for example, can address up to 2000 guests running on up to 200 hosts. VMWare vCenter Server can address up to 3000 guests running on 300 hosts per instance.
I don't think there is any one such number anyone can quote.
We have to remember that those numbers are highly variable, as it really depends on what kind of applications are being used, how CPU- or disk-I/O-bound they are, etc.
Example: if it's just web servers, you get one set of performance/capacity numbers. If it's transactional database servers, you'll get totally different performance/capacity numbers.
As my company moves towards a virtualized network, we are looking to leverage InfiniBand and its capability to transport multiple protocols, to reduce clutter and increase throughput.
I can find tons of information on initiators, HBAs, etc. I can find almost zero information on InfiniBand targets (SANs, NAS, etc.).
The most useful information I've found so far is on zfsbuild.com, and even that is limited. Any chance of a somewhat deep dive on InfiniBand and its role in virtualization? The capability (and, if you use 40 Gbps IB, the capacity) to transport ALL protocols over a single (or redundant) link is very intriguing to me.
We're a relatively small credit union (150 employees). Because we're a credit union we don't use many cloud services (mostly due to perceived privacy concerns).
We run our banking solution in-house on AIX hardware; most of our other IT services are now virtualised on XenServer, plus a basic SAN for our live site and direct-attached storage at our DR site.
Contrary to a comment someone made earlier (e.g. that virtualisation without a SAN is unworkable), we've found it very useful on directly-attached storage to reduce the complexity and cost of our physical hardware, and setup and maintenance of our DR site has been simplified (e.g. just copy the VM to the DR hardware and turn it on).
One thing that was really noticeable (and, to me, annoying) was the dominance of VMware in the market, and the reaction from many solutions providers when they learnt we *weren't* using VMware for our virtualisation solution. We've also found it difficult to find hardware and software certified to work with anything but VMware, and a vendor willing to consider XenServer or Hyper-V as a viable alternative.
So, my question is: do you see the dominance of VMware in the market as an issue, especially in terms of industry expertise? Do you consider virtualisation a realistic solution for smaller businesses? Again, some earlier commenters suggest you should just go cloud instead (we're not doing this for various reasons, mostly political and regulatory).
By the way, I like VMware; however, we can't justify the cost given that there are cheaper alternatives like Hyper-V and XenServer in the market. VMware's aggressive marketing put me off, too.
Depends on what you call a SAN. Our customers are typically so small that going FC or 10Gb Ethernet would be a waste of resources. For those small customers we use HP MSA2000sa SAS SANs to which we connect up to 8 servers (or 4 dual path). No expensive switches necessary but still good shared storage.
Here are some perennial ideas I review from time to time and run into all the time. Keep in mind we're in our 2nd generation of virtualization on VMware and run 90% virtualized, on blades, with a big vendor's storage system which is three letters and doesn't start with I.
1) Simple shared storage systems - always comes up for DR and satellite offices. I'm talking about 2-host SAS arrays (e.g. Promise, etc.) or basic 2-host Fibre arrays, against iSCSI @ 10Gb, in a bake-off for performance and features. You don't always need something from Data General... qualified, vendor-supported stuff.
2) The perennial MS vs. VMware. You've touched on performance below, but things have changed and I'd like to see vSphere 4.1 in a really good performance bake-off.
With CNAs in the picture:
3) CNA @ 10Gig review (ties into #1) - setup ease of use, configuration, and performance from Emulex and QLogic under Dell, IBM, HP. Lots of vendor-supplied reviews out there, few unbiased.
4) Desktop virtualization round-up - workstation-class tools.
5) Virtual desktops - how big can I scale and what does it cost? RemoteFX vs. Teradici, that kind of stuff... it's in its infancy and a good ROI study would be welcome. Sure, if you listen to vendors it saves millions over a lifecycle, but what are the real hardware requirements?!
BTW, I couldn't care less about Xen, but it's probably worth a token review :) When it catches up to the two big players I'll listen. I am a VMware bigot though; it's good to have the rest and let everybody else chase you. You might need a Virtualization 101 based on some of the feedback!
As another VMware bigot I couldn't agree more. Lots of interesting questions you asked. In the long run I think Citrix has a problem. VMware is the market leader and currently has the best tech. Period. Microsoft, however, has deep, DEEP pockets and a great marketing machine and partner channel, so they will become a very serious competitor to VMware. Citrix will be somewhere behind those two, I think.
As for price, most people here seem to think Hyper-V is free. The VMware hypervisor ESXi is free too. To run Hyper-V you need at least a Windows Server 2008 R2 license. The management tools for both VMware and Microsoft are definitely NOT free, so you'd have to do a case-by-case comparison if price is what matters most to you. For small outfits (3 hosts) VMware Essentials is very affordable, and Essentials Plus offers HA and VMotion. I do hope, though, that they stop limiting CPU cores in certain SKUs: that is simply stupid.
I know the hardware is just way too expensive, and there is so little data out there for it, but...
UNIX virtualization. Compare it against VMware/Linux features. HPVM and PowerVM are probably the two big players. I know HP machines came down a ton in price since the chipsets and memory are now the same as high-end Xeons (blades only). I'm not sure about the IBM side. Maybe these options will become more important in the future. Maybe not, if they cannot offer competitive features.
I am running 3 virtual servers at home: 2 Windows 2008 R2 servers and one Linux. They are all running on stand-alone Hyper-V Server 2008 R2, which is free from MS. One server is Active Directory, another is Exchange, and the last is firewall, VPN, web server, and intrusion detection. I had little trouble setting up the MS servers, except that Exchange 2010 SP1 has too many prerequisites to install. The Linux server needed drivers for the virtual NICs and hard drives to show up; you could use the legacy devices, but the virtual ones are faster.
My point is that I went from 3 machines to 1 - that is, 5 power connectors, as two machines had redundant power supplies - and the only thing I had to do was add more RAM to the one machine, for a total of 8 GB. I am saving about 400 watts, which can really add up.
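What "about 400 watts" adds up to, with an assumed electricity rate:

```python
# Annualize a constant 400 W saving. The $/kWh rate is an assumption.
watts_saved = 400
kwh_per_year = watts_saved * 24 * 365 / 1000   # ~3,504 kWh
rate = 0.12                                    # assumed $/kWh
print(f"{kwh_per_year:.0f} kWh/year, about ${kwh_per_year * rate:.0f}/year")
```

Roughly $420 a year at that rate, before counting the reduced cooling load.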
Not only is Hyper-V free, but so are ESXi and XenServer. Whatever platform you use, you'll have to purchase the licenses for the OSes in the VMs, and the management tools if you want to use the really interesting stuff.
Personally, I don't quite see the use of Hyper-V Server (the free product), because you don't get a GUI (which some might consider a plus). I run Hyper-V on my 2008 R2 desktop machine, but without that GUI I'd use ESXi or XenServer (I chose Hyper-V because of limited space): thinner, and better support for non-Microsoft OSes, especially on ESXi.
I also always wonder whether you do or do not need antivirus on your Hyper-V server.
My question is: what do the experts think of Fusion-io's ioMemory and VSL (Virtual Storage Layer)? And how do they think this will affect virtualization and cloud computing in the short-to-medium term, and in the long term?
Fusion-io has been around for a few years now and has done an outstanding job accelerating servers. Since their architecture is super-scalar in highly parallel environments and optimized for latency and power-efficient I/Os, I'd think they'd have a great future in virtualization.
There is so much to talk about here on so many fronts, there probably needs to be some narrowing of topic and focus to occur. Some thoughts:
- If you haven't virtualized servers yet, you're probably a smaller company.
- Most jobs and economic growth occur through small/medium businesses (at least in the US, according to the Department of Labor). I'd recommend a focus on companies that don't have a hundred servers (and big budgets).
- Desktop virtualization is all the buzz. I've been through the manufacturers' materials, and due to license costs (VMware and Citrix), desktop virtualization is still expensive compared to 1:1 physical endpoints. You don't do it to save money (despite vendor claims, which are mostly bolstered by the assumption you'll eliminate employees). It makes sense for hospitals and retail, but outside that? You still need MS licenses for the desktop, even thin clients aren't much less expensive than low-end workstations, and desktop virtualization still doesn't really work for laptop travelers (think airplane or non-connected environment).
- Before someone says it - MSFT licenses. Frankly, unless you are a giant company and don't want to keep track of license info yourself, the only reason I can see to get an enterprise license or Software Assurance is because you want one license key to rule them all (or you already use every MSFT product ever made). We purchase OEM still because it's almost 50% less expensive, despite the need for us to track license info internally. I've yet to see an SA that saves anyone money, since MSFT has historically released products on a longer-than-3-year product cycle. Google Apps isn't there yet for our company, but it might be in the next three years.
- I read VMware has reported that less than 40% of the virtualization software purchased is actually deployed. It's the after-you-purchase gotchas that I suspect hold up projects. What are those items?
- Someone else mentioned it, but there is a lack of independent testing. Gartner, Aberdeen, Forrester and the like only offer information based on reviews from people who have done the work, and that's like playing the "telephone game". Isn't there some information somewhere that breaks down: a. difficulty to install (do I need a product-specific software expert in order to do it?), b. ease of management, c. complexity, d. cost, e. disaster recovery capabilities (and ease of doing so), f. gotchas. (For reference, our company network engineer learned how to do a Hyper-V install in a day.)
- I understand and appreciate all the Unix, Linux and Mac folks out there, but let's face it (if I stick with my SMB theme), most companies run Windows Server.
- Are there complexities or problems when virtualizing in production on a SAN and then having to migrate to DR with DAS (what about drive letters, targets, managing disk space, etc.)?
- Someone posted about NFS. Bleh. CIFS (like it or hate it) is more common.
- What's new with virtualization today, and does it only apply to mega-installs? What's new for next year? Beyond that, I see a lack of relevance due to changing tech.
- Cloud computing...
- For a small/medium business, what makes the most sense to outsource? We look at this all the time and we just keep coming up with the same answers over and over. Because we already host all our applications internally, it's difficult to move just one of them somewhere else. Why? Because then we need to massively increase bandwidth to keep the applications communicating without bottlenecks. That increases cost and eliminates it as a possibility. Our company is extremely cost conscious.
- Does cloudsourcing email make sense? It's much more expensive than doing it in-house for 1000 users. It's a no-brainer for a company of 25 or a 100 even.
- If you aren't performing research or transactional processing, does cloudsourcing make sense? If yes, then for what applications? I'd cross off file server or document management immediately because of connectivity/bandwidth costs.
There is probably more. Maybe a later post. Thanks!
- Our customers are all SMBs (50-100 employees), and typically the only server that is NOT virtualized is the backup server.
- desktop virtualization is costly due to MS licensing and the need for more SAN storage. However, you save costs on deployment of new desktops. It works really well for laptop users. I manage a VMware View environment and the sales people take their VM along with them. XenDesktop works differently than View but it looks interesting as well.
- Microsoft licensing is more than an easy way to keep track of software. OEM is cheaper up front, but the various licensing programs can be cost savers elsewhere. You should NEVER simply make a comparison of €€ or $$$ there.
I think you must be working with VERY small SMBs, but even there virtualization makes sense. Current server hardware is way too powerful for many SMB workloads, but a lot of SMBs have more than one server. Putting them all on a single host will save you money and make migration to other hardware later on a lot easier, even if you don't use management servers like vCenter or SCVMM.
A small example: I've got a customer with about 20 users who want to use Remote Desktop Services (they got the Microsoft licenses nearly for free). I ordered a PowerEdge T710 (dual Xeon E5620, 24 GB RAM, 5 x 146 GB 10k SAS), which is perfectly fine for their new SBS 2011 plus an RDS server, virtualized on the free ESXi. Two separate servers would have cost more and used more power and space.
I share your doubts about the usefulness of putting some (or all) of your servers in an external cloud. For most small and medium companies this doesn't make sense; bandwidth and guaranteed connections are way too expensive. Just keep it in-house.
As we all know, virtualization only has a point when there's a SAN in use. My question is, knowing that it is usually I/O-bound: what kind of HDDs are best used in the SAN? Do people use a few high-I/O 15K/10K hard drives, or a lot of 7200 rpm drives? What's the best ratio between capacity and I/O performance in a virtualized environment?
You don't need a SAN to virtualize in a useful way but if you want all the bells and whistles, you need that SAN.
Bulk storage/archiving/backup is typically fine on 7200 rpm drives, but if you want several VMs on a single LUN, forget about those things. The difference between 7200 rpm and 10k rpm is considerable, but if your budget allows, go for 15k: those disks will have a longer useful life. 7200 rpm drives are really way too slow. Most of our customers have 3.5" 300 GB 15k SAS drives in their arrays (RAID 1, 5, or 10, depending on the purpose), and I like to keep enough free space for snapshots and moving things about. Usually we also have a mirror of high-capacity 7200 rpm drives (1 TB+) to store ISOs, VM templates, and other stuff that needs disk space but not performance.
The next SANs we're going to sell will most definitely be with 2.5" disks. More spindles = more IOPS. We recently bumped into serious performance issues on a VDI setup on a 3.5" SAN that had enough disk space but not enough spindles. VMware recommends about 20 virtual desktops per LUN, so a SAN with a mere 12 disks cannot host that many machines.
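The spindle math is worth doing explicitly. A sketch using common rule-of-thumb IOPS figures per drive class and an assumed VDI load (both are approximations, and RAID write penalties push the real numbers higher):

```python
import math

iops_per_disk = {"7200rpm": 80, "10k": 130, "15k": 180}  # rough rules of thumb

desktops, iops_per_desktop = 100, 15    # assumed peak load per desktop
required = desktops * iops_per_desktop  # 1,500 IOPS

for disk, iops in iops_per_disk.items():
    print(disk, "->", math.ceil(required / iops), "spindles")
# 7200rpm -> 19, 10k -> 12, 15k -> 9
```

This is why a 12-disk 3.5" shelf can have plenty of capacity yet still starve a VDI farm: the terabytes were never the constraint, the spindle count was.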
What you find in more modern implementations of shared storage for VMs is that a lot of companies are drifting away from traditional "fast" shared storage like SAN/Fibre implementations and moving to iSCSI using slower 7200/10K SAS- or SATA-based drives. The trade-off is that you have larger arrays with a higher number of spindles, which compensates for the slower rotation. SANs are pricey if all you use them for is shared storage; the strengths and benefits of a SAN aren't just the performance, but rather the software: replication, snapshots, and redundancy. Running VMs off an iSCSI array with 7200 RPM SATA 2/3 or SAS drives works well for most applications.
Newer storage products provide a hybrid of solutions, allowing you to combine slower and faster storage cabinets and prioritize data placement between them, with the most-accessed data pushed to the faster drives and the least-accessed to the slower cabinets. Either way, the long-term maintenance and costs are much lower than with a traditional SAN.
1) What would be your advice on how to decide whether to roll your own "cloud computing" infrastructure or to use a 3rd party like Amazon EC2, Microsoft Azure, Google App Engine, etc.?
2) What would the benefits and disadvantages be for moving a site like Anandtech to: a) Amazon EC2 b) Microsoft Azure c) Google App Engine d) Some other cloud computing platform that you'd like to use as an example.
I think it's a misconception to believe that larger companies have larger virtualized implementations, especially in today's economy, where big companies do not always equate to "big budgets"; in most cases those big budgets are there for maintenance only, as the cost of maintenance is usually over 50 percent of the average IT budget. Last year I sat in a room with other professionals representing various companies, some much larger than ours. When the question was posed as to who had X% of their environment virtualized, it wasn't unusual to see that many of the larger companies had anywhere from 0 to 5% virtual. Most were just there to understand the concept. From my experience, larger companies tend to move more methodically; it's much harder to shift gears on a large infrastructure than a smaller one.
Virtualization, and in this case vSphere, is very compelling; however, it is definitely not for the gun-shy. I had my team virtualize over 30% of our environment before I even let my CIO know we were doing it. I knew that had he found out ahead of time he would have slowed us down. After converting several key systems and not hearing any complaints, I filled him in on what was going on. At that point he was very much for our endeavor. Granted, I am not suggesting that everyone go out and do things in secret; it's just that when virtualization hits an environment, I understand why some may feel cautious and thus not pull the trigger on a lot of their systems. I still refuse to virtualize Exchange, and only as a proof of concept have I virtualized a SQL 2008 server. Even in that instance it is the only VM occupying a redundant host pair. The problem with Hyper-V? Overhead and management. Still immature compared to VMware. Just virtualizing an OS isn't enough for a production environment: you need the ability to fail over, migrate, and expand and contract at will.
What I see is a convergence of higher-level technologies. In the past, topics like disaster recovery and business continuity, as well as security, pretty much dominated public forums and marketing material. Security is becoming a moot point (I know, so politically incorrect to say). Most companies spend an incredible amount of money annually to protect themselves from themselves. The human factor is the biggest cause of security problems: missing backup tapes, stolen laptops, disgruntled employees downloading everything to a flash drive before they leave. Those are the most common forms of security problems companies face today. Granted, there are ways to prevent all of that, but once again, that's more money spent to protect yourself from yourself. At the end of the day companies weigh what they truly are willing to risk and what is unacceptable; in the end most companies will risk more than they will prevent.
Today it's the "cloud". Pushing all the marketing BS aside, the true implementation of the cloud takes the focus areas of the previous era and pushes them back from headliners to items on the feature list. Hosted storage, application virtualization, and hardware/OS agnosticism all carry the DR, BC, and security features that businesses need, and they come built in as "features" of a $50-a-year-per-user product (Google Enterprise Apps) rather than hundreds of thousands of dollars of infrastructure along with hundreds of thousands of dollars annually for maintenance. The true argument in a business will be the psychological battle between freeing people from their Microsoft products and changing the way we distribute documents and collaborate. Unify the applications and products, make them accessible with a familiar interface, find an alternative but intuitive method to collaborate and distribute documents, and you pretty much eliminate the file-format war.
IT's biggest fight comes in the form of the "invasion" of consumerism (iPads, iPhones, Droids, Xooms), devices which were designed to free the consumer but lack the controls IT departments require to maintain compliance and information security. Virtualized application environments may solve that, especially if they are hardware- and OS-agnostic. In that case the application and files still live in a cloud/data center environment, with the device used only as the viewer. Thus devices that are traditionally consumer devices become more acceptable in the workplace.
We’ve updated our terms. By continuing to use the site and/or by logging into your account, you agree to the Site’s updated Terms of Use and Privacy Policy.
55 Comments
Back to Article
m.amitava - Thursday, March 17, 2011 - link
I work in India for a San Jose based Software Company(one Steve Jobs doesn't like too much). Recently we have had a virtual lab service introduced which enables us to deploy virtual machines and carry out short term development and testing. Now I have a two pronged question
1) Will there be a day when all(or most ) of the development/testing work across platforms will be migrated to the cloud? Will it be secure enough and do you think the results will be consistent when compared to using actual physical PC's with a single OS?
2) What is the future of Mac OS X VM support? Our company at this point doesn't offer macs for the virtual lab
L. - Thursday, March 17, 2011 - link
1) Yes and you don't really mind about the consistency between physical and virtual as long as you run full virtual (that may include end-user equipment, yes). Server-side can and should* be virtualized anyway so in this sense, you must see that it is much easier to replicate a production environment with virtualization, and therefore have the testing for cheaper and better (you can have a perfect 1:1 copy, which is impossible - read too expensive - in physical environments)."Secure enough" ? If it's secure enough for production, it oughta be secure enough for dev. If your question is about test environments impacting production environments, you just need to make sure they run on separate pools / different hosts, including all the network / storage. (As in, if your test environment uses your production network and SAN, you may impact performance in production).
2) Google it, took me 2 minutes to have the answer.
yioemolsdow - Wednesday, April 20, 2011 - link
★∵☆.◢◣ ◢◣
◢■■◣ ◢■■◣
◢■■■■■■■■■◣
◢■■■╭~~*╮((((( ■■■◣
◥■■/( '-' ) (' .' ) ■■■◤
◥■■■/■ ..../■ ■■◤
◥■■■■■◤ jordan air max oakland raiders $34a€“39;
◥■■■◤
◥■◤ Christan Audigier BIKINI JACKET $25;
▼
\ Ed Hardy AF JUICY POLO Bikini $25;
\
\ gstar coogi evisu true jeans $35;
\
\ gstar coogi evisu true jeans $35;
\
\ coogi DG edhardy gucci t-shirts $18;
● \ ●
《 》 》》
》 《
_?▂▃▄▅▆▇███▇▆▅▄▃▂
^__^:====( www etradinglife com )======
Guspaz - Thursday, March 17, 2011 - link
There is already limited support for Mac OS X as a guest VM... but only OS X Server, and only on an OS X host (that's a licensing requirement, not a technical requirement). It's technically possible to run any version of OS X on any host (such as OS X on Windows) using "OSx86" modifications, but that's a EULA violation.As far as I know, the only virtualization solution that offers official support to do this is Parallels. Virtualbox also has experimental support for doing it.
JarredWalton - Monday, March 21, 2011 - link
I think we need to change the title:"Ask the Experts, with answers by our expert readers!" :-)
Virtuwiz - Thursday, March 17, 2011 - link
Why can't you have a "showdown" between VMware and Microsofts HyperV tecnology.MS have a compelling story and a great pricepoint but the marketshare and market history for VMware makes makes this a hard decision to make.
With all the reviews at ananadtech i think that i possibly could be the real first documentet and public Clash of the titans.
duploxxx - Thursday, March 17, 2011 - link
There are lots of aspects that you need to look at before you can state one has a more compelling offer then another. You can look technology wise, future wise, mngmnt wise, price wise, performance wise. Sure I think Anandtech could make a very compelling compare, but that will be one very high time consuming review that can only be brought into several pieces. Just out of experience.Second to that there are always pro and contra for one and another. For example from a pricepoint it might look more interesting but you just compare it from a license perspective, but for example how many real life VM you can actually run on a physical box between vendors? You might be surprised .....
bmullan - Thursday, March 17, 2011 - link
Let's not limit ourselves to just vmware or hyperV.KVM is great technology and being built into the linux kernel takes advantage of developments from the linux community. kernel shared memory (KSM) is just one of many examples.
Many large company's are using kvm including IBM, Red Hat, Dell, HP, Intel. It is just one of several choices available.. but its free and open source code available.
My opinion is that KVM may become the leader in virtualization technology simply because of cost. Compare VMware licensing costs $$$$ to $0 for KVM ? That makes a huge difference in a large datacenter where someone is implementing their own private cloud but even more expensive for a large Cloud Service Provider with thousands of servers.
Let's also not forget that AWS is built on XEN.
As the acknowledged leader in Cloud that says a lot for XEN in itself.
sor - Friday, March 18, 2011 - link
What's with people calling it "kernel shared memory" lately? I thought it was "kernel samepage merging".I use KVM at work specifically because we can run paravirtualized and I don't have to worry about recompiling modules/installing software (vmware) or running special kernel builds (Xen) whenever we upgrade. We can use the same kernel we do on our hardware and everywhere else. Also, performance was all around better compared to ESX4.1
Calabros - Thursday, March 17, 2011 - link
a simple one: an Atom based server can be good as a VPN server?we need to reduce the cost of Breaking Great Firewalls here
Guspaz - Thursday, March 17, 2011 - link
That doesn't really have anything to do with virtualization, though. And the answer is, no, not for a large amount of bandwidth. VPNs require encryption, and the Atom is not ideally suited to encrypting large amounts of data.Besides, it's probably cheaper just to get a Linode; it'll cost less than a dedicated Atom would.
Kissaki - Thursday, March 17, 2011 - link
What I see a lot is virtualization on blade chassis, the problem is that I'm not convinced this is the best way to go. more servers would mean more idle time in the end and more management.I would like to know what hardware would be best to virtualize on. and then not what is better per server but what is better per X amount of money you spend over the 2/3 years you use the hardware. And better would probably mean something like faster / more reliable / power useage
After that is answered it would be great to have the same kind of insight into the storage part of virtualization.
meorah - Thursday, March 17, 2011 - link
there is no "best to virtualize on" until you get into specifics. For example, if you need to virtualize 100 servers, you already have a FC SAN, and you only have 10U of space for your CPU and memory in your data center, you'd better believe that a blade chassis will be the correct starting point.On the other hand, if you just want to test/dev things on a new infrastructure and aren't sure how many servers you will need yet, any old poweredge 2800 with enough drives will do the trick just fine.
"what is better per X amount of money you spend over the 2/3 years you use the hardware"
If you're only going to use it for 2-3 years, don't buy it. It's probably cheaper if you rent it from a cloud or virtual hosting provider.
"And better would probably mean something like faster / more reliable / power useage"
Larger companies pay ridiculous sums of money to have people do this type of TCO calculation for them.
Most of the reliability is built into the software if you're virtualizing, so you actually try to get systems with less hardware redundancy... for example, you may choose a server with only one power supply instead of redundant power supplies if your design includes the ability for all the virtual machines on that physical host to transfer to other hosts in the event of a host failure.
The choice of faster vs power efficient depends on the applications you are delivering. If the apps are business critical, you choose fast. If the apps are just nice to haves but not critical to the business' success, you choose power efficient. The Xeon L series is the processor equivalent of "slower but more power efficient". Low voltage memory saves about 10% of power consumption over regular server memory.
If the project is large enough, you can spend extra on the performance virtualization farm for business critical apps and save more on the efficient virtualization farm for everything else.
As for shared storage, it's exactly the same. Different vendors with different products all aimed at different scenarios. Tiered storage, thin provisioning, high utilization, how many controllers per enclosure, how many LUNs per enclosure, DAS vs SAN vs NAS, FC vs iSCSI, FCoE vs 10GBE, SAS vs SATA vs SSD vs NLSAS... You just pick the ones you want based on how much budget you have versus how much performance they want. You explain the positives of some systems that are too expensive and the negatives of some systems that are under-budget and let the guys with the money choose their poison of preference.
bmullan - Thursday, March 17, 2011 - link
You mentioned a lot of good points.Others to consider regarding costs of buy vs rent from a Cloud provider:
Insurance costs for facilities, equipment, and liability (in case of loss)
HVAC - it's not just the power for the servers but the AC and, more importantly, the backup systems.
Staff and staff expertise - can you hire and retain expert staff, and what does that cost?
If you own - you pay for server maintenance agreements with the vendors.
If you use the cloud - it's built into the hourly costs.
H/W renewal costs - if you own, then you pay to upgrade the hardware.
If you use the cloud - it is their job, and their scale gets them larger discounts (and thus lower rates) than you probably can get.
Utility companies typically give large discounts to big users of power. Microsoft or Amazon will get a huge discount compared to the rates a small company would pay for electricity.
Also, something often overlooked is that cloud SPs can locate their data centers (DCs) almost anywhere they can find the best deals on power... which passes those savings on to the users of that DC/cloud SP.
As you stated many people realize this is a complicated question that has different answers depending on the company/situation involved.
lunan - Thursday, March 17, 2011 - link
Some virtualization companies offer free products, such as VirtualBox, while others like VMware charge hefty prices for enterprise customers. I am planning to start a small business in IT, and I'm wondering what benefits virtualization will bring to the organization, especially with small starting capital; I won't be able to afford hefty licenses (I mean I won't be using corporate software such as SQL Server or Oracle, but will fall back on MySQL or another database). I'm also concerned because some of the licenses are GPL (which means any of the developed software must be GPL too --.--!)
With clients only numbering in the tens (10-30), would it be better to buy a high-end server (a Xeon comes to mind) and virtualize the heck out of it, or separate small servers (multiple Phenom II X6s) designed to balance the load? And what if the number of clients increases to 100+?
A database also comes to mind; I presume you have to put either a router or a switch between the database server and the virtual server then?
Also, how do you use virtualization properly anyway? With a central server it is quite easy to manage things: just run some server software and you are done. With virtualization, how do you divide things? (I am really clueless...)
thanks for the answers everyone
bothari - Thursday, March 17, 2011 - link
In your case I would suggest getting 2 or 3 smaller servers and a storage unit (maybe something like this: http://h18004.www1.hp.com/products/servers/prolian... ), because virtualization is meaningless without shared storage. On the servers you can then install the hypervisor of your choice (all vendors have free editions) and start creating VMs. If you plan to use as much open source as possible, I would recommend ESXi (the free VMware hypervisor) because of its wide range of OS support. I don't know what your budget is, but you could also consider licensing to get some features that will make you work and sleep easier - at VMware that would be the "Essentials Plus" licence (http://www.vmware.com/products/vsphere/buy/small_b... ), while Microsoft offers System Center Essentials (http://www.microsoft.com/systemcenter/en/us/essent... ).
Hope this helps
GL in business waters
duploxxx - Thursday, March 17, 2011 - link
I think you clearly need to invest some time in the IT area before you start anything. You are mixing OS-hosted virtualization with bare-metal virtualization,
consumer-grade systems with servers.
You're new to networking topology, and that's not even mentioning the storage part.
Lots of work to do.
So to start with, every virtualization vendor these days has free virtualization platforms on offer in most categories (OS-hosted or bare metal).
OS-hosted virtualization is a kids' playground: run dev/test on it if you want, but production is out of the question (before the comments on this: definition of production, of course...).
You need to know what storage sizing and features you need for your type of business, and the same for networking and whatever application communication you will have.
There are many tools to manage virtualization platforms, some free, some not; it really depends on the platform you intend to take.
lunan - Thursday, March 17, 2011 - link
That's the thing: I know all about the hardware part, even storage services using NAS or Active Directory (depends on the system; on the last system I used Novell for an outsourcing company that needed it). Network topology doesn't matter, but if you use a virtualized server (say 1 server with 10 LAN ports - PCIe), do you need to connect just the main server's LAN, or does every single LAN port need to be connected to a switch or router to function? I have tried VMware at home, and apparently I can share my single LAN between Windows and a virtual Linux. The question is: can I do this, or should I install that?
I just don't see the benefit of virtualization... Probably if you are webhosting it would be great, since every user can have their own dedicated virtual host, but for normal companies... I still have no idea.
Using virtualization for testing is great; for development, I'll die before I do it. My partner already complains about the slowness of his workstation with an Intel quad core (Core 2); giving him virtualization just won't do it.
So to clarify my question: what do I gain from virtualization? (Cloud, when mature, I can see the point of, but virtualization for an organization I am still in the dark about.)
PS: sorry for the lack of information in the post above; I should have clarified that I'm not exactly a newbie, more or less, just a newb at virtualization...
meorah - Thursday, March 17, 2011 - link
"I am planning to start a small business in IT, and im wondering what benefits virtualization will bring to the organization, especially with small starting capital"You will have very few benefits up front, but if you expect to need 15+ servers in the first 3 years it can be a wise choice to start virtual so you can expand virtually.
The first benefit that small businesses realize with server virtualization is hardware consolidation. You can probably run 15 light-use servers on a single C1100 for about $10k. Without virtualization, you'd be trying to find the cheapest servers possible and host each OSE on its own box. At about $1k per server, you'd be spending $15k to run the same number of servers and wasting lots of energy keeping them powered and cooled.
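The arithmetic behind that comparison, spelled out (a rough sketch; the server prices are the figures quoted above, the power numbers are illustrative):

```python
# Rough consolidation comparison using the figures quoted above.
servers_needed = 15
standalone_cost = 1_000       # $ per cheap standalone box (quoted above)
virtual_host_cost = 10_000    # $ for one consolidated host, e.g. a C1100

physical_total = servers_needed * standalone_cost
print(f"physical: ${physical_total:,}  virtual: ${virtual_host_cost:,}  "
      f"hardware saved: ${physical_total - virtual_host_cost:,}")

# Power compounds the difference: 15 mostly idle boxes at an assumed ~100 W
# each draw ~1.5 kW around the clock versus a few hundred watts for one host.
```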
Later advantages are that you can leverage shared storage to thin provision your drive space so you don't have to take your servers off-line in the future to add storage. Another shared storage benefit is cheap cluster failover for your virtual compute systems. As the business grows, there are many other advantages you can use from ease of management to I/O shaping or I/O QoS.
"would it be better to buy high end server (Xeon comes to mind), then virtualize the heck of it, or separate small server (multiple Phenom II X6) designed to balance the load?"
Depends on the goal. If your goal is server consolidation, get the Xeon because you will be able to consolidate more systems per physical host. If your goal is business continuity through whatever method (clustering, load balancing, backup restores) then get the Phenoms.
"or if the number client increased to 100+?"
Number of clients is not a good way to measure your business' virtualization needs. You need to determine whether you would even need one extra server in your journey from 10 clients to 100 clients. If you decide that you won't need any extra systems, then the requirements are the same for both scenarios. Some companies will want to add extra capacity for various reasons. Other companies will try to slide by for as long as they can with what they have.
"Database also comes to mind, i presume you have to put either router or switch to the database server connecting with the virtual server then?"
Unless your virtualize the DB server, yes you will need a switch to connect them. If you do virtualize the DB server, the hypervisor's virtual switch will connect to the other virtual machines (and any physical machines).
"Also how do you use virtualization properly anyway? with central server, it is quite easy to manage things there, just run some server software and you are done, with virtualization, how do you divide things?"
You manage virtual servers with the management tools from the vendor you use: the System Center stuff (SCCM, SCOM, VMM) for Hyper-V, the XenCenter stuff for XenServer, and the vCenter stuff for VMware.
I don't know what you mean by divide things, but I'm guessing you mean dividing computing resources. Each vendor has its own way of doing it, but dividing resources is the whole point. Also, any management software that you use to manage individual physical servers can still be used to manage virtual servers so no big difference from that perspective.
lunan - Thursday, March 17, 2011 - link
This answers my question, at the very least :D So in effect, I could deploy a web server, a Novell server, repositories and testing, even a file server on a single physical server, divided by virtualization, right? Instead of dividing the servers physically, I just consolidate the services but install them in different VMs.
Got it, question half answered. :D
I have now replied with a more detailed question, though... so it's only half answered. But thanks, man, for enlightening me.
Guspaz - Thursday, March 17, 2011 - link
If you're developing a purely hosted solution, the use of MySQL and other GPL'd software does not require you to GPL your solution. The GPL only covers distribution, and there's no distribution in a hosted scenario. In fact, you can even distribute GPL'd software with your proprietary software without GPLing the entire thing, as long as you're not directly linking. A client/server relationship (distributing and using MySQL, but connecting through a socket) would not be a problem. Similarly, a lot of libraries are licensed under the LGPL, which *does* allow linking LGPL'd software into non-LGPL software without LGPLing the whole thing.
Jeff7181 - Thursday, March 17, 2011 - link
I think one of the main benefits of virtualization for a small company is disaster recovery. You can take snapshots of a machine's disks and store them off the box/network or even offsite, so in the event of a major virus outbreak or some other system failure, restoring your equipment to a working state is as simple as copying the virtual disk images back to the host. Hell, you could even completely rebuild the host, and as long as you have the virtual disk images, you can be right back to where you were in a matter of minutes. You'll spend a bit more on the hardware, but the ease of recovering from something catastrophic will make you happy you spent it.
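That recovery workflow is essentially "copy the disk images out, copy them back". A minimal sketch of the off-box copy, assuming hypothetical paths and VMDK images (a copy of a running VM isn't crash-consistent, so you'd snapshot or shut the guest down first):

```python
# Minimal sketch of the backup idea described above: keep dated copies of
# the virtual disk images off the host. Paths and image format are assumptions.
import shutil
from datetime import date
from pathlib import Path

VM_STORE = Path("/var/lib/vms")                # hypothetical image location
BACKUP_ROOT = Path("/mnt/offsite/vm-backups")  # hypothetical offsite mount

def backup_images():
    dest = BACKUP_ROOT / date.today().isoformat()
    dest.mkdir(parents=True, exist_ok=True)
    for image in VM_STORE.glob("*.vmdk"):
        shutil.copy2(image, dest / image.name)  # restore = copy back and boot
        print(f"copied {image.name} -> {dest}")

if __name__ == "__main__":
    backup_images()
```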
Currently, the VM hosts I work with are around $20,000. They're HP DL380s with two 6-core Xeons, 48 GB of RAM, 6 NICs, two HBAs, and I think 6 hard drives in RAID 1+0, but not much local storage is used; all the virtual disks are on the SAN. However, you could easily get 6 disks in RAID 1+0 to house your virtual disks locally. You could even step down to a DL360 and accomplish the same thing.
hoverman - Thursday, March 17, 2011 - link
Lunan - I started and owned an IT consulting company in the Raleigh, NC area until it was bought out by a slightly larger company in 2008, which I now work for. Most of our clients are in the 10 to 300 user range, and I currently maintain about 50 clients in the area. We virtualize all of our customers moving forward, even if they only have one physical server; we made it company policy about two years ago. Please feel free to contact me outside this forum, and I will help you in any way I can. My company's name is Netsmart Inc. My contact info is on our website; just match my handle with my contact info on the site. (I don't want to post any emails, etc. here.)
Ever since I learned about virtualization, I have embraced it 100% and it has paid big dividends for our staff and our customers.
handle.goes.here - Thursday, March 17, 2011 - link
There are plenty of articles comparing servers (both discrete and blades), but what I haven't seen (or have I just missed them?) are good reviews testing the various interconnect fabrics. For example, I can build a cluster with HP DL380s and Arista 10-gig switches, or I can build a cluster with an HP c7000 blade chassis and its integrated switch. (Likewise, LSI's SAS switch vs the SAS switch available for the blade server.) What are the performance tradeoffs between best of breed and an integrated solution?
VMguy - Thursday, March 17, 2011 - link
Does VMware plan to implement functionality similar to the recently introduced RemoteFX for their virtualization platform? With package-integrated GPUs, having tens of VMs with real GPU capacity doesn't seem too far-fetched. Does VMware have a focus on any particular storage technology? It seems, from a functionality standpoint, that NFS is king. Going forward, would we be best served purchasing NFS-capable storage devices over block-level? Will block-level storage always be the performance king?
Thanks
bmullan - Thursday, March 17, 2011 - link
I don't know about VMware, but Amazon's AWS EC2 cloud already offers large GPU-based VM clusters. Per the AWS URL: http://aws.amazon.com/ec2/#instance
Cluster GPU Instances
Instances of this family provide general-purpose graphics processing units (GPUs) with proportionally high CPU and increased network performance for applications benefitting from highly parallelized processing, including HPC, rendering and media processing applications. While Cluster Compute Instances provide the ability to create clusters of instances connected by a low latency, high throughput network, Cluster GPU Instances provide an additional option for applications that can benefit from the efficiency gains of the parallel computing power of GPUs over what can be achieved with traditional processors. Learn more about use of this instance type for HPC applications.
Cluster GPU Quadruple Extra Large 22 GB memory, 33.5 EC2 Compute Units, 2 x NVIDIA Tesla “Fermi” M2050 GPUs, 1690 GB of local instance storage, 64-bit platform, 10 Gigabit Ethernet
EC2 Compute Unit (ECU) – One EC2 Compute Unit (ECU) provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor.
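For scale, that ECU rating translates roughly as follows (a back-of-envelope sketch using only the definition quoted above):

```python
# Rough translation of the 33.5-ECU rating quoted above; illustrative only.
ecus = 33.5
ghz_per_ecu_low, ghz_per_ecu_high = 1.0, 1.2   # per the AWS definition above

low = ecus * ghz_per_ecu_low
high = ecus * ghz_per_ecu_high
print(f"{ecus} ECUs ~ {low:.1f}-{high:.1f} GHz of 2007-era aggregate clock,")
print("i.e. very roughly a dozen 2.5-3 GHz cores, plus the two Tesla GPUs.")
```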
ffakr - Friday, March 18, 2011 - link
I'm not sure I'm qualified to say whether VMware has a "focus on any particular storage technology", but there are definitely advantages and disadvantages to the various technologies. In our case, we looked at 4Gb Fibre Channel versus GigE iSCSI because we had both infrastructures in place. We opted for GigE because it allowed us to set up the hosts in a clustered, hot-failover configuration.
In our setup we've got two dual-socket quad-core Xeons as the host servers, and our storage resides on an EqualLogic iSCSI SAN. We boot our VMs off the SAN, and VMware allows us to easily move running VMs from one head node to another... and to fail over if one goes down.
With two GigE switches and port aggregation you can get quite a bit of bandwidth and still retain failover in the network fabric.
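The aggregation arithmetic is simple enough to sketch (nominal link rates only; real iSCSI and FC throughput depend on protocol overhead and tuning):

```python
# Back-of-envelope bandwidth comparison for the setup described above.
# These are nominal link rates, not measured results.
gige_links = 4                         # e.g. 2 ports per switch, 2 switches
aggregated_mbps = gige_links * 1000    # nominal aggregated GigE bandwidth
fc_mbps = 4000                         # a single 4Gb FC link, nominal

print(f"4x GigE ~ {aggregated_mbps / 8:.0f} MB/s nominal, "
      f"4Gb FC ~ {fc_mbps / 8:.0f} MB/s nominal")
# Aggregation closes most of the raw-bandwidth gap, and you keep path
# failover if a port or an entire switch dies.
```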
The problem with FC is that it's a point-to-point connection. The server wants to 'own' the storage, and it just isn't well suited to a clustered front end. We could boot our VMs off an FC box, but the problem arises when we try to give two different boxes access to the same storage pool.
We're currently running something like 15 servers on our system, with one 5000-series SATA EqualLogic box on the back end, and we're not seeing any bottlenecks. On the CPU side, we've got loads of spare cycles. We run file servers, mail, web... typical university departmental loads (from chatty to OMG spam flood).
As for NFS, I'd certainly prefer an iSCSI SAN solution if you can afford it. Block Level all the way.
We're very happy with EqualLogic, and we get a good discount in Edu. In fact, I'm lining up money to buy another device.
Promise just came out with a proper SAN, but it's still pricey. The advantage there is that it appears to be a lot cheaper to expand, their storage boxes tend to be very inexpensive, and they've proven well made for us; I've got a number of them with zero issues over several years. Whatever you do, check VMware's compatibility list for storage.
marc1000 - Thursday, March 17, 2011 - link
In the news story about Intel bringing Atom to servers (published here a few days ago), a certain link appeared in the comments: http://www.eetimes.com/electronics-news/4213963/Ca... Using the power numbers in that link, I've done some math showing that a 2U server with 480 ARM cores will consume roughly the same amount of power as a 2U server with 4 quad- (or six-) core Xeons.
So when you put LOTS of small CPUs in the same space, they end up consuming the same power as normal CPUs. What, then, is the advantage of using this hardware for cloud computing?
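For reference, the shape of that comparison (the wattages here are illustrative assumptions, not the article's figures):

```python
# Illustrative version of the power comparison above; per-part wattages
# are assumptions, not vendor numbers.
arm_cores, watts_per_arm_core = 480, 1.5    # assume ~1-2 W per ARM core
xeon_sockets, watts_per_xeon = 4, 130       # assume a quad-socket 2U box

arm_box = arm_cores * watts_per_arm_core    # ~720 W
xeon_box = xeon_sockets * watts_per_xeon    # ~520 W, before RAM/fans/etc.
print(f"ARM 2U ~ {arm_box:.0f} W, Xeon 2U ~ {xeon_box:.0f} W")
# Similar power envelopes, so the real question is work done per watt,
# which depends on how parallel and how light the workload is.
```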
bmullan - Thursday, March 17, 2011 - link
You asked what the advantages are? Cost of power is one huge one. HVAC is the largest operating expense of most datacenters; it's not just the power to run the servers but the AC to remove the heat they generate.
Well, its not advantageous for every "use case".
But remember that not every application/process requires a 3 GHz CPU core.
There are many applications where an ARM is more than powerful enough. On most home computers people only use 10% of their CPU as it is, and they already think they have fast computers.
In today's world we are more I/O bound than CPU bound.
Run some Linux OS on the ARMs, load balance them, and you can do incredible computing.
Read this past AnandTech article:
http://www.anandtech.com/show/3768/seamicro-announ...
bmullan - Thursday, March 17, 2011 - link
I should have mentioned that I also think ZT Systems' 1RU 16-core ARM product sounds great. Because it uses less than 80W of power, it doesn't even require a fan for cooling, so it's totally silent.
http://www.thinq.co.uk/2010/11/30/zt-systems-launc...
I am not sure their product is available yet, but they announced it a couple of months ago.
marc1000 - Friday, March 18, 2011 - link
I meant comparing servers with similar amounts of performance: fewer high-end CPUs versus a lot of low-end CPUs, like the 480 ARM cores someone posted in that article I mentioned. But I agree that for 16-core servers, using ARM/Atom would make sense if you have a light load - but only if you are sure your load will stay low for a very long time. If you need to add hundreds of CPUs to keep up with growth in usage, then it doesn't make sense.
docbones - Thursday, March 17, 2011 - link
Are there any updates, especially on the open source options, for support of 3D gaming in virtualized systems? I know VMware Workstation has some support when using the same local machine, but with the server-based product there doesn't seem to be much out there yet.
hoverman - Thursday, March 17, 2011 - link
I spend a lot of time working with Citrix, and Citrix has some beta releases that support this using a couple of the Nvidia cards. It is coming, and Citrix is THE leader in this type of technology. I figure by the end of this year we'll have something released from them.
juhatus - Thursday, March 17, 2011 - link
What if you are going to thousands of VMs, say one per user? You could be talking tens of thousands. What are the tools to handle that amount?
Guspaz - Thursday, March 17, 2011 - link
The higher-end enterprise solutions from companies like VMware can address this sort of scenario. VMware Infrastructure, for example, can address up to 2000 guests running on up to 200 hosts. VMware vCenter Server can address up to 3000 guests running on 300 hosts per instance. Beyond that, I'm not sure; VMware's product lineup is insanely confusing. You can run multiple vCenter instances, but that's multiple clouds at that point.
Eventually, it makes sense just to outsource to a cloud provider like Linode or Amazon rather than trying to do it yourself in-house. These companies don't necessarily provide a centralized management solution (beyond, for example, being able to manage all guests from a single web interface), but they'll take care of the infrastructure for you and let you scale up to as many guest instances as your imagination desires. Netflix has done a number of quite insightful presentations about why they decided to abandon their own infrastructure and move everything over to AWS. Personally, I'd have picked Linode over AWS, but Amazon does have the advantage of a bunch of hosted solutions (SimpleDB, S3), while Linode has concentrated on the core service: affordable, reliable, performant VM instances.
bmullan - Thursday, March 17, 2011 - link
Re: "VMware Infrastructure, for example, can address up to 2000 guests running on up to 200 hosts. VMware vCenter Server can address up to 3000 guests running on 300 hosts per instance."
I don't think there is any one such number anyone can quote.
We have to remember that those numbers are highly variable, as they really depend on what kinds of applications are being used, how CPU- or disk-I/O-bound they are, etc.
For example:
If it's just webservers, you get one set of performance/capacity numbers.
If it's transactional database servers, you'll get a totally different set.
James5mith - Thursday, March 17, 2011 - link
As my company moves towards a virtualized network, we are looking to leverage InfiniBand and its capability to transport multiple protocols, to reduce clutter and increase throughput. I can find tons of information on initiators, HBAs, etc. I can find almost zero information on InfiniBand targets (SANs, NAS, etc.).
The most useful information I've found so far is on zfsbuild.com, and even that is limited. Any chance of a somewhat deep dive on InfiniBand and its role in virtualization? The capability (and capacity, if you use 40Gbps IB) to transport ALL protocols over a single (or redundant) link is very intriguing to me.
Fleepo - Thursday, March 17, 2011 - link
We're a relatively small credit union (150 employees). Because we're a credit union, we don't use many cloud services (mostly due to perceived privacy concerns). We run our banking solution in-house on AIX hardware; most of our other IT services are now virtualised on XenServer plus a basic SAN for our live site, and direct-attached storage at our DR site.
Contrary to a comment someone made earlier (e.g. that virtualisation without a SAN is unworkable), we've found it very useful on direct-attached storage to reduce the complexity and cost of our physical hardware, and setup and maintenance of our DR site has been simplified (e.g., just copy the VM to the DR hardware and turn it on).
One thing that was really noticeable (and, to me, annoying) was the dominance of VMware in the market, and the reaction from many solution providers when they learnt we *weren't* using VMware for our virtualisation solution. We've also found it difficult to find hardware and software certified to work with anything but VMware, and a vendor willing to consider XenServer or Hyper-V as a viable alternative.
So, my question is -
Do you see the dominance of VMware in the market as an issue, especially in terms of industry expertise?
Do you consider virtualisation to be a realistic solution for smaller businesses? Again, some earlier commenters suggest you should just go cloud instead (we're not doing this for various reasons, mostly political and regulatory).
By the way, I like VMware; however, we can't justify the cost given there are cheaper alternatives like Hyper-V and XenServer in the market. VMware's aggressive marketing put me off, too.
HMTK - Friday, March 18, 2011 - link
Depends on what you call a SAN. Our customers are typically so small that going FC or 10Gb Ethernet would be a waste of resources. For those small customers we use HP MSA2000sa SAS SANs, to which we connect up to 8 servers (or 4 dual-path). No expensive switches necessary, but still good shared storage.
zephyrwind69 - Thursday, March 17, 2011 - link
Here are some perennial ideas I review from time to time and run into all the time. Keep in mind we're in our 2nd generation of virtualization on VMware and run 90% virtualized, on blades, with a big vendor's storage system whose name is three letters and doesn't start with I.
1) Simple shared storage systems - always comes up for DR and satellite offices. I'm talking about 2-host SAS arrays (e.g. Promise, etc.) or basic 2-host Fibre arrays versus iSCSI @ 10Gb, in a bake-off for performance and features. You don't always need something from Data General... qualified, vendor-supported stuff.
2) The perennial MS vs VMware. You've touched on performance below, but things have changed, and I'd like to see vSphere 4.1 in a really good performance bake-off.
With CNAs in the picture:
3) CNAs @ 10-gig review (ties into #1) - setup, ease of use, configuration, and performance from Emulex and QLogic under Dell, IBM, HP. Lots of vendor-supplied reviews out there, few unbiased.
4) Desktop virtualization round-up - workstation-class tools.
5) Virtual desktops - how big can I scale, and what does it cost? RemoteFX vs. Teradici, that kind of stuff... it's in its infancy, and a good ROI study would be welcome. Sure, if you listen to the vendors it saves millions over a lifecycle, but what are the real hardware requirements?!
BTW, I couldn't care less about Xen, but it's probably worth a token review :) When it catches up to the two big players, I'll listen. I am a VMware bigot though; it's good to have the rest and let everybody else chase you. You might need a Virtualization 101 based on some of the feedback!
HMTK - Friday, March 18, 2011 - link
As another VMware bigot, I couldn't agree more. Lots of interesting questions you asked. In the long run I think Citrix has a problem. VMware is the market leader and currently has the best tech, period. Microsoft, however, has deep, DEEP pockets and a great marketing machine and partner channel, so they will become a very serious competitor to VMware. Citrix will be somewhere behind those two, I think. As for price, most people here seem to think Hyper-V is free. The VMware hypervisor ESXi is free. To run Hyper-V you need at least a Windows Server 2008 R2 license. The management tools for both VMware and Microsoft are definitely NOT free, so you'd have to do a case-by-case comparison if price is most important to you. For small outfits (3 hosts) VMware Essentials is very affordable, and Essentials Plus offers HA and VMotion. I do hope, though, that they stop limiting CPU cores in certain SKUs: that is simply stupid.
bgoobaah - Thursday, March 17, 2011 - link
I know the hardware is just way too expensive, and there is so little data out there for it, but... UNIX virtualization. Compare it against VMware/Linux features. HPVM and PowerVM are probably the two big players. I know HP machines have come down a ton in price since the chipsets and memory are now the same as high-end Xeons (blades only); I'm not sure about the IBM side. Maybe these options will become more important in the future; maybe not, if they cannot offer competitive features.
intelliclint - Friday, March 18, 2011 - link
I am running 3 virtual servers at home: two Windows 2008 R2 servers and one Linux. They all run on Hyper-V Server 2008 R2 stand-alone, which is free from MS. One server is Active Directory, another Exchange, and the last is firewall, VPN, web server, and intrusion detection. I had little trouble setting up the MS servers, except that Exchange 2010 SP1 has too many prerequisites. The Linux server needed drivers for the virtual NICs and hard drives to show up; you could use the legacy devices, but the virtual ones are faster. My point is I went from 3 machines to 1 (that's 5 power connectors, as two had redundant power supplies), and the only thing I had to do was add more RAM to the one box, to a total of 8GB. I am saving about 400 watts, which can really add up.
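For scale, that 400-watt saving compounds nicely over a year (a quick sketch; the electricity rate is an assumption):

```python
# What a constant ~400 W saving works out to per year; rate is an assumption.
watts_saved = 400
kwh_per_year = watts_saved * 24 * 365 / 1000   # ~3,504 kWh
rate_usd_per_kwh = 0.10                        # assumed utility rate
print(f"~{kwh_per_year:,.0f} kWh/yr ~ "
      f"${kwh_per_year * rate_usd_per_kwh:,.0f}/yr saved")
```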
HMTK - Friday, March 18, 2011 - link
Hyper-V isn't the only free one: ESXi and XenServer are free as well. Whatever platform you use, you'll have to purchase the licenses for the OSes in the VMs, plus the management tools if you want the really interesting stuff.
Personally, I don't see the use of Hyper-V Server (the free product), because you don't get a GUI - which might be a plus. I run Hyper-V on my 2008 R2 desktop machine, but without a GUI I'd use ESXi or XenServer (I chose Hyper-V because of limited space); they're thinner and have better support for non-Microsoft OSes, especially ESXi.
I also always wonder whether you do or don't need an antivirus on your Hyper-V server.
GullLars - Saturday, March 19, 2011 - link
My question is: what do the experts think of Fusion-io's ioMemory and VSL (Virtual Storage Layer)? And how do they think this will affect virtualization and cloud computing in the short-to-medium term, and the long term? Fusion-io has been around for a few years now and has done an outstanding job accelerating servers. Since their architecture is super-scalar in highly parallel environments and optimized for latency and power-efficient I/Os, I'd think they have a great future in virtualization.
linesplice - Saturday, March 19, 2011 - link
There is so much to talk about here on so many fronts, there probably needs to be some narrowing of topic and focus. Some thoughts:
- If you haven't virtualized servers yet, you're probably a smaller company.
- Most jobs and economic growth occur through small/medium businesses (at least in the US, according to the Department of Labor). I'd recommend a focus on companies that don't have a hundred servers (and big budgets).
- Desktop virtualization is all the buzz. I've been through the manufacturers' materials, and due to license costs (VMware and Citrix), desktop virtualization is still expensive compared to 1:1 physical endpoints. You don't do it to save money (despite vendor claims, which are mostly bolstered by the assumption that you'll eliminate employees). It makes sense for hospitals and retail, but outside that? You still need MS licenses for the desktop, even thin clients aren't much less expensive than low-end workstations, and desktop virtualization still doesn't really work for laptop travelers (think airplane or non-connected environments).
- Before someone says it: MSFT licenses. Frankly, unless you are a giant company and don't want to keep track of license info yourself, the only reason I can see to get an enterprise license or Software Assurance is that you want one license key to rule them all (or you already use every MSFT product ever made). We still purchase OEM because it's almost 50% less expensive, despite the need to track license info internally. I've yet to see an SA that saves anyone money, since MSFT has historically released products on a longer-than-3-year cycle. Google Apps isn't there yet for our company, but it might be in the next three years.
- I read that VMware has reported less than 40% of the virtualization software purchased is actually deployed. It's the after-you-purchase gotchas that I suspect hold up projects. What are those items?
- Someone else mentioned it, but there is a lack of independent testing. Gartner, Aberdeen, Forrester and the like only offer information based on reviews from people who have done the work, and that's like playing the telephone game. Isn't there some information somewhere that breaks down: a. difficulty to install (do I need a product-specific software expert to do it?), b. ease of management, c. complexity, d. cost, e. disaster recovery capabilities (and ease of using them), f. gotchas?
(For reference, our company network engineer learned how to do a Hyper-V install in a day.)
- I understand and appreciate all the Unix, Linux and Mac folks out there, but let's face it (if I stick with my SMB theme), most companies run Windows Server.
- Are there complexities or problems when virtualizing in production on a SAN and then having to migrate to DR with DAS (what about drive letters, targets, managing disk space, etc.)?
- Someone posted about NFS. Bleh. CIFS (like it or hate it) is more common.
- What's new with virtualization today, and does it only apply to mega-installs? What's new for next year? Beyond that, I see a lack of relevance due to changing tech.
- Cloud computing...
- For a small/medium business, what makes the most sense to outsource? We look at this all the time, and we keep coming up with the same answers over and over. Because we already host all our applications internally, it's difficult to move just one of them somewhere else. Why? Because then we'd need to massively increase bandwidth to keep the applications communicating without bottlenecks. That increases cost and eliminates it as a possibility. Our company is extremely cost-conscious.
- Does cloudsourcing email make sense? It's much more expensive than doing it in-house for 1000 users. It's a no-brainer for a company of 25 or even 100.
- If you aren't performing research or transactional processing, does cloudsourcing make sense? If yes, then for what applications? I'd cross off file server or document management immediately because of connectivity/bandwidth costs.
There is probably more. Maybe a later post. Thanks!
HMTK - Sunday, March 20, 2011 - link
- Our customers are all SMBs (50-100 employees), and typically the only server that is NOT virtualized is the backup server.
- Desktop virtualization is costly due to MS licensing and the need for more SAN storage. However, you save costs on deployment of new desktops, and it works really well for laptop users. I manage a VMware View environment, and the sales people take their VMs along with them. XenDesktop works differently than View, but it looks interesting as well.
- Microsoft licensing is more than an easy way to keep track of software. OEM is cheaper in price, but the various licensing programs can be cost savers elsewhere. You should NEVER simply compare the €€ or $$$ there.
I think you must be working with VERY small SMBs, but even there virtualization makes sense. Current server hardware is way too powerful for many SMB workloads, yet a lot of SMBs have more than one server. Putting them all on a single host will save money and make migration to other hardware later on a lot easier, even if you don't use management servers like vCenter or SCVMM.
Small example: I've got a customer with about 20 users who want to use Remote Desktop Services (they got their Microsoft licenses nearly for free). I ordered a PowerEdge T710 (dual Xeon E5620, 24 GB RAM, 5 x 146 GB 10k SAS), which is perfectly fine for their new SBS 2011 plus an RDS server, virtualized on the free ESXi. Two separate servers would have cost more and used more power and space.
I share your doubts about the usefulness of putting some (or all) of your servers in an external cloud. For most small and medium companies it doesn't make sense: bandwidth and guaranteed connections are way too expensive. Just keep it in house.
erhardm - Sunday, March 20, 2011 - link
As we all know, virtualization only has a point when there's a SAN in use. My question is, knowing that it's usually I/O bound, what kind of HDDs are best used in the SAN? A few high-I/O 15K/10K hard drives, or a lot of 7200rpm drives? What's the best ratio between capacity and I/O performance in a virtualized environment?
HMTK - Wednesday, March 23, 2011 - link
You don't need a SAN to virtualize in a useful way, but if you want all the bells and whistles, you need that SAN. Bulk storage/archiving/backup is typically fine on 7200 rpm drives, but if you want several VMs on a single LUN, forget about those things. The difference between 7200 rpm and 10k rpm is considerable, but if your budget allows, go for 15k; those disks will have a longer useful life. 7200 rpm drives are really way too slow. Most of our customers have 3.5" 300 GB 15k SAS drives in their arrays (RAID 1, 5 or 10 depending on the purpose), and I like to keep enough free space for snapshots and moving things about. Usually we also have a mirror of high-capacity 7200 rpm drives (1 TB+) to store ISOs, VM templates and other stuff that needs disk space but not performance.
The next SANs we sell will most definitely have 2.5" disks. More spindles = more IOPS. We recently bumped into serious performance issues on a VDI setup on a 3.5" SAN that had enough disk space but not enough spindles. VMware recommends about 20 virtual desktops per LUN, so a SAN with a mere 12 disks cannot host that many machines.
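The spindle math behind that is easy to sketch (the per-disk IOPS figures are common rules of thumb, and the per-desktop load is an assumption):

```python
# Back-of-envelope spindle math for the VDI case above. Per-disk IOPS are
# rules of thumb; the per-desktop load is an assumption. RAID write
# penalties are ignored for simplicity.
iops_per_disk = {"7200 rpm": 80, "10k rpm": 130, "15k rpm": 180}
disks = 12
iops_per_desktop = 15   # assumed steady-state IOPS per virtual desktop

for kind, iops in iops_per_disk.items():
    total = disks * iops
    print(f"{disks} x {kind}: ~{total} IOPS -> "
          f"~{total // iops_per_desktop} desktops")
# A 12-spindle 7200 rpm array tops out around 60 light desktops before
# latency climbs, which is why more, smaller 2.5-inch disks help.
```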
schuang74 - Thursday, March 24, 2011 - link
What you find in more modern implementations of shared storage for VMs is that a lot of companies are drifting away from traditional "fast" shared storage like SAN/Fibre implementations and moving to iSCSI utilizing slower 7200rpm/10K SAS or SATA drives. The trade-off is that you have larger arrays adding a higher number of spindles, which compensates for the slower rotational access. SANs are pricey if all you use them for is shared storage; the strengths and benefits of a SAN aren't just the performance but rather the software: replication, snapshots, and redundancy. Running VMs off an iSCSI array with 7200 RPM SATA 2/3 or SAS drives works well for most applications. Newer storage products let you build a hybrid, combining slower and faster storage cabinets and prioritizing data between them, with the most-accessed data pushed to the faster drives and the least-accessed to the slower cabinets. Either way, the long-term maintenance and costs are much lower than with a traditional SAN.
lyeoh - Monday, March 21, 2011 - link
1) What would be your advice on how to decide whether to roll your own "cloud computing" setup or to use a 3rd party like Amazon EC2, Microsoft Azure, Google App Engine, etc.?
2) What would the benefits and disadvantages be of moving a site like AnandTech to:
a) Amazon EC2
b) Microsoft Azure
c) Google App Engine
d) Some other cloud computing platform that you'd like to use as an example.
schuang74 - Thursday, March 24, 2011 - link
I think it's a misconception to believe that larger companies have larger virtualized implementations, especially in today's economy, where a big company does not always equate to a big budget; in most cases those big budgets are there just for maintenance, as the cost of maintenance is usually over 50 percent of the average IT budget. Last year I sat in a room with professionals representing various companies, some much larger than ours. When the question was posed as to who had X% of their environment virtualized, it wasn't unusual to see that many of the larger companies had anywhere from 0 to 5% virtual. Most were just there to understand the concept. From my experience, larger companies tend to move more methodically; it's much harder to shift gears on a large infrastructure than on a smaller one.
Virtualization, and in this case vSphere, is very compelling, but it is definitely not for the gun-shy. I had my team virtualize over 30% of our environment before I even let my CIO know we were doing it; I knew that had he found out ahead of time, he would have slowed us down. After converting several key systems and not hearing any complaints, I filled him in on what was going on. At that point he was very much for our endeavor. Granted, I am not suggesting that everyone go out and do things in secret; it's just that when virtualization hits an environment, I understand why some may feel cautious and not pull the trigger on a lot of their systems. I still refuse to virtualize Exchange, and only as a proof of concept have I virtualized a SQL 2008 server; even in that instance it is the only VM occupying a redundant host pair. The problem with Hyper-V? Overhead and management; it's still immature compared to VMware. Just virtualizing an OS isn't enough for a production environment: you need the ability to fail over, migrate, and expand and contract at will.
What I see is a convergence of higher-level technologies. In the past, topics like disaster recovery, business continuity, and security pretty much dominated public forums and marketing material. Security is becoming a moot point (I know, so politically incorrect to say). Most companies spend an incredible amount of money annually to protect themselves from themselves. The human factor is the biggest cause of security problems: missing backup tapes, stolen laptops, disgruntled employees downloading everything to a flash drive before they leave. Those are the most common security problems companies face today. Granted, there are ways to prevent all of that, but once again, that's more money spent to protect yourself from yourself. At the end of the day, companies weigh what they truly are willing to risk and what is unacceptable; in the end, most companies will risk more than they will prevent.
Today it's the "Cloud". Pushing all the marketing BS aside, a true implementation of the cloud takes the previous era's technology focuses and pushes them back from headliners to feature-list items. Hosted storage, application virtualization, and hardware/OS agnosticism all deliver the DR, BC, and security capabilities that businesses need, and they come built in as "features" of a $50-a-year-per-user product (Google Enterprise Apps) rather than hundreds of thousands of dollars of infrastructure plus hundreds of thousands of dollars annually in maintenance costs. The true argument in a business will be the psychological battle of freeing people from their Microsoft products and changing the way we distribute documents and collaborate. Unify the applications and products, make them accessible with a familiar interface, find an alternative but intuitive method to collaborate and distribute documents, and you pretty much eliminate the file-format war.
IT's biggest fight comes in the form of the "invasion" of consumerism (iPads, iPhones, Droids, Xooms), devices designed to free the consumer but lacking the controls IT departments require to maintain compliance and information security. Virtualized application environments may solve that, especially if they are hardware- and OS-agnostic. In this model the applications and files still live in a cloud/data-center environment, with the device used only as the viewer. Thus devices that are traditionally consumer-grade become more acceptable in the workplace.
Anyhow my brain dump.
schuang74 - Thursday, March 24, 2011 - link
Wow, I just read my post above... I am really tired. In retrospect, sorry for the grammatical and spelling errors ;)