One of the first steps in a virtualization project is building a list of the workloads that will be virtualized, with a measurement or estimate of the resources each one will need: X MHz of CPU and Y MB of RAM per workload. Sum everything up, and let's say you end up with 30 GHz of CPU power and 20 GB of RAM.
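As a rough sketch of that inventory step, here is a minimal example that sums per-workload estimates into the cluster-wide totals used in the rest of this post. The workload names and numbers are made up for illustration; a real list would come from monitoring data.

```python
# Hypothetical workload inventory: (name, CPU demand in MHz, RAM in MB).
# These entries are illustrative only.
workloads = [
    ("mail server",      4000, 4096),
    ("file server",      2000, 2048),
    ("database",         8000, 6144),
    ("web front-end",    6000, 4096),
    ("directory server", 4000, 2048),
    ("test/dev VMs",     6000, 2048),
]

total_mhz = sum(cpu for _, cpu, _ in workloads)
total_mb  = sum(ram for _, _, ram in workloads)

print(f"Total CPU demand: {total_mhz / 1000:.0f} GHz")  # ~30 GHz
print(f"Total RAM demand: {total_mb / 1024:.0f} GB")    # ~20 GB
```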
The hardware you'd like to run all those virtual machines on holds two CPUs (dual socket) with four cores each (quad core). That means every physical server will give you between 20 and 25 GHz of CPU power. For memory, you'll put 12 GB of RAM in each server.
So the plan is to buy two of those servers, right?
Well, as long as your infrastructure is 100% healthy and running OK, two servers will do the job just fine. You've got enough resources, with a bit of headroom for overhead and future growth. But what happens when one of the physical servers is down? Think of hardware problems, think of virtualization software upgrades, think of patching the hypervisor.
Then the available resources are down to 20 GHz and 12 GB of RAM. For CPU, 20 GHz means that every application will get roughly a third less than it asked for, and will therefore run slower, probably noticeably so for users. Is that acceptable in these cases?
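A quick back-of-the-envelope check of that claim, using the example figures from this post (not a sizing recommendation):

```python
# Example figures from the text above; substitute your own sizing.
hosts          = 2
ghz_per_host   = 20.0   # conservative end of the 20-25 GHz per-server estimate
cpu_demand_ghz = 30.0   # total CPU demand of all workloads

# With one host down for maintenance, patching or a hardware failure:
surviving_ghz = (hosts - 1) * ghz_per_host
shortfall     = 1 - surviving_ghz / cpu_demand_ghz

print(f"CPU left: {surviving_ghz:.0f} GHz")
print(f"Shortfall: about {shortfall:.0%} less than the workloads ask for")
```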
And last but not least, memory. Temporarily, you've got just 12 GB of RAM available, while your VMs need 20 GB. Did you know that what happens next depends on which hypervisor you've chosen?
- With Hyper-V or Xen, you're in trouble. With 12 GB, you can run about 60% of your VMs, and the rest stay down.
- With ESX and ESXi, you can start all your VMs, and just as with CPU, there aren't enough resources, so everything will slow down a bit. But it will run. This trick is called "memory overcommitment".
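Here is a toy model of that difference, purely to make the arithmetic visible. It is not how any real hypervisor is implemented: ESX, for instance, relies on techniques like ballooning, transparent page sharing and swapping rather than this simple accounting, and the per-VM sizes below are made up.

```python
# Purely illustrative model of the two admission policies.
vm_ram_gb   = [4, 2, 6, 4, 2, 2]   # hypothetical per-VM memory sizes, 20 GB total
host_ram_gb = 12                   # RAM left when one server is down

# Strict allocation: a VM only starts if its full memory fits.
started, free = [], host_ram_gb
for size in vm_ram_gb:
    if size <= free:
        started.append(size)
        free -= size
print(f"Strict allocation: {len(started)}/{len(vm_ram_gb)} VMs running "
      f"({sum(started)} of {sum(vm_ram_gb)} GB)")

# Overcommitment: all VMs start; physical RAM is shared, so on average
# each VM is backed by only a fraction of its configured memory.
ratio = host_ram_gb / sum(vm_ram_gb)
print(f"Overcommitment: {len(vm_ram_gb)}/{len(vm_ram_gb)} VMs running, "
      f"each backed by ~{ratio:.0%} of its configured RAM on average")
```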
Now let me guess: did your Microsoft sales guy tell you about this difference?