One of the first questions that comes up when talking about desktop virtualization is, "How much will it cost to virtualize my desktops?" The problem with this question is that every organization has a different set of applications with different workloads. I usually start by explaining this, but customers still want me to guess, so we use estimates based on experience and industry standards. The problem with that? There really isn't an industry standard; every organization really is different. The impact of these guesses is amplified by the number of desktops an organization has. In an organization with 50 desktops it's not critical to know the workload exactly, but in an organization with 2,000 desktops, being off by just 10 IOPS per desktop adds up to an enormous amount of I/O. Businesses also typically have many more desktops than servers, so accuracy matters that much more in desktop virtualization than in server virtualization.
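To make that scaling concrete, here's a quick back-of-the-napkin sketch using the numbers from the text. The per-disk IOPS figure is a hypothetical planning number for a 15K spindle, not a standard:

```python
# Rough illustration of how a small per-desktop IOPS estimation error
# scales with fleet size. The 10-IOPS error and the 50/2,000 desktop
# counts come from the text; the ~175 IOPS-per-disk figure is a
# hypothetical planning assumption for a 15K spindle.
def aggregate_iops_error(desktops: int, error_per_desktop: float) -> float:
    """Total IOPS the sizing is off by across the whole fleet."""
    return desktops * error_per_desktop

def extra_spindles(total_iops: float, iops_per_disk: float = 175.0) -> float:
    """Roughly how many extra disks that error translates to."""
    return total_iops / iops_per_disk

small_office = aggregate_iops_error(50, 10)    # 500 IOPS: easy to absorb
enterprise = aggregate_iops_error(2000, 10)    # 20,000 IOPS: not so much

print(small_office, enterprise, round(extra_spindles(enterprise)))
```

At 50 desktops the error disappears into the noise; at 2,000 it's the difference of a whole tray (or more) of disks.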
When companies undertook physical-to-virtual server migrations, we performed consolidation analyses that examined the CPU, memory, disk, and network workloads at the virtual machine level, and we built models that told us how many physical resources we would need to virtualize that physically distributed workload. This gave us enough detail to size the migration, and if a few machines had too much CPU, memory, network, or disk I/O, we simply left them off the list of servers to virtualize initially.
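A consolidation model of that kind can be sketched in a few lines. All the workload numbers, host capacities, and the headroom factor below are made-up placeholders for illustration, not industry figures:

```python
# Minimal sketch of a server-consolidation sizing model: sum each
# workload's peak CPU, memory, and disk I/O, add headroom, and divide
# by per-host capacity. Every number here is a hypothetical example.
from math import ceil

workloads = [
    # (peak CPU MHz, peak memory MB, peak disk IOPS) per machine
    (800, 2048, 60),
    (1200, 4096, 90),
    (400, 1024, 30),
]

HOST_CPU_MHZ, HOST_MEM_MB, HOST_IOPS = 24000, 65536, 4000
HEADROOM = 1.25  # 25% buffer for spikes and failover -- an assumption

def hosts_needed(workloads):
    cpu = sum(w[0] for w in workloads) * HEADROOM
    mem = sum(w[1] for w in workloads) * HEADROOM
    io = sum(w[2] for w in workloads) * HEADROOM
    # The most constrained resource dictates the host count.
    return max(ceil(cpu / HOST_CPU_MHZ),
               ceil(mem / HOST_MEM_MB),
               ceil(io / HOST_IOPS))

print(hosts_needed(workloads))
```

The outliers mentioned above are simply the workloads you filter out of the list before running the model.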
This resource sizing is just as important when you want to virtualize desktops, whether you're doing hosted virtual desktops or hosted shared desktops via RDS or XenApp. What's different from server consolidation analysis, and more important, is that on the desktop we want to know the CPU, memory, network, and disk impact at the process level. That level of detail lets you understand where the workload is coming from and, in many cases, deal with it before virtualizing. On the server side, few organizations know at a process level where their CPU, memory, and disk I/O come from, and on the desktop side of the house even fewer have this view into their physical desktops, because frankly they haven't needed it while the workload was distributed. Combine these two things, server teams deploying desktop virtualization who aren't in the habit of gathering information at this level and desktop teams that have never had this kind of workload data, and you've got a recipe for potential disaster.
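To show what "process-level" visibility actually looks like, here's a toy Linux-only sketch that reads a process's disk I/O counters from `/proc/<pid>/io`. It's only meant to illustrate the kind of raw data a real workload-assessment tool samples over time (per process, per user, across the fleet), not to stand in for one:

```python
# Toy example of per-process disk I/O visibility on Linux, read from
# /proc/<pid>/io. A real desktop-assessment product would sample
# counters like these (or the Windows equivalents) continuously.
import os

def parse_proc_io(text: str) -> dict:
    """Parse the 'key: value' lines of a /proc/<pid>/io file."""
    stats = {}
    for line in text.splitlines():
        key, _, value = line.partition(':')
        if value.strip().isdigit():
            stats[key.strip()] = int(value.strip())
    return stats

def own_process_io() -> dict:
    """Read I/O counters for the current process (Linux only)."""
    with open(f"/proc/{os.getpid()}/io") as f:
        return parse_proc_io(f.read())

sample = "read_bytes: 4096\nwrite_bytes: 8192\ncancelled_write_bytes: 0\n"
print(parse_proc_io(sample))
```

Sampling this per process and rolling it up is exactly how you find out which application is responsible for the IOPS you're about to pay for.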
Here's what happens if you don't have tools to look inside your desktop VMs. When you're only getting half as many VMs per server as expected, performance still sucks, and you're throwing money at servers, you'll wish you'd spent a pittance of that on software that actually tells you which app(s) are using all the CPU so you could mitigate it. Or maybe you're seeing high disk latencies caused by disk I/O you weren't expecting, and you want to know which apps inside the desktop VMs are causing it. Instead you'll be reacting, throwing money at hardware because there isn't time to do it right, and now your management has lost faith in their IT department or, even worse, in you.
Every analysis we have done for desktop virtualization has turned up CPU or disk utilization that neither we as the consultants nor the customer was expecting, and in many cases we were able to identify an issue that could be mitigated before virtualizing. Do everyone a favor and don't skip this step. This isn't rocket science, and I'm not unique in telling you to do it. Either way, I'm blogging it for you now, so don't say later that nobody told you.