With a new generation of server hardware hitting the market I think it’s time to revisit this topic. First some background.
Provisioning Services, in conjunction with Citrix XenApp, allows us to create non-persistent XenApp worker nodes on demand, all of them identical because they are streamed from a single read-only image. This lets one operating system (Windows 2008 R2) provide desktops or applications to multiple users, at a lower cost than the 1-to-1 OS-to-user model of VDI. These XenApp worker nodes maintain their consistency through regular (daily or weekly) reboots: each reboot flushes the write cache and returns the node to its original gold-image state. While running, each worker node directs the write IO it generates to a cache location, and that write cache location is the key item we're discussing in this post. Provisioning Services offers the following options:
- Cache on device hard drive
- Cache on device hard drive persisted (experimental)
- Cache in device RAM
- Cache on server disk
- Cache on server persisted
The focus of this post is this: servers shipping today easily hold 192GB of RAM at a reasonable cost, so instead of attaching a virtual hard disk to each worker node for the write cache (the configuration I hear about most often), we can now use memory as the cache location. Offloading the write IO to RAM instead of disk decreases the overall cost of deploying this solution even further, and undoubtedly improves responsiveness and performance.
So in a server with 192GB of RAM and 16 physical cores, let's see how this breaks down. With 16 cores we're probably looking at a base configuration of eight 4-vCPU VMs, based on the latest testing from Citrix done here. Let's take the example of eight 4-vCPU VMs with 16GB of RAM each.
| VMs | Per-VM allocation | Subtotal |
|-----|-------------------|----------|
| 8 | 16GB RAM | 128GB RAM |
| 8 | 5GB RAM cache | 40GB RAM |
| | **Total** | **168GB RAM per host** |
As you might notice in the Citrix testing that Frank Anderson did, the minimum available memory was a little more than 50GB. We're getting tight, especially when you factor in the roughly 6% free-memory threshold at which hypervisors like vSphere will start inflating the balloon driver.
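The arithmetic above can be sketched out as a few lines of Python. This is just napkin math using the post's own figures; the 6% balloon threshold and the per-VM write cache size are the assumptions stated earlier, not fixed values:

```python
# Napkin math for a single PVS/XenApp host, using the figures from the post:
# 192GB of host RAM, eight worker VMs, 16GB RAM and a 5GB RAM write cache each.
HOST_RAM_GB = 192
NUM_VMS = 8
VM_RAM_GB = 16
CACHE_GB = 5               # per-VM PVS write cache held in RAM (assumed size)
BALLOON_THRESHOLD = 0.06   # ~6% free-memory floor before ballooning starts

vm_total = NUM_VMS * VM_RAM_GB        # 128GB for the VMs themselves
cache_total = NUM_VMS * CACHE_GB      # 40GB of RAM write cache
committed = vm_total + cache_total    # 168GB committed per host
headroom = HOST_RAM_GB - committed    # what's left for the hypervisor and slack
floor = HOST_RAM_GB * BALLOON_THRESHOLD  # free RAM to keep before ballooning

print(f"Committed: {committed}GB, headroom: {headroom}GB, "
      f"balloon floor: ~{floor:.1f}GB")
```

With only 24GB of headroom against a roughly 11.5GB balloon floor, the configuration fits, but not by a wide margin.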
So that's the math, and we're pretty close to maxing out 192GB of RAM; with hosts holding even larger capacities, this napkin math should hold up even better. Non-persistent Windows 2008 R2 XenApp desktops and apps with no disk IO impact…pretty sweet…so how come nobody I know is doing this?