Using Scalability Reports to Size VDI Deployments

Posted on 2010/09/03


Lately, I have been getting questions about how to “size” a XenDesktop deployment. With the recent release of the Hyper-V / Windows 7 / XenDesktop whitepaper, I thought it would be a good time to go over a few points on how you can leverage the scalability reports that have been released by Citrix and other vendors.

Citrix intentionally follows the same process for scalability testing so that the results can be used to compare hardware and hypervisor platforms. Since it is impractical to create tests that match each customer’s environment, a standard configuration is selected and the performance data published. The Login VSI medium user workload provides a consistent, repeatable, and slightly randomized workload that can be run across different configurations to compare performance. Also, because the workload comes from a readily available third-party tool, customers can easily reproduce the test results with an unbiased workload.

Sizing a XenDesktop environment should neither be done blindly nor rely solely on published scalability results. If you want to size your environment correctly, you must conduct a pilot with actual desktop users and their applications. The pilot results can then be combined with published data to determine the number of desktops a given hardware platform can be expected to support in your environment. Most initial pilots should focus on two key areas: CPU and storage activity. These two areas have a significant impact on a desktop virtualization project and are historically the ones that are undersized in larger environments. Other areas such as RAM and network activity can certainly be monitored as well, but they are not as difficult to size as CPU and storage.

In most cases, the scalability results published by any vendor will not match the requirements of your environment, making it impossible to size your farm from the reported numbers alone. For instance, the Login VSI medium workload, which uses Microsoft Office, IE, and Adobe, generates about 5 IOPS per user when running in the steady-state loop. In addition, to prevent network latency from affecting response times, IE browses cached web pages during the test. If a user were actually browsing live web pages, both network and storage activity would increase as the pages are downloaded and cached by the browser. Recent tests have shown that a heavy IE workload can generate as much as 20-30 IOPS per session. Therefore, testing with your anticipated workload is paramount to obtaining realistic sizing data.
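To put those per-user numbers in perspective, here is a minimal sketch of how per-user IOPS roll up into an aggregate storage requirement. The desktop count and headroom factor are hypothetical examples, not figures from the whitepaper:

```python
# Rough storage sizing from per-user IOPS -- all figures here are hypothetical examples.
desktops = 500                  # planned number of concurrent desktops
iops_per_user_light = 5         # steady-state Login VSI medium-style workload (cached content)
iops_per_user_heavy = 25        # heavy browsing workload, as might be measured in a pilot
peak_factor = 1.3               # assumed headroom for logon storms and activity bursts

for label, iops in (("light", iops_per_user_light), ("heavy", iops_per_user_heavy)):
    required = desktops * iops * peak_factor
    print(f"{label} workload: ~{required:,.0f} IOPS needed from the storage back end")
```

The point is not the exact numbers but the spread: the same desktop count can demand several times more storage throughput depending on the workload you actually measure.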

With a bit of logic, you can integrate the results from published reports with your pilot results to anticipate how the proposed XenDesktop environment will perform. For instance, the published results from the Citrix whitepaper show a BL460C G6 blade with 64GB RAM is capable of hosting about 75 Windows 7 users on Hyper-V with the Login VSI medium workload. Suppose you run a pilot with a similar hardware configuration but with your user workload, and at the end of the pilot you determine the optimal number of users per server is 50. Taking both pieces of data into account, you can derive that your pilot workload requires approximately 50% more server capacity than the Login VSI medium workload.
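The arithmetic behind that 50% figure is worth making explicit, since the same calculation applies whatever numbers your own pilot produces. A minimal sketch using the figures above:

```python
# Derive a scaling ratio between a published result and your pilot result.
published_users_per_server = 75   # Citrix whitepaper: Login VSI medium workload on the blade
pilot_users_per_server = 50       # your workload on comparable hardware

capacity_ratio = published_users_per_server / pilot_users_per_server   # 75 / 50 = 1.5
extra_capacity_pct = (capacity_ratio - 1) * 100
print(f"Your workload needs about {extra_capacity_pct:.0f}% more server capacity per user")
```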

Knowing this relationship allows you to leverage other studies published with the Login VSI medium workload. To estimate how your workload would perform in the environment used by a study, whether it covers a different hypervisor or different hardware, simply scale the reported Login VSI sizing results by the ratio you derived (in this example, divide the reported user density by 1.5 to account for the 50% additional capacity your workload needs). Similar conclusions can be drawn for CPU usage or storage I/O operations (IOPS) when comparing published reports to your pilot results.
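As an illustration of applying that ratio to a different published figure, here is a short sketch; the 90-users-per-server value is a hypothetical placeholder, not a number from any report:

```python
# Apply the pilot-derived ratio to a published result for another platform.
capacity_ratio = 75 / 50                 # derived from the pilot comparison above
published_users_per_server = 90          # hypothetical figure for a different hypervisor/hardware

estimated_users_for_your_workload = published_users_per_server / capacity_ratio
print(f"Estimated users per server with your workload: ~{estimated_users_for_your_workload:.0f}")
# prints ~60, i.e. the published density reduced by the 1.5x capacity ratio
```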

If you found this information useful and would like to be notified of future blog posts, please follow me on Twitter @pwilson98.
