VDI “The Missing Questions” #5: How does 1 vCPU scale compared to 2 vCPUs?

More and more, the idea of user classifications and workload profiles is being used to separate VDI user allocations. I’ve worked with many customers who prefer to stack-rank their users based on the importance of their role/job function and the typical applications those users need, as a means to (hopefully) arrive at a more appropriate VDI resource allocation. This is a great idea and a good excuse for organizations to take a long, hard look at their users and the applications they use day to day.

In case you are finding this blog for the first time, we have been attempting to defy blog physics and host a series of blogs – this requires the use of a manually updated table of contents.

Most of the time the three main items separating user classes are:

  1. vCPU quantity
  2. Memory allocation
  3. Disk space

The first pitfall that I see occasionally is too much granularity in the workload profiles. Don’t get me wrong – if you have a good view into your users and applications and see the need to support and manage 5 different user classifications, that’s great news! But most of the time it comes down to 3 particular user classifications:

  1. Gold (multiple vCPUs, a lot more RAM and disk space than other folks)
  2. Silver (could be a couple of vCPUs, usually more RAM than the OS calls for, can be required for specialized apps, etc.)
  3. Bronze (these are almost always single vCPU, minimum-RAM profiles)

A good buildup approach to determining your workload profile requirements takes into consideration the users and the compute requirements of the apps those users will be running. In most cases, the operating system you choose is the foundation of that buildup.

The aging Windows XP platform is quickly being displaced by Windows 7 in the corporate workspace. Few folks out there are still standing up net new systems for users on Windows XP, for a number of reasons – most new PCs and their manufacturers (not to mention this little company called Microsoft) are no longer developing drivers for, or supporting, the workhorse XP operating system. Let’s be honest, Windows XP came out in 2001. Windows XP is older than my twin girls that are in 4th grade! It was a good ride, but it must come to an end.

You probably noticed that I haven’t mentioned Windows 8 yet. After all, it is the newest desktop operating system (OS) Microsoft has out. There are a couple of reasons for this. Most corporations don’t jump onto the latest OS because they have to support many users, must test/qualify their applications on a new operating system, and, as we all know, anything new usually has fixes and enhancements to follow. Plus, as a general rule of thumb, the first Service Pack must come out before anyone will give real consideration to mass deployment in any organization. Beyond the general newness of Windows 8, it will be interesting to see how “Corporate America” will integrate its new look and feel.

That leaves us with Windows 7, which came out in 2009 and already has Service Pack 1 with a host of subsequent updates. This is the OS that most folks are planning their VDI environments around. Per Microsoft, the minimum requirements for Windows 7 are as follows:

  1. 1 GHz or faster 32-bit (x86) or 64-bit (x64) processor
  2. 1 GB of RAM (32-bit) or 2 GB of RAM (64-bit)
  3. 16 GB of available disk space (32-bit) or 20 GB (64-bit)
  4. DirectX 9 graphics device with a WDDM 1.0 or higher driver

You’ve heard about minimum system requirements, right? They are minimums! Try playing your latest favorite video game on the “minimum system requirements” listed on the back of the package – not very fun or functional, is it? Most of the folks I talk to are split on 32- vs. 64-bit, but most of the time I see users being deployed with 1.5GB to 4GB of RAM and 1 to 2 vCPUs. For the purposes of our testing, and somewhat in line with the Gold/Silver/Bronze user space, we have a hybrid scenario of sorts. Hybrid meaning we have a “Bronze-ish Silver” user that has 1 vCPU and 1.5GB of RAM running Windows 7 32-bit. We also have a “Silver-ish Gold” user that has 2 vCPUs and 1.5GB of RAM. The hybrid users are more or less irrelevant from a memory standpoint, as the primary purpose of answering this question is to measure the impact of 1 vCPU vs. 2 vCPU allocations on the same Login VSI workload. If you missed the Introduction to this series, you should take a look for the full Configuration Settings that we used.

Data time! Let’s start by looking at the E5-2643 Processor and see how it scales between the different vCPU configurations.

With 1 vCPU, the E5-2643 system scaled to 81 virtual desktops before exceeding the VSImax. The same system only scaled to 54 virtual desktops when configured with two vCPUs (a 33% decrease in scale). There was little improvement to end-user latency from adding a second vCPU to the user’s virtual desktop, even at a low number of virtual desktops. You can see this as the red and blue lines follow pretty much the same ramp-up in terms of user count and latency.


Now look at the impact to the E5-2665 system. With 1 vCPU, the E5-2665 system scaled to 130 virtual desktops before exceeding the VSImax. The same system only scaled to 93 virtual desktops when configured with two vCPUs (a 28% decrease in scale).

Unlike the E5-2643 system, the E5-2665 system showed a slight improvement in virtual desktop latency by utilizing two vCPUs. The slight improvement was realized below 45 total virtual desktops – but more to come on this in a later question.
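If you want to sanity-check the scaling numbers yourself, here is a quick sketch using only the VSImax results quoted above (81 vs. 54 desktops on the E5-2643, and 130 vs. 93 on the E5-2665):

```python
# Scale impact of adding a second vCPU per desktop, computed from the
# VSImax results reported in this post. No other data is assumed.

results = {
    "E5-2643": {"1vCPU": 81, "2vCPU": 54},
    "E5-2665": {"1vCPU": 130, "2vCPU": 93},
}

for cpu, r in results.items():
    decrease = (r["1vCPU"] - r["2vCPU"]) / r["1vCPU"] * 100
    print(f"{cpu}: {r['1vCPU']} -> {r['2vCPU']} desktops "
          f"({decrease:.0f}% decrease in scale)")
```

Running this shows roughly a 33% decrease on the E5-2643 and a 28% decrease on the E5-2665 – adding the second vCPU costs you about a third of your density either way.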

Answer: Be careful what you ask for if you want 2 vCPU desktops. If you find yourself in a “well, I would prefer to have 2 vCPUs…” situation and there is no real application or resource-level need, your dollar will stretch a lot farther if you stick with a single vCPU.

Just a quick example: if you were going to deploy 1,000 virtual desktops on the B200 M3/E5-2665 system we looked at, a single vCPU deployment would require 8 blades without redundancy. If you needed/wanted 2 vCPUs on each VM, you would need 11 blades without redundancy. 3 more blades doesn’t sound like much, but it is a 37.5% increase in the compute infrastructure needed!
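The blade math above is simple enough to sketch out – the densities per blade are the VSImax results from this test, and no N+1 redundancy is included:

```python
import math

# Rough blade-count sizing for the B200 M3/E5-2665 example:
# 130 desktops per blade at 1 vCPU, 93 per blade at 2 vCPUs
# (the VSImax results from this test), for 1,000 total desktops.

def blades_needed(total_desktops, desktops_per_blade):
    """Round up: a partially full blade is still a blade you buy."""
    return math.ceil(total_desktops / desktops_per_blade)

one_vcpu = blades_needed(1000, 130)   # -> 8 blades
two_vcpu = blades_needed(1000, 93)    # -> 11 blades
extra = (two_vcpu - one_vcpu) / one_vcpu * 100

print(f"1 vCPU: {one_vcpu} blades, 2 vCPU: {two_vcpu} blades "
      f"(+{extra:.1f}% compute infrastructure)")
```

Swap in your own per-blade density and desktop count to size your environment – the round-up in `blades_needed` is what makes small density losses turn into whole extra servers.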

In addition, the purpose of using multiple vCPUs is to reduce user latency and application response times, right? Well, take a close look at the ~45 user mark on the E5-2665 graph above. Notice anything?

What’s next? Come back next week as Jason explores this latency/multi-vCPU situation in more depth when he answers, “What do you really gain from a 2 vCPU virtual desktop?”



Distributed VDI for Enterprise Branches

IT managers are in an interesting situation: developments in virtualization, compute, and mobility are bringing new opportunities for architecting an efficient IT infrastructure, and they are looking for ways to do more with less. These developments are accelerating resource centralization, with more and more critical assets moving into the enterprise headquarters and data center, and this is creating a ripple effect on branch and remote offices. To meet regulatory compliance and cost-control requirements, many organizations are optimizing resources and reducing complexity in the branch office.

Although centralizing branch resources and increasing access brings great benefits, it can also pose security, latency, business continuity, and performance challenges. Optimal business productivity is achieved only when the same level of service is available in the branch office as in the corporate headquarters. Branch-office networks need to be highly secure, available, remotely manageable, and extensible – and they must deliver application performance and a quality of experience as good as in the main offices.

As organizations ride the new wave of technology proliferation, an increasing number of employees are using their own devices on corporate networks. Many organizations are looking at delivering virtual desktops to their employees, so that employees can connect to corporate and branch resources securely while efficiently using shared resources. Many IT managers have started with a centralized VDI architecture – providing virtual desktops from the company’s headquarters or data center. The model has met with good success for users in the headquarters or campus environment, as these locations tend to have a big, reliable WAN pipe to the virtualized environment. But when IT managers try to take the same model downstream to branch employees, they face multiple challenges:

  1. Resiliency of the WAN link – an outage will shut down business at the branch, as the branch employees cannot access their desktops.
  2. Latency introduced by the WAN link – user experience is compromised at the branch. VDI rollouts to the branch will fail miserably if the user experience does not match what users were used to on their dedicated workstations or laptops.
  3. Bandwidth congestion – multiple desktops and applications compete for bandwidth.

A few months back, Cisco and VMware introduced a unique solution called the “Office in a Box” (http://blogs.cisco.com/borderless/cisco-office-in-a-box-cisco-ucs-e-series-with-vmware-view/). This solution enabled IT managers to deliver VDI on the Cisco UCS E-Series server built into the Cisco Integrated Services Router G2.

With the new release of VMware View 5.2, Cisco and VMware have built a Distributed VDI architecture which provides the benefits of centralized management with virtual desktops local to the branch. VMware vCenter and the broker are located at the company data center, and the virtual desktops can be hosted at the branch on the Cisco UCS E-Series server running in the ISR G2.

The key benefits with this Distributed VDI architecture are:

  1. Since the virtual desktops are local to the branch, the user experience is far better.
  2. WAN outages do not affect users who are already working on their local desktops.
  3. Management of the desktops is still centralized, so image management and patch updates can still be done centrally.

More details about the Distributed VDI architecture can be found at http://www.cisco.com/en/US/products/ps12629/prod_white_papers_list.html

Check out this video, which provides details about this joint solution.

IT Managers can now design the optimal IT infrastructure that leverages the benefits of the new virtualized era across the organization.

 



Building on Success: Cisco and Intel Expand Partnership to Big Data

This has been an exciting week. Further expanding its Big Data portfolio, Cisco has announced a collaboration with Intel, its long-term partner, on the next generation of an open platform for data management and analytics. The joint solution combines the Intel® Distribution for Apache Hadoop Software with Cisco’s Common Platform Architecture (CPA) to deliver performance, capacity, and security for enterprise-class Hadoop deployments.

As described in my earlier blog posting, the CPA is a highly scalable architecture designed to meet a variety of scale-out application demands. It includes compute, storage, connectivity, and unified management, and is already being deployed in a range of industries including finance, retail, service provider, content management, and government. Unique to this architecture are the seamless data integration and management integration capabilities between big data applications and enterprise applications such as Oracle Database, Microsoft SQL Server, SAP, and others, as shown below:
[Figure: CPA management integration]
The current version of the CPA offers two options depending on use case: performance optimized, which offers balanced compute power with I/O bandwidth optimized for price/performance, and capacity optimized, for the lowest cost per terabyte. The Intel® Distribution is supported in both the performance-optimized and capacity-optimized options, and is available at single-rack and multiple-rack scale. See the new Solution Brief.

The Intel® Distribution is a controlled distribution based on Apache Hadoop, with feature enhancements, performance optimizations, and security options that give the solution its enterprise quality. The combination of the Intel® Distribution and Cisco UCS joins the power of big data with a dependable deployment model that can be implemented rapidly and scaled to meet the performance and capacity of demanding workloads. Enterprise-class services from Cisco and Intel can help with design, deployment, and testing, and organizations can continue to rely on these services through controlled and supported releases.

A performance-optimized CPA rack running the Intel® Distribution will be demonstrated at the Intel booth at the O’Reilly Strata Conference 2013 this week.

CPA at Strata 2013

References:
1. Cisco UCS with the Intel Distribution for Apache Hadoop — Solution Brief
2. Cisco’s Common Platform Architecture (CPA) for Big Data
3. Paul Perez and Boyd Davis on Cisco and Intel Partnership on Big Data (Video)
4. Cisco and Intel Announcement — blog by Didier Rombaut
5. Intel® Distribution for Apache Hadoop Software




#EngineersUnplugged: S2|Ep2| #SDN Doughnut Style

In this week’s episode of Engineers Unplugged, Brian Gracely (@bgracely) of Virtustream takes on the challenge of explaining the industry’s top buzzword, Software Defined Networking, using doughnuts. Seeing is believing:

https://www.youtube.com/watch?feature=player_embedded&v=uAlg8BUh9so#at=16

Welcome to Engineers Unplugged, where technologists talk to each other the way they know best, with a whiteboard. The rules are simple:

  1. Episodes will publish weekly (or as close to it as we can manage)
  2. Subscribe to the podcast here: engineersunplugged.com
  3. Follow the #engineersunplugged conversation on Twitter
  4. Submit ideas for episodes or volunteer to appear by Tweeting to @CommsNinja
  5. Practice drawing unicorns

Technology made delicious and simple. Thoughts, comments, or feedback? Join the conversation @CiscoDC.

Join us next week for Engineers Unplugged Episode 3: OpenStack, featuring Joe Onisick and Colin McNamara.





Connections on the Strip – How MGM Upped the Wireless Ante

MGM Resorts in Las Vegas is all about hospitality.

As one of many major resorts on the renowned Las Vegas Strip, MGM was anxious to connect with guests – and have guests connect with it. It needed to offer something that the competition didn’t. So MGM partnered with Cisco to implement an IT infrastructure that would give guests what they were asking for while also enhancing business-focused technology capabilities.

MGM now offers sufficient bandwidth for concurrent connections from many different devices, which enables simple, reliable access for guests. This also makes the resort more attractive for hosting conventions and meetings. And on the business end, MGM is able to collect more data and gather useful information from network users – allowing for an even greater ability to shape experiences and cater to its guests.

Read more about MGM Resorts’ Wi-Fi transformation and how it is positively impacting their business at mobilize.cisco.com.


