Virtualization - Ask the Experts #4
by Anand Lal Shimpi on September 10, 2010 12:55 AM EST - Posted in
- IT Computing
- Virtualization
- Intel
Our Ask the Experts series continues with another round of questions.
A couple of months ago we ran a webcast with Intel Fellow Rich Uhlig, VMware Chief Platform Architect Rich Brunner and myself. The goal was to talk about the past, present and future of virtualization. In preparation for the webcast we solicited questions from all of you; unfortunately, we only had an hour during the webcast to address them. Rich Uhlig from Intel, Rich Brunner from VMware and our own Johan de Gelas all agreed to answer some of your questions in a six-part series we're calling Ask the Experts. Each week we'll showcase three questions you asked about virtualization and provide answers from our panel of three experts. These responses haven't been edited and come straight from the experts.
If you'd like to see your question answered here leave it in the comments. While we can't guarantee we'll get to everything, we'll try to pick a few from the comments to answer as the weeks go on.
Question #1 by AnandTech user mpsii
Given that more is better, when considering budget constraints, are more CPU cores better than more systems (one hexa-core system vs. three dual-core systems)? Or is one system with 32GB of RAM going to perform better than two systems with 16GB of RAM?
Answer #1 by Johan de Gelas, AnandTech Senior IT Editor
If you'd like to use virtualization to run several small business services (file server, mail server, etc.), know that a hypervisor and the necessary utilities require some RAM, processing power and disk space. In the case of ESXi, the free hypervisor, you need about 300MB of RAM to run the hypervisor. Normally the hypervisor absorbs few processor cycles, but network-intensive applications can increase the load on the hypervisor significantly. In some cases a complete core can be kept busy doing nothing other than sorting and processing incoming Ethernet packets for several VMs.
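To put rough numbers on that overhead, here's a quick back-of-the-envelope sizing sketch. The ~300MB hypervisor figure is the one mentioned above; the host size and per-VM allocation are purely illustrative assumptions, not recommendations.

```python
# Rough consolidation sizing: how many small VMs fit in RAM on one host?
# The ~300MB hypervisor footprint is the figure mentioned above; the host
# size and per-VM allocation below are assumptions for illustration only.

HOST_RAM_MB = 16 * 1024        # e.g. a host with 16GB of RAM
HYPERVISOR_RAM_MB = 300        # approximate ESXi overhead noted above
VM_RAM_MB = 2 * 1024           # assumed allocation per small-business VM

usable_mb = HOST_RAM_MB - HYPERVISOR_RAM_MB
print(f"Room for roughly {usable_mb // VM_RAM_MB} VMs before overcommitting RAM")
# -> Room for roughly 7 VMs before overcommitting RAM
```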
And the more similar the virtual machines you run on top of a single host, the better it gets from an efficiency and cost point of view. For example, ESXi uses transparent page sharing, so identical memory pages are shared among several VMs. Several Ubuntu servers running the same kernel will therefore use less memory if they are all running on the same physical host.
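The idea behind page sharing can be sketched in a few lines. This is only an illustration of the concept (hash page contents, keep one copy of identical pages), not how ESXi actually implements it; a real hypervisor would also do a byte-for-byte compare on hash matches and mark shared pages copy-on-write.

```python
import hashlib

PAGE_SIZE = 4096

def share_pages(vm_pages):
    """vm_pages maps a VM name to a list of page contents (bytes)."""
    backing = {}    # content digest -> single shared copy of that page
    mapping = {}    # (vm, page index) -> digest of the backing page
    for vm, pages in vm_pages.items():
        for i, page in enumerate(pages):
            digest = hashlib.sha256(page).hexdigest()
            backing.setdefault(digest, page)   # keep one copy per unique content
            mapping[(vm, i)] = digest
    return backing, mapping

# Three VMs booted from the same kernel: their identical pages collapse into
# a single backing page, while each VM's unique page stays separate.
vms = {f"ubuntu{i}": [b"\x90" * PAGE_SIZE, bytes([i]) * PAGE_SIZE] for i in range(3)}
backing, _ = share_pages(vms)
print(f"{sum(len(p) for p in vms.values())} guest pages, {len(backing)} physical copies")
# -> 6 guest pages, 4 physical copies
```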
A dual-core CPU is the absolute minimum for virtualized servers, even home servers, so I would definitely prefer a quad-core or hexa-core system for a home server. Of course, the CPU is just one component: make sure you have enough RAM slots and at least two gigabit ports.
Question #2 by Dennis H.
What is in store for desktop virtualization? How will virtualization impact cloud computing from a user/customer standpoint? How will virtualization impact home users?
Answer #2 by Rich Brunner, VMware Chief Platform Architect
Desktop virtualization is probably best characterized as being in the 'early adopter' stage despite its massive potential. Discussions of cloud computing often equate the technology and its benefits with the datacenter. VMware firmly believes that a consistent architecture supporting desktop delivery via the cloud is an important part of the overall enterprise service delivery strategy. From an IT perspective, desktops and data are controlled through a common cloud architecture: desktops and data remain secure in the datacenter, management is centralized, and cost reductions are achieved through pooling, automation, improved service levels and availability. Enterprise end users, on the other hand, want instant delivery of their desktop, anytime, anywhere, from any device, with a rich experience tied to end-user identity, not devices.
Question #3 by Ron K.
How do you plan for consolidation when many of the nodes to be consolidated have issues like CPU loops and/or nonsensical consumption, such as resource-intensive screen savers?
Answer #3 by Rich Uhlig, Intel Fellow
CPU loops can arise out of different situations, including idle loops and spin locks. When a guest OS enters an idle state, it will typically also issue an instruction like HLT (halt), or issue commands to the CPU to enter a more power-efficient state (called "C-states"). A hypervisor can set various VT execution controls to cause a VM exit from the guest OS and gain control on such events, and then schedule another VM to run so that the physical CPU resource won't be wasted.
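As a rough illustration of that mechanism, here is some toy scheduling logic, not actual VMM code: the vCPU names and the simplistic round-robin policy are made up for the sketch.

```python
from collections import deque

# Toy model of the idea above: a VM exit on HLT hands control back to the
# hypervisor, which gives the physical core to another runnable vCPU
# instead of letting the idle guest hold on to it.
run_queue = deque(["vm1-vcpu0", "vm2-vcpu0", "vm3-vcpu0"])

def handle_vm_exit(current_vcpu, exit_reason):
    if exit_reason == "HLT" and run_queue:
        next_vcpu = run_queue.popleft()
        # Simplistic round-robin re-queue; a real hypervisor would block the
        # halted vCPU until an interrupt targets it.
        run_queue.append(current_vcpu)
        return next_vcpu
    return current_vcpu   # other exit reasons: resume the same guest

print(handle_vm_exit("vm0-vcpu0", "HLT"))   # -> vm1-vcpu0
```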
Detecting idle activity in this way is more or less standard CPU resource management by a hypervisor, but a more interesting case is that of spin locks, where the CPU repeatedly cycles through a loop to check on the availability of a lock that might be temporarily held by another CPU. In a non-virtualized system, spin locks usually resolve quickly, because the lock-holding CPU will typically release its lock after a short time, and the lock-requesting CPU will obtain the lock and drop out of the spin-lock loop. However, in a virtualized system, an adverse situation known as “lock-holder preemption” can occur. In this case, the lock-holding CPU – which is now running in a VM – might not be scheduled to run by the hypervisor (i.e., it is preempted), and the virtual CPU requesting the lock will just continue to spin, waiting on a lock that can’t be released until the lock-holder is again scheduled to run. In the worst case, the lock-requesting CPU might spin in a loop for its entire execution quantum, a clear waste of CPU resources. The difficulty here is that the hypervisor doesn’t necessarily know that lock-holder preemption is happening.
To help address this scenario, we recently added a new execution control to VT that has the physical CPU monitor execution within a VM. When an excessive number of iterations around a spin lock is detected (signaled by many executions of the “PAUSE” instruction – a good indicator of a spin lock), the CPU causes a VM exit to return control to the hypervisor so that it can schedule another virtual CPU to run. We’ve found that this new execution control – called “PAUSE-loop Exiting” – is effective in heavy OS consolidation scenarios, where the likelihood of lock-holder preemption increases as physical CPUs are oversubscribed with virtual CPUs.
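The detection heuristic can be sketched in software roughly as follows. Real hardware works with two thresholds (approximately: the largest gap between PAUSEs that still counts as the same spin loop, and the total spin time tolerated before forcing a VM exit); the constants and cycle counts below are invented for illustration, not the actual values.

```python
PLE_GAP = 128       # assumed: max cycles between PAUSEs within one spin loop
PLE_WINDOW = 4096   # assumed: total spin cycles tolerated before a VM exit

def should_exit(pause_timestamps):
    """pause_timestamps: cycle counts at which the guest executed PAUSE."""
    spin_start = prev = pause_timestamps[0]
    for t in pause_timestamps[1:]:
        if t - prev > PLE_GAP:
            spin_start = t          # long gap: treat this as a new spin loop
        if t - spin_start > PLE_WINDOW:
            return True             # guest has spun too long: force a VM exit
        prev = t
    return False

print(should_exit(list(range(0, 10_000, 10))))     # tight spin loop -> True
print(should_exit(list(range(0, 10_000, 1_000))))  # occasional PAUSEs -> False
```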
For things like resource-intensive screen savers, there’s not much that the hypervisor or hardware can do, since the computation going on in a screen saver may be legitimate and what the user desires. The best practice here is simply not to enable such screen savers in guest OSes if their computation is thought not to be useful.
8 Comments
chukked - Friday, September 10, 2010 - link
Why can't Intel add VT-d and VT-c support to its multimedia series of boards? (Added cost is OK.) :(
najames - Thursday, September 16, 2010 - link
...or at least clearly indicate which hardware is and is not VT-d (or the AMD equivalent) capable. Trying to find out which motherboard/CPU combo will actually work in white-box form is like trying to pin the tail on the donkey blindfolded.

Finite Loop - Friday, September 10, 2010 - link
Here's an example of where virtualization on the desktop may take place: if I'm on the road with my laptop, I can get wireless internet access in hotel rooms, but I'd also like to connect my mobile phone to a PBX. I can tether my phone to my laptop and have the phone route its traffic through the laptop. If the phone traffic is routed straight through the laptop, I may run into issues with SIP connections combined with NAT. Sometimes I can establish a VPN connection with a backend server and have the phone traffic go through the VPN connection.
More often than not, I'd like to have my laptop connected to several PBX servers, allowing the phone to be reachable from all PBXs (and vice versa).
Virtualization is not necessarily required, but I currently just run an Asterisk (Trixbox) PBX server in a virtual machine on the laptop. I wouldn't want to install all this software straight onto the laptop's host OS, which is running Linux, by the way. Now the phone is tethered to the laptop and establishes a SIP connection with the (local) Asterisk PBX. Asterisk allows multiple trunk connections to other PBXs and additionally provides IAX, which has less trouble with NAT connections.
Following this, combined with frustrating experiences with SIP proxies and the like, I now have an Asterisk PBX running as a (KVM) virtual machine on most of my desktops at various locations. These are desktops that otherwise don't have a local server available (which would then be the place to run the Asterisk server).
Guspaz - Friday, September 10, 2010 - link
But, as you pointed out, you don't need virtualization in your scenario; a simple VPN would work fine, even for connecting to multiple PBX servers (the typical road-warrior scenario for VPN usage uses the VPN as the gateway, so *all* internet traffic flows through it, secure over the untrusted segment (the hotel, the cell network, etc)).

Virtualization is a great technology, but it's still not a magic powder that should be sprinkled on all problems. If a solution other than virtualization exists that is simpler, faster, and cheaper, then virtualization probably isn't the right solution to the problem.
In your case, you'd still need to run through a VPN, since the trunk connections from your virtualized copy of Asterisk are still going over the untrusted connection.
Finite Loop - Friday, September 10, 2010 - link
Well, as I pointed out, I already use a VPN in order to route the (SIP) phone traffic to the PBX. I may not have made it clear that it's more desirable to have the phone connect to a local PBX (running as a VM on the laptop) than to have it connect to a remote PBX.

Moreover, it's preferable for me to have the phone make one SIP connection and have Asterisk deal with subsequent connections than to have the phone make multiple connections (which I haven't even tried). Additionally, I already have the VPN software running together on the Asterisk VM, so using the local PBX is more or less transparent once the phone is tethered to the laptop.
Finite Loop - Saturday, September 11, 2010 - link
I didn't mention this, but having this PBX-in-a-box is really fruitful. Although I may be able to set up Asterisk on a Linux host and run a PBX without virtualization, once the PBX is running by itself I can easily transfer/port this setup to other hosts which inherently are not Asterisk-friendly.

Instead of having to deal with PBX, VPN and related stuff for Windows, Linux and Mac OS, I just focus on getting the virtualized PBX running (as a guest) on a particular host and the rest follows automatically. I wouldn't want to configure someone's IP phone setup on a MacBook, but with a virtualized PBX, I don't have to know about the particulars of a specific platform, just how its hypervisor starts/stops the PBX and how the hypervisor provides (virtual) network connections in relation to the host.
If your company has certain policies on how remote clients connect (specific VPN software), how the PBX is accessed etc., then deploying a PBX-as-a-guest or even a corporate-image-as-a-guest can make things a whole lot easier.
One can generalise this to a lot of software that's specific to a given company, has specific run-time characteristics, and is otherwise sensitive to host settings or is limited to running on a particular platform.
mmatis - Friday, September 10, 2010 - link
but who won the HD 5770s in your Anniversary giveaway?

solgae1784 - Friday, September 10, 2010 - link
In question #1, the answer said to have at least two gigabit ports. That's fine for test & dev use or home use, but for production/enterprise use, I would definitely shoot for at least four - more if you need to segregate traffic. Commingling the management traffic for your hypervisor and the production traffic for your virtual machines can be considered bad practice from a security perspective, and you might end up having your security team raise a red flag.

While you can segregate management and production traffic by means of VLANs, and even reduce the number of links required by utilizing 802.1Q trunking, there are still many attacks that can hop between VLANs. Physical separation is still the best way to separate traffic (if not always necessary), and for that you will need more links and thus more NICs. Check with your security team!
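To visualize the kind of separation being argued for, here is a hypothetical layout sketch; the switch names, NIC names and VLAN IDs are all made up for illustration. Management traffic gets its own physical uplinks, VM traffic gets others, and 802.1Q VLANs further subdivide traffic within each.

```python
# Hypothetical host network layout: management and production traffic never
# share a physical uplink; VLANs subdivide traffic within each virtual switch.
network_layout = {
    "vswitch_mgmt": {
        "uplinks": ["nic0", "nic1"],              # dedicated management ports
        "port_groups": {"management": {"vlan": 10}},
    },
    "vswitch_vm": {
        "uplinks": ["nic2", "nic3"],              # separate ports for VM traffic
        "port_groups": {
            "prod-web": {"vlan": 20},
            "prod-db": {"vlan": 30},
        },
    },
}

nics_needed = sum(len(sw["uplinks"]) for sw in network_layout.values())
print(f"{nics_needed} gigabit ports needed for this layout")   # -> 4
```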