When a computer system is virtualized, a hypervisor abstracts the underlying hardware components into virtual counterparts. The hypervisor manages and maintains the relationship between virtual and physical components, and allocates those virtualized components to VMs and workloads. It's this abstraction that enables the workload flexibility, mobility and resource utilization that have made virtualization so powerful for the enterprise. But abstraction also introduces subtle differences in the way some compute resources are handled. Let's consider the impact of virtualization on processors.
What is a traditional CPU?
To appreciate a vCPU, it's important to first clarify some traditional concepts that are often confused. A central processing unit (CPU or processor) is the part of a computer system responsible for executing the instructions -- and handling the associated data -- that make up a computer program (an application or workload). The CPU is a highly complex digital electronic device fabricated onto a small semiconductor substrate called a die. The processing circuitry on that die is typically referred to as the CPU core, or simply the core.
Traditionally, a CPU package consisted of a single core -- a single CPU -- packaged to fit into a physical CPU socket on the computer's motherboard. But things got more complicated as CPU technology advanced.
Adding threads to a traditional CPU
One advance is the addition of threads to a CPU. A CPU is composed of numerous subsystems, so CPU designers devised a means of allowing a CPU to multiplex -- share -- the instruction pipeline between two instruction streams, called threads, which lets the CPU do more work by keeping the physical instruction pipeline filled. From a logical perspective, each thread is recognized as a separate logical CPU, so a modern hyper-threaded CPU core appears to an OS or hypervisor as two different CPUs. In practice, though, threading is a game of diminishing returns. Because much of the CPU is shared, adding a second thread doesn't double performance, and adding still more threads would overtax the CPU and reduce performance for all threads.
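You can see this logical-CPU view from any OS. As a minimal sketch, Python's standard `os.cpu_count()` reports the number of logical CPUs -- on a hyper-threaded system, that count includes every hardware thread, not just every physical core:

```python
import os

# os.cpu_count() returns the number of logical CPUs the OS sees.
# On a 4-core CPU with two threads per core, this reports 8;
# with hyper-threading disabled in the BIOS, it would report 4.
logical_cpus = os.cpu_count()
print(f"Logical CPUs visible to the OS: {logical_cpus}")
```

The standard library can't distinguish physical cores from threads on its own; tools such as `lscpu` on Linux break the count down by socket, core and thread.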
Adding cores to the CPU package
The second advance is the addition of more cores to the CPU package. Early CPU packages housed a single CPU core. As gains in clock speed slowed and it became impractical to make a single core faster and more powerful, designers decided they could at least put more CPU cores into the same CPU package. Today's enterprise-class CPUs might hold 24 cores or more. If you consider that each of those CPU cores might be threaded, the CPU package that plugs into the motherboard might effectively offer as many as 48 logical CPUs. If the motherboard holds two such CPU sockets filled with identical CPU packages, the system might effectively offer up to 96 logical CPUs. If the motherboard holds four such CPU sockets filled with identical CPU packages, the system could provide as many as 192 logical CPUs.
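The arithmetic above reduces to a simple product of sockets, cores per socket and threads per core. A minimal sketch (the function name is illustrative):

```python
def logical_cpus(sockets: int, cores_per_socket: int, threads_per_core: int) -> int:
    # Total logical CPUs = sockets x cores per socket x threads per core.
    return sockets * cores_per_socket * threads_per_core

# The scenarios above: 24-core CPUs, each core running two threads.
print(logical_cpus(1, 24, 2))  # 48
print(logical_cpus(2, 24, 2))  # 96
print(logical_cpus(4, 24, 2))  # 192
```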
What is a vCPU?
When a hypervisor is installed, each logical CPU is abstracted into a vCPU. If the CPU is nonthreaded or threading is disabled in the system's basic input/output system (BIOS), each core will become a vCPU. If the CPU is threaded and threading is enabled in the system's BIOS, each thread will become a vCPU. If we consider our example above of one 24-core threaded CPU -- 48 logical CPUs -- the hypervisor should provide 48 vCPUs for use.
As far as the hypervisor is concerned, each vCPU is a full and complete CPU, and you might notice that the hypervisor's management dialogs use CPU rather than vCPU designations. But each of those CPUs has been virtualized before it's assigned to a VM.