Q. What are some caveats or consequences to using hyper-threading technology?
The biggest mistake novice system administrators make is assuming that a second thread within a core is effectively the same as adding an entirely new core. It isn't. Hyper-threading technology doesn't add any new execution resources to the processor core; it simply allows two tasks to share the core's existing execution resources. This can offer a noteworthy boost on a system where processors are relatively underutilized and have ample idle time: a second thread fills that idle time with another task, so the processor -- and the system -- accomplishes more in a given period of time.
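One quick way to see the distinction between logical processors and real cores is to compare the counts the OS reports. A minimal sketch, assuming a Linux host that exposes `/proc/cpuinfo`; the parsing helper here is illustrative, not part of any standard library:

```python
import os

def count_physical_cores(cpuinfo_text):
    """Count unique (physical id, core id) pairs from /proc/cpuinfo content."""
    cores = set()
    physical_id = None
    for line in cpuinfo_text.splitlines():
        if line.startswith("physical id"):
            physical_id = line.split(":")[1].strip()
        elif line.startswith("core id"):
            cores.add((physical_id, line.split(":")[1].strip()))
    return len(cores)

def hyper_threading_active():
    """True when the OS sees more logical processors than physical cores."""
    with open("/proc/cpuinfo") as f:
        physical = count_physical_cores(f.read())
    # os.cpu_count() reports logical processors, including hyper-threads
    return os.cpu_count() > physical
```

With hyper-threading enabled, the logical count is typically double the physical count; the extra "processors" are shared execution resources, not new ones.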
Hyper-threading performance problems
But hyper-threading doesn't guarantee superior results. For example, successful hyper-threading requires a capable scheduling system, which is typically present in a contemporary OS like Windows Server 2016. OSes that aren't aware of hyper-threading, such as Windows Server 2003, cannot use it even when the underlying processors support the feature and it's enabled in the system BIOS. Similarly, hyper-threading's performance benefits tend to decline as more cores become available. For example, a single-socket system can see up to a 30% benefit from hyper-threading, while a dual-socket system typically sees up to a 15% benefit. A quad-socket -- or larger -- system should be tested to determine the actual performance benefit with and without hyper-threading. With many cores available, it might be more beneficial to provision multiple full cores to a workload rather than rely on hyper-threading technology.
Also, be extremely cautious about using CPU affinity features within hypervisors. A hypervisor can normally provide excellent thread scheduling and automatic load balancing across all of the system's physical cores and, if hyper-threading is enabled, logical cores. Implementing CPU affinity disrupts the hypervisor's ability to perform that scheduling and load balancing, resulting in less-than-optimum results. CPU affinity choices can also disrupt the hypervisor's ability to meet resource reservation requirements for certain VMs. Even in cases where CPU affinity is employed successfully, migrating the VM to other servers with differing numbers of processors can break the affinity selections. It's best to let the hypervisor or OS handle such configurations automatically.
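For completeness, this is what manual CPU pinning looks like at the OS level; the point above is that you should rarely do this yourself. A sketch assuming a Linux host, where Python's `os.sched_setaffinity` and `os.sched_getaffinity` are available:

```python
import os

def pin_to_cpus(pid, cpus):
    """Restrict a process to an explicit CPU set (Linux only).

    Pinning like this bypasses the scheduler's automatic load
    balancing across physical and logical cores, which is exactly
    why it should be a last resort.
    """
    os.sched_setaffinity(pid, cpus)
    return os.sched_getaffinity(pid)

# Inspect the current process's allowed CPUs (pid 0 == this process),
# then leave them unchanged -- letting the scheduler keep full control
# is usually the right call.
allowed = os.sched_getaffinity(0)
print(f"Process may run on CPUs: {sorted(allowed)}")
```

Note that the affinity set names logical processors, so a pinned task can land on a hyper-thread sharing a core with a busy sibling, another way manual affinity produces less-than-optimum results.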
And finally, don't ignore the role of the workloads themselves. A single-threaded workload cannot substantially benefit from multiple logical processors, so hyper-threading technology is useless for boosting the performance of such workloads. Similarly, workloads that already saturate a core's execution resources, or that impose significant demands on data transfers to and from memory -- memory I/O -- won't benefit from hyper-threading. Understand the nature of the resident workloads to determine whether hyper-threading should be used, or whether workloads should be migrated to other systems where hyper-threading is -- or isn't -- in use.
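The single-threaded caveat is easy to demonstrate: a task that can't be split occupies one logical processor no matter how many the system exposes, while a divisible task can keep several busy. A minimal sketch; `busy_work` is a stand-in CPU-bound task, not a real benchmark:

```python
from concurrent.futures import ProcessPoolExecutor

def busy_work(n):
    """Stand-in CPU-bound task: sum of squares below n."""
    return sum(i * i for i in range(n))

def run_serial(chunks):
    # A single-threaded workload uses one logical processor at a time,
    # regardless of how many hyper-threads the system exposes.
    return sum(busy_work(c) for c in chunks)

def run_parallel(chunks):
    # Only a workload that can be divided into independent tasks can
    # keep multiple physical or logical processors busy at once.
    with ProcessPoolExecutor() as pool:
        return sum(pool.map(busy_work, chunks))

if __name__ == "__main__":
    chunks = [200_000] * 4
    # Same answer either way; only the divisible version can be spread
    # across the extra logical processors hyper-threading provides.
    assert run_serial(chunks) == run_parallel(chunks)
```

Timing the two paths on a given host is a reasonable way to test whether a specific workload actually profits from the available logical processors.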
Hyper-threading technology can allow a second task to utilize a processor core's idle execution resources. It's a well-established means of getting more work from existing processors without buying new servers or adding/upgrading processors, but it's not suitable for every hardware or workload deployment. Consider the impact of hyper-threading when assessing the performance and migration characteristics of new workloads, and disable hyper-threading when it makes sense.
Related Q&A from Stephen J. Bigelow