The reality of processor performance improvement with hyperthreading

Hyperthreading aims to improve processor performance by organizing and scheduling application threads, but it's not always the most efficient method.

Processor designers have always focused on different tactics for enhancing performance in order to extract the most computing possible from every clock cycle. Faster clocks, larger data paths and different approaches to instruction sets have all improved performance. But perhaps one of the most misunderstood enhancements has been the inclusion of hyperthreading and its influence on processor performance.

Hyperthreading has been proven to increase processor efficiency, but it isn't appropriate in every scenario and modern processor designs could make it obsolete.

How hyperthreading works

A traditional processor queues instructions through a pipeline-style architecture before passing them into the processor's execution engine. Differences in application design and demand often leave gaps in the processor's instruction pipeline, which leads to idle processor clock cycles. Poor application design can waste cycles and diminish processor performance.

To optimize processor architecture and boost the potential for multitasking, processor designers added a second pipeline that shares the same execution engine. The goal was to let the processor queue up instructions for a second thread, or task, in a separate pipeline, and then run those instructions through the execution core whenever the first instruction pipeline is idle. Intel branded this approach Hyper-Threading Technology (HT), its implementation of simultaneous multithreading (SMT).

With a second instruction pipeline in the processor core, an operating system sees two separate processors. Applications that can divide activities into separate tasks can take advantage of hyperthreading. Separate instruction queues help the processor schedule workloads to efficiently use its execution engine, which in turn improves core computing performance.

The processor core still only has one execution engine, however, so performance improvement from HT varies depending on the design and implementation of the workload being organized and scheduled. The performance improvement also never outweighs the benefit provided by adding a second core, which can roughly double the processor's computing resources.
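To make the idle-cycle argument concrete, here is a deliberately simplified toy model, not a representation of real pipeline timing: each thread is a string of slots, where 'W' is a ready instruction and '-' is a bubble such as a cache-miss stall. With one instruction queue the engine idles on every bubble; with two queues it idles only when both threads stall in the same cycle. The model ignores contention when both queues hold work at once, so it shows an upper bound on the benefit.

```python
from itertools import zip_longest

def idle_single(a, b):
    """One instruction queue: run the two threads back to back.
    Every bubble ('-') leaves the execution engine idle for a cycle."""
    return (a + b).count("-")

def idle_smt(a, b):
    """Two queues sharing one execution engine: each cycle the engine
    takes work from whichever queue has a ready instruction, so it
    idles only when both queues stall in the same cycle."""
    return sum(1 for x, y in zip_longest(a, b, fillvalue="-")
               if x == "-" and y == "-")

thread_a = "WW--W-WW--"   # hypothetical thread with 5 stall slots
thread_b = "W-WW--WW-W"   # hypothetical thread with 4 stall slots

print(idle_single(thread_a, thread_b))  # -> 9 idle cycles on one queue
print(idle_smt(thread_a, thread_b))     # -> 2 idle cycles with two queues
```

The gap between the two numbers depends entirely on how the threads' stalls line up, which is why the real-world gain from hyperthreading varies so widely by workload.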

What you need before you implement hyperthreading

The processor, the BIOS, the operating system and the workload make up the four principal elements of a successful hyperthreading server. Most modern systems support hyperthreading: Intel introduced the technology in Xeon processors in early 2002, and the now well-established feature has since appeared in Itanium and Atom processors as well.

The processors do, however, require certain hardware and software elements, including BIOS support on the server motherboard. The mature nature of HT almost guarantees suitable BIOS support, which allows system technicians to enable HT and related activities.

Because the OS parses workload tasks and handles task scheduling across the instruction queues, it must also support hyperthreading. Today, most enterprise-class OSes support HT and SMT, including Windows Server 2012 as well as some newer versions of SUSE and Red Hat Linux. Check your OS documentation to verify HT support for each data center platform.
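One quick way to verify the OS actually sees the extra logical processors: on Linux, `lscpu` reports a "Thread(s) per core" field, and a value greater than 1 means SMT/hyperthreading is active. This sketch parses a captured sample of that output rather than shelling out, since the exact output varies by system:

```python
# Hypothetical captured output from `lscpu` on a 4-core system with
# hyperthreading enabled (field spacing varies across versions).
sample_lscpu = """\
CPU(s):              8
Thread(s) per core:  2
Core(s) per socket:  4
Socket(s):           1
"""

def smt_enabled(lscpu_text):
    """Return True if the lscpu output reports more than one thread per core."""
    for line in lscpu_text.splitlines():
        if line.startswith("Thread(s) per core:"):
            return int(line.split(":")[1]) > 1
    return False  # field absent: assume SMT is off or unsupported

print(smt_enabled(sample_lscpu))  # -> True
```

On Windows, comparing `NumberOfCores` against `NumberOfLogicalProcessors` from WMI gives the same answer.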

Lastly, the application design itself influences HT. SMT applications designed to benefit from hyperthreading will demonstrate better performance than applications simply deployed on a server enabled with hyperthreading. Because processor version, BIOS version, OS version and application design all affect hyperthreading, it's difficult to definitively determine how much it will improve performance.

How hyperthreading affects virtual servers

Hyperthreading often boosts the performance of single-core processors, but that boost does not equal, let alone surpass, the benefit of adding more cores. You cannot selectively enable or disable hyperthreading on a per-core or per-socket basis, and in some cases hyperthreading can negatively affect processor virtualization. As such, many admins opt to disable it.

The technology enhances the way the processor organizes and schedules application tasks and, in turn, improves performance. Modern enterprise servers, however, use multiple processors with at least eight or 10 cores each. In most instances, this wealth of available computing resources provides a better performance boost than hyperthreading. Hyperthreading also has the potential to negatively affect the way processors are virtualized.

For example, virtualization features such as CPU affinity don't always work well when hyperthreading is enabled. Hyperthreading creates two logical processors on each core, but these logical processors still share much of the core's physical resources. As a result, resource contention and performance bottlenecks can occur when a symmetric multiprocessing (SMP) VM's vCPUs are scheduled onto logical processors that share the same physical core.
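One way to sidestep that contention is to pin each vCPU to a logical processor on a different physical core. The sketch below assumes a hypothetical sibling map of the kind Linux exposes per CPU under /sys/devices/system/cpu/cpu*/topology/thread_siblings_list, where each group lists the logical processors sharing one core:

```python
def spread_vcpus(sibling_groups, vcpu_count):
    """Pick at most one logical CPU per physical core for a VM's vCPUs,
    so no two vCPUs land on hyperthreaded siblings of the same core."""
    if vcpu_count > len(sibling_groups):
        raise ValueError("more vCPUs than physical cores; siblings would contend")
    return [group[0] for group in sibling_groups[:vcpu_count]]

# Hypothetical topology: four physical cores with hyperthreading on,
# logical CPU pairs (0,4), (1,5), (2,6) and (3,7) share a core.
topology = [[0, 4], [1, 5], [2, 6], [3, 7]]
print(spread_vcpus(topology, 4))  # -> [0, 1, 2, 3], one logical CPU per core
```

Hypervisors such as ESXi and Hyper-V apply similar sibling-aware placement internally, but explicit CPU affinity settings can defeat it, which is why affinity and hyperthreading interact poorly.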

In fact, the number of cores available on modern servers could render hyperthreading a waste of computing power. Consider a 40-core server built from four 10-core processors and running Windows Server 2008 R2 with Hyper-V, which supports a maximum of 64 logical processors. Enabling hyperthreading presents 80 logical processors to the hypervisor, leaving 16 of them idle and wasting almost an entire processor socket. In this scenario, the performance benefit from hyperthreading does not surpass the added computing power of real cores.
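The arithmetic behind that example is worth spelling out, using the 64-logical-processor Hyper-V limit cited above:

```python
# Worked numbers for the example above: four 10-core sockets under a
# hypervisor capped at 64 logical processors (the Windows Server 2008 R2
# Hyper-V limit cited in the text).
sockets, cores_per_socket, hypervisor_limit = 4, 10, 64

physical_cores = sockets * cores_per_socket       # 40 real cores
logical_with_ht = physical_cores * 2              # 80 logical processors
stranded = logical_with_ht - hypervisor_limit     # 16 can never be scheduled

print(physical_cores, logical_with_ht, stranded)  # -> 40 80 16
# 16 stranded logical processors correspond to 8 physical cores' worth of
# scheduling slots -- nearly a whole 10-core socket going unused.
```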
