Hyper-converged infrastructures can be extremely difficult to manage because everything is interconnected. Measuring performance in this type of infrastructure is just as challenging, and in the past, available benchmarks focused on only one part of the system. Now, administrators can look at the infrastructure as a whole.
In November, the Transaction Processing Performance Council (TPC) announced the availability of TPCx-HCI, an application system-level benchmark for measuring the performance of hyper-converged infrastructures. With this benchmark kit, administrators can get a complete view of their virtualized hardware and converged storage, networking and compute platforms running a database application.
We spoke with Reza Taheri, chairman of the TPCx-HCI committee and principal engineer at VMware, who explained the new benchmark for hyper-converged infrastructures and how the council created it.
What was the process for developing the TPCx-HCI benchmark?
Reza Taheri: Originally, we developed a functional specification document to leave implementers free to build the benchmark any way they wanted. But over time, we realized that it actually made the benchmark very hard to implement. Not just anybody could go out and start running the benchmark based on Transaction Processing Performance Council standards. So, we put out a benchmark kit that anybody can download, and it implements the benchmark, the measurement, the collection of data and all of that in the application kit itself.
The TPCx-V benchmark [for virtualization] was released a couple of years ago. The idea was to look at the performance of a virtualized server -- so the hardware, hypervisor, storage and networking using the database workload. We wanted to compare different virtualization stacks using a very heavy business-critical database workload.
Earlier this year, we had a couple of new members join the TPC, and they were HCI vendors -- DataCore and Nutanix. They, along with other vendors, [started] asking about a benchmark for HCI systems. We looked at the TPCx-V benchmark kit and specifications and realized that we could very quickly repurpose that for hyper-converged infrastructures. We realized that the HCI market is hot and that there was demand for a good benchmark.
Will this benchmark account for quality of service, in addition to price and performance?
Taheri: In a couple of ways, yes. One is that you need to meet very strict response-time requirements.
The other is something that's new in this benchmark: combining performance with some notion of availability. Say you're running on a four-node cluster. For the test, you limit the VMs to three of the nodes, but all four nodes supply data. At some point during the test, you kill the fourth node, run for a while, and then turn it back on. You're required to report the impact on performance during this run and also to report how long it took to restore resilience and redundancy after the host came back online.
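The availability test Taheri describes follows a simple timeline: steady state on three nodes, a degraded phase after the fourth node is killed, and a recovery phase after it returns. A minimal sketch of that timeline, with phase boundaries and function names that are illustrative assumptions rather than part of the actual TPCx-HCI kit:

```python
# Hypothetical sketch of the TPCx-HCI availability-test timeline.
# Phase durations and names are assumptions for illustration; the
# real kit defines its own measurement intervals and reporting rules.

def availability_test_phases(run_seconds=3600, kill_at=1200, restart_at=1800):
    """Return (start, end, description) tuples for each phase of a run
    in which one of the four cluster nodes is killed partway through
    and later brought back online."""
    return [
        (0, kill_at,
         "steady state: VMs on 3 nodes, all 4 nodes supplying data"),
        (kill_at, restart_at,
         "degraded: node 4 killed; report impact on performance"),
        (restart_at, run_seconds,
         "recovery: node 4 back; measure time to restore redundancy"),
    ]

for start, end, desc in availability_test_phases():
    print(f"{start:>5}s - {end:>5}s  {desc}")
```

The point of the structure is that the sponsor must report results from the degraded and recovery phases, not just the steady state, so availability behavior becomes part of the published number.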
What types of applications do you use for benchmark testing?
Taheri: It's an online transaction processing application -- a database application -- that runs on top of Postgres [an open source relational database management system] in a Linux VM. We use that to generate a realistic, very heavy workload that then runs on top of the hypervisor and virtualized storage, virtualized networking, the hardware and so on. The beauty of an application like that is that it really leaves nowhere to hide. Sometimes, for example, if it's a very simple test of just IOPS, you can make up for slow storage by using a lot of CPU or a lot of memory.
But you can't do that with a high-level system benchmark like this, because if you make up for storage by using too much CPU in the HCI software itself or do caching and use memory, then the application suffers and your performance drops. So, to have good performance, you have to have good storage, memory, CPU and networking all at the same time.
Are all the tested systems running the same hypervisor? Can you accurately compare benchmark performance results for HCI systems that are running different hypervisors?
Taheri: Any hypervisor can be used for this benchmark. Different hyper-converged infrastructures might be running different software stacks besides different hypervisors. It might not be possible to state how much of a performance difference is solely due to the hypervisor. The TPCx-V benchmark is very similar to TPCx-HCI, but runs on one node and can use any type of storage. TPCx-V is a better tool for studying the performance of hypervisors.
Is there any way to compare this benchmark to something running in the cloud?
Taheri: Not directly, but the benchmark has many attributes of cloud-based applications, such as elasticity of load, virtualization and so on. Also, a sponsor might choose to run the benchmark on a cloud platform, which is allowed by the Transaction Processing Performance Council specifications.
As HCI is still evolving, are there plans to review and make changes to the benchmark at any point?
Taheri: We would need to. It was a quantum leap from Iometer-type benchmarks -- micro-benchmarks -- to a system application benchmark like this. Going forward, these specs will evolve. Benchmarks ... evolve in minor ways, and every few years we have to make a major change, which makes the benchmark incomparable to previous versions.