Once a distant cousin of the mainstream computing market, HPC has always been treated with consideration by the "family" but rarely invited to the top table. As the skills required to manage HPC environments become more useful in today's increasingly virtualized data centers, however, that situation is changing.
Experts say that as more multicore systems are adopted in enterprise IT shops and the deployment of virtual machines across clusters of compute resources gradually becomes the norm, skills once confined to the supercomputing industry are now relevant and useful in any data center.
"Scheduling jobs, queuing jobs, shoring up resources, determining policies such as rejecting a job that doesn't have an estimate of how long the job is going to take … these are typical HPC skills but start to overlap when you're managing a virtualized compute environment," said Andrew Jones, the vice president of Oxford, England-based Numerical Algorithms Group, a provider of statistical, data mining and visualization software tools for financial analysis companies.
As the technology has shifted from supercomputers to clusters and grids of off-the-shelf components, it has also moved out of scientific research and into the mainstream marketplace, according to Gordon Haff, a principal IT adviser at Nashua, N.H.-based IT research firm Illuminata Inc. "With some exceptions, like IBM Blue Gene, HPC uses the same clustered x86 machines deployed by Web front ends and other commercial applications," he said. In some cases, HPC might use high-performance interconnects such as InfiniBand, but largely there's little to differentiate these environments anymore.
Haff noted that cloud computing has become possible thanks to this homogenization of computing architectures. According to Haff, x86 "has become the predominant architecture; there are few unique operating systems anymore; there's a convergence around protocols and interconnects, single logon and virtualization. This standardization makes a common utility or pool of resources possible."
Furthermore, according to Gartner analyst Thomas Bittman, while all the hype around cloud computing these days focuses on public cloud services from Amazon and Google, and outsourcing to these companies, the real meat and potatoes of cloud computing is on the corporate IT side in building internal, private clouds. And some companies have already begun.
FedEx runs its logistics processing system that includes 20 different applications on a "private cloud" of 500 CPUs managed by Appistry Inc.'s cloud application platform, a meta OS that distributes, schedules and balances compute resources to applications. Somewhat like HPC on steroids, it enables more than one workload to be distributed and managed across a cluster of resources.

Time to rethink
John Sobieralski, the manager of IT infrastructure for Aspen County, said that it's not hard to see how skills such as scripting integration between processes and data in different virtual machines, as well as running and managing applications across distributed resources, will become increasingly useful. "It's going to be a requirement for all our staff eventually," he said.
Even so, Numerical Algorithms Group's Jones said he does not believe mainstream computing will ever catch up with HPC. "By definition, HPC will always be more powerful than mainstream computing," he said. But as the hypervisor eventually becomes the base layer of all machines, it's likely even HPC environments will be entirely virtualized and will become a subset of mainstream computing rather than the distinct niche it has occupied in the past.
For IT pros looking for a piece of the action, take your long-lost cousin in the HPC lab out for a beer. He might just save your job.