Buying server hardware for a scalable virtual infrastructure, Part 1

Part 1 of this series explores how trends in buying server hardware have been influenced by the scale-up vs. scale-out architecture debate.

With developments like server virtualization, cloud computing and big data analytics permeating data centers, IT professionals have new options when buying server hardware. Data centers need equipment that can meet the performance and availability needs of rapidly growing companies and allow for relatively easy scalability as IT needs change in the years ahead.

Organizations may opt to purchase small numbers of new, powerful servers -- in a scale-up strategy -- that allow a few machines to handle large workloads while consuming less energy. Alternatively, they can choose a scale-out approach that uses large numbers of less powerful commodity machines, which allow for clustering and redundancy and may be less expensive up front.

Both of these server hardware strategies have their place, but today's need for scalable compute power at a moment's notice is displacing traditional scale-up models of server hardware architecture in favor of scale-out models. In what follows, we'll dissect this shift and how it takes shape in the modern data center.

The pros and cons of scale-up and scale-out strategies

Over the past decade, scale-up architectures took hold as the strategy of choice, driven by the metrics IT leaders began using to measure performance, such as server consolidation ratio and number of virtual hosts. Based on these metrics and a desire to save on hardware costs, IT shifted toward ongoing server consolidation, using a few powerful servers that could each take on large workloads and thus maximizing the use of costly resources. Licensing costs for the underlying virtualization software fell as well, and the decoupled nature of many mainstream applications made it easy to simply add resources when needed. Organizations still scaled out, but only when scaling up hit practical limits, such as the physical resource maximums of a single host server.
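To make that consolidation math concrete, here is a minimal back-of-the-envelope sketch comparing the two purchasing strategies. Every figure -- server prices, VM densities and the per-socket license cost -- is a hypothetical placeholder, not a vendor quote; substitute your own numbers.

```python
# Rough cost comparison of scale-up vs. scale-out purchasing.
# All prices, capacities and license costs are hypothetical placeholders.

VM_COUNT = 200  # workloads to consolidate

# Option A: scale up -- a few large hosts
big_host = {"price": 40_000, "sockets": 4, "vms_per_host": 50}
# Option B: scale out -- many commodity hosts
small_host = {"price": 6_000, "sockets": 2, "vms_per_host": 10}

def total_cost(host, vm_count, license_per_socket=3_500):
    """Hardware plus per-socket hypervisor licensing for a given host model."""
    hosts_needed = -(-vm_count // host["vms_per_host"])  # ceiling division
    hardware = hosts_needed * host["price"]
    licensing = hosts_needed * host["sockets"] * license_per_socket
    return hosts_needed, hardware + licensing

for name, host in (("scale-up", big_host), ("scale-out", small_host)):
    n, cost = total_cost(host, VM_COUNT)
    ratio = VM_COUNT / n  # consolidation ratio: VMs per physical host
    print(f"{name}: {n} hosts, {ratio:.0f}:1 consolidation, ${cost:,} total")
```

With these placeholder inputs the scale-up option comes out ahead despite its far pricier hosts, largely on licensing -- which is exactly why the metrics above drove purchasing decisions. Shifting any of the inputs can flip the result.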

Today, however, as workloads grow and new needs arise, scale-out architectures are re-emerging and affecting how IT buys server hardware. By harnessing raw compute power in aggregate rather than divvying it up among discrete workloads, scale-out architectures are solving many of today's most critical challenges. Big data analytics, for example, requires the ability to target data sets with major compute power, and that power can be acquired by deploying many smaller systems tied together to achieve a common goal. This type of system is also well suited to cloud environments, in which practically unlimited computing power can be brought to bear. An organization can even consider cloud services as an additional platform in an overall scale-out strategy. In general, cloud vendors provide either large scale-up environments or smaller, discrete scale-out environments, depending on the needs of the customer.
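As a rough illustration of that scale-out pattern, the sketch below splits a large data set into chunks, has each "node" (simulated here as a local worker process) compute a partial result, and then combines the partials. This shows only the shape of the computation; a real deployment would use a distributed framework rather than multiprocessing on a single box.

```python
# Single-machine stand-in for the scale-out pattern: split the data set,
# let each "node" (a worker process) reduce its slice, combine the partials.
from multiprocessing import Pool

def partial_sum(chunk):
    """Work done by one node: reduce its slice of the data set."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = range(1_000_000)
    n_nodes = 8  # pretend each worker process is a commodity server
    size = len(data) // n_nodes
    chunks = [data[i * size:(i + 1) * size] for i in range(n_nodes)]
    with Pool(n_nodes) as pool:
        partials = pool.map(partial_sum, chunks)  # fan out to the "nodes"
    print("combined result:", sum(partials))      # fan in
```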

Of course, there are downsides to both architectures as well. Scale-up scenarios rarely provide a linear increase across resources, often exhausting one or two resources well before the others. In a generalized virtual environment that is scaled up to use as few host servers as possible, for example, RAM and disk capacity are often exhausted long before processor capacity, leaving "money on the table" in the form of stranded resources. Scale-up architectures also require a more detailed approach to availability, since each host failure affects a larger share of the workload. On the other hand, a scale-out environment may require new ways of thinking about application design and may not accommodate legacy applications.
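The resource-imbalance point is easy to quantify. Given a host's capacity and an average VM profile -- both hypothetical figures in the sketch below -- a few lines of arithmetic show which resource binds first and how much of the others is stranded:

```python
# Which resource runs out first on a scale-up host? Host specs and the
# average VM profile below are hypothetical -- plug in your own figures.

host = {"vcpu": 128, "ram_gb": 512, "disk_gb": 8_000}  # large scale-up host
vm   = {"vcpu": 2,   "ram_gb": 12,  "disk_gb": 150}    # average VM profile

# How many VMs fit per resource, and which resource is the bottleneck.
fit = {res: host[res] // vm[res] for res in host}
binding = min(fit, key=fit.get)

for res, count in sorted(fit.items(), key=lambda kv: kv[1]):
    print(f"{res}: room for {count} VMs")
print(f"'{binding}' is exhausted first, stranding "
      f"{fit['vcpu'] - fit[binding]} VMs' worth of spare CPU capacity")
```

With these example figures, RAM runs out at 42 VMs while the processors could have carried 64 -- the "money on the table" described above.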

With that said, it's important to note that scale-up versus scale-out is not a mutually exclusive choice. It will be increasingly common to see organizations running scaled-up environments for legacy and operational needs alongside scaled-out environments for research or compute-intensive needs.

Choosing server hardware for virtualization

With different methods for application deployment come different hardware platforms on which to operate those applications. In a predominantly scale-up environment, the capability of underlying hardware plays a much more critical role, while a scale-out environment may be able to leverage the commodity hardware that is emerging on the market.

In the past decade, the virtualization race made the x86 server the go-to platform for just about every organization running mission-critical workloads. The x86 server, in many instances, replaced the legacy mainframe, although aspects of traditional mainframes remain in play today.

For example, while many credit VMware with the creation of virtualization, mainframes have used similar technologies for decades to separate workloads. Today's growing environments -- both scale-up and scale-out -- have a lot in common with mainframes, as most consist of tightly integrated hardware components overseen by master scheduling systems that manage resource allocation. However, it's increasingly rare to see organizations making monolithic mainframe purchases today, given the plummeting cost of x86 and commodity hardware and the emerging infrastructure options described later in this section.

IT purchasers know what to expect when it comes to buying x86 servers for scale-up virtualization needs. In short, for pure scale-up virtualization, the ability to expand a single host as much as possible is generally the deciding factor. Doing so keeps down the overall costs of virtualization licensing.
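The licensing effect is simple to see in numbers. Assuming a hypervisor licensed per socket (the license cost below is a hypothetical figure), the per-VM license cost falls as a single licensed host absorbs more VMs -- which is why maximum single-host expandability often decides a pure scale-up purchase:

```python
# Per-VM hypervisor license cost as single-host density rises.
# License price and socket count are hypothetical examples.

LICENSE_PER_SOCKET = 3_500  # hypothetical per-socket hypervisor license
SOCKETS_PER_HOST = 2

for vms_per_host in (10, 20, 40, 80):
    per_vm = LICENSE_PER_SOCKET * SOCKETS_PER_HOST / vms_per_host
    print(f"{vms_per_host:>3} VMs/host -> ${per_vm:,.0f} license cost per VM")
```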

Further reading on virtualization hardware trends

The complete guide to virtualization hardware buying

The latest in hardware for virtualization

Hidden costs of virtualization

In some instances, depending on the size of the environment, companies may consider massively scalable hardware, such as extremely high-end, densely packed servers that include dozens of processor cores, terabytes of RAM and mass storage. Perhaps the biggest challenge in such a scenario is the number of workloads put at risk when a single hardware device fails.
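That trade-off can be quantified as a failure domain, or "blast radius": with the same number of workloads spread over fewer, denser hosts, each single hardware failure takes down a larger share of the environment. The numbers below are illustrative only:

```python
# Blast radius of one host failure as the same estate is packed more densely.
# Workload count and host counts are illustrative placeholders.

VM_COUNT = 200

for hosts in (2, 4, 10, 25):
    blast_radius = VM_COUNT / hosts  # VMs lost if one host dies
    share = 100 / hosts              # % of the estate on one box
    print(f"{hosts:>2} hosts: one failure takes down ~{blast_radius:.0f} VMs "
          f"({share:.0f}% of the environment)")
```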

A couple of emerging infrastructure options exist that are growing in popularity as organizations strive to rein in the complexity that has befallen many virtual environments. Both revolve around converged infrastructure, but to different degrees.

The first solution is basically a data center in a rack (or set of racks): Companies from across the virtualization spectrum come together on a prebuilt, pretested hardware platform that is supported by a single vendor. The most recognizable of these solutions is probably the Vblock provided by Cisco Systems, EMC and VMware, but other companies have gotten in on the action, such as Dell with its vStart solution. These infrastructure options enable customers to buy "units of infrastructure" that meet current demands without having to worry about whether certain elements will be compatible with others. These solutions are great from a support perspective and can provide organizations with a lot of peace of mind.

But buying racks at a time isn't always the best option, particularly for small and medium-sized businesses (SMBs). In fact, smaller organizations may be even more aware of the need to simplify their data center environments, and may have to do so in a more granular way.

This is where a second infrastructure option, hyper-convergence, comes into play. Companies such as Nutanix, Pivot3 and SimpliVity lead in this space. Rather than simply repackaging existing servers and storage, these companies have custom-built units of infrastructure that start at the SMB level and scale to the enterprise level. Each individual hardware element includes compute, RAM and storage resources, often with advanced features in each resource category (e.g., deduplication for storage) intended to maximize their effectiveness. These units of infrastructure are powerful both because of their granularity and because the advanced hardware in each element can deliver a substantial amount of resources.
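A simple way to think about this model is that every added node grows compute, RAM and storage in lockstep. The node specification and deduplication ratio below are hypothetical examples, not any particular vendor's appliance:

```python
# Toy model of hyper-converged scaling: each node adds compute, RAM and
# storage together. Node spec and dedup ratio are hypothetical examples.

NODE = {"cores": 16, "ram_gb": 256, "raw_storage_tb": 10}
DEDUP_RATIO = 2.5  # assumed gain from the dedup/compression features noted above

def cluster_capacity(nodes):
    """Aggregate resources for a cluster of identical nodes."""
    cap = {k: v * nodes for k, v in NODE.items()}
    cap["effective_storage_tb"] = cap.pop("raw_storage_tb") * DEDUP_RATIO
    return cap

for n in (3, 6, 12):  # grow the cluster by adding units of infrastructure
    print(n, "nodes:", cluster_capacity(n))
```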

Look for part 2 of this series, in which Scott Lowe discusses how to develop your hardware purchasing strategy.

This was first published in August 2013
