
Buying the next generation of hardware for virtualization

Virtualization is still changing how we use hardware. Here's what you should know about buying the next generation of hardware for virtualization.

Virtualization has precipitated a lot of changes in data centers around the world. Most notable has been the change in the IT pro's relationship with hardware. By detaching workloads from physical servers, virtualization opened new doors and changed the way businesses buy and use hardware. When businesses first adopt virtualization, most simply repurpose existing servers. Nowadays, hardware vendors are shipping specialized servers with more memory and computing power. What's in the future of hardware for virtualization? This month, we ask our Advisory Board members how virtualization is continuing to change this relationship with hardware and what you should keep in mind when buying new hardware for virtualization.

Maish Saidel-Keesing, NDS Group Ltd.

Once upon a time, your average server had 2 GB, perhaps 4 GB, or, more rarely, 8 GB of RAM. Today even your basic VDI desktop uses more than that.

So how have vendors changed their hardware to accommodate this ever-hungry monster? They are continuously evolving and pushing their hardware to accommodate more and more RAM. Today's blade servers can hold 768 GB. DIMM sizes are becoming even larger -- up to 32 GB now -- and that will not be the end.
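The arithmetic behind those figures is simple slot math. A minimal sketch, assuming a hypothetical 24-slot blade (the slot count is an illustrative assumption, not any vendor's spec):

```python
def max_ram_gb(dimm_slots: int, dimm_size_gb: int) -> int:
    """Maximum RAM a server can hold: every slot filled with the largest DIMM."""
    return dimm_slots * dimm_size_gb

# A hypothetical 24-slot blade populated with the 32 GB DIMMs mentioned above:
print(max_ram_gb(24, 32))  # 768
```

Filling every slot with the largest available DIMM is how blades reach these totals; as DIMM sizes grow, the same chassis holds more.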

When looking for suitable hardware for virtualization, there are several things that should stand out.

  1. How much RAM can you cram into the server? Ask any virtualization admin: the first and most limiting factor for their servers is RAM -- not I/O, not processing power. The more RAM, the better. VMware's decision to introduce (and later eliminate) what is now known as the vRAM tax was a mistake that rightly upset businesses that saw the importance of more RAM.
  2. Does the server have the capability to boot from a flash device? With today's hypervisors, there is no real need for local disks.
  3. How much solid-state storage can I put into the server? Local solid-state drives accelerate storage access.
  4. Does the underlying technology let you rip and replace a physical server and reconfigure all the unique identifiers (MAC addresses and WWNs, for example) on the fly? Applying a set of configurations to a blank piece of hardware should be the way to go.
  5. Can your server benefit from some kind of I/O flash card that will accelerate storage access and lower your storage costs?
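Point 1 implies a RAM-first approach to capacity planning. A minimal sketch of that estimate -- the host size, per-VM allocation, and hypervisor overhead reservation here are illustrative assumptions, not recommendations:

```python
def vms_per_host(host_ram_gb: int, vm_ram_gb: int,
                 hypervisor_overhead_gb: int = 8) -> int:
    """Estimate how many VMs fit on a host when RAM is the limiting factor.

    Reserves a flat amount for the hypervisor itself (assumed figure),
    then divides the remainder by the per-VM allocation.
    """
    usable_gb = host_ram_gb - hypervisor_overhead_gb
    return max(usable_gb // vm_ram_gb, 0)

# A 768 GB blade hosting the 8 GB VDI desktops mentioned earlier:
print(vms_per_host(768, 8))  # 95
```

Real-world density also depends on memory overcommit, I/O, and CPU, but when RAM is the bottleneck -- as the checklist argues it usually is -- this back-of-the-envelope figure is the one that matters first.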

These are some of the things to consider when getting best-of-breed hardware for your virtual machines.

Jack Kaiser, Focus Technology Solutions

This month I went to our CTO, Bill Smeltzer, to answer the questions.

"Server virtualization created a hardware abstraction layer within the x86 environment. As the software-defined data center matures, it promises to abstract more data center elements, such as firewalls, storage and other hardware devices. As the functionality of these devices move to software in the form of virtual appliances and native components within the hypervisors, propriety hardware devices will become less important. The problem with this approach usually shows up in terms of lower performance and scalability. The elements of the software-defined data center run on general-purpose x86 hardware, while many hardware devices utilize hardware-specific design, such as encryption and graphic chips. Because of the advantages of application-specific integrated circuits, specific virtualization-aware devices that can integrate seamlessly into the environment can provide the ability to leverage hardware scalability while not hampering the software-defined data center functionality."

Jason Helmick, Concentrated Technologies

Virtualization released all of us from being constrained by our hardware -- it even seems silly today to think back to when we needed to purchase hardware to add a new server. Today, we scale our servers on the virtualization platform of our choice, making better -- and smaller -- hardware purchases. There may come a day when the data center is completely empty!

Take a good look at Office 365 and the amazing number of customers dumping their local hardware for Exchange and SharePoint in the cloud. The new release of Windows Server 2012 R2 will tighten integration with Windows Azure so that local on-premises VMs can be easily transitioned to Azure, again allowing you to dump local hardware (and hardware management resources) for a more cost-effective model.

This isn't going to happen overnight, but look inside your own data center and notice how virtualization has allowed you to consolidate and squeeze the best return on investment from your hardware. Going forward, losing the hardware altogether is the next logical step. Until then, make smart purchasing decisions. Realize that the hardware you purchase will be repurposed for a variety of virtualization needs, not a specific line of business application. Maximize the flexibility of your hardware and minimize the cost. Who knows? Maybe your next server deployment will be to the public cloud, where someone else can worry about the spinning gears.
