Success with virtualization depends on the underlying servers. Even though virtualization introduces a software layer that abstracts each workload from the hardware, the servers must still provide adequate computing resources that fit within an organization’s financial goals.
An organization needs to understand the forces that drive server selection, recognize the offerings from different vendors and demonstrate a keen knowledge of internal budgeting. Then it’s possible to focus on more specific details such as hardware considerations, operating system choices, systems management decisions and even the virtualization platform itself.
So what kind of servers should you specify for your enterprise? Which vendors and operating systems should you consider? How might virtualization influence your choice? TechTarget surveyed IT professionals about their current and future server decisions, and here is what they had to say.
Picking servers for data centers
Let’s start by understanding the issues that drive server selection in the enterprise. Today, most data centers have a relatively small number of physical servers. More than 44% of respondents use up to 50 physical servers, while another 14% run between 51 and 100 physical servers. Only 22% of respondents deploy 101 to 500 servers, and just 20% run more than 500 physical servers.
Selecting a server vendor can be challenging. Not only must the product be appropriate and affordable, but other factors weigh in as well. An existing relationship with a vendor is likely to generate repeat business: almost 27% of respondents said their relationship with a vendor was a factor in their decisions. Another 21% indicated that server performance was the top factor, while almost 20% touted technical support and service. Server pricing resonated with only 17% of respondents, and just 13% cited product features and functions as their primary criteria for choosing a server vendor.
An important element of server selection is knowing when it’s time to acquire new servers in the first place. As with vendor selection, there were a number of concerns that drive new server purchases. Virtualization ranked as the biggest driver, with more than 46% of respondents seeking to enhance server virtualization capabilities. “Our data shows that over two thirds of virtualization licenses ship on new server hardware,” said Gary Chen, research manager of enterprise virtualization software at IDC, a global IT market research firm in Framingham, Mass. “Virtualization has really become a new design point for servers, and we see virtualization features and acceleration being integrated at the chip level and up.”
There are additional considerations. About 44% of respondents said they needed an increase in computing capacity, while more than 41% said they planned to use the new servers to replace existing servers reaching end of life or coming off-lease. More than 32% of respondents said they purchased new servers to support new applications. Almost 25% cited the demand for consolidation, which reduces floor space, and 19% of respondents said they wanted the improved power consumption that newer server designs promise.
Many new servers include comprehensive management software or integrate well with third-party management suites. As a result, more than 16% of respondents said they choose new servers to reduce the administrative workload, and almost another 16% said they buy new servers to standardize on fewer hardware platforms. Just 15% buy new servers to accommodate rapid business growth.
Specific hardware concerns are not major drivers. Only 11% of respondents cited the need for more memory in new servers, while just 5% pointed to the need for faster I/O available in PCI-X expansion architectures.
The rise of blade servers
Blade servers have emerged as an important data center resource, offering high computing density. But density itself does not seem to be a significant priority among IT professionals. More than 45% of respondents said they consider performance to be the single most important attribute for blade servers. Almost 23% of respondents weigh price as the top factor, while roughly 13% put the emphasis on management features. Only 9% of respondents consider power consumption to be their biggest consideration, just 6.6% consider density a top priority, and a meager 2.6% put thermal load at the top of their priority list.
Server budget and purchasing trends
Servers represent a substantial capital investment for modern enterprises. Generally speaking, data center budgets for 2009 were mixed compared with budgets for 2008. More than 18% of respondents reported no change in 2009 over 2008.
Almost 39% believed there would be some increase in their budgets, while more than 30% expected some decrease. Most budget changes in 2009 were expected to be substantial—more than 10%. About 12% of respondents said they didn’t know which direction their data center budgets would go.
As you might expect, Windows and Linux-based servers showed the most substantial changes in spending—mostly increases—while mainframe and other more specialized server types showed modest, if any, changes in spending.
Most server budgets remained unchanged for 2009, though minor gains did appear. For small servers with four CPU cores or fewer, 35% of respondents reported that their budgets remained flat, while 31% indicated a budget increase and 25% saw a decrease.
A similar trend appeared for mid-sized servers with eight to 16 CPU cores. More than 24% of respondent budgets stayed flat, almost 22% saw an increase, and slightly more than 12% said their budgets got cut.
Budgets for large, high-end SMP servers with more than 16 cores also followed the trend: more than 14% of respondents reported flat budgets, another 14% received increases and about 9.5% reported a decline. Blade server budgets bucked the trend, with more than 27% of respondents indicating a budget increase in 2009, while more than 19% stayed flat and almost 13% declined. This underscores the interest in blade server technology for data centers.
Budget plans for virtualization are a bit more robust: a decisive 54% of respondents said they expected an increase in their virtualization budgets in the coming year. Another 31% of respondents expected no change, while slightly less than 5% anticipated a decrease in their virtualization budgets.
Increases in virtualization spending are driven by a variety of factors, but most involve cost. For example, almost 72% of respondents said they are increasing their virtualization budgets to save on hardware costs, while 63% plan to save on power and cooling costs. Consolidation is also a powerful driver, and almost 61% of respondents will use the added virtualization budget to improve consolidation and reduce physical space in their data centers. More than 54% will use the increased budget to modernize their architectures.
Interestingly, though, only about 18% of respondents said they will use the increase to implement a cloud computing architecture. “I think 18% is a decent number given the early state of cloud,” Chen said. “A lot of people are already progressing toward cloud without perhaps realizing it or putting a cloud name on it. Right now there isn’t a real dipstick to clearly define what a cloud is and how you know when you get there,” he said.
Of the minority who reported a decrease in their virtualization budgets, the biggest factor was a lack of tangible return on the investment. Thirty-five percent of respondents said they didn’t project enough—or fast enough—ROI to justify funding virtualization. Another 15% of respondents cited business resistance, saying that virtualization technology did not interest shareholders, the board of directors or corporate management. Other respondents indicated that there wasn’t enough need for the added capacity that virtualization promises or pointed to economic factors as the reason for a budget decrease.
Configuring servers for the data center
There is no question that virtual servers are about as diverse as the organizations that own them, but one requirement is clear: a server must meet the computing needs of the workloads planned for it, and it must often supply reserve computing capacity for workloads migrated to it from other servers.
CPU configurations are the foundation of all workload processing. The vast majority of respondents—more than 91%—said they use smaller servers with four or fewer CPU cores. Mid-sized servers also make a strong appearance, with almost 58% of respondents using servers with eight to 16 cores.
Almost 38% of respondents said they use large SMP servers with more than 16 CPU cores, and more than 60% of respondents use blade servers in their enterprises. Keep in mind that a given core count can be distributed across CPUs in different ways. For example, a server with eight dual-core CPUs provides the same number of cores as a server with four quad-core CPUs.
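The core-count arithmetic above is simple enough to sketch in a few lines of Python. The configurations below are hypothetical examples, not survey data:

```python
# Total cores depend only on the product of sockets and cores per socket,
# not on how the cores are distributed among CPUs.
def total_cores(sockets: int, cores_per_socket: int) -> int:
    return sockets * cores_per_socket

eight_dual = total_cores(8, 2)   # eight dual-core CPUs
four_quad = total_cores(4, 4)    # four quad-core CPUs

# Both configurations expose the same 16 cores to the hypervisor.
assert eight_dual == four_quad == 16
```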
Although CPUs are important, the server’s RAM may be even more pivotal. CPU processing cycles can be shared between workloads if necessary, but memory space cannot. This means a server needs enough memory to hold every anticipated workload. The “sweet spot” for memory in today’s virtual servers is 8 GB to 16 GB, reported by almost 36% of respondents. About 30% of respondents said they use servers with more than 16 GB of memory, while 34% deploy servers with less than 8 GB of memory.
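Because memory cannot be shared between workloads the way CPU cycles can, a basic sizing check is simply to sum the memory of every planned VM plus some hypervisor overhead and compare it with the host’s RAM. A minimal sketch, with hypothetical VM sizes and an assumed 2 GB overhead figure:

```python
# Check whether a host has enough RAM to hold all planned VMs at once.
# The VM sizes and hypervisor overhead here are illustrative assumptions.
def fits_in_memory(host_ram_gb: float, vm_ram_gb: list[float],
                   hypervisor_overhead_gb: float = 2.0) -> bool:
    required = sum(vm_ram_gb) + hypervisor_overhead_gb
    return required <= host_ram_gb

planned_vms = [2.0, 2.0, 4.0, 1.0]       # GB per VM, hypothetical
print(fits_in_memory(16.0, planned_vms))  # 9 GB + 2 GB overhead -> True
print(fits_in_memory(8.0, planned_vms))   # 11 GB needed -> False
```

A real sizing exercise would also leave headroom for workloads migrated in from other hosts, as noted above.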
Network connectivity is a critical attribute for virtual servers, allowing users to access business applications or data repositories. Sixty-eight percent of respondents said they use standard 1 Gigabit Ethernet as the data center cabling/network backbone. By comparison, 44% of respondents reported that they use Fibre Channel as the backbone of choice. Interestingly, almost 39% of respondents said they use 10 Gigabit Ethernet as the data center/network backbone, indicating strong adoption of the high-speed networking technology, while more powerful standards like InfiniBand languish with roughly 4.2% of the responses. Today, only a handful of organizations—less than 2%—continue to use 100 Megabit Ethernet or other types of connectivity.
Strong following for Windows and Linux
The configuration of a virtual server also includes an operating system. Windows Server versions hold the majority of responses, with Windows Server 2003 used by almost 84% of respondents and Windows Server 2008 deployed by more than 45%. Less than 1% of respondents reported using older versions of Windows Server.
Linux makes a strong showing in modern data centers with Red Hat Linux used by more than 44% of respondents. Other versions of Linux also appear, including SUSE Linux in almost 20% of responses and Ubuntu Linux in almost 14%. Debian Linux appeared as a mention, but garnered only about 1% of responses.
“I find that Windows is unquestionably dominant within a network as the host operating system for email, file systems, directory and printing,” said Rand Morimoto, president of Convergent Computing, an IT provider in Oakland, Calif. “But we see Linux on external-facing servers, in organizations that have a mission-critical client website environment, dotcom-type e-business solutions, etc.”
Other major operating systems deployed across modern enterprises included Sun Solaris, cited by more than 35% of respondents, IBM AIX by more than 24% and HP-UX by more than 22%. IBM mainframe and midrange operating systems were also cited, with z/OS at more than 12% of responses and i/OS at 6.4%.
When asked what operating systems were used for mission-critical applications, the responses were different. Windows Server 2003 topped the list with 70% of the responses, but Red Hat Linux emerged as the second most popular with more than 32% of respondents. Windows Server 2008 ranked third with almost 30% of the responses, probably suggesting that organizations are slower to upgrade their mission-critical servers to the later operating system platform.
Virtualization as a key technology
There is no question that virtualization has become a key technology in data centers. An overwhelming 61% of respondents reported that they planned to expand the existing deployment of VMs in 2009. By comparison, about 16% of respondents said they planned to deploy virtualization for the first time in 2009, while 21% of respondents are now evaluating virtualization with no current deployments. A meager 1.5% said they have no plans for virtualization.
The uses for virtualization are also varied in today’s data centers and include use in disaster recovery and high-availability environments (42%). Other uses included supporting the dynamic allocation of resources (33%) and maintaining “golden images” of server configurations (26%). Another 25% of respondents said they are evaluating endpoint virtualization, while 18% indicated that they will be deploying—or extending the deployment of—endpoint virtualization. Roughly 12% of those responding to the survey expected to use virtualization in conjunction with public or private cloud architecture.
VMware ESX is the most popular virtualization platform, used by more than 44% of respondents, followed by VMware Server with 19% of the responses. Microsoft ran a distant second to VMware, with 6.3% of respondents using Hyper-V and 5.5% using Virtual Server. Other virtualization products, such as Red Hat Enterprise Linux-based Xen/KVM, older VMware versions such as 3i, and Citrix XenServer, make up a handful of the responses.
VMware deployments should remain strong over the next 12 months, with 44% of respondents planning to deploy VMware ESX 3.5 and another 23% planning to deploy VMware Server. The most notable jump is for Microsoft Hyper-V, which 21% of respondents are contemplating deploying, probably because of the integration of Hyper-V in Windows Server 2008 R2. XenServer also showed greater uptake, at 13%. Taken together, the “big three” virtualization vendors are expected to keep competing into the near future.
There has not been much action from Citrix or Microsoft yet, Chen said, although competition from Microsoft is expected to be fierce. “Right now it’s really [Microsoft’s] product immaturity relative to VMware that is holding them back,” Chen said, adding that many users are waiting to see how Microsoft emerges in the market before committing to or changing virtualization platforms. It’s a delay that can prove costly, he said.
“There will be a sizable installed base of servers, skills and a VMware-specific infrastructure that Microsoft will have to displace,” Chen said. “Switching from one virtual infrastructure to another won’t be easy or cheap, and Microsoft’s license pricing is just one small factor in a larger cost model in that scenario,” he said.
Deployments still limited in size
Although virtualization is clearly a popular and important technology, deployments are still limited in size. Almost 53% of respondents run virtualization software on fewer than 10 physical servers in their data centers, while another 20% run it on 10 to 25 physical servers. Roughly 9.4% run virtualization on 26 to 50 servers, and another 9% deploy virtualization across 51 to 100 physical servers. The remaining 9% run virtualization on more than 100 servers.
VM loading is also limited among respondents. More than 60% said they run fewer than 10 VMs per physical server. Another 31% run between 10 and 20 VMs per physical server, and the remaining 9% run between 21 and 30 VMs per server. This may be due in part to technology lag; VM loading will probably increase as newer, more powerful servers enter service.
“We are seeing in general about six VMs per server on average,” Chen said. “Experienced users can get 10 to 20 VMs per server and very advanced from 20 to 30. Most users we see are comfortable going up to about 60% [server] utilization.”
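The comfort level Chen describes can be turned into a rough consolidation estimate: cap host CPU utilization at roughly 60% and divide the usable cores by the average per-VM demand. A hypothetical sketch — the 0.75-core average VM demand below is an assumed figure, not from the survey:

```python
import math

# Rough consolidation estimate: how many VMs fit on a host if CPU
# utilization is capped (e.g., the ~60% comfort level cited above)?
def vms_per_host(host_cores: int, avg_vm_core_demand: float,
                 utilization_cap: float = 0.6) -> int:
    usable = host_cores * utilization_cap
    return math.floor(usable / avg_vm_core_demand)

# A 16-core host with VMs averaging 0.75 core each, capped at 60%:
print(vms_per_host(16, 0.75))  # 9.6 usable cores / 0.75 -> 12 VMs
```

Memory, I/O and failover headroom would usually constrain the real number further; this only models the CPU dimension.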
Virtualization has penetrated just about every application area. Seventy percent of respondents run Web servers in VMs, 66% run application servers, 53% run development databases and 50% run network infrastructure services such as DNS, DHCP, firewall and Active Directory servers. Another 35% of respondents run file and print servers in VMs, 32% virtualize email servers and 30% host production databases in VMs.
Virtualization has had a profound impact on modern data centers. VMware is by far the most popular virtualization platform, but Citrix and Microsoft are gaining ground as alternatives. Regardless of the platform, consolidation remains a primary benefit of virtualization, allowing more VMs to run efficiently on fewer servers. Windows is clearly the predominant server operating system, but Linux and other open source operating systems are growing in popularity, even in mission-critical server roles and vital applications.
New servers that are brought online tend to be blade platforms with each server supporting small to mid-sized systems with a limited number of CPU cores. Blade platforms also play into modern high-density power and cooling schemes.
Budget activity has been mixed, reflecting the tenuous state of the economy. Although hardware purchases should remain modest, virtualization spending and deployments are expected to increase significantly. This follows the consolidation trend, which adds computing power to the enterprise without comparable increases in hardware, power or cooling.
About the Author
Stephen J. Bigelow, a senior technology writer in the Data Center and Virtualization Media Group at TechTarget Inc., has more than 15 years of technical writing experience in the PC/technology industry. He holds a bachelor of science in electrical engineering, along with CompTIA A+, Network+, Security+ and Server+ certifications, and has written hundreds of articles and more than 15 feature books on computer troubleshooting, including Bigelow’s PC Hardware Desk Reference and Bigelow’s PC Hardware Annoyances. Contact him at email@example.com.