
Virtual Data Center

Virtualizing SAP becomes a reality

Just two years ago, running mission-critical applications like SAP in a virtualized production environment seemed like a bit of science fiction. But today, new hardware offerings and hardware-assisted memory virtualization have made the prospect a reality.

When an organization considers virtualizing SAP applications, its IT department may believe that the discussion should start with something like, “A lawyer, a CEO and an IT guy walk into a bar …”

But the idea of virtualizing SAP applications is no joke. In fact, many enterprises have already done so, and more will follow, because virtualizing SAP offers significant benefits and is becoming easier thanks to new and maturing virtualization technologies—like hardware-assisted memory virtualization—and upcoming virtual machine (VM) options from SAP.

Before I discuss the technical details that make virtual SAP a reality, let’s begin with the reasons for virtualizing SAP in the first place. After all, it’s always tempting to leave well enough alone, especially when it comes to mission-critical applications, which have dauntingly high performance requirements.

Why virtualize SAP?

Virtualizing SAP yields the following benefits:

  • improved hardware utilization and efficiency;
  • ease of new system deployment;
  • system mobility;
  • simple and predictable disaster recovery (DR); and
  • painless development, testing and training.

Improved hardware utilization and efficiency

In recent years, we’ve redefined what is meant by an “efficient server.” Ten years ago, an efficient server was one that had plenty of free overhead, or white space; a server that used 15% of its resources, for example, was considered efficient. Today, a server that uses 15% of its resources is considered wasteful, because a substantial amount of power and cooling is consumed just keeping the server running. By running fewer servers, an organization can reduce both server power requirements and the cooling load of a given data center. In addition, running fewer, more highly utilized servers means less hardware to maintain, which can reduce an organization’s operational, maintenance, hardware refresh and support expenses.
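To put rough numbers on the efficiency argument, here is a minimal sketch of the consolidation math (Python; the server counts, wattage and facility-overhead figures are illustrative assumptions of mine, not measurements):

```python
# Back-of-the-envelope consolidation savings; all figures are assumptions.
HOSTS_BEFORE = 20        # lightly utilized physical servers
HOSTS_AFTER = 4          # consolidated virtualization hosts
WATTS_PER_SERVER = 400   # assumed average draw per server
PUE = 2.0                # assumed facility overhead for power and cooling

def annual_kwh(hosts: int) -> float:
    """Facility energy per year for a given server count, cooling included."""
    return hosts * WATTS_PER_SERVER * PUE * 24 * 365 / 1000

savings = annual_kwh(HOSTS_BEFORE) - annual_kwh(HOSTS_AFTER)
print(f"Estimated annual savings: {savings:,.0f} kWh")
```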

Ease of new system deployment

From start to finish, provisioning SAP Business Suite applications is often a lengthy process that can take several months. The software installation itself is rarely the tough part. Instead, a great deal of time is spent planning, procuring new hardware and staging that hardware within an existing LAN. Of course, testing and certifying a new application can be time-consuming as well.

Virtualization can significantly shorten application deployment time, because systems can be deployed independently of hardware-related certifications. By effectively taking hardware out of the deployment equation, virtualization enables IT staff to focus primarily on software validation.

In the future, the deployment of SAP applications will change significantly. In the coming months, I expect SAP to begin offering SAP Business Suite applications as virtual machine appliances. On the surface, that may not seem like a big deal, but if you look closer, the benefits are clear. By deploying applications as VM appliances, SAP can package an OS specifically tuned for a particular application together with the application itself. With the application already installed and optimized, you can significantly shorten deployment time. Your organization and SAP will also realize improved product support with the virtual appliance model, since SAP support staff will handle a known OS and application configuration.

System mobility

Seasoned IT folks like to ask those just entering the profession questions like, “Do you value your weekends?” If the answer is yes, they proclaim, “Well, then, this job is not for you.” But virtualization has begun to free up time for perennially time-strapped IT staff. Virtualization platforms that support live migration enable administrators to move a VM to a new physical server without disrupting users or applications. So disruptive hardware maintenance tasks that previously could be completed only at an obscure hour or on a weekend can now be performed on a Tuesday afternoon.

Simple and predictable disaster recovery

Virtualization technology provides hardware abstraction to create ease of mobility: VMs can be moved to systems with different hardware and still run. So far, the primary exception to this rule is the CPU platform. When live migration is the goal, all physical hosts in the same cluster must run the same vendor’s CPUs, either all AMD or all Intel, because virtualization software does not fully virtualize the CPU. Still, CPU platform consistency is a small price to pay for portability.
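As a simple illustration of that constraint, here is a minimal sketch (Python; the host names and collected cpuinfo files are hypothetical) that flags a cluster mixing AMD and Intel hosts before live migration is enabled:

```python
# Confirm all cluster hosts share one CPU vendor before enabling live migration.
# Assumes each host's /proc/cpuinfo has been copied to a local file.

def cpu_vendor(cpuinfo_path: str) -> str:
    """Return the vendor_id string (e.g., GenuineIntel or AuthenticAMD)."""
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("vendor_id"):
                return line.split(":", 1)[1].strip()
    raise ValueError(f"no vendor_id found in {cpuinfo_path}")

hosts = {"esx01": "esx01-cpuinfo.txt", "esx02": "esx02-cpuinfo.txt"}  # hypothetical
vendors = {name: cpu_vendor(path) for name, path in hosts.items()}
if len(set(vendors.values())) > 1:
    print("WARNING: mixed CPU vendors; live migration will fail:", vendors)
```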

Disaster recovery always sounds good on paper, especially when a new application is deployed. Organizations with well-defined disaster recovery practices may purchase two servers for each deployed application: one for a production site and a second for the DR site. If a failure occurs immediately after deployment, recovery is simple.

But often the case is more complicated; over time, hardware upgrades or replacements to a production server may occur, and the server’s partner at the DR site may not be upgraded. Replacing a Fibre Channel (FC) host bus adapter (HBA) at the production site and failing to complete a similar replacement at the DR site, for example, could result in a system’s inability to boot at the DR site (until the appropriate HBA driver is installed in a system OS at the recovery site).
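A lightweight way to catch that kind of drift is to diff hardware inventories between sites on a schedule. Here is a minimal sketch (Python; the inventory contents are hypothetical and would come from your asset database or a tool such as lshw):

```python
# Compare the hardware inventory of a production host with its DR partner.
prod = {"hba": "QLogic 8Gb FC", "nic": "Intel 10GbE", "ram_gb": 64}
dr   = {"hba": "Emulex 4Gb FC", "nic": "Intel 10GbE", "ram_gb": 64}

drift = {k: (prod[k], dr.get(k)) for k in prod if prod[k] != dr.get(k)}
for component, (p, d) in drift.items():
    print(f"DRIFT in {component}: production={p!r}, DR={d!r}")
```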

The bottom line: These practices make DR operations less predictable and lengthen recovery times, which may not meet the IT organization’s recovery time objectives (RTOs). Even today, when I meet with a large group of IT folks and ask whether they are confident their DR plan will work, the vast majority say no.

The abstraction provided by virtualization allows for differences in server hardware between production and DR sites without undermining system recovery. Today, many organizations that fully leverage virtualization have gone to a virtualized DR model, where a portion of a data center performs two roles: Systems are dedicated to dev/test purposes, but they can also be repurposed for disaster recovery if needed. This approach allows an organization to get the most out of its hardware investment. Virtualization provides the transparency needed to quickly repurpose systems and also yields predictable DR.

Painless development, testing and training

A final justification for virtualizing SAP comes from improved software development, testing and training. Virtualization lab management software (e.g., VMware Inc.’s Lab Manager, VMLogix Inc.’s LabManager, Surgient Inc.’s Virtual QA/Test Lab Management System) makes it easy for developers and testers to provision their own systems. With just a few mouse clicks, lab management software enables users to share a lab environment with other users. New VMs can be deployed in minutes, and the ability to create point-in-time snapshots allows developers and testers to roll back a VM or an entire environment to an earlier point. These capabilities also encourage testers to push systems to the limit, since any failure that results from a test can be undone in minutes.

Now that we’ve explored the reasons for virtualizing SAP applications, let’s discuss the platforms that make the process possible.

Virtualization platform choices

When it comes to server virtualization, x86 server virtualization (e.g., VMware, Xen and Hyper-V) is what usually comes to mind, but alternatives abound. SAP applications have also been deployed on the following virtualization platforms:

  • Hewlett-Packard Co.: nPars;
  • IBM: Logical partitions (LPARs), zSeries mainframes; and
  • Sun Microsystems Inc.: Logical domains (LDOMs), Solaris Containers.

For several years, hardware partitioning solutions such as nPars, LPARs and LDOMs have been used to virtualize SAP applications. With a long history of success, these technologies have proven their maturity at the enterprise level and, as a result, are highly effective platforms for SAP. They enable IT shops to divide the hardware of a single server system, providing isolated hardware to each virtual system running an SAP application. The benefits of this approach include the following:

  • no overhead is induced for hardware emulation, so applications run at native performance; and
  • the physical isolation provided by the underlying hardware meets the organization’s security isolation requirements.

Technologies based on hardware partitioning have proven their mettle. But as x86 virtualization matures and its performance approaches native levels, it is increasingly being considered for virtualizing enterprise applications. Don’t get me wrong: Far more high-traffic enterprise applications today run on hardware-based virtualization than on software-based x86 virtualization. Still, a few factors have pushed the momentum toward x86 virtualization:

  • hardware independence and no vendor lock-in;
  • lower cost; and
  • simpler disaster recovery (since hardware between production and DR sites needn’t be consistent).

OS-level virtualization, such as Solaris Containers, also has a strong track record with virtualizing SAP applications. With Solaris Containers, administrators can partition a single Solaris OS into several independent containers, each with its own identity (e.g., name, IP address, applications and storage). OS-level virtualization has an advantage over server virtualization: Because the containers share an OS kernel, along with common services and applications, system maintenance and updates are simplified.

But Solaris Containers lacks a major benefit—live migration support—which is a key enabler of data center operational efficiency. Parallels Virtuozzo Containers, another OS-level virtualization solution, supports live migration, but the technology is still maturing. Virtuozzo Containers, for example, requires a full data copy over a LAN for a live migration job between two systems that share the same Oracle Cluster File System v2-based storage on a storage area network (SAN). Ideally, only session-state information should have to be copied, not data residing on a shared logical unit number (LUN) on a SAN. With all x86 server virtualization solutions that support live migration (e.g., VMware, Citrix Systems Inc., Novell Inc., Red Hat Inc.), only session-state data needs to be copied.

Slow down! Hazard ahead!

Thus far, I’ve outlined the pros and cons of various virtualization technologies. The mobility and hardware transparency provided by x86 server virtualization have ushered in a trend of moving SAP applications to x86 platforms, but these applications’ high performance requirements have prompted most IT organizations to proceed with caution.

Being cautious is entirely sensible. Even in the best of circumstances, the typical x86 virtualization platform imposes a performance penalty of 2% to 5%. The good news is that it’s no longer 2007. As recently as 2007, major hardware roadblocks prevented many SAP applications from being virtualized—at least those in production roles. The primary roadblocks included the following:

  • limits on the number of CPU cores per system;
  • software-based memory virtualization (i.e., shadow page tables [SPTs]); and
  • depending on the virtualization platform, I/O overhead ranging from 2% to 20%.

Last year, most organizations said, “Wait. I have an application consuming nearly all the resources on bare metal, and you want us to add overhead by running it in a VM? Are you serious?” Sure, the mobility, deployment, operational and recovery benefits were nice, but the performance tax was too high a price to pay. Today, the virtualization performance landscape is vastly different.

All major hardware vendors now ship quad-core CPU systems, with eight- and 16-core CPUs on the way in the not-too-distant future. So the typical four-way, quad-core server provides 16 cores of compute power, making it an ideal platform for running virtualized workloads with high CPU requirements.

Previously, the way in which memory was presented to virtual machines was problematic. When a hypervisor is in charge of memory management, it uses a software technique known as shadow page tables to present memory to each VM. SPTs generally perform fine but can impose an enormous performance tax on multithreaded applications, especially those under heavy load. The problem occurs as competing threads continually update pages in memory (an extremely common task for multithreaded apps). As the hypervisor struggles to keep up with the guests’ paging activity, memory overhead of 25% to 75% is not uncommon. Naturally, that is more than enough to bring an enterprise application to its knees.

In the past, IT shops minimized the impact of hypervisor memory management by running a single VM per physical host, which allowed them to run enterprise applications as VMs. Of course, the tradeoff was the added cost of virtualization software. Naturally, organizations prefer to run more than one VM per host to more easily justify the virtualization investment.

Fast-forward to today, and hardware-assisted memory virtualization has hit the street. With it, VMs can manage their own page tables and no longer have to rely exclusively on the hypervisor. The result is near-native memory performance for all applications, which removes one of the greatest obstacles to virtualizing performance-intensive multithreaded applications. To leverage hardware-assisted memory virtualization, there are two requirements:

  • cluster-wide server hardware that supports hardware-assisted memory virtualization (e.g., quad-core AMD Opteron servers); and
  • a hypervisor that supports hardware-assisted memory virtualization.

For VMs to leverage any new hardware feature, every physical node on which those VMs can run must support it. So if you plan to fully utilize hardware-assisted memory virtualization, ensure that every host in the cluster includes the feature.
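One quick way to verify this on a Linux host is to check the CPU feature flags. The sketch below is my own illustration (Python), not a vendor tool; it looks for AMD’s npt flag or Intel’s equivalent ept flag in /proc/cpuinfo:

```python
# Check a host's CPU flags for hardware-assisted memory virtualization.
# AMD advertises the feature as "npt" (Nested Page Tables); Intel's
# equivalent, Extended Page Tables, shows up as "ept".

def supports_hw_memory_virt(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return "npt" in flags or "ept" in flags
    return False

if not supports_hw_memory_virt():
    print("Host lacks nested/extended page tables; VMs here will fall back "
          "to shadow page tables.")
```

Run the same check on every host in the cluster; a single host without the flag is enough to hold back any VM that migrates onto it.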

It took hypervisor vendors several months to nail down the secret sauce for hardware-assisted memory virtualization—that is, AMD’s Nested Page Tables. Three vendors have exploited this feature extremely well: VMware, Citrix and Novell. If a hypervisor cannot effectively manage large memory pages, the performance gains you would expect from hardware-assisted memory virtualization may not be there.

Over the past two years, technologies such as paravirtualization substantially improved I/O performance. But for some applications, even 2% to 5% overhead is too much. In such cases, you should consider technologies that drive performance even closer to native levels. These include network interfaces that support single-root I/O virtualization (SR-IOV), such as the Neterion X3100 series 10 Gb Ethernet network adapters. With SR-IOV, VMs can directly connect to shared network interfaces and bypass the hypervisor stack, resulting in performance overhead of less than 1%. SR-IOV-enabled network interfaces can improve both iSCSI and network performance. On the Fibre Channel side, purchasing 4 Gb or 8 Gb HBAs that support N_Port ID virtualization (NPIV) helps as well. NPIV enables a VM to have its own identity (e.g., a worldwide port name) on a SAN; when NPIV is combined with VMs mapped to raw storage LUNs (instead of virtual hard disk files), performance overhead in the 1% range is achievable.
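On the network side, a quick way to see whether a Linux host’s NICs expose SR-IOV virtual functions is to look in sysfs. Here is a minimal sketch for a modern Linux kernel (the sriov_totalvfs attribute is standard sysfs, but your device paths will vary):

```python
# List network devices that advertise SR-IOV virtual functions via sysfs.
import glob

for vf_file in glob.glob("/sys/class/net/*/device/sriov_totalvfs"):
    nic = vf_file.split("/")[4]  # e.g., "eth0"
    with open(vf_file) as f:
        total_vfs = int(f.read().strip())
    print(f"{nic}: supports up to {total_vfs} SR-IOV virtual functions")
```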

Today’s technology gains have brought overhead on the core resources (CPU, memory, network and storage) down to acceptable levels of 1% or less. Lower overhead has enabled enterprise applications such as SAP NetWeaver to realize the benefits of x86 server virtualization.

Making SAP a virtual reality

By taking the right precautions, you can migrate SAP applications from physical servers to VMs without anyone saying, “I told you it wouldn’t work.” With that in mind, here are a few guidelines to ensure success:

  • maintain virtual CPU (vCPU) to physical CPU (pCPU) core affinity;
  • leverage hardware-assisted memory virtualization; and
  • use virtualization platforms that allow you to oversubscribe resources.

With virtualization platforms, the less work a hypervisor has to do, the better the performance. That starts with the CPU. Keeping the vCPU-to-pCPU core ratio at 1:1 or better significantly eases the hypervisor’s CPU scheduling overhead. For example, a four-way, quad-core server puts 16 cores at your disposal. To maintain 1:1 affinity, ensure that all the VMs on that host collectively use 16 or fewer vCPUs; running four VMs, each with four vCPUs, would work.
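Here is a minimal sketch of that sizing check (Python; the VM names and vCPU counts are hypothetical):

```python
# Verify that the VMs on a host collectively stay within its physical cores.

PHYSICAL_CORES = 16  # four-way, quad-core server

vm_vcpus = {"sap-app1": 4, "sap-app2": 4, "sap-db": 4, "sap-ci": 4}  # hypothetical

total = sum(vm_vcpus.values())
print(f"{total} vCPUs scheduled on {PHYSICAL_CORES} cores "
      f"(ratio {total / PHYSICAL_CORES:.2f}:1)")
if total > PHYSICAL_CORES:
    print("WARNING: vCPU-to-pCPU ratio exceeds 1:1; expect scheduling overhead.")
```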

I’ve already covered the importance of hardware-assisted memory virtualization for VM memory performance. If you want to see the difference firsthand, monitor the number of page faults an application generates when it runs in a virtual machine that does not leverage hardware-assisted memory virtualization. The results may surprise you. When virtualizing performance-intensive applications, including those in SAP’s Business Suite, hardware-assisted memory virtualization makes a significant difference.
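On a Linux guest, one crude way to watch that page-fault activity is to sample the pgfault counter in /proc/vmstat while the application runs under load; a minimal sketch:

```python
# Sample the guest's cumulative page-fault counter over a workload interval.
import time

def page_faults() -> int:
    with open("/proc/vmstat") as f:
        for line in f:
            if line.startswith("pgfault "):
                return int(line.split()[1])
    raise RuntimeError("pgfault counter not found")

before = page_faults()
time.sleep(60)  # let the workload run for a minute
print(f"Page faults during interval: {page_faults() - before:,}")
```

Compare the same interval with and without hardware-assisted memory virtualization enabled, and the difference should be hard to miss.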

One final tip is to leverage virtualization platforms that allow you to oversubscribe resources. This applies primarily to memory and is most useful when applications reach their peak workloads at different times of day. Hypervisors that support memory overcommit, for example, allow you to allocate more memory to VMs than is physically available on the host system. To understand this, consider a host with 64 GB of RAM running four VMs. If the VMs hit their performance spikes at different times of day, you could assign each VM 32 GB of RAM (32 GB x four VMs = 128 GB). The 32 GB represents the maximum amount of memory a VM could use; when a VM doesn’t need its physical memory, the hypervisor pages the unneeded contents to disk. Without overcommit, you could run only two VMs in this scenario, doubling the required server hardware investment. Again, this feature is useful only when workload spikes occur at different times, so it may not provide a major benefit for all SAP applications. Still, some organizations can clearly benefit from it.
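The arithmetic behind that example, as a quick sketch:

```python
# Memory overcommit arithmetic from the example above.
HOST_RAM_GB = 64
VM_ALLOCATION_GB = 32
NUM_VMS = 4

allocated = VM_ALLOCATION_GB * NUM_VMS      # 128 GB promised to VMs
ratio = allocated / HOST_RAM_GB             # 2.0x overcommit
print(f"{allocated} GB allocated on a {HOST_RAM_GB} GB host ({ratio:.1f}x)")

# Without overcommit, the host could fit only two of these VMs.
print(f"VMs without overcommit: {HOST_RAM_GB // VM_ALLOCATION_GB}")
```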

A couple of years ago, the notion of running SAP applications on an x86 virtualization platform such as VMware ESX Server might have been as funny as one of Chris Rock’s jokes. But the organizations that have already embarked on the process aren’t laughing. They have breathed a sigh of relief, because their IT shops now have a more mobile infrastructure and complete confidence in their ability to recover from disaster. For more information on virtualizing SAP, VMware’s SAP community is a great place to start.


About the Author

Chris Wolf is a senior analyst in the Data Center Strategies service at Midvale, Utah-based Burton Group. He has more than 14 years of experience in the IT trenches and eight years of experience with enterprise virtualization technologies. Wolf provides enterprise clients with practical research and advice about server virtualization, data center consolidation, business continuity and data protection. He is the author of Virtualization: From the Desktop to the Enterprise, the first book published on the topic.
