For many information technologists, virtualizing mission-critical applications remains untrodden ground.
Fears about application performance, proper backup and recovery, and adequate security for virtual machines (VMs) are among a litany of concerns that have left IT departments slow to warm to virtualization.
But a few IT shops have begun to cross the chasm and reap virtualization's benefits: reduced costs, higher hardware utilization, contained server sprawl and improved disaster recovery practices.
For one U.K.-based global heating and plumbing supplies provider that wishes to remain anonymous, the next frontier is virtualizing SAP AG’s enterprise resource planning app, SAP ERP. The global company has sales of more than $31 billion, more than 75,000 employees, some 5,000 branches and six data centers in North America and Europe. With a large and complex supply chain that is central to the company’s day-to-day business, virtualizing SAP is a major undertaking. And considering that VMware certified SAP only six months ago—in December 2007—the company’s efforts to bring SAP into a virtual environment are relatively cutting edge.
Still, to date the project remains in the early testing phases, with completion estimated over the next three years. “We’ve gone through lab testing to determine how we’re going to serve up SAP,” said the company’s senior manager of global systems engineering, who is in charge of the project. “We still haven’t proven that virtualization is the right way to go,” he said.
The challenges of virtual ERP
For this senior manager, concerns about virtualizing SAP have little to do with lack of experience with virtual technologies. Over the past five years, the company has virtualized applications extensively—including Microsoft’s Exchange and SharePoint as well as Oracle’s Hyperion—and the company already has about 500 VMs to its 1,000 physical machines. “We’ve used VMware for so long, so we’ve seen the pros and cons,” he said. But as with any massive project, the human element plays a role. “There are lots of players involved; virtualization is only a small piece,” he said. “Things change, the economy changes, so a requirement becomes back burner after being primary.”
The complexity of first consolidating the company’s myriad ERP platforms onto SAP ERP is itself a substantial undertaking. Over the past few years, the company has made several acquisitions and, as a consequence, inherited several disparate ERP systems to manage its supply chain. Further, SAP itself needs to be customized for the company’s particular business needs, which requires introducing multiple SAP environments—test, development, production and so on—into the virtual infrastructure. Two of the six data centers will run SAP ERP.
And while the company’s senior manager of global systems engineering hopes for all the usual advantages of virtualization—reduced cost, easier management and increased flexibility—one driver trumps them all: a rapid recovery time objective (RTO) in the event of failure.
“That’s probably the primary motivator for us; it’s the least common denominator,” he said. In the company’s test environments, “Recovery time for the application servers has gone from hours to minutes. We can use a server farm in another data center for testing, and if we need [these servers] for failover, we can have these [VMware] ESX servers designated for recovery in 15 minutes, then place images on them in another 15 minutes.”
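The recovery arithmetic behind that quote is simple enough to spell out. A minimal sketch, using only the two 15-minute figures from the quote (the variable names are ours):

```python
# Two-phase virtual failover as described by the senior manager.
# Both timing figures come from the quote; the total is just their sum.

designate_esx_hosts_min = 15  # repurpose test-farm ESX servers for recovery
place_vm_images_min = 15      # deploy application-server images onto them

virtual_rto_min = designate_esx_hosts_min + place_vm_images_min
print(f"Virtual recovery time objective: {virtual_rto_min} minutes")  # 30 minutes
```

A roughly 30-minute RTO is what replaces the “hours” of rebuilding physical application servers from scratch.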
Virtualizing applications like SAP has prompted the company’s IT department to rethink existing infrastructure as well. First, like many, this company has invested heavily in Fibre Channel (FC) to access and share files across the network. But increasingly, Network File System (NFS) has become a lower-cost, less complex alternative to FC. NFS obviates the expense of host bus adapters, additional ports and fiber cabling, all of which FC requires. With its current usage ratio at about 95% FC to 5% NFS, the company has considered ramping up its NFS use. “We’re using NFS more, and we’re seeing cost benefits and performance,” said the senior manager. “We will see a shift over time,” he said.
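To see where NFS’s savings come from, it helps to tally connectivity costs per host. A rough sketch; every price below is a hypothetical placeholder for illustration, not a figure from the article or any vendor:

```python
# Per-host connectivity cost comparison, Fibre Channel vs. NFS over
# existing Ethernet. All dollar amounts are hypothetical placeholders.

fc_cost_per_host = {
    "dual_port_hba": 1200,       # host bus adapter, which FC requires per host
    "fc_switch_ports": 2 * 800,  # two fabric ports for redundant paths
    "fiber_cabling": 150,
}

nfs_cost_per_host = {
    "ethernet_nics": 0,               # assumes NICs already present in each host
    "ethernet_switch_ports": 2 * 100, # commodity Ethernet ports
    "copper_cabling": 30,
}

fc_total = sum(fc_cost_per_host.values())
nfs_total = sum(nfs_cost_per_host.values())
print(f"FC:  ${fc_total} per host")
print(f"NFS: ${nfs_total} per host")
```

Whatever the exact numbers, the structural point holds: FC adds dedicated adapters, fabric ports and cabling per host, while NFS rides on Ethernet the hosts already have.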
Second, given the company’s mandate to keep recovery time objectives “incredibly low,” and the importance of disaster recovery generally, another architectural concern is keeping the primary and secondary data centers close enough together to allow for synchronous replication, in which each write is committed at both sites before it is acknowledged. That design yields more consistent data but generally requires sites to be within about 30 miles of each other, so site proximity has factored into the company’s planning. But for this senior manager, a central architectural question is still up for debate: whether to run virtual technologies on blade servers or to “pull everything off blades and back onto our Superdomes” and use a single management console, he said.
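The roughly 30-mile ceiling follows from physics: because every synchronous write waits for a round trip to the remote site, fiber propagation delay is added to each write. A back-of-envelope sketch (the speed-of-light-in-fiber figure is a standard approximation; nothing here comes from the article):

```python
# Why synchronous replication caps the distance between sites:
# each write's acknowledgment waits on a fiber round trip.

LIGHT_SPEED_IN_FIBER_KM_S = 200_000  # roughly two-thirds of c in glass
MILES_TO_KM = 1.609344

def write_latency_ms(distance_miles: float) -> float:
    """Propagation-only round-trip delay added to one synchronous write."""
    round_trip_km = 2 * distance_miles * MILES_TO_KM
    return round_trip_km / LIGHT_SPEED_IN_FIBER_KM_S * 1000

print(f"{write_latency_ms(30):.2f} ms added per write at 30 miles")
```

At 30 miles the propagation penalty is still well under a millisecond per write; stretch the distance much further and the per-write delay (plus switching and storage-array overhead, which this sketch ignores) starts to drag down transactional applications like ERP.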
“Today we’ve decided to go down the path of using [Hewlett-Packard] Superdomes, VMware on blades and Red Hat [Enterprise] Linux,” he said. “VMware has a good piece, Red Hat has a good piece and HP has a good piece, but it isn’t a single pane of glass,” he noted. “Ultimately, we have to weigh the viability of using VMware and Red Hat on blades rather than using a single tool that can get all our environments from physical to virtual [ones]. We will consider that seriously, and HADR [high availability disaster recovery] is going to play a big part in the direction we go in,” he said.
About the Author
Lauren Horwitz is the managing editor of the Data Center Media Group at TechTarget. Write to her at firstname.lastname@example.org