Cloud computing defined: Strategies for your enterprise

Today’s cloud holds a lot of promise but still poses a number of security, interoperability and access concerns

Cloud computing got a big boost in credibility recently when major vendors jumped on board. But lost in the swarm of announcements are questions about what cloud computing is and what it can really do.
Many data center managers envision cloud computing as the ability to pool resources, charge customers based on actual usage and tap into extra external capacity when needed. This view of cloud computing is becoming possible through a blend of technologies like virtualization, Software as a Service, and Web-based applications.

But to achieve this computing nirvana, there are several questions about how clouds will work—and how companies’ technology architectures will work with cloud architectures. How can data center managers tap into cloud computing? And should they? How will cloud computing affect data centers with more than 1,000 servers? Which architectural and technological decisions affect a company’s ability to use and exploit cloud computing?

These critical questions will likely arise repeatedly as data center managers begin to navigate vendor hype to uncover the realities of cloud computing as well as its possible benefits for their organizations.

Nuts and bolts of the cloud

To start, it’s important to know that there are external clouds—pooled resources provided by third parties such as Amazon and Google—and internal clouds—pooled resources inside a company. The interaction between external clouds and internal clouds raises a range of practical and technological concerns around data access, security, privacy and access control.

Some related considerations include the following:

  • How well does cloud computing work for many of today’s non-Web-based applications, or does truly harnessing the cloud paradigm require organizations to “re-architect” many internal applications?
  • How does an organization manage network connectivity between its private internal cloud and external cloud resources?
  • How can organizations provision, configure and customize external cloud resources appropriately?
  • How do external cloud resources gain access to the internal data they require in order to operate?
  • For applications running in the external cloud, how much bandwidth is needed for those applications to access the internal data they require?
  • How does an organization provide access control?

Existing technologies—such as virtual private networks and data replication—can address some of these issues. VPNs, for example, can provide connectivity between external clouds and private clouds. And many VPN products also provide access control lists, or ACLs, for traffic running inside the tunnel, giving organizations fine-grained control over the traffic that moves from a private cloud to an external cloud. This approach helps to ensure that only authorized external cloud resources are allowed to communicate with appropriate resources in the private cloud, and only via authorized ports and protocols.
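
As a rough illustration of the fine-grained policy that an ACL enforces, the short Python sketch below models an allow list of permitted flows between internal and external cloud subnets. The subnets, ports and protocols are hypothetical placeholders, not a reference to any particular VPN product.

```python
# Minimal sketch of ACL-style filtering between a private cloud and an
# external cloud. Subnets, ports and protocols are hypothetical examples.
from ipaddress import ip_address, ip_network

# Each rule: (source subnet, destination subnet, protocol, destination port)
ALLOWED_FLOWS = [
    (ip_network("10.10.0.0/24"), ip_network("172.16.5.0/24"), "tcp", 443),   # internal app tier -> external front end
    (ip_network("172.16.5.0/24"), ip_network("10.10.20.0/24"), "tcp", 1433), # external app -> internal database
]

def is_permitted(src: str, dst: str, proto: str, dport: int) -> bool:
    """Return True only if the flow matches an explicitly allowed rule."""
    s, d = ip_address(src), ip_address(dst)
    for src_net, dst_net, rule_proto, rule_port in ALLOWED_FLOWS:
        if s in src_net and d in dst_net and proto == rule_proto and dport == rule_port:
            return True
    return False  # default deny: anything not listed is blocked

if __name__ == "__main__":
    print(is_permitted("172.16.5.10", "10.10.20.5", "tcp", 1433))  # True
    print(is_permitted("172.16.5.10", "10.10.30.5", "tcp", 22))    # False
```
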
Data replication technologies may be able to address the need for external cloud resources to access certain data sets in order to function correctly. Vendor-independent replication technologies—those that are not tied to a particular storage vendor’s hardware, for example—will be particularly useful in this case, offering organizations and service providers alike greater flexibility and compatibility.
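
To make the idea of vendor-independent replication concrete, here is a minimal, purely illustrative Python sketch that mirrors changed files from an internal data set to a staging area bound for the external cloud, using only checksums from the standard library. Real replication products add change tracking, scheduling and consistency guarantees that this toy version omits, and the paths shown are hypothetical.

```python
# Toy, vendor-neutral replication pass: copy only files whose content differs.
# Paths are hypothetical; real replication tools track changes far more efficiently.
import hashlib
import shutil
from pathlib import Path

def file_digest(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def replicate(source: Path, target: Path) -> None:
    """Copy any file that is missing or different on the target side."""
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        dst_file = target / src_file.relative_to(source)
        if not dst_file.exists() or file_digest(src_file) != file_digest(dst_file):
            dst_file.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src_file, dst_file)

if __name__ == "__main__":
    replicate(Path("/data/orders"), Path("/mnt/external-cloud-staging/orders"))
```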

At the same time, however, VPN technologies and data replication topologies must be coordinated between an organization and its service provider and, therefore, can create some degree of vendor lock-in. Successfully using these technologies to solve practical challenges of cloud computing will require IT organizations and their service providers to work together much more closely than ever before. 

Unfortunately, this tight coordination isn’t limited to VPNs and replication; it tends to extend to the rest of a cloud-computing solution. These close ties between an organization and a service provider make it more difficult and more costly for organizations to switch cloud-computing providers, effectively reducing portability.

Also, as the ecosystem of virtualization hypervisors becomes increasingly competitive—with Citrix Systems Inc.’s XenServer and Microsoft’s Hyper-V poised to compete with market leader VMware Inc.—other factors undermine portability. Given that virtualization typically plays a significant role in cloud-computing environments, particularly in private clouds, the interoperability of hypervisors from different vendors, and of the guest virtual machines (VMs) that run on them, is a key factor in portability and, by extension, in the widespread adoption of cloud computing.

These factors introduce even more issues. For example, will organizations be able to make a VMware ESX-powered internal cloud work properly with a XenServer-powered external cloud, and vice versa? Will an organization that currently uses a VMware ESX-powered external cloud be able to switch to a XenServer-powered external cloud? Will service providers be able to mask that complexity for users?

Virtualization vendors now tout interoperability, hoping to answer these questions, but users have yet to see significant progress on this front.

Three major areas of interoperability that have yet to be addressed are VM definitions, virtual hard disks and paravirtualized device drivers. Until efforts to address these areas bear results, data center managers need to be wary of the claims of seamless scaling and fluidity that cloud computing supposedly offers. Leveraging cloud computing today may reap benefits for some organizations and some applications, as long as data center managers are fully aware of—and plan around—today’s limitations.
 

Stumbling over cloud-computing compatibility

Work is already being done to address the key compatibility issues: VM definitions, virtual hard disks and paravirtualized device drivers. Some of these efforts, though, are still early in development. For example, on the VM definition, or configuration, front, the Open Virtualization Format (OVF)—also referred to as Open Virtual Machine Format—has been approved as a standard, yet broad support for the format is still lacking.

VMware has the most complete implementation, while Citrix, Microsoft, Novell Inc. and others lag behind. Organizations that want to leverage cloud computing should not count on OVF to provide greater compatibility until all of the major vendors have shipped OVF support in their products. Note that some vendors have announced support for OVF, but that support has yet to reach users in the form of actual products. Furthermore, adoption of the standard doesn’t necessarily resolve other key cross-hypervisor compatibility issues.
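
For readers unfamiliar with what an OVF descriptor actually contains, the hedged Python sketch below pulls the virtual systems and disks out of a descriptor file. It assumes a standard DMTF OVF 1.0 envelope and a hypothetical file name, and it ignores the many optional sections a real descriptor can carry.

```python
# Minimal sketch: list the virtual systems and disks declared in an OVF descriptor.
# Assumes a DMTF OVF 1.0 envelope; "appliance.ovf" is a hypothetical file name.
import xml.etree.ElementTree as ET

NS = {"ovf": "http://schemas.dmtf.org/ovf/envelope/1"}

def summarize_ovf(path: str) -> None:
    envelope = ET.parse(path).getroot()
    for vs in envelope.findall("ovf:VirtualSystem", NS):
        print("Virtual system:", vs.get(f"{{{NS['ovf']}}}id"))
    disk_section = envelope.find("ovf:DiskSection", NS)
    if disk_section is not None:
        for disk in disk_section.findall("ovf:Disk", NS):
            print("Disk:", disk.get(f"{{{NS['ovf']}}}diskId"),
                  "capacity:", disk.get(f"{{{NS['ovf']}}}capacity"))

if __name__ == "__main__":
    summarize_ovf("appliance.ovf")
```
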
Virtual hard disk formats, in the form of VMware’s Virtual Machine Disk (VMDK) format and Microsoft’s VHD format, pose another compatibility issue. The OVF specification supports both formats, and each vendor offers the ability to read or convert the other’s format, but what about native read/write functionality?

Microsoft and Citrix each have a slight upper hand here, in that both natively support the VHD format, but VMware’s Virtual Machine Disk format is the market leader. Both camps have taken steps to make their virtual hard disk formats the default by opening up specifications to third-party developers, but neither format has emerged as the clear, de facto standard.
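
To show how divergent the on-disk formats are, here is a small, hedged Python sketch that guesses whether an image file is a VHD or a VMDK by checking for well-known markers (the "conectix" cookie of a VHD footer, the "KDMV" magic number or plain-text descriptor of a VMDK). It is a format sniffer only and does not read either format's contents; the file name used is hypothetical.

```python
# Hedged sketch: guess a virtual disk image's format from well-known markers.
# Covers only the common cases; it does not parse either format.
def sniff_disk_format(path: str) -> str:
    with open(path, "rb") as f:
        head = f.read(512)
        f.seek(0, 2)                      # seek to end to find the file size
        size = f.tell()
        f.seek(max(size - 512, 0))
        tail = f.read(512)
    if tail.startswith(b"conectix") or head.startswith(b"conectix"):
        return "VHD"                      # footer cookie used by Microsoft's VHD format
    if head.startswith(b"KDMV"):
        return "VMDK (sparse extent)"     # magic number of a VMware hosted sparse extent
    if head.lstrip().startswith(b"# Disk DescriptorFile"):
        return "VMDK (descriptor)"        # plain-text VMDK descriptor file
    return "unknown"

if __name__ == "__main__":
    print(sniff_disk_format("example-disk.img"))  # hypothetical file name
```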

The use of hypervisor-specific paravirtualized device drivers is another key compatibility issue that the virtualization industry has yet to address. All major virtualization providers use paravirtualized device drivers to optimize the performance of VMs running on their platforms; VMware Tools, Microsoft’s enlightenments and Citrix’s Xen Tools are examples. Again, by virtue of their interoperability and cross-licensing agreement, Microsoft and Citrix each have a slight advantage here, making it more likely that XenServer and Hyper-V VMs will be compatible in this realm.

But VMware still controls the lion’s share of the market. This dominance leaves customers without a clear standard for paravirtualized device drivers in cross-platform virtualization environments.

So how have the major vendors tried to address these key issues? It’s worth noting that the big three players in this area—VMware, Microsoft and Citrix—have taken different approaches to the challenges facing widespread adoption of cloud computing. Here is how each positions itself:

VMware: Hoping to leverage its virtualization market leadership in the cloud-computing space, VMware’s approach centers on its vCloud initiative, which is based on close partnerships with service providers, the use of virtual appliances and vApps, and the ubiquitous presence of the ESX hypervisor and supporting applications, in particular vCenter, formerly known as VirtualCenter. VMware intends to partner with several service providers that will use its software to build external clouds powered by the ESX or ESXi hypervisor and managed—or manageable—by vCenter.
Together with virtual appliances and vApps, the OVF standard enables portability between these external and internal clouds. The vApps technology can be considered an extension of the OVF standard and can incorporate information about an application’s service-level agreement, disaster recovery requirements, security needs and other policy information. With vApps and OVF, organizations could move applications from one “VMware-ready vCloud” to another relatively easily.

And because these service providers offer ESX- or ESXi-based virtualization, applications—in the form of vApps, virtual appliances or VMs with an OS and application installed—can be easily shared and transferred between internal and external cloud infrastructures. VMware has said that it plans to expose vCloud application programming interfaces (APIs) that enable even greater integration between internal cloud infrastructures and external cloud providers.

In this scenario, organizations that use VMware virtualization internally would take advantage of vCloud by “federating” their private cloud with an external VMware-ready vCloud service provider. Although the precise mechanics are not yet known, the idea is that organizations could then move workloads from an internal cloud to an external one or spin up additional VMs at the external provider to handle additional load.
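
Because the vCloud APIs had not yet been published when this was written, the following Python sketch is purely hypothetical: it illustrates the cloud-bursting decision described above, with an invented ExternalCloudClient standing in for whatever interface VMware and its partners eventually expose.

```python
# Hypothetical cloud-bursting sketch. ExternalCloudClient and its methods are
# invented for illustration; they do not correspond to any published vCloud API.
from dataclasses import dataclass

@dataclass
class ExternalCloudClient:
    endpoint: str

    def deploy_vapp(self, template: str, count: int) -> None:
        # Stand-in for a real provisioning call against a service provider.
        print(f"Deploying {count} instance(s) of {template} at {self.endpoint}")

def burst_if_needed(internal_cpu_utilization: float, threshold: float,
                    client: ExternalCloudClient, template: str) -> None:
    """Spin up extra capacity externally when the internal cloud runs hot."""
    if internal_cpu_utilization > threshold:
        extra = 2  # hypothetical sizing policy
        client.deploy_vapp(template, extra)
    else:
        print("Internal capacity is sufficient; nothing to do.")

if __name__ == "__main__":
    client = ExternalCloudClient(endpoint="https://provider.example.com/api")
    burst_if_needed(internal_cpu_utilization=0.87, threshold=0.75,
                    client=client, template="web-tier-vapp")
```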

Citrix: In some respects, Citrix’s answer to the cloud—Citrix Cloud Center (C3)—is similar to VMware’s. Naturally, Citrix XenServer is a key component of C3. Citrix will deliver a special version of XenServer—XenServer Cloud Edition—that incorporates all the functionality of traditional XenServer plus consumption-based pricing, so that service providers can charge based on metered resource usage.

C3 joins XenServer Cloud Edition with NetScaler to provide policy-based application performance management. NetScaler integration will enable organizations to dynamically scale the number of VMs based on user demand to balance workloads across cloud environments or, in the event of failures or outages, to redirect traffic transparently. NetScaler also offloads some protocol and transaction processing from servers, providing greater scalability.

WANScaler is another part of the package; it provides acceleration and optimization of traffic between external and internal clouds, and Citrix Workflow Studio aims to unify these different components. As VMware does with vCloud, Citrix will offer this functionality to service providers. Citrix has focused less on the dynamic movement of workloads between internal and external clouds and more on application delivery via NetScaler and WANScaler, which gives C3 some advantages over vCloud.

Similar to how VMware-using organizations would take advantage of VMware vCloud, organizations using XenServer—or potentially the open source Xen hypervisor—internally would partner with service providers running C3 to gain the ability to shift workload processing based on demand. C3’s integration with NetScaler makes NetScaler the vehicle by which workloads are shifted internally or externally, or are load-balanced between the two, based on business-driven policies. If demand dictates, NetScaler can also activate additional VMs. WANScaler optimizes traffic between internal and external clouds as workloads shift.
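
The policy-driven traffic shifting described above can be pictured as a weighting decision. The sketch below is a generic Python illustration of that idea and is not tied to NetScaler’s actual configuration or APIs; the thresholds and pool names are made up.

```python
# Generic illustration of demand-based weighting between an internal and an
# external VM pool. The numbers and pool names are hypothetical.
def split_traffic(demand_rps: float, internal_capacity_rps: float) -> dict:
    """Return the share of traffic to send to each pool for the current demand."""
    if demand_rps <= internal_capacity_rps:
        return {"internal-pool": 1.0, "external-pool": 0.0}
    overflow = demand_rps - internal_capacity_rps
    return {
        "internal-pool": internal_capacity_rps / demand_rps,
        "external-pool": overflow / demand_rps,
    }

if __name__ == "__main__":
    print(split_traffic(demand_rps=800, internal_capacity_rps=1000))   # all internal
    print(split_traffic(demand_rps=1500, internal_capacity_rps=1000))  # ~33% overflows externally
```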

Microsoft: Windows Azure, Microsoft’s cloud-computing initiative, is quite different from the rest. Rather than relying on virtualization as a key building block, Microsoft has leveraged its massive developer base and the strength of .NET as a development platform to build a new cloud-computing environment. Microsoft touts Windows Azure as a “cloud services operating system” that is designed to provide “on-demand compute and storage to host, scale and manage Web applications on the Internet through Microsoft data centers.”

And although VMware and Citrix try to help service providers build their own clouds through the use of their virtualization software, Microsoft instead seeks to be its own service provider. External service providers won’t be able to buy Windows Azure or the Azure Services Platform; these integrated components will be managed and maintained directly by Microsoft.

The other major differentiator of Windows Azure and the Azure Services Platform is that organizations seeking to host applications in an Azure-powered cloud must port them to Azure, because unmodified applications won’t run there. Although Microsoft has said it will support third-party languages like Ruby, PHP and Python, the requirement that applications be ported to run on Azure suggests that those applications won’t run elsewhere, even in a private cloud infrastructure. Naturally, Microsoft is not talking about merging internal and external clouds or about shifting workloads between clouds, because none of that seems to be possible with its approach.

Unlike organizations seeking to take advantage of VMware vCloud or C3, those that want to leverage Windows Azure will need to port their Web-based applications over to the Azure Services Platform, where they will run in data centers owned and managed by Microsoft. Within these data centers, Microsoft’s Azure Services Platform will be able to distribute workload processing across many different physical servers and add or remove processing capacity as needed.

There would be no shifting of workloads between clouds. Instead, the processing and hosting would occur within Microsoft’s cloud. In this sense, Microsoft is more of a competitor with other cloud-computing service providers than a provider of cloud-computing infrastructure.

As the dust settles in the cloud-computing market, the initiatives from the major vendors—VMware’s vCloud, Citrix’s Cloud Center (C3) and Microsoft’s Windows Azure—are grand strategic visions that present different notions of a cloud architecture. What’s conspicuously absent, however, is agreement on key definitions, standards, compatibility, interoperability, security and privacy.

Vendors that have jumped into this market have yet to provide concrete, substantive answers on how these questions will be addressed. Until they do, the strategic vision of pervasive cloud computing is unlikely to see the explosive growth that many experts and industry pundits have predicted.

So what should data center managers do until then? How should they prepare? Data center managers need to recognize that cloud computing may benefit only a portion of their overall set of applications. Until the challenges around portability, compatibility and security are resolved, cloud computing involving external providers may need to be constrained to specific applications and/or specific user populations. As standards evolve and vendors increase their support for those standards, data center managers can begin to look to cloud computing as a broader solution and can trust more of their applications to this new computing paradigm.


About the Author

Scott Lowe is a senior engineer at ePlus Technology Inc., a provider of technology solutions based in Herndon, Va. Lowe’s experience in enterprise technologies includes storage area networks, server virtualization, directory services and interoperability. He has worked as the president and chief technology officer at Mercurion Systems and as the CTO of iO Systems.

This was first published in February 2009
