
Oddball clouds and new ways to build a virtual infrastructure

New technologies and trends are changing how we look at virtualization and could affect how companies design and build their data centers.

It’s a sign of virtualization’s maturity that we are beginning to see data centers built in alternative ways. Market niches are creating demand for different configurations and even fundamentally different approaches to design.

For most of us, a cloud is synonymous with racks of identical x64 commercial off-the-shelf servers available as an on-demand service to users. Virtual instances can be created or killed as needed, and the whole setup is resilient to failure: because no user data is stored on the (stateless) servers, automated orchestration software can replace failed instances very quickly. However, many of today's cloud builds look much different.

Stateful instances

First, "state" has crept back into some server instances, with local instance storage on solid-state drives or hard drives. The rationale is easy to follow: the I/O performance of shared network storage over slow networks just can't keep up with I/O-intensive instances. Taking an app off a dedicated server and asking it to share storage doesn't work well for I/O hogs, so temporary local instance storage is the answer, and a whole new family of instances was born.
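As a rough illustration, here is a minimal sketch of launching a storage-optimized instance with local instance storage, assuming AWS EC2 and the boto3 Python SDK; the AMI ID is a hypothetical placeholder, and the instance type is just one example of this family:

import boto3

# Connect to EC2 in one region (the region choice here is arbitrary).
ec2 = boto3.client("ec2", region_name="us-east-1")

# The i3 family carries local NVMe SSDs as instance storage; the AMI ID
# below is a placeholder, not a real image.
response = ec2.run_instances(
    ImageId="ami-00000000",
    InstanceType="i3.large",
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print("Launched stateful instance", instance_id)

# Anything written to the local instance store is ephemeral: it disappears
# when the instance is stopped or terminated, so durable results must be
# copied out to shared storage before the instance dies.

That last point is the architectural catch described below.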

This is a fundamental shift in architecture, and it requires enormous care to prevent data from being left behind when an instance dies. It's an effective way to speed up many apps, however, and it is definitely here to stay.

A new interest in containers

Traditional hypervisor-driven clouds will be supplanted in many cases by the new container approach, as embodied in Docker. Containers allow a single image of the operating system and apps to be shared by many instances. This more than doubles the number of instances a server can host, and it cuts storage and network traffic loads tremendously. Some usage restrictions apply, such as requiring all the containers on a server to share the same operating system kernel, but this usually isn't onerous.
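To make that sharing concrete, here is a minimal sketch using the Docker SDK for Python (the docker package), assuming Docker is installed and running; the image name and container count are arbitrary choices for illustration:

import docker

client = docker.from_env()

# All ten containers run from the same "python:3.12-slim" image, so the
# OS userland and app layers are stored and shipped exactly once, unlike
# ten separate VM disk images.
containers = [
    client.containers.run(
        "python:3.12-slim",
        command=["python", "-c", f"print('container {i} up')"],
        detach=True,
    )
    for i in range(10)
]

# Every container shares the host kernel, which is the main restriction:
# you cannot mix different operating systems on one server this way.
for c in containers:
    c.wait()                        # wait for the container to exit
    print(c.logs().decode().strip())
    c.remove()                      # clean up the stopped container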

VMware appears a bit disconcerted by containers, which could supplant its mainstay business. The company has rushed to get on board and is delivering containers running on a guest OS inside a VM. The container-inside-a-VM approach may seem a bit "layered," but it could be effective, and it allows VMware to apply its management tools to control the setup.

High-performance computing

High-performance computing has been around for years, but not as a genuine cloud. The resource pool is present, but multi-tenancy isn't, and in the case of the national labs, that's been a good thing: no cross-contamination of nuclear bomb simulations with oil and gas modelling!

That's changing as servers get bigger and hold much more DRAM. The record for an x64 server today is 6 TB of DRAM, though that has yet to make it to a cloud. Large instances are common, even up to one instance per multi-CPU server, with orchestration still allowing great flexibility and a lot of savings.

As a result, high-performance computing is moving to a rental model, at both the platform and the software-as-a-service level. Specialist clouds will better match hardware to instance sizes. Together, these changes are making powerful computing available on demand even to small organizations, and they will change that industry segment profoundly.

GPU instances

Anyone who follows high-performance computing or big data will know that GPU acceleration is a key to supercomputer performance. Building a cloud with GPU instances is a challenge because of the stress of data movement and the sheer app scale required. Even so, Nvidia has picked up that challenge and has a GPU cloud in place, with the mega cloud service providers following. Many of the high-end supercomputers will migrate to a cloud model of control in the next couple of years.
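From inside a rented GPU instance, the acceleration on offer looks something like the following minimal sketch, assuming PyTorch with CUDA support is installed (our assumption for illustration, not a detail from any particular provider):

import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
    print("GPUs visible:", torch.cuda.device_count())
    print("Device 0:", torch.cuda.get_device_name(0))

    # A large dense matrix multiply, offloaded to the GPU: the kind of
    # parallel linear algebra that GPU-accelerated HPC is built on.
    a = torch.randn(4096, 4096, device=device)
    b = torch.randn(4096, 4096, device=device)
    c = a @ b
    print("Checksum:", c.sum().item())
else:
    print("No CUDA GPU visible; this instance type lacks GPU acceleration.")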

Big data and the Internet of Things

Everyone seems to consider big data in terms of scale reaching exabytes, but we have to remember that all of that data has to move around on networks. The bandwidth of the pipes and the flexibility of routing will determine how well clouds handle big data.

This is where software-defined networking (SDN) will have a huge impact, but the centralized cloud approach by itself is inadequate for the load. It is probable that data streams will be reduced by satellite clouds that sit local to the data, which will create the challenge of orchestrating across cloud boundaries.

Again, GPUs and parallelism seem to be likely requirements for adequate cloud performance, but there are fundamental differences in workload and data structures between big data and high-performance computing. Most likely, these clouds will evolve toward a key/value storage model, and they will not be general-purpose clouds.

Mainframes and Watsons

IBM and others have talked up clouds based on mainframes, and it is true that mainframe job management has much in common with cloud orchestration. However, calling a mainframe a cloud is a long stretch; the scale-out capabilities of real clouds far exceed those of any mainframe around.

IBM’s Watson has been talked up as a cloud product too. It offers an artificial intelligence approach to problems, and it clearly has a place in many applications. Lacking the scale-out capability of the true cloud, Watson is best characterized as a shared resource.

Future evolutions

Specialist systems are a likely next step in the cloud space. The ability to design a system for a specific task is improving as the speed of developing ASICs increases and the basic building blocks become more flexible and fully featured.

Examples to expect include specialist clouds for narrow application segments. We already have our first video-editing cloud, as Adobe moves to a server-centric delivery model for its video tools. This has been well received and very successful. We can expect other specialist clouds to be created, including on-demand speech recognition systems and, of course, the one that started everything cloud: the search engine.

One aspect of the cloud that hasn’t yet had much focus is the market specialization model for smaller clouds. These add value by delivering expertise and central focus for markets such as legal, health, government and military. With the rise of containers and the general move towards SaaS, verticalisation should flourish.
