
Local storage solves some problems, creates others

Local storage helps address virtualization-related IO performance problems but raises potential security concerns, especially for cloud environments.

Virtualization is a tremendous benefit to the IT world. It reduces the number of servers needed -- often considerably -- and it allows more flexibility in assigning resources to workloads. Coupled with orchestration software, virtualization forms the basis for the cloud approach, allowing flexibility in resource provisioning and workload creation.

But it isn't all plain sailing. In many cases, the IO structure of servers and networks struggles to keep up with the virtual instances. Imagine taking the IO stream intended for a single server and dividing it among 256 virtual instances; the slice each instance receives would clearly be insufficient. This IO bottleneck is a concern for many virtualization deployments: if a server hosts a number of IO-intensive VMs, they end up IO-starved and run slowly.
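To make the bottleneck concrete, here is a back-of-the-envelope sketch. The 10 GbE uplink speed and the 256-instance count are illustrative assumptions, not figures from any particular deployment:

```python
# Rough estimate of the IO share each VM gets when many virtual
# instances funnel their traffic through one host network link.
# The link speed and VM count below are illustrative assumptions.

LINK_GBPS = 10          # a single 10 GbE host uplink
VM_COUNT = 256          # virtual instances sharing that link

link_mb_per_s = LINK_GBPS * 1000 / 8          # ~1,250 MB/s of raw bandwidth
per_vm_mb_per_s = link_mb_per_s / VM_COUNT    # even split, ignoring overhead

print(f"Per-VM share: ~{per_vm_mb_per_s:.1f} MB/s")  # roughly 5 MB/s each
```

Even before protocol overhead, each instance is left with a few megabytes per second, which is far below what an IO-intensive workload expects.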

One approach is to add local instance storage in which a workload uses storage located on the host server. This tackles the networked IO problem by making the IO local.

With local instance stores, it's possible to tackle the heavier loads, such as big data analytics and in-memory databases. Performance is nowhere near what a dedicated drive could give, but it can be sized to fit the needs of many virtualized applications.

There are alternatives to the local storage approach. Much faster networking and networked storage could also alleviate the pain, but this is expensive today, and it will be a year before 25 GbE and 100 GbE are available and affordable for most organizations.

However, there are downsides to the local instance store approach. The first should be obvious: it is local. If the server dies, the data is no longer available. This problem has spurred interest in virtual storage area network technologies, where data is replicated across a fast network to guarantee availability. However, sharing the instance store in this way doubles or triples drive traffic, since the instance store now holds replica data from one or two other servers.
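A minimal sketch of why replication multiplies drive traffic, assuming each peer host generates roughly the same write rate (the 100 MB/s figure is purely illustrative):

```python
# Sketch of why replicating instance stores across hosts multiplies
# local drive traffic. The write rates and replica counts are
# assumptions for illustration only.

def drive_write_traffic(local_writes_mb_s: float, replicas_hosted: int) -> float:
    """Local writes plus the replica streams this drive absorbs from peers.

    Assumes the peer hosts write at roughly the same rate as this host.
    """
    return local_writes_mb_s * (1 + replicas_hosted)

print(drive_write_traffic(100, 1))  # 2-way replication: ~200 MB/s on the drive
print(drive_write_traffic(100, 2))  # 3-way replication: ~300 MB/s on the drive
```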

Traditional separate networked storage places a lighter load on already tight server network bandwidth. Overall, the most effective model for balancing server, storage and network bandwidth appears to be a local solid-state drive (SSD) for instance storage and an Ethernet-connected networked storage farm for persistent data.

The usual challenges of data integrity also plague the local storage approach. If the instance storage is used for holding data for a significant length of time, it should be protected from crashes. This means a mirrored configuration is needed. The alternative is a "lazy write" to networked storage, which may leave a window where the only copy of the data is in the instance storage.
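The following is a minimal sketch of the "lazy write" approach described above: data lands on the local instance store first and a background task copies it to networked storage later. The mount paths and flush interval are hypothetical examples; anything written between flushes exists only on the local drive, which is exactly the vulnerability window in question.

```python
# Minimal "lazy write" flusher sketch. Paths and the interval are
# hypothetical; until a flush completes, the only copy of new data
# lives on the local instance store.

import shutil
import time
from pathlib import Path

LOCAL_STORE = Path("/mnt/instance-store/data")     # assumed local SSD mount
NETWORK_STORE = Path("/mnt/nfs/persistent/data")   # assumed networked mount
FLUSH_INTERVAL_S = 30

def flush_once() -> None:
    """Copy files that changed since the last flush to networked storage."""
    for src in LOCAL_STORE.rglob("*"):
        if src.is_file():
            dst = NETWORK_STORE / src.relative_to(LOCAL_STORE)
            dst.parent.mkdir(parents=True, exist_ok=True)
            if not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime:
                shutil.copy2(src, dst)

if __name__ == "__main__":
    while True:
        flush_once()
        time.sleep(FLUSH_INTERVAL_S)   # data written in this window is at risk
```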

Cloud and security concerns for local storage

The key to understanding whether we have instance storage right is to look at what happens when instances are killed off, either deliberately or when a failure occurs. The original premise of the cloud was that all servers were stateless, so a restart could be achieved quickly using another available instance and the image on the network.

Instance storage makes the instance stateful and this brings complications. First, guaranteeing data availability requires the data to be copied to a networked storage device. The next issue is the disposition of the data stored on the instance.

The instance store may hold gigabytes of readable data, and when an instance is migrated or restarted on another host, the user loses control of the data left on the previous host's local storage.

A good orchestration tool has to prevent access to the data by a new tenant of the server. One solution is to erase the space by overwriting it, but this is time-consuming and eats into the drive's IOPS and bandwidth.
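As a rough illustration of what an overwrite-based scrub involves, here is a zero-fill sketch. The device path is a hypothetical example, and on real hardware a full pass like this monopolizes the drive's write bandwidth for its duration:

```python
# Sketch of scrubbing an instance-store volume by overwriting it with
# zeros before it is handed to a new tenant. The device path is an
# example only; the write pass consumes the drive's bandwidth.

import os

DEVICE = "/dev/sdb"          # assumed instance-store device, example only
CHUNK = 4 * 1024 * 1024      # write in 4 MiB chunks

def zero_fill(path: str) -> None:
    zeros = bytes(CHUNK)
    with open(path, "r+b") as dev:
        size = dev.seek(0, os.SEEK_END)   # total size of the device/file
        dev.seek(0)
        written = 0
        while written < size:
            written += dev.write(zeros[: min(CHUNK, size - written)])
        dev.flush()
        os.fsync(dev.fileno())

# zero_fill(DEVICE)  # destructive: run only on volumes being recycled
```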

Unfortunately, the simple overwrite approach doesn't work with SSDs. On those devices, deletion just moves blocks into a spare pool, and overwriting is no better: when a block is written on an SSD, the new data goes into one of the spare blocks, which is relabeled with the logical block address, while the old block is added to the spares list without its contents ever being erased.

When the old blocks are given to a new instance, they can still contain data from a previous tenant.
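A toy model of an SSD flash translation layer makes the point: a logical overwrite is redirected to a spare physical block, and the old physical block simply rejoins the spare pool with the previous tenant's bytes intact. This is a simplification for illustration, not a model of any specific drive.

```python
# Toy flash translation layer (FTL): logical writes are remapped to
# spare physical blocks, so "overwritten" data persists in flash.

class ToyFTL:
    def __init__(self, physical_blocks: int):
        self.physical = [b""] * physical_blocks   # raw flash contents
        self.mapping: dict[int, int] = {}         # logical -> physical block
        self.spares = list(range(physical_blocks))

    def write(self, logical: int, data: bytes) -> None:
        new_phys = self.spares.pop(0)             # take a spare block
        self.physical[new_phys] = data
        old_phys = self.mapping.get(logical)
        self.mapping[logical] = new_phys
        if old_phys is not None:
            self.spares.append(old_phys)          # old data is NOT erased

ftl = ToyFTL(physical_blocks=4)
ftl.write(0, b"tenant A secret")
ftl.write(0, b"tenant B data")        # logical overwrite of block 0
print(ftl.physical)                   # tenant A's bytes still sit in flash
```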

Another method is to diligently manage read operations, preventing them from reading blocks that haven't yet been written, as Azure does. This should prevent access to the old data, but it could go wrong if, for example, the instance classes on the server are changed.
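The sketch below shows the general idea of read gating: track which blocks the current tenant has written and return zeros for everything else. It is a simplified illustration of the technique, not Azure's actual implementation.

```python
# Sketch of read gating for recycled blocks: a per-block bitmap records
# what the current tenant has written; reads of unwritten blocks return
# zeros so stale data from a prior tenant is never surfaced.

class GatedVolume:
    def __init__(self, backing: list[bytes], block_count: int):
        self.backing = backing                   # may hold stale data
        self.written = [False] * block_count     # per-block "written" bitmap

    def write(self, block: int, data: bytes) -> None:
        self.backing[block] = data
        self.written[block] = True

    def read(self, block: int) -> bytes:
        if not self.written[block]:
            return b"\x00" * 512                 # never expose old contents
        return self.backing[block]

stale = [b"previous tenant" for _ in range(8)]   # leftover drive contents
vol = GatedVolume(stale, block_count=8)
print(vol.read(3))     # zeros: the new tenant never sees the old bytes
vol.write(3, b"new tenant data")
print(vol.read(3))     # the tenant's own data
```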

Containers bring a new twist to the problem. With the consolidation advantages of containers, the instance count per server may double, increasing the load on networked storage and making local storage even more attractive. The host operating system has to properly control the deletion of instance stores and protect the privacy of any stored data.

This is a complex and serious issue. My advice is to ask your cloud provider how they handle it, since it carries the potential for a HIPAA or SOX compliance violation.

Next Steps

Will local storage catch on for virtualization?

How to improve local storage performance on virtualized servers

Local data storage or SAN?

This was last published in May 2015
