
Adding storage capacity can actually hurt IOPS

More storage capacity doesn't always mean better performance. In fact, adding capacity with fewer, larger drives can reduce IOPS, a trade-off many IT pros don't consider.

A number of emerging trends and technologies will reshape the landscape for enterprise storage. However, IT pros thinking of adding storage capacity have to be careful how they apply these new products, or they risk creating new performance problems.

In previous years, the challenge was keeping up with the growing demand for more centralized storage capacity as IT moved large amounts of data from local hard drives.

Data growth continues to be a struggle, and organizations are beginning to outgrow that first storage array they bought a few years ago. As they do, some are in for a big surprise. For years, the focus has been on adding storage capacity. In fact, the development of a storage strategy is still referred to as a "sizing" exercise. Today, however, the challenge is accessing that huge amount of data in an acceptable amount of time, and the size or capacity of a drive has little or no correlation to its performance. An IT administrator who focuses too narrowly on adding storage capacity can end up with an array that can hold all the data, but can't support the IOPS demanded by applications.

If you are considering a storage upgrade, it is critical that you understand how this can impact your organization. Let's assume that you have a three- to five-year-old storage array and your organization is developing plans for a replacement. If you approach this with the mindset of, "I need a new 100 TB storage array to replace my old 100 TB storage array," you have just fallen into the capacity trap. 

There are many different storage options available. Take, for example, 7,200 rpm SATA drives. These drives each support about 70 IOPS. Three years ago, a 100 TB array likely would have used 200 drives at 500 GB each. Now, you can build the same array with just 50 drives at 2 TB each. This saves power, cooling and rack space, making it much more attractive.


But it's not that simple. The older solution, with 200 drives that each provided 70 IOPS, had a total performance rating of 14,000 IOPS. However, the new array, with only 50 drives, comes in at about 3,500 IOPS. In this case, the older solution actually delivered four times the IOPS of the new one.
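The arithmetic behind that comparison is simple enough to sketch. Using the article's per-drive rating of roughly 70 IOPS for a 7,200 rpm SATA drive:

```python
# Compare total IOPS of two 100 TB array designs built from the
# same 7,200 rpm SATA drive class (~70 IOPS per drive, per the article).
IOPS_PER_DRIVE = 70

old_array_drives = 200  # 200 drives at 500 GB each = 100 TB
new_array_drives = 50   # 50 drives at 2 TB each = 100 TB

old_iops = old_array_drives * IOPS_PER_DRIVE
new_iops = new_array_drives * IOPS_PER_DRIVE

print(old_iops, new_iops, old_iops / new_iops)  # 14000 3500 4.0
```

Same capacity, one quarter of the spindles, one quarter of the IOPS.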

If you are only taking into account the capacity of your storage, you are headed toward a cliff. Luckily, you can easily avoid this trap when adding storage.

Always size for both capacity and performance and be aware that it may not be possible to balance the two perfectly. To meet performance needs, you may have to purchase more capacity than you need.
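Sizing for both axes can be expressed as taking the larger of two drive counts: one driven by capacity, one driven by IOPS. The helper below is a hypothetical illustration, not a vendor tool; the drive ratings are the article's example numbers.

```python
import math

def drives_needed(capacity_tb, required_iops, drive_tb, drive_iops):
    """Size an array on both axes and take the larger drive count.

    Hypothetical sizing sketch: drive_tb and drive_iops are the
    per-drive capacity and performance ratings.
    """
    for_capacity = math.ceil(capacity_tb / drive_tb)
    for_performance = math.ceil(required_iops / drive_iops)
    return max(for_capacity, for_performance)

# 100 TB at 14,000 IOPS on 2 TB / 70 IOPS SATA drives:
# capacity alone needs only 50 drives, but performance needs 200.
print(drives_needed(100, 14000, 2, 70))  # 200
```

Whenever the performance-driven count wins, the array ends up with more capacity than strictly required, which is exactly the trade-off described above.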

Be aware of new tools and technologies you can leverage to help keep capacity and performance variables closer to the desired goal. Storage tiering is the practice of mixing multiple drive types in one storage pool. You could, for example, mix small-capacity, high-performance drives with larger but slower drives. You can tailor this mix to provide a storage environment that meets both performance and capacity goals without severely oversizing either one.
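A tiered pool can be sized with the same two-axis logic. The sketch below uses hypothetical per-drive ratings (a small, fast SSD-class drive and the article's 2 TB SATA drive) and a deliberately simple greedy rule: cover the IOPS target with the fast tier, then fill the remaining capacity with the slow tier. It conservatively ignores the IOPS the slow drives also contribute.

```python
import math

# Hypothetical per-drive ratings, for illustration only.
FAST_TB, FAST_IOPS = 0.4, 10000  # small SSD-class drive
SLOW_TB, SLOW_IOPS = 2.0, 70     # 7,200 rpm SATA drive

def tier_mix(capacity_tb, required_iops):
    """Greedy tiering sketch: size the fast tier for IOPS,
    then size the slow tier for whatever capacity remains."""
    fast = math.ceil(required_iops / FAST_IOPS)
    remaining_tb = max(0, capacity_tb - fast * FAST_TB)
    slow = math.ceil(remaining_tb / SLOW_TB)
    return fast, slow

# 100 TB at 14,000 IOPS: 2 fast drives cover the IOPS goal,
# 50 slow drives cover the capacity, 52 drives total vs. 200 all-SATA.
print(tier_mix(100, 14000))  # (2, 50)
```

The point of the sketch is the shape of the trade-off, not the exact counts: a small fast tier absorbs the performance requirement so the capacity tier no longer has to be oversized.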

Regardless of the drive technologies or storage features you deploy, you need to understand that a storage product has to meet both capacity and performance goals. You need to know those goals for your organization, and you need to develop a plan that considers both.
