Application performance varies depending on a system administrator's decisions. When it comes to network, I/O, memory and CPU, there are choices to be made depending on what you're looking for. This is the second part of a two-part series about factors that affect virtualized application performance.
Networking is the bridge that connects our applications to each other and ultimately to us. It is a necessary piece, but where does it fit in the application profile? We know that networking is critical for our infrastructure; however, infrastructure needs are not exactly the same as application needs. One of the most common speeds in the data center today is the 1 Gigabit Ethernet (1 GbE) connection, and outside of over-the-network backups, it can be hard to find applications that use it all. Part of this is due to other limiting factors in the hardware, such as the hard drives. The other is that most applications are simply designed to communicate efficiently.
When virtualizing 30 standardized servers that each have an existing 1 Gbps link, the raw math suggests 30 Gbps of bandwidth is needed. However, if each of those servers only uses 100 megabits per second, that 30 Gbps requirement suddenly drops to 3 Gbps. While not all applications will have that level of use, it is likely that your application will run into another constraint before networking. With networking speeds beyond 10 Gbps and moving into the 25 Gbps and 100 Gbps range, this is one ceiling that most applications will never hit. With regard to software-defined networking in the application space, the application server itself should have no knowledge of the networking infrastructure below the virtualized "physical layer." So the question of using SDN is a business and infrastructure decision. Will these technologies be the future? Most likely yes, but the timing of your adoption will be the key: leading edge or bleeding edge.
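The consolidation math above can be sketched in a few lines. The 30-server and 100 Mbps figures are the illustrative numbers from the example, not a measured workload; real traffic is bursty, so peak utilization should be measured before sizing uplinks:

```python
# Back-of-the-envelope consolidation math for network bandwidth.
# Assumes uniform average per-server usage, which is a simplification.

def aggregate_bandwidth_gbps(servers: int, avg_usage_mbps: float) -> float:
    """Combined average bandwidth, in Gbps, for a group of servers."""
    return servers * avg_usage_mbps / 1000.0

# 30 servers, each with a 1 GbE link, each averaging only 100 Mbps
provisioned_gbps = 30 * 1.0
observed_gbps = aggregate_bandwidth_gbps(30, 100)

print(f"Provisioned: {provisioned_gbps:.0f} Gbps, observed: {observed_gbps:.0f} Gbps")
# Provisioned: 30 Gbps, observed: 3 Gbps
```

The gap between the provisioned and observed numbers is why another resource, not the network, is usually the first constraint an application hits.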
Managing IOPS to reduce bottlenecks
I/O is the real wild card of the collection. Storage has experienced some of the most radical changes in the data center of any of the four resources. In the traditional server, storage was typically the bottleneck because it was mechanical in nature. Unlike CPU or memory, storage had moving parts, which limited its performance. Once you add multiple VMs onto the same storage system, the bottlenecks become even more apparent. However, larger RAID groups, meta-LUNs and solid-state disks (SSDs) have continued to increase the number of IOPS you can access. Add in converged infrastructure, where the storage is brought into the same frame as the servers, and the number of IOPS increases further. But storage is unique among the categories in that newer technology doesn't simply replace the older ones. SSD has not done away with spinning media; while SSD performance is much higher and its capacity keeps growing, the price difference is still too great to be ignored.
This often leads to a hybrid approach of both spinning media and SSD. With the price differences between disk types, it is no longer a performance question but a cost question. Asking an application owner about performance or capacity will typically yield the same answer: SSD. Seeking the vendor's recommendations should be a bit more realistic, but those can still be more theoretical than practical. Monitoring the application's I/O is the only true measure, because it reflects the application operating in your environment with your users. Of course, monitoring after the fact doesn't really help you in purchasing now -- except for storage. Storage is unique among the four because it's normally the easiest to modify and expand. Adding disk to most storage frames is a nondisruptive event, and using multiple classes of storage is supported by most storage systems.
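As a rough illustration of what monitoring application I/O involves, the sketch below estimates average IOPS from two samples of cumulative read/write counters, such as those exposed by /proc/diskstats on Linux or a storage array's statistics interface. The counter values here are hypothetical:

```python
# A minimal sketch of IOPS estimation from cumulative I/O counters.
# Counters only ever increase, so the delta over a sampling interval
# gives the number of operations completed in that window.

def estimate_iops(reads_start: int, writes_start: int,
                  reads_end: int, writes_end: int,
                  interval_s: float) -> float:
    """Average read+write operations per second over the interval."""
    ops = (reads_end - reads_start) + (writes_end - writes_start)
    return ops / interval_s

# Hypothetical counter samples taken 60 seconds apart
iops = estimate_iops(reads_start=1_200_000, writes_start=800_000,
                     reads_end=1_260_000, writes_end=830_000,
                     interval_s=60)
print(f"Average IOPS: {iops:.0f}")
# Average IOPS: 1500
```

Sampling like this over days or weeks, rather than a single interval, is what reveals the peaks that actually drive a tiering decision.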
The ability to move virtual workloads between different classes of storage without interruption makes one of the most difficult issues to gauge in a virtual environment one of the easiest to work with. Capacity is fairly cut and dried, and taking advantage of thin provisioning can help your efforts. Using the vendor's guidelines, you can get an idea of a starting point. From there you can grow your storage depending on the application profiles. Not purchasing everything at once gives you more freedom to see where your pain points will be and adjust accordingly. Since storage is often a shared environment, you also gain insight into how different applications will interact with each other. This can help you avoid common pitfalls such as VDI boot storms or backup contention.
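To make the thin-provisioning point concrete, here is a minimal sketch of the headroom arithmetic involved: how much capacity has been promised to volumes versus what is physically consumed. The capacity figures are purely illustrative:

```python
# Illustrative thin-provisioning arithmetic: promised vs. physical capacity.
# Numbers are made up for the example, not vendor sizing guidance.

def oversubscription_ratio(provisioned_tb: float, physical_tb: float) -> float:
    """How many times over the physical capacity has been promised."""
    return provisioned_tb / physical_tb

provisioned = 40.0   # TB promised across all thin-provisioned volumes
consumed = 12.0      # TB actually written by the applications
physical = 20.0      # TB of usable capacity in the array

print(f"Oversubscription: {oversubscription_ratio(provisioned, physical):.1f}x")
print(f"Physical free: {physical - consumed:.0f} TB")
# Oversubscription: 2.0x
# Physical free: 8 TB
```

Tracking the consumed figure over time is what tells you when the next nondisruptive disk purchase is actually due, rather than buying everything up front.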
Having insight into your applications is the best way to know how to grow your virtual environment. Virtualization has many features and can have an amazing impact on your infrastructure and organization. However, don't forget that while virtualization may get all of the attention, it's the applications and how we support them with virtualization that is the real key.