It takes months of hard work to design, install and configure a virtual environment, but that labor can go to waste if you don't establish system benchmarking practices.
When a problem pops up, the first question will be, "What has changed since yesterday?" While the average person might say, "Nothing," an IT administrator needs to be a detective and identify the source of the problem. Fortunately, admins have a few methods for getting these answers.
Make time for system benchmarking
How do you know something is broken if you don't know how it's supposed to work in the first place? This is the challenge many IT professionals face as they look at the data center. When something fails completely, you can proceed to the root-cause analysis, but that's only a reaction.
In today's business IT, it's necessary to know what's going on with our data centers at all times. Waiting until after something has gone wrong isn't advisable. Outages and performance issues can become costly blunders, as well as public relations problems.
To stay ahead of these potential disasters, an IT team needs to be aware of how things are currently working. That information needs to be compared with how systems are supposed to work when all things are normal, which is also a challenge.
All of this requires time and effort. Most of the main virtualization vendors have some level of monitoring built in. Their capabilities can be limited in scope, but are often good enough to get you started.
However, any monitoring tool that records data does little good if no one is looking at it. Even if you do look, the information still might not be helpful if you don't have a clear understanding of normal behavior.
For any benchmark, you need to know several key pieces of information, how they interact and how you can simplify the process.
- Who: It might sound straightforward, but the relationships between VMs can be complex. Be sure to document the dependencies; they are critical to understanding and creating a benchmark for the whole application, not just a single server.
You have to remember that the end user cares about the presented application, not a server in the stack. This means you have to look at your benchmarking for performance on a macro level rather than simply focusing on an individual server.
Many traditional tools and thought processes may be left in the cold as the applications become more distributed. This doesn't mean the single server tools are unimportant, just that they're secondary options in troubleshooting.
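To make those dependencies concrete, the mapping can live in something as simple as a dictionary. A minimal sketch, with hypothetical VM names standing in for a three-tier application:

```python
# Sketch: recording VM dependencies so a benchmark covers the whole
# application, not just one server. All VM names are hypothetical.

# Map each VM to the VMs it depends on (a made-up three-tier app).
dependencies = {
    "web-01": ["app-01", "app-02"],
    "app-01": ["db-01"],
    "app-02": ["db-01"],
    "db-01": [],
}

def full_stack(vm, deps):
    """Return every VM that must be healthy for `vm` to serve users."""
    seen = set()
    stack = [vm]
    while stack:
        current = stack.pop()
        if current not in seen:
            seen.add(current)
            stack.extend(deps.get(current, []))
    return seen

# Benchmarking "web-01" really means watching all four VMs.
print(sorted(full_stack("web-01", dependencies)))
# prints ['app-01', 'app-02', 'db-01', 'web-01']
```

Even a simple map like this makes it obvious which servers belong inside the benchmark's scope before any numbers are collected.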
- What: You need to ask yourself just what you are trying to benchmark -- is it I/O or CPU performance? Most tool sets can find these numbers easily, but they may not reveal what's really going on.
One challenge system administrators often face is an exclusive focus on the server and its individual metrics. When applications ran on a single server, this made sense. But today's applications can span hundreds of servers, and the performance of any single one may have little bearing on the application as a whole.
This raises an interesting question: Shouldn't we focus primarily on application performance and turn to server benchmarking only when we have to do a deep-dive analysis? It makes sense, except that many of today's tools still look at the server stack rather than the application stack.
Making that jump from the server stats to the application response time may seem foreign to many traditional server administrators. With today's distributed applications, however, it's necessary, as we look at things from the customer viewpoint rather than the server viewpoint.
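A customer-viewpoint benchmark can start with something as simple as timing the application's own responses. A minimal sketch, assuming a hypothetical health endpoint (the URL is a placeholder, not a real service):

```python
# Sketch: timing the application's response rather than reading server
# counters. The endpoint URL below is a placeholder.
import statistics
import time
import urllib.request

def measure_response_ms(url, samples=5):
    """Time several requests and return the observed latencies in ms."""
    latencies = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()
        latencies.append((time.perf_counter() - start) * 1000)
    return latencies

def summarize(latencies):
    """Reduce raw samples to the numbers worth trending over time."""
    return {"median_ms": statistics.median(latencies),
            "max_ms": max(latencies)}

# Example against a placeholder endpoint:
# print(summarize(measure_response_ms("https://app.example.com/health")))
```

Recording the median and worst case on a schedule, rather than one-off readings, is what turns a probe like this into a benchmark.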
- Why: This is a tricky one when you dig deeper into it. On the surface, system benchmarking is done to ensure application performance and stability. In reality, we do it to avoid problems before they become outages. We look for needles in a haystack to find out why something has latency or increased I/O while trying to filter out the spikes in usage and growth that occur naturally.
Additionally, the data doesn't help if you can't correlate it to anything outside the system. Often, system administrators see the results of an action and have to look for the cause. Was a performance issue caused by an unexpected training class load, a spike in activity or some type of failure?
With this in mind, keep an eye on the events in the application area. To get an accurate benchmark on an application, you'll need to know how it's being used.
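One lightweight way to do that correlation is to keep a log of known outside events and check it whenever a sample looks abnormal. A rough sketch, with invented timestamps, latencies and threshold:

```python
# Sketch: correlating a latency spike with events outside the system.
# All events, samples and thresholds here are made up for illustration.
from datetime import datetime

events = [
    (datetime(2017, 3, 6, 9, 0), "new-hire training class begins"),
    (datetime(2017, 3, 6, 14, 0), "batch import from finance"),
]

samples = [  # (timestamp, application latency in ms)
    (datetime(2017, 3, 6, 8, 30), 120),
    (datetime(2017, 3, 6, 9, 15), 480),
    (datetime(2017, 3, 6, 10, 0), 130),
]

def nearby_events(ts, events, window_minutes=60):
    """Return events that happened within the window before a sample."""
    return [label for when, label in events
            if 0 <= (ts - when).total_seconds() <= window_minutes * 60]

for ts, latency in samples:
    if latency > 300:  # hypothetical "abnormal" threshold
        print(ts, latency, "possible causes:", nearby_events(ts, events))
```

Here the 480 ms spike lines up with the training class, which is exactly the kind of non-failure explanation a benchmark needs to account for.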
- When: One of the challenges in system benchmarking is determining when normal usage occurs. Depending on the application, it could see spikes in usage over days, weekends, ends of months and other points in traditional business cycles.
This is where tracking trends becomes so important. By graphing trends, administrators can see patterns in data cycles. While many tools have the ability to display patterns and even look for trends, they can be costly.
With a little time and some spreadsheet work, you can create your own graphs and pivot tables to spot and monitor trends. It's not ideal, but it could help justify the cost of more advanced tools that can do trending for you.
A key thing to remember is that no one likes surprises in the environment. Knowing you'll run out of a critical resource months ahead of time -- instead of days before -- is a game changer.
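A back-of-the-envelope projection like this can come straight out of that spreadsheet data. A sketch with made-up disk-usage figures, fitting a straight line to monthly samples and estimating the months of runway left:

```python
# Sketch: a DIY trend check in the spirit of the spreadsheet approach --
# fit a line to monthly disk usage and estimate when capacity runs out.
# The usage figures and capacity are invented for illustration.

def linear_fit(ys):
    """Least-squares slope and intercept for evenly spaced samples."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def months_until_full(usage_gb, capacity_gb):
    """Months from the latest sample until projected exhaustion."""
    slope, intercept = linear_fit(usage_gb)
    if slope <= 0:
        return None  # usage flat or shrinking; no projected exhaustion
    return (capacity_gb - intercept) / slope - (len(usage_gb) - 1)

# Six months of datastore usage against a 2,000 GB capacity (made up).
usage = [800, 880, 950, 1030, 1100, 1180]
print(round(months_until_full(usage, 2000), 1))
# prints 10.9
```

Knowing the datastore has roughly eleven months of headroom, rather than discovering it full on a Friday, is the payoff the paragraph above describes.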
We have grown beyond the single-server installation, and system benchmarking should be approached from the application stack downwards. This doesn't mean getting rid of traditional server monitoring and benchmarking; an additional layer has simply been added on top of what was already there. One advantage of this approach is seeing things more from the client side, which should give administrators more insight and make them more responsive to potential crises.