Best practices in systems automation
Aligned with the scenarios discussed earlier, several elements of systems automation become much easier after a move to virtualization. Because virtualization platforms expose common interfaces for scripting and programmatic automation, scripts that work across the entire enterprise are easy and convenient to create.
Virtualization also enhances the ability to empower individual server users to perform many tasks without IT administrator involvement. Adding user self-management to typical server management shifts many mundane tasks from the systems administrator back to the service owner. Going a step further, it may be possible to apply per-use accounting and chargeback models to the environment, ensuring that costly virtualization equipment is paid for by the entities that use it.
No discussion of automated provisioning is complete without linking the provisioning activity to server templates. The concept of server templates or "golden images" has been around in IT since the introduction of rapid deployment tools such as Norton Ghost and ImageCast many years ago. In fact, much of the common knowledge associated with templatizing servers has developed initially through physical direct-to-hardware rapid deployments.
With virtualization, the process of templatizing servers is in many ways no different than doing so for an all-physical environment. Typically, the process is completed with the following steps:
- Create a virtual machine and install a desired OS to the instance.
- Install necessary applications, patches, and configurations. Ensure that any installed code is code that should be common to all systems deployed from this image. Code that is custom to a particular server should typically be installed separately from the image.
- Generalize the server to a template. This process eliminates any name or networking references to already-present servers. This is done to prevent conflicts when the template server is later powered on for provisioning.
- (Optionally) Install an automation component to the server, such as Sysprep for Microsoft Windows machines. This automation component will personalize the server immediately after its first post-deployment boot.
- Copy the template to a template location and secure it against inappropriate changes.
- Later, as desired, deploy the template using either manual or automated methods from within the virtualization interface. These automated mechanisms typically will copy the template from a secured location to a server of lowest current load, boot the template, and begin the personalization process.
- (Optionally) Install any custom applications and/or OS customizations to the server to enable it for serving in the desired capacity.
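As a sketch, the generalize-then-deploy steps above might look like the following in script form. All names here are hypothetical, and the virtual machine is modeled as a plain dictionary rather than a real hypervisor object; an actual implementation would call your virtualization platform's API.

```python
import copy

def generalize(vm):
    """Strip name and networking identity so the template cannot
    conflict with already-present servers when later powered on."""
    template = copy.deepcopy(vm)
    template["hostname"] = None
    template["ip_address"] = None
    template["read_only"] = True  # secure against inappropriate changes
    return template

def deploy(template, hostname, ip_address):
    """Copy the template, then personalize it on first boot
    (the Sysprep-like automation step)."""
    vm = copy.deepcopy(template)
    vm["read_only"] = False
    vm["hostname"] = hostname
    vm["ip_address"] = ip_address
    vm["powered_on"] = True
    return vm

# Build the "golden image" from a configured reference machine...
golden = generalize({"hostname": "build01", "ip_address": "10.0.0.5",
                     "apps": ["base-agent", "av-client"]})

# ...then stamp out a personalized copy on demand.
web1 = deploy(golden, hostname="web1", ip_address="10.0.0.21")
```

Note that the template itself is never modified during deployment; each deployed server is a fresh copy that inherits the common applications but receives its own identity.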
Depending on the virtualization solution chosen, some or all of these processes can be automated. With VMware Virtual Infrastructure, an add-on tool called VMware Lab Manager can be used to automate the process, and the Virtual Infrastructure interface itself includes limited native capabilities for doing so. For Parallels Virtuozzo Containers, many of these customization and templatizing tools are built into the management interface, making the process relatively trivial. As stated before, the Virtuozzo Containers interface also includes application packaging tools to ease the administrative burden of applying post-deployment application customizations.
The administrative activities associated with server management have traditionally been left to the trusted systems administrators. Tasks such as patching, powering on and shutting down, application installation and management, and the creation of new server instances could only be done by a systems administrator. This bottleneck tended to cause the rollout of needed services to slip schedule when administrators were tied up with other activities. When IT organizations didn't make use of automation, these processes consumed even more time.
One component of virtualization management is the capability of assigning server resources to individuals other than the systems administrator. Depending on the virtualization architecture chosen, this process may be subtly different, but the end result is that people outside the typical systems administrator role can be delegated certain server administration responsibilities. This is possible because virtualization grants console access over the network, combined with the granularity of the management interface's permissions.
Consider the following situation: A business decides to purchase a virtualization solution, co-purchased by both the IT department and a code development team. In a traditional all-physical environment, standing up and configuring the environment would typically be done by systems administrators, and the developers would need to wait for the environment to be completed before they could begin their work.
With virtualization and automation components built into the interface, it is possible to assign certain levels of processor, RAM, and other server resources into a pool usable by the developer team. Thus, if in our example the developer team paid for half of the environment, they can be assigned 50% of the available resources to do with as they will. If they want to create one "big" machine out of their resources, they can. If they want to create dozens of "little" machines, that is similarly possible. Their resources are theirs to use as they see fit.
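A minimal sketch of this delegation model follows. The pool sizes and the `ResourcePool` class are illustrative assumptions, not any product's actual object model; the point is that the group carves its own share however it likes, with no administrator in the loop.

```python
class ResourcePool:
    """Hypothetical delegated pool: a group may carve its share into
    any mix of virtual machines, without administrator involvement."""

    def __init__(self, cpu_ghz, ram_gb):
        self.free_cpu = cpu_ghz
        self.free_ram = ram_gb
        self.vms = []

    def create_vm(self, name, cpu_ghz, ram_gb):
        # Refuse requests that exceed the group's remaining share.
        if cpu_ghz > self.free_cpu or ram_gb > self.free_ram:
            raise ValueError("request exceeds the pool's remaining resources")
        self.free_cpu -= cpu_ghz
        self.free_ram -= ram_gb
        self.vms.append(name)

# The developers paid for half of a (hypothetical) 32 GHz / 128 GB host.
dev_pool = ResourcePool(cpu_ghz=16, ram_gb=64)

# One "big" machine consuming the whole share...
dev_pool.create_vm("big", cpu_ghz=16, ram_gb=64)

# ...or, equivalently, many "little" ones from the same-sized share.
little_pool = ResourcePool(cpu_ghz=16, ram_gb=64)
for i in range(8):
    little_pool.create_vm(f"little-{i}", cpu_ghz=2, ram_gb=8)
```

Either way, the group's total consumption can never exceed the 50% it was assigned, which is what makes the delegation safe for the administrator to grant.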
Figure 3.2: With resource delegation, resources can be assigned to different groups. Those groups can be given the rights to use their resources in whatever way they see fit.
It is further possible to couple this delegation ability with granular rights management. Continuing our earlier example, the virtualization environment can make use of server templates along with an automated management interface. Adding in permissions and delegation controls, it is now operationally possible for the systems administrator to delegate many server creation, startup and shutdown, and even packaged application management duties to the developer group. The administrator is no longer the bottleneck to necessary data center operations. New servers and services can be brought online through the virtualization solution's management interface.
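The granular rights management described above can be reduced to a simple role-to-privilege mapping. The role names and privilege strings below are purely illustrative, not any product's actual permission model, but they show how developers can hold the delegated duties while host-level control stays with the administrator.

```python
# Hypothetical rights map for the delegation scenario above.
ROLES = {
    "administrator": {"create_vm", "power", "install_apps",
                      "configure_host", "delegate_rights"},
    "developer":     {"create_vm", "power", "install_apps"},
}

def allowed(role, privilege):
    """Return True if the role carries the requested privilege."""
    return privilege in ROLES.get(role, set())
```

Under this mapping, a developer can create, power-cycle, and install packaged applications on their own servers, while host configuration and further delegation remain administrator-only actions.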
All this enables a sense of self-management for non-administrative users. The systems administrators can retain the ability to manage the environment as a whole. But their involvement is no longer necessary for the daily mundane tasking of basic server administration.
|Virtualization enables the offloading of much of this work to other individuals. Thus, highly talented and highly paid administrators can focus on strategic initiatives rather than basic tasks.|
Usage accounting and chargebacks
Lastly, this entire example can be taken a final step by adding usage-based accounting to managed resources in the virtualization environment. We've already shown how it is possible to abstract server resources into buckets of resources. Those resource buckets can then be assigned to individuals or groups based on their needs. Through granular permissioning, non-administrators can be granted specific rights and privileges to manipulate virtual machines within their assigned quantity of resources. This concept of workload management allows non-administrators to easily provision new resources on the fly with a much-reduced need for administrator involvement.
Once computing resources are abstracted away from individual machines, it is possible to assign a dollar value to those resources. That value can be used on a per-use model to ensure that the people using resources within the virtualization environment are properly paying for their use. Chargebacks are a mechanism to "charge back" to the user a fee associated with the use of the resource. This is commonly done in hosting environments in which desktops or servers are hosted by one entity for another. But it can also be done within corporate environments in which different organizations have different budgets and needs for computer resources.
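A chargeback computation can be as simple as a rate card applied to metered usage. The resource names and dollar rates below are invented for illustration; a real deployment would pull metered figures from the virtualization platform's accounting interface.

```python
# Hypothetical rate card: dollars per metered unit of each resource.
RATES = {
    "cpu_ghz_hour": 0.04,
    "ram_gb_hour":  0.01,
    "disk_gb_month": 0.10,
}

def chargeback(usage):
    """Sum metered usage (resource name -> quantity consumed)
    against the rate card, rounded to whole cents."""
    return round(sum(RATES[resource] * qty
                     for resource, qty in usage.items()), 2)

# One month of a small VM: 1 GHz and 4 GB RAM, running 24x30 hours.
dev_bill = chargeback({"cpu_ghz_hour": 720, "ram_gb_hour": 2880})
```

The same function works whether the "customer" is an external hosting client or an internal department with its own budget; only the rate card and the billing relationship differ.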
Different virtualization solutions enable usage-based accounting and chargebacks in different ways. Depending on how much you need this functionality, the way a given interface implements these features may be a compelling reason to choose one virtualization solution over another.
Incorporating a best practices approach to virtualization implementation ensures maximum RoV
As we've seen in this chapter, there are several common best practices associated with the move to virtualization. Specific to the implementation, different solutions can provide different benefits to the business. Finding which solution works best for the type of implementation planned by your business is critical to choosing the right virtualization solution.
In the next chapter, our last, we'll conclude our discussion on best practices. There, we'll focus on the management of virtual servers and their physical hosts. You'll see that there are a number of ways that virtualization management can further assist with ensuring a maximum return on your virtualization investment.
About the author: Greg Shields is an independent writer, speaker and IT consultant based in Denver. With more than 10 years of experience in information technology, Greg has developed extensive experience in systems administration, engineering and architecture, specializing in Microsoft, Citrix and VMware technologies. He is a contributing editor for both Redmond magazine and Microsoft Certified Professional magazine, authoring two regular columns along with numerous feature articles, webcasts and white papers. He is also a highly sought-after instructor and speaker, teaching system and network troubleshooting curricula for TechMentor Events, a twice-annual IT conference, and producing computer-based training curriculum for CBT Nuggets on numerous topics. Greg is a triple Microsoft Certified Systems Engineer (MCSE) with security specialization and a Certified Citrix Enterprise Administrator (CCEA). He is also the leader of the Realtime Windows Server Community.