As explained in part 1 of this series, IT admins buying server hardware have complex decisions to make, and today's virtualized environments only complicate those decisions further.
In a scale-out environment, each node has fewer resources than a scale-up system would. As such, scale-out environments generally suit workloads designed to harness compute resources across hardware boundaries and use them as a cohesive whole. Consider a major analytics application, for example. It requires far more resources than a single system could provide, but in a scale-out environment it can draw on all of the hardware resources dedicated to it. This approach may require new ways of thinking about application design, and it limits the potential for legacy applications to be moved into scaled-out environments. In addition, deploying hypervisor software across a scaled-out architecture may be cost-prohibitive from a licensing perspective.
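The scatter-gather pattern described above can be sketched in a few lines of Python. This is an illustrative toy, not any particular analytics product: worker processes stand in for nodes, each computes a partial result over its chunk of the data, and the partials are combined at the end.

```python
from multiprocessing import Pool

def analyze_chunk(chunk):
    """Each 'node' (here, a worker process) computes a partial result."""
    return sum(chunk)

def scatter(data, nodes):
    """Split the dataset into roughly equal chunks, one per node."""
    size = -(-len(data) // nodes)  # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = scatter(data, nodes=4)
    with Pool(processes=4) as pool:
        partials = pool.map(analyze_chunk, chunks)  # scatter phase
    total = sum(partials)                           # gather phase
    print(total)
```

Real frameworks add scheduling, data locality and fault tolerance on top of this skeleton, which is exactly why legacy applications written for a single large system rarely move to scale-out without redesign.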
Workload balancing may be the most important factor in the decision to scale up or scale out. Again, you may want to implement both approaches if needs dictate.
SharePoint, for example, might be considered a legacy tool for some admins, but it provides a good opportunity to explore when to scale up and when to scale out. In an initial SharePoint implementation, you can take either approach. For instance, you may choose the single-server route and simply add resources to that server as needed. Or you may deploy a scaled-out SharePoint environment in which each separate service runs on its own system. Under this scenario, as needs dictate, you would simply add resources to those separate servers or, for additional availability, add servers to support critical roles. In the single-server example, you have the option to add servers in the future, but not in as granular a way.
When it makes sense, I prefer to start with a scale-out environment for such applications, which can begin inside your existing virtualized infrastructure. As needs grow, add resources, move workloads to different hosts, or move them to their own physical, scaled-out environment.
In addition to choices surrounding buying server hardware, you should also consider features that can save money. Modern servers are much more power-efficient than older units. With more efficient power supplies and core hardware, today's servers run cooler and require less power to operate. When coupled with appropriate management software, power efficiency in today's server environment can be taken to new levels.
Both VMware and Microsoft offer power optimization technologies in their respective virtualization management tools that allow the management software to monitor resource levels in managed clusters, shut down hosts when resource requirements are low, and return hosts to operation as resource needs increase.
With these criteria in mind, how do you go about balancing scale-up and scale-out architectures? For general line-of-business and productivity applications, it's about balance. Scale out sufficiently to ensure that all mission-critical systems have enough hosts on which to operate and that you've accounted for any overhead associated with automated high-availability mechanisms and workload separation. Scale up to ensure that there is enough horsepower to handle these workloads as they're distributed across the environment.
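The sizing arithmetic implied here can be made concrete. The sketch below, with assumed names and default values chosen for illustration, reserves per-host headroom for bursts and then adds spare hosts for an N+1 high-availability policy:

```python
import math

def hosts_needed(total_vcpu_demand, vcpus_per_host,
                 ha_spare_hosts=1, headroom=0.20):
    """Estimate the host count for a cluster.

    Reserve `headroom` (e.g. 20%) of each host's capacity for bursts,
    then add `ha_spare_hosts` so the cluster can absorb host failures
    (an N+1 high-availability policy by default).
    """
    usable_per_host = vcpus_per_host * (1 - headroom)
    base = math.ceil(total_vcpu_demand / usable_per_host)
    return base + ha_spare_hosts
```

For example, 200 vCPUs of demand on 32-vCPU hosts with 20% headroom needs eight working hosts, nine with the N+1 spare. Scaling up (bigger hosts) shrinks the base count but makes each failure costlier; scaling out does the reverse.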
For scientific, big data or other high-performance computing needs, it's about raw power. Scale out and aggregate the workload using equipment that provides enough processing threads to meet the needs of the application.
A blended architecture that employs both high-end and commodity hardware may be a good choice for organizations looking to scale workloads effectively. The data center of tomorrow will combine scaled-up and scaled-out environments, with the mix depending entirely on workload type.
This was first published in August 2013