Virtualization drastically consolidated servers, as organizations spun up VMs rather than purchasing additional hardware. The next wave of consolidation stems from strides in storage and server performance. Solid-state drives now match or surpass hard disk drive capacities and are much faster, while CPU core counts are soaring and dynamic RAM capacities are growing rapidly.
For virtualized server farms, the prospect of much more horsepower allows the total count of servers needed for a given workload to shrink substantially. Alternatively, very powerful virtual instances can be assembled to either change how workloads are processed, such as totally in-memory platforms, or allow new workloads, such as compute/memory-intensive scientific apps, to be moved into the virtual space.
The impact of SSDs
Not long ago, a server needed at least six Serial-Attached SCSI (SAS) drives to deliver adequate performance. Today, a pair of NVM Express (NVMe) drives does the same job, with better data integrity. We see the same reduction in networked storage: performance tiers are met by solid-state drives (SSDs), while cold secondary storage is supplied by large Serial Advanced Technology Attachment (SATA) hard drives. The writing is on the wall even for these SATA hard disk drives (HDDs), with 30 TB 2.5-inch SSDs already announced.
SSDs affect more than just capacity, though. SSDs use much less power than HDDs and typically adhere to the 2.5-inch form factor. The M.2 size -- formerly known as the Next Generation Form Factor -- is typically as small as 22 mm x 30 mm, continuing to minimize the space needed for the drive pool. Taken together, the drive space in servers and storage appliances could drop by as much as 70% from the designs common today.
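A rough calculation shows how these two trends compound. The drive counts below come from the text; the face dimensions are nominal figures for each form factor, used here purely as illustrative assumptions:

```python
# Sketch: drive-bay space reduction from the SSD transition.
# Drive counts follow the text; areas are nominal face dimensions
# in mm and are illustrative assumptions, not vendor specs.

bay_2p5_mm2 = 69.85 * 100.0   # 2.5-inch drive face (width x depth)
bay_m2_mm2 = 22.0 * 30.0      # M.2 2230 card

old = 6 * bay_2p5_mm2          # six SAS drives in 2.5-inch bays
new = 2 * bay_2p5_mm2          # two NVMe drives, still 2.5-inch
print(f"bay space saved: {1 - new / old:.0%}")      # ~67%

new_m2 = 2 * bay_m2_mm2        # the same pair as M.2 cards
print(f"with M.2 instead: {1 - new_m2 / old:.0%}")
```

Moving from six 2.5-inch drives to two lands close to the 70% figure cited above; switching the pair to M.2 pushes the saving further still.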
We'll see a reduction in drive count and form factor, with a consequent overall space saving, just from the adoption of SSDs, but other factors also shrink the space needed to store data. Data compression radically reduces the raw drive capacity needed to store a given amount of data. Space savings vary widely -- some data sets, such as photos, are already compressed and effectively incompressible -- but other data compresses well, yielding as much as five times the usable storage capacity.
Deduplication of files also achieves good savings -- though, again, it is use case-dependent. Applying dedupe to virtual desktops can reduce the common file set by 100 times or more, with user file reductions of as much as five times. Applying both deduplication and compression can dramatically cut the raw capacity needed for storage, leading to yet more shrinkage in the data center footprint.
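Because dedupe and compression multiply, the combined effect on raw capacity is easy to underestimate. Here is a minimal sketch; the 10:1 dedupe and 5:1 compression ratios are illustrative assumptions drawn from the ranges above, not measured values:

```python
# Sketch: raw capacity needed once deduplication and compression
# are both applied. Ratios are illustrative assumptions from the
# ranges discussed in the text.

def raw_capacity_needed(logical_tb, dedupe_ratio, compression_ratio):
    """Raw drive capacity (TB) required to hold logical_tb of data."""
    return logical_tb / (dedupe_ratio * compression_ratio)

# 500 TB of virtual desktop data: assume 10:1 dedupe across the
# estate, then 5:1 compression on what remains.
print(raw_capacity_needed(500, 10, 5))   # 10.0 TB of raw flash
```

Even with far more conservative ratios, the multiplication means the raw flash pool is a small fraction of the logical data set.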
The server side
The server side is also evolving rapidly. Just as SSDs transformed storage, CPU improvements are now delivering a similar -- or better -- boost in server-level performance.
That's not all. Memory expansion has made in-memory databases feasible, which really boosts performance in a virtual server cluster. Oracle reports as much as a 100x increase in performance, which translates into either fewer servers or much faster runtimes. Either way, organizations will consolidate servers as a result.
In practical terms, organizations are replacing RAID storage arrays with compact storage appliances. These, in turn, are seeing strong competition from combined server/storage hyper-converged infrastructure (HCI) appliances, and it's likely, given the fact that these appliances are close in hardware design to servers, that this is the future of both storage and servers.
HCI aligns well with software-defined infrastructure, which takes advantage of the underlying virtual server structures created by hypervisors to separate control-plane data service software from the storage, network and server hardware platforms. With the ability to host many more instances, the next generation of servers will create more room for microservices, with all the flexibility that approach implies.
With a small footprint for these appliances -- think of them as Lego blocks -- and with all of the factors above pressing on the issue, the future server farm will shrink physically by quite an amount. However, another major factor is in play that takes the shrinkage even further.
The cloud is outsourcing whole workloads from the data center. Many IT operations put web serving into the public cloud, and most backup/archiving work has moved there, too. The obsolescence of large tape libraries and the removal of racks of 1U servers will contribute yet another burst of footprint reduction.
Another factor to weigh is the effect of containers. Though just entering the mainstream, containers will likely supplant hypervisors. From a server perspective, they increase instance density by around three to five times per server. Even though demand for instances will grow substantially compared with present VM levels, container adoption will still help consolidate servers.
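The density gain translates directly into fewer boxes. A quick sketch, where the 2,000-instance farm, the 40 VMs per server and the 4x container multiplier are all illustrative assumptions within the 3x-5x range above:

```python
# Sketch: how a 3x-5x container density gain shrinks a server farm.
# All figures are illustrative assumptions, not benchmarks.

def servers_needed(instances, instances_per_server):
    # Round up: a partially filled server is still a server you rack.
    return -(-instances // instances_per_server)

vm_farm = servers_needed(2000, 40)             # 2,000 VMs, 40 per server
container_farm = servers_needed(2000, 40 * 4)  # ~4x density with containers
print(vm_farm, container_farm)                 # 50 13
```

Even if instance demand doubles in the container world, the farm still lands well under the original VM-era server count.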
To counter all of this reduction, the industry is looking to big data to increase compute demand, but it isn't clear that this will stem the tide. Much big data has huge spikes in creation rates -- just think of retail as an example. These spikes might best be handled using public cloud space rather than in house.
Future products in storage and servers will accelerate the downsizing trend. SSDs above 30 TB have been announced -- triple the capacity of the largest HDD, and in a 2.5-inch form factor to boot. We can expect 50 TB and 100 TB drives in 2018.
Servers are moving toward a model with much lower power in the CPU-memory complex, which should allow for a major motherboard shrink and, consequently, a move to smaller servers. Flash and Optane -- nonvolatile dual in-line memory module technology -- will boost system performance substantially, while CPU core counts will increase into the 20 core/CPU range. GPUs will, in certain use cases, boost in-memory operation even further, while much faster remote direct memory access will speed up both drives and data sharing between servers in a cluster.
Overall, we are entering a time when servers both increase substantially in performance and shrink in size. All of these system-level boosts will consolidate servers for any given workload. It's clear that we have already passed "max data center" in terms of size, suggesting that some footprint planning would be a good idea.