

Navigate the pros and cons of Windows Server Core

Thin server OSes allow for reduced maintenance and overhead, but when you increase density to take advantage of excess capacity, you can face issues with failures and performance.

More vendors are removing the GUI and trimming the bloat from their OSes, which has both positive and negative implications. The base code reduction minimizes the number of patches and the amount of maintenance each OS needs. These thin OSes, and products like Windows Server Core, are also ideal for security personnel struggling to keep up with vulnerabilities and zero-day issues, because a smaller footprint means a smaller attack surface.

The downside to these thin server OSes is management. Removing familiar interfaces requires some retraining on the command line or making use of a management server with a GUI. Despite these challenges, most server administrators can get up to speed fairly quickly and can enjoy the benefits of what Windows Server Core and these other thin server OSes have to offer.

The virtualization administrator will also see some benefits -- a reduction in CPU and memory resources per VM, to start. However, the biggest impact will be on storage. The GUI for most OSes is huge; an OS with a GUI can be five to six times the size of one without. This translates to gigabytes of disk savings per VM. With so many positive aspects to using Windows Server Core, it's hard to see the downside, but there is one significant factor to consider: density.
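As a rough illustration of how those per-VM savings add up, consider the back-of-the-envelope math below. The install sizes and VM count are assumptions for the sake of the example, not measured figures:

```python
# Assumed on-disk sizes; actual numbers vary by OS version and roles installed.
FULL_OS_GB = 12.0   # assumed size of a full GUI server install
CORE_OS_GB = 2.0    # assumed size of a Server Core install (roughly 6x smaller)
VM_COUNT = 200      # assumed number of VMs in the environment

savings_per_vm_gb = FULL_OS_GB - CORE_OS_GB
total_savings_gb = savings_per_vm_gb * VM_COUNT

print(f"Per-VM savings: {savings_per_vm_gb:.0f} GB")        # 10 GB
print(f"Fleet-wide savings: {total_savings_gb / 1024:.1f} TB")
```

Even with conservative assumptions, the savings across a couple hundred VMs reach into the terabytes -- which is exactly the excess capacity that tempts administrators to raise density.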

Increase density

Reducing VM overhead allows for greater density of VMs per host and storage platform. While that isn't necessarily a bad thing, you need to understand the overall effect it has on your virtual infrastructure. Before installing Windows Server Core, you sized your hosts based on the number of VMs and the capacity they needed. Shrinking the VM footprint frees up those resources and can increase density severalfold. However, packing more VMs onto each host can lead to some issues.
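The sizing effect can be sketched with a simple calculation. All of the host and per-VM figures below are illustrative assumptions; the point is that density is capped by whichever resource runs out first, and trimming the VM footprint moves that cap:

```python
# Assumed host capacity and per-VM footprints -- illustrative only.
HOST_RAM_GB = 512
HOST_STORAGE_GB = 8000

vm_full = {"ram_gb": 8, "disk_gb": 52}  # assumed footprint with a GUI OS
vm_core = {"ram_gb": 6, "disk_gb": 42}  # assumed footprint with Server Core

def max_density(host_ram, host_disk, vm):
    """VMs per host, limited by whichever resource is exhausted first."""
    return min(host_ram // vm["ram_gb"], host_disk // vm["disk_gb"])

print(max_density(HOST_RAM_GB, HOST_STORAGE_GB, vm_full))  # density before
print(max_density(HOST_RAM_GB, HOST_STORAGE_GB, vm_core))  # density after
```

Under these assumptions the host goes from 64 to 85 VMs -- and notice that the binding constraint can shift from one resource to another as footprints change, which is the re-evaluation the article describes.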



Losing a host now affects more VMs, which changes what is impacted, how long restarts take, and how your DRS and high availability rules behave. Increasing density therefore requires you to re-evaluate overall placement, restart order and rules. Fortunately, not all VMs need to be restarted after a fault. It might be beneficial to leave test/dev VMs offline and bring them up later with automation, so production workloads recover faster.


More VMs per host will further tax your resources, and it's important to understand that bloat and density affect them in different ways. Where you originally had higher memory constraints with bloat, you might now have higher CPU constraints with density. Your storage will be affected as well; rather than capacity being the biggest issue, IOPS and bandwidth could end up being more of a concern.
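To see how the storage concern shifts from capacity to IOPS, here is a minimal sketch. The IOPS budget and per-VM load are assumed numbers chosen for illustration:

```python
# Assumed figures: what the backing storage can sustain vs. per-VM demand.
HOST_IOPS_BUDGET = 20000  # assumed sustainable IOPS for the host's storage
IOPS_PER_VM = 150         # assumed steady-state IOPS per VM

def iops_headroom(vm_count):
    """Remaining IOPS budget; negative means the storage is oversubscribed."""
    return HOST_IOPS_BUDGET - vm_count * IOPS_PER_VM

print(iops_headroom(80))   # original density: comfortable headroom
print(iops_headroom(150))  # increased density: oversubscribed
```

With these assumptions, 80 VMs leave 8,000 IOPS of headroom, while 150 VMs oversubscribe the array by 2,500 IOPS -- even though, thanks to the smaller Server Core footprint, raw capacity may no longer be the problem.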

Redistribute workloads

One of the ways to offset greater density is to redistribute workloads. Increasing the mixture of production, test/dev and VDI will help you spread out the risk when it comes to failures. This will also help you balance resource controls since you will now have denser -- but more varied -- workloads you can adjust to best fit your needs.

Of course, you don't have to increase density, but having your hosts running with what could amount to a lot of excess capacity after moving to Windows Server Core or a thin server OS isn't smart business. Managers and accountants won't be willing to allocate more money for infrastructure unless you can show proper use.

Next Steps

Determine whether you need a GUI

Improve IT resilience with these tips

Master virtual performance management
