Software licensing challenges have existed since the very first software sale. In the early days, most software licensing models were fairly straightforward, based on concurrent connections or the number of installations. Today, virtualization and multi-core CPUs add a new level of complexity.
Virtualization has given the system administrator complete control over a server -- from memory to the number of CPUs or cores assigned to a virtual machine. Because of this flexible control model, the administrator can also tune a virtual server to best fit the precise software licensing model needed for her environment. While that flexibility is ideal and cost effective for the administrator, it often comes at a cost to the software vendor.
While software manufacturers still had a bit of a handle on things with the introduction of multi-core CPUs, it was virtualization's ability to abstract CPUs and cores through software that created the potential to upset software vendors. A higher density of cores per physical CPU, coupled with Intel's Hyper-Threading Technology, spelled trouble for software vendors. Suddenly, it was possible for a customer to purchase licensing for a single CPU socket with a large number of cores, and use virtualization software to slice those resources into multiple VMs -- in effect exploiting traditional software licensing models.
Software vendors respond
Unfortunately for customers, this flexibility and cost savings was simply not going to last. Over the years, software vendors have tried various methods to adjust licensing, with varying levels of success. Ironically, even virtualization vendors have struggled with changing licensing dynamics. VMware's virtualization software was initially licensed per CPU socket. As the average number of cores per CPU increased, the company tried to adjust licensing by setting limits on memory and CPU sockets. This new limit was referred to as the vTax, and it was about as popular as normal taxes.
Customers with large amounts of memory in their hosts, who would have seen dramatic licensing cost increases, pressured the company, and VMware eventually reversed course. While VMware listened and responded to customer demands, many other vendors chose to plug their ears.
Vendors such as Oracle didn't simply refuse to change; they made things worse. Faced with the same threats, Oracle responded with additional per-core licensing restrictions and has refused to support licensing on any virtualization platform other than its own. These are two examples of software vendors with very different reactions to customer demands.
Arguments can be made that VMware executives changed their minds due to increased competition from Microsoft and other hypervisor vendors. However, Oracle faces some of the same concerns with increased database competition as well. So what made VMware blink in its licensing efforts while Oracle stayed the course? Part of the answer is company size. VMware is a very large company, but Oracle is much bigger. It could be that Oracle can simply afford to tick off a few more customers than VMware can, because its customer base is that much larger.
Applications are hard to ditch
The second and most important part is that it simply costs a lot more to switch applications than it does to switch hypervisors. Don't get me wrong, switching hypervisors is not easy by any means. Any infrastructure change requires planning, effort and cost. However, unlike applications, VMs are already contained and abstracted from the hardware. With a little effort and the right import tools, it is fairly easy to move VMs between different hypervisors. The same cannot be said about applications. Moving away from applications such as PeopleSoft or SAP is not something that can be done with the help of a simple wizard. These apps are complex and often integrate with many other systems within a business. Furthermore, changing from Hyper-V to VMware, or vice versa, has very little, if any, impact on end users. Changing a user's application, however, would likely cause a stampede of complaints to support desks, accompanied by lost time and productivity.
This doesn't mean a company will never switch to a different application platform, but often the cost -- in both money and time -- of moving outweighs the licensing hike. That puts the application vendor in the driver's seat during licensing negotiations.
This is why we have seen software vendors react differently to the changes brought on by virtualization and multi-core technology. While some had to catch up with the changes to maintain profits, others have taken the opportunity to aggressively change licensing terms without compromise. Unfortunately for all of us, the biggest change is yet to come. With the introduction of Windows Server 2016, Microsoft is switching to a per-physical-core licensing model to better align with cloud services. While I am sure this could save money for some customers, for many it will mean starting out at a 16-core minimum for a Server 2016 license. There is some good news: Microsoft is still offering unlimited virtual instances with its Datacenter edition. However, if you have more than eight cores per socket, you will be required to purchase additional two-core license packs.
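To see how quickly those two-core packs add up, here is a minimal sketch of the math, assuming the published Windows Server 2016 minimums (at least eight core licenses per socket, at least 16 per server, sold in packs of two cores); check Microsoft's current licensing terms before budgeting from this.

```python
def core_license_packs(sockets: int, cores_per_socket: int) -> int:
    """Estimate two-core license packs for one Windows Server 2016 host.

    Illustrative sketch only, assuming the published minimums:
    8 core licenses per socket, 16 per server, sold in 2-core packs.
    """
    per_socket = max(cores_per_socket, 8)           # 8-core minimum per socket
    licensed_cores = max(per_socket * sockets, 16)  # 16-core minimum per server
    return licensed_cores // 2                      # licenses sold in 2-core packs

# A two-socket, 8-core host sits at the baseline: 16 licensed cores, 8 packs.
# Move to two 18-core CPUs and you must license all 36 cores: 18 packs.
```

The point the arithmetic makes: at eight cores per socket and below, you pay the same baseline, but every core beyond that is billed directly, so dense CPUs raise the licensing bill in lockstep.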
It's not like Intel has CPUs with more than eight cores, right? Actually, it is now up to 18 physical cores per CPU. Better get out your wallet, and stay tuned for advice on how you can save some money in the next part of this series.