Tenants of clouds, whether public or hybrid, want the control mechanisms of a typical in-house data center. They don’t want to give up virtual storage area networks, firewalls, access controls, governance and compliance, or any of the other security and control systems that come with ownership. But, at the same time, they want to see the promised agility, rapid scaling and cost effectiveness that brought them to the cloud in the first place.
Software-defined networking (SDN) appears to be one answer to this architectural dilemma. In many ways, it is the cloud service provider's answer to its own problems, since the orchestration suite tends to hide much of the network management from tenants. The need to scale, combined with the cost of standard switch gear and a recognized need to give tenants more flexibility, led to an architecture built from simple switches based on readily available silicon, with the data services and switch management abstracted out of the switch and hosted in virtual machine instances in the server farm.
A good analogy is building a model. You can start with a block of wood and chop away until the model is finished, or you can use Legos. The Lego version is faster to build and can be quickly altered. The carved block is the traditional fixed-structure switch; the Lego model is the SDN approach.
In response to the growing popularity of the hybrid cloud approach, SDN has become very much a mainstream approach, though we are still very early in its evolution. There are partial solutions available today, some of which can already do serious work in the private segment of a hybrid cloud. The industry is geared up for major work in this area and the wide availability of services, software and hardware platforms will arrive over the next year.
SDN, coupled with network functions virtualization (NFV), allows the tenant to assemble the pieces of networking needed for the job. The tenant chooses data services and connects them together; policies and templates make this easier. Because the data services are spawnable virtual instances running on virtual machines, the supply is effectively unlimited, so right-sizing for a specific workload will be relatively easy.
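The assemble-from-a-template idea can be sketched in a few lines. This is purely illustrative: the service names, the template format and the `Orchestrator` class are hypothetical stand-ins, not the API of any real orchestration suite.

```python
# Hypothetical sketch: a tenant composes data services from a template.
# Each service in the chain would, in a real SDN/NFV deployment, be a
# spawnable virtual machine instance; here we just record the chain order.

TEMPLATE = {
    "web-tier": ["firewall", "load-balancer"],
    "db-tier": ["firewall", "ids"],
}

class Orchestrator:
    def __init__(self):
        self.instances = []

    def spawn(self, service):
        # Stand-in for launching a data-service VM instance.
        self.instances.append(service)
        return service

    def deploy(self, tier):
        # Spawn each data service in the template and wire them in order.
        return [self.spawn(s) for s in TEMPLATE[tier]]

orch = Orchestrator()
chain = orch.deploy("web-tier")
print(chain)  # ['firewall', 'load-balancer']
```

The point of the sketch is right-sizing: because each entry is just another instance, scaling a tier means spawning more copies of the same template rather than re-cabling hardware.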
As envisaged, there will be competitive solutions for each class of data services. This means that some level of standardization between modules will be needed, and tools like OpenFlow are aiming to provide the "glue" to hold the modules together.
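The "glue" role is easiest to see as a match-action table, which is the core abstraction OpenFlow standardizes. The sketch below is conceptual only: it mimics first-match-wins flow-table semantics in plain Python and is not the OpenFlow wire protocol or any controller's API.

```python
# Illustrative match-action table in the style of an OpenFlow flow table.
# A packet is checked against each rule in order; the first match wins.

flow_table = [
    # (match criteria, action) -- actions name hypothetical service VMs
    ({"dst_port": 80}, "forward:lb-vm"),   # web traffic -> load balancer
    ({"dst_port": 443}, "forward:lb-vm"),
    ({}, "drop"),                          # empty match = default rule
]

def lookup(packet):
    for match, action in flow_table:
        # A rule matches if every specified field equals the packet's value.
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return "drop"

print(lookup({"dst_port": 80}))  # forward:lb-vm
print(lookup({"dst_port": 22}))  # drop
```

Standardizing on this kind of table is what lets competing modules interoperate: any vendor's service VM can be the target of a forwarding action.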
Outside of cloud service providers, there are few completed projects at this time, though what feedback there is on SDN is generally positive. There is some serious fear, uncertainty and doubt in the air from the likes of Cisco, which sees a potentially serious loss of revenue over the next few years. Other companies are applying the "software-defined" label a bit too liberally to existing approaches that lack real agility.
Storage startups have looked at the service abstraction concepts of SDN too, leading to the creation of a parallel effort to build software-defined storage for data centers. Still a very new concept, it is just beginning to coalesce into a concrete architectural approach.
Again, the idea is data services abstraction, but the implications for the underlying hardware are more complex, reflecting the broad diversity of approaches to storage seen in the industry. Ultimately, the model is likely to resemble the Ceph open-source universal storage concept, with simply structured data storage nodes containing drives and offering the raw storage, but with all of the services (compression, replication, erasure code generation and encryption) running in virtual machines.
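To make "erasure code generation as a software service" concrete, here is a minimal sketch using simple XOR parity, which can rebuild any single lost block. Production systems such as Ceph use Reed-Solomon codes that tolerate multiple failures; this toy version just shows why the service needs only CPU cycles, not special hardware.

```python
# Minimal erasure-code sketch: XOR parity across equal-sized data blocks.
# Losing any one block, the parity plus the survivors recovers it.

def make_parity(blocks):
    parity = bytes(len(blocks[0]))
    for b in blocks:
        parity = bytes(x ^ y for x, y in zip(parity, b))
    return parity

def rebuild(surviving_blocks, parity):
    # XOR-ing the parity with the survivors cancels them out,
    # leaving the missing block.
    return make_parity(surviving_blocks + [parity])

data = [b"node0data", b"node1data", b"node2data"]
p = make_parity(data)

# Simulate losing the middle block and recovering it.
recovered = rebuild([data[0], data[2]], p)
print(recovered)  # b'node1data'
```

Because this is pure computation over blocks, it can run in any virtual machine instance, exactly the placement the abstracted-services model calls for.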
However, this would dramatically reduce the hardware revenue from storage appliances, and it’s probable that we’ll see a lot of complex solutions and confusion as a result. This will take some years to sort out, but even partial solutions will reduce storage costs significantly. An obvious example that’s already heading mainstream is Ceph itself, where companies are buying low-cost hardware from China’s original design manufacturers (ODMs), integrating drives from distribution and building OpenStack-compatible scale-out storage. Estimates are that this ODM business is currently 10 percent of total storage box revenue, but 20 percent of units sold, reflecting the low cost of the ODM gear.
While the storage nodes and the switch nodes use specialized hardware, everything else runs on virtualized server instances. There are data integrity and latency issues in a distributed system like this that are not yet well understood, so we can expect performance tuning to be a major value point in this growing market.
Networking performance will be a major factor in the tuning process. New Ethernet speeds arriving in the near term offer some potential for relief, but the number of inter-node transfers in a distributed SDN or software-defined storage (SDS) architecture is still problematic.
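A back-of-envelope calculation shows why hop count matters more than raw link speed. The latency figures below are illustrative assumptions, not measurements from any product.

```python
# Back-of-envelope: every extra inter-node hop adds a network round trip.
# Both numbers are assumed values chosen only to show the trend.

hop_latency_us = 50    # assumed one-way latency per inter-node hop
local_access_us = 100  # assumed local flash access time

def total_latency(hops):
    # A request traversing `hops` service nodes pays a round trip per hop
    # plus the final storage access.
    return 2 * hop_latency_us * hops + local_access_us

print(total_latency(1))  # 200 -- one service hop doubles the access time
print(total_latency(4))  # 500 -- a four-hop service chain is 5x local
```

Faster Ethernet shrinks `hop_latency_us`, but the multiplier on hop count remains, which is why chaining many abstracted services is the real tuning problem.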
Storage data flow is not like server virtualization; latency is a crucial issue. For example, a database will not mark a transaction as complete until multiple copies of the new data are written to permanent storage. Having the next node merely acknowledge that the data has been received isn’t enough, since a power outage would cause data loss. This implies some special handling, or some form of non-volatile buffering that won’t lose data on power loss.
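The durability rule just described can be sketched as a tiny simulation. The `Replica` class and `durable_write` function are hypothetical illustrations: the point is only that an acknowledgment counts toward commit solely after data reaches permanent storage, not when it lands in a node's memory.

```python
# Sketch of the durability rule: a write commits only after enough
# replicas have the data on permanent storage. Names are illustrative.

class Replica:
    def __init__(self):
        self.memory = []  # volatile: lost on power failure
        self.disk = []    # permanent storage

    def receive(self, data):
        self.memory.append(data)  # received, but NOT yet durable

    def flush(self):
        # Stand-in for forcing data to stable media (e.g. an fsync).
        self.disk.extend(self.memory)
        self.memory.clear()

def durable_write(replicas, data, required_copies):
    for r in replicas:
        r.receive(data)
    # Count a replica toward commit only once it has flushed to disk.
    persisted = 0
    for r in replicas:
        r.flush()
        persisted += 1
    return persisted >= required_copies

replicas = [Replica(), Replica(), Replica()]
ok = durable_write(replicas, "txn-42", required_copies=2)
print(ok)  # True -- all three replicas flushed, two were required
```

Waiting for those flushes is exactly the latency cost the paragraph above describes; non-volatile buffering attacks it by making the "received" state itself power-safe.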
Both SDN and SDS offer tremendous savings and high scalability. Executed well, most systems management functions will move to the orchestration suite, and tenants will be able to control their own virtual data centers. Underlying all of this, the recognition that not all instances are equal and that hardware limitations are real will shape both the hardware solutions and the data services software.