Organizations looking for a new storage architecture have a difficult decision to make in the face of a changing market. Though still in use, Fibre Channel storage area networks and iSCSI are losing ground, and using virtual storage area networks might not be the go-to approach for much longer.
Does Fibre Channel still make sense?
The first Gen 6 components have been announced at 32 Gbps data rates, but adoption will start slowly as vendors get on board and run exhaustive test cycles, followed by cagey customers who test for months.
Speeds and feeds are a crucial issue in the choice of connection schemes, and this is where Fibre Channel (FC) runs into trouble. Technology cycles in the Ethernet world have shrunk from the traditional 10x performance in 10 years to 2x performance in two years. Moreover, Ethernet acceptance times are shorter than FC's, thanks to vendor cooperation in initial testing and a larger pool of resources.
By the time Gen 6 FC is ready for prime time, we will have had nearly two years of 25 GbE and 100 GbE connections and will be starting on 50/200 GbE. Furthermore, remote direct memory access (RDMA) and nonvolatile memory express (NVMe) over Ethernet will be in mainstream use, boosting transmission significantly while reducing system overhead.
Another nail in FC's coffin is the death of RAID. The solid-state drive (SSD) has made RAID obsolete, and shipments of RAID disk arrays are declining for many reasons, mostly related to controller and network performance. Those arrays were the raison d'être for storage area networks (SANs), and the alternatives look better today.
How iSCSI and virtual SANs stack up
That leaves iSCSI and virtual storage area networks to examine, both of which are Ethernet-based. This gives them a tremendous advantage over FC, since Ethernet will be the common fabric of hybrid clouds and virtual clusters going forward. A single fabric simplifies admin work and lowers costs.
Both iSCSI and virtual storage area networks are block-I/O platforms, so we should ask if, in the virtual world, we should continue with that approach or move to an object system that can scale to the very large sizes expected in the near future. Scaling block systems is difficult.
They use a logical unit number (LUN) system of virtual disks, which tends to freeze storage pools in size, while sharing is restricted by problems of data structure ownership. The file system resides in a server, which means that sharing involves a dialog with any other server using a LUN to ensure coherency of the data sets. This works fine for read-mostly data, where only one server of the cluster changes data -- an image file for VMs comes to mind -- but it gets tough if all the nodes want to update the same LUN.
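To make the coherency problem concrete, here is a minimal Python sketch (all names are illustrative): a LUN is just raw blocks with no knowledge of the file system layered on top, so when one server rewrites shared metadata, another server's cached copy silently goes stale unless the servers coordinate out of band.

```python
# Illustrative sketch: a "LUN" is raw blocks with no file-system awareness.
BLOCK_SIZE = 512
lun = bytearray(BLOCK_SIZE * 8)          # the shared LUN: raw blocks, no structure

def read_block(n):
    return bytes(lun[n * BLOCK_SIZE:(n + 1) * BLOCK_SIZE])

def write_block(n, data):
    lun[n * BLOCK_SIZE:n * BLOCK_SIZE + len(data)] = data

# Server A and server B each cache block 0 (imagine it holds the allocation map).
cache_a = read_block(0)
cache_b = read_block(0)

# Server A allocates space and rewrites the map on the LUN.
write_block(0, b"A-owns-block-3")

# Server B's cached copy is now stale -- the LUN itself cannot tell it so.
# B must learn out of band (a coherency dialog) that it must re-read.
assert cache_b != read_block(0)
```

This is why read-mostly data works well on a shared LUN, while multi-writer workloads force every node into constant coordination traffic.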
Performance is often cited as a reason for FC, but iSCSI is close on speed. Virtual storage area networks are another matter. It isn't yet clear how well they perform. The issue is the speed imbalance between local access within the server and the networked access needed to write out replicas for data integrity purposes.
Object storage versus network-attached storage
Let's consider alternatives. Many consider object storage slow, fit only for backup and archiving. The reality is that, first, most stored data today consists of small objects, especially from mobile endpoints, where as much as 60% of a company's data resides. Second, object stores are getting much faster: issues with using SSDs and bottlenecks in the back-end network have been mostly overcome. Object storage will likely form a good part of any hybrid cloud, as we already see with Amazon Web Services S3.
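As a rough illustration of the model S3 popularized, the sketch below (hypothetical names, not any vendor's API) treats an object store as a flat namespace mapping keys to whole blobs plus user metadata, with a content hash standing in for an S3-style ETag. Objects are read and written whole, rather than block by block.

```python
# Hedged sketch of the object-store model: flat key namespace, whole-object
# PUT/GET, metadata attached to the data itself. All names are illustrative.
import hashlib

class ObjectStore:
    def __init__(self):
        self._bucket = {}

    def put(self, key, data, metadata=None):
        etag = hashlib.md5(data).hexdigest()   # content hash, like an S3 ETag
        self._bucket[key] = (data, metadata or {}, etag)
        return etag

    def get(self, key):
        data, metadata, _ = self._bucket[key]
        return data, metadata

store = ObjectStore()
etag = store.put("photos/2016/cat.jpg", b"\xff\xd8...", {"source": "mobile"})
data, meta = store.get("photos/2016/cat.jpg")
```

Note the key looks like a path but is just a flat string, which is what lets object stores scale out without the shared-metadata coordination a LUN or file system requires.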
For the remaining data, object stores aren't yet ideal. They'll need file systems and block-I/O portals to close the application gap, but these are imminent, especially in the omnipresent open source Ceph package. The gap arises because apps that speak blocks and Network File System/Server Message Block meet an object store speaking representational state transfer (REST), so translation is needed. We can expect universal storage appliances based on object storage to be a major factor going forward.
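The block-I/O portal idea can be sketched as a thin translation layer (illustrative names only, not Ceph's actual RBD API): applications keep issuing block reads and writes, and the gateway maps each logical block address to an object key, turning block I/O into whole-object GET/PUT underneath.

```python
# Sketch of a block portal over an object store. Hypothetical names;
# real gateways batch blocks into larger objects for efficiency.
class BlockPortal:
    BLOCK = 4096

    def __init__(self, object_store):
        self.store = object_store          # any dict-like key/value object backend

    def _key(self, lba):
        return "vol0/block-%08d" % lba     # one object per logical block address

    def write_block(self, lba, data):
        assert len(data) <= self.BLOCK
        # pad to block size, then PUT the whole object
        self.store[self._key(lba)] = data.ljust(self.BLOCK, b"\x00")

    def read_block(self, lba):
        # unwritten blocks read back as zeros, like a thin-provisioned volume
        return self.store.get(self._key(lba), b"\x00" * self.BLOCK)

portal = BlockPortal({})
portal.write_block(7, b"superblock")
```

The extra hop is the translation cost the article mentions; closing that gap efficiently is what the Ceph-style portals aim to do.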
Universal storage aside, scale-out network-attached storage (NAS) has arrived, using cross-appliance techniques to match object storage's redundancy and resilience. This is a viable alternative for primary networked storage in the cloud/cluster environment, and having a data structure attached to the data itself is a big plus in sharing compared with FC SANs.
The release of software-defined storage platforms has complicated the situation. Implementing this approach on objects and files is much easier than trying to virtualize services for SANs. A fundamental software-defined storage feature is the ability to scale service instances up or down on demand. Response times have to be short, and object storage or NAS will do better in this area.
In summary, the SAN era looks to be fading away. FC SANs will go first, despite the FC community's efforts to add RDMA and NVMe to FC links. iSCSI will last a while longer, while virtual storage area networks will morph into an RDMA-driven storage pooling system, likely with object-based secondary storage.