Most storage arrays in service today are proprietary: storage is effectively consolidated into an appliance that uses a combination of hardware and software developed or customized by the storage system vendor. Proprietary storage arrays generally use an operating system other than a standard version of Windows or Linux.
As one example, NetApp uses its in-house Data ONTAP 7G or GX platform operating systems. Similarly, the storage hardware typically includes modifications that optimize data throughput and resilience. By contrast, nonproprietary storage arrays are basically standard servers built to host a large number of direct-attached disks. Each server uses commodity hardware and runs a standard OS.
In some cases, nonproprietary storage arrays adopt the Open Storage approach of Sun Microsystems Inc. (now Oracle Corp.) that touts open source software and industry-recognized hardware. But it's easy to lose sight of the "open" nature of Open Storage, considering that it relies on Sun OS and storage systems.
Regardless of the distinction, both proprietary and nonproprietary storage approaches typically support standard file systems such as Network File System (NFS) and Common Internet File System (CIFS). Both can also support established connectivity schemes like SCSI, iSCSI, Fibre Channel and Fibre Channel over Ethernet, along with standard disk types such as Fibre Channel and SAS/SATA. Consequently, proprietary and nonproprietary "boxes" can use the same disks, connect to the same network and handle the same data.
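In practice, that interchangeability means clients access either type of array the same way. As a minimal sketch (the hostname, export path and credentials below are hypothetical, not taken from any specific product), mounting an NFS or CIFS share on a Linux client looks identical whether the share is served by a proprietary filer or a commodity server acting as a storage array:

```shell
# Hypothetical array hostname and share names for illustration only.
# Mount an NFS export -- the command is the same regardless of whether
# the server behind it is a proprietary appliance or a standard server:
sudo mount -t nfs filer01.example.com:/vol/data /mnt/data

# Mount the equivalent CIFS/SMB share from the same array:
sudo mount -t cifs //filer01.example.com/data /mnt/data \
    -o username=svc_backup,vers=3.0
```

From the client's perspective, the array vendor is invisible; only the protocol and share path matter.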
Although the technical differences between proprietary and nonproprietary storage arrays may seem minimal on the surface, the implications for the organizations using them are profound. Nonproprietary storage arrays are typically built to fit a particular storage need that cannot be met easily by an existing proprietary storage system. As a result, they demand a significant investment in time and technical prowess. For example, hardware is assembled from scratch, software integration is more problematic and management tools are often cobbled together from multiple sources -- possibly resulting in inconsistent management and suboptimal performance.
About the author
Stephen J. Bigelow, a senior technology writer in the Data Center and Virtualization Media Group at TechTarget Inc., has more than 15 years of technical writing experience in the PC/technology industry. He holds a bachelor of science in electrical engineering, along with CompTIA A+, Network+, Security+ and Server+ certifications, and has written hundreds of articles and more than 15 feature books on computer troubleshooting, including Bigelow's PC Hardware Desk Reference and Bigelow's PC Hardware Annoyances. Contact him at email@example.com.