Proprietary Storage Systems Are Not Keeping Pace with Open Source

In: computers

18 Feb 2013

The mass storage world is always changing. Every month brings new technologies, new companies, and new products. One thing is certain: there is always more data to store, and the growth feeds on itself. As storage gets larger and faster, new applications emerge to take advantage of those capabilities. Virtual Desktop Infrastructure and Big Data in the form of Hadoop clusters are just two current applications with an insatiable appetite for more storage capacity and speed.

When businesses decide how to buy storage, several decision factors come into play. Fundamentally, the applications that depend on the stored data have to run nearly all the time. Ideally the data would be accessible 100% of the time, but a system with that much uptime is very expensive to build. So every organization settles on some arrangement in which the data lives on a fast, dependable storage array, is replicated to another site in some fashion, and the vendors supplying the storage hardware and software stand behind it with responsive, effective support when something goes wrong.

There are trade-offs among the different decision factors, and this is what creates market opportunity for the various storage suppliers. It is a rapidly changing landscape, with every element improving on its own cadence:

* Storage media get denser – magnetic disk, SSD, and future technologies.

* Data transfer speeds improve in steps, each interconnect at its own rate – SAS, Fibre Channel, Ethernet.

* Processors improve – Intel Architecture gains more cores and faster clock speeds.

* Memory – DRAM gets larger, denser, and faster.

* Software – New features like deduplication and thin provisioning add efficiency (see the deduplication sketch after this list).

* Vendor dependability – support capacity, mergers, and acquisitions.
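The deduplication feature mentioned above is easy to see in miniature. Below is a minimal sketch of block-level deduplication in Python. It is an illustration under assumptions, not any vendor's implementation: the 4 KB block size and the `store` and `index` dictionaries standing in for an appliance's block store and file metadata are all made up for this example.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed fixed block size for this illustration


def dedup_write(path, store, index):
    """Write a file into a toy deduplicating store.

    `store` maps content hash -> block bytes; `index` maps path -> list of
    hashes. Both are hypothetical, in-memory stand-ins for an appliance's
    block store and file metadata.
    """
    hashes = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).hexdigest()
            if digest not in store:
                store[digest] = block      # new content: stored exactly once
            hashes.append(digest)          # duplicate blocks only add a reference
    index[path] = hashes
    return len(hashes), len(store)


# Example (hypothetical file names): writing a copy of the same data adds
# references in `index` but no new blocks in `store`.
# store, index = {}, {}
# dedup_write("report.bin", store, index)
# dedup_write("report_copy.bin", store, index)  # len(store) does not grow
```

Writing a second file that shares blocks with the first adds entries to `index` but not to `store`, which is where the capacity savings come from.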

When I worked in the semiconductor field, I had the chance to work for one of Intel’s manufacturer’s reps in a region where much of the industry’s storage was designed and built. I watched the shift from the i960 processors to X86 Architecture, and saw how many storage appliances came to be built around the same basic technology as servers.

The fascinating thing about X86 Architecture is that some processors and chipsets are placed on the X86 Embedded roadmap, meaning they will be manufactured and supported for many years. This is very different from the Intel Datacenter roadmap, with its constant cycle of new processors and chipsets built on the latest lithography in Intel’s newest fabs. Most people familiar with servers know the Datacenter products, but far fewer are aware of the Embedded processors and chipsets.

For companies that build dedicated storage appliances, with their multi-year design, testing, and manufacturing cycles, it makes a great deal of sense to design around the Intel X86 Embedded roadmap, because the same products can be manufactured and supported for years at a time. That simplifies parts sparing, support, software maintenance, and bug fixes. Even so, every few years storage appliance vendors make a major design revision, because the underlying processors and chipsets must be migrated to the next Embedded generation. That is why forklift upgrades keep happening every few years in the storage world, and it will stay that way as long as hardware and software are bundled into a dedicated appliance running proprietary operating systems and software.

Servers used to be delivered as a bundle of hardware and software too – remember mainframes? I still meet customers running AS400 systems because, decades ago, a custom application was built on that highly dependable platform and they use it to this day. What they would rather have is that application, custom or off-the-shelf, running on Linux, virtualized by VMware, on X86 hardware, attached to shared storage, just like all of their other applications. But because unbundling from a proprietary appliance is difficult, they miss out on the performance and reliability advantages of modern computing.

Will the same thing happen in the storage world? Look at the hardware that storage devices are built around and you find a lot in common: standard interfaces, processors, chipsets, hard drives, and chassis. What differs is the software, the support, and the organization behind it. In the server world, Linux and Apache became the alternative to big-company software and support, and delivered a more reliable, better-performing product than either Solaris or Microsoft could muster. Will the same thing happen in storage?

A ZFS Storage appliance built on Cisco Storage Servers is the highest performing system available.
