Cloud Storage – Overcomplicated, Underfunctional

This piece is not about cloud storage as an off-site storage solution. This is about the storage used by public cloud providers and private cloud implementations.

There is much debate over what kind of storage should be used in cloud computing solutions. Some say it depends; some claim internal disks in cloud servers are just fine; others insist that only Fibre Channel-based enterprise disk is acceptable for a highly available, commercial cloud offering. I'm on the "it depends" side of the fence, in other words, standing on top of it. The type of storage used is not really what concerns me right now.

The lack of cloud awareness in storage is what’s bothering me. It seems that there is no cloud-ready storage solution. Enterprise storage, advanced storage management, storage replication, deduplication, etc., are all based on the premise of legacy, static, stateful computing. While there have been enhancements to support virtualization, these enhancements do not inherently address the needs of cloud computing.

There are some key components of a cloud. One of them is a self-service user interface. This means that users who are not storage-savvy, do not understand storage tiers, and do not really care how storage works beyond the fact that they can save files, are able to deploy and destroy workload instances on the cloud's storage back-end, at their whim. Another key component of a cloud architecture is workload automation: the cloud management platform needs to be able to automatically deploy and destroy workloads based on requests coming from that self-service user interface.
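To make that lifecycle concrete, here is a minimal sketch of the deploy-and-destroy loop, assuming a hypothetical management layer (the class and field names are my own invention, not any particular cloud platform's API). Notice that the requested lifetime never leaves the management layer; the storage back-end only ever sees blocks appearing and disappearing.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class WorkloadRequest:
    # What a self-service user actually fills in: no tiers, no LUNs, no RAID levels.
    name: str
    template: str            # a catalog image the user picked
    lifetime: timedelta      # "I need this for an hour" -- the only hint the user gives

class CloudManager:
    # Hypothetical automation layer: turns self-service requests into deploy/destroy actions.
    def __init__(self):
        self.active = {}     # workload name -> expiry time

    def deploy(self, req: WorkloadRequest):
        # Provision the workload on whatever back-end storage is available.
        self.active[req.name] = datetime.now() + req.lifetime
        print(f"deployed {req.name} from {req.template}, expires {self.active[req.name]}")

    def reap_expired(self):
        # Runs periodically; destroys workloads whose requested lifetime has passed.
        now = datetime.now()
        for name, expiry in list(self.active.items()):
            if now >= expiry:
                print(f"destroying {name}")
                del self.active[name]

mgr = CloudManager()
mgr.deploy(WorkloadRequest("test-vm-01", "ubuntu-web-template", timedelta(hours=1)))
mgr.reap_expired()   # an hour later, the workload is gone -- the storage was never told why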

This poses a certain dilemma for traditional enterprise storage offerings. Existing enterprise storage has no way to know the intended statefulness or longevity of a block of data in a cloud. There are no mechanisms in place for the cloud management software to notify the storage that a particular data set, virtual machine, OS instance, application, etc., may only be temporary, or that it may need to exist for the next 6 months.

Why does any of this matter? Well, let's say a user schedules a workload instance for one hour. They deploy that workload, and it happens to share identical bits with another workload already sitting on the cloud's back-end storage. The storage, being smart, notices this and begins a deduplication pass to reduce the overall consumption of physical capacity. That deduplication pass consumes processor time and hard drive I/O as it runs. It may be only about 50% complete when the hour expires and the cloud management software automatically removes the very workload the storage subsystem has spent the last hour laboring to deduplicate.
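To put rough numbers on the waste, here is a back-of-the-envelope sketch; the throughput and sizes are assumptions chosen purely for illustration, not measurements from any real controller.

# Purely illustrative numbers: a dedup pass that is thrown away when the workload expires.
DEDUP_RATE_GB_PER_MIN = 5        # assumed controller throughput devoted to deduplication
WORKLOAD_SIZE_GB = 600           # assumed size of the duplicate workload
LIFETIME_MIN = 60                # the user only asked for one hour

minutes_needed = WORKLOAD_SIZE_GB / DEDUP_RATE_GB_PER_MIN   # 120 minutes of work to finish
minutes_spent = min(minutes_needed, LIFETIME_MIN)           # 60 minutes actually burned
progress = minutes_spent / minutes_needed                   # 50% complete, then deleted

print(f"Controller spent {minutes_spent:.0f} min of CPU and disk I/O on deduplication")
print(f"The pass was {progress:.0%} done when the workload was destroyed")
print("All of that effort is discarded along with the workload")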

This is just one example of an extremely inefficient use of a storage controller's capabilities and performance capacity. There are certainly ways to work around this manually, but the workarounds quickly become limiting and soon render many advanced storage management techniques useless in a cloud environment.

The solution is for storage vendors to add cloud-aware capabilities to the management interfaces (storage APIs) of the storage subsystems themselves, as well as to their advanced storage management software. The cloud management software should be able to tell the storage back-end what kind of storage it needs and how long it will need it, and it should be able to modify those policies on the fly as users inevitably change their minds about which resources, and how many of them, they need at any given time.
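As a sketch of what such a cloud-aware interface might look like, assuming hypothetical call and policy names that no shipping storage API provides today:

from dataclasses import dataclass
from datetime import timedelta
from enum import Enum

class Statefulness(Enum):
    EPHEMERAL = "ephemeral"      # scratch data: never replicate, never deduplicate
    TRANSIENT = "transient"      # short-lived: skip background optimization
    PERSISTENT = "persistent"    # long-lived: full enterprise treatment

@dataclass
class StoragePolicy:
    # Hints the cloud manager passes down to the array with every provisioning call.
    statefulness: Statefulness
    expected_lifetime: timedelta     # "gone in an hour" vs. "keep for the next 6 months"
    dedup_ok: bool = True
    replicate: bool = False

class CloudAwareArray:
    # Hypothetical cloud-aware storage controller interface.
    def provision(self, volume_name: str, size_gb: int, policy: StoragePolicy) -> str:
        # The array can decide up front whether dedup or replication is worth starting.
        ...

    def update_policy(self, volume_id: str, policy: StoragePolicy) -> None:
        # Users change their minds; the cloud manager pushes the revised policy on the fly.
        ...

    def release(self, volume_id: str) -> None:
        # Teardown driven by the cloud manager when the workload is destroyed.
        ...

# Example: a one-hour, throwaway workload -- the array knows not to bother deduplicating it.
one_hour_scratch = StoragePolicy(Statefulness.TRANSIENT, timedelta(hours=1), dedup_ok=False)

With lifetime and statefulness hints like these, the controller in the earlier example could simply decline to start a deduplication pass on a volume that will be gone within the hour.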
