Interest in storage tiering is growing, and the reason is obvious: data volumes keep expanding, and enterprises can no longer afford to store every kind of data in expensive storage repositories. There must be a hierarchy, with older, less valuable data occupying cheaper space and active data occupying more expensive, reliable, and accessible storage.

As a result, modern-day enterprises categorize data as ‘static’, ‘persistent’ or ‘cold’, though the labels vary with the nomenclature preferred in each industry.

Static or historical data is data that does not change: it is rarely accessed and never modified. The enterprise preserves it for its historical value.

Persistent data is live, active data that is constantly accessed and frequently modified. It is mission critical: the business may cease to function if this data becomes corrupted, inaccessible, or lost.

Cold data is data that is not mission critical but must be preserved to fulfill legal mandates.
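To make the taxonomy concrete, here is a minimal sketch of how these three categories might be modeled in code. The category names follow the article; the tier labels in the mapping are hypothetical, since actual tier names depend on the platform in use.

```python
from enum import Enum

class DataCategory(Enum):
    STATIC = "static"          # rarely read, never modified
    PERSISTENT = "persistent"  # live, mission-critical
    COLD = "cold"              # retained for compliance only

# Hypothetical mapping of categories to storage tiers.
TIER_FOR_CATEGORY = {
    DataCategory.PERSISTENT: "ssd-primary",
    DataCategory.STATIC: "sata-archive",
    DataCategory.COLD: "object-cold",
}
```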

The need for data tiering is obvious: not all data is equal. The amount of money an enterprise is willing to spend on historical data cannot be the same as the amount it is willing to spend on live, active, persistent data. Tiering ensures that the right amount is spent on each category of information, as the back-of-the-envelope calculation below illustrates.
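The prices and volumes in this quick calculation are assumptions chosen for illustration, not quotes from any provider:

```python
# Assumed monthly prices per GB -- illustrative only.
HOT_PRICE_PER_GB = 0.20   # high-performance SSD tier
COLD_PRICE_PER_GB = 0.01  # archival tier

data_tb = 500             # total data under management
hot_fraction = 0.10       # share that is truly active

gb = data_tb * 1024
all_hot = gb * HOT_PRICE_PER_GB
tiered = (gb * hot_fraction * HOT_PRICE_PER_GB
          + gb * (1 - hot_fraction) * COLD_PRICE_PER_GB)

print(f"Everything on the hot tier:  ${all_hot:,.0f}/month")
print(f"Tiered (10% hot, 90% cold): ${tiered:,.0f}/month")
```

Under these assumed prices, keeping only the genuinely active 10% of data on the hot tier cuts the monthly bill by roughly 85%.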

Tiered cloud storage is also becoming the sensible option for enterprises with huge volumes of data. New varieties of storage devices are available today, from high-performance solid-state drives (SSDs) to low-cost SATA arrays, and they make it possible to automate tiering in the cloud. The intelligent enterprise can maximize storage utilization, reduce costs, and avoid cumbersome, disruptive data migrations by configuring the cloud software with planned tiering settings, along the lines of the sketch below.
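What such planned tiering settings look like varies from vendor to vendor. The sketch below imagines a simple age-based demotion policy; every name and threshold in it is a hypothetical illustration, not any particular product's API.

```python
from dataclasses import dataclass

@dataclass
class TieringRule:
    """One rule in a hypothetical automated-tiering policy."""
    name: str
    min_days_since_access: int  # demote data idle at least this long
    target_tier: str            # destination storage tier

# Rules are ordered from warm to cold, so data cascades
# down the hierarchy as its idle time grows.
POLICY = [
    TieringRule("demote-warm", min_days_since_access=30, target_tier="sata-array"),
    TieringRule("demote-cold", min_days_since_access=180, target_tier="object-cold"),
]

def target_tier(days_idle: int, current_tier: str = "ssd") -> str:
    """Return the tier a piece of data should live on, given its idle time."""
    tier = current_tier
    for rule in POLICY:
        if days_idle >= rule.min_days_since_access:
            tier = rule.target_tier
    return tier

print(target_tier(5))    # ssd
print(target_tier(45))   # sata-array
print(target_tier(200))  # object-cold
```

The design choice here is a cascade: data falls from SSD to SATA to cold object storage as it ages, mirroring the hierarchy described above.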

Petabytes of thin-provisioned cloud storage can be integrated into the existing infrastructure through state-of-the-art gateway features. Dynamic caching, data reduction, encryption, bandwidth optimization, and similar capabilities instantly become available. Data stores can be augmented on demand, and all storage policies and data lifecycle management requirements can be orchestrated from a central console for complete control over data.
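The article names no specific platform, but AWS S3 lifecycle rules are one well-known way such tiering and retention policies are expressed and applied centrally. A sketch using boto3, where the bucket name, prefix, and day counts are illustrative assumptions:

```python
import boto3

s3 = boto3.client("s3")

# Transition objects to cheaper storage classes as they age,
# then expire them once the retention period ends.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-and-expire-reports",
                "Filter": {"Prefix": "reports/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 180, "StorageClass": "GLACIER"},
                ],
                # ~7 years, a common legal retention mandate
                "Expiration": {"Days": 2555},
            }
        ]
    },
)
```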