Everything is becoming “Software-defined”. A real revolution is occurring, and now is the time to join it!
SDA is the acronym, coined by the IT industry analysts Gartner, for Software-Defined Anything. Despite its almost humorous ring, it aptly describes an environment in which virtually everything can be defined by software and governed by a set of now-established standards such as OpenStack, OpenFlow or OpenRack. Behind it lies a common goal: facilitating the interoperability of all elements of an IT infrastructure, whether network, storage, servers or applications.
Currently, each supplier in each major IT area is fighting to protect its own corner, a problem that even applies to SDN (Software-Defined Networking). Even though SDN is built on the idea that network behavior can be defined and controlled in software, the suppliers, not least to protect their investments, work primarily with proprietary solutions. This long-established protectionist instinct works directly against the stated objective of simplifying data center network management by bringing existing physical equipment together under common, open standards to increase interoperability.
Of course it is legitimate and understandable that, for financial reasons, the principal players in SDN, SDDC (Software-Defined Datacenter), SDS (Software-Defined Storage) and SDI (Software-Defined Infrastructure) try to maintain their leadership in their respective domains. In effect, these suppliers have little incentive to conform to external, open standards, even where those standards clearly benefit their customer organizations. So, from the customer's point of view, an SDA environment that improves automation, interoperability and configuration across the entire datacenter is an attractive proposition.
Case in point: Software-Defined Storage, or how to migrate from an infrastructure to a service
Many organizations would benefit from moving away from the independent management of disk-array silos, which are often difficult to integrate into the datacenter and whose unused space cannot be reallocated to new applications. The Software-Defined Storage concept addresses this by placing all physical storage into a common pool. Thereafter there is nothing further to configure: the software itself finds and allocates appropriate storage from anywhere within the pool. This makes it possible to use any storage medium in the pool even for needs, such as big data, that were previously impractical to serve.
With an SDS solution it is no longer necessary to physically tie a storage medium to a particular application. When new software needs to be deployed, the administrator simply uses the administration console to define the rules governing the storage required, the service parameters and the degree of data sensitivity.
The SDS then applies the parameters and chooses the most appropriate storage solution. All storage in the datacenter, irrespective of which brand it is, or in which disk array, is grouped into a single storage pool and presented as a series of generic virtual volumes. In the traditional model, if different storage media from different sources were used then there was often poor communication between the different administration teams and a project could be slowed down by a constant need for coordination. With this new model a single team can manage the entire storage network.
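As a rough illustration, this kind of policy-based provisioning can be sketched as follows. The pool and policy structures, tier names and selection logic below are entirely hypothetical, not any vendor's actual API; the point is simply that a rule, not a human, matches a request to a backing array of any brand:

```python
from dataclasses import dataclass

# Hypothetical model of an SDS control plane choosing backing storage.
@dataclass
class StoragePool:
    name: str          # backing array; brand is irrelevant to the caller
    tier: str          # "nearline", "sas" or "ssd" (illustrative tiers)
    free_gb: int
    encrypted: bool    # suitable for sensitive data

@dataclass
class Policy:
    min_tier: str      # minimum performance tier required
    size_gb: int
    sensitive: bool    # must land on encrypted storage

TIER_RANK = {"nearline": 0, "sas": 1, "ssd": 2}

def provision(policy: Policy, pools: list[StoragePool]) -> StoragePool:
    """Pick a pool that satisfies the policy, regardless of brand."""
    candidates = [
        p for p in pools
        if TIER_RANK[p.tier] >= TIER_RANK[policy.min_tier]
        and p.free_gb >= policy.size_gb
        and (p.encrypted or not policy.sensitive)
    ]
    if not candidates:
        raise RuntimeError("no pool satisfies the policy")
    # Prefer the cheapest (lowest) tier that qualifies, then the most free space.
    best = min(candidates, key=lambda p: (TIER_RANK[p.tier], -p.free_gb))
    best.free_gb -= policy.size_gb
    return best

pools = [
    StoragePool("emc-array-1", "ssd", 500, True),
    StoragePool("netapp-2", "sas", 2000, True),
    StoragePool("commodity-jbod", "nearline", 8000, False),
]
pool = provision(Policy(min_tier="sas", size_gb=200, sensitive=True), pools)
print(pool.name)  # → netapp-2
```

The administrator writes the `Policy`; the software does the matching, which is why a single team can operate a heterogeneous storage estate.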
Effectively, through a portal providing a range of storage-related utilities, a data storage infrastructure can be transformed into a data storage service. For example, EMC’s ViPR solution allows the administrator to create rule-based automated processes and present them to the end-user, reducing the need for IT teams to be involved in fulfilling storage requests. Additionally, ViPR provides native-level interfaces to connect application or cloud frameworks such as Hadoop or OpenStack.
With an SDS solution it is also no longer a problem if, for example, a disk controller crashes: the SDS automatically switches to one of the other available resources, in much the same way that a hypervisor can move a virtual server from a failed physical host to an operational one. As a result, application users need no longer be inconvenienced by the failure of a dedicated physical storage array. Instead of a crisis in which an application must be fixed and recovered in under two hours, the system continues running and simply raises an alert informing the administrator of the faulty element. Additionally, an SDS solution like ViPR can offer plug-ins such as vRealize Operations, which can provide tailored metrics capable of identifying bottlenecks that the administrator can rectify to improve the end-user experience.
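The failover behavior described above can be sketched in miniature. This is an illustrative model, not ViPR's actual mechanism: a virtual volume keeps a list of candidate controllers and, when the active one fails, silently moves to a healthy one and emits an alert rather than an outage:

```python
# Hypothetical sketch of SDS failover: the volume stays available,
# the administrator merely receives an alert about the faulty element.
class Controller:
    def __init__(self, name):
        self.name = name
        self.healthy = True

class VirtualVolume:
    """A generic volume whose physical backing can change transparently."""
    def __init__(self, name, controllers):
        self.name = name
        self.controllers = controllers
        self.active = controllers[0]

    def ensure_available(self, alert):
        """Fail over to a healthy controller instead of raising a crisis."""
        if self.active.healthy:
            return
        failed = self.active
        spares = [c for c in self.controllers if c.healthy]
        if not spares:
            raise RuntimeError(f"{self.name}: no healthy controller left")
        self.active = spares[0]
        # The application keeps running; only the admin is notified.
        alert(f"controller {failed.name} failed; "
              f"{self.name} moved to {self.active.name}")

alerts = []
a, b = Controller("ctl-a"), Controller("ctl-b")
vol = VirtualVolume("vol1", [a, b])
a.healthy = False              # simulate a controller crash
vol.ensure_available(alerts.append)
print(vol.active.name)         # → ctl-b
```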
A good SDS effectively homogenizes all of the storage media. Previously, when faced with the need to upgrade or replace data storage, many organizations felt tied to their incumbent suppliers, because switching to a rival technology often meant the expense and inconvenience of rewriting their storage interfaces. Organizations thereby lost significant control over their costs, as vendors could largely dictate the price of replacement kit. With an SDS solution this is a thing of the past: organizations can mix and match brands and types of data storage at will, taking advantage of the best-priced products on the market at any moment. They can also make sometimes substantial savings by exploiting spare capacity that was hitherto stranded in their existing data storage pool.
This ability to use all of the storage space on each data volume can significantly improve the economics of data storage. If it is no longer necessary to over-provision disk space from the outset to anticipate the data needs of three to five years hence, the project cost savings can indeed be significant.
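A back-of-the-envelope calculation makes the point. All figures below are hypothetical: it simply compares paying for year-five capacity on day one against paying only for the capacity actually consumed each year, as a pooled, thin-provisioned model allows:

```python
# Illustrative cost comparison; every number here is an assumption.
cost_per_gb = 0.10             # assumed cost per GB per year, any currency

# Traditional model: buy the anticipated year-5 capacity up front
# and carry it for the full five years.
upfront_gb = 50_000
traditional_cost = upfront_gb * cost_per_gb * 5

# SDS/thin model: pay only for capacity actually consumed each year.
usage_per_year_gb = [10_000, 18_000, 27_000, 38_000, 50_000]
thin_cost = sum(gb * cost_per_gb for gb in usage_per_year_gb)

print(traditional_cost, thin_cost)  # → 25000.0 14300.0
```

Under these assumed growth figures the pooled model costs roughly 40% less over the period, which is the kind of saving the paragraph above alludes to.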
Article from our press partner