SDS … an inevitable evolution born out of the transformation of applications and pressure from the cloud.
Can one really speak of SDS (Software-Defined Storage) as a new phenomenon, given that storage has always been managed by software? While the question has some merit, the answer lies at least in part in the way suppliers have historically used proprietary software to lock customers into their own storage products. This is now changing, as the latch that binds the software to the hardware is starting to slide open.
SDS effectively virtualizes storage by creating a single unified pool from standard physical storage devices, together with a variety of automation and monitoring tools. Decoupling the hardware from the interface software allows more efficient load balancing, simpler operations and improved responsiveness. For the customer this promises greater flexibility, agility and room to evolve, while at the same time driving down the associated hardware and maintenance costs.
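The pooling idea described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's API: `Device`, `StoragePool` and the placement rule are invented names, and the "load balancing" is reduced to picking the device with the most free space.

```python
from dataclasses import dataclass

@dataclass
class Device:
    """One standard physical device (SAN LUN, local disk, cloud bucket...)."""
    name: str
    capacity_gb: int
    used_gb: int = 0

    def free_gb(self) -> int:
        return self.capacity_gb - self.used_gb

class StoragePool:
    """A single unified pool abstracting heterogeneous devices."""
    def __init__(self, devices):
        self.devices = list(devices)

    def total_free_gb(self) -> int:
        return sum(d.free_gb() for d in self.devices)

    def provision(self, size_gb: int) -> Device:
        # Naive load balancing: place the new volume on the device
        # with the most free space.
        target = max(self.devices, key=lambda d: d.free_gb())
        if target.free_gb() < size_gb:
            raise RuntimeError("pool exhausted")
        target.used_gb += size_gb
        return target

pool = StoragePool([Device("san-lun-0", 500), Device("local-ssd-1", 200)])
volume = pool.provision(100)   # lands on the emptiest device
print(volume.name, pool.total_free_gb())
```

The point of the sketch is that the caller never names a device: it asks the pool for capacity, and software decides placement, which is exactly the decoupling the paragraph describes.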
SDS may also signal the end of environments composed of disparate technologies, such as block-mode storage networks (SAN), file-mode storage (NAS) and object-based storage, each controlled by different administrative tools from different suppliers. At the same time, SDS accommodates the growing diversity of data: structured and unstructured, rich or complex, big data or IoT. Storage has often been isolated from the other components of the infrastructure, such as networks and servers, resulting in an environment that is difficult to administer and upgrade. This has fuelled growing interest in converged solutions that integrate the storage, network and server components, of which SDS is the part that addresses storage.
The success of SDS depends not only on its ability to deliver the same services as traditional physical solutions (snapshots, deduplication, replication, thin provisioning and so on), but also on doing so in a software layer deployable across all servers. According to Dell, this can only be achieved by adhering to three fundamental principles: abstraction of data from the physical storage media; integration of storage, compute and network resources; and orchestration at the software level. The goal is to move towards solutions that are flexible, high-performing and easy to integrate into any environment, without resorting to costly modifications or replacement of the underlying infrastructure.
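Of the classic array services listed above, thin provisioning is perhaps the easiest to illustrate in software. The minimal sketch below is purely illustrative (class and method names are assumptions, not a real product's interface): a volume advertises its full logical size, but physical blocks are consumed only when written.

```python
class ThinVolume:
    """Toy thin-provisioned volume: logical size >> physical usage."""
    BLOCK_SIZE = 4096  # bytes per block

    def __init__(self, logical_size_blocks: int):
        self.logical_size = logical_size_blocks
        self.blocks = {}  # block index -> data, allocated lazily

    def write(self, block: int, data: bytes) -> None:
        if not 0 <= block < self.logical_size:
            raise IndexError("write beyond logical size")
        self.blocks[block] = data  # physical space consumed only here

    def read(self, block: int) -> bytes:
        # Unwritten blocks read back as zeros, as on a real thin LUN.
        return self.blocks.get(block, b"\x00" * self.BLOCK_SIZE)

    def physical_usage_blocks(self) -> int:
        return len(self.blocks)

vol = ThinVolume(logical_size_blocks=1_000_000)  # ~4 GB logical
vol.write(42, b"x" * 4096)
print(vol.physical_usage_blocks())  # only the written block is allocated
```

Implementing such services once, in a common software layer above all devices, is what allows SDS to offer them uniformly regardless of the hardware underneath.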
Of course, in order to provide a reliable service it is still necessary to be able to count on tried and tested physical solutions which are properly configured and supported by a recognized supplier. Greater flexibility should not come at the price of reduced quality or reliability of the storage solution. The tendency will thus remain for customers to prefer suppliers capable of providing mature, well-tested, well-supported and widely used solutions.
Traditional solution providers are fighting back
Does all this mean that we are witnessing the imminent demise of the traditional disk array? The answer seems to be "No", not least because the dual-controller disk arrays used by many organizations are still competitive. In many cases they fulfil their role perfectly, particularly among small and medium-sized businesses, which often do not have widely distributed infrastructures.
One should also not underestimate the expertise required to assemble and manage a bespoke SDS storage solution. Most small and medium-sized companies lack the resources to attempt this, which acts as a natural brake on the rapid, widespread adoption of SDS solutions.
What is more, an increasing number of suppliers are offering significant discounts in order to retain their customer base. Discounts of 35% and more can be found, which in many cases negates the short-term savings promised by SDS solutions. In fact, in the small/medium and medium/large business markets it is really hyperconverged solutions that pose the real threat to existing storage offerings.
In large enterprises it is often organizational inertia that impedes the adoption of SDS. Many IT departments consist of storage specialists, network specialists and systems administrators, all of whom develop along their own distinct paths, whereas building a high-performing SDS solution requires multidisciplinary teams. Such teams take time to create, and usually require sensitive handling of change at the HR level.
The sheer volume of data held by many large organizations is also such that any migration towards a massively distributed storage solution can only be achieved within the constraints of a long term and carefully planned software and hardware program, especially where each data storage migration can have an impact on multiple applications.
All of this cannot be achieved overnight.
The tempting promises of SDS
° Hardware flexibility - by separating the physical storage from its controlling software, companies give themselves a wider choice of what to deploy and when.
° Faster access to new hardware technologies - the flexibility to upgrade to faster or higher-capacity storage as soon as it becomes available, without waiting several years for it to be integrated into a compatible array.
° Simplified license management - with the purchase of new-generation storage technology there is no longer any need for expensive proprietary licensing.
° Support for multiple generations - because an SDS environment can manage all physical storage, upgrades can be made progressively over the entire lifetime of the SDS, without subsequent data migrations or disruptive system updates.
Article from our press partner