Enterprises have embraced the “big data” era and are actively seeking the ability to mine growing data volumes for business insights. Predictive analytics tools allow them to evaluate large amounts of structured and unstructured data in search of patterns that can help drive the decision-making process.
To fully harness these capabilities, enterprises need a storage infrastructure that can adapt elastically to changing workloads and deliver near-instantaneous access to resources. They need an alternative to the traditional hardware-centric approach to meet capacity growth, application demands and cloud deployments.
Software-defined storage (SDS) is emerging as a potential game-changer for the modern data center. SDS refers to a storage platform in which capacity is pooled on commodity hardware and controlled, provisioned and orchestrated via an independent software stack for high levels of automation.
Industry analyst firm Gartner predicts that by 2020, between 70 percent and 80 percent of unstructured data will be held on lower-cost storage managed by SDS environments. Additionally, the firm says as much as 70 percent of existing storage array products will also be available in “software only” versions by then.
Still, there is considerable confusion about the technology. After all, data storage infrastructures have always used software to administer hardware — so what makes SDS different?
While it is true that traditional storage systems have always derived much of their functionality from software, they also needed an application-specific integrated circuit (ASIC), a specialized CPU or a dedicated controller to perform some of their storage functions. In SDS, the software stack is completely decoupled from the hardware: any storage software can be installed on commodity, off-the-shelf hardware.
SDS is also frequently confused with storage virtualization, but there are significant differences. While storage virtualization aggregates the capacity of multiple devices or arrays into a single pool of storage, SDS goes much further. In SDS, the actual storage management programming is separated from the hardware, allowing IT to centrally administer storage services — provisioning, orchestration, change management, monitoring, reporting, de-duplication, I/O optimization and more — for the entire storage infrastructure.
This central control promises to significantly improve the efficiency of the data storage infrastructure by creating a shared storage pool that is controlled and automated through a single interface. Changes are made in the common software layer rather than in multiple individual storage devices, greatly reducing repetitive administrative functions. Centralized management will also make it easier to balance workloads to avoid performance degradations and outages.
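The idea of a single software layer pooling capacity and provisioning it through one interface can be made concrete with a minimal sketch. The class and method names below are purely illustrative assumptions, not any vendor's actual API; real SDS platforms add replication, placement policies and failure handling on top of this pattern.

```python
# Hypothetical sketch: an SDS-style control plane that pools capacity
# from commodity nodes and provisions volumes through one interface.
# All names here are illustrative, not a real product's API.

class StorageNode:
    """One commodity storage server contributing capacity to the pool."""

    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb
        self.used_gb = 0

    @property
    def free_gb(self):
        return self.capacity_gb - self.used_gb


class SdsController:
    """Single software interface over a shared pool of nodes."""

    def __init__(self, nodes):
        self.nodes = list(nodes)
        self.volumes = {}  # volume name -> (node name, size)

    @property
    def pool_free_gb(self):
        # The whole pool appears as one capacity figure to callers.
        return sum(n.free_gb for n in self.nodes)

    def provision_volume(self, name, size_gb):
        # Place the volume on the node with the most free capacity --
        # a stand-in for the balancing policies a real platform applies.
        node = max(self.nodes, key=lambda n: n.free_gb)
        if node.free_gb < size_gb:
            raise RuntimeError("insufficient capacity in pool")
        node.used_gb += size_gb
        self.volumes[name] = (node.name, size_gb)
        return node.name


ctrl = SdsController([StorageNode("n1", 100), StorageNode("n2", 200)])
placed_on = ctrl.provision_volume("db-vol", 50)
print(placed_on, ctrl.pool_free_gb)  # n2 250
```

The point of the sketch is the shape of the control flow: a change such as adding a node or a placement rule happens once, in the software layer, rather than on each individual device.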
Dynamic and Agile
Sophisticated orchestration capabilities give IT the flexibility and agility to automatically provision storage across the entire pool according to current workloads. Because SDS requires minimal manual tuning and configuration, administrators can add storage capacity dynamically in minutes rather than the months it can take to configure and implement traditional storage hardware systems. As a result, SDS enables organizations to better utilize storage resources in a simplified, efficient and scalable infrastructure while reducing hardware costs and increasing storage capacity.
The software layer can also help preserve business continuity by protecting all committed data in the event of a disaster, whereas traditional storage replication can risk the loss of 15 minutes of data or more. Both speed and data protection are essential to organizations in data-driven industries such as financial services, healthcare, retail and telecommunications as they seek to deploy new workloads.
The ability to leverage new storage solutions while continuing to utilize existing infrastructure also sets SDS apart. Freed from proprietary operating systems and interfaces, organizations can use open APIs to aggregate storage from existing and new storage arrays, while also federating storage from many underlying resources — including disk, tape, flash and cloud-based platforms. This facilitates a scale-out architecture with practically limitless virtual capacity, regardless of location or device.
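The federation idea described above can also be sketched briefly: heterogeneous backends sit behind one software interface, and callers see a single namespace. The backend names and the simple tiering policy are assumptions made for illustration, not any platform's real API.

```python
# Hypothetical sketch: federating heterogeneous backends (flash array,
# cloud object store) behind one software interface. Names and the
# tiering policy are illustrative assumptions, not a real API.

class Backend:
    """One underlying storage resource contributing to the federation."""

    def __init__(self, name, tier):
        self.name = name
        self.tier = tier          # e.g. "hot" (flash) or "cold" (cloud)
        self.objects = {}

    def put(self, key, data):
        self.objects[key] = data

    def get(self, key):
        return self.objects[key]


class FederatedPool:
    """Routes data to a backend by tier; callers see one namespace."""

    def __init__(self, backends):
        self.backends = backends
        self.index = {}           # key -> backend holding it

    def put(self, key, data, tier="hot"):
        # Pick the first backend matching the requested tier.
        backend = next(b for b in self.backends if b.tier == tier)
        backend.put(key, data)
        self.index[key] = backend

    def get(self, key):
        # The caller never names a device or location.
        return self.index[key].get(key)


pool = FederatedPool([Backend("flash-array", "hot"),
                      Backend("cloud-bucket", "cold")])
pool.put("orders.db", b"live data", tier="hot")
pool.put("2019-archive", b"old data", tier="cold")
print(pool.index["2019-archive"].name)  # cloud-bucket
```

Adding a new backend type, such as a tape library, would mean registering one more `Backend` with the pool; callers and their code paths would be unchanged, which is the scale-out property the article describes.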
Because the SDS control layer runs on hardware from virtually any vendor, organizations avoid lock-in to a single supplier and can utilize less expensive commodity hardware without sacrificing performance. SDS also gives IT a holistic view of the entire storage environment, making it easier to proactively forecast, plan and budget for future storage needs without overprovisioning.
As the amount of data being produced grows each day and IT infrastructures become more complex, it is becoming clear that traditional storage methods are unsustainable. A new approach is necessary to not only store this data, but to secure, find and access the data in order to extract business value from it. Although SDS is still an emerging market lacking clear definition, it bears watching. With the potential to reduce hardware costs, expand capacity, optimize performance and centralize management, software-defined storage offers a compelling proposition for organizations looking to drive growth through data-driven decisions.
“Software-based storage will slowly but surely become a dominant part of every data center, either as a component of a software-defined data center or simply as a means to store data more efficiently and cost-effectively,” said Ashish Nadkarni, Research Director for IDC’s Storage Systems and Software market research practice. “With a consistent and coherent set of definitions, suppliers can collectively help buyers realize the vision for SBS platforms.”