
The software-defined revolution is well underway in every cutting-edge data center. However, this shift toward "software-defined" is no longer confined to the large public cloud providers that pioneered it. Today, companies of all sizes are exploring and implementing the software-defined data center.

While software-defined compute through virtualization is now commonplace, the next phase is hyper-converged storage. Many companies struggle with this step, however, because there is confusion about what hyper-converged storage actually is and how a good solution should be evaluated.

At its heart, hyper-converged storage delivers enterprise-class storage services, advanced features, and management capabilities on commodity hardware using specialized software. Some vendors use the term hyper-converged storage or Software-Defined Storage (SDS) to refer only to storage management software. However, a true hyper-converged or SDS solution addresses both the "Control Plane" and the "Data Plane."

What is the Control Plane? 

The Control Plane is the software layer that manages data stored across one or more storage pools, which are implemented by storage systems and/or server-side storage. The Control Plane manages data provisioning, and may orchestrate data services across these storage pools.  However, it does not access the data directly for Read and Write operations, and does not implement the native data services.
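To make the distinction concrete, here is a minimal Python sketch of what a Control Plane does: it decides where a volume should live across the available storage pools, but it never performs the Reads and Writes itself. The names here (StoragePool, provision_volume, the most-free-capacity placement rule) are purely illustrative assumptions, not any vendor's API.

```python
from dataclasses import dataclass, field

@dataclass
class StoragePool:
    """A pool backed by a storage system or server-side storage."""
    name: str
    capacity_gb: int
    used_gb: int = 0

@dataclass
class ControlPlane:
    """Manages provisioning and placement across pools; never touches the data path."""
    pools: list = field(default_factory=list)

    def provision_volume(self, size_gb: int) -> str:
        # Illustrative policy: place the volume on the pool with the most free capacity.
        pool = max(self.pools, key=lambda p: p.capacity_gb - p.used_gb)
        if pool.capacity_gb - pool.used_gb < size_gb:
            raise RuntimeError("No pool has enough free capacity")
        pool.used_gb += size_gb
        return f"{pool.name}/vol-{pool.used_gb}"

if __name__ == "__main__":
    cp = ControlPlane(pools=[StoragePool("ssd-pool", 1024), StoragePool("hdd-pool", 4096)])
    print(cp.provision_volume(256))  # the volume is placed, but no I/O is performed here
```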

What is the Data Plane?

The Data Plane is the software layer that manages data layout, storage devices, and Read/Write operations to data stored on storage devices such as SSD and magnetic disk drives. The Data Plane can also provide native data services such as snapshots, clones, and replication, as well as capacity optimization.
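By contrast, a Data Plane owns the I/O path itself. The sketch below, again with hypothetical names and a deliberately simplified in-memory block map, shows the kind of responsibilities it carries: servicing Reads and Writes and providing a native data service such as a snapshot (real systems use copy-on-write metadata rather than copying the map).

```python
class DataPlane:
    """Owns the data path: block layout, Read/Write I/O, and native services like snapshots."""

    def __init__(self):
        self.blocks = {}      # block address -> data
        self.snapshots = {}   # snapshot name -> frozen copy of the block map

    def write(self, address: int, data: bytes) -> None:
        self.blocks[address] = data

    def read(self, address: int) -> bytes:
        return self.blocks.get(address, b"\x00" * 4096)  # unwritten blocks read as zeros

    def snapshot(self, name: str) -> None:
        # Point-in-time copy of the block map; stands in for copy-on-write snapshots.
        self.snapshots[name] = dict(self.blocks)

if __name__ == "__main__":
    dp = DataPlane()
    dp.write(0, b"hello")
    dp.snapshot("before-upgrade")
    dp.write(0, b"world")
    print(dp.read(0), dp.snapshots["before-upgrade"][0])  # current data vs. snapshot data
```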

A true hyper-converged storage solution should address both the “Control Plane” and “Data Plane.”

Evaluating a Hyper-converged Solution

Knowing what to look for in a hyper-converged solution is critical, because it should deliver significant benefits over your traditional storage. Simply layering more software onto the problem is not enough. A hyper-converged solution should be evaluated against the following key criteria.

Choice

  • Works on any x86 server
  • Works with any hypervisor or abstraction layer
  • Uses mixed drive types

Manageability

  • VM-centric data services such as snapshots and clones
  • Single pane of glass for VM and data management
  • Pre-configured and pre-validated reference architectures

Scalability

  • Scale-out and scale-up
  • Scale compute and storage independently

Resiliency/HA

  • Data availability and data integrity
  • Data mirroring
  • Metro-cluster support

This "bare-bones" list is only the beginning, however. Because any hyper-converged solution is an investment in your future, a few additional questions deserve attention: What kinds of data services does it provide? Does it tackle capacity optimization to get the most out of the storage? And what OPEX and CAPEX savings will my business see?

While traditional enterprise storage approaches require proprietary or custom storage systems with higher up-front acquisition and ongoing operational costs, the software-defined storage revolution should lessen such constraints. Hyper-converged solutions, like the Maxta Storage Platform (MxSP), deliver greater freedom of choice over servers and storage configurations, eliminate proprietary hardware, and increase the value of software.

However, it is critical that your hyper-converged solution works seamlessly. Traditional storage approaches ensured this by tightly controlling the bill of materials; the software-centric approach opens up far more hardware choices. That freedom is welcome, but it also means the individual component vendors must work together to certify the entire hyper-converged solution.

Luckily, there is help in this arena, and it comes from the software side. Maxta has developed software, deployed in a virtual machine, that certifies the interoperability of the individual components of your hyper-converged solution. At the end of the certification process, the software provides a "go" or "no-go" on the hardware configuration and status. The certification software not only verifies the interoperability of components, but also stress-tests the configuration so you know your solution won't fail under heavy load.

Additionally, Maxta enjoys strong relationships with both Intel and Broadcom.  This provides expertise on the two key components needed for a successful hyper-converged solution: the system and storage controllers.

Despite the confusing proliferation of hardware and software, you shouldn't be afraid to join the software-defined data center revolution by exploring a hyper-convergence strategy today. Putting your trust in a proven leader like Maxta means your company will see significant CAPEX and OPEX savings on your storage solution!