
For years, the basic design of the data center has rested on three distinct pillars: servers running applications, storage arrays storing and protecting data, and networks providing communications both inside and outside the firewall.

However, the vast improvements in both chip speed and multi-core design over the last 10 years have opened the gateway to machine virtualization, or software-defined computing, now a common practice in almost every data center. Applications are no longer tied to any physical machine; instead, they run virtually on a shared pool of compute resources.

So why not apply the lessons of machine virtualization to the other two pillars of the traditional data center? Why not build abstraction layers for both storage and networking, and run them as shared resource pools as well?

Enter the “software-defined” data center (SDDC) vision—where compute, storage, and networking are no longer defined by the hardware of individual servers, storage arrays, switches, and gateways. 

The SDDC represents a fundamental change in thinking, but the growing SDDC trend is accompanied by another change on the hardware side. That change is hyper-convergence.

Hyper-convergence enables compute and storage to run on the same server platform without compromising storage sharing or functionality. Running both on the same platform allows storage to be managed with the same constructs as virtual machines, further simplifying IT management and increasing IT agility.

This means storage is now "VM-centric" and can be assigned directly to VMs without having to worry about volumes, LUNs, or storage networking. This is accomplished not only by physically co-locating storage with compute, but also through a storage abstraction platform that works in conjunction with your server virtualization solution.
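To make that contrast concrete, here is a minimal sketch in Python of what VM-centric provisioning looks like conceptually. Every name in it (StoragePolicy, HyperConvergedCluster, and so on) is invented for illustration and is not the API of Maxta or any other product; the point is simply that capacity is requested as a per-VM policy against a shared pool, rather than carved out of arrays as volumes or LUNs.

```python
# Hypothetical sketch of VM-centric provisioning: storage is declared as a
# per-VM policy and drawn from a shared pool built from the servers' local
# disks. No volumes, LUNs, or storage networking are exposed to the admin.
from dataclasses import dataclass, field

@dataclass
class StoragePolicy:
    capacity_gb: int          # usable capacity presented to the VM
    replicas: int = 2         # copies kept across the server pool
    flash_tier: bool = True   # keep hot data on SSD

@dataclass
class VirtualMachine:
    name: str
    vcpus: int
    memory_gb: int
    storage: StoragePolicy    # storage is an attribute of the VM itself

@dataclass
class HyperConvergedCluster:
    """A pool of x86 servers contributing local disks to shared storage."""
    pool_capacity_gb: int
    vms: list = field(default_factory=list)

    def deploy(self, vm: VirtualMachine) -> None:
        # Raw consumption is the VM's capacity times its replica count.
        needed = vm.storage.capacity_gb * vm.storage.replicas
        if needed > self.pool_capacity_gb:
            raise RuntimeError(f"pool exhausted deploying {vm.name}")
        self.pool_capacity_gb -= needed
        self.vms.append(vm)

cluster = HyperConvergedCluster(pool_capacity_gb=10_000)
cluster.deploy(VirtualMachine("web01", vcpus=4, memory_gb=16,
                              storage=StoragePolicy(capacity_gb=200)))
print(f"{cluster.pool_capacity_gb} GB left in the shared pool")
```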

However, assembling a hyper-converged storage solution that delivers on the SDDC vision is a complex and daunting task. It requires hardware and software components to work together flawlessly. Without careful planning, tested reference architectures, and storage-specific knowledge, an SDS deployment can easily end up locked into a specific virtualization solution or hardware vendor, negating the sought-after benefits of the SDDC.

Maxta maximizes the promise of hyper-convergence. Maxta solutions give organizations the choice to deploy hyper-convergence on any x86 server, with any hypervisor and any combination of storage devices. Maxta addresses the challenge of hardware and software compatibility through a software validation ecosystem of partners. An important component of a validated SDS solution is interoperability with the server platform and storage controller. Intel® Xeon® processors, Intel® SSDs, and Broadcom MegaRAID® SAS controllers are optimized for the SDDC and have been validated with the Maxta platform, creating a solution that is simple to manage at the VM level and reduces IT management overhead to further maximize cost savings. No wonder Gartner lists Maxta as one of its 2015 Cool Vendors for Storage Technologies.

Without a doubt, the data center of tomorrow is being defined today, and the SDDC vision combined with hyper-convergence is more than a fad; it's the future. IT professionals who embrace the move toward a software-defined future will be well prepared to help their companies succeed not only today, but also tomorrow.

The details of these compatibility challenges and their solutions are addressed in this blog series.