It’s no secret: software-defined storage (SDS) platforms are disrupting traditional storage architectures. Companies of all sizes are moving toward the hyper-converged compute/storage architecture favored by large public cloud providers. In this hyper-converged scale-out model, the hardware building blocks are rack-mounted, industry-standard servers with internal drive bays that can be populated with direct-attached disk drives and SSDs.
But will any server with direct attached storage work for your hyper-converged solution? The answer is paradoxically both yes and no.
Today, much of the server and storage hardware on the market is commoditized, which means there are many choices for both server and storage components. The ability to run on the hardware of your choice is a key consideration when selecting the hyper-converged solution that’s right for you: it lets you use familiar management and monitoring tools, keep a familiar support model, and leverage existing servers within your environment.
However, with choice comes responsibility. Traditional storage arrays relied on Bill of Materials (BOM) qualification to ensure the interoperability of all components within a solution, but that approach leads to the vendor lock-in many companies seek to avoid. At the same time, interoperability must still be considered: simply putting any server and storage together won’t result in a resilient hyper-converged solution.
In a true hyper-converged model, the marriage of software and hardware empowers compute and storage to run on the same server platform without compromising storage performance or functionality. The software provider should do the interoperability testing across a broad selection of server and storage hardware as well as virtualization platforms, giving you the choice to deploy hyper-convergence on any x86 server, with any hypervisor, and with any combination of storage devices. Many hyper-converged vendors take the easy way out here and validate exclusively on one or two hardware platforms. That model recreates the exact vendor lock-in we were trying to escape with traditional storage arrays.
A hyper-converged solution depends mainly on two key components: storage devices and servers. Success as a hyper-converged solution provider therefore requires interoperability between the two. Maxta satisfies this requirement through strong partnerships and technical integration with industry leaders such as Intel and Broadcom. Going beyond software integration, Maxta’s MaxDeploy appliances deliver a new and flexible way of deploying hyper-converged solutions for the virtual data center: a build-to-order model that combines Maxta’s software with partner solutions and platforms.
So while hyper-convergence should be all about choice, the wise IT professional knows that real expertise is needed to build a true hyper-converged solution. Slapping server and storage components into a rack will not deliver the gains you seek. But you shouldn’t be afraid of the SDS revolution that’s freeing you from vendor lock-in, either. Choosing the right server and storage hardware is far simpler with a hyper-converged vendor that lets you confidently build a solution from components and platforms you already know.