Chances are, if you’re reading this article, you’re one of the many IT leaders looking into how hyper-convergence can help you better manage your data centre infrastructure. Alan Hyde outlines some key issues you should consider.
Hyper-convergence is catching on in organisations of all sizes and across all industries because it allows IT teams to simplify the deployment and management of virtualised resources.
It used to be that it could take weeks or even months to pair up and deploy best-in-class compute, storage and networking resources. Each element had its own management plane and required specific and expensive expertise to deploy and manage. Convergence brought disparate platforms closer together, providing compatibility and shared control through common software elements that reduced complexity and cost. Deployment times shrank from weeks to days or hours.
Hyper-convergence takes this to the next logical level, bringing compute and storage resources together in a single pre-configured appliance with one set of management controls rather than individual piece parts. Think of it as the data centre equivalent of plug-and-play, with all of the complexity of servers, storage and networking hidden behind simple software-based controls. Complexity is removed, along with the operating costs associated with managing that complexity. Deployment times can be reduced to minutes.
So hyper-converged appliances give faster deployment, simpler operation, and lower operating costs. What’s not to like? But before you jump on the bandwagon, you need to understand that hyper-convergence is not for everybody – it’s a great solution for some use cases and there are probably better alternatives for other use cases.
The point of hyper-convergence is to remove complexity and cost from your data centre, so any solution that doesn’t do that isn’t worth your time. As you evaluate hyper-converged appliance options from different vendors, here are six things you should consider:
A hyper-converged solution should provide simplicity in two areas – growth and operations. First, it should give you the flexibility to handle growth and the agility to ramp new applications or services quickly. In traditional IT, scaling can be a major event. In a hyper-converged environment, scaling should be a feature that is just another part of day-to-day operations. Scaling isn’t just about data or user growth, it’s about growing the business as well. To keep up with competition and customer demands, deploying new services must be quick and painless or the business suffers.
Second, you’re looking for operational simplicity. It’s becoming more difficult and costly to find and retain IT staff with specialised knowledge of specific systems and solutions (think server team, storage team, etc). Hyper-converged appliances allow you to take more of a generalist approach, with the best solutions being simple enough that you don’t need specialised knowledge to run them. One way to gauge operational simplicity from the beginning is setup. Setup should be simple, whether you’re standing up a new cluster or adding a node to an existing one, and should look something like this: unbox the appliance, mount it in a rack, plug it in, power it on, run a simple deployment wizard and start provisioning VMs. You should be able to get from power-on to provisioning in minutes. As more boxes are added, they simply expand the existing resource pools.
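The scale-out idea above can be sketched in a few lines. This is a hypothetical illustration only (the `Node` and `Cluster` names are invented for this sketch, not any vendor's API): adding a node is a routine operation that simply enlarges the shared compute and storage pools.

```python
# Hypothetical sketch of hyper-converged scale-out: a new node's resources
# join the cluster's existing pools with no re-architecting.
from dataclasses import dataclass

@dataclass
class Node:
    cpu_cores: int
    storage_tb: int

class Cluster:
    def __init__(self):
        self.nodes = []

    def add_node(self, node):
        # Adding capacity is just another day-to-day operation.
        self.nodes.append(node)

    @property
    def cpu_pool(self):
        # The cluster presents one aggregate compute pool.
        return sum(n.cpu_cores for n in self.nodes)

    @property
    def storage_pool_tb(self):
        # Likewise, one aggregate storage pool.
        return sum(n.storage_tb for n in self.nodes)

cluster = Cluster()
cluster.add_node(Node(cpu_cores=32, storage_tb=20))
cluster.add_node(Node(cpu_cores=32, storage_tb=20))  # scaling event = one call
print(cluster.cpu_pool, cluster.storage_pool_tb)     # pools grow automatically
```

The point of the sketch is the shape of the operation: one call per appliance, and the pools that VMs draw from grow without any per-element reconfiguration.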
To scale with ease, data needs to be fluid. A hyper-converged solution should allow data to move between different storage tiers (flash, disk, etc.) to meet SLA requirements. Data also needs to be able to move to new systems to handle common events like system failures and new technology adoption.
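One way to picture SLA-driven tier placement is as a policy function: pick the most economical tier that still meets the workload's latency target. The tier names and latency thresholds below are illustrative assumptions, not product figures.

```python
# Hypothetical SLA-driven tier placement. Tiers are ordered fastest
# (most expensive) to slowest (cheapest); thresholds are invented.
TIERS = [
    ("flash", 1),     # max typical latency this tier delivers, in ms
    ("disk", 10),
    ("archive", 100),
]

def place(sla_latency_ms):
    """Return the cheapest tier that still meets the SLA latency target."""
    chosen = TIERS[0][0]  # default to the fastest tier
    for tier, tier_latency_ms in TIERS:
        if tier_latency_ms <= sla_latency_ms:
            chosen = tier  # a slower, cheaper tier still satisfies the SLA
    return chosen

print(place(5))    # a 5 ms SLA needs flash
print(place(50))   # a 50 ms SLA is fine on disk
print(place(200))  # cold data can live on the archive tier
```

In a real system the policy would also weigh access frequency and capacity, but the design choice is the same: the SLA drives placement, and data moves as its requirements change.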
To allow data to move, an infrastructure should be built with agnostic components to open up the ability to integrate different hardware form factors, media, hypervisors and open source technologies. This allows you to change your mind, change your business, and change your infrastructure resourcing.
While all the movement, change and adapting is happening, the business must stay up and running. This requires a look under the hood to see how the components handle failure. Is there striping across the disks and systems? What’s the reliability level and is it proven? And is there component redundancy? It’s the little things that can sometimes cause the biggest problems, and downtime is not an option.
While the infrastructure is designed for business continuity, that doesn’t guard against human error, surprise audits, natural disasters and ever-changing policies. Look for features like RAID or mirroring, replication both within a site and between sites, and automated disaster recovery workflows to cover both short-term and long-term data protection. Doing so means you can retrieve a lost file, replace a corrupt database, keep continuity in case of device failure, or spin up a new site in case of disaster.
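The layered protection described above can be sketched as follows. This is a conceptual illustration under assumed names (`protected_write`, plain dictionaries standing in for nodes and sites), not any vendor's replication mechanism: each write is mirrored locally to survive a device failure, and replicated to a second site to survive a site-wide disaster.

```python
# Hypothetical sketch of layered data protection:
# local mirroring covers device failure; site-level replication covers disaster.
def protected_write(block_id, data, local_nodes, remote_site):
    # Local mirroring: keep two copies on different nodes so one
    # disk or node failure never loses the block.
    for node in local_nodes[:2]:
        node[block_id] = data
    # Site-level replication: a third copy at a second site means a
    # new site can be spun up if the primary is lost.
    remote_site[block_id] = data

nodes = [{}, {}, {}]   # stand-ins for nodes at the primary site
remote = {}            # stand-in for the disaster-recovery site
protected_write("db-001", b"payload", nodes, remote)
# The block is now readable from either local mirror or the remote site.
assert nodes[0]["db-001"] == nodes[1]["db-001"] == remote["db-001"]
```

Real products layer RAID, snapshots and asynchronous replication with far more nuance, but the principle is the one the article names: multiple copies at multiple scopes, each covering a different failure mode.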
The way forward
Hyper-converged appliances give you the convenience of the entire infrastructure stack – compute, storage, hypervisor, and management – in a single fully integrated compact system.
This can remove resource silos, complexity, and operating costs from your data centre and remote site operations. With the right solution, hyper-converged appliances can peacefully coexist with your existing environment, allowing you to phase in the benefits of a hyper-converged approach.
Alan Hyde is vice president and general manager, Enterprise Group, Hewlett Packard Enterprise South Pacific.