Posted by Tony Ocampo on Feb 26, 2019 10:00:00 AM
In my last blog post, I discussed the differences between hyper-converged infrastructure and traditional data center infrastructure. Now that you understand the value of shifting to hyper-converged infrastructure, what do you do next? Let’s say you’re preparing for your data center refresh. You’re planning for some growth in storage and performance, and you have to future-proof for the next three or perhaps even five years. Not only that, but you also have to reduce your total cost of ownership (TCO), which involves reducing your data center space, cooling, and power. You’ll be looking to streamline your operations and gain cost efficiencies along the way, too.
Those are common objectives you hear from IT leadership, right? The good news is that hyper-converged infrastructure addresses all of these needs, and the biggest game changer in hyper-converged infrastructure is storage. Storage represents one of the biggest footprints—if not the biggest—in a traditional infrastructure. A storage array that holds terabytes of data on multiple redundant disks can span racks and racks of data center space. The storage array's resources are shared across multiple compute nodes to allow for host clustering, which gives the guest virtual machines the redundancy and high availability required of a virtual data center. It also gives workloads a robust, resilient repository of guest virtual machines should a compute node fail.
Storage arrays rely on a storage area network (SAN) to deliver data to the back end of the compute nodes so that workloads and applications can access it. A dedicated network or fabric is purpose-built for this function. For the most part, storage in a traditional data center consists of hardware, plus some software that helps it do its job.
In hyper-converged infrastructure, storage is software-defined (SDS). It still runs on physical hardware, but those resources are mated to the compute nodes and can be aggregated, or pooled, so each guest virtual machine gets a slice of the combined pool's capacity and performance. The "software-defined" part comes from the abstraction layer over all of the mated hardware resources and compute nodes, which tricks the hypervisor into thinking it is talking to a full storage array, with all the bells and whistles that come with one. It doesn't end there: new disk media technologies and innovative software further shrink the physical footprint, significantly reducing power and cooling costs while still moving data at lightning speed.
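To make the pooling idea concrete, here is a minimal sketch (not any vendor's actual implementation; the node names and disk sizes are hypothetical) of how node-local disks aggregate into the single logical pool that SDS presents to the hypervisor:

```python
# Illustrative sketch: aggregating node-local disks into one shared pool,
# the way software-defined storage presents them to the hypervisor.
# Node names and capacities below are hypothetical examples.

def pooled_capacity_tb(nodes):
    """Sum the raw capacity of every disk on every node in the cluster."""
    return sum(sum(node["disks_tb"]) for node in nodes)

cluster = [
    {"name": "node-1", "disks_tb": [3.84, 3.84, 3.84]},
    {"name": "node-2", "disks_tb": [3.84, 3.84, 3.84]},
    {"name": "node-3", "disks_tb": [3.84, 3.84, 3.84]},
]

raw_tb = pooled_capacity_tb(cluster)
print(f"Raw pooled capacity: {raw_tb:.2f} TB")  # 34.56 TB across 3 nodes
```

Adding a node with its own disks simply grows the pool, which is why hyper-converged clusters scale capacity and performance together.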
Determining the storage capacity and performance that applications need is a different paradigm now. Older RAID algorithms have been replaced with newer ones that yield exponentially larger "effective" storage capacity. The number of redundant copies of data is configurable. Storage resources can be placed close to compute resources, giving workloads the shortest path to their data, so applications no longer depend on a high-latency storage network or fabric to reach it.
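The trade-off behind configurable redundancy can be shown with simple arithmetic. This is a hedged illustration (the raw capacity figure is a hypothetical example, and real SDS platforms add metadata and rebuild overheads on top): each logical byte is stored as multiple replicas, so usable capacity is the raw pool divided by the replica count.

```python
# Illustrative arithmetic only: effective (usable) capacity under a
# configurable replica count. Real platforms subtract further overhead
# for metadata, slack space, and rebuild headroom.

def effective_capacity_tb(raw_tb, replicas):
    """Each logical byte is written `replicas` times across the pool."""
    return raw_tb / replicas

raw_tb = 34.56  # hypothetical raw pooled capacity

print(effective_capacity_tb(raw_tb, 2))  # 2 copies -> 17.28 TB usable
print(effective_capacity_tb(raw_tb, 3))  # 3 copies -> 11.52 TB usable
```

More replicas buy you more failure tolerance at the cost of usable capacity; newer erasure-coding schemes tilt that trade-off further in your favor than classic mirroring did.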
What, exactly, does all of this mean? Your organization gets the information rapidly – at a fraction of the cost.
Take a good look at hyper-converged infrastructure if you are considering a data center refresh to improve storage—and at the end of the day, it’s really all about storage.
[ GUIDE ]
EXPLORING THE MODERN DATA CENTER
Is your data center keeping up with your digital transformation demands?
ConvergeOne's Data Center Experts have written a guide that provides valuable insights about how you can strategically design your data center infrastructure to power the technology of tomorrow. The following areas are explored:
Active/Active Data Centers
Application Centric Infrastructure (ACI)