Defining Converged Architecture and Understanding Use-Cases
Converged infrastructure is quickly becoming a hot topic for organizations, IT administrators, and business leaders. After all, it’s a new way to deliver resources, control multi-tenancy, and even optimize business processes. In fact, trends around converged systems continue to show growth in the segment. A recent IDC report looked at Q3 2015 and how it impacted the converged system market. During that quarter alone, worldwide converged systems revenue increased 6.2% year over year to $2.5 billion. The market generated 1,261 petabytes of new storage capacity shipments during the quarter, up 34.8% compared to the same period a year earlier.
“The overall market is showing growth but that growth is primarily driven by very rapid growth in the hyperconverged systems market,” said Kevin M. Permenter, senior research analyst, Enterprise Servers. “Smaller and more flexible systems, like the ones found in the hyperconverged space, were well positioned to take advantage of the burgeoning mid-market customer segment this quarter.”
Finally, the report included a striking statistic on the pace of growth in the converged market: hyperconverged sales grew 155.3% year over year during the third quarter of 2015, generating more than $278.8 million in sales. This amounted to 10.9% of the total market value.
With all of this in mind, it’s important to pause and understand the types of modern data center systems out there today. Specifically, we have traditional infrastructure, converged infrastructure, and hyperconverged infrastructure.
With these three different types of environments, there are use-cases as well as considerations when it comes to deployment. Let’s define these types of architectures and see where they fit in:
- Traditional infrastructure. This is the original way to deploy workloads and data center resources. You have individual silos of compute, network, and storage. This means each piece of your infrastructure runs independently, while still utilizing some type of management technology. In these instances, virtualization helps with resource management and VM delivery. Basically, processing is done on the physical servers, while data is stored and delivered on some type of SAN or NAS ecosystem. Networking is handled at an independent layer as well, with top-of-rack or distributed switching and networking components.
- Use-cases: These are absolutely great for standalone workloads. In some cases you require an independent server for a remote location, or are deploying a smaller office. Similarly, you might be investing in one type of storage, another type of network, and yet another type of compute infrastructure. Although these heterogeneous architectures can create some level of complexity – if you have a good management environment, this can still work for you. If you still experience positive economics and a well-managed environment, working with traditional systems can make sense.
- Converged Infrastructure (CI). With CI you see the integration of core resources and delivery technologies. The big premise behind CI is that it comes as a pre-validated reference architecture capable of being deployed in strategic data center building blocks. CI brings network, storage, and compute resources together in one integrated system. Here, you can remove data center resource silos and really begin to optimize virtual workloads. In these scenarios, management is done either at the hypervisor layer or through the CI management console or tool.
- Use-cases: CI helps organizations in a number of ways. First, since this is a validated reference architecture – you know you’re working with an easy-to-deploy data center environment. Most of all, you reduce deployment risk and speed up deployments. Finally, use-cases here also include requirements around rapid roll-outs. Mergers and acquisitions, for example, or the need to deploy new business units can all benefit from CI systems.
- Hyperconverged infrastructure (HCI). There are a number of similarities between HCI and CI environments. However, the biggest difference comes in how these environments are managed. In HCI, the management layer – storage, for example – is controlled at the virtual layer. Specifically, HCI incorporates a virtual appliance which runs within the cluster. This virtual controller runs on each node within the cluster to ensure better failover capabilities, resiliency, and uptime. In these types of models, you begin to see how technologies around software-defined storage (SDS) impact converged infrastructure systems.
- Use-cases: In these scenarios, you’ll have the virtual management controller running at the hypervisor level. If you’re looking to remove existing data center components or are trying to consolidate your infrastructure, HCI is a great option. Furthermore, if you’re trying to simplify management and integrate into your virtual layer – HCI can help there as well. From there, new types of software-defined technologies can help organizations align resources over several HCI nodes and clusters. Remember, if you already have a diverse virtual ecosystem with management policies built in – you might need to take extra time to ensure the HCI technology can fit in. Sometimes these systems run proprietary, single-hypervisor management solutions which might not fit in with your environment. Still, both CI and HCI systems can help improve efficiencies and create better data center economics.
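To make the HCI failover point above concrete, here is a minimal, purely illustrative Python sketch – not any vendor’s actual API – of the idea that a storage controller runs on every node in the cluster, so the storage service survives the loss of a node as long as a majority of controllers remain up. The class and method names are invented for this example.

```python
# Illustrative model of an HCI cluster: each node hosts its own virtual
# storage controller, which is what gives HCI its failover resiliency.
# (Hypothetical names; real HCI platforms expose very different APIs.)

class Node:
    def __init__(self, name: str):
        self.name = name
        # In HCI, a controller VM runs on every node in the cluster.
        self.controller_running = True

class HCICluster:
    def __init__(self, node_names):
        self.nodes = [Node(n) for n in node_names]

    def fail_node(self, name: str) -> None:
        # A node failure takes its local controller down with it.
        for node in self.nodes:
            if node.name == name:
                node.controller_running = False

    def storage_available(self) -> bool:
        # The distributed storage service stays up while a majority
        # of controllers are still running (a common quorum model).
        up = sum(1 for n in self.nodes if n.controller_running)
        return up > len(self.nodes) // 2

cluster = HCICluster(["node-a", "node-b", "node-c"])
cluster.fail_node("node-a")
print(cluster.storage_available())  # True: two of three controllers remain
```

The sketch assumes a simple majority-quorum rule for availability; real platforms use their own replication and quorum policies, but the takeaway is the same – because the controller is distributed across all nodes rather than living in one place, a single node failure doesn’t take the storage layer offline.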
Convergence will continue to have an impact on the modern data center. Most of all, these systems are designed to support new types of business initiatives. This means supporting more users, more applications, and a more diverse IT ecosystem. As you build out your data center and business model – make sure to understand where converged systems can make an impact. In these instances, you’re not only optimizing your data center, you’re also helping your business become a lot more agile.