New technologies spring up all the time to combat complicated data center computing architectures. And why not? Who doesn’t want a simple, easy-to-deploy framework that combines storage, computing, and networking to reduce data center complexity and increase scalability?
The answer is that every data center manager will at least listen to such a simplification story, and that’s why hyperconvergence is taking off in the market.
Hyperconverged infrastructure (HCI) promises simplicity and flexibility compared with legacy solutions built from separate (non-converged) hardware elements. HCI solutions integrate virtualized computing, software-defined storage, and virtualized networking in a rack, with a top-of-rack (TOR) IP-based switch connecting everything together. This allows flexible resource configuration that leads to the benefits shown in Fig. 1, and those benefits are driving growth: Gartner predicts the HCI market will reach nearly $5 billion by 2019. Vendors in the HCI market include Lenovo, Nutanix, SimpliVity (acquired by HPE), Cisco, Dell-EMC, and Pivot3.
Fig. 1: Benefits of HCI (Picture Credit: http://blog.lenovo.com)
HCI Connectivity Limitations
Ironically, the drawbacks to HCI systems are rooted in the very connectivity that provides the flexibility, specifically the reliance on IP switches. Disadvantages of a hyperconverged solution include:
- Hyperconverged solutions work well at lower data rates (1G) but not as well for higher-data-rate (100G) flows.
- Layer 3 connections are very slow when an IP switch with an electrical backplane fabric must conform or converge to a particular protocol.
- When one-to-one connectivity is required, using the devices’ native optical network protocol is much faster than passing traffic through an intermediate electrical fabric, which slows the connection.
An Optical Circuit Switch (OCS) addresses most of these connectivity shortcomings by bringing layer 1 switching to the hyperconverged solution. The OCS replaces the TOR switch and provides all-optical switching for connectivity within the HCI rack (IP switches are still needed to switch between racks or out to the Internet):
- The OCS is data rate independent. There is no flow-rate dependency: an OCS will support any data rate, unlike an electrical fabric/IP switch, which is tied to specific data rates. Data rates will only keep increasing, so investment in an OCS allows the system to scale as those high-speed technologies are adopted.
- The OCS is protocol agnostic. When one-to-one connectivity is required, using the devices’ native protocol via an OCS is much faster than using an intermediate electrical fabric/IP switch that slows the connection. The latency within the OCS is essentially the propagation delay of light through the switch.
- In addition to performance, the cost of transferring data via an OCS is far lower than via an electrical fabric or higher-layer switching/routing.
- The OCS enables disaggregation and pooling of resources (storage, compute, FPGA, etc.) more cost-effectively and efficiently than an electrical fabric/IP switch.
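To put the latency point above in perspective, a back-of-the-envelope calculation shows why an all-optical path is so fast. The fiber length and refractive index below are illustrative assumptions for the sketch, not measured OCS specifications:

```python
# Rough, illustrative propagation-delay estimate for an all-optical path.
# All figures are assumptions for this sketch, not vendor specifications.

C_VACUUM = 299_792_458            # speed of light in vacuum, m/s
FIBER_INDEX = 1.47                # typical group index of silica fiber (assumed)
PATH_LENGTH_M = 10.0              # assumed optical path through the OCS and patch cords

v_fiber = C_VACUUM / FIBER_INDEX              # ~2.0e8 m/s in fiber
latency_ns = PATH_LENGTH_M / v_fiber * 1e9    # propagation delay in nanoseconds

print(f"~{latency_ns:.0f} ns for a {PATH_LENGTH_M:.0f} m optical path")
```

Under these assumptions the path adds on the order of 50 ns, dominated purely by propagation; a store-and-forward electrical switch typically adds microseconds of queuing and serialization delay on top of that.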
Fig. 2 shows an example of how the OCS can flexibly allocate server-to-GPU relationships via switched optical PCIe. In this figure, all hardware resources, including PCIe switches, servers, GPUs, and packet switches, are connected to two OCS devices. SDN-enabled OCS configuration software can set up and tear down connections between any of these resources, enabling reconfiguration of the packet-switch networks to optimize GPU-to-GPU communications based on server-to-GPU topologies by pooling and reallocating packet-switch ports.
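Conceptually, the SDN-enabled configuration layer maintains a cross-connect map between OCS ports and rewires light paths on demand. The following is a minimal sketch of that idea; the class, method, and port names are hypothetical (real OCS controllers expose vendor-specific management interfaces), and it models only the bookkeeping, not the optics:

```python
# Minimal sketch of layer-1 cross-connect bookkeeping, as an SDN controller
# for an OCS might maintain it. All names here are illustrative assumptions.

class OpticalCircuitSwitch:
    def __init__(self):
        self._xconnects = {}          # port -> peer port (bidirectional map)

    def connect(self, a: str, b: str) -> None:
        """Set up a bidirectional light path between two free ports."""
        if a in self._xconnects or b in self._xconnects:
            raise ValueError("port already in use")
        self._xconnects[a] = b
        self._xconnects[b] = a

    def disconnect(self, a: str) -> None:
        """Tear down the light path attached to a port."""
        b = self._xconnects.pop(a)
        self._xconnects.pop(b)

    def peer(self, a: str):
        """Return the port currently cross-connected to `a`, if any."""
        return self._xconnects.get(a)


ocs = OpticalCircuitSwitch()
ocs.connect("server-1", "gpu-3")      # allocate a pooled GPU to a server
ocs.disconnect("server-1")            # return the GPU to the pool
ocs.connect("server-2", "gpu-3")      # reallocate it to another server
```

The point of the sketch is that reallocation is a pure control-plane operation: tearing down and re-creating a cross-connect moves a GPU between servers without touching the packet network's configuration.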
Fig. 2: OCS utilized for flexible connectivity between hardware resources in a data center.
In short, the OCS adds layer 1 connectivity to the HCI solution, complementing this advanced architecture and contributing to the ability of HCI systems to further simplify data center networks.
More information about this and related applications is available on the CALIENT blog site.