
SC13 Report: Hybrid Packet-Optical Circuit Switch Network Comes to HPC

Photo: CALIENT and Molex InfiniBand demo at SC13

Now that CALIENT has deployed the hybrid packet-optical circuit switch network to improve the performance of some of the highest-bandwidth networks in the data center world, our next step is high-performance computing (HPC).

That’s why CALIENT had a booth at the Supercomputing 2013 show in Denver, amid the cold weather (at least for us Southern Californians) and the remnants of a recent snowstorm.

To warm up, we demonstrated support for InfiniBand in our S320 optical circuit switch.

InfiniBand is a very low-latency, very high-performance (up to 56 Gbps with FDR) fabric technology that has found a strong niche in HPC. In fact, in the latest TOP500 list of the world’s most powerful supercomputers, InfiniBand is used as the interconnect in 207 of the systems, up from 203 in 2012.

This is impressive because these systems are built from tens of thousands of processors and measure their performance in petaflop/s, that is, thousands of trillions of floating-point operations per second.

The S320 is the perfect optical switch for this application. Thanks to our exclusive 3D MEMS technology, the all-optical datapath through the switch is protocol agnostic. And with 320 ports and 32 Tbps of capacity, the switch offers the highest throughput available.
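As a quick sanity check on that throughput figure (assuming, as the number implies, 100 Gbps optics on every port): 320 ports × 100 Gbps per port = 32 Tbps. And because the datapath is all-optical, that capacity does not depend on which protocol runs over it.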


All we needed was an optical interconnect to adapt the optical signals coming from the InfiniBand equipment to the QSFP connectors supported on the S320. We teamed up with Molex, whose silicon photonics-based interconnect leads the industry.


With that in place, the operation and the value of the hybrid packet-optical circuit switch network architecture are the same as for Ethernet.


The S320 is connected to the existing packet-switched network. When a large or planned data flow hits the network, that flow is switched over to the optical circuit switch, which offloads the data from the packet network and provides a very low-latency, high-bandwidth transport connection to the destination switch or storage device.
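To make that offload step concrete, here is a minimal sketch in Python of the decision logic described above. This is not CALIENT’s software: the OpticalCircuitSwitch class, its connect() method, the port numbers, and the 10 GiB threshold are all hypothetical stand-ins for whatever management interface and offload policy a real deployment would use.

```python
# Hypothetical sketch of hybrid packet/OCS offload logic; names and
# thresholds are illustrative, not the S320's actual interface.

ELEPHANT_BYTES = 10 * 2**30  # assumed policy: flows over 10 GiB go optical

class OpticalCircuitSwitch:
    """Stand-in driver for an optical circuit switch such as the S320."""

    def __init__(self):
        self.cross_connects = {}  # input port -> output port, currently lit

    def connect(self, in_port, out_port):
        # A real OCS would steer a MEMS mirror here; we just record the path.
        self.cross_connects[in_port] = out_port
        print(f"OCS: port {in_port} -> port {out_port} cross-connected")

def offload_large_flows(flow_stats, topology, ocs):
    """Move flows above the threshold from the packet fabric to the OCS.

    flow_stats: {(src_switch, dst_switch): bytes_transferred}
    topology:   {switch_name: ocs_port} mapping of OCS attachments
    """
    for (src, dst), nbytes in flow_stats.items():
        if nbytes >= ELEPHANT_BYTES:
            ocs.connect(topology[src], topology[dst])
            # The packet network would then be told (e.g. via SDN rules)
            # to forward this flow out the port facing the OCS.

# Example: one large flow between two top-of-rack switches
ocs = OpticalCircuitSwitch()
offload_large_flows(
    flow_stats={("tor-1", "tor-7"): 64 * 2**30},  # a 64 GiB transfer
    topology={"tor-1": 12, "tor-7": 201},
    ocs=ocs,
)
```

In practice the flow statistics would come from the packet switches or an SDN controller, which would also install the forwarding rules that steer the offloaded flow onto the OCS-facing ports.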


Not only does that data flow have a fast connection to the destination, but the switched InfiniBand fabric is not slowed by that data flow, so other traffic can flow at line rate.


It’s a great approach for any network that has these large flows.  It also fits nicely into an SDN-based network.
