SCinet at SC’09: building the biggest and best InfiniBand network the Supercomputing conference has ever seen

October 22nd, 2009

For the past several years, the SCinet organization has built an InfiniBand network infrastructure at the annual Supercomputing conference, providing connectivity to numerous organizations and vendors in support of various application and storage demonstrations.

While I’ve been fortunate enough to participate in SCinet almost every year since InfiniBand was first deployed at SC’04, this is my first year as the OpenFabrics committee co-chair, which means I’m now responsible for putting together the InfiniBand network for SC’09 in Portland, Ore.

With SCinet’s InfiniBand network celebrating its 5th year of operation, we wanted to make sure this was going to be the biggest and best year ever. We decided to give the network a substantial speed increase using 12X InfiniBand QDR, providing up to 120 Gbps connectivity throughout the entire network.
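For readers unfamiliar with InfiniBand link naming, the 120 Gbps figure follows directly from the link arithmetic (a sketch based on the InfiniBand QDR specification, not a detail from the original announcement): a QDR lane signals at 10 Gbps, and a 12X link aggregates twelve lanes. Because QDR uses 8b/10b line encoding, the usable data rate is 80% of the signaling rate:

```latex
\underbrace{12 \text{ lanes}}_{12\mathrm{X}} \times 10~\mathrm{Gbps/lane}
  = 120~\mathrm{Gbps} \quad \text{(signaling rate)}
```

```latex
120~\mathrm{Gbps} \times \frac{8}{10} = 96~\mathrm{Gbps} \quad \text{(effective data rate)}
```

So the “up to 120 Gbps” headline number refers to the raw signaling rate; application-visible throughput tops out around 96 Gbps per 12X QDR link.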

In addition to the network performance enhancements, we’ve teamed up with the HPC Advisory Council to make several different application demonstrations available to all InfiniBand network participants. Demonstrations include:

  • A Remote Desktop over InfiniBand (RDI) demonstration enabling live desktop sharing between all participants
  • A Direct Transport Compositor (DTC) demonstration providing real-time rendering of a PSA Peugeot Citroen automotive CAD model in 2D/3D
  • A new high-bandwidth MPI technology demonstration taking advantage of 120Gbps data rates from a single server

It’s no surprise that a record number of organizations and vendors have already signed up to take part in these exciting multivendor demonstrations at SC’09. More than 20 connections have been requested this year, nearly double the number from prior years.

Not only is this year turning out to be a record year for connection requests, so is the size of the network infrastructure required to support all of this connectivity. We’re working with 18 different hardware and cable manufacturers to secure enough equipment to build the entire InfiniBand network infrastructure from the ground up, including hundreds of cables, over 40 switches, numerous servers and countless host channel adapters (HCAs).

I’d personally like to thank all the SCinet volunteers and hardware vendors working together with the common goal of building the biggest and best InfiniBand network SC has ever seen. I encourage SC attendees to check out all the SCinet-connected application demonstrations going on throughout the exhibit hall and to stop by the SCinet Network Operations Center (NOC) and see the 120 Gbps InfiniBand network infrastructure in action.

Best regards,

Eric Dube

SCinet/OpenFabrics co-chair

Bay Microsystems, Inc.
