Posts Tagged ‘FDR’

FDR InfiniBand Continues Rapid Growth on TOP500 Year-Over-Year

November 22nd, 2013

The newest TOP500 list of the world’s most powerful supercomputers was released at SC13 this week, and showed the continued adoption of Fourteen Data Rate (FDR) InfiniBand – the fastest growing interconnect technology on the list! FDR, the latest generation of InfiniBand technology, now connects 80 systems on the TOP500, growing almost 2X year-over-year from 45 systems in November 2012. Overall, InfiniBand technology connects 207 systems, accounting for over 40 percent of the TOP500 list.

InfiniBand also connects 48 percent of the Petascale-capable systems on the list. Petascale-capable systems generally favor the InfiniBand interconnect for its computing efficiency, low application latency and high speeds. Other highlights from the November TOP500 list include:

  • InfiniBand is the most used interconnect in the TOP100, connecting 48 percent of the systems, and in the TOP200, connecting 48.5 percent of the systems.

  • InfiniBand-connected systems deliver 2X the performance of Ethernet systems, while the total performance supported by InfiniBand systems continues to grow.

  • With a peak efficiency of 97 percent and an average efficiency of 86 percent, InfiniBand continues to be the most efficient interconnect on the TOP500 (see the sketch after this list).
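
For context, the efficiency figures on the TOP500 list are computed as achieved LINPACK performance (Rmax) divided by theoretical peak performance (Rpeak). A minimal sketch of that calculation in Python (the numbers are illustrative, not drawn from the actual list):

```python
# Minimal sketch: TOP500 efficiency = Rmax / Rpeak, i.e. achieved
# LINPACK performance as a fraction of theoretical peak.

def efficiency(rmax_tflops: float, rpeak_tflops: float) -> float:
    """Return compute efficiency as a fraction of theoretical peak."""
    return rmax_tflops / rpeak_tflops

# Hypothetical system: 970 TFlop/s achieved against a 1,000 TFlop/s peak.
print(f"{efficiency(970.0, 1000.0):.0%}")  # -> 97%
```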

The TOP500 list continues to show that InfiniBand is the interconnect of choice for HPC systems and data centers that demand the highest performance, with FDR InfiniBand adoption a large part of that growth. The graph below illustrates this further:

TOP500 Results, November 2013

Image source: Mellanox Technologies

Delivering bandwidth up to 56Gb/s with application latencies of less than one microsecond, InfiniBand enables the highest server efficiency and is ideal for carrying multiple traffic types (clustering, communications, storage, management) over a single connection. As a mature and field-proven technology, InfiniBand is used in thousands of data centers, high-performance compute clusters and embedded environments that scale from two nodes up to clusters with thousands of nodes.
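
As a rough illustration of where the 56Gb/s figure comes from: a 4x FDR link runs four lanes at a signaling rate of 14.0625Gb/s each, and FDR’s 64b/66b encoding leaves 64 of every 66 bits on the wire for data. A minimal sketch in Python (the lane count and rate are standard FDR parameters; the variable names are ours):

```python
# Sketch: raw versus effective data rate of a 4x FDR InfiniBand link.

LANES = 4
LANE_RATE_GBPS = 14.0625       # FDR signaling rate per lane
ENCODING_EFFICIENCY = 64 / 66  # 64b/66b line coding

raw_gbps = LANES * LANE_RATE_GBPS
effective_gbps = raw_gbps * ENCODING_EFFICIENCY
print(f"raw: {raw_gbps:.2f} Gb/s, effective: {effective_gbps:.2f} Gb/s")
# -> raw: 56.25 Gb/s, effective: 54.55 Gb/s
```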

The TOP500 list is published twice per year and recognizes and ranks the world’s fastest supercomputers. The list was announced November 18 at the SC13 conference in Denver, Colorado.

Interested in learning more about the TOP500, or how InfiniBand performed? Check out the TOP500 website: www.top500.org.

Bill Lee, Chair of the Marketing Working Group (MWG) at IBTA

New InfiniBand Architecture Specification Open for Comments

October 15th, 2012

After an extensive review process, Release 1.3 of Volume 2 of the InfiniBand Architecture Specification has been approved by our Electro-Mechanical Working Group (EWG). The specification is undergoing final review by the full InfiniBand Trade Association (IBTA) membership and will be available to vendors at Plugfest 22, taking place October 15-26, 2012 at the University of New Hampshire InterOperability Laboratory in Durham, New Hampshire.

All IBTA working groups and individual members have had several weeks to review and comment on the specification. We are encouraged by the feedback we’ve received and are looking forward to the official release at SC12, taking place November 10-16 in Salt Lake City, Utah.

Release 1.3 is a major overhaul of the InfiniBand Architecture Specification and features important new architectural elements:
• FDR and EDR signal specification methodologies
• Analog signal specifications for FDR, which have been verified through Plugfest compliance and interoperability measurements
• A more efficient 64b/66b encoding method (illustrated in the sketch after this list)
• Forward Error Correction (FEC) coding
• Improved specification of QSFP-4x and CXP-12x connectors, ports and management interfaces
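
To see why 64b/66b is described as more efficient: the 8b/10b line code used at earlier InfiniBand speeds (SDR through QDR) spends 2 of every 10 bits on coding overhead, while 64b/66b spends only 2 of every 66. A minimal sketch of the comparison in Python (illustrative only; not taken from the specification text):

```python
# Sketch: line-coding efficiency of 8b/10b (SDR through QDR)
# versus the 64b/66b encoding introduced in Release 1.3 for FDR and EDR.

def coding_efficiency(data_bits: int, coded_bits: int) -> float:
    """Fraction of bits on the wire that carry payload."""
    return data_bits / coded_bits

print(f"8b/10b:  {coding_efficiency(8, 10):.1%}")   # -> 80.0%
print(f"64b/66b: {coding_efficiency(64, 66):.1%}")  # -> 97.0%
```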

The new specification also reflects significant copy editing and a reorganization into sub-volumes to improve overall readability. The previous release, 1.2.1, was published in November 2007. As Chair of the EWG, I’m pleased with the technical progress made on the InfiniBand Architecture Specification. More importantly, I’m excited about the impact that this new release will have for users and developers of InfiniBand technology.

Alan Benner
EWG Chair

HPC Advisory Council Showcases World’s First FDR 56Gb/s InfiniBand Demonstration at ISC’11

July 1st, 2011

The HPC Advisory Council, together with ISC’11, showcased the world’s first demonstration of FDR 56Gb/s InfiniBand in Hamburg, Germany, June 20-22. The HPC Advisory Council hosts and organizes new technology demonstrations at leading HPC conferences around the world to highlight solutions that will influence future HPC systems in terms of performance, scalability and utilization.

The 56Gb/s InfiniBand demonstration connected participating exhibitors on the ISC’11 show floor as part of the HPC Advisory Council’s ISCnet network, providing organizations with high-speed connectivity between their booths.

The FDR InfiniBand network included dedicated and distributed clusters, as well as a Lustre-based storage system. Multiple applications were demonstrated, including high-speed visualization applications using car models courtesy of Peugeot Citroën.

The installation of the fiber cables (we used 20- and 50-meter cables) was completed a few days before the show opened; we ran the cables along the floor and protected them with wooden bridges. The cluster, Lustre and application setup was done the day before, and everything ran perfectly.

The network architecture of the ISCnet FDR InfiniBand demo is shown below. We combined MPI traffic and storage traffic (Lustre) on the same fabric, using the new bandwidth capabilities to provide a high-performance, consolidated fabric for the high-speed rendering and visualization demonstration.

ISCnet FDR InfiniBand demo network architecture

The following HPC Advisory Council member organizations contributed to the FDR 56Gb/s InfiniBand demo, and I would like to personally thank each of them: AMD, Corning Cable Systems, Dell, Fujitsu, HP, MEGWARE, Mellanox Technologies, Microsoft, OFS, Scalable Graphics, Supermicro and Xyratex.

Regards,

Gilad Shainer

Member of the IBTA and chairman of the HPC Advisory Council