Posts Tagged ‘High Performance Computing’

To InfiniBand and Beyond – Supercomputing Support for NASA Missions

June 11th, 2015


High performance computing has been integral to solving large-scale problems across many industries, including science, engineering and business. Some of the most interesting use cases have come out of NASA, where supercomputing is essential to conduct accurate simulations and models for a variety of missions.

NASA’s flagship supercomputer, Pleiades, is among the world’s most powerful, currently ranking seventh in the United States and eleventh globally. It is housed at the NASA Advanced Supercomputing (NAS) facility in California and supports the agency’s work in aeronautics, Earth and space science, and the future of space travel. At the heart of the system is InfiniBand technology, including DDR, QDR and FDR adapters and cabling.

The incremental expansion of Pleiades’ computing performance has been fundamental to its lasting success. Typically, a computer cluster is fully built at the outset and rarely expanded or upgraded during its lifetime. Built in 2008, Pleiades initially consisted of 64 server racks achieving 393 teraflops with a maximum link speed of 20Gb/s. Today, the supercomputer boasts 160 racks with a theoretical peak performance of 5.35 petaflops, or 5,350 teraflops, and a maximum link speed of 56Gb/s.

To further demonstrate the power of the InfiniBand-based Pleiades supercomputer, here are several fun facts to consider:

  • Today’s Pleiades supercomputer delivers more than 25 million times the computational power of the first Cray X-MP supercomputer at the NAS facility in 1984.
  • If every person in the world performed one calculation per second for eight hours a day, it would take 1,592 days for humanity to match the calculations Pleiades completes in one minute (a back-of-the-envelope check appears after this list).
  • The NAS facility hosts the largest InfiniBand network in the world, with over 65 miles (104.6 km) of cable interconnecting its supercomputing systems and storage devices. That is roughly the distance from the Earth’s surface to the part of the thermosphere where auroras form.
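
The 1,592-day figure above checks out with simple arithmetic. Here is a minimal sketch in C, assuming Pleiades’ 5.35-petaflop peak and a world population of roughly 7 billion (the population figure is our assumption, not stated in the original fact):

```c
#include <stdio.h>

int main(void) {
    double peak_flops = 5.35e15;           /* Pleiades peak: 5.35 petaflops */
    double one_minute = peak_flops * 60.0; /* calculations in one minute    */

    double population = 7.0e9;             /* assumed world population      */
    double per_person = 8.0 * 3600.0;      /* 1 calc/s for 8 h = 28,800/day */
    double per_day    = population * per_person;

    printf("Days required: %.0f\n", one_minute / per_day); /* prints 1592 */
    return 0;
}
```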

For additional facts about the impact of NASA’s high-end computing capability, check out the NAS website.

Bill Lee

FDR InfiniBand Continues Rapid Growth on TOP500 Year-Over-Year

November 22nd, 2013

The newest TOP500 list of the world’s most powerful supercomputers was released at SC13 this week and showed the continued adoption of Fourteen Data Rate (FDR) InfiniBand, the fastest-growing interconnect technology on the list! FDR, the latest generation of InfiniBand technology, now connects 80 systems on the TOP500, growing almost 2X year-over-year from 45 systems in November 2012. Overall, InfiniBand technology connects 207 systems, accounting for over 40 percent of the TOP500 list.

InfiniBand technology overall connects 48 percent of the petascale-capable systems on the list. Petascale-capable systems generally favor the InfiniBand interconnect for its computing efficiency, low application latency and high speeds. Other relevant highlights from the November TOP500 list include:

  • InfiniBand is the most used interconnect in the TOP100, connecting 48 percent of the systems, and in the TOP200, connecting 48.5 percent of the systems.

  • InfiniBand-connected systems deliver twice the aggregate performance of Ethernet-based systems on the list, and the total performance delivered by InfiniBand systems continues to grow.

  • With a peak efficiency of 97 percent and an average efficiency of 86 percent, InfiniBand remains the most efficient interconnect on the TOP500, as illustrated below.
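
Efficiency on the TOP500 is simply the ratio of a system’s sustained Linpack performance (Rmax) to its theoretical peak (Rpeak). A minimal sketch, using illustrative figures rather than actual list entries:

```c
#include <stdio.h>

/* TOP500 efficiency = Rmax / Rpeak.
 * The figures below are illustrative, not actual list entries. */
int main(void) {
    double rmax  = 4.60; /* sustained Linpack performance, petaflops (example) */
    double rpeak = 5.35; /* theoretical peak performance, petaflops (example)  */

    printf("Efficiency: %.1f%%\n", 100.0 * rmax / rpeak); /* ~86.0% */
    return 0;
}
```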

The TOP500 list continues to show that InfiniBand technology is the interconnect of choice for HPC and data centers seeking the highest performance, with FDR InfiniBand adoption a large part of that growth. The graph below demonstrates this further:

TOP500 Results, November 2013

Image source: Mellanox Technologies

Delivering bandwidth up to 56Gb/s with application latencies below one microsecond, InfiniBand enables the highest server efficiency and is ideal for carrying multiple traffic types (clustering, communications, storage, management) over a single connection. As a mature and field-proven technology, InfiniBand is used in thousands of data centers, high-performance compute clusters and embedded environments that scale from two nodes up to clusters with thousands of nodes.
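
To give a flavor of what programming against InfiniBand looks like, here is a minimal sketch using the libibverbs API. It opens the first adapter it finds and registers a buffer for RDMA, the prerequisites for the zero-copy transfers described above; error handling is abbreviated, and the snippet is our illustration rather than anything from the original post:

```c
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void) {
    /* Enumerate the RDMA-capable devices visible to the verbs library. */
    int num_devices = 0;
    struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
    if (!dev_list || num_devices == 0) {
        fprintf(stderr, "No InfiniBand devices found\n");
        return EXIT_FAILURE;
    }

    /* Open the first device and query port 1 (ports are numbered from 1). */
    struct ibv_context *ctx = ibv_open_device(dev_list[0]);
    if (!ctx) {
        fprintf(stderr, "Failed to open %s\n", ibv_get_device_name(dev_list[0]));
        return EXIT_FAILURE;
    }
    struct ibv_port_attr port_attr;
    if (ibv_query_port(ctx, 1, &port_attr) == 0)
        printf("Device %s: port state %d, active MTU enum %d\n",
               ibv_get_device_name(dev_list[0]),
               port_attr.state, port_attr.active_mtu);

    /* A protection domain and a registered memory region are the
     * prerequisites for zero-copy RDMA reads and writes into this buffer. */
    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    char *buf = calloc(1, 4096);
    struct ibv_mr *mr = pd ? ibv_reg_mr(pd, buf, 4096,
                                        IBV_ACCESS_LOCAL_WRITE |
                                        IBV_ACCESS_REMOTE_WRITE) : NULL;
    if (mr)
        printf("Registered 4 KiB buffer, rkey=0x%x\n", mr->rkey);

    /* Tear down in reverse order of creation. */
    if (mr) ibv_dereg_mr(mr);
    free(buf);
    if (pd) ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(dev_list);
    return 0;
}
```

A real application would go on to create completion queues and queue pairs and post work requests; this sketch compiles with -libverbs on a host with the OFED stack installed.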

The TOP500 list, published twice per year, ranks the world’s fastest supercomputers. The latest list was announced November 18 at the SC13 conference in Denver, Colorado.

Interested in learning more about the TOP500, or how InfiniBand performed? Check out the TOP500 website.


Bill Lee, Chair of the Marketing Working Group (MWG) at IBTA

Observations from SC12

December 3rd, 2012

The week of Supercomputing went by quickly and sparked many interesting discussions about supercomputing’s role in both HPC environments and enterprise data centers. Now that we’re back at work, we’d like to reflect on a successful event. This year’s conference drew a diverse crowd of attendees from many countries, with major participation from top universities, which appear to be on the leading edge of Remote Direct Memory Access (RDMA) and InfiniBand deployments.

Overall, we saw InfiniBand and OpenFabrics technologies continue their strong presence at the conference. InfiniBand dominated the TOP500 list and remains the #1 interconnect of choice for the world’s fastest supercomputers. The TOP500 list also demonstrated that InfiniBand is leading the way to efficient computing, which benefits not only high performance computing but enterprise data center environments as well.

We also engaged in several discussions around RDMA. Attendees, analysts in particular, were interested in new products using RDMA over Converged Ethernet (RoCE) and their availability, and were impressed that Windows Server 2012 natively supports all three RDMA transports: InfiniBand, RoCE and iWARP. Another interesting development is InfiniBand customer Microsoft Windows Azure, whose increased efficiency placed it at #165 on the TOP500 list.

IBTA & OFA Booth at SC12

IBTA’s Electro-Mechanical Working Group Chair, Alan Benner, discussing the new InfiniBand specification with attendees at the IBTA & OFA SC12 booth.

IBTA’s release of the new InfiniBand Architecture Specification 1.3 generated a lot of buzz among attendees, press and analysts. IBTA’s Electro-Mechanical Working Group Chair, Alan Benner, was one of our experts at the booth and drew a large crowd interested in the InfiniBand roadmap and his projections for the next specification, which is expected to include EDR and to be available in draft form in April 2013.

SC12 provided a great opportunity for those in high performance computing to connect in person and discuss hot industry topics; this year the focus was on Software Defined Networking (SDN), OpenSM, and the pioneering efforts of both IBTA and OFA. We enjoyed conversations with the exhibitors and attendees who visited our booth, and a special thank you goes to the RDMA experts who participated in our booth sessions: Bill Boas, Cray; Katharine Schmidtke, Finisar; Alan Benner, IBM; Todd Wilde, Mellanox; Rupert Dance, Software Forge; Kevin Moran, System Fabric Works; and Josh Simons, VMware.

Rupert Dance, Software Forge
