Posts Tagged ‘InfiniBand’

Race to Exascale – Nations Vie to Build Fastest Supercomputer

September 28th, 2015

“Discover Supercomputer 3” by NASA Goddard Space Flight Center is licensed under CC BY 2.0

The race between countries to build the fastest, biggest or first of anything is nothing new – think of the race to the moon. One of the current global competitions is focused on supercomputing, specifically the race to Exascale computing: a billion billion (10^18) calculations per second. Governments are now allocating significant resources toward Exascale initiatives, as evidenced by President Obama’s recent Executive Order and China’s continued lead in supercomputing, as they start to understand its vast potential for a variety of industries, including healthcare, defense and space exploration.

The TOP500 list ranking the top supercomputers in the world will continue to be the scorecard. Currently, the U.S. leads with 233 of the top 500 supercomputers, followed by Europe with 141 and China with 37. However, China’s smaller portfolio of supercomputers does not make it any less of a competitor: it has held the #1 spot on the TOP500 list for the fifth consecutive time.

When looking to build the supercomputers of the future, there are a number of factors that need to be taken into consideration, including superior application performance, compute scalability and resource efficiency. InfiniBand’s compute offloads and scalability make it extremely attractive to supercomputer architects. Proof of the performance and scalability can be found in places such as the HPC Advisory Council’s library of case studies. InfiniBand makes it possible to achieve near-linear performance improvement as more compute nodes are connected to the cluster; the toy model below illustrates why that matters. Since observers of this space expect Exascale systems to require a massive amount of compute hardware, InfiniBand’s scalability looks to be a requirement for achieving this goal.
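
To see why near-linear scaling is so important, consider a simple Amdahl’s-law sketch. The parallel fraction used here is a hypothetical number chosen for illustration, not a measured InfiniBand figure; the point is that any serial or communication overhead quickly erodes the value of added nodes.

```python
# Toy Amdahl's-law model: the parallel fraction is a hypothetical
# assumption for illustration, not measured InfiniBand data.
def amdahl_speedup(nodes: int, parallel_fraction: float) -> float:
    """Ideal speedup is `nodes`; the serial remainder (1 - parallel_fraction),
    which includes communication overhead, caps what is achievable."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / nodes)

for n in (16, 256, 4096, 65536):
    s = amdahl_speedup(n, parallel_fraction=0.9999)
    print(f"{n:6d} nodes -> speedup {s:8.0f} ({100 * s / n:5.1f}% of ideal)")
```

Even with 99.99 percent of the work parallelized, efficiency falls off sharply at tens of thousands of nodes, which is why interconnect offloads and low latency become decisive on the road to Exascale.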

As the race to supercomputing speeds up, we expect to see a number of exciting advances in technology as we shift from petaflops to exaflops. To give you an idea of how far we have come and where we are heading, here is a comparison of the computing speeds that powered the race to space with the goals for Exascale.

Speeds Then vs. Now – Race to Space vs. Race to Supercomputing

  • Computers in 1960s (Speed of the Race to Space): Hectoscale (hundreds of floating-point operations per second, or FLOPS)
  • Goal for Computers in 2025 (Speed of the Race to Supercomputing): Exascale (a quintillion, or 10^18, FLOPS; a quick scale check follows below)
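
A quick check of the gap between those two figures:

```python
# Hectoscale (1960s) vs. exascale (the 2025 goal).
hectoscale = 1e2   # hundreds of floating-point operations per second
exascale = 1e18    # a quintillion FLOPS

print(f"Speedup factor: {exascale / hectoscale:.0e}")  # 1e+16
# One second of exascale work would keep a 1960s hectoscale computer
# busy for about 10**16 seconds -- on the order of 300 million years.
```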

Advances in supercomputing will continue to dominate the news, with the U.S. and China making the development of the fastest supercomputer a priority. As November approaches and the new TOP500 list is released, it will be very interesting to see where the rankings lie and which interconnects the respective architects pick.

Bill Lee

InfiniBand leads the TOP500, powering more than 50 percent of the world’s supercomputing systems

August 4th, 2015

TOP500 Interconnect Trends

The June 2015 TOP500 list of the 500 most powerful commercially available computer systems in the world reports that InfiniBand powers 257 systems, or 51.4 percent of the list. This marks 15.8 percent year-over-year growth from June 2014.
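
Both percentages follow directly from the system counts (the June 2014 count of 222 systems appears in the ISC’14 post further down this page):

```python
# Headline figures: 257 of 500 systems in June 2015, up from 222 in June 2014.
systems_2015, list_size, systems_2014 = 257, 500, 222
print(f"Share of list: {100 * systems_2015 / list_size:.1f}%")                   # 51.4%
print(f"Year-over-year growth: {100 * (systems_2015 / systems_2014 - 1):.1f}%")  # 15.8%
```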

Demand for higher bandwidth, lower latency and higher message rates, along with the need for application acceleration, is driving continued adoption of InfiniBand in traditional High Performance Computing (HPC) as well as commercial HPC, cloud and enterprise data centers. InfiniBand is the only open-standard I/O that provides the capability required to handle supercomputing’s high demand for CPU cycles without wasting time on I/O transactions.

  • InfiniBand powers the most efficient system on the list with 98.8% efficiency.
  • EDR (Enhanced Data Rate) InfiniBand delivers 100Gbps and enters the TOP500 for the first time, powering three systems.
  • FDR (Fourteen Data Rate) InfiniBand at 56Gbps continues to be the most used technology on the TOP500, connecting 156 systems.
  • InfiniBand connects the most powerful clusters, 33 of the Petascale-performance systems, up from 24 in June 2014.
  • InfiniBand is the leading interconnect for accelerator-based systems, connecting 77% of them.

Not only is InfiniBand the most used interconnect solution in the world’s 500 most powerful supercomputers, it is also the leader in the TOP100, which encompasses the top 100 supercomputing systems as ranked in the TOP500. InfiniBand is the natural choice for world-leading supercomputers because of its performance, efficiency and scalability.

The full TOP500 list is available at TOP500.org.

Bill Lee


To InfiniBand and Beyond – Supercomputing Support for NASA Missions

June 11th, 2015


High performance computing has been integral to solving large-scale problems across many industries, including science, engineering and business. Some of the most interesting use cases have come out of NASA, where supercomputing is essential to conduct accurate simulations and models for a variety of missions.

NASA’s flagship supercomputer, Pleiades, is among the world’s most powerful, currently ranking seventh in the United States and eleventh globally. It is housed at the NASA Advanced Supercomputing (NAS) facility in California and supports the agency’s work in aeronautics, Earth and space science and the future of space travel. At the heart of the system is InfiniBand technology, including DDR, QDR and FDR adapters and cabling.

The incremental expansion of Pleiades’ computing performance has been fundamental to its lasting success. Typically, a computer cluster is fully built from the outset and rarely expanded or upgraded during its lifetime. Built in 2008, Pleiades initially consisted of 64 server racks achieving 393 teraflops with a maximum link speed of 20Gb/s. Today, the supercomputer boasts 160 racks with a theoretical peak performance of 5.35 petaflops, or 5,350 teraflops, and a maximum link speed of 56Gb/s.
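
A quick look at those growth figures side by side:

```python
# Pleiades growth, per the figures above (2008 build vs. today).
racks_2008, racks_now = 64, 160
tflops_2008, tflops_now = 393, 5350   # teraflops
gbps_2008, gbps_now = 20, 56          # maximum link speed, Gb/s

print(f"Racks:       {racks_now / racks_2008:.1f}x")    # 2.5x
print(f"Performance: {tflops_now / tflops_2008:.1f}x")  # ~13.6x
print(f"Link speed:  {gbps_now / gbps_2008:.1f}x")      # 2.8x
# Performance grew far faster than rack count thanks to denser nodes and
# successive InfiniBand generations (DDR -> QDR -> FDR).
```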

To further demonstrate the power of the InfiniBand-based Pleiades supercomputer, here are several fun facts to consider:

  • Today’s Pleiades supercomputer delivers more than 25 million times the computational power of the first Cray X-MP supercomputer at the NAS facility in 1984.
  • The number of days it would take every person in the world to complete one minute of Pleiades’ calculations, if they each performed one calculation per second for eight hours per day: 1,592 (checked in the sketch after this list).
  • The NAS facility has the largest InfiniBand network in the world, with over 65 miles (104.6 km) of cable interconnecting its supercomputing systems and storage devices – roughly the distance from the Earth’s surface to the part of the thermosphere where auroras are formed.
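
That 1,592-day figure checks out, assuming roughly 7 billion people and Pleiades’ 5.35 petaflop/s theoretical peak (our assumptions for the check, not NASA’s stated inputs):

```python
# Checking the "1,592 days" fun fact. Population and peak rate are our
# assumptions for the check, not NASA's stated inputs.
people = 7.0e9
peak_flops = 5.35e15                 # calculations per second (5.35 Pflop/s)
calcs_per_person_day = 1 * 8 * 3600  # 1 calc/s, 8 hours per day

one_minute_of_pleiades = peak_flops * 60
days = one_minute_of_pleiades / (people * calcs_per_person_day)
print(f"{days:,.0f} days")           # ~1,592
```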

For additional facts and impacts of NASA’s high-end computing capability, check out the NAS facility’s website.

Bill Lee

InfiniBand Volume 1, Release 1.3 – The Industry Sounds Off

May 14th, 2015


On March 10, 2015, IBTA announced the availability of Release 1.3 of Volume 1 of the InfiniBand Architecture Specification and it’s creating a lot of buzz in the industry. IBTA members recognized that as compute clusters and data centers grew larger and more complex, the network equipment architecture would have difficulty keeping pace with the need for more processing power. With that in mind, the new release included improvements to scalability and management for both high performance computing and enterprise data centers.

Here’s a snapshot of what industry experts and media have said about the new specification:

“Release 1.3 of the Volume 1 InfiniBand Architecture Specification provides several improvements, including deeper visibility into switch hierarchy, improved diagnostics allowing for faster response times to connectivity problems, enhanced network statistics, and added counters for Enhanced Data Rate (EDR) to improve network management. These features will allow network administrators to more easily install, maintain, and optimize very large InfiniBand clusters.” - Kurt Yamamoto, Tom’s IT PRO

“It’s worth keeping up with [InfiniBand], as it clearly shows where the broader networking market is capable of going… Maybe geeky stuff, but it allows [InfiniBand] to keep up with “exascales” of data and lead the way large scale-out computer networking gets done. This is particularly important as the 1000 node clusters of today grow towards the 10,000 node clusters of tomorrow.” - Mike Matchett, Taneja Group, Inc.

“Indeed, a rising tide lifts all boats, and the InfiniBand community does not intend to get caught in the shallows of the Big Data surge. The InfiniBand Trade Association recently issued Release 1.3 of Volume I of the format’s reference architecture, designed to incorporate increased scalability, efficiency, availability and other functions that are becoming central to modern data infrastructure.” - Arthur Cole, Enterprise Networking Planet

“The InfiniBand Trade Association (IBTA) hopes to ward off the risk of an Ethernet invasion in the ranks of HPC users with a renewed focus on manageability and visibility. Such features have just appeared in release 1.3 of the Volume 1 standard. The IBTA’s Bill Lee told The Register that as HPC clusters grow, ‘you want to be able to see every level of switch interconnect, so you can identify choke-points and work around them.’” - Richard Chirgwin, The Register

To read more industry coverage of the new release, visit the InfiniBand in the News page.

For additional information about the InfiniBand specification, check out the InfiniBand specification FAQ or access the InfiniBand specification here.

Bill Lee

IBTA Publishes Updated Integrators’ List & Announces Plugfest #27

March 17th, 2015

We’re excited to announce the availability of the IBTA October 2014 Combined Cable and Device Integrators’ List – a compilation of results from the IBTA Plugfest #26 held last fall. The list gathers products that have been tested and accepted by the IBTA as being compliant with the InfiniBand™ architecture specifications, with new products being added every spring and fall, following each Plugfest event.

The updated Integrators’ List, along with the bi-annual Plugfest testing, is a testament to the IBTA’s dedication to advancing InfiniBand technology and furthering industry adoption. The list demonstrates how our organization continues to ensure InfiniBand interoperability, as all cables and devices listed have successfully passed all the required compliance tests and procedures.

Additionally, the regularly updated Integrators’ List assures vendors’ customers and end users that the equipment included on the list has achieved the necessary level of compliance and interoperability. It also helps the IBTA to assess current and future industry demands, providing background for future InfiniBand specifications.

We’ve already begun preparations for Plugfest #27, taking place April 13-24, 2015 at the University of New Hampshire’s Interoperability Laboratory in Durham, N.H. For more information, or to register for Plugfest #27, please visit the IBTA Plugfest website.

For any questions related to the Integrators’ List or IBTA membership, please visit the IBTA website.

Rupert Dance, IBTA CIWG


IBTA Tests Compliance & Interoperability with Top Vendors at Plugfest #26

February 17th, 2015

In preparation for the IBTA’s upcoming Integrators’ List and April Plugfest, we wanted to give a quick recap of our last Plugfest, which included some great participants.

Every year, the IBTA hosts two Compliance and Interoperability Plugfests, one in April and one in October, at the University of New Hampshire (UNH) Interoperability Lab (IOL) in Durham, New Hampshire. The Plugfest’s purpose is to provide an opportunity for participants to measure their products for compliance with the InfiniBand Architecture Specification as well as interoperability with other InfiniBand products.

This past October, we hosted our 26th Plugfest in New Hampshire. A total of 16 cable vendors participated, while our device vendors included Intel, Mellanox and NetApp. Test equipment vendors included Anritsu, Keysight (formerly Agilent) and Tektronix. Overall, 136 cables and 13 devices were tested.


The Integrators’ List, a compilation of all the products tested and accepted to be compliant with the InfiniBand architecture specification, will go live in about a month, so stay tuned!

Plugfest #27 will take place from April 13 to April 24. The cable and device registration deadline is Wednesday, March 16, while the shipping deadline is Wednesday, April 1. Check out the IBTA website for additional information on the upcoming Plugfest.


Rupert Dance, IBTA CIWG



Storage with Intense Network Growth and the Rise of RoCE

February 4th, 2015

On January 4 and 5, the Entertainment Storage Alliances held the 14th annual Storage Visions conference in Las Vegas, highlighting advances in storage technologies used in consumer electronics and the media and entertainment industries. The theme of Storage Visions 2015 was Storage with Intense Network Growth (SWING), which was very appropriate given the explosive growth in both data storage and networking.


While the primary focus of Storage Visions is storage technologies, this year’s theme acknowledged the connection between storage growth and network growth. Therefore, among the many sessions offered on increased capacity and higher performance, the storage networking session was specifically designed to educate the audience on advances in network technology – “Speed is the Need: High Performance Data Center Fabrics to Speed Networking.”

More pressure is being put on the data center network from a variety of sources, including continued growth in enterprise application transactions, new sources of data (aka big data) and the growth in streaming video and the emergence of 4K video. According to Cisco, global IP data center traffic will grow 23% annually to 8.7 zettabytes by 2018. Three quarters of this traffic will remain within the data center, as traffic between servers (East-West) or between servers and storage (North-South). Given this, data centers need to factor in technologies designed to optimize data center traffic.
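
For context, the forecast implies the following 2013 baseline:

```python
# Back-computing the 2013 baseline implied by Cisco's forecast above:
# 23% annual growth reaching 8.7 ZB by 2018.
target_zb, annual_growth, years = 8.7, 0.23, 5   # 2013 -> 2018
baseline_zb = target_zb / (1 + annual_growth) ** years
print(f"Implied 2013 traffic: {baseline_zb:.1f} ZB")  # ~3.1 ZB
```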

Global Data Center IP Traffic Forecast, Cisco Global Cloud Index, 2013-2018


Global Data Center Traffic By Destination, Cisco Global Cloud Index, 2013-2018


Storage administrators have always placed emphasis on two important metrics, I/O operations per second (IOPS) and throughput, to measure the ability of the network to serve storage devices. Lately, a third metric, latency, has become equally important. When balanced with IOPS and throughput, low latency technologies can bring dramatic benefits to storage; the sketch below shows how the three metrics relate.
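
A simple illustration of the relationship (the I/O size, rate and latency below are hypothetical values, chosen only to show the arithmetic):

```python
# How the three storage metrics relate. All numbers are illustrative.
io_size_bytes = 4 * 1024   # 4 KiB I/Os
iops = 200_000             # operations per second
latency_s = 100e-6         # 100 microseconds per operation

throughput_mb_s = iops * io_size_bytes / 1e6
# Little's law: concurrency needed = rate x latency
queue_depth = iops * latency_s

print(f"Throughput: {throughput_mb_s:.0f} MB/s")   # ~819 MB/s
print(f"Queue depth needed: {queue_depth:.0f}")    # 20
# Lower latency means fewer outstanding I/Os are needed to sustain the
# same IOPS -- which is exactly where RDMA-based fabrics help.
```
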
At this year’s Storage Visions conference, I was asked to sit on a panel discussing the benefits of Remote Direct Memory Access (RDMA) for storage traffic. I specifically called out the benefits of RDMA over Converged Ethernet (RoCE). Joining me on the panel were representatives from Mellanox, speaking about InfiniBand, and Chelsio, speaking about iWARP. The storage-focused audience showed real interest in the topic and asked a number of insightful questions about RDMA benefits for their storage implementations.

RoCE in particular brings specific benefits to data center storage environments. As the purest implementation of the InfiniBand specification in the Ethernet environment, it has the ability to provide the lowest latency for storage. In addition, it capitalizes on the converged Ethernet standards defined in the IEEE 802.1 standards for Ethernet, including Congestion Management, Enhanced Transmission Selection and Priority Flow Control, which collectively allow for lossless transmission, bandwidth allocation and quality of service. With the introduction of RoCEv2 in September 2014, the technology moved from supporting a (flat) Layer 2 network to being a routable protocol supporting Layer 3 networks, allowing for use in distributed storage environments; the framing sketch below shows what changed.
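
For the protocol-minded, here is the header layering behind that change, written out as plain Python lists for illustration (the RoCE Ethertype and the RoCEv2 UDP port are from the published specifications):

```python
# Header layering: RoCE (v1) is a Layer 2 protocol; RoCEv2 rides on
# UDP/IP, which is what makes it routable across Layer 3 networks.
roce_v1 = ["Ethernet (Ethertype 0x8915)", "IB GRH", "IB BTH", "payload", "ICRC"]
roce_v2 = ["Ethernet", "IPv4/IPv6", "UDP (dst port 4791)", "IB BTH", "payload", "ICRC"]
```
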
Ultimately, what customers need for optimal Ethernet-based storage is technology that balances IOPS, throughput and latency while allowing for flexible storage placement in their network. RoCE addresses all of these needs and is becoming widely available in popular server and storage offerings.

Mike Jochimsen, co-chair of the Marketing Working Group (MWG) at IBTA



InfiniBand Trade Association Welcomes Luxshare as Member Company

August 26th, 2014

Please join us in giving a warm welcome to the IBTA’s newest member company, Luxshare-ICT.

As an active member, the global interconnect designer and manufacturer will work closely with the IBTA to continue to further the InfiniBand specification as it pertains to the cable and physical layer. In addition to assisting with the development of InfiniBand interconnect technologies for the industry and its customers, Luxshare will also be actively involved in the IBTA Compliance and Interoperability Working Group (CIWG), which oversees the bi-annual Plugfest event.

The IBTA Plugfest, held twice per year at the University of New Hampshire (UNH) Interoperability Lab (IOL) in Durham, New Hampshire, provides an opportunity for participants to measure products for compliance with the InfiniBand™ architecture specification as well as interoperability with other InfiniBand products. Vendor devices and cables successfully passing all required compliance tests and interoperability procedures are listed on the IBTA Integrators’ List and granted the IBTA Integrators’ List Logo.

Luxshare-ICT is a global, publicly traded interconnect provider, experiencing rapid growth in global markets as it expands its products through technical enhancements. For more information about Luxshare and its offerings, please visit the company’s website.

Bill Lee

Broadcom Joins the InfiniBand Trade Association

August 19th, 2014

The IBTA is pleased to announce Broadcom has become an official member company.

Broadcom, which develops end-to-end solutions for data center architectures including Ethernet switch, controller and other devices, joined the IBTA to assist the industry in its adoption of Remote Direct Memory Access (RDMA) enabled Ethernet networks and to help the association in its ongoing development of the RDMA over Converged Ethernet (RoCE) specification.

“Broadcom believes that RDMA is one of several interesting technologies that have promise to simplify and accelerate applications using more intelligence in the network,” said Nick Ilyadis, Vice President & CTO, Infrastructure and Networking Group at Broadcom. “Enabling use of RDMA over Ethernet networks, as RoCE does, provides a more comprehensive and converged solution for Ethernet-based data center fabric topologies.”

Broadcom is considering adding RDMA support to a variety of Ethernet products and anticipates that its experience with Ethernet and Internet Protocol (IP) standards development will help in the ongoing work of generating and maintaining a well-engineered, interoperable and collaborative specification for use by all IBTA members.

Broadcom’s main interest as part of the IBTA is to contribute to the ongoing development of the RoCE specification as part of the IBoXE Working Group. Broadcom is also a member of the IBTA Technical and Marketing Working Groups.

Broadcom has found a welcoming environment within the IBTA.  “The association is made up of a very collaborative set of companies and technologists,” Nick Ilyadis said.

Broadcom Corporation (NASDAQ: BRCM) is a global FORTUNE 500® company, providing semiconductor solutions for wired and wireless communications. Broadcom® system-on-a-chip solutions deliver voice, video, data and multimedia connectivity in the home, office and mobile environments. For more information about Broadcom, please visit the Broadcom website.

Bill Lee

ISC’14 Insights

July 10th, 2014

With ISC’14 recently concluded, the IBTA would like to highlight some of our members’ biggest announcements, share key takeaways from the event and give an overall look at the impact of InfiniBand in the supercomputing world.

Event Overview

Overall the event was a huge success, showcasing demonstrations and facilitating many major announcements in the supercomputing industry. As always, the event attracted supercomputing leaders from around the world, who provided thought leadership and knowledge to thousands of attendees and exhibitors.

Chris Pfistner, senior director of marketing at Finisar, noted, “ISC in Leipzig is a vibrant show!  Between SC in the US and ISC focused on Europe and Asia, the global Super Computing community is well represented, and the space is on fire with several players unveiling new systems and showing great interest in Finisar’s EDR AOC demo – 100G links are becoming reality.”

The ISC YouTube channel is another great resource for checking out all that happened at ISC’14!

Major InfiniBand Announcements

During the event, the updated TOP500 list was released, showing that InfiniBand is the most used interconnect by the world’s fastest supercomputers, representing 44.4 percent of the TOP500 at 222 systems. InfiniBand also connects the 17 most efficient systems and 50 percent of Petaflop-capable systems on the list. Fourteen Data Rate (FDR) InfiniBand, delivering 56Gb/s per link to address the bandwidth demand of high performance clustered computing, drives the systems with the highest utilization on the TOP500, achieving a record-breaking 99.8 percent system efficiency.

IBTA member Finisar showcased a live demonstration of its next generation 100Gb/s Quadwire Active Optical Cable (AOC). This optical interconnect runs four parallel lanes of 25Gb/s to enable the next generation of InfiniBand EDR-based supercomputers. Leveraging Finisar’s patented fiber optic technology, the AOC will allow the transmission of high-speed data with enhanced signal integrity, higher density, lower power consumption, smaller bend radius and longer reach than traditional copper solutions.

Mellanox Technologies, also an IBTA member, unveiled the world’s first 100Gb/s EDR InfiniBand switch at ISC’14. The 100Gb/s switch expands InfiniBand’s bandwidth and provides increased switching capacity while lowering latency and power consumption. This solution, called Switch-IB, delivers 5.4 billion packets per second, making it an excellent solution for HPC, cloud, Web 2.0, database and storage data centers.

In other news, Applied Micro teamed up with Nvidia to promote the X-Gene and Tesla duo. The Ethernet ports on the X-Gene 2 chip will be able to run RDMA over Converged Ethernet (RoCE), which brings the low latency of RDMA to the Ethernet protocol. This makes the chip suitable not only for latency-sensitive HPC workloads, but also for database, storage and transaction processing workloads in enterprise datacenters that also demand low latency. In tandem, Mellanox announced that its 56Gb/s InfiniBand products will now be optimized for the X-Gene platform.

To sum it all up, here are the IBTA’s key takeaways from the event:

  • 100Gb/s is becoming a reality
  • InfiniBand is leading the way, with EDR taking center stage
  • The industry is taking full advantage today of the performance and efficiency benefits of FDR and RoCE

ISC’14 was a great forum for knowledge sharing and big industry announcements. All of us at the IBTA look forward to seeing more InfiniBand news and discussions in the coming months.