Archive for the ‘InfiniBand’ Category

Changes to the Modern Data Center – Recap from SDC 15

October 19th, 2015

The InfiniBand Trade Association recently had the opportunity to speak on RDMA technology at the 2015 Storage Developer Conference. For the first time, SDC15 introduced Pre-conference Primer Sessions covering topics such as Persistent Memory, Cloud and Interop, and Data Center Infrastructure. Intel’s David Cohen, System Architect, and Brian Hausauer, Hardware Architect, spoke on behalf of the IBTA in a pre-conference session on “Nonvolatile Memory (NVM), four trends in the modern data center and implications for the design of next generation distributed storage systems.”

Below is a high level overview of their presentation:

The modern data center continues to transform as applications and uses change and develop. Most recently, we have seen users abandon traditional storage architectures for the cloud. Cloud storage is founded on data-center-wide connectivity and scale-out storage, which delivers significant increases in capacity and performance, enabling application deployment anytime, anywhere. Additionally, job scheduling and system balance capabilities are boosting overall efficiency and optimizing a variety of essential data center functions.

Trends in the modern data center are appearing as cloud architecture takes hold. First, the performance of network bandwidth and storage media is growing rapidly. Furthermore, operating system vendors (OSV) are optimizing the code path of their network and storage stacks. All of these speed and efficiency gains to network bandwidth and storage are occurring while single processor/core performance remains relatively flat.

Data comes in a variety of flavors: some is accessed frequently by application I/O requests, while other data is rarely retrieved. To enable higher performance and resource efficiency, cloud storage uses a tiering model that places data according to how often it is accessed. Data that is regularly accessed is stored on expensive, high performance media (solid-state drives). Data that is rarely or never retrieved is relegated to less expensive media with the lowest $/GB (rotational drives). This model follows a Hot, Warm and Cold data pattern and gives you the fastest access to the data you use the most.
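
To make the tiering idea concrete, here is a minimal sketch of an access-frequency policy. The thresholds and names are illustrative assumptions, not taken from any particular cloud storage stack:

```c
#include <stdio.h>

/* Hypothetical tiers: hot = SSD, warm = high-capacity disk, cold = archival. */
typedef enum { TIER_HOT, TIER_WARM, TIER_COLD } tier_t;

/* Pick a tier from the average number of reads per day over a recent window.
 * The thresholds are illustrative; real systems tune them per workload. */
static tier_t choose_tier(double reads_per_day)
{
    if (reads_per_day >= 10.0)
        return TIER_HOT;   /* frequently accessed -> low-latency SSD */
    if (reads_per_day >= 0.1)
        return TIER_WARM;  /* occasionally accessed -> cheaper disk  */
    return TIER_COLD;      /* rarely or never accessed -> lowest $/GB media */
}

int main(void)
{
    const char *names[] = { "hot", "warm", "cold" };
    const double samples[] = { 250.0, 2.5, 0.01 };
    for (int i = 0; i < 3; i++)
        printf("%.2f reads/day -> %s tier\n", samples[i], names[choose_tier(samples[i])]);
    return 0;
}
```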

The growth of high performance storage media is driving the need for innovation in the network, primarily addressing application latency. This is where Remote Direct Memory Access (RDMA) comes into play. RDMA is an advanced, reliable transport protocol that enhances the efficiency of workload processing. Essentially, it increases data center application performance by offloading the movement of data from the CPU. This lowers overhead and allows the CPU to focus its processing power on running applications, which in turn reduces latency.
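
For readers curious what “offloading the movement of data from the CPU” looks like at the API level, below is a minimal sketch of a one-sided RDMA write using the libibverbs API. It assumes the queue pair, the registered memory region and the peer’s buffer address and rkey were already set up during connection establishment; those steps and most error handling are omitted:

```c
#include <stdint.h>
#include <string.h>
#include <infiniband/verbs.h>

/* Post a one-sided RDMA WRITE: the adapter moves `len` bytes from the local
 * registered buffer directly into the remote buffer. The remote CPU takes no
 * part in the transfer. `qp`, `mr`, `remote_addr` and `rkey` are assumed to
 * have been created and exchanged during connection setup. */
static int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                           void *buf, size_t len,
                           uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)buf,
        .length = (uint32_t)len,
        .lkey   = mr->lkey,
    };

    struct ibv_send_wr wr;
    struct ibv_send_wr *bad_wr = NULL;

    memset(&wr, 0, sizeof(wr));
    wr.wr_id      = 1;
    wr.sg_list    = &sge;
    wr.num_sge    = 1;
    wr.opcode     = IBV_WR_RDMA_WRITE;
    wr.send_flags = IBV_SEND_SIGNALED;      /* request a local completion */
    wr.wr.rdma.remote_addr = remote_addr;   /* peer's buffer address */
    wr.wr.rdma.rkey        = rkey;          /* peer's memory key */

    return ibv_post_send(qp, &wr, &bad_wr); /* returns 0 on success */
}
```

Because the write is one-sided, the remote side posts no receive for the transfer; the local side simply reaps a completion from its send completion queue (for example with ibv_poll_cq) once the data has been placed.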

Demand for cloud storage is increasing, and the need for RDMA and high performance storage networking is growing along with it. With this in mind, the InfiniBand Trade Association is continuing its work developing the RDMA architecture for InfiniBand and Ethernet (via RDMA over Converged Ethernet, or RoCE) topologies.

Bill Lee

Race to Exascale – Nations Vie to Build Fastest Supercomputer

September 28th, 2015

“Discover Supercomputer 3” by NASA Goddard Space Flight Center is licensed under CC BY 2.0

The race between countries to build the fastest, biggest or first of anything is nothing new – think the race to the moon. One of the current global competitions is focused on supercomputing, specifically the race to Exascale computing, or a billion billion calculations per second. Recently, governments have begun allocating significant resources toward Exascale initiatives (consider President Obama’s Executive Order and China’s drive to maintain its current lead in supercomputing) as they come to understand the technology’s vast potential for a variety of industries, including healthcare, defense and space exploration.

The TOP500 list ranking the top supercomputers in the world will continue to be the scorecard. Currently, the U.S. leads with 233 of the top 500 supercomputers, followed by Europe with 141 and China with 37. However, China’s smaller portfolio of supercomputers does not make it any less of a competitor in the supercomputing space: China has held the #1 spot on the TOP500 list for the fifth consecutive release.

When looking to build the supercomputers of the future, there are a number of factors that need to be taken into consideration, including superior application performance, compute scalability and resource efficiency. InfiniBand’s compute offloads and scalability make it extremely attractive to supercomputer architects. Proof of this performance and scalability can be found in places such as the HPC Advisory Council’s library of case studies. InfiniBand makes it possible to achieve near linear performance improvement as more computers are added to a cluster. Since observers of this space expect Exascale systems to require a massive amount of compute hardware, InfiniBand’s scalability looks to be a requirement for achieving this goal.
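
“Near linear” scaling is commonly quantified as parallel efficiency: the speedup over a single node divided by the number of nodes. The short sketch below uses illustrative timings, not results from any specific system, to show how the metric is computed:

```c
#include <stdio.h>

/* Parallel efficiency: E(N) = T(1) / (N * T(N)).
 * Values close to 1.0 indicate near-linear scaling as nodes are added. */
static double efficiency(double t1_seconds, double tn_seconds, int nodes)
{
    return t1_seconds / (nodes * tn_seconds);
}

int main(void)
{
    /* Illustrative runtimes for the same fixed-size job on 1, 64 and 512 nodes. */
    const double t1 = 10240.0;
    printf("64 nodes:  E = %.2f\n", efficiency(t1, 168.0, 64));  /* ~0.95 */
    printf("512 nodes: E = %.2f\n", efficiency(t1, 22.5, 512));  /* ~0.89 */
    return 0;
}
```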

As the race to supercomputing speeds up, we expect to see a number of exciting advances in technology as we shift from petaflops to exaflops. To give you an idea of how far we have come and where we are heading, here is a comparison of the speed of the computers that powered the race to space with the goals for Exascale.

Speeds Then vs. Now – Race to Space vs. Race to Supercomputing

  • Computers in the 1960s (Speed of the Race to Space): Hectoscale (hundreds of floating point operations per second, or FLOPS)
  • Goal for Computers in 2025 (Speed of the Race to Supercomputing): Exascale (a quintillion, or 10^18, FLOPS)

Advances in supercomputing will continue to dominate the news with these two nations making the development of the fastest supercomputer a priority. As November approaches and the new TOP500 list is released, it will be very interesting to see where the rankings lie and what interconnects the respective architects will pick.

Bill Lee

EDR Hits Primetime! Newly Published IBTA Integrators’ List Highlights Growth of EDR

August 20th, 2015

The highly anticipated IBTA April 2015 Combined Cable and Device Integrators’ List is now available for download. The list highlights the results of the IBTA Plugfest 27 held at the University of New Hampshire’s Interoperability Lab earlier this year. The updated list consists of newly verified products that are compliant with the InfiniBand specification, as well as details on solution interoperability.

Of particular note was the rise of EDR submissions. At IBTA Plugfest 27, eight companies provided 32 EDR cables for testing, up from three companies and 12 EDR cables at IBTA Plugfest 26. The increase in EDR cable solutions indicates that the technology is beginning to hit its stride. At Plugfest 28 we anticipate even more EDR solutions.

The IBTA is known in the industry for its rigorous testing procedures and subsequent Integrators’ List. The Integrators’ List provides IT professionals with peace of mind when purchasing new components to incorporate into new and existing infrastructure. To ensure the most reliable results, the IBTA uses industry-leading test equipment from Anritsu, Keysight, Molex, Tektronix, Total Phase and Wilder Technologies. We appreciate their commitment to our compliance program; we couldn’t do it without them.

The IBTA hosts its Plugfest twice a year to give members a chance to test new configurations or form factors. Although many technical associations require substantial attendance fees for testing events, the IBTA covers the bulk of Plugfest costs through membership dues.

The companies participating in Plugfest 27 included 3M Company, Advanced Photonics, Inc., Amphenol, FCI, Finisar, Fujikura, Ltd., Fujitsu Component Limited, Lorom America, Luxshare-ICT, Mellanox Technologies, Molex Incorporated, SAE Magnetics, Samtec, Shanghai Net Miles Fiber Technology Co. Ltd, Siemon, Sumitomo, and Volex.

We’ve already begun planning for IBTA Plugfest 28, which will be held October 12-23, 2015. For questions about Plugfest or for additional information, please visit the Plugfest page.

Rupert Dance, IBTA CIWG

InfiniBand leads the TOP500 powering more than 50 percent of the world’s supercomputing systems

August 4th, 2015

TOP500 Interconnect Trends

The TOP500 project has released its latest list of the 500 most powerful commercially available computer systems in the world, and it reports that InfiniBand powers 257 systems, or 51.4 percent of the list. This marks 15.8 percent year-over-year growth from June 2014.

Demand for higher bandwidth, lower latency and higher message rates, along with the need for application acceleration, is driving continued adoption of InfiniBand in traditional High Performance Computing (HPC) as well as commercial HPC, cloud and enterprise data centers. InfiniBand is the only open-standard I/O that provides the capability required to handle supercomputing’s high demand for CPU cycles without wasting time on I/O transactions.

  • InfiniBand powers the most efficient system on the list, with 98.8% efficiency.
  • EDR (Enhanced Data Rate) InfiniBand delivers 100Gbps and enters the TOP500 for the first time, powering three systems.
  • FDR (Fourteen Data Rate) InfiniBand at 56Gbps continues to be the most used technology on the TOP500, connecting 156 systems.
  • InfiniBand connects the most powerful clusters, 33 of the Petascale-performance systems, up from 24 in June 2014.
  • InfiniBand is the leading interconnect for accelerator-based systems, covering 77% of the list.

Not only is InfiniBand the most used interconnect solution in the world’s 500 most powerful supercomputers, it is also the leader in the TOP100, which encompasses the top 100 supercomputing systems as ranked in the TOP500. InfiniBand is the natural choice for world-leading supercomputers because of its performance, efficiency and scalability.

The full TOP500 list is available on the TOP500 website.

Bill Lee


IBTA Members to Exhibit at ISC High Performance 2015

July 10th, 2015


The ISC High Performance 2015 conference gets underway this weekend in Frankfurt, Germany, where experts in the high performance computing field will gather to discuss the latest developments and trends driving the industry. Event organizers are expecting over 2,500 attendees at this year’s show, which will feature speakers, presentations, BoF sessions, tutorials and workshops on a variety of topics.

IBTA members will be on hand exhibiting their latest InfiniBand-based HPC solutions. Multiple EDR 100Gb/s InfiniBand products and demonstrations can be seen across the exhibit hall at ISC High Performance at the following member company booths:

  • Applied Micro (Booth #1431)
  • Bull (Booth #1230)
  • HP (Booth #732)
  • IBM (Booth #928)
  • Lenovo (Booth #1020)
  • Mellanox (Booth #905)
  • SGI (Booth #910)

Be sure to stop by each of our member booths and ask about their InfiniBand offerings! For additional details on ISC High Performance 2015 keynotes and sessions, visit its program overview page.

Bill Lee


To InfiniBand and Beyond – Supercomputing Support for NASA Missions

June 11th, 2015


High performance computing has been integral to solving large-scale problems across many industries, including science, engineering and business. Some of the most interesting use cases have come out of NASA, where supercomputing is essential to conduct accurate simulations and models for a variety of missions.

NASA’s flagship supercomputer, Pleiades, is among the world’s most powerful, currently ranking seventh in the United States and eleventh globally. It is housed at the NASA Advanced Supercomputing (NAS) facility in California and supports the agency’s work in aeronautics, Earth and space science and the future of space travel. At the heart of the system is InfiniBand technology, including DDR, QDR and FDR adapters and cabling.

The incremental expansion of Pleiades’ computing performance has been fundamental to its lasting success. Typically, a computer cluster is fully built from the outset and rarely expanded or upgraded during its lifetime. Built in 2008, Pleiades initially consisted of 64 server racks achieving 393 teraflops with a maximum link speed of 20Gb/s. Today, the supercomputer boasts 160 racks with a theoretical peak performance of 5.35 petaflops (5,350 teraflops) and a maximum link speed of 56Gb/s.

To further demonstrate the power of the InfiniBand-based Pleiades supercomputer, here are several fun facts to consider:

  • Today’s Pleiades supercomputer delivers more than 25 million times the computational power of the first Cray X-MP supercomputer at the NAS facility in 1984.
  • If every person in the world performed one calculation per second for eight hours a day, it would take 1,592 days for humanity to match what Pleiades calculates in one minute (a back-of-envelope check follows this list).
  • The NAS facility houses the largest InfiniBand network in the world, with over 65 miles (104.6 km) of cable interconnecting its supercomputing systems and storage devices, roughly the distance from the Earth’s surface to the part of the thermosphere where auroras form.
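
As a back-of-envelope check on the second figure, the arithmetic works out if you take the theoretical peak of 5.35 petaflops quoted above and assume a world population of roughly 7 billion (the population figure is our assumption, not from NASA’s materials):

```c
#include <stdio.h>

int main(void)
{
    /* Assumption for this check: ~7 billion people, each doing one
     * calculation per second for eight hours a day. */
    const double people          = 7.0e9;
    const double peak_flops      = 5.35e15;              /* operations per second */
    const double seconds_per_day = 8.0 * 3600.0;         /* 8 hours of calculating */

    double ops_in_one_minute = peak_flops * 60.0;        /* ~3.2e17 operations */
    double human_ops_per_day = people * seconds_per_day; /* 1 op/s per person */

    printf("days required: %.0f\n", ops_in_one_minute / human_ops_per_day);
    /* prints roughly 1592, matching the figure above */
    return 0;
}
```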

For additional facts about the impact of NASA’s high-end computing capability, check out its website.

Bill Lee

IBTA Publishes Updated Integrators’ List & Announces Plugfest #27

March 17th, 2015

We’re excited to announce the availability of the IBTA October 2014 Combined Cable and Device Integrators’ List – a compilation of results from the IBTA Plugfest #26 held last fall. The list gathers products that have been tested and accepted by the IBTA as being compliant with the InfiniBand™ architecture specifications, with new products being added every spring and fall, following each Plugfest event.

The updated Integrators’ List, along with the bi-annual Plugfest testing, is a testament to the IBTA’s dedication to advancing InfiniBand technology and furthering industry adoption. The list demonstrates how our organization continues to ensure InfiniBand interoperability, as all cables and devices listed have successfully passed all the required compliance tests and procedures.

Additionally, the regularly updated Integrators’ List assures vendors’ customers and end users that the equipment it includes has achieved the necessary level of compliance and interoperability. It also helps the IBTA assess current and future industry demands, providing background for future InfiniBand specifications.

We’ve already begun preparations for Plugfest #27, taking place April 13-24, 2015 at the University of New Hampshire’s Interoperability Laboratory in Durham, N.H. For more information, or to register for Plugfest #27, please visit the IBTA Plugfest website.

For any questions related to the Integrators’ List or IBTA membership, please visit the IBTA website.

Rupert Dance, IBTA CIWG

IBTA Releases New Update to InfiniBand Architecture Specification

March 10th, 2015

Today, the IBTA announced Release 1.3 of the InfiniBand Architecture Specification Volume 1. In keeping with our mission to maintain and advance the specification, we worked to incorporate capabilities that keep up with increasing customer demand for faster data transfer and accessibility. The result is a fresh version of the specification with new features that enable computer systems to meet high performance computing and data center needs for greater stability and bandwidth, as well as higher computing efficiency, availability and isolation.

The rapid evolution of the computer and the internet has left existing interconnect technologies struggling to keep pace. The increasing popularity of high-end computing concepts like clustering, CPU offloads and the movement of large amounts of data demands a robust fabric.

Additionally, I/O devices are now expected to provide link-level data protection, traffic isolation, deterministic behavior and a high quality of service. To help meet these needs, the IBTA developed the InfiniBand Architecture (IBA) in 2000 and has continued to update the specification suite as the industry evolves.

As a result of this initiative, Release 1.3 enables notable improvements in scalability, with deeper visibility into switch-fabric configuration for better monitoring of traffic patterns and easier network maintenance. The updated specification also allows for more in-depth cable management, giving diagnostic mechanisms greater access to cable information; this shortens response times and improves messaging capabilities, and with them overall network performance.


Bill Lee


Storage with Intense Network Growth and the Rise of RoCE

February 4th, 2015

On January 4 and 5, the Entertainment Storage Alliances held the 14th annual Storage Visions conference in Las Vegas, highlighting advances in the storage technologies used in consumer electronics and the media and entertainment industries. The theme of Storage Visions 2015 was Storage with Intense Network Growth (SWING), which was fitting given the explosive growth in both data storage and networking.


While the primary focus of Storage Visions is storage technology, this year’s theme acknowledged the correlation between storage growth and network growth. Therefore, among the many sessions on increased capacity and higher performance, one storage networking session was specifically designed to educate the audience on advances in network technology, “Speed is the Need: High Performance Data Center Fabrics to Speed Networking.”

More pressure is being put on the data center network from a variety of sources, including continued growth in enterprise application transactions, new sources of data (aka big data), the growth in streaming video and the emergence of 4K video. According to Cisco, global IP data center traffic will grow 23% annually to 8.7 zettabytes by 2018. Three quarters of this traffic will stay within the data center, as traffic between servers (East-West) or between servers and storage (North-South). Given this, data centers need to factor in technologies designed to optimize data center traffic.
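
Treating Cisco’s 23% figure as a compound annual growth rate over the five-year forecast window is an assumption on our part, but under that reading a quick back-of-envelope calculation gives the implied 2013 starting point:

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Assumption: 23% is a compound annual growth rate applied over the
     * five years from 2013 to 2018, ending at 8.7 zettabytes. */
    const double traffic_2018_zb = 8.7;
    const double cagr            = 0.23;
    const int    years           = 5;

    double implied_2013_zb = traffic_2018_zb / pow(1.0 + cagr, years);
    printf("implied 2013 traffic: %.1f ZB\n", implied_2013_zb); /* ~3.1 ZB */
    return 0;
}
```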

Global Data Center IP Traffic Forecast, Cisco Global Cloud Index, 2013-2018

Global Data Center Traffic By Destination, Cisco Global Cloud Index, 2013-2018

Storage administrators have always placed emphasis on two important metrics, I/O operations per second (IOPS) and throughput, to measure the ability of the network to serve storage devices. Lately, a third metric, latency, has become equally important. When balanced with IOPS and throughput, low latency technologies can bring dramatic benefits to storage.
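
These three metrics are not independent. For a given number of outstanding I/Os, Little’s Law ties achievable IOPS to latency, and throughput is simply IOPS multiplied by the I/O size. The sketch below uses illustrative numbers rather than measurements from any particular device:

```c
#include <stdio.h>

int main(void)
{
    /* Little's Law: queue_depth = IOPS * latency, so IOPS = queue_depth / latency.
     * Throughput is then IOPS * I/O size. All numbers are illustrative. */
    const double queue_depth   = 32.0;
    const double latency_sec   = 100e-6;           /* 100 microseconds per I/O */
    const double io_size_bytes = 4096.0;           /* 4 KiB I/Os */

    double iops       = queue_depth / latency_sec;
    double throughput = iops * io_size_bytes;      /* bytes per second */

    printf("IOPS:       %.0f\n", iops);                   /* 320000 */
    printf("Throughput: %.1f MB/s\n", throughput / 1e6);  /* ~1310.7 MB/s */
    return 0;
}
```

Lowering latency at the same queue depth raises the achievable IOPS proportionally, which is why RDMA-based transports are attractive for fast storage media.
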
At this year’s Storage Visions conference, I was asked to sit on a panel discussing the benefits of Remote Direct Memory Access (RDMA) for storage traffic. I specifically called out the benefits of RDMA over Converged Ethernet (RoCE). Joining me on the panel were representatives from Mellanox, speaking about InfiniBand, and Chelsio, speaking about iWARP. The storage-focused audience showed real interest in the topic and asked a number of insightful questions about RDMA benefits for their storage implementations.

RoCE in particular brings specific benefits to data center storage environments. As the purest implementation of the InfiniBand specification in the Ethernet environment, it has the ability to provide the lowest latency for storage. In addition, it capitalizes on the converged Ethernet capabilities defined in the IEEE 802.1 standards, including Congestion Management, Enhanced Transmission Selection and Priority Flow Control, which collectively allow for lossless transmission, bandwidth allocation and quality of service. With the introduction of RoCEv2 in September 2014, the technology moved from supporting a (flat) Layer 2 network to being a routable protocol supporting Layer 3 networks, allowing for use in distributed storage environments.
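
To make the RoCE v1 versus RoCEv2 distinction concrete, the sketch below lays out the RoCEv2 encapsulation in the order it appears on the wire. RoCEv2 carries the InfiniBand transport headers inside an ordinary UDP datagram (IANA-assigned destination port 4791), which is what makes the traffic routable by standard Layer 3 equipment. The struct is an illustration of the layering only, not a wire-accurate header definition:

```c
/* Illustrative layering only; field sizes are simplified.
 *
 * RoCE v1 (Layer 2 only):     Ethernet (Ethertype 0x8915) | IB GRH | IB BTH | payload | ICRC
 * RoCE v2 (Layer 3 routable): Ethernet | IPv4 or IPv6 | UDP (dst port 4791) | IB BTH | payload | ICRC
 */
struct rocev2_packet_layout {
    unsigned char eth_header[14];  /* standard Ethernet II header              */
    unsigned char ip_header[20];   /* IPv4 shown here; IPv6 is also supported  */
    unsigned char udp_header[8];   /* destination port 4791 identifies RoCEv2  */
    unsigned char ib_bth[12];      /* InfiniBand Base Transport Header         */
    /* RDMA payload follows, terminated by a 4-byte invariant CRC (ICRC).      */
};
```

Because the outer IP and UDP headers are conventional, routers and ECMP hashing treat RoCEv2 flows like any other UDP traffic, while the Base Transport Header underneath preserves InfiniBand transport semantics.
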
Ultimately, what customers need for optimal Ethernet-based storage is technology that balances IOPS, throughput and latency while allowing for flexible storage placement in their network. RoCE addresses all of these needs and is becoming widely available in popular server and storage offerings.

Mike Jochimsen, co-chair of the Marketing Working Group (MWG) at IBTA


The IBTA Celebrates Its 15th Anniversary

December 15th, 2014

Since 1999, the IBTA has worked to further the InfiniBand specification in order to provide the IT industry with an advanced fabric architecture that transmits large amounts of data between data centers around the globe. This year, the IBTA is celebrating 15 years of growth and success.

In its mission to unite the IT industry, the IBTA has welcomed an array of distinguished members including Cray, Microsoft, Oracle and QLogic. The IBTA now boasts over 50 member companies all dedicated to furthering the InfiniBand specification.

The continued growth of the IBTA reflects the IT industry’s dedication to the advancement of InfiniBand. Many IBTA member companies are developing products incorporating InfiniBand technology, including FDR, which has proven to be the fastest growing generation of InfiniBand: FDR adoption grew 76 percent year over year, from 80 systems in 2013 to 141 systems in 2014. Most recently, the TOP500 list showed that 225 of the world’s most powerful computers chose InfiniBand as their interconnect in 2014.

2014 also marked the release of RoCEv2, an extension of the original RoCE specification announced in 2010, which brought the benefits of the Remote Direct Memory Access (RDMA) I/O architecture to Ethernet-based networks. The updated specification addresses the needs of today’s evolving enterprise data centers by enabling routing across Layer 3 networks. By extending RoCE to allow Layer 3 routing, the specification provides better traffic isolation and enables hyperscale data center deployments.

Below is a timeline that further illustrates the IBTA’s advancements over the past 15 years that have helped to bring InfiniBand technology to the forefront of the interconnect industry.


Volume 1 – General Specification
Volume 2 – Physical Specification

Bill Lee