Author Archive

SC15 Preview: Exascale or Bust! - On the Road to ExaFLOPS with IBTA

November 17th, 2015


This year’s Supercomputing Conference is shaping up to be one of IBTA’s most comprehensive to date. Whether listening to industry experts debate future system tune-ups or cruising the show floor learning from members one-on-one, event attendees are bound to leave revved up about how interconnect advancements will alleviate data management roadblocks.

OK. OK. Enough with the car metaphors. Read on, though, to see what IBTA’s got in store for you (including some enticing giveaways!):

Birds of a Feather Panel Session, Nov. 17 at 12:15 p.m. (Room 18AB)

“The Challenge of a Billion Billion Calculations Per Second: InfiniBand Roadmap Shows the Future of the High Performance Standard Interconnect for Exascale Programs”

It’s no secret: the race to exascale is on. The exponentially increasing amounts of data bogging down systems demand such speed. The critical issues now? What technological changes will be required to support this new High Performance Computing (HPC) vision, and is the industry ready to solve them? I’ll be joining Brandon Hoff to lead the following esteemed panelists through a discussion on interconnect capabilities and roadmaps that will deliver a billion billion calculations per second:

  • Ola Tørudbakken, Oracle
  • Gilad Shainer, Mellanox
  • Pavan Balaji, Argonne National Laboratory
  • Bob Ciotti, NASA

IBTA Update, Nov. 18 at 5 p.m. (Booth 613)

Standards-based, multi-vendor solutions are transforming HPC. In the Mellanox booth, I’ll be presenting the latest developments in InfiniBand technology and shedding light on what’s to come, proving why this interconnect standard powers some of the world’s fastest supercomputers.

IBTA Roadmap Game, Nov. 16-19 (Exhibitor Hall)

Interested in timing just how fast IBTA-driven data moves? Track it with a slick Pebble Steel Smartwatch offered as the IBTA Roadmap Game grand prize! To enter:

  • Pick up a Game Card at the Birds of a Feather session or participating member booths
    • Finisar Corporation (#2018)
    • Hewlett-Packard (#603)
    • Lenovo (#1509)
    • Mellanox (#613)
    • Samtec (#1943)
  • Visit experts from our five game-participating members
  • Learn how their companies are revolutionizing HPC technology with IBTA technologies
  • Get your Game Card stamped
  • Submit a completed Roadmap Game Card to an official drop-off location (noted on the card) to receive a handy IBTA convertible flashlight/lantern and a chance to win the watch

The above three activities are only the tip of the iceberg… or should I say, the white stripe on a ’69 Chevy Camaro? Many IBTA members are geared up for SC15 with products, solutions and guidance guaranteed to help you achieve your computing goals. Be sure to stop by for an introduction:

  • Bull SAS (#2131)
  • Cisco (#588)
  • Cray, Inc. (#1833)
  • Finisar Corporation (#2018)
  • Fujitsu Limited (#1827)
  • Hewlett-Packard (#603)
  • Hitachi (#1227)
  • IBM (#522)
  • Intel (#1333, #1533)
  • Lenovo (#1509)
  • Mellanox (#613)
  • Microsoft (#1319)
  • Molex (#268)
  • NetApp (#1537)
  • Oracle (#1327)
  • Samtec (#1943)

Last, but not least… if you can’t make it this year but are interested in learning about any of the topics above, email us or follow us on Twitter for updates @InfiniBandTrade.

Looking forward to seeing you there.

Bill Lee


Changes to the Modern Data Center – Recap from SDC 15

October 19th, 2015

The InfiniBand Trade Association recently had the opportunity to speak on RDMA technology at the 2015 Storage Developer Conference. For the first time, SDC15 introduced Pre-conference Primer Sessions covering topics such as Persistent Memory, Cloud and Interop, and Data Center Infrastructure. Intel’s David Cohen, System Architect, and Brian Hausauer, Hardware Architect, spoke on behalf of IBTA in a pre-conference session and discussed “Nonvolatile Memory (NVM), four trends in the modern data center and implications for the design of next generation distributed storage systems.”

Below is a high level overview of their presentation:

The modern data center continues to transform as applications and uses change and develop. Most recently, we have seen users abandon traditional storage architectures for the cloud. Cloud storage is founded on data-center-wide connectivity and scale-out storage, which delivers significant increases in capacity and performance, enabling application deployment anytime, anywhere. Additionally, job scheduling and system balance capabilities are boosting overall efficiency and optimizing a variety of essential data center functions.

Trends in the modern data center are appearing as cloud architecture takes hold. First, the performance of network bandwidth and storage media is growing rapidly. Furthermore, operating system vendors (OSVs) are optimizing the code paths of their network and storage stacks. All of these speed and efficiency gains in network bandwidth and storage are occurring while single processor/core performance remains relatively flat.

Data comes in a variety of flavors: some is accessed frequently by application I/O requests, while other data is rarely retrieved. To enable higher performance and resource efficiency, cloud storage uses a tiering model that places data according to how often it is accessed. Data that is regularly accessed is stored on expensive, high performance media (solid-state drives). Data that is rarely or never retrieved is relegated to less expensive media with the lowest $/GB (rotational drives). This model follows a Hot, Warm and Cold data pattern and provides the fastest access to the data used most.
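
For readers who want to see the idea in code, here is a minimal sketch of such a tiering policy in C. The tier names, access-count thresholds and sample objects are all illustrative assumptions on my part; real cloud storage systems use far richer heuristics (age, size, cost models and access patterns).

```c
#include <stdio.h>

/* Illustrative tiers following the Hot/Warm/Cold pattern described above. */
enum tier { TIER_HOT, TIER_WARM, TIER_COLD };

/* Hypothetical per-object access statistics. */
struct object_stats {
    const char *name;
    unsigned accesses_last_day;  /* I/O requests in the past 24 hours */
};

/* Assumed thresholds: frequently accessed data stays on high performance
 * media (SSDs); rarely touched data moves to the lowest $/GB media
 * (rotational drives). */
static enum tier classify(const struct object_stats *s)
{
    if (s->accesses_last_day >= 100)
        return TIER_HOT;   /* expensive, high performance media */
    if (s->accesses_last_day >= 1)
        return TIER_WARM;  /* mid-tier media */
    return TIER_COLD;      /* cheapest media, highest access latency */
}

int main(void)
{
    struct object_stats objs[] = {
        { "user-session-db", 5000 },
        { "monthly-report",  3    },
        { "2009-archive",    0    },
    };
    static const char *labels[] = { "HOT", "WARM", "COLD" };

    for (size_t i = 0; i < sizeof(objs) / sizeof(objs[0]); i++)
        printf("%-16s -> %s\n", objs[i].name, labels[classify(&objs[i])]);
    return 0;
}
```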

The growth of high performance storage media is driving the need for innovation in the network, primarily addressing application latency. This is where Remote Direct Memory Access (RDMA) comes into play. RDMA is an advanced, reliable transport protocol that enhances the efficiency of workload processing. Essentially, it increases data center application performance by offloading the movement of data from the CPU. This lowers overhead and allows the CPU to focus its processing power on running applications, which in turn reduces latency.
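
Here’s a minimal sketch of what that CPU offload looks like at the verbs API level, assuming an already-connected queue pair and a registered buffer. It is illustrative only: connection setup, completion polling, error handling and the out-of-band exchange of the peer’s buffer address and rkey are all omitted.

```c
#include <stddef.h>
#include <stdint.h>
#include <infiniband/verbs.h>

/* Post a one-sided RDMA write: the adapter moves 'len' bytes from the
 * local registered buffer directly into remote memory, with no CPU
 * involvement on the remote side. The qp/mr setup and the exchange of
 * remote_addr and rkey (via some out-of-band channel) are assumed. */
static int rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                      void *local_buf, size_t len,
                      uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,
        .length = (uint32_t)len,
        .lkey   = mr->lkey,              /* local key from ibv_reg_mr() */
    };
    struct ibv_send_wr wr = {
        .wr_id      = 1,
        .sg_list    = &sge,
        .num_sge    = 1,
        .opcode     = IBV_WR_RDMA_WRITE,  /* one-sided write */
        .send_flags = IBV_SEND_SIGNALED,  /* request a completion */
        .wr.rdma.remote_addr = remote_addr, /* peer's registered address */
        .wr.rdma.rkey        = rkey,        /* peer's remote key */
    };
    struct ibv_send_wr *bad_wr = NULL;

    /* The CPU only enqueues this descriptor; the adapter performs the
     * actual data movement, which is the offload described above. */
    return ibv_post_send(qp, &wr, &bad_wr);
}
```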

As demand for cloud storage increases, the need for RDMA and high performance storage networking grows as well. With this in mind, the InfiniBand Trade Association is continuing its work developing the RDMA architecture for InfiniBand and Ethernet (via RDMA over Converged Ethernet, or RoCE) topologies.

Bill Lee

Race to Exascale – Nations Vie to Build Fastest Supercomputer

September 28th, 2015

“Discover Supercomputer 3” by NASA Goddard Space Flight Center is licensed under CC BY 2.0

The race between countries to build the fastest, biggest or first of anything is nothing new – think of the race to the moon. One of the current global competitions is focused on supercomputing, specifically the race to Exascale computing, or a billion billion calculations per second. Governments have recently been allocating significant resources toward Exascale initiatives (consider President Obama’s Executive Order and China’s current lead in supercomputing) as they start to understand the technology’s vast potential for a variety of industries, including healthcare, defense and space exploration.

The TOP500 list, which ranks the top supercomputers in the world, will continue to be the scorecard. Currently, the U.S. leads with 233 of the top 500 supercomputers, followed by Europe with 141 and China with 37. However, China’s small portfolio of supercomputers does not make it any less significant a competitor in the supercomputing space: China has held the #1 spot on the TOP500 list for the fifth consecutive time.

When looking to build the supercomputers of the future, there are a number of factors that need to be taken into consideration, including superior application performance, compute scalability and resource efficiency. InfiniBand’s compute offloads and scalability make it extremely attractive to supercomputer architects. Proof of the performance and scalability can be found in places such as the HPC Advisory Council’s library of case studies. InfiniBand makes it possible to achieve near linear performance improvement as more computers are connected to the cluster, as the sketch below illustrates. Since observers of this space expect Exascale systems to require a massive amount of compute hardware, InfiniBand’s scalability looks to be a requirement for achieving this goal.
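
To make “near linear” concrete: scalability is usually measured as parallel speedup and efficiency. The formulas below are the standard definitions; the sample numbers are purely illustrative, not measurements from any particular system.

```latex
% Speedup S and efficiency E for an N-node cluster, where T(N) is the
% time to solve a fixed-size problem on N nodes:
\[
  S(N) = \frac{T(1)}{T(N)}, \qquad E(N) = \frac{S(N)}{N} .
\]
% Near linear scaling means E(N) stays close to 1 as N grows. For
% example (illustrative numbers), T(1) = 1000 h and T(64) = 16.5 h
% give S(64) \approx 60.6 and E(64) \approx 0.95, i.e. 95% efficiency
% on 64 nodes.
```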

As the race to supercomputing speeds up, we expect to see a number of exciting advances in technology as we shift from petaflops to exaflops. To give you an idea of how far we have come and where we are heading, here is a comparison of the speed of the computers that powered the race to space with the goals for Exascale.

Speeds Then vs. Now – Race to Space vs. Race to Supercomputing

  • Computers in the 1960s (Speed of the Race to Space): Hectoscale (hundreds of floating point operations per second)
  • Goal for Computers in 2025 (Speed of the Race to Supercomputing): Exascale (a billion billion, or 10^18, floating point operations per second)

Advances in supercomputing will continue to dominate the news with these two nations making the development of the fastest supercomputer a priority. As November approaches and the new TOP500 list is released, it will be very interesting to see where the rankings lie and what interconnects the respective architects will pick.

Bill Lee

EDR Hits Primetime! Newly Published IBTA Integrators’ List Highlights Growth of EDR

August 20th, 2015

The highly anticipated IBTA April 2015 Combined Cable and Device Integrators’ List is now available for download. The list highlights the results of IBTA Plugfest 27, held at the University of New Hampshire’s Interoperability Lab earlier this year. The updated list consists of newly verified products that are compliant with the InfiniBand specification, as well as details on solution interoperability.

Of particular note was the rise of EDR submissions. At IBTA Plugfest 27, eight companies provided 32 EDR cables for testing, up from three companies and 12 EDR cables at IBTA Plugfest 26. The increase in EDR cable solutions indicates that the technology is beginning to hit its stride. At Plugfest 28 we anticipate even more EDR solutions.

The IBTA is known in the industry for its rigorous testing procedures and subsequent Integrators’ List. The Integrators’ List provides IT professionals with peace of mind when purchasing new components to incorporate into new and existing infrastructure. To ensure the most reliable results, the IBTA uses industry-leading test equipment from Anritsu, Keysight, Molex, Tektronix, Total Phase and Wilder Technologies. We appreciate their commitment to our compliance program; we couldn’t do it without them.

The IBTA hosts its Plugfest twice a year to give members a chance to test new configurations or form factors. Although many technical associations require substantial attendance fees for testing events, the IBTA covers the bulk of Plugfest costs through membership dues.

The companies participating in Plugfest 27 included 3M Company, Advanced Photonics, Inc., Amphenol, FCI, Finisar, Fujikura, Ltd., Fujitsu Component Limited, Lorom America, Luxshare-ICT, Mellanox Technologies, Molex Incorporated, SAE Magnetics, Samtec, Shanghai Net Miles Fiber Technology Co. Ltd, Siemon, Sumitomo, and Volex.

We’ve already begun planning for IBTA Plugfest 28, which will be held October 12-23, 2015. For questions about Plugfest, contact us or visit the Plugfest page for additional information.

Rupert Dance, IBTA CIWG

InfiniBand leads the TOP500, powering more than 50 percent of the world’s supercomputing systems

August 4th, 2015

TOP500 Interconnect Trends

The TOP500 project has released its latest list of the 500 most powerful commercially available computer systems in the world, reporting that InfiniBand powers 257 systems, or 51.4 percent of the list. This marks 15.8 percent year-over-year growth from June 2014.

Demand for higher bandwidth, lower latency and higher message rates, along with the need for application acceleration, is driving continued adoption of InfiniBand in traditional High Performance Computing (HPC) as well as commercial HPC, cloud and enterprise data centers. InfiniBand is the only open-standard I/O that provides the capability required to handle supercomputing’s high demand for CPU cycles without time wasted on I/O transactions.

  • InfiniBand powers the most efficient system on the list, with 98.8% efficiency.
  • EDR (Enhanced Data Rate) InfiniBand delivers 100Gb/s and enters the TOP500 for the first time, powering three systems.
  • FDR (Fourteen Data Rate) InfiniBand at 56Gb/s continues to be the most used technology on the TOP500, connecting 156 systems.
  • InfiniBand connects the most powerful clusters: 33 of the Petascale-performance systems, up from 24 in June 2014.
  • InfiniBand is the leading interconnect for accelerator-based systems, covering 77% of such systems on the list.

Not only is InfiniBand the most used interconnect solution in the world’s 500 most powerful supercomputers, it’s also the leader in the TOP100, which encompasses the top 100 supercomputing systems as ranked in the TOP500. InfiniBand is the natural choice for world-leading supercomputers because of its performance, efficiency and scalability.

The full TOP500 list is available on the TOP500 website.

Bill Lee


IBTA Members to Exhibit at ISC High Performance 2015

July 10th, 2015


The ISC High Performance 2015 conference gets underway this weekend in Frankfurt, Germany, where experts in the high performance computing field will gather to discuss the latest developments and trends driving the industry. Event organizers are expecting over 2,500 attendees at this year’s show, which will feature speakers, presentations, BoF sessions, tutorials and workshops on a variety of topics.

IBTA members will be on hand exhibiting their latest InfiniBand-based HPC solutions. Multiple EDR 100Gb/s InfiniBand products and demonstrations can be seen across the exhibit hall at ISC High Performance at the following member company booths:

  • Applied Micro (Booth #1431)
  • Bull (Booth #1230)
  • HP (Booth #732)
  • IBM (Booth #928)
  • Lenovo (Booth #1020)
  • Mellanox (Booth #905)
  • SGI (Booth #910)

Be sure to stop by each of our member booths and ask about their InfiniBand offerings! For additional details on ISC High Performance 2015 keynotes and sessions, visit its program overview page.

Bill Lee


IBTA Launches the RoCE Initiative: Industry Ecosystem to Drive Adoption of RDMA over Converged Ethernet

June 23rd, 2015


At IBTA, we are pleased to announce the launch of the RoCE Initiative, a new effort to highlight the many benefits of RDMA over Converged Ethernet (RoCE) and to facilitate the technology’s adoption in enterprise data centers. With the rise of server virtualization and big data analytics, data center architects are demanding innovative ways to improve overall network performance and to accelerate applications without breaking the bank in the process.

Remote Direct Memory Access (RDMA) is well known in the InfiniBand community as a proven technology that boosts data center efficiency and performance by allowing the transport of data from storage to server with less CPU overhead. RDMA technology achieves faster speeds and lower latency by offloading data movement from the CPU, resulting in more efficient execution of applications and data transfers.

Before RoCE, the advantages of RDMA were only available over InfiniBand fabrics. This left system engineers who rely on Ethernet infrastructure with only expensive options for increasing system performance (i.e., adding more servers or buying faster CPUs). Now, data center architects can upgrade their application performance while leveraging existing infrastructure. There is already tremendous ecosystem support for RoCE; it is supported by server and storage OEMs, adapter and switch vendors, and all major operating systems.
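
One practical consequence worth spelling out: RoCE is exposed through the same verbs API as InfiniBand, so RDMA code written against libibverbs can run over RoCE-capable Ethernet NICs without a transport-specific code path. A minimal sketch (device names and counts are system-dependent):

```c
#include <stdio.h>
#include <infiniband/verbs.h>

/* Enumerate RDMA-capable devices. On a RoCE deployment, the Ethernet
 * NICs appear in this list and are driven through the very same verbs
 * API used for InfiniBand HCAs, so applications need no changes. */
int main(void)
{
    int num;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list) {
        perror("ibv_get_device_list");
        return 1;
    }
    for (int i = 0; i < num; i++)
        printf("RDMA device %d: %s\n", i, ibv_get_device_name(list[i]));
    ibv_free_device_list(list);
    return 0;
}
```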

Through a new online resource, the RoCE Initiative will:

  • Enable CIOs, enterprise data center architects and solutions engineers to learn about improved application performance and data center productivity through training webinars, whitepapers and educational programs
  • Encourage the adoption and development of RoCE applications with case studies and solution briefs
  • Continue the development of specifications, benchmarking performance improvements and technical resources for current/future RoCE adopters

For additional information about the RoCE Initiative, visit the initiative’s online resource or read the full announcement.

Mike Jochimsen, co-chair of the Marketing Working Group (MWG) at IBTA

To InfiniBand and Beyond – Supercomputing Support for NASA Missions

June 11th, 2015


High performance computing has been integral to solving large-scale problems across many industries, including science, engineering and business. Some of the most interesting use cases have come out of NASA, where supercomputing is essential to conduct accurate simulations and models for a variety of missions.

NASA’s flagship supercomputer, Pleiades, is among the world’s most powerful, currently ranking seventh in the United States and eleventh globally. It is housed at the NASA Advanced Supercomputing (NAS) facility in California and supports the agency’s work in aeronautics, Earth and space science and the future of space travel. At the heart of the system is InfiniBand technology, including DDR, QDR and FDR adapters and cabling.

The incremental expansion of Pleiades’ computing performance has been fundamental to its lasting success. Typically, a computer cluster is fully built from the outset and rarely expanded or upgraded during its lifetime. Built in 2008, Pleiades initially consisted of 64 server racks achieving 393 teraflops with a maximum link speed of 20Gb/s. Today, the supercomputer boasts 160 racks with a theoretical peak performance of 5.35 petaflops, or 5,350 teraflops, and a maximum link speed of 56Gb/s.
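
Working through the arithmetic in those figures shows how much of the gain came from faster nodes and interconnect rather than rack count alone:

```latex
% Peak throughput per rack, computed from the quoted rack counts and peaks:
\[
  \frac{393\ \mathrm{TF}}{64\ \mathrm{racks}} \approx 6.1\ \mathrm{TF/rack}
  \ \ (2008), \qquad
  \frac{5350\ \mathrm{TF}}{160\ \mathrm{racks}} \approx 33.4\ \mathrm{TF/rack}
  \ \ (\text{today}).
\]
% A 2.5x growth in racks thus produced a ~13.6x growth in peak
% performance, i.e. a ~5.4x per-rack improvement.
```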

To further demonstrate the power of the InfiniBand-based Pleiades supercomputer, here are several fun facts to consider:

  • Today’s Pleiades supercomputer delivers more than 25 million times the computational power of the first Cray X-MP supercomputer at the NAS facility in 1984.
  • If every person in the world performed one calculation per second, eight hours per day, it would take 1,592 days to complete one minute of Pleiades’ calculations.
  • The NAS facility has the largest InfiniBand network in the world, with over 65 miles (104.6 km) of cable interconnecting its supercomputing systems and storage devices, roughly the distance from the Earth’s surface to the part of the thermosphere where auroras are formed.

For additional facts about the impact of NASA’s high-end computing capability, check out its website.

Bill Lee

RoCE Benefits on Full Display at Ignite 2015

May 27th, 2015


On May 4-8, IT professionals and enterprise developers gathered in Chicago for the 2015 Microsoft Ignite conference. Attendees were given a first-hand glimpse at the future of a variety of Microsoft business solutions through a number of sessions, presentations and workshops.

Of particular note were two demonstrations of RDMA over Converged Ethernet (RoCE) technology and the resulting benefits for Windows Server 2016. In both demos, RoCE technology showed significant improvements over Ethernet implementations without RDMA in terms of throughput, latency and processor efficiency.

Below is a summary of each presentation featuring RoCE at Ignite 2015:

Platform Vision and Strategy (4 of 7): Storage Overview
This demonstration highlighted the extreme performance and scalability of Windows Server 2016 through RoCE-enabled servers populated with NVMe and SATA SSDs. It simulated application and user workloads using SMB3 servers with Mellanox ConnectX-4 100GbE RDMA-enabled Ethernet adapters, Micron DRAM, enterprise NVMe SSDs for performance and SATA SSDs for capacity.

During the presentation, RoCE and TCP/IP delivered drastically different performance. With RDMA enabled, the SMB3 server achieved about twice the throughput, half the latency and around 33 percent less CPU overhead than it attained with TCP/IP.

Check out the video to see the demonstration in action.

Enabling Private Cloud Storage Using Servers with Local Disks

Claus Joergensen, a principal program manager at Microsoft, demonstrated Windows Server 2016’s Storage Spaces Direct running over Mellanox ConnectX-3 56Gb/s RoCE adapters with Micron RAM and M500DC local SATA storage.

The goal of the demo was to highlight the value of running RoCE on a system as it relates to performance, latency and processor utilization. The system achieved a combined 680,000 4KB IOPS at 2ms latency with RoCE disabled. With RoCE enabled, the system increased to about 1.1 million 4KB IOPS and reduced latency to 1ms. This translates to roughly a 60 percent increase in IOPS with RoCE enabled, all while utilizing the same amount of CPU resources (see the arithmetic below).
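
For reference, here is the arithmetic behind that figure, using only the numbers quoted above:

```latex
% Relative IOPS gain from the quoted measurements:
\[
  \frac{1.1\times 10^{6} - 6.8\times 10^{5}}{6.8\times 10^{5}} \approx 0.62 ,
\]
% i.e. roughly a 60 percent increase in 4KB IOPS, delivered at half
% the latency and the same CPU utilization.
```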

For additional information, watch a recording of the presentation (demonstration starts at 57:00).

For more videos from Ignite 2015, visit Ignite On Demand.

Bill Lee

InfiniBand Volume 1, Release 1.3 – The Industry Sounds Off

May 14th, 2015


On March 10, 2015, IBTA announced the availability of Release 1.3 of Volume 1 of the InfiniBand Architecture Specification, and it’s creating a lot of buzz in the industry. IBTA members recognized that as compute clusters and data centers grew larger and more complex, the network equipment architecture would have difficulty keeping pace with the need for more processing power. With that in mind, the new release included improvements to scalability and management for both high performance computing and enterprise data centers.

Here’s a snapshot of what industry experts and media have said about the new specification:

“Release 1.3 of the Volume 1 InfiniBand Architecture Specification provides several improvements, including deeper visibility into switch hierarchy, improved diagnostics allowing for faster response times to connectivity problems, enhanced network statistics, and added counters for Enhanced Data Rate (EDR) to improve network management. These features will allow network administrators to more easily install, maintain, and optimize very large InfiniBand clusters.” - Kurt Yamamoto, Tom’s IT PRO

“It’s worth keeping up with [InfiniBand], as it clearly shows where the broader networking market is capable of going… Maybe geeky stuff, but it allows [InfiniBand] to keep up with “exascales” of data and lead the way large scale-out computer networking gets done. This is particularly important as the 1000 node clusters of today grow towards the 10,000 node clusters of tomorrow.” - Mike Matchett, Taneja Group, Inc.

“Indeed, a rising tide lifts all boats, and the InfiniBand community does not intend to get caught in the shallows of the Big Data surge. The InfiniBand Trade Association recently issued Release 1.3 of Volume I of the format’s reference architecture, designed to incorporate increased scalability, efficiency, availability and other functions that are becoming central to modern data infrastructure.” - Arthur Cole, Enterprise Networking Planet

“The InfiniBand Trade Association (IBTA) hopes to ward off the risk of an Ethernet invasion in the ranks of HPC users with a renewed focus on manageability and visibility. Such features have just appeared in release 1.3 of the Volume 1 standard. The IBTA’s Bill Lee told The Register that as HPC clusters grow, ‘you want to be able to see every level of switch interconnect, so you can identify choke-points and work around them.’” - Richard Chirgwin, The Register

To read more industry coverage of the new release, visit the InfiniBand in the News page.

For additional information about the InfiniBand specification, check out the InfiniBand specification FAQ or access the InfiniBand specification here.

Bill Lee