Archive

Posts Tagged ‘OpenFabrics Alliance’

Life in the Fast Lane: InfiniBand Continues to Reign as HPC Interconnect of Choice

July 8th, 2016


TOP500.org recently released its latest ranking of the world’s most powerful supercomputers and, as with previous reports, InfiniBand leads the way. The 47th edition of the twice-yearly list shows that 205 of the fastest commercially available systems are accelerated by InfiniBand and OpenFabrics Software (OFS).

The InfiniBand fabric, with the OFS open source software, is the High Performance Computing (HPC) interconnect of choice because it delivers a distinctive combination of superior performance, efficiency, scalability and low latency. InfiniBand is the only open-standard I/O that provides the capability required to handle supercomputing’s high demand for CPU cycles without time wasted on I/O transactions. With today’s supercomputers pushing nearly 100 petaflops on the LINPACK benchmark, the need for efficient, low latency performance is higher than ever.

High Marks for InfiniBand and OFS

  • InfiniBand and OFS systems outperformed competing technologies in overall efficiency, scoring an 85 percent list average for compute efficiency (the ratio of measured LINPACK performance to theoretical peak), with one system even reaching an incredible 99.8 percent.
  • The technologies enable 70 percent of the HPC system segment. This segment includes academic, research and government fields.
  • For supercomputers capable of Petascale performance, the number of InfiniBand and OFS systems grew from 33 to 45.

InfiniBand’s ability to carry multiple traffic types over a single connection makes it ideal for clustering, communications, storage and management. As a result, the interconnect technology is used in thousands of data centers, HPC clusters, storage systems and embedded applications that scale from two nodes up to clusters of tens of thousands of nodes. Supercomputers powered by OFS reach their highest performance capacity through the speed and efficiency delivered by Remote Direct Memory Access (RDMA). In turn, OFS enables RDMA fabrics, such as InfiniBand, to run applications that require extreme speeds, Petascale-level scalability and utility-class reliability.
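
Applications reach those RDMA capabilities through the OFS verbs interface (libibverbs). As a rough, hedged illustration of what that looks like in practice, and not code taken from any TOP500 system, the sketch below opens the first RDMA device found on a host and registers a buffer so the adapter can access it directly; error handling is trimmed, and the build line in the comment is an assumption.

```c
/* Minimal sketch (illustrative only): enumerate RDMA devices via the OFS
 * verbs API (libibverbs) and register a buffer so the HCA can access it
 * directly, the basis of the zero-copy transfers described above.
 * Build (assumption): gcc rdma_sketch.c -libverbs
 */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices || num_devices == 0) {
        fprintf(stderr, "No RDMA devices found\n");
        return 1;
    }
    printf("Found %d RDMA device(s); using %s\n",
           num_devices, ibv_get_device_name(devices[0]));

    struct ibv_context *ctx = ibv_open_device(devices[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register 4 KiB so the adapter can read/write it without CPU involvement.
     * Error checks on ctx/pd/mr are omitted for brevity. */
    void *buf = calloc(1, 4096);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, 4096,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    printf("Registered MR: lkey=0x%x rkey=0x%x\n", mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devices);
    return 0;
}
```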

Check out the full list at www.top500.org.

Bill Lee

InfiniBand Experts Discuss Latest Trends and Opportunities at OFA Workshop 2016

May 24th, 2016


Each year, OpenFabrics Software (OFS) users and developers gather at the OpenFabrics Alliance (OFA) Workshop to discuss and tackle the most recent challenges facing the high performance storage and networking industry. OFS is open-source software that enables maximum application efficiency and performance, agnostic of the underlying RDMA fabric, including InfiniBand and RDMA over Converged Ethernet (RoCE). The work of the OFA supports mission-critical applications in High Performance Computing (HPC) and enterprise data centers, but is also quickly becoming significant in the cloud and hyper-converged markets.
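
The fabric-agnostic behavior described above is typically reached through the librdmacm connection manager that ships with OFS, which resolves an ordinary IP address to whichever local RDMA device, InfiniBand or RoCE, can reach it. The sketch below is a minimal illustration under that assumption, not material from the workshop; the peer address and port are placeholders.

```c
/* Minimal sketch (illustrative, not from the workshop): use librdmacm to
 * resolve an IP address to an RDMA device. The same code path works whether
 * the underlying fabric is InfiniBand or RoCE. Link (assumption): -lrdmacm -libverbs */
#include <stdio.h>
#include <netdb.h>
#include <rdma/rdma_cma.h>

int main(void)
{
    struct rdma_event_channel *ec = rdma_create_event_channel();
    struct rdma_cm_id *id = NULL;
    struct rdma_cm_event *event = NULL;
    struct addrinfo *ai = NULL;

    rdma_create_id(ec, &id, NULL, RDMA_PS_TCP);
    getaddrinfo("192.0.2.10", "7471", NULL, &ai);    /* placeholder peer */
    rdma_resolve_addr(id, NULL, ai->ai_addr, 2000);  /* 2 s timeout */

    /* Wait for the address-resolved event; id->verbs then points at the
     * local RDMA device (IB or RoCE) that reaches the destination. */
    if (rdma_get_cm_event(ec, &event) == 0 &&
        event->event == RDMA_CM_EVENT_ADDR_RESOLVED) {
        printf("Resolved via device %s\n",
               ibv_get_device_name(id->verbs->device));
        rdma_ack_cm_event(event);
    }

    freeaddrinfo(ai);
    rdma_destroy_id(id);
    rdma_destroy_event_channel(ec);
    return 0;
}
```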

In our previous blog post, we showcased an IBTA-sponsored session that provided an update on InfiniBand virtualization support. In addition to the virtualization update, there were a handful of other notable sessions that highlighted the latest InfiniBand developments, case studies and tutorials. Below is a collection of InfiniBand-focused sessions that we recommend you check out:

InfiniBand as Core Network in an Exchange Application
Ralph Barth, Deutsche Börse AG; Joachim Stenzel, Deutsche Börse AG

Group Deutsche Börse is a global financial services organization covering the entire value chain from trading, market data, clearing and settlement to custody. While reliability has been a fundamental requirement for exchanges since the introduction of electronic trading systems in the 1990s, for roughly the last decade low and predictable latency across the entire system has also become a major design objective. Both were important architectural considerations when Deutsche Börse began developing T7, an entirely new derivatives trading system, for its US options market (ISE) in 2008. A combination of InfiniBand and IBM® WebSphere® MQ Low Latency Messaging (WLLM) was determined to be the best fit at the time. Since then the same system has been adopted for EUREX, one of the largest derivatives exchanges in the world, and is now also being extended to cover cash markets. The session presents the design of the application and its interdependence with the combination of InfiniBand and WLLM, and reflects on practical experiences with InfiniBand over the last couple of years.

Download: Slides / Video


Experiences in Writing OFED Software for a New InfiniBand HCA
Knut Omang, Oracle

This talk presents experiences, challenges and opportunities from the perspective of the lead developer in initiating and developing OFED stack support (kernel and user space driver) for Oracle’s InfiniBand HCA, integrated in the new SPARC Sonoma SoC CPU. In addition to the physical HCA function, SR-IOV is supported, with vHCAs visible to the interconnect as if connected to virtual switches. Individual driver instances for the vHCAs maintain page tables set up for the HCA’s MMU, covering memory accessible from the HCA. The HCA is designed to scale to a large number of QPs. For minimal overhead and maximal flexibility, administrative operations such as memory invalidations also use an asynchronous work request model similar to that of normal InfiniBand traffic.
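
The asynchronous work request model mentioned in the abstract is the same one the verbs API exposes to applications. Purely as a hedged illustration of that model, and not code from the talk or from Oracle’s driver, the fragment below posts a one-sided RDMA WRITE work request to an already-connected queue pair; the queue pair, memory region, buffer and peer address/rkey are assumed to have been set up elsewhere.

```c
/* Illustrative sketch of the verbs work request model (not from the talk):
 * post a one-sided RDMA WRITE on an already-connected RC queue pair.
 * Connection setup and completion polling are omitted for brevity. */
#include <stdint.h>
#include <infiniband/verbs.h>

static int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                           void *buf, uint32_t len,
                           uint64_t remote_addr, uint32_t remote_rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t) buf,   /* local buffer, must lie inside mr */
        .length = len,
        .lkey   = mr->lkey,
    };
    struct ibv_send_wr wr = {
        .wr_id      = 1,                  /* echoed back in the completion */
        .sg_list    = &sge,
        .num_sge    = 1,
        .opcode     = IBV_WR_RDMA_WRITE,  /* one-sided write to the peer */
        .send_flags = IBV_SEND_SIGNALED,  /* request a completion entry */
    };
    wr.wr.rdma.remote_addr = remote_addr; /* peer virtual address */
    wr.wr.rdma.rkey        = remote_rkey; /* peer memory key */

    struct ibv_send_wr *bad_wr = NULL;
    /* The call returns immediately; the completion arrives asynchronously
     * on the QP's completion queue, which is the model referred to above. */
    return ibv_post_send(qp, &wr, &bad_wr);
}
```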

Download: Slides / Video

Fabrics and Topologies for Directly Attached Parallel File Systems and Storage Networks
Susan Coulter, Los Alamos National Laboratory

InfiniBand fabrics supporting directly attached storage systems are designed to handle unique traffic patterns, and they contain different stress points than other fabrics. These SAN fabrics are often expected to be extensible in order to allow for expansion of existing file systems and the addition of new file systems. The character and lifetime of these fabrics are distinct from those of internal compute fabrics or multi-purpose fabrics. This presentation covers the approach to InfiniBand SAN design and deployment as experienced by the High Performance Computing effort at Los Alamos National Laboratory.

Download: Slides / Video


InfiniBand Topologies and Routing in the Real World
Susan Coulter, Los Alamos National Laboratory; Jesse Martinez, Los Alamos National Laboratory

As with all sophisticated and multifaceted technologies, designing, deploying and maintaining high-speed networks and topologies in a production environment and/or at larger scales can be unwieldy, and their behavior can be surprising. This presentation illustrates that fact via a case study from an actual fabric deployed at Los Alamos National Laboratory.

Download: Slides / Video


InfiniBand Routers Premier
Mark Bloch, Mellanox Technologies; Liran Liss, Mellanox Technologies

InfiniBand has come a long way in providing efficient large-scale high performance connectivity. InfiniBand subnets have been shown to scale to tens of thousands of nodes, both in raw capacity and in management. As demand for computing capacity increases, future cluster sizes might exceed the number of addressable endpoints in a single IB subnet (around 40K nodes). To accommodate such clusters, a routing layer with the same latency and bandwidth characteristics as switches is required.

In addition, as data center deployments evolve, it becomes beneficial to consolidate resources across multiple clusters. For example, several compute clusters might require access to a common storage infrastructure. Routers can enable such connectivity while reducing management complexity and isolating intra-subnet faults. The bandwidth capacity to storage may be provisioned as needed.

This session reviews InfiniBand routing operation and how it can be used in the future. Specifically, we will cover topology considerations, subnet management issues, name resolution and addressing, and potential implications for the host software stack and applications.
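
For context on the addressing point above, and only as a hedged aside rather than session content: endpoints within a subnet are addressed by 16-bit LIDs, which is what limits a single subnet to tens of thousands of endpoints, while traffic that crosses a router carries a Global Route Header addressed by 128-bit GIDs. In verbs terms this shows up when building an address handle, as in the sketch below; the port number, GID index and other values are placeholder assumptions.

```c
/* Hedged sketch (not from the session): build an address handle whose packets
 * carry a Global Route Header (GRH). Within one subnet a 16-bit LID suffices;
 * crossing an InfiniBand router requires GID-based (128-bit) addressing,
 * which is what .is_global = 1 and the grh fields request. */
#include <stdint.h>
#include <infiniband/verbs.h>

struct ibv_ah *make_routed_ah(struct ibv_pd *pd, union ibv_gid dest_gid,
                              uint16_t dest_lid)
{
    struct ibv_ah_attr attr = {
        .dlid          = dest_lid,   /* in-subnet LID of the first hop/router */
        .sl            = 0,
        .src_path_bits = 0,
        .port_num      = 1,          /* local HCA port (assumption) */
        .is_global     = 1,          /* attach a GRH for inter-subnet routing */
        .grh = {
            .dgid       = dest_gid,  /* 128-bit GID of the remote endpoint */
            .sgid_index = 0,         /* local GID table index (assumption) */
            .hop_limit  = 64,
        },
    };
    return ibv_create_ah(pd, &attr);
}
```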

Download: Slides

Bill Lee


OpenFabrics Software Users and Developers Receive InfiniBand Virtualization Update at the 2016 OFA Workshop

April 26th, 2016


The InfiniBand architecture is a proven network interconnect standard that provides benefits in bandwidth, efficiency and latency, while also boasting an extensive roadmap of future performance increases. Initially adopted by the High Performance Computing industry, InfiniBand is now sought out by a growing number of enterprise data centers that demand the performance capabilities it has to offer. InfiniBand data center use cases vary widely, ranging from physical network foundations transporting compute and storage traffic to enabling Platform-as-a-Service (PaaS) offerings at cloud service providers.

Today’s enterprise data center and cloud environments are also seeing an increased use of virtualized workloads. Using virtualized servers allows data center managers to create a common shared pool of resources from a single host. Virtualization support in the Channel Adapter enables different software entities to interact independently with the fabric. This effectively creates an efficient service-centric computing model capable of dynamic resource utilization and scalable performance, while reducing overhead costs.

Earlier this month at the OpenFabrics Alliance (OFA) Workshop 2016 in Monterey, CA, Liran Liss of member company Mellanox Technologies provided an update on the IBTA’s ongoing work to standardize InfiniBand virtualization support. He explained that the IBTA Management Working Group’s goals include making the InfiniBand Virtualization Annex scalable, explicit, backward compatible and, above all, simple in both implementation and management. Liss specifically covered the concepts of InfiniBand Virtualization, and its manifestation in the host software stack, subnet management and monitoring tools.

The IBTA effort to support virtualization is nearing completion as the annex enters its final review period from other working groups. If you were unable to attend the OFA Workshop 2016 and would like to learn more about InfiniBand virtualization, download the official slides or watch a video of the presentation via insideHPC.

Bill Lee

Visit the IBTA and OFA at SC13!

November 13th, 2013

Attending SC13? The IBTA will be teaming up once again with the OpenFabrics Alliance (OFA) to participate in a number of conference activities. The organizations will be exhibiting together at booth #4132 – stop by for access to:

  • Hands-on computing cluster demonstrations
  • IBTA cable compliance demonstration
  • IBTA & OFA member company exhibition map and SC13 news
  • Current and prospective member information
  • Information regarding OFA’s 2014 User Day and Developer Workshop

IBTA and OFA will also lead the discussion on the future of I/O architectures for improved application performance and efficiency during several technical sessions:

  • “RDMA: Scaling the I/O Architecture for Future Applications,” an IBTA-moderated session, will discuss what new approaches to I/O architecture could be used to meet Exascale requirements. The session will be moderated by IBTA’s Bill Boas and will feature a discussion between top users of RDMA. The panel session will take place on Wednesday, November 20 from 1:30 p.m. to 3:00 p.m. in room 301/302/303.

  • “Accelerating Improvements in HPC Application I/O Performance and Efficiency,” an OFA Emerging Technologies exhibit, will present to attendees ideas on how incorporating a new framework of I/O APIs may increase performance and efficiency for applications. This Emerging Technologies exhibit will take place at booth #3547. The OFA will also be giving a short talk on this subject in the Emerging Technologies theatre at booth #3947 on Tuesday, November 19 at 2:50 p.m.

  • OFA member company representatives will further develop ideas discussed in its Emerging Technologies exhibit during the Birds of a Feather (BoF) session entitled, “Discussing an I/O Framework to Accelerate Improvements in Application I/O Performance.” Moderators Paul Grun of Cray and Sean Hefty of Intel will lead the discussion on how developers and end-users can enhance and further encourage the growth of open source I/O software.

Bill Lee

Chair, Marketing Working Group (MWG)

InfiniBand Trade Association

Observations from SC12

December 3rd, 2012

The week of SC12 went by quickly and resulted in many interesting discussions around supercomputing and its role in both HPC environments and enterprise data centers. Now that we’re back to work, we’d like to reflect on a successful event. The conference this year drew a diverse group of attendees from many countries, with major participation from top universities, which appear to be on the leading edge of Remote Direct Memory Access (RDMA) and InfiniBand deployments.

Overall, we saw InfiniBand and OpenFabrics technologies continue their strong presence at the conference. InfiniBand dominated the Top500 list and is still the #1 interconnect of choice for the world’s fastest supercomputers. The Top500 list also demonstrated that InfiniBand is leading the way to efficient computing, which benefits not only high performance computing but enterprise data center environments as well.

We also engaged in several discussions around RDMA. Attendees, and analysts in particular, were interested in new products using RDMA over Converged Ethernet (RoCE) and their availability, and were impressed that Microsoft Windows Server 2012 natively supports all three RDMA transports, including InfiniBand and RoCE. Another interesting development is InfiniBand customer Microsoft Windows Azure, whose increased efficiency placed it at #165 on the Top500 list.

IBTA & OFA Booth at SC12

IBTA’s Electro-Mechanical Working Group Chair, Alan Benner, discussing the new InfiniBand specification with attendees at the IBTA & OFA SC12 booth

IBTA’s release of the new InfiniBand Architecture Specification 1.3 generated a lot of buzz among attendees, press and analysts. IBTA’s Electro-Mechanical Working Group Chair, Alan Benner, was one of our experts at the booth and drew a large crowd of people interested in the InfiniBand roadmap and his projections around the availability of the next specification, which is expected to include EDR and become available in draft form in April 2013.

SC12 provided a great opportunity for those in high performance computing to connect in person and engage in discussions around hot industry topics; this year the focus was on Software Defined Networking (SDN), OpenSM, and the pioneering efforts of both IBTA and OFA. We enjoyed conversations with the exhibitors and attendees who visited our booth. A special thank you to all of the RDMA experts who participated in our booth session: Bill Boas, Cray; Katharine Schmidtke, Finisar; Alan Benner, IBM; Todd Wilde, Mellanox; Rupert Dance, Software Forge; Kevin Moran, System Fabric Works; and Josh Simons, VMware.

Rupert Dance, Software Forge

ISC’11 Highlights: ISCnet to Feature FDR InfiniBand

June 13th, 2011

ISC’11, taking place in Hamburg, Germany from June 19-23, will include major new product introductions and groundbreaking talks from users worldwide. We are happy to call your attention to the fact that this year’s conference will feature the world’s first large-scale demonstration of next-generation FDR InfiniBand technology.

With link speeds of 56Gb/s, FDR InfiniBand uses the latest version of the OpenFabrics Enterprise Distribution (OFED™) and delivers an increase of nearly 80% in data rate compared to previous InfiniBand generations.

Running on the ISC’11 network “ISCnet,” the multi-vendor, FDR 56Gb/s InfiniBand demo will provide exhibitors with fast interconnect connectivity between their booths on the show floor and enable them to demonstrate a wide variety of applications and experimental HPC applications, as well as new developments and products.

The demo will also continue to show the processing efficiency of RDMA and the microsecond latency of OFED, which reduces server costs, increases productivity and improves customers’ ROI. ISCnet will be the fastest open commodity network demonstration assembled to date.

If you are heading to the conference, be sure to visit the booths of IBTA and OFA members who are exhibiting, as well as the many users of InfiniBand and OFED. The last TOP500 list (published in November 2010) showed that nearly half of the most powerful computers in the world are using these technologies.

InfiniBand Trade Association Members Exhibiting

  • Bull - booth 410
  • Fujitsu Limited - booth 620
  • HP - booth 430
  • IBM - booth 231
  • Intel - booths 530+801
  • Lawrence Livermore National Laboratory - booth 143
  • LSI - booth 341
  • Leoni - booth 842
  • Mellanox - booth 331
  • Molex - booth 730 Co-Exhibitor of Stordis
  • NetApp - booth 743
  • Obsidian - booth 151
  • QLogic - booth 240
  • SGI - booth 330

OpenFabrics Alliance Members Exhibiting

  • AMD - booth 752
  • APPRO - booth 751 Co-Exhibitor of AMD
  • Chelsio - booth 702
  • Cray - booth 650
  • DataDirect Networks - booth 550
  • HP - booth 430
  • IBM - booth 231
  • Intel - booths 530+801
  • Lawrence Livermore National Laboratory - booth 143
  • LSI - booth 341
  • Mellanox - booth 331
  • Microsoft - booth 832
  • NetApp - booth 743
  • Obsidian - booth 151
  • QLogic - booth 240
  • SGI - booth 330

Also, don’t miss the HPC Advisory Council Workshop on June 19 at ISC’11 that includes talks on the following hot topics related to InfiniBand and OFED:

  • GPU Access and Acceleration
  • MPI Optimizations and Futures
  • Fat-Tree Routing
  • Improved Congestion Management
  • Shared Memory models
  • Lustre Release Update and Roadmap 

Go to http://www.hpcadvisorycouncil.com/events/2011/european_workshop/index.php to learn more and register.

For those who are InfiniBand, OFED and Lustre users, don’t miss the important announcement and breakfast on June 22 regarding the coming together of the worldwide Lustre vendor and user community. This announcement is focused on ensuring the continued development, upcoming releases and consolidated support for Lustre; details and location will be available on site at ISC’11. This is your opportunity to meet with representatives of OpenSFS, HPCFS and EOFS to learn how the whole community is working together.

Looking forward to seeing several of you at the show!

Brian Sparks
IBTA & OFA Marketing Working Groups Co-Chair
HPC Advisory Council Media Relations

January Course: Writing Application Programs for RDMA using OFA Software

December 16th, 2010

As part of its new training initiative, the OpenFabrics Alliance (OFA) is holding a “Writing Application Programs for RDMA using OFA Software” class this January 19-20, 2011 at the University of New Hampshire’s InterOperability Lab (UNH-IOL). If you are an application developer skilled in C programming and familiar with sockets, but with little or no experience programming with OpenFabrics Software, this class is the perfect opportunity to develop your RDMA expertise.

“Writing Application Programs for RDMA using OFA Software” immediately prepares you for writing application programs using RDMA. The class includes 8 hours of classroom work and 8 hours in the lab on Wednesday and Thursday, January 19 and 20. Attendees enrolled by Dec. 24 will receive a FREE pass and rentals to Loon Mountain for skiing on Friday, January 21.

Software Forge is a member of the IBTA and is helping drive this very first RDMA class. More information is available at www.openfabrics.org/training. Feel free to contact me with questions as well.


Regards,

Rupert Dance

rsdance@soft-forge.com

Member, IBTA’s Compliance and Interoperability Working Group