Archive

Posts Tagged ‘RDMA’

Broadcom Joins the InfiniBand Trade Association

August 19th, 2014

The IBTA is pleased to announce that Broadcom has become an official member company.

Broadcom, which develops end-to-end solutions for data center architectures including Ethernet switches, controllers and other devices, joined the IBTA to assist the industry in its adoption of Remote Direct Memory Access (RDMA) enabled Ethernet networks and to help the association in its ongoing development of the RDMA over Converged Ethernet (RoCE) specification.

“Broadcom believes that RDMA is one of several interesting technologies that have promise to simplify and accelerate applications using more intelligence in the network,” said Nick Ilyadis, Vice President & CTO, Infrastructure and Networking Group at Broadcom. “Enabling use of RDMA over Ethernet networks, as RoCE does, provides a more comprehensive and converged solution for Ethernet-based data center fabric topologies.”

Broadcom is considering adding RDMA support to a variety of Ethernet products and anticipates that its experience with Ethernet and Internet Protocol (IP) standards development will help in the ongoing work of generating and maintaining a well-engineered, interoperable specification for use by all IBTA members.

Broadcom’s main interest as part of the IBTA is to contribute to the ongoing development of the RoCE specification as part of the IBXoE Working Group. Broadcom is also a member of the IBTA Technical and Marketing Working Groups.

Broadcom has found a welcoming environment within the IBTA.  “The association is made up of a very collaborative set of companies and technologists,” Nick Ilyadis said.


Broadcom Corporation (NASDAQ: BRCM) is a global FORTUNE 500® company, providing semiconductor solutions for wired and wireless communications. Broadcom® system-on-a-chip solutions deliver voice, video, data and multimedia connectivity in the home, office and mobile environments.  For more information about Broadcom, please visit www.broadcom.com.


Bill Lee

Emulex Joins the InfiniBand Trade Association

October 30th, 2013

Emulex is proud to be a new member of the IBTA.  The IBTA has a great history of furthering the InfiniBand and RDMA over Converged Ethernet (RoCE) specifications, developing an active solution ecosystem and building market momentum around technologies with strong value propositions.  We are excited to be a voting member on the organization’s Steering Committee and look forward to contributing to relevant technical and marketing working groups, as well as participating in IBTA-sponsored Plugfests and other interoperability activities.

Why the IBTA?

Through our experience in building high-performance, large-scale, mission-critical networks, we understand the benefits of RDMA.  Since its original implementation as part of InfiniBand back in 1999, RDMA has built a well-proven track record of delivering better application performance, reduced CPU overhead and increased bandwidth efficiency in demanding computing environments.

Due to increased adoption of technologies such as cloud computing, big data analytics, virtualization and mobile computing, more and more commercial IT infrastructures are starting to run into the same challenges that supercomputing was forced to confront around performance bottlenecks, resource utilization and moving large data sets.  In other words, data centers supporting the Fortune 2000, vertical applications for media and entertainment or life sciences, telecommunications and cloud service providers are starting to look a lot more like the data centers at research institutions or the systems on the TOP500.  Thus, with the advent of RoCE, we see an opportunity to bring the benefits of RDMA that have been well-proven in high performance computing (HPC) markets to mainstream commercial markets.

RoCE is a key enabling technology for converging data center infrastructure and enabling application-centric I/O across a broad spectrum of requirements.  From supercomputing to shared drives, RDMA delivers broad-based benefits for fundamentally more efficient network communications.  Emulex has a data center vision to connect, monitor and manage on a single unified fabric.  We are looking forward to supporting the IBTA, contributing to the advancement of RoCE and helping to bring RDMA to mainstream markets.

Jon Affeld, senior director, marketing alliances at Emulex

IBTA Updates Integrators’ List Following PF23 Compliance & Interoperability Testing

September 25th, 2013

We’re proud to announce the availability of the IBTA April 2013 Combined Cable and Device Integrators’ List – a compilation of results from the IBTA Plugfest 23, during which we conducted the first-ever Enhanced Data Rate (EDR) 100Gb/s InfiniBand standard compliance testing.

IBTA’s updated Integrators’ List and our newest Plugfest testing are a testament to the IBTA’s commitment to advancing InfiniBand technology and ensuring its interoperability, as all cables and devices on the list successfully passed the required compliance tests and interoperability procedures. Listed vendors may now access the IBTA Integrators’ List promotional materials and a special marketing program for their products.

Plugfest 23 was a huge success, attracting top manufacturers, and it would not have been possible without testing equipment donated by the following vendors: Agilent Technologies, Anritsu, Molex, Tektronix and Wilder Technologies. We are thrilled with the level of participation and the caliber of technology manufacturers who came out and supported the IBTA.

The updated Integrators’ List is a tool used by the IBTA to assure vendors’ customers and end-users that manufacturers have met the mark of compliance and interoperability. It is also a method for furthering the InfiniBand specification. The Integrators’ List is published every spring and fall following the bi-annual Plugfest and serves to assist IT professionals, including data center managers and CIOs, with their planned deployments of InfiniBand solutions.

We’ve already begun preparations for Plugfest 24, which will take place October 7-18, 2013 at the University of New Hampshire’s Interoperability Laboratory. For more information, or to register for Plugfest 24, please visit the IBTA Plugfest website.

If you have any questions related to IBTA membership or Integrators’ List, please visit the IBTA website: http://www.infinibandta.org/, or email us: ibta_plugfest@soft-forge.com.

Rupert Dance, IBTA CIWG

Observations from SC12

December 3rd, 2012

The week of Supercomputing went by quickly and resulted in many interesting discussions around supercomputing and its role in both HPC environments and enterprise data centers. Now that we’re back to work, we’d like to reflect back on the successful supercomputing event. The conference this year saw a huge diversity of attendees from various countries, with major participation from top universities, which seemed to be on the leading edge of Remote Direct Memory Access (RDMA) and InfiniBand deployments.

Overall, we saw InfiniBand and Open Fabrics technologies continue their strong presence at the conference. InfiniBand dominated the Top500 list and is still the #1 interconnect of choice for the world’s fastest supercomputers. The Top500 list also demonstrated that InfiniBand is leading the way to efficient computing, which not only benefits high performance computing, but enterprise data center environments as well.

We also engaged in several discussions around RDMA. Attendees, and analysts in particular, were interested in the availability of new products using RDMA over Converged Ethernet (RoCE), and were impressed that Microsoft Windows Server 2012 natively supports all three RDMA transports: InfiniBand, RoCE and iWARP. Another interesting development is InfiniBand customer Microsoft Windows Azure, whose increased efficiency placed it at #165 on the Top500 list.

IBTA & OFA Booth at SC12

IBTA’s Electro-Mechanical Working Group Chair, Alan Benner, discussing the new InfiniBand specification with attendees at the IBTA & OFA SC12 booth

IBTA’s release of the new InfiniBand Architecture Specification Volume 2 Release 1.3 generated a lot of buzz among attendees, press and analysts. IBTA’s Electro-Mechanical Working Group Chair, Alan Benner, was one of our experts at the booth and drew a large crowd of people interested in the InfiniBand roadmap and his projections on the availability of the next specification, which is expected to include EDR and become available in draft form in April 2013.

SC12 provides a great opportunity for those in high performance computing to connect in person and engage in discussions around hot industry topics; this year the focus was on Software Defined Networking (SDN), OpenSM, and the pioneering efforts of both the IBTA and OFA. We enjoyed conversations with the exhibitors and attendees who visited our booth. A special thank you to all of the RDMA experts who participated in our booth sessions: Bill Boas, Cray; Katharine Schmidtke, Finisar; Alan Benner, IBM; Todd Wilde, Mellanox; Rupert Dance, Software Forge; Kevin Moran, System Fabric Works; and Josh Simons, VMware.

Rupert Dance, Software Forge

IBTA & OFA Join Forces at SC12

November 7th, 2012

Attending SC12? Check out OFA’s Exascale and Big Data I/O panel discussion and stop by the IBTA/OFA booth to meet our industry experts

The IBTA is gearing up for the annual SC12 conference taking place November 10-16 at the Salt Palace Convention Center in Salt Lake City, Utah. We will be joining forces with the OpenFabrics Alliance (OFA) on a number of conference activities and will be exhibiting together at SC12 booth #3630.

IBTA members will participate in the OFA-moderated panel, Exascale and Big Data I/O, which we highly recommend attending if you’re at the conference.  The panel session, moderated by IBTA and OFA member Bill Boas, takes place Wednesday, November 14 at 1:30 p.m. Mountain Time and will discuss drivers for future I/O architectures.

Also be sure to stop by the IBTA and OFA booth #3630 to chat with industry experts regarding a wide range of industry topics, including:

· Behind the IBTA Integrators’ List

· High speed optical connectivity

· Building and validating OFA software

· Achieving low latency with RDMA in virtualized cloud environments

· UNH-IOL hardware testing and interoperability capabilities

· Utilizing high-speed interconnects for HPC

· Release 1.3 of IBA Vol2

· Peering into a live OFS cluster

· RoCE in Wide Area Networks

· OpenFabrics for high speed SAN and NAS

Experts including Katharine Schmidtke, Finisar; Alan Benner, IBM; Todd Wilde, Mellanox; Rupert Dance, Software Forge; Bill Boas and Kevin Moran, System Fabric Works; and Josh Simons, VMware, will be in the booth to answer your questions and discuss topics currently affecting the HPC community.

Be sure to check the SC12 website to learn more about Supercomputing 2012, and stay tuned to the IBTA website and Twitter to follow IBTA’s plans and activities at SC12.

See you there!

NVIDIA GPUDirect Technology – InfiniBand RDMA for Accelerating GPU-Based Systems

May 11th, 2011

As a member of the IBTA and chairman of the HPC Advisory Council, I wanted to share some information on the important role of InfiniBand in the emerging hybrid (CPU-GPU) clustering architectures.

The rapid increase in the performance of graphics hardware, coupled with recent improvements in its programmability, has made graphics accelerators a compelling platform for computationally demanding tasks in a wide variety of application domains. Due to the great computational power of the GPU, the GPGPU method has proven valuable in various areas of science and technology, and the hybrid CPU-GPU architecture is seeing increased adoption.

GPU-based clusters are being used to perform compute-intensive tasks like finite element computations, Computational Fluid Dynamics, Monte Carlo simulations, etc. Several of the world-leading InfiniBand supercomputers use GPUs in order to achieve the desired performance. Since GPUs provide a very high core count and floating-point capability, a high-speed networking interconnect such as InfiniBand is required to provide the needed throughput and the lowest latency for GPU-to-GPU communications. As such, InfiniBand has become the preferred interconnect solution for hybrid GPU-CPU systems.

While GPUs have been shown to provide worthwhile performance acceleration, yielding benefits to both price/performance and power/performance, several areas of GPU-based clusters could be improved in order to provide higher performance and efficiency. One issue with deploying clusters consisting of multi-GPU nodes involves the interaction between the GPU and the high-speed InfiniBand network, in particular the way GPUs use the network to transfer data between them. Before the NVIDIA GPUDirect technology, a performance issue existed between the user-mode DMA mechanisms used by GPU devices and InfiniBand RDMA: there was no software/hardware mechanism for “pinning” pages of virtual memory to physical pages that could be shared by both the GPU devices and the networking devices.

The new hardware/software mechanism called GPUDirect eliminates the need for the CPU to be involved in the data movement; it not only enables higher GPU-based cluster efficiency, but also paves the way for the creation of “floating point services.” GPUDirect is based on a new interface between the GPU and the InfiniBand device that enables both devices to share pinned memory buffers. Data written by a GPU to host memory can therefore be sent immediately by the InfiniBand device (using RDMA semantics) to a remote GPU, much faster than before.

As a result, GPU communication can now utilize the low-latency and zero-copy advantages of the InfiniBand RDMA transport for higher application performance and efficiency. InfiniBand RDMA connects remote GPUs with latency characteristics that make it seem as if all of the GPUs are on the same platform. Examples of the performance benefits and more information on GPUDirect can be found at http://www.hpcadvisorycouncil.com/subgroups_hpc_gpu.php.
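To make the pinning step concrete, here is a minimal sketch of how an application registers, or pins, a host buffer with the InfiniBand adapter through the OFA verbs API, so the adapter can DMA to and from it directly; with GPUDirect, a buffer the GPU writes to host memory can be shared through the same kind of registration. This is an illustrative sketch only, not NVIDIA's implementation: device selection is naive and error handling is trimmed.

    /* Minimal sketch: registering ("pinning") a host buffer for RDMA
     * with the OFA verbs API (libibverbs). Illustrative only; the
     * first device is used blindly and error handling is trimmed. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices;
        struct ibv_device **devs = ibv_get_device_list(&num_devices);
        if (!devs || num_devices == 0) {
            fprintf(stderr, "no RDMA devices found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        /* Registration locks the buffer's pages into physical memory
         * and returns keys that local and remote work requests use.
         * This is the pinning discussed above: once registered, the
         * adapter can DMA the buffer without CPU involvement. */
        size_t len = 4096;
        void *buf = malloc(len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);

        printf("pinned %zu bytes, lkey=0x%x rkey=0x%x\n",
               len, mr->lkey, mr->rkey);

        ibv_dereg_mr(mr);
        free(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }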


Gilad Shainer

Member of the IBTA and chairman of the HPC Advisory Council

January Course: Writing Application Programs for RDMA using OFA Software

December 16th, 2010

As part of its new training initiative, the OpenFabrics Alliance (OFA) is holding a “Writing Application Programs for RDMA using OFA Software” class this January 19-20, 2011 at the University of New Hampshire’s InterOperability Laboratory (UNH-IOL). If you are an application developer skilled in C programming and familiar with sockets, but with little or no experience programming with OpenFabrics Software, this class is the perfect opportunity to develop your RDMA expertise.

“Writing Application Programs for RDMA using OFA Software” immediately prepares you for writing application programs using RDMA. The class includes 8 hours of classroom work and 8 hours in the lab on Wednesday and Thursday, January 19 and 20. **Attendees enrolled by Dec. 24 will receive a FREE pass and rentals to Loon Mountain for skiing on Friday, January 21.**
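For a taste of what the class covers: developers who know sockets will find that connection setup with the OFA software’s librdmacm library follows a familiar getaddrinfo()-then-connect() shape. Below is a rough client-side sketch under that assumption; the server name and port are placeholders and error handling is trimmed.

    /* Sketch of RDMA connection setup with librdmacm, which mirrors
     * the sockets flow (getaddrinfo + connect). The server name and
     * port below are placeholders; error handling is minimal. */
    #include <stdio.h>
    #include <string.h>
    #include <rdma/rdma_cma.h>

    int main(void)
    {
        struct rdma_addrinfo hints, *res;
        struct ibv_qp_init_attr qp_attr;
        struct rdma_cm_id *id;

        memset(&hints, 0, sizeof(hints));
        hints.ai_port_space = RDMA_PS_TCP;   /* reliable, connected service */
        if (rdma_getaddrinfo("example-server", "7471", &hints, &res)) {
            fprintf(stderr, "rdma_getaddrinfo failed\n");
            return 1;
        }

        memset(&qp_attr, 0, sizeof(qp_attr));
        qp_attr.qp_type = IBV_QPT_RC;        /* reliable connection QP */
        qp_attr.cap.max_send_wr = qp_attr.cap.max_recv_wr = 1;
        qp_attr.cap.max_send_sge = qp_attr.cap.max_recv_sge = 1;

        /* Create the endpoint (queue pair plus resources) and connect:
         * the RDMA analogue of socket() followed by connect(). */
        if (rdma_create_ep(&id, res, NULL, &qp_attr) ||
            rdma_connect(id, NULL)) {
            fprintf(stderr, "endpoint creation or connect failed\n");
            rdma_freeaddrinfo(res);
            return 1;
        }

        printf("connected; ready to post RDMA work requests\n");

        rdma_disconnect(id);
        rdma_destroy_ep(id);
        rdma_freeaddrinfo(res);
        return 0;
    }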

Software Forge is a member of the IBTA and is helping drive this very first RDMA class. More information is available at www.openfabrics.org/training. Feel free to contact me with questions as well.


Regards,

Rupert Dance

rsdance@soft-forge.com

Member, IBTA’s Compliance and Interoperability Working Group