
Author Archive

Latest InfiniBand and RoCE Developments a Major Focus at the OpenFabrics Alliance Workshop 2018

May 31st, 2018


The annual OpenFabrics Alliance (OFA) Workshop is a premier means of fostering collaboration among those in the OpenFabrics community and advanced networking industry as a whole. Known for being the only event of its kind, the OFA Workshop allows attendees to discuss emerging fabric technologies, collaborate on future industry requirements, and address remaining challenges. The week-long event is made up of sessions covering a wide range of pressing topics, including talks related to InfiniBand and RDMA over Converged Ethernet (RoCE).

This year’s agenda featured sessions highlighting a variety of InfiniBand and RoCE updates and emerging applications. Below is a list of all OFA Workshop 2018 sessions covering RDMA technologies and the associated presentations.


RoCE Containers - Status update

Parav Pandit, Mellanox Technologies

Using RDMA securely in containerized environments is increasingly desired. RDMA over Converged Ethernet (RoCE) needs to operate in, and honor, network namespaces other than the default init_net. This session focused on recent and upcoming enhancements to RoCE functionality and security. Key areas to address for supporting RoCE devices in container environments include, at minimum, several modules of the InfiniBand stack: the connection manager, user verbs, the core, statistics, resource tracking, device discovery and visibility to applications, and net device migration across namespaces.
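
For reference, here is a minimal libibverbs sketch of the application-side device discovery mentioned above. In the namespace-aware model the session describes, a containerized process would be expected to see only the RDMA devices associated with its own net namespace rather than everything in init_net; that filtering is the goal of the work, not something configured by this snippet.

```c
/* Minimal sketch: enumerate the RDMA devices visible to this process.
 * Build with: gcc demo.c -libverbs (assumes rdma-core headers installed). */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **list = ibv_get_device_list(&num);

    if (!list) {
        perror("ibv_get_device_list");
        return 1;
    }

    /* With namespace-aware RoCE, this list would be restricted to devices
     * assigned to the caller's net namespace. */
    printf("%d RDMA device(s) visible:\n", num);
    for (int i = 0; i < num; i++)
        printf("  %s\n", ibv_get_device_name(list[i]));

    ibv_free_device_list(list);
    return 0;
}
```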

Building Efficient Clouds for HPC, Big Data, and Neuroscience Applications over SR-IOV-enabled InfiniBand Clusters

Xiaoyi Lu, The Ohio State University

Single Root I/O Virtualization (SR-IOV) technology has been steadily gaining momentum for high performance interconnects such as InfiniBand. SR-IOV can deliver near-native performance but lacks locality-aware communication support. This talk presented an efficient approach to building HPC clouds based on MVAPICH2 and RDMA-Hadoop with SR-IOV. The talk highlighted high-performance designs of the virtual machine and container aware MVAPICH2 library over SR-IOV enabled HPC clouds, and also presented a high-performance virtual machine migration framework for MPI applications on SR-IOV enabled InfiniBand clouds. The presenter discussed how to leverage high-performance networking features (e.g., RDMA, SR-IOV) in cloud environments to accelerate data processing through the RDMA-Hadoop package. To show the performance benefits of the proposed designs, the team co-designed a scalable, distributed tool with MVAPICH2 for statistical evaluation of brain connectomes in the neuroscience domain, which runs on top of container-based cloud environments while natively utilizing RDMA interconnects and delivering near-native performance.

Non-Contiguous Memory Registration

Tzahi Oved, Mellanox Technologies

Memory registration enables contiguous memory regions to be accessed with RDMA. In this talk, they showed how memory registration could be extended beyond access rights to describe complex memory layouts. Many HPC applications receive regular structured data, such as a column of a matrix. In this case, the application would typically receive a chunk of data and scatter it with the CPU, or use multiple RDMA writes to transfer each element in place. Both options introduce significant overhead. By using a memory region that specifies strided access, this overhead can be completely eliminated: the initiator posts a single RDMA write and the target HCA scatters each element into place. Similarly, standard memory regions cannot describe non-contiguous memory allocations, forcing applications to generate remote keys for each buffer. However, by allowing a non-contiguous memory region to span multiple address ranges, an application may scatter remote data with a single remote key. Using non-contiguous memory registration, such memory layouts may be created, accessed, and invalidated using efficient, non-privileged, user-level interfaces.
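
To make the overhead concrete, here is a minimal sketch, assuming standard libibverbs rather than the proposed extension, of the baseline the talk wants to eliminate: every non-contiguous buffer needs its own ibv_reg_mr() call and therefore its own remote key to advertise to the peer.

```c
/* Baseline sketch with standard verbs (not the proposed extension): each
 * non-contiguous buffer needs its own memory region, and therefore its own
 * remote key to exchange with the peer. */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

#define NBUF  4
#define BUFSZ 4096

/* Registers NBUF separate buffers on an existing protection domain. */
static int register_scattered_buffers(struct ibv_pd *pd,
                                      void *buf[NBUF], struct ibv_mr *mr[NBUF])
{
    for (int i = 0; i < NBUF; i++) {
        buf[i] = aligned_alloc(4096, BUFSZ);
        if (!buf[i])
            return -1;

        /* One registration, and one rkey, per buffer. The non-contiguous
         * MR described in the session would cover all of these ranges
         * under a single remote key. */
        mr[i] = ibv_reg_mr(pd, buf[i], BUFSZ,
                           IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_WRITE);
        if (!mr[i])
            return -1;

        printf("buffer %d registered, rkey 0x%x\n", i, mr[i]->rkey);
    }
    return 0;
}
```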

Dynamically-Connected Transport

Alex Rosenbaum, Mellanox Technologies

Dynamically-Connected (DC) transport is a combination of features from the existing UD and RC transports: DC can send every message to a different destination, like UD, and is also a reliable transport supporting RDMA and Atomic operations, like RC. The crux of the transport is dynamically connecting and disconnecting on-the-fly in hardware when changing destinations. As a result, a DC endpoint may communicate with any peer, providing the full RC feature set, while maintaining a fixed memory footprint regardless of the size of the network. In this talk, we present the unique characteristics of this new transport and show how it can be leveraged to reach peak all-to-all communication performance. We will review the DC transport objects and their semantics, as well as the Linux upstream DC API and its usage.
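
As a rough illustration of the upstream API mentioned at the end of the abstract, the sketch below creates a DC target (DCT) through the mlx5dv provider interface in rdma-core. The structure and constant names follow the mlx5dv_create_qp(3) man page and may differ across rdma-core versions, so treat this as an assumption-laden outline rather than the talk's reference code.

```c
/* Hedged sketch: create a DC target (DCT) with the mlx5dv API. The caller
 * supplies an already-created context, PD, CQ and SRQ. */
#include <infiniband/verbs.h>
#include <infiniband/mlx5dv.h>

struct ibv_qp *create_dct(struct ibv_context *ctx, struct ibv_pd *pd,
                          struct ibv_cq *cq, struct ibv_srq *srq,
                          uint64_t dc_access_key)
{
    struct ibv_qp_init_attr_ex attr = {
        .qp_type   = IBV_QPT_DRIVER,   /* DC QPs are provider-specific */
        .send_cq   = cq,
        .recv_cq   = cq,
        .srq       = srq,              /* a DCT receives through an SRQ */
        .pd        = pd,
        .comp_mask = IBV_QP_INIT_ATTR_PD,
    };
    struct mlx5dv_qp_init_attr dv_attr = {
        .comp_mask = MLX5DV_QP_INIT_ATTR_MASK_DC,
        .dc_init_attr = {
            .dc_type        = MLX5DV_DCTYPE_DCT,
            .dct_access_key = dc_access_key,
        },
    };

    /* The returned QP is then moved through INIT/RTR with ibv_modify_qp()
     * like any other QP; the sending side creates a DC initiator (DCI)
     * the same way, using MLX5DV_DCTYPE_DCI. */
    return mlx5dv_create_qp(ctx, &attr, &dv_attr);
}
```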

T10-DIF offload

Tzahi Oved, Mellanox Technologies

T10-DIF is a standard that defines how to protect the integrity of storage data blocks. Every storage block is followed by a Data Integrity Field (DIF). This field contains a CRC of the preceding block, the LBA (block number within the storage device) and an application tag. Normally the DIF is saved in the storage device along with the data block itself, so that it can later be used to verify data integrity.
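
For readers unfamiliar with the on-disk format, the sketch below shows the 8-byte DIF tuple described above, together with a straightforward, unoptimized implementation of the T10 CRC-16 guard calculation (polynomial 0x8BB7). The struct and function names are illustrative and not taken from any particular storage stack.

```c
#include <stdint.h>
#include <stddef.h>

/* 8 bytes of protection information appended to each data block
 * (commonly 512 bytes). */
struct t10_dif_tuple {
    uint16_t guard_tag; /* CRC-16 of the preceding data block */
    uint16_t app_tag;   /* application-defined tag */
    uint32_t ref_tag;   /* typically the low 32 bits of the LBA */
};

/* Bitwise T10-DIF CRC-16 (polynomial 0x8BB7, initial value 0),
 * written for clarity rather than speed. */
static uint16_t t10_crc16(const uint8_t *data, size_t len)
{
    uint16_t crc = 0;

    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)data[i] << 8;
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x8000) ? (crc << 1) ^ 0x8BB7 : crc << 1;
    }
    return crc;
}
```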

Modern storage systems and adapters allow creating, verifying and stripping those DIFs while reading and writing data to the storage device, as requested by the user and supported by the OS. The T10-DIF offload RDMA feature brings this capability to RDMA-based storage protocols. Using this feature, RDMA-based protocols can request the RDMA device to generate, strip and/or verify the DIF while sending or receiving a message. The DIF operation is configured in a new Signature Memory Region. Every memory access using this MR (local or remote) results in a DIF operation performed on the data as it moves between the wire and memory. This session will describe how the configuration and operation of this feature should be done using the verbs API.

NVMf Target Offload

Liran Liss, Mellanox Technologies

NVMe is a standard that defines how to access a solid-state storage device over PCIe in a very efficient way. It defines how to create and use multiple submission and completion queues between software and the device, over which storage operations are carried out and completed.
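
For context on the queue model, here is a rough sketch of the 64-byte submission queue entry and 16-byte completion queue entry layouts from the NVMe base specification, with the field grouping simplified; it is these queue pairs that the target offload described in this session binds to an RDMA connection.

```c
#include <stdint.h>

/* 64-byte NVMe submission queue entry (common command format), sketched
 * from the NVMe base specification; verify against the spec revision in
 * use before relying on it. */
struct nvme_sqe {
    uint32_t cdw0;        /* opcode, fused op, PSDT, command identifier */
    uint32_t nsid;        /* namespace ID */
    uint32_t rsvd[2];
    uint64_t mptr;        /* metadata pointer */
    uint64_t prp1;        /* data pointer, first PRP entry (or SGL) */
    uint64_t prp2;        /* data pointer, second PRP entry */
    uint32_t cdw10_15[6]; /* command-specific dwords */
};

/* 16-byte NVMe completion queue entry. */
struct nvme_cqe {
    uint32_t cdw0;    /* command-specific result */
    uint32_t rsvd;
    uint16_t sq_head; /* current submission queue head pointer */
    uint16_t sq_id;   /* submission queue the command came from */
    uint16_t cid;     /* command identifier being completed */
    uint16_t status;  /* phase tag (bit 0) and status field */
};

/* Compile-time checks that the layouts have the sizes the spec requires. */
_Static_assert(sizeof(struct nvme_sqe) == 64, "SQE must be 64 bytes");
_Static_assert(sizeof(struct nvme_cqe) == 16, "CQE must be 16 bytes");
```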

NVMe over Fabrics is a newer standard that maps NVMe to RDMA to allow remote access to storage devices over an RDMA fabric using the same NVMe language. Since NVMe queues look and act very much like RDMA queues, bridging between the two is a natural application. In fact, a couple of software packages today implement an NVMe-over-Fabrics bridge to a local NVMe target.

The NVMe-oF Target Offload feature is such an implementation, done in hardware. A supporting RDMA device is configured with the details of the queues of an NVMe device. An incoming client RDMA connection (QP) is then bound to those NVMe queues. From that point on, every IO request arriving over the network from the client is submitted to the respective NVMe queue without any software intervention, using PCIe peer-to-peer access. This session will describe how the configuration and operation of this feature should be done using verbs.

High-Performance Big Data Analytics with RDMA over NVM and NVMe-SSD

Xiaoyi Lu, The Ohio State University

The convergence of Big Data and HPC has been pushing the innovation of accelerating Big Data analytics and management on modern HPC clusters. Recent studies have shown that the performance of Apache Hadoop, Spark, and Memcached can be significantly improved by leveraging high performance networking technologies, such as Remote Direct Memory Access (RDMA). Most of these studies are based on ‘DRAM+RDMA’ schemes. On the other hand, Non-Volatile Memory (NVM) and NVMe-SSD technologies can support RDMA access with low latency, high throughput, and persistence on HPC clusters. NVMs and NVMe-SSDs provide the opportunity to build novel high-performance and QoS-aware communication and I/O subsystems for data-intensive applications. In this talk, we proposed new communication and I/O schemes for these data analytics stacks, which are designed with RDMA over NVM and NVMe-SSD. Our studies show that the proposed designs can significantly improve the communication, I/O, and application performance for Big Data analytics and management middleware, such as Hadoop, Spark, and Memcached. In addition, we will also discuss how to design QoS-aware schemes in these frameworks with NVMe-SSD.

Comprehensive, Synchronous, High Frequency Measurement of InfiniBand Networks in Production HPC Systems

Michael Aguilar, Sandia National Laboratories

In this presentation, we showed InfiniBand performance information gathered from a large Sandia HPC system, Skybridge. We showed detection of network hot spots that may affect data exchanges for tightly coupled parallel threads. We quantified the overhead cost (application impact) when data is being collected.

At Sandia Labs, we are continuing to develop an InfiniBand fabric switch port sampler that can be used to gather remote data from InfiniBand switches. Using coordinated InfiniBand switch and HCA port samplers, a real-time snapshot of InfiniBand traffic can be retrieved from the fabric on a large-scale HPC computing platform. Because data retrieval with LDMS is time-stamped and lightweight, production job runs can be instrumented to provide research data that can be used to specify computing platforms with improved data performance.

Our implementation of synchronous monitoring of large-scale HPC systems provides insights into how to improve computing performance. Our sampler takes advantage of the OpenFabrics software stack for metric gathering. The OFED stack provides a common, interoperable software stack with the inherent ability to gather traffic metrics from selected connection points within a network fabric. We use OFED MAD and UMAD to collect the remote switch port traffic metrics.
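
As an illustration of the MAD/UMAD-based collection mentioned above, the sketch below queries the PortCounters attribute of a remote fabric port with libibmad, roughly what perfquery(8) does; the LID and port number are taken from the command line, and error handling is kept minimal.

```c
/* Hedged sketch: read PortCounters from a fabric port via libibmad,
 * similar in spirit to perfquery(8). Link with -libmad -libumad. */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/mad.h>

int main(int argc, char **argv)
{
    int mgmt_classes[] = { IB_SMI_CLASS, IB_SA_CLASS, IB_PERFORMANCE_CLASS };
    struct ibmad_port *srcport;
    ib_portid_t portid = { 0 };
    uint8_t pc[1024] = { 0 };
    int lid  = argc > 1 ? atoi(argv[1]) : 1;  /* LID of the switch/HCA to query */
    int port = argc > 2 ? atoi(argv[2]) : 1;  /* port number on that node */
    uint32_t xmit = 0, rcv = 0;

    srcport = mad_rpc_open_port(NULL, 0, mgmt_classes, 3);
    if (!srcport) {
        fprintf(stderr, "mad_rpc_open_port failed\n");
        return 1;
    }

    ib_portid_set(&portid, lid, 0, 0);

    if (!pma_query_via(pc, &portid, port, 0, IB_GSI_PORT_COUNTERS, srcport)) {
        fprintf(stderr, "PortCounters query to LID %d failed\n", lid);
        return 1;
    }

    /* PortXmitData/PortRcvData are reported in units of four bytes. */
    mad_decode_field(pc, IB_PC_XMT_BYTES_F, &xmit);
    mad_decode_field(pc, IB_PC_RCV_BYTES_F, &rcv);
    printf("LID %d port %d: XmitData %u, RcvData %u (4-byte words)\n",
           lid, port, xmit, rcv);

    mad_rpc_close_port(srcport);
    return 0;
}
```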

The OFA Workshop is extremely valuable to InfiniBand Trade Association members and the fabrics community as a whole, aiming to identify, discuss and overcome the industry’s most significant challenges. We look forward to participating again next year. Videos of each presentation from the OFA Workshop 2018 are now available online on insideHPC.com.

Bill Lee


Plugfest 32 Pushes the IBTA Compliance and Interoperability Testing Program to New Heights

February 26th, 2018


In October 2017, the IBTA held its 32nd Plugfest at the University of New Hampshire Interoperability Laboratory (UNH-IOL), resulting in the latest InfiniBand Combined Cable and Device Integrators’ List and RDMA over Converged Ethernet (RoCE) Interoperability List. Our rigorous, independent third-party compliance and interoperability program ensures that each cable and device that is tested successfully meets end user needs and expectations of InfiniBand and RoCE technology.

Plugfest 32 featured key updates and highlights, including:

  • New records for RoCE interoperability testing
    • Four major device vendors and seven of the most prominent cable vendors in the industry are now participating in the RoCE interoperability testing program
    • 21 Ethernet devices and over 60 Ethernet cables registered for the event, which were run through 30 different scenarios at speeds of 10, 25, 40, 50 and 100 GbE
  • Cutting-Edge Testing Equipment and Capabilities
    • Work on a new SFI Transport tester at Plugfest 32 will allow 25 additional tests at Plugfest 33 in April 2018
    • New test equipment and application software enabled testing of InfiniBand HDR 200 Gb/s copper cables and the development of new HDR 200 Gb/s testing suites for implementation at Plugfest 33

Our Plugfests are widely acknowledged as the most demanding and effective compliance and interoperability program in the industry, creating a robust and reliable ecosystem of InfiniBand and RoCE solutions that end users can depend on. Each IBTA Plugfest provides our members – both cable and device vendors – with a neutral venue in which to test products. Industry leading RDMA test equipment vendors supply cutting edge equipment for advanced compliance and interoperability testing. This results in a unique opportunity only offered by the IBTA for its members to test the latest InfiniBand and Ethernet-based products in many different scenarios, debug in real-time and resolve any issues uncovered in the process.

The IBTA Plugfest is essential to customers and members alike. Plugfests create a clear path to develop products that are compliant to the InfiniBand and RoCE specifications while also being interoperable within the larger ecosystem. Customers ranging from large data centers and research facilities to universities and government labs leverage the results of our Plugfests when determining which products to use when designing or upgrading their systems. Fabric design is an important and costly business decision, making the InfiniBand Integrators’ List and the RoCE Interoperability List crucial elements when building systems and ensuring that all equipment will operate seamlessly. The independent validation provided by the IBTA Compliance and Interoperability program is critical to the advancement of RDMA technology and the industry as a whole.

We would like to give a special thanks to the vendors that contributed test equipment to IBTA Plugfest 32, including Ace Unitech, Anritsu, Keysight Technologies, Molex, Software Forge, TE Connectivity, Tektronix and Wilder Technologies.

IBTA Plugfest 33 will be held April 9-20, 2018 at UNH-IOL. Registration and additional event information can be found on the IBTA Plugfest page.

Rupert Dance, IBTA CIWG


InfiniBand Leads the TOP500 List, is Preferred Fabric of Leading AI and Deep Learning Systems

December 7th, 2017

The latest iteration of the bi-annual TOP500 List reveals that InfiniBand not only powers the world’s first and fourth fastest supercomputers, but is also the preferred interconnect for Artificial Intelligence (AI) and Deep Learning systems. Furthermore, the latest results show InfiniBand continues to be the most used high-speed interconnect in the TOP500, reinforcing its status as the industry’s leading high performance interconnect technology.

As High Performance Computing (HPC) demands evolve, especially in the case of emerging AI and Deep Learning applications, the industry can rely on InfiniBand to meet their rigorous network performance requirements and scalability needs. System architects will continue to turn to the unmatched combination of scalable network bandwidth, low latency and efficiency that InfiniBand offers.

Top of the List:

  • InfiniBand accelerates two of the top five systems – including the first (China) and fourth (Japan) fastest supercomputers
  • InfiniBand connects 77% of new HPC systems
  • InfiniBand is the most used high-speed interconnect on the TOP500 List
  • InfiniBand is the preferred interconnect for leading AI and Deep Learning systems
  • All 23 systems running Ethernet at 25Gb/s or higher are RoCE capable

InfiniBand continues to prove that it can deliver on the increasing demands for performance, scalability and speed that are required of today’s HPC systems, efficiently tackling challenges involving even larger and more complex data sets. Read the full IBTA announcement for more information on InfiniBand and RoCE’s status in the world’s top supercomputers.

Bill Lee


How RDMA is Solving AI’s Scalability Problem

October 25th, 2017


Artificial Intelligence (AI) is already impacting many aspects of our day-to-day lives. Through the use of AI, we have been introduced to autonomous vehicles, real-time fraud detection, public safety, advanced drug discovery, cancer research and much more. AI has already enabled scientific achievements once thought impossible, while also delivering on the promise of improving humanity.

Today, AI and machine learning are becoming completely intertwined with society and the way we interact with computers, but the real barrier to tackling even bigger challenges tomorrow is scalable performance. As future research, development and simulations require the processing of larger data sets, the key to overcoming the performance barrier of highly parallelized computing, and the communication overhead that comes with it, will undoubtedly be the interconnect. The more parallel processes we add to solve a complex problem, the more communication and data movement is needed. Remote Direct Memory Access (RDMA) fabrics, such as InfiniBand and RDMA over Converged Ethernet (RoCE), are key to unlocking scalable performance for the most demanding AI applications being developed and deployed today.

The InfiniBand Trade Association’s (IBTA) InfiniBand Roadmap lays out a clear and attainable path for performance gains, detailing 1x, 4x and 12x port widths with bandwidths reaching 600Gb/s this year and further outlining plans for future speed increases. For those already deploying InfiniBand in their HPC and AI systems, the roadmap provides specific milestones around expected performance improvements, ensuring their investment is protected with backwards and forwards compatibility across generations. While high bandwidth is very important, the low latency benefits of RDMA are equally essential for the advancement of machine learning and AI. The ultra-low latency provided by RDMA minimizes processing overhead and greatly accelerates overall application performance, which AI requires when moving massive amounts of data, exchanging messages and computing results. InfiniBand’s low latency and high bandwidth characteristics will undoubtedly address AI scalability and efficiency needs as systems tackle challenges involving even larger and more complex data sets.

The InfiniBand Architecture Specification is an open standard developed in a vendor-neutral, community-centric manner. The IBTA has a long history of addressing HPC and enterprise application requirements for I/O performance and scalability – providing a reliable ecosystem for end users through promotion of open standards and roadmaps, compliant and interoperable products, as well as success stories and educational resources. Furthermore, many institutions advancing AI research and development leverage InfiniBand and RoCE  as they satisfy both performance needs and requirements for non-proprietary, open technologies.

One of the most critical elements in creating a cognitive computing application is deep learning. It takes a considerable amount of time to create a data model with the highest degree of accuracy. While this could be done over a traditional network such as Ethernet, the time required to train the models would be impractically long. Today, all major frameworks (i.e. TensorFlow, Microsoft Cognitive Toolkit, Baidu’s PaddlePaddle and others) and even communication libraries such as NVIDIA’s NCCL are natively enabled to take advantage of the low-level verbs implementation of the InfiniBand standard. This not only improves the overall accuracy in training, but also considerably reduces the amount of time needed to deploy the solution (as highlighted in a recent IBM PowerAI DDL demonstration).

The supercomputing industry has been aggressively marching towards Exascale. RDMA is the core offload technology able to solve the scalability issues hindering the advancement of HPC. Since machine learning shares the same underlying hardware and interconnect needs as HPC, RDMA is unlocking the power of AI through the use of InfiniBand. As machine learning demands advance even further, InfiniBand will continue to lead and drive the industries that rely on it.

Be sure to check back in on the IBTA blog for future posts on RDMA’s role in AI and machine learning.

Scot Schultz, Sr. Director of HPC/AI & Technical Computing at Mellanox


Enabling Exascale with Co-Design Architecture, Intelligent Networks and InfiniBand Routing

June 14th, 2017


For those working in the High Performance Computing (HPC) industry, achieving Exascale performance has been an ongoing challenge and a significant milestone for some time. Recently, experts have started to take a holistic system-level approach to performance improvements by examining how hardware and software components interact within data centers. This approach, known as co-design architecture, recognizes that the CPU has reached the limits of its scalability, and offers an intelligent network as the new “co-processor” to share the responsibility for handling and accelerating application workloads.

Next week, IBTA representatives will join other industry experts in Frankfurt, Germany for ISC High Performance, an annual event focused on HPC technological developments and applications. In a Birds of a Feather (BoF) session titled A Holistic Approach to Exascale - Co-Design Architecture, Intelligent Networks and InfiniBand Routing, Scott Atchley from Oak Ridge National Laboratory, as well as Richard Graham, Gerald Lotto and Gilad Shainer from member company Mellanox Technologies, will discuss the advantages of a co-design architecture in depth. Additionally, the group will cover the role that InfiniBand routers play in reaching Exascale performance and share insights behind these recent HPC developments.

Attending ISC High Performance? The BoF session will be held in the Kontrast room starting at 4:00 p.m. on Monday, June 19. For more information on the BoF session and its participants, visit the event site here.

Bill Lee

RoCE Initiative Launches New Online Product Directory for CIOs and IT Professionals

May 24th, 2017


The RoCE Initiative is excited to announce the launch of the online RoCE Product Directory, the latest technical resource to supplement the IBTA’s RoCE educational program. The new online resource is intended to inform CIOs and enterprise data center architects about their options for deploying RDMA over Converged Ethernet (RoCE) technology within their Ethernet infrastructure.

The directory comprises a growing catalogue of RoCE-enabled solutions provided by IBTA members, including Broadcom, Cavium, Inc. and Mellanox Technologies. The new online tool allows users to search by product type and/or brand, connecting them directly to each item’s specific product page. The product directory currently boasts over 65 products that accelerate performance over Ethernet networks while lowering latency.

For more information on the RoCE Product Directory and members currently involved, view the press release here.

Explore the RoCE Product Directory on the RoCE Initiative Product Search page here.

Bill Lee


IBTA to Feature Optimized Testing, Debugging Procedures Onsite at Plugfest 31

March 19th, 2017


The IBTA boasts one of the industry’s top compliance and interoperability programs, which provides device and cable vendors the opportunity to test their products for compliance with the InfiniBand architecture specification as well as interoperability with other InfiniBand and RoCE products. The IBTA Integrators’ List program produces two lists, the InfiniBand Integrators’ List and the RoCE Interoperability List, which are updated twice a year following bi-annual plugfests.

We’re pleased to announce that the results from Plugfest 29 are now available on the IBTA Integrators’ List webpage, while Plugfest 30 results will be made available in the coming weeks. These results are designed to support data center managers, CIOs and other IT decision makers with their planned deployment of InfiniBand and RoCE solutions in both small clusters and large-scale clusters of 1,000 nodes or more.

Changes for Plugfest 31

IBTA Plugfest 31, taking place April 17-28 at the University of New Hampshire Interoperability Lab, is just around the corner and we are excited to announce some significant updates to our testing processes and procedures. These changes originated from efforts at last year’s plugfests and will be fully implemented onsite for the first time at Plugfest 31.

Changes:

  1. We will no longer be testing QDR, but we are adding HDR (200 Gb/s) testing.
  2. Keysight VNA testing is now performed using a 32 port VNA to enable testing of all 8 lanes.
  3. Software Forge (SFI) has developed all new MATLAB code that will allow real time processing of the 32 port s-parameter files generated by the Keysight VNA. This allows us to test and post process VNA results in less than 2 minutes per cable.
  4. Anritsu, Keysight and Software Forge have teamed together to bring hardware and software solutions that allow for real time VNA & ATD testing. This allows direct vendor participation and validation during the Plugfest.

Benefits:

  1. Anritsu and Keysight bring the best leading edge equipment to the Plugfest twice per year.
    1. See the Methods of Implementation for details.
  2. The IBTA also has access to SFI software that allows the Plugfest engineers to post process the results in real time. Therefore we are now able to do real time interactive testing and debugging while your engineers are at the Plugfest.
  3. We are offering a dedicated guaranteed 5 hour time slot for each vendor to debug and review their test results. Additional time will be available but will be allocated during the Plugfest after all vendors are allocated the initial 5 hours. See the registration to choose your time slot.
  4. Arbitration will occur during the Plugfest and not afterwards. This is because we only have access to the EDR and HDR test equipment at the bi-annual IBTA Plugfests.
  5. Results from the IBTA Plugfest will now be available much more quickly since the post processing time has been reduced so dramatically.
  6. We are strongly encouraging vendors to send engineers to this event so that you can compare your results with ours and do any necessary debugging and validation. This interactive debugging and testing opportunity is the best in any of the high speed industries and is provided to you as part of your IBTA Membership. Please take advantage of it.
  7. We will be providing both InfiniBand and RoCE Interoperability testing at PF31.

Interested in attending IBTA Plugfest 31? Registration can be completed on the IBTA Plugfest page. The March 20 registration deadline is fast approaching, so don’t delay!

Rupert Dance, IBTA CIWG


InfiniBand and RoCE to Make Their Mark at OFA Workshop 2017

March 16th, 2017


The OpenFabrics Alliance (OFA) workshop is an annual event devoted to advancing the state of the art in networking. The workshop is known for showcasing a broad range of topics all related to network technology and deployment through an interactive, community-driven event. The comprehensive event includes a rich program made up of more than 50 sessions covering a variety of critical networking topics, which range from current deployments of RDMA to new and advanced network technologies.

To view the full list of abstracts, visit the OFA Workshop 2017 Abstracts and Agenda page.

This year’s workshop program will also feature some notable sessions that showcase the latest developments happening for InfiniBand and RoCE technology. Below is the collection of OFA Workshop 2017 sessions that we recommend you check out:

Developer Experiences of the First Paravirtual RDMA Provider and Other RDMA Updates
Presented by Adit Ranadive, VMware

VMware’s Paravirtual RDMA (PVRDMA) device is a new NIC in vSphere 6.5 that allows VMs in a cluster to communicate using Remote Direct Memory Access (RDMA), while maintaining latencies and bandwidth close to that of physical hardware. Recently, the PVRDMA driver was accepted as part of the Linux 4.10 kernel and our user-library was added as part of the new rdma-core package.

In this session, we will provide a brief overview of our PVRDMA design and capabilities. Next, we will discuss our development approach and challenges for joint device and driver development. Further, we will highlight our experience for upstreaming the driver and library with the new changes to the core RDMA stack.

We will provide an update on the performance of the PVRDMA device along with upcoming updates to the device capabilities. Finally, we will provide new results on the performance achieved by several HPC applications using VM DirectPath I/O.

This session seeks to engage the audience in discussions on: 1) new RDMA provider development and acceptance, and 2) hardware support for RDMA virtualization.

Experiences with NVMe over Fabrics
Presented by Parav Pandit, Mellanox

NVMe is an interface specification for accessing non-volatile storage media over PCIe buses. The interface enables software to interact with devices using multiple, asynchronous submission and completion queues, which reside in memory. Consequently, software may leverage the inherent parallelism and low latency of modern NVM devices with minimal overhead. Recently, the NVMe specification has been extended to support remote access over fabrics, such as RDMA and Fibre Channel. Using RDMA, NVMe over Fabrics (NVMe-oF) provides the high-bandwidth and low-latency characteristics of NVMe to remote devices. Moreover, these performance traits are delivered with negligible CPU overhead, as the bulk of the data transfer is conducted by RDMA.

In this session, we present an overview of NVMe-oF and its implementation in Linux. We point out the main design choices and evaluate NVMe-oF performance for both InfiniBand and RoCE fabrics.

Validating RoCEv2 for Production Deployment in the Cloud Datacenter
Presented by Sowmini Varadhan, Oracle

With the increasing prevalence of Ethernet switches and NICs in data center networks, we have been experimenting with the deployment of RDMA over Converged Ethernet (RoCE) in our DCN. RDMA needs a lossless transport and, in theory, this can be achieved on Ethernet by using Priority-based Flow Control (PFC, IEEE 802.1Qbb) and ECN (IETF RFC 3168).

We describe our experiences in trying to deploy these protocols in a RoCEv2 testbed running at 100 Gbit/sec, consisting of a multi-level Clos network.

In addition to addressing the documented limitations around PFC/ECN (livelock, pause-frame-storm, memory requirements for supporting multiple priority flows), we also hope to share some of the performance metrics gathered, as well as some feedback on ways to improve the tooling for observability and diagnosability of the system in a vendor-agnostic, interoperable way.

Host Based InfiniBand Network Fabric Monitoring
Presented by Michael Aguilar, Sandia National Laboratories

Synchronized host-based InfiniBand network counter monitoring of local connections at 1Hz can provide a reasonable snapshot understanding of traffic injection into and ejection from the fabric. This type of monitoring is currently used to enable understanding of the data flow characteristics of applications and inference about congestion based on application performance degradation. It cannot, however, enable identification of where congestion occurs or how well adaptive routing algorithms and policies react to and alleviate it. Without this critical information the fabric remains opaque, and congestion management will continue to be largely handled through increases in bandwidth. To reduce fabric opacity, we have extended our host-based monitoring to include internal InfiniBand fabric network ports. In this presentation we describe our methodology along with preliminary timing and overhead information. Limitations and their sources are discussed along with proposed solutions, optimizations, and planned future work.

IBTA TWG - Recent Topics in the IBTA, and a Look Ahead
Presented by Bill Magro, Intel on behalf of InfiniBand Trade Association

This talk discusses some recent activities in the IBTA including recent specification updates. It also provides a glimpse into the future for the IBTA.

InfiniBand Virtualization
Presented by Liran Liss, Mellanox on behalf of InfiniBand Trade Association

InfiniBand Virtualization allows a single Channel Adapter to present multiple transport endpoints that share the same physical port. To software, these endpoints are exposed as independent Virtual HCAs (VHCAs) and thus may be assigned to different software entities, such as VMs. VHCAs are visible to Subnet Management and are managed just like physical HCAs. This session provides an overview of the InfiniBand Virtualization Annex, which was released in November 2016. We will cover the virtualization model, management, addressing modes, and deployment considerations.

IPoIB Acceleration
Presented by Tzahi Oved, Mellanox

The IPoIB protocol encapsulates IP packets over InfiniBand datagrams. As a direct RDMA Upper Layer Protocol (ULP), IPoIB cannot support HW features that are specific to the IP protocol stack. Nevertheless, RDMA interfaces have been extended to support some of the prominent IP offload features, such as TCP/UDP checksum and TSO. This provided reasonable performance for IPoIB.

However, new network interface features are one of the most active areas of the Linux kernel. Examples include TSS and RSS, tunneling offloads, and XDP. In addition, the basic IP offload features are insufficient to cope with the increasing network bandwidth. Rather than continuously porting IP network interface developments into the RDMA stack, we propose adding abstract network data-path interfaces to RDMA devices.

In order to present a consistent interface to users, the IPoIB ULP continues to represent the network device to the IP stack. The common code also manages the IPoIB control plane, such as resolving path queries and registering to multicast groups. Data path operations are forwarded to devices that implement the new API, or fall back to the standard implementation otherwise. Using the foregoing approach, we show how IPoIB closes the performance gap compared to state-of-the-art Ethernet network interfaces.

Packet Processing Verbs for Ethernet and IPoIB
Presented by Tzahi Oved, Mellanox

As a prominent user-level networking API, the RDMA stack has been extended to support packet processing applications and user-level TCP/IP stacks, initially focusing on Ethernet. This allowed delivering low latency and high message-rate to these applications.

In this talk, we provide an extensive introduction to both current and upcoming packet processing Verbs, such as checksum offloads, TSO, flow steering, and RSS. Next, we describe how these capabilities may also be applied to IPoIB traffic.

In contrast to Ethernet support, which was based on Raw Ethernet QPs that receive unmodified packets from the wire, IPoIB packets are sent over a “virtual wire”, managed by the kernel. Thus, processing selective IP flows from user-space requires coordination with the IPoIB interface.
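
As a small illustration of the kind of packet processing verbs discussed here, the sketch below attaches a flow steering rule (matching on destination MAC) to an already-created Raw Packet QP using the existing ibv_create_flow() verb. It is an assumption-level example of the current Ethernet support, not the upcoming IPoIB variant the talk introduces.

```c
/* Hedged sketch: steer one Ethernet flow, selected by destination MAC,
 * to a Raw Packet QP with ibv_create_flow(). A fuller application would
 * also configure RSS, TSO and checksum capabilities at QP creation time. */
#include <stdint.h>
#include <string.h>
#include <infiniband/verbs.h>

struct ibv_flow *steer_dst_mac(struct ibv_qp *qp, const uint8_t dmac[6],
                               uint8_t port_num)
{
    struct {
        struct ibv_flow_attr attr;
        struct ibv_flow_spec_eth eth;
    } __attribute__((packed)) flow = {
        .attr = {
            .type         = IBV_FLOW_ATTR_NORMAL,
            .size         = sizeof(flow),
            .num_of_specs = 1,
            .port         = port_num,
        },
        .eth = {
            .type = IBV_FLOW_SPEC_ETH,
            .size = sizeof(struct ibv_flow_spec_eth),
        },
    };

    /* Match only on destination MAC: set the value and an all-ones mask. */
    memcpy(flow.eth.val.dst_mac, dmac, 6);
    memset(flow.eth.mask.dst_mac, 0xff, 6);

    /* Packets matching the rule are delivered to this QP's receive queue. */
    return ibv_create_flow(qp, &flow.attr);
}
```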

The Linux SoftRoCE Driver
Presented by Liran Liss, Mellanox

SoftRoCE is a software implementation of the RDMA transport protocol over Ethernet. It allows any host to conduct RDMA traffic without a RoCE-capable NIC, enabling RDMA development anywhere.

This session presents the Linux SoftRoCE driver, RXE, which was recently accepted to the 4.9 kernel. In addition, the RXE user-level driver is now part of rdma-core, the consolidated RDMA user-space codebase. RXE is fully interoperable with HW RoCE devices, and may be used for both testing and production. We provide an overview of the RXE driver, detail its configuration, and discuss the current status and remaining challenges in RXE development.

Ubiquitous RoCE
Presented by Alex Shpiner, Mellanox

In recent years, the usage of RDMA in datacenter networks has increased significantly, with RoCE (RDMA over Converged Ethernet) emerging as the canonical approach to deploying RDMA in Ethernet-based datacenters.

Initially, RoCE required a lossless fabric for optimal performance. This is typically achieved by enabling Priority Flow Control (PFC) on Ethernet NICs and switches. The RoCEv2 specification introduced RoCE congestion control, which allows throttling transmission rate in response to congestion. Consequently, packet loss may be minimized and performance is maintained even if the underlying Ethernet network is lossy.

In this talk, we discuss the details of latest developments in the RoCE congestion control. Hardware congestion control reduces the latency of the congestion control loop; it reacts promptly in the face of congestion by throttling the transmission rate quickly and accurately; when congestion is relieved, bandwidth is immediately recovered. The short control loop also prevents network buffers from overfilling in many congestion scenarios.

In addition, fast hardware retransmission complements congestion control in heavy congestion scenarios, by significantly reducing the penalty of packet drops.

Keep an eye out as videos of the OFA Workshop 2017 sessions will be published on both the OFA website and insideHPC. Interested in attending? Registration for the 13th Annual OFA Workshop will be available online and onsite up until the opening day of the event, March 27. Visit the OFA Workshop 2017 Registration page for more information.

Bill Lee


Share Your RDMA Expertise at the 2017 OpenFabrics Alliance Workshop

January 27th, 2017


Experts from around the world will gather at the 13th Annual OpenFabrics Alliance (OFA) Workshop March 27-31 in Austin, TX to discuss recent innovations in network technology and to tackle the industry’s toughest challenges. Through engaging keynotes, lively sessions, and open discussions, the OFA Workshop will offer a rich program covering a broad range of topics related to network deployment.

Get involved in this exciting, community-driven industry experience by answering OFA’s Call for Sessions. Recommended session topics range from RDMA in Commercial Environments to Data Intensive Computing & Analytics. However, alternate topics are welcome; OFA encourages topic submissions that aren’t included in the current list.

IBTA members are encouraged to share their vast RDMA technology expertise by submitting session proposals on advancements surrounding InfiniBand and RoCE technology. To submit a proposal, go to the OFA Workshop 2017 Call for Sessions webpage (https://openfabrics.org/index.php/call-for-sessions.html) and follow the simple instructions.

If you are not interested in submitting a session proposal but would like to attend the workshop, register today to take advantage of the Early Bird rate:

Registration pricing:

  • Early Bird (through February 13): $595
  • Regular: $695
  • On Site: $695

For additional registration and lodging information, visit the Workshop registration page.

Questions or comments? Contact press@openfabrics.org.

Bill Lee


New InfiniBand Specification Updates Expand Interoperability, Flexibility, and Virtualization Support

November 29th, 2016


Performance demands continue to evolve in High Performance Computing (HPC) and enterprise cloud networks, increasing the need for enhancements to InfiniBand capabilities, support features, and overall interoperability. To address this need, the InfiniBand Trade Association (IBTA) is announcing the public availability of two new InfiniBand Architecture Specification updates - the Volume 2 Release 1.3.1 and a Virtualization Annex to Volume 1 Release 1.3.

The Volume 2 Release 1.3.1 adds flexible performance enhancements to InfiniBand-based networks. With the addition of Forward Error Correction (FEC) upgrades, IT managers can experience both minimal error rates and low latency performance. The new release also enables the InfiniBand subnet manager to optimize signal integrity while maintaining the lowest power possible from the port. Additionally, updates to QSFP28 and CXP28 memory mapping support improved InfiniBand cable management.

This new Volume 2 release also improves upon interoperability and test methodologies for the latest InfiniBand data rates, namely EDR 100 Gb/s and FDR 56 Gb/s. These enhancements are achieved through updated EDR electrical requirements, amended testing methodology for EDR Limiting Active Cables, and FDR interoperability and test specification corrections.

With an aim toward supporting the ever-increasing deployment of virtualized solutions in HPC and enterprise cloud networks, the IBTA also published a new Virtualization Annex to Volume 1 Release 1.3. The Annex extends the InfiniBand specification to address multiple virtual machines connected to a single physical port, which allows subnet managers to recognize each logical endpoint and reduces the burden on the subnet managers as networks leverage virtualization for greater system scalability.

The InfiniBand Architecture Specification Volume 2 Release 1.3.1 and Volume 1 Release 1.3 are available for public download here.

Please contact us at press@infinibandta.org with questions about InfiniBand’s latest updates.

Bill Lee