Enabling Exascale with Co-Design Architecture, Intelligent Networks and InfiniBand Routing

June 14th, 2017


For those working in the High Performance Computing (HPC) industry, achieving Exascale performance has been an ongoing challenge and a significant milestone for some time. Recently, experts have started to take a holistic system-level approach to performance improvements by examining how hardware and software components interact within data centers. This approach, known as co-design architecture, recognizes that the CPU has reached the limits of its scalability, and offers an intelligent network as the new “co-processor” to share the responsibility for handling and accelerating application workloads.

Next week, IBTA representatives will join other industry experts in Frankfurt, Germany for ISC High Performance, an annual event focused on HPC technological developments and applications. In a Birds of a Feather (BoF) session titled A Holistic Approach to Exascale - Co-Design Architecture, Intelligent Networks and InfiniBand Routing, Scott Atchley from Oak Ridge National Laboratory, as well as Richard Graham, Gerald Lotto and Gilad Shainer from member company Mellanox Technologies, will discuss the advantages of a co-design architecture in depth. Additionally, the group will cover the role that InfiniBand routers play in reaching Exascale performance and share insights behind these recent HPC developments.

Attending ISC High Performance? The BoF session will be held in the Kontrast room starting at 4:00 p.m. on Monday, June 19. For more information on the BoF session and its participants, visit the event site here.

Bill Lee

RoCE Initiative Launches New Online Product Directory for CIOs and IT Professionals

May 24th, 2017


The RoCE Initiative is excited to announce the launch of the online RoCE Product Directory, the latest technical resource to supplement the IBTA’s RoCE educational program. The new online resource is intended to inform CIOs and enterprise data center architects about their options for deploying RDMA over Converged Ethernet (RoCE) technology within their Ethernet infrastructure.

The directory comprises a growing catalogue of RoCE-enabled solutions provided by IBTA members, including Broadcom, Cavium, Inc. and Mellanox Technologies. The new online tool allows users to search by product type and/or brand, connecting them directly to each item’s specific product page. The product directory currently boasts over 65 products that accelerate performance over Ethernet networks while lowering latency.

For more information on the RoCE Product Directory and members currently involved, view the press release here.

Explore the RoCE Product Directory on the RoCE Initiative Product Search page here.

Bill Lee


IBTA to Feature Optimized Testing, Debugging Procedures Onsite at Plugfest 31

March 19th, 2017


The IBTA boasts one of the industry’s top compliance and interoperability programs, which provides device and cable vendors the opportunity to test their products for compliance with the InfiniBand architecture specification as well as interoperability with other InfiniBand and RoCE products. The IBTA Integrators’ List program produces two lists, the InfiniBand Integrators’ List and the RoCE Interoperability List, which are updated twice a year following bi-annual plugfests.

We’re pleased to announce that the results from Plugfest 29 are now available on the IBTA Integrators’ List webpage, while Plugfest 30 results will be made available in the coming weeks. These results are designed to support data center managers, CIOs and other IT decision makers with their planned deployment of InfiniBand and RoCE solutions in both small clusters and large-scale clusters of 1,000 nodes or more.

Changes for Plugfest 31

IBTA Plugfest 31, taking place April 17-28 at the University of New Hampshire Interoperability Lab, is just around the corner and we are excited to announce some significant updates to our testing processes and procedures. These changes originated from efforts at last year’s plugfests and will be fully implemented onsite for the first time at Plugfest 31.

Changes:

  1. We will no longer be testing QDR, but we are adding HDR (200 Gb/s) testing.
  2. Keysight VNA testing is now performed using a 32-port VNA to enable testing of all 8 lanes.
  3. Software Forge (SFI) has developed all-new MATLAB code that allows real-time processing of the 32-port S-parameter files generated by the Keysight VNA. This allows us to test and post-process VNA results in less than 2 minutes per cable.
  4. Anritsu, Keysight and Software Forge have teamed up to bring hardware and software solutions that allow for real-time VNA and ATD testing. This allows direct vendor participation and validation during the Plugfest.
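
The post-processing in item 3 ultimately reduces raw S-parameter data to cable metrics such as insertion loss. As a rough illustration of that arithmetic (a Python sketch of ours, not the SFI MATLAB code), insertion loss follows from the magnitude of a through-path S21 sample:

```python
import math

def insertion_loss_db(s21):
    """Insertion loss in dB from one complex S21 sample: IL = -20*log10(|S21|)."""
    return -20.0 * math.log10(abs(s21))

def worst_case_loss(samples):
    """Largest insertion loss across a frequency sweep of S21 samples."""
    return max(insertion_loss_db(s) for s in samples)
```

A lossless through-path (|S21| = 1) gives 0 dB; a sample with |S21| = 0.5 gives roughly 6 dB of loss.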

Benefits:

  1. Anritsu and Keysight bring the best leading-edge equipment to the Plugfest twice per year.
    1. See the Methods of Implementation for details.
  2. The IBTA also has access to SFI software that allows the Plugfest engineers to post-process the results in real time. We are therefore able to do real-time interactive testing and debugging while your engineers are at the Plugfest.
  3. We are offering a dedicated, guaranteed 5-hour time slot for each vendor to debug and review their test results. Additional time will be available, but it will be allocated during the Plugfest after all vendors have received their initial 5 hours. See the registration form to choose your time slot.
  4. Arbitration will occur during the Plugfest, not afterwards, because we only have access to the EDR and HDR test equipment at the bi-annual IBTA Plugfests.
  5. Results from the IBTA Plugfest will be available much more quickly, since the post-processing time has been reduced so dramatically.
  6. We strongly encourage vendors to send engineers to this event so that you can compare your results with ours and do any necessary debugging and validation. This interactive debugging and testing opportunity is the best in any of the high-speed industries and is provided as part of your IBTA membership. Please take advantage of it.
  7. We will provide both InfiniBand and RoCE interoperability testing at Plugfest 31.

Interested in attending IBTA Plugfest 31? Registration can be completed on the IBTA Plugfest page. The March 20 registration deadline is fast approaching, so don’t delay!

Rupert Dance, IBTA CIWG


InfiniBand and RoCE to Make Their Mark at OFA Workshop 2017

March 16th, 2017


The OpenFabrics Alliance (OFA) workshop is an annual event devoted to advancing the state of the art in networking. The workshop is known for showcasing a broad range of topics all related to network technology and deployment through an interactive, community-driven event. The comprehensive event includes a rich program made up of more than 50 sessions covering a variety of critical networking topics, which range from current deployments of RDMA to new and advanced network technologies.

To view the full list of abstracts, visit the OFA Workshop 2017 Abstracts and Agenda page.

This year’s workshop program will also feature some notable sessions that showcase the latest developments in InfiniBand and RoCE technology. Below is a collection of OFA Workshop 2017 sessions that we recommend you check out:

Developer Experiences of the First Paravirtual RDMA Provider and Other RDMA Updates
Presented by Adit Ranadive, VMware

VMware’s Paravirtual RDMA (PVRDMA) device is a new NIC in vSphere 6.5 that allows VMs in a cluster to communicate using Remote Direct Memory Access (RDMA), while maintaining latencies and bandwidth close to that of physical hardware. Recently, the PVRDMA driver was accepted as part of the Linux 4.10 kernel and our user-library was added as part of the new rdma-core package.

In this session, we will provide a brief overview of our PVRDMA design and capabilities. Next, we will discuss our development approach and challenges for joint device and driver development. Further, we will highlight our experience upstreaming the driver and library alongside the new changes to the core RDMA stack.

We will provide an update on the performance of the PVRDMA device along with upcoming updates to the device capabilities. Finally, we will provide new results on the performance achieved by several HPC applications using VM DirectPath I/O.

This session seeks to engage the audience in discussions on: 1) new RDMA provider development and acceptance, and 2) hardware support for RDMA virtualization.

Experiences with NVMe over Fabrics
Presented by Parav Pandit, Mellanox

NVMe is an interface specification for accessing non-volatile storage media over the PCIe bus. The interface enables software to interact with devices using multiple, asynchronous submission and completion queues, which reside in memory. Consequently, software may leverage the inherent parallelism and low latency of modern NVM devices with minimal overhead. Recently, the NVMe specification has been extended to support remote access over fabrics, such as RDMA and Fibre Channel. Using RDMA, NVMe over Fabrics (NVMe-oF) provides the high-bandwidth and low-latency characteristics of NVMe to remote devices. Moreover, these performance traits are delivered with negligible CPU overhead, as the bulk of the data transfer is conducted by RDMA.

In this session, we present an overview of NVMe-oF and its implementation in Linux. We point out the main design choices and evaluate NVMe-oF performance for both InfiniBand and RoCE fabrics.
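
The queue-pair model at the heart of NVMe can be sketched in miniature. The following Python toy (our illustration; the names and structure are assumptions, not the Linux NVMe-oF implementation) shows the submit/process/reap cycle of one submission/completion queue pair:

```python
from collections import deque

class QueuePair:
    """Toy model of an NVMe submission/completion queue pair (illustration only)."""
    def __init__(self, depth):
        self.depth = depth
        self.sq = deque()   # submission queue: commands posted by software
        self.cq = deque()   # completion queue: results posted by the device

    def submit(self, command):
        """Software side: post a command; a full queue would normally backpressure."""
        if len(self.sq) >= self.depth:
            raise RuntimeError("submission queue full")
        self.sq.append(command)

    def device_process_one(self):
        """Device side: consume one command, post its completion."""
        cmd = self.sq.popleft()
        self.cq.append({"cid": cmd["cid"], "status": "success"})

    def reap_completions(self):
        """Software side: drain the completion queue without blocking."""
        done, self.cq = list(self.cq), deque()
        return done
```

Because each queue pair is independent, software can give every core its own pair and exploit the parallelism the paragraph above describes.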

Validating RoCEv2 for Production Deployment in the Cloud Datacenter
Presented by Sowmini Varadhan, Oracle

With the increasing prevalence of Ethernet switches and NICs in data center networks, we have been experimenting with the deployment of RDMA over Converged Ethernet (RoCE) in our DCN. RDMA needs a lossless transport and, in theory, this can be achieved on Ethernet by using Priority-based Flow Control (PFC, IEEE 802.1Qbb) and ECN (IETF RFC 3168).

We describe our experiences in trying to deploy these protocols in a RoCEv2 testbed running at 100 Gbit/s and consisting of a multi-level CLOS network.

In addition to addressing the documented limitations around PFC/ECN (livelock, pause-frame-storm, memory requirements for supporting multiple priority flows), we also hope to share some of the performance metrics gathered, as well as some feedback on ways to improve the tooling for observability and diagnosability of the system in a vendor-agnostic, interoperable way.
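
For readers unfamiliar with the ECN half of this recipe: RFC 3168 encodes ECN in the two low-order bits of the IP TOS/Traffic Class byte, and a congested switch marks ECN-capable packets instead of dropping them. A minimal sketch of that codepoint logic (illustrative only, not the testbed's implementation):

```python
# ECN codepoints from the two least-significant bits of the IP TOS/Traffic
# Class byte, per RFC 3168. This is just the codepoint arithmetic, not a stack.
NOT_ECT, ECT1, ECT0, CE = 0b00, 0b01, 0b10, 0b11

def ecn_bits(tos):
    """Extract the two ECN bits from a TOS/Traffic Class byte."""
    return tos & 0b11

def is_ecn_capable(tos):
    """ECT(0), ECT(1), or CE all indicate an ECN-capable transport."""
    return ecn_bits(tos) != NOT_ECT

def mark_congestion(tos):
    """A congested switch sets CE instead of dropping, if the packet is ECT."""
    return (tos | CE) if is_ecn_capable(tos) else tos
```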

Host Based InfiniBand Network Fabric Monitoring
Presented by Michael Aguilar, Sandia National Laboratories

Synchronized host-based InfiniBand network counter monitoring of local connections at 1 Hz can provide a reasonable snapshot of traffic injection into and ejection from the fabric. This type of monitoring is currently used to enable understanding of the data flow characteristics of applications and inference about congestion based on application performance degradation. It cannot, however, enable identification of where congestion occurs or how well adaptive routing algorithms and policies react to and alleviate it. Without this critical information the fabric remains opaque and congestion management will continue to be largely handled through increases in bandwidth. To reduce fabric opacity, we have extended our host-based monitoring to include internal InfiniBand fabric network ports. In this presentation we describe our methodology along with preliminary timing and overhead information. Limitations and their sources are discussed along with proposed solutions, optimizations, and planned future work.
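
On Linux, per-port InfiniBand counters are exposed through sysfs under /sys/class/infiniband/<hca>/ports/<port>/counters/, so 1 Hz sampling reduces to reading and differencing counter files. A hedged sketch (the sysfs layout is the upstream kernel's; the sampling code itself is our illustration, not Sandia's monitoring tool):

```python
import os

def read_counters(base, names=("port_rcv_data", "port_xmit_data")):
    """Snapshot the named counter files under a sysfs counters directory."""
    snap = {}
    for name in names:
        with open(os.path.join(base, name)) as f:
            snap[name] = int(f.read().strip())
    return snap

def delta(prev, curr):
    """Per-interval traffic: counter difference between two snapshots."""
    return {k: curr[k] - prev[k] for k in curr}
```

A monitoring loop would call read_counters once per second and feed the deltas to a collector; the base path is parameterized here so the logic is testable off-host.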

IBTA TWG - Recent Topics in the IBTA, and a Look Ahead
Presented by Bill Magro, Intel on behalf of InfiniBand Trade Association

This talk discusses some recent activities in the IBTA including recent specification updates. It also provides a glimpse into the future for the IBTA.

InfiniBand Virtualization
Presented by Liran Liss, Mellanox on behalf of InfiniBand Trade Association

InfiniBand Virtualization allows a single Channel Adapter to present multiple transport endpoints that share the same physical port. To software, these endpoints are exposed as independent Virtual HCAs (VHCAs), and thus may be assigned to different software entities, such as VMs. VHCAs are visible to Subnet Management, and are managed just like physical HCAs. This session provides an overview of the InfiniBand Virtualization Annex, which was released in November 2016. We will cover the virtualization model, management, and addressing modes, and discuss deployment considerations.

IPoIB Acceleration
Presented by Tzahi Oved, Mellanox

The IPoIB protocol encapsulates IP packets over InfiniBand datagrams. As a direct RDMA Upper Layer Protocol (ULP), IPoIB cannot support HW features that are specific to the IP protocol stack. Nevertheless, RDMA interfaces have been extended to support some of the prominent IP offload features, such as TCP/UDP checksum and TSO. This provided reasonable performance for IPoIB.

However, new network interface features are one of the most active areas of the Linux kernel. Examples include TSS and RSS, tunneling offloads, and XDP. In addition, the basic IP offload features are insufficient to cope with the increasing network bandwidth. Rather than continuously porting IP network interface developments into the RDMA stack, we propose adding abstract network data-path interfaces to RDMA devices.

In order to present a consistent interface to users, the IPoIB ULP continues to represent the network device to the IP stack. The common code also manages the IPoIB control plane, such as resolving path queries and registering to multicast groups. Data path operations are forwarded to devices that implement the new API, or fall back to the standard implementation otherwise. Using the foregoing approach, we show how IPoIB closes the performance gap relative to state-of-the-art Ethernet network interfaces.
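
The forward-or-fallback dispatch described above is essentially a capability check on the underlying device. A toy Python sketch (class and method names are our assumptions, not the kernel's):

```python
class GenericIPoIB:
    """Standard software data path: always available."""
    def xmit(self, skb):
        return "generic-path"

class AcceleratedDevice:
    """A device implementing the new abstract data-path API."""
    def xmit(self, skb):
        return "offloaded-path"

class IPoIBNetdev:
    """Presents one consistent network device; the data path picks the best backend."""
    def __init__(self, device=None):
        # Forward to the device if it implements the API, else fall back.
        self.backend = device if device and hasattr(device, "xmit") else GenericIPoIB()

    def send(self, skb):
        return self.backend.xmit(skb)
```

The IP stack always sees the same netdev; only the transmit path changes, which is what lets new hardware offloads arrive without porting the whole IP interface into the RDMA stack.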

Packet Processing Verbs for Ethernet and IPoIB
Presented by Tzahi Oved, Mellanox

As a prominent user-level networking API, the RDMA stack has been extended to support packet processing applications and user-level TCP/IP stacks, initially focusing on Ethernet. This allowed delivering low latency and high message-rate to these applications.

In this talk, we provide an extensive introduction to both current and upcoming packet processing Verbs, such as checksum offloads, TSO, flow steering, and RSS. Next, we describe how these capabilities may also be applied to IPoIB traffic.

In contrast to Ethernet support, which was based on Raw Ethernet QPs that receive unmodified packets from the wire, IPoIB packets are sent over a “virtual wire”, managed by the kernel. Thus, processing selective IP flows from user-space requires coordination with the IPoIB interface.

The Linux SoftRoCE Driver
Presented by Liran Liss, Mellanox

SoftRoCE is a software implementation of the RDMA transport protocol over Ethernet. Thus, any host can conduct RDMA traffic without a RoCE-capable NIC, allowing RDMA development anywhere.

This session presents the Linux SoftRoCE driver, RXE, which was recently accepted to the 4.9 kernel. In addition, the RXE user-level driver is now part of rdma-core, the consolidated RDMA user-space codebase. RXE is fully interoperable with HW RoCE devices, and may be used for both testing and production. We provide an overview of the RXE driver, detail its configuration, and discuss the current status and remaining challenges in RXE development.
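
For experimentation, binding a SoftRoCE instance to an Ethernet interface takes only a couple of commands (interface and device names below are placeholders; rxe_cfg shipped with the user-space rxe tooling of the 4.9 era, while newer kernels configure rxe through the iproute2 rdma tool):

```shell
# Load the rxe driver and attach a SoftRoCE instance to an Ethernet interface:
sudo rxe_cfg start
sudo rxe_cfg add eth0
rxe_cfg status            # lists the new rxe0 RDMA device

# On newer kernels, the iproute2 rdma tool replaces rxe_cfg:
sudo rdma link add rxe0 type rxe netdev eth0
```

Once the rxe0 device exists, standard Verbs applications and perftest tools run over it exactly as they would over a hardware RoCE NIC.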

Ubiquitous RoCE
Presented by Alex Shpiner, Mellanox

In recent years, the usage of RDMA in datacenter networks has increased significantly, with RoCE (RDMA over Converged Ethernet) emerging as the canonical approach to deploying RDMA in Ethernet-based datacenters.

Initially, RoCE required a lossless fabric for optimal performance. This is typically achieved by enabling Priority Flow Control (PFC) on Ethernet NICs and switches. The RoCEv2 specification introduced RoCE congestion control, which allows throttling transmission rate in response to congestion. Consequently, packet loss may be minimized and performance is maintained even if the underlying Ethernet network is lossy.

In this talk, we discuss the details of latest developments in the RoCE congestion control. Hardware congestion control reduces the latency of the congestion control loop; it reacts promptly in the face of congestion by throttling the transmission rate quickly and accurately; when congestion is relieved, bandwidth is immediately recovered. The short control loop also prevents network buffers from overfilling in many congestion scenarios.

In addition, fast hardware retransmission complements congestion control in heavy congestion scenarios, by significantly reducing the penalty of packet drops.
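
The decrease-then-recover behavior described above can be captured in a few lines. This Python toy uses illustrative constants of our own choosing, not the values specified for RoCEv2 congestion control:

```python
class RateController:
    """Toy DCQCN-style loop: cut rate on congestion notification, recover when quiet."""
    def __init__(self, line_rate_gbps, alpha=0.5, recover_step_gbps=5.0):
        self.line_rate = line_rate_gbps
        self.rate = line_rate_gbps
        self.alpha = alpha            # how aggressively to back off
        self.step = recover_step_gbps # how quickly to regain bandwidth

    def on_congestion_notification(self):
        # Fast multiplicative decrease keeps buffers from overfilling.
        self.rate *= (1.0 - self.alpha / 2.0)

    def on_quiet_interval(self):
        # Bandwidth is recovered once congestion is relieved, capped at line rate.
        self.rate = min(self.line_rate, self.rate + self.step)
```

With these constants, a 100 Gb/s sender drops to 75 Gb/s on one notification and climbs back to line rate in a handful of quiet intervals, mirroring the short control loop the talk describes.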

Keep an eye out as videos of the OFA Workshop 2017 sessions will be published on both the OFA website and insideHPC. Interested in attending? Registration for the 13th Annual OFA Workshop will be available online and onsite up until the opening day of the event, March 27. Visit the OFA Workshop 2017 Registration page for more information.

Bill Lee


Share Your RDMA Expertise at the 2017 OpenFabrics Alliance Workshop

January 27th, 2017


Experts from around the world will gather at the 13th Annual OpenFabrics Alliance (OFA) Workshop March 27-31 in Austin, TX to discuss recent innovations in network technology and to tackle the industry’s toughest challenges. Through engaging keynotes, lively sessions, and open discussions, the OFA Workshop will offer a rich program covering a broad range of topics related to network deployment.

Get involved in this exciting, community-driven industry experience by answering OFA’s Call for Sessions. Recommended session topics range from RDMA in Commercial Environments to Data Intensive Computing & Analytics. However, alternate topics are welcome; OFA encourages topic submissions that aren’t included in the current list.

IBTA members are encouraged to share their vast RDMA technology expertise by submitting session proposals on advancements surrounding InfiniBand and RoCE technology. To submit a proposal, go to the OFA Workshop 2017 Call for Sessions webpage (https://openfabrics.org/index.php/call-for-sessions.html) and follow the simple instructions.

If you are not interested in submitting a session proposal but would like to attend the workshop, register today to take advantage of the Early Bird rate:

Registration Type             Pricing
Early Bird (through Feb 13)   $595
Regular                       $695
On Site                       $695

For additional registration and lodging information, visit the Workshop registration page.

Questions or comments? Contact press@openfabrics.org.

Bill Lee


New InfiniBand Specification Updates Expand Interoperability, Flexibility, and Virtualization Support

November 29th, 2016


Performance demands continue to evolve in High Performance Computing (HPC) and enterprise cloud networks, increasing the need for enhancements to InfiniBand capabilities, support features, and overall interoperability. To address this need, the InfiniBand Trade Association (IBTA) is announcing the public availability of two new InfiniBand Architecture Specification updates - the Volume 2 Release 1.3.1 and a Virtualization Annex to Volume 1 Release 1.3.

The Volume 2 Release 1.3.1 adds flexible performance enhancements to InfiniBand-based networks. With the addition of Forward Error Correction (FEC) upgrades, IT managers can experience both minimal error rates and low latency performance. The new release also enables the InfiniBand subnet manager to optimize signal integrity while maintaining the lowest power possible from the port. Additionally, updates to QSFP28 and CXP28 memory mapping support improved InfiniBand cable management.

This new Volume 2 release also improves upon interoperability and test methodologies for the latest InfiniBand data rates, namely EDR 100 Gb/s and FDR 56 Gb/s. These enhancements are achieved through updated EDR electrical requirements, amended testing methodology for EDR Limiting Active Cables, and FDR interoperability and test specification corrections.

With an aim toward supporting the ever-increasing deployment of virtualized solutions in HPC and enterprise cloud networks, the IBTA also published a new Virtualization Annex to Volume 1 Release 1.3. The Annex extends the InfiniBand specification to address multiple virtual machines connected to a single physical port, which allows subnet managers to recognize each logical endpoint and reduces the burden on the subnet managers as networks leverage virtualization for greater system scalability.

The InfiniBand Architecture Specification Volume 2 Release 1.3.1 and Volume 1 Release 1.3 are available for public download here.

Please contact us at press@infinibandta.org with questions about InfiniBand’s latest updates.

Bill Lee

IBTA Updates HPC Community on the Continued Evolution of InfiniBand at SC16

November 14th, 2016

Starting this week, the international supercomputing community will gather in Salt Lake City, Utah for the Supercomputing Conference 2016 (SC16). The event includes six days of timely and informative presentations, papers, tutorials, research posters, exhibits and Birds-of-a-Feather sessions. SC16 gives scientists, engineers, researchers, educators, programmers, system administrators and developers the opportunity to learn about innovative technologies that will shape the future of large-scale technical computing and data-driven research.

IBTA members will be out in full force at the show again this year to promote our organization’s mission and showcase their InfiniBand and RoCE solutions. In addition, visit any of the following exhibits during SC16 to learn more about InfiniBand-based technologies and see a wide variety of offerings up close:

• Broadcom (#3677)
• Bull (#721)
• Cray (#1731)
• Dell EMC (#2209)
• DDN Storage (#1931)
• Ethernet Alliance (#1101)
• Finisar (#706)
• Fujitsu Limited (#831)
• Hewlett Packard Enterprise (#1531)
• IBM (#1018 and #1042)
• Lenovo (#2643)
• Mangstor, Inc. (#3377)
• Mellanox (#2631)
• Microsoft (#1501)
• Molex (#1947)
• NVIDIA (#2217 & #2231)
• Oracle (#1231)
• RSC Group (#3818)
• Samtec (#2522)
• Seagate (#1209)
• SGI (#1519)
• Supermicro (#1717)

The Future is Bright for InfiniBand and Students of HPC

With a robust ecosystem and an aggressive roadmap for performance increases, the IBTA believes InfiniBand technology will play a vital role in building the future of supercomputing. As such, we expect InfiniBand will make an appearance during the Student Cluster Competition (SCC), a 48-hour challenge where students from all around the world demonstrate the skills, technologies and science needed to build, maintain and utilize a supercomputer. This competition not only highlights each student’s technical expertise but also introduces the next generation of engineers to the HPC community overall. Keep an eye out for IBTA representatives at SCC who will be meeting with the students and getting them thinking about InfiniBand technology through IBTA-branded giveaways.

Additionally, the IBTA will be providing updates to attending press and analysts on recent developments and upcoming enhancements to the InfiniBand Architecture Specification. Interested media or analysts can contact press@infinibandta.org to schedule briefings.

For more information on the SC16 program, exhibitors and SCC, visit the event website.

Bill Lee


New RoCE Interoperability List Features Higher Test Speeds, Additional Vendors

August 24th, 2016


It’s that time again! Having finalized the RDMA over Converged Ethernet (RoCE) test results from Plugfest 29, the IBTA is pleased to announce the availability of the new RoCE Interoperability List. Designed to support data center managers, CIOs and other IT decision makers with their planned RoCE deployments for enterprise and high performance computing, the latest edition features a growing number of cable and equipment vendors and Ethernet test speeds.

In April 2016, Plugfest 29 saw nine member companies submit RoCE-capable RNICs, switches, and QSFP, QSFP28 and SFP28 cables for interoperability testing. This is an encouraging sign for the RoCE ecosystem as more and more vendors begin to offer solutions that are proven to work seamlessly with each other, regardless of brand. Furthermore, the new list now features 50 and 100 GbE test scenarios, which complements the IBTA’s existing 10, 25 and 40 GbE interoperability testing. This expansion gives RoCE deployers confidence in knowing that as they integrate faster Ethernet speeds in their systems, their applications can still leverage the advantages of tested RDMA technology.

The RoCE Interoperability List is created twice a year following bi-annual IBTA-sponsored Plugfests, which take place at the University of New Hampshire InterOperability Lab (UNH-IOL). The IBTA Integrators’ Program, made up of both the InfiniBand Integrators’ List and the RoCE Interoperability List, is founded on rigorous testing procedures that establish compliance and real-world interoperability.

The InfiniBand Integrators’ List, which features InfiniBand Host Channel Adapters (HCAs), switches, SCSI RDMA Protocol (SRP) targets and cables, will be available soon via the IBTA Integrators’ List page. Additionally, mark your calendars for Plugfest 30 – October 17-29, 2016 at UNH-IOL. Registration information and event details will be available on the IBTA Plugfest page in the coming month.

Rupert Dance, IBTA CIWG

Incorporate Networking into Hyperconverged Integrated Systems to Gain a Market Advantage

August 22nd, 2016

The concept of hyperconverged integrated systems (HCIS) emerged as data centers considered new ways to increase resource utilization by reducing infrastructure inefficiencies and complexities. HCIS is primarily a software-defined platform that integrates compute, storage, and networking resources. The HCIS market is expected to grow 79 percent to reach almost $2 billion this year, driving it into mainstream use in the next five years, according to Gartner.

Since this market is growing so rapidly, Gartner released an exciting new report, “Use Networking to Differentiate Your Hyperconverged System.” In the report, Gartner advises HCIS vendors to focus on networking to gain competitive market advantage by integrating use-case-specific guidelines and case studies in go-to-market efforts.

According to the report, more than 10 percent of HCIS deployments will suffer from avoidable network-induced performance problems by 2018, up from less than one percent today. HCIS vendors can help address expected challenges and add value for buyers by considering high performance networking protocols, such as InfiniBand and RDMA over Converged Ethernet (RoCE), during the system design stage.

The growing scale of HCIS clusters creates challenges such as expanding workload coverage and diminishing competitive product differentiation. This will force HCIS vendors to alter their product lines and marketing efforts to help their offerings stand out from the rest. Integrating the right networking capabilities will become even more important as a growing number of providers look to differentiate their products. The Gartner report states that by 2018, 60 percent of providers will start to offer integration of networking services, together with compute and storage services, inside of their HCIS products.

Until recently, HCIS vendors have often treated networking simply as a “dumb” interconnect. However, when clusters grow beyond a handful of nodes and higher workloads are introduced, issues begin to arise. This Gartner report stresses that treating the network as “fat dumb pipes” will make it harder to troubleshoot application performance problems from an end-to-end perspective. The report also determines that optimizing the entire communications stack is key to driving latency down and it names InfiniBand and RoCE as important protocols to implement for input/output (I/O)-intensive workloads.

As competition in the HCIS market continues to grow, vendors must change their perception of networking and begin to focus on how to integrate it in order to keep a competitive edge. To learn more about how HCIS professionals can achieve this market advantage, download the full report from the InfiniBand Reports page.

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and is used herein with permission. All rights reserved.

Bill Lee

Dive into RDMA’s Impact on NVMe Devices at the 2016 Flash Memory Summit

August 5th, 2016


Next week, storage experts will gather at the 2016 Flash Memory Summit (FMS) in Santa Clara, CA, to discuss the current state of flash memory applications and how these technologies are enabling new designs for many products in the consumer and enterprise markets. This year’s program will include three days packed with sessions, tutorials and forums on a variety of flash storage trends, including new architectures, systems and standards.

NVMe technology, and its impact on enterprise flash applications, is among the major topics that will be discussed at the show. The growing industry demand to unlock flash storage’s full potential by leveraging high performance networking has led the NVMe community to develop a new standard for fabrics. NVMe over Fabrics (NVMe/F) allows flash storage devices to communicate over RDMA fabrics, such as InfiniBand and RDMA over Converged Ethernet (RoCE), thereby enabling all-flash arrays to overcome existing performance bottlenecks.

Attending FMS 2016?

If you’re attending FMS 2016 and are interested in learning more about the importance of RDMA fabrics for NVMe/F solutions, I recommend the following two educational sessions:

NVMe over Fabrics Panel – Which Transport Is Best?
Tuesday, August 9, 2016 (9:45-10:50 a.m.)

Representatives from the IBTA will join a panel to discuss the value of RDMA interconnects for the NVMe/F standard. Attendees can expect to receive an overview of each RDMA fabric and the benefits they bring to specific applications and workloads. Additionally, the session will cover the promise that NVMe/F has for unleashing the potential performance of NVMe drives via mainstream high performance interconnects.

Beer, Pizza and Chat with the Experts
Tuesday, August 9, 2016 (7-8:30 p.m.)

This informal event encourages attendees to “sit and talk shop” with experts about a diverse set of storage and networking topics. As IBTA’s Marketing Work Group Co-Chair, I will be hosting a table focused on RDMA interconnects. I’d love to meet with you to answer questions about InfiniBand and RoCE and discuss the advantages they provide the flash storage industry.

Additionally, there will be various IBTA member companies exhibiting on the show floor, so stop by their booths to learn about the new InfiniBand and RoCE solutions:

• HPE (#600)
• Keysight Technologies (#810)
• Mellanox Technologies (#138)
• Tektronix (#641)
• University of New Hampshire InterOperability Lab (#719)

For more information on the FMS 2016 program and exhibitors, visit the event website.

Bill Lee