Archive

Archive for the ‘RoCE’ Category

IBTA to Feature Optimized Testing, Debugging Procedures Onsite at Plugfest 31

March 19th, 2017

The IBTA boasts one of the industry’s top compliance and interoperability programs, which provides device and cable vendors the opportunity to test their products for compliance with the InfiniBand architecture specification as well as interoperability with other InfiniBand and RoCE products. The IBTA Integrators’ List program produces two lists, the InfiniBand Integrators’ List and the RoCE Interoperability List, which are updated twice a year following bi-annual plugfests.

We’re pleased to announce that the results from Plugfest 29 are now available on the IBTA Integrators’ List webpage, while Plugfest 30 results will be made available in the coming weeks. These results are designed to support data center managers, CIOs and other IT decision makers with their planned deployment of InfiniBand and RoCE solutions in both small clusters and large-scale clusters of 1,000 nodes or more.

Changes for Plugfest 31

IBTA Plugfest 31, taking place April 17-28 at the University of New Hampshire Interoperability Lab, is just around the corner and we are excited to announce some significant updates to our testing processes and procedures. These changes originated from efforts at last year’s plugfests and will be fully implemented onsite for the first time at Plugfest 31.

Changes:

  1. We will no longer be testing QDR, but we are adding HDR (200 Gb/s) testing.
  2. Keysight VNA testing is now performed using a 32-port VNA, enabling testing of all 8 lanes.
  3. Software Forge (SFI) has developed all-new MATLAB code that allows real-time processing of the 32-port S-parameter files generated by the Keysight VNA. This allows us to test and post-process VNA results in less than 2 minutes per cable.
  4. Anritsu, Keysight and Software Forge have teamed up to bring hardware and software solutions that allow for real-time VNA and ATD testing. This allows direct vendor participation and validation during the Plugfest.

Benefits:

  1. Anritsu and Keysight bring the best leading-edge equipment to the Plugfest twice per year.
    1. See the Methods of Implementation for details.
  2. The IBTA also has access to SFI software that allows the Plugfest engineers to post-process the results in real time. Therefore, we are now able to do real-time, interactive testing and debugging while your engineers are at the Plugfest.
  3. We are offering a dedicated, guaranteed 5-hour time slot for each vendor to debug and review their test results. Additional time will be available, but it will be allocated during the Plugfest after all vendors have received their initial 5 hours. See the registration form to choose your time slot.
  4. Arbitration will occur during the Plugfest rather than afterwards, because we only have access to the EDR and HDR test equipment at the bi-annual IBTA Plugfests.
  5. Results from the IBTA Plugfest will now be available much more quickly, since the post-processing time has been reduced so dramatically.
  6. We strongly encourage vendors to send engineers to this event so that you can compare your results with ours and do any necessary debugging and validation. This interactive debugging and testing opportunity is the best in any of the high-speed industries and is provided to you as part of your IBTA membership. Please take advantage of it.
  7. We will be providing both InfiniBand and RoCE Interoperability testing at PF31.

Interested in attending IBTA Plugfest 31? Registration can be completed on the IBTA Plugfest page. The March 20 registration deadline is fast approaching, so don’t delay!

Rupert Dance, IBTA CIWG

InfiniBand and RoCE to Make Their Mark at OFA Workshop 2017

March 16th, 2017

The OpenFabrics Alliance (OFA) workshop is an annual event devoted to advancing the state of the art in networking. The workshop is known for showcasing a broad range of topics all related to network technology and deployment through an interactive, community-driven event. The comprehensive event includes a rich program made up of more than 50 sessions covering a variety of critical networking topics, which range from current deployments of RDMA to new and advanced network technologies.

To view the full list of abstracts, visit the OFA Workshop 2017 Abstracts and Agenda page.

This year’s workshop program will also feature some notable sessions that showcase the latest developments in InfiniBand and RoCE technology. Below is the collection of OFA Workshop 2017 sessions that we recommend you check out:

Developer Experiences of the First Paravirtual RDMA Provider and Other RDMA Updates
Presented by Adit Ranadive, VMware

VMware’s Paravirtual RDMA (PVRDMA) device is a new NIC in vSphere 6.5 that allows VMs in a cluster to communicate using Remote Direct Memory Access (RDMA), while maintaining latencies and bandwidth close to that of physical hardware. Recently, the PVRDMA driver was accepted as part of the Linux 4.10 kernel and our user-library was added as part of the new rdma-core package.

In this session, we will provide a brief overview of our PVRDMA design and capabilities. Next, we will discuss our development approach and the challenges of joint device and driver development. Further, we will highlight our experience upstreaming the driver and library with the new changes to the core RDMA stack.

We will provide an update on the performance of the PVRDMA device along with upcoming updates to the device capabilities. Finally, we will provide new results on the performance achieved by several HPC applications using VM DirectPath I/O.

This session seeks to engage the audience in discussions on: 1) new RDMA provider development and acceptance, and 2) hardware support for RDMA virtualization.
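
The abstract emphasizes that a PVRDMA vHCA is presented to the guest as an ordinary verbs device. As a rough, illustrative sketch (not code from the talk), the snippet below uses the standard libibverbs API to enumerate whatever RDMA devices are visible; inside a VM with the PVRDMA driver and the rdma-core user library installed, the paravirtual device would be expected to show up in this list just like a physical HCA.

    /*
     * Illustrative sketch only (not from the talk): enumerate RDMA devices
     * with the standard libibverbs API. Inside a guest with the PVRDMA
     * driver loaded, the paravirtual vHCA appears here like any other HCA.
     * Build with: gcc list_rdma_devices.c -libverbs
     */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs) {
            perror("ibv_get_device_list");
            return 1;
        }

        for (int i = 0; i < num; i++) {
            struct ibv_context *ctx = ibv_open_device(devs[i]);
            if (!ctx)
                continue;

            struct ibv_device_attr attr;
            if (!ibv_query_device(ctx, &attr))
                printf("%s: max_qp=%d max_cq=%d\n",
                       ibv_get_device_name(devs[i]), attr.max_qp, attr.max_cq);

            ibv_close_device(ctx);
        }

        ibv_free_device_list(devs);
        return 0;
    }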

Experiences with NVMe over Fabrics
Presented by Parav Pandit, Mellanox

NVMe is an interface specification for accessing non-volatile storage media over PCIe buses. The interface enables software to interact with devices using multiple, asynchronous submission and completion queues, which reside in memory. Consequently, software may leverage the inherent parallelism and low latency of modern NVM devices with minimal overhead. Recently, the NVMe specification has been extended to support remote access over fabrics, such as RDMA and Fibre Channel. Using RDMA, NVMe over Fabrics (NVMe-oF) provides the high-bandwidth and low-latency characteristics of NVMe to remote devices. Moreover, these performance traits are delivered with negligible CPU overhead, as the bulk of the data transfer is conducted by RDMA.

In this session, we present an overview of NVMe-oF and its implementation in Linux. We point out the main design choices and evaluate NVMe-oF performance for both InfiniBand and RoCE fabrics.
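
As a point of reference for the queueing model described above, here is a simplified, illustrative C sketch of the two structures involved: a 64-byte submission queue entry and a 16-byte completion queue entry, both of which live in ordinary host memory. The field names are ours and the layout is abridged; the NVMe specification remains the authoritative reference. With NVMe-oF, these same command capsules are carried over the RDMA fabric rather than fetched across PCIe.

    /*
     * Simplified, illustrative layout of the in-memory queue entries described
     * above (field names are ours; see the NVMe specification for the exact
     * definition). A command occupies a 64-byte submission queue entry and its
     * completion arrives as a 16-byte completion queue entry.
     */
    #include <stdint.h>

    struct nvme_sqe {                 /* 64-byte submission queue entry */
        uint8_t  opcode;              /* command opcode (read, write, ...) */
        uint8_t  flags;               /* fused-operation / data-pointer select */
        uint16_t command_id;          /* echoed back in the completion */
        uint32_t nsid;                /* namespace identifier */
        uint64_t reserved;
        uint64_t metadata_ptr;
        uint64_t prp1;                /* data pointers (PRP entries or SGL) */
        uint64_t prp2;
        uint32_t cdw10_15[6];         /* command-specific dwords */
    };

    struct nvme_cqe {                 /* 16-byte completion queue entry */
        uint32_t result;              /* command-specific result */
        uint32_t reserved;
        uint16_t sq_head;             /* how far the device has consumed the SQ */
        uint16_t sq_id;
        uint16_t command_id;          /* matches the submitted command */
        uint16_t status;              /* phase tag plus status field */
    };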

Validating RoCEv2 for Production Deployment in the Cloud Datacenter
Presented by Sowmini Varadhan, Oracle

With the increasing prevalence of Ethernet switches and NICs in data center networks, we have been experimenting with the deployment of RDMA over Converged Ethernet (RoCE) in our DCN. RDMA needs a lossless transport and, in theory, this can be achieved on Ethernet by using Priority-based Flow Control (PFC, IEEE 802.1Qbb) and ECN (IETF RFC 3168).

We describe our experiences in trying to deploy these protocols in a RoCEv2 testbed running at 100 Gbit/s over a multi-level Clos network.

In addition to addressing the documented limitations around PFC/ECN (livelock, pause-frame-storm, memory requirements for supporting multiple priority flows), we also hope to share some of the performance metrics gathered, as well as some feedback on ways to improve the tooling for observability and diagnosability of the system in a vendor-agnostic, interoperable way.

Host Based InfiniBand Network Fabric Monitoring
Presented by Michael Aguilar, Sandia National Laboratories

Synchronized host based InfiniBand network counter monitoring of local connections at 1Hz can provide a reasonable system snapshot understanding of traffic injection/ejection into/from the fabric. This type of monitoring is currently used to enable understanding about the data flow characteristics of applications and inference about congestion based on application performance degradation. It cannot, however, enable identification of where congestion occurs or how well adaptive routing algorithms and policies react to and alleviate it. Without this critical information the fabric remains opaque and congestion management will continue to be largely handled through increases in bandwidth. To reduce fabric opacity, we have extended our host based monitoring to include internal InfiniBand fabric network ports. In this presentation we describe our methodology along with preliminary timing and overhead information. Limitations and their sources are discussed along with proposed solutions, optimizations, and planned future work.
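
For readers unfamiliar with where such counters come from, the following rough sketch (not the Sandia monitoring tool itself) samples the standard per-port counters that Linux exposes under /sys/class/infiniband once per second, which is the kind of 1Hz host-side sampling the abstract refers to. The device name and port number are placeholders.

    /*
     * Rough sketch, not the monitoring system described in the talk: sample
     * the per-port data counters Linux exposes under sysfs once per second.
     * "mlx4_0" and port 1 are placeholders for whatever HCA is installed.
     */
    #include <stdio.h>
    #include <unistd.h>

    static unsigned long long read_counter(const char *path)
    {
        unsigned long long v = 0;
        FILE *f = fopen(path, "r");
        if (f) {
            fscanf(f, "%llu", &v);
            fclose(f);
        }
        return v;
    }

    int main(void)
    {
        /* PortXmitData/PortRcvData are counted in units of 4 octets. */
        const char *tx = "/sys/class/infiniband/mlx4_0/ports/1/counters/port_xmit_data";
        const char *rx = "/sys/class/infiniband/mlx4_0/ports/1/counters/port_rcv_data";

        unsigned long long prev_tx = read_counter(tx), prev_rx = read_counter(rx);
        for (;;) {
            sleep(1);
            unsigned long long cur_tx = read_counter(tx), cur_rx = read_counter(rx);
            printf("tx %llu MB/s, rx %llu MB/s\n",
                   (cur_tx - prev_tx) * 4 / (1024 * 1024),
                   (cur_rx - prev_rx) * 4 / (1024 * 1024));
            prev_tx = cur_tx;
            prev_rx = cur_rx;
        }
    }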

IBTA TWG - Recent Topics in the IBTA, and a Look Ahead
Presented by Bill Magro, Intel on behalf of InfiniBand Trade Association

This talk discusses some recent activities in the IBTA including recent specification updates. It also provides a glimpse into the future for the IBTA.

InfiniBand Virtualization
Presented by Liran Liss, Mellanox on behalf of InfiniBand Trade Association

InfiniBand Virtualization allows a single Channel Adapter to present multiple transport endpoints that share the same physical port. To software, these endpoints are exposed as independent Virtual HCAs (VHCAs), and thus may be assigned to different software entities, such as VMs. VHCAs are visible to Subnet Management, and are managed just like physical HCAs. This session provides an overview of the InfiniBand Virtualization Annex, which was released in November 2016. We will cover the Virtualization model, management, and addressing modes, and discuss deployment considerations.

IPoIB Acceleration
Presented by Tzahi Oved, Mellanox

The IPoIB protocol encapsulates IP packets over InfiniBand datagrams. As a direct RDMA Upper Layer Protocol (ULP), IPoIB cannot support HW features that are specific to the IP protocol stack. Nevertheless, RDMA interfaces have been extended to support some of the prominent IP offload features, such as TCP/UDP checksum and TSO. This provided reasonable performance for IPoIB.

However, new network interface features are one of the most active areas of the Linux kernel. Examples include TSS and RSS, tunneling offloads, and XDP. In addition, the basic IP offload features are insufficient to cope with the increasing network bandwidth. Rather than continuously porting IP network interface developments into the RDMA stack, we propose adding abstract network data-path interfaces to RDMA devices.

In order to present a consistent interface to users, the IPoIB ULP continues to represent the network device to the IP stack. The common code also manages the IPoIB control plane, such as resolving path queries and registering to multicast groups. Data-path operations are forwarded to devices that implement the new API, or fall back to the standard implementation otherwise. Using the foregoing approach, we show how IPoIB closes the performance gap compared to state-of-the-art Ethernet network interfaces.

Packet Processing Verbs for Ethernet and IPoIB
Presented by Tzahi Oved, Mellanox

As a prominent user-level networking API, the RDMA stack has been extended to support packet processing applications and user-level TCP/IP stacks, initially focusing on Ethernet. This allowed delivering low latency and high message-rate to these applications.

In this talk, we provide an extensive introduction to both current and upcoming packet processing Verbs, such as checksum offloads, TSO, flow steering, and RSS. Next, we describe how these capabilities may also be applied to IPoIB traffic.

In contrast to Ethernet support, which was based on Raw Ethernet QPs that receive unmodified packets from the wire, IPoIB packets are sent over a “virtual wire”, managed by the kernel. Thus, processing selective IP flows from user-space requires coordination with the IPoIB interface.
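
To make the "Raw Ethernet QP" notion concrete, here is a minimal, illustrative verbs sketch (not code from the talk) that creates a QP of type IBV_QPT_RAW_PACKET, the object through which unmodified Ethernet frames are sent and received. Steering selected flows to such a QP, or spreading them across several QPs with RSS, is configured separately (for example with ibv_create_flow).

    /*
     * Minimal illustrative sketch (not code from the talk): create a Raw
     * Packet QP through which unmodified Ethernet frames are sent and
     * received. 'ctx' is assumed to be an already-opened device context,
     * and CAP_NET_RAW is normally required. Error handling is abridged.
     */
    #include <infiniband/verbs.h>

    struct ibv_qp *create_raw_packet_qp(struct ibv_context *ctx)
    {
        struct ibv_pd *pd = ibv_alloc_pd(ctx);
        struct ibv_cq *cq = ibv_create_cq(ctx, 256, NULL, NULL, 0);
        if (!pd || !cq)
            return NULL;

        struct ibv_qp_init_attr attr = {
            .send_cq = cq,
            .recv_cq = cq,
            .cap = {
                .max_send_wr  = 256,
                .max_recv_wr  = 256,
                .max_send_sge = 1,
                .max_recv_sge = 1,
            },
            .qp_type = IBV_QPT_RAW_PACKET,  /* raw Ethernet frames, no IB headers */
        };
        return ibv_create_qp(pd, &attr);
    }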

The Linux SoftRoCE Driver
Presented by Liran Liss, Mellanox

SoftRoCE is a software implementation of the RDMA transport protocol over Ethernet. It allows any host to conduct RDMA traffic without requiring a RoCE-capable NIC, enabling RDMA development anywhere.

This session presents the Linux SoftRoCE driver, RXE, which was recently accepted to the 4.9 kernel. In addition, the RXE user-level driver is now part of rdma-core, the consolidated RDMA user-space codebase. RXE is fully interoperable with HW RoCE devices, and may be used for both testing and production. We provide an overview of the RXE driver, detail its configuration, and discuss the current status and remaining challenges in RXE development.

Ubiquitous RoCE
Presented by Alex Shpiner, Mellanox

In recent years, the usage of RDMA in datacenter networks has increased significantly, with RoCE (RDMA over Converged Ethernet) emerging as the canonical approach to deploying RDMA in Ethernet-based datacenters.

Initially, RoCE required a lossless fabric for optimal performance. This is typically achieved by enabling Priority Flow Control (PFC) on Ethernet NICs and switches. The RoCEv2 specification introduced RoCE congestion control, which allows throttling transmission rate in response to congestion. Consequently, packet loss may be minimized and performance is maintained even if the underlying Ethernet network is lossy.

In this talk, we discuss the details of the latest developments in RoCE congestion control. Hardware congestion control reduces the latency of the congestion control loop; it reacts promptly in the face of congestion by throttling the transmission rate quickly and accurately; when congestion is relieved, bandwidth is immediately recovered. The short control loop also prevents network buffers from overfilling in many congestion scenarios.

In addition, fast hardware retransmission complements congestion control in heavy congestion scenarios, by significantly reducing the penalty of packet drops.
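
As a purely illustrative aid, and not the algorithm defined by the RoCEv2 congestion control annex, the toy sketch below captures the reaction pattern the abstract describes: cut the transmission rate quickly when a congestion notification arrives, then recover bandwidth while the path stays uncongested.

    /*
     * Toy illustration only; this is not the algorithm specified in the
     * RoCEv2 congestion control annex. It merely sketches the behavior the
     * abstract describes: back off quickly when a congestion notification
     * (e.g. a CNP triggered by ECN marking) arrives, and recover bandwidth
     * while the path remains uncongested. The constants are arbitrary.
     */
    #include <stdbool.h>

    #define LINE_RATE_MBPS 100000.0      /* assume a 100 GbE link */

    static double rate_mbps = LINE_RATE_MBPS;

    void rate_update(bool congestion_notified)
    {
        if (congestion_notified)
            rate_mbps *= 0.5;                      /* throttle promptly */
        else
            rate_mbps += 0.05 * LINE_RATE_MBPS;    /* ramp back up */

        if (rate_mbps > LINE_RATE_MBPS)
            rate_mbps = LINE_RATE_MBPS;
    }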

Keep an eye out as videos of the OFA Workshop 2017 sessions will be published on both the OFA website and insideHPC. Interested in attending? Registration for the 13th Annual OFA Workshop will be available online and onsite up until the opening day of the event, March 27. Visit the OFA Workshop 2017 Registration page for more information.

Bill Lee

New RoCE Interoperability List Features Higher Test Speeds, Additional Vendors

August 24th, 2016

It’s that time again! Having finalized the RDMA over Converged Ethernet (RoCE) test results from Plugfest 29, the IBTA is pleased to announce the availability of the new RoCE Interoperability List. Designed to support data center managers, CIOs and other IT decision makers with their planned RoCE deployments for enterprise and high performance computing, the latest edition features a growing number of cable and equipment vendors and Ethernet test speeds.

In April 2016, Plugfest 29 saw nine member companies submit RoCE-capable RNICs, switches, and QSFP, QSFP28 and SFP28 cables for interoperability testing. This is an encouraging sign for the RoCE ecosystem as more and more vendors begin to offer solutions that are proven to work seamlessly with each other, regardless of brand. Furthermore, the new list now features 50 and 100 GbE test scenarios, which complement the IBTA’s existing 10, 25 and 40 GbE interoperability testing. This expansion gives RoCE deployers confidence that as they integrate faster Ethernet speeds into their systems, their applications can still leverage the advantages of tested RDMA technology.

The RoCE Interoperability List is created twice a year following bi-annual IBTA-sponsored Plugfests, which take place at the University of New Hampshire InterOperability Lab (UNH-IOL). The IBTA Integrators’ Program, made up of both the InfiniBand Integrators’ List and the RoCE Interoperability List, is founded on rigorous testing procedures that establish compliance and real-world interoperability.

The InfiniBand Integrators’ List, which features InfiniBand Host Channel Adapters (HCAs), switches, SCSI Remote Protocol (SRP) targets and cables, will be available soon via the IBTA Integrators’ List page. Additionally, mark your calendars for Plugfest 30 – October 17-29, 2016 at UNH-IOL. Registration information and event details will be available on the IBTA Plugfest page in the coming month.

Rupert Dance, IBTA CIWG

Incorporate Networking into Hyperconverged Integrated Systems to Gain a Market Advantage

August 22nd, 2016

The concept of hyperconverged integrated systems (HCIS) emerged as data centers considered new ways to increase resource utilization by reducing infrastructure inefficiencies and complexities. HCIS is primarily a software-defined platform that integrates compute, storage and networking resources. The HCIS market is expected to grow 79 percent to reach almost $2 billion this year, driving it into mainstream use in the next five years, according to Gartner.

Since this market is growing so rapidly, Gartner released an exciting new report, “Use Networking to Differentiate Your Hyperconverged System.” In the report, Gartner advises HCIS vendors to focus on networking to gain competitive market advantage by integrating use-case-specific guidelines and case studies in go-to-market efforts.

According to the report, more than 10 percent of HCIS deployments will suffer from avoidable network-induced performance problems by 2018, up from less than one percent today. HCIS vendors can help address expected challenges and add value for buyers by considering high performance networking protocols, such as InfiniBand and RDMA over Converged Ethernet (RoCE), during the system design stage.

The growing scale of HCIS clusters creates challenges such as expanding workload coverage and diminishing competitive product differentiation. This will force HCIS vendors to alter their product lines and marketing efforts to help their offerings stand out from the rest. Integrating the right networking capabilities will become even more important as a growing number of providers look to differentiate their products. The Gartner report states that by 2018, 60 percent of providers will start to offer integration of networking services, together with compute and storage services, inside of their HCIS products.

Until recently, HCIS vendors have often treated networking simply as a “dumb” interconnect. However, when clusters grow beyond a handful of nodes and higher workloads are introduced, issues begin to arise. This Gartner report stresses that treating the network as “fat dumb pipes” will make it harder to troubleshoot application performance problems from an end-to-end perspective. The report also determines that optimizing the entire communications stack is key to driving latency down and it names InfiniBand and RoCE as important protocols to implement for input/output (I/O)-intensive workloads.

As competition in the HCIS market continues to grow, vendors must change their perception of networking and begin to focus on how to integrate it in order to keep a competitive edge. To learn more about how HCIS professionals can achieve this market advantage, download the full report from the InfiniBand Reports page.

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and is used herein with permission. All rights reserved.

Bill Lee

Dive into RDMA’s Impact on NVMe Devices at the 2016 Flash Memory Summit

August 5th, 2016

Next week, storage experts will gather at the 2016 Flash Memory Summit (FMS) in Santa Clara, CA, to discuss the current state of flash memory applications and how these technologies are enabling new designs for many products in the consumer and enterprise markets. This year’s program will include three days packed with sessions, tutorials and forums on a variety of flash storage trends, including new architectures, systems and standards.

NVMe technology, and its impact on enterprise flash applications, is among the major topics that will be discussed at the show. The growing industry demand to unlock flash storage’s full potential by leveraging high performance networking has led the NVMe community to develop a new standard for fabrics. NVMe over Fabrics (NVMe/F) allows flash storage devices to communicate over RDMA fabrics, such as InfiniBand and RDMA over Converged Ethernet (RoCE), thereby enabling all-flash arrays to overcome existing performance bottlenecks.

Attending FMS 2016?

If you’re attending FMS 2016 and are interested in learning more about the importance of RDMA fabrics for NVMe/F solutions, I recommend the following two educational sessions:

NVMe over Fabrics Panel – Which Transport Is Best?
Tuesday, August 9, 2016 (9:45-10:50 a.m.)

Representatives from the IBTA will join a panel to discuss the value of RDMA interconnects for the NVMe/F standard. Attendees can expect to receive an overview of each RDMA fabric and the benefits they bring to specific applications and workloads. Additionally, the session will cover the promise that NVMe/F has for unleashing the potential performance of NVMe drives via mainstream high performance interconnects.

Beer, Pizza and Chat with the Experts
Tuesday, August 9, 2016 (7-8:30 p.m.)

This informal event encourages attendees to “sit and talk shop” with experts about a diverse set of storage and networking topics. As IBTA’s Marketing Work Group Co-Chair, I will be hosting a table focused on RDMA interconnects. I’d love to meet with you to answer questions about InfiniBand and RoCE and discuss the advantages they provide the flash storage industry.

Additionally, there will be various IBTA member companies exhibiting on the show floor, so stop by their booths to learn about the new InfiniBand and RoCE solutions:

· HPE (#600)

· Keysight Technologies (#810)

· Mellanox Technologies (#138)

· Tektronix (#641)

· University of New Hampshire InterOperability Lab (#719)

For more information on the FMS 2016 program and exhibitors, visit the event website.

Bill Lee

Plugfest 28 Results Highlight Expanding InfiniBand EDR 100 Gb/s & RoCE Ecosystems

March 21st, 2016

We are excited to announce the availability of our latest InfiniBand Integrators’ List and RoCE Interoperability List. The two lists make up the backbone of our Integrators’ Program and are designed to support data center managers, CIOs and other IT decision makers with their planned InfiniBand and RoCE deployments for enterprise and high performance computing systems. To keep data up to date and as useful as possible, both documents are refreshed twice a year following our bi-annual plugfests, which are held at the University of New Hampshire InterOperability Lab (UNH-IOL).

Having recently finalized the results from Plugfest 28, we can report a significant increase in InfiniBand EDR 100 Gb/s submissions compared to the last Integrators’ List. This trend demonstrates a continued industry demand for InfiniBand-based systems that are capable of higher bandwidth and faster performance. The updated list features a variety of InfiniBand devices, including Host Channel Adapters (HCAs), Switches, SCSI Remote Protocol (SRP) targets and cables (QDR, FDR and EDR).

Additionally, we held our second RoCE interoperability event at Plugfest 28, testing 10, 25 and 40 GbE RNICs, Switches and SFP+, SFP28, QSFP and QSFP28 cables. Although a full spec compliance program is still under development for RoCE, the existing interoperability testing offers solid insight into the ecosystem’s robustness and viability. We plan to continue our work creating a comprehensive RoCE compliance program at Plugfest 29. RoCE testing at Plugfest 29 will include testing of more than 16 different 10, 25, 40, 50 and 100 GbE RNICs and Switches along with all of the various cables to support these devices. Plugfest 29 testing of RoCE products, which use Ethernet physical and link layers, will be the most comprehensive interoperability testing ever performed.

As always, we’d like to thank the leading vendors that contributed test equipment to IBTA Plugfest 28. These invaluable members include Anritsu, Keysight Technologies, Matlab, Molex, Tektronix, Total Phase and Wilder Technologies.

The next opportunity for members to test InfiniBand and RoCE products is Plugfest 29, scheduled for April 4-15, 2016 at UNH-IOL. Event details and registration information are available here.

Rupert Dance, IBTA CIWG

IBTA Wants You – Guide the Future of InfiniBand, RoCE and Performance-Driven Data Centers

January 25th, 2016

For any organization, the New Year provides a great opportunity to reflect on the past and set a healthier course for the future. Companies can take a variety of internal actions to prepare for impending market changes, but rarely do they have the power to influence the course of an entire industry on their own. For those devoted to improving clustered server and data center performance, joining an industry alliance such as the InfiniBand Trade Association (IBTA) offers a chance to contribute to the foundational work that sets the path for technological advances one, five and ten years into the future.

The IBTA is the organization that maintains and furthers the InfiniBand specification, which is used by cloud service providers and high-performing enterprise data centers and is the interconnect of choice for the world’s fastest supercomputers. Additionally, the IBTA defines the specification for RDMA over Converged Ethernet (RoCE), which leverages the advantages of RDMA technology for Ethernet-based environments.

Leading enterprise IT vendors and HPC research facilities make up the coalition of more than 50 members that all have a shared interest in the advancement of InfiniBand and/or RoCE technology. Each member company contributes specialized expertise to IBTA’s various technical working groups, which shape and guide the progression of InfiniBand and RoCE capabilities.

Joining the IBTA comes with a variety of membership benefits, including:

Access to:

  • The InfiniBand and RoCE architecture specifications as they are being developed
  • Meeting minutes and notices of proposed and actual changes to IBTA-controlled documents
  • IBTA-sponsored Compliance and Interoperability Plugfests and workshops

Participation in:

  • The maintenance of the InfiniBand Roadmap, which defines future speeds and lane widths for InfiniBand-based technologies
  • IBTA-sponsored activities at tradeshows, including the annual Supercomputing Conference in November
  • IBTA speaking and demo opportunities

Opportunity to:

  • Influence and contribute to the ongoing development and maintenance of the InfiniBand and RoCE architecture specifications
  • Add approved products to the IBTA Integrators’ List, which provides a centralized listing of products that have passed a suite of compliance and interoperability testing
  • Post InfiniBand and RoCE related whitepapers, webinars, podcasts and press releases on the IBTA and RoCE Initiative web sites
  • Submit and obtain access to information regarding licensing policies posted by member patent holders on specific InfiniBand and RoCE architecture specifications
  • Network with the world’s foremost developers of InfiniBand and RoCE hardware and software

Make 2016 the year your company defines the future of the HPC industry! Visit our Membership page to learn how to join or contact membership@infinibandta.org for more information.

Bill Lee

Changes to the Modern Data Center – Recap from SDC 15

October 19th, 2015

The InfiniBand Trade Association recently had the opportunity to speak on RDMA technology at the 2015 Storage Developer Conference. For the first time, SDC15 introduced Pre-conference Primer Sessions covering topics such as Persistent Memory, Cloud and Interop, and Data Center Infrastructure. Intel’s David Cohen, System Architect, and Brian Hausauer, Hardware Architect, spoke on behalf of the IBTA in a pre-conference session, discussing “Nonvolatile Memory (NVM), four trends in the modern data center and implications for the design of next generation distributed storage systems.”

Below is a high level overview of their presentation:

The modern data center continues to transform as applications and uses change and develop. Most recently, we have seen users abandon traditional storage architectures for the cloud. Cloud storage is founded on data-center-wide connectivity and scale-out storage, which delivers significant increases in capacity and performance, enabling application deployment anytime, anywhere. Additionally, job scheduling and system balance capabilities are boosting overall efficiency and optimizing a variety of essential data center functions.

Trends in the modern data center are appearing as cloud architecture takes hold. First, the performance of network bandwidth and storage media is growing rapidly. Furthermore, operating system vendors (OSV) are optimizing the code path of their network and storage stacks. All of these speed and efficiency gains to network bandwidth and storage are occurring while single processor/core performance remains relatively flat.

Data comes in a variety of flavors; some of it is accessed frequently for application I/O requests, while other data is rarely retrieved. To enable higher performance and resource efficiency, cloud storage uses a tiering model that places data according to how often it is accessed. Data that is regularly accessed is stored on expensive, high performance media (solid-state drives), while data that is rarely or never retrieved is relegated to less expensive media with the lowest $/GB (rotational drives). This model follows a Hot, Warm and Cold data pattern and allows faster access to the data you use most.

The growth of high performance storage media is driving the need for innovation in the network, primarily addressing application latency. This is where Remote Direct Memory Access (RDMA) comes into play. RDMA is an advanced, reliable transport protocol that enhances the efficiency of workload processing. Essentially, it increases data center application performance by offloading the movement of data from the CPU. This lowers overhead and allows the CPU to focus its processing power on running applications, which in turn reduces latency.
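
To illustrate what "offloading the movement of data from the CPU" looks like in practice, here is a hedged libibverbs sketch (the queue pair, memory registration, and exchange of the peer's address and rkey are assumed to have been completed already) that posts a one-sided RDMA WRITE, placing a local buffer directly into a remote buffer without either CPU copying the data.

    /*
     * Hedged sketch of the one-sided operation behind that claim: posting an
     * RDMA WRITE with libibverbs places a local buffer directly into a remote
     * buffer with no copy performed by either CPU. The queue pair, memory
     * registration, and exchange of the peer's address and rkey are assumed
     * to have been completed beforehand.
     */
    #include <stdint.h>
    #include <stddef.h>
    #include <infiniband/verbs.h>

    int rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                   void *local_buf, size_t len,
                   uint64_t remote_addr, uint32_t rkey)
    {
        struct ibv_sge sge = {
            .addr   = (uintptr_t)local_buf,
            .length = (uint32_t)len,
            .lkey   = mr->lkey,
        };
        struct ibv_send_wr wr = {
            .wr_id      = 1,
            .sg_list    = &sge,
            .num_sge    = 1,
            .opcode     = IBV_WR_RDMA_WRITE,   /* one-sided: remote CPU not involved */
            .send_flags = IBV_SEND_SIGNALED,   /* request a completion entry */
            .wr.rdma = {
                .remote_addr = remote_addr,
                .rkey        = rkey,
            },
        };
        struct ibv_send_wr *bad_wr = NULL;

        return ibv_post_send(qp, &wr, &bad_wr); /* 0 on success */
    }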

As demand for cloud storage increases, the need for RDMA and high performance storage networking grows as well. With this in mind, the InfiniBand Trade Association is continuing its work developing the RDMA architecture for InfiniBand and Ethernet (via RDMA over Converged Ethernet, or RoCE) topologies.

Bill Lee

IBTA Launches the RoCE Initiative: Industry Ecosystem to Drive Adoption of RDMA over Converged Ethernet

June 23rd, 2015

At IBTA, we are pleased to announce the launch of the RoCE Initiative, a new effort to highlight the many benefits of RDMA over Converged Ethernet (RoCE) and to facilitate the technology’s adoption in enterprise data centers. With the rise of server virtualization and big data analytics, data center architects are demanding innovative ways to improve overall network performance and to accelerate applications without breaking the bank in the process.

Remote Direct Memory Access (RDMA) is well known in the InfiniBand community as a proven technology that boosts data center efficiency and performance by allowing the transport of data from storage to server with less CPU overhead. RDMA technology achieves faster speeds and lower latency by offloading data movement from the CPU, resulting in more efficient execution of applications and data transfers.

Before RoCE, the advantages of RDMA were only available over InfiniBand fabrics. This left system engineers that leverage Ethernet infrastructure with only the most expensive options for increasing system performance (i.e. adding more servers or buying faster CPUs). Now, data center architects can upgrade their application performance while leveraging existing infrastructure. There is already tremendous ecosystem support for RoCE; it is supported by server and storage OEMs, adapter and switch vendors, and all major operating systems.

Through a new online resource, the RoCE Initiative will:

  • Enable CIOs, enterprise data center architects and solutions engineers to learn about improved application performance and data center productivity through training webinars, whitepapers and educational programs
  • Encourage the adoption and development of RoCE applications with case studies and solution briefs
  • Continue the development of specifications, benchmarking performance improvements and technical resources for current/future RoCE adopters

For additional information about the RoCE Initiative, check out www.RoCEInitiative.org or read the full announcement here.

Mike Jochimsen, co-chair of the Marketing Working Group (MWG) at IBTA

RoCE Benefits on Full Display at Ignite 2015

May 27th, 2015

On May 4-8, IT professionals and enterprise developers gathered in Chicago for the 2015 Microsoft Ignite conference. Attendees were given a first-hand glimpse at the future of a variety of Microsoft business solutions through a number of sessions, presentations and workshops.

Of particular note were two demonstrations of RDMA over Converged Ethernet (RoCE) technology and the resulting benefits for Windows Server 2016. In both demos, RoCE technology showed significant improvements over Ethernet implementations without RDMA in terms of throughput, latency and processor efficiency.

Below is a summary of each presentation featuring RoCE at Ignite 2015:

Platform Vision and Strategy (4 of 7): Storage Overview
This demonstration highlighted the extreme performance and scalability of Windows Server 2016 through RoCE-enabled servers populated with NVMe and SATA SSDs. It simulated application and user workloads using SMB3 servers with Mellanox ConnectX-4 100 GbE RDMA-enabled Ethernet adapters, Micron DRAM, and enterprise NVMe SSDs for performance and SATA SSDs for capacity.

During the presentation, the use of RoCE compared to TCP/IP showcased drastically different performance. With RDMA enabled, the SMB3 server was able to achieve about twice the throughput, half the latency and around 33 percent less CPU overhead than that attained by TCP/IP.

Check out the video to see the demonstration in action.

Enabling Private Cloud Storage Using Servers with Local Disks

Claus Joergensen, a principal program manager at Microsoft, demonstrated Windows Server 2016’s Storage Spaces Direct using Mellanox ConnectX-3 56 Gb/s RoCE adapters with Micron RAM and M500DC local SATA storage.

The goal of the demo was to highlight the value of running RoCE on a system as it related to performance, latency and processor utilization. The system was able to achieve a combined 680,000 4KB IOPS and 2ms latency when RoCE was disabled. With RoCE enabled, the system increased the 4KB IOPS to about 1.1 million and reduced the latency to 1ms. This translated to roughly a 60 percent increase in IOPS with RoCE enabled, all while utilizing the same amount of CPU resources.

For additional information, watch a recording of the presentation (demonstration starts at 57:00).

For more videos from Ignite 2015, visit Ignite On Demand.

Bill Lee