IBTA Launches the RoCE Initiative: Industry Ecosystem to Drive Adoption of RDMA over Converged Ethernet

June 23rd, 2015


At IBTA, we are pleased to announce the launch of the RoCE Initiative, a new effort to highlight the many benefits of RDMA over Converged Ethernet (RoCE) and to facilitate the technology’s adoption in enterprise data centers. With the rise of server virtualization and big data analytics, data center architects are demanding innovative ways to improve overall network performance and to accelerate applications without breaking the bank in the process.

Remote Direct Memory Access (RDMA) is well known in the InfiniBand community as a proven technology that boosts data center efficiency and performance by allowing the transport of data from storage to server with less CPU overhead. RDMA technology achieves faster speeds and lower latency by offloading data movement from the CPU, resulting in more efficient execution of applications and data transfers.

Before RoCE, the advantages of RDMA were only available over InfiniBand fabrics. This left system engineers who leverage Ethernet infrastructure with only the most expensive options for increasing system performance (i.e., adding more servers or buying faster CPUs). Now, data center architects can upgrade their application performance while leveraging existing infrastructure. There is already tremendous ecosystem support for RoCE; it is supported by server and storage OEMs, adapter and switch vendors, and all major operating systems.

Through a new online resource, the RoCE Initiative will:

  • Enable CIOs, enterprise data center architects and solutions engineers to learn about improved application performance and data center productivity through training webinars, whitepapers and educational programs
  • Encourage the adoption and development of RoCE applications with case studies and solution briefs
  • Continue the development of specifications, benchmarking performance improvements and technical resources for current/future RoCE adopters

For additional information about the RoCE Initiative, check out www.RoCEInitiative.org or read the full announcement here.

Mike Jochimsen, co-chair of the Marketing Working Group (MWG) at IBTA

To InfiniBand and Beyond – Supercomputing Support for NASA Missions

June 11th, 2015


High performance computing has been integral to solving large-scale problems across many industries, including science, engineering and business. Some of the most interesting use cases have come out of NASA, where supercomputing is essential to conduct accurate simulations and models for a variety of missions.

NASA’s flagship supercomputer, Pleiades, is among the world’s most powerful, currently ranking seventh in the United States and eleventh globally. It is housed at the NASA Advanced Supercomputing (NAS) facility in California and supports the agency’s work in aeronautics, Earth and space science and the future of space travel. At the heart of the system is InfiniBand technology, including DDR, QDR and FDR adapters and cabling.

The incremental expansion of Pleiades’ computing performance has been fundamental to its lasting success. Typically, a computer cluster is fully built from the outset and rarely expanded or upgraded during its lifetime. Built in 2008, Pleiades initially consisted of 64 server racks achieving 393 teraflops with a maximum link speed of 20Gb/s. Today, the supercomputer boasts 160 racks with a theoretical peak performance of 5.35 petaflops, or 5,350 teraflops, and a maximum link speed of 56Gb/s.

To further demonstrate the power of the InfiniBand-based Pleiades supercomputer, here are several fun facts to consider:

  • Today’s Pleiades supercomputer delivers more than 25 million times the computational power of the first Cray X-MP supercomputer at the NAS facility in 1984.
  • If every person in the world performed one calculation per second for eight hours a day, it would take humanity 1,592 days to match what Pleiades calculates in a single minute.
  • The NAS facility has the largest InfiniBand network in the world, with over 65 miles (104.6 km) of cable interconnecting its supercomputing systems and storage devices, the same distance as from the Earth’s surface to the part of the thermosphere where auroras are formed.
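The arithmetic behind those figures holds up to a quick back-of-the-envelope check. The sketch below assumes a world population of roughly 7 billion and a Cray X-MP peak of about 210 megaflops; both values are my assumptions for illustration, not figures from NASA:

```python
# Back-of-the-envelope check of the Pleiades fun facts above.
# Assumed values (not from the article): world population ~7.0 billion,
# Cray X-MP peak ~210 megaflops.

PLEIADES_FLOPS = 5.35e15   # 5.35 petaflops peak (from the article)
WORLD_POP = 7.0e9          # assumed 2015 world population
CRAY_XMP_FLOPS = 210e6     # assumed Cray X-MP peak

# One minute of Pleiades calculations:
ops_per_minute = PLEIADES_FLOPS * 60

# Humanity's output at 1 calculation/second, 8 hours/day:
human_ops_per_day = WORLD_POP * 1 * 8 * 3600

days = ops_per_minute / human_ops_per_day
ratio = PLEIADES_FLOPS / CRAY_XMP_FLOPS

print(f"Days for humanity to match one Pleiades minute: {days:,.0f}")
print(f"Pleiades vs. Cray X-MP: {ratio / 1e6:,.1f} million times faster")
```

Under those assumptions the script lands on roughly 1,592 days and about 25 million times the Cray X-MP's computational power, matching the figures above.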

For additional facts and impacts of NASA’s high-end computing capability, check out its website here: http://www.nas.nasa.gov/hecc/about/hecc_facts.html

Bill Lee

RoCE Benefits on Full Display at Ignite 2015

May 27th, 2015


On May 4-8, IT professionals and enterprise developers gathered in Chicago for the 2015 Microsoft Ignite conference. Attendees were given a first-hand glimpse at the future of a variety of Microsoft business solutions through a number of sessions, presentations and workshops.

Of particular note were two demonstrations of RDMA over Converged Ethernet (RoCE) technology and the resulting benefits for Windows Server 2016. In both demos, RoCE technology showed significant improvements over Ethernet implementations without RDMA in terms of throughput, latency and processor efficiency.

Below is a summary of each presentation featuring RoCE at Ignite 2015:

Platform Vision and Strategy (4 of 7): Storage Overview
This demonstration highlighted the extreme performance and scalability of Windows Server 2016 through RoCE-enabled servers populated with NVMe and SATA SSDs. It simulated application and user workloads using SMB3 servers with Mellanox ConnectX-4 100 GbE RDMA-enabled Ethernet adapters, Micron DRAM and enterprise NVMe SSDs for performance and SATA SSDs for capacity.

During the presentation, the use of RoCE compared to TCP/IP showcased drastically different performance. With RDMA enabled, the SMB3 server achieved roughly twice the throughput, half the latency and around 33 percent less CPU overhead than it attained over TCP/IP.

Check out the video to see the demonstration in action.

Enabling Private Cloud Storage Using Servers with Local Disks

Claus Joergensen, a principal program manager at Microsoft, demonstrated Windows Server 2016’s Storage Spaces Direct using Mellanox’s ConnectX-3 56Gb/s RoCE adapters with Micron RAM and M500DC local SATA storage.

The goal of the demo was to highlight the value of running RoCE on a system as it related to performance, latency and processor utilization. The system was able to achieve a combined 680,000 4KB IOPS and 2ms latency when RoCE was disabled. With RoCE enabled, the system increased the 4KB IOPS to about 1.1 million and reduced the latency to 1ms. This translated to roughly a 60 percent increase in IOPS, all while utilizing the same amount of CPU resources.

For additional information, watch a recording of the presentation (demonstration starts at 57:00).

For more videos from Ignite 2015, visit Ignite On Demand.

Bill Lee

InfiniBand Volume 1, Release 1.3 – The Industry Sounds Off

May 14th, 2015


On March 10, 2015, IBTA announced the availability of Release 1.3 of Volume 1 of the InfiniBand Architecture Specification and it’s creating a lot of buzz in the industry. IBTA members recognized that as compute clusters and data centers grew larger and more complex, the network equipment architecture would have difficulty keeping pace with the need for more processing power. With that in mind, the new release included improvements to scalability and management for both high performance computing and enterprise data centers.

Here’s a snapshot of what industry experts and media have said about the new specification:

“Release 1.3 of the Volume 1 InfiniBand Architecture Specification provides several improvements, including deeper visibility into switch hierarchy, improved diagnostics allowing for faster response times to connectivity problems, enhanced network statistics, and added counters for Enhanced Data Rate (EDR) to improve network management. These features will allow network administrators to more easily install, maintain, and optimize very large InfiniBand clusters.” - Kurt Yamamoto, Tom’s IT PRO

“It’s worth keeping up with [InfiniBand], as it clearly shows where the broader networking market is capable of going… Maybe geeky stuff, but it allows [InfiniBand] to keep up with “exascales” of data and lead the way large scale-out computer networking gets done. This is particularly important as the 1000 node clusters of today grow towards the 10,000 node clusters of tomorrow.” - Mike Matchett, Taneja Group, Inc.

“Indeed, a rising tide lifts all boats, and the InfiniBand community does not intend to get caught in the shallows of the Big Data surge. The InfiniBand Trade Association recently issued Release 1.3 of Volume I of the format’s reference architecture, designed to incorporate increased scalability, efficiency, availability and other functions that are becoming central to modern data infrastructure.” - Arthur Cole, Enterprise Networking Planet

“The InfiniBand Trade Association (IBTA) hopes to ward off the risk of an Ethernet invasion in the ranks of HPC users with a renewed focus on manageability and visibility. Such features have just appeared in release 1.3 of the Volume 1 standard. The IBTA’s Bill Lee told The Register that as HPC clusters grow, ‘you want to be able to see every level of switch interconnect, so you can identify choke-points and work around them.’” - Richard Chirgwin, The Register

To read more industry coverage of the new release, visit the InfiniBand in the News page.

For additional information about the InfiniBand specification, check out the InfiniBand specification FAQ or access the InfiniBand specification here.

Bill Lee

Accelerating Data Movement with RoCE

April 29th, 2015

On April 14-16, Ethernet designers and experts from around the globe gathered at the Ethernet Technology Summit 2015 to discuss developments happening within the industry as it pertained to the popular networking standard. IBTA’s Diego Crupnicoff, co-chair of the Technical Working Group, shared his expertise with attendees via a presentation on “Accelerating Data Movement with RDMA over Converged Ethernet (RoCE).” The session focused on the ever-growing complexity, bandwidth requirements and services of data centers and how RoCE can address the challenges that emerge from new enterprise data center initiatives.

Here is a brief synopsis of the points that Diego covered in his well-attended presentation:

People are living in an increasingly digital world. In the last decade, there’s been an explosion of connected devices running many applications and, in the process, creating massive amounts of data that must be accessible anytime, anywhere.


Over time, the data center has emerged as the workhorse of the networking industry, with the accelerating pace of the ‘information generation’ spawning many new data center initiatives, such as the cloud, virtualization and hyper-converged infrastructure. Expectations for enhanced accessibility to larger sets of data are straining enterprise data networks, bringing about a variety of new challenges to the industry, including the following needs:

  • Scale and Flexibility
  • Overlays & Shared Storage
  • Reduced Latency
  • Rapid Server-to-Server I/O
  • Big Storage, Large Clusters
  • New Scale-out Storage Traffic

The Transmission Control Protocol (TCP) has had difficulty keeping up with some traffic stemming from newer, more demanding applications. In these cases, packet processing over TCP saturates CPU resources, resulting in networks with low bandwidth, high latency and limited scalability. The industry was in need of a capability that would bypass the CPU altogether to enable faster, more efficient movement of data between servers.

The advent of Remote Direct Memory Access (RDMA) did just that, utilizing hardware offloads to move data faster with less CPU overhead. By offloading the I/O from the CPU, users of RDMA experience lower latency while freeing up the CPU to focus its resources on applications that process data as opposed to moving it.

Recently, RDMA expanded into the enterprise market and is now being widely adopted over Ethernet networks with RDMA over Converged Ethernet or RoCE. The RoCE standard acts as an efficient, lightweight transport that’s layered directly over Ethernet, bypassing the TCP/IP stack. It offers the lowest latency in the Ethernet industry, which enables faster application completion, better server utilization and higher scalability. Given these advantages, RoCE became the most widely deployed Ethernet RDMA standard, resulting in millions of RoCE-capable ports on the market today.

For additional details on the benefits of RDMA for Ethernet networks, including RoCE network considerations and use cases, view the presentation in its entirety here.

Mike Jochimsen, co-chair of the Marketing Working Group (MWG) at IBTA

IBTA Member Companies to Exhibit and Present at Ethernet Technology Summit 2015

April 14th, 2015


The Ethernet Technology Summit 2015 kicks off today at the Santa Clara Convention Center in Santa Clara, CA. The three-day conference offers seminars, forums, panels and exhibits for Ethernet designers of all levels to discuss new developments, share expertise and learn from prominent industry leaders. Sessions run April 14-16 and exhibits are open April 15-16.

IBTA members exhibiting at the show include:

  • Cisco – Booth #201-203
  • Mellanox Technologies – Booth #304
  • QLogic Corporation – Booth #300-302

Be sure to stop by their booths and ask about their RDMA solutions.

Additionally, Diego Crupnicoff, co-chair of the Technical Working Group from member company Mellanox Technologies, will participate in the “Forum 1B: Ethernet in Data Centers (Data/Telco Centers Track)” session on Wednesday, April 15 from 8:30 to 10:50 a.m.

Specifically, Diego will present on “Accelerating Data Movement with RDMA over Converged Ethernet (RoCE)” and how RoCE addresses the challenges that are emerging from new data center initiatives. This is an excellent opportunity to learn about RoCE and other enhancements required to meet the ever-growing needs of an optimized, flexible data center.

In addition, there are other presentations, panel discussions, and keynotes given by representatives of IBTA members Broadcom, Cisco, Intel, Mellanox, Microsoft, Molex, and QLogic. For a complete schedule and description of the 2015 Summit sessions, click here.

Bill Lee

IBTA Publishes Updated Integrators’ List & Announces Plugfest #27

March 17th, 2015

We’re excited to announce the availability of the IBTA October 2014 Combined Cable and Device Integrators’ List – a compilation of results from the IBTA Plugfest #26 held last fall. The list gathers products that have been tested and accepted by the IBTA as being compliant with the InfiniBand™ architecture specifications, with new products being added every spring and fall, following each Plugfest event.

The updated Integrators’ List, along with the bi-annual Plugfest testing, is a testament to the IBTA’s dedication to advancing InfiniBand technology and furthering industry adoption. The list demonstrates how our organization continues to ensure InfiniBand interoperability, as all cables and devices listed have successfully passed all the required compliance tests and procedures.

Additionally, the regularly updated Integrators’ List assures customers and end-users that the manufacturers’ equipment it includes has achieved the necessary level of compliance and interoperability. It also helps the IBTA assess current and future industry demands, providing background for future InfiniBand specifications.

We’ve already begun preparations for Plugfest #27, taking place April 13-24, 2015 at the University of New Hampshire’s Interoperability Laboratory in Durham, N.H. For more information, or to register for Plugfest #27, please visit the IBTA Plugfest website.

For any questions related to the Integrators’ List or IBTA membership, please visit the IBTA website: http://www.infinibandta.org/.

Rupert Dance, IBTA CIWG

Rupert Dance, IBTA CIWG

IBTA Releases New Update to InfiniBand Architecture Specification

March 10th, 2015

Today, the IBTA announced Release 1.3 of the InfiniBand Architecture Specification Volume 1. Dedicated to our mission to maintain and further the specification, we worked to incorporate capabilities that keep pace with increasing customer demand for expedited data transfer and accessibility. The result is a fresh version of the specification with new features that enable computer systems to meet the stability, bandwidth, computing efficiency, availability and isolation requirements of modern high performance computing and enterprise data centers.

The rapid evolution of the computer and internet has left existing interconnect technologies struggling to keep pace. The increasing popularity of high-end computing concepts like clustering, CPU offloads and movement of large amounts of data demands a robust fabric.

Additionally, I/O devices are now expected to have link-level data protection, traffic isolation, deterministic behavior and a high quality of service. In order to help meet these needs, the IBTA developed InfiniBand Architecture (IBA) in 2000 and has continued to update the specification suite as the industry evolves.

As a result of this initiative, Release 1.3 of Volume 1 enables notable improvements in scalability, with deeper visibility into switch-fabric configuration for improved monitoring of traffic patterns and easier network maintenance. The updated specification also allows for more in-depth cable management and improved diagnostics mechanisms, enabling shorter response times and better messaging capabilities that improve overall network performance.


Bill Lee


IBTA Tests Compliance & Interoperability with Top Vendors at Plugfest #26

February 17th, 2015

In preparation for the IBTA’s upcoming Integrators’ List and April Plugfest, we wanted to give a quick recap of our last Plugfest, which included some great participants.

Every year, the IBTA hosts two Compliance and Interoperability Plugfests, one in April and one in October, at the University of New Hampshire (UNH) Interoperability Lab (IOL) in Durham, New Hampshire. The Plugfest’s purpose is to provide an opportunity for participants to measure their products for compliance with the InfiniBand Architecture Specification as well as interoperability with other InfiniBand products.

This past October, we hosted our 26th Plugfest in New Hampshire. A total of 16 cable vendors participated, while our device vendors included Intel, Mellanox and NetApp. Test equipment vendors included Anritsu, Keysight (formerly Agilent) and Tektronix. Overall, 136 cables and 13 devices were tested, and the data is broken out below:

[Chart: Plugfest #26 test results, broken out by cable and device type]

The Integrators’ List, a compilation of all the products tested and accepted to be compliant with the InfiniBand architecture specification, will go live in about a month, so stay tuned!

Plugfest #27 will take place from April 13 to April 24. The cable and device registration deadline is March 16, while the shipping deadline is Wednesday, April 1. Check out the IBTA website for additional information on the upcoming Plugfest.


Rupert Dance, IBTA CIWG


Storage with Intense Network Growth and the Rise of RoCE

February 4th, 2015

On January 4 and 5, the Entertainment Storage Alliances held the 14th annual Storage Visions conference in Las Vegas, highlighting advances in storage technologies utilized in consumer electronics and the media and entertainment industries. The theme of Storage Visions 2015 was Storage with Intense Network Growth (SWING), which was very appropriate given the explosive growth underway in both data storage and networking.


While the primary focus of Storage Visions is storage technologies, this year’s theme acknowledged the correlation between storage growth and network growth. Therefore, among the many sessions offered on increased capacity and higher performance, one storage networking session was specifically designed to educate the audience on advances in network technology – “Speed is the Need: High Performance Data Center Fabrics to Speed Networking.”

More pressure is being put on the data center network from a variety of sources, including continued growth in enterprise application transactions, new sources of data (a.k.a. big data) and the growth of streaming video alongside the emergence of 4K video. According to Cisco, global IP data center traffic will grow 23% annually to 8.7 zettabytes by 2018. Three quarters of this traffic will remain within the data center, either between servers (East-West) or between servers and storage (North-South). Given this, data centers need to factor in technologies designed to optimize data center traffic.
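Those two forecast figures are consistent with each other: compounding 23 percent annual growth to reach 8.7 zettabytes in 2018 implies a starting point of roughly 3.1 zettabytes. A quick sketch, assuming the forecast window is the five years from 2013 to 2018 (the window is my assumption, based on the report's 2013-2018 title):

```python
# Work the Cisco forecast backwards: 23% CAGR reaching 8.7 ZB in 2018.
# Assumes the forecast window is 2013-2018 (five compounding years).

CAGR = 0.23
TRAFFIC_2018_ZB = 8.7
YEARS = 5  # 2013 -> 2018

baseline_2013 = TRAFFIC_2018_ZB / (1 + CAGR) ** YEARS
print(f"Implied 2013 traffic: {baseline_2013:.1f} ZB")

# Year-by-year projection from that baseline:
for year in range(2013, 2019):
    zb = baseline_2013 * (1 + CAGR) ** (year - 2013)
    print(f"{year}: {zb:.1f} ZB")
```

The implied 2013 baseline of about 3.1 zettabytes lines up with the figures Cisco published for that year.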

Global Data Center IP Traffic Forecast, Cisco Global Cloud Index, 2013-2018

Global Data Center Traffic By Destination, Cisco Global Cloud Index, 2013-2018

Storage administrators have always placed emphasis on two important metrics, I/O operations per second (IOPS) and throughput, to measure the ability of the network to serve storage devices. Lately, a third metric, latency, has become equally important. When balanced with IOPS and throughput, low latency technologies can bring dramatic benefits to storage.

At this year’s Storage Visions conference, I was asked to sit on a panel discussing the benefits of Remote Direct Memory Access (RDMA) for storage traffic. I specifically called out the benefits of RDMA over Converged Ethernet (RoCE). Joining me on the panel were representatives from Mellanox, speaking about InfiniBand, and Chelsio, speaking about iWARP. The storage-focused audience showed real interest in the topic and asked a number of insightful questions about RDMA benefits for their storage implementations.

RoCE in particular brings specific benefits to data center storage environments. As the purest implementation of the InfiniBand specification in the Ethernet environment, it has the ability to provide the lowest latency for storage. In addition, it capitalizes on the converged Ethernet standards defined in the IEEE 802.1 standards for Ethernet, including Congestion Management, Enhanced Transmission Selection and Priority Flow Control, which collectively allow for lossless transmission, bandwidth allocation and quality of service. With the introduction of RoCEv2 in September 2014, the technology moved from supporting a (flat) Layer 2 network to becoming a routable protocol supporting Layer 3 networks, allowing for use in distributed storage environments.

Ultimately, what customers need for optimal Ethernet-based storage is technology that balances IOPS, throughput and latency while allowing for flexible storage placement in their network. RoCE addresses all of these needs and is becoming widely available in popular server and storage offerings.
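One way to make the balance between those metrics concrete: at a fixed block size, throughput follows directly from IOPS multiplied by the transfer size. The sketch below applies that identity to the 4KB figures from the Storage Spaces Direct demo covered earlier in this blog; the GB/s conversions are derived here, not quoted from the presentations:

```python
# Throughput follows directly from IOPS and block size:
#   throughput (bytes/s) = IOPS * block_size (bytes)
# IOPS figures are from the Storage Spaces Direct demo mentioned
# earlier in this blog; GB/s values are derived, not quoted.

BLOCK_SIZE = 4 * 1024  # 4 KB per I/O

def throughput_gbps(iops: float, block_size: int = BLOCK_SIZE) -> float:
    """Convert an IOPS figure to throughput in gigabytes per second."""
    return iops * block_size / 1e9

without_roce = throughput_gbps(680_000)   # RoCE disabled
with_roce = throughput_gbps(1_100_000)    # RoCE enabled

print(f"Without RoCE: {without_roce:.1f} GB/s at 4KB I/O")
print(f"With RoCE:    {with_roce:.1f} GB/s at 4KB I/O")
```

At a 4KB block size, 680,000 IOPS works out to roughly 2.8 GB/s and 1.1 million IOPS to roughly 4.5 GB/s, which is why IOPS, throughput and latency have to be weighed together rather than in isolation.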

Mike Jochimsen, co-chair of the Marketing Working Group (MWG) at IBTA
