Author Archive

EDR Hits Primetime! Newly Published IBTA Integrators’ List Highlights Growth of EDR

August 20th, 2015

The highly anticipated IBTA April 2015 Combined Cable and Device Integrators’ List is now available for download. The list highlights the results of IBTA Plugfest 27, held at the University of New Hampshire’s Interoperability Lab earlier this year. The updated list includes newly verified products that are compliant with the InfiniBand specification, as well as details on solution interoperability.

Of particular note was the rise of EDR submissions. At IBTA Plugfest 27, eight companies provided 32 EDR cables for testing, up from three companies and 12 EDR cables at IBTA Plugfest 26. The increase in EDR cable solutions indicates that the technology is beginning to hit its stride. At Plugfest 28 we anticipate even more EDR solutions.

The IBTA is known in the industry for its rigorous testing procedures and subsequent Integrators’ List. The Integrators’ List provides IT professionals with peace of mind when purchasing new components to incorporate into new and existing infrastructure. To ensure the most reliable results, the IBTA uses industry-leading test equipment from Anritsu, Keysight, Molex, Tektronix, Total Phase and Wilder Technologies. We appreciate their commitment to our compliance program; we couldn’t do it without them.

The IBTA hosts its Plugfest twice a year to give members a chance to test new configurations or form factors. Although many technical associations require substantial attendance fees for testing events, the IBTA covers the bulk of Plugfest costs through membership dues.

The companies participating in Plugfest 27 included 3M Company, Advanced Photonics, Inc., Amphenol, FCI, Finisar, Fujikura, Ltd., Fujitsu Component Limited, Lorom America, Luxshare-ICT, Mellanox Technologies, Molex Incorporated, SAE Magnetics, Samtec, Shanghai Net Miles Fiber Technology Co. Ltd, Siemon, Sumitomo, and Volex.

We’ve already begun planning for IBTA Plugfest 28, which will be held October 12-23, 2015. For questions about Plugfest, contact ibta_plugfest@soft-forge.com or visit the Plugfest page for additional information.

Rupert Dance, IBTA CIWG

InfiniBand leads the TOP500 powering more than 50 percent of the world’s supercomputing systems

August 4th, 2015

TOP500 Interconnect Trends

TOP500.org has released its latest list of the 500 most powerful commercially available computer systems in the world, reporting that InfiniBand powers 257 systems, or 51.4 percent of the list. This marks 15.8 percent year-over-year growth since June 2014.

Demand for higher bandwidth, lower latency and higher message rates, along with the need for application acceleration, is driving continued adoption of InfiniBand in traditional High Performance Computing (HPC) as well as commercial HPC, cloud and enterprise data centers. InfiniBand is the only open-standard I/O technology that provides the capability required to handle supercomputing’s high demand for CPU cycles without wasting time on I/O transactions. Highlights from the latest list include:

  • InfiniBand powers the most efficient system on the list, with 98.8% efficiency.
  • EDR (Enhanced Data Rate) InfiniBand delivers 100Gbps and enters the TOP500 for the first time, powering three systems.
  • FDR (Fourteen Data Rate) InfiniBand at 56Gbps continues to be the most used technology on the TOP500, connecting 156 systems.
  • InfiniBand connects the most powerful clusters, 33 of the Petascale-performance systems, up from 24 in June 2014.
  • InfiniBand is the leading interconnect for accelerator-based systems, connecting 77% of such systems on the list.

Not only is InfiniBand the most used interconnect in the world’s 500 most powerful supercomputers, it is also the leader in the TOP100, the top 100 supercomputing systems as ranked in the TOP500. InfiniBand is the natural choice for world-leading supercomputers because of its performance, efficiency and scalability.

The full TOP500 list is available at www.top500.org.

Bill Lee

IBTA Members to Exhibit at ISC High Performance 2015

July 10th, 2015

The ISC High Performance 2015 conference gets underway this weekend in Frankfurt, Germany, where experts in the high performance computing field will gather to discuss the latest developments and trends driving the industry. Event organizers are expecting over 2,500 attendees at this year’s show, which will feature speakers, presentations, BoF sessions, tutorials and workshops on a variety of topics.

IBTA members will be on hand exhibiting their latest InfiniBand-based HPC solutions. Multiple EDR 100Gb/s InfiniBand products and demonstrations can be seen across the exhibit hall at ISC High Performance at the following member company booths:

  • Applied Micro (Booth #1431)
  • Bull (Booth #1230)
  • HP (Booth #732)
  • IBM (Booth #928)
  • Lenovo (Booth #1020)
  • Mellanox (Booth #905)
  • SGI (Booth #910)

Be sure to stop by each of our member booths and ask about their InfiniBand offerings! For additional details on ISC High Performance 2015 keynotes and sessions, visit its program overview page.

Bill Lee

IBTA Launches the RoCE Initiative: Industry Ecosystem to Drive Adoption of RDMA over Converged Ethernet

June 23rd, 2015

At IBTA, we are pleased to announce the launch of the RoCE Initiative, a new effort to highlight the many benefits of RDMA over Converged Ethernet (RoCE) and to facilitate the technology’s adoption in enterprise data centers. With the rise of server virtualization and big data analytics, data center architects are demanding innovative ways to improve overall network performance and accelerate applications without breaking the bank in the process.

Remote Direct Memory Access (RDMA) is well known in the InfiniBand community as a proven technology that boosts data center efficiency and performance by allowing the transport of data from storage to server with less CPU overhead. RDMA technology achieves faster speeds and lower latency by offloading data movement from the CPU, resulting in more efficient execution of applications and data transfers.
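
To make the kernel-bypass idea concrete, below is a minimal sketch in C against the Verbs API (libibverbs) showing how an application registers a buffer and creates a queue pair so the adapter can move data directly to and from application memory. It is illustrative only: the buffer size is arbitrary, error handling is abbreviated, and the out-of-band exchange of connection details with a remote peer is omitted.

```c
/* Minimal Verbs sketch: open an RDMA device, register a buffer, create a
 * queue pair. Illustrative only; error paths are shortened and the exchange
 * of QP/address information with the remote peer is omitted.
 * Build (typically): gcc rdma_sketch.c -libverbs
 */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register a data buffer so the adapter can DMA to/from it directly,
     * without the CPU copying data through the kernel. */
    size_t len = 4096;                      /* illustrative buffer size */
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE |
                                   IBV_ACCESS_REMOTE_READ);

    /* Completion queue and a reliable-connected queue pair. */
    struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);
    struct ibv_qp_init_attr attr = {
        .send_cq = cq,
        .recv_cq = cq,
        .qp_type = IBV_QPT_RC,
        .cap = { .max_send_wr = 16, .max_recv_wr = 16,
                 .max_send_sge = 1, .max_recv_sge = 1 },
    };
    struct ibv_qp *qp = ibv_create_qp(pd, &attr);

    printf("device %s ready: lkey=0x%x rkey=0x%x qp_num=%u\n",
           ibv_get_device_name(devs[0]), mr->lkey, mr->rkey, qp->qp_num);

    /* Once the QP is connected, an IBV_WR_RDMA_WRITE work request posted
     * with ibv_post_send() moves data straight from this buffer into the
     * peer's registered memory, with no CPU involvement on the remote side. */

    ibv_destroy_qp(qp);
    ibv_destroy_cq(cq);
    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```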

Before RoCE, the advantages of RDMA were only available over InfiniBand fabrics. This left system engineers who rely on Ethernet infrastructure with only the most expensive options for increasing system performance, such as adding more servers or buying faster CPUs. Now, data center architects can improve application performance while leveraging existing infrastructure. There is already tremendous ecosystem support for RoCE: it is supported by server and storage OEMs, adapter and switch vendors, and all major operating systems.

Through a new online resource, the RoCE Initiative will:

  • Enable CIOs, enterprise data center architects and solutions engineers to learn about improved application performance and data center productivity through training webinars, whitepapers and educational programs
  • Encourage the adoption and development of RoCE applications with case studies and solution briefs
  • Continue the development of specifications, performance benchmarks and technical resources for current and future RoCE adopters

For additional information about the RoCE Initiative, check out www.RoCEInitiative.org or read the full announcement here.

Mike Jochimsen, co-chair of the Marketing Working Group (MWG) at IBTA

To InfiniBand and Beyond – Supercomputing Support for NASA Missions

June 11th, 2015

High performance computing has been integral to solving large-scale problems across many industries, including science, engineering and business. Some of the most interesting use cases have come out of NASA, where supercomputing is essential to conduct accurate simulations and models for a variety of missions.

NASA’s flagship supercomputer, Pleiades, is among the world’s most powerful, currently ranking seventh in the United States and eleventh globally. It is housed at the NASA Advanced Supercomputing (NAS) facility in California and supports the agency’s work in aeronautics, Earth and space science and the future of space travel. At the heart of the system is InfiniBand technology, including DDR, QDR and FDR adapters and cabling.

The incremental expansion of Pleiades’ computing performance has been fundamental to its lasting success. Typically, a computer cluster is fully built at the outset and rarely expanded or upgraded during its lifetime. Built in 2008, Pleiades initially consisted of 64 server racks achieving 393 teraflops with a maximum link speed of 20Gb/s. Today, the supercomputer boasts 160 racks with a theoretical peak performance of 5.35 petaflops (5,350 teraflops) and a maximum link speed of 56Gb/s.

To further demonstrate the power of the InfiniBand-based Pleiades supercomputer, here are several fun facts to consider:

  • Today’s Pleiades supercomputer delivers more than 25 million times the computational power of the first Cray X-MP supercomputer at the NAS facility in 1984.
  • If every person in the world performed one calculation per second for eight hours a day, it would take 1,592 days to match one minute of Pleiades’ calculations (a rough check of this arithmetic appears after this list).
  • The NAS facility has the largest InfiniBand network in the world, with over 65 miles (104.6 km) of cable interconnecting its supercomputing systems and storage devices, roughly the distance from the Earth’s surface to the part of the thermosphere where auroras are formed.
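
As a rough check of that 1,592-day figure, the arithmetic lines up if one assumes a world population of about 7 billion and Pleiades’ 5.35 petaflop/s theoretical peak. The short program below is only a back-of-the-envelope sketch; the population figure is an assumption, not a number from NASA.

```c
/* Back-of-the-envelope check of the "1,592 days" fun fact.
 * Assumptions (not from the source): ~7 billion people, each performing
 * 1 calculation per second for 8 hours a day, versus one minute of
 * Pleiades running at its 5.35 petaflop/s theoretical peak. */
#include <stdio.h>

int main(void)
{
    double peak_flops     = 5.35e15;            /* Pleiades peak, operations per second */
    double pleiades_work  = peak_flops * 60.0;  /* one minute of Pleiades calculations */

    double population     = 7.0e9;              /* assumed world population */
    double per_person_day = 1.0 * 8.0 * 3600.0; /* 1 calc/s for 8 hours per day */
    double world_per_day  = population * per_person_day;

    printf("days required: %.0f\n", pleiades_work / world_per_day);
    /* prints roughly 1592, matching the figure quoted above */
    return 0;
}
```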

For additional facts and impacts of NASA’s high-end computing capability, check out its website here: http://www.nas.nasa.gov/hecc/about/hecc_facts.html

Bill Lee

RoCE Benefits on Full Display at Ignite 2015

May 27th, 2015

On May 4-8, IT professionals and enterprise developers gathered in Chicago for the 2015 Microsoft Ignite conference. Attendees were given a first-hand glimpse of the future of a variety of Microsoft business solutions through a number of sessions, presentations and workshops.

Of particular note were two demonstrations of RDMA over Converged Ethernet (RoCE) technology and the resulting benefits for Windows Server 2016. In both demos, RoCE technology showed significant improvements over Ethernet implementations without RDMA in terms of throughput, latency and processor efficiency.

Below is a summary of each presentation featuring RoCE at Ignite 2015:

Platform Vision and Strategy (4 of 7): Storage Overview
This demonstration highlighted the extreme performance and scalability of Windows Server 2016 through RoCE-enabled servers populated with NVMe and SATA SSDs. It simulated application and user workloads using SMB3 servers with Mellanox ConnectX-4 100GbE RDMA-enabled Ethernet adapters, Micron DRAM, enterprise NVMe SSDs for performance and SATA SSDs for capacity.

During the presentation, RoCE showed drastically better performance than TCP/IP. With RDMA enabled, the SMB3 server achieved about twice the throughput, half the latency and around 33 percent less CPU overhead than with TCP/IP.

Check out the video to see the demonstration in action.

Enabling Private Cloud Storage Using Servers with Local Disks

Claus Joergensen, a principal program manager at Microsoft, demonstrated Windows Server 2016’s Storage Spaces Direct running over Mellanox ConnectX-3 56Gb/s RoCE adapters with Micron RAM and M500DC local SATA storage.

The goal of the demo was to highlight the value of running RoCE on a system in terms of performance, latency and processor utilization. With RoCE disabled, the system achieved a combined 680,000 4KB IOPS at 2ms latency. With RoCE enabled, it reached about 1.1 million 4KB IOPS and reduced the latency to 1ms, an increase of roughly 60 percent in IOPS while utilizing the same amount of CPU resources.

For additional information, watch a recording of the presentation (demonstration starts at 57:00).

For more videos from Ignite 2015, visit Ignite On Demand.

Bill Lee

InfiniBand Volume 1, Release 1.3 – The Industry Sounds Off

May 14th, 2015

On March 10, 2015, IBTA announced the availability of Release 1.3 of Volume 1 of the InfiniBand Architecture Specification, and it is creating a lot of buzz in the industry. IBTA members recognized that as compute clusters and data centers grew larger and more complex, the network equipment architecture would have difficulty keeping pace with the need for more processing power. With that in mind, the new release included improvements to scalability and management for both high performance computing and enterprise data centers.

Here’s a snapshot of what industry experts and media have said about the new specification:

“Release 1.3 of the Volume 1 InfiniBand Architecture Specification provides several improvements, including deeper visibility into switch hierarchy, improved diagnostics allowing for faster response times to connectivity problems, enhanced network statistics, and added counters for Enhanced Data Rate (EDR) to improve network management. These features will allow network administrators to more easily install, maintain, and optimize very large InfiniBand clusters.” - Kurt Yamamoto, Tom’s IT PRO

“It’s worth keeping up with [InfiniBand], as it clearly shows where the broader networking market is capable of going… Maybe geeky stuff, but it allows [InfiniBand] to keep up with “exascales” of data and lead the way large scale-out computer networking gets done. This is particularly important as the 1000 node clusters of today grow towards the 10,000 node clusters of tomorrow.” - Mike Matchett, Taneja Group, Inc.

“Indeed, a rising tide lifts all boats, and the InfiniBand community does not intend to get caught in the shallows of the Big Data surge. The InfiniBand Trade Association recently issued Release 1.3 of Volume I of the format’s reference architecture, designed to incorporate increased scalability, efficiency, availability and other functions that are becoming central to modern data infrastructure.” - Arthur Cole, Enterprise Networking Planet

“The InfiniBand Trade Association (IBTA) hopes to ward off the risk of an Ethernet invasion in the ranks of HPC users with a renewed focus on manageability and visibility. Such features have just appeared in release 1.3 of the Volume 1 standard. The IBTA’s Bill Lee told The Register that as HPC clusters grow, ‘you want to be able to see every level of switch interconnect, so you can identify choke-points and work around them.’” - Richard Chirgwin, The Register

To read more industry coverage of the new release, visit the InfiniBand in the News page.

For additional information about the InfiniBand specification, check out the InfiniBand specification FAQ or access the InfiniBand specification here.

Bill Lee

Accelerating Data Movement with RoCE

April 29th, 2015

On April 14-16, Ethernet designers and experts from around the globe gathered at the Ethernet Technology Summit 2015 to discuss developments happening within the industry as it pertained to the popular networking standard. IBTA’s Diego Crupnicoff, co-chair of the Technical Working Group, shared his expertise with attendees via a presentation on “Accelerating Data Movement with RDMA over Converged Ethernet (RoCE).” The session focused on the ever-growing complexity, bandwidth requirements and services of data centers and how RoCE can address the challenges that emerge from new enterprise data center initiatives.

Here is a brief synopsis of the points that Diego covered in his well-attended presentation:

People are living in an increasingly digital world. In the last decade, there has been an explosion of connected devices running many applications and creating massive amounts of data that must be accessible anytime, anywhere.

Over time, the data center has emerged as the workhorse of the networking industry, with the accelerating pace of the ‘information generation’ driving many new data center initiatives, such as cloud, virtualization and hyper-converged infrastructure. Expectations for enhanced accessibility to larger sets of data are straining enterprise data networks, bringing about a variety of new challenges to the industry, including the following needs:

  • Scale and Flexibility
  • Overlays & Shared Storage
  • Reduced Latency
  • Rapid Server-to-Server I/O
  • Big Storage, Large Clusters
  • New Scale-out Storage Traffic

The Transmission Control Protocol (TCP) has had difficulty keeping up with some traffic stemming from newer, more demanding applications. In these cases, packet processing over TCP saturates CPU resources, resulting in networks with low bandwidth, high latency and limited scalability. The industry was in need of a capability that would bypass the CPU altogether to enable faster, more efficient movement of data between servers.

The advent of Remote Direct Memory Access (RDMA) did just that, utilizing hardware offloads to move data faster with less CPU overhead. By offloading the I/O from the CPU, users of RDMA experience lower latency while freeing up the CPU to focus its resources on applications that process data as opposed to moving it.

Recently, RDMA expanded into the enterprise market and is now being widely adopted over Ethernet networks with RDMA over Converged Ethernet or RoCE. The RoCE standard acts as an efficient, lightweight transport that’s layered directly over Ethernet, bypassing the TCP/IP stack. It offers the lowest latency in the Ethernet industry, which enables faster application completion, better server utilization and higher scalability. Given these advantages, RoCE became the most widely deployed Ethernet RDMA standard, resulting in millions of RoCE-capable ports on the market today.
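
To illustrate what that looks like from an application’s point of view, here is a hedged sketch in C using the RDMA connection manager (librdmacm), which is commonly used with RoCE, showing a client resolving an ordinary IP address and sending a message over an RDMA connection. The address, port and message are placeholders, the matching server side is omitted, and error handling is trimmed for brevity.

```c
/* RoCE/RDMA client sketch using the RDMA connection manager (librdmacm).
 * "192.168.1.10" and port "7471" are placeholders; error handling and the
 * server side are omitted. Build (typically): gcc roce_client.c -lrdmacm -libverbs
 */
#include <stdio.h>
#include <string.h>
#include <rdma/rdma_cma.h>
#include <rdma/rdma_verbs.h>

int main(void)
{
    struct rdma_addrinfo hints, *res;
    struct ibv_qp_init_attr attr;
    struct rdma_cm_id *id;
    struct ibv_mr *mr;
    struct ibv_wc wc;
    char msg[64] = "hello over RoCE";

    /* Resolve an ordinary IP address and port; on a RoCE-capable NIC this
     * yields an RDMA route across the existing Ethernet fabric. */
    memset(&hints, 0, sizeof hints);
    hints.ai_port_space = RDMA_PS_TCP;
    rdma_getaddrinfo("192.168.1.10", "7471", &hints, &res);  /* placeholder peer */

    memset(&attr, 0, sizeof attr);
    attr.cap.max_send_wr = attr.cap.max_recv_wr = 4;
    attr.cap.max_send_sge = attr.cap.max_recv_sge = 1;
    attr.sq_sig_all = 1;
    rdma_create_ep(&id, res, NULL, &attr);    /* also creates the queue pair */

    /* Register the buffer and connect; from here on, sends and receives are
     * handled by the adapter, bypassing the host TCP/IP stack. */
    mr = rdma_reg_msgs(id, msg, sizeof msg);
    rdma_connect(id, NULL);

    rdma_post_send(id, NULL, msg, sizeof msg, mr, 0);
    while (rdma_get_send_comp(id, &wc) == 0)
        ;                                     /* wait for the send completion */

    rdma_disconnect(id);
    rdma_dereg_mr(mr);
    rdma_destroy_ep(id);
    rdma_freeaddrinfo(res);
    return 0;
}
```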

For additional details on the benefits of RDMA for Ethernet networks, including RoCE network considerations and use cases, view the presentation in its entirety here.

Mike Jochimsen, co-chair of the Marketing Working Group (MWG) at IBTA

IBTA Member Companies to Exhibit and Present at Ethernet Technology Summit 2015

April 14th, 2015

The Ethernet Technology Summit 2015 kicks off today at the Santa Clara Convention Center in Santa Clara, CA. The three-day conference offers seminars, forums, panels and exhibits for Ethernet designers of all levels to discuss new developments, share expertise and learn from prominent industry leaders. Sessions run April 14-16 and exhibits are open April 15-16.

IBTA members exhibiting at the show include:

  • Cisco – Booth #201-203
  • Mellanox Technologies – Booth #304
  • QLogic Corporation – Booth #300-302

Be sure to stop by their booths and ask about their RDMA solutions.

Additionally, Diego Crupnicoff, co-chair of the Technical Working Group from member company Mellanox Technologies, will participate in the “Forum 1B: Ethernet in Data Centers (Data/Telco Centers Track)” session on Wednesday, April 15 from 8:30 to 10:50 a.m.

Specifically, Diego will present on “Accelerating Data Movement with RDMA over Converged Ethernet (RoCE)” and how RoCE addresses the challenges that are emerging from new data center initiatives. This is an excellent opportunity to learn about RoCE and other enhancements required to meet the ever-growing needs of an optimized, flexible data center.

In addition, there are other presentations, panel discussions and keynotes given by representatives of IBTA members Broadcom, Cisco, Intel, Mellanox, Microsoft, Molex and QLogic. For a complete schedule and description of the 2015 Summit sessions, click here.

Bill Lee

IBTA Publishes Updated Integrators’ List & Announces Plugfest #27

March 17th, 2015

We’re excited to announce the availability of the IBTA October 2014 Combined Cable and Device Integrators’ List – a compilation of results from the IBTA Plugfest #26 held last fall. The list gathers products that have been tested and accepted by the IBTA as being compliant with the InfiniBand™ architecture specifications, with new products being added every spring and fall, following each Plugfest event.

The updated Integrators’ List, along with the bi-annual Plugfest testing, is a testament to the IBTA’s dedication to advancing InfiniBand technology and furthering industry adoption. The list demonstrates how our organization continues to ensure InfiniBand interoperability, as all cables and devices listed have successfully passed all the required compliance tests and procedures.

Additionally, the consistently updated Integrators’ List assures vendors’ customers and end users that the equipment included on the list has achieved the necessary level of compliance and interoperability. It also helps the IBTA assess current and future industry demands, providing background for future InfiniBand specifications.

We’ve already begun preparations for Plugfest #27, taking place April 13-24, 2015 at the University of New Hampshire’s Interoperability Laboratory in Durham, N.H. For more information, or to register for Plugfest #27, please visit the IBTA Plugfest website.

For any questions related to the Integrators’ List or IBTA membership, please visit the IBTA website: http://www.infinibandta.org/.

Rupert Dance, IBTA CIWG