Archive

Posts Tagged ‘InfiniBand’

Newest IBTA Compliance & Interoperability Testing Includes EDR 100Gb/s InfiniBand Methodology

February 25th, 2014

2014 has already proved to be a busy year for the IBTA. The organization is currently in the midst of compiling results from the recent Plugfest 24, which took place this past October at the University of New Hampshire’s Interoperability Laboratory.

IBTA SC13 Demonstration

While exhibiting at SC13 in November, we had the opportunity to showcase the IBTA’s newest EDR 100Gb/s compliance and interoperability testing capabilities that were implemented during October’s biannual Plugfest event. Due to the popularity of the demo at the conference, we wanted to share some details and results here on our blog.

The IBTA’s sophisticated test setup leveraged Anritsu generators and Tektronix scopes to test EDR active cables. The Anritsu generators produced a spec-compliant 25Gb/s input signal on one lane of an active optical cable, along with 25Gb/s signals on the other seven lanes, which served as “aggressor” lanes. Tektronix provided a scope capable of testing the 25Gb/s-per-lane output of the EDR active cables. The details are provided in the Anritsu Method of Implementation (MOI), which is available here: http://infinibandta.org/content/pages.php?pg=technology_methods_of_implementation.

The significance of this testing is that we were able not only to achieve 25Gb/s per lane (100Gb/s total over four lanes), but also to analyze data at that speed. The IBTA currently has both copper and active optical cables running at 100Gb/s that are also compliant with the latest InfiniBand specifications.

3M Company, Advanced Photonics, Inc., Amphenol, FCI, Finisar, Foxconn, Fujikura Ltd., Hitachi Cable America, Lorom Industrial, Mellanox, Molex, Samtec, Sumitomo, TE Connectivity and Volex attended Plugfest 24 and tested a total of 189 cables for both compliance and interoperability.

Intel, Mellanox and NetApp also tested a total of 17 devices, including InfiniBand to Ethernet gateways, Host Channel Adapters (HCAs), switches and SRP targets. These endpoints and switches provided the foundation for running approximately 1,000 interoperability tests using the Open MPI test benchmark and the interconnects provided by the cable vendors listed above.
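
For readers unfamiliar with this style of testing, the sketch below is a minimal MPI point-to-point bandwidth loop of the kind such interoperability runs exercise between two InfiniBand-connected nodes. It is purely illustrative and is not the IBTA test suite; the message size, iteration count and program name are arbitrary assumptions.

    /*
     * Minimal MPI ping-pong bandwidth sketch (illustrative only; not the
     * actual IBTA Plugfest suite). Run with two ranks on two
     * InfiniBand-connected nodes.
     */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define MSG_BYTES (4 * 1024 * 1024)   /* 4 MiB message */
    #define ITERS     100

    int main(int argc, char **argv)
    {
        int rank, size;
        char *buf;
        double start, elapsed;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (size != 2) {
            if (rank == 0) fprintf(stderr, "Run with exactly 2 ranks.\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        buf = malloc(MSG_BYTES);
        MPI_Barrier(MPI_COMM_WORLD);
        start = MPI_Wtime();

        for (int i = 0; i < ITERS; i++) {
            if (rank == 0) {
                MPI_Send(buf, MSG_BYTES, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, MSG_BYTES, MPI_BYTE, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else {
                MPI_Recv(buf, MSG_BYTES, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, MSG_BYTES, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
            }
        }

        elapsed = MPI_Wtime() - start;
        if (rank == 0) {
            /* Two transfers per iteration (one full round trip). */
            double gbytes = 2.0 * ITERS * MSG_BYTES / 1e9;
            printf("Aggregate bandwidth: %.2f GB/s\n", gbytes / elapsed);
        }

        free(buf);
        MPI_Finalize();
        return 0;
    }

A typical Open MPI invocation would be along the lines of mpirun -np 2 --host node1,node2 ./ib_bw, with transport selection left to Open MPI’s defaults for the fabric.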

Anritsu InfiniBand ATD EDR Test Solution

Agilent, Anritsu, Tektronix and Wilder Technologies all provided the latest generation of test equipment needed to test 100Gb/s InfiniBand EDR cables. The IBTA Plugfest provides a great opportunity for all of these companies to work together to improve InfiniBand and to ensure that the products are both robust and interoperable.

IBTA’s Plugfest events take place twice per year, during which the IBTA tests compliance with the InfiniBand architecture specification and interoperability with other InfiniBand products. The IBTA Plugfest 24 Integrators’ List has been released and features vendor devices and cables that successfully passed all required Integrators’ List compliance tests and interoperability procedures during Plugfest 24.

The IBTA Integrators’ List contains a list of compliant and interoperable products and is available here: http://infinibandta.org/content/pages.php?pg=integrators_list_overview.

For more information on the upcoming IBTA Plugfest being held in April 2014, check out the Plugfest website.

FDR InfiniBand Continues Rapid Growth on TOP500 Year-Over-Year

November 22nd, 2013

The newest TOP500 list of the world’s most powerful supercomputers was released at SC13 this week, and showed the continued adoption of Fourteen Data Rate (FDR) InfiniBand – the fastest growing interconnect technology on the list! FDR, the latest generation of InfiniBand technology, now connects 80 systems on the TOP500, growing almost 2X year-over-year from 45 systems in November 2012. Overall, InfiniBand technology connects 207 systems, accounting for over 40 percent of the TOP500 list.

InfiniBand technology overall connects 48 percent of the Petascale-capable systems on the list. Petascale-capable systems generally favor the InfiniBand interconnect due to its computing efficiency, low application latency and high speeds. More relevant highlights from the November TOP500 list include:

  • InfiniBand is the most used interconnect in the TOP100, connecting 48 percent of the systems, and in the TOP200, connecting 48.5 percent of the systems.

  • InfiniBand-connected systems deliver 2X the performance of Ethernet systems, while the total performance supported by InfiniBand systems continues to grow.

  • With a peak efficiency of 97 percent and an average efficiency of 86 percent, InfiniBand continues to be the most efficient interconnect on the TOP500 (see the short efficiency sketch after this list).
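
As a point of reference, TOP500 efficiency is conventionally the ratio of measured LINPACK performance (Rmax) to theoretical peak (Rpeak). The small sketch below just makes that arithmetic explicit; the sample Rmax/Rpeak values are illustrative placeholders, not figures from the actual list.

    /*
     * TOP500 efficiency = Rmax / Rpeak: measured LINPACK performance over
     * theoretical peak. The numbers below are illustrative placeholders.
     */
    #include <stdio.h>

    static double efficiency_pct(double rmax_tflops, double rpeak_tflops)
    {
        return 100.0 * rmax_tflops / rpeak_tflops;
    }

    int main(void)
    {
        double rmax = 1720.0, rpeak = 2000.0;   /* hypothetical system, TFlop/s */
        printf("Efficiency: %.1f%%\n", efficiency_pct(rmax, rpeak));  /* 86.0% */
        return 0;
    }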

The TOP500 list continues to show that InfiniBand technology is the interconnect of choice for HPC and data centers wanting the highest performance, with FDR InfiniBand adoption as a large part of this solution. The graph below demonstrates this further:

TOP500 Results, November 2013

Image source: Mellanox Technologies

Delivering bandwidth up to 56Gb/s with application latencies less than one microsecond, InfiniBand enables the highest server efficiency and is ideal to carry multiple traffic types (clustering, communications, storage, management) over a single connection. As a mature and field-proven technology, InfiniBand is used in thousands of data centers, high-performance compute clusters and embedded environments that scale from two nodes up to clusters with thousands of nodes.

The TOP500 list is published twice per year and recognizes and ranks the world’s fastest supercomputers. The list was announced November 18 at the SC13 conference in Denver, Colorado.

Interested in learning more about the TOP500, or how InfiniBand performed? Check out the TOP500 website: www.top500.org.


Bill Lee, chair of Marketing Working Group (MWG) at IBTA

Emulex Joins the InfiniBand Trade Association

October 30th, 2013

Emulex is proud to be a new member of the IBTA. The IBTA has a great history of furthering the InfiniBand and RDMA over Converged Ethernet (RoCE) specifications, developing an active solution ecosystem and building market momentum around technologies with strong value propositions. We are excited to be a voting member on the organization’s Steering Committee and look forward to contributing to relevant technical and marketing working groups, as well as participating in IBTA-sponsored Plugfests and other interoperability activities.

Why the IBTA?

Through our experience in building high-performance, large-scale, mission-critical networks, we understand the benefits of RDMA. Since its original implementation as part of InfiniBand back in 1999, RDMA has built a well-proven track record of delivering better application performance, reduced CPU overhead and increased bandwidth efficiency in demanding computing environments.

Due to increased adoption of technologies such as cloud computing, big data analytics, virtualization and mobile computing, more and more commercial IT infrastructures are starting to run into the same challenges that supercomputing was forced to confront around performance bottlenecks, resource utilization and moving large data sets. In other words, data centers supporting the Fortune 2000, vertical applications for media and entertainment or life sciences, telecommunications and cloud service providers are starting to look a lot more like the data centers at research institutions or systems on the TOP500. Thus, with the advent of RoCE, we see an opportunity to bring the benefits of RDMA that have been well-proven in high performance computing (HPC) markets to the mainstream commercial markets.

RoCE is a key enabling technology for converging data center infrastructure and enabling application-centric I/O across a broad spectrum of requirements. From supercomputing to shared drives, RDMA delivers broad-based benefits for fundamentally more efficient network communications. Emulex has a data center vision to connect, monitor and manage on a single unified fabric. We are looking forward to supporting the IBTA, contributing to the advancement of RoCE and helping to bring RDMA to mainstream markets.
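
One practical detail behind that last point is that InfiniBand and RoCE adapters are programmed through the same verbs API, so software written against RDMA carries over between the two fabrics. The sketch below is a minimal, illustrative libibverbs program that enumerates RDMA devices and queries the first port of each; it assumes a system with the libibverbs development package installed and is not tied to any particular vendor’s adapter.

    /*
     * Minimal libibverbs sketch: enumerate RDMA devices and query port 1 of
     * each. The same verbs API serves both InfiniBand and RoCE adapters.
     * Illustrative only; build with: gcc rdma_list.c -libverbs
     */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **devs = ibv_get_device_list(&num_devices);
        if (!devs) {
            perror("ibv_get_device_list");
            return 1;
        }

        for (int i = 0; i < num_devices; i++) {
            struct ibv_context *ctx = ibv_open_device(devs[i]);
            if (!ctx)
                continue;

            struct ibv_port_attr port;
            if (ibv_query_port(ctx, 1, &port) == 0)   /* first port */
                printf("%s: port 1 state=%d link_layer=%s\n",
                       ibv_get_device_name(devs[i]), port.state,
                       port.link_layer == IBV_LINK_LAYER_ETHERNET
                           ? "Ethernet (RoCE)" : "InfiniBand");

            ibv_close_device(ctx);
        }

        ibv_free_device_list(devs);
        return 0;
    }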


Jon Affeld, senior director, marketing alliances at Emulex

IBTA Updates Integrators’ List Following PF23 Compliance & Interoperability Testing

September 25th, 2013

We’re proud to announce the availability of the IBTA April 2013 Combined Cable and Device Integrators’ List – a compilation of results from the IBTA Plugfest 23, during which we conducted the first-ever Enhanced Data Rate (EDR) 100Gb/s InfiniBand standard compliance testing.

The IBTA’s updated Integrators’ List and our newest Plugfest testing are a testament to the IBTA’s commitment to advancing InfiniBand technology and ensuring its interoperability, as all cables and devices on the list successfully passed the required compliance tests and interoperability procedures. Vendors listed may now access the IBTA Integrators’ List promotional materials and a special marketing program for their products.

Plugfest 23 was a huge success, attracting top manufacturers and would not have been possible without donated testing equipment from the following vendors: Agilent Technologies, Anritsu, Molex, Tektronix and Wilder Technologies. We are thrilled with the level of participation and the caliber of technology manufacturers who came out and supported the IBTA.

The updated Integrators’ List is a tool used by the IBTA to assure vendors’ customers and end users that manufacturers have met the mark of compliance and interoperability. It is also a method for furthering the InfiniBand specification. The Integrators’ List is published every spring and fall following the biannual Plugfest and serves to assist IT professionals, including data center managers and CIOs, with their planned deployments of InfiniBand solutions.

We’ve already begun preparations for Plugfest 24, which will take place October 7-18, 2013 at the University of New Hampshire’s Interoperability Laboratory. For more information, or to register for Plugfest 24, please visit the IBTA Plugfest website.

If you have any questions related to IBTA membership or the Integrators’ List, please visit the IBTA website: http://www.infinibandta.org/, or email us: ibta_plugfest@soft-forge.com.

Rupert Dance, IBTA CIWG

IBTA & OFA Join Forces at SC12

November 7th, 2012

Attending SC12? Check out OFA’s Exascale and Big Data I/O panel discussion and stop by the IBTA/OFA booth to meet our industry experts

The IBTA is gearing up for the annual SC12 conference taking place November 10-16 at the Salt Palace Convention Center in Salt Lake City, Utah. We will be joining forces with the OpenFabrics Alliance (OFA) on a number of conference activities and will be exhibiting together at SC12 booth #3630.

IBTA members will participate in the OFA-moderated panel, Exascale and Big Data I/O, which we highly recommend attending if you’re at the conference.  The panel session, moderated by IBTA and OFA member Bill Boas, takes place Wednesday, November 14 at 1:30 p.m. Mountain Time and will discuss drivers for future I/O architectures.

Also be sure to stop by the IBTA and OFA booth #3630 to chat with industry experts regarding a wide range of industry topics, including:

  • Behind the IBTA Integrators’ List
  • High-speed optical connectivity
  • Building and validating OFA software
  • Achieving low latency with RDMA in virtualized cloud environments
  • UNH-IOL hardware testing and interoperability capabilities
  • Utilizing high-speed interconnects for HPC
  • Release 1.3 of IBA Vol2
  • Peering into a live OFS cluster
  • RoCE in Wide Area Networks
  • OpenFabrics for high-speed SAN and NAS

Experts including Katharine Schmidtke, Finisar; Alan Brenner, IBM; Todd Wilde, Mellanox; Rupert Dance, Software Forge; Bill Boas and Kevin Moran, System Fabric Works; and Josh Simons, VMware will be in the booth to answer your questions and discuss topics currently affecting the HPC community.

Be sure to check the SC12 website to learn more about Supercomputing 2012, and stay tuned to the IBTA website and Twitter to follow IBTA’s plans and activities at SC12.

See you there!

InfiniBand at VMworld!

September 2nd, 2011

VMworld 2011 took place this week in sunny Las Vegas, and with over 20,000 attendees, this show has quickly developed into one of the largest enterprise IT events in the world. Virtualization continues to be one of the hottest topics in the industry, providing a great opportunity for InfiniBand vendors to market the wide array of benefits that InfiniBand is enabling in virtualized environments. Several members of the IBTA community were spreading the InfiniBand message; here are a few of note.


On the networking side, Mellanox Technologies showed the latest generation of InfiniBand technology, FDR 56Gb/s. With FDR adapters, switches and cables available today, IT managers can immediately deploy this next-generation technology into their data centers and get instant performance improvements, whether that is leading vMotion performance, the ability to support more virtual machines per server at higher bandwidth per virtual machine, or lower capital and operating expenses by consolidating the networking, management and storage I/O into a one-wire infrastructure.


Fusion-io, a flash-based storage manufacturer that targets heavy data acceleration needs from applications such as database, virtualization, Memcached and VDI, also made a big splash at VMworld. Their booth featured an excellent demonstration of how low-latency, high-speed InfiniBand networks enable Fusion-io to drive 800 virtual desktops, accessed and displayed across 17 monitors. InfiniBand enabled them to stream more than 2,000 bandwidth-intensive HD movies from just eight servers.


Pure Storage, a newcomer in the storage arena, announced their 40Gb/s InfiniBand-based enterprise storage array, which targets applications such as database and VDI. With InfiniBand they are able to cut latency by a factor of more than eight while increasing performance by 10X.


Isilon was recently acquired by EMC, and in the EMC booth a rack of Isilon storage systems was displayed, scaling out by running 40Gb/s InfiniBand on the back end. These storage systems excel in VDI implementations and are well suited for customers implementing a cloud solution where performance, reliability and storage resiliency are vital.


Also exhibiting at VMworld was Xsigo Systems. Xsigo showed their latest Virtual I/O Director, which now includes 40Gb/s InfiniBand; the previous generation used 20Gb/s InfiniBand. With the upgraded bandwidth capabilities, Xsigo can now offer their customers 12-30X acceleration of I/O-intensive tasks such as vMotion, queries and backup, all while providing dynamic bandwidth allocation per VM or job. In addition, by consolidating the network over a single wire, Xsigo is able to provide customers with 85 percent less hardware cost per virtual machine.

The items mentioned above are just a small slice of the excitement that was at VMworld. I’m glad to have seen so many InfiniBand solutions displayed. For more information on InfiniBand in the enterprise, watch for an upcoming webinar series being produced by the IBTA.

Brian Sparks

IBTA Marketing Working Group Co-Chair

HPC Advisory Council Showcases World’s First FDR 56Gb/s InfiniBand Demonstration at ISC’11

July 1st, 2011

The HPC Advisory Council, together with ISC’11, showcased the world’s first demonstration of FDR 56Gb/s InfiniBand in Hamburg, Germany, June 20-22. The HPC Advisory Council is hosting and organizing new technology demonstrations at leading HPC conferences around the world to highlight new solutions that will influence future HPC systems in terms of performance, scalability and utilization.

The 56Gb/s InfiniBand demonstration connected participating exhibitors on the ISC’11 showroom floor as part of the HPC Advisory Council ISCnet network. The ISCnet network provided organizations with fast interconnect connectivity between their booths.

The FDR InfiniBand network included dedicated and distributed clusters, as well as a Lustre-based storage system. Multiple applications were demonstrated, including high-speed visualization applications using car models courtesy of Peugeot Citroën.

The installation of the fiber cables (we used 20- and 50-meter cables) was completed a few days before the show opened, and we placed the cables on the floor, protecting them with wooden bridges. The cluster, Lustre and application setup was completed the day before, and everything ran perfectly.

You can see the network architecture of the ISCnet FDR InfiniBand demo below. We have combined both MPI traffic and storage traffic (Lustre) on the same fabric, utilizing the new bandwidth capabilities to provide a high performance, consolidated fabric for the high speed rendering and visualization application demonstration.

ISCnet FDR InfiniBand demo network architecture

The following HPC Advisory Council member organizations contributed to the FDR 56Gb/s InfiniBand demo, and I would like to personally thank each of them: AMD, Corning Cable Systems, Dell, Fujitsu, HP, MEGWARE, Mellanox Technologies, Microsoft, OFS, Scalable Graphics, Supermicro and Xyratex.

Regards,

Gilad Shainer

Member of the IBTA and chairman of the HPC Advisory Council

ISC’11 Highlights: ISCnet to Feature FDR InfiniBand

June 13th, 2011

ISC’11, taking place in Hamburg, Germany from June 19-23, will include major new product introductions and groundbreaking talks from users worldwide. We are happy to call your attention to the fact that this year’s conference will feature the world’s first large-scale demonstration of next-generation FDR InfiniBand technology.

With link speeds of 56Gb/s, FDR InfiniBand uses the latest version of the OpenFabrics Enterprise Distribution (OFED™) and delivers an increase of nearly 80% in data rate compared to previous InfiniBand generations.
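
As a rough illustration of where a figure like that comes from, the sketch below combines nominal per-lane signaling rates, the four lanes of a standard link, and the line encoding (8b/10b through QDR, 64b/66b for FDR) into effective data rates. The exact percentage depends on which rates are compared, so treat this as back-of-the-envelope arithmetic rather than an official IBTA calculation.

    /*
     * Back-of-the-envelope effective data rates for a 4x InfiniBand link.
     * Nominal figures; the generation-over-generation increase depends on
     * whether raw signaling rates or encoded data rates are compared.
     */
    #include <stdio.h>

    struct gen {
        const char *name;
        double per_lane_gbps;   /* signaling rate per lane */
        double encoding;        /* line-code efficiency */
    };

    int main(void)
    {
        struct gen gens[] = {
            { "QDR", 10.0,    8.0 / 10.0 },   /* 8b/10b  */
            { "FDR", 14.0625, 64.0 / 66.0 },  /* 64b/66b */
        };

        for (int i = 0; i < 2; i++) {
            double eff = 4 * gens[i].per_lane_gbps * gens[i].encoding;
            printf("%s 4x: %.1f Gb/s effective\n", gens[i].name, eff);
        }
        return 0;
    }

The two printed figures show how the more efficient 64b/66b encoding compounds the raw signaling-rate jump from 40Gb/s to 56Gb/s per link.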

Running on the ISC’11 network “ISCnet,” the multi-vendor, FDR 56Gb/s InfiniBand demo will provide exhibitors with fast interconnect connectivity between their booths on the show floor and enable them to demonstrate a wide variety of applications and experimental HPC applications, as well as new developments and products.

The demo will also continue to show the processing efficiency of RDMA and the microsecond latency of OFED, which reduces the cost of the servers, increases productivity and improves customers’ ROI. ISCnet will be the fastest open commodity network demonstration ever assembled to date.

If you are heading to the conference, be sure to visit the booths of IBTA and OFA members who are exhibiting, as well as the many users of InfiniBand and OFED. The last TOP500 list (published in November 2010) showed that nearly half of the most powerful computers in the world are using these technologies.

InfiniBand Trade Association Members Exhibiting

  • Bull - booth 410
  • Fujitsu Limited - booth 620
  • HP - booth 430
  • IBM - booth 231
  • Intel - booths 530+801
  • Lawrence Livermore National Laboratory - booth 143
  • LSI - booth 341
  • Leoni - booth 842
  • Mellanox - booth 331
  • Molex - booth 730 Co-Exhibitor of Stordis
  • NetApp - booth 743
  • Obsidian - booth 151
  • QLogic - booth 240
  • SGI - booth 330

OpenFabrics Alliance Members Exhibiting

  • AMD - booth 752
  • APPRO - booth 751 Co-Exhibitor of AMD
  • Chelsio - booth 702
  • Cray - booth 650
  • DataDirect Networks - booth 550
  • HP - booth 430
  • IBM - booth 231
  • Intel - booths 530+801
  • Lawrence Livermore National Laboratory - booth 143
  • LSI - booth 341
  • Mellanox - booth 331
  • Microsoft - booth 832
  • NetApp - booth 743
  • Obsidian - booth 151
  • QLogic - booth 240
  • SGI - booth 330

Also, don’t miss the HPC Advisory Council Workshop on June 19 at ISC’11 that includes talks on the following hot topics related to InfiniBand and OFED:

  • GPU Access and Acceleration
  • MPI Optimizations and Futures
  • Fat-Tree Routing
  • Improved Congestion Management
  • Shared Memory models
  • Lustre Release Update and Roadmap 

Go to http://www.hpcadvisorycouncil.com/events/2011/european_workshop/index.php to learn more and register.

For those who are InfiniBand, OFED and Lustre users, don’t miss the important announcement and breakfast on June 22 regarding the coming together of the worldwide Lustre vendor and user community. This announcement is focused on ensuring the continued development, upcoming releases and consolidated support for Lustre; details and location will be available on site at ISC’11. This is your opportunity to meet with representatives of OpenSFS, HPCFS and EOFS to learn how the whole community is working together.

Looking forward to seeing several of you at the show!

Brian Sparks
IBTA & OFA Marketing Working Groups Co-Chair
HPC Advisory Council Media Relations

NVIDIA GPUDirect Technology – InfiniBand RDMA for Accelerating GPU-Based Systems

May 11th, 2011

As a member of the IBTA and chairman of the HPC Advisory Council, I wanted to share some information on the important role of InfiniBand in the emerging hybrid (CPU-GPU) clustering architectures.

The rapid increase in the performance of graphics hardware, coupled with recent improvements in its programmability, has made graphics accelerators a compelling platform for computationally demanding tasks in a wide variety of application domains. Due to the great computational power of the GPU, the GPGPU method has proven valuable in various areas of science and technology and the hybrid CPU-GPU architecture is seeing increased adoption.

GPU-based clusters are being used to perform compute-intensive tasks like finite element computations, computational fluid dynamics, Monte Carlo simulations, etc. Several of the world-leading InfiniBand supercomputers are using GPUs in order to achieve the desired performance. Since GPUs provide a very high core count and floating point capability, a high-speed networking interconnect such as InfiniBand is required to provide the needed throughput and the lowest latency for GPU-to-GPU communications. As such, InfiniBand has become the preferred interconnect solution for hybrid GPU-CPU systems.

While GPUs have been shown to provide worthwhile performance acceleration, yielding benefits in price/performance and power/performance, several areas of GPU-based clusters could be improved in order to provide higher performance and efficiency. One issue with deploying clusters consisting of multi-GPU nodes involves the interaction between the GPU and the high-speed InfiniBand network, in particular the way GPUs use the network to transfer data between them. Before the NVIDIA GPUDirect technology, a performance issue existed between the user-mode DMA mechanisms used by GPU devices and the InfiniBand RDMA technology: there was no software/hardware mechanism for “pinning” pages of virtual memory to physical pages that could be shared by both the GPU devices and the networking devices.

The new hardware/software mechanism called GPUDirect eliminates the need for the CPU to be involved in the data movement, which not only enables higher GPU-based cluster efficiency but also paves the way for the creation of “floating point services.” GPUDirect is based on a new interface between the GPU and the InfiniBand device that enables both devices to share pinned memory buffers. Data written by a GPU to host memory can therefore be sent immediately by the InfiniBand device (using RDMA semantics) to a remote GPU, without an intermediate copy or CPU involvement.

As a result, GPU communication can now take advantage of the low-latency, zero-copy nature of the InfiniBand RDMA transport for higher application performance and efficiency. InfiniBand RDMA connects remote GPUs with latency characteristics that make it seem as though all of the GPUs are on the same platform. Examples of the performance benefits and more information on GPUDirect can be found at http://www.hpcadvisorycouncil.com/subgroups_hpc_gpu.php.
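
To make the shared pinned-buffer idea concrete, the sketch below allocates page-locked host memory through the CUDA runtime and registers that same buffer with the InfiniBand HCA via libibverbs. It is a simplified illustration of the programming model described above, not NVIDIA’s or the IBTA’s reference code; queue pair setup, connection establishment and error handling are largely omitted, and the buffer size is an arbitrary assumption.

    /*
     * Sketch of the shared pinned-buffer idea behind GPUDirect: page-locked
     * host memory allocated through the CUDA runtime is registered with the
     * InfiniBand HCA, so GPU DMA and HCA RDMA can both use the same buffer
     * without an extra host-side copy. Illustrative only; build with nvcc
     * and link with -libverbs.
     */
    #include <stdio.h>
    #include <cuda_runtime.h>
    #include <infiniband/verbs.h>

    #define BUF_BYTES (16 * 1024 * 1024)

    int main(void)
    {
        /* 1. Open the first RDMA device and allocate a protection domain. */
        int n = 0;
        struct ibv_device **devs = ibv_get_device_list(&n);
        if (!devs || n == 0) { fprintf(stderr, "no RDMA devices\n"); return 1; }
        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        /* 2. Allocate page-locked (pinned) host memory via the CUDA runtime. */
        void *buf = NULL;
        if (cudaHostAlloc(&buf, BUF_BYTES, cudaHostAllocDefault) != cudaSuccess) {
            fprintf(stderr, "cudaHostAlloc failed\n");
            return 1;
        }

        /* 3. Register the same buffer with the HCA for RDMA. */
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, BUF_BYTES,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (!mr) { fprintf(stderr, "ibv_reg_mr failed\n"); return 1; }

        printf("Buffer at %p pinned and registered (rkey=0x%x)\n", buf, mr->rkey);

        /* The GPU can now DMA into 'buf' while the HCA posts RDMA work
         * requests against the same region, with no staging copy in between. */

        ibv_dereg_mr(mr);
        cudaFreeHost(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }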


Gilad Shainer

Member of the IBTA and chairman of the HPC Advisory Council

IBTA Plugfest 19 Wrap Up

April 12th, 2011

The latest IBTA Plugfest took place last week at UNH-IOL in Durham, NH. This event provided an opportunity for participants to measure their products for compliance with the InfiniBand architecture specification as well as interoperability with other InfiniBand products. We were happy to welcome 21 vendors, and at the April 2011 event we tested 22 devices, 235 DDR-rated cables, 227 QDR-rated cables and 66 FDR-rated cables.

New for this latest Plugfest, we added beta testing for QSFP+ FDR cables, which support a 54Gb/s data rate. I’m happy to report that we received a total of 66 cables supporting FDR rates, five of which were active fiber cables. Their performance bodes well for the new data rate supported on the IBTA Roadmap.

Vendor devices and cables successfully passing all required Integrators’ List (IL) Compliance Tests will be listed on the IBTA Integrators’ List and will be granted the IBTA Integrators’ List Logo. We will also have comprehensive interoperability results available documenting the results of heterogeneous device testing using all of the cables submitted at the Plugfest. We’re expecting to have the IBTA Integrators’ List updated in time for ISC’11.

Cable interoperability diagram, April 2011

I can’t believe I’m about to talk about our 20th event, but IBTA Plugfest 20 will be taking place this October. Stay tuned for exact dates. Thank you and congratulations to all of the vendors who participated in our April event and performed so well.

Rupert Dance

Co-chair, IBTA’s Compliance and Interoperability Working Group
