Archive

Posts Tagged ‘IBTA’

Visit the IBTA and OFA at SC13!

November 13th, 2013

Attending SC13? The IBTA will be teaming up once again with the OpenFabrics Alliance (OFA) to participate in a number of conference activities. The organizations will be exhibiting together at booth #4132 – stop by for access to:

  • Hands-on computing cluster demonstrations
  • IBTA cable compliance demonstration
  • IBTA & OFA member company exhibition map and SC13 news
  • Current and prospective member information
  • Information regarding OFA’s 2014 User Day and Developer Workshop

IBTA and OFA will also lead the discussion on the future of I/O architectures for improved application performance and efficiency during several technical sessions:

  • “RDMA: Scaling the I/O Architecture for Future Applications,” an IBTA-moderated session, will discuss what new approaches to I/O architecture could be used to meet Exascale requirements. The session will be moderated by IBTA’s Bill Boas and will feature a discussion among top users of RDMA. The panel session will take place on Wednesday, November 20 from 1:30 p.m. to 3:00 p.m. in room 301/302/303.

  • “Accelerating Improvements in HPC Application I/O Performance and Efficiency,” an OFA Emerging Technologies exhibit, will present ideas to attendees on how incorporating a new framework of I/O APIs may increase application performance and efficiency. This Emerging Technologies exhibit will take place at booth #3547. The OFA will also be giving a short talk on this subject in the Emerging Technologies theatre at booth #3947 on Tuesday, November 19 at 2:50 p.m.

  • OFA member company representatives will further develop ideas discussed in the OFA’s Emerging Technologies exhibit during the Birds of a Feather (BoF) session entitled “Discussing an I/O Framework to Accelerate Improvements in Application I/O Performance.” Moderators Paul Grun of Cray and Sean Hefty of Intel will lead the discussion on how developers and end users can enhance and further encourage the growth of open source I/O software.

Bill Lee

Chair, Marketing Working Group (MWG)

InfiniBand Trade Association

Emulex Joins the InfiniBand Trade Association

October 30th, 2013

Emulex is proud to be a new member of the IBTA.  The IBTA has a great history of furthering the InfiniBand and RDMA over Converged Ethernet (RoCE) specifications, developing an active solution ecosystem and building market momentum around technologies with strong value propositions.  We are excited to be a voting member on the organization’s Steering Committee and look forward to contributing to relevant technical and marketing working groups, as well as participating in IBTA-sponsored Plugfests and other interoperability activities.

Why the IBTA?

Through our experience building high-performance, large-scale, mission-critical networks, we understand the benefits of RDMA.  Since its original implementation as part of InfiniBand back in 1999, RDMA has built a well-proven track record of delivering better application performance, reduced CPU overhead and increased bandwidth efficiency in demanding computing environments.

Due to increased adoption of technologies such as cloud computing, big data analytics, virtualization and mobile computing, more and more commercial IT infrastructures are running into the same challenges that supercomputing was forced to confront: performance bottlenecks, resource utilization and moving large data sets.  In other words, data centers supporting the Fortune 2000, vertical applications for media and entertainment or life sciences, telecommunications and cloud service providers are starting to look a lot more like the data centers at research institutions or systems on the TOP500.  Thus, with the advent of RoCE, we see an opportunity to bring the benefits of RDMA that have been well proven in high performance computing (HPC) markets to mainstream commercial markets.

RoCE is a key enabling technology for converging data center infrastructure and enabling application-centric I/O across a broad spectrum of requirements.  From supercomputing to shared drives, RDMA delivers broad-based benefits through fundamentally more efficient network communications.  Emulex has a data center vision to connect, monitor and manage on a single unified fabric.  We are looking forward to supporting the IBTA, contributing to the advancement of RoCE and helping to bring RDMA to mainstream markets.

Jon Affeld, senior director, marketing alliances at Emulex

IBTA Updates Integrators’ List Following PF23 Compliance & Interoperability Testing

September 25th, 2013

We’re proud to announce the availability of the IBTA April 2013 Combined Cable and Device Integrators’ List – a compilation of results from the IBTA Plugfest 23, during which we conducted the first-ever Enhanced Data Rate (EDR) 100Gb/s InfiniBand standard compliance testing.

The updated Integrators’ List and our newest Plugfest testing are a testament to the IBTA’s commitment to advancing InfiniBand technology and ensuring its interoperability, as all cables and devices on the list successfully passed the required compliance tests and interoperability procedures. Vendors listed may now access the IBTA Integrators’ List promotional materials and a special marketing program for their products.

Plugfest 23 was a huge success, attracting top manufacturers, and it would not have been possible without testing equipment donated by the following vendors: Agilent Technologies, Anritsu, Molex, Tektronix and Wilder Technologies. We are thrilled with the level of participation and the caliber of technology manufacturers who came out and supported the IBTA.

The updated Integrators’ List is a tool used by the IBTA to assure vendors’ customers and end users that manufacturers have met the mark of compliance and interoperability. It is also a method for furthering the InfiniBand specification. The Integrators’ List is published every spring and fall following the bi-annual Plugfest and serves to assist IT professionals, including data center managers and CIOs, with their planned deployments of InfiniBand solutions.

We’ve already begun preparations for Plugfest 24, which will take place October 7-18, 2013 at the University of New Hampshire’s Interoperability Laboratory. For more information, or to register for Plugfest 24, please visit the IBTA Plugfest website.

If you have any questions related to IBTA membership or Integrators’ List, please visit the IBTA website: http://www.infinibandta.org/, or email us: ibta_plugfest@soft-forge.com.

Rupert Dance, IBTA CIWG

IBTA & OFA Join Forces at SC12

November 7th, 2012

Attending SC12? Check out OFA’s Exascale and Big Data I/O panel discussion and stop by the IBTA/OFA booth to meet our industry experts

The IBTA is gearing up for the annual SC12 conference taking place November 10-16 at the Salt Palace Convention Center in Salt Lake City, Utah. We will be joining forces with the OpenFabrics Alliance (OFA) on a number of conference activities and will be exhibiting together at SC12 booth #3630.

IBTA members will participate in the OFA-moderated panel, Exascale and Big Data I/O, which we highly recommend attending if you’re at the conference.  The panel session, moderated by IBTA and OFA member Bill Boas, takes place Wednesday, November 14 at 1:30 p.m. Mountain Time and will discuss drivers for future I/O architectures.

Also be sure to stop by the IBTA and OFA booth #3630 to chat with industry experts regarding a wide range of industry topics, including:

  • Behind the IBTA Integrators’ List
  • High-speed optical connectivity
  • Building and validating OFA software
  • Achieving low latency with RDMA in virtualized cloud environments
  • UNH-IOL hardware testing and interoperability capabilities
  • Utilizing high-speed interconnects for HPC
  • Release 1.3 of IBA Volume 2
  • Peering into a live OFS cluster
  • RoCE in wide area networks
  • OpenFabrics for high-speed SAN and NAS

Experts including Katharine Schmidtke of Finisar, Alan Benner of IBM, Todd Wilde of Mellanox, Rupert Dance of Software Forge, Bill Boas and Kevin Moran of System Fabric Works, and Josh Simons of VMware will be in the booth to answer your questions and discuss topics currently affecting the HPC community.

Be sure to check the SC12 website to learn more about Supercomputing 2012, and stay tuned to the IBTA website and Twitter to follow IBTA’s plans and activities at SC12.

See you there!

New InfiniBand Architecture Specification Open for Comments

October 15th, 2012

After an extensive review process, Release 1.3 of Volume 2 of the InfiniBand Architecture Specification has been approved by our Electro-Mechanical Working Group (EWG). The specification is undergoing final review by the full InfiniBand Trade Association (IBTA) membership and will be available for vendors at Plugfest 22, taking place October 15-26, 2012 at the University of New Hampshire Interoperability Lab in Durham, New Hampshire.

All IBTA working groups and individual members have had several weeks to review and comment on the specification. We are encouraged by the feedback we’ve received and are looking forward to the official release at SC12, taking place November 10-16 in Salt Lake City, Utah.

Release 1.3 is a major overhaul of the InfiniBand Architecture Specification and features important new architectural elements:

  • FDR and EDR signal specification methodologies
  • Analog signal specifications for FDR, which have been verified through Plugfest compliance and interoperability measurements
  • A more efficient 64b/66b encoding method (see the quick comparison after this list)
  • Forward Error Correction coding
  • Improved specification of QSFP-4x and CXP-12x connectors, ports and management interfaces
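
For context on the encoding change, here is the standard line-code arithmetic (my own illustration, not text from the specification): 8b/10b, used through QDR, carries 8 data bits in every 10 bits on the wire, while 64b/66b carries 64 in every 66:

\[
\frac{8}{10} = 80\% \qquad \text{vs.} \qquad \frac{64}{66} \approx 97\%
\]

So at FDR’s 14.0625Gb/s per-lane signaling rate, a 4x link delivers roughly 54Gb/s of usable data rather than losing a fifth of its bandwidth to encoding overhead.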

The new specification also includes significant copy editing and a reorganization into sub-volumes to improve overall readability. The previous specification, Release 1.2.1, was released in November 2007. As Chair of the EWG, I’m pleased with the technical progress made on the InfiniBand Architecture Specification. More importantly, I’m excited about the impact that this new specification release will have for users and developers of InfiniBand technology.

Alan Benner
EWG Chair

InfiniBand at VMworld!

September 2nd, 2011

VMworld 2011 took place this week in sunny Las Vegas, and with over 20,000 attendees, this show has quickly developed into one of the largest enterprise IT events in the world. Virtualization continues to be one of the hottest topics in the industry, providing a great opportunity for InfiniBand vendors to market the wide array of benefits that InfiniBand is enabling in virtualized environments. Several members of the IBTA community were spreading the InfiniBand message; here are a few of note.

On the networking side, Mellanox Technologies showed the latest generation of InfiniBand technology, FDR 56Gb/s. With FDR adapters, switches and cables available today, IT managers can immediately deploy this next-generation technology into their data centers and get instant performance improvements, whether that is leading vMotion performance, the ability to support more virtual machines per server at higher bandwidth per virtual machine, or lower capital and operating expenses from consolidating networking, management and storage I/O onto a one-wire infrastructure.

Fusion-io, a flash-based storage manufacturer that targets data-intensive applications such as databases, virtualization, Memcached and VDI, also made a big splash at VMworld. Their booth featured an excellent demonstration in which low-latency, high-speed InfiniBand networking enabled Fusion-io to show 800 virtual desktops being accessed and displayed across 17 monitors. InfiniBand enabled them to stream over 2,000 bandwidth-intensive HD movies from just eight servers.

Pure Storage, a newcomer in the storage arena, announced their 40Gb/s InfiniBand-based enterprise storage array that targets applications such as databases and VDI. With InfiniBand they are able to reduce latency by more than 8x while increasing performance by 10X.

Isilon, recently acquired by EMC, had a rack of its scale-out storage systems on display in the EMC booth, running 40Gb/s InfiniBand on the back end. These storage systems excel in VDI implementations and are a natural fit for customers implementing a cloud solution where performance, reliability and storage resiliency are vital.

Also exhibiting at VMworld was Xsigo Systems. Xsigo showed their latest Virtual I/O Director, which now includes 40Gb/s InfiniBand; the previous generation used 20Gb/s InfiniBand. With the upgraded bandwidth, Xsigo can now offer customers 12-30X acceleration of I/O-intensive tasks such as vMotion, queries and backup, all while providing dynamic bandwidth allocation per VM or job. In addition, by consolidating the network over a single wire, Xsigo is able to reduce customers’ hardware cost per virtual machine by 85 percent.

The items mentioned above are just a small slice of the excitement that was at VMworld. I’m glad to have seen so many InfiniBand solutions displayed. For more information on InfiniBand in the enterprise, watch for an upcoming webinar series being produced by the IBTA.

Brian Sparks

IBTA Marketing Working Group Co-Chair

HPC Advisory Council Showcases World’s First FDR 56Gb/s InfiniBand Demonstration at ISC’11

July 1st, 2011

The HPC Advisory Council, together with ISC’11, showcased the world’s first demonstration of FDR 56Gb/s InfiniBand in Hamburg, Germany, June 20-22. The HPC Advisory Council is hosting and organizing new technology demonstrations at leading HPC conferences around the world to highlight new solutions that will influence future HPC systems in terms of performance, scalability and utilization.

The 56Gb/s InfiniBand demonstration connected participating exhibitors on the ISC’11 showroom floor as part of the HPC Advisory Council ISCnet network. The ISCnet network provided organizations with fast interconnect connectivity between their booths.

The FDR InfiniBand network included dedicated and distributed clusters, as well as a Lustre-based storage system. Multiple applications were demonstrated, including high-speed visualization applications using car models courtesy of Peugeot Citroën.

The installation of the fiber cables (we used 20 and 50 meter cables) was completed a few days before the show opened, and we placed the cables on the floor, protecting them with wooden bridges. The clusters, Lustre and applications were set up the day before, and everything ran perfectly.

You can see the network architecture of the ISCnet FDR InfiniBand demo below. We combined MPI traffic and storage traffic (Lustre) on the same fabric, utilizing the new bandwidth capabilities to provide a consolidated, high-performance fabric for the high-speed rendering and visualization application demonstration.

[Diagram: network architecture of the ISCnet FDR InfiniBand demo]

The following HPC Council member organizations contributed to the FDR 56Gb/s InfiniBand demo and I would like to personally thank each of them: AMD, Corning Cable Systems, Dell, Fujitsu, HP, MEGWARE, Mellanox Technologies, Microsoft, OFS, Scalable Graphics, Supermicro and Xyratex.

Regards,

Gilad Shainer

Member of the IBTA and chairman of the HPC Advisory Council

ISC’11 Highlights: ISCnet to Feature FDR InfiniBand

June 13th, 2011

ISC’11, taking place in Hamburg, Germany from June 19-23, will include major new product introductions and groundbreaking talks from users worldwide. We are happy to call your attention to the fact that this year’s conference will feature the world’s first large-scale demonstration of next-generation FDR InfiniBand technology.

With link speeds of 56Gb/s, FDR InfiniBand uses the latest version of the OpenFabrics Enterprise Distribution (OFED™) and delivers an increase of nearly 80% in data rate compared to previous InfiniBand generations.
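
As a rough check on that figure (my own arithmetic, not taken from the announcement): QDR links signal at 40Gb/s but use 8b/10b encoding, while FDR links signal at 56Gb/s with the far lighter 64b/66b encoding, so the usable bandwidths compare roughly as

\[
40 \times \tfrac{8}{10} = 32\ \text{Gb/s} \qquad \text{vs.} \qquad 56 \times \tfrac{64}{66} \approx 54\ \text{Gb/s},
\]

an improvement of roughly 70-75% depending on how the small 64b/66b overhead is counted, which is presumably the basis for the “nearly 80%” figure.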

Running on the ISC’11 network “ISCnet,” the multi-vendor FDR 56Gb/s InfiniBand demo will provide exhibitors with fast interconnect connectivity between their booths on the show floor and enable them to demonstrate a wide variety of production and experimental HPC applications, as well as new developments and products.

The demo will also continue to show the processing efficiency of RDMA and the microsecond latency of OFED, which reduce server costs, increase productivity and improve customers’ ROI. ISCnet will be the fastest open commodity network demonstration ever assembled.

If you are heading to the conference, be sure to visit the booths of IBTA and OFA members who are exhibiting, as well as the many users of InfiniBand and OFED. The last TOP500 list (published in November 2010) showed that nearly half of the most powerful computers in the world are using these technologies.

InfiniBand Trade Association Members Exhibiting

  • Bull - booth 410
  • Fujitsu Limited - booth 620
  • HP - booth 430
  • IBM - booth 231
  • Intel - booths 530+801
  • Lawrence Livermore National Laboratory - booth 143
  • LSI - booth 341
  • Leoni - booth 842
  • Mellanox - booth 331
  • Molex - booth 730 Co-Exhibitor of Stordis
  • NetApp - booth 743
  • Obsidian - booth 151
  • QLogic - booth 240
  • SGI - booth 330

OpenFabrics Alliance Members Exhibiting

  • AMD - booth 752
  • APPRO - booth 751 Co-Exhibitor of AMD
  • Chelsio - booth 702
  • Cray - booth 650
  • DataDirect Networks - booth 550
  • HP - booth 430
  • IBM - booth 231
  • Intel - booths 530+801
  • Lawrence Livermore National Laboratory - booth 143
  • LSI - booth 341
  • Mellanox - booth 331
  • Microsoft - booth 832
  • NetApp - booth 743
  • Obsidian - booth 151
  • QLogic - booth 240
  • SGI - booth 330

Also, don’t miss the HPC Advisory Council Workshop on June 19 at ISC’11 that includes talks on the following hot topics related to InfiniBand and OFED:

  • GPU Access and Acceleration
  • MPI Optimizations and Futures
  • Fat-Tree Routing
  • Improved Congestion Management
  • Shared Memory models
  • Lustre Release Update and Roadmap 

Go to http://www.hpcadvisorycouncil.com/events/2011/european_workshop/index.php to learn more and register.

For those who are InfiniBand, OFED and Lustre users, don’t miss the important announcement and breakfast on June 22 regarding the coming together of the worldwide Lustre vendor and user community. This announcement is focused on ensuring the continued development, upcoming releases and consolidated support for Lustre; details and location will be available on site at ISC’11. This is your opportunity to meet with representatives of OpenSFS, HPCFS and EOFS to learn how the whole community is working together.

Looking forward to seeing several of you at the show!

Brian Sparks
IBTA & OFA Marketing Working Groups Co-Chair
HPC Advisory Council Media Relations

NVIDIA GPUDirect Technology – InfiniBand RDMA for Accelerating GPU-Based Systems

May 11th, 2011

As a member of the IBTA and chairman of the HPC Advisory Council, I wanted to share some information on the important role of InfiniBand in emerging hybrid (CPU-GPU) clustering architectures.

The rapid increase in the performance of graphics hardware, coupled with recent improvements in its programmability, has made graphics accelerators a compelling platform for computationally demanding tasks in a wide variety of application domains. Due to the great computational power of the GPU, the GPGPU method has proven valuable in various areas of science and technology and the hybrid CPU-GPU architecture is seeing increased adoption.

GPU-based clusters are being used to perform compute-intensive tasks like finite element computations, Computational Fluid Dynamics, Monte Carlo simulations, etc. Several of the world-leading InfiniBand supercomputers are using GPUs in order to achieve the desired performance. Since GPUs provide very high core counts and floating-point throughput, a high-speed networking interconnect such as InfiniBand is required to provide the needed throughput and the lowest latency for GPU-to-GPU communications. As such, InfiniBand has become the preferred interconnect solution for hybrid GPU-CPU systems.

While GPUs have been shown to provide worthwhile performance acceleration, yielding benefits in both price/performance and power/performance, several areas of GPU-based clusters could be improved in order to provide higher performance and efficiency. One issue with deploying clusters consisting of multi-GPU nodes involves the interaction between the GPU and the high-speed InfiniBand network - in particular, the way GPUs use the network to transfer data between them. Before the NVIDIA GPUDirect technology, a performance issue existed between the user-mode DMA mechanisms used by GPU devices and the InfiniBand RDMA technology. The issue was the lack of a software/hardware mechanism for “pinning” pages of virtual memory to physical pages that could be shared by both the GPU devices and the networking devices.

The new hardware/software mechanism called GPUDirect eliminates the need for the CPU to be involved in the data movement, and it not only enables higher GPU-based cluster efficiency but also paves the way for the creation of “floating point services.” GPUDirect is based on a new interface between the GPU and the InfiniBand device that enables both devices to share pinned memory buffers. Data written by a GPU to host memory can therefore be sent immediately by the InfiniBand device (using RDMA semantics) to a remote GPU, without an extra staging copy.
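
To make the shared pinned-buffer idea concrete, here is a minimal host-side sketch in C (my own illustration, not NVIDIA’s GPUDirect code or an IBTA reference implementation; the file name, buffer size and build line are assumptions). It registers a single host buffer with both the CUDA runtime and an InfiniBand HCA through the standard verbs API; queue pair setup, connection establishment and the actual RDMA post are omitted.

    /* Illustrative sketch: pin one host buffer for both CUDA and an InfiniBand
     * HCA so that GPU<->host copies and RDMA can share the same pages.
     * Assumed build: gcc gpudirect_sketch.c -lcudart -libverbs */
    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        const size_t len = 1 << 20;              /* 1 MiB staging buffer */
        void *buf = NULL;
        if (posix_memalign(&buf, 4096, len))     /* page-aligned host memory */
            return 1;

        /* Pin the buffer for CUDA so device<->host copies can run as DMA. */
        if (cudaHostRegister(buf, len, cudaHostRegisterDefault) != cudaSuccess)
            return 1;

        /* Open the first HCA and register the *same* buffer for RDMA. */
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0)
            return 1;
        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (!mr)
            return 1;

        /* A GPU kernel's output copied into buf (e.g. via cudaMemcpyAsync) can
         * now be posted directly as an RDMA write; no intermediate copy between
         * separately pinned CUDA and verbs buffers is required. */
        printf("buffer %p registered: lkey=0x%x rkey=0x%x\n", buf, mr->lkey, mr->rkey);

        ibv_dereg_mr(mr);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        cudaHostUnregister(buf);
        free(buf);
        return 0;
    }

Before GPUDirect, the CUDA driver and the InfiniBand driver each required their own separately pinned regions, forcing an extra host-side copy (and CPU involvement) on every transfer; sharing one pinned region is what removes that copy.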

As a result, GPU communication can now utilize the low-latency and zero-copy advantages of the InfiniBand RDMA transport for higher application performance and efficiency. InfiniBand RDMA connects remote GPUs with latency characteristics that make it seem as if all of the GPUs are on the same platform. Examples of the performance benefits and more information on GPUDirect can be found at http://www.hpcadvisorycouncil.com/subgroups_hpc_gpu.php.

Gilad Shainer

Member of the IBTA and chairman of the HPC Advisory Council

IBTA Plugfest 19 Wrap Up

April 12th, 2011

The latest IBTA Plugfest took place last week at UNH-IOL in Durham, NH. This event provided an opportunity for participants to measure their products for compliance with the InfiniBand Architecture Specification as well as interoperability with other InfiniBand products. We were happy to welcome 21 vendors, and we tested 22 devices, 235 DDR-rated cables, 227 QDR-rated cables and 66 FDR-rated cables at the April 2011 event.

New for this latest Plugfest, we added beta testing for QSFP+ FDR cables, which support a 54Gb/s data rate. I’m happy to report that we received a total of 66 cables supporting FDR rates, 5 of which were active fiber cables. Their performance bodes well for the new data rate supported on the IBTA Roadmap.

Vendor devices and cables successfully passing all required Integrators’ List (IL) compliance tests will be listed on the IBTA Integrators’ List and will be granted the IBTA Integrators’ List Logo. We will also publish comprehensive interoperability results documenting the heterogeneous device testing performed with all of the cables submitted at the Plugfest. We’re expecting to have the IBTA Integrators’ List updated in time for ISC’11.

[Diagram: April 2011 Plugfest cable interoperability testing]

I can’t believe I’m about to talk about our 20th event, but IBTA Plugfest 20 will be taking place this October. Stay tuned for exact dates. Thank you and congratulations to all of the vendors who participated in our April event and performed so well.

Rupert Dance

Co-chair, IBTA’s Compliance and Interoperability Working Group
