IBTA Updates Integrators’ List Following PF23 Compliance & Interoperability Testing

September 25th, 2013

We’re proud to announce the availability of the IBTA April 2013 Combined Cable and Device Integrators’ List – a compilation of results from the IBTA Plugfest 23, during which we conducted the first-ever Enhanced Data Rate (EDR) 100Gb/s InfiniBand standard compliance testing.

IBTA’s updated Integrators’ List and our newest round of Plugfest testing are a testament to the IBTA’s commitment to advancing InfiniBand technology and ensuring its interoperability: all cables and devices on the list successfully passed the required compliance tests and interoperability procedures. Listed vendors may now access the IBTA Integrators’ List promotional materials and a special marketing program for their products.

Plugfest 23 was a huge success, attracting top manufacturers, and it would not have been possible without donated testing equipment from the following vendors: Agilent Technologies, Anritsu, Molex, Tektronix and Wilder Technologies. We are thrilled with the level of participation and the caliber of technology manufacturers who came out and supported the IBTA.

The updated Integrators’ List is a tool used by the IBTA to assure vendors’ customers and end-users that manufacturers have met the mark of compliance and interoperability. It is also a method for furthering the InfiniBand specification. The Integrators’ List is published every spring and fall following the bi-annual Plugfest and serves to assist IT professionals, including data center managers and CIOs, with their planned deployments of InfiniBand solutions.

We’ve already begun preparations for Plugfest 24, which will take place October 7-18, 2013 at the University of New Hampshire’s Interoperability Laboratory. For more information, or to register for Plugfest 24, please visit the IBTA Plugfest website.

If you have any questions related to IBTA membership or the Integrators’ List, please visit the IBTA website: http://www.infinibandta.org/, or email us at ibta_plugfest@soft-forge.com.

Rupert Dance, IBTA CIWG

Plugfest #23: InfiniBand Compliance & Interoperability Testing

March 28th, 2013

The IBTA is ramping up for our 23rd Plugfest (Plugfest 23), taking place April 1-12, 2013 at the University of New Hampshire’s Interoperability Laboratory (UNH-IOL). The bi-annual Plugfest event provides cable and device compliance testing against the current InfiniBand Architecture specifications, as well as interoperability testing with other InfiniBand products.

The IBTA Electro-Mechanical Working Group (EWG) has prepared a draft of the next InfiniBand Architecture Specification, Release 1.3.1 of Volume 2, which specifies the parameters for Enhanced Data Rate (EDR) InfiniBand products. The InfiniBand™ specification defines the interconnect technology for servers and storage that changes the way data centers are built, deployed and managed. Many vendors have submitted EDR cables for testing, and we expect Plugfest 23 to be the preview of the new EDR generation of products. IBTA has added FDR Receiver testing to this year’s Plugfest, which depends on Agilent, Anritsu and Tektronix test equipment and will lead the way for EDR device testing moving forward.

“The IBTA Plugfest has long been a critical event for InfiniBand vendors and users, and Plugfest 23 will be particularly important as the technology continues to advance,” said Dr. Alan Benner, system & network design engineer at IBM Corporation. “I’m looking forward to working with IBTA and our testing equipment partners to further the testing of InfiniBand-compliant products.”

Vendor devices and cables successfully passing all required Integrators’ List Compliance Tests and Interoperability procedures at Plugfest 23 will be listed on the IBTA Integrators’ List, updated twice per year, and will be granted the IBTA Integrators’ List Logo. The IBTA’s compliance and interoperability program provides resources for end-users who are building InfiniBand clusters, drives adoption of RDMA technology and provides assurance to participating vendors’ customers and end-users.

To learn more about Plugfest 23, or to find out more about IBTA membership benefits, please visit the IBTA website or contact ibta_plugfest@soft-forge.com.


The IBTA wishes to thank Agilent, Anritsu, Molex and Tektronix for providing test equipment for the IBTA Plugfest. The equipment is provided free of charge for the benefit of the InfiniBand community, and IBTA Plugfests would not be possible without it.


Rupert Dance, Software Forge


Observations from SC12

December 3rd, 2012

The week of Supercomputing went by quickly and resulted in many interesting discussions around supercomputing’s role in both HPC environments and enterprise data centers. Now that we’re back to work, we’d like to reflect on the successful event. The conference this year drew a diverse set of attendees from many countries, with major participation from top universities, which seemed to be on the leading edge of Remote Direct Memory Access (RDMA) and InfiniBand deployments.

Overall, we saw InfiniBand and OpenFabrics technologies continue their strong presence at the conference. InfiniBand dominated the TOP500 list and is still the #1 interconnect of choice for the world’s fastest supercomputers. The TOP500 list also demonstrated that InfiniBand is leading the way to efficient computing, which benefits not only high performance computing but enterprise data center environments as well.

We also engaged in several discussions around RDMA. Attendees, analysts in particular, were interested in new products using RDMA over Converged Ethernet (RoCE) and their availability, and were impressed that Microsoft Windows Server 2012 natively supports all three RDMA transports, including InfiniBand and RoCE. Another interesting development is InfiniBand customer Microsoft Windows Azure, whose increased efficiency placed it at #165 on the TOP500 list.

IBTA & OFA Booth at SC12

IBTA’s Electro-Mechanical Working Group Chair, Alan Benner, discussing the new InfiniBand specification with attendees at the IBTA & OFA SC12 booth

IBTA’s release of the new InfiniBand Architecture Specification 1.3 generated a lot of buzz among attendees, press and analysts. IBTA’s Electro-Mechanical Working Group Chair, Alan Benner, was one of our experts at the booth and drew a large crowd of people interested in the InfiniBand roadmap and his projections around the availability of the next specification, which is expected to include EDR and become available in draft form in April 2013.

SC12 provided a great opportunity for those in high performance computing to connect in person and engage in discussions around hot industry topics; this year the focus was on Software Defined Networking (SDN), OpenSM, and the pioneering efforts of both the IBTA and OFA. We enjoyed conversations with the exhibitors and attendees who visited our booth, and a special thank you goes to all of the RDMA experts who participated in our booth sessions: Bill Boas, Cray; Katharine Schmidtke, Finisar; Alan Benner, IBM; Todd Wilde, Mellanox; Rupert Dance, Software Forge; Kevin Moran, System Fabric Works; and Josh Simons, VMware.

Rupert Dance, Software Forge

IBTA & OFA Join Forces at SC12

November 7th, 2012

Attending SC12? Check out OFA’s Exascale and Big Data I/O panel discussion and stop by the IBTA/OFA booth to meet our industry experts

The IBTA is gearing up for the annual SC12 conference taking place November 10-16 at the Salt Palace Convention Center in Salt Lake City, Utah. We will be joining forces with the OpenFabrics Alliance (OFA) on a number of conference activities and will be exhibiting together at SC12 booth #3630.

IBTA members will participate in the OFA-moderated panel, Exascale and Big Data I/O, which we highly recommend attending if you’re at the conference.  The panel session, moderated by IBTA and OFA member Bill Boas, takes place Wednesday, November 14 at 1:30 p.m. Mountain Time and will discuss drivers for future I/O architectures.

Also be sure to stop by the IBTA and OFA booth #3630 to chat with industry experts regarding a wide range of industry topics, including:

• Behind the IBTA Integrators’ List
• High speed optical connectivity
• Building and validating OFA software
• Achieving low latency with RDMA in virtualized cloud environments
• UNH-IOL hardware testing and interoperability capabilities
• Utilizing high-speed interconnects for HPC
• Release 1.3 of IBA Vol2
• Peering into a live OFS cluster
• RoCE in Wide Area Networks
• OpenFabrics for high speed SAN and NAS

Experts including Katharine Schmidtke, Finisar; Alan Benner, IBM; Todd Wilde, Mellanox; Rupert Dance, Software Forge; Bill Boas and Kevin Moran, System Fabric Works; and Josh Simons, VMware, will be in the booth to answer your questions and discuss topics currently affecting the HPC community.

Be sure to check the SC12 website to learn more about Supercomputing 2012, and stay tuned to the IBTA website and Twitter to follow IBTA’s plans and activities at SC12.

See you there!

New InfiniBand Architecture Specification Open for Comments

October 15th, 2012

After an extensive review process, Release 1.3 of Volume 2 of the InfiniBand Architecture Specification has been approved by our Electro-Mechanical Working Group (EWG). The specification is undergoing final review by the full InfiniBand Trade Association (IBTA) membership and will be available for vendors at Plugfest 22, taking place October 15-26, 2012 at the University of New Hampshire Interoperability Laboratory in Durham, New Hampshire.

All IBTA working groups and individual members have had several weeks to review and comment on the specification. We are encouraged by the feedback we’ve received and are looking forward to the official release at SC12, taking place November 10-16 in Salt Lake City, Utah.

Release 1.3 is a major overhaul of the InfiniBand Architecture Specification and features important new architectural elements:
• FDR and EDR signal specification methodologies
• Analog signal specifications for FDR, which have been verified through Plugfest compliance and interoperability measurements
• A more efficient 64b/66b encoding method (see the sketch after this list)
• Forward Error Correction coding
• Improved specification of QSFP-4x and CXP-12x connectors, ports and management interfaces
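
The move from 8b/10b to 64b/66b encoding is one of the more consequential changes. As a rough illustration of what it buys, here is a minimal Python sketch of the line-coding overhead; the per-lane signaling rates are the commonly cited QDR and FDR figures, included here as illustrative assumptions rather than normative spec values:

```python
def effective_rate_gbps(signaling_gbps, payload_bits, coded_bits, lanes=4):
    """Data rate remaining after line-coding overhead, for a 4x link."""
    return signaling_gbps * (payload_bits / coded_bits) * lanes

# 8b/10b (used through QDR) sends 8 payload bits as 10 line bits: 80% efficient.
qdr = effective_rate_gbps(10.0, 8, 10)       # QDR signals at 10 Gb/s per lane
# 64b/66b sends 64 payload bits as 66 line bits: ~97% efficient.
fdr = effective_rate_gbps(14.0625, 64, 66)   # FDR signals at 14.0625 Gb/s per lane

print(f"QDR 4x effective data rate: {qdr:.2f} Gb/s")  # 32.00
print(f"FDR 4x effective data rate: {fdr:.2f} Gb/s")  # ~54.55
```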

The new specification also received significant copy editing and was reorganized into sub-volumes to improve overall readability. The previous specification, Release 1.2.1, was released in November 2007. As Chair of the EWG, I’m pleased with the technical progress made on the InfiniBand Architecture Specification. More importantly, I’m excited about the impact that this new specification release will have for users and developers of InfiniBand technology.

Alan Benner
EWG Chair

InfiniBand as Data Center Communication Virtualization

August 1st, 2012

Last month, the Taneja Group released a report on InfiniBand’s role in the data center, “InfiniBand’s Data Center March,” confirming what members of the IBTA have known for a while: InfiniBand is expanding its role in the enterprise.

Mike Matchett, Sr. Analyst at Taneja Group, recently published a blog post on the industry report. In it, Mike summarizes the growing InfiniBand market and the benefits he sees in adopting InfiniBand. Here is an excerpt from his blog:

Recently we posted a new market assessment of InfiniBand and its growing role in enterprise data centers, so I’ve been thinking a lot about low-latency switched fabrics and what they imply for IT organizations. I’d like to add a more philosophical thought about the optimized design of InfiniBand and its role as data center communication virtualization.

From the start, InfiniBand’s design goal was to provide a high performing messaging service for applications even if they existed in entirely separate address spaces across servers. By architecting from the “top down” rather than layering up from something like Ethernet’s byte stream transport, InfiniBand is able to deliver highly efficient and effective messaging between applications. In fact, the resulting messaging service can be thought of as “virtual channel IO” (I’m sure much to the delight of mainframers).

To read the full blog post, check out the Taneja blog. And be sure to read the full analyst report from Taneja Group, published in July.

Bill Lee
IBTA Marketing Working Group Chair


InfiniBand’s Data Center March

July 18th, 2012

Today’s enterprise data center is challenged with managing growing data, hosting denser computing clusters, and meeting increasing performance demands. As IT architects work to design efficient solutions for Big Data processing, web-scale applications, elastic clouds, and the virtualized hosting of mission-critical applications, they are realizing that key infrastructure design “patterns” include scale-out compute and storage clusters, switched fabrics, and low-latency I/O.

This looks a lot like what the HPC community has been pioneering for years - leveraging scale-out compute and storage clusters with high-speed low-latency interconnects like InfiniBand. In fact, InfiniBand has now become the most widely used interconnect among the top 500 supercomputers (according to www.TOP500.org).  It has taken a lot of effort to challenge the entrenched ubiquity of Ethernet, but InfiniBand has not just survived for over a decade, it has consistently delivered on an aggressive roadmap - and it has an even more competitive future. 

The adoption of InfiniBand in a data center core environment not only supercharges network communications but, by simplifying and converging cabling and switching, reduces operational risk and can even reduce overall cost. Bolstered by technologies that should ease migration concerns, like RoCE and virtualized protocol adapters, we expect to see InfiniBand further expand into mainstream data center architectures, not only as a back-end interconnect in high-end storage systems but also as the main interconnect across the core.

For more details, be sure to check out Taneja Group’s latest report “InfiniBand’s Data Center March” - available here.

Mike Matchett
Sr. Analyst, Taneja Group 
mike.matchett@tanejagroup.com
http://www.tanejagroup.com


InfiniBand Most Used Interconnect on the TOP500

June 21st, 2012

The TOP500 list was released this week, ranking the fastest supercomputers in the world. We at the IBTA were excited to see InfiniBand’s presence on the list grow, making it the most used interconnect on the list for the first time. Clearly, InfiniBand is the interconnect of choice for today’s compute-intensive systems! As the chart below demonstrates, InfiniBand’s adoption rate has grown significantly, outpacing all of the other options.

[Chart: interconnect share of TOP500 systems over time]

We were also pleased to see that, since the last report six months ago, FDR has increased the number of systems it connects tenfold, making it the fastest growing interconnect on the list.

The TOP500 list notes that InfiniBand connects eight of the 20 Petascale systems on the list, and that InfiniBand-connected systems boast the highest performance growth rates on the list.  Petascale systems on the TOP500 list favored InfiniBand because of its scalability and the resulting computing efficiency. The graph below illustrates the performance trends showing how supercomputing depends on InfiniBand to achieve the highest performance.

[Chart: TOP500 performance trends per interconnect]

The TOP500 list demonstrates what the IBTA and the InfiniBand community have already known - InfiniBand is a technology that has changed the face of HPC and, we believe, is having the same effect on the enterprise data center. Below are some additional stats referenced on the TOP500 list.

• InfiniBand connects 25 of the 30 most compute-efficient systems, including the top 2
• InfiniBand-based system performance grew 69% from June ‘11 to June ‘12 (a quick doubling-time sketch follows below)
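
As an aside, here is a minimal sketch of what that growth rate would imply if sustained; this is purely illustrative, since the list reports a single year-over-year figure:

```python
import math

annual_growth = 0.69  # the June '11 to June '12 growth figure quoted above

# With compound growth, performance doubles once (1 + g)^t = 2,
# so t = ln(2) / ln(1 + g).
doubling_time_years = math.log(2) / math.log(1 + annual_growth)
print(f"Implied doubling time: {doubling_time_years:.2f} years")  # ~1.32
```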

Want to learn more about the TOP500, or how InfiniBand fared? Check out the IBTA’s press release on the news.

Bill Lee, IBTA Marketing Working Group Co-Chair


InfiniBand at Interop

May 29th, 2012

This month, IBTA member companies attended the Interop Conference 2012 in Las Vegas. As news from the event streamed in and demos began on-site, we were excited to see that InfiniBand and RDMA were making headlines at this traditionally datacenter-focused event. Microsoft, FusionIO and Mellanox demoed a setup with Windows Server 2012 Beta and SMB 3.0 that illustrated remarkable remote file performance using SMB Direct (SMB over RDMA), delivering 5.8 Gbytes per second from a single network port. The demo combined Intel Romley motherboards, each with two 8-core CPUs, the faster PCIe Gen3 bus, four FusionIO ioDrive 2 drives rated at 1.5 Gbytes/sec each (putting the storage ceiling right around the 5.8 Gbytes/sec observed on the wire), and the latest Mellanox ConnectX-3 InfiniBand network adapters. Microsoft’s Jose Barreto noted in his TechNet coverage of the demo, pointing to the handy results table that accompanied it:

“You can’t miss how RDMA improves the numbers for % Privileged CPU utilization, fulfilling the promise of low CPU utilization and low number of cycles per byte. The comparison between traditional, non-RDMA 10GbE and InfiniBand FDR for the first workload shows the most impressive contrast: over 5 times the throughput with about half the CPU utilization.”

[Table: SMB Direct performance results comparing non-RDMA 10GbE and InfiniBand FDR]
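
Taking the quoted ratios at face value, here is a minimal sketch of the cycles-per-byte comparison, assuming “over 5 times the throughput” and “about half the CPU utilization” are exactly 5x and 0.5x:

```python
# Cycles spent per byte moved scale with CPU utilization divided by
# throughput, so the two quoted ratios combine multiplicatively.
def relative_cycles_per_byte(throughput_ratio, cpu_ratio):
    return cpu_ratio / throughput_ratio

r = relative_cycles_per_byte(throughput_ratio=5.0, cpu_ratio=0.5)
print(f"InfiniBand FDR spends ~{r:.2f}x the cycles per byte of 10GbE")
# prints ~0.10x, i.e. roughly a tenfold reduction in cycles per byte
```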

If you’re interested in learning more about the demo, check out Jose’s presentation at the conference; slides are available on his blog. Also if you want to learn more about RDMA, don’t forget to swing by the IBTA’s RDMA Over Converged Ethernet (RoCE) section of the website.

We’re happy to see RDMA getting the recognition it deserves, and we look forward to seeing more coverage resulting from Interop in the coming days, as well as future discussions around RoCE and how InfiniBand solutions can be deployed in the enterprise.

Bill Lee
Mellanox


VMware: InfiniBand and RDMA Better Than Ethernet

April 25th, 2012

Last month, VMware’s Josh Simons attended the OpenFabrics Alliance User and Developer Workshop in Monterey, CA. While at the event, Josh sat down with InsideHPC to discuss VMware’s current play in the HPC space, big data and the company’s interest in InfiniBand and RDMA. Josh believes that by adopting RDMA, VMware can better manage low-latency issues in the enterprise.

Josh noted that RDMA over an InfiniBand device aids virtualization: VMware is seeing live-migration times shrink in virtualization features such as vMotion, and the company is also seeing CPU savings thanks to the efficient RDMA applications it has deployed.

Greg Ferro also posted on The Ethereal Mind blog about how VMware believes InfiniBand and RDMA are better than Ethernet alone. According to Greg:

“Good InfiniBand networks have latency measured in hundreds of nanoseconds and much lower impact on system CPU because InfiniBand uses RDMA to transfer data. RDMA (Remote Direct Memory Access) means that data is transferred from memory location to memory location thus removing the encapsulation overhead of Ethernet and IP (that’s as short as I can make that description).”

To read the full text of Greg Ferro’s blog post, click here.

To watch Josh’s interview with InsideHPC, or to check out the other presentations from the OFA workshop, head over to the InsideHPC workshop page. Presentations are also available for download on the OFA website.

Brian Sparks
IBTA Marketing Working Group Co-Chair
