Statement from QLogic

February 6th, 2012

Recently, QLogic Corporation sold its InfiniBand business unit to Intel Corporation. QLogic continues to focus on and expand its Ethernet product line, including RoCE, iWARP and other protocols over Ethernet. QLogic remains committed to its close relationships and positions within the IBTA.

 
Skip Jones
IBTA Marketing Working Group Co-Chair


2012 Outlook from the MWG Chairs

January 20th, 2012

As we begin 2012, the InfiniBand Trade Association is not only looking back on our successes in 2011, but also mapping out our areas for growth in the coming year. In 2011, the IBTA:

And this is just a slice of 2011!

Exciting things are to come in 2012, including an update to our minibook, more discussions and events around RoCE, and so much more. As we look to the future of InfiniBand, we want to broaden the InfiniBand story beyond HPC into the enterprise, highlighting real-life deployments and showcasing the great work of our members. We know that various market verticals, such as financial services, cloud computing and manufacturing, utilize InfiniBand in their storage solutions, and this year we plan to increase the visibility of these solutions by reaching out to you, our members, and spreading your success stories.

We look forward to the year ahead with the IBTA and the continued support and participation of our members.

Happy New Year!

Brian Sparks and Skip Jones

IBTA MWG Chairs


3M Gearing Up for SC11

November 10th, 2011

With the SC11 show less than a week away, we at 3M are gearing up and getting excited about what we have to share with the IBTA community. The high-speed twin axial cable and active optical cable (AOC) teams haven’t wasted any time, quickly identifying ways to contribute to the working groups and bringing the best of 3M processes and innovation to the table. Coupled with access to IBTA’s resources, our support and rapidly growing portfolio of high-speed products will continue to expand and evolve.

This week, our 3M engineers are at IBTA Plugfest #20 with a full suite of complementary copper and fiber products for FDR, QDR and DDR rates, in lengths ranging from 0.5 meters up to 100 meters. And this momentum continues as we head into SC11.

3M is a sponsor of the IBTA/OFA booth, where we’ll be showcasing our various high-speed products for the HPC segment. What we have to offer in this space is truly differentiated, and we believe it can meet the most demanding of our customers’ needs.

For example, our line of copper QSFP+ assemblies is made with 3M™ Twin Axial Cable technology, which lets the cable assembly easily bend and fold, allowing the cable to achieve a very small bend radius (2.5mm) with little to no impact on electrical performance.

The cable is also lightweight and low profile (flat ribbon format), and together with its unique flexibility, it enables very efficient routing and improved airflow. Imagine folding the cable at a right angle right off the backshell where it exits the switch. Honestly, it’s a bit hard to envision without seeing it in action, so check out the cable and its capabilities on our YouTube page!

We’ll also be leveraging our presence at SC11 to showcase our AOC assemblies - among the lowest-power AOCs available on the market. Matching the flexibility of the 3M Twin Ax assemblies, our AOCs utilize a low-loss, bend-insensitive fiber, meaning the flexibility and high-performance benefits of 3M copper QSFP+ assemblies extend to the fiber side as well.

These and other 3M cables will be put to the test in the 40 Gbps RDMA live demonstration within the IBTA/OFA booth. If you’re going to be at SC11, don’t miss it: an extraordinary 40 Gbps RDMA demonstration spanning 6,000 miles, farther than a trip from Seattle to Paris!
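To get a feel for why RDMA at that distance is remarkable, here is some back-of-the-envelope arithmetic (ours, not from the demo team), assuming signals propagate through fiber at roughly two-thirds the speed of light:

```latex
% One-way propagation delay over ~6,000 miles (~9,700 km) of fiber
t_{\mathrm{one\,way}} \approx \frac{9.7 \times 10^{6}\ \mathrm{m}}{2 \times 10^{8}\ \mathrm{m/s}} \approx 48\ \mathrm{ms}

% Bandwidth-delay product at 40 Gb/s over the ~97 ms round trip
\mathrm{BDP} \approx 40\ \mathrm{Gb/s} \times 0.097\ \mathrm{s} \approx 3.9\ \mathrm{Gb} \approx 485\ \mathrm{MB}
```

Keeping a 40 Gbps pipe full over that round trip means holding roughly half a gigabyte in flight at all times, which is why long-haul RDMA demonstrations are anything but routine.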

3M will be in Seattle all week during SC11 with a full agenda, so make sure you stop by the IBTA/OFA booth and check out our latest technologies. We’ll also be meeting with customers one-on-one in our meeting suite at the Grand Hyatt. The countdown is on and we look forward to seeing you at the show. Feel free to e-mail us for more information or to schedule time to chat at SC11.

Shalou Dhamija, sdhamija@3m.com

Jeff Bullion, jbullion@3m.com


SC11 Just One Week Away; Great Content Planned for IBTA & OFA Booth #6010

November 7th, 2011

With SC11 less than a week away, there’s lots of buzz around the forthcoming November release of the TOP500 list. Will Japan’s K supercomputer stay at the top? Where will China’s Sunway Bluelight place? How many systems with GPUs will we see?

The TOP500 list plays into SC11’s unofficial theme of Big Data. Nicole Hemsoth of HPCwire released an article last week providing highlights of what you can expect to see at SC11. Nicole cites John Johnson, conference chair, who says that this year the supercomputing community is “being called upon to rise to the data challenge and develop methods for dealing with the exponential growth of data and strategies for analyzing and storing large data sets.”

Nicole goes on to highlight a number of presentations and sessions being given at SC11 focused on the problems and new developments spawned by Big Data and technical or scientific computing.

The technical working groups of the IBTA and OFA are all about addressing the problems and new developments spawned by data challenges in high performance computing - and translating those technologies into meaningful solutions for the enterprise data center.

We have joined forces with several member companies as well as SC11’s SCinet and Energy Sciences Network (ESnet) and will be showcasing a “world’s first” demonstration of Remote Direct Memory Access (RDMA) protocols over a 40 Gbps WAN. Watch here for more details to be released this Monday, Nov. 14.

We will also be featuring presentations from member companies on a full range of topics detailed on this site.

Be sure to add the IBTA/OFA booth #6010 to your list of must-see booths, and watch this space for live updates from the show.


Brian Sparks

IBTA Marketing Working Group Co-Chair


Why I/O is Worth a Fresh Look

October 5th, 2011

In September, I had the privilege of working with my friend and colleague, Paul Grun of System Fabric Works (SFW), on the first webinar in a four-part series, “Why I/O is Worth a Fresh Look,” presented by the InfiniBand Trade Association on September 23.

The IBTA Fall Webinar Series is part of a planned outreach program led by the IBTA to expand InfiniBand technology into new areas where its capabilities may be especially useful. InfiniBand is well accepted in the high performance computing (HPC) community, but the technology can be just as beneficial in “mainstream” Enterprise Data Centers (EDC). The webinar series addresses the role of remote direct memory access (RDMA) technologies, such as InfiniBand and RDMA over Converged Ethernet (RoCE), in the EDC, highlighting the rising importance of I/O technology in the ongoing transformation of modern data centers. We know that broadening into the EDC is a difficult task for several reasons, including the fact that InfiniBand could be viewed as a “disruptive” technology: it is not based on the familiar Ethernet transport and therefore requires new components in the EDC. The benefits are certainly there, but so are the challenges, hence the difficulty of our task.

Like all new technologies, one of our challenges is educating those who are not familiar with InfiniBand and challenging them to look at their current systems differently - just as the first part in this webinar series suggests, taking a fresh look at I/O. In this first webinar, we took on the task of reexamining I/O, assessing genuine advancements in I/O (specifically InfiniBand) and making the case for why this technology should be considered when improving your data center. We believe the developments in the InfiniBand world over the last decade are not well known to EDC managers, or at least not well understood.
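For readers who have never seen RDMA code, the following minimal sketch (ours, not taken from the webinar) shows the shape of the programming model using the OpenFabrics libibverbs API: pin a buffer once, then post one-sided writes that the remote CPU never has to process. Queue pair setup and the out-of-band exchange of the remote address and rkey are omitted; the parameters passed in are assumed to come from that setup.

```c
#include <infiniband/verbs.h>
#include <stdint.h>
#include <string.h>

/* Post a one-sided RDMA write of a local buffer to a remote address.
 * `pd` and `qp` are an already-created protection domain and connected
 * queue pair; `remote_addr` and `rkey` were exchanged out of band. */
int rdma_write_example(struct ibv_pd *pd, struct ibv_qp *qp,
                       uint64_t remote_addr, uint32_t rkey)
{
    static char buf[4096];
    strcpy(buf, "hello, remote memory");

    /* Pin (register) the buffer so the HCA can DMA it directly. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, sizeof(buf),
                                   IBV_ACCESS_LOCAL_WRITE);
    if (!mr)
        return -1;

    struct ibv_sge sge = {
        .addr   = (uintptr_t)buf,
        .length = sizeof(buf),
        .lkey   = mr->lkey,
    };

    /* One-sided operation: the remote CPU is never interrupted. */
    struct ibv_send_wr wr = {
        .opcode              = IBV_WR_RDMA_WRITE,
        .sg_list             = &sge,
        .num_sge             = 1,
        .send_flags          = IBV_SEND_SIGNALED,
        .wr.rdma.remote_addr = remote_addr,
        .wr.rdma.rkey        = rkey,
    };

    struct ibv_send_wr *bad_wr;
    return ibv_post_send(qp, &wr, &bad_wr);
}
```

The key point for a data center audience is the zero-copy path: once the buffer is registered, data moves adapter-to-adapter without transiting the kernel or consuming remote CPU cycles.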

I am very happy with the result, and the first webinar really set the stage for the next three, which dive into the nuts and bolts of this technology and give practical information on how it can be implemented to improve your data center.

During the webinar we answered several questions, but there was one in particular that I felt we did not spend enough time discussing due to time constraints. The attendee asked, “How will interoperability in the data center be assured? The results from the IBTA plugfests are less than impressive. Will this improve with the next generation FDR product?”

First, this question requires a little explanation, because it uses terminology and implies knowledge outside the webinar itself. The IBTA and the OpenFabrics Alliance (OFA) jointly test InfiniBand components at the University of New Hampshire Interoperability Lab (UNH-IOL). We test InfiniBand components for compliance with the InfiniBand specification and for interoperability with other compliant InfiniBand components.

In the opinion of IBTA and OFA members, vendors and customers alike, interoperability must be verified across a variety of vendors and their products. However, that makes the testing much more difficult and results in lower success rates than a less demanding approach would. The ever-increasing data rates also place additional demands on cable vendors and on InfiniBand channel adapter and switch vendors.

The real-world result of our testing is a documented pass rate of about 90 percent, and a continuing commitment to do better.

What this means in practical terms is that the InfiniBand community has achieved the most comprehensive and strictest compliance and interoperability program in the industry. This fact, in and of itself, is probably the strongest foundational element justifying our belief that InfiniBand can and should be considered for adoption in the mainstream EDC, with complete confidence in its quality, reliability and maturity.

If you were unable to attend the webinar, be sure to check out the recording and download the presentation slides here. We’re looking forward to the next webinar in the series (The Practical Approach to Applying InfiniBand in Your Data Center, taking place October 21), which will dig more deeply into how this technology can be integrated into the data center. I look forward to your participation in the remaining webinars. There’s a lot we can accomplish together, and it starts with this basic understanding of the technology and how it can help you reach your company’s goals.

Jim Ryan
Chairman of the OpenFabrics Alliance


InfiniBand at VMworld!

September 2nd, 2011

VMworld 2011 took place this week in sunny Las Vegas, and with over 20,000 attendees, this show has quickly developed into one of the largest enterprise IT events in the world. Virtualization continues to be one of the hottest topics in the industry, providing a great opportunity for InfiniBand vendors to market the wide array of benefits that InfiniBand is enabling in virtualized environments. There were several in the IBTA community spreading the InfiniBand message; here are a few of note.


On the networking side, Mellanox Technologies showed the latest generation of InfiniBand technology, FDR 56Gb/s. With FDR adapters, switches and cables available today, IT managers can immediately deploy this next-generation technology in their data centers and get instant performance improvements, whether it be leading vMotion performance, the ability to support more virtual machines per server at higher bandwidth per virtual machine, or lower capital and operating expenses from consolidating networking, management and storage I/O onto a one-wire infrastructure.


Fusion-io, a flash-based storage manufacturer that targets heavy data-acceleration needs from applications such as database, virtualization, Memcached and VDI, also made a big splash at VMworld. Their booth featured an excellent demonstration of how low-latency, high-speed InfiniBand networks enabled Fusion-io to show 800 virtual desktops being accessed and displayed across 17 monitors. InfiniBand enabled them to stream over 2,000 bandwidth-intensive HD movies from just eight servers.


Pure Storage, a newcomer in the storage arena, announced their 40Gb/s InfiniBand-based enterprise storage array, which targets applications such as database and VDI. With InfiniBand they were able to reduce latency more than eightfold while increasing performance by 10X.


Isilon was recently acquired by EMC, and in the EMC booth a rack of Isilon storage systems was on display, scaling out by running 40Gb/s InfiniBand on the back end. These storage systems excel in VDI implementations and are ripe for customers implementing a cloud solution where performance, reliability and storage resiliency are vital.


Also exhibiting at VMworld was Xsigo Systems. Xsigo showed their latest Virtual I/O Director, which now includes 40Gb/s InfiniBand; the previous generation used 20Gb/s InfiniBand. With the upgraded bandwidth, Xsigo can now offer customers 12-30X acceleration of I/O-intensive tasks such as vMotion, queries and backup, all while providing dynamic bandwidth allocation per VM or job. In addition, by consolidating the network onto a single wire, Xsigo is able to provide customers with 85 percent less hardware cost per virtual machine.

The items mentioned above are just a small slice of the excitement at VMworld. I’m glad to have seen so many InfiniBand solutions on display. For more information on InfiniBand in the enterprise, watch for an upcoming webinar series being produced by the IBTA.

Brian Sparks

IBTA Marketing Working Group Co-Chair

HPC Advisory Council Showcases World’s First FDR 56Gb/s InfiniBand Demonstration at ISC’11

July 1st, 2011

The HPC Advisory Council, together with ISC’11, showcased the world’s first demonstration of FDR 56Gb/s InfiniBand in Hamburg, Germany, June 20-22. The HPC Advisory Council is hosting and organizing new technology demonstrations at leading HPC conferences around the world to highlight new solutions that will influence future HPC systems in terms of performance, scalability and utilization.

The 56Gb/s InfiniBand demonstration connected participating exhibitors on the ISC’11 showroom floor as part of the HPC Advisory Council ISCnet network. The ISCnet network provided organizations with fast interconnect connectivity between their booths.

The FDR InfiniBand network included dedicated and distributed clusters, as well as a Lustre-based storage system. Multiple applications were demonstrated, including high-speed visualization applications using car models courtesy of Peugeot Citroën.

The installation of the fiber cables (we used 20- and 50-meter cables) was completed a few days before the show opened; we placed the cables on the floor and protected them with wooden bridges. The setup of the clusters, Lustre storage and applications was done the day before, and everything ran perfectly.

You can see the network architecture of the ISCnet FDR InfiniBand demo below. We combined both MPI traffic and storage traffic (Lustre) on the same fabric, utilizing the new bandwidth capabilities to provide a high-performance, consolidated fabric for the high-speed rendering and visualization application demonstration.

[Diagram: network architecture of the ISCnet FDR InfiniBand demo]

The following HPC Advisory Council member organizations contributed to the FDR 56Gb/s InfiniBand demo, and I would like to personally thank each of them: AMD, Corning Cable Systems, Dell, Fujitsu, HP, MEGWARE, Mellanox Technologies, Microsoft, OFS, Scalable Graphics, Supermicro and Xyratex.

Regards,

Gilad Shainer

Member of the IBTA and chairman of the HPC Advisory Council

ISC’11 Highlights: ISCnet to Feature FDR InfiniBand

June 13th, 2011

ISC’11 - taking place in Hamburg, Germany from June 19-23 - will include major new product introductions and groundbreaking talks from users worldwide. We are happy to call your attention to the fact that this year’s conference will feature the world’s first large-scale demonstration of next-generation FDR InfiniBand technology.

With link speeds of 56Gb/s, FDR InfiniBand uses the latest version of the OpenFabrics Enterprise Distribution (OFED™) and delivers an increase of nearly 80% in data rate compared to previous InfiniBand generations.
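For the curious, a back-of-the-envelope comparison (our arithmetic, using the published lane rates and encodings) shows where a figure like that comes from:

```latex
% QDR: 4 lanes at 10 Gb/s with 8b/10b encoding
\mathrm{QDR_{eff}} = 4 \times 10\ \mathrm{Gb/s} \times \tfrac{8}{10} = 32\ \mathrm{Gb/s}

% FDR: 4 lanes at 14.0625 Gb/s with 64b/66b encoding
\mathrm{FDR_{eff}} = 4 \times 14.0625\ \mathrm{Gb/s} \times \tfrac{64}{66} \approx 54.5\ \mathrm{Gb/s}
```

The move from 8b/10b to 64b/66b encoding is what pushes the effective gain (roughly 70-76%, depending on whether FDR’s raw or encoded rate is compared against QDR’s effective 32 Gb/s) well past the 40% increase in raw signaling rate.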

Running on the ISC’11 network “ISCnet,” the multi-vendor FDR 56Gb/s InfiniBand demo will provide exhibitors with fast interconnect connectivity between their booths on the show floor, enabling them to demonstrate a wide variety of production and experimental HPC applications, as well as new developments and products.

The demo will also continue to show the processing efficiency of RDMA and the microsecond latency of OFED, which reduce server costs, increase productivity and improve customers’ ROI. ISCnet will be the fastest open commodity network demonstration assembled to date.

If you are heading to the conference, be sure to visit the booths of IBTA and OFA members who are exhibiting, as well as the many users of InfiniBand and OFED. The last TOP500 list (published in November 2010) showed that nearly half of the most powerful computers in the world are using these technologies.

InfiniBand Trade Association Members Exhibiting

  • Bull - booth 410
  • Fujitsu Limited - booth 620
  • HP - booth 430
  • IBM - booth 231
  • Intel - booths 530+801
  • Lawrence Livermore National Laboratory - booth 143
  • LSI - booth 341
  • Leoni - booth 842
  • Mellanox - booth 331
  • Molex - booth 730 Co-Exhibitor of Stordis
  • NetApp - booth 743
  • Obsidian - booth 151
  • QLogic - booth 240
  • SGI - booth 330
OpenFabrics Alliance Members Exhibiting

  • AMD - booth 752
  • APPRO - booth 751 Co-Exhibitor of AMD
  • Chelsio - booth 702
  • Cray - booth 650
  • DataDirect Networks - booth 550
  • HP - booth 430
  • IBM - booth 231
  • Intel - booths 530+801
  • Lawrence Livermore National Laboratory - booth 143
  • LSI - booth 341
  • Mellanox - booth 331
  • Microsoft - booth 832
  • NetApp - booth 743
  • Obsidian - booth 151
  • QLogic - booth 240
  • SGI - booth 330

Also, don’t miss the HPC Advisory Council Workshop on June 19 at ISC’11 that includes talks on the following hot topics related to InfiniBand and OFED:

  • GPU Access and Acceleration
  • MPI Optimizations and Futures
  • Fat-Tree Routing
  • Improved Congestion Management
  • Shared Memory models
  • Lustre Release Update and Roadmap 

Go to http://www.hpcadvisorycouncil.com/events/2011/european_workshop/index.php to learn more and register.

For those who are InfiniBand, OFED and Lustre users, don’t miss the important announcement and breakfast on June 22 regarding the coming together of the worldwide Lustre vendor and user community. This announcement is focused on ensuring the continued development, upcoming releases and consolidated support for Lustre; details and location will be available on site at ISC’11. This is your opportunity to meet with representatives of OpenSFS, HPCFS and EOFS to learn how the whole community is working together.

Looking forward to seeing several of you at the show!

Brian Sparks
IBTA & OFA Marketing Working Groups Co-Chair
HPC Advisory Council Media Relations

NVIDIA GPUDirect Technology – InfiniBand RDMA for Accelerating GPU-Based Systems

May 11th, 2011

As a member of the IBTA and chairman of the HPC Advisory Council, I wanted to share with you some information on the important role of InfiniBand in the emerging hybrid (CPU-GPU) clustering architectures.

The rapid increase in the performance of graphics hardware, coupled with recent improvements in its programmability, has made graphics accelerators a compelling platform for computationally demanding tasks in a wide variety of application domains. Due to the great computational power of the GPU, the GPGPU method has proven valuable in various areas of science and technology and the hybrid CPU-GPU architecture is seeing increased adoption.

GPU-based clusters are being used to perform compute-intensive tasks like finite element computations, Computational Fluid Dynamics, Monte Carlo simulations, etc. Several of the world-leading InfiniBand supercomputers use GPUs in order to achieve the desired performance. Since GPUs provide a very high core count and floating point capability, a high-speed networking interconnect such as InfiniBand is required to provide the needed throughput and the lowest latency for GPU-to-GPU communications. As such, InfiniBand has become the preferred interconnect solution for hybrid GPU-CPU systems.

While GPUs have been shown to provide worthwhile performance acceleration, yielding benefits in price/performance and power/performance, several areas of GPU-based clusters could be improved in order to provide higher performance and efficiency. One issue with deploying clusters of multi-GPU nodes involves the interaction between the GPUs and the high-speed InfiniBand network - in particular, the way GPUs use the network to transfer data between them. Before the NVIDIA GPUDirect technology, a performance issue existed between the user-mode DMA mechanisms used by GPU devices and InfiniBand RDMA: there was no software/hardware mechanism for “pinning” pages of virtual memory to physical pages in a way that both the GPU devices and the networking devices could share.

The new hardware/software mechanism called GPUDirect eliminates the need for the CPU to be involved in the data movement; it not only enables higher GPU-based cluster efficiency but also paves the way for the creation of “floating point services.” GPUDirect is based on a new interface between the GPU and the InfiniBand device that enables both devices to share pinned memory buffers, so data written by a GPU to host memory can be sent immediately by the InfiniBand device (using RDMA semantics) to a remote GPU, much faster than before.
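As an illustration only - a minimal sketch assuming the CUDA runtime and libibverbs APIs, not NVIDIA’s actual GPUDirect code - the flow that GPUDirect’s kernel support makes possible looks roughly like this: one page-locked host buffer is registered with both the CUDA driver and the InfiniBand HCA, so no second pinned buffer or host-side copy is needed before the RDMA send.

```c
#include <cuda_runtime.h>
#include <infiniband/verbs.h>

/* Allocate one page-locked host buffer and register it with the HCA,
 * so both the CUDA driver and the InfiniBand device can DMA it. */
struct ibv_mr *register_shared_buffer(struct ibv_pd *pd,
                                      void **buf, size_t len)
{
    /* Page-locked allocation known to the CUDA driver. */
    if (cudaHostAlloc(buf, len, cudaHostAllocDefault) != cudaSuccess)
        return NULL;

    /* Register the same pages with the InfiniBand HCA. Before
     * GPUDirect, these two pinning mechanisms could not share pages,
     * forcing an extra host-to-host copy. */
    return ibv_reg_mr(pd, *buf, len, IBV_ACCESS_LOCAL_WRITE);
}

/* Stage a GPU result into the shared buffer; from there it can be
 * posted directly with ibv_post_send(), with no CPU copy of the
 * payload in between. */
void stage_gpu_result(void *shared_buf, const void *dev_src, size_t len)
{
    cudaMemcpy(shared_buf, dev_src, len, cudaMemcpyDeviceToHost);
    /* ... build an ibv_send_wr over the registered MR and post it ... */
}
```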

As a result, GPU communication can now take advantage of the low-latency and zero-copy benefits of the InfiniBand RDMA transport for higher application performance and efficiency. InfiniBand RDMA makes it possible to connect remote GPUs with latency characteristics that make it seem as if all of the GPUs are on the same platform. Examples of the performance benefits and more information on GPUDirect can be found at http://www.hpcadvisorycouncil.com/subgroups_hpc_gpu.php.


Gilad Shainer

Member of the IBTA and chairman of the HPC Advisory Council

IBTA Plugfest 19 Wrap Up

April 12th, 2011

The latest IBTA Plugfest took place last week at UNH-IOL in Durham, NH. This event provided an opportunity for participants to measure their products for compliance with the InfiniBand architecture specification, as well as interoperability with other InfiniBand products. We were happy to welcome 21 vendors, and at the April 2011 event we tested 22 devices, 235 DDR-rated cables, 227 QDR-rated cables and 66 FDR-rated cables.

New for this Plugfest, we added beta testing of QSFP+ FDR cables, which support a 54 Gb/s data rate (the effective rate of a 56 Gb/s FDR link after 64b/66b encoding). I’m happy to report that we received a total of 66 cables supporting FDR rates, 5 of which were active fiber cables. Their performance bodes well for the new data rate supported in the IBTA Roadmap.

Vendor devices and cables that successfully pass all required Integrators’ List (IL) compliance tests will be listed on the IBTA Integrators’ List and granted the IBTA Integrators’ List logo. We will also publish comprehensive interoperability results documenting the heterogeneous device testing performed with all of the cables submitted at the Plugfest. We’re expecting to have the IBTA Integrators’ List updated in time for ISC’11.

[Diagram: April 2011 cable interoperability test configuration]

I can’t believe I’m about to talk about our 20th event, but IBTA Plugfest 20 will take place this October; stay tuned for exact dates. Thank you and congratulations to all of the vendors who participated in our April event and performed so well.

Rupert Dance

Co-chair, IBTA’s Compliance and Interoperability Working Group
