Foundation for the Converging Data Center: InfiniBand with Intelligent Fabric Management

March 26th, 2010

Most data centers are starting to realize the benefits of virtualization, cloud computing and automation. However, the heavy I/O requirements and intense need for better visibility and control quickly become key challenges that create inefficiencies and inhibit wider adoption of these advancements.

InfiniBand, coupled with intelligent fabric management software, can address many of these challenges, specifically those related to connectivity and I/O.

Technology analysis firm The Taneja Group recently took an in-depth look at this topic and published an interesting whitepaper, “Foundation for the Converging Data Center: Intelligent Fabric Management.” The paper lays out the requirements for intelligent fabric management and highlights how the right software can harness the traffic analysis capabilities that are inherent in and unique to InfiniBand to make data centers run a lot more efficiently. You can download the paper for free here.

Another interesting InfiniBand data point they shared: “In a recent Taneja Group survey of 359 virtual server administrators, those with InfiniBand infrastructures considered storage provisioning and capacity/performance management twice as easy as users of some of the other fabrics” (Taneja Group 2009 Survey of Storage Best Practices and Server Virtualization).

The High Performance Computing Center Stuttgart (HLRS) is one organization that has benefited from using 40 Gb/s InfiniBand and Voltaire’s Unified Fabric Manager™ (UFM™) software on their 700-node multi-tenant cluster, which effectively operates as a cloud delivering HPC services to their customers. You can read more about it here.

I’d also like to invite you to tune into a recent webinar, “How to Optimize and Accelerate Application Performance with Intelligent Fabric Management,” co-hosted by Voltaire, The Taneja Group and Adaptive Computing. Here we explore this topic further and give an overview of some of the key capabilities of Voltaire’s UFM software.

Look forward to seeing many of you at Interop in Las Vegas next month!


Christy Lynch

Director, Corporate Communications, Voltaire

Member, IBTA Marketing Working Group

Highlighting End-User InfiniBand Deployments: Partners Healthcare Cuts Latency of Cloud-Based Storage Solution

February 22nd, 2010

Not enough is said about front- or back-end InfiniBand storage, but fear not! An interesting article just came out from Dave Raffo at SearchStorage.com that I think is worth spreading through the InfiniBand community. I have a quick summary below, but be sure to also check out the full article: “Health care system rolls its own data storage ‘cloud’ for researchers.”

Partners HealthCare, a non-profit organization founded in 1994 by Brigham and Women’s Hospital and Massachusetts General Hospital, is an integrated health care system that offers patients a continuum of coordinated high-quality care.

Over the past few years, ever-increasing advances in the resolution and accuracy of medical devices and instrumentation technologies have led to an explosion of data in biomedical research. Partners HealthCare recognized early on that a cloud-based research compute and storage infrastructure could be a compelling alternative for their researchers. Not only would it enable them to distribute costs and provide storage services on demand, but it would save on IT management time that was spent fixing all the independent research computers distributed across the Partners HealthCare network.

In an effort to address their unique needs, Partners HealthCare developed their own specially designed storage network. Initially, they chose Ethernet as the transport technology for their storage system. As demand grew, the solution began hitting significant performance bottlenecks, particularly during the reading and writing of hundreds of thousands of small files. The issue was traced to the interconnect: Ethernet’s relatively high latency created problems. In order to provide a scalable, low-latency solution, Partners HealthCare turned to InfiniBand. With InfiniBand on the storage back end, Partners HealthCare experienced read times roughly 40 times faster.

“One user had over 1,000 files, but only took up 100 gigs or so,” said Brent Richter, corporate manager for enterprise research infrastructure and services, Partners HealthCare System. “Doing that with Ethernet would take about 40 minutes just to list that directory. With InfiniBand, we reduced that to about a minute.”

Also, Partners HealthCare chose InfiniBand over 10-Gigabit Ethernet because InfiniBand is a lower latency protocol. “InfiniBand was price competitive and has lower latency than 10-Gig Ethernet,” said Richter.

Richter mentioned the final price tag came to about $1 per gigabyte.

By integrating InfiniBand into the storage solution, Partners HealthCare was able to cut latency dramatically and increase performance, providing its customers with faster response times and higher capacity.

Great to see end-user cases like this come out! If you are a member of the IBTA and would like to share one of your InfiniBand end user deployments, please contact us at press@infinibandta.org.

Till next time,


Brian Sparks

IBTA Marketing Working Group Co-Chair

OpenFabrics Alliance Update and Invitation to Sonoma Workshop

January 27th, 2010

Happy New Year, InfiniBand community! I’m writing to you on behalf of the OpenFabrics Alliance (OFA), although I am also a member of the IBTA’s marketing working group. The OFA had many highlights in 2009, many of which may be of interest to InfiniBand vendors and end users. I wanted to review some of the news, events and technology updates from the past year, as well as invite you to our 6th Annual International Sonoma Workshop, taking place March 14-17, 2010.

At the OFA’s Sonoma Workshop in March 2009, OpenFabrics firmed up 40 Gigabit InfiniBand support and 10 Gigabit Ethernet support in the same Linux releases, providing a converged network strategy that leverages the best of both technologies. At the International Supercomputing Conference (ISC) in Germany in June 2009, OFED software was used on the exhibitors’ 40 Gigabit InfiniBand network, and then on a much larger 120 Gigabit network at SC09 in November.

Also at SC09, the current Top500 list of the world’s fastest supercomputer sites was published. The number of systems on the list that use OpenFabrics software is closely tied to the number of Top500 systems using InfiniBand. For InfiniBand, the numbers on the November 2009 list are as follows: 5 in the Top 10, 64 in the Top 100, and 186 in the Top500. For OpenFabrics, the numbers may be slightly higher because the Top500 does not capture the interconnect used for storage at the sites. One example of this is the Jaguar machine at Oak Ridge National Laboratory, which lists “proprietary,” when in fact the system also has a large InfiniBand and OFED-driven storage infrastructure.

Key new members that joined the OFA last year included Cray and Microsoft. These new memberships convey the degree to which OFED has become a de facto standard for InfiniBand in HPC, where the lowest possible latency brings the most value to computing and application performance.

There are, of course, significant numbers of sites in the Top500 where legacy 1 Gigabit Ethernet and TCP/IP are still perceived as sufficient from a cost-performance perspective. OpenFabrics believes that as the cost of 10 Gigabit Ethernet chips on motherboards and NICs comes down, many of these sites will consider moving to 10 Gigabit Ethernet. With RDMA over Converged Enhanced Ethernet (commonly referred to as RoCEE) and iWARP shipping together in OFED 1.5.1 beginning in March 2010, some sites will move to InfiniBand to capture the biggest improvement possible.

OpenIB, predecessor to OpenFabrics, helped in the early adoption of InfiniBand. The existence of free software, fully supporting a variety of vendors’ proprietary hardware, makes it easier for vendors to increase their hardware investments. The end user subsequently is able to purchase more hardware, as nothing needs to be spent on proprietary software to enable the system. This is one open source business model that has evolved, but none of us know if this is sustainable for the long term in HPC, or whether the enterprise and cloud communities will adopt it as well.

Please join me at this year’s OFA Sonoma Workshop, where the focus will continue to be on the integrated delivery of both InfiniBand and 10 Gigabit Ethernet support in OFED releases, which we believe makes OFED very attractive for cloud and enterprise data centers.


Bill Boas

Executive Director, OpenFabrics Alliance


A Look Back at InfiniBand in 2009

December 21st, 2009

Dear IBTA Members,

As we wind down 2009 and look forward to spending time with family and friends, I thought it would be nice to have a blog posting to review all we have accomplished this year. From an InfiniBand perspective, 2009 was a good year.

We saw continued success on the TOP500, with the November 2009 list showing significant InfiniBand growth compared to the November 2008 list. Clearly InfiniBand is well on its way to becoming the number one interconnect on the TOP500. This is encouraging because it demonstrates user validation of the benefits of using InfiniBand.

Cable suppliers helped drive IBTA membership this year, as well as participation at the IBTA-sponsored Plugfest events held in April and October 2009. Also on the hardware side, we saw the introduction of numerous new switching platforms and new chips, both on the HCA and switch side. These types of investments further validate that multiple suppliers see InfiniBand as a growing, healthy marketplace.

In addition to all of the new hardware available, many software advancements were introduced. These included everything from wizards that simplify cluster installations to new accelerators that drive enhanced application performance, not to mention APIs that allow users to better integrate interconnect hardware with applications and schedulers.

As noted in an earlier blog, we had a very successful showing at SC09 with several hundred people visiting the booth and a number of well-attended presentations. To view presentations from the show, please visit the IBTA Resources Web pages.

During November, the IBTA conducted a survey of SC09 attendees. Highlights include:

  • 67.5% of survey respondents are using both InfiniBand and Ethernet in their current deployments
  • 62.5% of survey respondents are planning on using InfiniBand in their next server cluster, while only 20% are planning on using 10GE, 12.5% Ethernet and 5% proprietary solutions
  • The vast majority of respondents (85%) have between 1 and 36 storage nodes on average for their current clusters
  • 95% are using Linux for their operating system
  • 60% are using OFED as the software stack with the OS for interconnect, network and storage
  • When it comes to applications, the most important factors are: latency (35%), performance (35%), bandwidth (12.5%), cost (12.5%) and messaging rate (5%)
  • Over 67% indicated that they had considered IB as the interconnect for their storage

These results are encouraging, as they show that there is growing interest in InfiniBand, not only as the interconnect for the cluster compute nodes, but also for the storage side of the cluster.

Please be sure to check out the IBTA online press room for highlights from press and analyst coverage in 2009.

The Marketing Working Group would like to thank everyone for all of their efforts and continued support. We wish everyone a happy and healthy new year and look forward to working with you in 2010.

Kevin Judd

IBTA Marketing Working Group Co-Chair


Celebrating InfiniBand at SC09

November 23rd, 2009

I almost lost my voice at the show and, as I write this, I feel my body shutting down. Blame the wet, cold weather and the race to go from one briefing to the next… but it was all very worth it. Supercomputing is certainly one of the biggest shows for the IBTA and its members and, with the amount of press activity that happened this year, it certainly isn’t showing signs of decline.

The TOP500 showed InfiniBand rising, connecting 182 systems (36 percent of the TOP500), and it clearly dominated the TOP10 through TOP300 systems. Half of the TOP10 systems are connected via InfiniBand and, although the new #1 system (Jaguar from ORNL) is a Cray, it’s important to note that InfiniBand is being used as the storage interconnect connecting Jaguar to the “Spider” storage system.

But let’s talk efficiency for a moment… this edition of the TOP500 showed that 18 out of the 20 most efficient systems on the TOP500 used InfiniBand and that InfiniBand system efficiency levels reached up to 96 percent! That’s over 50 percent more efficiency than the best GigE cluster. Bottom line: WHEN PURCHASING NEW SYSTEMS, DON’T IGNORE THE NETWORK! You may be saving pennies on the network and spending dollars on the processors with an unbalanced architecture.
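For readers new to the list, the efficiency figure is just measured Linpack performance (Rmax) divided by theoretical peak (Rpeak). A quick illustrative sketch of that arithmetic; the numbers below are made up for illustration, not actual TOP500 entries:

```python
# TOP500 efficiency = Rmax / Rpeak, expressed as a percentage.
# The inputs below are illustrative, not real TOP500 entries.
def efficiency_pct(rmax_tflops: float, rpeak_tflops: float) -> float:
    """Fraction of theoretical peak actually sustained on Linpack."""
    return 100.0 * rmax_tflops / rpeak_tflops

print(efficiency_pct(96.0, 100.0))  # 96.0: the kind of figure cited above
print(efficiency_pct(60.0, 100.0))  # 60.0: peak flops the network can't feed
```

The gap between those two numbers is exactly the point: an unbalanced interconnect leaves paid-for processor cycles idle.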

In addition, the entire ecosystem is rallying around the next level of InfiniBand performance with multiple exhibitors (24) demonstrating up to 120Gb/s InfiniBand bandwidth (switch-to-switch) and the new CXP cabling options to support this jump in bandwidth.

We appreciated the participation of the OpenFabrics Alliance (OFA) in our booth and in some of our press and analyst briefings. We were happy to hear their announcement about Microsoft joining OFA, a strong affirmation of the importance of InfiniBand.

If you didn’t have a chance to swing by the IBTA booth, don’t worry: we videotaped all the great presentations and will have them up on the web for you shortly.

I apologize if I didn’t have a chance to meet with you and talk with all the members 1-on-1, and I hope to make up for it and see you all again at ISC’10 in Hamburg, Germany and SC10 in New Orleans. I’m sure the IBTA will have some more wonderful news to share with everyone. Congratulations IBTA and happy 10-year anniversary!

Safe travels to everyone,


Brian Sparks

IBTA Marketing Working Group Co-Chair


IBTA to Celebrate 10-Year Anniversary at SC’09

November 5th, 2009

It’s that time of the year again. The most important event for the InfiniBand Trade Association and the InfiniBand community at large is upon us: Supercomputing ’09 (SC’09), taking place in Portland, Oregon, November 14-20.

This year’s event promises to be an impressive gathering of the top minds in high-performance computing with expected attendance of 11,000 researchers, scientists, engineers and computing experts from around the world. InfiniBand will certainly have a strong presence and showing at the event with 16 IBTA members exhibiting, a slew of expected InfiniBand-related product announcements from the entire ecosystem, and of course, InfiniBand-related demos set up all over the show floor.

The SCinet InfiniBand network – being installed as I write this – will see a substantial speed increase this year, with up to 120Gb/s connectivity throughout the entire network. More than 20 exhibitors will be connecting to SCinet, and the demonstration is expected to be an SC’09 highlight.

The IBTA will also have a 10×20 booth, special logo, collateral, signage, giveaways and presentations to highlight and celebrate its 10-year anniversary.

We’re hoping you will stop by booth #139 and visit us, especially Monday night from 7-9pm, as we will be hosting a 10-Year Anniversary Cocktail Reception.

Come Celebrate with the IBTA!


Brian Sparks

IBTA Marketing Working Group Co-Chair


SCinet at SC’09: building the biggest and best InfiniBand network the Supercomputing conference has ever seen

October 22nd, 2009

For the past several years, the SCinet organization has built an InfiniBand network infrastructure at the annual Supercomputing conference, providing connectivity to numerous organizations and vendors in support of various application and storage demonstrations.

While I’ve been fortunate enough to participate in SCinet almost every year since InfiniBand was first deployed at SC’04, this is my first year as the OpenFabrics committee co-chair, and I’m now responsible for putting together the InfiniBand network for SC’09 in Portland, Ore.

With SCinet’s InfiniBand network celebrating its 5th year of operation, we wanted to make sure this was going to be the biggest and best year ever. We decided to give the network a substantial speed increase using 12X InfiniBand QDR, providing up to 120 Gbps connectivity throughout the entire network.
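The 120 Gbps figure is simple lane arithmetic: QDR signals at 10 Gb/s per lane, and a 12X link aggregates twelve lanes. A small illustrative sketch of that math (the table and helper here are mine, not part of any InfiniBand tooling):

```python
# Illustrative: InfiniBand signaling rate per lane, by speed grade.
# With 8b/10b encoding, the usable data rate is 80% of these figures.
LANE_RATE_GBPS = {"SDR": 2.5, "DDR": 5.0, "QDR": 10.0}

def signaling_rate(speed: str, lanes: int) -> float:
    """Aggregate signaling rate for a link of the given width."""
    return LANE_RATE_GBPS[speed] * lanes

print(signaling_rate("QDR", 12))  # 120.0 Gb/s: the 12X QDR links SCinet is deploying
print(signaling_rate("QDR", 4))   # 40.0 Gb/s: a typical 4X QDR host link
```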

In addition to the network performance enhancements, we’ve teamed up with the HPC Advisory Council to make several different application demonstrations available to all InfiniBand network participants. Demonstrations include:

  • A Remote Desktop over InfiniBand (RDI) demonstration enabling live desktop sharing between all participants
  • A Direct Transport Compositor (DTC) demonstration providing real-time rendering of a PSA Peugeot Citroen automotive CAD model in 2D/3D
  • A new high-bandwidth MPI technology demonstration taking advantage of 120Gbps data rates from a single server

It’s no surprise that a record number of organizations and vendors have already signed up to take part in these exciting multi-vendor demonstrations at SC’09. With over 20 connections requested this year, we’re at nearly double the total of prior years.

Not only is this year turning out to be a record year for connection requests, so is the size of the network infrastructure required to support all of this connectivity. We’re actually working with 18 different hardware and cable manufacturers to secure enough equipment to build the entire InfiniBand network infrastructure from the ground up – including hundreds of cables, over 40 switches, numerous servers and countless HCAs.

I’d personally like to thank all the SCinet volunteers and hardware vendors working together with the common goal of building the biggest and best InfiniBand network SC has ever seen. I encourage SC attendees to check out all the SCinet-connected application demonstrations going on throughout the exhibit hall and to stop by the SCinet Network Operations Center (NOC) and see the 120 Gbps InfiniBand network infrastructure in action.


Best regards,

Eric Dube

SCinet/OpenFabrics co-chair

Bay Microsystems, Inc.


IBTA Pleased to Announce 16th Compliance and Interoperability Plugfest

October 8th, 2009

Greetings IBTA members and the larger InfiniBand community,

As you may have read in our recent press release, the IBTA is pleased to announce dates for the 16th Compliance and Interoperability Plugfest: October 12-16, 2009 in the University of New Hampshire’s Interoperability Lab.

I personally have been involved in this event for the past 8 years and have seen the Plugfests grow tremendously. Products that successfully pass Plugfest testing gain inclusion in the IBTA Integrators’ List. We currently have 297 products on our Integrators’ List and are expecting more than 200 cables and devices to be tested at the October event.

With the introduction of the IBTA Integrators’ List Logo Program last fall, cable vendors and device manufacturers now have the right to attach IBTA-certified logos to products that are active on the Integrators’ List. We’ve heard positive feedback from end users excited that they can now easily recognize InfiniBand-compliant products. This is especially useful for end users deploying and maintaining clusters, since the cables will now be labeled with the speeds that they support.


This Plugfest will also include many new QDR products from InfiniBand vendors, along with testing of the new CXP interconnect, which provides 120 Gb/s InfiniBand connectivity in a 12x small form-factor pluggable connector.

If you are interested in joining us at this year’s Plugfest, please visit the Plugfest section of the website, listed under Events, or feel free to contact me at plugfest@lampreynetworks.com.

Regards,

Rupert Dance

CIWG – Program Director

Lamprey Networks


IBTA Members at ISC’09

July 24th, 2009

Check out the latest video of InfiniBand Trade Association members at the International Supercomputing Conference in Hamburg, Germany.

Resetting InfiniBand Counters in 2009

June 29th, 2009

IBTA members know that cluster performance software and human troubleshooters monitor InfiniBand port counters for information about traffic on different ports and links.  Occasionally the port counters are reset when a network is reconfigured and a fresh view is needed. Similarly, market research vendors monitor manufacturer quarterly earnings announcements to measure the performance of the overall InfiniBand market as well as the switch, HCA and controller segments.
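As a sketch of what that monitoring boils down to: tools such as perfquery (from the infiniband-diags package) print port counters in a `Name:....value` layout, and monitors diff successive snapshots to see traffic since the last reset. The parser below is a minimal illustration of that idea, not the real tool’s code, and the sample text is made up:

```python
# Minimal sketch of port-counter bookkeeping: parse two snapshots of
# counters (in a perfquery-style "Name:....value" layout) and report
# the deltas. Sample input is illustrative, not real perfquery output.
def parse_counters(text: str) -> dict:
    """Parse 'Name:....value' lines into {name: int}."""
    counters = {}
    for line in text.splitlines():
        name, sep, value = line.partition(":")
        if sep and value:
            counters[name.strip()] = int(value.strip(". "))
    return counters

def counter_deltas(before: dict, after: dict) -> dict:
    """Traffic observed between two snapshots (or since a reset)."""
    return {name: after[name] - before.get(name, 0) for name in after}

before = parse_counters("PortXmitData:....1000\nPortRcvData:....2000")
after = parse_counters("PortXmitData:....1500\nPortRcvData:....2600")
print(counter_deltas(before, after))
# {'PortXmitData': 500, 'PortRcvData': 600}
```

Resetting the counters simply re-zeroes the baseline, which is exactly why a fresh view is useful after a network is reconfigured.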

Read more…