Archive


January Course: Writing Application Programs for RDMA using OFA Software

December 16th, 2010

As part of its new training initiative, The OpenFabrics Alliance (OFA) is holding a “Writing Application Programs for RDMA using OFA Software” class this January 19-20, 2011 at the University of New Hampshire’s InterOperability Lab (UNH-IOL). If you are an application developer skilled in C programming and familiar with sockets, but with little or no experience programming with OpenFabrics Software, this class is the perfect opportunity to develop your RDMA expertise.
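To give a sense of what the coursework involves: where a sockets program simply calls connect() and send(), an RDMA program built on the OFA stack works with connection identifiers, registered memory regions and work requests. Below is a minimal, illustrative client-side sketch using the librdmacm and libibverbs interfaces distributed with OFED; it is not course material, the buffer contents and sizes are arbitrary, and error handling is reduced to bare exits to keep it short.

```c
/* Minimal, illustrative RDMA client sketch using librdmacm + libibverbs
 * (the user-space libraries distributed with OFED). A matching server
 * (rdma_bind_addr / rdma_listen / rdma_accept, plus a posted receive)
 * is required but omitted here.
 * Compile with: gcc rdma_client_sketch.c -lrdmacm -libverbs */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <netdb.h>
#include <rdma/rdma_cma.h>

int main(int argc, char **argv)
{
    struct rdma_cm_id *id;
    struct ibv_qp_init_attr qp_attr;
    struct rdma_conn_param cp;
    struct ibv_pd *pd;
    struct ibv_cq *cq;
    struct ibv_mr *mr;
    struct ibv_sge sge;
    struct ibv_send_wr wr, *bad_wr;
    struct ibv_wc wc;
    struct addrinfo *res;
    char buf[64] = "hello over RDMA";

    if (argc != 3) {
        fprintf(stderr, "usage: %s <server> <port>\n", argv[0]);
        return 1;
    }
    if (getaddrinfo(argv[1], argv[2], NULL, &res))
        exit(1);

    /* A NULL event channel puts the rdma_cm_id into synchronous mode:
     * resolve/connect calls block until the operation completes. */
    if (rdma_create_id(NULL, &id, NULL, RDMA_PS_TCP) ||
        rdma_resolve_addr(id, NULL, res->ai_addr, 2000) ||
        rdma_resolve_route(id, 2000))
        exit(1);

    /* Protection domain, completion queue and queue pair take the place
     * of the single file descriptor a sockets programmer is used to. */
    pd = ibv_alloc_pd(id->verbs);
    cq = ibv_create_cq(id->verbs, 16, NULL, NULL, 0);
    memset(&qp_attr, 0, sizeof(qp_attr));
    qp_attr.send_cq = cq;
    qp_attr.recv_cq = cq;
    qp_attr.qp_type = IBV_QPT_RC;
    qp_attr.cap.max_send_wr = qp_attr.cap.max_recv_wr = 16;
    qp_attr.cap.max_send_sge = qp_attr.cap.max_recv_sge = 1;
    if (rdma_create_qp(id, pd, &qp_attr))
        exit(1);

    /* Memory must be registered before the HCA can access it. */
    mr = ibv_reg_mr(pd, buf, sizeof(buf), IBV_ACCESS_LOCAL_WRITE);

    memset(&cp, 0, sizeof(cp));
    cp.responder_resources = 1;
    cp.initiator_depth = 1;
    cp.retry_count = 7;
    if (rdma_connect(id, &cp))          /* blocks until established */
        exit(1);

    /* Post one send work request and busy-poll for its completion. */
    memset(&wr, 0, sizeof(wr));
    sge.addr   = (uintptr_t)buf;
    sge.length = sizeof(buf);
    sge.lkey   = mr->lkey;
    wr.sg_list = &sge;
    wr.num_sge = 1;
    wr.opcode  = IBV_WR_SEND;
    wr.send_flags = IBV_SEND_SIGNALED;
    if (ibv_post_send(id->qp, &wr, &bad_wr))
        exit(1);
    while (ibv_poll_cq(cq, 1, &wc) == 0)
        ;

    rdma_disconnect(id);
    rdma_destroy_qp(id);
    ibv_dereg_mr(mr);
    ibv_destroy_cq(cq);
    ibv_dealloc_pd(pd);
    rdma_destroy_id(id);
    freeaddrinfo(res);
    return 0;
}
```

The class covers this connection-management, memory-registration and completion-handling flow in much more depth, along with the server side and the RDMA read/write operations that make zero-copy transfers possible.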

“Writing Application Programs for RDMA using OFA Software” prepares you to start writing RDMA application programs right away. The class includes 8 hours of classroom work and 8 hours of lab time on Wednesday and Thursday, January 19 and 20. **Attendees enrolled by Dec. 24 will receive a FREE pass and rentals at Loon Mountain for skiing on Friday, January 21.**

Software Forge is a member of the IBTA and is helping drive this very first RDMA class. More information is available at www.openfabrics.org/training. Feel free to contact me with questions as well.


Regards,

Rupert Dance

rsdance@soft-forge.com

Member, IBTA’s Compliance and Interoperability Working Group

InfiniBand Leads List of Russian Top50 Supercomputers; Connects 74 Percent, Including Seven of the Top10 Supercomputers

April 14th, 2010

Last week, the 12th edition of Russia’s Top50 list of the most powerful high performance computing systems was released at the annual Parallel Computing Technologies international conference. The list is ranked according to Linpack benchmark results and provides an important tool for tracking usage trends in HPC in Russia.

The fastest supercomputer on the Top50 is enabled by 40Gb/s InfiniBand and has a peak performance of 414 teraflops. More importantly, InfiniBand clearly dominates the list as the most-used interconnect solution, connecting 37 systems, including the top three and seven of the Top10.

According to the Linpack benchmark results, InfiniBand demonstrates up to 92 percent efficiency; this high system efficiency and utilization lets users maximize the return on investment for their HPC server and storage infrastructure. Nearly three quarters of the list, represented by leading research laboratories, universities, industrial companies and banks in Russia, rely on industry-leading InfiniBand solutions to deliver the highest bandwidth, efficiency, scalability and application performance.
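For readers new to these lists, Linpack efficiency is simply the sustained benchmark result divided by the machine’s theoretical peak. The second expression below is illustrative arithmetic only, not the numbers of any specific Top50 entry.

```latex
\text{efficiency} = \frac{R_{\mathrm{max}}}{R_{\mathrm{peak}}} \times 100\%
\qquad\text{e.g.}\qquad
\frac{92\ \text{TFlops sustained}}{100\ \text{TFlops peak}} \times 100\% = 92\%
```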

Highlights of InfiniBand usage on the April 2010 Russian TOP50 list include:

  • InfiniBand connects 74 percent of the Top50, including seven of the Top10 most prestigious positions (#1, #2, #3, #6, #8, #9 and #10)
  • InfiniBand provides world-leading system utilization, up to 92 percent efficiency as measured by the Linpack benchmark
  • The list showed a sharp increase in aggregate performance: total peak performance exceeded 1 PFlops, reaching 1152.9 TFlops, an increase of 120 percent over the September 2009 list, highlighting the growing demand for higher performance
  • Ethernet connects only 14 percent of the list (seven systems), and there are no 10GigE clusters
  • Proprietary clustering interconnects declined 40 percent to connect only three systems on the list

I look forward to seeing the results of the Top500 in June at the International Supercomputing Conference. I will be attending, as will many of our IBTA colleagues, and I hope to see all of our HPC friends in Germany.

Brian Sparks

IBTA Marketing Working Group Co-Chair

Highlighting End-User InfiniBand Deployments: Partners Healthcare Cuts Latency of Cloud-Based Storage Solution

February 22nd, 2010

Not enough is said about front- or back-end InfiniBand storage, but fear not! An interesting article just came out from Dave Raffo at SearchStorage.com that I think is worth sharing with the InfiniBand community. I have a quick summary below, but be sure to also check out the full article: “Health care system rolls its own data storage ‘cloud’ for researchers.”

Partners HealthCare, a non-profit organization founded in 1994 by Brigham and Women’s Hospital and Massachusetts General Hospital, is an integrated health care system that offers patients a continuum of coordinated high-quality care.

Over the past few years, ever-increasing advances in the resolution and accuracy of medical devices and instrumentation technologies have led to an explosion of data in biomedical research. Partners HealthCare recognized early on that a cloud-based research compute and storage infrastructure could be a compelling alternative for their researchers. Not only would it enable them to distribute costs and provide storage services on demand, but it would save on IT management time that was spent fixing all the independent research computers distributed across the Partners HealthCare network.

In an effort to address their unique needs, Partners HealthCare developed their own specially designed storage network. Initially, they chose Ethernet as the transport technology for their storage system. As demand grew, the solution began hitting significant performance bottlenecks, particularly during reads and writes of hundreds of thousands of small files. The issue was found to lie with the interconnect: Ethernet created problems due to its inherently high latency. To provide a scalable, low-latency solution, Partners HealthCare turned to InfiniBand. With InfiniBand on the storage back end, Partners HealthCare experienced roughly two orders of magnitude faster read times.

“One user had over 1,000 files, but only took up 100 gigs or so,” said Brent Richter, corporate manager for enterprise research infrastructure and services, Partners HealthCare System. “Doing that with Ethernet would take about 40 minutes just to list that directory. With InfiniBand, we reduced that to about a minute.”
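Put another way, using simple arithmetic on the figures quoted above (no additional measurements), that directory listing works out to roughly:

```latex
\frac{40\ \text{min}}{1000\ \text{files}} = \frac{2400\ \text{s}}{1000} \approx 2.4\ \text{s per file over Ethernet}
\qquad\text{vs.}\qquad
\frac{60\ \text{s}}{1000\ \text{files}} \approx 60\ \text{ms per file over InfiniBand}
```

That is about a 40x reduction in per-file metadata latency for that workload.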

Also, Partners HealthCare chose InfiniBand over 10-Gigabit Ethernet because InfiniBand is a lower latency protocol. “InfiniBand was price competitive and has lower latency than 10-Gig Ethernet,” said Richter.

Richter mentioned the final price tag came to about $1 per gigabyte.

By integrating InfiniBand into the storage solution, Partners HealthCare was able to dramatically reduce latency and increase performance, providing its customers with faster response times and higher capacity.

Great to see end-user cases like this come out! If you are a member of the IBTA and would like to share one of your InfiniBand end user deployments, please contact us at press@infinibandta.org.

Till next time,


Brian Sparks

IBTA Marketing Working Group Co-Chair

A Look Back at InfiniBand in 2009

December 21st, 2009

Dear IBTA Members,

As we wind down 2009 and look forward to spending time with family and friends, I thought it would be nice to have a blog posting to review all we have accomplished this year. From an InfiniBand perspective, 2009 was a good year.

We saw continued success on the TOP500, with the November 2009 list showing significant InfiniBand growth compared to the November 2008 list. Clearly InfiniBand is well on its way to becoming the number one interconnect on the TOP500. This is encouraging because it demonstrates user validation of the benefits of using InfiniBand.

Cable suppliers helped drive IBTA membership this year, as well as participation at the IBTA-sponsored Plugfest events held in April and October 2009. Also on the hardware side, we saw the introduction of numerous new switching platforms and new chips, both on the HCA and switch side. These types of investments further validate that multiple suppliers see InfiniBand as a growing, healthy marketplace.

In addition to all of the new hardware available, many software advancements were introduced. These included everything from wizards that simplify cluster installations to new accelerators that drive enhanced application performance, not to mention APIs that allow users to better integrate interconnect hardware with applications and schedulers.

As noted in an earlier blog, we had a very successful showing at SC09 with several hundred people visiting the booth and a number of well-attended presentations. To view presentations from the show, please visit the IBTA Resources Web pages.

During November, the IBTA conducted a survey of SC09 attendees. Highlights include:

  • 67.5% of survey respondents are using both InfiniBand and Ethernet in their current deployments
  • 62.5% of survey respondents are planning on using InfiniBand in their next server cluster, while only 20% are planning on using 10GE, 12.5% Ethernet and 5% proprietary solutions
  • The vast majority of respondents (85%) have between 1 and 36 storage nodes on average in their current clusters
  • 95% are using Linux for their operating system
  • 60% are using OFED as the software stack with the OS for interconnect, network and storage
  • When it comes to applications, the most important factors are: latency (35%), performance (35%), bandwidth (12.5%), cost (12.5%) and messaging rate (5%)
  • Over 67% indicated that they had considered IB as the interconnect for their storage

These results are encouraging, as they show that there is growing interest in InfiniBand, not only as the interconnect for the cluster compute nodes, but also for the storage side of the cluster.

Please be sure to check out the IBTA online press room for highlights from press and analyst coverage in 2009.

The Marketing Working Group would like to thank everyone for all of their efforts and continued support. We wish everyone a happy and healthy new year and look forward to working with you in 2010.

Kevin Judd

IBTA Marketing Working Group Co-Chair


Celebrating InfiniBand at SC09

November 23rd, 2009


I almost lost my voice at the show and, as I write this, I feel my body shutting down. Blame the wet, cold weather and the race from one briefing to the next… but it was all very worth it. Supercomputing is one of the biggest shows for the IBTA and its members, and with the amount of press activity this year, it certainly isn’t showing signs of decline.


The TOP500 showed InfiniBand rising, now connecting 182 systems (36 percent of the TOP500), and it clearly dominated the TOP10 through TOP300 systems. Half of the TOP10 systems are connected via InfiniBand and, although the new #1 system (Jaguar from ORNL) is a Cray, it’s important to note that InfiniBand is used as the storage interconnect connecting Jaguar to its “Spider” storage system.


But let’s talk efficiency for a moment… this edition of the TOP500 showed that 18 of the 20 most efficient systems used InfiniBand, and that InfiniBand system efficiency levels reached up to 96 percent! That’s over 50 percent higher efficiency than the best GigE cluster. Bottom line: WHEN PURCHASING NEW SYSTEMS, DON’T IGNORE THE NETWORK! You may be saving pennies on the network while wasting dollars on processors left underutilized by an unbalanced architecture.


In addition, the entire ecosystem is rallying around the next level of InfiniBand performance, with 24 exhibitors demonstrating up to 120Gb/s InfiniBand bandwidth (switch-to-switch) and the new CXP cabling options that support this jump in bandwidth.


We appreciated the participation of the OpenFabrics Alliance (OFA) in our booth and in some of our press and analyst briefings. We were happy to hear their announcement about Microsoft joining OFA, a strong affirmation of the importance of InfiniBand.


If you didn’t have a chance to swing by the IBTA booth, don’t worry: we videotaped all the great presentations and will have them up on the web for you shortly.


I apologize if I didn’t have a chance to meet with you and talk with all the members 1-on-1, and I hope to make up for it and see you all again at ISC’10 in Hamburg, Germany and SC10 in New Orleans. I’m sure the IBTA will have some more wonderful news to share with everyone. Congratulations IBTA and happy 10-year anniversary!


Safe travels to everyone,


Brian Sparks

IBTA Marketing Working Group Co-Chair
