Archive

Posts Tagged ‘HPC’

Documenting the World’s Fastest Network Installation

November 5th, 2010

Greetings InfiniBand Community,

As many of you know, every fall before Supercomputing, over 100 volunteers - including scientists, engineers, and students - come together to build the world’s fastest network: SCinet. This year, over 168 miles of fiber will be used to form the data backbone. The network takes months to build and is only active during the SC10 conference. As a member of the SCinet team, I’d like to use this blog on the InfiniBand Trade Association’s web site to give you an inside look at the network as it’s being built from the ground up.

This year, SCinet includes a 100 Gbps circuit alongside other infrastructure capable of delivering 260 gigabits per second of aggregate data bandwidth for conference attendees and exhibitors - enough bandwidth to transfer the entire collection of books at the Library of Congress in well under a minute. However, my main focus will be on building out SCinet’s InfiniBand network in support of distributed HPC application demonstrations.

For SC10, the InfiniBand fabric will consist of Quad Data Rate (QDR) 40, 80 and 120 gigabit-per-second (Gbps) circuits linking together various organizations and vendors, with high-speed 120Gbps circuits providing backbone connectivity throughout the SCinet InfiniBand switching infrastructure.
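
For readers who want to see where those figures come from, here is a minimal sketch of the usual QDR link arithmetic: 10 Gb/s signaling per lane, 8b/10b encoding, and link widths of 4X, 8X and 12X. These are the standard InfiniBand roadmap numbers, not values taken from the SCinet design documents.

```python
# Back-of-envelope InfiniBand QDR link rates (illustrative sketch only).
# QDR signals at 10 Gb/s per lane; 8b/10b line encoding means the
# effective data rate is 80 percent of the signaling rate.

QDR_LANE_SIGNALING_GBPS = 10.0
ENCODING_EFFICIENCY = 8.0 / 10.0  # 8b/10b encoding overhead

def qdr_link_rates(lane_count):
    """Return (signaling_gbps, effective_data_gbps) for a QDR link width."""
    signaling = lane_count * QDR_LANE_SIGNALING_GBPS
    return signaling, signaling * ENCODING_EFFICIENCY

for width in (4, 8, 12):  # 4X, 8X and 12X links, as quoted for the SC10 fabric
    signaling, data = qdr_link_rates(width)
    print(f"{width}X QDR: {signaling:.0f} Gb/s signaling, {data:.0f} Gb/s data")

# Prints:
# 4X QDR: 40 Gb/s signaling, 32 Gb/s data
# 8X QDR: 80 Gb/s signaling, 64 Gb/s data
# 12X QDR: 120 Gb/s signaling, 96 Gb/s data
```

In other words, the 40, 80 and 120Gbps figures above are signaling rates; the usable data rate of each link is somewhat lower because of the 8b/10b encoding.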

Here are some of the InfiniBand network specifics that we have planned for SC10:

  • 12X InfiniBand QDR (120Gbps) connectivity throughout the entire backbone network
  • 12 SCinet InfiniBand Network Participants
  • Approximately 11 equipment and software vendors working together to provide all the resources needed to build the IB network
  • Approximately 23 InfiniBand switches will be used for all the connections to the IB network
  • Approximately 5.39 miles (8.67 km) of fiber cable will be used to build the IB network

The photos on this page show all the IB cabling that we need to sort through and label prior to installation, as well as the numerous SCinet Network Operations Center (NOC) systems and distributed/remote NOC (dNOC) equipment racks being installed and configured.

In future blog posts, I’ll update you on the status of the SCinet installation and provide more details on the InfiniBand demonstrations that you’ll be able to see at the show, including a flight simulator in 3D and Remote Desktop over InfiniBand (RDI).

Stay tuned!

Eric Dube

SCinet/InfiniBand Co-Chair

InfiniBand Leads List of Russian Top50 Supercomputers; Connects 74 Percent, Including Seven of the Top10 Supercomputers

April 14th, 2010

Last week, the 12th edition of Russia’s Top50 list of the most powerful high performance computing systems was released at the annual Parallel Computing Technologies international conference. The list is ranked according to Linpack benchmark results and provides an important tool for tracking usage trends in HPC in Russia.

The fastest supercomputer on the Top50 is enabled by 40Gb/s InfiniBand and delivers a peak performance of 414 teraflops. More importantly, it is clear that InfiniBand dominates the list as the most-used interconnect solution, connecting 37 systems - including the top three and seven of the Top10.

According to the Linpack benchmark results, InfiniBand demonstrates up to 92 percent efficiency; InfiniBand’s high system efficiency and utilization allow users to maximize the return on investment in their HPC server and storage infrastructure. Nearly three quarters of the list - represented by leading research laboratories, universities, industrial companies and banks in Russia - rely on industry-leading InfiniBand solutions to provide the highest levels of bandwidth, efficiency, scalability and application performance.
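
For anyone new to the Linpack numbers, efficiency here is simply achieved performance (Rmax) divided by theoretical peak (Rpeak). The short sketch below illustrates the calculation; the 414-teraflop peak comes from the post above, while the Rmax value is a hypothetical placeholder chosen only to show how a roughly 92 percent figure would arise.

```python
# Linpack efficiency = Rmax / Rpeak (achieved vs. theoretical peak).
# Illustrative values: 414 TFlops is the #1 system's peak quoted above;
# the Rmax below is a hypothetical placeholder, not the published result.

def linpack_efficiency(rmax_tflops, rpeak_tflops):
    """Return Linpack efficiency as a fraction of theoretical peak."""
    return rmax_tflops / rpeak_tflops

rpeak_tflops = 414.0   # peak performance of the top Russian system
rmax_tflops = 380.0    # hypothetical achieved Linpack performance

print(f"Linpack efficiency: {linpack_efficiency(rmax_tflops, rpeak_tflops):.1%}")
# -> Linpack efficiency: 91.8%
```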

Highlights of InfiniBand usage on the April 2010 Russian TOP50 list include:

  • InfiniBand connects 74 percent of the Top50, including seven of the prestigious Top10 positions (#1, #2, #3, #6, #8, #9 and #10)
  • InfiniBand provides world-leading system utilization, up to 92 percent efficiency as measured by the Linpack benchmark
  • The list showed a sharp increase in aggregate performance - the total peak performance of the list exceeded 1PFlops to reach 1152.9TFlops, an increase of 120 percent compared to the September 2009 list (see the short calculation after this list) - highlighting the increasing demand for higher performance
  • Ethernet connects only 14 percent of the list (seven systems) and there were no 10GigE clusters
  • Proprietary clustering interconnects declined 40 percent to connect only three systems on the list
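
As a footnote to the aggregate-performance bullet above, the sketch below back-solves the implied September 2009 total from the stated 120 percent increase. The earlier figure is not quoted in this post, so treat the result as an inference rather than a published number.

```python
# Back-solving the September 2009 aggregate peak from the stated increase.
# Illustrative arithmetic only; the earlier list total is not quoted above.

current_total_tflops = 1152.9   # April 2010 aggregate peak performance
stated_increase = 1.20          # "an increase of 120 percent"

previous_total_tflops = current_total_tflops / (1.0 + stated_increase)
print(f"Implied September 2009 aggregate: {previous_total_tflops:.0f} TFlops")
# -> Implied September 2009 aggregate: 524 TFlops
```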

I look forward to seeing the results of the Top500 in June at the International Supercomputing Conference. I will be attending the conference, as will many of our IBTA colleagues, and I hope to see all of our HPC friends in Germany.

Brian Sparks

IBTA Marketing Working Group Co-Chair

Foundation for the Converging Data Center: InfiniBand with Intelligent Fabric Management

March 26th, 2010

Most data centers are starting to realize the benefits of virtualization, cloud computing and automation. However, the heavy I/O requirements and intense need for better visibility and control quickly become key challenges that create inefficiencies and inhibit wider adoption of these advancements.

InfiniBand, coupled with intelligent fabric management software, can address many of these challenges, specifically those related to connectivity and I/O.

Technology analysis firm The Taneja Group recently took an in-depth look at this topic and published an interesting whitepaper, “Foundation for the Converging Data Center: Intelligent Fabric Management.” The paper lays out the requirements for intelligent fabric management and highlights how the right software can harness the traffic analysis capabilities that are inherent in and unique to InfiniBand to make data centers run a lot more efficiently. You can download the paper for free here.

Another interesting InfiniBand data point they shared: “in a recent Taneja Group survey of 359 virtual server administrators, those with InfiniBand infrastructures considered storage provisioning and capacity/performance management twice as easy as users of some of the other fabrics” (Taneja Group 2009 Survey of Storage Best Practices and Server Virtualization).

The High Performance Computing Center Stuttgart (HLRS) is one organization that has benefited from using 40 Gb/s InfiniBand and Voltaire’s Unified Fabric Manager™ (UFM™) software on its 700-node multi-tenant cluster, which essentially operates as a cloud delivering HPC services to its customers. You can read more about it here.

I’d also like to invite you to tune into a recent webinar, “How to Optimize and Accelerate Application Performance with Intelligent Fabric Management,” co-hosted by Voltaire, The Taneja Group and Adaptive Computing. In it, we explore this topic further and give an overview of some of the key capabilities of Voltaire’s UFM software.

Look forward to seeing many of you at Interop in Las Vegas next month!

Christy Lynch

Director, Corporate Communications, Voltaire

Member, IBTA Marketing Working Group

OpenFabrics Alliance Update and Invitation to Sonoma Workshop

January 27th, 2010

Happy New Year, InfiniBand community! I’m writing to you on behalf of the OpenFabrics Alliance (OFA), although I am also a member of the IBTA’s marketing working group. The OFA had many highlights in 2009 that may be of interest to InfiniBand vendors and end users. I wanted to review some of the news, events and technology updates from the past year, as well as invite you to our 6th Annual International Sonoma Workshop, taking place March 14-17, 2010.

At the OFA’s Sonoma Workshop in March 2009, OpenFabrics firmed up 40 Gigabit InfiniBand support and 10 Gigabit Ethernet support in the same Linux releases, providing a converged network strategy that leverages the best of both technologies. At the International Supercomputing Conference (ISC) in Germany in June 2009, OFED software was used on the exhibitors’ 40 Gigabit InfiniBand network, and then on a much larger 120 Gigabit network at SC09 in November.

Also at SC09, the current Top500 list of the world’s fastest supercomputer sites was published. The number of systems on the list that use OpenFabrics software is closely tied to the number of Top500 systems using InfiniBand. For InfiniBand, the numbers on the November 2009 list are as follows: 5 in the Top 10, 64 in the Top 100, and 186 in the Top500. For OpenFabrics, the numbers may be slightly higher because the Top500 does not capture the interconnect used for storage at the sites. One example of this is the Jaguar machine at Oak Ridge National Laboratory, which lists its interconnect as “proprietary” when in fact the system also has a large InfiniBand and OFED-driven storage infrastructure.

Key new members that joined the OFA last year included Cray and Microsoft. These new memberships convey the degree to which OFED has become a de facto standard for InfiniBand in HPC, where the lowest possible latency brings the most value to computing and application performance.

There are, of course, significant numbers of sites in the Top500 where legacy 1 Gigabit Ethernet and TCP/IP are still perceived as being sufficient from a cost-performance perspective. OpenFabrics believes that as the cost of 10 Gigabit Ethernet chips on motherboards and NICs comes down, many of these sites will consider moving to 10 Gigabit Ethernet. And with InfiniBand over Ethernet (commonly referred to as RoCEE) and iWARP shipping together in OFED 1.5.1 beginning in March 2010, some sites will move to InfiniBand to capture the biggest improvement possible.

OpenIB, the predecessor to OpenFabrics, helped drive the early adoption of InfiniBand. The existence of free software that fully supports a variety of vendors’ proprietary hardware makes it easier for vendors to increase their hardware investments. The end user is subsequently able to purchase more hardware, as nothing needs to be spent on proprietary software to enable the system. This is one open source business model that has evolved, but none of us knows whether it is sustainable for the long term in HPC, or whether the enterprise and cloud communities will adopt it as well.

Please join me at this year’s OFA Sonoma Workshop, where the focus will continue to be on the integrated delivery of both InfiniBand and 10 Gigabit Ethernet support in OFED releases, which we believe makes OFED very attractive for cloud and enterprise data centers.

Bill Boas

Executive Director, OpenFabrics Alliance

A Look Back at InfiniBand in 2009

December 21st, 2009

Dear IBTA Members,

As we wind down 2009 and look forward to spending time with family and friends, I thought it would be nice to have a blog posting to review all we have accomplished this year. From an InfiniBand perspective, 2009 was a good year.

We saw continued success on the TOP500, with the November 2009 list showing significant InfiniBand growth compared to the November 2008 list. Clearly InfiniBand is well on its way to becoming the number one interconnect on the TOP500. This is encouraging because it demonstrates user validation of the benefits of using InfiniBand.

Cable suppliers helped drive IBTA membership this year, as well as participation at the IBTA-sponsored Plugfest events held in April and October 2009. Also on the hardware side, we saw the introduction of numerous new switching platforms and new chips on both the HCA and switch sides. These types of investments further validate that multiple suppliers see InfiniBand as a growing, healthy marketplace.

In addition to all of the new hardware available, many software advancements were introduced. These included everything from wizards that simplify cluster installations to new accelerators that drive enhanced application performance – not to mention APIs that allow users to better integrate interconnect hardware with applications and schedulers.

As noted in an earlier blog, we had a very successful showing at SC09 with several hundred people visiting the booth and a number of well-attended presentations. To view presentations from the show, please visit the IBTA Resources Web pages.

During November, the IBTA conducted a survey of SC09 attendees. Highlights include:

  • 67.5% of survey respondents are using both InfiniBand and Ethernet in their current deployments
  • 62.5% of survey respondents are planning on using InfiniBand in their next server cluster, while only 20% are planning on using 10GE, 12.5% Ethernet and 5% proprietary solutions
  • The vast majority of respondents (85%) have between 1 and 36 storage nodes on average for their current clusters
  • 95% are using Linux for their operating system
  • 60% are using OFED as the software stack with the OS for interconnect, network and storage
  • When it comes to applications, the most important factors are: latency (35%), performance (35%), bandwidth (12.5%), cost (12.5%) and messaging rate (5%)
  • Over 67% indicated that they had considered IB as the interconnect for their storage

These results are encouraging, as they show that there is growing interest in InfiniBand, not only as the interconnect for the cluster compute nodes, but also for the storage side of the cluster.

Please be sure to check out the IBTA online press room for highlights from press and analyst coverage in 2009.

The Marketing Working Group would like to thank everyone for all of their efforts and continued support. We wish everyone a happy and healthy new year and look forward to working with you in 2010.

Kevin Judd

IBTA Marketing Working Group Co-Chair

Celebrating InfiniBand at SC09

November 23rd, 2009

I almost lost my voice at the show and, as I write this, I feel my body shutting down. Blame the wet, cold weather and the race to go from one briefing to the next… but it was all well worth it. Supercomputing is certainly one of the biggest shows for the IBTA and its members and, with the amount of press activity that happened this year, it certainly isn’t showing signs of decline.

The TOP500 showed InfiniBand on the rise, connecting 182 systems (36 percent of the list), and it clearly dominated the TOP10 through TOP300 systems. Half of the TOP10 systems are connected via InfiniBand and, although the new #1 system (Jaguar from ORNL) is a Cray, it’s important to note that InfiniBand is being used as the storage interconnect connecting Jaguar to the “Spider” storage system.

But let’s talk efficiency for a moment… this edition of the TOP500 showed that 18 of the 20 most efficient systems used InfiniBand and that InfiniBand system efficiency levels reached up to 96 percent! That’s over 50 percent more efficiency than the best GigE cluster. Bottom line: WHEN PURCHASING NEW SYSTEMS, DON’T IGNORE THE NETWORK! You may be saving pennies on the network and spending dollars on the processors with an unbalanced architecture.
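
To make the “50 percent more efficiency” comparison concrete, here is a small sketch of the arithmetic. The 96 percent InfiniBand figure is from the list; the GigE figure below is an assumed ballpark for the best Gigabit Ethernet cluster, used purely for illustration.

```python
# Relative efficiency comparison (illustrative arithmetic only).
# 0.96 is the best InfiniBand Linpack efficiency cited above; 0.63 is an
# assumed ballpark for the best GigE cluster, not a published TOP500 value.

ib_efficiency = 0.96
gige_efficiency = 0.63

relative_advantage = ib_efficiency / gige_efficiency - 1.0
print(f"InfiniBand efficiency advantage over GigE: {relative_advantage:.0%}")
# -> InfiniBand efficiency advantage over GigE: 52%
```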


In addition, the entire ecosystem is rallying around the next level of InfiniBand performance, with 24 exhibitors demonstrating up to 120Gb/s InfiniBand bandwidth (switch-to-switch) and the new CXP cabling options to support this jump in bandwidth.

We appreciated the participation of the OpenFabrics Alliance (OFA) in our booth and in some of our press and analyst briefings. We were happy to hear their announcement about Microsoft joining OFA, a strong affirmation of the importance of InfiniBand.

If you didn’t have a chance to swing by the IBTA booth, don’t worry: we videotaped all the great presentations and will have them up on the web for you shortly.

I apologize if I didn’t have a chance to meet with you and talk with all the members one-on-one, and I hope to make up for it and see you all again at ISC’10 in Hamburg, Germany, and SC10 in New Orleans. I’m sure the IBTA will have some more wonderful news to share with everyone. Congratulations IBTA and happy 10-year anniversary!

Safe travels to everyone,

Brian Sparks

IBTA Marketing Working Group Co-Chair

IBTA to Celebrate 10-Year Anniversary at SC’09

November 5th, 2009

It’s that time of the year again. The most important event for the InfiniBand Trade Association and the InfiniBand community at large is upon us – Supercomputing ’09 (SC’09), taking place in Portland, Oregon, from November 14-20.

This year’s event promises to be an impressive gathering of the top minds in high-performance computing, with expected attendance of 11,000 researchers, scientists, engineers and computing experts from around the world. InfiniBand will certainly have a strong presence at the event, with 16 IBTA members exhibiting, a slew of expected InfiniBand-related product announcements from the entire ecosystem and, of course, InfiniBand-related demos set up all over the show floor.

The SCinet InfiniBand network – being installed as I write this – will see a substantial speed increase this year: up to 120Gb/s connectivity throughout the entire network. More than 20 exhibitors will be showcasing SCinet, and the demonstration is expected to be an SC’09 highlight.

The IBTA will also have a 10×20 booth, a special logo, collateral, signage, giveaways and presentations to highlight and celebrate its 10-year anniversary.

We’re hoping you will stop by booth #139 and visit us, especially Monday night from 7-9pm, as we will be hosting a 10-Year Anniversary Cocktail Reception.

Come Celebrate with the IBTA!

Brian Sparks

IBTA Marketing Working Group Co-Chair

ISCnet: Europe’s Largest 40Gb/s InfiniBand Demonstration

June 25th, 2009

While many tradeshows this year have suffered from declining vendor participation and user attendance, the high-performance computing community was out in full force this week at the International Supercomputing Conference (ISC’09) in Hamburg, Germany. The IBTA was well represented with several members exhibiting, including Finisar, Fujitsu Limited, IBM, Intel, LSI, Mellanox, NEC, QLogic, SGI, Sun, Voltaire and W. L. Gore & Associates.

Read more…

InfiniBand Enables Low-Latency Market Data Delivery for Financial Services Firms

June 18th, 2009

InfiniBand is largely thought of as a high-performance interconnect for large, TOP500-class HPC systems. It certainly is, but InfiniBand’s high-bandwidth, low-latency capabilities are also useful across many vertical industries, including financial services. A lot of us will be attending ISC’09 in Hamburg, but InfiniBand will also be in the spotlight at the annual Securities Industry and Financial Markets Association (SIFMA) show in NYC, which takes place next week as well.

Read more…

Analyst Reports Confirm InfiniBand Growth

June 15th, 2009

Recently, the InfiniBand Trade Association (IBTA) released results from three analyst reports, demonstrating the continued market growth of InfiniBand products in the HPC and data center server and storage markets.

HPC Market Growth

For those following InfiniBand, the finding in the March 2009 Tabor Research report, “InfiniBand: Increases in Speed, Usage, Competition,” that “60 percent of surveyed HPC systems installed since the start of 2007 use InfiniBand as a system interconnect” should come as no surprise. What is most promising is that “among those systems, over 30 percent also use InfiniBand for a LAN or storage interconnect.” In my view, that serves as a leading indicator that customers are leveraging InfiniBand’s capabilities as the “unified wire”.

The report goes on to highlight increased demand for InfiniBand among customers consolidating their I/O fabrics. “HPC-using organizations that are considering converged fabric strategies are more likely to consolidate on InfiniBand than Ethernet.”

Read more…