Archive

Posts Tagged ‘HPC’

InfiniBand on the Road to Exascale Computing

January 21st, 2011

(Note: This article appears with reprint permission of The Exascale Report™)

InfiniBand has been making remarkable progress in HPC, as evidenced by its growth in the Top500 rankings of the highest-performing computers. In the November 2010 update to these rankings, InfiniBand use grew another 18 percent; it now helps power 43 percent of all listed systems, including 57 percent of all high-end “Petascale” systems.

The march toward higher and higher performance levels continues. Today, computation is a critical part of science, where it complements observation, experiment and theory. The computational performance of high-end computers has been increasing by a factor of 1000X roughly every 11 years.
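As a quick back-of-the-envelope check on that rate, a factor of 1000 over 11 years corresponds to compound growth of roughly 87 percent per year - in other words, high-end computational capability has been nearly doubling every year:

    1000^(1/11) = 10^(3/11) ≈ 1.87, i.e., about 87% growth per year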

InfiniBand has demonstrated that it plays an important role at the current Petascale level of computing, driven by its bandwidth, low-latency implementations and fabric efficiency. This article explores how InfiniBand will continue to pace high-end computing as it moves toward the Exascale level.

Figure 1 - The Golden Age of Cluster Computing

InfiniBand Today

Figure 1 illustrates how the high end of HPC crossed the 1 Terascale mark in 1997 (10^12 floating-point operations per second) and increased three orders of magnitude to the 1 Petascale mark in 2008 (10^15 floating-point operations per second). As you can see, the underlying system architectures changed dramatically during this time. The cluster computing model, based on commodity server processors, has grown to dominate much of high-end HPC. More recently, this model has been augmented by the emergence of GPUs.

Figure 2 - Emergence of InfiniBand in the Top500

Figure 2 shows how interconnects track with changes in the underlying system architectures. The appearance first of 1 GbE, followed by the growth of InfiniBand interconnects, was a key enabler of the cluster computing model. The industry-standard InfiniBand and Ethernet interconnects have largely displaced earlier proprietary interconnects. InfiniBand continues to gain share relative to Ethernet, driven largely by performance factors such as low latency and high bandwidth, the ability to support high-bisectional-bandwidth fabrics, and overall cost-effectiveness.

Getting to Exascale
What we know today is that Exascale computing will require enormously larger computer systems than are available today. What we don’t know is what those computers will look like. We have been in the golden age of cluster computing for much of the past decade, and the model appears to scale well going forward. However, there is as yet no clear consensus on the system architecture for Exascale. What we can do is map the evolution of InfiniBand to the evolution toward Exascale.

Given historical growth rates, the industry anticipates that Exascale computing will be reached around 2018. However, three orders of magnitude beyond where we are today is too great a change to make in a single leap. In addition, the industry is still assessing what system structures will make up systems of that size.

Figure 3 - Steps from Petascale to Exascale

Figure 3 provides guidance as to the key capabilities of the interconnect as computer systems increase in power by each order of magnitude, from current high-end systems at 1 PetaFLOPS to 10 PF, 100 PF and finally 1000 PF = 1 ExaFLOPS. Over time, computational nodes will provide increasing performance with advances in processor and system architecture. This performance increase must be matched by a corresponding increase in network bandwidth to each node. However, the increased performance per node also tends to hold down the increase in the total number of nodes required to reach a given level of system performance.

Today, 4x QDR InfiniBand (40 Gbps) is the interconnect of choice for many large-scale clusters. Current InfiniBand technology comfortably supports systems with performance on the order of 1 PetaFLOPS. Deployments on the order of 10,000 nodes have been achieved, and 4x QDR link bandwidths are offered by multiple vendors. InfiniBand interconnects are used in 57 percent of the current Petascale systems on the Top500 list.

Moving from 1 PetaFLOPS to 10 PetaFLOPS is well within the reach of the current InfiniBand roadmap. Reaching 35,000 nodes is within the currently defined InfiniBand address space. The required 12 GB/s links can be achieved either by 12x QDR or, more likely, by the 4x EDR data rates (104 Gbps) now being defined on the InfiniBand industry bandwidth roadmap. Such data rates also assume PCIe Gen3 host connections, which are expected in the forthcoming processor generation.
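For a rough sense of where those link numbers come from (a sketch using the per-lane signaling rates and line encodings associated with each InfiniBand generation; the EDR figures assume the rates as ultimately standardized):

    4x QDR: 4 lanes × 10 Gbps signaling, 8b/10b encoding → ~32 Gbps of data ≈ 4 GB/s per link
    4x EDR: 4 lanes × ~25.8 Gbps signaling, 64b/66b encoding → ~100 Gbps of data ≈ 12.5 GB/s per link

The move from 8b/10b to the more efficient 64b/66b encoding is part of what makes 4x EDR the more attractive path to 12 GB/s links.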

The next order-of-magnitude increase in system performance, from 10 PetaFLOPS to 100 PetaFLOPS, will require additional evolution of the InfiniBand standards to permit hundreds of thousands of nodes to be addressed. The InfiniBand industry is already initiating discussions as to what evolved capabilities are needed for systems of such scale. As in the prior step up in performance, the required link bandwidths can be achieved by 12x EDR (currently being defined) or perhaps 4x HDR (which has been identified on the InfiniBand industry roadmap). Systems of such scale may also exploit topologies such as mesh/torus or hypercube, for which there are already large-scale InfiniBand deployments.
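One way to see why addressing becomes the gating factor at this scale (a sketch based on InfiniBand subnet addressing, in which each end port within a subnet is assigned a 16-bit local identifier, or LID):

    16-bit LID space → 2^16 = 65,536 identifiers per subnet, of which roughly 48K are usable for unicast

A system on the order of 35,000 nodes therefore fits within a single subnet today, while hundreds of thousands of endpoints would require either a larger address space or routing across multiple subnets.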

The remaining order-of-magnitude increase in system performance, from 100 PetaFLOPS to 1 ExaFLOPS, requires link bandwidths to increase once again. Either 12x HDR or 4x NDR links will need to be defined. It is also expected that optical technology will play a greater role in systems of such scale.

The Meaning of Exascale

Reaching Exascale computing levels involves much more than just the interconnect. Pending further developments in computer systems design and technology, such systems are expected to occupy many hundreds of racks and consume perhaps 20 megawatts of power. Just as many of today’s high-end systems are purpose-built with unique packaging, power distribution, cooling and interconnect architectures, we should expect Exascale systems to be predominantly purpose-built. However, before we conclude that the golden age of cluster computing, with its reliance on effective industry-standard interconnects such as InfiniBand, has ended, let’s look further at the data.

Figure 4 - Top500 Performance Trends

Figure 4 is the trends chart from the Top500. At first glance, it shows the tremendous growth of high-end HPC over the past two decades and projects these trends continuing for the next decade. However, it also shows that the performance of the #1-ranked system is about two orders of magnitude greater than that of the #500-ranked system.

Figure 5 - Top500 below 1 PetaFLOPS (November 2010)

This is further illustrated in Figure 5, which shows performance vs. rank from the November 2010 Top500 list – the seven systems above 1 PetaFLOPS have been omitted so as not to stretch the vertical axis too much. We see that only the 72 highest-ranked systems come within an order of magnitude of 1 PetaFLOPS (1000 TeraFLOPS). This trend is expected to continue, with the implication that once the highest-end HPC systems reach the 1 ExaFLOPS threshold, the majority of Top500 systems will be at most on the order of 100 PetaFLOPS, with the #500-ranked system on the order of 10 PetaFLOPS.

Although we often use the Top500 rankings as an indicator of high-end HPC, the vast majority of HPC deployments occur below the Top500.

InfiniBand Evolution
InfiniBand has been an extraordinarily effective interconnect for HPC, with demonstrated scaling up to the Petascale level. The InfiniBand architecture permits low-latency implementations and has a bandwidth roadmap matching the capabilities of host processor technology. InfiniBand’s fabric architecture permits the implementation and deployment of highly efficient fabrics, in a range of topologies, with congestion management and resiliency capabilities.

The InfiniBand community has demonstrated that the architecture can evolve to remain vibrant. The Technical Working Group is currently assessing architectural evolution to permit InfiniBand to continue to meet the needs of increasing system scale.

As we move toward an Exascale HPC environment that may rely on purpose-built systems, the cluster computing model enabled by InfiniBand interconnects will remain a vital communications model, extending well into the Top500.

Lloyd Dickman
Technical Working Group, IBTA

(Note: This article appears with reprint permission of The Exascale Report™)

Last days of SC10

November 19th, 2010

I’ve heard there were more than 10,000 people in New Orleans this week for the SC10 conference and from what I saw on the show floor, in sessions and in the restaurants around the French Quarter, I believe it. The SCinet team has had several tiring yet productive days ensuring the network ran smoothly for the more than 342 exhibitors at the show.

One very popular demo was the real time 3D flight simulator (see photo below), which was displayed in multiple booths at the show. The flight simulator provided a virtual high-resolution rendering of Salt Lake City, Utah, from the air, running over SCinet’s high-speed, low-latency InfiniBand/RDMA network.

Real time 3D flight simulator

This year, SCinet introduced the SCinet Research Sandbox. Sandbox participants were able to utilize the network infrastructure to demonstrate 100G networks for a wide variety of applications, including petascale computing, next-generation approaches to wide area file transfer, security analysis tools, and data-intensive computing.

This is the tenth Supercomputing show I’ve attended, and I’ve made a few observations. Years ago, I used to see a lot of proprietary processors, interconnects, and storage. Now we’re seeing much more standardization around technologies such as InfiniBand. There has also been a lot of interest this year around 100G connectivity and the need for ever higher data rates.

Several members of the SCinet team. Thank you to all of the volunteers who helped make SCinet a success this week!

The first couple of shows I attended were very scientific and academic in nature. Now as I walk the show floor, it’s exciting to see more commercial HPC applications for financial services, automotive/aviation, and oil & gas.

I had a great time in New Orleans, and I look forward to my next ten SC conferences. See you next year at SC11 in Seattle, WA!

Eric Dube

SCinet/InfiniBand Co-Chair

SCinet Update November 13 – Conference Begins!

November 13th, 2010

Today is Saturday, November 13, and sessions for SC10 have begun. We’re in the home stretch to get SCinet installed. We’ve been working feverishly to get everything running before the start of the conference. In addition, the network demonstrations should all be live in time for the Exhibition Press Tour on Monday night from 6-7 pm.

I’ve included more photos to show you the network in progress. If you’re going to New Orleans and have any questions about SCinet, be sure to stop by our help desk.

All the SCinet DNOCs located throughout the show floor are now finished and ready to go.

The show floor is busy as ever and exhibitor booth construction is well underway.

SCinet's main NOC network equipment racks provide connectivity for network subscribers to the world's fastest network.

All the power distribution units needed to supply power to all the network equipment in the SCinet main NOC.

SCinet has more than 100 volunteers working behind the scenes to bring up the world’s fastest network. Planning began more than a year ago and all of our hard work is about to pay off as we connect SC10 exhibitors and attendees to leading research and commercial networks around the world, including the Department of Energy’s ESnet, Internet2, National LambdaRail and LONI (Louisiana Optical Network Initiative).

I will blog again as the show gets rolling and provide updates on our demos in action.

See you soon!

Eric Dube

SCinet/InfiniBand Co-Chair

SCinet Update November 11 – Two Days Until the Show!

November 11th, 2010

SCinet Network Operations Center (NOC) stage

SCinet Network Operations Center (NOC) stage

For those heading to New Orleans for the SC10 conference, the weather this week is upper 70s and clear - although the SCinet team hasn’t had much of a chance to soak up the sun. We’re busy building the world’s fastest network - to be up and running this Sunday, November 14, for one week only. It’s going to be a busy couple of days… let me give you an update on progress to date.

The main SCinet Network Operations Center (NOC) stage is in the process of being built. I’ve included a photo of the convention center and our initial framing, followed by a picture after the power and aerial fiber cable drops have been installed.

Our SCinet team includes volunteers from educational institutions, high performance computing centers, network equipment vendors, U.S. national laboratories, research institutions, research networks and telecommunication carriers, all working together to design and deliver the SCinet infrastructure.

The picture below shows SCinet team members Mary Ellen Dube and Parks Fields receiving the aerial fiber cables that are being lowered from the catwalks scattered throughout the convention center floor.

Aerial fiber cables being lowered from catwalks

Below is a photo of Cary Whitney (my fellow SCinet/InfiniBand Co-Chair) and Parks Fields testing aerial InfiniBand active optical cables running between the distributed/remote NOCs.

Cary Whitney and Parks Fields

Aerial InfiniBand active optical cables

I’ve also included a picture of myself and team member DA Fye running a lift to install the aerial fiber cables going between the main NOC and distributed/remote NOCs throughout the show floor. Next to that is a photo of some of the many InfiniBand active optical cables going to the main SCinet NOC.

Running a lift with DA Fye

InfiniBand active optical cables going to the main SCinet NOC

This year’s SC10 exhibitors and attendees are anticipated to push SCinet’s capacity and capabilities to the extreme. I’ll keep updating this blog to show you how we’re preparing for the show and expected demands on the network.

Eric Dube

SCinet/InfiniBand Co-Chair

Documenting the World’s Fastest Network Installation

November 5th, 2010

Greetings InfiniBand Community,

As many of you know, every fall before Supercomputing, over 100 volunteers - including scientists, engineers, and students - come together to build the world’s fastest network: SCinet. This year, over 168 miles of fiber will be used to form the data backbone. The network takes months to build and is only active during the SC10 conference. As a member of the SCinet team, I’d like to use this blog on the InfiniBand Trade Association’s web site to give you an inside look at the network as it’s being built from the ground up.

This year, SCinet includes a 100 Gbps circuit alongside other infrastructure capable of delivering 260 gigabits per second of aggregate data bandwidth for conference attendees and exhibitors - enough bandwidth to transfer the entire collection of books at the Library of Congress in well under a minute. However, my main focus will be on building out SCinet’s InfiniBand network in support of distributed HPC applications demonstrations.

For SC10, the InfiniBand fabric will consist of Quad Data Rate (QDR) 40, 80, and 120-gigabit-per-second (Gbps) circuits linking together various organizations and vendors, with high-speed 120 Gbps circuits providing backbone connectivity throughout the SCinet InfiniBand switching infrastructure.
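For reference, those circuit speeds follow directly from the QDR per-lane signaling rate and the standard InfiniBand link widths (a simple sketch, quoting signaling rather than payload data rates):

    4x QDR = 4 × 10 Gbps = 40 Gbps
    8x QDR = 8 × 10 Gbps = 80 Gbps
    12x QDR = 12 × 10 Gbps = 120 Gbps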

Here are some of the InfiniBand network specifics that we have planned for SC10:

  • 12x InfiniBand QDR (120 Gbps) connectivity throughout the entire backbone network
  • 12 SCinet InfiniBand network participants
  • Approximately 11 equipment and software vendors working together to provide all the resources to build the IB network
  • Approximately 23 InfiniBand switches used for all the connections to the IB network
  • Approximately 5.39 miles (8.67 km) of fiber cable used to build the IB network

The photos on this page show all the IB cabling that we need to sort through and label prior to installation, as well as the numerous SCinet Network Operations Center (NOC) and distributed/remote NOC (dNOC) equipment racks being installed and configured.

In future blog posts, I’ll update you on the status of the SCinet installation and provide more details on the InfiniBand demonstrations that you’ll be able to see at the show, including a flight simulator in 3D and Remote Desktop over InfiniBand (RDI).

Stay tuned!

Eric Dube

SCinet/InfiniBand Co-Chair

InfiniBand Leads List of Russian Top50 Supercomputers; Connects 74 Percent, Including Seven of the Top10 Supercomputers

April 14th, 2010

Last week, the 12th edition of Russia’s Top50 list of the most powerful high performance computing systems was released at the annual Parallel Computing Technologies international conference. The list is ranked according to Linpack benchmark results and provides an important tool for tracking usage trends in HPC in Russia.

The fastest supercomputer on the Top50 is enabled by 40Gb/s InfiniBand and delivers a peak performance of 414 teraflops. More importantly, it is clear that InfiniBand dominates the list as the most-used interconnect solution, connecting 37 systems - including the top three and seven of the Top10.

According to the Linpack benchmark results, InfiniBand demonstrates up to 92 percent efficiency; InfiniBand’s high system efficiency and utilization allow users to maximize the return on investment for their HPC server and storage infrastructure. Nearly three quarters of the list - represented by leading research laboratories, universities, industrial companies and banks in Russia - rely on industry-leading InfiniBand solutions to provide the highest levels of bandwidth, efficiency, scalability and application performance.
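For readers less familiar with the metric, Linpack efficiency is simply the ratio of measured performance to theoretical peak:

    efficiency = Rmax / Rpeak

so a 92 percent figure means a system achieves 92 percent of its theoretical peak FLOPS on the Linpack benchmark - a level at which the interconnect is leaving very little performance on the table.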

Highlights of InfiniBand usage on the April 2010 Russian TOP50 list include:

  • InfiniBand connects 74 percent of the Top50, including seven of the Top10 most prestigious positions (#1, #2, #3, #6, #8, #9 and #10)
  • InfiniBand provides world-leading system utilization, up to 92 percent efficiency as measured by the Linpack benchmark
  • The list showed a sharp increase in aggregate performance - the total peak performance of the list exceeded 1 PFlops, reaching 1152.9 TFlops, an increase of 120 percent compared to the September 2009 list - highlighting the increasing demand for higher performance
  • Ethernet connects only 14 percent of the list (seven systems) and there were no 10GigE clusters
  • Proprietary clustering interconnects declined 40 percent to connect only three systems on the list

I look forward to seeing the results of the Top500 in June at the International Supercomputing Conference. I will be attending the conference, as will many of our IBTA colleagues, and I look forward to seeing all of our HPC friends in Germany.

Brian Sparks

IBTA Marketing Working Group Co-Chair

Foundation for the Converging Data Center: InfiniBand with Intelligent Fabric Management

March 26th, 2010

Most data centers are starting to realize the benefits of virtualization, cloud computing and automation. However, the heavy I/O requirements and intense need for better visibility and control quickly become key challenges that create inefficiencies and inhibit wider adoption of these advancements.

InfiniBand coupled with intelligent fabric management software can address many of these challenges - specifically those related to connectivity and I/O.

Technology analysis firm The Taneja Group recently took an in-depth look at this topic and published an interesting whitepaper, “Foundation for the Converging Data Center: Intelligent Fabric Management.” The paper lays out the requirements for intelligent fabric management and highlights how the right software can harness the traffic analysis capabilities that are inherent in and unique to InfiniBand to make data centers run a lot more efficiently. You can download the paper for free here.

Another interesting InfiniBand data point they shared: “In a recent Taneja Group survey of 359 virtual server administrators, those with InfiniBand infrastructures considered storage provisioning and capacity/performance management twice as easy as users of some of the other fabrics” (Taneja Group 2009 Survey of Storage Best Practices and Server Virtualization).

The High Performance Computing Center Stuttgart (HLRS) is one organization that has benefited from using 40 Gb/s InfiniBand and Voltaire’s Unified Fabric Manager™ (UFM™) software on their 700-node multi-tenant cluster, which basically operates as a cloud delivering HPC services to their customers. You can read more about it here.

I’d also like to invite you to tune into a recent webinar, “How to Optimize and Accelerate Application Performance with Intelligent Fabric Management,” co-hosted by Voltaire, The Taneja Group and Adaptive Computing. In it, we explore this topic further and give an overview of some of the key capabilities of Voltaire’s UFM software.

Look forward to seeing many of you at Interop in Las Vegas next month!

Christy Lynch

Director, Corporate Communications, Voltaire

Member, IBTA Marketing Working Group

OpenFabrics Alliance Update and Invitation to Sonoma Workshop

January 27th, 2010

Happy New Year, InfiniBand community! I’m writing to you on behalf of the OpenFabrics Alliance (OFA), although I am also a member of the IBTA’s marketing working group. The OFA had many highlights in 2009, several of which may be of interest to InfiniBand vendors and end users. I wanted to review some of the news, events and technology updates from the past year, as well as invite you to our 6th Annual International Sonoma Workshop, taking place March 14-17, 2010.

At the OFA’s Sonoma Workshop in March 2009, OpenFabrics firmed up 40 Gigabit InfiniBand support and 10 Gigabit Ethernet support in the same Linux releases, providing a converged network strategy that leverages the best of both technologies. At the International Supercomputer Conference (ISC) in Germany in June 2009, OFED software was used on the exhibitors’ 40 Gigabit InfiniBand network and then a much larger 120 Gigabit network at SC09 in November.

Also at SC09, the current Top500 list of the world’s fastest supercomputer sites was published. The number of systems on the list that use OpenFabrics software is closely tied to the number of Top500 systems using InfiniBand. For InfiniBand, the numbers on the November 2009 list are as follows: 5 in the Top 10, 64 in the Top 100, and 186 in the Top500. For OpenFabrics, the numbers may be slightly higher because the Top500 does not capture the interconnect used for storage at the sites. One example of this is the Jaguar machine at Oak Ridge National Laboratory, which lists “proprietary,” when in fact the system also has a large InfiniBand and OFED-driven storage infrastructure.

Key new members that joined the OFA last year included Cray and Microsoft. These new memberships convey the degree to which OFED has become a de facto standard for InfiniBand in HPC, where the lowest possible latency brings the most value to computing and application performance.

There are, of course, significant numbers of sites in the Top500 where legacy 1 Gigabit Ethernet and TCP/IP are still perceived as sufficient from a cost-performance perspective. OpenFabrics believes that as the cost of 10 Gigabit Ethernet chips on motherboards and NICs comes down, many of these sites will consider moving to 10 Gigabit Ethernet. With InfiniBand over Ethernet (commonly referred to as RoCEE) and iWARP both included in OFED 1.5.1 beginning in March 2010, some sites will move to InfiniBand to capture the biggest improvement possible.

OpenIB, predecessor to OpenFabrics, helped in the early adoption of InfiniBand. The existence of free software, fully supporting a variety of vendors’ proprietary hardware, makes it easier for vendors to increase their hardware investments. The end user subsequently is able to purchase more hardware, as nothing needs to be spent on proprietary software to enable the system. This is one open source business model that has evolved, but none of us know if this is sustainable for the long term in HPC, or whether the enterprise and cloud communities will adopt it as well.

Please join me at this year’s OFA Sonoma Workshop, where the focus will continue to be on the integrated delivery of both InfiniBand and 10 Gigabit Ethernet support in OFED releases, which we believe makes OFED very attractive for cloud and enterprise data centers.

Bill Boas

Executive Director, OpenFabrics Alliance

A Look Back at InfiniBand in 2009

December 21st, 2009

Dear IBTA Members,

As we wind down 2009 and look forward to spending time with family and friends, I thought it would be nice to have a blog posting to review all we have accomplished this year. From an InfiniBand perspective, 2009 was a good year.

We saw continued success on the TOP500, with the November 2009 list showing significant InfiniBand growth compared to the November 2008 list. Clearly InfiniBand is well on its way to becoming the number one interconnect on the TOP500. This is encouraging because it demonstrates user validation of the benefits of using InfiniBand.

Cable suppliers helped drive IBTA membership this year, as well as participation at the IBTA-sponsored Plugfest events held in April and October 2009. Also on the hardware side, we saw the introduction of numerous new switching platforms and new chips, both on the HCA and switch side. These types of investments further validate that multiple suppliers see InfiniBand as a growing, healthy marketplace.

In addition to all of the new hardware available, many software advancements were introduced. These included everything from wizards that simplify cluster installations to new accelerators that drive enhanced application performance – not to mention APIs that allow users to better integrate interconnect hardware with applications and schedulers.

As noted in an earlier blog, we had a very successful showing at SC09 with several hundred people visiting the booth and a number of well-attended presentations. To view presentations from the show, please visit the IBTA Resources Web pages.

During November, the IBTA conducted a survey of SC09 attendees. Highlights include:

  • 67.5% of survey respondents are using both InfiniBand and Ethernet in their current deployments
  • 62.5% of survey respondents are planning on using InfiniBand in their next server cluster, while only 20% are planning on using 10GE, 12.5% Ethernet and 5% proprietary solutions
  • The vast majority of respondents (85%) have between 1 and 36 storage nodes on average for their current clusters
  • 95% are using Linux for their operating system
  • 60% are using OFED as the software stack with the OS for interconnect, network and storage
  • When it comes to applications, the most important factors are: latency (35%), performance (35%), bandwidth (12.5%), cost (12.5%) and messaging rate (5%)
  • Over 67% indicated that they had considered IB as the interconnect for their storage

These results are encouraging, as they show that there is growing interest in InfiniBand, not only as the interconnect for the cluster compute nodes, but also for the storage side of the cluster.

Please be sure to check out the IBTA online press room for highlights from press and analyst coverage in 2009.

The Marketing Working Group would like to thank everyone for all of their efforts and continued support. We wish everyone a happy and healthy new year and look forward to working with you in 2010.

Kevin Judd

IBTA Marketing Working Group Co-Chair

Celebrating InfiniBand at SC09

November 23rd, 2009

I almost lost my voice at the show and, as I write this, I feel my body shutting down. Blame the wet, cold weather and the race to go from one briefing to the next… but it was all very worth it. Supercomputing is certainly one of the biggest shows for the IBTA and its members and, with the amount of press activity that happened this year, it certainly isn’t showing signs of decline.

The TOP500 showed InfiniBand rising, connecting 182 systems (36 percent of the TOP500) and clearly dominating the TOP10 through TOP300 systems. Half of the TOP10 systems are connected via InfiniBand and, although the new #1 system (Jaguar from ORNL) is a Cray, it’s important to note that InfiniBand is being used as the storage interconnect connecting Jaguar to the “Spider” storage system.

But let’s talk efficiency for a moment… this edition of the TOP500 showed that 18 out of the 20 most efficient systems on the TOP500 used InfiniBand and that InfiniBand system efficiency levels reached up to 96 percent! That’s over 50 percent more efficiency than the best GigE cluster. Bottom line: WHEN PURCHASING NEW SYSTEMS, DON’T IGNORE THE NETWORK! You may be saving pennies on the network and spending dollars on the processors with an unbalanced architecture.
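To make that comparison concrete (a back-of-the-envelope reading of the claim above, not a figure taken from the list itself): if the best InfiniBand systems reach 96 percent efficiency and that is more than 50 percent better than the best GigE cluster, the best GigE cluster lands somewhere around

    96% / 1.5 ≈ 64%

meaning roughly a third of its peak FLOPS goes unused.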

In addition, the entire ecosystem is rallying around the next level of InfiniBand performance, with 24 exhibitors demonstrating up to 120Gb/s InfiniBand bandwidth (switch-to-switch) and the new CXP cabling options that support this jump in bandwidth.

We appreciated the participation of the OpenFabrics Alliance (OFA) in our booth and in some of our press and analyst briefings. We were happy to hear their announcement about Microsoft joining OFA, a strong affirmation of the importance of InfiniBand.

If you didn’t have a chance to swing by the IBTA booth, don’t worry: we videotaped all the great presentations and will have them up on the web for you shortly.

I apologize if I didn’t have a chance to meet with you and talk with all the members 1-on-1, and I hope to make up for it and see you all again at ISC’10 in Hamburg, Germany and SC10 in New Orleans. I’m sure the IBTA will have some more wonderful news to share with everyone. Congratulations IBTA and happy 10-year anniversary!

Safe travels to everyone,

Brian Sparks

IBTA Marketing Working Group Co-Chair
