Archive

Posts Tagged ‘TOP500’

FDR InfiniBand Continues Rapid Growth on TOP500 Year-Over-Year

November 22nd, 2013

The newest TOP500 list of the world’s most powerful supercomputers was released at SC13 this week and showed the continued adoption of Fourteen Data Rate (FDR) InfiniBand – the fastest-growing interconnect technology on the list! FDR, the latest generation of InfiniBand technology, now connects 80 systems on the TOP500, growing almost 2X year-over-year from 45 systems in November 2012. Overall, InfiniBand technology connects 207 systems, accounting for over 40 percent of the TOP500 list.

InfiniBand technology overall connects 48 percent of the Petascale-capable systems on the list. Petascale-capable systems generally favor the InfiniBand interconnect due to its computing efficiency, low application latency and high speeds. Additional highlights from the November 2013 TOP500 list include:

  • InfiniBand is the most used interconnect in the TOP100, connecting 48 percent of the systems, and in the TOP200, connecting 48.5 percent of the systems.

  • InfiniBand-connected systems deliver 2X the performance of Ethernet systems, while the total performance supported by InfiniBand systems continues to grow.

  • With a peak efficiency of 97 percent and an average efficiency of 86 percent, InfiniBand continues to be the most efficient interconnect on the TOP500. (On the TOP500, efficiency is measured Linpack performance, Rmax, divided by theoretical peak performance, Rpeak; see the sketch below.)
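
As an aside on where such efficiency figures come from, here is a minimal sketch of the calculation the TOP500 uses: efficiency is simply measured Linpack performance (Rmax) divided by theoretical peak performance (Rpeak). The Rmax/Rpeak values below are made-up examples, not actual TOP500 entries.

    # Illustrative TOP500-style efficiency calculation; the numbers are hypothetical.
    systems = [
        # (system name, Rmax in TFLOPS, Rpeak in TFLOPS) -- example values only
        ("example-cluster-a", 2897.0, 2984.0),
        ("example-cluster-b", 1050.0, 1220.0),
    ]

    for name, rmax, rpeak in systems:
        efficiency = 100.0 * rmax / rpeak  # percent of theoretical peak achieved
        print(f"{name}: {efficiency:.1f}% efficient")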

The TOP500 list continues to show that InfiniBand is the interconnect of choice for HPC and data centers that demand the highest performance, with FDR InfiniBand adoption a large part of that growth. The graph below demonstrates this further:

TOP500 Results, November 2013

Image source: Mellanox Technologies

Delivering bandwidth up to 56Gb/s with application latencies below one microsecond, InfiniBand enables the highest server efficiency and is ideal for carrying multiple traffic types (clustering, communications, storage, management) over a single connection. As a mature and field-proven technology, InfiniBand is used in thousands of data centers, high-performance compute clusters and embedded environments that scale from two nodes up to clusters with thousands of nodes.
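
As a rough illustration of where the 56Gb/s figure comes from, the back-of-the-envelope sketch below computes the signaling and effective data rates of a 4x FDR link. The per-lane rate and 64b/66b encoding are standard FDR parameters; the variable names and the calculation itself are simply our own sketch.

    # Back-of-the-envelope 4x FDR link-rate arithmetic (illustrative only).
    FDR_LANE_RATE_GBPS = 14.0625   # FDR signaling rate per lane
    ENCODING_EFFICIENCY = 64 / 66  # FDR uses 64b/66b encoding
    LANES = 4                      # a standard 4x link

    signaling_rate = FDR_LANE_RATE_GBPS * LANES            # ~56.25 Gb/s on the wire
    effective_rate = signaling_rate * ENCODING_EFFICIENCY  # ~54.5 Gb/s of user data

    print(f"4x FDR signaling rate:      {signaling_rate:.2f} Gb/s")
    print(f"4x FDR effective data rate: {effective_rate:.2f} Gb/s")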

The TOP500 list is published twice per year and recognizes and ranks the world’s fastest supercomputers. The list was announced November 18 at the SC13 conference in Denver, Colorado.

Interested in learning more about the TOP500, or how InfiniBand performed? Check out the TOP500 website: www.top500.org.

Bill Lee, Chair of the Marketing Working Group (MWG) at IBTA

InfiniBand on the Road to Exascale Computing

January 21st, 2011

(Note: This article appears with reprint permission of The Exascale Report™)

InfiniBand has been making remarkable progress in HPC, as evidenced by its growth in the Top500 rankings of the highest performing computers. In the November 2010 update to these rankings, InfiniBand’s use increased another 18 percent, to help power 43 percent of all listed systems, including 57 percent of all high-end “Petascale” systems.

The march toward ever-higher performance levels continues. Today, computation is a critical part of science, where it complements observation, experiment and theory. The computational performance of high-end computers has been increasing by a factor of 1000X roughly every 11 years (about 1.9X per year).

InfiniBand has demonstrated that it plays an important role in the current Petascale level of computing driven by its bandwidth, low latency implementations and fabric efficiency. This article will explore how InfiniBand will continue to pace high-end computing as it moves towards the Exascale level of computing.

Figure 1 - The Golden Age of Cluster Computing

InfiniBand Today

Figure 1 illustrates how the high end of HPC crossed the 1 Terascale mark in 1997 (10^12 floating-point operations per second) and increased three orders of magnitude to the 1 Petascale mark in 2008 (10^15 floating-point operations per second). As you can see, the underlying system architectures changed dramatically during this time. The cluster computing model, based on commodity server processors, has grown to dominate much of high-end HPC. More recently, this model has been augmented by the emergence of GPUs.

Figure 2 - Emergence of InfiniBand in the Top500

Figure 2 shows how interconnects track with changes in the underlying system architectures. The appearance first of 1 GbE, followed by the growth of InfiniBand interconnects, was a key enabler of the cluster computing model. The industry-standard InfiniBand and Ethernet interconnects have largely displaced earlier proprietary interconnects. InfiniBand continues to gain share relative to Ethernet, driven largely by performance factors such as low latency and high bandwidth, the ability to support high-bisectional-bandwidth fabrics, and overall cost-effectiveness.

Getting to Exascale
What we know today is that Exascale computing will require enormously larger computer systems than those available today. What we don’t know is what those computers will look like. We have been in the golden age of cluster computing for much of the past decade, and the model appears to scale well going forward. However, there is as yet no clear consensus on the system architecture for Exascale. What we can do is map the evolution of InfiniBand against the path to Exascale.

Given historical growth rates, the industry anticipates that Exascale computing will be reached around 2018. However, three orders of magnitude beyond where we are today is too great a change to make in a single leap. In addition, the industry is still assessing what system structures will comprise systems of that size.

Figure 3 - Steps from Petascale to Exascale

Figure 3 provides guidance on the key capabilities required of the interconnect as computer systems increase in power by an order of magnitude at each step, from current high-end systems at 1 PetaFLOPS to 10 PF, 100 PF and finally 1,000 PF (1 ExaFLOPS). Over time, computational nodes will provide increasing performance with advances in processor and system architecture. This performance increase must be matched by a corresponding increase in network bandwidth to each node. However, the increased performance per node also tends to hold down the increase in the total number of nodes required to reach a given level of system performance, as the sketch below illustrates.
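
To make that node-count argument concrete, here is a small sketch that simply divides the system performance target by an assumed per-node performance at each step. The per-node TFLOPS values are deliberately hypothetical assumptions of ours, not figures taken from Figure 3 or from the InfiniBand roadmap.

    # Hypothetical scaling arithmetic: nodes required = system target / per-node performance.
    # The per-node TFLOPS values below are illustrative assumptions only.
    assumed_tflops_per_node = {1: 0.1, 10: 0.3, 100: 0.7, 1000: 3.0}  # keyed by target in PF

    for target_pf, node_tflops in assumed_tflops_per_node.items():
        nodes = target_pf * 1000.0 / node_tflops  # convert PF to TFLOPS, then divide
        print(f"{target_pf:>5} PF at {node_tflops} TFLOPS/node -> ~{nodes:,.0f} nodes")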

Today, 4x QDR InfiniBand (40 Gbps) is the interconnect of choice for many large-scale clusters. Current InfiniBand technology readily supports systems with performance on the order of 1 PetaFLOPS. Deployments on the order of 10,000 nodes have been achieved, and 4x QDR link bandwidths are offered by multiple vendors. InfiniBand interconnects are used in 57 percent of the current Petascale systems on the Top500 list.

Moving from 1 PetaFLOPS to 10 PetaFLOPS is well within the reach of the current InfiniBand roadmap. Reaching 35,000 nodes is within the currently defined InfiniBand address space. The required 12 GB/s links can be achieved either by 12x QDR or, more likely, by the 4x EDR data rates (104 Gbps) now being defined on the InfiniBand industry bandwidth roadmap; a rough check of that arithmetic appears below. Such data rates also assume PCIe Gen3 host connections, which are expected in the forthcoming processor generation.
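
For readers who want to check the 12 GB/s figure, the rough arithmetic below compares 12x QDR and 4x EDR links. The per-lane rates and encodings used here are the standard QDR parameters (10 Gb/s per lane, 8b/10b) and the EDR parameters as eventually defined (~25.8 Gb/s per lane, 64b/66b); everything else is our own sketch.

    # Rough link-rate check for the 12 GB/s requirement (illustrative only).
    def effective_gbytes_per_sec(lane_gbps, lanes, encoding_efficiency):
        """Effective data rate in GB/s from per-lane signaling rate and encoding."""
        return lane_gbps * lanes * encoding_efficiency / 8.0  # bits -> bytes

    qdr_12x = effective_gbytes_per_sec(10.0, 12, 8 / 10)     # 12x QDR, 8b/10b encoding
    edr_4x = effective_gbytes_per_sec(25.78125, 4, 64 / 66)  # 4x EDR, 64b/66b encoding

    print(f"12x QDR: ~{qdr_12x:.1f} GB/s")  # ~12.0 GB/s
    print(f"4x EDR:  ~{edr_4x:.1f} GB/s")   # ~12.5 GB/s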

The next order-of-magnitude increase in system performance, from 10 PetaFLOPS to 100 PetaFLOPS, will require additional evolution of the InfiniBand standards to permit hundreds of thousands of nodes to be addressed. The InfiniBand industry is already initiating discussions on what evolved capabilities are needed for systems of such scale. As in the prior step, the required link bandwidths can be achieved by 12x EDR (currently being defined) or perhaps 4x HDR (which has been identified on the InfiniBand industry roadmap). Systems of such scale may also exploit topologies such as mesh/torus or hypercube, for which there are already large-scale InfiniBand deployments.

The remaining order-of-magnitude increase in system performance, from 100 PetaFLOPS to 1 ExaFLOPS, will require link bandwidths to increase once again. Either 12x HDR or 4x NDR links will need to be defined. It is also expected that optical technology will play a greater role in systems of such scale.

The Meaning of Exascale

Reaching Exascale computing levels involves much more than just the interconnect. Pending further developments in computer system design and technology, such systems are expected to occupy many hundreds of racks and consume perhaps 20 megawatts of power. Just as many of today’s high-end systems are purpose-built with unique packaging, power distribution, cooling and interconnect architectures, we should expect Exascale systems to be predominantly purpose-built. However, before we conclude that the golden age of cluster computing, with its reliance on effective industry-standard interconnects such as InfiniBand, has ended, let’s look further at the data.

Figure 4 - Top500 Performance Trends

Figure 4 is the performance trends chart from the Top500. At first glance, it shows the tremendous growth of high-end HPC over the past two decades and projects those trends to continue for the next decade. However, it also shows that the performance of the #1-ranked system is about two orders of magnitude greater than that of the #500-ranked system.

Figure 5 - Top500 below 1 PetaFLOPS (November 2010)

This is further illustrated in Figure 5, which shows performance vs. rank from the November 2010 Top500 list; the seven systems above 1 PetaFLOPS have been omitted so as not to stretch the vertical axis too much. We see that only the 72 highest-ranked systems come within an order of magnitude of 1 PetaFLOPS (1,000 TeraFLOPS). This trend is expected to continue, with the implication that once the highest-end HPC systems reach the Exascale threshold, the majority of Top500 systems will be at most on the order of 100 PetaFLOPS, with the #500-ranked system on the order of 10 PetaFLOPS.

Although we often use the Top500 rankings as an indicator of high-end HPC, the vast majority of HPC deployments occur below the Top500.

InfiniBand Evolution
InfiniBand has been an extraordinarily effective interconnect for HPC, with demonstrated scaling up to the Petascale level. The InfiniBand architecture permits low-latency implementations and has a bandwidth roadmap matching the capabilities of host processor technology. InfiniBand’s fabric architecture permits the implementation and deployment of highly efficient fabrics in a range of topologies, with congestion management and resiliency capabilities.

The InfiniBand community has shown that it can evolve the architecture to keep it vibrant. The Technical Working Group is currently assessing the architectural evolution needed for InfiniBand to continue to meet the needs of increasing system scale.

As we move towards an Exascale HPC environment with possibly purpose-built systems, the cluster computing model enabled by InfiniBand interconnects will remain a vital communications model, capable of extending well into the Top500.

Lloyd Dickman
Technical Working Group, IBTA

(Note: This article appears with reprint permission of The Exascale Report™)

OpenFabrics Alliance Update and Invitation to Sonoma Workshop

January 27th, 2010

Happy New Year, InfiniBand community! I’m writing to you on behalf of the OpenFabrics Alliance (OFA), although I am also a member of the IBTA’s Marketing Working Group. The OFA had many highlights in 2009, several of which may be of interest to InfiniBand vendors and end users. I wanted to review some of the news, events and technology updates from the past year, as well as invite you to our 6th Annual International Sonoma Workshop, taking place March 14-17, 2010.

At the OFA’s Sonoma Workshop in March 2009, OpenFabrics firmed up 40 Gigabit InfiniBand support and 10 Gigabit Ethernet support in the same Linux releases, providing a converged network strategy that leverages the best of both technologies. At the International Supercomputing Conference (ISC) in Germany in June 2009, OFED software was used on the exhibitors’ 40 Gigabit InfiniBand network, and then on a much larger 120 Gigabit network at SC09 in November.

Also at SC09, the current Top500 list of the world’s fastest supercomputer sites was published. The number of systems on the list that use OpenFabrics software is closely tied to the number of Top500 systems using InfiniBand. For InfiniBand, the numbers on the November 2009 list are as follows: 5 in the Top 10, 64 in the Top 100, and 186 in the Top500. For OpenFabrics, the numbers may be slightly higher because the Top500 does not capture the interconnect used for storage at the sites. One example of this is the Jaguar machine at Oak Ridge National Laboratory, which lists its interconnect as “proprietary,” when in fact the system also has a large InfiniBand- and OFED-driven storage infrastructure.
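
For readers curious how such per-tier counts are tallied, here is a minimal sketch. The records are made-up placeholders with field names of our own choosing; a real analysis would start from the list export available at www.top500.org.

    # Illustrative tally of InfiniBand systems by TOP500 tier (hypothetical records).
    records = [
        {"rank": 1, "interconnect": "Proprietary"},
        {"rank": 5, "interconnect": "InfiniBand QDR"},
        {"rank": 64, "interconnect": "InfiniBand DDR"},
        {"rank": 412, "interconnect": "Gigabit Ethernet"},
        # ... remaining systems on the list ...
    ]

    for tier in (10, 100, 500):
        count = sum(1 for system in records
                    if system["rank"] <= tier and "InfiniBand" in system["interconnect"])
        print(f"InfiniBand systems in the Top {tier}: {count}")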

Key new members that joined the OFA last year included Cray and Microsoft. These new memberships convey the degree to which OFED has become a de facto standard for InfiniBand in HPC, where the lowest possible latency brings the most value to computing and application performance.

There are, of course, a significant number of sites in the Top500 where legacy 1 Gigabit Ethernet and TCP/IP are still perceived as sufficient from a cost-performance perspective. OpenFabrics believes that as the cost of 10 Gigabit Ethernet chips on motherboards and NICs comes down, many of these sites will consider moving to 10 Gigabit Ethernet. With InfiniBand over Ethernet (commonly referred to as RoCEE) and iWARP both included in OFED 1.5.1 beginning in March 2010, some sites will move to InfiniBand to capture the biggest improvement possible.

OpenIB, the predecessor to OpenFabrics, helped drive the early adoption of InfiniBand. The existence of free software that fully supports a variety of vendors’ proprietary hardware makes it easier for vendors to increase their hardware investments. End users, in turn, are able to purchase more hardware, since nothing needs to be spent on proprietary software to enable the system. This is one open source business model that has evolved, but none of us knows whether it is sustainable for the long term in HPC, or whether the enterprise and cloud communities will adopt it as well.

Please join me at this year’s OFA Sonoma Workshop, where the focus will continue to be on the integrated delivery of both InfiniBand and 10 Gigabit Ethernet support in OFED releases, which we believe makes OFED very attractive for cloud and enterprise data centers.

Bill Boas

Executive Director, OpenFabrics Alliance

A Look Back at InfiniBand in 2009

December 21st, 2009

Dear IBTA Members,

As we wind down 2009 and look forward to spending time with family and friends, I thought it would be nice to have a blog posting to review all we have accomplished this year. From an InfiniBand perspective, 2009 was a good year.

We saw continued success on the TOP500, with the November 2009 list showing significant InfiniBand growth compared to the November 2008 list. Clearly InfiniBand is well on its way to becoming the number one interconnect on the TOP500. This is encouraging because it demonstrates user validation of the benefits of using InfiniBand.

Cable suppliers helped drive IBTA membership this year, as well as participation at the IBTA-sponsored Plugfest events held in April and October 2009. Also on the hardware side, we saw the introduction of numerous new switching platforms and new chips, both on the HCA and switch side. These types of investments further validate that multiple suppliers see InfiniBand as a growing, healthy marketplace.

In addition to all of the new hardware available, many software advancements were introduced. These included everything from wizards that simplify cluster installations to new accelerators that drive enhanced application performance – not to mention APIs that allow users to better integrate interconnect hardware with applications and schedulers.

As noted in an earlier blog, we had a very successful showing at SC09 with several hundred people visiting the booth and a number of well-attended presentations. To view presentations from the show, please visit the IBTA Resources Web pages.

During November, the IBTA conducted a survey of SC09 attendees. Highlights include:

  • 67.5% of survey respondents are using both InfiniBand and Ethernet in their current deployments
  • 62.5% of survey respondents are planning on using InfiniBand in their next server cluster, while only 20% are planning on using 10GE, 12.5% Ethernet and 5% proprietary solutions
  • The vast majority of respondents (85%) have between 1 and 36 storage nodes on average for their current clusters
  • 95% are using Linux for their operating system
  • 60% are using OFED as the software stack with the OS for interconnect, network and storage
  • When it comes to applications, the most important factors are: latency (35%), performance (35%), bandwidth (12.5%), cost (12.5%) and messaging rate (5%)
  • Over 67% indicated that they had considered IB as the interconnect for their storage

These results are encouraging, as they show that there is growing interest in InfiniBand, not only as the interconnect for the cluster compute nodes, but also for the storage side of the cluster.

Please be sure to check out the IBTA online press room for highlights from press and analyst coverage in 2009.

The Marketing Working Group would like to thank everyone for all of their efforts and continued support. We wish everyone a happy and healthy new year and look forward to working with you in 2010.

Kevin Judd

IBTA Marketing Working Group Co-Chair

Celebrating InfiniBand at SC09

November 23rd, 2009

I almost lost my voice at the show and, as I write this, I feel my body shutting down. Blame the wet, cold weather and the race to go from one briefing to the next… but it was all very worth it. Supercomputing is certainly one of the biggest shows for the IBTA and its members and, with the amount of press activity that happened this year, it isn’t showing any signs of decline.

The TOP500 showed InfiniBand rising and connecting 182 systems (36 percent of the TOP500) and it clearly dominated the TOP10 through TOP300 systems. Half of the TOP10 systems are connected via InfiniBand and, although the new #1 system (JAGUAR from ORNL) is a Cray, it’s important to note that InfiniBand is being used as the storage interconnect to connect Jaguar to “Spider” storage systems.

But let’s talk efficiency for a moment… this edition of the TOP500 showed that 18 out of the 20 most efficient systems on the TOP500 used InfiniBand and that InfiniBand system efficiency levels reached up to 96 percent! That’s over 50 percent more efficiency than the best GigE cluster. Bottom line: WHEN PURCHASING NEW SYSTEMS, DON’T IGNORE THE NETWORK! You may be saving pennies on the network and spending dollars on the processors with an unbalanced architecture.

In addition, the entire ecosystem is rallying around the next level of InfiniBand performance with multiple exhibitors (24) demonstrating up to 120Gb/s InfiniBand bandwidth (switch-to-switch) and the new CXP cabling options to support this jump in bandwidth.

We appreciated the participation of the OpenFabrics Alliance (OFA) in our booth and in some of our press and analyst briefings. We were happy to hear their announcement about Microsoft joining OFA, a strong affirmation of the importance of InfiniBand.

If you didn’t have a chance to swing by the IBTA booth, don’t worry: we videotaped all the great presentations and will have them up on the web for you shortly.

I apologize if I didn’t have a chance to meet with you and talk with all the members 1-on-1, and I hope to make up for it and see you all again at ISC’10 in Hamburg, Germany and SC10 in New Orleans. I’m sure the IBTA will have some more wonderful news to share with everyone. Congratulations, IBTA, and happy 10-year anniversary!

Safe travels to everyone,

Brian Sparks

IBTA Marketing Working Group Co-Chair

InfiniBand Enables Low-Latency Market Data Delivery for Financial Services Firms

June 18th, 2009

InfiniBand is largely thought of as a high-performance interconnect for large, TOP500-class HPC systems. It certainly is, but InfiniBand’s high-bandwidth, low-latency capabilities are also useful across many vertical industries, including financial services. Many of us will be attending ISC’09 in Hamburg, but InfiniBand will also be in the spotlight at the annual Securities Industry and Financial Markets Association (SIFMA) show in New York City next week.

