Archive

Archive for April, 2010

IBTA Announces RoCE Specification, Bringing the Power of the RDMA I/O Architecture to Ethernet-Based Business Solutions

April 22nd, 2010

As you may have already heard, earlier this week at HPC Financial Markets in New York, the IBTA officially announced the release of RDMA over Converged Ethernet (RoCE). The new specification, pronounced “Rocky,” offers the best of both worlds: InfiniBand efficiency and Ethernet ubiquity.

RoCE utilizes Remote Direct Memory Access (RDMA) to enable ultra-low-latency communication - as little as one tenth the latency of other standards-based solutions. RDMA makes this possible by moving data directly between the memory of two nodes, largely bypassing the CPU and operating system on both sides. The specification applies to 10GigE, 40GigE and higher-speed adapters.
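
Because RoCE keeps the same verbs interface that InfiniBand applications already use (for example, through the OpenFabrics libibverbs library), existing RDMA code can run over Ethernet largely unchanged. Below is a minimal sketch of posting a one-sided RDMA write with libibverbs; it assumes a queue pair that is already connected and a peer buffer address and rkey exchanged out of band - the function name and setup here are illustrative, not part of the specification.

    /* Minimal sketch: post a one-sided RDMA write with libibverbs.
     * Assumes "qp" is an already-connected queue pair and that the
     * peer's buffer address and rkey were exchanged out of band;
     * connection setup and error handling are omitted. */
    #include <infiniband/verbs.h>
    #include <stdint.h>
    #include <string.h>

    int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                        void *local_buf, uint32_t len,
                        uint64_t remote_addr, uint32_t rkey)
    {
        struct ibv_sge sge = {
            .addr   = (uintptr_t)local_buf,  /* registered via ibv_reg_mr() */
            .length = len,
            .lkey   = mr->lkey,
        };

        struct ibv_send_wr wr, *bad_wr = NULL;
        memset(&wr, 0, sizeof(wr));
        wr.opcode              = IBV_WR_RDMA_WRITE;  /* one-sided operation */
        wr.sg_list             = &sge;
        wr.num_sge             = 1;
        wr.send_flags          = IBV_SEND_SIGNALED;  /* generate a completion */
        wr.wr.rdma.remote_addr = remote_addr;        /* peer's registered address */
        wr.wr.rdma.rkey        = rkey;               /* peer's remote key */

        /* The adapter moves the payload directly between registered
         * buffers; neither side's kernel copies the data. */
        return ibv_post_send(qp, &wr, &bad_wr);
    }

The completion would then be reaped from the send completion queue with ibv_poll_cq(); the remote node's CPU is never involved in the transfer itself.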

For organizations committed to an Ethernet infrastructure that are not currently using RDMA but would like to, RoCE lowers the barriers to deployment. Beyond low latency, the end-user benefits of RoCE include improved application performance, greater efficiency, and cost and power savings.

RoCE delivers compelling benefits to high-growth markets and applications, including financial services, data warehousing and clustered cloud computing. Products based on RoCE will be available over the coming year.

Since our April 19 launch, we have seen great news coverage.

Be sure to watch for another announcement from the IBTA next week at Interop. I hope to connect with several of you at the show.

Brian Sparks

IBTA Marketing Working Group Co-Chair

InfiniBand Leads List of Russian Top50 Supercomputers; Connects 74 Percent, Including Seven of the Top10 Supercomputers

April 14th, 2010

Last week, the 12th edition of Russia’s Top50 list of the most powerful high performance computing systems was released at the annual Parallel Computing Technologies international conference. The list is ranked according to Linpack benchmark results and provides an important tool for tracking usage trends in HPC in Russia.

The fastest supercomputer on the Top50 is enabled by 40Gb/s InfiniBand and has a peak performance of 414 teraflops. More importantly, InfiniBand clearly dominates the list as the most-used interconnect solution, connecting 37 systems - including the top three and seven of the Top10.

According to the Linpack benchmark results, InfiniBand demonstrates up to 92 percent efficiency; this high system efficiency and utilization lets users maximize the return on investment in their HPC server and storage infrastructure. Nearly three quarters of the list - represented by leading research laboratories, universities, industrial companies and banks in Russia - rely on InfiniBand solutions for leading bandwidth, efficiency, scalability and application performance.
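
For context, Linpack efficiency is simply measured performance (Rmax) divided by theoretical peak (Rpeak). As an illustration only - the list does not tie the 92 percent figure to a particular system - a machine with the top system's 414 TFlops peak running at that efficiency would deliver roughly:

    efficiency = Rmax / Rpeak  =>  0.92 × 414 TFlops ≈ 381 TFlops on Linpack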

Highlights of InfiniBand usage on the April 2010 Russian TOP50 list include:

  • InfiniBand connects 74 percent of the Top50, including seven of the Top10 most prestigious positions (#1, #2, #3, #6, #8, #9 and #10)
  • InfiniBand provides world-leading system utilization, up to 92 percent efficiency as measured by the Linpack benchmark
  • The list showed a sharp increase in aggregate performance - the total peak performance exceeded 1 PFlops, reaching 1152.9 TFlops, an increase of 120 percent over the September 2009 list - highlighting the growing demand for higher performance
  • Ethernet connects only 14 percent of the list (seven systems), and none of those are 10GigE clusters
  • Proprietary clustering interconnects declined 40 percent to connect only three systems on the list

I look forward to seeing the results of the Top500 in June at the International Supercomputing Conference. I will be attending, as will many of our IBTA colleagues, and I hope to see all of our HPC friends in Germany.

Brian Sparks

IBTA Marketing Working Group Co-Chair