Archive

Author Archive

January Course: Writing Application Programs for RDMA using OFA Software

December 16th, 2010

As part of its new training initiative, The OpenFabrics Alliance (OFA) is holding a “Writing Application Programs for RDMA using OFA Software” class this January 19-20, 2011 at the University of New Hampshire’s InterOperability Lab (UNH-IOL). If you are an application developer skilled in C programming and familiar with sockets, but with little or no experience programming with OpenFabrics Software, this class is the perfect opportunity to develop your RDMA expertise.
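To give a flavor of what programming with OpenFabrics Software involves, here is a minimal, hedged C sketch (my own illustration, not taken from the course materials) that simply lists the RDMA-capable devices on a host using libibverbs; it assumes the libibverbs development package is installed.

```c
/*
 * Minimal sketch (illustration only, not from the course materials):
 * enumerate the RDMA-capable devices on a host with libibverbs, the OFA
 * verbs library. Build with: gcc list_devices.c -libverbs
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devices = ibv_get_device_list(&num);
    if (!devices) {
        perror("ibv_get_device_list");
        return 1;
    }

    printf("Found %d RDMA device(s)\n", num);
    for (int i = 0; i < num; i++)
        printf("  %s\n", ibv_get_device_name(devices[i]));

    ibv_free_device_list(devices);
    return 0;
}
```

From there, real RDMA applications move on to concepts that sockets programmers typically have not seen before: protection domains, registered memory, queue pairs and completion queues.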

“Writing Application Programs for RDMA using OFA Software” immediately prepares you for writing application programs using RDMA. The class includes 8 hours of classroom work and 8 hours in the lab on Wednesday and Thursday, January 19 and 20. **Attendees enrolled by Dec. 24 will receive a FREE pass and rentals to Loon Mountain for skiing on Friday, January 21.**

Software Forge is a member of the IBTA and is helping drive this very first RDMA class. More information is available at www.openfabrics.org/training. Feel free to contact me with questions as well.


Regards,

Rupert Dance

rsdance@soft-forge.com

Member, IBTA’s Compliance and Interoperability Working Group

Last days of SC10

November 19th, 2010

I’ve heard there were more than 10,000 people in New Orleans this week for the SC10 conference and from what I saw on the show floor, in sessions and in the restaurants around the French Quarter, I believe it. The SCinet team has had several tiring yet productive days ensuring the network ran smoothly for the more than 342 exhibitors at the show.

One very popular demo was the real time 3D flight simulator (see photo below) that was displayed in multiple booths at the show. The flight simulator provided a virtual high-resolution rendering of Salt Lake City, Utah, from the air, running over SCinet’s high-speed, low-latency InfiniBand/RDMA network.

Real time 3D flight simulator

This year, SCinet introduced the SCinet Research Sandbox. Sandbox participants were able to utilize the network infrastructure to demonstrate 100G networks for a wide variety of applications, including petascale computing, next-generation approaches to wide area file transfer, security analysis tools, and data-intensive computing.

This is the tenth Supercomputing show I’ve attended and I’ve made a few observations. Years ago, I used to see a lot of proprietary processors, interconnects, and storage. Now we’re seeing much more standardization around technologies such as InfiniBand. In addition, there has been a lot of interest this year around 100G connectivity and the need for ever faster data rates.

Several members of the SCinet team. Thank you to all of the volunteers who helped make SCinet a success this week!

The first couple of shows I attended were very scientific and academic in nature. Now as I walk the show floor, it’s exciting to see more commercial HPC applications for financial services, automotive/aviation, and oil & gas.

I had a great time in New Orleans, and I look forward to my next ten SC conferences. See you next year at SC11 in Seattle, WA!

Eric Dube

SCinet/InfiniBand Co-Chair


SCinet Update November 13 – Conference Begins!

November 13th, 2010

Today is Saturday, November 13, and sessions for SC10 have begun. We’re in the home stretch to get SCinet installed. We’ve been working feverishly to get everything running before the start of the conference. In addition, the network demonstrations should all be live in time for the Exhibition Press Tour on Monday night from 6-7 pm.

I’ve included more photos to show you the network in progress. If you’re going to New Orleans and have any questions about SCinet, be sure to stop by our help desk.

All the SCinet DNOCs located throughout the show floor are now finished and ready to go.

The show floor is busy as ever and exhibitor booth construction is well underway.

SCinet's main NOC network equipment racks provide connectivity for network subscribers to the world's fastest network.

All the power distribution units needed to supply power to all the network equipment in the SCinet main NOC.

The SCinet team

SCinet has more than 100 volunteers working behind the scenes to bring up the world’s fastest network. Planning began more than a year ago and all of our hard work is about to pay off as we connect SC10 exhibitors and attendees to leading research and commercial networks around the world, including the Department of Energy’s ESnet, Internet2, National LambdaRail and LONI (Louisiana Optical Network Initiative).

I will blog again as the show gets rolling and provide updates on our demos in action.

See you soon!

Eric Dube

SCinet/InfiniBand Co-Chair


SCinet Update November 11 – Two Days Until the Show!

November 11th, 2010

SCinet Network Operations Center (NOC) stage: initial framing

SCinet Network Operations Center (NOC) stage: after the power and aerial fiber cable drops were installed

For those heading to New Orleans for the SC10 conference, the weather this week is in the upper 70s and clear - although the SCinet team hasn’t had much of a chance to soak up the sun. We’re busy building the world’s fastest network - to be up and running this Sunday, November 14, for one week only. It’s going to be a busy couple of days… let me give you an update on progress to date.

The main SCinet Network Operations Center (NOC) stage is in the process of being built. I’ve included a photo of the convention center and our initial framing, followed by a picture after the power and aerial fiber cable drops have been installed.

Our SCinet team includes volunteers from educational institutions, high performance computing centers, network equipment vendors, U.S. national laboratories, research institutions, and research networks and telecommunication carriers that work together to design and deliver the SCinet infrastructure.

The picture below shows SCinet team members Mary Ellen Dube and Parks Fields receiving the aerial fiber cables that are being lowered from the catwalks scattered throughout the convention center floor.


Aerial fiber cables being lowered from catwalks

Below is a photo of Cary Whitney (my fellow SCinet/InfiniBand Co-Chair) and Parks Fields testing aerial InfiniBand active optical cables running between the distributed/remote NOCs.


Cary Whitney and Parks Fields


Aerial InfiniBand active optical cables

I’ve also included a picture of myself and team member DA Fye running a lift to install the aerial fiber cables going between the main NOC and distributed/remote NOCs throughout the show floor. Next to that is a photo of some of the many InfiniBand active optical cables going to the main SCinet NOC.


Running a lift with DA Fye


InfiniBand active optical cables going to the main SCinet NOC

This year’s SC10 exhibitors and attendees are anticipated to push SCinet’s capacity and capabilities to the extreme. I’ll keep updating this blog to show you how we’re preparing for the show and expected demands on the network.

Eric Dube

SCinet/InfiniBand Co-Chair


Documenting the World’s Fastest Network Installation

November 5th, 2010

Greetings InfiniBand Community,

As many of you know, every fall before Supercomputing, over 100 volunteers - including scientists, engineers, and students - come together to build the world’s fastest network: SCinet. This year, over 168 miles of fiber will be used to form the data backbone. The network takes months to build and is only active during the SC10 conference. As a member of the SCinet team, I’d like to use this blog on the InfiniBand Trade Association’s web site to give you an inside look at the network as it’s being built from the ground up.

This year, SCinet includes a 100 Gbps circuit alongside other infrastructure capable of delivering 260 gigabits per second of aggregate data bandwidth for conference attendees and exhibitors - enough bandwidth to transfer the entire collection of books at the Library of Congress in well under a minute. However, my main focus will be on building out SCinet’s InfiniBand network in support of distributed HPC applications demonstrations.

For SC10, the InfiniBand fabric will consist of Quad Data Rate (QDR) 40, 80, and 120 gigabit per second (Gbps) circuits linking together various organizations and vendors, with high-speed 120 Gbps circuits providing backbone connectivity throughout the SCinet InfiniBand switching infrastructure (a quick breakdown of how these link rates add up follows the list below).

Here are some of the InfiniBand network specifics that we have planned for SC10:

  • 12X InfiniBand QDR (120Gbps) connectivity throughout the entire backbone network
  • 12 SCinet InfiniBand Network Participants
  • Approximately 11 Equipment and Software Vendors working together to provide all the resources to build the IB network
  • Approximately 23 InfiniBand Switches will be used for all the connections to the IB network
  • Approximately 5.39 miles (8.67 km) worth of fiber cable will be used to build the IB network
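For readers less familiar with InfiniBand link widths, here is a quick back-of-the-envelope check on those circuit rates (my own arithmetic, not part of the official SCinet plan): QDR signals at 10 Gbps per lane, and the 4X, 8X, and 12X widths simply aggregate 4, 8, or 12 lanes. Because QDR links use 8b/10b encoding, the usable data rate is about 80 percent of the signaling rate.

$$
\begin{aligned}
4\text{X QDR} &: 4 \times 10\ \text{Gbps} = 40\ \text{Gbps signaling} \;(\approx 32\ \text{Gbps of data})\\
8\text{X QDR} &: 8 \times 10\ \text{Gbps} = 80\ \text{Gbps signaling} \;(\approx 64\ \text{Gbps of data})\\
12\text{X QDR} &: 12 \times 10\ \text{Gbps} = 120\ \text{Gbps signaling} \;(\approx 96\ \text{Gbps of data})
\end{aligned}
$$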

The photos on this page show all the IB cabling that we need to sort through and label prior to installation, as well as the numerous SCinet Network Operations Center (NOC) systems and distributed/remote NOC (dNOC) equipment racks being installed and configured.

In future blog posts, I’ll update you on the status of the SCinet installation and provide more details on the InfiniBand demonstrations that you’ll be able to see at the show, including a flight simulator in 3D and Remote Desktop over InfiniBand (RDI).

Stay tuned!

Eric Dube

SCinet/InfiniBand Co-Chair


IBTA Announces RoCE Specification, Bringing the Power of the RDMA I/O Architecture to Ethernet-Based Business Solutions

April 22nd, 2010

As you may have already heard, earlier this week at HPC Financial Markets in New York, the IBTA officially announced the release of RDMA over Converged Ethernet - i.e. RoCE. The new specification, pronounced “Rocky,” provides the best of both worlds: InfiniBand efficiency and Ethernet ubiquity.

RoCE utilizes Remote Direct Memory Access (RDMA) to enable ultra-low latency communication - roughly one tenth that of other standards-based solutions. RDMA moves data directly from one node to another without requiring much help from the CPU or operating system. The specification applies to 10GigE, 40GigE or higher speed adapters.
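One practical consequence worth noting (my own illustration, not part of the IBTA announcement): because RoCE is exposed through the same OFA verbs and RDMA CM interfaces as InfiniBand, an application written against those APIs can run over either fabric. Below is a minimal, hedged C sketch of the address-resolution step an RDMA client performs before connecting; it assumes librdmacm is installed, and the address and port shown are placeholders.

```c
/*
 * Minimal sketch (illustration only): resolve a destination address with
 * librdmacm. The same code path is used whether the underlying fabric is
 * InfiniBand or RoCE. Build with: gcc roce_client.c -lrdmacm -libverbs
 */
#include <stdio.h>
#include <netdb.h>
#include <rdma/rdma_cma.h>

int main(void)
{
    struct addrinfo *ai = NULL;

    /* Placeholder destination: 192.0.2.10 is a documentation address and
     * 7471 an arbitrary example port. */
    if (getaddrinfo("192.0.2.10", "7471", NULL, &ai) != 0) {
        fprintf(stderr, "getaddrinfo failed\n");
        return 1;
    }

    struct rdma_event_channel *ec = rdma_create_event_channel();
    struct rdma_cm_id *id = NULL;
    if (!ec || rdma_create_id(ec, &id, NULL, RDMA_PS_TCP) != 0) {
        fprintf(stderr, "failed to create RDMA CM id\n");
        return 1;
    }

    /* Kicks off resolution of the destination to an RDMA-capable local
     * device - InfiniBand or RoCE - with a 2000 ms timeout; the result is
     * delivered asynchronously on the event channel. */
    if (rdma_resolve_addr(id, NULL, ai->ai_addr, 2000) != 0)
        perror("rdma_resolve_addr");

    rdma_destroy_id(id);
    rdma_destroy_event_channel(ec);
    freeaddrinfo(ai);
    return 0;
}
```

In a real client, route resolution, queue pair creation and rdma_connect() would follow once the address-resolved event is read from the channel.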

For people locked into an Ethernet infrastructure who are not currently using RDMA but would like to, RoCE lowers the barriers to deployment. In addition to low latency, RoCE end user benefits include improved application performance, efficiency, and cost and power savings.

RoCE delivers compelling benefits to high-growth markets and applications, including financial services, data warehousing and clustered cloud computing. Products based on RoCE will be available over the coming year.

Since our April 19 launch, we have seen great news coverage.

Be sure to watch for another announcement from the IBTA next week at Interop. I hope to connect with several of you at the show.

Brian Sparks

IBTA Marketing Working Group Co-Chair

InfiniBand Leads List of Russian Top50 Supercomputers; Connects 74 Percent, Including Seven of the Top10 Supercomputers

April 14th, 2010

Last week, the 12th edition of Russia’s Top50 list of the most powerful high performance computing systems was released at the annual Parallel Computing Technologies international conference. The list is ranked according to Linpack benchmark results and provides an important tool for tracking usage trends in HPC in Russia.

The fastest supercomputer on the Top50 is enabled by 40Gb/s InfiniBand with peak performance of 414 teraflops. More importantly, it is clear that InfiniBand is dominating the list as the most-used interconnect solution, connecting 37 systems - including the top three, and seven of the Top10.

According to the Linpack benchmark results, InfiniBand demonstrates up to 92 percent efficiency; this high system efficiency and utilization allow users to maximize the return on investment in their HPC server and storage infrastructure. Nearly three quarters of the list - represented by leading research laboratories, universities, industrial companies and banks in Russia - rely on industry-leading InfiniBand solutions to provide the highest bandwidth, efficiency, scalability and application performance.

Highlights of InfiniBand usage on the April 2010 Russian TOP50 list include:

  • InfiniBand connects 74 percent of the Top50, including seven of the Top10 most prestigious positions (#1, #2, #3, #6, #8, #9 and #10)
  • InfiniBand provides world-leading system utilization, up to 92 percent efficiency as measured by the Linpack benchmark
  • The list showed a sharp increase in the aggregated performance - the total peak performance of the list exceeded 1PFlops to reach 1152.9TFlops, an increase of 120 percent compared to the September 2009 list - highlighting the increasing demand for higher performance
  • Ethernet connects only 14 percent of the list (seven systems) and there were no 10GigE clusters
  • Proprietary clustering interconnects declined 40 percent to connect only three systems on the list

I look forward to seeing the results of the Top500 in June at the International Supercomputing Conference. I will be attending the conference, as will many of our IBTA colleagues, and I look forward to seeing all of our HPC friends in Germany.

Brian Sparks

IBTA Marketing Working Group Co-Chair

Foundation for the Converging Data Center: InfiniBand with Intelligent Fabric Management

March 26th, 2010

Most data centers are starting to realize the benefits of virtualization, cloud computing and automation. However, the heavy I/O requirements and intense need for better visibility and control quickly become key challenges that create inefficiencies and inhibit wider adoption of these advancements.

InfiniBand coupled with intelligent fabric management software can address many of these challenges - specifically those related to connectivity and I/O.

Technology analysis firm The Taneja Group recently took an in-depth look at this topic and published an interesting whitepaper, “Foundation for the Converging Data Center: Intelligent Fabric Management.” The paper lays out the requirements for intelligent fabric management and highlights how the right software can harness the traffic analysis capabilities that are inherent in and unique to InfiniBand to make data centers run a lot more efficiently. You can download the paper for free here.

Another interesting InfiniBand data point they shared: “in a recent Taneja Group survey of 359 virtual server administrators, those with InfiniBand infrastructures considered storage provisioning and capacity/performance management twice as easy as users of some of the other fabrics” (Taneja Group 2009 Survey of Storage Best Practices and Server Virtualization).

The High Performance Computing Center Stuttgart (HLRS) is one organization that has benefited from using 40 Gb/s InfiniBand and Voltaire’s Unified Fabric Manager™ (UFM™) software on their 700-node multi-tenant cluster, which basically operates as a cloud delivering HPC services to their customers. You can read more about it here.

I’d also like to invite you to tune into a recent webinar, “How to Optimize and Accelerate Application Performance with Intelligent Fabric Management,” co-hosted by Voltaire, The Taneja Group and Adaptive Computing. Here we explore this topic further and give an overview of some of the key capabilities of Voltaire’s UFM software.

Look forward to seeing many of you at Interop in Las Vegas next month!


Christy Lynch

Director, Corporate Communications, Voltaire

Member, IBTA Marketing Working Group

Highlighting End-User InfiniBand Deployments: Partners Healthcare Cuts Latency of Cloud-Based Storage Solution

February 22nd, 2010

Not enough is said about front- or back-end InfiniBand storage, but fear not! An interesting article just came out from Dave Raffo at SearchStorage.com that I think is worth spreading through the InfiniBand community. I have a quick summary below, but be sure to also check out the full article: “Health care system rolls its own data storage ‘cloud’ for researchers.”

Partners HealthCare, a non-profit organization founded in 1994 by Brigham and Women’s Hospital and Massachusetts General Hospital, is an integrated health care system that offers patients a continuum of coordinated high-quality care.

Over the past few years, ever-increasing advances in the resolution and accuracy of medical devices and instrumentation technologies have led to an explosion of data in biomedical research. Partners HealthCare recognized early on that a cloud-based research compute and storage infrastructure could be a compelling alternative for their researchers. Not only would it enable them to distribute costs and provide storage services on demand, but it would save on IT management time that was spent fixing all the independent research computers distributed across the Partners HealthCare network.

In an effort to address their unique needs, Partners HealthCare developed their own specially designed storage network. Initially, they chose Ethernet as the transport technology for their storage system. As demand grew, the solution began hitting significant performance bottlenecks - particularly during the read/write of hundreds of thousands of small files. The issue was found to lie with the interconnect - Ethernet created problems due to its high inherent latency. In order to provide a scalable, low latency solution, Partners HealthCare turned to InfiniBand. With InfiniBand on the storage back end, Partners HealthCare experienced roughly two orders of magnitude faster read times.

“One user had over 1,000 files, but only took up 100 gigs or so,” said Brent Richter, corporate manager for enterprise research infrastructure and services, Partners HealthCare System. “Doing that with Ethernet would take about 40 minutes just to list that directory. With InfiniBand, we reduced that to about a minute.”

Also, Partners HealthCare chose InfiniBand over 10-Gigabit Ethernet because InfiniBand is a lower latency protocol. “InfiniBand was price competitive and has lower latency than 10-Gig Ethernet,” said Richter.

Richter mentioned the final price tag came to about $1 per gigabyte.

By integrating InfiniBand into the storage solution, Partners HealthCare was able to reduce latency close to zero and increase its performance, providing their customers with faster response and higher capacity.

Great to see end-user cases like this come out! If you are a member of the IBTA and would like to share one of your InfiniBand end user deployments, please contact us at press@infinibandta.org.

Till next time,


Brian Sparks

IBTA Marketing Working Group Co-Chair

OpenFabrics Alliance Update and Invitation to Sonoma Workshop

January 27th, 2010

Happy New Year, InfiniBand community! I’m writing to you on behalf of the OpenFabrics Alliance (OFA), although I am also a member of the IBTA’s marketing working group. The OFA had many highlights in 2009, many of which may be of interest to InfiniBand vendors and end users. I wanted to review some of the news, events and technology updates from the past year, as well as invite you to our 6th Annual International Sonoma Workshop, taking place March 14-17, 2010.

At the OFA’s Sonoma Workshop in March 2009, OpenFabrics firmed up 40 Gigabit InfiniBand support and 10 Gigabit Ethernet support in the same Linux releases, providing a converged network strategy that leverages the best of both technologies. At the International Supercomputer Conference (ISC) in Germany in June 2009, OFED software was used on the exhibitors’ 40 Gigabit InfiniBand network and then a much larger 120 Gigabit network at SC09 in November.

Also at SC09, the current Top500 list of the world’s fastest supercomputer sites was published. The number of systems on the list that use OpenFabrics software is closely tied to the number of Top500 systems using InfiniBand. For InfiniBand, the numbers on the November 2009 list are as follows: 5 in the Top 10, 64 in the Top 100, and 186 in the Top500. For OpenFabrics, the numbers may be slightly higher because the Top500 does not capture the interconnect used for storage at the sites. One example of this is the Jaguar machine at Oak Ridge National Laboratory, which lists “proprietary,” when in fact the system also has a large InfiniBand and OFED-driven storage infrastructure.

Key new members that joined the OFA last year included Cray and Microsoft. These new memberships convey the degree to which OFED has become a de-facto standard for InfiniBand in HPC, where the lowest possible latency brings the most value to the computing and application performance.

There are, of course, significant numbers of sites in the Top500 where legacy 1 Gigabit Ethernet and TCP/IP are still perceived as being sufficient from a cost-performance perspective. OpenFabrics believes that as the cost of 10 Gigabit Ethernet chips on motherboards and NICs comes down, many of these sites will consider moving to 10 Gigabit Ethernet. As InfiniBand over Ethernet (commonly referred to as RoCEE) and iWARP are going to be in OFED 1.5.1 together beginning in March 2010, some sites will move to InfiniBand to capture the biggest improvement possible.

OpenIB, predecessor to OpenFabrics, helped in the early adoption of InfiniBand. The existence of free software, fully supporting a variety of vendors’ proprietary hardware, makes it easier for vendors to increase their hardware investments. The end user subsequently is able to purchase more hardware, as nothing needs to be spent on proprietary software to enable the system. This is one open source business model that has evolved, but none of us know if this is sustainable for the long term in HPC, or whether the enterprise and cloud communities will adopt it as well.

Please join me at this year’s OFA Sonoma Workshop, where the focus will continue to be on the integrated delivery of both InfiniBand and 10 Gigabit Ethernet support in OFED releases, which we believe makes OFED very attractive for cloud and enterprise data centers.


Bill Boas

Executive Director, OpenFabrics Alliance
