Archive

Archive for the ‘InfiniBand’ Category

To InfiniBand and Beyond – Supercomputing Support for NASA Missions

June 11th, 2015

High performance computing has been integral to solving large-scale problems across many industries, including science, engineering and business. Some of the most interesting use cases have come out of NASA, where supercomputing is essential to conduct accurate simulations and models for a variety of missions.

NASA’s flagship supercomputer, Pleiades, is among the world’s most powerful, currently ranking seventh in the United States and eleventh globally. It is housed at the NASA Advanced Supercomputing (NAS) facility in California and supports the agency’s work in aeronautics, Earth and space science and the future of space travel. At the heart of the system is InfiniBand technology, including DDR, QDR and FDR adapters and cabling.

The incremental expansion of Pleiades’ computing performance has been fundamental to its lasting success. Typically, a computer cluster is fully built from the onset and rarely expanded or upgraded during its lifetime. Built in 2008, Pleiades initially consisted of 64 server racks achieving 393 teraflops with a maximum link speed of 20Gb/s. Today, the supercomputer boasts 160 racks with a theoretical peak performance of 5.35 petaflops, or 5,350 teraflops, and a maximum link speed of 56Gb/s.

To further demonstrate the power of the InfiniBand-based Pleiades supercomputer, here are several fun facts to consider:

  • Today’s Pleiades supercomputer delivers more than 25 million times the computational power of the first Cray X-MP supercomputer at the NAS facility in 1984.
  • If every person in the world performed one calculation per second, eight hours per day, it would take them 1,592 days to complete one minute of Pleiades’ calculations.
  • The NAS facility has the largest InfiniBand network in the world, with over 65 miles (104.6 km) of cable interconnecting its supercomputing systems and storage devices – the same distance as from the Earth’s surface to the part of the thermosphere where auroras form.
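
The second fact lends itself to a quick sanity check. Assuming a world population of roughly 7 billion (a figure not given in the post) and Pleiades’ 5.35 petaflop theoretical peak, a few lines of Python reproduce the number:

```python
# Back-of-the-envelope check of the "1,592 days" fun fact.
# Assumption (not from the post): a world population of about 7 billion.
PEAK_FLOPS = 5.35e15            # Pleiades' theoretical peak, calculations/second
POPULATION = 7.0e9              # assumed number of people
SECONDS_PER_WORKDAY = 8 * 3600  # one calculation/second, eight hours per day

one_minute_of_pleiades = PEAK_FLOPS * 60            # calculations to match
humanity_per_day = POPULATION * SECONDS_PER_WORKDAY # humanity's daily output

days = one_minute_of_pleiades / humanity_per_day
print(round(days))  # -> 1592
```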

For additional facts and impacts of NASA’s high-end computing capability, check out its website here: http://www.nas.nasa.gov/hecc/about/hecc_facts.html

Bill Lee

IBTA Publishes Updated Integrators’ List & Announces Plugfest #27

March 17th, 2015

We’re excited to announce the availability of the IBTA October 2014 Combined Cable and Device Integrators’ List – a compilation of results from the IBTA Plugfest #26 held last fall. The list gathers products that have been tested and accepted by the IBTA as being compliant with the InfiniBand™ architecture specifications, with new products being added every spring and fall, following each Plugfest event.

The updated Integrators’ List, along with the bi-annual Plugfest testing, is a testament to the IBTA’s dedication to advancing InfiniBand technology and furthering industry adoption. The list demonstrates how our organization continues to ensure InfiniBand interoperability, as all cables and devices listed have successfully passed all the required compliance tests and procedures.

Additionally, the consistently updated Integrators’ List assures vendors’ customers and end-users that the manufacturers’ equipment included on the list has achieved the necessary level of compliance and interoperability. It also helps the IBTA assess current and future industry demands, providing background for future InfiniBand specifications.

We’ve already begun preparations for Plugfest #27, taking place April 13-24, 2015 at the University of New Hampshire’s Interoperability Laboratory in Durham, N.H. For more information, or to register for Plugfest #27, please visit the IBTA Plugfest website.

For any questions related to the Integrators’ List or IBTA membership, please visit the IBTA website: http://www.infinibandta.org/.

Rupert Dance, IBTA CIWG

IBTA Releases New Update to InfiniBand Architecture Specification

March 10th, 2015

Today, the IBTA announced Release 1.3 of the InfiniBand Architecture Specification Volume 1. In keeping with our mission to maintain and advance the specification, we worked to incorporate capabilities that keep pace with growing customer demand for faster data transfer and accessibility. The result is a fresh version of the specification with new features that enable computer systems to meet today’s high performance computing and data center requirements for increased stability and bandwidth, as well as high computing efficiency, availability and isolation.

The rapid evolution of computers and the internet has left existing interconnect technologies struggling to keep pace. The increasing popularity of high-end computing concepts like clustering, CPU offloads and the movement of large amounts of data demands a robust fabric.

Additionally, I/O devices are now expected to have link-level data protection, traffic isolation, deterministic behavior and a high quality of service. In order to help meet these needs, the IBTA developed InfiniBand Architecture (IBA) in 2000 and has continued to update the specification suite as the industry evolves.

As a result of this initiative, Release 1.3 enables notable improvements in scalability, with deeper visibility into switch-fabric configuration for improved monitoring of traffic patterns and easier network maintenance. The updated specification also provides more in-depth cable management and greater access to cable diagnostics, allowing for shorter response times and improved messaging capabilities, and thus better overall network performance.

Tweet: Learn more about the recent updates to the #InfiniBand architecture specification here: http://bit.ly/1FBbpcJ

Bill Lee


Storage with Intense Network Growth and the Rise of RoCE

February 4th, 2015

On January 4 and 5, the Entertainment Storage Alliances held the 14th annual Storage Visions conference in Las Vegas, highlighting advances in storage technologies used in consumer electronics and the media and entertainment industries. The theme of Storage Visions 2015 was Storage with Intense Network Growth (SWING), which was fitting given the explosive growth in both data storage and networking.

While the primary focus of Storage Visions is storage technologies, this year’s theme acknowledged the correlation between storage growth and network growth. Therefore, among the many sessions offered on increased capacity and higher performance, the storage networking session was specifically designed to educate the audience on advances in network technology – “Speed is the Need: High Performance Data Center Fabrics to Speed Networking.”

More pressure is being put on the data center network from a variety of sources, including continued growth in enterprise application transactions, new sources of data (a.k.a. big data) and the growth in streaming video and the emergence of 4K video. According to Cisco, global IP data center traffic will grow 23 percent annually to 8.7 zettabytes by 2018. Three quarters of this traffic will be intra-data center: traffic between servers (East-West) or between servers and storage (North-South). Given this, data centers need to factor in technologies designed to optimize data center traffic.
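
As a quick sanity check on the Cisco figure, the 2013 baseline implied by 23 percent annual growth ending at 8.7 zettabytes in 2018 works out to roughly 3.1 ZB (the calculation below is illustrative, not from the report itself):

```python
# Implied 2013 baseline behind Cisco's forecast of 23% annual growth
# reaching 8.7 zettabytes of data center IP traffic by 2018.
growth_rate = 0.23
traffic_2018_zb = 8.7
years = 2018 - 2013

baseline_2013_zb = traffic_2018_zb / (1 + growth_rate) ** years
print(f"{baseline_2013_zb:.1f} ZB")  # -> 3.1 ZB
```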

Global Data Center IP Traffic Forecast, Cisco Global Cloud Index, 2013-2018

Global Data Center Traffic By Destination, Cisco Global Cloud Index, 2013-2018

Storage administrators have always emphasized two important metrics, I/O operations per second (IOPS) and throughput, to measure the ability of the network to serve storage devices. Lately, a third metric, latency, has become equally important. When balanced with IOPS and throughput, low-latency technologies can bring dramatic benefits to storage.

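
The three metrics are not independent: throughput is IOPS times I/O size, and latency caps the IOPS achievable at a given queue depth (via Little’s law). A small illustration with made-up but plausible numbers:

```python
# How IOPS, throughput and latency relate for a storage workload.
# All numbers here are illustrative, not from the article.
io_size_bytes = 4 * 1024   # 4 KiB random I/O
latency_s = 100e-6         # 100 microseconds per request, end to end
queue_depth = 32           # requests kept in flight concurrently

iops = queue_depth / latency_s                 # Little's law: N = X * R
throughput_mb_s = iops * io_size_bytes / 1e6   # bytes/s -> MB/s

print(f"{iops:,.0f} IOPS at {throughput_mb_s:.0f} MB/s")
```

Cutting latency in half at the same queue depth doubles the achievable IOPS, which is why low-latency fabrics matter so much for storage.
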
At this year’s Storage Visions conference, I was asked to sit on a panel discussing the benefits of Remote Direct Memory Access (RDMA) for storage traffic. I specifically called out the benefits of RDMA over Converged Ethernet (RoCE). Joining me on the panel were representatives from Mellanox, speaking about InfiniBand, and Chelsio, speaking about iWARP. The storage-focused audience showed real interest in the topic and asked a number of insightful questions about RDMA benefits for their storage implementations.

RoCE in particular brings specific benefits to data center storage environments. As the purest implementation of the InfiniBand specification in the Ethernet environment, it has the ability to provide the lowest latency for storage. In addition, it capitalizes on the converged Ethernet standards defined in IEEE 802.1 – including Congestion Management, Enhanced Transmission Selection and Priority Flow Control – which collectively allow for lossless transmission, bandwidth allocation and quality of service. With the introduction of RoCEv2 in September 2014, the technology moved from supporting only a (flat) Layer 2 network to being a routable protocol supporting Layer 3 networks, allowing for use in distributed storage environments.

Ultimately, what customers need for optimal Ethernet-based storage is technology that balances IOPS, throughput and latency while allowing flexible storage placement in their network. RoCE addresses all of these needs and is becoming widely available in popular server and storage offerings.
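
The Layer 2 to Layer 3 shift in RoCEv2 is purely a matter of encapsulation. A rough sketch of the two header stacks (EtherType 0x8915 is the assigned RoCE v1 EtherType; UDP destination port 4791 is the IANA-assigned RoCEv2 port):

```python
# Header stacks that distinguish RoCE v1 (Layer 2 only) from RoCEv2 (routable).
# RoCE v1 rides directly on Ethernet, so its frames cannot cross an IP router;
# RoCEv2 wraps the same InfiniBand transport headers in UDP/IP, which is what
# makes the traffic routable across Layer 3 networks.
ROCE_V1 = ("Ethernet (EtherType 0x8915)", "IB GRH", "IB BTH", "payload", "ICRC")
ROCE_V2 = ("Ethernet", "IPv4/IPv6", "UDP (dst port 4791)", "IB BTH", "payload", "ICRC")

def is_routable(stack):
    """A header stack is Layer 3 routable if it carries an IP header."""
    return any("IPv4" in layer or "IPv6" in layer for layer in stack)

print(is_routable(ROCE_V1), is_routable(ROCE_V2))  # -> False True
```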

Mike Jochimsen, co-chair of the Marketing Working Group (MWG) at IBTA

Tweet: Get a recap from the IBTA's #SWING & #RoCE panel from #StorageVisions: http://blog.infinibandta.org/2015/02/03/storage-with-i…e-rise-of-roce/

The IBTA Celebrates Its 15th Anniversary

December 15th, 2014

Since 1999, the IBTA has worked to further the InfiniBand specification in order to provide the IT industry with an advanced fabric architecture that transmits large amounts of data between data centers around the globe. This year, the IBTA is celebrating 15 years of growth and success.

In its mission to unite the IT industry, the IBTA has welcomed an array of distinguished members including Cray, Microsoft, Oracle and QLogic. The IBTA now boasts over 50 member companies all dedicated to furthering the InfiniBand specification.

The continued growth of the IBTA reflects the IT industry’s dedication to the advancement of InfiniBand. Many IBTA member companies are developing products incorporating InfiniBand technology, including FDR, which has proven to be the fastest growing generation of InfiniBand technology: FDR adoption grew 76 percent year over year from 80 systems in 2013 to 141 systems in 2014. Most recently, the Top500 list announced that 225 of the world’s most powerful computers chose InfiniBand as their interconnect device in 2014.

2014 also marked the release of RoCEv2, an extension of the original RoCE specification (announced in 2010) that brought the benefits of the Remote Direct Memory Access (RDMA) I/O architecture to Ethernet-based networks. The updated specification addresses the needs of today’s evolving enterprise data centers by enabling routing across Layer 3 networks, which provides better traffic isolation and enables hyperscale data center deployments.

Below is a timeline that further illustrates the IBTA’s advancements over the past 15 years that have helped to bring InfiniBand technology to the forefront of the interconnect industry.

Volume 1 – General Specification
Volume 2 – Physical Specification

Bill Lee

ISC’14 Insights

July 10th, 2014

With ISC’14 recently concluded, the IBTA would like to highlight some of our members’ biggest announcements, share key takeaways from the event and give an overall look at the impact of InfiniBand in the supercomputing world.

Event Overview

Overall the event was a huge success, showcasing demonstrations and facilitating many major announcements in the supercomputing industry. As always, the event attracted supercomputing leaders from around the world, who provided thought leadership and knowledge to thousands of attendees and exhibitors.

Chris Pfistner, senior director of marketing at Finisar, noted, “ISC in Leipzig is a vibrant show!  Between SC in the US and ISC focused on Europe and Asia, the global Super Computing community is well represented, and the space is on fire with several players unveiling new systems and showing great interest in Finisar’s EDR AOC demo – 100G links are becoming reality.”

The ISC YouTube channel is another great resource for checking out all that happened at ISC’14!

Major InfiniBand Announcements

During the event, the updated TOP500 list was released, showing that InfiniBand is the most used interconnect by the world’s fastest supercomputers and represents 44.4 percent of the TOP500 at 222 systems. InfiniBand also connects the top 17 most efficient systems and 50 percent of Petaflop-capable systems on the list. Fourteen Data Rate (FDR) InfiniBand, delivering 56Gb/s per link to address the bandwidth demand by high performance clustered computing, drives the systems with the highest utilization on the TOP500, achieving a record breaking 99.8 percent system efficiency.

IBTA member Finisar showcased a live demonstration of its next generation 100Gb/s Quadwire Active Optical Cable (AOC). This optical interconnect runs four parallel lanes of 25Gb/s to enable the next generation of InfiniBand EDR-based supercomputers. Leveraging Finisar’s patented fiber optic technology, the AOC will allow the transmission of high-speed data with enhanced signal integrity, higher density, lower power consumption, smaller bend radius and longer reach than traditional copper solutions.

Mellanox Technologies, also an IBTA member, unveiled the world’s first 100Gb/s EDR InfiniBand switch at ISC’14. The 100Gb/s switch expands InfiniBand’s bandwidth and provides increased switching capacity while lowering latency and power consumption. This solution, called Switch-IB, delivers 5.4 billion packets per second, enabling application managers to harness the power of data and making it an excellent solution for HPC, cloud, Web 2.0, database and storage centers.

In other news, Applied Micro teamed up with Nvidia to promote the X-Gene and Tesla duo. The Ethernet ports on the X-Gene 2 chip will be able to run RDMA over Converged Ethernet (RoCE), which brings the low latency of RDMA to the Ethernet protocol. This will make the chip not only suitable for HPC workloads that are latency sensitive, but also for database, storage, and transaction processing workloads in enterprise datacenters that also demand low latency. In tandem, Mellanox announced that their 56Gb/s InfiniBand products will now be optimized for the X-Gene platform.

To sum it all up, here are the IBTA’s key takeaways from the event:

  • 100Gb/s is becoming a reality
  • InfiniBand is leading the way, with EDR taking center stage
  • The industry is taking full advantage today of the performance and efficiency benefits of FDR and RoCE

ISC’14 was a great forum for knowledge sharing and big industry announcements. All of us at the IBTA look forward to seeing more InfiniBand news and discussions in the coming months.

Bill Lee

IBTA Updates Integrators’ List Following PF23 Compliance & Interoperability Testing

September 25th, 2013

We’re proud to announce the availability of the IBTA April 2013 Combined Cable and Device Integrators’ List – a compilation of results from the IBTA Plugfest 23, during which we conducted the first-ever Enhanced Data Rate (EDR) 100Gb/s InfiniBand standard compliance testing.

IBTA’s updated Integrators’ List and our newest Plugfest testing are a testament to the IBTA’s commitment to advancing InfiniBand technology and ensuring its interoperability, as all cables and devices on the list successfully passed the required compliance tests and interoperability procedures. Vendors listed may now access the IBTA Integrators’ List promotional materials and a special marketing program for their products.

Plugfest 23 was a huge success, attracting top manufacturers and would not have been possible without donated testing equipment from the following vendors: Agilent Technologies, Anritsu, Molex, Tektronix and Wilder Technologies. We are thrilled with the level of participation and the caliber of technology manufacturers who came out and supported the IBTA.

The updated Integrators’ List is a tool used by the IBTA to assure vendors’ customers and end-users that manufacturers have met the mark of compliance and interoperability. It is also a method for furthering the InfiniBand specification. The Integrators’ List is published every spring and fall following the bi-annual Plugfest and serves to assist IT professionals, including data center managers and CIOs, with their planned deployments of InfiniBand solutions.

We’ve already begun preparations for Plugfest 24, which will take place October 7-18, 2013 at the University of New Hampshire’s Interoperability Laboratory. For more information, or to register for Plugfest 24, please visit the IBTA Plugfest website.

If you have any questions related to IBTA membership or the Integrators’ List, please visit the IBTA website: http://www.infinibandta.org/, or email us: ibta_plugfest@soft-forge.com.

Rupert Dance, IBTA CIWG

High Performance Computing by Any Other Name Smells So Sweet

March 29th, 2012
Taken from the ISC HPC Blog, by Peter Ffoulkes

This month, the annual HPC Advisory Council meeting took place in Switzerland, stirring many discussions about the future of big data, and the available and emerging technologies slated to solve the inundation of data on enterprises.

On the ISC blog, Peter Ffoulkes from TheInfoPro writes that the enterprise is finally starting to discover that HPC is the pathway to success, especially when it comes to Big Data Analytics.

“Going back to fundamentals, HPC is frequently defined as either compute intensive or data intensive computing or both. Welcome to today’s hottest commercial computing workload, “Total Data” and business analytics. As described by 451 Research, “Total Data” involves processing any data that might be applicable to the query at hand, whether that data is structured or unstructured, and whether it resides in the data warehouse, or a distributed Hadoop file system, or archived systems, or any operational data source – SQL or NoSQL – and whether it is on-premises or in the cloud.”

According to Ffoulkes, the answer to the total data question will remain in HPC. This week, many of us attended the OpenFabrics Alliance User and Developer Workshop and discussed these same topics: enterprise data processing needs, cloud computing and big data. While the event has ended, I hope the discussions continue as we look to the future of big data.

In the meantime, be sure to check out Peter’s thoughts in his full blog post.

Jim Ryan
Intel


Why I/O is Worth a Fresh Look

October 5th, 2011

In September, I had the privilege of working with my friend and colleague, Paul Grun of System Fabric Works (SFW), on the first webinar in a four-part series, “Why I/O is Worth a Fresh Look,” presented by the InfiniBand Trade Association on September 23.

The IBTA Fall Webinar Series is part of a planned outreach program led by the IBTA to expand InfiniBand technology into new areas where its capabilities may be especially useful. InfiniBand is well accepted in the high performance computing (HPC) community, but the technology can be just as beneficial in “mainstream” enterprise data centers (EDCs). The webinar series addresses the role of remote direct memory access (RDMA) technologies, such as InfiniBand and RDMA over Converged Ethernet (RoCE), in the EDC, highlighting the rising importance of I/O technology in the ongoing transformation of modern data centers. We know that broadening into the EDC is a difficult task for several reasons, including the fact that InfiniBand could be viewed as a “disruptive” technology: it is not based on the familiar Ethernet transport and therefore requires new components in the EDC. The benefits are certainly there, but so are the challenges, hence the difficulty of our task.

Like all new technologies, one of our challenges is educating those who are not familiar with InfiniBand and challenging them to look at their current systems differently, just as the first part in this webinar series suggests: taking a fresh look at I/O. In this first webinar, we took on the task of reexamining I/O, assessing genuine advancements – specifically InfiniBand – and making the case for why this technology should be considered when improving your data center. We believe the developments in the InfiniBand world over the last decade are not well known to EDC managers, or at least not well understood.

I am very happy with the result, and the first webinar really set the stage for the next three, which dive into the nuts and bolts of this technology and give practical information on how it can be implemented to improve your data center.

During the webinar we answered several questions, but there was one in particular that I felt we did not spend enough time discussing due to time constraints. The attendee asked, “How will interoperability in the data center be assured? The results from the IBTA plugfests are less than impressive. Will this improve with the next generation FDR product?”

First, this question requires a little explanation, because it uses terminology and implies knowledge outside of the webinar itself. There is testing of InfiniBand components which takes place jointly between the IBTA and OpenFabrics Alliance (OFA) at the University of New Hampshire Interoperability Lab (UNH-IOL). We test InfiniBand components for compliance to the InfiniBand specification and for interoperability with other compliant InfiniBand components.

In the opinion of IBTA and OFA members, vendors and customers alike, interoperability must be verified with a variety of vendors and their products. However, that makes the testing much more difficult and results in lower success rates than if a less demanding approach were to be taken. The ever-increasing data rates also put additional demands on cable vendors and InfiniBand Channel Adapter and Switch vendors.

The real-world result of our testing is a documented pass rate of about 90 percent, and a continuing commitment to do better.

What this means in real world terms is that the InfiniBand community has achieved the most comprehensive and strictest compliance and interoperability program in the industry. This fact, in and of itself, is probably the strongest foundational element that justifies our belief that InfiniBand can and should be considered for adoption in the mainstream EDC, with complete confidence as to its quality, reliability and maturity.

If you were unable to attend the webinar, be sure to check out the recording and download the presentation slides here. We’re looking forward to the next webinar in the series, “The Practical Approach to Applying InfiniBand in Your Data Center,” taking place October 21, which will dig more deeply into how this technology can be integrated into the data center. I look forward to your participation in the remaining webinars. There’s a lot we can accomplish together, and it starts with this basic understanding of the technology and how it can help you reach your company’s goals.

Jim Ryan
Chairman of the OpenFabrics Alliance


InfiniBand at VMworld!

September 2nd, 2011

VMworld 2011 took place this week in sunny Las Vegas, and with over 20,000 attendees, the show has quickly developed into one of the largest enterprise IT events in the world. Virtualization continues to be one of the hottest topics in the industry, providing a great opportunity for InfiniBand vendors to market the wide array of benefits that InfiniBand enables in virtualized environments. Several members of the IBTA community were spreading the InfiniBand message; here are a few of note.

On the networking side, Mellanox Technologies showed the latest generation of InfiniBand technology, FDR 56Gb/s. With FDR adapters, switches and cables available today, IT managers can immediately deploy this next generation technology in their data centers and get instant performance improvements, whether it be leading vMotion performance, the ability to support more virtual machines per server at higher bandwidth per virtual machine, or lower capital and operating expenses by consolidating networking, management and storage I/O onto a one-wire infrastructure.

Fusion-io, a Flash-based storage manufacturer targeting heavy data acceleration needs for applications such as databases, virtualization, Memcached and VDI, also made a big splash at VMworld. Their booth featured an excellent demonstration of how low-latency, high-speed InfiniBand networks enabled Fusion-io to show 800 virtual desktops being accessed and displayed across 17 monitors. InfiniBand enabled them to stream over 2,000 bandwidth-intensive HD movies from just eight servers.

Pure Storage, a newcomer in the storage arena, announced their 40Gb/s InfiniBand-based enterprise storage array targeting applications such as databases and VDI. With InfiniBand they are able to reduce latency by more than 8X while increasing performance by 10X.

Isilon was recently acquired by EMC, and in the EMC booth a rack of Isilon storage systems was displayed, scaling out over 40Gb/s InfiniBand on the back end. These storage systems excel in VDI implementations and are well suited to customers implementing a cloud solution where performance, reliability and storage resiliency are vital.

Also exhibiting at VMworld was Xsigo Systems. Xsigo showed their latest Virtual I/O Director, which now includes 40Gb/s InfiniBand; the previous generation used 20Gb/s InfiniBand. With the upgraded bandwidth, Xsigo can now offer their customers 12-30X acceleration of I/O-intensive tasks such as vMotion, queries and backup, all while providing dynamic bandwidth allocation per VM or job. In addition, by consolidating the network over a single wire, Xsigo is able to provide customers with 85 percent lower hardware cost per virtual machine.

The items mentioned above are just a small slice of the excitement that was at VMworld. I’m glad to have seen so many InfiniBand solutions displayed. For more information on InfiniBand in the enterprise, watch for an upcoming webinar series being produced by the IBTA.

Brian Sparks

IBTA Marketing Working Group Co-Chair