Archive for the ‘InfiniBand’ Category

IBTA Wants You – Guide the Future of InfiniBand, RoCE and Performance-Driven Data Centers

January 25th, 2016

For any organization, the New Year provides a great opportunity to reflect on the past and set a healthier course for the future. Companies can take a variety of internal actions to prepare for impending market changes, but rarely do they have the power to influence the course of an entire industry on their own. For those devoted to improving clustered server and data center performance, joining an industry alliance such as the InfiniBand Trade Association (IBTA) offers a chance to contribute to the foundational work that sets the path for technological advances one, five and ten years into the future.

The IBTA is the organization that maintains and furthers the InfiniBand specification, which is used by cloud service providers and high-performing enterprise data centers and is the interconnect of choice for the world’s fastest supercomputers. Additionally, the IBTA defines the specification for RDMA over Converged Ethernet (RoCE), which brings the advantages of RDMA technology to Ethernet-based environments.

Leading enterprise IT vendors and HPC research facilities make up the coalition of more than 50 members that all have a shared interest in the advancement of InfiniBand and/or RoCE technology. Each member company contributes specialized expertise to IBTA’s various technical working groups, which shape and guide the progression of InfiniBand and RoCE capabilities.

Joining the IBTA comes with a variety of membership benefits, including:

Access to:

  • The InfiniBand and RoCE architecture specifications as they are being developed
  • Meeting minutes and notices of proposed and actual changes to IBTA-controlled documents
  • IBTA-sponsored Compliance and Interoperability Plugfests and workshops

Participation in:

  • The maintenance of the InfiniBand Roadmap, which defines future speeds and lane widths for InfiniBand-based technologies
  • IBTA-sponsored activities at tradeshows, including the annual Supercomputing Conference in November
  • IBTA speaking and demo opportunities

Opportunity to:

  • Influence and contribute to the ongoing development and maintenance of the InfiniBand and RoCE architecture specifications
  • Add approved products to the IBTA Integrators’ List, which provides a centralized listing of products that have passed a suite of compliance and interoperability testing
  • Post InfiniBand and RoCE related whitepapers, webinars, podcasts and press releases on the IBTA and RoCE Initiative web sites
  • Submit and obtain access to information regarding licensing policies posted by member patent holders on specific InfiniBand and RoCE architecture specifications
  • Network with the world’s foremost developers of InfiniBand and RoCE hardware and software

Make 2016 the year your company defines the future of the HPC industry! Visit our Membership page to learn how to join or contact membership@infinibandta.org for more information.

Bill Lee

InfiniBand Roadmap – Charting Speeds for Future Needs

December 14th, 2015

Defining the InfiniBand ecosystem to accommodate future performance increases is similar to city planners preparing for urban growth: both require a collaborative effort between experts and the community they serve.

The High Performance Computing (HPC) community continues to call for faster interconnects to transfer massive amounts of data between its servers and clusters. Today, the industry’s fastest supercomputers operate at petaflop speeds, and experts expect them to reach Exascale computing by 2025.

IBTA’s working groups are always looking ahead to meet the HPC community’s future performance demands. We are constantly updating the InfiniBand Roadmap, a visual representation of InfiniBand speed increases, to keep our work in line with expected industry trends and systems-level performance gains.

The roadmap itself is dotted with data rates, each defined by a transfer speed and a release date. Each data rate has a designated moniker and is measured in three link widths: 1x, 4x and 12x. The number refers to the number of lanes per port, with each additional lane allowing for greater bandwidth.

Current defined InfiniBand Data Rates include the following:

Data Rate    4x Link Bandwidth    12x Link Bandwidth
SDR          8 Gb/s               24 Gb/s
DDR          16 Gb/s              48 Gb/s
QDR          32 Gb/s              96 Gb/s
FDR          56 Gb/s              168 Gb/s
EDR          100 Gb/s             300 Gb/s
HDR          200 Gb/s             600 Gb/s

The evolution of InfiniBand can be easily tracked by its data rates, as demonstrated in the table above. A typical server or storage interconnect uses 4x links, or 4 lanes per port. However, clusters and supercomputers can leverage 12x links for even greater performance. Looking ahead, we expect to see a number of technical advances as the race to Exascale heats up.
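
For the arithmetically inclined, the table above falls out of a simple rule: aggregate link bandwidth is the per-lane data rate multiplied by the lane count. Below is a minimal C sketch that reproduces the 4x and 12x columns from rounded per-lane rates (FDR’s exact lane rate is 14.0625 Gb/s, so its row is approximate):

    #include <stdio.h>

    struct rate { const char *name; double per_lane_gbps; };

    int main(void) {
        /* Rounded per-lane data rates in Gb/s, per the IBTA roadmap table. */
        const struct rate rates[] = {
            { "SDR",  2.0 }, { "DDR",  4.0 }, { "QDR",  8.0 },
            { "FDR", 14.0 }, { "EDR", 25.0 }, { "HDR", 50.0 },
        };

        printf("%-5s %8s %8s %8s\n", "Rate", "1x", "4x", "12x");
        for (size_t i = 0; i < sizeof rates / sizeof rates[0]; i++)
            printf("%-5s %8.0f %8.0f %8.0f   (Gb/s)\n",
                   rates[i].name,
                   rates[i].per_lane_gbps,          /* 1x  */
                   rates[i].per_lane_gbps * 4.0,    /* 4x  */
                   rates[i].per_lane_gbps * 12.0);  /* 12x */
        return 0;
    }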

As the roadmap demonstrates, planning for future data rates starts years in advance of their expected availability. In the latest edition, you will find two data rates scheduled beyond HDR: NDR and the newly christened XDR. Stay tuned as the IBTA specifies NDR and XDR’s release dates and bandwidths.

Bill Lee

Changes to the Modern Data Center – Recap from SDC 15

October 19th, 2015

The InfiniBand Trade Association recently had the opportunity to speak on RDMA technology at the 2015 Storage Developer Conference. For the first time, SDC15 introduced Pre-conference Primer Sessions covering topics such as Persistent Memory, Cloud and Interop and Data Center Infrastructure. Intel’s David Cohen, System Architect, and Brian Hausauer, Hardware Architect, spoke on behalf of the IBTA in a pre-conference session on “Nonvolatile Memory (NVM), four trends in the modern data center and implications for the design of next generation distributed storage systems.”

Below is a high level overview of their presentation:

The modern data center continues to transform as applications and uses change and develop. Most recently, we have seen users abandon traditional storage architectures for the cloud. Cloud storage is founded on data-center-wide connectivity and scale-out storage, which delivers significant increases in capacity and performance, enabling application deployment anytime, anywhere. Additionally, job scheduling and system balance capabilities are boosting overall efficiency and optimizing a variety of essential data center functions.

Trends in the modern data center are appearing as cloud architecture takes hold. First, the performance of network bandwidth and storage media is growing rapidly. Furthermore, operating system vendors (OSV) are optimizing the code path of their network and storage stacks. All of these speed and efficiency gains to network bandwidth and storage are occurring while single processor/core performance remains relatively flat.

Data comes in a variety of flavors: some of it is accessed frequently by application I/O requests, while other data is rarely retrieved. To enable higher performance and resource efficiency, cloud storage uses a tiering model that places data according to how often it is accessed. Data that is regularly accessed is stored on expensive, high performance media (solid-state drives). Data that is rarely or never retrieved is relegated to less expensive media with the lowest $/GB (rotational drives). This model follows a Hot, Warm and Cold data pattern and allows faster access to what you use the most.
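
To make the tiering idea concrete, here is a toy C sketch of such a placement policy. The access-frequency thresholds are hypothetical; a real system would tune them per workload:

    #include <stdio.h>

    enum tier { TIER_HOT, TIER_WARM, TIER_COLD };

    /* Classify an object by how often it was accessed in the last day.
     * These cutoffs are illustrative only, not from any particular system. */
    static enum tier classify(unsigned accesses_per_day) {
        if (accesses_per_day >= 100) return TIER_HOT;  /* solid-state drives */
        if (accesses_per_day >= 1)   return TIER_WARM; /* mid-tier media     */
        return TIER_COLD;                              /* rotational drives  */
    }

    int main(void) {
        const char *labels[] = { "hot (SSD)", "warm", "cold (HDD)" };
        const unsigned samples[] = { 5000, 12, 0 };
        for (int i = 0; i < 3; i++)
            printf("%4u accesses/day -> %s\n",
                   samples[i], labels[classify(samples[i])]);
        return 0;
    }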

The growth of high performance storage media is driving the need for innovation in the network, primarily addressing application latency. This is where Remote Direct Memory Access (RDMA) comes into play. RDMA is an advanced, reliable transport protocol that enhances the efficiency of workload processing. Essentially, it increases data center application performance by offloading the movement of data from the CPU. This lowers overhead and allows the CPU to focus its processing power on running applications, which in turn reduces latency.
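
The mechanism behind this offload is memory registration: an application pins a buffer and hands the adapter keys that let it move data to and from that buffer directly. Below is a minimal sketch using the open-source libibverbs API (the verbs interface for InfiniBand and RoCE adapters). It only sets up and tears down a registration, omits most error handling, and assumes an RDMA-capable adapter is present; it compiles with gcc reg.c -libverbs:

    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void) {
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) {
            fprintf(stderr, "no RDMA-capable devices found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        /* Pin and register a buffer; the returned keys let local and remote
         * adapters DMA to/from it directly, with no CPU copy in the data path. */
        size_t len = 4096;
        void *buf = malloc(len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (!mr) { perror("ibv_reg_mr"); return 1; }

        printf("registered %zu bytes: lkey=0x%x rkey=0x%x\n",
               len, mr->lkey, mr->rkey);

        ibv_dereg_mr(mr);
        free(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }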

As demand for cloud storage increases, so does the need for RDMA and high performance storage networking. With this in mind, the InfiniBand Trade Association is continuing its work developing the RDMA architecture for InfiniBand and Ethernet (via RDMA over Converged Ethernet, or RoCE) topologies.

Bill Lee

Race to Exascale – Nations Vie to Build Fastest Supercomputer

September 28th, 2015

“Discover Supercomputer 3” by NASA Goddard Space Flight Center is licensed under CC BY 2.0

The race between countries to build the fastest, biggest or first of anything is nothing new; think of the race to the moon. One of the current global competitions is focused on supercomputing, specifically the race to Exascale computing, or a billion billion calculations per second. Governments are now allocating significant resources toward Exascale initiatives, as evidenced by President Obama’s Executive Order and China’s current lead in supercomputing, as they begin to understand the technology’s vast potential for a variety of industries, including healthcare, defense and space exploration.

The TOP500 list, which ranks the top supercomputers in the world, will continue to be the scorecard. Currently, the U.S. leads with 233 of the top 500 supercomputers, followed by Europe with 141 and China with 37. However, China’s smaller portfolio does not make it any less of a competitor: China has held the #1 spot on the TOP500 list for the fifth consecutive time.

When looking to build the supercomputers of the future, a number of factors need to be taken into consideration, including superior application performance, compute scalability and resource efficiency. InfiniBand’s compute offloads and scalability make it extremely attractive to supercomputer architects. Proof of its performance and scalability can be found in places such as the HPC Advisory Council’s library of case studies. InfiniBand makes it possible to achieve near linear performance improvement as more computers are connected to the array. Since observers of this space expect Exascale systems to require a massive amount of compute hardware, InfiniBand’s scalability looks to be a requirement for achieving this goal.
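
“Near linear” has a concrete meaning: if one node runs a job in time T1 and N nodes run it in time TN, the parallel efficiency T1 / (N × TN) should stay close to 100% as N grows. The short C sketch below computes speedup and efficiency from a set of illustrative runtimes (made-up numbers for the arithmetic, not published benchmarks):

    #include <stdio.h>

    int main(void) {
        /* Illustrative runtimes (seconds) for one job on 1..16 nodes. */
        const int    nodes[]   = { 1, 2, 4, 8, 16 };
        const double runtime[] = { 1000.0, 505.0, 258.0, 133.0, 70.0 };

        printf("nodes  speedup  efficiency\n");
        for (int i = 0; i < 5; i++) {
            double speedup = runtime[0] / runtime[i]; /* T1 / TN            */
            double eff     = speedup / nodes[i];      /* 1.0 = ideal linear */
            printf("%5d  %7.2f  %9.1f%%\n", nodes[i], speedup, eff * 100.0);
        }
        return 0;
    }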

As the race to supercomputing speeds up, we expect to see a number of exciting advances in technology as we shift from petaflops to exaflops. To give you an idea of how far we have come and where we are heading, here is a comparison of the computing speeds that powered the race to space with the goals for Exascale.

Speeds Then vs. Now – Race to Space vs. Race to Supercomputing

  • Computers in 1960s (Speed of the Race to Space): Hectoscale (hundreds of FLOPs per second)
  • Goal for Computers in 2025 (Speed of the Race to Supercomputing): Exascale (quintillions of FLOPs per second)

Advances in supercomputing will continue to dominate the news as these nations make the development of the fastest supercomputer a priority. As November approaches and the new TOP500 list is released, it will be very interesting to see where the rankings lie and which interconnects the respective architects pick.


Bill Lee

EDR Hits Primetime! Newly Published IBTA Integrators’ List Highlights Growth of EDR

August 20th, 2015

The highly anticipated IBTA April 2015 Combined Cable and Device Integrators’ List is now available for download. The list highlights the results of IBTA Plugfest 27, held at the University of New Hampshire’s Interoperability Lab earlier this year. The updated list consists of newly verified products that are compliant with the InfiniBand specification, as well as details on solution interoperability.

Of particular note was the rise in EDR submissions. At IBTA Plugfest 27, eight companies provided 32 EDR cables for testing, up from three companies and 12 EDR cables at IBTA Plugfest 26. The increase in EDR cable solutions indicates that the technology is beginning to hit its stride, and we anticipate even more EDR solutions at Plugfest 28.

The IBTA is known in the industry for its rigorous testing procedures and subsequent Integrators’ List. The Integrators’ List provides IT professionals with peace of mind when purchasing new components to incorporate into new and existing infrastructure. To ensure the most reliable results, the IBTA uses industry-leading test equipment from Anritsu, Keysight, Molex, Tektronix, Total Phase and Wilder Technologies. We appreciate their commitment to our compliance program; we couldn’t do it without them.

The IBTA hosts its Plugfest twice a year to give members a chance to test new configurations or form factors. Although many technical associations require substantial attendance fees for testing events, the IBTA covers the bulk of Plugfest costs through membership dues.

The companies participating in Plugfest 27 included 3M Company, Advanced Photonics, Inc., Amphenol, FCI, Finisar, Fujikura, Ltd., Fujitsu Component Limited, Lorom America, Luxshare-ICT, Mellanox Technologies, Molex Incorporated, SAE Magnetics, Samtec, Shanghai Net Miles Fiber Technology Co. Ltd, Siemon, Sumitomo, and Volex.

We’ve already begun planning for IBTA Plugfest 28, which will be held October 12-23, 2015. For questions about Plugfest, contact ibta_plugfest@soft-forge.com or visit the Plugfest page for additional information.

Rupert Dance, IBTA CIWG

InfiniBand Leads the TOP500, Powering More Than 50 Percent of the World’s Supercomputing Systems

August 4th, 2015

TOP500 Interconnect Trends

TOP500.org released the list of the 500 most powerful commercially available computer systems in the world, reporting that InfiniBand powers 257 systems, or 51.4 percent of the list. This marks 15.8 percent year-over-year growth from June 2014.

Demand for higher bandwidth, lower latency and higher message rates, along with the need for application acceleration, is driving continued adoption of InfiniBand in traditional High Performance Computing (HPC) as well as commercial HPC, cloud and enterprise data centers. InfiniBand is the only open-standard I/O that provides the capability required to handle supercomputing’s high demand for CPU cycles without wasting time on I/O transactions.

  • InfiniBand powers the most efficient system on the list, with 98.8% efficiency.
  • EDR (Enhanced Data Rate) InfiniBand delivers 100Gbps and enters the TOP500 for the first time, powering three systems.
  • FDR (Fourteen Data Rate) InfiniBand at 56Gbps continues to be the most used technology on the TOP500, connecting 156 systems.
  • InfiniBand connects the most powerful clusters, 33 of the Petascale-performance systems, up from 24 in June 2014.
  • InfiniBand is the leading interconnect for accelerator-based systems, covering 77% of the list.

Not only is InfiniBand the most used interconnect among the world’s 500 most powerful supercomputers, it is also the leader in the TOP100, the top 100 supercomputing systems as ranked in the TOP500. InfiniBand is the natural choice for world-leading supercomputers because of its performance, efficiency and scalability.

The full TOP500 list is available at www.top500.org.

Bill Lee

IBTA Members to Exhibit at ISC High Performance 2015

July 10th, 2015

The ISC High Performance 2015 conference gets underway this weekend in Frankfurt, Germany, where experts in the high performance computing field will gather to discuss the latest developments and trends driving the industry. Event organizers are expecting over 2,500 attendees at this year’s show, which will feature speakers, presentations, BoF sessions, tutorials and workshops on a variety of topics.

IBTA members will be on hand exhibiting their latest InfiniBand-based HPC solutions. Multiple EDR 100Gb/s InfiniBand products and demonstrations can be seen across the exhibit hall at ISC High Performance at the following member company booths:

  • Applied Micro (Booth #1431)
  • Bull (Booth #1230)
  • HP (Booth #732)
  • IBM (Booth #928)
  • Lenovo (Booth #1020)
  • Mellanox (Booth #905)
  • SGI (Booth #910)

Be sure to stop by each of our member booths and ask about their InfiniBand offerings! For additional details on ISC High Performance 2015 keynotes and sessions, visit its program overview page.

Bill Lee

To InfiniBand and Beyond – Supercomputing Support for NASA Missions

June 11th, 2015

High performance computing has been integral to solving large-scale problems across many industries, including science, engineering and business. Some of the most interesting use cases have come out of NASA, where supercomputing is essential to conduct accurate simulations and models for a variety of missions.

NASA’s flagship supercomputer, Pleiades, is among the world’s most powerful, currently ranking seventh in the United States and eleventh globally. It is housed at the NASA Advanced Supercomputing (NAS) facility in California and supports the agency’s work in aeronautics, Earth and space science and the future of space travel. At the heart of the system is InfiniBand technology, including DDR, QDR and FDR adapters and cabling.

The incremental expansion of Pleiades’ computing performance has been fundamental to its lasting success. Typically, a computer cluster is fully built from the outset and rarely expanded or upgraded during its lifetime. Built in 2008, Pleiades initially consisted of 64 server racks achieving 393 teraflops with a maximum link speed of 20Gb/s. Today, the supercomputer boasts 160 racks with a theoretical peak performance of 5.35 petaflops (5,350 teraflops) and a maximum link speed of 56Gb/s.

To further demonstrate the power of the InfiniBand-based Pleiades supercomputer, here are several fun facts to consider:

  • Today’s Pleiades supercomputer delivers more than 25 million times the computational power of the first Cray X-MP supercomputer at the NAS facility in 1984.
  • The number of days it would take every person in the world to complete one minute of Pleiades’ calculations, if each person performed one calculation per second for eight hours per day: 1,592 (see the back-of-the-envelope check after this list).
  • The NAS facility has the largest InfiniBand network in the world, with over 65 miles (104.6 km) of cable interconnecting its supercomputing systems and storage devices, roughly the distance from the Earth’s surface to the part of the thermosphere where auroras form.
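
That 1,592-day figure checks out arithmetically. Here is the back-of-the-envelope calculation in C, assuming a world population of roughly 7.0 billion (approximately the figure at the time) and Pleiades’ 5.35-petaflop theoretical peak:

    #include <stdio.h>

    int main(void) {
        /* Pleiades' theoretical peak, from the post: 5.35 petaflops. */
        const double peak_calcs_per_sec = 5.35e15;
        const double one_minute_of_work = peak_calcs_per_sec * 60.0;

        /* Assumptions: ~7.0 billion people, each performing one
         * calculation per second for 8 hours per day. */
        const double people        = 7.0e9;
        const double calcs_per_day = people * 8.0 * 3600.0;

        printf("days: %.0f\n", one_minute_of_work / calcs_per_day); /* ~1,592 */
        return 0;
    }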

For additional facts and impacts of NASA’s high-end computing capability, check out its website here: http://www.nas.nasa.gov/hecc/about/hecc_facts.html

Bill Lee

IBTA Publishes Updated Integrators’ List & Announces Plugfest #27

March 17th, 2015

We’re excited to announce the availability of the IBTA October 2014 Combined Cable and Device Integrators’ List – a compilation of results from the IBTA Plugfest #26 held last fall. The list gathers products that have been tested and accepted by the IBTA as being compliant with the InfiniBand™ architecture specifications, with new products being added every spring and fall, following each Plugfest event.

The updated Integrators’ List, along with the twice-yearly Plugfest testing, is a testament to the IBTA’s dedication to advancing InfiniBand technology and furthering industry adoption. The list demonstrates how our organization continues to ensure InfiniBand interoperability, as all cables and devices listed have successfully passed the required compliance tests and procedures.

Additionally, the regularly updated Integrators’ List assures vendors’ customers and end-users that the equipment on the list has achieved the necessary level of compliance and interoperability. It also helps the IBTA assess current and future industry demands, providing background for future InfiniBand specifications.

We’ve already begun preparations for Plugfest #27, taking place April 13-24, 2015 at the University of New Hampshire’s Interoperability Laboratory in Durham, N.H. For more information, or to register for Plugfest #27, please visit the IBTA Plugfest website.

For any questions related to the Integrators’ List or IBTA membership, please visit the IBTA website: http://www.infinibandta.org/.

Rupert Dance, IBTA CIWG

IBTA Releases New Update to InfiniBand Architecture Specification

March 10th, 2015

Today, the IBTA announced Release 1.3 of the InfiniBand Architecture Specification Volume 1. In keeping with our mission to maintain and further the specification, we worked to incorporate capabilities that keep up with growing customer demand for faster data transfer and accessibility. The result is a fresh version of the specification with new features that give computer systems the increased stability, bandwidth, computing efficiency, availability and isolation that high performance computing and data centers require.

The rapid evolution of computers and the internet has left existing interconnect technologies struggling to keep pace. The increasing popularity of high-end computing concepts like clustering, CPU offload and the movement of large amounts of data demands a robust fabric.

Additionally, I/O devices are now expected to provide link-level data protection, traffic isolation, deterministic behavior and a high quality of service. To help meet these needs, the IBTA developed the InfiniBand Architecture (IBA) in 2000 and has continued to update the specification suite as the industry evolves.

As a result, Release 1.3 enables notable improvements in scalability, with deeper visibility into switch-fabric configuration for better monitoring of traffic patterns and easier network maintenance. The updated specification also allows for more in-depth cable management, giving administrators greater access to cable diagnostics for shorter response times and improved messaging capabilities, thus improving overall network performance.

Bill Lee
