Archive

Posts Tagged ‘IBTA’

New InfiniBand Specification Updates Expand Interoperability, Flexibility, and Virtualization Support

November 29th, 2016


Performance demands continue to evolve in High Performance Computing (HPC) and enterprise cloud networks, increasing the need for enhancements to InfiniBand capabilities, support features, and overall interoperability. To address this need, the InfiniBand Trade Association (IBTA) is announcing the public availability of two new InfiniBand Architecture Specification updates: the Volume 2 Release 1.3.1 and a Virtualization Annex to Volume 1 Release 1.3.

The Volume 2 Release 1.3.1 adds flexible performance enhancements to InfiniBand-based networks. With the addition of Forward Error Correction (FEC) upgrades, IT managers can experience both minimal error rates and low latency performance. The new release also enables the InfiniBand subnet manager to optimize signal integrity while maintaining the lowest power possible from the port. Additionally, updates to QSFP28 and CXP28 memory mapping support improved InfiniBand cable management.
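The latency benefit of FEC comes from correcting errors at the receiver instead of waiting for a retransmission. As a toy illustration of that principle only (not the block-code scheme the InfiniBand specification actually defines), a Hamming(7,4) code can repair any single flipped bit in a 7-bit codeword:

```python
# Toy Hamming(7,4) encoder/decoder: corrects any single-bit error
# without retransmission. Illustrative only -- real InfiniBand FEC
# uses stronger codes over much larger blocks.

def encode(nibble):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = nibble
    p1 = d1 ^ d2 ^ d4          # parity over codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # parity over codeword positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # parity over codeword positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(word):
    """Correct a single flipped bit (if any) and return the 4 data bits."""
    w = list(word)
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]
    s3 = w[3] ^ w[4] ^ w[5] ^ w[6]
    syndrome = s1 + 2 * s2 + 4 * s3    # 1-based index of the bad bit, 0 if clean
    if syndrome:
        w[syndrome - 1] ^= 1           # flip the corrupted bit back
    return [w[2], w[4], w[5], w[6]]

data = [1, 0, 1, 1]
corrupted = encode(data)
corrupted[3] ^= 1                      # simulate a bit error on the wire
assert decode(corrupted) == data       # receiver recovers the data anyway
```

The receiver pays a fixed decode cost per block rather than a round-trip delay per error, which is why FEC can keep both error rates and latency low.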

This new Volume 2 release also improves upon interoperability and test methodologies for the latest InfiniBand data rates, namely EDR 100 Gb/s and FDR 56 Gb/s. These enhancements are achieved through updated EDR electrical requirements, amended testing methodology for EDR Limiting Active Cables, and FDR interoperability and test specification corrections.

With an aim toward supporting the ever-increasing deployment of virtualized solutions in HPC and enterprise cloud networks, the IBTA also published a new Virtualization Annex to Volume 1 Release 1.3. The Annex extends the InfiniBand specification to address multiple virtual machines connected to a single physical port, which allows subnet managers to recognize each logical endpoint and reduces the burden on the subnet managers as networks leverage virtualization for greater system scalability.
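Conceptually, the Annex lets one physical port present multiple logical endpoints, each individually addressable by the subnet manager. The sketch below models that idea in miniature; the class and field names are illustrative inventions, not data structures from the Annex itself:

```python
# Toy model of the idea behind the Virtualization Annex: one physical
# port exposes several virtual ports, each with its own logical ID,
# so a subnet manager can address every VM endpoint individually.
# All names here are illustrative, not taken from the specification.

class PhysicalPort:
    def __init__(self, port_guid):
        self.port_guid = port_guid
        self.vports = {}               # logical ID -> owning VM

    def add_vport(self, logical_id, vm_name):
        self.vports[logical_id] = vm_name

class SubnetManager:
    def __init__(self):
        self.endpoints = {}            # logical ID -> (port GUID, VM)

    def discover(self, port):
        # One sweep of the physical port is enough to learn about
        # every virtual endpoint behind it, rather than managing
        # each VM's connectivity separately.
        for lid, vm in port.vports.items():
            self.endpoints[lid] = (port.port_guid, vm)

port = PhysicalPort(port_guid=0x1A2B)
port.add_vport(0x10, "vm-web")
port.add_vport(0x11, "vm-db")

sm = SubnetManager()
sm.discover(port)
assert sm.endpoints[0x11] == (0x1A2B, "vm-db")
```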

The InfiniBand Architecture Specification Volume 2 Release 1.3.1 and Volume 1 Release 1.3 are available for public download here.

Please contact us at press@infinibandta.org with questions about InfiniBand’s latest updates.

Bill Lee

New RoCE Interoperability List Features Higher Test Speeds, Additional Vendors

August 24th, 2016


It’s that time again! Having finalized the RDMA over Converged Ethernet (RoCE) test results from Plugfest 29, the IBTA is pleased to announce the availability of the new RoCE Interoperability List. Designed to support data center managers, CIOs and other IT decision makers with their planned RoCE deployments for enterprise and high performance computing, the latest edition features a growing number of cable and equipment vendors and Ethernet test speeds.

In April 2016, Plugfest 29 saw nine member companies submit RoCE-capable RNICs, switches, and QSFP, QSFP28 and SFP28 cables for interoperability testing. This is an encouraging sign for the RoCE ecosystem as more and more vendors begin to offer solutions that are proven to work seamlessly with each other, regardless of brand. Furthermore, the new list now features 50 and 100 GbE test scenarios, which complements the IBTA’s existing 10, 25 and 40 GbE interoperability testing. This expansion gives RoCE deployers confidence in knowing that as they integrate faster Ethernet speeds into their systems, their applications can still leverage the advantages of tested RDMA technology.

The RoCE Interoperability List is created twice a year following bi-annual IBTA-sponsored Plugfests, which take place at the University of New Hampshire InterOperability Lab (UNH-IOL). The IBTA Integrators’ Program, made up of both the InfiniBand Integrators’ List and the RoCE Interoperability List, is founded on rigorous testing procedures that establish compliance and real-world interoperability.

The InfiniBand Integrators’ List, which features InfiniBand Host Channel Adapters (HCAs), switches, SCSI Remote Protocol (SRP) targets and cables, will be available soon via the IBTA Integrators’ List page. Additionally, mark your calendars for Plugfest 30 – October 17-29, 2016 at UNH-IOL. Registration information and event details will be available on the IBTA Plugfest page in the coming month.

Rupert Dance, IBTA CIWG


Dive into RDMA’s Impact on NVMe Devices at the 2016 Flash Memory Summit

August 5th, 2016


Next week, storage experts will gather at the 2016 Flash Memory Summit (FMS) in Santa Clara, CA, to discuss the current state of flash memory applications and how these technologies are enabling new designs for many products in the consumer and enterprise markets. This year’s program will include three days packed with sessions, tutorials and forums on a variety of flash storage trends, including new architectures, systems and standards.

NVMe technology, and its impact on enterprise flash applications, is among the major topics that will be discussed at the show. The growing industry demand to unlock flash storage’s full potential by leveraging high performance networking has led the NVMe community to develop a new standard for fabrics. NVMe over Fabrics (NVMe/F) allows flash storage devices to communicate over RDMA fabrics, such as InfiniBand and RDMA over Converged Ethernet (RoCE), thereby enabling all-flash arrays to overcome existing performance bottlenecks.

Attending FMS 2016?

If you’re attending FMS 2016 and are interested in learning more about the importance of RDMA fabrics for NVMe/F solutions, I recommend the following two educational sessions:

NVMe over Fabrics Panel – Which Transport Is Best?
Tuesday, August 9, 2016 (9:45-10:50 a.m.)

Representatives from the IBTA will join a panel to discuss the value of RDMA interconnects for the NVMe/F standard. Attendees can expect to receive an overview of each RDMA fabric and the benefits they bring to specific applications and workloads. Additionally, the session will cover the promise that NVMe/F has for unleashing the potential performance of NVMe drives via mainstream high performance interconnects.

Beer, Pizza and Chat with the Experts
Tuesday, August 9, 2016 (7-8:30 p.m.)

This informal event encourages attendees to “sit and talk shop” with experts about a diverse set of storage and networking topics. As IBTA’s Marketing Work Group Co-Chair, I will be hosting a table focused on RDMA interconnects. I’d love to meet with you to answer questions about InfiniBand and RoCE and discuss the advantages they provide the flash storage industry.

Additionally, there will be various IBTA member companies exhibiting on the show floor, so stop by their booths to learn about the new InfiniBand and RoCE solutions:

· HPE (#600)

· Keysight Technologies (#810)

· Mellanox Technologies (#138)

· Tektronix (#641)

· University of New Hampshire InterOperability Lab (#719)

For more information on the FMS 2016 program and exhibitors, visit the event website.

Bill Lee

Life in the Fast Lane: InfiniBand Continues to Reign as HPC Interconnect of Choice

July 8th, 2016


TOP500.org recently released its latest account of the world’s most powerful supercomputers and, as with previous reports, InfiniBand leads the way. The 47th edition of the bi-annual list shows that 205 of the fastest commercially available systems are accelerated by InfiniBand and OpenFabrics Software (OFS).

The InfiniBand fabric, with the OFS open source software, is the High Performance Computing (HPC) interconnect of choice because it delivers a distinctive combination of superior performance, efficiency, scalability and low latency. InfiniBand is the only open-standard I/O that provides the capability required to handle supercomputing’s high demand for CPU cycles without time wasted on I/O transactions. With today’s supercomputers pushing nearly 100 petaflops on the LINPACK benchmark, the need for efficient, low latency performance is higher than ever.

High Marks for InfiniBand and OFS

  • InfiniBand and OFS systems outperformed competing technologies in overall efficiency, scoring an 85 percent list average for compute efficiency – with one system even reaching an incredible 99.8 percent.
  • The technologies enable 70 percent of the HPC system segment. This segment includes academic, research and government fields.
  • For supercomputers capable of Petascale performance, the number of InfiniBand and OFS systems grew from 33 to 45.
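The compute efficiency figure cited above is simply measured LINPACK performance (Rmax) divided by theoretical peak performance (Rpeak). The sketch below shows the arithmetic with made-up system figures, not entries from the actual TOP500 list:

```python
# TOP500 compute efficiency = Rmax / Rpeak, expressed as a percentage.
# The systems below are hypothetical examples for illustration only.

def compute_efficiency(rmax_tflops, rpeak_tflops):
    """Return LINPACK efficiency as a percentage of theoretical peak."""
    return rmax_tflops / rpeak_tflops * 100.0

systems = {
    "cluster-a": (5000.0, 5880.0),   # hypothetical Rmax, Rpeak in TFlop/s
    "cluster-b": (8100.0, 9000.0),
}

for name, (rmax, rpeak) in systems.items():
    print(f"{name}: {compute_efficiency(rmax, rpeak):.1f}% efficient")
```

A low-latency interconnect raises this ratio because less CPU time is lost waiting on communication between nodes.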

InfiniBand’s ability to carry multiple traffic types over a single connection makes it ideal for clustering, communications, storage and management. As a result, the interconnect technology is used in thousands of data centers, HPC clusters, storage systems and embedded applications that scale from two nodes to a single cluster of tens of thousands of nodes. Supercomputers powered by OFS reach their highest performance capacity through the speed and efficiency delivered by Remote Direct Memory Access (RDMA). In turn, OFS enables RDMA fabrics, such as InfiniBand, to run applications that require extreme speeds, Petascale-level scalability and utility-class reliability.

Check out the full list at www.top500.org.

Bill Lee

Plugfest 28 Results Highlight Expanding InfiniBand EDR 100 Gb/s & RoCE Ecosystems

March 21st, 2016

We are excited to announce the availability of our latest InfiniBand Integrators’ List and RoCE Interoperability List. The two lists make up the backbone of our Integrators’ Program and are designed to support data center managers, CIOs and other IT decision makers with their planned InfiniBand and RoCE deployments for enterprise and high performance computing systems. To keep data up to date and as useful as possible, both documents are refreshed twice a year following our bi-annual plugfests, which are held at the University of New Hampshire InterOperability Lab (UNH-IOL).

Having recently finalized the results from Plugfest 28, we can report a significant increase in InfiniBand EDR 100 Gb/s submissions compared to the last Integrators’ List. This trend demonstrates a continued industry demand for InfiniBand-based systems that are capable of higher bandwidth and faster performance. The updated list features a variety of InfiniBand devices, including Host Channel Adapters (HCAs), Switches, SCSI Remote Protocol (SRP) targets and cables (QDR, FDR and EDR).

Additionally, we held our second RoCE interoperability event at Plugfest 28, testing 10, 25 and 40 GbE RNICs, switches and SFP+, SFP28, QSFP and QSFP28 cables. Although a full spec compliance program is still under development for RoCE, the existing interoperability testing offers solid insight into the ecosystem’s robustness and viability. We plan to continue building a comprehensive RoCE compliance program at Plugfest 29, where testing will cover more than 16 different 10, 25, 40, 50 and 100 GbE RNICs and switches, along with the various cables that support these devices. This testing of RoCE products, which use Ethernet physical and link layers, will be the most comprehensive interoperability testing ever performed.

As always, we’d like to thank the leading vendors that contributed test equipment to IBTA Plugfest 28. These invaluable members include Anritsu, Keysight Technologies, Matlab, Molex, Tektronix, Total Phase and Wilder Technologies.

The next opportunity for members to test InfiniBand and RoCE products is Plugfest 29, scheduled for April 4-15, 2016 at UNH-IOL. Event details and registration information are available here.

Rupert Dance, IBTA CIWG


Plugfest #29 Registration Now Open, Next Integrators’ List on the Horizon

February 24th, 2016


Registration is now open for the 29th InfiniBand Compliance and Interoperability Plugfest! Join us April 4-15, 2016, at the University of New Hampshire (UNH) Interoperability Lab (IOL) where InfiniBand and RoCE vendors will gather to measure their products for compliance with the InfiniBand architecture specification. Also, participants will have the opportunity to test their products’ interoperability with other InfiniBand and RoCE solutions.

IBTA’s rigorous, world-class testing program ensures the dependability of the InfiniBand specification, which in turn furthers industry adoption and user confidence. Additionally, we leverage these bi-annual plugfests to help guide future improvements to the standard.

Vendor devices and cables that successfully pass all required compliance tests and interoperability procedures will be listed on either the IBTA Integrators’ List or the RoCE Interoperability List and granted use of the IBTA Integrators’ List Logo. Data center managers, CIOs and other IT professionals frequently reference these lists to help plan deployment of InfiniBand or RoCE systems, while many OEMs use them as a gateway in the procurement process.

We are close to finalizing the results from Plugfest 28, so stay tuned for an update on the availability of the latest Integrators’ List and RoCE Interoperability List.

For questions or additional information related to IBTA’s plugfests, contact ibta_plugfest@soft-forge.com.

Rupert Dance, IBTA CIWG


SC15 Preview: Exascale or Bust! - On the Road to ExaFLOPS with IBTA

November 17th, 2015


This year’s Supercomputing Conference is shaping up to be one of IBTA’s most comprehensive to date. Whether listening to industry experts debate future system tune ups or cruising the show floor learning from members one-on-one, event attendees are bound to leave revved up about how interconnect advancements will alleviate data management road blocks.

OK. OK. Enough with the car metaphors. Read on, though, to see what IBTA’s got in store for you (including some enticing giveaways!):


Birds of a Feather Panel Session, Nov. 17 at 12:15 p.m. (Room 18AB)

“The Challenge of a Billion Billion Calculations Per Second: InfiniBand Roadmap Shows the Future of the High Performance Standard Interconnect for Exascale Programs”

It’s no secret. The race to exascale is on; the exponentially increasing amounts of data bogging down systems demand such speed. The critical issues now? What technological changes will be required to support this new High Performance Computing (HPC) vision, and is the industry ready to solve them? I’ll be joining Brandon Hoff to lead the following esteemed panelists through a discussion on interconnect capabilities and roadmaps that will deliver a billion billion calculations per second:

  • Ola Tørudbakken, Oracle
  • Gilad Shainer, Mellanox
  • Pavan Balaji, Argonne National Laboratory
  • Bob Ciotti, NASA


IBTA Update, Nov. 18 at 5 p.m. (Booth 613)

Standards-based, multi-vendor solutions are transforming HPC. In the Mellanox booth, I’ll be presenting the latest developments in InfiniBand technology and shedding light on what’s to come, proving why this interconnect standard powers some of the world’s fastest supercomputers.


IBTA Roadmap Game, Nov. 16-19 (Exhibitor Hall)

Interested in timing just how fast IBTA-driven data moves? Track it with a slick Pebble Steel Smartwatch offered as the IBTA Roadmap Game grand prize! To enter:

  • Pick up a Game Card at the Birds of a Feather session or participating member booths
    • Finisar Corporation (#2018)
    • Hewlett-Packard (#603)
    • Lenovo (#1509)
    • Mellanox (#613)
    • Samtec (#1943)
  • Visit experts from our five game-participating members
  • Learn how their companies are revolutionizing HPC technology with IBTA technologies
  • Get your Game Card stamped
  • Submit a completed Roadmap Game Card to an official drop-off location (noted on the card) to receive a handy IBTA convertible flashlight/lantern and a chance to win the watch

The above three activities are only the tip of the iceberg… or, maybe I should have written the white stripe on a ‘69 Chevy Camaro? Many IBTA members are geared up for SC15 with products, solutions and guidance guaranteed to help you achieve your computing goals. Be sure to stop by for an introduction:

  • Bull SAS (#2131)
  • Cisco (#588)
  • Cray, Inc. (#1833)
  • Finisar Corporation (#2018)
  • Fujitsu Limited (#1827)
  • Hewlett-Packard (#603)
  • Hitachi (#1227)
  • IBM (#522)
  • Intel (#1333, #1533)
  • Lenovo (#1509)
  • Mellanox (#613)
  • Microsoft (#1319)
  • Molex (#268)
  • NetApp (#1537)
  • Oracle (#1327)
  • Samtec (#1943)

Last, but not least…if you can’t make it this year, but are interested in learning about any of the topics above, email us at press@infinibandta.org or follow us on twitter for updates @InfiniBandTrade.

Looking forward to seeing you there.

Bill Lee


EDR Hits Primetime! Newly Published IBTA Integrators’ List Highlights Growth of EDR

August 20th, 2015

The highly anticipated IBTA April 2015 Combined Cable and Device Integrators’ List is now available for download. The list highlights the results of the IBTA Plugfest 27 held at the University of New Hampshire’s Interoperability Lab earlier this year. The updated list consists of newly verified products that are compliant to the InfiniBand specification as well as details on solution interoperability.

Of particular note was the rise of EDR submissions. At IBTA Plugfest 27, eight companies provided 32 EDR cables for testing, up from three companies and 12 EDR cables at IBTA Plugfest 26. The increase in EDR cable solutions indicates that the technology is beginning to hit its stride. At Plugfest 28 we anticipate even more EDR solutions.

The IBTA is known in the industry for its rigorous testing procedures and subsequent Integrators’ List. The Integrators’ List provides IT professionals with peace of mind when purchasing new components to incorporate into new and existing infrastructure. To ensure the most reliable results, the IBTA uses industry-leading test equipment from Anritsu, Keysight, Molex, Tektronix, Total Phase and Wilder Technologies. We appreciate their commitment to our compliance program; we couldn’t do it without them.

The IBTA hosts its Plugfest twice a year to give members a chance to test new configurations or form factors. Although many technical associations require substantial attendance fees for testing events, the IBTA covers the bulk of Plugfest costs through membership dues.

The companies participating in Plugfest 27 included 3M Company, Advanced Photonics, Inc., Amphenol, FCI, Finisar, Fujikura, Ltd., Fujitsu Component Limited, Lorom America, Luxshare-ICT, Mellanox Technologies, Molex Incorporated, SAE Magnetics, Samtec, Shanghai Net Miles Fiber Technology Co. Ltd, Siemon, Sumitomo, and Volex.

We’ve already begun planning for IBTA Plugfest 28, which will be held October 12-23, 2015. For questions about Plugfest, contact ibta_plugfest@soft-forge.com or visit the Plugfest page for additional information.

Rupert Dance, IBTA CIWG


IBTA Members to Exhibit at ISC High Performance 2015

July 10th, 2015


The ISC High Performance 2015 conference gets underway this weekend in Frankfurt, Germany, where experts in the high performance computing field will gather to discuss the latest developments and trends driving the industry. Event organizers are expecting over 2,500 attendees at this year’s show, which will feature speakers, presentations, BoF sessions, tutorials and workshops on a variety of topics.

IBTA members will be on hand exhibiting their latest InfiniBand-based HPC solutions. Multiple EDR 100Gb/s InfiniBand products and demonstrations can be seen across the exhibit hall at ISC High Performance at the following member company booths:

  • Applied Micro (Booth #1431)
  • Bull (Booth #1230)
  • HP (Booth #732)
  • IBM (Booth #928)
  • Lenovo (Booth #1020)
  • Mellanox (Booth #905)
  • SGI (Booth #910)

Be sure to stop by each of our member booths and ask about their InfiniBand offerings! For additional details on ISC High Performance 2015 keynotes and sessions, visit its program overview page.

Bill Lee


InfiniBand Volume 1, Release 1.3 – The Industry Sounds Off

May 14th, 2015


On March 10, 2015, IBTA announced the availability of Release 1.3 of Volume 1 of the InfiniBand Architecture Specification, and it’s creating a lot of buzz in the industry. IBTA members recognized that as compute clusters and data centers grew larger and more complex, the network equipment architecture would have difficulty keeping pace with the need for more processing power. With that in mind, the new release included improvements to scalability and management for both high performance computing and enterprise data centers.

Here’s a snapshot of what industry experts and media have said about the new specification:

“Release 1.3 of the Volume 1 InfiniBand Architecture Specification provides several improvements, including deeper visibility into switch hierarchy, improved diagnostics allowing for faster response times to connectivity problems, enhanced network statistics, and added counters for Enhanced Data Rate (EDR) to improve network management. These features will allow network administrators to more easily install, maintain, and optimize very large InfiniBand clusters.” - Kurt Yamamoto, Tom’s IT PRO

“It’s worth keeping up with [InfiniBand], as it clearly shows where the broader networking market is capable of going… Maybe geeky stuff, but it allows [InfiniBand] to keep up with “exascales” of data and lead the way large scale-out computer networking gets done. This is particularly important as the 1000 node clusters of today grow towards the 10,000 node clusters of tomorrow.” - Mike Matchett, Taneja Group, Inc.

“Indeed, a rising tide lifts all boats, and the InfiniBand community does not intend to get caught in the shallows of the Big Data surge. The InfiniBand Trade Association recently issued Release 1.3 of Volume I of the format’s reference architecture, designed to incorporate increased scalability, efficiency, availability and other functions that are becoming central to modern data infrastructure.” - Arthur Cole, Enterprise Networking Planet

“The InfiniBand Trade Association (IBTA) hopes to ward off the risk of an Ethernet invasion in the ranks of HPC users with a renewed focus on manageability and visibility. Such features have just appeared in release 1.3 of the Volume 1 standard. The IBTA’s Bill Lee told The Register that as HPC clusters grow, ‘you want to be able to see every level of switch interconnect, so you can identify choke-points and work around them.’” - Richard Chirgwin, The Register

To read more industry coverage of the new release, visit the InfiniBand in the News page.

For additional information about the InfiniBand specification, check out the InfiniBand specification FAQ or access the InfiniBand specification here.

Bill Lee