Archive

Posts Tagged ‘HPC’

FDR InfiniBand Continues Rapid Growth on TOP500 Year-Over-Year

November 22nd, 2013

The newest TOP500 list of the world’s most powerful supercomputers was released at SC13 this week, and showed the continued adoption of Fourteen Data Rate (FDR) InfiniBand – the fastest growing interconnect technology on the list! FDR, the latest generation of InfiniBand technology, now connects 80 systems on the TOP500, growing almost 2X year-over-year from 45 systems in November 2012. Overall, InfiniBand technology connects 207 systems, accounting for over 40 percent of the TOP500 list.
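
For readers who want to check the math, the quoted figures work out as follows (a minimal back-of-the-envelope sketch in C, using only the numbers cited above):

    /* top500_math.c - arithmetic behind the figures quoted above (illustrative only) */
    #include <stdio.h>

    int main(void) {
        int fdr_now = 80, fdr_prev = 45;       /* FDR-connected systems: Nov 2013 vs. Nov 2012 */
        int ib_total = 207, list_size = 500;   /* all InfiniBand-connected systems, Nov 2013   */

        printf("FDR year-over-year growth: %.2fx\n", (double)fdr_now / fdr_prev);     /* ~1.78x */
        printf("InfiniBand share of TOP500: %.1f%%\n", 100.0 * ib_total / list_size); /* ~41.4% */
        return 0;
    }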

InfiniBand technology overall connects 48 percent of the Petascale-capable systems on the list. Petascale-capable systems generally favor the InfiniBand interconnect due to its computing efficiency, low application latency and high speeds. Additional highlights from the November TOP500 list include:

  • InfiniBand is the most used interconnect in the TOP100, connecting 48 percent of the systems, and in the TOP200, connecting 48.5 percent of the systems.

  • InfiniBand-connected systems deliver 2X the performance of Ethernet systems, while the total performance supported by InfiniBand systems continues to grow.

  • With a peak efficiency of 97 percent and an average efficiency of 86 percent, InfiniBand continues to be the most efficient interconnect on the TOP500.

The TOP500 list continues to show that InfiniBand technology is the interconnect of choice for HPC and data centers wanting the highest performance, with FDR InfiniBand adoption as a large part of this solution. The graph below demonstrates this further:

TOP500 Results, November 2013

Image source: Mellanox Technologies

Delivering bandwidth up to 56Gb/s with application latencies less than one microsecond, InfiniBand enables the highest server efficiency and is ideal to carry multiple traffic types (clustering, communications, storage, management) over a single connection. As a mature and field-proven technology, InfiniBand is used in thousands of data centers, high-performance compute clusters and embedded environments that scale from two nodes up to clusters with thousands of nodes.

The TOP500 list is published twice per year and recognizes and ranks the world’s fastest supercomputers. The list was announced November 18 at the SC13 conference in Denver, Colorado.

Interested in learning more about the TOP500, or how InfiniBand performed? Check out the TOP500 website: www.top500.org.

Bill Lee, chair of the Marketing Working Group (MWG) at IBTA

Emulex Joins the InfiniBand Trade Association

October 30th, 2013

Emulex is proud to be a new member of the IBTA. The IBTA has a great history of furthering the InfiniBand and RDMA over Converged Ethernet (RoCE) specifications, developing an active solution ecosystem and building market momentum around technologies with strong value propositions. We are excited to be a voting member on the organization's Steering Committee and look forward to contributing to relevant technical and marketing working groups, as well as participating in IBTA-sponsored Plugfests and other interoperability activities.

Why the IBTA?

Through our experience building high-performance, large-scale, mission-critical networks, we understand the benefits of RDMA. Since its original implementation as part of InfiniBand back in 1999, RDMA has built a well-proven track record of delivering better application performance, reduced CPU overhead and increased bandwidth efficiency in demanding computing environments.

Due to the increased adoption of technologies such as cloud computing, big data analytics, virtualization and mobile computing, more and more commercial IT infrastructures are starting to run into the same challenges that supercomputing was forced to confront around performance bottlenecks, resource utilization and moving large data sets. In other words, data centers supporting the Fortune 2000, vertical applications for media and entertainment or life sciences, telecommunications and cloud service providers are starting to look a lot more like the data centers at research institutions or the systems on the TOP500. Thus, with the advent of RoCE, we see an opportunity to bring the benefits of RDMA that have been well proven in high performance computing (HPC) markets to mainstream commercial markets.

RoCE is a key enabling technology for converging data center infrastructure and enabling application-centric I/O across a broad spectrum of requirements. From supercomputing to shared drives, RDMA delivers broad-based benefits through fundamentally more efficient network communications. Emulex has a data center vision to connect, monitor and manage on a single unified fabric. We look forward to supporting the IBTA, contributing to the advancement of RoCE and helping to bring RDMA to mainstream markets.
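
One practical point worth noting: applications written to the RDMA verbs API are largely transport-agnostic, so the same code path serves both InfiniBand and RoCE adapters. The sketch below is a minimal, hypothetical example using the open-source libibverbs library, not Emulex-specific code; it assumes an RDMA-capable adapter and the libibverbs development package, and simply enumerates RDMA devices and reports whether each port runs over an InfiniBand or Ethernet (RoCE) link layer:

    /* rdma_probe.c - minimal libibverbs sketch; compile with: gcc rdma_probe.c -libverbs */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void) {
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);   /* enumerate RDMA devices */
        if (!devs || num == 0) {
            fprintf(stderr, "no RDMA-capable devices found\n");
            return 1;
        }
        for (int i = 0; i < num; i++) {
            struct ibv_context *ctx = ibv_open_device(devs[i]);
            if (!ctx)
                continue;
            struct ibv_port_attr port;
            if (ibv_query_port(ctx, 1, &port) == 0) {
                /* the same verbs calls drive both InfiniBand and RoCE transports */
                printf("%-16s link_layer=%s state=%d\n",
                       ibv_get_device_name(devs[i]),
                       port.link_layer == IBV_LINK_LAYER_ETHERNET ? "Ethernet (RoCE)" : "InfiniBand",
                       port.state);
            }
            ibv_close_device(ctx);
        }
        ibv_free_device_list(devs);
        return 0;
    }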

Jon Affeld, senior director, marketing alliances at Emulex

Observations from SC12

December 3rd, 2012

The week of Supercomputing went by quickly and resulted in many interesting discussions around supercomputing and its role in both HPC environments and enterprise data centers. Now that we’re back to work, we’d like to reflect back on the successful supercomputing event. The conference this year saw a huge diversity of attendees from various countries, with major participation from top universities, which seemed to be on the leading edge of Remote Direct Memory Access (RDMA) and InfiniBand deployments.

Overall, we saw InfiniBand and Open Fabrics technologies continue their strong presence at the conference. InfiniBand dominated the Top500 list and is still the #1 interconnect of choice for the world’s fastest supercomputers. The Top500 list also demonstrated that InfiniBand is leading the way to efficient computing, which not only benefits high performance computing, but enterprise data center environments as well.

We also engaged in several discussions around RDMA. Attendees, analysts in particular, were interested in new products using RDMA over Converged Ethernet (RoCE) and their availability, and were impressed that Microsoft Windows Server 2012 natively supports all three RDMA transports, including InfiniBand and RoCE. Another interesting development is InfiniBand customer Microsoft Windows Azure, whose increased efficiency placed it at #165 on the Top500 list.

IBTA & OFA Booth at SC12

IBTA's Electro-Mechanical Working Group Chair, Alan Benner, discussing the new InfiniBand specification with attendees at the IBTA & OFA SC12 booth

IBTA’s release of the new InfiniBand Architecture Specification 1.3 generated a lot of buzz among attendees, press and analysts. IBTA’s Electro-Mechanical Working Group Chair, Alan Benner, was one of our experts at the booth and drew a large crowd of people interested in the InfiniBand roadmap and his projections around the availability of the next specification, which is expected to include EDR and become available in draft form in April 2013.

SC12 provides a great opportunity for those in high performance computing to connect in person and engage in discussions around hot industry topics; this year the focus was on Software Defined Networking (SDN), OpenSM, and the pioneering efforts of both the IBTA and OFA. We enjoyed conversations with the exhibitors and attendees who visited our booth, and a special thank you goes to all of the RDMA experts who participated in our booth session: Bill Boas, Cray; Katharine Schmidtke, Finisar; Alan Benner, IBM; Todd Wilde, Mellanox; Rupert Dance, Software Forge; Kevin Moran, System Fabric Works; and Josh Simons, VMware.

Rupert Dance, Software Forge

IBTA & OFA Join Forces at SC12

November 7th, 2012

Attending SC12? Check out OFA’s Exascale and Big Data I/O panel discussion and stop by the IBTA/OFA booth to meet our industry experts

The IBTA is gearing up for the annual SC12 conference taking place November 10-16 at the Salt Palace Convention Center in Salt Lake City, Utah. We will be joining forces with the OpenFabrics Alliance (OFA) on a number of conference activities and will be exhibiting together at SC12 booth #3630.

IBTA members will participate in the OFA-moderated panel, Exascale and Big Data I/O, which we highly recommend attending if you’re at the conference.  The panel session, moderated by IBTA and OFA member Bill Boas, takes place Wednesday, November 14 at 1:30 p.m. Mountain Time and will discuss drivers for future I/O architectures.

Also be sure to stop by the IBTA and OFA booth #3630 to chat with industry experts regarding a wide range of industry topics, including:

  • Behind the IBTA integrators list
  • High speed optical connectivity
  • Building and validating OFA software
  • Achieving low latency with RDMA in virtualized cloud environments
  • UNH-IOL hardware testing and interoperability capabilities
  • Utilizing high-speed interconnects for HPC
  • Release 1.3 of IBA Vol2
  • Peering into a live OFS cluster
  • RoCE in Wide Area Networks
  • OpenFabrics for high speed SAN and NAS

Experts including Katharine Schmidtke, Finisar; Alan Benner, IBM; Todd Wilde, Mellanox; Rupert Dance, Software Forge; Bill Boas and Kevin Moran, System Fabric Works; and Josh Simons, VMware, will be in the booth to answer your questions and discuss topics currently affecting the HPC community.

Be sure to check the SC12 website to learn more about Supercomputing 2012, and stay tuned to the IBTA website and Twitter to follow IBTA’s plans and activities at SC12.

See you there!

HPC Advisory Council Showcases World’s First FDR 56Gb/s InfiniBand Demonstration at ISC’11

July 1st, 2011

The HPC Advisory Council, together with ISC'11, showcased the world's first demonstration of FDR 56Gb/s InfiniBand in Hamburg, Germany, June 20-22. The HPC Advisory Council is hosting and organizing new technology demonstrations at leading HPC conferences around the world to highlight new solutions that will influence future HPC systems in terms of performance, scalability and utilization.

The 56Gb/s InfiniBand demonstration connected participating exhibitors on the ISC’11 showroom floor as part of the HPC Advisory Council ISCnet network. The ISCnet network provided organizations with fast interconnect connectivity between their booths.

The FDR InfiniBand network included dedicated and distributed clusters, as well as a Lustre-based storage system. Multiple applications were demonstrated, including high-speed visualization applications using car models courtesy of Peugeot Citroën.

The installation of the fiber cables (we used 20 and 50 meter cables) was completed a few days before the show opened, and we placed the cables on the floor, protecting them with wooden bridges. The clusters, Lustre storage and applications were set up the day before, and everything ran perfectly.

You can see the network architecture of the ISCnet FDR InfiniBand demo below. We have combined both MPI traffic and storage traffic (Lustre) on the same fabric, utilizing the new bandwidth capabilities to provide a high performance, consolidated fabric for the high speed rendering and visualization application demonstration.
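
For a sense of how the MPI side of such a consolidated fabric is typically exercised, here is a minimal ping-pong bandwidth test in C (a generic, illustrative sketch, not the actual demo code; it assumes an MPI implementation built with InfiniBand support, such as Open MPI or MVAPICH):

    /* pingpong.c - minimal MPI bandwidth sketch; run with 2 ranks, e.g. mpirun -np 2 ./pingpong */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (size < 2) {
            if (rank == 0) fprintf(stderr, "run with at least 2 ranks\n");
            MPI_Finalize();
            return 1;
        }

        const int bytes = 4 * 1024 * 1024;   /* 4 MiB messages */
        const int iters = 100;
        char *buf = malloc(bytes);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(buf, bytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, bytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, bytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, bytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0) {   /* two messages (one round trip) per iteration */
            double gbytes = 2.0 * iters * (double)bytes / 1e9;
            printf("ping-pong bandwidth: %.2f GB/s\n", gbytes / (t1 - t0));
        }
        free(buf);
        MPI_Finalize();
        return 0;
    }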

Network architecture of the ISCnet FDR InfiniBand demo

The following HPC Advisory Council member organizations contributed to the FDR 56Gb/s InfiniBand demo, and I would like to personally thank each of them: AMD, Corning Cable Systems, Dell, Fujitsu, HP, MEGWARE, Mellanox Technologies, Microsoft, OFS, Scalable Graphics, Supermicro and Xyratex.

Regards,

Gilad Shainer

Member of the IBTA and chairman of the HPC Advisory Council

ISC’11 Highlights: ISCnet to Feature FDR InfiniBand

June 13th, 2011

ISC'11, taking place in Hamburg, Germany from June 19-23, will include major new product introductions and groundbreaking talks from users worldwide. We are happy to call your attention to the fact that this year's conference will feature the world's first large-scale demonstration of next-generation FDR InfiniBand technology.

With link speeds of 56Gb/s, FDR InfiniBand uses the latest version of the OpenFabrics Enterprise Distribution (OFED™) and delivers an increase of nearly 80% in data rate compared to previous InfiniBand generations.

Running on the ISC’11 network “ISCnet,” the multi-vendor, FDR 56Gb/s InfiniBand demo will provide exhibitors with fast interconnect connectivity between their booths on the show floor and enable them to demonstrate a wide variety of applications and experimental HPC applications, as well as new developments and products.

The demo will also continue to show the processing efficiency of RDMA and the microsecond latency of OFED, which reduces the cost of servers, increases productivity and improves customers' ROI. ISCnet will be the fastest open commodity network demonstration assembled to date.

If you are heading to the conference, be sure to visit the booths of IBTA and OFA members who are exhibiting, as well as the many users of InfiniBand and OFED. The last TOP500 list (published in November 2010) showed that nearly half of the most powerful computers in the world are using these technologies.

InfiniBand Trade Association Members Exhibiting

  • Bull - booth 410
  • Fujitsu Limited - booth 620
  • HP - booth 430
  • IBM - booth 231
  • Intel - booths 530+801
  • Lawrence Livermore National Laboratory - booth 143
  • LSI - booth 341
  • Leoni - booth 842
  • Mellanox - booth 331
  • Molex - booth 730 Co-Exhibitor of Stordis
  • NetApp - booth 743
  • Obsidian - booth 151
  • QLogic - booth 240
  • SGI - booth 330

OpenFabrics Alliance Members Exhibiting

  • AMD - booth 752
  • APPRO - booth 751 Co-Exhibitor of AMD
  • Chelsio - booth 702
  • Cray - booth 650
  • DataDirect Networks - booth 550
  • HP - booth 430
  • IBM - booth 231
  • Intel - booths 530+801
  • Lawrence Livermore National Laboratory - booth 143
  • LSI - booth 341
  • Mellanox - booth 331
  • Microsoft - booth 832
  • NetApp - booth 743
  • Obsidian - booth 151
  • QLogic - booth 240
  • SGI - booth 330

Also, don’t miss the HPC Advisory Council Workshop on June 19 at ISC’11 that includes talks on the following hot topics related to InfiniBand and OFED:

  • GPU Access and Acceleration
  • MPI Optimizations and Futures
  • Fat-Tree Routing
  • Improved Congestion Management
  • Shared Memory models
  • Lustre Release Update and Roadmap 

Go to http://www.hpcadvisorycouncil.com/events/2011/european_workshop/index.php to learn more and register.

For those who are InfiniBand, OFED and Lustre users, don’t miss the important announcement and breakfast on June 22 regarding the coming together of the worldwide Lustre vendor and user community. This announcement is focused on ensuring the continued development, upcoming releases and consolidated support for Lustre; details and location will be available on site at ISC’11. This is your opportunity to meet with representatives of OpenSFS, HPCFS and EOFS to learn how the whole community is working together.

Looking forward to seeing several of you at the show!

Brian Sparks
IBTA & OFA Marketing Working Groups Co-Chair
HPC Advisory Council Media Relations

InfiniBand on the Road to Exascale Computing

January 21st, 2011

(Note: This article appears with reprint permission of The Exascale Report™)

InfiniBand has been making remarkable progress in HPC, as evidenced by its growth in the Top500 rankings of the highest performing computers. In the November 2010 update to these rankings, InfiniBand’s use increased another 18 percent, to help power 43 percent of all listed systems, including 57 percent of all high-end “Petascale” systems.

The march toward higher and higher performance levels continues. Today, computation is a critical part of science, where computation complements observation, experiment and theory. The computational performance of high-end computers has been increasing by a factor of 1000X every 11 years.

InfiniBand has demonstrated that it plays an important role in the current Petascale level of computing driven by its bandwidth, low latency implementations and fabric efficiency. This article will explore how InfiniBand will continue to pace high-end computing as it moves towards the Exascale level of computing.

Figure 1 - The Golden Age of Cluster Computing

InfiniBand Today

Figure 1 illustrates how the high end of HPC crossed the 1 Terascale mark in 1997 (10^12 floating-point operations per second) and increased three orders of magnitude to the 1 Petascale mark in 2008 (10^15 floating-point operations per second). As you can see, the underlying system architectures changed dramatically during this time. The growth of the cluster computing model, based on commodity server processors, has come to dominate much of high-end HPC. Recently, this model has been augmented by the emergence of GPUs.
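
That pace, 1000X every 11 years, implies a compound growth rate of roughly 87 percent per year, or a doubling of peak performance about every 13 months; a quick illustrative calculation:

    /* growth_rate.c - implied annual growth behind "1000X every 11 years"; compile with -lm */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double annual   = pow(1000.0, 1.0 / 11.0);   /* ~1.87x per year (~87% annual growth) */
        double doubling = log(2.0) / log(annual);    /* ~1.1 years (~13 months) per doubling */

        printf("annual growth factor: %.2fx\n", annual);
        printf("doubling time: %.2f years\n", doubling);

        /* extrapolating forward: 1 PFLOPS (2008) x 1000 ~ 1 EFLOPS around 2019, which is
           consistent with the ~2018 estimate discussed later in this article */
        return 0;
    }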

Figure 2 - Emergence of InfiniBand in the Top500

Figure 2 shows how interconnects track with changes in the underlying system architectures. The appearance first of 1 GbE, followed by the growth of InfiniBand interconnects, has been a key enabler of the cluster computing model. The industry-standard InfiniBand and Ethernet interconnects have largely displaced earlier proprietary interconnects. InfiniBand interconnects continue to grow share relative to Ethernet, largely driven by performance factors such as low latency and high bandwidth, the ability to support high bisectional-bandwidth fabrics, and overall cost-effectiveness.

Getting to Exascale

What we know today is that Exascale computing will require enormously larger computer systems than are available today. What we don't know is what those computers will look like. We have been in the golden age of cluster computing for much of the past decade, and the model appears to scale well going forward. However, there is as yet no clear consensus on the system architecture for Exascale. What we can do is map the evolution of InfiniBand to the evolution of Exascale.

Given historical growth rates, the industry anticipates that Exascale computing will be reached around 2018. However, three orders of magnitude beyond where we are today represents too great a change to make in a single leap. In addition, the industry is continuing to assess what system structures will make up systems of that size.

Figure 3 - Steps from Petascale to Exascale

Figure 3 provides guidance as to the key capabilities of the interconnect as computer systems increase in power by each order of magnitude from current high-end systems with 1 PetaFLOPS performance, to 10 PF, 100 PF and finally 1000PF = 1 ExaFLOPS. Over time, computational nodes will provide increasing performance with advances in processor and system architecture. This performance increase must be matched by a corresponding increase in network bandwidth to each node. However, the increased performance per node also tends to hold down the increase in the total number of nodes required to reach a given level of system performance.

Today, 4x QDR InfiniBand (40 Gbps) is the interconnect of choice for many large-scale clusters. Current InfiniBand technology readily supports systems with performance on the order of 1 PetaFLOPS. Deployments on the order of 10,000 nodes have been achieved, and 4x QDR link bandwidths are offered by multiple vendors. InfiniBand interconnects are used in 57 percent of the current Petascale systems on the Top500 list.

Moving from 1 PetaFLOPS to 10 PetaFLOPS is well within the reach of the current InfiniBand roadmap. Reaching 35,000 nodes is within the currently defined InfiniBand address space. The required 12 GB/s links can be achieved either by 12x QDR or, more likely, by the 4x EDR data rates (104 Gbps) now being defined according to the InfiniBand industry bandwidth roadmap. Such data rates also assume PCIe Gen3 host connections, which are anticipated in the forthcoming processor generation.
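
The arithmetic behind those link rates is straightforward: effective data rate is lane count times per-lane signaling rate times encoding efficiency (8b/10b for QDR, 64b/66b from FDR onward). The sketch below runs that calculation for the publicly documented lane rates; the same formula extends to the HDR and NDR generations mentioned next:

    /* link_rates.c - effective InfiniBand link data rates (illustrative sketch) */
    #include <stdio.h>

    int main(void) {
        /* per-lane signaling rate (Gb/s) and encoding efficiency for each generation */
        struct { const char *gen; double lane_gbps; double encoding; } gens[] = {
            { "QDR", 10.0,      8.0 / 10.0 },   /* 8b/10b encoding  */
            { "FDR", 14.0625,  64.0 / 66.0 },   /* 64b/66b encoding */
            { "EDR", 25.78125, 64.0 / 66.0 },   /* 64b/66b encoding */
        };
        int widths[] = { 4, 12 };

        for (unsigned g = 0; g < sizeof gens / sizeof gens[0]; g++)
            for (unsigned w = 0; w < 2; w++) {
                double data_gbps = gens[g].lane_gbps * gens[g].encoding * widths[w];
                printf("%2dx %s: %6.1f Gb/s signaling, %6.1f Gb/s data (~%4.1f GB/s)\n",
                       widths[w], gens[g].gen, gens[g].lane_gbps * widths[w],
                       data_gbps, data_gbps / 8.0);
            }
        /* 12x QDR comes to ~12 GB/s and 4x EDR to ~12.5 GB/s, matching the target above */
        return 0;
    }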

The next order of magnitude increase in system performance, from 10 PetaFLOPS to 100 PetaFLOPS, will require additional evolution of the InfiniBand standards to permit hundreds of thousands of nodes to be addressed. The InfiniBand industry is already initiating discussions as to what evolved capabilities are needed for systems of such scale. As in the prior step up in performance, the required link bandwidths can be achieved by 12x EDR (which is currently being defined) or perhaps 4x HDR (which has been identified on the InfiniBand industry roadmap). Systems of such scale may also exploit topologies such as mesh/torus or hypercube, for which there are already large-scale InfiniBand deployments.

The remaining order of magnitude increase in system performance, from 100 PetaFLOPS to 1 ExaFLOPS, requires link bandwidths to increase once again. Either 12x HDR or 4x NDR links will need to be defined. It is also expected that optical technology will play a greater role in systems of such scale.

The Meaning of Exascale

Reaching Exascale computing levels involves much more than just the interconnect. Pending further developments in computer systems design and technology, such systems are expected to occupy many hundreds of racks and consume perhaps 20 MW of power. Just as many of the high-end systems today are purpose-built with unique packaging, power distribution, cooling and interconnect architectures, we should expect Exascale systems to be predominantly purpose-built. However, before we conclude that the golden age of cluster computing has ended, along with its reliance on effective industry-standard interconnects such as InfiniBand, let's look further at the data.

Figure 4 - Top500 Performance Trends

Figure 4 is the trends chart from Top500. At first glance, it shows the tremendous growth over the past two decades of high-end HPC, as well as projecting these trends to continue for the next decade. However, it also shows that the performance of the #1 ranked system is about two orders of magnitude greater than the #500 ranked system.

Figure 5 - Top500 below 1 PetaFLOPS (November 2010)

This is further illustrated in Figure 5, which shows performance vs. rank from the November 2010 Top500 list; the seven systems above 1 PetaFLOPS have been omitted so as not to stretch the vertical axis too much. We see that only the 72 highest-ranked systems come within an order of magnitude of 1 PetaFLOPS (1000 TeraFLOPS). This trend is expected to continue, with the implication that once the highest-end HPC systems reach the Exascale threshold, the majority of Top500 systems will be at most on the order of 100 PetaFLOPS, with the #500-ranked system on the order of 10 PetaFLOPS.

Although we often use the Top500 rankings as an indicator of high-end HPC, the vast majority of HPC deployments occur below the Top500.

InfiniBand Evolution

InfiniBand has been an extraordinarily effective interconnect for HPC, with demonstrated scaling up to the Petascale level. The InfiniBand architecture permits low-latency implementations and has a bandwidth roadmap matching the capabilities of host processor technology. InfiniBand's fabric architecture permits implementation and deployment of highly efficient fabrics, in a range of topologies, with congestion management and resiliency capabilities.

The InfiniBand community has previously demonstrated that the architecture can evolve to remain vibrant. The Technical Working Group is currently assessing architectural evolution to permit InfiniBand to continue to meet the needs of increasing system scale.

As we move towards an Exascale HPC environment with possibly purpose-built systems, the cluster computing model enabled by InfiniBand interconnects will remain a vital communications model capable of extending well into the Top500.

Lloyd Dickman
Technical Working Group, IBTA

(Note: This article appears with reprint permission of The Exascale Report™)


Last days of SC10

November 19th, 2010

I’ve heard there were more than 10,000 people in New Orleans this week for the SC10 conference and from what I saw on the show floor, in sessions and in the restaurants around the French Quarter, I believe it. The SCinet team has had several tiring yet productive days ensuring the network ran smoothly for the more than 342 exhibitors at the show.

One very popular demo was the real time 3D flight simulator (see photo below) that was displayed in multiple booths at the show. The flight simulator provided a virtual high-resolution rendering of Salt Lake City, Utah, from the air running over SCinet’s high speed, low latency InfiniBand/RDMA network.

Real time 3D flight simulator

This year, SCinet introduced the SCinet Research Sandbox. Sandbox participants were able to utilize the network infrastructure to demonstrate 100G networks for a wide variety of applications, including petascale computing, next-generation approaches to wide area file transfer, security analysis tools, and data-intensive computing.

This is the tenth Supercomputing show I've attended and I've made a few observations. Years ago, I used to see a lot of proprietary processors, interconnects and storage. Now we're seeing much more standardization around technologies such as InfiniBand. In addition, there's been a lot of interest this year around 100G connectivity and the need for faster data rates.

Several members of the SCinet team. Thank you to all of the volunteers who helped make SCinet a success this week!

The first couple of shows I attended were very scientific and academic in nature. Now as I walk the show floor, it’s exciting to see more commercial HPC applications for financial services, automotive/aviation, and oil & gas.

I had a great time in New Orleans, and I look forward to my next ten SC conferences. See you next year at SC11 in Seattle, WA!

Eric Dube

SCinet/InfiniBand Co-Chair


SCinet Update November 13 – Conference Begins!

November 13th, 2010

Today is Saturday, November 13, and sessions for SC10 have begun. We're in the home stretch to get SCinet installed. We've been working feverishly to get everything running before the start of the conference. In addition, the network demonstrations should all be live in time for the Exhibition Press Tour on Monday night from 6-7 pm.

I’ve included more photos to show you the network in progress. If you’re going to New Orleans and have any questions about SCinet, be sure to stop by our help desk.

All the SCinet DNOCs located throughout the show floor are now finished and ready to go.

The show floor is busy as ever and exhibitor booth construction is well underway.

SCinet's main NOC network equipment racks provide connectivity for network subscribers to the world's fastest network.

All the power distribution units needed to supply power to all the network equipment in the SCinet main NOC.

SCinet has more than 100 volunteers working behind the scenes to bring up the world’s fastest network. Planning began more than a year ago and all of our hard work is about to pay off as we connect SC10 exhibitors and attendees to leading research and commercial networks around the world, including the Department of Energy’s ESnet, Internet2, National LambdaRail and LONI (Louisiana Optical Network Initiative).

I will blog again as the show gets rolling and provide updates on our demos in action.

See you soon!

Eric Dube

SCinet/InfiniBand Co-Chair


SCinet Update November 11 – Two Days Until the Show!

November 11th, 2010

SCinet Network Operations Center (NOC) stage

For those heading to New Orleans for the SC10 conference, the weather this week is upper 70s and clear - although the SCinet team hasn’t had much of a chance to soak up the sun. We’re busy building the world’s fastest network - to be up and running this Sunday, November 14, for one week only. It’s going to be a busy couple of days… let me give you an update on progress to date.

The main SCinet Network Operations Center (NOC) stage is in the process of being built. I’ve included a photo of the convention center and our initial framing, followed by a picture after the power and aerial fiber cable drops have been installed.

Our SCinet team includes volunteers from educational institutions, high performance computing centers, network equipment vendors, U.S. national laboratories, research institutions, and research networks and telecommunication carriers that work together to design and deliver the SCinet infrastructure.

The picture below shows SCinet team members Mary Ellen Dube and Parks Fields receiving the aerial fiber cables that are being lowered from the catwalks scattered throughout the convention center floor.

Aerial fiber cables being lowered from catwalks

Below is a photo of Cary Whitney (my fellow SCinet/InfiniBand Co-Chair) and Parks Fields testing aerial InfiniBand active optical cables going between the distributed/remote NOCs.

Cary Whitney and Parks Fields

Aerial InfiniBand active optical cables

I’ve also included a picture of myself and team member DA Fye running a lift to install the aerial fiber cables going between the main NOC and distributed/remote NOCs throughout the show floor. Next to that is a photo of some of the many InfiniBand active optical cables going to the main SCinet NOC.

Running a lift with DA Fye

InfiniBand active optical cables going to the main SCinet NOC

This year’s SC10 exhibitors and attendees are anticipated to push SCinet’s capacity and capabilities to the extreme. I’ll keep updating this blog to show you how we’re preparing for the show and expected demands on the network.

Eric Dube

SCinet/InfiniBand Co-Chair
