
Posts Tagged ‘56Gb/s’

RoCE Benefits on Full Display at Ignite 2015

May 27th, 2015


On May 4-8, IT professionals and enterprise developers gathered in Chicago for the 2015 Microsoft Ignite conference. Attendees were given a first-hand glimpse of the future of a variety of Microsoft business solutions through a number of sessions, presentations and workshops.

Of particular note were two demonstrations of RDMA over Converged Ethernet (RoCE) technology and the resulting benefits for Windows Server 2016. In both demos, RoCE technology showed significant improvements over Ethernet implementations without RDMA in terms of throughput, latency and processor efficiency.

Below is a summary of each presentation featuring RoCE at Ignite 2015:

Platform Vision and Strategy (4 of 7): Storage Overview
This demonstration highlighted the extreme performance and scalability of Windows Server 2016 using RoCE-enabled servers populated with NVMe and SATA SSDs. It simulated application and user workloads on SMB3 servers equipped with Mellanox ConnectX-4 100 GbE RDMA-enabled Ethernet adapters, Micron DRAM, enterprise NVMe SSDs for performance and SATA SSDs for capacity.

During the presentation, running the workload over RoCE versus plain TCP/IP produced drastically different results. With RDMA enabled, the SMB3 server achieved about twice the throughput, half the latency and roughly 33 percent less CPU overhead than it attained over TCP/IP.
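
For readers curious about what “RDMA enabled” means at the software level, below is a minimal, illustrative C sketch against the open libibverbs API, the verbs interface that RoCE adapters expose on Linux. It is not code from the demo (SMB Direct performs the equivalent steps inside Windows), and the file name and buffer size are arbitrary; it simply opens an RDMA device, registers a buffer and creates a completion queue, the setup that lets the NIC move data directly to and from application memory without traversing the kernel TCP/IP stack.

    /* Minimal RDMA verbs setup sketch (illustrative only; error checks trimmed).
     * Build on Linux with: gcc rdma_sketch.c -libverbs */
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        /* A RoCE NIC shows up here just like an InfiniBand HCA;
         * the verbs API is transport-agnostic. */
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) {
            fprintf(stderr, "no RDMA-capable devices found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        /* Register a buffer so the NIC can read/write it directly;
         * this kernel bypass is where the CPU savings come from. */
        size_t len = 4096;
        void *buf = malloc(len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_WRITE);

        struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);

        printf("%s ready: lkey=0x%x rkey=0x%x\n",
               ibv_get_device_name(devs[0]), mr->lkey, mr->rkey);

        /* A real transfer would now create a queue pair, exchange
         * addresses and rkeys out of band, and post RDMA READ/WRITE
         * work requests; that is omitted here. */
        ibv_destroy_cq(cq);
        ibv_dereg_mr(mr);
        free(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }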

Check out the video to see the demonstration in action.

Enabling Private Cloud Storage Using Servers with Local Disks

Claus Joergensen, a principal program manager at Microsoft, demonstrated Windows Server 2016’s Storage Spaces Direct running over Mellanox ConnectX-3 56Gb/s RoCE adapters with Micron RAM and M500DC local SATA storage.

The goal of the demo was to highlight the value of running RoCE on a system in terms of performance, latency and processor utilization. With RoCE disabled, the system achieved a combined 680,000 4KB IOPS at 2ms latency. With RoCE enabled, it reached about 1.1 million 4KB IOPS and cut latency to 1ms. That works out to roughly a 60 percent increase in IOPS (1.1 million versus 680,000) at half the latency, all while using the same amount of CPU resources.

For additional information, watch a recording of the presentation (demonstration starts at 57:00).

For more videos from Ignite 2015, visit Ignite On Demand.

Bill Lee

InfiniBand at VMworld!

September 2nd, 2011

VMworld 2011 took place this week in sunny Las Vegas, and with over 20,000 attendees, the show has quickly grown into one of the largest enterprise IT events in the world. Virtualization continues to be one of the hottest topics in the industry, giving InfiniBand vendors a great opportunity to highlight the wide array of benefits that InfiniBand enables in virtualized environments. Several members of the IBTA community were spreading the InfiniBand message; here are a few of note.


On the networking side, Mellanox Technologies showed the latest generation of InfiniBand technology, FDR 56Gb/s. With FDR adapters, switches and cables available today, IT managers can deploy this next-generation technology in their data centers immediately and get instant performance improvements, whether that means leading vMotion performance, the ability to support more virtual machines per server at higher bandwidth per virtual machine, or lower capital and operating expenses from consolidating networking, management and storage I/O into a one-wire infrastructure.


Fusion-io, a flash-based storage manufacturer targeting heavy data acceleration needs in applications such as databases, virtualization, Memcached and VDI, also made a big splash at VMworld. Its booth featured an excellent demonstration of how a low-latency, high-speed InfiniBand network enabled Fusion-io to show 800 virtual desktops being accessed and displayed across 17 monitors. InfiniBand allowed them to stream over 2,000 bandwidth-intensive HD movies from just eight servers.


Pure Storage, a newcomer in the storage arena, announced its 40Gb/s InfiniBand-based enterprise storage array targeting applications such as databases and VDI. With InfiniBand, the company is able to cut latency by more than 8x while increasing performance by 10X.


Isilon was recently acquired by EMC, and in the EMC booth a rack of Isilon storage systems was on display, scaling out over 40Gb/s InfiniBand on the back end. These storage systems excel in VDI implementations and are well suited to customers implementing a cloud solution where performance, reliability and storage resiliency are vital.


Also exhibiting at VMworld was Xsigo Systems, which showed its latest Virtual I/O Director, now featuring 40Gb/s InfiniBand (the previous generation used 20Gb/s InfiniBand). With the upgraded bandwidth, Xsigo can now offer customers 12-30X acceleration of I/O-intensive tasks such as vMotion, queries and backups, all while providing dynamic bandwidth allocation per VM or job. In addition, by consolidating the network onto a single wire, Xsigo can provide customers with 85 percent lower hardware cost per virtual machine.

The items mentioned above are just a small slice of the excitement at VMworld, and I’m glad to have seen so many InfiniBand solutions on display. For more information on InfiniBand in the enterprise, watch for an upcoming webinar series being produced by the IBTA.

Brian Sparks

IBTA Marketing Working Group Co-Chair

HPC Advisory Council Showcases World’s First FDR 56Gb/s InfiniBand Demonstration at ISC’11

July 1st, 2011

The HPC Advisory Council, together with ISC’11, showcased the world’s first demonstration of FDR 56Gb/s InfiniBand in Hamburg, Germany, June 20-22. The HPC Advisory Council is hosting and organizing new technology demonstrations at leading HPC conferences around the world to highlight new solutions that will influence future HPC systems in terms of performance, scalability and utilization.

The 56Gb/s InfiniBand demonstration connected participating exhibitors on the ISC’11 showroom floor as part of the HPC Advisory Council ISCnet network. The ISCnet network provided organizations with fast interconnect connectivity between their booths.

The FDR InfiniBand network included dedicated and distributed clusters, as well as a Lustre-based storage system. Multiple applications were demonstrated, including high-speed visualization applications using car models courtesy of Peugeot Citroën.

The installation of the fiber cables (we used 20- and 50-meter cables) was completed a few days before the show opened; we laid the cables on the floor and protected them with wooden bridges. The cluster, Lustre and application setup was completed the day before, and everything ran perfectly.

You can see the network architecture of the ISCnet FDR InfiniBand demo below. We combined both MPI traffic and storage traffic (Lustre) on the same fabric, using the new bandwidth capabilities to provide a high-performance, consolidated fabric for the high-speed rendering and visualization application demonstration.

[Figure: ISCnet FDR 56Gb/s InfiniBand demo network architecture]
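
To give a sense of the MPI traffic that shared the fabric with the Lustre storage traffic, here is a minimal, illustrative C ping-pong bandwidth test; it is not the demo code, and the message size and iteration count are arbitrary choices. Launched with two ranks on two fabric-attached nodes (e.g. mpirun -np 2), an InfiniBand-aware MPI library will typically carry these sends and receives as RDMA transfers over the same fabric.

    /* Minimal MPI ping-pong bandwidth sketch (illustrative only).
     * Build with: mpicc pingpong.c -o pingpong */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define MSG_BYTES (4 * 1024 * 1024)   /* 4 MB messages (arbitrary) */
    #define ITERS     100

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size < 2) {
            if (rank == 0)
                fprintf(stderr, "run with at least 2 ranks\n");
            MPI_Finalize();
            return 1;
        }

        char *buf = malloc(MSG_BYTES);

        /* Ranks 0 and 1 bounce a buffer back and forth and report the
         * sustained bandwidth; any other ranks simply exit. */
        if (rank == 0) {
            double t0 = MPI_Wtime();
            for (int i = 0; i < ITERS; i++) {
                MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            }
            double seconds = MPI_Wtime() - t0;
            double gbytes = 2.0 * ITERS * MSG_BYTES / 1e9;
            printf("ping-pong bandwidth: %.2f GB/s\n", gbytes / seconds);
        } else if (rank == 1) {
            for (int i = 0; i < ITERS; i++) {
                MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }

        free(buf);
        MPI_Finalize();
        return 0;
    }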

The following HPC Advisory Council member organizations contributed to the FDR 56Gb/s InfiniBand demo, and I would like to personally thank each of them: AMD, Corning Cable Systems, Dell, Fujitsu, HP, MEGWARE, Mellanox Technologies, Microsoft, OFS, Scalable Graphics, Supermicro and Xyratex.

Regards,

Gilad Shainer

Member of the IBTA and chairman of the HPC Advisory Council