Archive

Posts Tagged ‘RDMA’

Incorporate Networking into Hyperconverged Integrated Systems to Gain a Market Advantage

August 22nd, 2016

The concept of hyperconverged integrated systems (HCIS) emerged as data centers considered new ways to increase resource utilization by reducing infrastructure inefficiencies and complexities. HCIS is primarily a software-defined platform that integrates compute, storage and networking resources. The HCIS market is expected to grow 79 percent to reach almost $2 billion this year, driving it into mainstream use in the next five years, according to Gartner.

Since this market is growing so rapidly, Gartner released an exciting new report, “Use Networking to Differentiate Your Hyperconverged System.” In the report, Gartner advises HCIS vendors to focus on networking to gain a competitive market advantage, integrating use-case-specific guidelines and case studies into their go-to-market efforts.

According to the report, more than 10 percent of HCIS deployments will suffer from avoidable network-induced performance problems by 2018, up from less than one percent today. HCIS vendors can help address expected challenges and add value for buyers by considering high performance networking protocols, such as InfiniBand and RDMA over Converged Ethernet (RoCE), during the system design stage.

The growing scale of HCIS clusters creates challenges such as expanding workload coverage and diminishing competitive product differentiation. This will force HCIS vendors to alter their product lines and marketing efforts to help their offerings stand out from the rest. Integrating the right networking capabilities will become even more important as a growing number of providers look to differentiate their products. The Gartner report states that by 2018, 60 percent of providers will start to offer integration of networking services, together with compute and storage services, inside of their HCIS products.

Until recently, HCIS vendors have often treated networking simply as a “dumb” interconnect. However, when clusters grow beyond a handful of nodes and higher workloads are introduced, issues begin to arise. This Gartner report stresses that treating the network as “fat dumb pipes” will make it harder to troubleshoot application performance problems from an end-to-end perspective. The report also determines that optimizing the entire communications stack is key to driving latency down and it names InfiniBand and RoCE as important protocols to implement for input/output (I/O)-intensive workloads.

As competition in the HCIS market continues to grow, vendors must change their perception of networking and begin to focus on how to integrate it in order to keep a competitive edge. To learn more about how HCIS professionals can achieve this market advantage, download the full report from the InfiniBand Reports page.

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and is used herein with permission. All rights reserved.

Bill Lee

Dive into RDMA’s Impact on NVMe Devices at the 2016 Flash Memory Summit

August 5th, 2016


Next week, storage experts will gather at the 2016 Flash Memory Summit (FMS) in Santa Clara, CA, to discuss the current state of flash memory applications and how these technologies are enabling new designs for many products in the consumer and enterprise markets. This year’s program will include three days packed with sessions, tutorials and forums on a variety of flash storage trends, including new architectures, systems and standards.

NVMe technology, and its impact on enterprise flash applications, is among the major topics that will be discussed at the show. The growing industry demand to unlock flash storage’s full potential through high performance networking has led the NVMe community to develop a new standard for fabrics. NVMe over Fabrics (NVMe/F) allows flash storage devices to communicate over RDMA fabrics, such as InfiniBand and RDMA over Converged Ethernet (RoCE), thereby enabling all-flash arrays to overcome existing performance bottlenecks.

Attending FMS 2016?

If you’re attending FMS 2016 and are interested in learning more about the importance of RDMA fabrics for NVMe/F solutions, I recommend the following two educational sessions:

NVMe over Fabrics Panel – Which Transport Is Best?
Tuesday, August 9, 2016 (9:45-10:50 a.m.)

Representatives from the IBTA will join a panel to discuss the value of RDMA interconnects for the NVMe/F standard. Attendees can expect to receive an overview of each RDMA fabric and the benefits they bring to specific applications and workloads. Additionally, the session will cover the promise that NVMe/F has for unleashing the potential performance of NVMe drives via mainstream high performance interconnects.

Beer, Pizza and Chat with the Experts
Tuesday, August 9, 2016 (7-8:30 p.m.)

This informal event encourages attendees to “sit and talk shop” with experts about a diverse set of storage and networking topics. As IBTA’s Marketing Work Group Co-Chair, I will be hosting a table focused on RDMA interconnects. I’d love to meet with you to answer questions about InfiniBand and RoCE and discuss the advantages they provide the flash storage industry.

Additionally, there will be various IBTA member companies exhibiting on the show floor, so stop by their booths to learn about the new InfiniBand and RoCE solutions:

• HPE (#600)
• Keysight Technologies (#810)
• Mellanox Technologies (#138)
• Tektronix (#641)
• University of New Hampshire InterOperability Lab (#719)

For more information on the FMS 2016 program and exhibitors, visit the event website.

Bill Lee

InfiniBand Experts Discuss Latest Trends and Opportunities at OFA Workshop 2016

May 24th, 2016


Each year, OpenFabrics Software (OFS) users and developers gather at the OpenFabrics Alliance (OFA) Workshop to discuss and tackle the most recent challenges facing the high performance storage and networking industry. OFS is open-source software that enables maximum application efficiency and performance, agnostic of the underlying RDMA fabric, including InfiniBand and RDMA over Converged Ethernet (RoCE). The work of the OFA supports mission-critical applications in High Performance Computing (HPC) and enterprise data centers, but is also quickly becoming significant in cloud and hyper-converged markets.

In our previous blog, we showcased an IBTA-sponsored session that provided an update on InfiniBand virtualization support. In addition, there were a handful of other sessions highlighting the latest InfiniBand developments, case studies and tutorials. Below is a collection of notable InfiniBand-focused sessions that we recommend you check out:

InfiniBand as Core Network in an Exchange Application
Ralph Barth, Deutsche Börse AG; Joachim Stenzel, Deutsche Börse AG

Group Deutsche Börse is a global financial services organization covering the entire value chain from trading, market data, clearing and settlement to custody. While reliability has been a fundamental requirement for exchanges since the introduction of electronic trading systems in the 1990s, for roughly the last decade low and predictable latency of the entire system has also become a major design objective. Both were important architectural considerations when Deutsche Börse began developing T7, an entirely new derivatives trading system, for its US options market (ISE) in 2008. A combination of InfiniBand and IBM® WebSphere® MQ Low Latency Messaging (WLLM) was determined to be the best fit at the time. The same system has since been adopted for EUREX, one of the largest derivatives exchanges in the world, and is now being extended to cover cash markets. The session presents the design of the application and its interdependence with the combination of InfiniBand and WLLM, and reflects on practical experiences with InfiniBand over the last couple of years.

Download: Slides / Video


Experiences in Writing OFED Software for a New InfiniBand HCA
Knut Omang, Oracle

This talk presents the experiences, challenges and opportunities of the lead developer in initiating and developing OFED stack support (kernel and user-space driver) for Oracle’s InfiniBand HCA, which is integrated in the new SPARC Sonoma SoC CPU. In addition to the physical HCA function, SR-IOV is supported, with vHCAs visible to the interconnect as connected to virtual switches. Individual driver instances for the vHCAs maintain page tables set up for the HCA’s MMU, covering memory accessible from the HCA. The HCA is designed to scale to a large number of QPs. For minimal overhead and maximal flexibility, administrative operations such as memory invalidations also use an asynchronous work request model similar to that of normal InfiniBand traffic.

Download: Slides / Video

Fabrics and Topologies for Directly Attached Parallel File Systems and Storage Networks
Susan Coulter, Los Alamos National Laboratory

InfiniBand fabrics supporting directly attached storage systems are designed to handle unique traffic patterns, and they contain different stress points than other fabrics. These SAN fabrics are often expected to be extensible in order to allow for expansion of existing file systems and the addition of new file systems. The character and lifetime of these fabrics are distinct from those of internal compute fabrics or multi-purpose fabrics. This presentation covers the approach to InfiniBand SAN design and deployment as experienced by the High Performance Computing effort at Los Alamos National Laboratory.

Download: Slides / Video


InfiniBand Topologies and Routing in the Real World
Susan Coulter, Los Alamos National Laboratory; Jesse Martinez, Los Alamos National Laboratory

As with all sophisticated and multifaceted technologies, high-speed networks and topologies can be unwieldy to design, deploy and maintain in a production environment and/or at larger scales, and surprising in their behavior. This presentation illustrates that fact via a case study of an actual fabric deployed at Los Alamos National Laboratory.

Download: Slides / Video


InfiniBand Routers Premier
Mark Bloch, Mellanox Technologies; Liran Liss, Mellanox Technologies

InfiniBand has come a long way in providing efficient large-scale high performance connectivity. InfiniBand subnets have been shown to scale to tens of thousands of nodes, both in raw capacity and in management. As demand for computing capacity increases, future cluster sizes might exceed the number of addressable endpoints in a single IB subnet (around 40K nodes). To accommodate such clusters, a routing layer with the same latency and bandwidth characteristics as switches is required.

In addition, as data center deployments evolve, it becomes beneficial to consolidate resources across multiple clusters. For example, several compute clusters might require access to a common storage infrastructure. Routers can enable such connectivity while reducing management complexity and isolating intra-subnet faults. The bandwidth capacity to storage may be provisioned as needed.

This session reviews InfiniBand routing operation and how it can be used in the future. Specifically, we will cover topology considerations, subnet management issues, name resolution and addressing, and potential implications for the host software stack and applications.

Download: Slides

Bill Lee


Changes to the Modern Data Center – Recap from SDC 15

October 19th, 2015

The InfiniBand Trade Association recently had the opportunity to speak on RDMA technology at the 2015 Storage Developer Conference. For the first time, SDC15 introduced Pre-conference Primer Sessions covering topics such as Persistent Memory, Cloud and Interop, and Data Center Infrastructure. Intel’s David Cohen, System Architect, and Brian Hausauer, Hardware Architect, spoke on behalf of the IBTA in a pre-conference session, “Nonvolatile Memory (NVM), four trends in the modern data center and implications for the design of next generation distributed storage systems.”

Below is a high level overview of their presentation:

The modern data center continues to transform as applications and uses change and develop. Most recently, we have seen users abandon traditional storage architectures for the cloud. Cloud storage is founded on data-center-wide connectivity and scale-out storage, which delivers significant increases in capacity and performance, enabling application deployment anytime, anywhere. Additionally, job scheduling and system balance capabilities are boosting overall efficiency and optimizing a variety of essential data center functions.

Trends in the modern data center are appearing as cloud architecture takes hold. First, the performance of network bandwidth and storage media is growing rapidly. Furthermore, operating system vendors (OSV) are optimizing the code path of their network and storage stacks. All of these speed and efficiency gains to network bandwidth and storage are occurring while single processor/core performance remains relatively flat.

Data comes in a variety of flavors: some is accessed frequently to serve application I/O requests, while other data is rarely retrieved. To enable higher performance and resource efficiency, cloud storage uses a tiering model that places data according to how often it is accessed. Data that is regularly accessed is stored on expensive, high performance media (solid-state drives), while data that is rarely or never retrieved is relegated to less expensive media with the lowest $/GB (rotational drives). This model follows a hot, warm and cold data pattern and allows faster access to the data used most.
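
As a rough illustration of such a policy (our sketch, not code from the presentation), the following classifies objects into hot, warm and cold tiers by observed access frequency. The object names and thresholds are invented for the example; a real system would tune them empirically.

    #include <stdio.h>

    /* Illustrative tiers for the hot/warm/cold model described above. */
    enum tier { TIER_HOT, TIER_WARM, TIER_COLD };

    /* Hypothetical per-object access statistics. */
    struct object_stats {
        const char *name;
        unsigned accesses_per_day;   /* observed I/O frequency */
    };

    /* Map access frequency to a tier. Thresholds are invented for
     * illustration only. */
    static enum tier classify(const struct object_stats *s)
    {
        if (s->accesses_per_day >= 1000) return TIER_HOT;   /* SSD */
        if (s->accesses_per_day >= 10)   return TIER_WARM;
        return TIER_COLD;                                   /* HDD */
    }

    int main(void)
    {
        struct object_stats objs[] = {
            { "session-cache", 50000 },
            { "monthly-report", 40 },
            { "2009-archive", 0 },
        };
        const char *label[] = { "hot (SSD)", "warm", "cold (HDD)" };
        for (unsigned i = 0; i < sizeof objs / sizeof objs[0]; i++)
            printf("%-15s -> %s\n", objs[i].name, label[classify(&objs[i])]);
        return 0;
    }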

The growth of high performance storage media is driving the need for innovation in the network, primarily addressing application latency. This is where Remote Direct Memory Access (RDMA) comes into play. RDMA is an advanced, reliable transport protocol that enhances the efficiency of workload processing. Essentially, it increases data center application performance by offloading the movement of data from the CPU. This lowers overhead and allows the CPU to focus its processing power on running applications, which in turn reduces latency.
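
To make the offload concrete, here is a minimal sketch against libibverbs, the standard user-space RDMA API used by both InfiniBand and RoCE: the application registers a buffer once so the NIC can access it directly, and a transfer is then expressed as a work request that the NIC executes without copying data through the CPU. Queue pair setup and the exchange of the remote address and key with the peer are omitted, so those values appear as placeholders.

    /* Build with: gcc rdma_sketch.c -libverbs */
    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        /* Open the first RDMA-capable device (InfiniBand HCA or RoCE NIC). */
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) { fprintf(stderr, "no RDMA device\n"); return 1; }
        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        /* Register a buffer so the NIC can DMA to and from it directly;
         * after this, moving its contents requires no CPU copies. */
        char *buf = malloc(4096);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, 4096,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_WRITE);

        /* An RDMA write is described by a work request. The remote
         * address and rkey would normally be learned from the peer
         * during connection setup; zeros are placeholders here. */
        struct ibv_sge sge = { .addr = (uintptr_t)buf, .length = 4096,
                               .lkey = mr->lkey };
        struct ibv_send_wr wr;
        memset(&wr, 0, sizeof wr);
        wr.opcode = IBV_WR_RDMA_WRITE;
        wr.sg_list = &sge;
        wr.num_sge = 1;
        wr.send_flags = IBV_SEND_SIGNALED;
        wr.wr.rdma.remote_addr = 0;  /* placeholder: from peer */
        wr.wr.rdma.rkey = 0;         /* placeholder: from peer */
        /* ibv_post_send(qp, &wr, &bad_wr);  -- needs the omitted QP */

        ibv_dereg_mr(mr);
        free(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }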

Demand for cloud storage is increasing, and with it the need for RDMA and high performance storage networking. With this in mind, the InfiniBand Trade Association is continuing its work developing the RDMA architecture for InfiniBand and Ethernet (via RDMA over Converged Ethernet, or RoCE) topologies.

Bill Lee

IBTA Launches the RoCE Initiative: Industry Ecosystem to Drive Adoption of RDMA over Converged Ethernet

June 23rd, 2015


At IBTA, we are pleased to announce the launch of the RoCE Initiative, a new effort to highlight the many benefits of RDMA over Converged Ethernet (RoCE) and to facilitate the technology’s adoption in enterprise data centers. With the rise of server virtualization and big data analytics, data center architects are demanding innovative ways to improve overall network performance and to accelerate applications without breaking the bank in the process.

Remote Direct Memory Access (RDMA) is well known in the InfiniBand community as a proven technology that boosts data center efficiency and performance by allowing the transport of data from storage to server with less CPU overhead. RDMA technology achieves faster speeds and lower latency by offloading data movement from the CPU, resulting in more efficient execution of applications and data transfers.

Before RoCE, the advantages of RDMA were only available over InfiniBand fabrics. This left system engineers who leverage Ethernet infrastructure with only the most expensive options for increasing system performance (i.e., adding more servers or buying faster CPUs). Now, data center architects can upgrade their application performance while leveraging existing infrastructure. There is already tremendous ecosystem support for RoCE; it is supported by server and storage OEMs, adapter and switch vendors, and all major operating systems.

Through a new online resource, the RoCE Initiative will:

  • Enable CIOs, enterprise data center architects and solutions engineers to learn about improved application performance and data center productivity through training webinars, whitepapers and educational programs
  • Encourage the adoption and development of RoCE applications with case studies and solution briefs
  • Continue the development of specifications, benchmarking performance improvements and technical resources for current/future RoCE adopters

For additional information about the RoCE Initiative, check out www.RoCEInitiative.org or read the full announcement here.

Mike Jochimsen, co-chair of the Marketing Working Group (MWG) at IBTA


RoCE Benefits on Full Display at Ignite 2015

May 27th, 2015


On May 4-8, IT professionals and enterprise developers gathered in Chicago for the 2015 Microsoft Ignite conference. Attendees were given a first-hand glimpse at the future of a variety of Microsoft business solutions through a number of sessions, presentations and workshops.

Of particular note were two demonstrations of RDMA over Converged Ethernet (RoCE) technology and the resulting benefits for Windows Server 2016. In both demos, RoCE technology showed significant improvements over Ethernet implementations without RDMA in terms of throughput, latency and processor efficiency.

Below is a summary of each presentation featuring RoCE at Ignite 2015:

Platform Vision and Strategy (4 of 7): Storage Overview
This demonstration highlighted the extreme performance and scalability of Windows Server 2016 through RoCE-enabled servers populated with NVMe and SATA SSDs. It simulated application and user workloads using SMB3 servers with Mellanox ConnectX-4 100GbE RDMA-enabled Ethernet adapters, Micron DRAM, enterprise NVMe SSDs for performance and SATA SSDs for capacity.

During the presentation, RoCE and TCP/IP showed drastically different performance. With RDMA enabled, the SMB3 server achieved about twice the throughput, half the latency and around 33 percent less CPU overhead than TCP/IP.

Check out the video to see the demonstration in action.

Enabling Private Cloud Storage Using Servers with Local Disks

Claus Joergensen, a principal program manager at Microsoft, demonstrated Windows Server 2016’s Storage Spaces Direct using Mellanox ConnectX-3 56Gb/s RoCE adapters with Micron RAM and M500DC local SATA storage.

The goal of the demo was to highlight the value of running RoCE on a system in terms of performance, latency and processor utilization. The system achieved a combined 680,000 4KB IOPS at 2ms latency with RoCE disabled. With RoCE enabled, the system increased to about 1.1 million 4KB IOPS and reduced latency to 1ms, roughly a 60 percent increase in performance while utilizing the same amount of CPU resources.

For additional information, watch a recording of the presentation (demonstration starts at 57:00).

For more videos from Ignite 2015, visit Ignite On Demand.

Bill Lee

Accelerating Data Movement with RoCE

April 29th, 2015

On April 14-16, Ethernet designers and experts from around the globe gathered at the Ethernet Technology Summit 2015 to discuss developments happening within the industry as it pertained to the popular networking standard. IBTA’s Diego Crupnicoff, co-chair of the Technical Working Group, shared his expertise with attendees via a presentation on “Accelerating Data Movement with RDMA over Converged Ethernet (RoCE).” The session focused on the ever-growing complexity, bandwidth requirements and services of data centers and how RoCE can address the challenges that emerge from new enterprise data center initiatives.

Here is a brief synopsis of the points that Diego covered in his well-attended presentation:

People are living in an ever-increasing digital world. In the last decade, there’s been an explosion of connected devices that are running many applications and creating massive amounts of data in the process that must be accessible anytime, anywhere.


Over time, the data center has emerged as the workhorse of the networking industry, with the increased pace of the ‘information generation’ spawning many new data center initiatives, such as cloud, virtualization and hyper-converged infrastructure. Expectations of enhanced access to ever-larger sets of data are straining enterprise data networks and raising a variety of new challenges for the industry, including the following needs:

• Scale and flexibility
• Overlays and shared storage
• Reduced latency
• Rapid server-to-server I/O
• Big storage and large clusters
• New scale-out storage traffic

The Transmission Control Protocol (TCP) has had difficulty keeping up with some traffic stemming from newer, more demanding applications. In these cases, packet processing over TCP saturates CPU resources, resulting in networks with low bandwidth, high latency and limited scalability. The industry was in need of a capability that would bypass the CPU altogether to enable faster, more efficient movement of data between servers.

The advent of Remote Direct Memory Access (RDMA) did just that, utilizing hardware offloads to move data faster with less CPU overhead. By offloading the I/O from the CPU, users of RDMA experience lower latency while freeing up the CPU to focus its resources on applications that process data as opposed to moving it.

Recently, RDMA expanded into the enterprise market and is now being widely adopted over Ethernet networks as RDMA over Converged Ethernet, or RoCE. The RoCE standard acts as an efficient, lightweight transport layered directly over Ethernet, bypassing the TCP/IP stack. It offers the lowest latency in the Ethernet industry, enabling faster application completion, better server utilization and higher scalability. Given these advantages, RoCE has become the most widely deployed Ethernet RDMA standard, with millions of RoCE-capable ports on the market today.
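
To show what bypassing the TCP/IP stack looks like in practice, below is a minimal client sketch using librdmacm, the RDMA connection manager. The connect flow deliberately mirrors sockets, but once connected, transfers are executed by the NIC’s RDMA engine rather than the kernel network stack. The address, port and queue sizes are placeholders, not values from the presentation.

    /* Build with: gcc roce_client.c -lrdmacm -libverbs */
    #include <stdio.h>
    #include <string.h>
    #include <rdma/rdma_cma.h>

    int main(void)
    {
        /* Resolve the peer much as with sockets, but ask for an
         * RDMA-capable route; over RoCE this maps the IP address to
         * the Ethernet NIC's GID. Address and port are placeholders. */
        struct rdma_addrinfo hints, *res;
        memset(&hints, 0, sizeof hints);
        hints.ai_port_space = RDMA_PS_TCP;   /* reliable connection */
        if (rdma_getaddrinfo("192.0.2.10", "7471", &hints, &res)) {
            perror("rdma_getaddrinfo");
            return 1;
        }

        /* Create an endpoint with a reliable-connected queue pair. */
        struct ibv_qp_init_attr attr;
        memset(&attr, 0, sizeof attr);
        attr.cap.max_send_wr = attr.cap.max_recv_wr = 4;
        attr.cap.max_send_sge = attr.cap.max_recv_sge = 1;
        attr.qp_type = IBV_QPT_RC;
        struct rdma_cm_id *id;
        if (rdma_create_ep(&id, res, NULL, &attr)) {
            perror("rdma_create_ep");
            return 1;
        }

        /* Connect; from here on, work requests posted to the queue
         * pair are executed by the NIC, not the TCP/IP stack. */
        struct rdma_conn_param cp;
        memset(&cp, 0, sizeof cp);
        cp.retry_count = 7;
        if (rdma_connect(id, &cp)) {
            perror("rdma_connect");
            return 1;
        }

        rdma_disconnect(id);
        rdma_destroy_ep(id);
        rdma_freeaddrinfo(res);
        return 0;
    }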

For additional details on the benefits of RDMA for Ethernet networks, including RoCE network considerations and use cases, view the presentation in its entirety here.

Mike Jochimsen, co-chair of the Marketing Working Group (MWG) at IBTA

Storage with Intense Network Growth and the Rise of RoCE

February 4th, 2015

On January 4 and 5, the Entertainment Storage Alliances held the 14th annual Storage Visions conference in Las Vegas, highlighting advances in the storage technologies used in consumer electronics and the media and entertainment industries. The theme of Storage Visions 2015 was Storage with Intense Network Growth (SWING), which was fitting given the explosive growth in both data storage and networking.


While the primary focus of Storage Visions is storage technologies, this year’s theme acknowledged the correlation between storage growth and network growth. Therefore, among the many sessions on increased capacity and higher performance, one storage networking session was specifically designed to educate the audience on advances in network technology: “Speed is the Need: High Performance Data Center Fabrics to Speed Networking.”

More pressure is being put on the data center network from a variety of sources, including continued growth in enterprise application transactions, new sources of data (aka big data), the growth in streaming video and the emergence of 4K video. According to Cisco, global IP data center traffic will grow 23% annually to reach 8.7 zettabytes by 2018. Three quarters of this traffic will stay within the data center, either between servers (East-West) or between servers and storage (North-South). Given this, data centers need to factor in technologies designed to optimize data center traffic.

Global Data Center IP Traffic Forecast, Cisco Global Cloud Index, 2013-2018

Global Data Center Traffic By Destination, Cisco Global Cloud Index, 2013-2018

Storage administrators have always placed emphasis on two important metrics, I/O operations per second (IOPS) and throughput, to measure the ability of the network to serve storage devices. Lately, a third metric, latency, has become equally important. When balanced with IOPS and throughput, low latency technologies can bring dramatic benefits to storage.

At this year’s Storage Visions conference, I was asked to sit on a panel discussing the benefits of Remote Direct Memory Access (RDMA) for storage traffic. I specifically called out the benefits of RDMA over Converged Ethernet (RoCE). Joining me on the panel were representatives from Mellanox, speaking about InfiniBand, and Chelsio, speaking about iWARP. The storage-focused audience showed real interest in the topic and asked a number of insightful questions about RDMA benefits for their storage implementations.

RoCE in particular brings specific benefits to data center storage environments. As the purest implementation of the InfiniBand specification in the Ethernet environment, it has the ability to provide the lowest latency for storage. In addition, it capitalizes on the converged Ethernet standards defined in the IEEE 802.1 standards for Ethernet, including Congestion Management, Enhanced Transmission Selection and Priority Flow Control, which collectively allow for lossless transmission, bandwidth allocation and quality of service. With the introduction of RoCEv2 in September 2014, the technology moved from supporting a (flat) Layer 2 network to being a routable protocol supporting Layer 3 networks, allowing for use in distributed storage environments.
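
Concretely, RoCEv2 achieves this routability by carrying the InfiniBand transport packet inside an ordinary UDP/IP datagram (UDP destination port 4791), so any IP router can forward it. The sketch below renders the 12-byte InfiniBand Base Transport Header that follows the UDP header as a C struct; the grouping is simplified, with reserved bits folded into adjacent fields.

    #include <stdint.h>
    #include <stdio.h>

    /* RoCEv2 on the wire: Ethernet / IP / UDP (dst port 4791) /
     * InfiniBand Base Transport Header / payload / ICRC.
     * Simplified field grouping of the 12-byte BTH: */
    struct roce_v2_bth {
        uint8_t  opcode;   /* operation, e.g. RDMA WRITE */
        uint8_t  flags;    /* solicited event, migration, pad count, version */
        uint16_t pkey;     /* partition key */
        uint32_t dest_qp;  /* 8 reserved bits + 24-bit destination queue pair */
        uint32_t psn;      /* ack-request bit + packet sequence number */
    };

    int main(void)
    {
        printf("BTH size: %zu bytes\n", sizeof(struct roce_v2_bth));
        return 0;
    }
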
Ultimately, what customers need for optimal Ethernet-based storage is technology that balances IOPS, throughput and latency while allowing for flexible storage placement in their network. RoCE addresses all of these needs and is becoming widely available in popular server and storage offerings.

Mike Jochimsen, co-chair of the Marketing Working Group (MWG) at IBTA


IBTA Announces New RoCE Specification

September 16th, 2014

Big news! The IBTA today announced the updated specification for Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE), RoCEv2. This is a huge development and one that will expand RoCE’s adoption.

The benefits of the first RoCE spec released in 2010 were many:

  • Low latency and CPU overhead (eliminated the multiple data copies inside the server)
  • High network utilization
  • Support for message passing, sockets and storage protocols
  • Supported by all major operating systems

RoCEv2 extends the original RoCE specification by enabling routing across Layer 3 networks, which provides better isolation and enables hyperscale data center deployments. This addresses the needs of today’s evolving data centers, which require more efficient data movement over a variety of network topologies.

With a number of vendors including Dell and Zadara Storage recently adopting RoCE, this new specification comes at the perfect time. Major cloud providers and Web 2.0 companies are also adopting RoCE in order to combat the challenges of running compute intensive applications and processing massive amounts of data in hyperscale networking environments.

Representatives from IBTA member companies including Emulex, IBM, Mellanox, Microsoft and Software Forge, Inc. participated in the development of this new standard, which will help enterprises to more widely adopt RoCE and improve infrastructure efficiency.

For more information, read the full IBTA announcement.

Bill Lee

Recent Trends toward RoCE Adoption

September 2nd, 2014

The RoCE specification was released back in April 2010 to address the need for efficient RDMA in end-to-end Ethernet networks. With the increase in enterprise data traffic and the emergence of hyperscale infrastructure, the need for efficient networking continues to grow. Here are the latest examples of RoCE in the field.

RoCE Accelerates Microsoft Windows Azure

Albert Greenberg, an architect with IBTA member Microsoft, described in his keynote address to the Open Networking Summit this year how Microsoft is using RoCE in its storage offering. RoCE over 40GbE enabled line-rate performance with zero CPU usage, allowing the same work to be done with fewer CPUs than would otherwise be needed.

RoCE Used in Dell Fluid Cache for SAN pool

Dell’s Fluid Cache for SAN accelerates applications requiring high data I/O. The servers in this pool are connected to each other by RoCE NICs, using the technology to move data quickly between nodes without the bottleneck of the comparatively slow operating system kernel.

RoCE Can Do What Ethernet Alone Cannot

IBTA member Applied Micro is continuing to roll out its X-Gene family of 64-bit processors. These processors support RoCE, giving 10GbE some of the low-latency capabilities that were once available only with InfiniBand. The decrease in latency for Ethernet means transaction latencies also decrease, better positioning the X-Gene family to make a difference in modern workloads.

RoCE Provides Exceptional Performance for Storage Solutions

Zadara Storage announced a new high performance storage-as-a-service (STaaS) offering for private clouds that uses an Ethernet transport for exceptional performance at reduced cost. Developed in collaboration with IBTA member Mellanox Technologies, the solution boosts application performance using iSCSI Extensions for RDMA (iSER) over RoCE, delivering substantially improved latency and throughput, and with them cost savings and converged enterprise storage for private clouds of all sizes.

RoCE Helps Data Centers Account for New Technology for the Cloud

With the huge acceleration of data centers to the cloud, low latency is becoming increasingly important to avoid bottlenecks and incorporate new technologies seamlessly. RoCE, says SYS-CON Media’s Barbara Porter, is one way to plan for the barrage of new technology that’s coming our way, and to reduce latency overall in the cloud.

Bill Lee