Paving the way for Terabit Ethernet

Despite advances in Wi-Fi technology and the introduction of new Wi-Fi standards, Ethernet remains the preferred technology when companies need to move large amounts of data quickly, especially in data centers. Although the technology behind Ethernet is now over forty years old, new protocols have been developed over the years that allow ever more data to be sent at ever higher speeds. To find out more about the latest technologies, protocols and advancements, and the future of Gigabit Ethernet, and perhaps even Terabit Ethernet one day soon, TechRadar Pro chatted with Tim Klein, CEO of storage connectivity company ATTO.

Ethernet was first introduced more than forty years ago. How has the technology evolved since then, and where does it fit in today's data center?

Now over four decades old, Ethernet technology has seen major improvements, but there is also a lot that looks just like it did when it was first introduced. Originally developed so scientists could share small chunks of data at ten megabits per second (Mbps), we now see giant data centers sharing massive unstructured data sets over Ethernet networks, and a roadmap that will reach Terabit Ethernet in just a few years. The explosive growth of data, driven by new formats such as digital imaging, created enormous demand, and those early implementations of shared storage over Ethernet could not meet performance needs or handle congestion with deterministic latency. As a result, protocols like Fibre Channel were developed specifically for storage. Over the years, innovations such as smart offloads and RDMA have been introduced that allow Ethernet to meet the demands of unstructured data and overcome the bottlenecks that can arise when transferring large data sets. The latest high-speed Ethernet standards, such as 10/25/40/50/100GbE, are now the backbone of the modern data center.


Today's applications demand ever-increasing performance. What are the challenges of configuring these faster protocols, and can software help?

Tuning is essential these days because of the demand for higher performance. Each system, whether it is a client or a server, must be tuned to the requirements of each specific workflow, and the sheer number of file-sharing protocols and workflow requirements can be overwhelming. In the past, you might have accepted that half your bandwidth would be lost to overhead, with collisions and packet loss slowing transfers to a crawl. Today there are multiple methods for optimizing performance and tuning Ethernet adapters for very intensive workloads. Hardware drivers now come with built-in algorithms that improve efficiency, and TCP offload engines reduce network stack overhead. Large Receive Offload (LRO) and TCP Segmentation Offload (TSO) can also be implemented in hardware and software to facilitate the transfer of large volumes of unstructured data. Adding buffers, such as a continuous receive queue, speeds up packet delivery, increases fairness, and improves performance. Newer technologies such as RDMA allow direct memory access that bypasses the operating system's network stack and virtually eliminates overhead.
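As a concrete illustration of the kind of tuning described above, here is a minimal sketch (not specific to ATTO adapters) of how offloads such as TSO and LRO can be inspected and toggled on a Linux host using the standard ethtool utility. The interface name is a placeholder, and actual offload support depends on the NIC and its driver.

```python
# Minimal tuning sketch: assumes a Linux host with ethtool installed and
# root privileges; "eth0" is a placeholder interface name, and which
# offloads are supported depends on the adapter and driver.
import subprocess

INTERFACE = "eth0"  # hypothetical interface name


def show_offloads(iface: str) -> str:
    """Return the current offload settings (TSO, LRO, etc.) for an interface."""
    result = subprocess.run(
        ["ethtool", "-k", iface], capture_output=True, text=True, check=True
    )
    return result.stdout


def enable_offloads(iface: str) -> None:
    """Turn on TCP Segmentation Offload and Large Receive Offload."""
    subprocess.run(["ethtool", "-K", iface, "tso", "on", "lro", "on"], check=True)


if __name__ == "__main__":
    print(show_offloads(INTERFACE))
    enable_offloads(INTERFACE)
    print(show_offloads(INTERFACE))
```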

What drives the adoption of 10/25/50/100GbE interfaces?

Demand for larger, more efficient storage solutions and enthusiasm for new Ethernet technologies such as RDMA and NVMe-over-Fabrics are driving the adoption of high-speed Ethernet in the modern data center. Ten Gigabit Ethernet (10GbE) is now the dominant interconnect for server-class adapters, and 40GbE was quickly introduced to push the boundaries by bundling four lanes of 10GbE traffic. This eventually evolved into the 25/50/100GbE standards, which use 25-gigabit lanes. Networks now employ a mix of all the 10/25/40/50/100GbE speeds, with 50GbE at the core and 100GbE at the edge. The ability to mix and match speeds, design paths that give each link the bandwidth it needs, and balance the data center from core to edge is driving the rapid adoption of these standards. New technologies like RDMA open opportunities for companies to use network adapters and network-attached storage (NAS) with deterministic latency for workloads that previously required more expensive storage area networks (SANs) built on Fibre Channel adapters and more specialized support. More recently, we have seen NVMe-over-Fabrics, which uses RDMA transports to share cutting-edge NVMe technology across a storage fabric. High-speed Ethernet cards with RDMA have opened the door to the fastest-performing NVMe storage fabrics on the market today. These previously unimaginable levels of speed and reliability allow companies to do more with their data than ever before.
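The lane arithmetic behind these standards is straightforward. The short sketch below is purely illustrative, showing how the common speeds mentioned above are built from 10Gb and 25Gb lanes.

```python
# Illustrative lane math for common Ethernet speeds: each standard bundles
# a number of serial lanes at a given per-lane rate (values in Gb/s).
ETHERNET_LANES = {
    "10GbE":  (1, 10),   # single 10Gb lane
    "40GbE":  (4, 10),   # four 10Gb lanes bundled together
    "25GbE":  (1, 25),   # single 25Gb lane
    "50GbE":  (2, 25),   # two 25Gb lanes
    "100GbE": (4, 25),   # four 25Gb lanes
}

for name, (lanes, rate) in ETHERNET_LANES.items():
    print(f"{name}: {lanes} x {rate}Gb/s lanes = {lanes * rate}Gb/s")
```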

What is RDMA and how does it affect Ethernet technology?

Remote Direct Memory Access (RDMA) allows smart network adapters to access memory on another system directly, without going through the traditional TCP path and without CPU intervention. Traditional transfers relied on the operating system's network stack (TCP/IP) to communicate, which created massive overhead, cost performance, and limited what was possible with Ethernet and storage. RDMA now allows lossless transfers that virtually eliminate that overhead, with a huge gain in efficiency from the CPU cycles it saves. Throughput goes up and latency goes down, allowing organizations to do more with less. RDMA is in fact an extension of DMA (direct memory access) and bypasses the CPU to allow "zero copy" operations. These techniques have been an integral part of Fibre Channel storage for many years. The deterministic latency that made Fibre Channel the first choice for enterprises and heavy workloads is now available over Ethernet, making it easy for organizations of all sizes to take advantage of high-end shared storage.
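True RDMA requires RDMA-capable adapters and a verbs library, which is more than a short snippet can show. As a rough illustration of the zero-copy idea only, the sketch below uses Python's socket.sendfile, which hands a file to the kernel so the data is never copied through user-space buffers; unlike RDMA, it still traverses the kernel TCP stack. The host, port, and file name are placeholders.

```python
# Rough zero-copy illustration only -- this is NOT RDMA. socket.sendfile()
# lets the kernel move file data directly to the socket without copying it
# through user-space buffers; RDMA goes further by also bypassing the kernel
# network stack and the CPU. Host, port, and file name are placeholders.
import socket

TARGET = ("192.0.2.10", 9000)  # hypothetical receiver address


def send_file_zero_copy(path: str) -> int:
    """Send a file without copying its contents through user space."""
    with socket.create_connection(TARGET) as sock, open(path, "rb") as f:
        return sock.sendfile(f)  # kernel performs the transfer


if __name__ == "__main__":
    sent = send_file_zero_copy("large_dataset.bin")  # placeholder file name
    print(f"sent {sent} bytes")
```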

How is NVMe integrated?

NVMe integrates with Ethernet through the NVMe-over-Fabrics protocol, which is simply the fastest way to transfer files over Ethernet today. NVMe itself was developed to take full advantage of modern SSDs and flash storage, improving on the older SATA/SAS protocols, and it raises the bar even higher by exploiting the ability of non-volatile memory to operate in parallel. Since NVMe is a direct-attached storage technology, the next step is shared storage, and that is where Ethernet or Fibre Channel comes into play: bringing NVMe into a shared storage fabric.
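To make the fabric side concrete, here is a minimal sketch of attaching an NVMe-over-Fabrics namespace over an RDMA transport using the standard nvme-cli tool. It assumes a Linux initiator with nvme-cli and the nvme-rdma module available, plus a reachable NVMe-oF target; the address, port, and NQN are placeholders.

```python
# Minimal NVMe-over-Fabrics sketch using nvme-cli over an RDMA transport
# (assumes Linux with nvme-cli and the nvme-rdma module loaded; the address,
# port, and NQN below are placeholders, not a real target).
import subprocess

TARGET_ADDR = "192.0.2.20"                          # hypothetical target IP
TARGET_PORT = "4420"                                # conventional NVMe-oF port
TARGET_NQN = "nqn.2014-08.org.example:nvme-target"  # hypothetical NQN


def discover() -> str:
    """List subsystems advertised by the target's discovery controller."""
    return subprocess.run(
        ["nvme", "discover", "-t", "rdma", "-a", TARGET_ADDR, "-s", TARGET_PORT],
        capture_output=True, text=True, check=True,
    ).stdout


def connect() -> None:
    """Attach the remote namespace; it then appears as a local /dev/nvmeX device."""
    subprocess.run(
        ["nvme", "connect", "-t", "rdma", "-a", TARGET_ADDR,
         "-s", TARGET_PORT, "-n", TARGET_NQN],
        check=True,
    )


if __name__ == "__main__":
    print(discover())
    connect()
```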


What are the Ethernet requirements for storage technologies like RAM disk and smart storage?

Smart NIC is a relatively new term for network controllers that can handle operations that previously fell to the system processor; offloading the system's CPU improves overall efficiency. Taking the idea further, network card manufacturers now offer Field Programmable Gate Array (FPGA) technology that allows application-specific functionality, including data offloads and acceleration, to be developed and programmed on the FPGA itself. Handling this work at the hardware layer makes these network cards blazingly fast, with enormous potential to add further innovations at that layer in the future. RAM disk smart storage pushes this even further by integrating hardware data acceleration into storage devices that use volatile RAM, which is faster than the non-volatile memory used in today's NVMe devices. This translates into blazing-fast storage with the ability to streamline heavy workloads. The combination of ultra-fast RAM storage, an embedded smart NIC, and FPGAs with smart offloads and data acceleration has enormous potential for super-high-speed storage. RAM disk and smart storage would not exist without the latest innovations in RDMA-enabled Ethernet and NVMe-over-Fabrics.

What future does Ethernet technology have?

200 Gigabit Ethernet is already beginning to spread from HPC solutions into data centers. The standard doubles the lanes to 50Gb each, and there is a remarkable roadmap that will reach Terabit speeds in just a few years. New generations of PCI Express will play a critical role in enabling these higher speeds, and companies will continue to look for ways to maximize throughput, accelerate transfer speeds, and offload CPU and GPU operations to network controllers and FPGAs.