
Storage Acceleration Using Mellanox Interconnect
The RDMA bypass allows the data path to effectively skip to the front of the line. Data is
provided directly to the application immediately upon receipt without being subject to various
delays due to CPU load-dependent software queues. This has three effects:
• There is no waiting, which means that the latency of transactions is extremely low.
• Because there is no contention for resources, the latency is consistent, which is essential
for providing end users with a guaranteed SLA.
• By bypassing the OS, RDMA delivers significant savings in CPU cycles. With a more
efficient system in place, those saved CPU cycles can be used to accelerate application
performance, as illustrated in the sketch below.
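To make the bypass concrete, the short C sketch below uses the libibverbs API (the user-space RDMA verbs library) to register an application buffer with the adapter and post a one-sided RDMA WRITE work request; once the buffer is registered, the adapter moves the data by DMA with no kernel copy and no remote CPU involvement. This is an illustrative sketch only, not code from this document: queue pair creation and the out-of-band exchange of connection parameters are omitted, and the qp, remote_addr, and rkey placeholders stand in for values that would be obtained during that setup.

#include <infiniband/verbs.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
    if (!dev_list || num_devices == 0) {
        fprintf(stderr, "No RDMA-capable devices found\n");
        return 1;
    }

    /* Open the first adapter and allocate a protection domain. */
    struct ibv_context *ctx = ibv_open_device(dev_list[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register the application buffer with the adapter. From this point on
       the adapter can DMA directly into and out of 'buf'. */
    size_t len = 4096;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);

    struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);

    /* Placeholders: in a real application the queue pair and the peer's
       buffer address and rkey are established out of band (e.g. via rdma_cm). */
    struct ibv_qp *qp = NULL;
    uint64_t remote_addr = 0;
    uint32_t rkey = 0;

    struct ibv_sge sge = {
        .addr   = (uintptr_t)buf,
        .length = (uint32_t)len,
        .lkey   = mr->lkey,
    };
    struct ibv_send_wr wr = {0}, *bad_wr = NULL;
    wr.opcode              = IBV_WR_RDMA_WRITE;  /* one-sided: no remote CPU */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = rkey;

    if (qp && ibv_post_send(qp, &wr, &bad_wr) == 0) {
        struct ibv_wc wc;
        while (ibv_poll_cq(cq, 1, &wc) == 0)     /* poll for completion */
            ;
        printf("RDMA write status: %d\n", wc.status);
    }

    ibv_destroy_cq(cq);
    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(dev_list);
    return 0;
}

The sketch can be built with a command such as cc rdma_sketch.c -libverbs; as written it only performs the local resource setup, since the connected queue pair is assumed to be established elsewhere.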
The following diagram (Figure 3) shows that by offloading data transfers to the adapter
hardware using the iSER protocol, the full capacity of the link is utilized, up to the PCIe
limit.
To summarize, network performance is a significant element in the overall delivery of data
center services, and achieving maximum performance for those services requires fast
interconnects. Unfortunately, the high CPU overhead associated with traditional storage
adapters prevents taking full advantage of high-speed interconnects: processing TCP and
iSCSI operations consumes many more CPU cycles than the RDMA-based (iSER) protocol,
which is executed by the network adapter. Hence, using RDMA-based fast interconnects
significantly increases data center performance levels.
Figure 3: RDMA Acceleration