The current move toward NVMe over Fibre Channel or network fabrics marks a paradigm shift in enterprise storage: it enables deeply parallel communication between server and storage device, opening the door to dramatic gains in both performance and scalability.
As a highly optimized, high-performance, scalable interface designed to work with current and next-generation NVM technologies, NVMe enables organizations to take advantage of higher performance, reduced latency, and parallel I/O when transferring data to and from solid-state drives (SSDs).
Down to Details
As defined by the NVM Express consortium, “NVMe is a scalable host controller interface designed to address the needs of enterprise and client systems that utilize PCI Express (PCIe) based solid state drives. The interface provides an optimized command issue and completion path. It includes support for parallel operation by supporting up to 64K commands within a single I/O queue to the device. Additionally, support has been added for many Enterprise capabilities like end-to-end data protection (compatible with T10 DIF and DIX standards), enhanced error reporting, and virtualization.”
The key attributes named by NVM Express that separate NVMe from other storage protocols include:
- Support for up to 64K I/O queues, with each I/O queue supporting up to 64K commands
- Priority associated with each I/O queue with a well-defined arbitration mechanism
- All information to complete a 4KB read request is included in the 64-byte command itself, ensuring efficient small random I/O operation (see the sketch after this list)
- Efficient and streamlined command set
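To make the 64-byte command concrete, here is a minimal C sketch of the NVMe submission queue entry layout, following the NVM Express base specification. The field comments show where a 4KB Read carries its target LBA and the host memory addresses for the data. This is illustrative only, not a driver-ready definition.

```c
/* Minimal sketch of the 64-byte NVMe submission queue entry (SQE),
 * following the layout in the NVM Express base specification. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

struct nvme_sqe {
    uint32_t cdw0;   /* opcode (bits 0-7), fused op, PSDT, command ID (bits 16-31) */
    uint32_t nsid;   /* namespace identifier */
    uint32_t rsvd2;
    uint32_t rsvd3;
    uint64_t mptr;   /* metadata pointer */
    uint64_t prp1;   /* PRP entry 1: host memory address for the data transfer */
    uint64_t prp2;   /* PRP entry 2 (or pointer to a PRP list) */
    uint32_t cdw10;  /* for a Read: starting LBA, lower 32 bits */
    uint32_t cdw11;  /* for a Read: starting LBA, upper 32 bits */
    uint32_t cdw12;  /* for a Read: number of logical blocks - 1 (bits 0-15) */
    uint32_t cdw13;
    uint32_t cdw14;
    uint32_t cdw15;
};

int main(void)
{
    /* The whole command - opcode, target LBA range, and the host memory
     * addresses (PRPs) for the data - fits in these 64 bytes, so a 4KB
     * read needs no extra round trip to describe the transfer. */
    static_assert(sizeof(struct nvme_sqe) == 64, "SQE must be 64 bytes");
    printf("NVMe SQE size: %zu bytes\n", sizeof(struct nvme_sqe));
    return 0;
}
```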
Fibre Channel or Network Fabric
Although the end results are similar, there are some significant differences between NVMe over an RDMA-based network fabric and NVMe over Fibre Channel.
NVMe over network fabrics is often the choice for organizations with a large installed base of centralized storage accessible over a network. Network fabrics rely heavily on switches and software-defined environments. When scalability is the primary driver for embracing NVMe, a network fabric often makes more sense, since it can seamlessly scale out to tens of thousands of devices.
Conversely, Fibre Channel is the preferred protocol for connecting all-flash arrays in today’s data centers due to its performance, availability, scalability, and plug-and-play architecture. Simply put, NVMe over Fibre Channel provides the improved performance of NVMe along with the flexibility and scalability of a shared storage architecture.
A major distinction between local NVMe and NVMe over a fabric is the mechanism for transporting commands. Local NVMe maps requests and responses into shared memory in the host computer via the Peripheral Component Interconnect Express (PCIe) interface protocol. Fabric transports, whether Fibre Channel or RDMA, instead use a message-based model to send requests and responses between a host and a target storage device over a network.
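The message-based model can be pictured as a command “capsule”: the fabric carries the same 64-byte NVMe command inside a message, optionally followed by in-capsule data, rather than having the device fetch it from host memory over PCIe. A rough C sketch, with sizes illustrative rather than a wire format:

```c
/* Rough sketch of an NVMe over Fabrics command capsule. Instead of the
 * device fetching the 64-byte command from host memory over PCIe, the
 * fabric transport (Fibre Channel or RDMA) carries it inside a message,
 * optionally followed by in-capsule data. */
#include <assert.h>
#include <stdint.h>

struct nvme_sqe { uint8_t bytes[64]; };   /* stand-in for the 64-byte command */

struct nvmf_command_capsule {
    struct nvme_sqe sqe;    /* the same 64-byte NVMe command, now message-borne */
    uint8_t         data[]; /* optional in-capsule data, e.g. a small write payload */
};

static_assert(sizeof(struct nvmf_command_capsule) == 64,
              "capsule header is exactly the 64-byte command");
```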
Understanding Use Cases
There are several use cases for NVMe, whether an organization elects to leverage the protocol over Fibre Channel or a network fabric.
Optimizing Analytics. For many organizations, the ability to fully leverage big data analytics is proving instrumental in remaining competitive. NVMe’s concurrent I/O is an important driver when organizations depend heavily on big data analytics. NVMe over fabrics, in particular, offers the opportunity to run analytics against far faster storage arrays than traditional legacy database environments allow.
Consider, for instance, a storage system composed of many NVMe devices, using NVMe over fabrics with either an RDMA or Fibre Channel interface, making a complete end-to-end NVMe storage solution. Such a system would provide extremely high performance while maintaining the very low latency available via NVMe.
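As a rough illustration of what concurrent I/O means on the host side, the C sketch below issues 4KB reads from several threads at once; on a Linux host, the NVMe driver spreads threads running on different CPUs across separate hardware submission queues, so the reads can proceed in parallel. The device path /dev/nvme0n1 is a placeholder; the example assumes a Linux system with read access to that device.

```c
/* Minimal sketch: concurrent 4KB reads from multiple threads. On Linux,
 * the NVMe driver maps threads on different CPUs to different hardware
 * submission queues, so these reads can be serviced in parallel.
 * /dev/nvme0n1 is a placeholder; adjust to your system.
 * Build: cc -O2 -pthread reads.c -o reads */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define NTHREADS 8
#define BLOCK    4096

static const char *dev = "/dev/nvme0n1"; /* hypothetical device path */

static void *reader(void *arg)
{
    long id = (long)arg;
    int fd = open(dev, O_RDONLY);
    if (fd < 0) { perror("open"); return NULL; }

    void *buf;
    if (posix_memalign(&buf, BLOCK, BLOCK) != 0) { close(fd); return NULL; }

    /* Each thread reads a distinct range of 4KB blocks. */
    for (int i = 0; i < 1024; i++) {
        off_t off = ((off_t)id * 1024 + i) * BLOCK;
        if (pread(fd, buf, BLOCK, off) != BLOCK) { perror("pread"); break; }
    }
    free(buf);
    close(fd);
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, reader, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    puts("done");
    return 0;
}
```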
Building the Next-Gen Data Center. NVMe also creates an interesting opportunity to put new storage technologies into the data center, for instance the ability to use storage class memory as a cache. Adding flash in this environment provides much lower latency and speeds up data-intensive applications. The reduced latency also enables applications to register memory regions directly with the fabric, meaning data can pass straight to the hardware fabric adapter.
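The “registered memory regions” idea comes from RDMA: an application pins and registers a buffer with the fabric adapter so the hardware can move data into or out of it directly, without intermediate copies. A minimal sketch using the libibverbs API, assuming an RDMA-capable adapter is present:

```c
/* Minimal sketch: registering a memory region with an RDMA adapter via
 * libibverbs, so the fabric hardware can DMA directly into the buffer.
 * Assumes an RDMA-capable NIC/HCA is present.
 * Build: cc rdma_reg.c -o rdma_reg -libverbs */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int num;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list || num == 0) { fprintf(stderr, "no RDMA devices\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(list[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register a 4KB buffer: the pages are pinned, and the adapter can
     * now read and write them directly, bypassing CPU copies. */
    size_t len = 4096;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr"); return 1; }

    printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n", len, mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(list);
    return 0;
}
```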
Another implementation would use NVMe over fabrics to leverage low latency while connecting to a storage subsystem that uses more traditional protocols internally to handle I/O to each of its SSDs. This allows organizations to benefit from a simplified host software stack and lower latency while still taking advantage of existing storage subsystem technology.
Find the best storage solutions for your business by checking out our award-winning selection of SSDs for the enterprise.