Rack-scale flash is a solution to a new challenge: keeping up with the speed of solid state drives (SSDs). For a long time, hard drive-based storage was the biggest bottleneck in the data center. The original intent of RAID systems was to boost hard drive performance by spreading reads and writes across multiple disks. With the advent of SSDs, flash performance has improved to such a degree that solid state storage is now faster than the other parts of the systems in a data center.

To get the most out of new flash storage, it’s necessary to go beyond the old SATA and SAS connections to individual drives. Non-Volatile Memory Express (NVMe) is the newest interface standard for flash drives and attaches SSDs directly to the PCIe bus; some SSDs are also designed to connect to the memory bus. Both paths offer faster, lower-latency interfaces than SAS or SATA.

Rack-Scale Flash Improves Network Performance

Rack-scale flash, unlike older storage area network (SAN) systems, uses a very high-speed fabric intended to share storage among the servers in a rack, or between adjacent racks. These systems tend to use NVMe over Fabrics, meaning they carry the NVMe protocol, rather than SCSI or TCP/IP, over a high-speed network technology such as 40/50/100Gbit Ethernet or 32Gbit Fibre Channel. The throughput of such systems doesn’t improve much over other protocols on the same networks, but the latency is far lower, which improves input/output operations per second (IOPS), a particular requirement of real-time analytics, big data, artificial intelligence and high-performance computing, among other applications.
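
To see why latency, rather than raw throughput, is what drives IOPS, a back-of-the-envelope calculation helps. The sketch below applies Little’s law (outstanding I/Os divided by per-I/O latency); the queue depth and latency figures are illustrative assumptions, not measurements of any particular system.

```python
# Back-of-the-envelope illustration (hypothetical numbers, not benchmarks):
# by Little's law, achievable IOPS ~= outstanding I/Os / average per-I/O latency.

def iops(queue_depth: int, latency_us: float) -> float:
    """Approximate IOPS for a given queue depth and per-I/O latency in microseconds."""
    return queue_depth / (latency_us / 1_000_000)

# Same queue depth, different per-I/O latency:
print(f"Higher-latency path, ~100 us: {iops(32, 100):,.0f} IOPS")  # ~320,000
print(f"NVMe-oF path, ~20 us:         {iops(32, 20):,.0f} IOPS")   # ~1,600,000
```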

By maintaining the NVMe protocol end to end, from drives such as the Samsung 950 PRO to the network that connects the storage to the servers, latency is minimized and there is no need to translate between storage protocols along the way. Servers get the full benefit of flash performance with less CPU overhead during heavy storage loads, leaving more processing power for analysis and other application work.
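
One practical consequence of keeping NVMe end to end is that a fabric-attached namespace appears to the host as an ordinary NVMe device, with no SCSI translation layer in between. The sketch below is a minimal, Linux-only illustration that assumes the standard /sys/class/nvme sysfs layout (attribute names can vary by kernel version); it simply lists each NVMe controller and its transport.

```python
# Minimal sketch (Linux-only; assumes the standard /sys/class/nvme sysfs layout,
# which can vary by kernel version): list NVMe controllers and their transport,
# showing that local PCIe drives and fabric-attached storage both present the
# same native NVMe interface to the host.
from pathlib import Path

def read_attr(ctrl: Path, name: str) -> str:
    attr = ctrl / name
    return attr.read_text().strip() if attr.exists() else "n/a"

for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    model = read_attr(ctrl, "model")          # e.g. a local Samsung 950 PRO
    transport = read_attr(ctrl, "transport")  # "pcie" locally, "rdma" or "fc" over a fabric
    print(f"{ctrl.name}: {model} (transport: {transport})")
```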

Hybrid systems may use NVMe storage inside the server as a cache or a tier 0 in front of more standard SAN storage. In these cases, the fast NVMe storage is written to first, and data is then moved from the fast internal storage to the shared SAN-based storage. Caching is simpler to install and configure, since it treats the NVMe storage as a buffer. Tiering solutions will automatically move data from the internal tier 0 to other tiers in SAN or even cloud storage, enabling a more flexible approach, though it’s also more complex to set up and maintain.
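
The write-to-tier-0-then-demote flow is easier to see in miniature. The sketch below is a toy model of that policy, not any vendor’s implementation; the one-hour demotion threshold and the in-memory dictionaries standing in for the NVMe and SAN tiers are purely illustrative.

```python
# Toy sketch of the tier-0 flow described above (not any vendor's implementation):
# writes land on the fast NVMe tier first, and a background pass demotes data
# that hasn't been touched recently to the shared SAN tier.
import time

DEMOTE_AFTER_SECONDS = 3600  # hypothetical "cold data" threshold

class TieredStore:
    def __init__(self):
        self.nvme_tier = {}  # key -> (data, last_access_time)
        self.san_tier = {}   # key -> data

    def write(self, key, data):
        # New data always lands on the fast internal tier first.
        self.nvme_tier[key] = (data, time.time())

    def read(self, key):
        if key in self.nvme_tier:
            data, _ = self.nvme_tier[key]
            self.nvme_tier[key] = (data, time.time())  # refresh access time
            return data
        return self.san_tier.get(key)  # fall back to the shared SAN tier

    def demote_cold_data(self):
        # Background task: move anything not accessed recently down a tier.
        now = time.time()
        for key, (data, last_access) in list(self.nvme_tier.items()):
            if now - last_access > DEMOTE_AFTER_SECONDS:
                self.san_tier[key] = data
                del self.nvme_tier[key]
```

A real product would run the demotion pass asynchronously and track access patterns far more carefully, but the ordering is the same: fast internal storage absorbs writes first, and colder data migrates to shared storage.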

As newer interconnects, including PCIe 4.0 and additional NVMe over Fabrics options, become available, vendors will be able to connect more fast flash devices to each server and further improve shared storage performance across many servers. For now, however, the balance has shifted: storage devices, particularly NVMe-based flash devices, are faster than the rest of the systems they’re attached to.

To find the best storage solution for your enterprise, check out our complete range of SSDs here.