With the amount of data being processed globally every day, businesses are constantly looking for ways to increase speed with hardware. One way to do this is server-side flash caching, which uses high-speed SSDs to accelerate input/output operations.

Caching is used at many levels within computer systems: the L1, L2 and L3 caches on the CPU; the operating system using part of RAM to speed up operations; faster storage accelerating writes to and reads from slower storage; and even caches on network interface cards that improve the performance of Ethernet or Fibre Channel networks.

Server-side flash caching typically uses driver software developed by vendors such as Intel and Proximal Data to intercept data being sent to an external storage system, or even to slower internal storage. Since data is sent first to a fast, low-latency SSD, and then onward to the slower storage, input/output operations finish sooner, reducing delays from apps waiting for data to be written or read before moving to the next operation.
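As a rough sketch of what such a caching driver does (purely illustrative Python, not any vendor's actual driver code), a write-back cache acknowledges writes as soon as they land on the fast device and destages them to the slower storage later:

```python
class WriteBackCache:
    """Toy model of a server-side flash cache (illustrative only).

    Writes land in the fast "ssd" store and are acknowledged immediately;
    a later flush() destages dirty blocks to the slow backing store.
    """

    def __init__(self):
        self.ssd = {}      # fast cache tier (the flash device)
        self.backing = {}  # slow permanent storage (SAN or internal disk)
        self.dirty = set() # blocks written to cache but not yet destaged

    def write(self, block, data):
        self.ssd[block] = data  # fast path: the app sees SSD latency
        self.dirty.add(block)

    def read(self, block):
        if block in self.ssd:   # cache hit: served from flash
            return self.ssd[block]
        return self.backing.get(block)  # miss: fall back to slow storage

    def flush(self):
        for block in list(self.dirty):  # destage dirty blocks
            self.backing[block] = self.ssd[block]
        self.dirty.clear()
```

The app's write returns as soon as the data is on the SSD, which is why I/O operations finish sooner even though the data ultimately lives on slower storage.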

Server-side caching has also been implemented with RAM disks, which use part of a server's memory to cache read and write operations. While this is very fast, it is quite expensive, and many servers are short on RAM, so the latest high-speed SSDs provide an ideal mechanism for implementing caching at a lower cost, without consuming scarce memory.

Tier 0 With Automated Tiering

A SAN appliance with automated tiering is similar to server-side caching: data that is actively in use is automatically moved to the fastest tier, generally the flash-based Tier 0. This approach can serve as an alternative or a complement to server-side caching.

The storage virtualization software built into the SAN storage system automates the process of moving data to and from the fastest tier, ensuring that when data is no longer being used regularly, it’s moved to a slower tier of storage.
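The tiering pass can be sketched as a simple ranking by access frequency (a hypothetical illustration; real storage virtualization software uses far more sophisticated heuristics and moves data at a finer granularity):

```python
def rebalance(access_counts, tier0_capacity):
    """Toy automated-tiering pass (illustrative, not vendor code).

    access_counts: mapping of block -> how often it was accessed recently
    tier0_capacity: how many blocks fit on the flash tier

    Keeps the most frequently accessed blocks on Tier 0 (flash) and
    demotes everything else to the slower tier.
    """
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    tier0 = set(ranked[:tier0_capacity])  # hot data stays on flash
    tier1 = set(ranked[tier0_capacity:])  # cold data moves down
    return tier0, tier1
```

Run periodically, a pass like this keeps hot data on flash while data that is no longer used regularly drifts down to the slower tier.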

Since tiered data always resides on permanent storage, it doesn't share caching's risk of data loss, but it is subject to the speed of the interface: a SAN connection over Gigabit or even 10Gb Ethernet, or any Fibre Channel link slower than 8Gb, will be slower than a fast PCIe-based server-side flash device.
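To put rough numbers on that comparison (these are line-rate figures; real-world throughput is lower due to protocol overhead, and the PCIe figure is an illustrative assumption):

```python
# Approximate peak bandwidths in MB/s. Real sustained throughput is
# lower once protocol overhead is accounted for; figures are rough.
links_mb_s = {
    "1Gb Ethernet SAN": 1_000 // 8,     # 125 MB/s
    "10Gb Ethernet SAN": 10_000 // 8,   # 1,250 MB/s
    "8Gb Fibre Channel": 800,           # ~800 MB/s usable (8b/10b encoding)
    "PCIe 3.0 x4 flash device": 3_940,  # ~3.9 GB/s theoretical
}

# The slowest link in the data path sets the ceiling for tiered storage.
bottleneck = min(links_mb_s, key=links_mb_s.get)
```

Even a fast all-flash Tier 0 behind a Gigabit Ethernet SAN link is capped at roughly 125 MB/s, an order of magnitude below what a PCIe flash device in the server itself can deliver.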

Accounting for Server Hardware

A high-speed SSD on a PCIe board or connected via NVMe may deliver five to 10 times the performance of older SSDs, and dozens of times the performance of hard drives or SAN/NAS networked storage. Assuming applications are I/O-bound, server-side caching can improve app performance by a factor close to the difference in storage performance, potentially up to 50 times or more.
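That "assuming applications are I/O-bound" caveat matters, and Amdahl's law shows why. A small illustrative function (the name and parameters are hypothetical, for this sketch only):

```python
def app_speedup(io_fraction, storage_speedup):
    """Amdahl's-law estimate of overall application speedup when only
    the storage portion of the workload gets faster.

    io_fraction:     share of runtime spent waiting on storage I/O (0..1)
    storage_speedup: how many times faster the flash cache tier is
    """
    return 1.0 / ((1.0 - io_fraction) + io_fraction / storage_speedup)
```

With storage 50 times faster, an app that spends 90 percent of its time on I/O speeds up only about 8.5 times; only workloads that are almost entirely I/O-bound approach the full 50x.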

Server-side caching has two potential downsides: data can be lost if power fails before data written to the cache reaches permanent storage, and the extra layer of software that intercepts data on its way to and from storage and routes it through the fast SSD can introduce issues of its own. The data-loss risk can be addressed by using storage with power-loss protection, such as the latest Samsung enterprise drives.

Implementing Server-Side Flash for Caching

Server-side flash caching requires only the appropriate OS driver software; there is no need to manually designate or move the data to be accelerated. In one example, database performance doubled when NVMe SSDs were used with caching software.

The efficiency gains from caching will vary. The software can be installed at the filesystem level, at the OS level or, in virtualized environments, at the hypervisor level. Each has trade-offs: filesystem-level caching has the least overhead, resulting in better latency, but may end up caching files that aren't critical to performance. OS- and hypervisor-level software has the broader intelligence to cache the most-needed files, but carries more overhead and may add latency.

Another issue with caching is that the cache should be flushed out before snapshots of data are taken, to ensure that all the latest changes have been made to the permanent storage where the snapshot is created. Some caching software includes commands to ensure this happens, and some storage vendors that supply both storage systems and server-side caching have additional apps to ensure coherency is maintained.
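A toy illustration of why the flush matters (hypothetical names, not any vendor's API): a snapshot taken without flushing first would miss acknowledged writes still sitting in the cache.

```python
class CachedVolume:
    """Toy model of a volume fronted by a write-back flash cache."""

    def __init__(self):
        self.cache = {}  # dirty writes held on the flash cache device
        self.disk = {}   # permanent storage, where snapshots are taken

    def write(self, key, value):
        self.cache[key] = value  # acknowledged once it hits the cache

    def flush(self):
        self.disk.update(self.cache)  # destage everything to permanent storage
        self.cache.clear()

def snapshot(volume):
    # Flush first, so the snapshot captures every acknowledged write.
    volume.flush()
    return dict(volume.disk)
```

Without the flush() call, the returned snapshot would be missing any write that was acknowledged to the application but not yet destaged, which is exactly the coherency problem the caching and storage vendors' tools are designed to prevent.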

Given the potential for performance gains, and the relatively low cost of high-speed SSDs compared to RAM, server-side caching may be an ideal way to improve storage performance.

Find more information on the improved storage capacities of SSDs.

Logan Harbaugh

Logan Harbaugh is an IT consultant and reviewer. He has worked in IT for over 20 years, and was a senior contributing editor with InfoWorld Labs as well as a senior technology editor at Information Week Labs. He has written reviews of enterprise IT products including storage, network switches, operating systems, and more for many publications and websites, including Storage Magazine, TechTarget.com, StateTech, Information Week, PC Magazine and Internet.com. He is the author of two books on network troubleshooting.
