In any collection of data, roughly 10 percent is "hot data": the most frequently accessed part of the set. A tiered approach keeps this top 10 percent in the fastest available storage, effectively accelerating the performance of the entire database while avoiding the cost of keeping everything in top-tier storage. With the total data universe projected to reach 44 zettabytes (44 billion terabytes) by 2020, using tiers of storage to maximize performance and minimize total cost is critical.
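To make the cost tradeoff concrete, here is a minimal back-of-the-envelope sketch in Python. The data set size and the per-terabyte prices are illustrative assumptions, not real quotes; only the 10 percent hot-data rule of thumb comes from the discussion above.

# A rough sketch of the cost tradeoff. Data set size and $/TB prices
# are illustrative assumptions; only the 10 percent hot-data rule of
# thumb comes from the article.

DATA_TB = 100                # hypothetical total data set size, in TB
HOT_FRACTION = 0.10          # the ~10 percent "hot data" rule of thumb

PRICE_PER_TB_FLASH = 400.0   # assumed $/TB for the fast (flash) tier
PRICE_PER_TB_HDD = 25.0      # assumed $/TB for high-capacity HDD

all_flash_cost = DATA_TB * PRICE_PER_TB_FLASH
tiered_cost = (DATA_TB * HOT_FRACTION * PRICE_PER_TB_FLASH
               + DATA_TB * (1 - HOT_FRACTION) * PRICE_PER_TB_HDD)

print(f"All flash: ${all_flash_cost:,.0f}")   # $40,000
print(f"Tiered:    ${tiered_cost:,.0f}")      # $6,250

Under these assumed prices, placing only the hot 10 percent on flash costs less than a sixth of an all-flash build, while that 10 percent still serves the bulk of the workload.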

There can be several tiers of storage in a data system, and typically each tier is one-tenth the size of the one below it. You might have a RAM-based tier for the most critical data, a PCIe SSD tier for the next most critical, a SATA-based SSD tier below that, and a high-capacity HDD tier for the large quantity of "cold data" at the bottom. A properly proportioned multi-tier storage system can deliver something close to the performance of its fastest tier for all the data in the system, since typically only the hot 10 percent is needed at any given time.
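As a rough illustration of why the overall average stays close to the fast tiers, here is a short Python sketch. The per-tier latencies and access fractions below are assumptions chosen for illustration, not measurements of any particular system.

# Illustrative only: latencies (in microseconds) and access fractions
# are assumptions, not measurements. The weighted average shows how
# overall response time is dominated by the fast tiers when most
# accesses land on hot data.

tiers = [
    # (tier name, fraction of accesses served, latency in µs)
    ("RAM",      0.60,     0.1),
    ("PCIe SSD", 0.30,    20.0),
    ("SATA SSD", 0.08,    80.0),
    ("HDD",      0.02, 10000.0),
]

effective_latency = sum(frac * lat for _, frac, lat in tiers)
print(f"Effective average latency: {effective_latency:.1f} µs")
# roughly 212 µs, versus 10,000 µs if every access went to the HDD tier

Note that even with only 2 percent of accesses falling through to disk, the HDD tier contributes most of the average latency in this sketch, which is exactly why keeping the hot set in the upper tiers matters so much.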

Check out the infographic below for a visual representation of this architecture, and read this white paper for more technical information on tiered storage.

[Infographic: Hot Data Flows Through SSDs – How Tiering Storage Can Maximize ROI and Performance, from Samsung Business USA]


Logan Harbaugh is an IT consultant and reviewer. He has worked in IT for more than 20 years, serving as a senior contributing editor with InfoWorld Labs and a senior technology editor at InformationWeek Labs. He has written reviews of enterprise IT products, including storage, network switches, operating systems and more, for publications and websites such as Storage Magazine, TechTarget.com, StateTech, InformationWeek, PC Magazine and Internet.com. He is the author of two books on network troubleshooting.
