Memory & Storage

Data Center Tiers: Using a Tiered Storage Architecture to Increase Performance Without Breaking the Bank

Storage administrators are always looking for more performance, and with big data, real-time analytics and transactional computing all stretching the limits of existing systems, SSDs are one of the best ways to add speed. Fortunately, a tiered storage architecture can increase performance dramatically at relatively low cost. Implementing data center tiers that hold the busiest 10 percent of the data in a system can effectively accelerate access to all of the data being handled. The white paper “Tiered Storage Architecture: Taking Advantage of New Classes of Data Center Storage” can get you up to speed on using tiered storage.

Tiering Algorithms

In data center tiers, only about 10 percent of data is “hot,” or active, at any given time. By keeping that 10 percent, along with newly written or recently accessed data, on the fastest available storage while moving inactive data to a lower tier, the system can serve up all data as if it were on the fastest storage. Predictive algorithms can promote the rest of a file to faster storage when one part of it is accessed, and can also determine when to demote data: files that haven’t been accessed for a while are moved down to slower storage.
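
To make the promote-on-access, demote-on-idle behavior concrete, here is a minimal Python sketch of a two-tier block store. It is an illustration only: the class name, the LRU eviction policy and the idle threshold are assumptions for the example, not features of any particular tiering product.

```python
import time

class TieredStore:
    """Toy two-tier model: promote blocks to the fast tier when accessed,
    demote blocks that have sat idle longer than a threshold."""

    def __init__(self, fast_capacity_blocks, idle_seconds=3600):
        self.fast_capacity = fast_capacity_blocks  # hot tier sized for ~10% of the data
        self.idle_seconds = idle_seconds           # demotion threshold (assumed value)
        self.fast_tier = {}                        # block_id -> last access time
        self.slow_tier = set()                     # everything else

    def read(self, block_id):
        now = time.time()
        if block_id in self.fast_tier:
            self.fast_tier[block_id] = now         # already hot: refresh recency
            return "fast"
        # Cold block: serve it from the slow tier, then promote it.
        self.slow_tier.discard(block_id)
        if len(self.fast_tier) >= self.fast_capacity:
            lru = min(self.fast_tier, key=self.fast_tier.get)
            del self.fast_tier[lru]                # evict the least recently used block
            self.slow_tier.add(lru)
        self.fast_tier[block_id] = now
        return "slow"

    def demote_idle(self):
        """Background sweep: move blocks idle past the threshold down a tier."""
        cutoff = time.time() - self.idle_seconds
        for block_id, last_access in list(self.fast_tier.items()):
            if last_access < cutoff:
                del self.fast_tier[block_id]
                self.slow_tier.add(block_id)
```

A real tiering engine would also promote neighboring blocks of the same file predictively, as described above, and would move the data itself rather than just the bookkeeping entries.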

How Much Capacity?

Since only about 10 percent of storage is hot at any given time, each tier needs to be only 10 percent of the size of the slower tier below it. For example, if the first tier is 10 terabytes (10 TB), it can accelerate access to 100 TB in the next tier. The key is moving data in and out of the top tier as it becomes hot or cools off. This concept extends to multiple tiers: with four tiers and a top tier of 10 TB, the second tier would be 100 TB, the third tier one petabyte (1 PB), and the fourth tier could be up to 10 PB.
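
To make the 10-to-1 scaling concrete, the short Python calculation below reproduces the capacities from the example above. The 10 TB top tier and the 10x ratio come from the text; the function name is purely illustrative.

```python
def tier_capacities(top_tier_tb, num_tiers, ratio=10):
    """Each tier is `ratio` times the capacity of the faster tier above it."""
    return [top_tier_tb * ratio ** i for i in range(num_tiers)]

# Four tiers with a 10 TB top tier, as in the example above:
print(tier_capacities(10, 4))  # [10, 100, 1000, 10000] TB, i.e. 10 TB, 100 TB, 1 PB, 10 PB
```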

How Many Tiers?

Just as the top tier can accelerate the tier below it, that tier can accelerate another below it, and so on. It’s possible to have five or six tiers, depending on the total amount of data the system has to support. With a top tier of ultra-high-speed SSDs, a second tier of high-capacity SSDs, a third tier of high-capacity hard drives and a fourth of tape or cloud storage (each 10 times the capacity of the one above it), the ultimate capacity of the system can reach exabytes (an exabyte is a million terabytes) without compromising the performance of the most-used data, all while remaining reasonably cost-effective. The model only breaks down if the last tier is near-line or offline, so that data is not always accessible.
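
As a rough sketch of why the economics work out, the snippet below computes a blended cost per terabyte for a four-tier layout like the one just described. The per-terabyte prices are placeholder assumptions for illustration only, not vendor or market figures.

```python
def blended_cost_per_tb(tiers):
    """tiers: list of (capacity_tb, cost_per_tb) pairs, fastest tier first."""
    total_capacity = sum(capacity for capacity, _ in tiers)
    total_cost = sum(capacity * cost for capacity, cost in tiers)
    return total_cost / total_capacity

# Hypothetical per-TB prices, purely for illustration:
layout = [
    (10,     500),  # ultra-high-speed SSD tier
    (100,    200),  # high-capacity SSD tier
    (1_000,   30),  # high-capacity hard drive tier
    (10_000,   5),  # tape or cloud tier
]
print(round(blended_cost_per_tb(layout), 2))  # ~9.45, dominated by the cheapest, largest tier
```

Because the largest tier dwarfs the others in capacity, the blended cost per terabyte ends up close to that of the cheapest medium, while the hot 10 percent still lives on the fastest SSDs.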

Big Data

Big data isn’t just big — it’s huge. Data sets have grown from a few terabytes to hundreds of terabytes or more. Furthermore, analyzing the data requires that search engines and other data tools be able to comb through it at high speed. This has led some organizations to all-flash arrays, but for many uses, an efficient tiered storage architecture can be just as fast at a far lower cost. The white paper “Big Data SSD Architecture: Digging Deep to Discover Where SSD Performance Pays Off” details the information needed to decide which option is best for your application, including how to identify pain points and which SSDs will be most useful.

Learn more about the various SSDs available and determine which can best optimize your storage performance.


Logan Harbaugh

Logan Harbaugh is an IT consultant and reviewer. He has worked in IT for over 20 years, and was a senior contributing editor with InfoWorld Labs as well as a senior technology editor at Information Week Labs. He has written reviews of enterprise IT products including storage, network switches, operating systems, and more for many publications and websites, including Storage Magazine, TechTarget.com, StateTech, Information Week, PC Magazine and Internet.com. He is the author of two books on network troubleshooting.
