Automated storage tiering with SSDs is all about cost vs. benefit. Generally, the fastest tiers of storage are much more expensive than slower tiers. Effectively tiering data from the fastest flash tiers to less-expensive storage tiers can reduce the overall costs of the system, and it also keeps the most important data on the best-performing storage.
Tier 0 is a generic description of flash added to a storage system that includes multiple tiers of hard drives, which are usually Tiers 1, 2 and 3. Now that a system can include multiple tiers of flash storage, it could be argued that we need a new set of tiers, such as Tiers 0.1, 0.2 and 0.3. Just as the original hard drive tiers made storage more flexible, multiple flash tiers can optimize a system with the fastest storage for apps that need it, and lower-performance, higher-volume tiers for large files.
Storage costs are typically measured in dollars per gigabyte. With high-end storage systems ranging from several dollars per gigabyte at the highest tier — to less than a tenth of a cent for the lowest tiers — it’s easy to see the potential cost implications of differing storage system architectures. Storage vendors add software features such as compression and deduplication to improve effective capacity and reduce the effective cost per gigabyte, but still face an uphill battle to match the costs of hard drive and cloud-based storage.
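To make the cost argument concrete, here is a minimal sketch of how a blended cost per gigabyte might be computed for a tiered system. The per-tier prices and capacity fractions below are hypothetical round numbers chosen for illustration, not vendor quotes.

```python
# Illustrative sketch: blended cost per gigabyte of a tiered system.
# Prices and capacity fractions are hypothetical, not actual quotes.

tiers = [
    # (name, fraction of total capacity, dollars per GB)
    ("NVMe flash", 0.10, 2.00),
    ("SATA flash", 0.20, 0.40),
    ("HDD",        0.70, 0.03),
]

# Weighted average cost across the tiers
blended = sum(fraction * price for _, fraction, price in tiers)
print(f"Blended cost: ${blended:.3f}/GB")  # vs. $2.00/GB for all-NVMe
```

Even with these rough numbers, the tiered system comes in at a fraction of the cost of an all-flash build while keeping a fast tier for hot data.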
While automated storage tiering with SSDs has been associated with high-end, all-flash storage arrays, the technology has trickled down to even inexpensive network attached storage (NAS) systems. For example, Synology offers the M2D17 adapter to support dual M.2 drives such as the Samsung 850 EVO to create a flash tier in a number of its NAS systems.
Understanding Flash Tiers
There are three common physical interfaces used for flash tiers: the SATA bus, the NVMe/PCIe bus and the memory bus. The NVMe/PCIe bus can either connect drives directly in a PCIe slot on the motherboard of a server or storage system, or via an M.2 PCIe slot. Both use the PCI Express protocol to communicate between drives and motherboards, and the NVMe standard for the fastest connection.
The newer, higher-speed interfaces exist because the flash technology used to create SSDs has outstripped the speed of the older interfaces. The SATA bus itself limits SSDs to around 550 MB/s, while the NVMe interface can support much higher speeds.
There are also SATA-based M.2 SSDs, which cannot achieve the speeds of NVMe SSDs, but offer as much performance as the SATA protocol can support: up to 540 MB/s on reads and 500 MB/s on writes.
With each newer technology, latency drops from milliseconds (ms) to microseconds (µs), transfer rates climb from tens to tens of thousands of megabytes per second, and input/output operations per second (IOps) rise accordingly. Together, these correspond to as much as a 1,000x improvement in performance for applications. Flash speeds have improved so much that the performance limit is no longer the flash technology itself, but the interface used to connect flash to the rest of the system.
The newest NVMe SSDs such as the Samsung PM1725a for enterprise customers can deliver performance of up to 6400 MB/s on sequential reads and 3000 MB/s on writes, and 1 million IOps on random reads and 170,000 IOps on random writes with a single drive when plugged into a 4x PCIe slot.
The Samsung SM963 can deliver up to 1400 MB/s on writes and 2100 MB/s on reads, with up to 430,000 IOps from a single M.2 drive, while an older hard drive may only reach 60 MB/s on reads and writes, 500 IOps, and latencies of around 10 milliseconds.
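The gap between these drive classes is easier to appreciate as ratios. The sketch below uses only the figures quoted above (sequential reads and random IOps) to compute rough speedups over a typical hard drive; it is a back-of-the-envelope comparison, not a benchmark.

```python
# Rough speedup comparison using the figures quoted in the article:
# sequential read throughput (MB/s) and random-read IOps per drive.

drives = {
    "HDD":             {"read_mbps": 60,   "iops": 500},
    "Samsung SM963":   {"read_mbps": 2100, "iops": 430_000},
    "Samsung PM1725a": {"read_mbps": 6400, "iops": 1_000_000},
}

hdd = drives["HDD"]
for name, d in drives.items():
    print(f"{name}: {d['read_mbps'] / hdd['read_mbps']:.0f}x throughput, "
          f"{d['iops'] / hdd['iops']:.0f}x IOps vs. HDD")
```

The IOps gap (up to 2,000x for the PM1725a) is even larger than the throughput gap, which is why flash tiers matter most for random-access workloads such as databases.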
Gaining Benefits for Storage
The real advantage of tiered storage is that a top tier that is only 10 to 20 percent of the size of the next, slower tier can speed the whole array up to nearly the performance of the fastest drives. After all, only 10 to 20 percent of the total data is in use at any given time. A hybrid system keeps the most commonly requested data on the fastest tier, while the data that’s accessed less often stays on a slower tier. In the end, the average speed of the array is mostly dictated by the fastest tier.
For example, a tiered system might have SM963 drives as its top tier, SATA SSDs such as the Samsung SM863a for the next tier, and HDDs for the least important tier, with each descending tier costing less and holding more capacity. This not only helps enterprises reduce costs when assembling data processing setups, but also ensures that resources are properly allocated.
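At its core, an auto-tiering policy ranks data by how hot it is and fills the fastest tier first. The function below is a deliberately simplified sketch of that idea; real arrays track access patterns continuously and migrate data in the background, but the placement logic follows the same shape.

```python
# Minimal sketch of an auto-tiering placement policy: rank blocks by
# access count and fill tiers from fastest (tier 0) to slowest.
# This is a simplification; real arrays migrate data continuously.

def place(blocks, tier_capacities):
    """blocks: {block_id: access_count}; tier_capacities: blocks per tier."""
    ranked = sorted(blocks, key=blocks.get, reverse=True)
    placement, start = {}, 0
    for tier, capacity in enumerate(tier_capacities):
        for block in ranked[start:start + capacity]:
            placement[block] = tier
        start += capacity
    return placement

# Hypothetical workload: the hottest block lands on the fastest tier.
hot = {"db-index": 900, "logs": 40, "archive": 2}
print(place(hot, [1, 2]))
```

In practice the same ranking runs periodically, demoting blocks that have cooled off and promoting ones that have become hot.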
Find the best storage solutions for your business by checking out our award-winning selection of SSDs for the enterprise.