Over the years, many mainstream strategies have emerged for avoiding enterprise data loss, one of which is a redundant array of inexpensive disks (RAID). However, a thorough examination of enterprise data storage reveals that solid state drives (SSDs) are increasingly indispensable to storage strategies.
What is RAID?
RAID is a storage technology that defines how a disk controller treats a pool of disks, governing how data is read from and written to the drives in that collection. With a few exceptions, the goal is to keep data integrity and availability front and center.
In the past, RAID installations had dual priorities: to protect data and to squeeze more performance out of a storage subsystem on a server or high-end workstation, as measured in input/output operations per second (IOPS). Some RAID schemes improve storage performance beyond what individual drives can deliver, others add capacity by stringing drives together, and still others protect data by mirroring it or adding parity across multiple drives. Redundancy reduces the overall capacity of the storage solution, because some data has to be mirrored or otherwise written twice, but it increases the ability of the whole storage stack to survive a drive failure.
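To make that tradeoff concrete, here is a minimal Python sketch (an illustration only, not tied to any particular controller) that estimates usable capacity and fault tolerance for a few common RAID levels, assuming identical drives:

```python
def raid_usable_capacity(level: int, drive_count: int, drive_tb: float) -> tuple[float, int]:
    """Return (usable capacity in TB, number of drive failures tolerated)."""
    if level == 0:        # striping only: full capacity, no redundancy
        return drive_count * drive_tb, 0
    if level == 1:        # mirroring: every drive holds the same copy
        return drive_tb, drive_count - 1
    if level == 5:        # single parity: one drive's worth of space reserved
        return (drive_count - 1) * drive_tb, 1
    if level == 6:        # double parity: two drives' worth of space reserved
        return (drive_count - 2) * drive_tb, 2
    raise ValueError(f"unsupported RAID level: {level}")

for level in (0, 1, 5, 6):
    usable, failures = raid_usable_capacity(level, drive_count=4, drive_tb=2.0)
    print(f"RAID {level}: {usable:.0f} TB usable, survives {failures} drive failure(s)")
```

With four 2TB drives, for example, RAID 0 offers 8TB with no protection, while RAID 6 offers 4TB and can lose two drives without losing data.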
RAID was important because storage in the past was almost exclusively mechanical hard drives, which failed with some regularity over the course of their service lives. With so many moving parts inside each drive, lifespans were shorter, and users could not predict when those parts would give out. Protecting data against such failures with RAID was the first step in an important battle.
RAID’s importance is shifting
As SSDs have become more mainstream and enterprises migrate their storage workloads, legacy spinning hard disk drives (HDDs) represent a shrinking share of the storage picture. After all, SSDs are one to two orders of magnitude faster than traditional media, and their maximum capacities now rival those of HDDs. And while SSDs can wear out, they last much longer than legacy disk drives, which makes their service lives far more predictable.
RAID’s place in enterprise storage has changed for the following reasons:
- SSDs are tremendously reliable — far more than legacy HDDs — making failure of any given drive in a RAID system much less likely.
- Software-defined storage options work differently from RAID and don't require costly RAID controllers with battery backups, yet they still protect against data loss when a drive fails.
- At a time when server-based computing is moving toward simplicity and abstraction, complicating the storage subsystem with dedicated RAID controllers adds cost and yet another potential point of failure.
Many people choose to use RAID 1 (drive mirroring) with SSDs, alongside software-defined storage solutions and a robust, layered approach to data backups. In this configuration, the hardware controller writes data identically to two separate disks, so if one drive goes bad, the data exists in its entirety on the other. You give up the capacity of the second drive, but you gain automatic protection against drive loss at essentially no cost to performance. Paired with backups that guard against other threats, such as accidental deletion and ransomware infections, it makes for a stronger overall data protection scheme.
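The mirroring behavior itself is simple enough to sketch. The toy Python below (hypothetical in-memory "drives," not a real controller implementation) shows why a RAID 1 set keeps serving data after one member fails:

```python
class Raid1Mirror:
    """Toy RAID 1: every write goes to both member drives."""

    def __init__(self) -> None:
        self.drive_a: dict[int, bytes] = {}   # hypothetical member drives,
        self.drive_b: dict[int, bytes] = {}   # modeled as block -> data maps

    def write(self, block: int, data: bytes) -> None:
        # The controller duplicates each write to both members.
        self.drive_a[block] = data
        self.drive_b[block] = data

    def read(self, block: int, a_failed: bool = False) -> bytes:
        # If one member fails, the surviving copy still serves every block.
        return self.drive_b[block] if a_failed else self.drive_a[block]

mirror = Raid1Mirror()
mirror.write(0, b"customer-db chunk")
assert mirror.read(0, a_failed=True) == b"customer-db chunk"  # survives losing drive A
```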
SSDs in the future
As storage becomes increasingly dense (consider the terabytes upon terabytes a cloud provider must house), vendors are beginning to think about other types of redundancy, including hybrid flash arrays and multinode solutions that add redundancy not only at the drive level but also at the individual flash chip level. This removes the RAID controller as a point of failure. There are also technologies like erasure coding, which splits data and writes it redundantly across different chips, as well as new flavors of RAID designed with modern components in mind.
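As a rough illustration of the erasure coding idea, the Python sketch below uses the simplest possible code, a single XOR parity shard; production systems use stronger codes such as Reed-Solomon, which are not shown here:

```python
def xor_parity(shards: list[bytes]) -> bytes:
    """Compute a parity shard as the XOR of equal-length data shards."""
    parity = bytearray(len(shards[0]))
    for shard in shards:
        for i, byte in enumerate(shard):
            parity[i] ^= byte
    return bytes(parity)

def rebuild_missing(surviving: list[bytes], parity: bytes) -> bytes:
    """Reconstruct a single lost shard from the survivors plus the parity shard."""
    return xor_parity(surviving + [parity])

data_shards = [b"AAAA", b"BBBB", b"CCCC"]       # data spread across three chips/devices
parity = xor_parity(data_shards)                # parity written to a fourth
recovered = rebuild_missing([data_shards[0], data_shards[2]], parity)
assert recovered == data_shards[1]              # the "failed" shard is reconstructed
```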
There is also the concept of differential RAID, a strategy for SSDs in a RAID set that tracks how worn each drive is. The controller uses this information to direct more activity to newer drives and less to older ones, with the goal of ensuring that all drives do not hit unrecoverable data errors simultaneously. This really comes into play as drives fail and are replaced with hot spares.
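A hypothetical sketch of that wear-aware placement idea might look like the Python below, which weights write placement by each drive's remaining endurance (the drive names and wear figures are made up for illustration):

```python
import random

def pick_drive(wear_pct: dict[str, float]) -> str:
    """Weight drive selection by remaining rated endurance (100 - wear %)."""
    drives = list(wear_pct)
    remaining = [max(100.0 - wear_pct[d], 1.0) for d in drives]
    return random.choices(drives, weights=remaining, k=1)[0]

# Made-up wear levels: ssd2 is a freshly installed hot spare.
wear = {"ssd0": 80.0, "ssd1": 45.0, "ssd2": 5.0}
placements = [pick_drive(wear) for _ in range(10_000)]
for drive in wear:
    print(drive, placements.count(drive))   # the newest drive absorbs the most writes
```

Because the spare starts with the most endurance remaining, it absorbs the bulk of new writes, so the members of the set age at staggered rates rather than in lockstep.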
Enterprise-class SSDs from Samsung are designed for long-term service. These drives are built with Samsung V-NAND technology, making them optimal for 24/7 data center operations under heavy workloads. Samsung data center SSDs, including Samsung PM9A3 NVMe® SSD and Samsung PM893 SATA SSD, employ end-to-end protection to maintain data integrity across the entire data transfer path.
The Power Loss Protection (PLP) feature prevents data corruption and data loss during an unexpected shutdown. When power is lost without warning, cached data in a storage device's internal DRAM buffers can be lost. With PLP, however, when a power failure is detected, Samsung SSDs immediately draw stored energy from their PLP capacitors, providing enough time to flush the cached data from DRAM to flash memory and ensuring no data is lost.
In addition, Samsung PM9A3 and PM893 SSDs support Self-Monitoring, Analysis and Reporting Technology (SMART), which allows IT staff to inspect the health of their SSDs and detect potential failures before they occur.
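As a hedged example of how IT staff might automate such checks, the Python sketch below shells out to smartmontools' smartctl with JSON output (it assumes smartctl 7.0 or later is installed and /dev/nvme0n1 is the target device; the exact field names can vary by drive and tool version):

```python
import json
import subprocess

def nvme_health(device: str = "/dev/nvme0n1") -> dict:
    """Read a few key NVMe health fields from smartctl's JSON output."""
    result = subprocess.run(
        ["smartctl", "--json", "-a", device],
        capture_output=True, text=True, check=False,  # smartctl uses nonzero exits for warnings
    )
    report = json.loads(result.stdout)
    log = report.get("nvme_smart_health_information_log", {})
    return {
        "percentage_used": log.get("percentage_used"),    # rated endurance consumed
        "media_errors": log.get("media_errors"),          # uncorrectable media errors
        "critical_warning": log.get("critical_warning"),  # nonzero calls for attention
    }

print(nvme_health())
```

Polling these fields on a schedule and alerting when endurance consumption or media errors trend upward lets administrators replace a drive on their own timetable rather than after a failure.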
All of this combines to make drive loss (and, therefore, data loss) a remote occurrence, even as SSDs expand their role as an enterprise workhorse.
Learn more about the intricacies of SSDs and the importance of over-provisioning with this free white paper.