Enterprise storage has come a long way in computing’s relatively brief history. Solid state drives (SSDs) have played a major part in the evolution of that storage. So what have those changes created in terms of components, benefits and applications? Examining the history of SSDs helps paint a picture of what the future will hold.

What Is SSD Storage?

SSD storage is the use of non-volatile memory to store data on a long-term basis, replacing magnetic media. While traditional hard disk drives (HDDs) use magnetic platters spinning at high speeds, read from and written to by drive heads, SSDs have no moving parts and rely entirely on flash memory to store data, making them much faster at reading and writing, both ad hoc and in sustained operation. Today's SSDs don't require a constant power source to preserve their data, making their reliability, from a data integrity standpoint, on par with a traditional HDD's.

When Were SSDs First Available?

The use of solid-state memory for longer-term storage has been around since the 1950s, but those solutions generally appeared in mainframes or larger minicomputers, and because they used volatile memory, they required battery backups to preserve their contents whenever the apparatus was not powered by the host.

Commercial SSDs similar to those available today first entered the market in the early 1990s; in 1991, a 20MB SSD sold for $1,000. Prices have obviously come down since then, and performance has improved as successive PC bus interfaces have pushed data transfer rates far beyond the point where traditional spinning media saturate.

Storing Data

SSDs depend on a grid of electrical cells in NAND flash memory to store data. They also include an embedded processor known as the controller, which runs firmware-level code to operate the drive and bridge the storage medium to the host computer over the interface bus.


Within the memory medium itself, the cells are grouped into pages, where data is stored, and pages are grouped into blocks. A brand new SSD fresh from the factory contains blocks full of entirely empty pages, and SSDs write new data only to empty pages within these blocks. As new writes accumulate on the drive, those fresh, contiguous blank pages eventually run out, which requires some intelligent management of the partially used blocks on the part of the drive. When the controller detects that many pages within a block no longer hold valid data, it copies the block's remaining valid pages into memory, erases the whole block, and then rewrites only the valid data back into the block, leaving the reclaimed pages empty.

This is why SSDs are blazing fast when they are mostly empty but tend to slow down as they fill: this process of finding a block with reclaimable space, copying out its valid data, erasing it and rewriting it has to take place before new data can be written to a well-used drive. In practice, though, this performance degradation takes years of very heavy usage to become noticeable.
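The page-and-block bookkeeping described above can be sketched in a few lines of Python. This is a toy model, with made-up sizes and names rather than real drive firmware, but it shows the write-to-empty-pages rule and the erase-and-rewrite cycle:

```python
PAGES_PER_BLOCK = 4  # toy value; real drives use far more pages per block

class Block:
    def __init__(self):
        # Each page is None (empty), ("valid", data), or "stale"
        self.pages = [None] * PAGES_PER_BLOCK

class ToySSD:
    def __init__(self, num_blocks=2):
        self.blocks = [Block() for _ in range(num_blocks)]

    def write(self, data):
        # SSDs write only to empty pages; if none remain, reclaim space first.
        for block in self.blocks:
            for i, page in enumerate(block.pages):
                if page is None:
                    block.pages[i] = ("valid", data)
                    return
        self._garbage_collect()
        self.write(data)  # retry now that a block has been reclaimed

    def invalidate(self, data):
        # Deleted or overwritten data leaves stale pages behind.
        for block in self.blocks:
            for i, page in enumerate(block.pages):
                if page == ("valid", data):
                    block.pages[i] = "stale"

    def _garbage_collect(self):
        # Pick the block with the most stale pages, copy its valid pages
        # out, erase the whole block, then write the valid data back.
        victim = max(self.blocks,
                     key=lambda b: sum(p == "stale" for p in b.pages))
        survivors = [p for p in victim.pages if isinstance(p, tuple)]
        victim.pages = [None] * PAGES_PER_BLOCK    # block erase
        for i, page in enumerate(survivors):
            victim.pages[i] = page                 # rewrite valid data
```

Filling a one-block drive, invalidating some data and then writing again triggers the erase-and-rewrite cycle, which is exactly the extra work an aging, well-filled drive performs before many of its writes.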

Caches and Buffers in SSDs

Traditional HDDs include a small amount of memory within the drive hardware itself, typically 8MB, 16MB or a little more, to improve the read and write performance the user perceives. If data the user wants to write can fit in the high-performing cache memory, the drive stores it there temporarily, reports back to the operating system that the operation is complete, and then handles transferring the data from the cache to the much slower magnetic media in the background. This doesn't always help, since only a very small portion of the drive's total data can be cached at any one time, and if requested data isn't in the cache, it has to be read from the slower physical medium.

SSDs apply the same concept, except they include DRAM chips alongside the controller on the drive itself. These can range from 64MB all the way up to gigabytes, and they act as a buffer that extends the life of the drive and serves short bursts of read and write requests a bit faster than the NAND itself would allow. These caches are important in enterprise storage applications, including heavily used file servers and database servers, but are of little import to typical desktop and laptop users.
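The write-back idea behind both HDD and SSD caches, acknowledge from fast memory now and persist to the slow medium later, can be sketched as follows. This is a simplified model; the class, capacity and dictionary-based "media" are illustrative, not any real drive's design:

```python
class WriteBackCache:
    """Toy write-back buffer: acknowledge writes from fast memory,
    drain them to the slow medium later."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = {}    # address -> data held in fast (DRAM-like) memory
        self.backing = {}  # stands in for the slow medium (platters or NAND)

    def write(self, addr, data):
        self.cache[addr] = data  # fast path: report completion immediately
        if len(self.cache) > self.capacity:
            self.flush()         # slow path only once the buffer fills

    def read(self, addr):
        if addr in self.cache:           # hit: served from fast memory
            return self.cache[addr]
        return self.backing.get(addr)    # miss: read the slow medium

    def flush(self):
        self.backing.update(self.cache)  # drain buffered writes
        self.cache.clear()
```

The trade-off is the same one real drives make: the host sees low write latency because completion is reported before the data reaches the slow medium, at the cost of buffered data being vulnerable until it is flushed.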

Applications for SSDs

The benefits of using SSDs in production storage applications are numerous. As mentioned, since SSDs have no moving mechanical components, they use less power, are more resistant to drops or rough handling, operate nearly silently and read more quickly and with less latency. In addition, since platters don’t need to spin, there is no need to wait for the physical parts to ramp up to operating speed, reducing a performance hit that hard drives cannot escape. They’re also lightweight, making them ideal for laptops and small form factor machines, as well as for high-capacity storage area networks in a smaller footprint.

Because of these advantages, SSDs are popular in the following environments:

  • In database servers, both to host the database engine and to host the database itself for quick access
  • As a “hot” tier in a stratified network storage archive, where frequently accessed data can be retrieved and rewritten very quickly
  • In situations where physical shocks are a possibility, and thus HDDs present an untenable risk to system reliability
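The "hot" tier idea from the list above can be illustrated with a small promotion policy: data lands on the slow tier and moves to the SSD tier once it proves popular. The threshold and tier names here are assumptions for illustration, not a real storage product's policy:

```python
PROMOTE_AFTER = 3  # reads before an object is promoted to the SSD tier

class TieredStore:
    def __init__(self):
        self.ssd = {}   # hot tier: fast, small
        self.hdd = {}   # cold tier: slow, large
        self.hits = {}  # per-key read counts

    def put(self, key, value):
        self.hdd[key] = value  # new data lands on the cold tier

    def get(self, key):
        if key in self.ssd:
            return self.ssd[key]             # hot path: fast tier
        self.hits[key] = self.hits.get(key, 0) + 1
        if self.hits[key] >= PROMOTE_AFTER:
            self.ssd[key] = self.hdd.pop(key)  # promote frequently read data
            return self.ssd[key]
        return self.hdd[key]                 # cold path: slow tier
```

Real tiering systems also demote cooled-off data back to the slow tier and cap the hot tier's size, but the promotion mechanic is the core of the idea.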

What Does the Future Hold?

In the short term, it is fair to expect higher capacity SSDs to become more prevalent in the industry, and that the cost per gigabyte for SSDs as compared to traditional hard drives will further decrease as the market share for SSDs rises. New form factors that increase the number of parallel data transmission lanes between storage and the host bus will emerge to increase speed, and the quality of the NAND storage medium itself — the physical layer of cells that holds the blocks and pages — will improve, offering better reliability and performance, especially as a drive ages from new to used.


Jonathan Hassell

Jonathan Hassell is an award-winning writer specializing in enterprise information technology, including administration, security, and mobile. His work has appeared in Computerworld, CIO.com, Network World, and dozens of other publications.
