Enterprises evaluating their storage infrastructure may find the landscape has shifted, whether they are looking to replace or improve their existing options. Until recently, hard disk drives (HDDs) and solid state drives (SSDs) had some pretty clear-cut differences. While SSDs were (and still are) faster, quieter and more power-efficient, HDDs offered higher capacities, longer lifespans and lower cost. Those differences are eroding, however, as SSD storage capacity and longevity continue to improve.
Delivering More Space
SSDs with V-NAND technology continue to push the boundaries of capacity. Currently, 2TB SSDs are the industry benchmark, but SSD storage capacity is expected to surpass 16TB by 2016. For the first time, SSDs will be available in higher storage capacities than HDDs. New M.2 SSDs, which feature a reduced footprint, will allow more SSDs to fit in less space while improving performance: higher input/output operations per second (IOPS), lower latency and greater throughput. This gives each server more storage with better performance, so data can be processed faster and businesses can make use of their collected data in real time. Regardless of form factor, SSDs can offer 10 times the performance of HDDs.
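As a rough illustration of what that performance gap means in practice, the back-of-envelope sketch below compares how long a full sequential scan of a dataset would take on HDD versus SSD storage. The throughput figures are illustrative assumptions, not measurements of any particular drive:

```python
# Back-of-envelope scan-time comparison; throughput numbers are
# illustrative assumptions, not vendor specifications.

TB = 10**12  # bytes

def scan_seconds(dataset_bytes, mb_per_sec):
    """Time to read the whole dataset sequentially at a given throughput."""
    return dataset_bytes / (mb_per_sec * 10**6)

dataset = 2 * TB            # a 2TB data set
hdd_throughput = 150        # MB/s, typical sequential HDD (assumed)
ssd_throughput = 1500       # MB/s, 10x HDD, per the figure above

hdd_time = scan_seconds(dataset, hdd_throughput)
ssd_time = scan_seconds(dataset, ssd_throughput)

print(f"HDD full scan: {hdd_time / 3600:.1f} h")   # roughly 3.7 h
print(f"SSD full scan: {ssd_time / 3600:.1f} h")   # roughly 0.4 h
```

Even this crude model shows why active, frequently scanned data gravitates toward flash while HDDs drift toward colder roles.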
Scale-Out vs. Scale-Up
Two types of architecture dominate the server market. Scale-up architecture involves large servers with two or four sockets and up to 18 CPU cores per socket, which resemble the mainframes of old. Scale-out architecture takes a different approach, utilizing dozens or even hundreds of inexpensive, low-power nodes, with distributed or clustered applications spread across multiple nodes.
Scale-out architecture is more resilient and scalable, making it suitable for large, business-critical systems, while increasing both the overall amount of data that can be processed and the number of simultaneous operations that can be carried out on that data.
New blade systems and hyperconverged systems can put dozens of nodes in one box. With scale-out architectures, direct-attached storage is more efficient, since each node can directly access storage rather than using network bandwidth, which is better left for internode communications. M.2 SSDs are ideal for these nodes, offering a small footprint, high performance and very low power consumption, while SSD storage capacity continues to improve.
Changing Storage Tiers for High-Value Data
In the large servers that typify scale-up architectures, storage was generally concentrated in storage area networks (SANs). Data was typically processed in high-performance Tier 1 storage, consumed in Tier 2 midrange storage and stored in long-term, inexpensive Tier 3 storage, which might have been separate SAN systems or tiers within a single SAN system.
In a scale-out architecture, on the other hand, data streams in from millions of smart devices. In these environments, processing data at high speeds requires high-performance storage located near the CPUs that process it, reducing network bottlenecks between CPU and storage. SSDs are ideal for this kind of processing. Rather than being consumed once by an application and then stored after processing, data will become more active and processing more continuous. This will enable businesses to respond more quickly to customer demands, whether monitoring usage of electrical power or tracking sales through vending machines.
All-Flash Data Centers
As data processing becomes more critical to gaining and maintaining a business advantage, big data systems such as Hadoop grow in importance. Data centers will need to become more responsive and able to support data mining of both raw data in ‘data lakes’ and big data in more structured formats. In either case, the higher performance of SSDs will inevitably push HDDs to an offline or near-line function, more appropriate for archiving and backups than storage of active data.
As storage vendors continue to improve deduplication technologies to give flash storage greater effective capacities, the cost gap between flash and HDDs narrows. Distributed file systems like the Hadoop Distributed File System (HDFS) maintain multiple copies of data across many nodes of a cluster. A Hadoop cluster will perform better in an all-flash environment, without the bottlenecks of HDDs slowing down the creation of those copies. Using SSDs in cluster nodes will reduce the time required to analyze data, or increase the volume of data that can be analyzed in real time.
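For clusters that mix flash and disk rather than going all-flash at once, HDFS itself (from version 2.6 onward) can be told which DataNode volumes are SSDs and which data should live on them. A minimal sketch, assuming hypothetical mount points /mnt/ssd0 and /mnt/hdd0:

```xml
<!-- hdfs-site.xml: tag each DataNode volume with its storage type.
     The mount points shown are placeholder examples. -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>[SSD]/mnt/ssd0/dfs/dn,[DISK]/mnt/hdd0/dfs/dn</value>
</property>
```

A hot directory can then be pinned to flash with the built-in ALL_SSD policy, e.g. `hdfs storagepolicies -setStoragePolicy -path /hot-data -policy ALL_SSD`, so all replicas of active data land on SSD volumes while colder paths fall back to disk.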
Enterprise SSDs in the Data Center
As endurance, capacity and cost of enterprise-class SSDs continue to improve, and deduplication technologies are enabled by the higher performance of SSDs, there is less and less room for hard drives in high-performance data centers. The demands of big data, whether Hadoop, Oracle or MongoDB, are best met with the high performance of SSDs, such as the Samsung PM863 Series and SM863 Series, while the large numbers of nodes in database clusters make their lower power consumption extremely attractive.
Increasing the amount of high-performance storage available to each node of a clustered environment, such as a Hadoop cluster, can improve the performance of the cluster as a whole and reduce the number of nodes required, since storage per node is one of the factors determining cluster size. In addition, the lower power requirements, lower noise and higher performance inherent to SSDs enable many manufacturers to improve their product offerings.
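The point about storage per node driving cluster size can be made concrete with a small estimate. The sketch below computes how many nodes are needed to hold a dataset under HDFS-style 3x replication, and how denser per-node storage shrinks that count. The dataset size, capacities and headroom fraction are assumed figures for illustration, not a sizing tool:

```python
import math

def nodes_required(dataset_tb, per_node_tb, replication=3, headroom=0.75):
    """Minimum node count to hold a replicated dataset.

    headroom: fraction of raw capacity usable for data; the rest is
    reserved for temp space, OS, etc. (an assumed figure).
    """
    raw_needed = dataset_tb * replication        # total capacity incl. replicas
    usable_per_node = per_node_tb * headroom
    return math.ceil(raw_needed / usable_per_node)

# A 100TB dataset with 3x replication:
print(nodes_required(100, per_node_tb=8))    # 8TB-per-node cluster -> 50 nodes
print(nodes_required(100, per_node_tb=16))   # 16TB-per-node cluster -> 25 nodes
```

Doubling per-node capacity halves the node count here, which is where the power, space and licensing savings of denser SSD nodes come from.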
Read the whitepaper below to learn more about how increased capacity and durability expand SSD applications in the data center.