
The need for high performance
As more users access the same resources, response times grow and applications take longer to complete their work. The performance of traditional storage has remained essentially flat for years: a single HDD delivers roughly 150 MB/s with response times of several milliseconds. With the introduction of flash media and protocols such as non-volatile memory express (NVMe), a single SSD can easily achieve several gigabytes per second with sub-millisecond response times; SDS can leverage these newer technologies to increase performance and significantly reduce response times.
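To make the gap concrete, a quick back-of-the-envelope calculation shows how long it takes to read a 100 GB dataset at the throughput figures above. The HDD number (150 MB/s) comes from the text; the NVMe figure of 3,500 MB/s is an illustrative assumption for a typical drive.

```python
# Illustrative transfer-time comparison for a 100 GB dataset.
# 150 MB/s is the HDD figure quoted in the text; 3,500 MB/s is an
# assumed throughput for a typical NVMe SSD.

DATASET_MB = 100 * 1000   # 100 GB expressed in MB
HDD_MBPS = 150            # single HDD sequential throughput
NVME_MBPS = 3500          # single NVMe SSD throughput (assumption)

hdd_seconds = DATASET_MB / HDD_MBPS    # ~667 s, about 11 minutes
nvme_seconds = DATASET_MB / NVME_MBPS  # ~29 s

print(f"HDD:  {hdd_seconds:.0f} s")
print(f"NVMe: {nvme_seconds:.0f} s")
```

The same arithmetic explains why aggregating many such devices behind an SDS layer multiplies the achievable throughput rather than leaving it capped at a single spindle's speed.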
Enterprise storage is designed to handle multiple concurrent requests from hundreds of clients, each trying to get its data as fast as possible. But once the performance limits are reached, traditional monolithic storage starts slowing down, causing applications to fail because requests are not completed in time. Increasing the performance of this type of storage comes at a high price and, in most cases, cannot be done while the storage is still serving data.
The need for increased performance comes from the increased load on storage servers; with the explosion in data consumption, users store far more information and expect it back far faster than before.
Applications also require data to be delivered as quickly as possible. Consider the stock market, where data is requested many times a second by thousands of users while thousands more are continuously writing new data. If a single transaction is not committed in time, traders cannot make the right buying or selling decisions because they are shown stale information.
Architects face this problem when designing a solution that must deliver the performance the application needs to work as expected. Taking the time to size a storage solution correctly makes the entire process flow more smoothly, with less back and forth between design and implementation.
Storage systems such as GlusterFS can serve thousands of concurrent users without a significant drop in performance, because data is spread across multiple nodes in the cluster. This approach scales considerably better than funneling every request to a single storage location, as with traditional arrays.
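The spreading of data can be sketched with a much-simplified hash-placement scheme, in the spirit of GlusterFS's distributed hash translator (DHT). The real DHT assigns hash ranges per directory and brick; here, as an illustration only, a file name is hashed to one of a set of hypothetical nodes, so reads for different files land on different servers instead of queueing on one array.

```python
import hashlib

# Simplified sketch of hash-based file placement (not the actual
# GlusterFS DHT algorithm). Node names are hypothetical.
NODES = ["node1", "node2", "node3", "node4"]

def node_for(path: str) -> str:
    """Deterministically pick the node that stores the given file."""
    digest = hashlib.md5(path.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(NODES)
    return NODES[index]

# Different files hash to different nodes, so a thousand clients reading
# a thousand files spread their I/O across the whole cluster.
for path in ["/data/a.log", "/data/b.log", "/data/c.log"]:
    print(path, "->", node_for(path))
```

Because the placement is computed from the path itself, any client can locate a file without consulting a central metadata server, which is one reason this style of distribution avoids the single-point bottleneck of traditional arrays.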