Latency vs. Capacity storage

Latency storage vs. Capacity storage – A Podcast by Intel and StorPool

A couple of days ago, Boyan Krosnov, Co-founder and Chief of Product at StorPool, joined an episode of Conversations in the Cloud, a weekly podcast organized by Intel.

About Conversations in the Cloud

IT leaders driving the future of software-defined infrastructure share their knowledge and thoughts on current market trends. The podcast series also features members of the Intel Builders programs, along with Intel experts and industry analysts. They provide valuable information on delivering, deploying, and managing cloud computing, technology, and services in your data center or enterprise.

Missed the podcast? Here are the most important topics covered in the discussion “Is Latency the Key to Storage?“.

Capacity storage vs. latency storage

According to many storage experts, the capacity-vs.-latency distinction is a key dimension along which storage systems differ.

Capacity storage is vast in, well, capacity, but less demanding in terms of performance. It is usually in the petabyte range and supports systems with low performance requirements – systems that are OK with latencies of 1+ millisecond, sometimes hundreds of milliseconds. Capacity storage systems are found in archive, data-retention, and time-series database use cases, and in Internet of Things, video, photo, and similar applications. These systems are sometimes built across different geographic regions, as they can tolerate the high latency that is inevitable when data must travel long distances. It is the law of physics.

Latency-driven storage is usually found within the borders of a single data center. It supports active applications that demand fast performance, such as databases, virtual machines, VDI (desktop virtualization), and OLTP (online transaction processing) systems. All of these applications demand latencies from less than 1 millisecond down to a few tens of microseconds.

The lower the latency, the faster the application.

A fact that is not well understood is that latency is probably the most important metric of a storage system – more important than IOPS for the majority of use cases. Since many applications perform inter-dependent storage operations, the lower the latency of the storage system, the faster the application.

For example, a database will issue a read or write operation to the storage system, wait (latency) to get the data, and then use it – join, merge, etc. – to produce a result. Only then will this result be used to issue the next query. Therefore a system delivering 200 µs latency will be 10 times faster than a traditional SAN with 2 ms latency.
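The arithmetic behind that 10x claim can be sketched with a minimal back-of-the-envelope model: when operations are serially dependent (each must wait for the previous result), total run time is roughly the number of operations times the per-operation latency. The operation count below is a hypothetical figure, not from the podcast.

```python
# Model of a serially dependent workload: each storage operation
# must wait for the previous one's result, so total run time is
# approximately ops * per-operation latency.

def run_time_seconds(ops: int, latency_s: float) -> float:
    """Time to complete `ops` inter-dependent storage operations."""
    return ops * latency_s

OPS = 100_000  # hypothetical number of dependent queries in a batch job

san_time = run_time_seconds(OPS, 2e-3)     # traditional SAN: 2 ms per op
fast_time = run_time_seconds(OPS, 200e-6)  # low-latency system: 200 us per op

print(f"2 ms SAN:       {san_time:.0f} s")   # 200 s
print(f"200 us system:  {fast_time:.0f} s")  # 20 s
print(f"speed-up:       {san_time / fast_time:.0f}x")  # 10x
```

Note that IOPS never enters the calculation: as long as the operations form a dependency chain, per-operation latency alone sets the pace.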

In the episode of Conversations in the Cloud, Boyan Krosnov talked about the storage landscape and clarified the difference between capacity-driven and latency-driven storage systems.

Software-defined storage and software-defined data centers

Boyan started with a short retrospective of StorPool’s history, dating back to 2011, when the idea for developing StorPool came to his team based on market feedback: existing storage systems were slow, expensive, and not scalable enough. Initially, StorPool’s main mission was to serve companies that wanted to build a public cloud and had storage challenges around the service they were building. Later, it covered private clouds as well.

StorPool developed a new type of storage software from the ground up – one that is scale-out, high-performance, and extremely low in latency. In addition, it is extremely efficient in terms of server resources, so it can run on a standard server alongside applications (hyper-converged).

In the following years the market matured, and terms like “software-defined storage” and “software-defined data center” were coined and became established.

Most software-defined storage users nowadays are companies that build public or private clouds. They look for a flexible, scalable storage system suited to their needs – one that also provides low latency and high performance.

If a company is running hundreds of VMs, a storage system that delivers hundreds of thousands of IOPS and microsecond-level latency is a must. A best-of-breed software-defined storage solution delivers just that, at a much lower price point than traditional SANs or all-flash arrays.
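Where "hundreds of thousands of IOPS" comes from can be sketched with simple sizing arithmetic; the per-VM figure below is a hypothetical assumption for illustration, not a number from the podcast.

```python
# Rough sizing sketch: aggregate IOPS demand of a VM fleet
# sharing one storage system (hypothetical figures).

def aggregate_iops(vm_count: int, iops_per_vm: int) -> int:
    """Total IOPS the shared storage must sustain at peak."""
    return vm_count * iops_per_vm

# e.g. 500 VMs, each averaging 500 IOPS at peak
total = aggregate_iops(500, 500)
print(total)  # 250000 - hundreds of thousands of IOPS
```

In practice, demand is bursty and not all VMs peak at once, but the order of magnitude explains why shared storage for VM clouds is sized in the hundreds of thousands of IOPS.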

Is Latency the Key to Storage?

Boyan Krosnov explained that the IOPS numbers advertised by many storage system vendors actually have little to do with the application performance companies will get. It is the latency of storage operations that is tightly related to the application performance you are going to get.

Therefore, if you design for low latency, you can achieve latency levels that are a fraction of what a system not designed for it can do. To summarize, Boyan noted that a standard StorPool system, built with standard hardware, can deliver an impressive 200 microseconds under load. For a shared storage system, this is something amazing.

The full podcast can be found below:

Intel - full podcast

In conclusion – when you are buying a storage system, focus on a low-latency architecture. This will deliver qualitatively better results for your users and applications.

 

Related article

Performance Test: StorPool “All-SSD” shared storage system
