Fast Kubernetes Persistent Storage

StorPool provides fast and reliable persistent storage (persistent volumes) for large bare-metal Kubernetes clusters through its Kubernetes CSI driver.

High-Performance Block Storage for Kubernetes

When deploying containers with Kubernetes or containerized applications, companies eventually realize that they need persistent storage. They need to provide fast and reliable storage for databases and other datastores used by the containers.

StorPool, a leading software-defined storage platform, is integrated with Kubernetes via CSI (Container Storage Interface). Any company can easily deploy applications and microservices in containers and ensure their performance, scalability, and availability with StorPool’s fast and reliable storage for Kubernetes.

StorPool’s Integration with Kubernetes

StorPool provides persistent volumes for Kubernetes through a K8s CSI driver.

StorPool’s Kubernetes CSI driver allows users to create persistent volumes in the StorPool cluster and consume them from Kubernetes. It can serve as the default storage for all data, not only for databases or other stateful applications and microservices.

There are three ways to deploy Kubernetes:

  • The first is to run K8s on bare-metal nodes, using the StorPool CSI driver.
  • The second is to use virtual machine instances as the Kubernetes nodes. In this case you do not need the StorPool-Kubernetes integration; instead, StorPool integrates with the underlying virtualization platform (for example OpenStack or VMware).
  • The third is to run K8s in a public cloud (AWS, GCP, Azure, etc.). In this case you would typically use the native block storage service of the public cloud.


The StorPool integration is designed for bare-metal servers. The StorPool CSI driver allows on-premises Kubernetes clusters to use StorPool as persistent storage, providing K8s with persistent volumes that are stored in the StorPool cluster. With dynamic provisioning, volumes are created on demand and can be attached to and detached from different Kubernetes nodes (VMs or bare-metal hosts running the containers) as needed. This integration makes the best use of StorPool’s block-level storage. To run a Kubernetes cluster in a virtualized environment, use the cloud management software (such as OpenStack or VMware) to attach and detach volumes to the VMs.

StorPool (on bare metal) supports persistent volume claims with the ReadWriteOnce and ReadOnlyMany access modes, like most block storage drivers (iSCSI, Amazon EBS, Ceph RBD, etc.). For ReadWriteMany, the options are limited, with NFS being the most common.
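As a sketch of how this looks on the Kubernetes side, a cluster administrator defines a StorageClass that points at the CSI driver, and applications then request storage through an ordinary PersistentVolumeClaim. The provisioner name and class name below are illustrative placeholders, not the driver’s actual identifiers; consult the StorPool CSI documentation for the real values.

```yaml
# StorageClass backed by the StorPool CSI driver.
# "csi.storpool.example" is a placeholder provisioner name.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storpool-fast
provisioner: csi.storpool.example
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
# A claim that triggers dynamic provisioning of a StorPool-backed volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce   # block storage: writable from one node at a time
  storageClassName: storpool-fast
  resources:
    requests:
      storage: 100Gi
```

With `WaitForFirstConsumer`, the volume is provisioned only when a pod using the claim is scheduled, which lets the scheduler pick the node first.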

Benefits of Using Persistent Storage for Kubernetes

Kubernetes Persistent Storage and Mixing Other IT Stacks

In addition to providing persistent volumes for Kubernetes, StorPool also supports multiple IT stacks. It can provide persistent shared storage – from one storage system – to several IT stacks (IT infrastructure platforms). The list of supported cloud orchestration systems is one of the widest in the industry and includes OpenStack, VMware, Hyper-V, OnApp, OpenNebula, CloudStack, and proprietary Cloud Management Systems.


Learn more about the integration in our paper
“Persistent Storage for Kubernetes with StorPool”!

Use Cases and Usage

Many of our MSP clients provide managed services to their customers. A recent example is a managed database service similar to Amazon RDS. This is achieved by using Kubernetes with the Percona or KubeDB database operators. The solution is built on the MSP’s own infrastructure, so their Kubernetes cluster is integrated with each underlying subsystem, one of which is StorPool.

StorPool has native integration with Kubernetes (introduced in StorPool’s v18.02 release) through which persistent volumes are provided to the pods. The setup consists of an operational StorPool storage cluster, a redundant network layer, and several bare-metal Kubernetes nodes with the StorPool client (initiator) installed. Each Kubernetes persistent volume is backed by a StorPool volume attached as a block device to the node where the pod that requested it is running.
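The workflow above can be sketched with a pod that consumes a claim: when the pod is scheduled, the CSI driver attaches the backing StorPool volume as a block device on that node and mounts it into the container. The claim name and image here are illustrative, not taken from a real deployment.

```yaml
# Pod consuming a PersistentVolumeClaim; names are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: postgres
spec:
  containers:
    - name: postgres
      image: postgres:16
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: db-data   # a PVC bound to a StorPool-backed volume
```

If the pod is rescheduled to another node, the same volume is detached and reattached there, so the data follows the pod.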

Container Challenges

Containers emerged as a way to make software portable. A container bundles all the packages needed to run a service, and the isolation it provides makes containers extremely portable and easy to use in development: a container can be moved from development to test or production with no or relatively few configuration changes.

Historically, Kubernetes was suitable only for stateless services. However, most applications work with data that must persist, which led to the creation of persistent storage for Kubernetes. Implementing persistent storage for containers is one of the top challenges for Kubernetes administrators, DevOps teams, and cloud engineers. Containers may be ephemeral, but more and more of their data is not, so you need to ensure it survives container deletion or hardware failure.