StorPool version 19 (v19) has been released. The new version brings numerous improvements and makes the leading block storage SDS even better. Notable additions include support for large-scale deployments, Windows Cluster Shared Volumes (CSV), lower latency on NVMe storage, and improved bare-metal Kubernetes support.
- Multi-Cluster Support – larger-scale deployments (10 PB and up) in the same datacenter.
Adds the ability to scale the storage system within one physical location by creating a cluster of clusters and moving volumes between these clusters. Customers with large block-level workloads can now run multi-petabyte all-flash systems.
- iSCSI on Layer 3 datacenter networks
In addition to the existing Layer 3 support through the StorPool native driver, the StorPool iSCSI target now also supports routed datacenter networks. It includes a BGP speaker that announces portal group IPs after a failover event. iSCSI on Layer 3 is in demand in large-scale deployments, where Layer 3 datacenter networks are preferred over Layer 2 networks.
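StorPool ships its own BGP speaker, but the underlying idea can be illustrated with an equivalent FRRouting configuration. This is a sketch only: the ASNs, neighbor address, and portal IP below are hypothetical, and the actual announcement is performed by StorPool's built-in speaker, not FRR.

```
router bgp 65001
 neighbor 192.0.2.1 remote-as 65000
 address-family ipv4 unicast
  ! Announce the iSCSI portal group address as a host route;
  ! after a failover, the surviving node originates this /32.
  network 198.51.100.10/32
 exit-address-family
```

Because the portal address is announced as a host route from whichever node currently owns it, initiators keep a single target IP while the routed fabric converges around failures.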
- iSCSI persistent reservations
These new features, tailored to Microsoft environments, enable:
– Cluster Shared Volumes (CSV) for Hyper-V
– Windows Server Failover Cluster for Microsoft SQL Server
– Scale-Out File Server
– Multi-stack deployments (mixing Microsoft and other hypervisors, like VMware and KVM)
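SCSI-3 persistent reservations are the mechanism Windows Server Failover Clustering uses for disk arbitration. As a rough illustration against a generic iSCSI-attached disk (the device path and reservation key below are hypothetical), the `sg_persist` tool from sg3_utils exercises the same commands a cluster node would issue:

```
# Register a reservation key for this initiator (hypothetical key/device)
sg_persist --out --register --param-sark=0xABC123 /dev/sdb

# Acquire a Write Exclusive - Registrants Only reservation (PR type 5)
sg_persist --out --reserve --param-rk=0xABC123 --prout-type=5 /dev/sdb

# List registered keys and the active reservation
sg_persist --in --read-keys /dev/sdb
sg_persist --in --read-reservation /dev/sdb
```

With the target enforcing these reservations, only registered cluster nodes can write to a shared volume, which is the prerequisite for CSV and SQL Server failover clusters.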
- Even lower latency – now below 0.1 ms for committed writes on systems with 3x synchronous replication.
- Significantly improved performance for low queue depth workloads
- The StorPool iSCSI target now uses accelerated networking (a user-space poll-mode driver), just like the StorPool native protocol.
- Added support for hardware sleep states on AMD EPYC CPUs. AMD-based systems now benefit from the same low latency as StorPool on Intel CPUs.
- Improved support for Kubernetes Persistent Volumes in bare metal use-cases
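As a sketch of the bare-metal use-case, assuming StorPool volumes are exposed to Kubernetes via a StorageClass (the class name `storpool-nvme` below is hypothetical), a workload would request storage with an ordinary PersistentVolumeClaim:

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pg-data
spec:
  accessModes:
    - ReadWriteOnce
  # Hypothetical StorageClass backed by StorPool volumes
  storageClassName: storpool-nvme
  resources:
    requests:
      storage: 100Gi
```

On bare metal the claimed volume is attached to the node running the pod over the StorPool native or iSCSI protocol, so pods get block storage without a hypervisor layer in between.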
- Added support for Linux kernel 4.20 and newer, including the 5.x series
- Added support for CentOS 7.7
StorPool v19 is now deployed on all new StorPool clusters and is being rolled out to all existing StorPool customers.
For more information or questions, please contact us at [email protected].