StorPool full features list in alphabetical order:
Advanced Volume management – unmatched flexibility and ease of use when creating and manipulating volumes.
Advanced Copy-On-Write (COW) on-disk format.
Automatic recovery – the StorPool cluster detects hardware or software failures and automatically recovers from failed drives, network interfaces, failing switches, etc.
Backups & DR (Disaster Recovery) – through asynchronous transfer of encrypted snapshots to one or more remote locations.
Data locality – each volume may be configured independently, for high-performance access from a particular node.
Data tiering – data can be placed on an HDD pool or on an SSD pool, and volumes can be migrated live between pools.
Distributed, scale-out, shared-nothing cluster architecture – high performance, scalability, availability, and reliability.
End-to-end data integrity – StorPool’s unique checksum protection mechanisms prevent silent data corruption and phantom, partial, or misplaced writes over the full lifecycle of the data, from the client/initiator to the underlying drives on the storage servers and back to the client.
Hyper-converged Infrastructure – use the same servers for both storage and applications / virtual machines. StorPool is unmatched in efficiency, needing only 5 to 10% of each server’s CPU and RAM.
In-service software upgrades – the storage service continues operating while the cluster is being upgraded on a rolling basis.
iSCSI – built-in highly available (HA), scale-out iSCSI target, managed from the main StorPool API.
Multi-attach – the same volume may be attached to many servers (clients).
Multi-core processing – ability to use multiple CPU cores in parallel to increase per-server performance.
Native InfiniBand and 10/40/56/100 Gbit Ethernet support.
Native Hardware Acceleration for some supported NICs (Intel ixgbe/i40e and Mellanox mlx4/mlx5 based).
Network redundancy and load balancing – automated failover and load balancing allow smooth operation and ensure the maximum level of performance.
Networking protocols developed for maximum throughput and lowest possible latency – StorPool block, NVMe/TCP and iSCSI.
Online configuration changes – volumes are not brought down during reconfiguration.
Storage pools – each pool may have a different set of drives assigned. There may be a single pool or multiple pools per cluster. Supports hard drive pools for high capacity use cases, hybrid pools of SSDs and HDDs, and all-SSD pools for high-performance use cases.
Storage Quality of Service (QoS) – ensures that the required level of storage performance and SLAs are met. User-configurable per-volume limits on IOPS and MB/s guarantee that no single user can monopolize cluster resources.
Templates – automate volume creation and management. Reduce space usage.
Thin provisioning – allows more storage to be made visible to users than is physically available on the system.
Three-way synchronous replication – tuned for high performance; the replication level (number of copies) is set per volume.
TRIM/Discard – data deleted in the upper layers (OS) is also deleted on the underlying drives managed by the StorPool storage system. This frees up space and helps the performance and longevity of SSDs.
Unique “SSD-Hybrid” – delivers an all-flash level of performance, at close to HDD-only cost. Delivers steady and predictable IOPS & latency metrics for the entire dataset. Flexibility to configure one or two SSD copies on a hybrid pool with triple replication.
Zeroes detection – StorPool detects empty blocks of data (ones containing only zeroes) so that they do not occupy any space on the storage system.
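The per-volume IOPS and MB/s limits mentioned under Storage Quality of Service are typically enforced with a token-bucket limiter. The following is a generic illustrative sketch of that technique only, not StorPool's actual implementation; all names here are hypothetical:

```python
import time


class TokenBucket:
    """Generic token-bucket rate limiter, the classic way to cap
    per-volume IOPS or MB/s. Illustrative sketch only; not StorPool code."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate            # tokens replenished per second (e.g. the IOPS limit)
        self.burst = burst          # bucket capacity (maximum short-term burst)
        self.tokens = burst         # bucket starts full
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        """Return True if an operation of the given cost may proceed now."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

An I/O scheduler would call `allow()` once per request (or with `cost` set to the request size in MB for bandwidth limits) and queue or reject requests that return `False`.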
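The zeroes-detection idea can be sketched as follows. This is a minimal generic illustration of the technique (an all-zero block is recognized and never allocated, and unallocated blocks read back as zeroes), assuming a simple dict-backed block store; it is not StorPool's on-disk format:

```python
# Illustrative sketch of zero-block detection; not StorPool's actual code.
BLOCK_SIZE = 4096
ZERO_BLOCK = bytes(BLOCK_SIZE)


def write_block(store: dict, lba: int, data: bytes) -> None:
    """Store a block, treating all-zero blocks as unallocated."""
    assert len(data) == BLOCK_SIZE
    if data == ZERO_BLOCK:
        # Empty block: free any previous allocation instead of storing zeroes.
        store.pop(lba, None)
    else:
        store[lba] = data


def read_block(store: dict, lba: int) -> bytes:
    """Unallocated blocks read back as zeroes."""
    return store.get(lba, ZERO_BLOCK)
```

The same mechanism is what makes thin provisioning cheap: untouched or zero-filled regions of a volume occupy no physical capacity.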