Whoa, that’s meta. But you can. And it might make a lot of sense for you.
But if we’re going to talk about this, we should define our terms.
Cloud computing is generally defined as the practice of using a network of remote servers hosted on the internet to store, manage, and process data, rather than relying on local servers or storage.
Service Defined Cloud
An alternative definition is not location-defined (on-premises vs. off-premises) so much as service-defined. In this definition, a cloud is a set of services – compute, storage, networking – that you can use to build an application stack that produces business or personal value. Cloud offerings generally combine automation, self-service, pay-as-you-go pricing, and virtualization (VMware, Hyper-V, KVM). In this definition, the “cloud” can be anywhere, including within your own data center – AKA a “Private Cloud”. And the hardware and services that make up the cloud can be managed by anyone, from your own team to the experts at Amazon, Microsoft, or Google. While the big cloud vendors generally use a mix of open-source and homegrown technology to manage their clouds, private clouds are usually built with some sort of Cloud Management Platform (CMP) such as OpenStack, CloudStack, or OpenNebula.
Back to Bare-Metal
What’s relatively new is that some public cloud vendors now also offer “bare-metal” options alongside their more value-added services. With a bare-metal offering, you get just the bare server with a defined amount of RAM, direct-attached storage (DAS), and network connectivity. No OS, no web or application stack, no cloud storage or backup services. Just bare metal.

The first-generation public cloud was very good at “scale-out” applications. But organizations also had “scale-up” application stacks that weren’t a great fit for migrating to the cloud. Storage latency was always a limiting factor for transactional applications in the cloud, and network latency was a limit for technical computing. For example, AWS quotes “single-digit millisecond” latency for its basic Elastic Block Store (EBS). Which is … not great. And it’s quite expensive. So if you need high-performance transactions in the cloud, you might choose to skip EBS and get bare-metal servers with direct-attached NVMe to bring storage latency down to where you need it. But then you’re dealing with the complexity and expense of meeting reliability and availability targets with direct-attached storage. Shared storage and SANs were developed in the first place to resolve the issues of direct-attached storage, so using DAS in the cloud becomes a case of one step forward and three steps back.
Turn Raw Servers into a Cloud Platform Powerhouse
But what is interesting is that you can connect these raw server offerings and turn them into a cloud platform the same way you’d turn physical servers in your datacenter into a private cloud platform. And lots of people do. For one, it can be very cost-effective, especially with open-source software. For another, open-source CMPs are platform-neutral and won’t lock you into your cloud provider. If you build your stack with CloudStack on a bare-metal cloud, you should be able to run the same stack with CloudStack on a rack of servers in your own datacenter.
So if I can build out my application stack on bare-metal offerings in the cloud, can I build my storage system in the cloud as well? Bare-metal cloud offerings are servers with direct-attached storage and networking. StorPool is a bunch of servers with direct-attached storage and networking. You can use your favorite CMP to configure, provision, and manage a cluster of servers in the cloud. So why not have that cluster be a StorPool cluster and provide robust, low-latency, cost-efficient storage for your cloud applications?
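To make that concrete: with OpenStack as the CMP, the upstream Cinder driver for StorPool can present cluster volumes to instances. A minimal cinder.conf sketch might look like this (the backend name and template name are illustrative assumptions, not details from this article):

```ini
# cinder.conf – hypothetical sketch of a StorPool backend for OpenStack Cinder
# (backend and template names are illustrative, not from this article)
[DEFAULT]
enabled_backends = storpool-backend

[storpool-backend]
volume_driver = cinder.volume.drivers.storpool.StorPoolDriver
volume_backend_name = storpool
storpool_template = hybrid
```

Other CMPs follow the same pattern: the CMP handles provisioning and lifecycle, while StorPool serves the block storage underneath.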
StorPool Storage on AWS
And this isn’t just a theoretical model. We’ve done it. We did it on AWS and clobbered them on performance and price. A five-server StorPool cluster delivered 100-microsecond (0.1 millisecond) latency and topped out at over 1.3M IOPS (4k blocks, 50/50 read/write). The best AWS could do with their EBS io2 Block Express product was 0.25 milliseconds while topping out below 250k IOPS. There’s lots more detail about the implementation and the performance in the blog linked above.
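The exact benchmark configuration isn’t reproduced here, but a fio job along these lines would generate the stated 4k-block, 50/50 random read/write mix (the device path, queue depth, and job count below are assumptions, not the tested values):

```ini
; fio job sketch: 4 KiB random I/O, 50/50 read/write mix
; (device path, iodepth, and numjobs are illustrative assumptions)
[global]
ioengine=libaio
direct=1
bs=4k
rw=randrw
rwmixread=50
time_based=1
runtime=60
group_reporting=1

[storpool-volume]
filename=/dev/storpool/testvol
iodepth=32
numjobs=8
```

fio’s completion-latency percentiles (clat) in the output are the figures to compare against the latency numbers quoted above.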
StorPool Storage on Equinix Metal
We also did it on Equinix. Equinix started in 1998 as a vendor-neutral, multi-tenant datacenter provider where competing networks could securely connect and exchange traffic. Over the years they have continued to expand both their global footprint and their breadth of services, from colocation sites to bare-metal cloud offerings to software-as-a-service. StorPool worked with the Equinix Metal team to define a hardware and software configuration, networking, and a detailed deployment plan. We’ll soon be listed on the Equinix Metal Storage Partners page.
Accelerates Transactional Applications
StorPool Storage in a cloud environment is ideal for workloads such as large transactional databases, monolithic applications, and heavily loaded e-commerce websites. An ideal example would be electronic health records (EHR) systems used by healthcare providers. These applications typically run on high-end servers on-premises with high-performance SAN storage and two or more locations for high availability and disaster recovery. With StorPool Storage in AWS or Equinix clouds, an organization could move at least one instance into the cloud with the attendant savings in real estate, staffing, and power. StorPool could even be used in multi-region deployments for availability or secondary storage uses.
And why stop with one application? As we’ve seen with our customers in more traditional on-premises configurations – applications and use cases tend to migrate to StorPool Storage. Why refresh the storage for a single application stack when you can just migrate that data to StorPool instead?
We’re here to help if you have questions. Talk with a StorPool Expert.