25GE is the new 10GE, RDMA and open source infrastructure on the rise, a revival in computing platforms – IT infrastructure trends for 2017
The new generation of Ethernet NICs and switches is already on the shelves, with both Mellanox and Broadcom offering products on the market. We are now starting to see the first actual deployments, somewhat later than our prediction from last year that this technology would be widely adopted in 2016.
For storage, 25 Gbps Ethernet will be especially popular because it is a single-lane technology (like 10G, unlike 40G, which aggregates four 10G lanes). It is backward compatible with 10G (it uses the same SFP-type connector) and has a low cost for NICs, cables, and switches.
In 2017, the upgrade from 10G to 25G costs only a small increment (perhaps under 1% of the overall IT infrastructure cost), so for new deployments at scale it makes sense to go straight to 25G.
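The "under 1%" claim is a back-of-envelope calculation. A minimal sketch of that arithmetic, with purely hypothetical placeholder prices (not quotes from any vendor):

```python
# Back-of-envelope estimate of the 10G -> 25G upgrade increment per server.
# All price deltas below are hypothetical placeholders for illustration only.

def upgrade_increment(nic_delta, cable_delta, switch_port_delta,
                      server_total_cost):
    """Extra cost of choosing 25G over 10G, as a share of total server cost."""
    extra = nic_delta + cable_delta + switch_port_delta
    return extra / server_total_cost

# Hypothetical per-server numbers (USD): the 25G NIC, DAC cable, and switch
# port each cost somewhat more than their 10G equivalents.
share = upgrade_increment(nic_delta=100, cable_delta=20,
                          switch_port_delta=80, server_total_cost=25000)
print(f"25G premium: {share:.1%} of server cost")  # 0.8% with these numbers
```

With any plausible deltas of this magnitude, the premium stays well below 1% of the overall spend, which is the basis for the recommendation above.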
RDMA is gaining popularity. It is the ability of some NICs to transfer data directly from the memory of one server to the memory of another, with almost no CPU usage. InfiniBand networks have RDMA capability as standard. On Ethernet, RoCE and RoCE v2 are the popular protocols, supported by a number of NICs.
There is a lot of hype around it now, even though the technology has been available for a very long time. In 2017 it will become essential for every storage system claiming high performance to support RDMA transport, both between storage servers and between initiators and storage servers. (StorPool has had RDMA support for years.)
A new generation of interconnects is brewing: UPI (from Intel) and Gen-Z (from almost everyone else). These are for connecting compute nodes (imagine servers with big CPUs and local RAM) with rack-level memory, storage, and networking. To a degree, they fulfill the rack-level-interconnect role currently occupied by external PCIe, Ethernet, and InfiniBand. The promise of these technologies is byte-level access (like RAM), higher speed, lower latency, and lower CPU usage than all current technologies. As usual for a new class of interconnect, they will start simple, with point-to-point links, and eventually move to switched topologies.
We are looking forward to seeing what 2017 will bring for these new interconnects.
Heterogeneous computing (https://en.wikipedia.org/wiki/Heterogeneous_computing) – still in search of a better name
This is the idea that we can build systems composed of general-purpose CPUs paired with auxiliary compute engines specialized in one type of task or another. It is not a new idea, and it has seen a fair bit of use in HPC, but it is now making its way into more mainstream IT. Examples:
- Open vSwitch offloads in NICs from Intel, Mellanox, Netronome, and others
- Offload of highly parallel tasks (e.g. video compression) to GPUs, especially with integrated CPU/GPU packages – AMD APUs and Intel Xeon E3-12xx v5/E3-15xx v5
- AWS’s F1 instances – FPGAs in servers for the masses https://aws.amazon.com/ec2/instance-types/f1/
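The examples above share one pattern: detect an accelerator, offload the data-parallel work to it, and fall back to the general-purpose CPU otherwise. A minimal sketch of that dispatch pattern in plain Python, where `gpu_available` and `gpu_sum` are hypothetical stand-ins for a real accelerator library:

```python
# Generic "offload if an accelerator is present" dispatch pattern.
# gpu_available/gpu_sum are hypothetical stand-ins for a real GPU/FPGA
# library; the CPU path is ordinary Python.

def gpu_available():
    # Stand-in: a real implementation would probe for a device.
    return False

def cpu_sum(values):
    # General-purpose CPU fallback path.
    return sum(values)

def offload_sum(values):
    """Run on the accelerator when present, otherwise on the CPU."""
    if gpu_available():
        return gpu_sum(values)   # hypothetical accelerated path
    return cpu_sum(values)       # always-available CPU path

print(offload_sum(range(10)))  # 45
```

The value of the heterogeneous approach is that the dispatch point stays the same while the accelerated path gets faster or cheaper per unit of work; the maturity of that software plumbing is exactly what we expect to improve in 2017.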
We expect that in 2017 heterogeneous computing will see good progress both in terms of adoption and in terms of the maturity of the software stacks.
In Q2 2017 we’ll see Intel’s Purley platform (perhaps named Xeon E5 v5?) roll out of the factory. It features significant advancements in compute density and memory throughput: roughly 50% more of each, give or take. Density and throughput are important, but more importantly, with Purley we’ll get the first generally available platform that can host a CPU (Xeon E5) and a GPGPU (Xeon Phi) on the same motherboard, interconnected with UPI.
AMD are aiming to release Ryzen (previously known as Zen) x86 CPUs in Q1 2017 (https://en.wikipedia.org/wiki/Zen_%28microarchitecture%29). Ryzen will have desktop and server parts. With AMD back in the game, it is likely we will again see two vendors of x86 server CPUs.
ARM and OpenPower will continue to slowly increase their share of the general purpose server CPU market, eating away at Intel’s near-100% share.
OpenPower servers (with IBM CPUs) are available through a number of second-tier OEMs. It is also rumored that some hyperscalers, Google in particular, are using OpenPower for an unnamed workload. POWER9 chips are coming out in 2017, putting POWER on a release cadence similar to what we are used to seeing with x86 servers.
ARM servers based on chips from Cavium, AppliedMicro, AMD, and others are available now. Some chip vendors from the early wave were dissolved or acquired – Calxeda (dissolved), Annapurna Labs (acquired by Amazon). Others put their plans on hold but might re-enter the race – Broadcom, NVIDIA, Samsung.
The big announcement from December 2016 is that Qualcomm will release its first server chip in H2 2017. For those who don’t know, Qualcomm is the third-largest semiconductor company, after Intel and Samsung (https://en.wikipedia.org/wiki/Semiconductor_sales_leaders_by_year). It has one of the best ARM cores (Qualcomm Kryo, https://en.wikipedia.org/wiki/Kryo_%28microarchitecture%29), at a similar performance level to Samsung’s Exynos M1 core. If anyone is capable of designing a new server-class ARM chip, it’s Qualcomm.
And unless we’ve missed an announcement, AMD are releasing their first “K12” chips in 2017 too.
Competition is good!
The rise of the “Managed Cloud (Provider)”
The companies which were a good fit for the public cloud are already using it. They fall mainly into two broad categories:
- Companies whose use cases fit the public cloud well, or whose workloads were not big enough to require a purpose-built cloud;
- Companies with sufficient internal technical/IT capability to manage a private-public (hybrid) cloud set-up.
However, the majority of businesses today fall into a different category: they lack the technical expertise internally and cannot move their workloads to the public cloud themselves. They also cannot continue running IT internally, as they bleed technical talent and lack the economy of scale to make it worthwhile. These are the drivers behind the rise of Managed Private Cloud Providers. These providers become the utility companies of the information age, except that instead of electricity, gas, or water, they provide IT services. We see this trend gaining steam and expect to see more of it in 2017.
The biggest trend we see is the switch from VMware to alternatives. Usually this means KVM plus an open source cloud management system; the usual suspects are OpenStack, CloudStack, OpenNebula, OnApp, and oVirt, with others appearing now and then (see the next section, Cloud Management Platforms).
Companies still run their VMware stacks, but they have been actively experimenting with alternatives for the past 2-4 years, and in 2016 this trend became rather evident. We expect the switch to open source alternatives to accelerate in 2017, driven by a mix of reasons: reducing cost, increasing efficiency and business competitiveness, and reducing lock-in.
Cloud Management Platforms
There are many alternatives here, although the ones worth mentioning are OpenStack, CloudStack, OnApp, OpenNebula, oVirt for multi-tenancy and Proxmox for single-tenancy. From where we stand we see little momentum in other directions.
OpenStack is now getting more mature and is finally becoming fit for wider audiences. However, it is still better suited to larger deployments (2+ racks), or to companies that can dedicate at least 2 (more realistically 5-6) full-time employees to building and supporting an OpenStack solution.
CloudStack is still mentioned, although less and less. oVirt is preferred in the RHEL ecosystem, although we do not have enough signal to make a meaningful comment on it.
OnApp has a solid foothold in the cloud and hosting market, although a rising number of companies are looking to switch to something else. Still, in cloud/hosting it is perhaps the most mature and user-friendly solution.
Another open source cloud management platform worth mentioning is OpenNebula. It is very robust, yet simple, and we can recommend it for getting the job done in a fast and simple manner. The project is also developed at a fast rate, without the complexity and politics of multi-vendor projects such as OpenStack.
Proxmox is a good, simple solution when it comes to single-tenancy use cases.
Containers were definitely hyped in 2016. While there was a lot of media coverage, we have not seen real-world usage follow within our core target markets – IaaS/PaaS/MSP/hosting. Perhaps we simply do not see many of the “new” use cases for which containers are best suited.
So, for now, we refrain from further comment, and we look forward to seeing containers become more widespread as they move beyond the hype phase in 2017.
If you have any questions feel free to contact us at [email protected]