Expect Containers on VMs in the Short to Medium Term
NEWS
As Communication Service Providers (CSPs) progress towards deploying a cloud-native telco environment, there is increasing pressure for modular, multi-vendor deployments that avoid vendor lock-in and accelerate innovation. In response, Network Equipment Vendors (NEVs) such as Ericsson, Nokia, Huawei, and ZTE are innovating and partnering with telco platform providers such as Red Hat and VMware, as well as cloud service providers such as AWS, to develop integrated solutions.
In this arrangement, CSPs would build their telco cloud on a horizontal platform (e.g., Red Hat OpenShift Container Platform (OCP), VMware Telco Cloud Platform (TCP)), with network functions, software, services, public cloud, and hardware provided by multiple vendors. At the core of these cloud-native solutions for CSPs is Kubernetes, the de facto open-source container orchestration system for telcos, which is usually deployed as a Container-as-a-Service (CaaS) (e.g., Ericsson Cloud Container Distribution, Google Kubernetes Engine) or bundled as a Platform-as-a-Service (PaaS) and CaaS (e.g., Red Hat OCP).
However, the industry is currently transitioning from the traditional Network Functions Virtualization Infrastructure (NFVI) architecture to cloud-native telco cloud deployments. As such, ABI Research believes that NFVI and cloud-native architectures will coexist in CSPs' networks, which means there will be an increasing need to deploy containers on both Virtual Machines (VMs) and bare metal in the short to medium term. This is evidenced by recent developments: AWS released EKS Anywhere this year to support deployment on VMs and bare metal; Red Hat OCP released OpenShift Virtualization, based on KubeVirt, last year; and VMware Tanzu, which supports both Virtual Network Functions (VNFs) and Cloud-Native Network Functions (CNFs) on a horizontal platform, runs containers in VMs.
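To make the coexistence concrete, the sketch below (in Python, using PyYAML) prints two minimal Kubernetes manifests: a plain Pod for a containerized CNF and a KubeVirt VirtualMachine of the kind OpenShift Virtualization uses to keep a VM-based workload on the same cluster. The workload names and image references are illustrative assumptions, not any vendor's actual artifacts.

```python
# Minimal sketch: a containerized CNF (Pod) and a VM-based workload (KubeVirt
# VirtualMachine) described against the same Kubernetes API.
# Workload names and images are illustrative assumptions.
import yaml  # pip install pyyaml

cnf_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "example-cnf", "labels": {"app": "example-cnf"}},
    "spec": {
        "containers": [{
            "name": "nf",
            "image": "registry.example.com/example-cnf:1.0",  # hypothetical image
            "resources": {"requests": {"cpu": "500m", "memory": "256Mi"}},
        }]
    },
}

# KubeVirt's VirtualMachine CRD (kubevirt.io/v1) lets the same cluster schedule
# a full VM, which is how KubeVirt-based platforms keep VM workloads next to CNFs.
vnf_vm = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "example-vnf"},
    "spec": {
        "running": True,
        "template": {
            "spec": {
                "domain": {
                    "devices": {"disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]},
                    "resources": {"requests": {"memory": "4Gi"}},
                },
                "volumes": [{
                    "name": "rootdisk",
                    "containerDisk": {"image": "registry.example.com/example-vnf-disk:1.0"},  # hypothetical
                }],
            }
        },
    },
}

print(yaml.safe_dump_all([cnf_pod, vnf_vm], sort_keys=False))
```

Applying both documents to a cluster where KubeVirt (or OpenShift Virtualization) is installed is what running VM-based and containerized workloads side by side on one horizontal platform looks like in practice.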
Demystifying the Bare Metal and VM Dichotomy
IMPACT
The narrative around whether to deploy containers on VMs or on bare metal is largely driven by the vendors of these solutions. ABI Research believes there are pros and cons to both modes of deployment, and briefings with CSPs and vendors deploying 5G cloud-native environments suggest the answer is not straightforward. Many in the industry also frame it as VMs versus containers, but that is the wrong perspective: containers can also run inside VMs (nested containers), and many vendors are working towards a hybrid layer that can eventually run both VMs and containers. In the telco industry, the issue is further complicated by the fact that CSPs must run VNF workloads from past network installations alongside future CNFs in the same environment.
One of the main reasons to run containers on bare metal is to eliminate the virtualization layer, which adds complexity to automation through its additional layers and adds CAPEX through hypervisor license fees. Another is that container images, measured in megabytes, make more efficient use of the underlying infrastructure than VM images, measured in gigabytes. This is particularly useful for edge deployments, which cannot host large data centers and therefore require a much smaller footprint.
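As a rough sense of the scale difference, the back-of-the-envelope sketch below works through the arithmetic with assumed figures (a tens-of-megabytes container image versus a multi-gigabyte VM image and an arbitrary edge storage budget); the numbers are purely illustrative and vary widely by workload.

```python
# Back-of-the-envelope footprint comparison; all figures are illustrative
# assumptions, not measurements of any specific CNF or VNF.
CONTAINER_IMAGE_MB = 60           # a typical microservice image: tens of MB
VM_IMAGE_MB = 4 * 1024            # a guest OS plus application image: a few GB
EDGE_IMAGE_BUDGET_MB = 32 * 1024  # storage a small edge node might devote to images

print(f"One VM image is roughly {VM_IMAGE_MB / CONTAINER_IMAGE_MB:.0f}x a container image.")
print(f"The edge budget fits about {EDGE_IMAGE_BUDGET_MB // CONTAINER_IMAGE_MB} "
      f"container images versus {EDGE_IMAGE_BUDGET_MB // VM_IMAGE_MB} VM images.")
```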
However, there are downsides to the bare metal option as well. One is reduced isolation between containers on the same bare metal host; isolation and segmentation were traditionally better achieved through the deployment of VMs. Another key issue is that every container shares the same host Operating System (OS) kernel, which makes deploying multi-vendor CNFs more difficult because each CNF will have different requirements. It would also be a challenge to continually reconfigure or replace the bare metal whenever updated CNFs bring new requirements.
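The shared-kernel constraint can be illustrated with host-wide kernel settings: parameters that exist only once per kernel cannot be set differently per container, so two CNFs that each expect a different value cannot be co-hosted on one bare-metal OS, whereas each could get its own tuned guest kernel inside a VM. The vendor names and parameter values in the sketch below are hypothetical.

```python
# Illustrative sketch of the shared host OS problem on bare metal: host-wide
# kernel parameters have a single value per kernel, so conflicting CNF
# requirements collide. Vendor names and values are hypothetical.
from collections import defaultdict

cnf_kernel_requirements = {
    "vendor-a-upf": {"vm.nr_hugepages": "8192", "kernel.sched_rt_runtime_us": "-1"},
    "vendor-b-smf": {"vm.nr_hugepages": "2048"},           # clashes with vendor-a-upf
    "vendor-c-amf": {"kernel.sched_rt_runtime_us": "-1"},  # compatible
}

# Collect every value requested for each host-wide parameter.
requested = defaultdict(set)
for cnf, params in cnf_kernel_requirements.items():
    for key, value in params.items():
        requested[key].add(value)

# Any parameter with more than one requested value cannot be satisfied
# on a single shared kernel.
conflicts = {key: values for key, values in requested.items() if len(values) > 1}
if conflicts:
    print("Cannot co-host these CNFs on one bare-metal kernel:", conflicts)
else:
    print("All CNFs can share the host kernel.")
```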
Bare Metal Is a Choice, not a Necessity
RECOMMENDATIONS
While bare metal deployments are an end goal for some vendors (Ericsson, for instance, developed its Cloud Native Infrastructure Solution (CNIS) as a bare metal-only deployment in its current configuration), ABI Research believes bare metal is an option that has to be weighed on a case-by-case basis. Currently, CSPs' public cloud and cloud-native deployments are all building their stacks on a virtualization layer (e.g., DISH is still running VMware on top of AWS, and Ericsson's cloud-native deployments are still based on OpenStack), so the bare metal option remains largely a work in progress that will materialize in the longer term.
CSPs should not be quick to jump to bare metal, but should instead consider the benefits that running containers on VMs brings. The first is portability: on bare metal, portability across platforms is limited because each container uses the host's OS kernel, whereas with VMs, CNFs can move between hosts easily. Another benefit is that VMs can split a physical server into multiple worker nodes, reducing the risk of the entire cluster failing, as it could if the bare metal were hosted as a single node. Lastly, the hypervisor layer and VMs bring increased isolation and security, better management of configurations and updates, and persistence of data (allowing rollback to previous configurations in the case of failure), which could make them a better fit for a multi-vendor environment.
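The node-splitting point can be expressed directly in a workload specification: with worker nodes backed by VMs, a CNF's replicas can be spread so that the loss of any single node does not take out the whole function. The sketch below prints a minimal Deployment using topology spread constraints; the workload name and image are illustrative assumptions.

```python
# Minimal sketch: spread CNF replicas across separate (VM-backed) worker
# nodes so that one node failure cannot take down the whole function.
# The name and image are illustrative assumptions.
import yaml  # pip install pyyaml

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "example-cnf"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "example-cnf"}},
        "template": {
            "metadata": {"labels": {"app": "example-cnf"}},
            "spec": {
                "topologySpreadConstraints": [{
                    "maxSkew": 1,
                    "topologyKey": "kubernetes.io/hostname",  # aim for one replica per node
                    "whenUnsatisfiable": "DoNotSchedule",
                    "labelSelector": {"matchLabels": {"app": "example-cnf"}},
                }],
                "containers": [{
                    "name": "nf",
                    "image": "registry.example.com/example-cnf:1.0",  # hypothetical image
                }],
            },
        },
    },
}

print(yaml.safe_dump(deployment, sort_keys=False))
```

On a single bare-metal node this constraint cannot be satisfied as written; the same specification works unchanged once the platform presents multiple (virtual or physical) worker nodes.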
ABI Research also recommends that CSPs focus on the use cases and applications and choose the best technology to provision them. For example, containers on bare metal have a smaller footprint, which suits resource-constrained microservices at the edge. Containers on bare metal are also viable for use cases that require much lower latency, since containers spin up in milliseconds, yet need only a few minutes or hours of uptime. By contrast, use cases at the core that require more management and security in a multi-vendor environment may be better served by containers on VMs. CSPs will also have to bear in mind that containers on bare metal have not yet seen commercial deployment; deploying containers on existing architecture (i.e., VMs) might give them an earlier entry into cloud-native deployments.
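For the short-lived, latency-sensitive case, such a task can be modelled as a Kubernetes Job that starts on demand, is capped to a few minutes of runtime, and is cleaned up afterwards, as sketched below; the name, image, and timings are illustrative assumptions.

```python
# Minimal sketch of a short-lived containerized task: spun up on demand,
# capped to a few minutes of runtime, then garbage-collected.
# Name, image, and timings are illustrative assumptions.
import yaml  # pip install pyyaml

job = {
    "apiVersion": "batch/v1",
    "kind": "Job",
    "metadata": {"name": "example-edge-task"},
    "spec": {
        "activeDeadlineSeconds": 300,   # hard cap: a few minutes of uptime
        "ttlSecondsAfterFinished": 60,  # clean up shortly after completion
        "template": {
            "spec": {
                "restartPolicy": "Never",
                "containers": [{
                    "name": "task",
                    "image": "registry.example.com/example-edge-task:1.0",  # hypothetical image
                    "resources": {"requests": {"cpu": "250m", "memory": "128Mi"}},
                }],
            }
        },
    },
}

print(yaml.safe_dump(job, sort_keys=False))
```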
CSPs will also have to protect their current NFVI investments while making their cloud-native investments more efficient. This means picking a platform that allows their VNF workloads to exist alongside CNFs while ensuring that use case requirements can still be met. It also implies knowing the full capabilities of the system they are investing in and understanding caveats that might impede development (e.g., when virtualizing on KubeVirt, the heavy VNF workloads common to the RAN cannot currently be supported; this capability is still on the roadmap). Finally, CSPs will need a clear migration plan away from NFVI (e.g., some OpenStack-based deployments have no clear plan for migrating their VNFs to bare metal).