Why Telcos Need to Re-Architect toward Achieving Cloud-Native


By Nelson Englert-Yang | 3Q 2024 | IN-7491

While most Communication Service Providers (CSPs) have transitioned to cloud infrastructure, only a fraction are gaining the efficiency and agility of a genuine cloud-native architecture. This is changing, and telcos are expanding their commitment to re-architecting. This ABI Insight highlights Swisscom’s recent re-architecting strategies surrounding the Kubernetes Resource Model (KRM).


Swisscom Uses KRM for Automatic Reconciliation

NEWS


The Kubernetes Resource Model (KRM) is a framework for declarative deployment and management of Kubernetes objects. For Communication Service Providers (CSPs), it helps bring network automation platforms in-band with Kubernetes infrastructure. Why does this matter? Consider the case of Swisscom, which was using Ansible to automate network configurations and Kubernetes to automate container orchestration. These two automation flows were disconnected. As a result, network states were not automatically reconciling with the Git repository, as they should under GitOps and Continuous Integration/Continuous Deployment (CI/CD) workflows. A fundamental principle of GitOps is that a shared Git repository acts as the “single source of truth” for digital network operations; when it is modified, updates are automatically pulled by systems such as Kubernetes. Instead, Swisscom, like many other CSPs, had to push updates manually. The mobile operator is now working to implement KRM to bring all automation in-band with Kubernetes, which is expected to drastically improve efficiency across the development pipeline.
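The reconciliation principle described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (not a real Kubernetes controller): the desired state stands in for manifests pulled from the Git repository, the observed state for the live cluster, and the reconciler computes the actions that would bring the two back into agreement.

```python
# Hypothetical sketch of a GitOps-style reconciliation loop. Resource names
# ("upf", "amf", "smf") are illustrative network functions, not real workloads.

def reconcile(desired: dict, observed: dict) -> list[str]:
    """Return the actions needed to drive observed state toward desired state."""
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(f"create {name}")      # declared in Git, missing live
        elif observed[name] != spec:
            actions.append(f"update {name}")      # live state has drifted
    for name in observed:
        if name not in desired:
            actions.append(f"delete {name}")      # live, but no longer declared
    return actions

# Example: Git declares two network functions; the cluster is out of date.
desired = {"upf": {"replicas": 3}, "amf": {"replicas": 2}}
observed = {"upf": {"replicas": 1}, "smf": {"replicas": 1}}
print(sorted(reconcile(desired, observed)))
# ['create amf', 'delete smf', 'update upf']
```

In a pull-based GitOps workflow, a loop like this runs continuously inside the cluster, so an operator never has to push changes by hand; this is the behavior Swisscom could not get while its Ansible and Kubernetes automation flows were disconnected.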

KRM for Cloud-Native CI/CD

IMPACT


Many CSPs have chosen to rehost (“lift and shift”) or replatform (“lift, tinker, shift”) their applications to cloud environments, rather than refactor—perhaps due to the continued use of Virtualized Network Functions (VNFs). This is a faster process and, in many cases, more cost effective because it avoids double-paying for infrastructure—legacy and cloud infrastructure—while refactoring prior to cloud deployment. It also has the benefit of using cloud resources during refactoring to ensure alignment with the cloud environment. However, the downside is that CSPs are introducing old code into their new systems, which will lead to inefficiencies if left unchecked. Moreover, refactoring will take significant time because old code tends to be complex, rife with dependencies, and lacks the level of abstraction needed for cloud-native flexibility and scalability. The inefficiencies caused by rehosting may range from bloated code to broken services or system dependencies, and re-architecting will be required for further efficiency improvements.

The KRM can help within the narrow domain of GitOps to bridge from merely cloud-based to cloud-native. Pushing updates is still common practice, and KRM supports alignment with the GitOps principle of automatic reconciliation simply by bringing automation in-band with Kubernetes. Yet KRM’s benefits go further: it offers declarative command of Kubernetes, is vendor agnostic, and integrates directly with the platform. This is an advantage over cloud-platform Application Programming Interfaces (APIs) that expose Kubernetes functionality for network configuration, but do so in an imperative and vendor-locked way. These benefits are why, for instance, KRM has been instrumental in Project Nephio in enabling open-source, declarative network automation.
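The imperative-versus-declarative distinction can be made concrete with a toy sketch; the function names and data shapes below are illustrative assumptions, not any vendor’s API. Imperatively, the caller issues step-by-step commands and owns the ordering logic; declaratively (KRM-style), the caller states only the target state and a generic engine converges to it from any starting point.

```python
# Hypothetical contrast between imperative and declarative configuration styles.
# The "cluster" dict maps a workload name to its replica count.

def imperative_scale(cluster: dict, name: str, target: int) -> None:
    """Imperative: the caller spells out each step to reach the target."""
    while cluster.get(name, 0) < target:
        cluster[name] = cluster.get(name, 0) + 1   # "add one replica" command
    while cluster.get(name, 0) > target:
        cluster[name] -= 1                         # "remove one replica" command

def apply(cluster: dict, desired: dict) -> None:
    """Declarative: state the desired end state; the engine does the rest."""
    for name, target in desired.items():
        cluster[name] = target

cluster = {"upf": 1}
apply(cluster, {"upf": 3, "amf": 2})
print(cluster)   # {'upf': 3, 'amf': 2}
```

The declarative form is what makes automation composable and vendor agnostic: the same `apply` works regardless of the current state, whereas imperative command sequences must be rewritten for each starting condition.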

KRM’s main limitation is its scope. As a framework for managing Kubernetes objects, it helps with operations, but will not support a broad architectural shift toward cloud-native. Many common issues outside of CI/CD pipelines—such as applications that do not scale, or Internet Protocol (IP) addresses that must be updated manually—are not addressed by KRM.

The Bigger Issue

RECOMMENDATIONS


KRM is a useful technology that may support all CSPs, regardless of their position along the cloud-native journey. However, the recommendations here go beyond narrow consideration of KRM:

  1. Follow a Broad Plan for Cloud-Native Transition: A CSP will need to start with a plan for which re-architecting efforts should be prioritized. The Swisscom case is instructive: removing slack within the existing network through better integrated automation is a cost-effective starting point. Early in the transition process, CSPs will also need to evaluate which applications must be refactored to ensure that they are cloud-ready and can be developed alongside key cloud-native features such as auto-scaling, load balancing, and other container orchestration capabilities. Once legacy monolith applications are broken down into agile, modular microservices, and once containers are free to scale on demand rather than being pinned to specific servers, more advanced cloud-native features may be added, such as distributed edge computing or end-to-end infusion of network operations with Artificial Intelligence (AI).
  2. Hyperscalers May Guide: Following in the footsteps of Amazon Web Services (AWS), Google Cloud, and Microsoft Azure is not such a bad idea given their cloud-native expertise and commitment to open technologies. The use of a canary deployment strategy is also illustrative here. Hyperscalers have leveraged cloud-native for small-scale “canary” test deployments, including test deployments for new infrastructure. For instance, a hyperscaler might use a canary deployment to ensure newly created microservices are operational prior to full deployment. This has reached telco vendors such as Fujitsu, Nokia, and Oracle, all of which now promote canary deployments for cloud-native testing. KRM and canary deployments are both general frameworks to assist in cloud-native development, rather than specific applications or technologies; so, the value of such frameworks in a private-cloud setting can be inferred by observing their use in the public cloud.
  3. Cloud-Native Strategy Is Part of the 5G Strategy: Cloud-native and 5G are independent technologies (e.g., 2G, 3G, and 4G may all be cloudified); yet, the strategy for transitioning to cloud-native should be an integral part of the strategy for transitioning to 5G Standalone (SA). The 5G Core is the best advocate for cloud-native architectures because it stands to benefit more than previous generations. A 5G transition pathway that is 5G-SA-forward is more likely to adopt cloud-native principles faster than a typical pathway that is legacy-forward with gradual integration of 5G Core. Mavenir is especially noteworthy here for promoting early 5G-SA as a catalyst for cloud-native. It has implemented this approach with Deutsche Telekom (DT), and DT now uses its cloud-native, automated CI/CD pipelines to reduce time for deployment, upgrades, and scaling.
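The canary pattern recommended in point 2 can be sketched briefly. In the toy model below, a small fraction of traffic is routed to the new version, and the rollout is promoted or rolled back based on the observed error rate; the weights, thresholds, and function names are illustrative assumptions, not any hyperscaler’s or vendor’s implementation.

```python
import random

# Hypothetical sketch of a canary rollout decision.

def route(canary_weight: float, rng: random.Random) -> str:
    """Send roughly canary_weight of requests to the canary, the rest to stable."""
    return "canary" if rng.random() < canary_weight else "stable"

def decide(canary_error_rate: float, max_error_rate: float = 0.01) -> str:
    """Promote the canary to full traffic only if its error rate is acceptable."""
    return "promote" if canary_error_rate <= max_error_rate else "rollback"

rng = random.Random(42)  # seeded for reproducibility
sample = [route(0.05, rng) for _ in range(1000)]
print(sample.count("canary"))   # roughly 5% of 1,000 requests
print(decide(0.002))            # promote
print(decide(0.08))             # rollback
```

The value of the pattern is that a defective release affects only the canary slice of traffic before the automated decision rolls it back, which is why it pairs naturally with the automated CI/CD pipelines discussed throughout this Insight.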
