Swisscom Uses KRM for Automatic Reconciliation |
NEWS |
The Kubernetes Resource Model (KRM) is a framework for declarative deployment and management of Kubernetes objects. For Communication Service Providers (CSPs), it helps bring network automation platforms in-band with Kubernetes infrastructure. Why does this matter? Consider the case of Swisscom, which was using Ansible to automate network configuration and Kubernetes to automate container orchestration. These two automation flows were disconnected. As a result, network state was not automatically reconciling with the Git repository as it should under GitOps and Continuous Integration/Continuous Deployment (CI/CD) workflows. A fundamental principle of GitOps is that this shared Git repository acts as a “single source of truth” for digital network operations: when it is modified, updates are automatically pulled by systems such as Kubernetes. Instead, in Swisscom’s case, as in those of many other CSPs, updates must be pushed manually. The mobile operator is now working to implement KRM to bring all automation in-band with Kubernetes, which is expected to significantly improve the efficiency of its network development pipeline.
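The reconciliation principle described above can be sketched as a simple control loop: a controller compares the desired state declared in Git with the state observed in the cluster and computes the actions needed to converge them. The following Python sketch is purely illustrative; the function and resource names (`reconcile`, `upf-config`, and so on) are hypothetical and not part of any real Kubernetes or vendor API:

```python
# Minimal sketch of a GitOps-style reconciliation loop.
# All names here are illustrative, not a real API.

def reconcile(desired: dict, observed: dict) -> list:
    """Compute the actions needed to converge observed state
    toward the desired state declared in Git."""
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(("create", name, spec))
        elif observed[name] != spec:
            actions.append(("update", name, spec))
    for name in observed:
        if name not in desired:
            actions.append(("delete", name, None))
    return actions

# Desired state, as declared in the Git repository
# (the "single source of truth").
desired = {"upf-config": {"replicas": 3}, "amf-config": {"replicas": 2}}

# State actually observed in the cluster: one object has drifted
# and one orphaned object remains.
observed = {"upf-config": {"replicas": 1}, "stale-config": {"replicas": 1}}

for action in reconcile(desired, observed):
    print(action)
```

When automation runs out-of-band, as in the Ansible-plus-Kubernetes split described above, no component performs this comparison, so drift between Git and the live network goes uncorrected until an operator pushes an update by hand.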
KRM for Cloud-Native CI/CD |
IMPACT |
Many CSPs have chosen to rehost (“lift and shift”) or replatform (“lift, tinker, shift”) their applications to cloud environments rather than refactor them, perhaps due to the continued use of Virtualized Network Functions (VNFs). This is a faster process and, in many cases, more cost effective, because it avoids paying for both legacy and cloud infrastructure simultaneously while refactoring prior to cloud deployment. It also means that any subsequent refactoring happens using cloud resources, which helps ensure alignment with the cloud environment. The downside, however, is that CSPs are introducing old code into their new systems, which will lead to inefficiencies if left unchecked. Moreover, refactoring will take significant time, because old code tends to be complex, rife with dependencies, and lacking the level of abstraction needed for cloud-native flexibility and scalability. The inefficiencies caused by rehosting may range from bloated code to broken services or system dependencies, and re-architecting will be required for further efficiency gains.
KRM can help within the narrow domain of GitOps to bridge from merely cloud-based to cloud-native operations. Pushing updates manually is still common practice, and KRM supports alignment with the GitOps principle of automatic reconciliation simply by bringing automation in-band with Kubernetes. Yet KRM’s benefits go further: it also offers declarative control of Kubernetes, is vendor agnostic, and integrates directly with Kubernetes. This is an advantage over cloud-platform Application Programming Interfaces (APIs) that expose Kubernetes functionality for network configuration, but do so in an imperative and vendor-locked way. These benefits are why, for instance, KRM has been instrumental in Project Nephio in enabling open-source, declarative network automation.
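The declarative-versus-imperative distinction can be made concrete. A KRM resource declares the desired end state in a uniform schema (`apiVersion`, `kind`, `metadata`, `spec`) and leaves it to Kubernetes to work out the steps, whereas an imperative API requires the caller to issue each step in order. The sketch below is a hedged illustration: the `NetworkConfig` resource and the imperative call names are invented for contrast, not a real Nephio or vendor schema:

```python
# Declarative: describe WHAT the end state should be, using the
# uniform KRM object shape (apiVersion / kind / metadata / spec).
# This example resource is illustrative, not a real schema.
declarative_resource = {
    "apiVersion": "example.org/v1",
    "kind": "NetworkConfig",
    "metadata": {"name": "edge-site-1"},
    "spec": {"vlan": 100, "mtu": 9000},
}

# Imperative: prescribe HOW to get there, step by step. Each entry
# stands in for a vendor-specific API call that must be sequenced
# and error-handled by the caller.
imperative_steps = [
    "create_interface('edge-site-1')",
    "set_vlan('edge-site-1', 100)",
    "set_mtu('edge-site-1', 9000)",
]

def is_krm_object(obj: dict) -> bool:
    """Check that an object follows the KRM schema conventions."""
    return all(key in obj for key in ("apiVersion", "kind", "metadata", "spec"))

print(is_krm_object(declarative_resource))
```

Because every KRM resource shares this shape, generic tooling can validate, diff, and reconcile objects from any vendor, which is what makes the model vendor agnostic in a way imperative, per-platform APIs are not.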
KRM’s main limitation is its scope. As a framework for managing Kubernetes objects, it helps with operations, but it will not support a broad architectural shift toward cloud-native. Many common issues outside of CI/CD pipelines, such as applications that do not scale or that require manual updates to Internet Protocol (IP) addresses, are not addressed by KRM.
The Bigger Issue |
RECOMMENDATIONS |
KRM is a useful technology that may support all CSPs, regardless of their position along the cloud-native journey. However, the recommendations here go beyond narrow consideration of KRM: