Deploying Cisco XRd on Red Hat OpenShift
This blog provides step-by-step instructions on how to onboard and configure Cisco XRd Control Plane on Red Hat OpenShift in a lab environment.
What is Cisco XRd anyway?
Cisco XRd is a containerized router running Cisco’s flagship IOS-XR operating system, and it can be instantiated on any cloud infrastructure, public or private. XRd is available in two flavors:
- XRd Control Plane: Targeted towards compute-heavy use cases such as a virtual Route Reflector (vRR) or a Path Computation Element (PCE), and
- XRd vRouter: Focused on packet forwarding; it can be used as a virtual PE (vPE) or a virtual Cell Site Router (vCSR)
Both XRd variants, XRd Control Plane and XRd vRouter, are vendor validated and listed as supported CNFs for Red Hat OpenShift Container Platform. You can get your copy of XRd here.
Onboarding XRd on Red Hat OpenShift is a five-step process that includes:
- Preparing the cluster by tweaking default system parameters to the values XRd requires
- Preparing the XRd tenant environment by creating a namespace and service accounts
- Defining a startup config for the XRd router, if so desired
- Onboarding the XRd router with desired number of interfaces
- Connecting to and configuring your new XRd router
Lab Topology and Prerequisites
The goal of this blog is pretty simple: Learn how to onboard XRd on Red Hat OpenShift. As such, a very simple topology is used in this blog. If you have a baremetal server (or a virtual machine) capable of running Red Hat OpenShift, you can also set up this topology in your home lab.
The following topology is used throughout this blog.
Prerequisites
Your topology should have the following.
- A functional OpenShift Cluster
- You can install Red Hat Single Node OpenShift (SNO) using the Red Hat Assisted Installer as described here.
- A jumphost/workstation with the OpenShift client (the oc CLI) installed. The oc client is used to interact with the OpenShift cluster.
XRd Requirements
Like any other latency-sensitive and/or CPU-intensive workload, XRd needs the host it runs on to be fine-tuned for optimal performance. Some of these requirements are mandatory to run XRd even in a testing environment, while others are recommended parameter settings for a production environment. Omitting the production requirements may cause runtime issues resulting in an unstable system, whereas omitting the lab requirements will result in the XRd CNF failing to reach running state.
The following are the requirements for XRd 7.8.1, the version used in this blog:
Provider | Requirement | Desired Value | Default | Required? |
---|---|---|---|---|
Hardware | Architecture | x86 | N/A | Required |
Hardware | CPU Cores | 2 | N/A | Required |
Basic OpenShift | Min Kernel Version | 4.0 | 4.x | Required |
Basic OpenShift | Exact Cgroup Version | 1 | 1 (2 is available, not enabled) | Required |
Basic OpenShift | PID Limit | 2500 | 4096 | Required |
Basic OpenShift | Sysctl fs.inotify.max_user_instances (minimum for 1 instance ) | 4000 | 8192 | Required |
Basic OpenShift | netdev_max_backlog | 30000 | 1000 | Required |
Basic OpenShift | optmem_max | 67108864 | 81920 | Recommended |
Basic OpenShift | rmem_default | 67108864 | 212992 | Recommended |
Basic OpenShift | rmem_max | 67108864 | 212992 | Recommended |
Basic OpenShift | wmem_default | 67108864 | 212992 | Recommended |
Basic OpenShift | wmem_max | 67108864 | 212992 | Recommended |
Performance Profile | Huge Page Size | 1GB | - | Required |
Performance Profile | # of Huge Pages | 2 | - | Required |
TuneD | Address-Space Layout Randomization with data segments Sysctl kernel.randomize_va_space | 2 | 2 | Required |
TuneD | Sysctl net.ipv4.ip_local_port_range | 1024 - 65535 | 32768 - 60999 | Required |
TuneD | Sysctl net.ipv4.tcp_tw_reuse | 1 | 2 | Required |
TuneD | Sysctl net.core.rmem_max | 67108864 | 212992 | Required |
TuneD | Sysctl net.core.wmem_max | 67108864 | 212992 | Required |
TuneD | Sysctl net.core.rmem_default | 67108864 | 212992 | Required |
TuneD | Sysctl net.core.wmem_default | 67108864 | 212992 | Required |
TuneD | Sysctl net.ipv4.udp_mem | 1124736 10000000 67108864 | - | Required |
MachineConfig | Sysctl fs.inotify.max_user_instances (recommended) | 64000 | 8192 | Required |
MachineConfig | Sysctl fs.inotify.max_user_watches | 65536 | 65536 | Required |
Let’s start working on preparing the cluster with the parameters defined above for XRd deployment.
Step 1: Preparing the Cluster
Like any other software, Red Hat OpenShift is installed with generic, default values. However, as mentioned in the table above, XRd requires certain values to be tweaked for its needs. This section covers how to tune the OpenShift cluster using the following:
- Reserve CPUs and enable hugepages through a performance profile
- Modify sysctl parameters using a TuneD profile
- Modify additional parameters through machine configuration
Step 1.1 Reserving CPUs and Enabling Hugepages
Using the Node Tuning Operator that comes pre-packaged with the OpenShift installation, you can perform multiple node tuning functions, such as isolating CPUs for system and application workloads or defining hugepages, both of which are achieved by defining a performance profile.
Before starting, verify the total number of CPUs and current status of your hugepages configuration as shown below.
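For example (the node name is a placeholder for your own node):

```bash
oc get nodes
# CPU count and current hugepages allocation for the node
oc describe node <node-name> | grep -iE 'cpu:|hugepages'
```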
When you configure reserved and isolated CPUs, the infra containers in pods use the reserved CPUs and the application containers use the isolated CPUs. The performance profile configured below reflects 32 CPUs (0-31), four of which (0-3) are reserved for OpenShift infra, while the rest (4-31) are available for application workloads.
In addition to isolating CPUs, the performance profile can also be used to configure the node for Hugepages. Based on the XRd requirements mentioned earlier, at least 2 huge pages of size 1G each will need to be enabled as shown in the performance profile below.
Here is the YAML manifest that configures CPU isolation and the desired hugepages. The file is called pao.yaml.
More detail on performance profiles can be found in the OpenShift documentation here.
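A minimal sketch of what pao.yaml might contain, assuming a 32-CPU single node carrying the master role; the profile name iosxr-performanceprofile is chosen to match the auto-generated TuneD profile referenced later. Adjust the CPU ranges and nodeSelector to your hardware:

```yaml
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: iosxr-performanceprofile
spec:
  cpu:
    reserved: "0-3"     # CPUs kept for OpenShift infra
    isolated: "4-31"    # CPUs available to application workloads
  hugepages:
    defaultHugepagesSize: 1G
    pages:
      - size: 1G
        count: 2
  nodeSelector:
    node-role.kubernetes.io/master: ""   # SNO: the single node carries the master role
```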
The YAML file with the performance profile can be applied using oc apply -f as shown below. Note that this step will reboot your node.
Once the configuration is applied, you can verify the hugepages configuration using the *oc describe* command for the node:
Note that CPU isolation is not explicitly spelled out in oc describe. To verify that CPUs have been isolated for applications, you can connect to the host using the oc debug node command:
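Assuming the manifest was saved as pao.yaml:

```bash
oc apply -f pao.yaml
# the node reboots; watch the rollout
oc get mcp -w
```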
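For example:

```bash
oc describe node <node-name> | grep -i hugepages
```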
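A sketch of one way to check this; the exact files inspected may vary by OpenShift release:

```bash
oc debug node/<node-name>
chroot /host
cat /proc/cmdline                    # kernel arguments added by the performance profile
grep -ri cpuaffinity /etc/systemd/   # CPUs reserved for system (infra) processes
```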
Since this is a single node cluster, there are no additional nodes in the cluster to apply the profile. However, if a multiple node cluster was being used, the performance profile could be directed towards a subset of nodes by using an appropriate node selector.
Step 1.2: Modify Sysctl Parameters through a TuneD Profile
Linux kernel parameters can be enabled, set, or disabled using sysctl. These settings can be altered by changing parameters in the sysctl configuration (using /etc/sysctl.conf, or a file under /etc/sysctl.d/). However, the preferred way to tweak system parameters on RHEL (7 onwards) and Red Hat CoreOS (the default OS for OpenShift) is the TuneD service. A detailed description of TuneD is beyond the scope of this blog. On a system running OpenShift, the tuned values can be altered using the Tuned resource type, which ensures that the configuration is applied to the OpenShift nodes.
In the case of XRd, there are a number of kernel parameters that need to be adjusted. While some of these requirements may be met by the default configuration, others need to be altered. The following TuneD manifest ensures that the required tweaks are made to the system:
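A sketch of that TuneD manifest (here called tuned.yaml), built from the requirement table above; the profile name, match label (the SNO master node), and priority are assumptions to adapt to your cluster:

```yaml
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: xrd-tuned
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
    - name: xrd-tuned
      data: |
        [main]
        summary=Sysctl tuning for Cisco XRd
        include=openshift-node-performance-iosxr-performanceprofile
        [sysctl]
        net.ipv4.ip_local_port_range="1024 65535"
        net.ipv4.tcp_tw_reuse=1
        net.core.rmem_max=67108864
        net.core.wmem_max=67108864
        net.core.rmem_default=67108864
        net.core.wmem_default=67108864
        net.core.netdev_max_backlog=30000
        net.core.optmem_max=67108864
        net.ipv4.udp_mem="1124736 10000000 67108864"
        kernel.randomize_va_space=2
  recommend:
    - match:
        - label: node-role.kubernetes.io/master
      priority: 19
      profile: xrd-tuned
```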
Note that this TuneD profile embeds a profile called openshift-node-performance-iosxr-performanceprofile. That's because only one TuneD profile can be applied at a time, and openshift-node-performance-iosxr-performanceprofile exists as a result of the performance profile created in the last step. If it is not included, some of the changes made by the performance profile (specifically, the CPU isolation) would be undone. Before applying the new profile, the following command can be used to verify that the currently applied TuneD profile is the one auto-generated by the performance profile:
Now let's apply the new TuneD profile:
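For example:

```bash
oc get profile -n openshift-cluster-node-tuning-operator
```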
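Assuming the manifest was saved as tuned.yaml:

```bash
oc apply -f tuned.yaml
```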
At this point, you can verify that the new profile has been applied, as shown in the following command output:
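Re-running the same query should now show the new profile in the TUNED column:

```bash
oc get profile -n openshift-cluster-node-tuning-operator
```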
Now that the TuneD profile has configured the system, you can validate that the individual parameters set by this profile are applied on the OpenShift node by connecting to the node using oc debug. The example below verifies the net.ipv4.ip_local_port_range parameter.
Step 1.3: Modify Additional Kernel Parameters
Generally, all kernel settings are done using a TuneD manifest. However, if a MachineConfig also tweaks those parameters, then the MachineConfig takes precedence and overrides the specific changes made using the TuneD manifest.
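For example:

```bash
oc debug node/<node-name>
chroot /host
sysctl net.ipv4.ip_local_port_range
```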
There are two XRd-required parameters that were skipped in the previous section’s TuneD manifest for this reason, specifically “fs.inotify.max_user_watches” and “fs.inotify.max_user_instances”. A default MachineConfig already sets these parameters on the node.
Even if these were configured using TuneD, the values set by that MachineConfig would override the change. You can verify the current values by attaching to the node using oc debug:
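For example:

```bash
oc debug node/<node-name>
chroot /host
sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches
```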
To override these default values, create the following Machine Config:
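A sketch of such a MachineConfig (here called mc.yaml); it drops a sysctl.d file with the two desired values onto the node. The master role label assumes an SNO node, and the file contents are URL-encoded:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-xrd-inotify
  labels:
    machineconfiguration.openshift.io/role: master
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - path: /etc/sysctl.d/99-xrd-inotify.conf
          mode: 0644
          overwrite: true
          contents:
            # fs.inotify.max_user_watches=65536 and fs.inotify.max_user_instances=64000
            source: data:,fs.inotify.max_user_watches%3D65536%0Afs.inotify.max_user_instances%3D64000%0A
```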
Before applying the MachineConfig, verify the current status of the MachineConfig pool (MCP) and its rendered configuration as shown below:
Now apply the new MachineConfig as shown below. Note that this step will trigger a node reboot.
After the node reboots, check the MachineConfig pool. It will be in the Updating state, as shown below:
After a little while, the MachineConfig pool will transition from Updating to Updated. At this point, the MCP is good to go.
Once the MachineConfig pool has transitioned to UPDATED=True and UPDATING=False, you can attach to the node and verify the new values:
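For example:

```bash
oc get mcp
```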
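Assuming the manifest was saved as mc.yaml:

```bash
oc apply -f mc.yaml
```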
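For example:

```bash
oc get mcp    # UPDATING shows True while the change rolls out
```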
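For example:

```bash
oc debug node/<node-name>
chroot /host
sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches
```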
At this point all the node tuning has been completed and you can start preparing the XRd tenant.
Step 2: Preparing the XRd Tenant Environment
Once the node’s system parameters are tuned in preparation for XRd deployment, you can create the XRd environment by:
- Creating a namespace for XRd containers
- Creating a service account to be used by XRd
- Assigning applicable privileges to the service account
Step 2.1 Creating a Namespace
A Kubernetes namespace is a virtual sub-space that provides logical isolation for the resources (such as pods, services, etc.) of a project. OpenShift uses the umbrella term project to refer to namespaces.
The following manifest will be used to create a namespace called “xrd”.
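A minimal sketch of that manifest:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: xrd
```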
Now apply this manifest on your system to create this namespace.
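Assuming the manifest was saved as namespace.yaml (the file name is an assumption):

```bash
oc apply -f namespace.yaml
```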
Once the manifest is applied, verify that the project is indeed created:
While the new namespace (xrd) is now created, your oc commands will continue to use your current namespace by default. It will save you some future typing if you switch to the newly created namespace by using the following command:
Now let's create a service account for XRd to use.
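For example:

```bash
oc get projects | grep xrd
```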
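For example:

```bash
oc project xrd
```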
Step 2.2: Create a Service Account
Currently, XRd needs privileged access to some of the host’s resources. It, therefore, needs to run as a privileged pod, and hence should be run by a privileged user or service account. For this purpose, a new service account will need to be created using the process below.
Optionally, you can first verify which service accounts already exist on the system:
Here is the manifest that creates the service account called xrd-sa in the newly created xrd namespace:
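For example:

```bash
oc get serviceaccounts -n xrd
```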
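A minimal sketch of that manifest (saved as sa.yaml, per the file name referenced later in the StatefulSet overview):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: xrd-sa
  namespace: xrd
```

Apply it with oc apply -f sa.yaml before verifying.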
Verify that the new service account is created:
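For example:

```bash
oc get serviceaccount xrd-sa -n xrd
```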
Step 2.3: Giving Privileged Access to the Service Account
To complete the process of creating a privileged service account, the newly created service account will now be bound to a pre-existing role called privileged. The list of pre-created roles and their privilege status can be found using the following command:
Here is the manifest that binds the xrd-sa service account to the privileged role:
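A hedged example, assuming the listing refers to the cluster's security context constraints (SCCs); the PRIV column shows which entries allow privileged containers:

```bash
oc get scc
```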
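A sketch of what role.yaml might look like; binding to the ClusterRole system:openshift:scc:privileged is one common way to grant the privileged SCC to a service account, and is an assumption about how the blog's manifest is written:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: xrd-sa-privileged
  namespace: xrd
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:openshift:scc:privileged   # grants use of the privileged SCC
subjects:
  - kind: ServiceAccount
    name: xrd-sa
    namespace: xrd
```

Alternatively, the same result can be achieved imperatively with oc adm policy add-scc-to-user privileged -z xrd-sa -n xrd.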
Now apply this manifest as shown below:
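Assuming the manifest was saved as role.yaml:

```bash
oc apply -f role.yaml
```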
Once the manifest is applied, verify that xrd-sa is now a privileged user.
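For example, checking that the binding exists and references xrd-sa:

```bash
oc get rolebinding -n xrd -o wide
```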
Now that the environment for the XRd tenant is created, let's move on to defining a startup configuration for the containerized XRd router.
Step 3: Define Startup Configuration For XRd Router
Before deploying an XRd instance on OpenShift, a startup configuration can be made available to the router using a ConfigMap.
Here is a ConfigMap that provides a very basic configuration (hostname, username, and password) to the XRd container:
Now apply the ConfigMap for XRd to use later:
Once the ConfigMap is created, move on to the next step, which onboards XRd with the desired number of interfaces.
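A sketch of configmap.yaml; the hostname, username, and password are placeholders to replace with your own, and the startup.cfg key name must match what the StatefulSet later points XRd at:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: xrd-control-plane-config
  namespace: xrd
data:
  startup.cfg: |
    hostname xrd-control-plane
    username xrduser
     group root-lr
     group cisco-support
     password xrdpassword
    !
```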
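For example:

```bash
oc apply -f configmap.yaml
oc get configmap -n xrd
```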
Step 4: Onboard XRd Control Plane with Desired Interfaces
Now that the host environment has been prepared (performance profile, TuneD, and MachineConfig), the XRd tenant environment has been created (namespace, service account, and role binding), and a basic startup configuration has been defined, it's time to onboard your first XRd Control Plane instance.
To onboard the router, you will have to do the following:
- Define interfaces for the router by creating a Network Attachment Definition (NAD)
- Instantiate an XRd Control Plane pod using a StatefulSet
Step 4.1: Define Interfaces for your XRd Control Plane
By default, a Kubernetes Pod is provided with a single interface for management that is used by the Kubernetes infrastructure to communicate with and access it. Being a router, however, XRd should have multiple interfaces to, you know, route traffic. To provide a pod with additional interfaces, a resource of type Network Attachment Definition has to be created.
In the example below, a Network Attachment Definition uses the macvlan plugin and associates it with a physical interface, eno2 in this case. This plugin allows a pod to define a virtual interface with a unique MAC address allocated to it. The plugin can be configured in different modes; in this case we use “bridge” mode, as it allows the pods utilizing this plugin to communicate with each other as well as with external devices connected via the associated physical interface. More details on macvlan and other such plugins are available here:
Note that the name of the physical interface may be different in your environment; you can use the following process to find the physical interface names on your server:
The use of macvlan and its association with the physical interface for XRd in this lab is shown in the figure below:
The following Network Attachment Definition manifest reflects the topology shown above.
Note: It is important that you replace eno2 with your own interface name in the manifest shown below.
Here are some brief explanations of the terms used in the manifest:
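A sketch of one way to list the NIC names on the node:

```bash
oc debug node/<node-name>
chroot /host
ip link show   # physical interface names (eno1, eno2, ...) appear here
```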
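A sketch of nad.yaml reflecting the terms explained below; replace eno2 with your own uplink interface:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-1
  namespace: xrd
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "macvlan-1",
      "type": "macvlan",
      "mode": "bridge",
      "master": "eno2",
      "ipam": {}
    }
```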
- Name: The name of the virtual interface
- Type: Virtual interfaces can be of various types, such as macvlan, ipvlan, etc., as covered above.
- Mode: Defines the forwarding mode for the macvlan; we use bridge mode, as mentioned earlier.
- Master: The physical interface used by the macvlan interface as its uplink.
- IPAM: IP Address Management is not used in this case.
Now apply this manifest and verify it as shown below:
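For example:

```bash
oc apply -f nad.yaml
oc get network-attachment-definitions -n xrd
```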
Now you are ready to instantiate your first XRd pod.
Step 4.2: Deploy XRd Using a StatefulSet
We will use a StatefulSet to instantiate the XRd Control Plane pod. The StatefulSet used here is shown below:
Here is a brief overview of the various values used in the manifest above:
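A sketch of statefulset.yaml tying together the pieces created so far. The image path is a placeholder for the XRd Control Plane image in your own registry, and the XR_FIRST_BOOT_CONFIG / XR_INTERFACES environment variables follow the XRd container documentation; verify the exact variable names and values against the release notes for your XRd version:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: xrd-control-plane
  namespace: xrd
spec:
  serviceName: xrd-control-plane
  replicas: 1
  selector:
    matchLabels:
      app: xrd-control-plane
  template:
    metadata:
      labels:
        app: xrd-control-plane
      annotations:
        k8s.v1.cni.cncf.io/networks: '[{"name": "macvlan-1"}]'   # from nad.yaml
    spec:
      serviceAccountName: xrd-sa                                  # from sa.yaml / role.yaml
      containers:
        - name: xrd-control-plane
          image: <your-registry>/xrd-control-plane:7.8.1          # downloaded from cisco.com, pushed to your registry
          securityContext:
            privileged: true
          env:
            - name: XR_FIRST_BOOT_CONFIG
              value: /etc/xrd/startup.cfg                         # startup config mounted from the ConfigMap
            - name: XR_INTERFACES
              value: linux:net1                                   # net1 is the interface added by macvlan-1
          volumeMounts:
            - name: startup-config
              mountPath: /etc/xrd
      volumes:
        - name: startup-config
          configMap:
            name: xrd-control-plane-config                        # from configmap.yaml
```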
- network name: using macvlan-1 as defined in nad.yaml
- serviceAccountName: using xrd-sa as defined in sa.yaml and assigned to the privileged role in role.yaml
- configMap name: using xrd-control-plane-config as the startup config, as defined in configmap.yaml
- Container image: This is the container image that should be downloaded from cisco.com and placed in a container image registry.
- Interface values: Once applied, the interfaces defined under network name are mapped to Linux interface(s), which are in turn mapped to XRd router interfaces. An example of this mapping is shown later, after the XRd Control Plane is instantiated.
Now apply the StatefulSet as shown below.
And that's it! Your XRd Control Plane pod is now running, as shown below.
To inspect the pod and its parameters, you can use the oc describe pod xrd-control-plane-0 command. You can also see the association between macvlan-1, net1, and the XRd Gig 0/0/0/0 interface using the following two commands:
Congratulations! You've instantiated your first Cisco XRd on Red Hat OpenShift. Now it's time to connect to your router and configure it.
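Assuming the manifest was saved as statefulset.yaml:

```bash
oc apply -f statefulset.yaml
```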
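For example:

```bash
oc get pods -n xrd
```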
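A sketch of two such commands: the first reads the pod's network-status annotation (showing net1 attached through macvlan-1), and the second lists net1 inside the pod (assuming iproute2 is present in the XRd image):

```bash
oc describe pod xrd-control-plane-0 -n xrd | grep -A8 network-status
oc exec -it xrd-control-plane-0 -n xrd -- ip link show net1
```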
Step 5: Connect to and Configure Your XRd Router
Just like any other pod, you can use the oc exec command to connect to XRd. The difference is that you can directly connect to the XR shell to run your beloved IOS-XR commands. Bear in mind that you will need the login credentials defined in the configmap.
Here is how to connect to the xr shell on the XRd container:
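For example (the xr helper that drops you into the IOS-XR CLI is assumed to be present in the XRd image; check the XRd documentation for your release):

```bash
oc exec -it xrd-control-plane-0 -n xrd -- xr
```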
Notice that the hostname is already set, courtesy of the ConfigMap defined in Step 3. Now that you are connected to the router, you can configure it just like any IOS-XR router. Let's verify that we have an interface associated with the router, as defined in the StatefulSet and NAD manifests.
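For example, from the XR CLI:

```
show ipv4 interface brief
```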
We will now use the following configuration to assign a static IP to this interface as well as a default route so that we can reach devices outside our local network.
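A sketch of that configuration from the XR CLI; the addresses are placeholders for your own lab subnet and gateway:

```
configure
 interface GigabitEthernet0/0/0/0
  ipv4 address 192.168.1.10 255.255.255.0
  no shutdown
 !
 router static
  address-family ipv4 unicast
   0.0.0.0/0 192.168.1.1
 !
 commit
end
```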
You can now verify that the interface is up with the right IP address, and ping the default gateway and other external devices. Note that this ping is possible because of the macvlan interface defined in the Network Attachment Definition in nad.yaml, which uses eno2 as its uplink interface.
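For example (the gateway address is the placeholder used above):

```
show ipv4 interface brief
ping 192.168.1.1
```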
Congratulations. Your XRd is now fully operational with an interface associated with it for external connectivity.
Git Resources and Using Kustomize to Run All Manifests at Once
All the files used in this blog are publicly available in our Git repository, which provides a full set of the manifests referenced here.
Here's a tip: you can clone this Git repo and use kustomization to apply ALL manifests at once, as shown below:
Kustomize uses a kustomization.yaml file that describes all the manifests (resources) to be applied together.
The oc apply -k . command shown above will apply all the manifests that are listed in the kustomization.yaml file. Currently, the kustomization file shared in our Git repo includes the following manifests:
Once this kustomization file is applied, all the tasks listed in this blog will be executed in one go, and an XRd instance will be instantiated on your OpenShift cluster, provided that you have updated statefulset.yaml with the correct XRd container image.
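For example (the repository URL is not reproduced here; substitute your own clone of the repo):

```bash
git clone <repo-url>
cd <repo-directory>
oc apply -k .
```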
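A sketch of what that kustomization.yaml might look like, using the file names mentioned in this blog (names not explicitly given in the text, such as tuned.yaml, mc.yaml, and namespace.yaml, are assumptions):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - pao.yaml
  - tuned.yaml
  - mc.yaml
  - namespace.yaml
  - sa.yaml
  - role.yaml
  - configmap.yaml
  - nad.yaml
  - statefulset.yaml
```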
In a future blog, we will put together a simulated O-RAN xHaul Architectural topology using multiple XRd (and possibly other vendors’) containerized routers.