Deploy a Single Node Production SD-Core
This guide covers how to install an SD-Core 5G core network on a single node for production.
See also: Production deployment reference
Prepare the production node
These steps need to be run on the production node itself.
Install driverctl:
sudo apt update
sudo apt install -y driverctl
Configure two 1G huge pages:
sudo sed -i "s/GRUB_CMDLINE_LINUX=.*/GRUB_CMDLINE_LINUX='default_hugepagesz=1G hugepages=2'/" /etc/default/grub
sudo update-grub
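The new kernel parameters only take effect after the reboot at the end of this section. Once the node is back up, you can optionally confirm that the two 1G huge pages were allocated:
grep Huge /proc/meminfo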
Record the MAC addresses of the access and core network interfaces, updating the interface names to match your setup; they will be required later:
export ACCESS_NIC=enp4s0f0
export CORE_NIC=enp4s0f1
cat /sys/class/net/$ACCESS_NIC/address
cat /sys/class/net/$CORE_NIC/address
Record the PCI addresses of the access and core network interfaces, updating the interface names to match your setup:
export ACCESS_NIC=enp4s0f0
export CORE_NIC=enp4s0f1
readlink -f /sys/class/net/$ACCESS_NIC/device | xargs basename
readlink -f /sys/class/net/$CORE_NIC/device | xargs basename
Create the /etc/rc.local file with the following content, replacing the PCI addresses with the ones recorded in the previous step:
#!/bin/bash
sudo driverctl set-override 0000:04:00.0 vfio-pci
sudo driverctl set-override 0000:04:00.1 vfio-pci
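On most systems /etc/rc.local is only executed at boot if it is marked executable, so it may be necessary to set the permission explicitly. After the reboot at the end of this section, driverctl can also be used to confirm that the overrides took effect:
sudo chmod +x /etc/rc.local
sudo driverctl list-overrides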
Install and bootstrap Canonical K8s:
sudo snap install k8s --classic --channel=1.33-classic/stable
cat << EOF | sudo k8s bootstrap --file -
containerd-base-dir: /opt/containerd
cluster-config:
  network:
    enabled: true
  dns:
    enabled: true
  load-balancer:
    enabled: true
  local-storage:
    enabled: true
  annotations:
    k8sd/v1alpha1/cilium/sctp/enabled: true
EOF
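Before continuing, you can optionally wait until the cluster reports that it is ready:
sudo k8s status --wait-ready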
Add the Multus plugin:
sudo k8s kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/master/deployments/multus-daemonset-thick.yml
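Optionally, wait for the Multus DaemonSet to finish rolling out before continuing:
sudo k8s kubectl rollout status daemonset/kube-multus-ds -n kube-system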
Note
There is a known issue where Multus can sometimes need more memory than allowed in the DaemonSet, especially when starting many containers concurrently. Symptoms of this issue are many pods unable to start, Multus in CrashLoopBackOff, and OOM lines in dmesg. If this impacts you, edit the memory limit in the Multus DaemonSet to 500Mi:
sudo k8s kubectl edit daemonset -n kube-system kube-multus-ds
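In the editor, the change goes in the resources block of the kube-multus container; the surrounding values are left as they are and may differ from this sketch:
resources:
  limits:
    cpu: 100m
    memory: 500Mi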
Create a manifest file sriovdp-config.yaml, replacing the PCI addresses with those recorded previously:
apiVersion: v1
kind: ConfigMap
metadata:
  name: sriovdp-config
data:
  config.json: |
    {
      "resourceList": [
        {
          "resourceName": "intel_sriov_vfio_access",
          "selectors": {
            "pciAddresses": ["0000:04:00.0"]
          }
        },
        {
          "resourceName": "intel_sriov_vfio_core",
          "selectors": {
            "pciAddresses": ["0000:04:00.1"]
          }
        }
      ]
    }
Apply the manifest:
sudo k8s kubectl apply -f sriovdp-config.yaml
Install the SR-IOV device plugin:
sudo k8s kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/sriov-network-device-plugin/master/deployments/sriovdp-daemonset.yaml
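Once the device plugin pod is running, the two SR-IOV resources defined above should appear in the node's allocatable resources (the plugin prefixes them with intel.com/ by default). A quick way to check is:
sudo k8s kubectl get node -o jsonpath='{.items[0].status.allocatable}'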
Install the vfioveth CNI:
sudo wget -O /opt/cni/bin/vfioveth https://raw.githubusercontent.com/opencord/omec-cni/master/vfioveth
sudo chmod +x /opt/cni/bin/vfioveth
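Confirm that the CNI binary is in place and executable:
ls -l /opt/cni/bin/vfioveth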
Create the ipaddresspools.yaml manifest defining the static IP address pools for MetalLB, updating the following content with the four unconfigured static IP addresses from the management network:
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lb-address-cos
  namespace: metallb-system
spec:
  addresses:
    - 10.201.0.3/32
  avoidBuggyIPs: false
  serviceAllocation:
    namespaces:
      - cos-lite
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lb-address-nms
  namespace: metallb-system
spec:
  addresses:
    - 10.201.0.4/32
  avoidBuggyIPs: false
  serviceAllocation:
    namespaces:
      - control-plane
    serviceSelectors:
      - matchExpressions:
          - {key: "app.juju.is/created-by", operator: In, values: [traefik]}
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lb-address-amf
  namespace: metallb-system
spec:
  addresses:
    - 10.201.0.5/32
  avoidBuggyIPs: false
  serviceAllocation:
    namespaces:
      - control-plane
    serviceSelectors:
      - matchExpressions:
          - {key: "app.juju.is/created-by", operator: In, values: [amf]}
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lb-address-upf
  namespace: metallb-system
spec:
  addresses:
    - 10.201.0.6/32
  avoidBuggyIPs: false
  serviceAllocation:
    namespaces:
      - user-plane
Apply the manifest:
sudo k8s kubectl apply -f ipaddresspools.yaml
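Optionally confirm that the four address pools were created:
sudo k8s kubectl get ipaddresspools.metallb.io -n metallb-system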
Extract the Kubernetes configuration to a file:
sudo k8s config > sdcore_k8s_config
Transfer the resulting file to the machine used for installation.
Reboot the production node:
sudo reboot
Bootstrap a Juju controller
The remaining steps need to be run from the installation machine.
Add the Kubernetes cluster to the Juju client and bootstrap the controller:
export KUBECONFIG=/path/to/sdcore_k8s_config
juju add-k8s sdcore_k8s --cluster-name=k8s --client
juju bootstrap --config controller-service-type=loadbalancer sdcore_k8s
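When the bootstrap completes, the new controller should be listed by the Juju client:
juju controllers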
Deploy SD-Core
Create a Terraform module for your deployment:
mkdir terraform
cd terraform
Create a main.tf file with the following content, updating the values for your deployment:
module "sdcore-production" {
source = "git::https://github.com/canonical/charmed-aether-sd-core//production"
amf_ip = "10.201.0.12"
amf_hostname = "amf.example.com"
gnb_subnet = "10.204.0.0/24"
nms_domainname = "sdcore.example.com"
upf_access_gateway_ip = "10.202.0.1"
upf_access_ip = "10.202.0.10/24"
upf_access_mac = "a1:b2:c3:d4:e5:f6"
upf_core_gateway_ip = "10.203.0.1"
upf_core_ip = "10.203.0.10/24"
upf_core_mac = "a1:b2:c3:d4:e5:f7"
upf_enable_hw_checksum = "true"
upf_enable_nat = "false"
upf_hostname = "upf.example.com"
}
Note
All hostnames listed in main.tf should be configured in your DNS server, pointing to the selected static IPs. Specific instructions on how to do this are out of scope for this document; consult your DNS server's documentation.
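As a quick sanity check once the records exist, resolving the hostnames from the installation machine should return the selected static IPs, for example (using the example hostnames from main.tf above):
dig +short amf.example.com
dig +short upf.example.com
dig +short sdcore.example.com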
Initialize the provider and run the deployment:
terraform init
terraform apply -auto-approve
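The deployment takes some time to settle. Progress can be followed from the installation machine, for example:
juju switch control-plane
watch -n 5 juju status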
Access NMS
Retrieve the NMS address:
juju switch control-plane
juju run traefik/0 show-proxied-endpoints
Retrieve the NMS credentials (username and password):
juju show-secret NMS_LOGIN --reveal
The output looks like this:
cvn3usfmp25c7bgqqr60:
  revision: 2
  checksum: f2933262ee923c949cc0bd12b0456184bb85e5bf41075028893eea447ab40b68
  owner: nms
  label: NMS_LOGIN
  created: 2025-04-03T07:57:40Z
  updated: 2025-04-03T08:02:15Z
  content:
    password: pkxp9DYCcZG
    token: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE3NDM2NzA5MzMsInVzZXJuYW1lIjoiY2hhcm0tYWRtaW4tVlNMTSIsInJvbGUiOjF9.Qwp0PIn9L07nTz0XooPvMb8v8-egYJT85MXjoOY9nYQ
    username: charm-admin-VSLM
Access Grafana
Retrieve Grafana’s URL and admin password:
juju switch cos-lite
juju run grafana/leader get-admin-password
This produces output similar to the following:
Running operation 1 with 1 task
- task 2 on unit-grafana-0
Waiting for task 2...
admin-password: c72uEq8FyGRo
url: http://10.201.0.3/cos-lite-grafana