Multicast is an efficient means of distributing streaming data to multiple recipients on a network. Our customers have existing applications and hardware that depend on multicast traffic, yet also want to move these applications into container orchestration engines like Kubernetes. In this post, we'll walk through how to configure Kubernetes (1.18) so containers can ingest multicast traffic without falling back to "host-only" style networking. This is done through a combination of Intel's Multus CNI and a macvlan network driver. We use iperf to test sending and receiving multicast traffic between our two virtual hosts.
Additional details, along with all source for this walkthrough, can be found on GitHub.
Ensure you have Vagrant and VirtualBox installed, then clone the example repository and bring up the VMs. (Vagrant 2.2.5 and VirtualBox 6.0 were used for this walkthrough.)
[ylb@spectric ~]$ git clone git@github.com:spectriclabs/k8s-mcast-example.git
[ylb@spectric ~]$ cd k8s-mcast-example/Vagrant
[ylb@spectric Vagrant]$ vagrant up
This will download and start our two pre-configured VMs: one acts as a multicast source, the other as a single-node Kubernetes cluster. Secure shell into the Kubernetes node and apply the Intel Multus DaemonSet:
[ylb@spectric Vagrant]$ vagrant ssh k8s
[vagrant@k8s ~]$ kubectl apply -f https://raw.githubusercontent.com/intel/multus-cni/master/images/multus-daemonset.yml
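Before continuing, you can optionally check that the Multus DaemonSet pods have started. The exact pod names vary by Multus release, so the grep below is just a rough sanity check:
[vagrant@k8s ~]$ kubectl get pods -n kube-system | grep multus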
We now apply a NetworkAttachmentDefinition, which tells Multus which network interface we want to expose to our containers and how. We need to specify which host interface to use as the macvlan master (the interface to bridge) and an IP range from which to assign addresses. Many other optional parameters may be specified, including a default gateway and custom routes; see the Multus documentation for additional configuration options. Below is the NetworkAttachmentDefinition we will use to expose eth1 on our VM to containers within Kubernetes; a copy of this definition has been placed in /vagrant on the VM:
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: eth1-multicast
spec:
config: '{
"cniVersion": "0.3.0",
"type": "macvlan",
"master": "eth1",
"mode": "bridge",
"ipam": {
"type": "host-local",
"subnet": "10.0.0.0/24",
"rangeStart": "10.0.0.13",
"rangeEnd": "10.0.0.254"
}
}'
Apply the configuration with kubectl:
[vagrant@k8s ~]$ kubectl apply -f /vagrant/net-config.yml
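Once applied, the definition should show up as a custom resource in the cluster. This is just a quick sanity check; the short name net-attach-def should also work:
[vagrant@k8s ~]$ kubectl get network-attachment-definitions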
Now we simply reference that network definition via an annotation in our pod spec to pass the interface into the Pod. This can be done within a standalone pod definition, a Deployment's pod template, and so on (a rough Deployment sketch follows the pod spec below). Apply the example pod shown below to start our multicast consumer within Kubernetes; the pod spec is also available on the VM in /vagrant:
apiVersion: v1
kind: Pod
metadata:
  name: multicast-example
  annotations:
    k8s.v1.cni.cncf.io/networks: eth1-multicast@eth1
spec:
  containers:
  - name: example-multicast-pod
    command: ["iperf", "-s", "-u", "-B", "224.0.67.67%eth1", "-i", "1"]
    image: bagoulla/iperf:2.0
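For reference, if you wanted the consumer managed by a Deployment instead of a bare Pod, the same annotation would go on the pod template. The sketch below is illustrative only and is not part of the example repository; the Deployment name and labels are placeholders:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: multicast-example-deploy  # hypothetical name, not in the repo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: multicast-example
  template:
    metadata:
      labels:
        app: multicast-example
      annotations:
        k8s.v1.cni.cncf.io/networks: eth1-multicast@eth1  # same annotation as the Pod above
    spec:
      containers:
      - name: example-multicast-pod
        command: ["iperf", "-s", "-u", "-B", "224.0.67.67%eth1", "-i", "1"]
        image: bagoulla/iperf:2.0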
Apply the pod spec and, once it has successfully deployed, tail the logs:
[vagrant@k8s ~]$ kubectl apply -f /vagrant/example-mcast-pod.yml
pod/multicast-example created
... # Use "kubectl describe pod multicast-example"
... # to track the launch
[vagrant@k8s ~]$ kubectl logs -f multicast-example
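To confirm that the macvlan interface actually made it into the pod, you can also list its network interfaces. This assumes the bagoulla/iperf image includes the ip utility; if it does not, kubectl describe pod multicast-example will still show the attached networks in the pod's annotations and events:
[vagrant@k8s ~]$ kubectl exec multicast-example -- ip addr show eth1
The eth1 interface inside the pod should carry an address from the 10.0.0.13–10.0.0.254 range defined in the NetworkAttachmentDefinition.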
In a separate terminal, secure shell into our multicast source VM and start sending multicast packets. Here iperf sends UDP traffic to the 224.0.67.67 group for 5 seconds with a multicast TTL of 5, bound to the source address 10.0.0.12:
[ylb@spectric ~]$ cd k8s-mcast-example/Vagrant
[ylb@spectric Vagrant]$ vagrant ssh mcastsrc
[vagrant@mcastsrc ~]$ docker run --net=host --rm bagoulla/iperf:2.0 -c 224.0.67.67 -u --ttl 5 -t 5 -B 10.0.0.12
...
The pod logs should show that multicast packets have been received.
You can now bring down and delete the VMs with:
[ylb@spectric ~]$ cd k8s-mcast-example/Vagrant
[ylb@spectric Vagrant]$ vagrant destroy -f