
Multicast within Kubernetes

Updated: Mar 15, 2022

Multicast is an efficient means of distributing streaming data to multiple recipients on a network. Our customers have existing applications and hardware that depend on multicast traffic, yet also want to move those applications into container-based orchestration engines like Kubernetes. In this post, we’ll walk through how to configure Kubernetes (1.18) to allow for container multicast ingest without having to fall back to “host-only” style networking. This is done through a combination of Intel’s Multus CNI and a macvlan network driver. We’ll use iperf to test sending and receiving multicast traffic between our two virtual hosts.
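To make “multicast ingest” concrete before we touch Kubernetes, the sketch below shows what a consumer like iperf does under the hood: join a group with `IP_ADD_MEMBERSHIP` and read datagrams from it. This is plain Python with nothing Kubernetes-specific; the group address and port are arbitrary examples, and a local sender is included only so the sketch is self-contained.

```python
import socket
import struct

MCAST_GRP = "239.1.1.1"   # example group address; substitute your own
MCAST_PORT = 5001         # example port

# Receiver: bind to the group's port and ask the kernel to join the group
# on the default interface (0.0.0.0 = let the kernel choose).
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", MCAST_PORT))
mreq = struct.pack("4s4s", socket.inet_aton(MCAST_GRP), socket.inet_aton("0.0.0.0"))
rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
rx.settimeout(5.0)

# Local sender, just so this sketch runs standalone: with IP_MULTICAST_LOOP
# (enabled by default) the sending host also delivers a copy to local members.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
tx.sendto(b"hello multicast", (MCAST_GRP, MCAST_PORT))

data, addr = rx.recvfrom(1500)
print(f"received {data!r} from {addr}")
```

The problem the rest of this post solves is getting datagrams like these from the host network to a socket like `rx` when it lives inside a container.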

Additional details, along with all source for this walkthrough, can be found on GitHub.

Ensure you have Vagrant and VirtualBox installed, then clone the example repository and bring up our VMs. (Vagrant 2.2.5 and VirtualBox 6.0 were used.)

[ylb@spectric ~]$ git clone
[ylb@spectric ~]$ cd k8s-mcast-example/Vagrant
[ylb@spectric Vagrant]$ vagrant up

This will download and start our two pre-configured VMs, one to act as a multicast source and one as the single-node Kubernetes cluster. Secure shell into the Kubernetes node and apply the Intel Multus daemonset:

[ylb@spectric Vagrant]$ vagrant ssh k8s
[vagrant@k8s ~]$ kubectl apply -f

We now must apply a NetworkAttachmentDefinition, which tells Multus which network interface we want to expose to our containers and how. We need to specify which interface on the parent host to bridge, and an IP range from which to assign addresses. Many other optional parameters may be specified, including a default gateway and custom routes; see the Multus documentation for additional configuration options. Below is the NetworkAttachmentDefinition we will use to expose eth1 on our VM to containers within Kubernetes; a copy of this definition has been placed in /vagrant on the VM:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition 
metadata:
  name: eth1-multicast 
spec:
  config: '{ 
      "cniVersion": "0.3.0", 
      "type": "macvlan", 
      "master": "eth1", 
      "mode": "bridge", 
      "ipam": { 
        "type": "host-local", 
        "subnet": "", 
        "rangeStart": "", 
        "rangeEnd": "" 
      }
    }'
Apply the configuration with kubectl:

[vagrant@k8s ~]$ kubectl apply -f /vagrant/net-config.yml 

Now, simply reference that network definition via an annotation in the pod spec to pass the interface into the Pod. This can be done within a pod definition, a deployment, and so on. Apply the example pod shown below to start our multicast consumer within Kubernetes; the pod spec is also available within the VM in /vagrant:

apiVersion: v1 
kind: Pod 
metadata:
  name: multicast-example 
  annotations:
    k8s.v1.cni.cncf.io/networks: eth1-multicast@eth1 
spec:
  containers:
  - name: example-multicast-pod 
    image: bagoulla/iperf:2.0
    command: ["iperf", "-s", "-u", "-B", "", "-i", "1"] 
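Because the networks annotation lives in pod metadata, the same attachment works unchanged under higher-level controllers. A hypothetical Deployment wrapping the same container would carry the annotation in the pod template’s metadata, like so:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: multicast-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: multicast-example
  template:
    metadata:
      labels:
        app: multicast-example
      # Same Multus annotation as the bare Pod, now on the pod template.
      annotations:
        k8s.v1.cni.cncf.io/networks: eth1-multicast@eth1
    spec:
      containers:
      - name: example-multicast-pod
        image: bagoulla/iperf:2.0
        command: ["iperf", "-s", "-u", "-B", "", "-i", "1"]
```

Note that macvlan interfaces get distinct addresses from the configured range, so scaling the replica count gives each pod its own attachment.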

Apply the pod spec with kubectl and, once the pod has successfully deployed, tail its logs:

[vagrant@k8s ~]$ kubectl apply -f /vagrant/example-mcast-pod.yml 
pod/multicast-example created
...  # Use "kubectl describe pod multicast-example"
...  # to track the launch
[vagrant@k8s ~]$ kubectl logs -f multicast-example 

In a separate terminal, secure shell into our multicast source VM and start sending multicast packets:

[ylb@spectric ~]$ cd k8s-mcast-example/Vagrant
[ylb@spectric Vagrant]$ vagrant ssh mcastsrc
[vagrant@mcastsrc ~]$ docker run --net=host --rm bagoulla/iperf:2.0  -c -u --ttl 5 -t 5 -B

The pod logs should show that multicast packets have been received.

You can now bring down and delete the VMs with:

[ylb@spectric ~]$ cd k8s-mcast-example/Vagrant
[ylb@spectric Vagrant]$ vagrant destroy -f

The video below walks through the steps described above:


