DPDK Testpmd

Introduction

The dpdk-testpmd sample application enables various forwarding modes, including IO, macswap, 5tswap, rxonly, and txonly. The instructions provided in this guide do not cover all features of this sample application. Users can refer to the dpdk-testpmd user guide to learn and experiment with additional features.

Test Setup

This guide assumes the following setup:

[Figure: DPDK test setup — worker nodes connected to the traffic generator]

The Kubernetes cluster must have been set up by the create-cluster.yaml playbook. As shown, every worker node should be connected to the traffic generator and have PFs, VFs, or ENIs available to bind to application pod(s).

Execute the DPDK-Testpmd Ansible Playbook

Execute the Ansible playbook in the <nw_cra_workspace>/cnf-reference-arch/examples/dpdk-testpmd/ directory by running:

cd <nw_cra_workspace>/cnf-reference-arch/examples/dpdk-testpmd/
ansible-playbook -i ../../inventory.ini dpdk-testpmd.yaml

If the cluster is deployed on EC2, add the command line argument -e '{aws_inst: true, deploy_on_vfs: false}'. If the cluster should use PFs directly instead of VFs, add the command line argument -e '{deploy_on_vfs: false}'.

This playbook will execute the following steps:

  1. Defines various variables using runtime information, such as the Multus network name, depending on whether VFs or PFs are being used.

  2. Expands the DPDK deployment template to the controller node. By default, it will be placed at ~/dpdk-deployment.yaml.

  3. Copies the files needed to build the dpdk-testpmd Docker image to a temporary directory on each worker node.

  4. Builds the dpdk-testpmd Docker image and pushes it to the private Docker registry on the controller node.

The directory that hosts the dpdk-deployment.yaml file is controlled by the output_dir parameter. To place the file in a different directory, add -e output_dir=path to the ansible-playbook command above. For example, to place the file in ~/different_output_dir, the full command would look like:

ansible-playbook -i ../../inventory.ini dpdk-testpmd.yaml -e output_dir=~/different_output_dir

Once the playbook successfully executes, ssh into the controller node and run:

cd <output_dir>
kubectl apply -f dpdk-deployment.yaml

This will create pods running the dpdk-testpmd application. While the number of pods created equals the number of worker nodes, it is up to the Kubernetes scheduler to decide on which nodes the pods will run.

Test

Monitor the application by running kubectl get pods on the controller node. It may take some time for the pods to start up. Once the pods are in the Running state, their logs can be viewed with kubectl logs <pod name>. The pod name can be obtained from the kubectl get pods command. To get more information about the pods (such as which node each pod is running on), use kubectl get pods -o wide or kubectl describe pod <pod name>.

The logs should contain something similar to:

+ ./build/app/dpdk-testpmd --lcores 1@9,2@10 -a 0000:07:02.0 -- --forward-mode=5tswap --port-topology=loop --auto-start
...
Set 5tswap packet forwarding mode
Auto-start selected
Configuring Port 0 (socket 0)

Port 0: link state change event

Port 0: link state change event
Port 0: CA:7D:57:CB:B0:5F
Checking link statuses...

These logs show port 0 has MAC address CA:7D:57:CB:B0:5F with PCIe address 0000:07:02.0 on the DUT.
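The MAC address needed for the next step can also be pulled out of the logs with standard text tools. A minimal sketch, run here against a copy of the sample output above (in practice, pipe the output of kubectl logs <pod name> into grep instead):

```shell
# Sample log text copied from the output above; in practice this would
# come from: kubectl logs <pod name>
log='Configuring Port 0 (socket 0)
Port 0: link state change event
Port 0: CA:7D:57:CB:B0:5F
Checking link statuses...'

# Extract the first MAC-address-shaped token from the logs.
mac=$(printf '%s\n' "$log" | grep -oE '([0-9A-F]{2}:){5}[0-9A-F]{2}' | head -n1)
echo "$mac"    # prints CA:7D:57:CB:B0:5F
```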

Configure the traffic generator to send packets to the NIC port, using the MAC address shown above as the destination MAC (DMAC). If deploying on AWS EC2 instances, also ensure the destination IP matches the primary IP of the dataplane ENI.

In this example, use a destination MAC address of CA:7D:57:CB:B0:5F and a destination IP of 198.18.0.21. Then, dpdk-testpmd will forward those packets out on port 0 after swapping the MAC, IP, and port(s). In this example, the packets transmitted by dpdk-testpmd will have a source MAC of CA:7D:57:CB:B0:5F and a source IP of 198.18.0.21. The destination MAC and IP will be set to the source MAC and IP of the packets transmitted by the traffic generator.

Stop

To stop the application pods, delete the deployment with kubectl delete deploy dpdk-testpmd. To clean up the K8s cluster, run sudo kubeadm reset -f and sudo rm -rf /etc/cni/net.d on the controller and all worker nodes.

Suggested Experiments

The example provided above covers a very simple use case of the DPDK Testpmd application. Users are encouraged to experiment with various options provided by the application.

Some experiments will involve changing the DPDK source code and/or the command line arguments to dpdk-testpmd. Changes to these will require updating the deployed container image.

To update the command line arguments, modify the dpdk-launch.sh script in the <nw_cra_workspace>/cnf-reference-arch/examples/dpdk-testpmd directory. There are multiple ways to update the source code/executable in the sample application.

This guide recommends developing the change locally and generating a patch file with git format-patch. The DPDK contributor guidelines include examples of creating such a patch file. Then, place the patch file in the <nw_cra_workspace>/cnf-reference-arch/examples/dpdk-testpmd directory, and modify the dpdk-testpmd.yaml playbook to copy the patch file to the workers. See the tasks copying the Dockerfile and dpdk-launch.sh files for reference. Finally, modify the Dockerfile to apply your patch (e.g. using git apply) before the ninja step.
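As a sketch, the patch-apply step in the Dockerfile could look like the following. The patch file name 0001-example.patch and the /dpdk source path are illustrative; the surrounding steps must match the actual Dockerfile in this directory:

```dockerfile
# Copy the locally generated patch into the DPDK source tree
# (0001-example.patch is a hypothetical name)
COPY 0001-example.patch /dpdk/
WORKDIR /dpdk
# Apply the patch before building; the build fails if it does not apply
RUN git apply 0001-example.patch
# Existing build step follows
RUN ninja -C build
```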

Once the changes are in place, it is important they are used to generate an updated container image. To ensure the changes are always included, add force_source: yes to the docker_image task in dpdk-testpmd.yaml. Once this is in place, an updated container image can be built at any time by re-running the dpdk-testpmd.yaml playbook.

Users are also encouraged to try the following options to understand the performance and scalability possible with Arm platforms.

  • Number of RX/TX ring descriptors: This can affect performance in multiple ways. For example, if the descriptor rings grow larger than the system cache can hold, incoming packets can thrash the cache, reducing overall performance. To understand how this affects performance, experiment with the number of descriptors: add the --rxd and --txd parameters to the dpdk-testpmd command line arguments in dpdk-launch.sh and update the container image.

  • Number of RX/TX queues/core: A single queue may not carry enough traffic to saturate a single CPU core. Increase the number of queues per core by varying the --rxq and --txq parameters.

  • CPU Scalability: Add more VFs to the deployment by increasing the arm.com/dpdk and CPU requests/limits. Ensure the limits and requests for CPU and memory remain equal; otherwise, the pod will no longer be given dedicated CPUs. Additionally, update the k8s.v1.cni.cncf.io/networks annotation to repeat sriov-dpdk or pf-dpdk as many times as the arm.com/dpdk count in the limits/requests sections. For example, to request 3 VFs for a single pod, set k8s.v1.cni.cncf.io/networks: sriov-dpdk, sriov-dpdk, sriov-dpdk and arm.com/dpdk: 3 for both limits and requests. Ensure that Receive Side Scaling (RSS) distributes packets evenly across all enabled queues by sending multiple traffic flows. Finally, set the --nb-cores parameter for dpdk-testpmd so the additional CPU cores are used for packet forwarding.

  • Different Forwarding Modes: dpdk-testpmd supports a variety of forwarding modes. Try these out by changing --forward-mode=5tswap to the forwarding mode of interest. If running in AWS, additional setup on VPC routes and ENI source/destination checking is needed to ensure proper packet flow.
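For the CPU scalability experiment above, the relevant sections of dpdk-deployment.yaml might look like the following sketch for a pod requesting 3 VFs. Field values and the container name are illustrative; the actual structure comes from the generated deployment file:

```yaml
metadata:
  annotations:
    # one network entry per requested arm.com/dpdk device
    k8s.v1.cni.cncf.io/networks: sriov-dpdk, sriov-dpdk, sriov-dpdk
spec:
  containers:
  - name: dpdk-testpmd          # illustrative container name
    resources:
      requests:
        cpu: "4"                # must equal limits to keep dedicated CPUs
        memory: "2Gi"
        arm.com/dpdk: "3"
      limits:
        cpu: "4"
        memory: "2Gi"
        arm.com/dpdk: "3"
```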

It is also possible to compile additional sample applications and run them following DPDK’s Sample Applications User Guide. To compile them, update the ninja step in the Dockerfile. To run them, modify dpdk-launch.sh accordingly.
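As a sketch, DPDK's sample applications can be enabled at configure time via meson's examples option; the exact invocation must match the one already present in the Dockerfile, and the application names below are only examples:

```shell
# Enable building selected sample applications alongside dpdk-testpmd
meson setup build -Dexamples=l2fwd,l3fwd
ninja -C build
# The resulting binaries land under build/examples/, e.g. build/examples/dpdk-l2fwd
```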