From patchwork Wed Sep 7 14:10:24 2022 X-Patchwork-Submitter: "Koikkara Reeny, Shibin" X-Patchwork-Id: 116053 From: Shibin Koikkara Reeny To: dts@dpdk.org Cc: lijuan.tu@intel.com, zbigniewx.sikora@intel.com, Shibin Koikkara Reeny Subject: [dts][PATCH 1/1] test_plans/af_xdp_test_plan.rst: Add test plan for AF_XDP Date: Wed, 7 Sep 2022 16:10:24 +0200 Message-Id: <20220907141024.438595-1-shibin.koikkara.reeny@intel.com> X-Mailer: git-send-email 2.34.1 From: "Sikora, ZbigniewX" Add 7 performance test cases and 8 functional test cases: Test case 1: perf_one_port_multiqueue_and_same_irqs Test case 2: perf_one_port_multiqueue_and_separate_irqs Test case 3: perf_one_port_multiqueues_with_two_vdev Test case 4: perf_one_port_single_queue_and_separate_irqs Test case 5: perf_one_port_single_queue_with_two_vdev Test case 6: perf_two_port_and_same_irqs Test case 7: perf_two_port_and_separate_irqs Test case 8: func_start_queue Test case 9: func_queue_count Test case 10: func_shared_umem_1pmd Test case 11: func_shared_umem_2pmd Test case 12: func_busy_budget Test case 13: func_xdp_prog Test case 14: func_xdp_prog_mq Test case 15: func_secondary_prog Signed-off-by: Shibin Koikkara Reeny --- test_plans/af_xdp_2_test_plan.rst | 189 -------- test_plans/af_xdp_test_plan.rst | 650 +++++++++++++++++++++++++++ test_plans/index.rst | 2 +- tests/TestSuite_af_xdp.py | 718 ++++++++++++++++++++++++++++++ tests/TestSuite_af_xdp_2.py | 474 -------------------- 5 files changed, 1369 insertions(+), 664 deletions(-) delete mode 100644
test_plans/af_xdp_2_test_plan.rst create mode 100644 test_plans/af_xdp_test_plan.rst create mode 100644 tests/TestSuite_af_xdp.py delete mode 100644 tests/TestSuite_af_xdp_2.py diff --git a/test_plans/af_xdp_2_test_plan.rst b/test_plans/af_xdp_2_test_plan.rst deleted file mode 100644 index 457cc55d..00000000 --- a/test_plans/af_xdp_2_test_plan.rst +++ /dev/null @@ -1,189 +0,0 @@ -.. SPDX-License-Identifier: BSD-3-Clause - Copyright(c) 2019 Intel Corporation - -========================= -DPDK PMD for AF_XDP Tests -========================= - -Description -=========== - -AF_XDP is a proposed faster version of AF_PACKET interface in Linux. -This test plan is to analysis the performance of DPDK PMD for AF_XDP. - -Prerequisites -============= - -1. Hardware:: - - I40e 40G*1 - enp26s0f1 <---> IXIA_port_0 - -2. The NIC is located on the socket 1, so we define the cores of socket 1. - -3. Clone kernel branch master v5.4, make sure you turn on XDP socket/BPF/I40E before compiling kernel:: - - make menuconfig - Networking support --> - Networking options --> - [ * ] XDP sockets - -4. Build kernel and replace your host kernel with it:: - - cd bpf-next - sh -c 'yes "" | make oldconfig' - make -j64 - make modules_install install - make install - make headers_install - cd tools/lib/bpf && make clean && make install && make install_headers && cd - - make headers_install ARCH=x86_64 INSTALL_HDR_PATH=/usr - grub-mkconfig -o /boot/grub/grub.cfg - reboot - -5. Build DPDK:: - - cd dpdk - CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static x86_64-native-linuxapp-gcc - ninja -C x86_64-native-linuxapp-gcc - -6. Involve lib:: - - export LD_LIBRARY_PATH=/home/linux/tools/lib/bpf:$LD_LIBRARY_PATH - -Test case 1: single port test with PMD core and IRQ core are pinned to separate cores -===================================================================================== - -1. Start the testpmd:: - - ethtool -L enp26s0f1 combined 1 - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 --vdev net_af_xdp0,iface=enp26s0f1,start_queue=0,queue_count=1 --log-level=pmd.net.af_xdp:8 -- -i --nb-cores=1 --rxq=1 --txq=1 --port-topology=loop - -2. Assign the kernel core:: - - ./set_irq_affinity 3 enp26s0f1 #PMD and IRQs pinned to seperate cores - ./set_irq_affinity 2 enp26s0f1 #PMD and IRQs pinned to same cores - -3. Send packets by packet generator with different packet size, from 64 bytes to 1518 bytes, check the throughput. - -Test case 2: two ports test with PMD cores and IRQ cores are pinned to separate cores -===================================================================================== - -1. Start the testpmd:: - - ethtool -L enp26s0f0 combined 1 - ethtool -L enp26s0f1 combined 1 - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-3 --no-pci -n 4 \ - --vdev net_af_xdp0,iface=enp26s0f0 --vdev net_af_xdp1,iface=enp26s0f1 \ - --log-level=pmd.net.af_xdp:8 -- -i --auto-start --nb-cores=2 --rxq=1 --txq=1 --port-topology=loop - -2. Assign the kernel cores:: - - ./set_irq_affinity 4 enp26s0f0 - ./set_irq_affinity 5 enp26s0f1 - -3. Send packets by packet generator to port0 and port1 with different packet size, from 64 bytes to 1518 bytes, check the throughput at port0 and port1. - -Test case 3: multi-queue test with PMD cores and IRQ cores are pinned to separate cores -======================================================================================= - -1. Set hardware queues:: - - ethtool -L enp26s0f1 combined 2 - -2. 
Start the testpmd with two queues:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-3 -n 6 --no-pci \ - --vdev net_af_xdp0,iface=enp26s0f1,start_queue=0,queue_count=2 \ - -- -i --auto-start --nb-cores=2 --rxq=2 --txq=2 --port-topology=loop - -3. Assign the kernel cores:: - - ./set_irq_affinity 4-5 enp26s0f1 - -4. Send packets with different dst IP address by packet generator with different packet size from 64 bytes to 1518 bytes, check the throughput and ensure the packets were distributed to the two queues. - -Test case 4: two ports test with PMD cores and IRQ cores pinned to same cores -============================================================================= - -1. Start the testpmd:: - - ethtool -L enp26s0f0 combined 1 - ethtool -L enp26s0f1 combined 1 - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29,30-31 --no-pci -n 4 \ - --vdev net_af_xdp0,iface=enp26s0f0 --vdev net_af_xdp1,iface=enp26s0f1 \ - -- -i --auto-start --nb-cores=2 --rxq=1 --txq=1 --port-topology=loop - -2. Assign the kernel cores:: - - ./set_irq_affinity 30 enp26s0f0 - ./set_irq_affinity 31 enp26s0f1 - -3. Send packets by packet generator to port0 and port1 with different packet size, from 64 bytes to 1518 bytes, check the throughput at port0 and port1. - -Test case 5: multi-queue test with PMD cores and IRQ cores pinned to same cores -=============================================================================== - -1. Set hardware queues:: - - ethtool -L enp26s0f1 combined 2 - -2. Start the testpmd with two queues:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29,30-31 -n 6 --no-pci \ - --vdev net_af_xdp0,iface=enp26s0f1,start_queue=0,queue_count=2 \ - -- -i --auto-start --nb-cores=2 --rxq=2 --txq=2 --port-topology=loop - -3. Assign the kernel cores:: - - ./set_irq_affinity 30-31 enp26s0f1 - -4. Send packets with different dst IP address by packet generator with different packet size from 64 bytes to 1518 bytes, check the throughput and ensure packets were distributed to the two queues. - -Test case 6: one port with two vdev and single queue test -========================================================= - -1. Set hardware queues:: - - ethtool -L enp26s0f1 combined 2 - -2. Start the testpmd:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-3 --no-pci -n 4 \ - --vdev net_af_xdp0,iface=enp26s0f1,start_queue=0,queue_count=1 \ - --vdev net_af_xdp1,iface=enp26s0f1,start_queue=1,queue_count=1 \ - -- -i --nb-cores=2 --rxq=1 --txq=1 --port-topology=loop - -3. Assign the kernel core:: - - ./set_irq_affinity 4-5 enp26s0f1 #PMD and IRQs pinned to seperate cores - ./set_irq_affinity 2-3 enp26s0f1 #PMD and IRQs pinned to same cores - -4. Set flow director rules in kernel, mapping queue0 and queue1 of the port:: - - ethtool -N enp26s0f1 rx-flow-hash udp4 fn - ethtool -N enp26s0f1 flow-type udp4 src-port 4242 dst-port 4242 action 1 - ethtool -N enp26s0f1 flow-type udp4 src-port 4243 dst-port 4243 action 0 - -5. Send packets match the rules to port, check the throughput at queue0 and queue1. - -Test case 7: one port with two vdev and multi-queues test -========================================================= - -1. Set hardware queues:: - - ethtool -L enp26s0f1 combined 8 - -2. Start the testpmd:: - - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-9 --no-pci -n 6 \ - --vdev net_af_xdp0,iface=enp26s0f1,start_queue=0,queue_count=4 \ - --vdev net_af_xdp1,iface=enp26s0f1,start_queue=4,queue_count=4 --log-level=pmd.net.af_xdp:8 \ - -- -i --rss-ip --nb-cores=8 --rxq=4 --txq=4 --port-topology=loop - -3. 
Assign the kernel core:: - - ./set_irq_affinity 10-17 enp26s0f1 #PMD and IRQs pinned to seperate cores - ./set_irq_affinity 2-9 enp26s0f1 #PMD and IRQs pinned to same cores - -4. Send random ip packets , check the packets were distributed to queue0 ~ queue7. diff --git a/test_plans/af_xdp_test_plan.rst b/test_plans/af_xdp_test_plan.rst new file mode 100644 index 00000000..4f5c4d74 --- /dev/null +++ b/test_plans/af_xdp_test_plan.rst @@ -0,0 +1,650 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright(c) 2019 Intel Corporation + +========================= +DPDK PMD for AF_XDP Tests +========================= + +Description +=========== + +AF_XDP is a socket type that was introduced in kernel 4.18. +It is designed to pass network traffic from the driver in the kernel up to user space +as fast and efficiently as possible, but still abiding by all the usual robustness, +isolation and security properties that Linux provides. + +The test plan contains 15 tests, 7 of which are focused on performance and the remaining +8 are focused on validating functionality. + +To scale up throughput, one can configure an AF_XDP PMD to use multiple queues. +To configure the underlying network device with multiple queues, use ethtool:: + + ethtool -L <interface> combined <number of queues> + +In order for input traffic to be spread among the queues of an interface, ensure a scheme +such as Receive Side Scaling (RSS) is in use. Generally, a random traffic profile will +ensure an even spread. More information on RSS can be found here: +https://www.kernel.org/doc/Documentation/networking/scaling.txt + +If desired, one can also set explicit rules to ensure the spread if the underlying network +device supports it. For example, the rules below will ensure incoming packets with UDP source port +1234 land on queue 0 and those with source port 5678 land on queue 1:: + + ethtool -N eth0 rx-flow-hash udp4 fn + ethtool -N eth0 flow-type udp4 src-port 1234 dst-port 4242 action 0 + ethtool -N eth0 flow-type udp4 src-port 5678 dst-port 4243 action 1 + +For each queue in an AF_XDP test there are two pieces of work to consider: + +#. The DPDK thread processing the queue (userspace: testpmd application and AF_XDP PMD) +#. The driver thread processing the queue (kernelspace: kernel driver for the NIC) + +#1 and #2 can be pinned to either the same core or to separate cores. +Pinning #1 involves using the DPDK EAL parameters '-c', '-l' or '--lcores'. +Pinning #2 involves configuring /proc/irq/<irq_no>, where <irq_no> is the IRQ number associated +with the queue on your device, which can be obtained by querying /proc/interrupts. Some +network devices will have helper scripts available to simplify this process, such as the +set_irq_affinity.sh script which will be referred to in this test plan. + +Pinning to separate cores will generally yield better throughput due to more computing +power being available for the packet processing. +When separate cores are used it is suggested that the 'busy_budget=0' argument is added +to the AF_XDP PMD vdev string. This disables the 'preferred busy polling' feature of the +AF_XDP PMD. It is disabled because it only aids performance for single-core tests (app +threads and IRQs pinned to the same core) and as such should be disabled for tests where +the threads and IRQs are pinned to different cores.
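+
+As a concrete sketch of the manual IRQ pinning (#2) described above, the IRQ number for a
+given queue can be looked up in /proc/interrupts and its affinity set through /proc/irq.
+The vector name pattern eth0-TxRx-0 below is an assumption (i40e-style naming) that varies
+by driver; adjust the pattern for your device::
+
+    # look up the IRQ number of queue 0 of eth0, then pin it to core 3
+    irq=$(awk -F: '/eth0-TxRx-0/ {gsub(/ /,"",$1); print $1}' /proc/interrupts)
+    echo 3 > /proc/irq/$irq/smp_affinity_list
+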
+When pinning to the same core, the busy polling feature is used, and along with it the +following netdev configuration options should be set for each netdev in the test:: + + echo 2 | sudo tee /sys/class/net/eth0/napi_defer_hard_irqs + echo 200000 | sudo tee /sys/class/net/eth0/gro_flush_timeout + +These settings allow a user to defer interrupts to be enabled and instead schedule +the NAPI context from a watchdog timer. When the NAPI context is being processed by a +softirq, the softirq NAPI processing will exit early to allow the busy-polling to be +performed. If the application stops performing busy-polling via a system call, the +watchdog timer defined by gro_flush_timeout will timeout, and regular softirq handling +will resume. This all leads to better single-core performance. More information can be +found at: +https://lwn.net/Articles/837010/ + +The AF_XDP PMD provides two different configurations for a multiple queue test: + +#. Single PMD with multiple queues +#. Multiple PMDs each with one or multiple queues. + +For example, if an interface is configured with four queues:: + + ethtool -L eth0 combined 4 + +One can configure the PMD(s) in multiple ways, for example: + +Single PMD four queues:: + + --vdev=net_af_xdp0,iface=eth0,queue_count=4 + +Two PMDs each with two queues:: + + --vdev=net_af_xdp0,iface=eth0,queue_count=2 --vdev=net_af_xdp1,iface=eth0,start_queue=2,queue_count=2 + +Four PMDs each with one queue:: + + --vdev=net_af_xdp0,iface=eth0 --vdev=net_af_xdp1,iface=eth0,start_queue=1 \ + --vdev=net_af_xdp2,iface=eth0,start_queue=2 --vdev=net_af_xdp3,iface=eth0,start_queue=3 + +The throughput can be measured by issuing the 'show port stats all' command on the testpmd CLI +and taking note of the throughput value:: + + testpmd> show port stats all + + ######################## NIC statistics for port 0 ######################## + RX-packets: 31534568 RX-missed: 0 RX-bytes: 1892074080 + RX-errors: 0 + RX-nombuf: 0 + TX-packets: 31534504 TX-errors: 0 TX-bytes: 1892070240 + + Throughput (since last show) + Rx-pps: 1967817 Rx-bps: 944552192 + Tx-pps: 1967817 Tx-bps: 944552192 + ############################################################################ + +To ensure packets were distributed to all queues in a test, use the 'show port xstats all' +interactive testpmd command which will show the distribution. For example, in a two-queue test:: + + testpmd> show port xstats all + ###### NIC extended statistics for port 0 + rx_good_packets: 317771192 + tx_good_packets: 317771128 + rx_good_bytes: 19066271520 + tx_good_bytes: 19066267680 + rx_missed_errors: 0 + rx_errors: 0 + tx_errors: 0 + rx_mbuf_allocation_errors: 0 + rx_q0_packets: 158878968 + rx_q0_bytes: 9532738080 + rx_q0_errors: 0 + rx_q1_packets: 158892224 + rx_q1_bytes: 9533533440 + rx_q1_errors: 0 + tx_q0_packets: 158878904 + tx_q0_bytes: 9532734240 + tx_q1_packets: 158892224 + tx_q1_bytes: 9533533440 + +Above we can see that packets were received on Rx queue 0 (rx_q0_packets) and Rx queue 1 +(rx_q1_packets) and transmitted on Tx queue 0 (tx_q0_packets) and Tx queue 1 (tx_q1_packets).
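+
+A quick way to sanity-check the distribution outside of testpmd is to save the xstats
+output to a file (xstats.log is an assumed name here) and list the per-queue Rx counters::
+
+    awk -F: '/rx_q[0-9]+_packets/ {gsub(/ /,"",$2); print $1, $2}' xstats.log
+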
+ +Alternatively, if not using testpmd interactive mode, one can display the xstats at a specific +interval by adding the following to their testpmd command line:: + + --display-xstats=<xstat_name1>,<xstat_name2> --stats-period=<period> + +For example, to display the statistics for Rx queue 0 and Rx queue 1 every 1s, use:: + + --display-xstats=rx_q0_packets,rx_q1_packets --stats-period=1 + +The functional tests validate the different options available for the AF_XDP PMD which are +described in the DPDK documentation: +https://doc.dpdk.org/guides/nics/af_xdp.html#options + + +Prerequisites +============= + +#. Hardware:: + + 2 Linux network interfaces each connected to a traffic generator port:: + + eth0 <---> Traffic Generator Port 0 + eth1 <---> Traffic Generator Port 1 + + For optimal performance, ensure the interfaces are connected to the same NUMA node as the application + cores used in the tests. This test plan assumes the interfaces are connected to NUMA node 0 and that + cores 0-8 are also on NUMA node 0. + +#. Kernel v5.15 or later with the CONFIG_XDP_SOCKETS option set. + +#. libbpf (<=v0.7.0) and libxdp (>=v1.2.2) libraries installed and discoverable via pkg-config:: + + pkg-config libbpf --modversion + pkg-config libxdp --modversion + + The environment variables LIBXDP_OBJECT_PATH and PKG_CONFIG_PATH should be set + appropriately. + LIBXDP_OBJECT_PATH should be set to the location of where libxdp placed its bpf + object files. This is usually in /usr/local/lib/bpf or /usr/local/lib64/bpf. + PKG_CONFIG_PATH should include the path to where the libxdp and libbpf .pc files + are located. + +#. Build DPDK:: + + cd dpdk + meson --default-library=static x86_64-native-linuxapp-gcc + ninja -C x86_64-native-linuxapp-gcc + +#. Method to pin the IRQs for the networking device. + This test plan assumes an i40e device and as such the set_irq_affinity.sh script will be used. + The script can be found in the i40e sourceforge project. + If no script is available for your device, you will need to manually edit /proc/irq/<irq_no>. + More information can be found here: https://www.kernel.org/doc/html/latest/core-api/irq/irq-affinity.html + + +Test case 1: perf_one_port_multiqueue_and_same_irqs +=================================================== + +This test configures one PMD with two queues. +It uses two application cores (0 and 1) and pins the IRQs for each queue to those same cores. + +#. Set the hardware queues:: + + ethtool -L eth0 combined 2 + +#. Configure busy polling settings:: + + echo 2 >> /sys/class/net/eth0/napi_defer_hard_irqs + echo 200000 >> /sys/class/net/eth0/gro_flush_timeout + +#. Start testpmd with two queues:: + + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 0-2 --no-pci --main-lcore=2 \ + --vdev net_af_xdp0,iface=eth0,queue_count=2 \ + -- -i -a --nb-cores=2 --rxq=2 --txq=2 --forward-mode=macswap + +#. Assign the kernel cores:: + + ./set_irq_affinity 0-1 eth0 + +#. Send packets with random IP addresses and different packet sizes from 64 bytes to 1518 bytes by packet generator. + Check the throughput and ensure packets were distributed to the two queues. + +Test case 2: perf_one_port_multiqueue_and_separate_irqs +======================================================= + +This test configures one PMD with two queues. +It uses two application cores (2 and 3) and pins the IRQs for each queue to separate non-application cores (0 and 1). + +#. Set the hardware queues:: + + ethtool -L eth0 combined 2 + +#.
Configure busy polling settings:: + + echo 0 >> /sys/class/net/eth0/napi_defer_hard_irqs + echo 0 >> /sys/class/net/eth0/gro_flush_timeout + +#. Start testpmd with two queues:: + + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 --no-pci --main-lcore=4 \ + --vdev net_af_xdp0,iface=eth0,queue_count=2,busy_budget=0 \ + --log-level=pmd.net.af_xdp:8 \ + -- -i -a --nb-cores=2 --rxq=2 --txq=2 --forward-mode=macswap + +#. Assign the kernel cores:: + + ./set_irq_affinity 0-1 eth0 + +#. Send packets with random IP addresses and different packet sizes from 64 bytes to 1518 bytes by packet generator. + Check the throughput and ensure packets were distributed to the two queues. + +Test case 3: perf_one_port_multiqueues_with_two_vdev +==================================================== + +This test configures two PMDs each with four queues. +It uses eight application cores (0 to 7) and pins the IRQs for each queue to those same cores. + +#. Set the hardware queues:: + + ethtool -L eth0 combined 8 + +#. Configure busy polling settings:: + + echo 2 >> /sys/class/net/eth0/napi_defer_hard_irqs + echo 200000 >> /sys/class/net/eth0/gro_flush_timeout + +#. Start testpmd:: + + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 0-8 --no-pci --main-lcore=8 \ + --vdev net_af_xdp0,iface=eth0,queue_count=4 \ + --vdev net_af_xdp1,iface=eth0,start_queue=4,queue_count=4 \ + --log-level=pmd.net.af_xdp:8 \ + -- -i -a --nb-cores=8 --rxq=4 --txq=4 --forward-mode=macswap + +#. Assign the kernel cores:: + + ./set_irq_affinity 0-7 eth0 + +#. Send packets with random IP addresses and different packet sizes from 64 bytes to 1518 bytes by packet generator. + Check the throughput and ensure packets were distributed to the eight queues. + +Test case 4: perf_one_port_single_queue_and_separate_irqs +========================================================= + +This test configures one PMD with one queue. +It uses one application core (1) and pins the IRQs for the queue to a separate non-application core (0). + +#. Set the hardware queues:: + + ethtool -L eth0 combined 1 + +#. Configure busy polling settings:: + + echo 0 >> /sys/class/net/eth0/napi_defer_hard_irqs + echo 0 >> /sys/class/net/eth0/gro_flush_timeout + +#. Start testpmd:: + + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 --no-pci --main-lcore=2 \ + --vdev net_af_xdp0,iface=eth0,queue_count=1,busy_budget=0 \ + --log-level=pmd.net.af_xdp:8 \ + -- -i -a --nb-cores=1 --rxq=1 --txq=1 --forward-mode=macswap + +#. Assign the kernel core:: + + ./set_irq_affinity 0 eth0 + +#. Send packets with random IP addresses and different packet sizes from 64 bytes to 1518 bytes by packet generator. + Check the throughput and ensure packets were distributed to the queue. + +Test case 5: perf_one_port_single_queue_with_two_vdev +===================================================== + +This test configures two PMDs each with one queue. +It uses two application cores (2 and 3) and pins the IRQs for each queue to separate non-application cores (0 and 1). + +#. Set the hardware queues:: + + ethtool -L eth0 combined 2 + +#. Configure busy polling settings:: + + echo 0 >> /sys/class/net/eth0/napi_defer_hard_irqs + echo 0 >> /sys/class/net/eth0/gro_flush_timeout + +#.
Start testpmd:: + + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 --no-pci --main-lcore=4 \ + --vdev net_af_xdp0,iface=eth0,queue_count=1,busy_budget=0 \ + --vdev net_af_xdp1,iface=eth0,start_queue=1,queue_count=1,busy_budget=0 \ + --log-level=pmd.net.af_xdp:8 \ + -- -i -a --nb-cores=2 --rxq=1 --txq=1 --forward-mode=macswap + +#. Assign the kernel cores:: + + ./set_irq_affinity 0-1 eth0 + +#. Send packets with random IP addresses and different packet sizes from 64 bytes to 1518 bytes by packet generator. + Check the throughput and ensure packets were distributed to the two queues. + + +Test case 6: perf_two_port_and_same_irqs +======================================== + +This test configures two PMDs each with one queue from different interfaces (eth0 and eth1). +It uses two application cores (0 and 1) and pins the IRQs for each queue to those same cores. + +#. Set the hardware queues:: + + ethtool -L eth0 combined 1 + ethtool -L eth1 combined 1 + +#. Configure busy polling settings:: + + echo 2 >> /sys/class/net/eth0/napi_defer_hard_irqs + echo 200000 >> /sys/class/net/eth0/gro_flush_timeout + echo 2 >> /sys/class/net/eth1/napi_defer_hard_irqs + echo 200000 >> /sys/class/net/eth1/gro_flush_timeout + +#. Start testpmd:: + + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 0-2 --no-pci --main-lcore=2 \ + --vdev net_af_xdp0,iface=eth0 --vdev net_af_xdp1,iface=eth1 \ + --log-level=pmd.net.af_xdp:8 \ + -- -i -a --nb-cores=2 --rxq=1 --txq=1 --forward-mode=macswap + +#. Assign the kernel cores:: + + ./set_irq_affinity 0 eth0 + ./set_irq_affinity 1 eth1 + +#. Send packets with random IP addresses and different packet sizes from 64 bytes to 1518 bytes by packet generator to both ports. + Check the throughput and ensure packets were distributed to the queue on each port. + + +Test case 7: perf_two_port_and_separate_irqs +============================================ + +This test configures two PMDs each with one queue from different interfaces (eth0 and eth1). +It uses two application cores (2 and 3) and pins the IRQs for each queue to separate non-application cores (0 and 1). + +#. Set the hardware queues:: + + ethtool -L eth0 combined 1 + ethtool -L eth1 combined 1 + +#. Configure busy polling settings:: + + echo 0 >> /sys/class/net/eth0/napi_defer_hard_irqs + echo 0 >> /sys/class/net/eth0/gro_flush_timeout + echo 0 >> /sys/class/net/eth1/napi_defer_hard_irqs + echo 0 >> /sys/class/net/eth1/gro_flush_timeout + +#. Start testpmd:: + + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 --no-pci --main-lcore=4 \ + --vdev net_af_xdp0,iface=eth0,busy_budget=0 \ + --vdev net_af_xdp1,iface=eth1,busy_budget=0 \ + --log-level=pmd.net.af_xdp:8 \ + -- -i -a --nb-cores=2 --rxq=1 --txq=1 --forward-mode=macswap + +#. Assign the kernel cores:: + + ./set_irq_affinity 0 eth0 + ./set_irq_affinity 1 eth1 + +#. Send packets with random IP addresses and different packet sizes from 64 bytes to 1518 bytes by packet generator to both ports. + Check the throughput and ensure packets were distributed to the queue on each port. + + +Test case 8: func_start_queue +============================= +This test creates a socket on a queue other than the default queue (0) and verifies that packets land on it. + +#. Set the hardware queues:: + + ethtool -L eth0 combined 2 + +#. Configure busy polling settings:: + + echo 2 >> /sys/class/net/eth0/napi_defer_hard_irqs + echo 200000 >> /sys/class/net/eth0/gro_flush_timeout + +#.
Start testpmd:: + + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --no-pci \ + --log-level=pmd.net.af_xdp,debug \ + --vdev=net_af_xdp0,iface=eth0,start_queue=1 \ + -- --forward-mode=macswap + +#. Send packets with random IP addresses by packet generator. + Ensure packets were distributed to the queue. + + +Test case 9: func_queue_count +============================= +This test creates sockets on two queues (0 and 1) and verifies that packets land on both of them. + +#. Set the hardware queues:: + + ethtool -L eth0 combined 2 + +#. Configure busy polling settings:: + + echo 2 >> /sys/class/net/eth0/napi_defer_hard_irqs + echo 200000 >> /sys/class/net/eth0/gro_flush_timeout + +#. Start testpmd:: + + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --no-pci \ + --log-level=pmd.net.af_xdp,debug \ + --vdev=net_af_xdp0,iface=eth0,queue_count=2 -- --rxq=2 --txq=2 \ + --forward-mode=macswap + +#. Send packets with random IP addresses by packet generator. + Ensure packets were distributed to the two queues. + + +Test case 10: func_shared_umem_1pmd +=================================== +This test makes the UMEM 'shared' between two sockets using one PMD and verifies that packets land on both of them. + +#. Set the hardware queues:: + + ethtool -L eth0 combined 2 + +#. Configure busy polling settings:: + + echo 2 >> /sys/class/net/eth0/napi_defer_hard_irqs + echo 200000 >> /sys/class/net/eth0/gro_flush_timeout + +#. Start testpmd:: + + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --no-pci \ + --log-level=pmd.net.af_xdp,debug \ + --vdev=net_af_xdp0,iface=eth0,queue_count=2,shared_umem=1 \ + -- --rxq=2 --txq=2 \ + --forward-mode=macswap + +#. Check for the log ``eth0,qid1 sharing UMEM`` + +#. Send packets with random IP addresses by packet generator. + Ensure packets were distributed to the two queues. + + +Test case 11: func_shared_umem_2pmd +=================================== +This test makes the UMEM 'shared' between two sockets using two PMDs and verifies that packets land on both of them. + +#. Set the hardware queues:: + + ethtool -L eth0 combined 2 + +#. Configure busy polling settings:: + + echo 2 >> /sys/class/net/eth0/napi_defer_hard_irqs + echo 200000 >> /sys/class/net/eth0/gro_flush_timeout + +#. Start testpmd:: + + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --no-pci \ + --vdev=net_af_xdp0,iface=eth0,shared_umem=1 \ + --vdev=net_af_xdp1,iface=eth0,start_queue=1,shared_umem=1 \ + --log-level=pmd.net.af_xdp,debug \ + -- --forward-mode=macswap + +#. Check for the log ``eth0,qid1 sharing UMEM`` + +#. Send packets with random IP addresses by packet generator. + Ensure packets were distributed to the two queues. + + +Test case 12: func_busy_budget +============================== +This test configures the busy polling budget to 0, which disables busy polling, and verifies that packets land on the socket. + +#. Set the hardware queues:: + + ethtool -L eth0 combined 1 + +#. Configure busy polling settings:: + + echo 0 >> /sys/class/net/eth0/napi_defer_hard_irqs + echo 0 >> /sys/class/net/eth0/gro_flush_timeout + +#. Start testpmd:: + + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --no-pci \ + --vdev=net_af_xdp0,iface=eth0,busy_budget=0 \ + --log-level=pmd.net.af_xdp,debug \ + -- --forward-mode=macswap + +#. Check for the log ``Preferred busy polling not enabled`` + +#. Send packets with random IP addresses by packet generator. + Ensure packets were distributed to the queue.
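+
+For any of the functional tests above and below, the busy polling netdev settings can be
+read back before launching testpmd to confirm they took effect (a quick check using the
+eth0 interface assumed throughout this plan)::
+
+    cat /sys/class/net/eth0/napi_defer_hard_irqs
+    cat /sys/class/net/eth0/gro_flush_timeout
+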
+ + +Test case 13: func_xdp_prog +=========================== +This test loads a custom XDP program on the network interface, rather than using the default program that comes packaged with libbpf/libxdp. + +#. Create a file `xdp_example.c` with the following contents:: + + #include <linux/bpf.h> + #include <bpf/bpf_helpers.h> + + struct bpf_map_def SEC("maps") xsks_map = { + .type = BPF_MAP_TYPE_XSKMAP, + .max_entries = 64, + .key_size = sizeof(int), + .value_size = sizeof(int), + }; + + static unsigned int idx; + + SEC("xdp-example") + + int xdp_sock_prog(struct xdp_md *ctx) + { + int index = ctx->rx_queue_index; + + /* Drop every other packet */ + if (idx++ % 2) + return XDP_DROP; + else + return bpf_redirect_map(&xsks_map, index, XDP_PASS); + } + +#. Compile the program:: + + clang -O2 -Wall -target bpf -c xdp_example.c -o xdp_example.o + +#. Set the hardware queues:: + + ethtool -L eth0 combined 1 + +#. Configure busy polling settings:: + + echo 2 >> /sys/class/net/eth0/napi_defer_hard_irqs + echo 200000 >> /sys/class/net/eth0/gro_flush_timeout + +#. Start testpmd:: + + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --no-pci \ + --vdev=net_af_xdp0,iface=eth0,xdp_prog=xdp_example.o \ + --log-level=pmd.net.af_xdp,debug \ + -- --forward-mode=macswap + +#. Check for the log ``Successfully loaded XDP program xdp_example.o with fd <fd>`` + +#. Send packets with random IP addresses by packet generator. + Ensure some packets were distributed to the queue. + + +Test case 14: func_xdp_prog_mq +============================== +This test loads a custom XDP program on the network interface with two queues and then creates a PMD with sockets on those two queues. +It assumes the custom program compilation outlined in the previous test has been completed. + +#. Set the hardware queues:: + + ethtool -L eth0 combined 2 + +#. Configure busy polling settings:: + + echo 2 >> /sys/class/net/eth0/napi_defer_hard_irqs + echo 200000 >> /sys/class/net/eth0/gro_flush_timeout + +#. Start testpmd:: + + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --no-pci \ + --vdev=net_af_xdp0,iface=eth0,xdp_prog=xdp_example.o,queue_count=2 \ + --log-level=pmd.net.af_xdp,debug -- \ + --rxq=2 --txq=2 \ + --forward-mode=macswap + +#. Check for the log ``Successfully loaded XDP program xdp_example.o with fd <fd>`` + +#. Send packets with random IP addresses by packet generator. + Ensure some packets were distributed to the two queues. + + +Test case 15: func_secondary_prog +================================= +This test launches two processes - a primary and a secondary DPDK process. +It verifies that the secondary process can communicate with the primary by running the "show port info all" command in the secondary and ensuring that the port info matches that of the PMD in the primary process. + +#. Set the hardware queues:: + + ethtool -L eth0 combined 1 + +#. Configure busy polling settings:: + + echo 2 >> /sys/class/net/eth0/napi_defer_hard_irqs + echo 200000 >> /sys/class/net/eth0/gro_flush_timeout + +#. Start testpmd (primary):: + + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --no-pci \ + --vdev=net_af_xdp0,iface=eth0 \ + --log-level=pmd.net.af_xdp,debug \ + -- --forward-mode=macswap -a -i + +#. Start testpmd (secondary):: + + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd --no-pci \ + --proc-type=auto \ + --log-level=pmd.net.af_xdp,debug \ + -- -i -a + +#. Execute the CLI command ``show port info all`` in the secondary process and ensure that you can see the port info of the net_af_xdp0 PMD from the primary process.
\ No newline at end of file diff --git a/test_plans/index.rst b/test_plans/index.rst index 2d797b6d..28f61953 100644 --- a/test_plans/index.rst +++ b/test_plans/index.rst @@ -277,7 +277,7 @@ The following are the test plans for the DPDK DTS automated test system. fips_cryptodev_test_plan flow_filtering_test_plan - af_xdp_2_test_plan + af_xdp_test_plan cbdma_test_plan flexible_rxd_test_plan ipsec_gw_and_library_test_plan diff --git a/tests/TestSuite_af_xdp.py b/tests/TestSuite_af_xdp.py new file mode 100644 index 00000000..e9a1b1b5 --- /dev/null +++ b/tests/TestSuite_af_xdp.py @@ -0,0 +1,718 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2019-2020 Intel Corporation +# + +import os +import re +import time + +from framework.packet import Packet +from framework.pktgen import PacketGeneratorHelper +from framework.settings import HEADER_SIZE +from framework.test_case import TestCase + + +class TestAfXdp(TestCase): + def set_up_all(self): + """ + Run at the start of each test suite. + """ + self.logger.info('+---------------------------------------------------------------------+') + self.logger.info('| TestAfXdp: set_up_all |') + self.logger.info('+---------------------------------------------------------------------+') + # self.verify(self.nic in ("I40E_40G-QSFP_A"), "the port can not run this suite") + self.dut_ports = self.dut.get_ports() + self.verify(len(self.dut_ports) >= 2, "Insufficient ports for testing") + self.ports_socket = self.dut.get_numa_id(self.dut_ports[0]) + self.header_size = HEADER_SIZE["eth"] + HEADER_SIZE["ip"] + HEADER_SIZE["udp"] + + self.frame_sizes = [64, 128, 256, 512, 1024, 1518] + # get the frame_sizes from cfg file + if "frame_sizes" in self.get_suite_cfg(): + self.frame_sizes = self.get_suite_cfg()["frame_sizes"] + else: + self.logger.info(f'you can config packets frame_sizes in file {self.suite_name}.cfg, like:') + self.logger.info('[suite]') + self.logger.info('frame_sizes=[64, 128, 256, 512, 1024, 1518]') + self.logger.info(f'configured frame_sizes={self.frame_sizes}') + + self.logger.info(f'you can config the traffic duration by setting the "TRAFFIC_DURATION" variable.') + self.traffic_duration = int(os.getenv('TRAFFIC_DURATION', '30')) + self.logger.info(f'traffic duration is set to: {self.traffic_duration} sec.') + + self.out_path = "/tmp" + out = self.tester.send_expect("ls -d %s" % self.out_path, "# ") + if "No such file or directory" in out: + self.tester.send_expect("mkdir -p %s" % self.out_path, "# ") + self.base_dir = self.dut.base_dir.replace("~", "/root") + self.pktgen_helper = PacketGeneratorHelper() + + self.dut.restore_interfaces() + self.irqs_set = self.dut.new_session(suite="irqs-set") + self.irqs_set.send_expect("killall irqbalance", "# ") + + self.shell = self.dut.new_session(suite="shell") + self.xdp_file = '/tmp/xdp_example' + self.sec_proc = self.dut.new_session(suite="sec_proc") + + for numa in [0, 1]: + cmd = f'echo 4096 > /sys/devices/system/node/node{numa}/hugepages/hugepages-2048kB/nr_hugepages' + self.dut.send_expect(cmd, "# ", 120) + + + def set_up(self): + self.logger.info('+---------------------------------------------------------------------+') + pass + + + def set_port_queue(self, intf): + # workaround for: + # https://lore.kernel.org/all/a52430d1e11c5cadcd08706bd6d8da3ea48e1c04.camel@coverfire.com/T/ + # Ciara: + # Sometimes after launching TRex I see the interrupts for my dpdk-testpmd interface disappear: + # cat /proc/interrupts | grep fvl0 # returns nothing + # As a result, I see nothing received on the 
dpdk-testpmd app. + # in order to restore the interrupts I have to do the following: + # ifconfig enp134s0f0 down + # ifconfig enp134s0f0 up + # ethtool -L enp134s0f0 combined 2 + self.logger.info(f"initializing port '{intf}'") + self.dut.send_expect(f'ifconfig {intf} down', '# ') + self.dut.send_expect(f'ifconfig {intf} up', '# ') + self.dut.send_expect( + "ethtool -L %s combined %d" % (intf, self.nb_cores // self.port_num), "# " + ) + + + def get_core_list(self): + core_config = "1S/%dC/1T" % ( + self.nb_cores + 1 + max(self.port_num, self.vdev_num) * self.queue_number + ) + self.core_list = self.dut.get_core_list(core_config, socket=self.ports_socket) + + def assign_port_core(self): + if self.separate_cores: + core_list = self.core_list[ + -max(self.port_num, self.vdev_num) * self.queue_number : + ] + else: + core_list = self.core_list[ + : -max(self.port_num, self.vdev_num) * self.queue_number + ][-max(self.port_num, self.vdev_num) * self.queue_number :] + + for i in range(self.port_num): + intf = self.dut.ports_info[i]["port"].get_interface_name() + cores = ",".join( + core_list[i * self.queue_number : (i + 1) * self.queue_number] + ) + if self.port_num == 1 and self.vdev_num == 2: + cores = ",".join(core_list) + command = "%s/set_irq_affinity %s %s" % ("/root", cores, intf) + out = self.irqs_set.send_expect(command, "# ") + self.verify( + "No such file or directory" not in out, + "can not find the set_irq_affinity in dut root " + \ + "(see: https://github.com/dmarion/i40e/blob/master/scripts/set_irq_affinity)", + ) + time.sleep(1) + + def get_vdev_list(self, shared_umem=1, start_queue=0, xdp_prog=''): + vdev_list = [] + + if self.port_num == 1: + intf = self.dut.ports_info[0]["port"].get_interface_name() + self.set_port_queue(intf) + time.sleep(1) + for i in range(self.vdev_num): + vdev = "" + if start_queue > 0: + vdev = "net_af_xdp%d,iface=%s,start_queue=%d" % ( + i, + intf, + start_queue + ) + else: + vdev = "net_af_xdp%d,iface=%s,start_queue=%d,queue_count=%d" % ( + i, + intf, + i * self.queue_number, + self.queue_number, + ) + if shared_umem > 0: + vdev += f',shared_umem={shared_umem}' + # When separate cores are used it is suggested that the 'busy_budget=0' + # argument is added to the AF_XDP PMD vdev string.
+ if self.separate_cores: + vdev += ',busy_budget=0' + if xdp_prog != '': + vdev += f',xdp_prog={xdp_prog}' + vdev_list.append(vdev) + else: + for i in range(self.port_num): + vdev = "" + intf = self.dut.ports_info[i]["port"].get_interface_name() + self.set_port_queue(intf) + vdev = "net_af_xdp%d,iface=%s" % (i, intf) + if self.separate_cores: + vdev += ',busy_budget=0' + vdev_list.append(vdev) + + return vdev_list + + + def launch_testpmd(self, start_queue=0, xdp_prog='', fwd_mode="macswap", shared_umem=0, topology="", rss_ip=False, no_prefix=False): + + self.get_core_list() + + vdev = self.get_vdev_list(start_queue=start_queue, xdp_prog=xdp_prog, shared_umem=shared_umem,) + + if topology: + topology = "--port-topology=%s" % topology + if fwd_mode: + fwd_mode = "--forward-mode=%s" % fwd_mode + if rss_ip: + rss_ip = "--rss-ip" + else: + rss_ip = "" + + if no_prefix: + eal_params = f'--no-pci --vdev={vdev[0]}' + else: + eal_params = self.dut.create_eal_parameters( + cores=self.core_list[ + : -max(self.port_num, self.vdev_num) * self.queue_number + ], + vdevs=vdev, + no_pci=True, + ) + app_name = self.dut.apps_name["test-pmd"] + command = f'{app_name} {eal_params} --log-level=pmd.net.af_xdp:8' + command += f' -- -i --auto-start --nb-cores={self.nb_cores}' + command += f' --rxq={self.queue_number} --txq={self.queue_number}' + command += f' {fwd_mode} {rss_ip} {topology}' + + self.logger.info("start testpmd:") + self.logger.info(command.replace("--", "\n\t--")) + self.out = self.dut.send_expect(command, "testpmd> ", 120) + self.logger.info('dpdk-testpmd output:\n' + self.out) + + + def launch_sec_testpmd(self): + app_name = self.dut.apps_name["test-pmd"] + command = f'{app_name} --no-pci' + command += f' --proc-type=auto' + command += f' --log-level=pmd.net.af_xdp:8' + command += f' -- -i --auto-start' + self.logger.info("start secondary testpmd:") + self.logger.info(command.replace("--", "\n\t--")) + self.out = self.sec_proc.send_expect(command, "testpmd> ", 120) + self.logger.info('secondary dpdk-testpmd output:\n' + self.out) + + + def create_table(self, index=1): + self.table_header = [ + "Frame Size [B]", + "Queue Number", + "Port Throughput [Mpps]", + "Port Linerate [%]", + ] + self.result_table_create(self.table_header) + + + def update_table_info(self, *param): + for each in param: + self.result_table_add(each) + + + def calculate_avg_throughput(self, frame_size, tgen_input, fwd_mode): + """ + send packet and get the throughput + """ + # set traffic option + traffic_opt = {"delay": 2, "duration": self.traffic_duration} + + # clear streams before add new streams + self.tester.scapy_execute() + self.tester.pktgen.clear_streams() + + # run packet generator + fields_config = { + "ip": { + "dst": {"action": "random"}, + }, + } + self.logger.debug("Content of 'fields_config':") + self.logger.debug(fields_config) + + streams = self.pktgen_helper.prepare_stream_from_tginput( + tgen_input, 100, fields_config, self.tester.pktgen + ) + _, pps = self.tester.pktgen.measure_throughput( + stream_ids=streams, options=traffic_opt + ) + + Mpps = pps / 1000000.0 + + if fwd_mode != "rxonly": + self.verify( + Mpps > 0, "can not receive packets of frame size %d" % (frame_size) + ) + throughput = Mpps * 100 / float(self.wirespeed(self.nic, frame_size, 1)) + + return Mpps, throughput + + + def check_packets(self, port, queue): + """ + check that given port has receive packets in a queue + """ + info = re.findall("###### NIC extended statistics for port %d" % port, self.out) + index = self.out.find(info[0]) + 
rx = re.search(f"rx_q{queue}_packets:\s*(\d*)", self.out[index:]) + tx = re.search(f"tx_q{queue}_packets:\s*(\d*)", self.out[index:]) + rx_packets = int(rx.group(1)) + tx_packets = int(tx.group(1)) + info = f'port {port} rx_q{queue}_packets: {rx_packets}, tx_q{queue}_packets: {tx_packets}' + self.logger.info(f'verify non-zero: {info}') + self.verify(rx_packets > 0 and tx_packets > 0, info,) + + + def send_and_verify_throughput(self, pkt_type="", frame_sizes='', fwd_mode=""): + if frame_sizes == '': + frame_sizes = self.frame_sizes + for frame_size in frame_sizes: + self.logger.info(f'Running test {self.running_case} with frame size {frame_size}B.') + result = [frame_size, self.queue_number] + tgen_input = [] + for rx_port in range(0, max(self.port_num, self.vdev_num)): + dst_mac = self.dut.get_mac_address(self.dut_ports[rx_port]) + pkt = Packet(pkt_len=frame_size) + pkt.config_layers( + [ + ("ether", {"dst": dst_mac}), + ("ipv4", {"dst": "192.168.%d.1" % (rx_port + 1), "proto": 255}), + ] + ) + pcap = os.path.join(self.out_path, f"af_xdp_{rx_port}_{frame_size}.pcap") + pkt.save_pcapfile(None, pcap) + # tgen_input.append((rx_port, rx_port, pcap)) + tgen_input.append((rx_port, 0, pcap)) + self.logger.debug(f"tgen_input (port_num={self.port_num}, vdev_num={self.vdev_num}):") + self.logger.debug(tgen_input) + + Mpps, throughput = self.calculate_avg_throughput( + frame_size, tgen_input, fwd_mode + ) + self.logger.debug(f"Mpps: {Mpps}, throughput: {throughput}") + result.append(Mpps) + result.append(throughput) + self.logger.debug(f"result: {result}") + + self.out = self.dut.send_expect("show port xstats all", "testpmd> ", 60) + # self.out += '\n' + self.dut.send_expect("stop", "testpmd> ", 60) + self.logger.info('dpdk-testpmd output:\n' + self.out) + + self.logger.info(f"port_num: {self.port_num}, queue_number: {self.queue_number}, vdev_num: {self.vdev_num}, nb_cores: {self.nb_cores}, separate_cores: {self.separate_cores}") + + for port_index in range(0, max(self.port_num, self.vdev_num)): + for queue_index in range(0, self.queue_number): + self.check_packets(port_index, queue_index) + + self.dut.send_expect("clear port xstats all", "testpmd> ", 60) + # self.dut.send_expect("start", "testpmd> ", 60) + + self.update_table_info(result) + + + def check_sharing_umem(self): + """ + Check for the log: eth0,qid1 sharing UMEM + """ + intf = self.dut.ports_info[0]["port"].get_interface_name() + to_check = f'{intf},qid1 sharing UMEM' + self.logger.info(f'check for the log: "{to_check}"') + info = re.findall(f'.*{to_check}', self.out) + if len(info) > 0: + self.logger.info(f'"{info[0]}" - confirmed') + else: + self.verify(False, '",qid1 sharing UMEM" - not found') + + + def check_busy_polling(self): + """ + Check for the log: Preferred busy polling not enabled + """ + to_check = 'Preferred busy polling not enabled' + self.logger.info(f'check for the log: "{to_check}"') + info = re.findall(f'.*{to_check}', self.out) + if len(info) > 0: + self.logger.info(f'"{info[0]}" - confirmed') + else: + self.verify(False, f'"{to_check}" - not found') + + + def check_xdp_program_loaded(self): + """ + Check for the log: Successfully loaded XDP program xdp_example.o with fd + """ + # load_custom_xdp_prog(): Successfully loaded XDP program /tmp/xdp_example.o with fd 227 + to_check = 'Successfully loaded XDP program' + self.logger.info(f'check for the log: "{to_check}"') + info = re.findall(f'.*{to_check}.*', self.out) + if len(info) > 0: + self.logger.info(f'"{info[0]}" - confirmed') + else: + self.verify(False, 
f'"{to_check}" - not found') + + + def create_xdp_file(self, file_name=''): + if file_name == '': + file_name = f'{self.xdp_file}.c' + program = [ + '#include ', + '#include ', + '', + 'struct bpf_map_def SEC("maps") xsks_map = {', + ' .type = BPF_MAP_TYPE_XSKMAP,', + ' .max_entries = 64,', + ' .key_size = sizeof(int),', + ' .value_size = sizeof(int),', + '};', + '', + 'static unsigned int idx;', + '', + 'SEC("xdp-example")', + '', + 'int xdp_sock_prog(struct xdp_md *ctx)', + '{', + ' int index = ctx->rx_queue_index;', + '', + ' /* Drop every other packet */', + ' if (idx++ % 2)', + ' return XDP_DROP;', + ' else', + ' return bpf_redirect_map(&xsks_map, index, XDP_PASS);', + '}', + ] + self.logger.info(f'creating XDP file "{file_name}":') + with open(file_name, "w") as file: + for line in program: + file.write(f'{line}\n') + self.logger.info(line) + + + def compile_xdp_program(self, c_file='', o_file=''): + if c_file == '': + c_file = f'{self.xdp_file}.c' + if o_file == '': + o_file = f'{self.xdp_file}.o' + self.logger.info(f'compile the XDP program "{c_file}":') + out = self.shell.send_expect(f'clang -v -O2 -Wall -target bpf -c {c_file} -o {o_file}', '# ') + self.logger.info(out) + + + def check_sec_process(self): + self.out = self.sec_proc.send_expect("show port info all", "testpmd> ", 60) + self.logger.info('secondary dpdk-testpmd output:\n' + self.out) + # ensure that you can see the port info of the net_af_xdp0 PMD from the primary process + # like: "Device name: net_af_xdp0" + to_check = 'net_af_xdp0' + self.logger.info(f'check for the log: "{to_check}"') + info = re.findall(f'.*{to_check}', self.out) + if len(info) > 0: + self.logger.info(f'"{info[0]}" - confirmed') + else: + self.verify(False, f'"{to_check}" - not found') + + self.sec_proc.send_expect("quit", "# ", 60) + + + def set_busy_polling(self, non_busy_polling=False): + self.logger.info("configuring preferred busy polling feature:") + if non_busy_polling: + # non busy polling tests: TC2,4,5,7,12 + napi_defer_hard_irqs = 0 + gro_flush_timeout = 0 + else: + # busy polling tests: TC1,3,6,8,9,10,11,13,14,15 + napi_defer_hard_irqs = 2 + gro_flush_timeout = 200000 + + for port in self.dut_ports: + intf = self.dut.ports_info[port]["port"].get_interface_name() + cmd = f'echo {napi_defer_hard_irqs} >> /sys/class/net/{intf}/napi_defer_hard_irqs' + self.irqs_set.send_expect(cmd, '# ') + cmd = f'echo {gro_flush_timeout} >> /sys/class/net/{intf}/gro_flush_timeout' + self.irqs_set.send_expect(cmd, '# ') + + + def test_perf_one_port_multiqueue_and_same_irqs(self): + """ + Test case 1: multiqueue test with PMD and IRQs pinned to same cores + """ + self.port_num = 1 + self.queue_number = 2 + self.vdev_num = 1 + self.nb_cores = 2 + self.separate_cores = False + + self.set_busy_polling() + self.create_table() + self.launch_testpmd() + self.assign_port_core() + self.send_and_verify_throughput() + self.result_table_print() + + + def test_perf_one_port_multiqueue_and_separate_irqs(self): + """ + Test case 2: multiqueue test with PMD and IRQs are pinned to separate cores + """ + self.port_num = 1 + self.queue_number = 2 + self.vdev_num = 1 + self.nb_cores = 2 + self.separate_cores = True + + self.set_busy_polling(non_busy_polling=True) + self.create_table() + self.launch_testpmd() + self.assign_port_core() + self.send_and_verify_throughput() + self.result_table_print() + + + def test_perf_one_port_multiqueues_with_two_vdev(self): + """ + Test case 3: one port with two vdev and multi-queues test + """ + self.port_num = 1 + self.queue_number = 4 + 
self.vdev_num = 2 + self.nb_cores = 8 + self.separate_cores = False + + self.set_busy_polling() + self.create_table() + self.launch_testpmd() + self.assign_port_core() + self.send_and_verify_throughput() + self.result_table_print() + + + def test_perf_one_port_single_queue_and_separate_irqs(self): + """ + Test case 4: single port test with PMD and IRQs are pinned to separate cores + """ + self.port_num = 1 + self.queue_number = 1 + self.vdev_num = 1 + self.nb_cores = 1 + self.separate_cores = True + + self.set_busy_polling(non_busy_polling=True) + self.create_table() + self.launch_testpmd() + self.assign_port_core() + self.send_and_verify_throughput() + self.result_table_print() + + + def test_perf_one_port_single_queue_with_two_vdev(self): + """ + Test case 5: one port with two vdev and single queue test + """ + self.port_num = 1 + self.queue_number = 1 + self.vdev_num = 2 + self.nb_cores = 2 + self.separate_cores = True + + self.set_busy_polling(non_busy_polling=True) + self.create_table() + self.launch_testpmd() + self.assign_port_core() + self.send_and_verify_throughput() + self.result_table_print() + + + def test_perf_two_port_and_same_irqs(self): + """ + Test case 6: two ports test with PMD and IRQs pinned to same cores + """ + self.port_num = 2 + self.queue_number = 1 + self.vdev_num = 2 + self.nb_cores = 2 + self.separate_cores = False + + self.set_busy_polling() + self.create_table() + self.launch_testpmd() + self.assign_port_core() + self.send_and_verify_throughput() + self.result_table_print() + + + def test_perf_two_port_and_separate_irqs(self): + """ + Test case 7: two port test with PMD and IRQs are pinned to separate cores + """ + self.port_num = 2 + self.queue_number = 1 + self.vdev_num = 2 + self.nb_cores = 2 + self.separate_cores = True + + self.set_busy_polling(non_busy_polling=True) + self.create_table() + self.launch_testpmd() + self.assign_port_core() + self.send_and_verify_throughput() + self.result_table_print() + + + def test_func_start_queue(self): + """ + Test case 8: func_start_queue + """ + self.port_num = 1 + self.queue_number = 1 + self.vdev_num = 1 + self.nb_cores = 2 + self.separate_cores = False + + self.set_busy_polling() + self.create_table() + self.launch_testpmd(start_queue=1) + self.send_and_verify_throughput(frame_sizes=[64]) + + + def test_func_queue_count(self): + """ + Test case 9: func_queue_count + """ + self.port_num = 1 + self.queue_number = 2 + self.vdev_num = 1 + self.nb_cores = 2 + self.separate_cores = False + + self.set_busy_polling() + self.create_table() + self.launch_testpmd() + self.send_and_verify_throughput(frame_sizes=[64]) + + + def test_func_shared_umem_1pmd(self): + """ + Test case 10: func_shared_umem_1pmd + """ + self.port_num = 1 + self.queue_number = 2 + self.vdev_num = 1 + self.nb_cores = 2 + self.separate_cores = False + + self.set_busy_polling() + self.create_table() + self.launch_testpmd(shared_umem=1) + self.check_sharing_umem() + self.send_and_verify_throughput(frame_sizes=[64]) + + def test_func_shared_umem_2pmd(self): + """ + Test case 11: func_shared_umem_2pmd + """ + self.port_num = 1 + self.queue_number = 1 + self.vdev_num = 2 + self.nb_cores = 2 + self.separate_cores = False + + self.set_busy_polling() + self.create_table() + self.launch_testpmd(shared_umem=1) + self.check_sharing_umem() + self.send_and_verify_throughput(frame_sizes=[64]) + + def test_func_busy_budget(self): + """ + Test case 12: func_busy_budget + """ + self.port_num = 1 + self.queue_number = 1 + self.vdev_num = 1 + self.nb_cores = 1 + 
self.separate_cores = True + + self.set_busy_polling(non_busy_polling=True) + self.create_table() + self.launch_testpmd() + self.check_busy_polling() + self.send_and_verify_throughput(frame_sizes=[64]) + + def test_func_xdp_prog(self): + """ + Test case 13: func_xdp_prog + """ + self.port_num = 1 + self.queue_number = 1 + self.vdev_num = 1 + self.nb_cores = 2 + self.separate_cores = False + + self.set_busy_polling() + self.create_table() + self.create_xdp_file() + self.compile_xdp_program() + self.launch_testpmd(fwd_mode='', xdp_prog=f'{self.xdp_file}.o') + self.check_xdp_program_loaded() + self.send_and_verify_throughput(frame_sizes=[64]) + + def test_func_xdp_prog_mq(self): + """ + Test case 14: func_xdp_prog_mq + """ + self.port_num = 1 + self.queue_number = 2 + self.vdev_num = 1 + self.nb_cores = 2 + self.separate_cores = False + + self.set_busy_polling() + self.create_table() + self.create_xdp_file() + self.compile_xdp_program() + self.launch_testpmd(fwd_mode='', xdp_prog=f'{self.xdp_file}.o') + self.check_xdp_program_loaded() + self.send_and_verify_throughput(frame_sizes=[64]) + + def test_func_secondary_prog(self): + """ + Test case 15: func_secondary_prog + """ + self.port_num = 1 + self.queue_number = 1 + self.vdev_num = 1 + self.nb_cores = 1 + self.separate_cores = False + + self.set_busy_polling() + self.launch_testpmd(no_prefix=True) + self.launch_sec_testpmd() + self.check_sec_process() + + + def tear_down(self): + self.dut.send_expect("quit", "# ", 60) + self.sec_proc.send_expect("quit", "# ", 60) + self.shell.send_expect(f'/bin/rm -f {self.xdp_file}*', '# ') + self.logger.info('+---------------------------------------------------------------------+') + self.logger.info('| Test complete |') + self.logger.info('+---------------------------------------------------------------------+') + self.logger.info('') + + + def tear_down_all(self): + self.dut.kill_all() diff --git a/tests/TestSuite_af_xdp_2.py b/tests/TestSuite_af_xdp_2.py deleted file mode 100644 index 8e2fa9c3..00000000 --- a/tests/TestSuite_af_xdp_2.py +++ /dev/null @@ -1,474 +0,0 @@ -# SPDX-License-Identifier: BSD-3-Clause -# Copyright(c) 2019-2020 Intel Corporation -# - -import os -import re -import time - -from framework.packet import Packet -from framework.pktgen import PacketGeneratorHelper -from framework.settings import HEADER_SIZE -from framework.test_case import TestCase - - -class TestAfXdp(TestCase): - def set_up_all(self): - """ - Run at the start of each test suite. 
- """ - # self.verify(self.nic in ("I40E_40G-QSFP_A"), "the port can not run this suite") - - self.frame_sizes = [64, 128, 256, 512, 1024, 1518] - self.dut_ports = self.dut.get_ports() - self.verify(len(self.dut_ports) >= 2, "Insufficient ports for testing") - - self.ports_socket = self.dut.get_numa_id(self.dut_ports[0]) - - self.header_size = HEADER_SIZE["eth"] + HEADER_SIZE["ip"] + HEADER_SIZE["udp"] - - self.logger.info( - "you can config packet_size in file %s.cfg," % self.suite_name - + "in region 'suite' like packet_sizes=[64, 128, 256]" - ) - # get the frame_sizes from cfg file - if "packet_sizes" in self.get_suite_cfg(): - self.frame_sizes = self.get_suite_cfg()["packet_sizes"] - - self.out_path = "/tmp" - out = self.tester.send_expect("ls -d %s" % self.out_path, "# ") - if "No such file or directory" in out: - self.tester.send_expect("mkdir -p %s" % self.out_path, "# ") - self.base_dir = self.dut.base_dir.replace("~", "/root") - self.pktgen_helper = PacketGeneratorHelper() - - self.dut.restore_interfaces() - self.irqs_set = self.dut.new_session(suite="irqs-set") - - def set_up(self): - pass - - def set_port_queue(self, intf): - self.dut.send_expect( - "ethtool -L %s combined %d" % (intf, self.nb_cores / self.port_num), "# " - ) - - def config_stream(self, rx_port, frame_size): - tgen_input = [] - - dst_mac = self.dut.get_mac_address(self.dut_ports[rx_port]) - pkt = Packet(pkt_len=frame_size) - pkt.config_layers( - [ - ("ether", {"dst": dst_mac}), - ("ipv4", {"dst": "192.168.%d.1" % (rx_port + 1), "proto": 255}), - ] - ) - pcap = os.path.join( - self.out_path, "af_xdp_%d_%d_%d.pcap" % (self.port_num, rx_port, frame_size) - ) - pkt.save_pcapfile(None, pcap) - tgen_input.append((rx_port, rx_port, pcap)) - - return tgen_input - - def config_rule_stream(self, rule_index, frame_size): - tgen_input = [] - - rule = self.rule[rule_index] - pkt = Packet(pkt_len=frame_size) - pkt.config_layers([("udp", {"src": rule[-2], "dst": rule[-1]})]) - pcap = os.path.join(self.out_path, "af_xdp_%d_%d.pcap" % (rule[-2], frame_size)) - pkt.save_pcapfile(None, pcap) - tgen_input.append((rule[0], rule[0], pcap)) - - return tgen_input - - def ethtool_set_rule(self): - rule_id, rule = 1, [] - for i in range(self.port_num): - intf = self.dut.ports_info[i]["port"].get_interface_name() - self.irqs_set.send_expect("ethtool -N %s rx-flow-hash udp4 fn" % intf, "# ") - self.irqs_set.send_expect( - "ethtool -N %s flow-type udp4 src-port 4243 dst-port 4243 action 0 loc %d" - % (intf, rule_id), - "# ", - ) - self.irqs_set.send_expect( - "ethtool -N %s flow-type udp4 src-port 4242 dst-port 4242 action 1 loc %d" - % (intf, rule_id + 1), - "# ", - ) - rule.append((i, intf, rule_id, 4243, 4243)) - rule.append((i, intf, rule_id + 1, 4242, 4242)) - rule_id += 2 - time.sleep(1) - self.rule = rule - - def ethtool_del_rule(self): - for each in self.rule: - self.irqs_set.send_expect( - "ethtool -N %s delete %d" % (each[1], each[2]), "# " - ) - - def get_core_list(self): - core_config = "1S/%dC/1T" % ( - self.nb_cores + 1 + max(self.port_num, self.vdev_num) * self.queue_number - ) - self.core_list = self.dut.get_core_list(core_config, socket=self.ports_socket) - - def assign_port_core(self, separate=True): - if separate: - core_list = self.core_list[ - -max(self.port_num, self.vdev_num) * self.queue_number : - ] - else: - core_list = self.core_list[ - : -max(self.port_num, self.vdev_num) * self.queue_number - ][-max(self.port_num, self.vdev_num) * self.queue_number :] - - for i in range(self.port_num): - intf = 
-            cores = ",".join(
-                core_list[i * self.queue_number : (i + 1) * self.queue_number]
-            )
-            if self.port_num == 1 and self.vdev_num == 2:
-                cores = ",".join(core_list)
-            command = "%s/set_irq_affinity %s %s" % ("/root", cores, intf)
-            out = self.irqs_set.send_expect(command, "# ")
-            self.verify(
-                "No such file or directory" not in out,
-                "can not find the set_irq_affinity in dut root",
-            )
-            time.sleep(1)
-
-    def get_vdev_list(self):
-        vdev_list = []
-
-        if self.port_num == 1:
-            intf = self.dut.ports_info[0]["port"].get_interface_name()
-            self.set_port_queue(intf)
-            time.sleep(1)
-            for i in range(self.vdev_num):
-                vdev = ""
-                vdev = "net_af_xdp%d,iface=%s,start_queue=%d,queue_count=%d" % (
-                    i,
-                    intf,
-                    i * self.queue_number,
-                    self.queue_number,
-                )
-                vdev_list.append(vdev)
-        else:
-            for i in range(self.port_num):
-                vdev = ""
-                intf = self.dut.ports_info[i]["port"].get_interface_name()
-                self.set_port_queue(intf)
-                vdev = "net_af_xdp%d,iface=%s" % (i, intf)
-                vdev_list.append(vdev)
-
-        return vdev_list
-
-    def launch_testpmd(self, fwd_mode="", topology="", rss_ip=False):
-        self.get_core_list()
-
-        vdev = self.get_vdev_list()
-
-        if topology:
-            topology = "--port-topology=%s" % topology
-        if fwd_mode:
-            fwd_mode = "--forward-mode=%s" % fwd_mode
-        if rss_ip:
-            rss_ip = "--rss-ip"
-        else:
-            rss_ip = ""
-
-        eal_params = self.dut.create_eal_parameters(
-            cores=self.core_list[
-                : -max(self.port_num, self.vdev_num) * self.queue_number
-            ],
-            vdevs=vdev,
-            no_pci=True,
-        )
-        app_name = self.dut.apps_name["test-pmd"]
-        command = (
-            app_name
-            + " %s --log-level=pmd.net.af_xdp:8 -- -i %s %s --auto-start --nb-cores=%d --rxq=%d "
-            "--txq=%d %s"
-            % (
-                eal_params,
-                fwd_mode,
-                rss_ip,
-                self.nb_cores,
-                self.queue_number,
-                self.queue_number,
-                topology,
-            )
-        )
-
-        self.logger.info("start testpmd")
-        self.dut.send_expect(command, "testpmd> ", 120)
-
-    def create_table(self, index=1):
-        if self.port_num == 2 or index == 2:
-            self.table_header = [
-                "FrameSize(B)",
-                "Queue number",
-                "Port0 Throughput(Mpps)",
-                "Port0 % linerate",
-                "Port1 Throughput(Mpps)",
-                "Port1 % linerate",
-            ]
-        else:
-            self.table_header = [
-                "FrameSize(B)",
-                "Queue number",
-                "Port Throughput(Mpps)",
-                "Port % linerate",
-            ]
-        self.result_table_create(self.table_header)
-
-    def update_table_info(self, *param):
-        for each in param:
-            self.result_table_add(each)
-
-    def calculate_avg_throughput(self, frame_size, tgen_input, fwd_mode):
-        """
-        send packet and get the throughput
-        """
-        # set traffic option
-        traffic_opt = {"delay": 5}
-
-        # clear streams before add new streams
-        self.tester.pktgen.clear_streams()
-
-        # run packet generator
-        fields_config = {
-            "ip": {
-                "dst": {"action": "random"},
-            },
-        }
-        streams = self.pktgen_helper.prepare_stream_from_tginput(
-            tgen_input, 100, fields_config, self.tester.pktgen
-        )
-        _, pps = self.tester.pktgen.measure_throughput(
-            stream_ids=streams, options=traffic_opt
-        )
-
-        Mpps = pps / 1000000.0
-
-        if fwd_mode != "rxonly":
-            self.verify(
-                Mpps > 0, "can not receive packets of frame size %d" % (frame_size)
-            )
-        throughput = Mpps * 100 / float(self.wirespeed(self.nic, frame_size, 1))
-
-        return Mpps, throughput
-
-    def check_packets_of_each_port(self, port_index):
-        """
-        check each port has receive packets
-        """
-        info = re.findall("Forward statistics for port %d" % port_index, self.out)
-        index = self.out.find(info[0])
-        rx = re.search("RX-packets:\s*(\d*)", self.out[index:])
-        tx = re.search("TX-packets:\s*(\d*)", self.out[index:])
-        rx_packets = int(rx.group(1))
-        tx_packets = int(tx.group(1))
-        self.verify(
-            rx_packets > 0 and tx_packets > 0,
-            "rx-packets:%d, tx-packets:%d" % (rx_packets, tx_packets),
-        )
-
-    def check_packets_of_each_queue(self, port_index):
-        """
-        check port queue has receive packets
-        """
-        for queue_index in range(0, self.queue_number):
-            queue_info = re.findall(
-                "RX\s*Port=\s*%d/Queue=\s*%d" % (port_index, queue_index), self.out
-            )
-            queue = queue_info[0]
-            index = self.out.find(queue)
-            rx = re.search("RX-packets:\s*(\d*)", self.out[index:])
-            tx = re.search("TX-packets:\s*(\d*)", self.out[index:])
-            rx_packets = int(rx.group(1))
-            tx_packets = int(tx.group(1))
-            self.verify(
-                rx_packets > 0 and tx_packets > 0,
-                "The port %s queue %d, rx-packets:%d, tx-packets:%d"
-                % (port_index, queue_index, rx_packets, tx_packets),
-            )
-
-    def check_packets_of_all_queue(self, port_num):
-        """
-        check all queue has receive packets
-        """
-        for port_index in range(0, port_num):
-            self.check_packets_of_each_queue(port_index)
-
-    def send_and_verify_throughput(self, pkt_type="", fwd_mode=""):
-        for frame_size in self.frame_sizes:
-            info = "Running test %s, and %d frame size." % (
-                self.running_case,
-                frame_size,
-            )
-            self.logger.info(info)
-
-            result = [frame_size, self.queue_number]
-
-            if pkt_type.lower() == "udp":
-                num = len(self.rule)
-            else:
-                num = self.port_num
-
-            for i in range(num):
-                if pkt_type.lower() == "udp":
-                    tgen_input = self.config_rule_stream(i, frame_size)
-                else:
-                    tgen_input = self.config_stream(i, frame_size)
-
-                Mpps, throughput = self.calculate_avg_throughput(
-                    frame_size, tgen_input, fwd_mode
-                )
-                result.append(Mpps)
-                result.append(throughput)
-
-                self.out = self.dut.send_expect("stop", "testpmd> ", 60)
-
-                if self.queue_number == 1:
-                    self.check_packets_of_each_port(i)
-                elif self.vdev_num == 2:
-                    self.check_packets_of_all_queue(2)
-                else:
-                    self.check_packets_of_each_queue(i)
-
-                self.dut.send_expect("start", "testpmd> ", 60)
-
-            self.update_table_info(result)
-
-            # check the throughput between two port
-            if len(result) == 6:
-                self.verify(
-                    round((result[-2] - result[-4]) / result[-4], 2) <= 0.1,
-                    "The gap is too big btween two port's throughput",
-                )
-
-    def test_perf_one_port_single_queue_and_separate_irqs(self):
-        """
-        single port test with PMD and IRQs are pinned to separate cores
-        """
-        self.nb_cores = 1
-        self.queue_number = 1
-        self.port_num = 1
-        self.vdev_num = 1
-
-        self.create_table()
-        self.launch_testpmd(topology="loop")
-        self.assign_port_core()
-        self.send_and_verify_throughput()
-
-        self.result_table_print()
-
-    def test_perf_one_port_multiqueue_and_separate_irqs(self):
-        """
-        multiqueue test with PMD and IRQs are pinned to separate cores
-        """
-        self.nb_cores = 2
-        self.queue_number = 2
-        self.port_num = 1
-        self.vdev_num = 1
-
-        self.create_table()
-        self.launch_testpmd(topology="loop")
-        self.assign_port_core()
-        self.send_and_verify_throughput()
-
-        self.result_table_print()
-
-    def test_perf_one_port_multiqueue_and_same_irqs(self):
-        """
-        multiqueue test with PMD and IRQs pinned to same cores
-        """
-        self.nb_cores = 2
-        self.queue_number = 2
-        self.port_num = 1
-        self.vdev_num = 1
-
-        self.create_table()
-        self.launch_testpmd(topology="loop")
-        self.assign_port_core(separate=False)
-        self.send_and_verify_throughput()
-
-        self.result_table_print()
-
-    def test_perf_two_port_and_separate_irqs(self):
-        """
-        two port test with PMD and IRQs are pinned to separate cores
-        """
-        self.nb_cores = 2
-        self.queue_number = 1
-        self.port_num = 2
-        self.vdev_num = 2
-
-        self.create_table()
-        self.launch_testpmd(topology="loop")
-        self.assign_port_core()
-        self.send_and_verify_throughput()
-
-        self.result_table_print()
-
-    def test_perf_two_port_and_same_irqs(self):
-        """
-        two ports test with PMD and IRQs pinned to same cores
-        """
-        self.nb_cores = 2
-        self.queue_number = 1
-        self.port_num = 2
-        self.vdev_num = 2
-
-        self.create_table()
-        self.launch_testpmd(topology="loop")
-        self.assign_port_core(separate=False)
-        self.send_and_verify_throughput()
-
-        self.result_table_print()
-
-    def test_perf_one_port_single_queue_with_two_vdev(self):
-        """
-        one port with two vdev and single queue test
-        """
-        self.nb_cores = 2
-        self.queue_number = 1
-        self.port_num = 1
-        self.vdev_num = 2
-
-        self.create_table(2)
-        self.launch_testpmd(topology="loop")
-        self.assign_port_core()
-        self.ethtool_set_rule()
-        self.send_and_verify_throughput(pkt_type="udp")
-        self.ethtool_del_rule()
-
-        self.result_table_print()
-
-    def test_perf_one_port_multiqueues_with_two_vdev(self):
-        """
-        one port with two vdev and multi-queues test
-        """
-        self.nb_cores = 8
-        self.queue_number = 4
-        self.port_num = 1
-        self.vdev_num = 2
-
-        self.create_table()
-        self.launch_testpmd(topology="loop", rss_ip=True)
-        self.assign_port_core()
-        self.send_and_verify_throughput()
-
-        self.result_table_print()
-
-    def tear_down(self):
-        self.dut.send_expect("quit", "#", 60)
-
-    def tear_down_all(self):
-        self.dut.kill_all()
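
The "% linerate" column that calculate_avg_throughput() feeds into the result
table follows the standard Ethernet wire-overhead math: every frame occupies
an extra 20 bytes on the wire (8-byte preamble/SFD plus 12-byte inter-frame
gap). A minimal standalone sketch of that calculation, not part of the suite
(the helper names and the 40G/64B figures are illustrative only)::

    def linerate_mpps(link_gbps, frame_size):
        # Theoretical linerate in Mpps: link_bps / ((frame + 20B overhead) * 8 bits)
        return link_gbps * 1e9 / ((frame_size + 20) * 8) / 1e6

    def percent_linerate(measured_pps, link_gbps, frame_size):
        # Mirrors `throughput = Mpps * 100 / wirespeed(...)` in the suite
        return (measured_pps / 1e6) * 100 / linerate_mpps(link_gbps, frame_size)

    # 64B frames on a 40G link: 40e9 / (84 * 8) = ~59.52 Mpps at linerate,
    # so a measured 30 Mpps is ~50.4% of linerate.
    print(round(linerate_mpps(40, 64), 2))           # 59.52
    print(round(percent_linerate(30e6, 40, 64), 1))  # 50.4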
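The two-vdev-on-one-port cases depend on the net_af_xdp start_queue/queue_count
devargs, which get_vdev_list() uses to hand each PMD instance its own slice of
the interface's hardware queues (after ethtool -L has exposed enough combined
queues). A reduced sketch of that mapping; the helper name and the interface
name below are assumptions for illustration::

    def af_xdp_vdevs(iface, vdev_num, queue_number):
        # vdev i binds queues [i * queue_number, (i + 1) * queue_number)
        return [
            "net_af_xdp%d,iface=%s,start_queue=%d,queue_count=%d"
            % (i, iface, i * queue_number, queue_number)
            for i in range(vdev_num)
        ]

    # Two vdevs with four queues each (run `ethtool -L enp26s0f1 combined 8` first):
    #   --vdev net_af_xdp0,iface=enp26s0f1,start_queue=0,queue_count=4
    #   --vdev net_af_xdp1,iface=enp26s0f1,start_queue=4,queue_count=4
    for vdev in af_xdp_vdevs("enp26s0f1", 2, 4):
        print("--vdev", vdev)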
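The per-queue verification in check_packets_of_each_queue() is plain regex
scraping of testpmd's stop output. A self-contained sketch of the same parsing;
the function name is hypothetical and the sample output snippet (including its
counter values) is made up for illustration::

    import re

    sample = """
    ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 0/Queue= 0 -------
    RX-packets: 1513466        TX-packets: 1513466        TX-dropped: 0
    """

    def queue_counters(out, port, queue):
        # Locate the per-queue header, then read the first RX/TX counters after it
        anchor = re.search(r"RX\s*Port=\s*%d/Queue=\s*%d" % (port, queue), out)
        tail = out[anchor.start():]
        rx = int(re.search(r"RX-packets:\s*(\d+)", tail).group(1))
        tx = int(re.search(r"TX-packets:\s*(\d+)", tail).group(1))
        return rx, tx

    rx, tx = queue_counters(sample, 0, 0)
    assert rx > 0 and tx > 0, "rx-packets:%d, tx-packets:%d" % (rx, tx)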