From patchwork Wed Aug 3 01:44:51 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 114551
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V2 1/2] test_plans/dpdk_gro_lib_cbdma_test_plan: modify testplan to test virtio dequeue
Date: Tue, 2 Aug 2022 21:44:51 -0400
Message-Id: <20220803014451.1122589-1-weix.ling@intel.com>

Since DPDK-22.07, virtio supports async dequeue for the split and packed ring
paths, so modify the dpdk_gro_lib_cbdma test plan to test the split and packed
ring async dequeue feature.

Signed-off-by: Wei Ling
---
 test_plans/dpdk_gro_lib_cbdma_test_plan.rst | 54 +++++++++++++--------
 1 file changed, 35 insertions(+), 19 deletions(-)

diff --git a/test_plans/dpdk_gro_lib_cbdma_test_plan.rst b/test_plans/dpdk_gro_lib_cbdma_test_plan.rst
index 562ecdd0..e8a07461 100644
--- a/test_plans/dpdk_gro_lib_cbdma_test_plan.rst
+++ b/test_plans/dpdk_gro_lib_cbdma_test_plan.rst
@@ -35,24 +35,35 @@
 Currently, the GRO library provides GRO support for TCP/IPv4 packets and
 VxLAN packets which contain an outer IPv4 header and an inner TCP/IPv4
 packet.
 
-This test plan includes dpdk gro lib test with TCP/IPv4 traffic with CBDMA.
+This test plan includes the dpdk gro lib test with TCP/IPv4 traffic when vhost uses asynchronous operations with CBDMA channels.
 
 ..Note:
 1.For the packed virtqueue virtio-net test, qemu version > 4.2.0 and VM kernel version > 5.1 are needed, and packed ring multi-queues do not support reconnect in qemu yet.
-2.For split virtqueue virtio-net with multi-queues server mode test, need qemu version >= 5.2.0, dut to old qemu exist reconnect issue when multi-queues test.
+2.For the split virtqueue virtio-net with multi-queues server mode test, it is better to use qemu version >= 5.2.0, due to qemu (v4.2.0~v5.1.0) having a split ring multi-queues reconnection issue.
 3.A DPDK local patch for the vhost pmd is needed when testing the vhost asynchronous data path with testpmd.
 
 Prerequisites
 =============
 
+Topology
+--------
+
 Test flow: NIC2(In kernel) -> NIC1(DPDK) -> testpmd(csum fwd) -> Vhost -> Virtio-net
 
-Test flow
-=========
+General set up
+--------------
+1. Compile DPDK::
 
-NIC2(In kernel) -> NIC1(DPDK) -> testpmd(csum fwd) -> Vhost -> Virtio-net
+    # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static <build_target>
+    # ninja -C <build_target> -j 110
+    For example:
+    CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
+    ninja -C x86_64-native-linuxapp-gcc -j 110
 
+Test case
+=========
 
-Test Case1: DPDK GRO test with two queues and two CBDMA channels using tcp/ipv4 traffic
-=======================================================================================
+Test Case1: DPDK GRO test with two queues and cbdma channels using tcp/ipv4 traffic
+-----------------------------------------------------------------------------------
+This case tests the dpdk gro lib with TCP/IPv4 traffic when vhost uses asynchronous operations with CBDMA channels.
 
 1. Connect two nic ports directly, put nic2 into another namespace and turn on the tso of this nic port by the below cmds::
@@ -62,12 +73,12 @@ Test Case1: DPDK GRO test with two queues and two CBDMA channels using tcp/ipv4
 
     ip netns exec ns1 ifconfig enp26s0f0 1.1.1.8 up
     ip netns exec ns1 ethtool -K enp26s0f0 tso on
 
-2. Bind cbdma port and nic1 to vfio-pci, launch vhost-user with testpmd and set flush interval to 1::
+2. Bind 2 CBDMA channels and nic1 to vfio-pci, launch vhost-user with testpmd and set the flush interval to 1::
 
     ./usertools/dpdk-devbind.py -b vfio-pci xx:xx.x
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-31 -n 4 \
-    --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,dmas=[txq0@80:04.0;txq1@80:04.1]' \
-    -- -i --txd=1024 --rxd=1024 --txq=2 --rxq=2 --nb-cores=2
+    --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,dmas=[txq0;txq1;rxq0;rxq1]' \
+    -- -i --txd=1024 --rxd=1024 --txq=2 --rxq=2 --nb-cores=2 --lcore-dma=[lcore30@0000:00:04.0,lcore30@0000:00:04.1,lcore31@0000:00:04.1]
     testpmd> set fwd csum
     testpmd> stop
     testpmd> port stop 0
@@ -84,15 +95,16 @@ Test Case1: DPDK GRO test with two queues and two CBDMA channels using tcp/ipv4
 3. Set up the vm with a virtio device, using the kernel virtio-net driver::
 
-    taskset -c 31 /home/qemu-install/qemu-4.2.1/bin/qemu-system-x86_64 -name us-vhost-vm1 \
-    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem \
-    -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -netdev user,id=yinan,hostfwd=tcp:127.0.0.1:6005-:22 -device e1000,netdev=yinan \
-    -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu2004.img \
-    -chardev socket,id=char0,path=./vhost-net \
-    -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=2 \
-    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,host_tso4=on,guest_tso4=on,mq=on,vectors=15 \
-    -vnc :10 -daemonize
+    taskset -c 31 qemu-system-x86_64 -name vm0 -enable-kvm -cpu host -smp 1 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/image/ubuntu2004.img \
+    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 -pidfile /tmp/.vm0.pid -daemonize \
+    -monitor unix:/tmp/vm0_monitor.sock,server,nowait \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 -device e1000,netdev=nttsip1 \
+    -chardev socket,id=char0,path=./vhost-net \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=2 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,csum=on,gso=on,host_tso4=on,guest_tso4=on,mq=on,vectors=15 -vnc :4
 
 4. In the vm, configure the virtio-net device with an ip and turn the kernel gro off::
 
@@ -104,3 +116,7 @@ Test Case1: DPDK GRO test with two queues and two CBDMA channels using tcp/ipv4
 
     Host side : taskset -c 35 ip netns exec ns1 iperf -c 1.1.1.2 -i 1 -t 60 -m -P 2
     VM side: iperf -s
+
+6. While iperf is sending and receiving packets, check that the async data path functions (virtio_dev_rx_async_xxx, virtio_dev_tx_async_xxx) are being used at the host side::
+
+    perf top

From patchwork Wed Aug 3 01:45:01 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 114552
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V2 2/2] tests/dpdk_gro_lib_cbdma: modify testsuite to test virtio dequeue
Date: Tue, 2 Aug 2022 21:45:01 -0400
Message-Id: <20220803014501.1122653-1-weix.ling@intel.com>

Since DPDK-22.07, virtio supports async dequeue for the split and packed ring
paths, so modify the dpdk_gro_lib_cbdma test suite to test the split and
packed ring async dequeue feature.
Signed-off-by: Wei Ling
Acked-by: Xingguang He
Tested-by: Chenyu Huang
Acked-by: Lijuan Tu
---
 tests/TestSuite_dpdk_gro_lib_cbdma.py | 56 ++++++++++++++++++---------
 1 file changed, 37 insertions(+), 19 deletions(-)

diff --git a/tests/TestSuite_dpdk_gro_lib_cbdma.py b/tests/TestSuite_dpdk_gro_lib_cbdma.py
index 27d1ed17..c77c7075 100644
--- a/tests/TestSuite_dpdk_gro_lib_cbdma.py
+++ b/tests/TestSuite_dpdk_gro_lib_cbdma.py
@@ -43,9 +43,9 @@ class TestDPDKGROLibCbdma(TestCase):
         )
         self.path = self.dut.apps_name["test-pmd"]
         self.testpmd_name = self.path.split("/")[-1]
-        cores_list = self.dut.get_core_list(config="all", socket=self.ports_socket)
-        self.vhost_list = cores_list[0:3]
-        self.qemu_cpupin = cores_list[3:4][0]
+        self.cores_list = self.dut.get_core_list(config="all", socket=self.ports_socket)
+        self.vhost_core_list = self.cores_list[0:3]
+        self.qemu_cpupin = self.cores_list[3:4][0]
 
         # Set the params for VM
         self.virtio_ip1 = "1.1.1.2"
@@ -175,6 +175,7 @@ class TestDPDKGROLibCbdma(TestCase):
                 raise Exception("Set up VM ENV failed")
         except Exception as e:
             print((utils.RED("Failure for %s" % str(e))))
+        self.vm1_dut.restore_interfaces()
 
     def iperf_result_verify(self, run_info):
         """
@@ -202,33 +203,49 @@ class TestDPDKGROLibCbdma(TestCase):
             iperfdata_kb = float(tmp_value)
         return iperfdata_kb
 
-    def check_dut_perf_top_info(self, check_string):
-        self.dut.send_expect("perf top", "# ")
+    def get_and_verify_func_name_of_perf_top(self, func_name_list):
+        self.dut.send_expect("rm -fr perf_top.log", "# ", 120)
+        self.dut.send_expect("perf top > perf_top.log", "", 120)
+        time.sleep(10)
+        self.dut.send_expect("^C", "#")
+        out = self.dut.send_expect("cat perf_top.log", "# ", 120)
+        self.logger.info(out)
+        for func_name in func_name_list:
+            self.verify(
+                func_name in out,
+                "the func_name {} is not in the perf top output".format(func_name),
+            )
 
     def test_vhost_gro_tcp_ipv4_with_cbdma_enable(self):
         """
-        Test Case1: DPDK GRO test with two queues and two CBDMA channels using tcp/ipv4 traffic
+        Test Case1: DPDK GRO test with two queues and cbdma channels using tcp/ipv4 traffic
         """
         self.config_kernel_nic_host()
         self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=2)
-        lcore_dma = "[lcore{}@{},lcore{}@{},lcore{}@{}]".format(
-            self.vhost_list[1],
-            self.cbdma_list[0],
-            self.vhost_list[1],
-            self.cbdma_list[1],
-            self.vhost_list[2],
-            self.cbdma_list[1],
+        lcore_dma = (
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            "lcore%s@%s,"
+            % (
+                self.vhost_core_list[1],
+                self.cbdma_list[0],
+                self.vhost_core_list[1],
+                self.cbdma_list[1],
+                self.vhost_core_list[2],
+                self.cbdma_list[1],
+            )
         )
         param = (
-            "--txd=1024 --rxd=1024 --txq=2 --rxq=2 --nb-cores=2 --lcore-dma={}".format(
-                lcore_dma
-            )
+            "--txd=1024 --rxd=1024 --txq=2 --rxq=2 --nb-cores=2 --lcore-dma=[%s]"
+            % lcore_dma
+        )
+        eal_param = (
+            "--vdev 'net_vhost0,iface=vhost-net,queues=2,dmas=[txq0;txq1;rxq0;rxq1]'"
         )
-        eal_param = "--vdev 'net_vhost0,iface=vhost-net,queues=2,dmas=[txq0;txq1]'"
         ports = self.cbdma_list
         ports.append(self.pci)
         self.vhost_pmd.start_testpmd(
-            cores=self.vhost_list,
+            cores=self.vhost_core_list,
             ports=ports,
             prefix="vhost",
             eal_param=eal_param,
@@ -253,8 +270,9 @@ class TestDPDKGROLibCbdma(TestCase):
             "",
             180,
         )
+        self.func_name_list = ["virtio_dev_rx_async", "virtio_dev_tx_async"]
+        self.get_and_verify_func_name_of_perf_top(self.func_name_list)
         time.sleep(30)
-        print(out)
         perfdata = self.iperf_result_verify("GRO lib")
         print(("the GRO lib %s " % (self.output_result)))
         self.quit_testpmd()
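
For reference, the perf-top check added in patch 2/2 can be exercised outside
the DTS framework. Below is a minimal standalone Python sketch of the same
pattern: capture ``perf top`` output for a few seconds, then verify that the
vhost async data-path symbols appear. The function names and the 10-second
capture window come from the patch itself; the use of ``subprocess``, perf's
``--stdio`` mode, and a plain ``assert`` are illustrative assumptions, since
DTS drives ``perf top`` through ``send_expect()`` on the DUT session instead::

    import subprocess
    import time

    # Async data-path symbols expected on the host while iperf traffic flows;
    # these two names come from the patch (virtio_dev_rx_async*/virtio_dev_tx_async*).
    FUNC_NAME_LIST = ["virtio_dev_rx_async", "virtio_dev_tx_async"]

    def verify_perf_top_symbols(func_name_list, capture_seconds=10):
        """Standalone approximation of get_and_verify_func_name_of_perf_top().

        Assumes a local shell with root privileges and perf installed.
        """
        with open("perf_top.log", "w") as log:
            # --stdio prints periodic text samples instead of the TUI, which is
            # friendlier to redirection than the bare `perf top > perf_top.log`
            # the suite sends over its SSH session.
            proc = subprocess.Popen(
                ["perf", "top", "--stdio"],
                stdout=log,
                stderr=subprocess.DEVNULL,
            )
            time.sleep(capture_seconds)  # the patch samples for 10 seconds
            proc.terminate()  # stands in for the ^C the suite sends
            proc.wait()

        with open("perf_top.log") as log:
            out = log.read()
        for func_name in func_name_list:
            # Mirrors self.verify(func_name in out, ...) in the DTS helper.
            assert func_name in out, (
                "the func_name {} is not in the perf top output".format(func_name)
            )

    if __name__ == "__main__":
        verify_perf_top_symbols(FUNC_NAME_LIST)

Run it on the host while the iperf traffic from step 5 is flowing; if the vhost
enqueue/dequeue copies are really going through the async path, both symbol
prefixes should appear in the profile.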