From patchwork Mon Jan 9 09:12:15 2023
X-Patchwork-Submitter: "Yuan, DukaiX"
X-Patchwork-Id: 121728
From: Dukai Yuan
To: dts@dpdk.org
Cc: Dukai Yuan
Subject: [dts][PATCH V1] test_plans/pvp_qemu_multi_paths_port_restart: add new cases description
Date: Mon, 9 Jan 2023 17:12:15 +0800
Message-Id: <20230109091215.21343-1-dukaix.yuan@intel.com>

1. Add 3 new cases for virtio 1.1.
2. Add the description of case 10: pvp test ... restart 100 times.

Signed-off-by: Dukai Yuan
---
 ...emu_multi_paths_port_restart_test_plan.rst | 190 +++++++++++++++++-
 1 file changed, 188 insertions(+), 2 deletions(-)

diff --git a/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst b/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst
index a621738d..7e24290a 100644
--- a/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst
+++ b/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst
@@ -9,7 +9,7 @@ Description
 ===========
 
 Benchmark pvp qemu test with 3 tx/rx paths,includes mergeable, normal, vector_rx.
-Cover virtio 1.0 and virtio 0.95, also cover port restart test with each path.
+Cover virtio 1.1, virtio 1.0 and virtio 0.95, also cover port restart test with each path.
 
 Test flow
 =========
@@ -291,4 +291,190 @@ Test Case 6: pvp test with virtio 1.0 vrctor_rx path
 6. Restart port at vhost side by below command and re-calculate the average throughput,verify the throughput is not zero after port restart::
 
     testpmd>start
-    testpmd>show port stats all
\ No newline at end of file
+    testpmd>show port stats all
+
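+Note: the new test cases below assume that hugepages are mounted at /mnt/huge on the
+host and that the NIC port used by the vhost testpmd is already bound to vfio-pci.
+The commands below are only an illustrative sketch (2MB hugepages and the port
+address 0000:af:00.0 are assumptions taken from the vhost command lines; adjust them
+to the real environment)::
+
+    echo 4096 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+    mkdir -p /mnt/huge
+    mount -t hugetlbfs nodev /mnt/huge
+    modprobe vfio-pci
+    ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:af:00.0
+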
+Test Case 7: pvp test with virtio 1.1 mergeable path
+====================================================
+
+1. Bind 1 NIC port to vfio-pci, then launch testpmd by below command::
+
+    rm -rf vhost-net*
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 -a 0000:af:00.0 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=1' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+2. Launch VM with 1 virtio device, note: we need to add "disable-modern=false,packed=on" to enable virtio 1.1 (packed ring)::
+
+    qemu-system-x86_64 -name vm0 -enable-kvm -cpu host -smp 2 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6000-:22 \
+    -chardev socket,id=char0,path=./vhost-net \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,packed=on,rx_queue_size=1024,tx_queue_size=1024 \
+    -vnc :10
+
+3. On VM, bind virtio net to vfio-pci (one way is shown in the note after this test case) and run testpmd::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -- -i \
+    --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send packets by packet generator with different packet sizes (64, 128, 256, 512, 1024, 1280, 1518), show throughput with below command::
+
+    testpmd>show port stats all
+
+5. Stop port at vhost side by below command and re-calculate the average throughput, verify the throughput is zero after port stop::
+
+    testpmd>stop
+    testpmd>show port stats all
+
+6. Restart port at vhost side by below command and re-calculate the average throughput, verify the throughput is not zero after port restart::
+
+    testpmd>start
+    testpmd>show port stats all
+
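+Note: inside the VM, the virtio-net device has to be bound to vfio-pci before the
+guest testpmd can drive it. One possible way is shown below (the PCI address
+0000:04:00.0 and the no-IOMMU setting are assumptions: the address can be checked
+with lspci, and no-IOMMU mode is only needed when the VM has no virtual IOMMU)::
+
+    lspci | grep -i virtio
+    modprobe vfio-pci
+    echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
+    ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:04:00.0
+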
+Test Case 8: pvp test with virtio 1.1 normal path
+=================================================
+
+1. Bind 1 NIC port to vfio-pci, then launch testpmd by below command::
+
+    rm -rf vhost-net*
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 -a 0000:af:00.0 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=1' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+2. Launch VM with 1 virtio device, note: we need to add "disable-modern=false,packed=on" to enable virtio 1.1 (packed ring)::
+
+    qemu-system-x86_64 -name vm0 -enable-kvm -cpu host -smp 2 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6000-:22 \
+    -chardev socket,id=char0,path=./vhost-net \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,packed=on,rx_queue_size=1024,tx_queue_size=1024 \
+    -vnc :10
+
+3. On VM, bind virtio net to vfio-pci and run testpmd with tx-offloads::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x0 --enable-hw-vlan-strip \
+    --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send packets by packet generator with different packet sizes (64, 128, 256, 512, 1024, 1280, 1518), show throughput with below command::
+
+    testpmd>show port stats all
+
+5. Stop port at vhost side by below command and re-calculate the average throughput, verify the throughput is zero after port stop::
+
+    testpmd>stop
+    testpmd>show port stats all
+
+6. Restart port at vhost side by below command and re-calculate the average throughput, verify the throughput is not zero after port restart::
+
+    testpmd>start
+    testpmd>show port stats all
+
+Test Case 9: pvp test with virtio 1.1 vector_rx path
+====================================================
+
+1. Bind 1 NIC port to vfio-pci, then launch testpmd by below command::
+
+    rm -rf vhost-net*
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 -a 0000:af:00.0 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=1' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+2. Launch VM with 1 virtio device, note: we need to add "disable-modern=false,packed=on" to enable virtio 1.1 (packed ring)::
+
+    qemu-system-x86_64 -name vm0 -enable-kvm -cpu host -smp 2 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6000-:22 \
+    -chardev socket,id=char0,path=./vhost-net \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,packed=on,rx_queue_size=1024,tx_queue_size=1024 \
+    -vnc :10
+
+3. On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -a 0000:04:00.0,vectorized=1 -- -i \
+    --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send packets by packet generator with different packet sizes (64, 128, 256, 512, 1024, 1280, 1518), show throughput with below command (a theoretical line-rate reference follows this test case)::
+
+    testpmd>show port stats all
+
+5. Stop port at vhost side by below command and re-calculate the average throughput, verify the throughput is zero after port stop::
+
+    testpmd>stop
+    testpmd>show port stats all
+
+6. Restart port at vhost side by below command and re-calculate the average throughput, verify the throughput is not zero after port restart::
+
+    testpmd>start
+    testpmd>show port stats all
+
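+Note: as a rough reference when re-calculating the average throughput in the cases
+above, the theoretical line rate for each packet size is link_speed / ((packet_size + 20) * 8)
+packets per second. The small sketch below assumes a 10G link (the NIC speed is not
+fixed by this plan, so treat the numbers only as an upper bound)::
+
+    # print the 10G theoretical rate in Mpps for every tested frame size
+    for size in 64 128 256 512 1024 1280 1518; do
+        awk -v s=$size 'BEGIN { printf "%4d bytes: %6.2f Mpps\n", s, 10e9 / ((s + 20) * 8) / 1e6 }'
+    done
+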
+Test Case 10: pvp test with virtio 1.0 mergeable path restart 100 times
+=======================================================================
+
+1. Bind 1 NIC port to vfio-pci, then launch testpmd by below command::
+
+    rm -rf vhost-net*
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xe -n 4 -a 0000:af:00.0 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=1' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+2. Launch VM with 1 virtio device, note: we need to add "disable-modern=false" to enable virtio 1.0::
+
+    qemu-system-x86_64 -name vm0 -enable-kvm -cpu host -smp 2 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
+    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6000-:22 \
+    -chardev socket,id=char0,path=./vhost-net \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
+    -vnc :10
+
+3. On VM, bind virtio net to vfio-pci and run testpmd::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -- -i \
+    --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send packets by packet generator with 64B packet size, show throughput with below command::
+
+    testpmd>show port stats all
+
+5. Stop port at vhost side by below command and re-calculate the average throughput, verify the throughput is zero after port stop::
+
+    testpmd>stop
+    testpmd>show port stats all
+
+6. Restart port at vhost side by below command and re-calculate the average throughput, verify the throughput is not zero after port restart::
+
+    testpmd>start
+    testpmd>show port stats all
+
+7. Rerun steps 4-6 100 times to check stability (one possible way to automate the restarts is sketched in the note below).
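+
+Note: one possible way to automate the 100 port restarts of step 7 is to run the
+vhost testpmd of step 1 inside a tmux session and drive its prompt from a shell
+loop; the session name "vhost" and the sleep intervals are assumptions, not part of
+the test plan, and the throughput check itself is still done at the packet
+generator side::
+
+    # assumes step 1 was started with: tmux new-session -d -s vhost "<vhost testpmd command>"
+    for i in $(seq 1 100); do
+        tmux send-keys -t vhost "stop" C-m
+        sleep 3
+        tmux send-keys -t vhost "show port stats all" C-m
+        tmux send-keys -t vhost "start" C-m
+        sleep 3
+        tmux send-keys -t vhost "show port stats all" C-m
+    done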