From patchwork Thu May 19 08:33:50 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 111396
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 1/5] test_plans/index: add vhost_event_idx_interrupt_cbdma testsuite
Date: Thu, 19 May 2022 04:33:50 -0400
Message-Id: <20220519083350.2816744-1-weix.ling@intel.com>

Add vhost_event_idx_interrupt_cbdma_test_plan into test_plans/index.rst.

Signed-off-by: Wei Ling
---
 test_plans/index.rst | 1 +
 1 file changed, 1 insertion(+)

diff --git a/test_plans/index.rst b/test_plans/index.rst
index 6978a2ef..ccc5dbcd 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -226,6 +226,7 @@ The following are the test plans for the DPDK DTS automated test system.
     malicious_driver_event_indication_test_plan
     vhost_event_idx_interrupt_test_plan
+    vhost_event_idx_interrupt_cbdma_test_plan
     vhost_virtio_pmd_interrupt_test_plan
     vhost_virtio_pmd_interrupt_cbdma_test_plan
     vhost_virtio_user_interrupt_test_plan

From patchwork Thu May 19 08:33:59 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 111397
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 2/5] test_plans/vhost_event_idx_interrupt_test_plan: migrate CBDMA related testcase to new testsuite
Date: Thu, 19 May 2022 04:33:59 -0400
Message-Id: <20220519083359.2816804-1-weix.ling@intel.com>

Migrate CBDMA related testcase to new testsuite.

Signed-off-by: Wei Ling
---
 .../vhost_event_idx_interrupt_test_plan.rst | 232 ------------------
 1 file changed, 232 deletions(-)

diff --git a/test_plans/vhost_event_idx_interrupt_test_plan.rst b/test_plans/vhost_event_idx_interrupt_test_plan.rst
index 4873ba54..86a946e0 100644
--- a/test_plans/vhost_event_idx_interrupt_test_plan.rst
+++ b/test_plans/vhost_event_idx_interrupt_test_plan.rst
@@ -395,235 +395,3 @@ Test Case 6: wake up packed ring vhost-user cores by multi virtio-net in VMs wit
     #send packets to vhost

 6. Check vhost related cores are waked up with l3fwd-power log.
-
-Test Case 7: wake up split ring vhost-user cores with event idx interrupt mode and cbdma enabled 16 queues test
-===============================================================================================================
-
-1.
Bind 16 cbdma ports to vfio-pci driver, then launch l3fwd-power example app with client mode:: - - ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-16 -n 4 --log-level=9 \ - --vdev 'eth_vhost0,iface=/vhost-net0,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' \ - -- -p 0x1 --parse-ptype 1 \ - --config "(0,0,1),(0,1,2),(0,2,3),(0,3,4),(0,4,5),(0,5,6),(0,6,7),(0,7,8),(0,8,9),(0,9,10),(0,10,11),(0,11,12),(0,12,13),(0,13,14),(0,14,15),(0,15,16)" - -2. Launch VM1 with server mode:: - - taskset -c 17-18 qemu-system-x86_64 -name us-vhost-vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu1910.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net0 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=16 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,mq=on,vectors=40 -vnc :12 - -3. Relauch l3fwd-power sample for port up:: - - ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-16 -n 4 --log-level=9 \ - --vdev 'eth_vhost0,iface=/vhost-net0,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' \ - -- -p 0x1 --parse-ptype 1 \ - --config "(0,0,1),(0,1,2),(0,2,3),(0,3,4),(0,4,5),(0,5,6),(0,6,7),(0,7,8),(0,8,9),(0,9,10),(0,10,11),(0,11,12),(0,12,13),(0,13,14),(0,14,15),(0,15,16)" - -4. Set vitio-net with 16 quques and give vitio-net ip address:: - - ethtool -L [ens3] combined 16 # [ens3] is the name of virtio-net - ifconfig [ens3] 1.1.1.1 - -5. Send packets with different IPs from virtio-net, notice to bind each vcpu to different send packets process:: - - taskset -c 0 ping 1.1.1.2 - taskset -c 1 ping 1.1.1.3 - taskset -c 2 ping 1.1.1.4 - taskset -c 3 ping 1.1.1.5 - taskset -c 4 ping 1.1.1.6 - taskset -c 5 ping 1.1.1.7 - taskset -c 6 ping 1.1.1.8 - taskset -c 7 ping 1.1.1.9 - taskset -c 8 ping 1.1.1.2 - taskset -c 9 ping 1.1.1.2 - taskset -c 10 ping 1.1.1.2 - taskset -c 11 ping 1.1.1.2 - taskset -c 12 ping 1.1.1.2 - taskset -c 13 ping 1.1.1.2 - taskset -c 14 ping 1.1.1.2 - taskset -c 15 ping 1.1.1.2 - -6. Check vhost related cores are waked up with l3fwd-power log, such as following:: - - L3FWD_POWER: lcore 0 is waked up from rx interrupt on port 0 queue 0 - ... - ... - L3FWD_POWER: lcore 15 is waked up from rx interrupt on port 0 queue 15 - -Test Case 8: wake up split ring vhost-user cores by multi virtio-net in VMs with event idx interrupt mode and cbdma enabled test -================================================================================================================================ - -1. 
Bind two cbdma ports to vfio-pci driver, then launch l3fwd-power example app with client mode:: - - ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-2 -n 4 --log-level=9 \ - --vdev 'eth_vhost0,iface=/vhost-net0,queues=1,client=1,dmas=[txq0@00:04.0]' \ - --vdev 'eth_vhost1,iface=/vhost-net1,queues=1,client=1,dmas=[txq0@80:04.0]' \ - -- -p 0x3 --parse-ptype 1 --config "(0,0,1),(1,0,2)" - -2. Launch VM1 and VM2 with server mode:: - - taskset -c 33 \ - qemu-system-x86_64 -name us-vhost-vm1 \ - -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ - -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu1910.img \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,server,id=char0,path=/vhost-net0 \ - -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \ - -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,csum=on -vnc :10 -daemonize - - taskset -c 34 \ - qemu-system-x86_64 -name us-vhost-vm2 \ - -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ - -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu1910-2.img \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6004-:22 \ - -chardev socket,server,id=char0,path=/vhost-net1 \ - -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \ - -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet1,csum=on -vnc :11 -daemonize - -3. Relauch l3fwd-power sample for port up:: - - ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-2 -n 4 --log-level=9 \ - --vdev 'eth_vhost0,iface=/vhost-net0,queues=1,client=1,dmas=[txq0@00:04.0]' \ - --vdev 'eth_vhost1,iface=/vhost-net1,queues=1,client=1,dmas=[txq0@80:04.0]' \ - -- -p 0x3 --parse-ptype 1 --config "(0,0,1),(1,0,2)" - -4. On VM1, set ip for virtio device and send packets to vhost:: - - ifconfig [ens3] 1.1.1.2 - #[ens3] is the virtual device name - ping 1.1.1.3 - #send packets to vhost - -5. On VM2, also set ip for virtio device and send packets to vhost:: - - ifconfig [ens3] 1.1.1.4 - #[ens3] is the virtual device name - ping 1.1.1.5 - #send packets to vhost - -6. Check vhost related cores are waked up with l3fwd-power log. - -Test Case 9: wake up packed ring vhost-user cores with event idx interrupt mode and cbdma enabled 16 queues test -================================================================================================================ - -1. Bind 16 cbdma ports to vfio-pci driver, then launch l3fwd-power example app with client mode:: - - ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-16 -n 4 --log-level=9 \ - --vdev 'eth_vhost0,iface=/vhost-net0,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' \ - -- -p 0x1 --parse-ptype 1 \ - --config "(0,0,1),(0,1,2),(0,2,3),(0,3,4),(0,4,5),(0,5,6),(0,6,7),(0,7,8),(0,8,9),(0,9,10),(0,10,11),(0,11,12),(0,12,13),(0,13,14),(0,14,15),(0,15,16)" - -2. 
Launch VM1 with server mode:: - - taskset -c 17-18 qemu-system-x86_64 -name us-vhost-vm1 -enable-kvm -cpu host -smp 1 -m 4096 \ - -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \ - -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu1910.img \ - -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \ - -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,id=char0,path=./vhost-net0 \ - -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=16 \ - -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,mq=on,vectors=40,packed=on -vnc :12 - -3. Relauch l3fwd-power sample for port up:: - - ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-16 -n 4 --log-level=9 \ - --vdev 'eth_vhost0,iface=/vhost-net0,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' \ - -- -p 0x1 --parse-ptype 1 \ - --config "(0,0,1),(0,1,2),(0,2,3),(0,3,4),(0,4,5),(0,5,6),(0,6,7),(0,7,8),(0,8,9),(0,9,10),(0,10,11),(0,11,12),(0,12,13),(0,13,14),(0,14,15),(0,15,16)" - -4. Set vitio-net with 16 quques and give vitio-net ip address:: - - ethtool -L [ens3] combined 16 # [ens3] is the name of virtio-net - ifconfig [ens3] 1.1.1.1 - -5. Send packets with different IPs from virtio-net, notice to bind each vcpu to different send packets process:: - - taskset -c 0 ping 1.1.1.2 - taskset -c 1 ping 1.1.1.3 - taskset -c 2 ping 1.1.1.4 - taskset -c 3 ping 1.1.1.5 - taskset -c 4 ping 1.1.1.6 - taskset -c 5 ping 1.1.1.7 - taskset -c 6 ping 1.1.1.8 - taskset -c 7 ping 1.1.1.9 - taskset -c 8 ping 1.1.1.2 - taskset -c 9 ping 1.1.1.2 - taskset -c 10 ping 1.1.1.2 - taskset -c 11 ping 1.1.1.2 - taskset -c 12 ping 1.1.1.2 - taskset -c 13 ping 1.1.1.2 - taskset -c 14 ping 1.1.1.2 - taskset -c 15 ping 1.1.1.2 - -6. Check vhost related cores are waked up with l3fwd-power log, such as following:: - - L3FWD_POWER: lcore 0 is waked up from rx interrupt on port 0 queue 0 - ... - ... - L3FWD_POWER: lcore 15 is waked up from rx interrupt on port 0 queue 15 - -Test Case 10: wake up packed ring vhost-user cores by multi virtio-net in VMs with event idx interrupt mode and cbdma enabled test -================================================================================================================================== - -1. Bind two cbdma ports to vfio-pci driver, then launch l3fwd-power example app with client mode:: - - ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-2 -n 4 --log-level=9 \ - --vdev 'eth_vhost0,iface=/vhost-net0,queues=1,client=1,dmas=[txq0@00:04.0]' \ - --vdev 'eth_vhost1,iface=/vhost-net1,queues=1,client=1,dmas=[txq0@80:04.0]' \ - -- -p 0x3 --parse-ptype 1 --config "(0,0,1),(1,0,2)" - -2. 
Launch VM1 and VM2 with server mode:: - - taskset -c 33 \ - qemu-system-x86_64 -name us-vhost-vm1 \ - -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ - -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu1910.img \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \ - -chardev socket,server,id=char0,path=/vhost-net0 \ - -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \ - -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,csum=on,packed=on -vnc :10 -daemonize - - taskset -c 34 \ - qemu-system-x86_64 -name us-vhost-vm2 \ - -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ - -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu1910-2.img \ - -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \ - -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6004-:22 \ - -chardev socket,server,id=char0,path=/vhost-net1 \ - -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \ - -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet1,csum=on,packed=on -vnc :11 -daemonize - -3. Relauch l3fwd-power sample for port up:: - - ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-2 -n 4 --log-level=9 \ - --vdev 'eth_vhost0,iface=/vhost-net0,queues=1,client=1,dmas=[txq0@00:04.0]' \ - --vdev 'eth_vhost1,iface=/vhost-net1,queues=1,client=1,dmas=[txq0@80:04.0]' \ - -- -p 0x3 --parse-ptype 1 --config "(0,0,1),(1,0,2)" - -4. On VM1, set ip for virtio device and send packets to vhost:: - - ifconfig [ens3] 1.1.1.2 - #[ens3] is the virtual device name - ping 1.1.1.3 - #send packets to vhost - -5. On VM2, also set ip for virtio device and send packets to vhost:: - - ifconfig [ens3] 1.1.1.4 - #[ens3] is the virtual device name - ping 1.1.1.5 - #send packets to vhost - -6. Check vhost related cores are waked up with l3fwd-power log. 
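Note: in the l3fwd-power commands used throughout this series, each entry in the dmas list maps one vhost TX queue to a CBDMA channel by PCI address (the 0000: domain prefix is omitted). A minimal two-queue sketch of the syntax, with example addresses::

    dmas=[txq0@80:04.0;txq1@80:04.1]
    # txq0 is offloaded to the CBDMA channel at PCI 0000:80:04.0
    # txq1 is offloaded to the CBDMA channel at PCI 0000:80:04.1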
From patchwork Thu May 19 08:34:08 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 111398
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 3/5] tests/vhost_event_idx_interrupt: migrate CBDMA related testcase to new testsuite
Date: Thu, 19 May 2022 04:34:08 -0400
Message-Id: <20220519083408.2816864-1-weix.ling@intel.com>

Migrate CBDMA related testcase to new testsuite.
Signed-off-by: Wei Ling --- tests/TestSuite_vhost_event_idx_interrupt.py | 170 +------------------ 1 file changed, 8 insertions(+), 162 deletions(-) diff --git a/tests/TestSuite_vhost_event_idx_interrupt.py b/tests/TestSuite_vhost_event_idx_interrupt.py index f01465f1..d452052c 100644 --- a/tests/TestSuite_vhost_event_idx_interrupt.py +++ b/tests/TestSuite_vhost_event_idx_interrupt.py @@ -37,7 +37,6 @@ Vhost event idx interrupt need test with l3fwd-power sample import re import time -import framework.utils as utils from framework.test_case import TestCase from framework.virt_common import VM @@ -58,8 +57,6 @@ class TestVhostEventIdxInterrupt(TestCase): self.l3fwdpower_name = self.app_l3fwd_power_path.split("/")[-1] self.dut_ports = self.dut.get_ports() self.ports_socket = self.dut.get_numa_id(self.dut_ports[0]) - self.cbdma_dev_infos = [] - self.device_str = None def set_up(self): """ @@ -87,9 +84,6 @@ class TestVhostEventIdxInterrupt(TestCase): out = self.dut.build_dpdk_apps("examples/l3fwd-power") self.verify("Error" not in out, "compilation l3fwd-power error") - def list_split(self, items, n): - return [items[i : i + n] for i in range(0, len(items), n)] - @property def check_2M_env(self): out = self.dut.send_expect( @@ -97,7 +91,7 @@ class TestVhostEventIdxInterrupt(TestCase): ) return True if out == "2048" else False - def lanuch_l3fwd_power(self, cbdma=False): + def lanuch_l3fwd_power(self): """ launch l3fwd-power with a virtual vhost device """ @@ -124,32 +118,11 @@ class TestVhostEventIdxInterrupt(TestCase): core_index = core_index + 1 # config the vdev info, if have 2 vms, it shoule have 2 vdev info vdev_info = "" - if cbdma: - self.cbdma_dev_infos_list = [] - if self.vm_num >= 2: - self.cbdma_dev_infos_list = self.list_split( - self.cbdma_dev_infos, int(len(self.cbdma_dev_infos) / self.vm_num) - ) - for i in range(self.vm_num): - dmas = "" - if self.vm_num == 1: - for queue in range(self.queues): - dmas += f"txq{queue}@{self.cbdma_dev_infos[queue]};" - - else: - cbdma_dev_infos = self.cbdma_dev_infos_list[i] - for index, q in enumerate(cbdma_dev_infos): - dmas += f"txq{index}@{q};" - vdev_info += ( - f"--vdev 'net_vhost%d,iface=%s/vhost-net%d,dmas=[{dmas}],queues=%d,client=1' " - % (i, self.base_dir, i, self.queues) - ) - else: - for i in range(self.vm_num): - vdev_info += ( - "--vdev 'net_vhost%d,iface=%s/vhost-net%d,queues=%d,client=1' " - % (i, self.base_dir, i, self.queues) - ) + for i in range(self.vm_num): + vdev_info += ( + "--vdev 'net_vhost%d,iface=%s/vhost-net%d,queues=%d,client=1' " + % (i, self.base_dir, i, self.queues) + ) port_info = "0x1" if self.vm_num == 1 else "0x3" @@ -173,7 +146,7 @@ class TestVhostEventIdxInterrupt(TestCase): self.logger.info("Launch l3fwd-power sample finished") self.verify(res is True, "Lanuch l3fwd failed") - def relanuch_l3fwd_power(self, cbdma=False): + def relanuch_l3fwd_power(self): """ relauch l3fwd-power sample for port up """ @@ -184,7 +157,7 @@ class TestVhostEventIdxInterrupt(TestCase): ) if pid: self.dut.send_expect("kill -9 %s" % pid, "#") - self.lanuch_l3fwd_power(cbdma) + self.lanuch_l3fwd_power() def set_vm_cpu_number(self, vm_config): # config the vcpu numbers when queue number greater than 1 @@ -318,50 +291,6 @@ class TestVhostEventIdxInterrupt(TestCase): session_info[sess_index].send_expect("^c", "#") self.vm_dut[vm_index].close_session(session_info[sess_index]) - def get_cbdma_ports_info_and_bind_to_dpdk(self): - """ - get all cbdma ports - """ - self.cbdma_dev_infos = [] - out = self.dut.send_expect( - 
"./usertools/dpdk-devbind.py --status-dev dma", "# ", 30 - ) - device_info = out.split("\n") - for device in device_info: - pci_info = re.search("\s*(0000:\S*:\d*.\d*)", device) - if pci_info is not None: - # dev_info = pci_info.group(1) - # the numa id of ioat dev, only add the device which - # on same socket with nic dev - self.cbdma_dev_infos.append(pci_info.group(1)) - self.verify( - len(self.cbdma_dev_infos) >= self.queues, - "There no enough cbdma device to run this suite", - ) - if self.queues == 1: - self.cbdma_dev_infos = [self.cbdma_dev_infos[0], self.cbdma_dev_infos[-1]] - self.used_cbdma = self.cbdma_dev_infos[0 : self.queues * self.vm_num] - self.device_str = " ".join(self.used_cbdma) - self.dut.send_expect( - "./usertools/dpdk-devbind.py --force --bind=%s %s" - % (self.drivername, self.device_str), - "# ", - 60, - ) - - def bind_cbdma_device_to_kernel(self): - if self.device_str is not None: - self.dut.send_expect("modprobe ioatdma", "# ") - self.dut.send_expect( - "./usertools/dpdk-devbind.py -u %s" % self.device_str, "# ", 30 - ) - self.dut.send_expect( - "./usertools/dpdk-devbind.py --force --bind=ioatdma %s" - % self.device_str, - "# ", - 60, - ) - def stop_all_apps(self): """ close all vms @@ -458,86 +387,6 @@ class TestVhostEventIdxInterrupt(TestCase): self.send_and_verify() self.stop_all_apps() - def test_wake_up_split_ring_vhost_user_cores_with_event_idx_interrupt_mode_16_queues_with_cbdma( - self, - ): - """ - Test Case 7: wake up split ring vhost-user cores with event idx interrupt mode and cbdma enabled 16 queues test - """ - self.vm_num = 1 - self.bind_nic_driver(self.dut_ports) - self.queues = 16 - self.get_core_mask() - self.nopci = False - self.get_cbdma_ports_info_and_bind_to_dpdk() - self.lanuch_l3fwd_power(cbdma=True) - self.start_vms( - vm_num=self.vm_num, - ) - self.relanuch_l3fwd_power(cbdma=True) - self.config_virito_net_in_vm() - self.send_and_verify() - self.stop_all_apps() - - def test_wake_up_split_ring_vhost_user_cores_by_multi_virtio_net_in_vms_with_event_idx_interrupt_with_cbdma( - self, - ): - """ - Test Case 8: wake up split ring vhost-user cores by multi virtio-net in VMs with event idx interrupt mode and cbdma enabled test - """ - self.vm_num = 2 - self.bind_nic_driver(self.dut_ports) - self.queues = 1 - self.get_core_mask() - self.nopci = False - self.get_cbdma_ports_info_and_bind_to_dpdk() - self.lanuch_l3fwd_power(cbdma=True) - self.start_vms( - vm_num=self.vm_num, - ) - self.relanuch_l3fwd_power(cbdma=True) - self.config_virito_net_in_vm() - self.send_and_verify() - self.stop_all_apps() - - def test_wake_up_packed_ring_vhost_user_cores_with_event_idx_interrupt_mode_16_queues_with_cbdma( - self, - ): - """ - Test Case 9: wake up packed ring vhost-user cores with event idx interrupt mode and cbdma enabled 16 queues test - """ - self.vm_num = 1 - self.bind_nic_driver(self.dut_ports) - self.queues = 16 - self.get_core_mask() - self.nopci = False - self.get_cbdma_ports_info_and_bind_to_dpdk() - self.lanuch_l3fwd_power(cbdma=True) - self.start_vms(vm_num=self.vm_num, packed=True) - self.relanuch_l3fwd_power(cbdma=True) - self.config_virito_net_in_vm() - self.send_and_verify() - self.stop_all_apps() - - def test_wake_up_packed_ring_vhost_user_cores_by_multi_virtio_net_in_vms_with_event_idx_interrupt_with_cbdma( - self, - ): - """ - Test Case 10: wake up packed ring vhost-user cores by multi virtio-net in VMs with event idx interrupt mode and cbdma enabled test - """ - self.vm_num = 2 - self.bind_nic_driver(self.dut_ports) - self.queues = 1 - 
self.get_core_mask()
-        self.nopci = False
-        self.get_cbdma_ports_info_and_bind_to_dpdk()
-        self.lanuch_l3fwd_power(cbdma=True)
-        self.start_vms(vm_num=self.vm_num, packed=True)
-        self.relanuch_l3fwd_power(cbdma=True)
-        self.config_virito_net_in_vm()
-        self.send_and_verify()
-        self.stop_all_apps()
-
     def tear_down(self):
         """
         Run after each test case.
         """
         self.dut.close_session(self.vhost)
         self.dut.send_expect(f"killall {self.l3fwdpower_name}", "#")
         self.dut.send_expect("killall -s INT qemu-system-x86_64", "#")
-        self.bind_cbdma_device_to_kernel()
-        if "cbdma" in self.running_case:
-            self.bind_nic_driver(self.dut_ports, self.drivername)

     def tear_down_all(self):
         """

From patchwork Thu May 19 08:34:20 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 111399
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 4/5] test_plans/vhost_event_idx_interrupt_cbdma_test_plan: add new testsuite for CBDMA related testcases
Date: Thu, 19 May 2022 04:34:20 -0400
Message-Id: <20220519083420.2816924-1-weix.ling@intel.com>

Add new testsuite for CBDMA related testcases.
Signed-off-by: Wei Ling
---
 ...st_event_idx_interrupt_cbdma_test_plan.rst | 281 ++++++++++++++++++
 1 file changed, 281 insertions(+)
 create mode 100644 test_plans/vhost_event_idx_interrupt_cbdma_test_plan.rst

diff --git a/test_plans/vhost_event_idx_interrupt_cbdma_test_plan.rst b/test_plans/vhost_event_idx_interrupt_cbdma_test_plan.rst
new file mode 100644
index 00000000..1c6a6400
--- /dev/null
+++ b/test_plans/vhost_event_idx_interrupt_cbdma_test_plan.rst
@@ -0,0 +1,281 @@
+.. Copyright (c) <2022>, Intel Corporation
+   All rights reserved.
+
+   Redistribution and use in source and binary forms, with or without
+   modification, are permitted provided that the following conditions
+   are met:
+
+   - Redistributions of source code must retain the above copyright
+     notice, this list of conditions and the following disclaimer.
+
+   - Redistributions in binary form must reproduce the above copyright
+     notice, this list of conditions and the following disclaimer in
+     the documentation and/or other materials provided with the
+     distribution.
+
+   - Neither the name of Intel Corporation nor the names of its
+     contributors may be used to endorse or promote products derived
+     from this software without specific prior written permission.
+
+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+   OF THE POSSIBILITY OF SUCH DAMAGE.
+
+====================================================
+vhost event idx interrupt mode with cbdma test plan
+====================================================
+
+Description
+===========
+
+The vhost event idx interrupt feature is tested with the l3fwd-power sample
+and CBDMA channels: send small packets from virtio-net to the vhost side and
+check that the vhost-user cores wake up; after packet sending stops on the
+virtio side, the vhost-user cores should go back to sleep. The packed
+virtqueue cases require QEMU version > 4.2.0.
+
+Test flow
+=========
+
+Virtio-net --> Vhost-user
+
+Test Case 1: wake up split ring vhost-user cores with event idx interrupt mode and cbdma enabled 16 queues test
+===============================================================================================================
+
+1. Bind 16 cbdma ports to vfio-pci driver, then launch l3fwd-power example app with client mode::
+
+    ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-16 -n 4 --log-level=9 \
+    --vdev 'eth_vhost0,iface=/vhost-net0,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' \
+    -- -p 0x1 --parse-ptype 1 \
+    --config "(0,0,1),(0,1,2),(0,2,3),(0,3,4),(0,4,5),(0,5,6),(0,6,7),(0,7,8),(0,8,9),(0,9,10),(0,10,11),(0,11,12),(0,12,13),(0,13,14),(0,14,15),(0,15,16)"
+2. Launch VM1 with server mode::
+
+    taskset -c 17-18 qemu-system-x86_64 -name us-vhost-vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu1910.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net0 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=16 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,mq=on,vectors=40 -vnc :12
+
+3. Relaunch l3fwd-power sample for port up::
+
+    ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-16 -n 4 --log-level=9 \
+    --vdev 'eth_vhost0,iface=/vhost-net0,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' \
+    -- -p 0x1 --parse-ptype 1 \
+    --config "(0,0,1),(0,1,2),(0,2,3),(0,3,4),(0,4,5),(0,5,6),(0,6,7),(0,7,8),(0,8,9),(0,9,10),(0,10,11),(0,11,12),(0,12,13),(0,13,14),(0,14,15),(0,15,16)"
+
+4. Set virtio-net with 16 queues and assign virtio-net an IP address::
+
+    ethtool -L [ens3] combined 16 # [ens3] is the name of virtio-net
+    ifconfig [ens3] 1.1.1.1
+
+5. Send packets with different IPs from virtio-net, binding each ping process to a different vCPU::
+
+    taskset -c 0 ping 1.1.1.2
+    taskset -c 1 ping 1.1.1.3
+    taskset -c 2 ping 1.1.1.4
+    taskset -c 3 ping 1.1.1.5
+    taskset -c 4 ping 1.1.1.6
+    taskset -c 5 ping 1.1.1.7
+    taskset -c 6 ping 1.1.1.8
+    taskset -c 7 ping 1.1.1.9
+    taskset -c 8 ping 1.1.1.2
+    taskset -c 9 ping 1.1.1.2
+    taskset -c 10 ping 1.1.1.2
+    taskset -c 11 ping 1.1.1.2
+    taskset -c 12 ping 1.1.1.2
+    taskset -c 13 ping 1.1.1.2
+    taskset -c 14 ping 1.1.1.2
+    taskset -c 15 ping 1.1.1.2
+
+6. Check that the vhost related cores are woken up, as shown in the l3fwd-power log::
+
+    L3FWD_POWER: lcore 0 is waked up from rx interrupt on port 0 queue 0
+    ...
+    ...
+    L3FWD_POWER: lcore 15 is waked up from rx interrupt on port 0 queue 15
+
+Test Case 2: wake up split ring vhost-user cores by multi virtio-net in VMs with event idx interrupt mode and cbdma enabled test
+================================================================================================================================
+
+1. Bind two cbdma ports to vfio-pci driver, then launch l3fwd-power example app with client mode::
+
+    ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-2 -n 4 --log-level=9 \
+    --vdev 'eth_vhost0,iface=/vhost-net0,queues=1,client=1,dmas=[txq0@00:04.0]' \
+    --vdev 'eth_vhost1,iface=/vhost-net1,queues=1,client=1,dmas=[txq0@80:04.0]' \
+    -- -p 0x3 --parse-ptype 1 --config "(0,0,1),(1,0,2)"
+2. Launch VM1 and VM2 with server mode::
+
+    taskset -c 33 \
+    qemu-system-x86_64 -name us-vhost-vm1 \
+    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+    -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu1910.img \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,server,id=char0,path=/vhost-net0 \
+    -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
+    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,csum=on -vnc :10 -daemonize
+
+    taskset -c 34 \
+    qemu-system-x86_64 -name us-vhost-vm2 \
+    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+    -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu1910-2.img \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6004-:22 \
+    -chardev socket,server,id=char0,path=/vhost-net1 \
+    -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
+    -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet1,csum=on -vnc :11 -daemonize
+
+3. Relaunch l3fwd-power sample for port up::
+
+    ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-2 -n 4 --log-level=9 \
+    --vdev 'eth_vhost0,iface=/vhost-net0,queues=1,client=1,dmas=[txq0@00:04.0]' \
+    --vdev 'eth_vhost1,iface=/vhost-net1,queues=1,client=1,dmas=[txq0@80:04.0]' \
+    -- -p 0x3 --parse-ptype 1 --config "(0,0,1),(1,0,2)"
+
+4. On VM1, set the IP for the virtio device and send packets to vhost::
+
+    ifconfig [ens3] 1.1.1.2
+    # [ens3] is the virtual device name
+    ping 1.1.1.3
+    # send packets to vhost
+
+5. On VM2, also set the IP for the virtio device and send packets to vhost::
+
+    ifconfig [ens3] 1.1.1.4
+    # [ens3] is the virtual device name
+    ping 1.1.1.5
+    # send packets to vhost
+
+6. Check that the vhost related cores are woken up in the l3fwd-power log.
+
+Test Case 3: wake up packed ring vhost-user cores with event idx interrupt mode and cbdma enabled 16 queues test
+================================================================================================================
+
+1. Bind 16 cbdma ports to vfio-pci driver, then launch l3fwd-power example app with client mode::
+
+    ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-16 -n 4 --log-level=9 \
+    --vdev 'eth_vhost0,iface=/vhost-net0,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' \
+    -- -p 0x1 --parse-ptype 1 \
+    --config "(0,0,1),(0,1,2),(0,2,3),(0,3,4),(0,4,5),(0,5,6),(0,6,7),(0,7,8),(0,8,9),(0,9,10),(0,10,11),(0,11,12),(0,12,13),(0,13,14),(0,14,15),(0,15,16)"
+2. Launch VM1 with server mode::
+
+    taskset -c 17-18 qemu-system-x86_64 -name us-vhost-vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu1910.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net0 \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=16 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,mq=on,vectors=40,packed=on -vnc :12
+
+3. Relaunch l3fwd-power sample for port up::
+
+    ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-16 -n 4 --log-level=9 \
+    --vdev 'eth_vhost0,iface=/vhost-net0,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' \
+    -- -p 0x1 --parse-ptype 1 \
+    --config "(0,0,1),(0,1,2),(0,2,3),(0,3,4),(0,4,5),(0,5,6),(0,6,7),(0,7,8),(0,8,9),(0,9,10),(0,10,11),(0,11,12),(0,12,13),(0,13,14),(0,14,15),(0,15,16)"
+
+4. Set virtio-net with 16 queues and assign virtio-net an IP address::
+
+    ethtool -L [ens3] combined 16 # [ens3] is the name of virtio-net
+    ifconfig [ens3] 1.1.1.1
+
+5. Send packets with different IPs from virtio-net, binding each ping process to a different vCPU::
+
+    taskset -c 0 ping 1.1.1.2
+    taskset -c 1 ping 1.1.1.3
+    taskset -c 2 ping 1.1.1.4
+    taskset -c 3 ping 1.1.1.5
+    taskset -c 4 ping 1.1.1.6
+    taskset -c 5 ping 1.1.1.7
+    taskset -c 6 ping 1.1.1.8
+    taskset -c 7 ping 1.1.1.9
+    taskset -c 8 ping 1.1.1.2
+    taskset -c 9 ping 1.1.1.2
+    taskset -c 10 ping 1.1.1.2
+    taskset -c 11 ping 1.1.1.2
+    taskset -c 12 ping 1.1.1.2
+    taskset -c 13 ping 1.1.1.2
+    taskset -c 14 ping 1.1.1.2
+    taskset -c 15 ping 1.1.1.2
+
+6. Check that the vhost related cores are woken up, as shown in the l3fwd-power log::
+
+    L3FWD_POWER: lcore 0 is waked up from rx interrupt on port 0 queue 0
+    ...
+    ...
+    L3FWD_POWER: lcore 15 is waked up from rx interrupt on port 0 queue 15
+
+Test Case 4: wake up packed ring vhost-user cores by multi virtio-net in VMs with event idx interrupt mode and cbdma enabled test
+==================================================================================================================================
+
+1. Bind two cbdma ports to vfio-pci driver, then launch l3fwd-power example app with client mode::
+
+    ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-2 -n 4 --log-level=9 \
+    --vdev 'eth_vhost0,iface=/vhost-net0,queues=1,client=1,dmas=[txq0@00:04.0]' \
+    --vdev 'eth_vhost1,iface=/vhost-net1,queues=1,client=1,dmas=[txq0@80:04.0]' \
+    -- -p 0x3 --parse-ptype 1 --config "(0,0,1),(1,0,2)"
+2. Launch VM1 and VM2 with server mode::
+
+    taskset -c 33 \
+    qemu-system-x86_64 -name us-vhost-vm1 \
+    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+    -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu1910.img \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,server,id=char0,path=/vhost-net0 \
+    -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
+    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,csum=on,packed=on -vnc :10 -daemonize
+
+    taskset -c 34 \
+    qemu-system-x86_64 -name us-vhost-vm2 \
+    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
+    -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu1910-2.img \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6004-:22 \
+    -chardev socket,server,id=char0,path=/vhost-net1 \
+    -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
+    -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet1,csum=on,packed=on -vnc :11 -daemonize
+
+3. Relaunch l3fwd-power sample for port up::
+
+    ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-2 -n 4 --log-level=9 \
+    --vdev 'eth_vhost0,iface=/vhost-net0,queues=1,client=1,dmas=[txq0@00:04.0]' \
+    --vdev 'eth_vhost1,iface=/vhost-net1,queues=1,client=1,dmas=[txq0@80:04.0]' \
+    -- -p 0x3 --parse-ptype 1 --config "(0,0,1),(1,0,2)"
+
+4. On VM1, set the IP for the virtio device and send packets to vhost::
+
+    ifconfig [ens3] 1.1.1.2
+    # [ens3] is the virtual device name
+    ping 1.1.1.3
+    # send packets to vhost
+
+5. On VM2, also set the IP for the virtio device and send packets to vhost::
+
+    ifconfig [ens3] 1.1.1.4
+    # [ens3] is the virtual device name
+    ping 1.1.1.5
+    # send packets to vhost
+
+6. Check that the vhost related cores are woken up in the l3fwd-power log.
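Note: step 1 of each case above assumes the CBDMA channels are already bound to vfio-pci. A minimal sketch of that binding with the dpdk-devbind tool (the PCI addresses are examples and must match the dmas lists above)::

    ./usertools/dpdk-devbind.py --status-dev dma    # list DMA devices and their drivers
    ./usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:80:04.0 0000:80:04.1
    # repeat for the remaining channels in the 16 queue cases

Besides the wake-up check in step 6, the companion testsuite (patch 5/5) also verifies that the cores go back to sleep once the pings stop, by looking for log lines of the form "lcore <n> sleeps until interrupt triggers".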
From patchwork Thu May 19 08:34:32 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 111400
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 5/5] tests/vhost_event_idx_interrupt_cbdma: add new testsuite for CBDMA related testcases
Date: Thu, 19 May 2022 04:34:32 -0400
Message-Id: <20220519083432.2816984-1-weix.ling@intel.com>

Add new testsuite for CBDMA related testcases.

Signed-off-by: Wei Ling
Tested-by: Chenyu Huang
Acked-by: Xingguang He
---
 ...stSuite_vhost_event_idx_interrupt_cbdma.py | 460 ++++++++++++++++++
 1 file changed, 460 insertions(+)
 create mode 100644 tests/TestSuite_vhost_event_idx_interrupt_cbdma.py

diff --git a/tests/TestSuite_vhost_event_idx_interrupt_cbdma.py b/tests/TestSuite_vhost_event_idx_interrupt_cbdma.py
new file mode 100644
index 00000000..173ce925
--- /dev/null
+++ b/tests/TestSuite_vhost_event_idx_interrupt_cbdma.py
@@ -0,0 +1,460 @@
+# BSD LICENSE
+#
+# Copyright (c) <2022>, Intel Corporation.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +""" +DPDK Test suite. +Vhost event idx interrupt need test with l3fwd-power sample +""" + +import re +import time + +from framework.test_case import TestCase +from framework.virt_common import VM + + +class TestVhostEventIdxInterruptCbdma(TestCase): + def set_up_all(self): + """ + Run at the start of each test suite. + + """ + self.vm_num = 1 + self.queues = 1 + self.cores_num = len([n for n in self.dut.cores if int(n["socket"]) == 0]) + self.prepare_l3fwd_power() + self.pci_info = self.dut.ports_info[0]["pci"] + self.base_dir = self.dut.base_dir.replace("~", "/root") + self.app_l3fwd_power_path = self.dut.apps_name["l3fwd-power"] + self.l3fwdpower_name = self.app_l3fwd_power_path.split("/")[-1] + self.dut_ports = self.dut.get_ports() + self.ports_socket = self.dut.get_numa_id(self.dut_ports[0]) + self.cbdma_dev_infos = [] + self.device_str = None + + def set_up(self): + """ + Run before each test case. 
+ """ + # Clean the execution ENV + self.verify_info = [] + self.dut.send_expect(f"killall {self.l3fwdpower_name}", "#") + self.dut.send_expect("killall -s INT qemu-system-x86_64", "#") + self.dut.send_expect("rm -rf %s/vhost-net*" % self.base_dir, "#") + self.vhost = self.dut.new_session(suite="vhost-l3fwd") + self.vm_dut = [] + self.vm = [] + self.nopci = True + + def get_core_mask(self): + self.core_config = "1S/%dC/1T" % (self.vm_num * self.queues) + self.verify( + self.cores_num >= self.queues * self.vm_num, + "There has not enought cores to test this case %s" % self.running_case, + ) + self.core_list_l3fwd = self.dut.get_core_list(self.core_config) + + def prepare_l3fwd_power(self): + out = self.dut.build_dpdk_apps("examples/l3fwd-power") + self.verify("Error" not in out, "compilation l3fwd-power error") + + def list_split(self, items, n): + return [items[i : i + n] for i in range(0, len(items), n)] + + @property + def check_2M_env(self): + out = self.dut.send_expect( + "cat /proc/meminfo |grep Hugepagesize|awk '{print($2)}'", "# " + ) + return True if out == "2048" else False + + def lanuch_l3fwd_power(self): + """ + launch l3fwd-power with a virtual vhost device + """ + res = True + self.logger.info("Launch l3fwd_sample sample:") + config_info = "" + core_index = 0 + # config the interrupt cores info + for port in range(self.vm_num): + for queue in range(self.queues): + if config_info != "": + config_info += "," + config_info += "(%d,%d,%s)" % ( + port, + queue, + self.core_list_l3fwd[core_index], + ) + info = { + "core": self.core_list_l3fwd[core_index], + "port": port, + "queue": queue, + } + self.verify_info.append(info) + core_index = core_index + 1 + # config the vdev info, if have 2 vms, it shoule have 2 vdev info + vdev_info = "" + self.cbdma_dev_infos_list = [] + if self.vm_num >= 2: + self.cbdma_dev_infos_list = self.list_split( + self.cbdma_dev_infos, int(len(self.cbdma_dev_infos) / self.vm_num) + ) + for i in range(self.vm_num): + dmas = "" + if self.vm_num == 1: + for queue in range(self.queues): + dmas += f"txq{queue}@{self.cbdma_dev_infos[queue]};" + + else: + cbdma_dev_infos = self.cbdma_dev_infos_list[i] + for index, q in enumerate(cbdma_dev_infos): + dmas += f"txq{index}@{q};" + vdev_info += ( + f"--vdev 'net_vhost%d,iface=%s/vhost-net%d,dmas=[{dmas}],queues=%d,client=1' " + % (i, self.base_dir, i, self.queues) + ) + + port_info = "0x1" if self.vm_num == 1 else "0x3" + + example_para = self.app_l3fwd_power_path + " " + para = ( + " --log-level=9 %s -- -p %s --parse-ptype 1 --config '%s' --interrupt-only" + % (vdev_info, port_info, config_info) + ) + eal_params = self.dut.create_eal_parameters( + cores=self.core_list_l3fwd, no_pci=self.nopci + ) + command_line_client = example_para + eal_params + para + self.vhost.get_session_before(timeout=2) + self.vhost.send_expect(command_line_client, "POWER", 40) + time.sleep(10) + out = self.vhost.get_session_before() + if "Error" in out and "Error opening" not in out: + self.logger.error("Launch l3fwd-power sample error") + res = False + else: + self.logger.info("Launch l3fwd-power sample finished") + self.verify(res is True, "Lanuch l3fwd failed") + + def relanuch_l3fwd_power(self): + """ + relauch l3fwd-power sample for port up + """ + self.dut.send_expect("killall -s INT %s" % self.l3fwdpower_name, "#") + # make sure l3fwd-power be killed + pid = self.dut.send_expect( + "ps -ef |grep l3|grep -v grep |awk '{print $2}'", "#" + ) + if pid: + self.dut.send_expect("kill -9 %s" % pid, "#") + self.lanuch_l3fwd_power() + + def 
+    def set_vm_cpu_number(self, vm_config):
+        # config the vCPU number when the queue number is greater than 1
+        if self.queues == 1:
+            return
+        params_number = len(vm_config.params)
+        for i in range(params_number):
+            if list(vm_config.params[i].keys())[0] == "cpu":
+                vm_config.params[i]["cpu"][0]["number"] = self.queues
+
+    def check_qemu_version(self, vm_config):
+        """
+        In this suite, the QEMU version should be greater than 2.7.
+        """
+        self.vm_qemu_version = vm_config.qemu_emulator
+        params_number = len(vm_config.params)
+        for i in range(params_number):
+            if list(vm_config.params[i].keys())[0] == "qemu":
+                self.vm_qemu_version = vm_config.params[i]["qemu"][0]["path"]
+
+        out = self.dut.send_expect("%s --version" % self.vm_qemu_version, "#")
+        result = re.search("QEMU\s*emulator\s*version\s*(\d*.\d*)", out)
+        self.verify(
+            result is not None,
+            "the qemu path may be not right: %s" % self.vm_qemu_version,
+        )
+        version = result.group(1)
+        index = version.find(".")
+        self.verify(
+            int(version[:index]) > 2
+            or (int(version[:index]) == 2 and int(version[index + 1 :]) >= 7),
+            "The qemu version should be greater than 2.7 "
+            + "in this suite, please config it in the vhost_sample.cfg file",
+        )
+
+    def start_vms(self, vm_num=1, packed=False):
+        """
+        Start the QEMU VMs.
+        """
+        for i in range(vm_num):
+            vm_info = VM(self.dut, "vm%d" % i, "vhost_sample_copy")
+            vm_info.load_config()
+            vm_params = {}
+            vm_params["driver"] = "vhost-user"
+            vm_params["opt_path"] = self.base_dir + "/vhost-net%d" % i
+            vm_params["opt_mac"] = "00:11:22:33:44:5%d" % i
+            vm_params["opt_server"] = "server"
+            if self.queues > 1:
+                vm_params["opt_queue"] = self.queues
+                opt_args = "csum=on,mq=on,vectors=%d" % (2 * self.queues + 2)
+            else:
+                opt_args = "csum=on"
+            if packed:
+                opt_args = opt_args + ",packed=on"
+            vm_params["opt_settings"] = opt_args
+            vm_info.set_vm_device(**vm_params)
+            self.set_vm_cpu_number(vm_info)
+            self.check_qemu_version(vm_info)
+            vm_dut = None
+            try:
+                vm_dut = vm_info.start(load_config=False, set_target=False)
+                if vm_dut is None:
+                    raise Exception("Set up VM ENV failed")
+            except Exception as e:
+                self.logger.error("ERROR: Failure for %s" % str(e))
+            vm_dut.restore_interfaces()
+            self.vm_dut.append(vm_dut)
+            self.vm.append(vm_info)
+
+    def config_virito_net_in_vm(self):
+        """
+        Enable multi-queue on the virtio-net device in each VM.
+        """
+        for i in range(len(self.vm_dut)):
+            vm_intf = self.vm_dut[i].ports_info[0]["intf"]
+            self.vm_dut[i].send_expect(
+                "ethtool -L %s combined %d" % (vm_intf, self.queues), "#", 20
+            )
+
+    def check_vhost_core_status(self, vm_index, status):
+        """
+        check the cpu status
+        """
+        out = self.vhost.get_session_before()
+        for i in range(self.queues):
+            # verify_info contains the config for all VMs (vm0 and vm1),
+            # so the current index should be vm_index + queue index
+            verify_index = i + vm_index
+            if status == "waked up":
+                info = "lcore %s is waked up from rx interrupt on port %d queue %d"
+                info = info % (
+                    self.verify_info[verify_index]["core"],
+                    self.verify_info[verify_index]["port"],
+                    self.verify_info[verify_index]["queue"],
+                )
+            elif status == "sleeps":
+                info = (
+                    "lcore %s sleeps until interrupt triggers"
+                    % self.verify_info[verify_index]["core"]
+                )
+            self.logger.info(info)
+            self.verify(info in out, "The CPU status is not right for %s" % info)
+
+    def send_and_verify(self):
+        """
+        start to send packets and check the cpu status
+        stop and restart to send packets and check the cpu status
+        """
+        ping_ip = 3
+        for vm_index in range(self.vm_num):
+            session_info = []
+            vm_intf = self.vm_dut[vm_index].ports_info[0]["intf"]
+            self.vm_dut[vm_index].send_expect(
+                "ifconfig %s 1.1.1.%d" % (vm_intf, ping_ip), "#"
+            )
+            ping_ip = ping_ip + 1
+            self.vm_dut[vm_index].send_expect("ifconfig %s up" % vm_intf, "#")
+            for queue in range(self.queues):
+                session = self.vm_dut[vm_index].new_session(
+                    suite="ping_info_%d" % queue
+                )
+                session.send_expect(
+                    "taskset -c %d ping 1.1.1.%d" % (queue, ping_ip), "PING", 30
+                )
+                session_info.append(session)
+                ping_ip = ping_ip + 1
+            time.sleep(3)
+            self.check_vhost_core_status(vm_index=vm_index, status="waked up")
+            # close all ping sessions in the VM
+            for sess_index in range(len(session_info)):
+                session_info[sess_index].send_expect("^c", "#")
+                self.vm_dut[vm_index].close_session(session_info[sess_index])
+
+    def get_cbdma_ports_info_and_bind_to_dpdk(self):
+        """
+        get all cbdma ports
+        """
+        self.cbdma_dev_infos = []
+        out = self.dut.send_expect(
+            "./usertools/dpdk-devbind.py --status-dev dma", "# ", 30
+        )
+        device_info = out.split("\n")
+        for device in device_info:
+            pci_info = re.search("\s*(0000:\S*:\d*.\d*)", device)
+            if pci_info is not None:
+                # only add the CBDMA devices that are on the same
+                # socket as the NIC device
+                self.cbdma_dev_infos.append(pci_info.group(1))
+        self.verify(
+            len(self.cbdma_dev_infos) >= self.queues,
+            "There are not enough cbdma devices to run this suite",
+        )
+        if self.queues == 1:
+            self.cbdma_dev_infos = [self.cbdma_dev_infos[0], self.cbdma_dev_infos[-1]]
+        self.used_cbdma = self.cbdma_dev_infos[0 : self.queues * self.vm_num]
+        self.device_str = " ".join(self.used_cbdma)
+        self.dut.send_expect(
+            "./usertools/dpdk-devbind.py --force --bind=%s %s"
+            % (self.drivername, self.device_str),
+            "# ",
+            60,
+        )
+
+    def bind_cbdma_device_to_kernel(self):
+        if self.device_str is not None:
+            self.dut.send_expect("modprobe ioatdma", "# ")
+            self.dut.send_expect(
+                "./usertools/dpdk-devbind.py -u %s" % self.device_str, "# ", 30
+            )
+            self.dut.send_expect(
+                "./usertools/dpdk-devbind.py --force --bind=ioatdma %s"
+                % self.device_str,
+                "# ",
+                60,
+            )
+
+    def stop_all_apps(self):
+        """
+        close all vms
+        """
+        for i in range(len(self.vm)):
+            self.vm[i].stop()
+        self.dut.send_expect("killall %s" % self.l3fwdpower_name, "#", timeout=2)
+
+    def test_wake_up_split_ring_vhost_user_cores_with_event_idx_interrupt_mode_16_queues_with_cbdma(
+        self,
+    ):
+        """
+        Test Case 1: wake up split ring vhost-user cores with event idx interrupt mode and cbdma enabled 16 queues test
+        """
+        self.vm_num = 1
+        self.bind_nic_driver(self.dut_ports)
+        self.queues = 16
+        self.get_core_mask()
+        self.nopci = False
+        self.get_cbdma_ports_info_and_bind_to_dpdk()
+        self.lanuch_l3fwd_power()
+        self.start_vms(
+            vm_num=self.vm_num,
+        )
+        self.relanuch_l3fwd_power()
+        self.config_virito_net_in_vm()
+        self.send_and_verify()
+        self.stop_all_apps()
+
+    def test_wake_up_split_ring_vhost_user_cores_by_multi_virtio_net_in_vms_with_event_idx_interrupt_with_cbdma(
+        self,
+    ):
+        """
+        Test Case 2: wake up split ring vhost-user cores by multi virtio-net in VMs with event idx interrupt mode and cbdma enabled test
+        """
+        self.vm_num = 2
+        self.bind_nic_driver(self.dut_ports)
+        self.queues = 1
+        self.get_core_mask()
+        self.nopci = False
+        self.get_cbdma_ports_info_and_bind_to_dpdk()
+        self.lanuch_l3fwd_power()
+        self.start_vms(
+            vm_num=self.vm_num,
+        )
+        self.relanuch_l3fwd_power()
+        self.config_virito_net_in_vm()
+        self.send_and_verify()
+        self.stop_all_apps()
+
+    def
test_wake_up_packed_ring_vhost_user_cores_with_event_idx_interrupt_mode_16_queues_with_cbdma( + self, + ): + """ + Test Case 3: wake up packed ring vhost-user cores with event idx interrupt mode and cbdma enabled 16 queues test + """ + self.vm_num = 1 + self.bind_nic_driver(self.dut_ports) + self.queues = 16 + self.get_core_mask() + self.nopci = False + self.get_cbdma_ports_info_and_bind_to_dpdk() + self.lanuch_l3fwd_power() + self.start_vms(vm_num=self.vm_num, packed=True) + self.relanuch_l3fwd_power() + self.config_virito_net_in_vm() + self.send_and_verify() + self.stop_all_apps() + + def test_wake_up_packed_ring_vhost_user_cores_by_multi_virtio_net_in_vms_with_event_idx_interrupt_with_cbdma( + self, + ): + """ + Test Case 4: wake up packed ring vhost-user cores by multi virtio-net in VMs with event idx interrupt mode and cbdma enabled test + """ + self.vm_num = 2 + self.bind_nic_driver(self.dut_ports) + self.queues = 1 + self.get_core_mask() + self.nopci = False + self.get_cbdma_ports_info_and_bind_to_dpdk() + self.lanuch_l3fwd_power() + self.start_vms(vm_num=self.vm_num, packed=True) + self.relanuch_l3fwd_power() + self.config_virito_net_in_vm() + self.send_and_verify() + self.stop_all_apps() + + def tear_down(self): + """ + Run after each test case. + """ + self.dut.close_session(self.vhost) + self.dut.send_expect(f"killall {self.l3fwdpower_name}", "#") + self.dut.send_expect("killall -s INT qemu-system-x86_64", "#") + self.bind_cbdma_device_to_kernel() + if "cbdma" in self.running_case: + self.bind_nic_driver(self.dut_ports, self.drivername) + + def tear_down_all(self): + """ + Run after each test suite. + """ + pass
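For reference, the dmas argument that lanuch_l3fwd_power() builds above can be reproduced standalone; a minimal Python sketch of the two-VM split (the PCI addresses are examples)::

    # Mirrors list_split() and the vm_num >= 2 branch of lanuch_l3fwd_power().
    def list_split(items, n):
        return [items[i : i + n] for i in range(0, len(items), n)]

    cbdma_dev_infos = ["0000:00:04.0", "0000:80:04.0"]  # example CBDMA channels
    vm_num = 2
    # split the channel list evenly, one sub-list per VM
    per_vm = list_split(cbdma_dev_infos, int(len(cbdma_dev_infos) / vm_num))
    for i, devs in enumerate(per_vm):
        # map each vhost TX queue of this VM to one CBDMA channel
        dmas = "".join("txq%d@%s;" % (q, d) for q, d in enumerate(devs))
        print("--vdev 'net_vhost%d,iface=vhost-net%d,dmas=[%s],queues=1,client=1'" % (i, i, dmas))
    # -> --vdev 'net_vhost0,iface=vhost-net0,dmas=[txq0@0000:00:04.0;],queues=1,client=1'
    # -> --vdev 'net_vhost1,iface=vhost-net1,dmas=[txq0@0000:80:04.0;],queues=1,client=1'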