From patchwork Sat Sep 3 06:04:43 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 115824
From: Wei Ling <weix.ling@intel.com>
To: dts@dpdk.org
Cc: Wei Ling <weix.ling@intel.com>
Subject: [dts][PATCH V2 1/3] test_plans/index: delete
 vhost_event_idx_interrupt_cbdma_test_plan
Date: Sat, 3 Sep 2022 02:04:43 -0400
Message-Id: <20220903060443.2937745-1-weix.ling@intel.com>

Interrupt with the async data path requires a local vhost PMD patch that
is not supported upstream right now, so delete the
vhost_event_idx_interrupt_cbdma test plan from test_plans/index.rst.

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
 test_plans/index.rst | 1 -
 1 file changed, 1 deletion(-)

diff --git a/test_plans/index.rst b/test_plans/index.rst
index 6aaf1ab8..2d797b6d 100644
--- a/test_plans/index.rst
+++ b/test_plans/index.rst
@@ -197,7 +197,6 @@ The following are the test plans for the DPDK DTS automated test system.
     malicious_driver_event_indication_test_plan
     vhost_event_idx_interrupt_test_plan
-    vhost_event_idx_interrupt_cbdma_test_plan
     vhost_virtio_pmd_interrupt_test_plan
     vhost_virtio_pmd_interrupt_cbdma_test_plan
     vhost_virtio_user_interrupt_test_plan
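A quick hedged follow-up to this first patch (assuming the standard DTS layout): at
this point the plan file itself still exists, so only the index should have stopped
referencing it::

    grep -n "vhost_event_idx_interrupt_cbdma" test_plans/index.rst || echo "index entry gone"
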
From patchwork Sat Sep 3 06:04:56 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 115825
From: Wei Ling <weix.ling@intel.com>
To: dts@dpdk.org
Cc: Wei Ling <weix.ling@intel.com>
Subject: [dts][PATCH V2 2/3] test_plans/vhost_event_idx_interrupt_cbdma_test_plan:
 delete test plan as not supported
Date: Sat, 3 Sep 2022 02:04:56 -0400
Message-Id: <20220903060456.2937805-1-weix.ling@intel.com>

Interrupt with the async data path requires a local vhost PMD patch that
is not supported upstream right now, so delete the
vhost_event_idx_interrupt_cbdma test plan.

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
 ...st_event_idx_interrupt_cbdma_test_plan.rst | 281 ------------------
 1 file changed, 281 deletions(-)
 delete mode 100644 test_plans/vhost_event_idx_interrupt_cbdma_test_plan.rst

diff --git a/test_plans/vhost_event_idx_interrupt_cbdma_test_plan.rst b/test_plans/vhost_event_idx_interrupt_cbdma_test_plan.rst
deleted file mode 100644
index 341005a4..00000000
--- a/test_plans/vhost_event_idx_interrupt_cbdma_test_plan.rst
+++ /dev/null
@@ -1,281 +0,0 @@
-.. Copyright (c) <2022>, Intel Corporation
-   All rights reserved.
-
-   Redistribution and use in source and binary forms, with or without
-   modification, are permitted provided that the following conditions
-   are met:
-
-   - Redistributions of source code must retain the above copyright
-     notice, this list of conditions and the following disclaimer.
-
-   - Redistributions in binary form must reproduce the above copyright
-     notice, this list of conditions and the following disclaimer in
-     the documentation and/or other materials provided with the
-     distribution.
-
-   - Neither the name of Intel Corporation nor the names of its
-     contributors may be used to endorse or promote products derived
-     from this software without specific prior written permission.
-
-   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
-   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
-   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
-   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
-   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
-   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
-   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
-   OF THE POSSIBILITY OF SUCH DAMAGE.
-
-========================================
-vhost event idx interrupt mode test plan
-========================================
-
-Description
-===========
-
-Vhost event idx interrupt mode is tested with the l3fwd-power sample and
-CBDMA channels: send small packets from virtio-net to the vhost side and
-check that the vhost-user cores are woken up; after packets stop coming
-from the virtio side, the vhost-user cores should go back to sleep. The
-packed virtqueue tests require a QEMU version newer than 4.2.0.
-
-Test flow
-=========
-
-Virtio-net --> Vhost-user
-
-Test Case 1: wake up split ring vhost-user cores with event idx interrupt mode and cbdma enabled 16 queues test
-===============================================================================================================
-
-1. Bind 16 CBDMA ports to the vfio-pci driver (a bind sketch follows this step), then launch the l3fwd-power example app in client mode::
-
-    ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-16 -n 4 --log-level=9 \
-    --vdev 'eth_vhost0,iface=./vhost-net0,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' \
-    -- -p 0x1 --parse-ptype 1 \
-    --config "(0,0,1),(0,1,2),(0,2,3),(0,3,4),(0,4,5),(0,5,6),(0,6,7),(0,7,8),(0,8,9),(0,9,10),(0,10,11),(0,11,12),(0,12,13),(0,13,14),(0,14,15),(0,15,16)"
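The bind in step 1 is assumed rather than shown. A minimal sketch using the same
dpdk-devbind.py helper the companion test suite calls (run from the DPDK root; the
BDF addresses are illustrative, take the real ones from the status output)::

    ./usertools/dpdk-devbind.py --status-dev dma
    modprobe vfio-pci
    ./usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:80:04.0 0000:80:04.1 0000:80:04.2 0000:80:04.3
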
-2. Launch VM1 in server mode::
-
-    taskset -c 17-18 qemu-system-x86_64 -name us-vhost-vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu1910.img \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char0,path=./vhost-net0,server \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=16 \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,mq=on,vectors=40 -vnc :12
-
-3. Relaunch the l3fwd-power sample to bring the port up::
-
-    ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-16 -n 4 --log-level=9 \
-    --vdev 'eth_vhost0,iface=./vhost-net0,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' \
-    -- -p 0x1 --parse-ptype 1 \
-    --config "(0,0,1),(0,1,2),(0,2,3),(0,3,4),(0,4,5),(0,5,6),(0,6,7),(0,7,8),(0,8,9),(0,9,10),(0,10,11),(0,11,12),(0,12,13),(0,13,14),(0,14,15),(0,15,16)"
-
-4. Configure virtio-net with 16 queues and give it an IP address::
-
-    ethtool -L [ens3] combined 16 # [ens3] is the name of virtio-net
-    ifconfig [ens3] 1.1.1.1
-
-5. Send packets with different IPs from virtio-net; note that each ping process must be pinned to a different vCPU::
-
-    taskset -c 0 ping 1.1.1.2
-    taskset -c 1 ping 1.1.1.3
-    taskset -c 2 ping 1.1.1.4
-    taskset -c 3 ping 1.1.1.5
-    taskset -c 4 ping 1.1.1.6
-    taskset -c 5 ping 1.1.1.7
-    taskset -c 6 ping 1.1.1.8
-    taskset -c 7 ping 1.1.1.9
-    taskset -c 8 ping 1.1.1.2
-    taskset -c 9 ping 1.1.1.2
-    taskset -c 10 ping 1.1.1.2
-    taskset -c 11 ping 1.1.1.2
-    taskset -c 12 ping 1.1.1.2
-    taskset -c 13 ping 1.1.1.2
-    taskset -c 14 ping 1.1.1.2
-    taskset -c 15 ping 1.1.1.2
-
-6. Check in the l3fwd-power log that the related vhost cores are woken up, for example::
-
-    L3FWD_POWER: lcore 0 is waked up from rx interrupt on port 0 queue 0
-    ...
-    ...
-    L3FWD_POWER: lcore 15 is waked up from rx interrupt on port 0 queue 15
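Step 6 checks only the wake-up half; per the Description, the cores must also go
back to sleep once traffic stops. A hedged way to close the loop (the sleep message
is the string the companion test suite greps for in check_vhost_core_status())::

    # Inside the VM: stop all ping processes.
    pkill -INT ping
    # On the host: the l3fwd-power output should then print, per lcore,
    # a line of the form "lcore 1 sleeps until interrupt triggers".
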
-Test Case 2: wake up split ring vhost-user cores by multi virtio-net in VMs with event idx interrupt mode and cbdma enabled test
-=================================================================================================================================
-
-1. Bind two CBDMA ports to the vfio-pci driver, then launch the l3fwd-power example app in client mode::
-
-    ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-2 -n 4 --log-level=9 \
-    --vdev 'eth_vhost0,iface=./vhost-net0,queues=1,client=1,dmas=[txq0@00:04.0]' \
-    --vdev 'eth_vhost1,iface=./vhost-net1,queues=1,client=1,dmas=[txq0@80:04.0]' \
-    -- -p 0x3 --parse-ptype 1 --config "(0,0,1),(1,0,2)"
-
-2. Launch VM1 and VM2 in server mode::
-
-    taskset -c 33 \
-    qemu-system-x86_64 -name us-vhost-vm1 \
-    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
-    -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu1910.img \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char0,path=./vhost-net0,server \
-    -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
-    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,csum=on -vnc :10 -daemonize
-
-    taskset -c 34 \
-    qemu-system-x86_64 -name us-vhost-vm2 \
-    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
-    -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu1910-2.img \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6004-:22 \
-    -chardev socket,id=char0,path=./vhost-net1,server \
-    -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
-    -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet1,csum=on -vnc :11 -daemonize
-
-3. Relaunch the l3fwd-power sample to bring the ports up::
-
-    ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-2 -n 4 --log-level=9 \
-    --vdev 'eth_vhost0,iface=./vhost-net0,queues=1,client=1,dmas=[txq0@00:04.0]' \
-    --vdev 'eth_vhost1,iface=./vhost-net1,queues=1,client=1,dmas=[txq0@80:04.0]' \
-    -- -p 0x3 --parse-ptype 1 --config "(0,0,1),(1,0,2)"
-
-4. On VM1, set the IP of the virtio device and send packets to vhost::
-
-    ifconfig [ens3] 1.1.1.2
-    #[ens3] is the virtual device name
-    ping 1.1.1.3
-    #send packets to vhost
-
-5. On VM2, also set the IP of the virtio device and send packets to vhost::
-
-    ifconfig [ens3] 1.1.1.4
-    #[ens3] is the virtual device name
-    ping 1.1.1.5
-    #send packets to vhost
-
-6. Check in the l3fwd-power log that the related vhost cores are woken up.
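Test Cases 3 and 4 below repeat these flows over packed ring (packed=on), which the
Description says requires QEMU newer than 4.2.0. A quick hedged pre-check, using the
same --version probe the companion test suite runs::

    qemu-system-x86_64 --version
    # Expect e.g. "QEMU emulator version 6.2.0"; upgrade before running
    # the packed ring cases if the version is 4.2.0 or older.
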
-Test Case 3: wake up packed ring vhost-user cores with event idx interrupt mode and cbdma enabled 16 queues test
-================================================================================================================
-
-1. Bind 16 CBDMA ports to the vfio-pci driver, then launch the l3fwd-power example app in client mode::
-
-    ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-16 -n 4 --log-level=9 \
-    --vdev 'eth_vhost0,iface=./vhost-net0,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' \
-    -- -p 0x1 --parse-ptype 1 \
-    --config "(0,0,1),(0,1,2),(0,2,3),(0,3,4),(0,4,5),(0,5,6),(0,6,7),(0,7,8),(0,8,9),(0,9,10),(0,10,11),(0,11,12),(0,12,13),(0,13,14),(0,14,15),(0,15,16)"
-
-2. Launch VM1 in server mode::
-
-    taskset -c 17-18 qemu-system-x86_64 -name us-vhost-vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu1910.img \
-    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char0,path=./vhost-net0,server \
-    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=16 \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,mq=on,vectors=40,packed=on -vnc :12
-
-3. Relaunch the l3fwd-power sample to bring the port up::
-
-    ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-16 -n 4 --log-level=9 \
-    --vdev 'eth_vhost0,iface=./vhost-net0,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' \
-    -- -p 0x1 --parse-ptype 1 \
-    --config "(0,0,1),(0,1,2),(0,2,3),(0,3,4),(0,4,5),(0,5,6),(0,6,7),(0,7,8),(0,8,9),(0,9,10),(0,10,11),(0,11,12),(0,12,13),(0,13,14),(0,14,15),(0,15,16)"
-
-4. Configure virtio-net with 16 queues and give it an IP address::
-
-    ethtool -L [ens3] combined 16 # [ens3] is the name of virtio-net
-    ifconfig [ens3] 1.1.1.1
-
-5. Send packets with different IPs from virtio-net; note that each ping process must be pinned to a different vCPU::
-
-    taskset -c 0 ping 1.1.1.2
-    taskset -c 1 ping 1.1.1.3
-    taskset -c 2 ping 1.1.1.4
-    taskset -c 3 ping 1.1.1.5
-    taskset -c 4 ping 1.1.1.6
-    taskset -c 5 ping 1.1.1.7
-    taskset -c 6 ping 1.1.1.8
-    taskset -c 7 ping 1.1.1.9
-    taskset -c 8 ping 1.1.1.2
-    taskset -c 9 ping 1.1.1.2
-    taskset -c 10 ping 1.1.1.2
-    taskset -c 11 ping 1.1.1.2
-    taskset -c 12 ping 1.1.1.2
-    taskset -c 13 ping 1.1.1.2
-    taskset -c 14 ping 1.1.1.2
-    taskset -c 15 ping 1.1.1.2
-
-6. Check in the l3fwd-power log that the related vhost cores are woken up, for example::
-
-    L3FWD_POWER: lcore 0 is waked up from rx interrupt on port 0 queue 0
-    ...
-    ...
-    L3FWD_POWER: lcore 15 is waked up from rx interrupt on port 0 queue 15
-Test Case 4: wake up packed ring vhost-user cores by multi virtio-net in VMs with event idx interrupt mode and cbdma enabled test
-==================================================================================================================================
-
-1. Bind two CBDMA ports to the vfio-pci driver, then launch the l3fwd-power example app in client mode::
-
-    ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-2 -n 4 --log-level=9 \
-    --vdev 'eth_vhost0,iface=./vhost-net0,queues=1,client=1,dmas=[txq0@00:04.0]' \
-    --vdev 'eth_vhost1,iface=./vhost-net1,queues=1,client=1,dmas=[txq0@80:04.0]' \
-    -- -p 0x3 --parse-ptype 1 --config "(0,0,1),(1,0,2)"
-
-2. Launch VM1 and VM2 in server mode::
-
-    taskset -c 33 \
-    qemu-system-x86_64 -name us-vhost-vm1 \
-    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
-    -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu1910.img \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-    -chardev socket,id=char0,path=./vhost-net0,server \
-    -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
-    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,csum=on,packed=on -vnc :10 -daemonize
-
-    taskset -c 34 \
-    qemu-system-x86_64 -name us-vhost-vm2 \
-    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
-    -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu1910-2.img \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6004-:22 \
-    -chardev socket,id=char0,path=./vhost-net1,server \
-    -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
-    -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet1,csum=on,packed=on -vnc :11 -daemonize
-
-3. Relaunch the l3fwd-power sample to bring the ports up::
-
-    ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 1-2 -n 4 --log-level=9 \
-    --vdev 'eth_vhost0,iface=./vhost-net0,queues=1,client=1,dmas=[txq0@00:04.0]' \
-    --vdev 'eth_vhost1,iface=./vhost-net1,queues=1,client=1,dmas=[txq0@80:04.0]' \
-    -- -p 0x3 --parse-ptype 1 --config "(0,0,1),(1,0,2)"
-
-4. On VM1, set the IP of the virtio device and send packets to vhost::
-
-    ifconfig [ens3] 1.1.1.2
-    #[ens3] is the virtual device name
-    ping 1.1.1.3
-    #send packets to vhost
-
-5. On VM2, also set the IP of the virtio device and send packets to vhost::
-
-    ifconfig [ens3] 1.1.1.4
-    #[ens3] is the virtual device name
-    ping 1.1.1.5
-    #send packets to vhost
-
-6. Check in the l3fwd-power log that the related vhost cores are woken up.
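Before moving on to the third patch, a hedged sanity check that this one applies
cleanly and removes the whole file (the patch filename below is hypothetical; use
whatever name your mbox export produced)::

    git apply --check 0002-test_plans-vhost_event_idx_interrupt_cbdma.patch
    git apply --stat  0002-test_plans-vhost_event_idx_interrupt_cbdma.patch
    # --stat should report 281 deletions for the .rst and no other changes.
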
From patchwork Sat Sep 3 06:05:07 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 115826
From: Wei Ling <weix.ling@intel.com>
To: dts@dpdk.org
Cc: Wei Ling <weix.ling@intel.com>
Subject: [dts][PATCH V2 3/3] tests/vhost_event_idx_interrupt_cbdma: delete
 test suite as not supported
Date: Sat, 3 Sep 2022 02:05:07 -0400
Message-Id: <20220903060507.2937865-1-weix.ling@intel.com>

Interrupt with the async data path requires a local vhost PMD patch that
is not supported upstream right now, so delete the
vhost_event_idx_interrupt_cbdma test suite.

Signed-off-by: Wei Ling <weix.ling@intel.com>
Acked-by: Xingguang He
Acked-by: Lijuan Tu
---
 ...stSuite_vhost_event_idx_interrupt_cbdma.py | 459 ------------------
 1 file changed, 459 deletions(-)
 delete mode 100644 tests/TestSuite_vhost_event_idx_interrupt_cbdma.py

diff --git a/tests/TestSuite_vhost_event_idx_interrupt_cbdma.py b/tests/TestSuite_vhost_event_idx_interrupt_cbdma.py
deleted file mode 100644
index 2cf54df5..00000000
--- a/tests/TestSuite_vhost_event_idx_interrupt_cbdma.py
+++ /dev/null
@@ -1,459 +0,0 @@
-# BSD LICENSE
-#
-# Copyright (c) <2022>, Intel Corporation.
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions
-# are met:
-#
-#   * Redistributions of source code must retain the above copyright
-#     notice, this list of conditions and the following disclaimer.
-#   * Redistributions in binary form must reproduce the above copyright
-#     notice, this list of conditions and the following disclaimer in
-#     the documentation and/or other materials provided with the
-#     distribution.
-#   * Neither the name of Intel Corporation nor the names of its
-#     contributors may be used to endorse or promote products derived
-#     from this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-"""
-DPDK Test suite.
-Vhost event idx interrupt tests, run with the l3fwd-power sample.
-"""
-
-import re
-import time
-
-from framework.test_case import TestCase
-from framework.virt_common import VM
-
-
-class TestVhostEventIdxInterruptCbdma(TestCase):
-    def set_up_all(self):
-        """
-        Run at the start of each test suite.
-        """
-        self.vm_num = 1
-        self.queues = 1
-        self.cores_num = len([n for n in self.dut.cores if int(n["socket"]) == 0])
-        self.prepare_l3fwd_power()
-        self.pci_info = self.dut.ports_info[0]["pci"]
-        self.base_dir = self.dut.base_dir.replace("~", "/root")
-        self.app_l3fwd_power_path = self.dut.apps_name["l3fwd-power"]
-        self.l3fwdpower_name = self.app_l3fwd_power_path.split("/")[-1]
-        self.dut_ports = self.dut.get_ports()
-        self.ports_socket = self.dut.get_numa_id(self.dut_ports[0])
-        self.cbdma_dev_infos = []
-        self.device_str = None
-
-    def set_up(self):
-        """
-        Run before each test case.
- """ - # Clean the execution ENV - self.verify_info = [] - self.dut.send_expect(f"killall {self.l3fwdpower_name}", "#") - self.dut.send_expect("killall -s INT qemu-system-x86_64", "#") - self.dut.send_expect("rm -rf %s/vhost-net*" % self.base_dir, "#") - self.vhost = self.dut.new_session(suite="vhost-l3fwd") - self.vm_dut = [] - self.vm = [] - self.nopci = True - - def get_core_mask(self): - self.core_config = "1S/%dC/1T" % (self.vm_num * self.queues) - self.verify( - self.cores_num >= self.queues * self.vm_num, - "There has not enought cores to test this case %s" % self.running_case, - ) - self.core_list_l3fwd = self.dut.get_core_list(self.core_config) - - def prepare_l3fwd_power(self): - out = self.dut.build_dpdk_apps("examples/l3fwd-power") - self.verify("Error" not in out, "compilation l3fwd-power error") - - def list_split(self, items, n): - return [items[i : i + n] for i in range(0, len(items), n)] - - @property - def check_2M_env(self): - out = self.dut.send_expect( - "cat /proc/meminfo |grep Hugepagesize|awk '{print($2)}'", "# " - ) - return True if out == "2048" else False - - def lanuch_l3fwd_power(self): - """ - launch l3fwd-power with a virtual vhost device - """ - res = True - self.logger.info("Launch l3fwd_sample sample:") - config_info = "" - core_index = 0 - # config the interrupt cores info - for port in range(self.vm_num): - for queue in range(self.queues): - if config_info != "": - config_info += "," - config_info += "(%d,%d,%s)" % ( - port, - queue, - self.core_list_l3fwd[core_index], - ) - info = { - "core": self.core_list_l3fwd[core_index], - "port": port, - "queue": queue, - } - self.verify_info.append(info) - core_index = core_index + 1 - # config the vdev info, if have 2 vms, it shoule have 2 vdev info - vdev_info = "" - self.cbdma_dev_infos_list = [] - if self.vm_num >= 2: - self.cbdma_dev_infos_list = self.list_split( - self.cbdma_dev_infos, int(len(self.cbdma_dev_infos) / self.vm_num) - ) - for i in range(self.vm_num): - dmas = "" - if self.vm_num == 1: - for queue in range(self.queues): - dmas += f"txq{queue}@{self.cbdma_dev_infos[queue]};" - - else: - cbdma_dev_infos = self.cbdma_dev_infos_list[i] - for index, q in enumerate(cbdma_dev_infos): - dmas += f"txq{index}@{q};" - vdev_info += ( - f"--vdev 'net_vhost%d,iface=%s/vhost-net%d,dmas=[{dmas}],queues=%d,client=1' " - % (i, self.base_dir, i, self.queues) - ) - - port_info = "0x1" if self.vm_num == 1 else "0x3" - - example_para = self.app_l3fwd_power_path + " " - para = ( - " --log-level=9 %s -- -p %s --parse-ptype 1 --config '%s' --interrupt-only" - % (vdev_info, port_info, config_info) - ) - eal_params = self.dut.create_eal_parameters( - cores=self.core_list_l3fwd, no_pci=self.nopci - ) - command_line_client = example_para + eal_params + para - self.vhost.get_session_before(timeout=2) - self.vhost.send_expect(command_line_client, "POWER", 40) - time.sleep(10) - out = self.vhost.get_session_before() - if "Error" in out and "Error opening" not in out: - self.logger.error("Launch l3fwd-power sample error") - res = False - else: - self.logger.info("Launch l3fwd-power sample finished") - self.verify(res is True, "Lanuch l3fwd failed") - - def relanuch_l3fwd_power(self): - """ - relauch l3fwd-power sample for port up - """ - self.dut.send_expect("killall -s INT %s" % self.l3fwdpower_name, "#") - # make sure l3fwd-power be killed - pid = self.dut.send_expect( - "ps -ef |grep l3|grep -v grep |awk '{print $2}'", "#" - ) - if pid: - self.dut.send_expect("kill -9 %s" % pid, "#") - self.lanuch_l3fwd_power() - - def 
-    def set_vm_cpu_number(self, vm_config):
-        # config the vcpu number when the queue number is greater than 1
-        if self.queues == 1:
-            return
-        params_number = len(vm_config.params)
-        for i in range(params_number):
-            if list(vm_config.params[i].keys())[0] == "cpu":
-                vm_config.params[i]["cpu"][0]["number"] = self.queues
-
-    def check_qemu_version(self, vm_config):
-        """
-        in this suite, the QEMU version should be greater than 2.7
-        """
-        self.vm_qemu_version = vm_config.qemu_emulator
-        params_number = len(vm_config.params)
-        for i in range(params_number):
-            if list(vm_config.params[i].keys())[0] == "qemu":
-                self.vm_qemu_version = vm_config.params[i]["qemu"][0]["path"]
-
-        out = self.dut.send_expect("%s --version" % self.vm_qemu_version, "#")
-        result = re.search("QEMU\s*emulator\s*version\s*(\d*.\d*)", out)
-        self.verify(
-            result is not None,
-            "the qemu path may be not right: %s" % self.vm_qemu_version,
-        )
-        version = result.group(1)
-        index = version.find(".")
-        self.verify(
-            int(version[:index]) > 2
-            or (int(version[:index]) == 2 and int(version[index + 1 :]) >= 7),
-            "The QEMU version should be greater than 2.7 "
-            + "in this suite; please configure it in the vhost_sample.cfg file",
-        )
-
-    def start_vms(self, vm_num=1, packed=False):
-        """
-        start qemus
-        """
-        for i in range(vm_num):
-            vm_info = VM(self.dut, "vm%d" % i, "vhost_sample_copy")
-            vm_info.load_config()
-            vm_params = {}
-            vm_params["driver"] = "vhost-user"
-            vm_params["opt_path"] = self.base_dir + "/vhost-net%d" % i
-            vm_params["opt_mac"] = "00:11:22:33:44:5%d" % i
-            vm_params["opt_server"] = "server"
-            if self.queues > 1:
-                vm_params["opt_queue"] = self.queues
-                opt_args = "csum=on,mq=on,vectors=%d" % (2 * self.queues + 2)
-            else:
-                opt_args = "csum=on"
-            if packed:
-                opt_args = opt_args + ",packed=on"
-            vm_params["opt_settings"] = opt_args
-            vm_info.set_vm_device(**vm_params)
-            self.set_vm_cpu_number(vm_info)
-            self.check_qemu_version(vm_info)
-            vm_dut = None
-            try:
-                vm_dut = vm_info.start(load_config=False, set_target=False)
-                if vm_dut is None:
-                    raise Exception("Set up VM ENV failed")
-            except Exception as e:
-                self.logger.error("ERROR: Failure for %s" % str(e))
-            self.vm_dut.append(vm_dut)
-            self.vm.append(vm_info)
-
-    def config_virito_net_in_vm(self):
-        """
-        enable the configured number of virtio-net queues in each VM
-        """
-        for i in range(len(self.vm_dut)):
-            vm_intf = self.vm_dut[i].ports_info[0]["intf"]
-            self.vm_dut[i].send_expect(
-                "ethtool -L %s combined %d" % (vm_intf, self.queues), "#", 20
-            )
-
-    def check_vhost_core_status(self, vm_index, status):
-        """
-        check the cpu status
-        """
-        out = self.vhost.get_session_before()
-        for i in range(self.queues):
-            # verify_info holds the config of all VMs (vm0 and vm1),
-            # so the current index should be vm_index + queue index
-            verify_index = i + vm_index
-            if status == "waked up":
-                info = "lcore %s is waked up from rx interrupt on port %d queue %d"
-                info = info % (
-                    self.verify_info[verify_index]["core"],
-                    self.verify_info[verify_index]["port"],
-                    self.verify_info[verify_index]["queue"],
-                )
-            elif status == "sleeps":
-                info = (
-                    "lcore %s sleeps until interrupt triggers"
-                    % self.verify_info[verify_index]["core"]
-                )
-            self.logger.info(info)
-            self.verify(info in out, "The CPU status is not right for %s" % info)
-    def send_and_verify(self):
-        """
-        start to send packets and check the cpu status;
-        stop sending, then check the cpu status again
-        """
-        ping_ip = 3
-        for vm_index in range(self.vm_num):
-            session_info = []
-            vm_intf = self.vm_dut[vm_index].ports_info[0]["intf"]
-            self.vm_dut[vm_index].send_expect(
-                "ifconfig %s 1.1.1.%d" % (vm_intf, ping_ip), "#"
-            )
-            ping_ip = ping_ip + 1
-            self.vm_dut[vm_index].send_expect("ifconfig %s up" % vm_intf, "#")
-            for queue in range(self.queues):
-                session = self.vm_dut[vm_index].new_session(
-                    suite="ping_info_%d" % queue
-                )
-                session.send_expect(
-                    "taskset -c %d ping 1.1.1.%d" % (queue, ping_ip), "PING", 30
-                )
-                session_info.append(session)
-                ping_ip = ping_ip + 1
-            time.sleep(3)
-            self.check_vhost_core_status(vm_index=vm_index, status="waked up")
-            # close all ping sessions in the vm
-            for sess_index in range(len(session_info)):
-                session_info[sess_index].send_expect("^c", "#")
-                self.vm_dut[vm_index].close_session(session_info[sess_index])
-
-    def get_cbdma_ports_info_and_bind_to_dpdk(self):
-        """
-        get all cbdma ports
-        """
-        self.cbdma_dev_infos = []
-        out = self.dut.send_expect(
-            "./usertools/dpdk-devbind.py --status-dev dma", "# ", 30
-        )
-        device_info = out.split("\n")
-        for device in device_info:
-            pci_info = re.search("\s*(0000:\S*:\d*.\d*)", device)
-            if pci_info is not None:
-                # only add the cbdma devices on the same socket as the nic dev
-                self.cbdma_dev_infos.append(pci_info.group(1))
-        self.verify(
-            len(self.cbdma_dev_infos) >= self.queues,
-            "There are not enough cbdma devices to run this suite",
-        )
-        if self.queues == 1:
-            self.cbdma_dev_infos = [self.cbdma_dev_infos[0], self.cbdma_dev_infos[-1]]
-        self.used_cbdma = self.cbdma_dev_infos[0 : self.queues * self.vm_num]
-        self.device_str = " ".join(self.used_cbdma)
-        self.dut.send_expect(
-            "./usertools/dpdk-devbind.py --force --bind=%s %s"
-            % (self.drivername, self.device_str),
-            "# ",
-            60,
-        )
-
-    def bind_cbdma_device_to_kernel(self):
-        if self.device_str is not None:
-            self.dut.send_expect("modprobe ioatdma", "# ")
-            self.dut.send_expect(
-                "./usertools/dpdk-devbind.py -u %s" % self.device_str, "# ", 30
-            )
-            self.dut.send_expect(
-                "./usertools/dpdk-devbind.py --force --bind=ioatdma %s"
-                % self.device_str,
-                "# ",
-                60,
-            )
-
-    def stop_all_apps(self):
-        """
-        close all vms
-        """
-        for i in range(len(self.vm)):
-            self.vm[i].stop()
-        self.dut.send_expect("killall %s" % self.l3fwdpower_name, "#", timeout=2)
-
-    def test_wake_up_split_ring_vhost_user_cores_with_event_idx_interrupt_mode_16_queues_with_cbdma(
-        self,
-    ):
-        """
-        Test Case 1: wake up split ring vhost-user cores with event idx interrupt mode and cbdma enabled 16 queues test
-        """
-        self.vm_num = 1
-        self.bind_nic_driver(self.dut_ports)
-        self.queues = 16
-        self.get_core_mask()
-        self.nopci = False
-        self.get_cbdma_ports_info_and_bind_to_dpdk()
-        self.lanuch_l3fwd_power()
-        self.start_vms(
-            vm_num=self.vm_num,
-        )
-        self.relanuch_l3fwd_power()
-        self.config_virito_net_in_vm()
-        self.send_and_verify()
-        self.stop_all_apps()
-
-    def test_wake_up_split_ring_vhost_user_cores_by_multi_virtio_net_in_vms_with_event_idx_interrupt_with_cbdma(
-        self,
-    ):
-        """
-        Test Case 2: wake up split ring vhost-user cores by multi virtio-net in VMs with event idx interrupt mode and cbdma enabled test
-        """
-        self.vm_num = 2
-        self.bind_nic_driver(self.dut_ports)
-        self.queues = 1
-        self.get_core_mask()
-        self.nopci = False
-        self.get_cbdma_ports_info_and_bind_to_dpdk()
-        self.lanuch_l3fwd_power()
-        self.start_vms(
-            vm_num=self.vm_num,
-        )
-        self.relanuch_l3fwd_power()
-        self.config_virito_net_in_vm()
-        self.send_and_verify()
-        self.stop_all_apps()
-
-    def test_wake_up_packed_ring_vhost_user_cores_with_event_idx_interrupt_mode_16_queues_with_cbdma(
-        self,
-    ):
-        """
-        Test Case 3: wake up packed ring vhost-user cores with event idx interrupt mode and cbdma enabled 16 queues test
-        """
-        self.vm_num = 1
-        self.bind_nic_driver(self.dut_ports)
-        self.queues = 16
-        self.get_core_mask()
-        self.nopci = False
-        self.get_cbdma_ports_info_and_bind_to_dpdk()
-        self.lanuch_l3fwd_power()
-        self.start_vms(vm_num=self.vm_num, packed=True)
-        self.relanuch_l3fwd_power()
-        self.config_virito_net_in_vm()
-        self.send_and_verify()
-        self.stop_all_apps()
-
-    def test_wake_up_packed_ring_vhost_user_cores_by_multi_virtio_net_in_vms_with_event_idx_interrupt_with_cbdma(
-        self,
-    ):
-        """
-        Test Case 4: wake up packed ring vhost-user cores by multi virtio-net in VMs with event idx interrupt mode and cbdma enabled test
-        """
-        self.vm_num = 2
-        self.bind_nic_driver(self.dut_ports)
-        self.queues = 1
-        self.get_core_mask()
-        self.nopci = False
-        self.get_cbdma_ports_info_and_bind_to_dpdk()
-        self.lanuch_l3fwd_power()
-        self.start_vms(vm_num=self.vm_num, packed=True)
-        self.relanuch_l3fwd_power()
-        self.config_virito_net_in_vm()
-        self.send_and_verify()
-        self.stop_all_apps()
-
-    def tear_down(self):
-        """
-        Run after each test case.
-        """
-        self.dut.close_session(self.vhost)
-        self.dut.send_expect(f"killall {self.l3fwdpower_name}", "#")
-        self.dut.send_expect("killall -s INT qemu-system-x86_64", "#")
-        self.bind_cbdma_device_to_kernel()
-        if "cbdma" in self.running_case:
-            self.bind_nic_driver(self.dut_ports, self.drivername)
-
-    def tear_down_all(self):
-        """
-        Run after each test suite.
-        """
-        pass
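
With the whole series applied, a hedged end-to-end check from the DTS repository
root: nothing in the tree should still name the suite::

    git grep -n "vhost_event_idx_interrupt_cbdma" || echo "series fully removes the suite"

Patch 1 drops the index entry, patch 2 the plan, and patch 3 the TestSuite, so an
empty grep confirms no dangling references remain.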