From patchwork Tue Dec 15 23:46:04 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Wang, Yinan" <yinan.wang@intel.com>
X-Patchwork-Id: 85212
From: Yinan Wang <yinan.wang@intel.com>
To: dts@dpdk.org
Cc: Yinan Wang <yinan.wang@intel.com>
Date: Tue, 15 Dec 2020 18:46:04 -0500
Message-Id: <20201215234604.91346-1-yinan.wang@intel.com>
X-Mailer: git-send-email 2.25.1
Subject: [dts] [PATCH v1] test_plans/vhost_cbdma_test_plan.rst

Add one new cbdma test case and optimize the dynamic queue number test case.

Signed-off-by: Yinan Wang <yinan.wang@intel.com>
---
 test_plans/vhost_cbdma_test_plan.rst | 117 ++++++++++++++++++---------
 1 file changed, 80 insertions(+), 37 deletions(-)

diff --git a/test_plans/vhost_cbdma_test_plan.rst b/test_plans/vhost_cbdma_test_plan.rst
index b2230900..504b9aa0 100644
--- a/test_plans/vhost_cbdma_test_plan.rst
+++ b/test_plans/vhost_cbdma_test_plan.rst
@@ -61,7 +61,7 @@ operations of queues:
 otherwise, leverage librte_vhost to perform memory copy. Here is an example:

-    $ ./testpmd -c f -n 4 \
+    $ ./dpdk-testpmd -c f -n 4 \
     --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0],dmathr=1024'

 Test Case 1: PVP Split all path with DMA-accelerated vhost enqueue
@@ -73,14 +73,14 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG

 1. Bind one cbdma port and one nic port to igb_uio, then launch vhost by below command::

-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0],dmathr=1024' \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0],dmathr=1024' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
-    set fwd mac
-    start
+    >set fwd mac
+    >start
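+
+Note: one way to do the binding in step 1, assuming DPDK's standard
+usertools layout (80:04.0 is the cbdma address from the example above;
+the nic address is platform specific), is::
+
+    ./usertools/dpdk-devbind.py --status
+    ./usertools/dpdk-devbind.py -b igb_uio 80:04.0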

 2. Launch virtio-user with inorder mergeable path::

-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -95,7 +95,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG

 4. Relaunch virtio-user with mergeable path, then repeat step 3::

-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start

@@ -103,15 +103,15 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG

 5. Relaunch virtio-user with inorder non-mergeable path, then repeat step 3::

-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0in_order=1,queues=1 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start

 6. Relaunch virtio-user with non-mergeable path, then repeat step 3::

-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac

@@ -119,7 +119,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG

 7. Relaunch virtio-user with vector_rx path, then repeat step 3::

-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1 \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
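+
+Note: the "repeat step 3" items above assume a hardware packet generator
+sweeping packet sizes [64,1518] with random IPs; a rough scapy fallback
+(dst MAC and TG interface name are placeholders, and this sends only
+64-byte frames rather than sweeping the full size range) is::
+
+    scapy
+    >>> from scapy.all import Ether, IP, Raw, RandIP, sendp
+    >>> pkts = [Ether(dst="00:01:02:03:04:05")/IP(src=RandIP(), dst=RandIP())/Raw("x"*26) for _ in range(1000)]
+    >>> sendp(pkts, iface="enp1s0f0")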
@@ -129,54 +129,97 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG

 Test Case2: Dynamic queue number test for DMA-accelerated vhost Tx operations
 =============================================================================

-1. Bind two cbdma port and one nic port to igb_uio, then launch vhost by below command::
+1. Bind four cbdma ports and one nic port to igb_uio, then launch vhost by below command::

-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 28-29 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=2,client=1,dmas=[txq0@80:04.5;txq1@80:04.6],dmathr=1024' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
-    set fwd mac
-    start
+    >set fwd mac
+    >start

-2. Launch virtio-user by below ccd ommand::
+2. Launch virtio-user by below command::

-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=2,server=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
     >set fwd mac
     >start

-3. Send packets with packet size [64,1518] from packet generator with random ip, check perforamnce can get target and RX/TX can work normally in two queues.
+3. Send packets with packet size [64,1518] from packet generator with random IP, check performance can reach the target.

-4. On virtio-user side, dynamic change rx queue numbers from 2 queue to 1 queues, then check one queue RX/TX can work normally::
+4. Stop vhost port, check from vhost log that both RX and TX directions have packets in two queues.

-    start
-    stop
-    port stop all
-    port config all rxq 1
-    port start all
-    start
+5. On virtio-user side, dynamically change the queue number from 2 to 1, then check RX/TX can work normally with one queue::
+
+    testpmd>port stop all
+    testpmd>port config all rxq 1
+    testpmd>port config all txq 1
+    testpmd>port start all
+    testpmd>start
+    testpmd>show port stats all

-5. Relaunch virtio-user with queues=2, check RX/TX can work normally in two queues::
+6. Relaunch virtio-user with 2 queues::

-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=2,server=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
     >set fwd mac
     >start

-4. On vhost side, dynamic change rx queue numbers from 2 queue to 1 queues, then check one queue RX/TX can work normally::
+7. Send packets with packet size [64,1518] from packet generator with random IP, check performance can reach the target.

-    start
-    stop
-    port stop all
-    port config all rxq 1
-    port start all
-    start
+8. Stop vhost port, check from vhost log that both RX and TX directions have packets in queue0.

-6. Relaunch vhost with another two cbdma channels, check perforamnce can get target and RX/TX can work normally in two queueus::
+9. On vhost side, dynamically change the queue number from 2 to 1, then check RX/TX can work normally with one queue::

-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 28-29 \
-    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=2,client=1,dmas=[txq0@80:04.0],dmathr=512' \
+    testpmd>port stop all
+    testpmd>port config all rxq 1
+    testpmd>port config all txq 1
+    testpmd>port start all
+    testpmd>start
+    testpmd>show port stats all
+
+10. Relaunch vhost with another two cbdma channels and 2 queues, check performance can reach the target::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=2,client=1,dmas=[txq0@00:04.5;txq1@00:04.6],dmathr=512' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
     >set fwd mac
-    >start
\ No newline at end of file
+    >start
+
+11. Stop vhost port, check from vhost log that both RX and TX directions have packets in two queues.
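+
+Note: for the vhost-log checks in steps 4, 8 and 11, a cross-check that
+does not depend on the log format is testpmd's extended statistics,
+assuming the vhost PMD exposes per-queue xstats::
+
+    testpmd>stop
+    testpmd>show port xstats all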
+
+Test Case3: CBDMA threshold value check
+========================================
+
+1. Bind four cbdma ports to igb_uio, then launch vhost by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@80:04.0;txq1@80:04.1],dmathr=512' \
+    --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@80:04.2;txq1@80:04.3],dmathr=4096' -- \
+    -i --nb-cores=1 --rxq=2 --txq=2
+    >start
+
+2. Launch virtio-user1::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
+    --no-pci --file-prefix=virtio1 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=4096 \
+    -- -i --nb-cores=1 --rxq=2 --txq=2
+    >start
+
+3. Launch virtio-user0::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=4096 \
+    -- -i --nb-cores=1 --rxq=2 --txq=2
+    >start
+
+4. Check from vhost log that the cbdma threshold value for each vhost port is configured correctly::
+
+    dma parameters: vid0,qid0,dma*,threshold:512
+    dma parameters: vid0,qid2,dma*,threshold:512
+    dma parameters: vid1,qid0,dma*,threshold:4096
+    dma parameters: vid1,qid2,dma*,threshold:4096
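+
+Note: dmathr is the per-queue packet-size threshold selecting the copy
+path (packets at or above it are expected to take the CBDMA path, smaller
+ones the CPU path). A hedged way to exercise both paths on the dmathr=512
+port from virtio-user0 is fixed-size bursts below and above the threshold
+(sizes here are examples chosen to stay within the default mbuf size)::
+
+    testpmd>set txpkts 256
+    testpmd>start tx_first
+    testpmd>stop
+    testpmd>set txpkts 1024
+    testpmd>start tx_first
+    testpmd>stop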