From patchwork Wed Jan 19 02:58:40 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 106047
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 2/3] test_plans/vhost_cbdma_test_plan: modify test plan to cover more test points
Date: Wed, 19 Jan 2022 10:58:40 +0800
Message-Id: <20220119025840.898166-1-weix.ling@intel.com>
X-Mailer: git-send-email 2.25.1

Modify the test plan to cover more test points, such as adding VA and PA mode cases.

Signed-off-by: Wei Ling
---
 test_plans/vhost_cbdma_test_plan.rst | 280 +++++++++++++++++----------
 1 file changed, 176 insertions(+), 104 deletions(-)

diff --git a/test_plans/vhost_cbdma_test_plan.rst b/test_plans/vhost_cbdma_test_plan.rst
index 3d0e518a..234498a4 100644
--- a/test_plans/vhost_cbdma_test_plan.rst
+++ b/test_plans/vhost_cbdma_test_plan.rst
@@ -38,40 +38,43 @@
 Overview
 --------

 This feature supports to offload large data movement in vhost enqueue operations
-from the CPU to the I/OAT device for every queue. Note that I/OAT acceleration
-is just enabled for split rings now. In addition, a queue can only use one I/OAT
-device, and I/OAT devices cannot be shared among vhost ports and queues. That is,
-an I/OAT device can only be used by one queue at a time. DMA devices used by
-queues are assigned by users; for a queue without assigning a DMA device, the
-PMD will leverages librte_vhost to perform vhost enqueue operations. Moreover,
-users cannot enable I/OAT acceleration for live-migration. Large copies are
-offloaded from the CPU to the DMA engine in an asynchronous manner. The CPU just
-submits copy jobs to the DMA engine and without waiting for DMA copy completion;
+from the CPU to the I/OAT (a DMA engine in Intel processors) device for every queue.
+In addition, a queue can only use one I/OAT device, and I/OAT devices cannot be shared
+among vhost ports and queues. That is, an I/OAT device can only be used by one queue at
+a time. DMA devices (e.g., CBDMA) used by queues are assigned by users; for a queue
+without an assigned DMA device, the PMD leverages librte_vhost to perform vhost enqueue
+operations. Moreover, users cannot enable I/OAT acceleration for live migration. Large
+copies are offloaded from the CPU to the DMA engine in an asynchronous manner. The CPU
+just submits copy jobs to the DMA engine without waiting for DMA copy completion;
 there is no CPU intervention during DMA data transfer. By overlapping CPU
 computation and DMA copy, we can save precious CPU cycles and improve the overall
 throughput for vhost-user PMD based applications, like OVS. Due to startup overheads
 associated with DMA engines, small copies are performed by the CPU.

+DPDK 21.11 adds vfio support for DMA devices in vhost. When DMA devices are bound to
+the vfio driver, VA mode is the default and recommended mode. In PA mode, the
+page-by-page mapping may exceed the IOMMU's max capability, so it is better to use
+1G guest hugepages.
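+
+For example, a minimal sketch for reserving 1G hugepages for the PA mode cases (the
+page count here is an assumption; size it to the memory actually used, and note that
+on a fragmented host 1G pages may need to be reserved via kernel boot parameters)::
+
+    # reserve 16 x 1G hugepages and mount a hugetlbfs instance for them
+    echo 16 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
+    mkdir -p /mnt/huge1g
+    mount -t hugetlbfs -o pagesize=1G nodev /mnt/huge1g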

 We introduce a new vdev parameter to enable DMA acceleration for Tx operations of queues:
-
 - dmas: This parameter is used to specify the assigned DMA device of a queue. Here is an example:
-  $ ./dpdk-testpmd -c f -n 4 \
-    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0]'
+  $ ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c f -n 4 \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@0000:00:04.0]' --iova=va -- -i
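+
+Note: the CBDMA addresses used throughout this plan (0000:00:04.x and 0000:80:04.x)
+are platform-specific examples. One possible way to bind the CBDMA channels and the
+NIC port to vfio-pci before each case (the PCI addresses below are assumptions; check
+the local ones with ``./usertools/dpdk-devbind.py --status`` first)::
+
+    # load vfio-pci and bind the CBDMA channels and one NIC port to it
+    modprobe vfio-pci
+    ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:00:04.0 0000:00:04.1
+    ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:18:00.0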

-Test Case 1: PVP Split all path with DMA-accelerated vhost enqueue
-==================================================================
+Test Case 1: PVP split ring all path vhost enqueue operations with cbdma
+========================================================================

 Packet pipeline:
 ================
 TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG

-1. Bind one cbdma port and one nic port to vfio-pci, then launch vhost by below command::
+1. Bind 1 CBDMA port and 1 NIC port to vfio-pci, then launch vhost by below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0]' \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@0000:00:04.0]' \
+    --iova=va \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start

@@ -80,11 +83,11 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG

     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1 \
-    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start

-3. Send imix packets [64,1518] from packet generator, check the throughput can get expected data, restart vhost port, then check throughput again::
+3. Send imix packets [64,1518] from the packet generator and check the throughput reaches the expected data; then restart the vhost port, send the imix packets again, and check the same throughput is reached::

     testpmd>show port stats all
     testpmd>stop
     testpmd>start
     testpmd>show port stats all

@@ -95,7 +98,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG

     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=1 \
-    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start

@@ -103,7 +106,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG

     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1 \
-    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start

@@ -111,26 +114,37 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG

     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1 \
-    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    -- -i --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start

 7. Relaunch virtio-user with vector_rx path, then repeat step 3::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
-    --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,vectorized=1,queues=1 \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start

+8. Quit all testpmd and relaunch vhost with iova=pa by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@0000:00:04.0]' \
+    --iova=pa \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    >set fwd mac
+    >start
+
+9. Rerun steps 2-7.
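+
+Note: a possible sequence for the "restart the vhost port" part of step 3, issued from
+the vhost-side testpmd (assuming the vhost port is port 0)::
+
+    testpmd>stop
+    testpmd>port stop 0
+    testpmd>port start 0
+    testpmd>start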

-Test Case 2: Split ring dynamic queue number test for DMA-accelerated vhost Tx operations
-=========================================================================================
+Test Case 2: PVP split ring dynamic queue number vhost enqueue operations with cbdma
+====================================================================================

-1. Bind 8 cbdma channels and one nic port to vfio-pci, then launch vhost by below command::
+1. Bind 8 CBDMA ports and 1 NIC port to vfio-pci, then launch vhost by below command::

     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \
+    --iova=va \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
     >set fwd mac
     >start

@@ -139,48 +153,64 @@ Test Case 2: Split ring dynamic queue number test for DMA-accelerated vhost Tx o

     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \
-    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
     >set fwd mac
     >start

-3. Send imix packets from packet generator with random ip, check perforamnce can get target.
+3. Send imix packets [64,1518] from the packet generator with random IPs, check the performance reaches the target.

 4. Stop vhost port, check vhost RX and TX direction both exist packtes in 8 queues from vhost log.

-5. Quit vhost port and relaunch vhost with 4 queues w/ cbdma::
+5. Quit and relaunch vhost with 4 queues w/ cbdma and 4 queues w/o cbdma::

     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 \
-    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=4,client=1,dmas=[txq0@00:04.0;txq1@00:04.1;txq2@00:04.2;txq3@00:04.3]' \
-    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=4 --rxq=4
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3]' \
+    --iova=va \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
     >set fwd mac
     >start

-6. Send imix packets from packet generator with random ip, check perforamnce can get target.
+6. Send imix packets [64,1518] from the packet generator with random IPs, check the performance reaches the target.

-7. Stop vhost port, check vhost RX and TX direction both exist packtes in 4 queues from vhost log.
+7. Stop the vhost port, check from the vhost log that packets exist in both RX and TX directions for all 8 queues.

-8. Quit vhost port and relaunch vhost with 8 queues w/ cbdma::
+8. Quit and relaunch vhost with 8 queues w/ cbdma::

     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 \
-    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7]' \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \
+    --iova=va \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
     >set fwd mac
     >start

+9. Send imix packets [64,1518] from the packet generator with random IPs, check the performance reaches the target.
+
+10. Stop the vhost port, check from the vhost log that packets exist in both RX and TX directions for all 8 queues.
+
+11. Quit and relaunch vhost with iova=pa, 6 queues w/ cbdma and 2 queues w/o cbdma::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5]' \
+    --iova=pa \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
+    >set fwd mac
+    >start

-9. Send imix packets from packet generator with random ip, check perforamnce can get target.
+12. Send imix packets [64,1518] from the packet generator with random IPs, check the performance reaches the target.

-10. Stop vhost port, check vhost RX and TX direction both exist packtes in 8 queues from vhost log.
+13. Stop the vhost port, check from the vhost log that packets exist in both RX and TX directions for all 8 queues.
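+
+Note: one possible way to read the per-queue checks above is from the forward
+statistics the vhost-side testpmd prints when ``stop`` is issued; each queue in use
+should report non-zero RX-packets and TX-packets (the lines below are an abbreviated
+sketch of the output shape, not verbatim testpmd output)::
+
+    testpmd>stop
+    ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 0/Queue= 0 -------
+    RX-packets: ...        TX-packets: ...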

-Test Case 3: PVP packed ring all path with DMA-accelerated vhost enqueue
-========================================================================
+Test Case 3: PVP packed ring all path vhost enqueue operations with cbdma
+=========================================================================

 Packet pipeline:
 ================
 TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG

-1. Bind one cbdma port and one nic port to vfio-pci, then launch vhost by below command::
+1. Bind 1 CBDMA port and 1 NIC port to vfio-pci, then launch vhost by below command::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0]' \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@0000:80:04.0]' \
+    --iova=va \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start

@@ -189,11 +219,11 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG

     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1,packed_vq=1 \
-    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start

-3. Send imix packets [64,1518] from packet generator, check the throughput can get expected data, restart vhost port, then check throughput again::
+3. Send imix packets [64,1518] from the packet generator and check the throughput reaches the expected data; then restart the vhost port, send the imix packets again, and check the same throughput is reached::

     testpmd>show port stats all
     testpmd>stop
     testpmd>start
     testpmd>show port stats all

@@ -204,7 +234,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG

     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=1,packed_vq=1 \
-    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start

@@ -212,7 +242,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG

     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1 \
-    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start

@@ -220,35 +250,45 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG

     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1,packed_vq=1 \
-    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start

 7. Relaunch virtio-user with vectorized path, then repeat step 3::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
-    --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queues=1 \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start

-8. Relaunch virtio-user with vector_rx path, then repeat step 3::
+8. Relaunch virtio-user with the vectorized path and a ring size that is not a power of 2, then repeat step 3::

-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
-    --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1,packed_vq=1 \
-    -- -i --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio --force-max-simd-bitwidth=512 \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queues=1,queue_size=1025 \
+    -- -i --nb-cores=1 --txd=1025 --rxd=1025
+    >set fwd mac
+    >start
+
+9. Quit all testpmd and relaunch vhost with iova=pa by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost \
+    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@0000:80:04.0]' \
+    --iova=pa \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start

+10. Rerun steps 2-8.
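+
+Note: the vectorized packed ring path in steps 7-8 relies on AVX512, which is why
+``--force-max-simd-bitwidth=512`` is passed to the virtio-user testpmd. A quick
+host-side sanity check of the CPU flags (any avx512 flag listed is enough here)::
+
+    lscpu | grep -o 'avx512[a-z]*' | sort -u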

-Test Case 4: Packed ring dynamic queue number test for DMA-accelerated vhost Tx operations
-==========================================================================================
+Test Case 4: PVP packed ring dynamic queue number vhost enqueue operations with cbdma
+=====================================================================================

-1. Bind 8 cbdma channels and one nic port to vfio-pci, then launch vhost by below command::
+1. Bind 8 CBDMA ports and 1 NIC port to vfio-pci, then launch vhost by below command::

     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \
+    --iova=va \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
     >set fwd mac
     >start

@@ -256,8 +296,8 @@ Test Case 4: Packed ring dynamic queue number test for DMA-accelerated vhost Tx

 2. Launch virtio-user by below command::

     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8,server=1,packed_vq=1 \
-    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=8,server=1,packed_vq=1 \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
     >set fwd mac
     >start

@@ -265,11 +305,12 @@ Test Case 4: Packed ring dynamic queue number test for DMA-accelerated vhost Tx

 4. Stop vhost port, check vhost RX and TX direction both exist packtes in 8 queues from vhost log.

-5. Quit vhost port and relaunch vhost with 4 queues w/ cbdma::
+5. Quit and relaunch vhost with 4 queues w/ cbdma and 4 queues w/o cbdma::

     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 \
-    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=4,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3]' \
-    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=4 --rxq=4
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3]' \
+    --iova=va \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
     >set fwd mac
     >start

@@ -277,71 +318,102 @@ Test Case 4: Packed ring dynamic queue number test for DMA-accelerated vhost Tx

 7. Stop vhost port, check vhost RX and TX direction both exist packtes in 4 queues from vhost log.

-8. Quit vhost port and relaunch vhost with 8 queues w/ cbdma::
+8. Quit and relaunch vhost with 8 queues w/ cbdma::

     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 \
-    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7]' \
-    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
-    >set fwd mac
-    >start
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]' \
+    --iova=va \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
+    >set fwd mac
+    >start

 9. Send imix packets from packet generator with random ip, check perforamnce can get target.

 10. Stop vhost port, check vhost RX and TX direction both exist packtes in 8 queues from vhost log.

+11. Quit and relaunch vhost with iova=pa, 6 queues w/ cbdma and 2 queues w/o cbdma::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5]' \
+    --iova=pa \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=8 --rxq=8
+    >set fwd mac
+    >start
+
+12. Send imix packets [64,1518] from the packet generator with random IPs, check the performance reaches the target.
+
+13. Stop the vhost port, check from the vhost log that packets exist in both RX and TX directions for all 8 queues.

-Test Case 5: Compare PVP split ring performance between CPU copy, CBDMA copy and Sync copy
-==========================================================================================
-
-1. Bind one cbdma port and one nic port which on same numa to vfio-pci, then launch vhost by below command::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,client=1,dmas=[txq0@00:01.0]' \
-    -- -i --nb-cores=1 --txd=1024 --rxd=1024
-    >set fwd mac
-    >start
-
-2. Launch virtio-user with inorder mergeable path::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1,server=1 \
-    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
-    >set fwd mac
-    >start
-
-3. Send packets with 64b and 1518b seperately from packet generator, record the throughput as sync copy throughput for 64b and cbdma copy for 1518b::
-
-    testpmd>show port stats all
-
-4.Quit vhost side, relaunch with below cmd::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,client=1,dmas=[txq0@00:01.0]' \
-    -- -i --nb-cores=1 --txd=1024 --rxd=1024
-    >set fwd mac
-    >start
-
-5. Send packets with 1518b from packet generator, record the throughput as sync copy throughput for 1518b::
-
-    testpmd>show port stats all
-
-6. Quit two testpmd, relaunch vhost by below command::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1' \
-    -- -i --nb-cores=1 --txd=1024 --rxd=1024
-    >set fwd mac
-    >start
-
-7. Launch virtio-user with inorder mergeable path::
-
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1 \
-    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
-    >set fwd mac
-    >start
-
-8. Send packets with 64b from packet generator, record the throughput as cpu copy for 64b::
-
-    testpmd>show port stats all
-
-9. Check performance can meet below requirement::
-
-   (1)CPU copy vs. sync copy delta < 10% for 64B packet size
-   (2)CBDMA copy vs sync copy delta > 5% for 1518 packet size
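+
+Note: whether the requested --iova mode actually took effect can be confirmed from
+the EAL messages printed while vhost starts up; a line of the following form is
+expected (shown here as an approximate sketch of the EAL log)::
+
+    EAL: Selected IOVA mode 'PA'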
+
+Test Case 5: loopback split ring large chain packets stress test with cbdma enqueue
+====================================================================================
+
+Packet pipeline:
+================
+Vhost <--> Virtio
+
+1. Bind 1 CBDMA channel to vfio-pci and launch vhost::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:00:04.0]' \
+    --iova=va \
+    -- -i --nb-cores=1 --mbuf-size=65535
+
+2. Launch virtio and start testpmd::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=testpmd0 --no-pci \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=1,mrg_rxbuf=1,in_order=0,vectorized=1,queue_size=2048 \
+    -- -i --rxq=1 --txq=1 --txd=2048 --rxd=2048 --nb-cores=1
+    >start
+
+3. Send large packets from vhost, check virtio can receive packets::
+
+    testpmd> vhost enable tx all
+    testpmd> set txpkts 65535,65535,65535,65535,65535
+    testpmd> start tx_first 32
+    testpmd> show port stats all
+
+4. Quit all testpmd and relaunch vhost with iova=pa::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:00:04.0]' \
+    --iova=pa \
+    -- -i --nb-cores=1 --mbuf-size=65535
+
+5. Rerun steps 2-3.
+
+Test Case 6: loopback packed ring large chain packets stress test with cbdma enqueue
+=====================================================================================
+
+Packet pipeline:
+================
+Vhost <--> Virtio
+
+1. Bind 1 CBDMA channel to vfio-pci and launch vhost::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:00:04.0]' \
+    --iova=va \
+    -- -i --nb-cores=1 --mbuf-size=65535
+
+2. Launch virtio and start testpmd::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=testpmd0 --no-pci \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=1,mrg_rxbuf=1,in_order=0,vectorized=1,packed_vq=1,queue_size=2048 \
+    -- -i --rxq=1 --txq=1 --txd=2048 --rxd=2048 --nb-cores=1
+    >start
+
+3. Send large packets from vhost, check virtio can receive packets::
+
+    testpmd> vhost enable tx all
+    testpmd> set txpkts 65535,65535,65535,65535,65535
+    testpmd> start tx_first 32
+    testpmd> show port stats all
+
+4. Quit all testpmd and relaunch vhost with iova=pa::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 \
+    --vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:00:04.0]' \
+    --iova=pa \
+    -- -i --nb-cores=1 --mbuf-size=65535
+
+5. Rerun steps 2-3.