From patchwork Fri Nov 11 05:58:31 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 119748
From: Wei Ling <weix.ling@intel.com>
To: dts@dpdk.org
Cc: Wei Ling <weix.ling@intel.com>
Subject: [dts][PATCH V2 2/3] test_plans/vhost_user_interrupt_cbdma_test_plan: add new testplan
Date: Fri, 11 Nov 2022 13:58:31 +0800
Message-Id: <20221111055831.2421249-1-weix.ling@intel.com>

Add the new vhost_user_interrupt_cbdma test plan, which tests virtio enqueue and
dequeue using l3fwd-power with the split ring and packed ring paths and CBDMA.

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
 .../vhost_user_interrupt_cbdma_test_plan.rst | 79 +++++++++++++++++++
 1 file changed, 79 insertions(+)
 create mode 100644 test_plans/vhost_user_interrupt_cbdma_test_plan.rst

diff --git a/test_plans/vhost_user_interrupt_cbdma_test_plan.rst b/test_plans/vhost_user_interrupt_cbdma_test_plan.rst
new file mode 100644
index 00000000..96cef1a6
--- /dev/null
+++ b/test_plans/vhost_user_interrupt_cbdma_test_plan.rst
@@ -0,0 +1,79 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2022 Intel Corporation
+
+==============================================
+vhost-user interrupt mode with CBDMA test plan
+==============================================
+
+Description
+===========
+
+Vhost-user interrupt mode is tested with the l3fwd-power sample and CBDMA
+channels: small packets are sent from virtio-user to the vhost side, and the
+test checks that the vhost-user cores wake up and that they go back to sleep
+after the virtio side stops sending packets.
+
+Note:
+
+1. The packed virtqueue tests require QEMU version > 4.2.0.
+2. A local DPDK patch for the vhost PMD is needed when testing the vhost
+   asynchronous data path with testpmd.
+
+Prerequisites
+=============
+
+Topology
+--------
+Test flow: Virtio-user --> Vhost-user
+
+General set up
+--------------
+1. Compile DPDK::
+
+    # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static <dpdk build dir>
+    # ninja -C <dpdk build dir> -j 110
+
+   For example::
+
+    CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
+    ninja -C x86_64-native-linuxapp-gcc -j 110
+
+Test case
+=========
+
+Test Case 1: Wake up split ring vhost-user cores with l3fwd-power sample when multi-queues and CBDMA are enabled
+-----------------------------------------------------------------------------------------------------------------
+
+1. Launch virtio-user in server mode::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,server=1,queues=4 -- -i --rxq=4 --txq=4 --rss-ip
+
+2. Bind 4 CBDMA ports to the vfio-pci driver, then launch l3fwd-power with a virtual vhost device::
+
+    ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 9-12 -n 4 --log-level=9 \
+    --vdev 'eth_vhost0,iface=/tmp/sock0,queues=4,client=1,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;rxq0@0000:80:04.0;rxq1@0000:80:04.1;rxq2@0000:80:04.2;rxq3@0000:80:04.3]' -- -p 0x1 --parse-ptype 1 \
+    --config "(0,0,9),(0,1,10),(0,2,11),(0,3,12)"
+
+3. Send packets from testpmd and check that the vhost-user cores stay in the wake-up state::
+
+    testpmd>set fwd txonly
+    testpmd>start
+
+4. Stop and restart testpmd, then check that the vhost-user cores go back to sleep and wake up again.
+
+Test Case 2: Wake up packed ring vhost-user cores with l3fwd-power sample when multi-queues and CBDMA are enabled
+------------------------------------------------------------------------------------------------------------------
+
+1. Launch virtio-user in server mode::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,server=1,queues=4,packed_vq=1 -- -i --rxq=4 --txq=4 --rss-ip
+
+2. Bind 4 CBDMA ports to the vfio-pci driver, then launch l3fwd-power with a virtual vhost device::
+
+    ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 9-12 -n 4 --log-level=9 \
+    --vdev 'eth_vhost0,iface=/tmp/sock0,queues=4,client=1,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;rxq0@0000:80:04.0;rxq1@0000:80:04.1;rxq2@0000:80:04.2;rxq3@0000:80:04.3]' -- -p 0x1 --parse-ptype 1 \
+    --config "(0,0,9),(0,1,10),(0,2,11),(0,3,12)"
+
+3. Send packets from testpmd and check that the vhost-user cores stay in the wake-up state::
+
+    testpmd>set fwd txonly
+    testpmd>start
+
+4. Stop and restart testpmd, then check that the vhost-user cores go back to sleep and wake up again.
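
Step 2 of each test case assumes the four CBDMA channels named in the dmas=[...] lists are
already bound to vfio-pci. A minimal sketch of that binding step, run from the DPDK source
root and assuming the CBDMA devices sit at the 0000:80:04.x bus addresses used above (the
actual addresses are platform specific and can be checked with dpdk-devbind.py --status)::

    # show current device-to-driver bindings and locate the CBDMA channels
    ./usertools/dpdk-devbind.py --status

    # bind the four CBDMA channels referenced in the dmas=[...] lists to vfio-pci
    ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:80:04.0 0000:80:04.1 0000:80:04.2 0000:80:04.3

For step 4 of each case, the traffic can be stopped and restarted from the same testpmd
session on the virtio-user side, for example::

    testpmd>stop
    testpmd>start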