From patchwork Thu Aug 4 01:45:43 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 114593
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V3 1/2] test_plans/vhost_virtio_user_interrupt_cbdma_test_plan: modify testplan to test virtio dequeue
Date: Wed, 3 Aug 2022 21:45:43 -0400
Message-Id: <20220804014543.1145951-1-weix.ling@intel.com>

From DPDK-22.07, virtio supports async dequeue for the split and packed
ring paths, so modify the virtio_user_interrupt_cbdma test plan to cover
the split and packed ring async dequeue feature.

Signed-off-by: Wei Ling
---
 ..._virtio_user_interrupt_cbdma_test_plan.rst | 89 ++++++++++++++-----
 1 file changed, 68 insertions(+), 21 deletions(-)

diff --git a/test_plans/vhost_virtio_user_interrupt_cbdma_test_plan.rst b/test_plans/vhost_virtio_user_interrupt_cbdma_test_plan.rst
index c823f98e..3efa2326 100644
--- a/test_plans/vhost_virtio_user_interrupt_cbdma_test_plan.rst
+++ b/test_plans/vhost_virtio_user_interrupt_cbdma_test_plan.rst
@@ -5,19 +5,61 @@ vhost/virtio-user interrupt mode with cbdma test plan
 =====================================================
 
+Description
+===========
+
 Virtio-user interrupt need test with l3fwd-power sample, small packets send from traffic generator
 to virtio side, check virtio-user cores can be wakeup status, and virtio-user cores should be sleep
-status after stop sending packets from traffic generator when CBDMA enabled.This test plan cover
-vhost-user as the backend.
+status after the traffic generator stops sending packets.
+This test plan tests virtio-user Rx interrupt and LSC interrupt with vhost-user as the backend when CBDMA is enabled.
+
+.. note::
+
+   A DPDK local patch for the vhost PMD is needed when testing the vhost asynchronous data path with testpmd.
+
+Prerequisites
+=============
+
+Software
+--------
+ Trex: http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz
+
+General set up
+--------------
+1. Compile DPDK::
+
+    # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static <build_target>
+    # ninja -C <build_target> -j 110
+    For example:
+    CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
+    ninja -C x86_64-native-linuxapp-gcc -j 110
 
-Test Case1: LSC event between vhost-user and virtio-user with split ring and cbdma enabled
-==========================================================================================
-
-flow: Vhost <--> Virtio
+2. Get the PCI device ID and DMA device ID of the DUT; for example, 0000:18:00.0 is a PCI device ID and 0000:00:04.0, 0000:00:04.1 are DMA device IDs::
+
+    # ./usertools/dpdk-devbind.py -s
+
+    Network devices using kernel driver
+    ===================================
+    0000:18:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci
+
+    DMA devices using kernel driver
+    ===============================
+    0000:00:04.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+    0000:00:04.1 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+
+Test case
+=========
+
+Test Case1: Split ring LSC event between vhost-user and virtio-user with cbdma enable
+-------------------------------------------------------------------------------------
+This case tests the LSC interrupt of split ring virtio-user with vhost-user as the back-end
+when vhost uses the asynchronous operations with CBDMA channels.
+Flow: Vhost <--> Virtio
 
 1. Bind 1 CBDMA channel to vfio-pci driver, then start vhost-user side::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3000 -n 4 -a 00:04.0 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0,dmas=[txq0@00:04.0]' -- -i
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3000 -n 4 -a 00:04.0 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0,dmas=[txq0;rxq0]' \
+    -- -i --lcore-dma=[lcore13@0000:00:04.0]
 
     testpmd> set fwd mac
     testpmd> start
 
@@ -37,20 +79,21 @@ flow: Vhost <--> Virtio
 
     testpmd> show port info 0
     #it should show "down"
 
-Test Case2: Split ring virtio-user interrupt test with vhost-user as backend and cbdma enabled
-==============================================================================================
-
-flow: TG --> NIC --> Vhost --> Virtio
+Test Case2: Split ring virtio-user interrupt test with vhost-user as backend and cbdma enable
+---------------------------------------------------------------------------------------------
+This case tests Rx interrupt of split ring virtio-user with vhost-user as the back-end when vhost uses the asynchronous operations with CBDMA channels.
+Flow: TG --> NIC --> Vhost --> Virtio
 
 1. Bind 1 CBDMA channel and 1 NIC port to vfio-pci, launch testpmd with a virtual vhost device as backend::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0@00:04.0]' -- -i --rxq=1 --txq=1
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0;rxq0]' \
+    -- -i --rxq=1 --txq=1 --lcore-dma=[lcore3@0000:00:04.0,lcore3@0000:00:04.1]
 
     testpmd> start
 
 2. Start l3fwd-power with a virtio-user device::
 
     ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -c 0xc000 -n 4 --log-level='user1,7' --no-pci --file-prefix=l3fwd-pwd \
-    --vdev=virtio_user0,path=./vhost-net -- -p 1 --config="(0,0,14)" --parse-ptype
+    --vdev=virtio_user0,path=./vhost-net -- -p 1 -P --config="(0,0,14)" --parse-ptype
 
 3. Send packets with packet generator, check the virtio-user related core can be wakeup status.
 
@@ -58,14 +101,16 @@ flow: TG --> NIC --> Vhost --> Virtio
 
 5. Restart sending packets with packet generator, check virtio-user related core change to wakeup status again.
 
-Test Case3: LSC event between vhost-user and virtio-user with packed ring and cbdma enabled
-===========================================================================================
-
-flow: Vhost <--> Virtio
+Test Case3: Packed ring LSC event between vhost-user and virtio-user with cbdma enable
+--------------------------------------------------------------------------------------
+This case tests the LSC interrupt of packed ring virtio-user with vhost-user as the back-end
+when vhost uses the asynchronous operations with CBDMA channels.
+Flow: Vhost <--> Virtio
 
 1. Bind one cbdma port to vfio-pci driver, then start vhost-user side::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3000 -n 4 -a 00:04.0 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0,dmas=[txq0@00:04.0]' -- -i
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3000 -n 4 -a 00:04.0 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0,dmas=[txq0;rxq0]' \
+    -- -i --lcore-dma=[lcore13@0000:00:04.0,lcore13@0000:00:04.1]
 
     testpmd> set fwd mac
     testpmd> start
 
@@ -85,20 +130,22 @@ flow: Vhost <--> Virtio
 
     testpmd> show port info 0
     #it should show "down"
 
-Test Case4: Packed ring virtio-user interrupt test with vhost-user as backend and cbdma enabled
-================================================================================================
+Test Case4: Packed ring virtio-user interrupt test with vhost-user as backend and cbdma enable
+----------------------------------------------------------------------------------------------
+This case tests Rx interrupt of packed ring virtio-user with vhost-user as the back-end when vhost uses the asynchronous operations with CBDMA channels.
 
 flow: TG --> NIC --> Vhost --> Virtio
 
 1. Bind one cbdma port and one NIC port to vfio-pci, launch testpmd with a virtual vhost device as backend::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0@00:04.0]' -- -i --rxq=1 --txq=1
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0;rxq0]' \
+    -- -i --rxq=1 --txq=1 --lcore-dma=[lcore3@0000:00:04.0]
 
     testpmd> start
 
 2. Start l3fwd-power with a virtio-user device::
 
     ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -c 0xc000 -n 4 --log-level='user1,7' --no-pci --file-prefix=l3fwd-pwd \
-    --vdev=virtio_user0,path=./vhost-net,packed_vq=1 -- -p 1 --config="(0,0,14)" --parse-ptype
+    --vdev=virtio_user0,path=./vhost-net,packed_vq=1 -- -p 1 -P --config="(0,0,14)" --parse-ptype
 
 3. Send packets with packet generator, check the virtio-user related core can be wakeup status.

From patchwork Thu Aug 4 01:45:56 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 114594
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V3 2/2] tests/vhost_virtio_user_interrupt_cbdma: modify testsuite to test virtio dequeue
Date: Wed, 3 Aug 2022 21:45:56 -0400
Message-Id: <20220804014556.1146015-1-weix.ling@intel.com>

From DPDK-22.07, virtio supports async dequeue for the split and packed
ring paths, so modify the vhost_virtio_user_interrupt_cbdma test suite to
cover the split and packed ring async dequeue feature.
Signed-off-by: Wei Ling
---
 ...Suite_vhost_virtio_user_interrupt_cbdma.py | 86 ++++++++++---------
 1 file changed, 45 insertions(+), 41 deletions(-)

diff --git a/tests/TestSuite_vhost_virtio_user_interrupt_cbdma.py b/tests/TestSuite_vhost_virtio_user_interrupt_cbdma.py
index a89ae5d7..ce26b968 100644
--- a/tests/TestSuite_vhost_virtio_user_interrupt_cbdma.py
+++ b/tests/TestSuite_vhost_virtio_user_interrupt_cbdma.py
@@ -31,11 +31,11 @@ class TestVirtioUserInterruptCbdma(TestCase):
         self.core_list = self.dut.get_core_list(
             self.core_config, socket=self.ports_socket
         )
-        self.core_list_vhost = self.core_list[0:2]
-        self.core_list_l3fwd = self.core_list[2:4]
-        self.core_mask_vhost = utils.create_mask(self.core_list_vhost)
-        self.core_mask_l3fwd = utils.create_mask(self.core_list_l3fwd)
-        self.core_mask_virtio = self.core_mask_l3fwd
+        self.vhost_core_list = self.core_list[0:2]
+        self.l3fwd_core_list = self.core_list[2:4]
+        self.core_mask_vhost = utils.create_mask(self.vhost_core_list)
+        self.l3fwd_core_mask = utils.create_mask(self.l3fwd_core_list)
+        self.virtio_core_mask = self.l3fwd_core_mask
         self.pci_info = self.dut.ports_info[0]["pci"]
         self.cbdma_dev_infos = []
         self.dmas_info = None
@@ -78,14 +78,14 @@ class TestVirtioUserInterruptCbdma(TestCase):
         return True if out == "2048" else False
 
     def launch_l3fwd(self, path, packed=False):
-        self.core_interrupt = self.core_list_l3fwd[0]
+        self.core_interrupt = self.l3fwd_core_list[0]
         example_para = "./%s " % self.app_l3fwd_power_path
         if not packed:
             vdev = "virtio_user0,path=%s,cq=1" % path
         else:
             vdev = "virtio_user0,path=%s,cq=1,packed_vq=1" % path
         eal_params = self.dut.create_eal_parameters(
-            cores=self.core_list_l3fwd, prefix="l3fwd-pwd", no_pci=True, vdevs=[vdev]
+            cores=self.l3fwd_core_list, prefix="l3fwd-pwd", no_pci=True, vdevs=[vdev]
         )
         if self.check_2M_env:
             eal_params += " --single-file-segments"
@@ -171,21 +171,21 @@ class TestVirtioUserInterruptCbdma(TestCase):
             60,
         )
 
-    def test_lsc_event_between_vhost_user_and_virtio_user_with_split_ring_and_cbdma_enabled(
+    def test_split_ring_lsc_event_between_vhost_user_and_virtio_user_with_cbdma_enable(
         self,
     ):
         """
-        Test Case1: LSC event between vhost-user and virtio-user with split ring and cbdma enabled
+        Test Case1: Split ring LSC event between vhost-user and virtio-user with cbdma enable
         """
-        self.get_cbdma_ports_info_and_bind_to_dpdk(1)
-        lcore_dma = "[lcore{}@{}]".format(self.core_list_vhost[1], self.cbdma_list[0])
-        vhost_param = "--lcore-dma={}".format(lcore_dma)
+        self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=1)
+        lcore_dma = "lcore%s@%s" % (self.vhost_core_list[1], self.cbdma_list[0])
+        vhost_param = "--lcore-dma=[%s]" % lcore_dma
         vhost_eal_param = (
-            "--vdev 'net_vhost0,iface=vhost-net,queues=1,client=0,dmas=[txq0]'"
+            "--vdev 'net_vhost0,iface=vhost-net,queues=1,client=0,dmas=[txq0;rxq0]'"
         )
         ports = self.cbdma_list
         self.vhost_pmd.start_testpmd(
-            cores=self.core_list_vhost,
+            cores=self.vhost_core_list,
             ports=ports,
             prefix="vhost",
             eal_param=vhost_eal_param,
@@ -198,7 +198,7 @@ class TestVirtioUserInterruptCbdma(TestCase):
             "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net"
         )
         self.virtio_pmd.start_testpmd(
-            cores=self.core_list_l3fwd,
+            cores=self.l3fwd_core_list,
             no_pci=True,
             prefix="virtio",
             eal_param=virtio_eal_param,
@@ -210,26 +210,28 @@ class TestVirtioUserInterruptCbdma(TestCase):
         self.vhost_pmd.quit()
         self.check_virtio_side_link_status("down")
 
-    def test_split_ring_virtio_user_interrupt_test_with_vhost_user_as_backend_and_cbdma_enabled(
+    def test_split_ring_virtio_user_interrupt_test_with_vhost_user_as_backend_and_cbdma_enable(
         self,
     ):
         """
-        Test Case2: Split ring virtio-user interrupt test with vhost-user as backend and cbdma enabled
+        Test Case2: Split ring virtio-user interrupt test with vhost-user as backend and cbdma enable
         """
-        self.get_cbdma_ports_info_and_bind_to_dpdk(2)
-        lcore_dma = "[lcore{}@{},lcore{}@{}]".format(
-            self.core_list_vhost[1],
+        self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=2)
+        lcore_dma = "lcore%s@%s,lcore%s@%s" % (
+            self.vhost_core_list[1],
             self.cbdma_list[0],
-            self.core_list_vhost[1],
+            self.vhost_core_list[1],
             self.cbdma_list[1],
         )
-        vhost_param = "--rxq=1 --txq=1 --lcore-dma={}".format(lcore_dma)
-        vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0]'"
+        vhost_param = "--rxq=1 --txq=1 --lcore-dma=[%s]" % lcore_dma
+        vhost_eal_param = (
+            "--vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0;rxq0]'"
+        )
         ports = self.cbdma_list
         ports.append(self.dut.ports_info[0]["pci"])
         self.logger.info(ports)
         self.vhost_pmd.start_testpmd(
-            cores=self.core_list_vhost,
+            cores=self.vhost_core_list,
             ports=ports,
             prefix="vhost",
             eal_param=vhost_eal_param,
@@ -249,21 +251,21 @@ class TestVirtioUserInterruptCbdma(TestCase):
         time.sleep(3)
         self.check_interrupt_log(status="waked up")
 
-    def test_lsc_event_between_vhost_user_and_virtio_user_with_packed_ring_and_cbdma_enabled(
+    def test_packed_ring_lsc_event_between_vhost_user_and_virtio_user_with_cbdma_enable(
        self,
    ):
         """
-        Test Case3: LSC event between vhost-user and virtio-user with packed ring and cbdma enabled
+        Test Case3: Packed ring LSC event between vhost-user and virtio-user with cbdma enable
         """
-        self.get_cbdma_ports_info_and_bind_to_dpdk(1)
-        lcore_dma = "[lcore{}@{}]".format(self.core_list_vhost[1], self.cbdma_list[0])
-        vhost_param = "--lcore-dma={}".format(lcore_dma)
+        self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=1)
+        lcore_dma = "lcore%s@%s" % (self.vhost_core_list[1], self.cbdma_list[0])
+        vhost_param = "--lcore-dma=[%s]" % lcore_dma
         vhost_eal_param = (
-            "--vdev 'net_vhost0,iface=vhost-net,queues=1,client=0,dmas=[txq0]'"
+            "--vdev 'net_vhost0,iface=vhost-net,queues=1,client=0,dmas=[txq0;rxq0]'"
         )
         ports = self.cbdma_list
         self.vhost_pmd.start_testpmd(
-            cores=self.core_list_vhost,
+            cores=self.vhost_core_list,
             ports=ports,
             prefix="vhost",
             eal_param=vhost_eal_param,
@@ -276,7 +278,7 @@ class TestVirtioUserInterruptCbdma(TestCase):
             "--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1"
         )
         self.virtio_pmd.start_testpmd(
-            cores=self.core_list_l3fwd,
+            cores=self.l3fwd_core_list,
             no_pci=True,
             prefix="virtio",
             eal_param=virtio_eal_param,
@@ -288,25 +290,27 @@ class TestVirtioUserInterruptCbdma(TestCase):
         self.vhost_pmd.quit()
         self.check_virtio_side_link_status("down")
 
-    def test_packed_ring_virtio_user_interrupt_test_with_vhost_user_as_backend_and_cbdma_enabled(
+    def test_packed_ring_virtio_user_interrupt_test_with_vhost_user_as_backend_and_cbdma_enable(
         self,
     ):
         """
-        Test Case4: Packed ring virtio-user interrupt test with vhost-user as backend and cbdma enabled
+        Test Case4: Packed ring virtio-user interrupt test with vhost-user as backend and cbdma enable
         """
-        self.get_cbdma_ports_info_and_bind_to_dpdk(2)
-        lcore_dma = "[lcore{}@{},lcore{}@{}]".format(
-            self.core_list_vhost[1],
+        self.get_cbdma_ports_info_and_bind_to_dpdk(cbdma_num=2)
+        lcore_dma = "lcore%s@%s,lcore%s@%s" % (
+            self.vhost_core_list[1],
            self.cbdma_list[0],
-            self.core_list_vhost[1],
+            self.vhost_core_list[1],
            self.cbdma_list[1],
        )
-        vhost_param = "--rxq=1 --txq=1 --lcore-dma={}".format(lcore_dma)
-        vhost_eal_param = "--vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0]'"
+        vhost_param = "--rxq=1 --txq=1 --lcore-dma=[%s]" % lcore_dma
+        vhost_eal_param = (
+            "--vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0;rxq0]'"
+        )
         ports = self.cbdma_list
         ports.append(self.dut.ports_info[0]["pci"])
         self.vhost_pmd.start_testpmd(
-            cores=self.core_list_vhost,
+            cores=self.vhost_core_list,
             ports=ports,
             prefix="vhost",
             eal_param=vhost_eal_param,
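Both patches above repeatedly build testpmd's `--lcore-dma` parameter, which maps a worker lcore to a DMA channel as `lcoreN@BDF` pairs. A minimal Python sketch of how the suite composes this string (the helper function name is hypothetical, not part of the suite; the pair format mirrors the `lcore%s@%s` formatting in the patch):

```python
def build_lcore_dma_param(pairs):
    """Compose testpmd's --lcore-dma parameter from (lcore, DMA BDF) pairs.

    Each pair maps one worker lcore to one DMA channel, e.g. (13, "0000:00:04.0").
    The same lcore may appear in several pairs, as in Test Case2 and Test Case4,
    where one lcore drives two CBDMA channels.
    """
    body = ",".join("lcore%s@%s" % (lcore, bdf) for lcore, bdf in pairs)
    return "--lcore-dma=[%s]" % body

# One lcore driving two CBDMA channels, as in Test Case2:
print(build_lcore_dma_param([(3, "0000:00:04.0"), (3, "0000:00:04.1")]))
# --lcore-dma=[lcore3@0000:00:04.0,lcore3@0000:00:04.1]
```

The bracketed list form matches what the test suite passes to `start_testpmd` via `vhost_param`.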