get:
Show a patch.

patch:
Partially update a patch.

put:
Update a patch.

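For reference, a minimal read-only client sketch for this endpoint (a sketch assuming only the Python standard library; anonymous GET works as in the sample exchange below, while PUT/PATCH normally require maintainer credentials such as an API token):

    import json
    import urllib.request

    # Fetch the same patch object shown in the sample response below.
    url = "http://patchwork.dpdk.org/api/patches/109175/"
    with urllib.request.urlopen(url) as resp:
        patch = json.load(resp)

    print(patch["state"])              # "superseded" in the response below
    print(patch["series"][0]["name"])  # "migrate cbdma case in new testsuite"
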
GET /api/patches/109175/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 109175,
    "url": "http://patchwork.dpdk.org/api/patches/109175/?format=api",
    "web_url": "http://patchwork.dpdk.org/project/dts/patch/20220406082136.25382-1-weix.ling@intel.com/",
    "project": {
        "id": 3,
        "url": "http://patchwork.dpdk.org/api/projects/3/?format=api",
        "name": "DTS",
        "link_name": "dts",
        "list_id": "dts.dpdk.org",
        "list_email": "dts@dpdk.org",
        "web_url": "",
        "scm_url": "git://dpdk.org/tools/dts",
        "webscm_url": "http://git.dpdk.org/tools/dts/",
        "list_archive_url": "https://inbox.dpdk.org/dts",
        "list_archive_url_format": "https://inbox.dpdk.org/dts/{}",
        "commit_url_format": ""
    },
    "msgid": "<20220406082136.25382-1-weix.ling@intel.com>",
    "list_archive_url": "https://inbox.dpdk.org/dts/20220406082136.25382-1-weix.ling@intel.com",
    "date": "2022-04-06T08:21:36",
    "name": "[V1,3/5] test_plans/vm2vm_virtio_net_perf_cbdma_test_plan: add DPDK22.03 new feature",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": false,
    "hash": "d3e162aa3d95c0a77010f07f46ffb10558a6fef5",
    "submitter": {
        "id": 1828,
        "url": "http://patchwork.dpdk.org/api/people/1828/?format=api",
        "name": "Ling, WeiX",
        "email": "weix.ling@intel.com"
    },
    "delegate": null,
    "mbox": "http://patchwork.dpdk.org/project/dts/patch/20220406082136.25382-1-weix.ling@intel.com/mbox/",
    "series": [
        {
            "id": 22361,
            "url": "http://patchwork.dpdk.org/api/series/22361/?format=api",
            "web_url": "http://patchwork.dpdk.org/project/dts/list/?series=22361",
            "date": "2022-04-06T08:20:59",
            "name": "migrate cbdma case in new testsuite",
            "version": 1,
            "mbox": "http://patchwork.dpdk.org/series/22361/mbox/"
        }
    ],
    "comments": "http://patchwork.dpdk.org/api/patches/109175/comments/",
    "check": "pending",
    "checks": "http://patchwork.dpdk.org/api/patches/109175/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dts-bounces@dpdk.org>",
        "X-Original-To": "patchwork@inbox.dpdk.org",
        "Delivered-To": "patchwork@inbox.dpdk.org",
        "Received": [
            "from mails.dpdk.org (mails.dpdk.org [217.70.189.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id 3728DA0509;\n\tWed,  6 Apr 2022 10:21:50 +0200 (CEST)",
            "from [217.70.189.124] (localhost [127.0.0.1])\n\tby mails.dpdk.org (Postfix) with ESMTP id 306C1410EF;\n\tWed,  6 Apr 2022 10:21:50 +0200 (CEST)",
            "from mga01.intel.com (mga01.intel.com [192.55.52.88])\n by mails.dpdk.org (Postfix) with ESMTP id F1DA740689\n for <dts@dpdk.org>; Wed,  6 Apr 2022 10:21:47 +0200 (CEST)",
            "from fmsmga001.fm.intel.com ([10.253.24.23])\n by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;\n 06 Apr 2022 01:21:47 -0700",
            "from unknown (HELO localhost.localdomain) ([10.239.251.222])\n by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;\n 06 Apr 2022 01:21:44 -0700"
        ],
        "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/simple;\n d=intel.com; i=@intel.com; q=dns/txt; s=Intel;\n t=1649233308; x=1680769308;\n h=from:to:cc:subject:date:message-id:mime-version:\n content-transfer-encoding;\n bh=YtTv8/wNvg7wKK0lVIzf108ca/CW048Xw+/FQeqACOI=;\n b=JZmHxs97U7dFVd46Omap0kp2fEVWzTdycAcZrcZdp/doL5XLOGH0IJtD\n CVplmoaUPkkygsifPyi2bdYTw0D7QgNNj65T8P8D2VACe7q7tjEGPOI/P\n Z5OROpBI3W5YIq/GNNjBJi9jI5s2ZuGTuaJVoBE+Lbs6Tb4NOTyZkm4af\n YHA+ylcuHbL8FWPUaI/uLz9zTZQy4/Qazys6GI/THQ3NdS99EDKWyYqcb\n /gGa4YJExMhuXmH+hgvc3HwEtGUhBx3fLH/kAusjmje8rtD6MsSAeXNeZ\n j+iN0cTvYARGks9VVztRKmzO38yk0fNuZaf5vEHONnnQrkREzCJdFcKB7 w==;",
        "X-IronPort-AV": [
            "E=McAfee;i=\"6200,9189,10308\"; a=\"285951519\"",
            "E=Sophos;i=\"5.90,239,1643702400\"; d=\"scan'208\";a=\"285951519\"",
            "E=Sophos;i=\"5.90,239,1643702400\"; d=\"scan'208\";a=\"697282935\""
        ],
        "From": "Wei Ling <weix.ling@intel.com>",
        "To": "dts@dpdk.org",
        "Cc": "Wei Ling <weix.ling@intel.com>",
        "Subject": "[dts][PATCH V1 3/5] test_plans/vm2vm_virtio_net_perf_cbdma_test_plan:\n add DPDK22.03 new feature",
        "Date": "Wed,  6 Apr 2022 16:21:36 +0800",
        "Message-Id": "<20220406082136.25382-1-weix.ling@intel.com>",
        "X-Mailer": "git-send-email 2.25.1",
        "MIME-Version": "1.0",
        "Content-Transfer-Encoding": "8bit",
        "X-BeenThere": "dts@dpdk.org",
        "X-Mailman-Version": "2.1.29",
        "Precedence": "list",
        "List-Id": "test suite reviews and discussions <dts.dpdk.org>",
        "List-Unsubscribe": "<https://mails.dpdk.org/options/dts>,\n <mailto:dts-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://mails.dpdk.org/archives/dts/>",
        "List-Post": "<mailto:dts@dpdk.org>",
        "List-Help": "<mailto:dts-request@dpdk.org?subject=help>",
        "List-Subscribe": "<https://mails.dpdk.org/listinfo/dts>,\n <mailto:dts-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dts-bounces@dpdk.org"
    },
    "content": "As commit 53d3f4778c(vhost: integrate dmadev in asynchronous data-path),\nadd new test_plan/vm2vm_virtio_net_perf_cbdma_test_plan.\n\nSigned-off-by: Wei Ling <weix.ling@intel.com>\n---\n .../vm2vm_virtio_net_perf_cbdma_test_plan.rst | 876 ++++++++++++++++++\n 1 file changed, 876 insertions(+)\n create mode 100644 test_plans/vm2vm_virtio_net_perf_cbdma_test_plan.rst",
    "diff": "diff --git a/test_plans/vm2vm_virtio_net_perf_cbdma_test_plan.rst b/test_plans/vm2vm_virtio_net_perf_cbdma_test_plan.rst\nnew file mode 100644\nindex 00000000..3e766a2c\n--- /dev/null\n+++ b/test_plans/vm2vm_virtio_net_perf_cbdma_test_plan.rst\n@@ -0,0 +1,876 @@\n+.. Copyright (c) <2022>, Intel Corporation\n+   All rights reserved.\n+\n+   Redistribution and use in source and binary forms, with or without\n+   modification, are permitted provided that the following conditions\n+   are met:\n+\n+   - Redistributions of source code must retain the above copyright\n+     notice, this list of conditions and the following disclaimer.\n+\n+   - Redistributions in binary forim must reproduce the above copyright\n+     notice, this list of conditions and the following disclaimer in\n+     the documentation and/or other materials provided with the\n+     distribution.\n+\n+   - Neither the name of Intel Corporation nor the names of its\n+     contributors may be used to endorse or promote products derived\n+     from this software without specific prior written permission.\n+\n+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n+   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS\n+   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE\n+   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,\n+   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n+   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n+   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)\n+   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,\n+   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n+   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED\n+   OF THE POSSIBILITY OF SUCH DAMAGE.\n+\n+=====================================\n+vm2vm vhost-user/virtio-net test plan\n+=====================================\n+\n+Description\n+===========\n+\n+This test plan test several features in VM2VM topo:\n+1. Check Vhost tx offload function by verifing the TSO/cksum in the TCP/IP stack with vm2vm split ring and\n+packed ring vhost-user/virtio-net mergeable path with CBDMA channel.\n+2. Multi-queues number dynamic change in vm2vm vhost-user/virtio-net with split ring and packed ring when\n+vhost enqueue operation with multi-CBDMA channels.\n+Note:\n+1.For packed virtqueue virtio-net test, need qemu version > 4.2.0 and VM kernel version > v5.1.\n+2.For split virtqueue virtio-net with multi-queues server mode test, need qemu version > LTS 4.2.1,\n+DUT to old qemu exist reconnect issue when multi-queues test.\n+3.For PA mode, page by page mapping may exceed IOMMU's max capability, better to use 1G guest hugepage.\n+\n+\n+For more about dpdk-testpmd sample, please refer to the DPDK docments:\n+https://doc.dpdk.org/guides/testpmd_app_ug/run_app.html\n+\n+For virtio-user vdev parameter, you can refer to the DPDK docments:\n+https://doc.dpdk.org/guides/nics/virtio.html#virtio-paths-selection-and-usage.\n+\n+Prerequisites\n+=============\n+\n+Topology\n+--------\n+      Test flow: Virtio-net-->Vhost-->Testpmd-->Vhost-->Virtio-net\n+\n+Hardware\n+--------\n+      Supportted NICs: ALL\n+\n+Software\n+--------\n+      Trex:http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz\n+\n+General set up\n+--------------\n+1. 
Compile DPDK::\n+\n+      # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=<dpdk build dir>\n+      # ninja -C <dpdk build dir> -j 110\n+\n+2. Get the PCI device ID and DMA device ID of DUT, for example, 0000:18:00.0 is PCI device ID, 0000:00:04.0, 0000:00:04.1 is DMA device ID::\n+\n+      <dpdk dir># ./usertools/dpdk-devbind.py -s\n+\n+      Network devices using kernel driver\n+      ===================================\n+      0000:18:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci\n+\n+      DMA devices using kernel driver\n+      ===============================\n+      0000:00:04.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci\n+      0000:00:04.1 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci\n+\n+Test case\n+=========\n+\n+Common steps\n+------------\n+1. Bind 2 CBDMA channels to vfio-pci::\n+\n+      <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <DUT port DMA device id>\n+\n+      For example, Bind 1 NIC port and 2 CBDMA channels::\n+      <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:04.0,0000:00:04.1\n+\n+Test Case 1: VM2VM split ring vhost-user/virtio-net CBDMA enable test with tcp traffic\n+--------------------------------------------------------------------------------------\n+This case uses testpmd and QEMU and iperf to test split ring and CBDMA enable to get throughput between 2 VMs.\n+\n+1. Bind 2 CBDMA channels to vfio-pci, as common step 1.\n+\n+2. Launch vhost by below command::\n+\n+\t./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost \\\n+\t-a 0000:00:04.0 -a 0000:00:04.1 \\\n+\t--vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0],dma_ring_size=2048' \\\n+\t--vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0],dma_ring_size=2048' \\\n+\t--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --txq=1 --rxq==1 --lcore-dma=[lcore3@0000:00:04.0,lcore4@0000:00:04.1]\n+\ttestpmd>start\n+\n+3. 
Launch VM1 and VM2::\n+\n+\ttaskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \\\n+\t-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G0,share=on \\\n+\t-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \\\n+\t-chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \\\n+\t-device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n+\t-monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+\t-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \\\n+\t-chardev socket,id=char0,path=./vhost-net0 \\\n+\t-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \\\n+\t-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\\\n+\tcsum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10\n+\n+\ttaskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \\\n+\t-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G1,share=on \\\n+\t-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \\\n+\t-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n+\t-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n+\t-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+\t-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \\\n+\t-chardev socket,id=char0,path=./vhost-net1 \\\n+\t-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \\\n+\t-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,\\\n+\tcsum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12\n+\n+4. On VM1, set virtio device IP and run arp protocal::\n+\n+\tifconfig ens5 1.1.1.2\n+\tarp -s 1.1.1.8 52:54:00:00:00:02\n+\n+5. On VM2, set virtio device IP and run arp protocal::\n+\n+\tifconfig ens5 1.1.1.8\n+\tarp -s 1.1.1.2 52:54:00:00:00:01\n+\n+6. Check the iperf performance between two VMs by below commands::\n+\n+\tUnder VM1, run: `iperf -s -i 1`\n+\tUnder VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`\n+\n+7. Check 2VMs can receive and send big packets to each other::\n+\n+\ttestpmd>show port xstats all\n+\tPort 0 should have tx packets above 1522\n+\tPort 1 should have rx packets above 1522\n+\n+Test Case 2: VM2VM split ring vhost-user/virtio-net mergeable 8 queues CBDMA enable test with large packet payload valid check\n+------------------------------------------------------------------------------------------------------------------------------\n+This case uses testpmd and QEMU and iperf to test split ring mergeable path with 8 queues and CBDMA enable to get throughput between 2 VMs.\n+\n+1. Bind 16 CBDMA channels to vfio-pci, as common step 1.\n+\n+2. 
Launch vhost by below command::\n+\n+\t./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \\\n+\t-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \\\n+\t-a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \\\n+\t--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \\\n+\t--vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \\\n+\t--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \\\n+\t--lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore2@0000:00:04.4,lcore2@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7]\n+\ttestpmd>start\n+\n+3. Launch VM1 and VM2 using qemu::\n+\n+\ttaskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \\\n+\t-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n+\t-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \\\n+\t-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n+\t-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n+\t-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+\t-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \\\n+\t-chardev socket,id=char0,path=./vhost-net0,server \\\n+\t-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \\\n+\t-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\\\n+\tmq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10\n+\n+\ttaskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \\\n+\t-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n+\t-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \\\n+\t-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n+\t-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n+\t-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+\t-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \\\n+\t-chardev socket,id=char0,path=./vhost-net1,server \\\n+\t-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \\\n+\t-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,\\\n+\tmq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12\n+\n+4. On VM1, set virtio device IP and run arp protocal::\n+\n+\tethtool -L ens5 combined 8\n+\tifconfig ens5 1.1.1.2\n+\tarp -s 1.1.1.8 52:54:00:00:00:02\n+\n+5. On VM2, set virtio device IP and run arp protocal::\n+\n+\tethtool -L ens5 combined 8\n+\tifconfig ens5 1.1.1.8\n+\tarp -s 1.1.1.2 52:54:00:00:00:01\n+\n+6. Scp 1MB file form VM1 to VM2::\n+\n+\tUnder VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name\n+\n+7. Check the iperf performance between two VMs by below commands::\n+\n+\tUnder VM1, run: `iperf -s -i 1`\n+\tUnder VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`\n+\n+8. 
Quit and relaunch vhost w/ diff CBDMA channels::\n+\n+\t./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \\\n+\t-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \\\n+\t-a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \\\n+\t--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \\\n+\t--vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \\\n+\t--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \\\n+\t--lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.0,lcore3@0000:00:04.2,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:00:04.1,lcore4@0000:00:04.3,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7]\n+\ttestpmd>start\n+\n+9. Rerun step 5-6.\n+\n+10. Quit and relaunch vhost w/ iova=pa::\n+\n+\t./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \\\n+\t-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \\\n+\t-a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \\\n+\t--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \\\n+\t--vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \\\n+\t--iova=pa -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \\\n+\t--lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.0,lcore3@0000:00:04.2,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:00:04.1,lcore4@0000:00:04.3,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7]\n+\ttestpmd>start\n+\n+11. Rerun step 5-6.\n+\n+12. Quit and relaunch vhost w/o CBDMA channels::\n+\n+\t./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \\\n+\t--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \\\n+\t-- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=4 --rxq=4\n+\ttestpmd>start\n+\n+13. On VM1, set virtio device::\n+\n+\tethtool -L ens5 combined 4\n+\n+14. On VM2, set virtio device::\n+\n+\tethtool -L ens5 combined 4\n+\n+15. Scp 1MB file form VM1 to VM2::\n+\n+\tUnder VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name\n+\n+16. Check the iperf performance and compare with CBDMA enable performance, ensure CMDMA enable performance is higher::\n+\n+\tUnder VM1, run: `iperf -s -i 1`\n+\tUnder VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`\n+\n+17. Quit and relaunch vhost with 1 queues::\n+\n+     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \\\n+     --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=4' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=4' \\\n+     -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=1 --rxq=1\n+     testpmd>start\n+\n+18. On VM1, set virtio device::\n+\n+\tethtool -L ens5 combined 1\n+\n+19. 
On VM2, set virtio device::\n+\n+\tethtool -L ens5 combined 1\n+\n+20. Scp 1MB file form VM1 to VM2M, check packets can be forwarding success by scp::\n+\n+\tUnder VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name\n+\n+21. Check the iperf performance, ensure queue0 can work from vhost side::\n+\n+\tUnder VM1, run: `iperf -s -i 1`\n+\tUnder VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`\n+\n+Test Case 3: VM2VM split ring vhost-user/virtio-net non-mergeable 8 queues CBDMA enable test with large packet payload valid check\n+----------------------------------------------------------------------------------------------------------------------------------\n+This case uses testpmd and QEMU and iperf to test split ring non-mergeable path with 8 queues and CBDMA enable to get throughput between 2 VMs.\n+\n+1. Bind 16 CBDMA channels to vfio-pci, as common step 1.\n+\n+2. Launch vhost by below command::\n+\n+\t./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \\\n+\t-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \\\n+\t-a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \\\n+\t--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \\\n+\t--vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \\\n+\t--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \\\n+\t--lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore2@0000:00:04.4,lcore2@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7]\n+\ttestpmd>start\n+\n+3. 
Launch VM1 and VM2::\n+\n+\ttaskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \\\n+\t-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n+\t-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \\\n+\t-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n+\t-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n+\t-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+\t-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \\\n+\t-chardev socket,id=char0,path=./vhost-net0,server \\\n+\t-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \\\n+\t-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,\\\n+\tmq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10\n+\n+\ttaskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \\\n+\t-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n+\t-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \\\n+\t-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n+\t-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n+\t-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+\t-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \\\n+\t-chardev socket,id=char0,path=./vhost-net1,server \\\n+\t-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \\\n+\t-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,\\\n+\tmq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12\n+\n+4. On VM1, set virtio device IP and run arp protocal::\n+\n+\tethtool -L ens5 combined 8\n+\tifconfig ens5 1.1.1.2\n+\tarp -s 1.1.1.8 52:54:00:00:00:02\n+\n+5. On VM2, set virtio device IP and run arp protocal::\n+\n+\tethtool -L ens5 combined 8\n+\tifconfig ens5 1.1.1.8\n+\tarp -s 1.1.1.2 52:54:00:00:00:01\n+\n+6. Scp 1MB file form VM1 to VM2::\n+\n+\tUnder VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name\n+\n+7. Check the iperf performance between two VMs by below commands::\n+\n+\tUnder VM1, run: `iperf -s -i 1`\n+\tUnder VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`\n+\n+8. 
Quit and relaunch vhost w/ diff CBDMA channels::\n+\n+\t./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \\\n+\t-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \\\n+\t-a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \\\n+\t--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \\\n+\t--vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq1;txq2;txq3;txq4;txq5;txq6]' \\\n+\t--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \\\n+\t--lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.0,lcore3@0000:00:04.2,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:00:04.1,lcore4@0000:00:04.3,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7]\n+\ttestpmd>start\n+\n+9. Rerun step 5-6.\n+\n+10. Quit and relaunch vhost ports w/o CBDMA channels::\n+\n+\t./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \\\n+\t--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' \\\n+\t-- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8\n+\ttestpmd>start\n+\n+11. Scp 1MB file form VM1 to VM2::\n+\n+\tUnder VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name\n+\n+12. Check the iperf performance and compare with CBDMA enable performance, ensure CMDMA enable performance is higher::\n+\n+\tUnder VM1, run: `iperf -s -i 1`\n+\tUnder VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`\n+\n+13. Quit and relaunch vhost ports with 1 queues::\n+\n+\t./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \\\n+\t--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8' \\\n+\t-- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=1 --rxq=1\n+\ttestpmd>start\n+\n+14. On VM1, set virtio device::\n+\n+\tethtool -L ens5 combined 1\n+\n+15. On VM2, set virtio device::\n+\n+\tethtool -L ens5 combined 1\n+\n+16. Scp 1MB file form VM1 to VM2M, check packets can be forwarding success by scp::\n+\n+\tUnder VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name\n+\n+17. Check the iperf performance, ensure queue0 can work from vhost side::\n+\n+\tUnder VM1, run: `iperf -s -i 1`\n+\tUnder VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`\n+\n+Test Case 4: VM2VM split ring vhost-user/virtio-net mergeable 16 queues CBDMA enable test with large packet payload valid check\n+-------------------------------------------------------------------------------------------------------------------------------\n+This case uses testpmd and QEMU and iperf to test split ring mergeable path with 16 queues and CBDMA enable to get throughput between 2 VMs.\n+\n+1. Bind 16 CBDMA channels to vfio-pci, as common step 1.\n+\n+2. 
Launch vhost by below command::\n+\n+\t./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-9 -n 4 --file-prefix=vhost \\\n+\t-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \\\n+\t-a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \\\n+\t--vdev 'net_vhost0,iface=vhost-net0,client=1,queues=16,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15]' \\\n+\t--vdev 'net_vhost1,iface=vhost-net1,client=1,queues=16,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15]' \\\n+\t--iova=va -- -i --nb-cores=8 --txd=1024 --rxd=1024 --txq=16 --rxq=16 \\\n+\t--lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore3@0000:00:04.2,lcore3@0000:00:04.3,lcore4@0000:00:04.4,lcore4@0000:00:04.5,lcore5@0000:00:04.6,lcore5@0000:00:04.7,lcore6@0000:80:04.0,lcore6@0000:80:04.1,lcore7@0000:80:04.2,lcore7@0000:80:04.3,lcore8@0000:80:04.4,lcore8@0000:80:04.5,lcore9@0000:80:04.6,lcore9@0000:80:04.7]\n+\ttestpmd>start\n+\n+3. Launch VM1 and VM2 using qemu::\n+\n+\ttaskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \\\n+\t-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n+\t-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \\\n+\t-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n+\t-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n+\t-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+\t-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \\\n+\t-chardev socket,id=char0,path=./vhost-net0,server \\\n+\t-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=16 \\\n+\t-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\\\n+\tmq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10\n+\n+\ttaskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \\\n+\t-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n+\t-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \\\n+\t-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n+\t-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n+\t-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+\t-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \\\n+\t-chardev socket,id=char0,path=./vhost-net1,server \\\n+\t-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=16 \\\n+\t-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,\\\n+\tmq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :12\n+\n+4. On VM1, set virtio device IP and run arp protocal::\n+\n+\tethtool -L ens5 combined 16\n+\tifconfig ens5 1.1.1.2\n+\tarp -s 1.1.1.8 52:54:00:00:00:02\n+\n+5. On VM2, set virtio device IP and run arp protocal::\n+\n+\tethtool -L ens5 combined 16\n+\tifconfig ens5 1.1.1.8\n+\tarp -s 1.1.1.2 52:54:00:00:00:01\n+\n+6. Scp 1MB file form VM1 to VM2::\n+\n+\tUnder VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name\n+\n+7. 
Check the iperf performance between two VMs by below commands::\n+\n+\tUnder VM1, run: `iperf -s -i 1`\n+\tUnder VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`\n+\n+Test Case 5: VM2VM packed ring vhost-user/virtio-net CBDMA enable test with tcp traffic\n+---------------------------------------------------------------------------------------\n+This case uses testpmd and QEMU and iperf to test packed ring path and CBDMA enable to get throughput between 2 VMs.\n+\n+1. Bind 2 CBDMA channels to vfio-pci, as common step 1.\n+\n+2. Launch vhost by below command::\n+\n+\t./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost \\\n+\t-a 0000:00:04.0 -a 0000:00:04.1 \\\n+\t--vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0]' \\\n+\t--vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0]' \\\n+\t--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --txq=1 --rxq=1 --lcore-dma=[lcore3@0000:00:04.0,lcore4@0000:00:04.1]\n+\ttestpmd>start\n+\n+3. Launch VM1 and VM2 on socket 1 with qemu::\n+\n+\ttaskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \\\n+\t-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n+\t-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \\\n+\t-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n+\t-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n+\t-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+\t-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \\\n+\t-chardev socket,id=char0,path=./vhost-net0 \\\n+\t-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \\\n+\t-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\\\n+\tcsum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10\n+\n+\ttaskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \\\n+\t-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n+\t-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \\\n+\t-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n+\t-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n+\t-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+\t-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \\\n+\t-chardev socket,id=char0,path=./vhost-net1 \\\n+\t-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \\\n+\t-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,\\\n+\tcsum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12\n+\n+4. On VM1, set virtio device IP and run arp protocal::\n+\n+\tifconfig ens5 1.1.1.2\n+\tarp -s 1.1.1.8 52:54:00:00:00:02\n+\n+5. On VM2, set virtio device IP and run arp protocal::\n+\n+\tifconfig ens5 1.1.1.8\n+\tarp -s 1.1.1.2 52:54:00:00:00:01\n+\n+6. Check the iperf performance between two VMs by below commands::\n+\n+\tUnder VM1, run: `iperf -s -i 1`\n+\tUnder VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`\n+\n+7. 
Check 2VMs can receive and send big packets to each other::\n+\n+\ttestpmd>show port xstats all\n+\tPort 0 should have tx packets above 1522\n+\tPort 1 should have rx packets above 1522\n+\n+Test Case 6: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable test with large packet payload valid check\n+--------------------------------------------------------------------------------------------------------------------\n+This case uses testpmd and QEMU and iperf to test packed ring mergeable path with 8 queues and CBDMA enable to get throughput between 2 VMs.\n+\n+1. Bind 16 CBDMA channels to vfio-pci, as common step 1.\n+\n+2. Launch vhost by below command::\n+\n+\t./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \\\n+\t-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \\\n+\t-a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \\\n+\t--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \\\n+\t--vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \\\n+\t--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \\\n+\t--lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore3@0000:00:04.0,lcore3@0000:00:04.2,lcore3@0000:00:04.4,lcore3@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:00:04.1,lcore4@0000:00:04.3,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7]\n+\ttestpmd>start\n+\n+3. Launch VM1 and VM2 with qemu::\n+\n+\ttaskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \\\n+\t-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n+\t-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \\\n+\t-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n+\t-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n+\t-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+\t-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \\\n+\t-chardev socket,id=char0,path=./vhost-net0 \\\n+\t-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \\\n+\t-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\\\n+\tmq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10\n+\n+\ttaskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \\\n+\t-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n+\t-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \\\n+\t-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n+\t-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n+\t-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+\t-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \\\n+\t-chardev socket,id=char0,path=./vhost-net1 \\\n+\t-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \\\n+\t-device 
virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,\\\n+\tmq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12\n+\n+4. On VM1, set virtio device IP and run arp protocol::\n+\n+\tethtool -L ens5 combined 8\n+\tifconfig ens5 1.1.1.2\n+\tarp -s 1.1.1.8 52:54:00:00:00:02\n+\n+5. On VM2, set virtio device IP and run arp protocol::\n+\n+\tethtool -L ens5 combined 8\n+\tifconfig ens5 1.1.1.8\n+\tarp -s 1.1.1.2 52:54:00:00:00:01\n+\n+6. Scp 1MB file from VM1 to VM2::\n+\n+\tUnder VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name\n+\n+7. Check the iperf performance between two VMs by below commands::\n+\n+\tUnder VM1, run: `iperf -s -i 1`\n+\tUnder VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`\n+\n+8. Rerun step 5-6 five times.\n+\n+Test Case 7: VM2VM virtio-net packed ring non-mergeable 8 queues CBDMA enable test with large packet payload valid check\n+------------------------------------------------------------------------------------------------------------------------\n+This case uses testpmd and QEMU and iperf to test packed ring non-mergeable path with 8 queues and CBDMA enable to get throughput between 2 VMs.\n+\n+1. Bind 16 CBDMA channels to vfio-pci, as common step 1.\n+\n+2. Launch vhost by below command::\n+\n+\t./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \\\n+\t-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \\\n+\t-a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \\\n+\t--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \\\n+\t--vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7]' \\\n+\t--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \\\n+\t--lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore2@0000:00:04.4,lcore2@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7]\n+\ttestpmd>start\n+\n+3. 
Launch VM1 and VM2::\n+\n+\ttaskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \\\n+\t-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n+\t-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \\\n+\t-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n+\t-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n+\t-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+\t-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \\\n+\t-chardev socket,id=char0,path=./vhost-net0 \\\n+\t-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \\\n+\t-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,\\\n+\tmq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10\n+\n+\ttaskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \\\n+\t-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n+\t-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \\\n+\t-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n+\t-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n+\t-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+\t-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \\\n+\t-chardev socket,id=char0,path=./vhost-net1 \\\n+\t-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \\\n+\t-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,\\\n+\tmq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12\n+\n+4. On VM1, set virtio device IP and run arp protocol::\n+\n+\tethtool -L ens5 combined 8\n+\tifconfig ens5 1.1.1.2\n+\tarp -s 1.1.1.8 52:54:00:00:00:02\n+\n+5. On VM2, set virtio device IP and run arp protocol::\n+\n+\tethtool -L ens5 combined 8\n+\tifconfig ens5 1.1.1.8\n+\tarp -s 1.1.1.2 52:54:00:00:00:01\n+\n+6. Scp 1MB file from VM1 to VM2::\n+\n+\tUnder VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name\n+\n+7. Check the iperf performance between two VMs by below commands::\n+\n+\tUnder VM1, run: `iperf -s -i 1`\n+\tUnder VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`\n+\n+8. Rerun step 5-6 five times.\n+\n+Test Case 8: VM2VM virtio-net packed ring mergeable 16 queues CBDMA enabled test with large packet payload valid check\n+----------------------------------------------------------------------------------------------------------------------\n+This case uses testpmd and QEMU and iperf to test packed ring mergeable path with 16 queues and CBDMA enable to get throughput between 2 VMs.\n+\n+1. Bind 16 CBDMA channels to vfio-pci, as common step 1.\n+\n+2. 
Launch vhost by below command::\n+\n+\t./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-9 -n 4 --file-prefix=vhost \\\n+\t-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \\\n+\t-a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \\\n+\t--vdev 'net_vhost0,iface=vhost-net0,queues=16,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15]' \\\n+\t--vdev 'net_vhost1,iface=vhost-net1,queues=16,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;txq8;txq9;txq10;txq11;txq12;txq13;txq14;txq15]' \\\n+\t--iova=pa -- -i --nb-cores=8 --txd=1024 --rxd=1024 --txq=16 --rxq=16 \\\n+\t--lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore3@0000:00:04.2,lcore3@0000:00:04.3,lcore4@0000:00:04.4,lcore4@0000:00:04.5,lcore5@0000:00:04.6,lcore5@0000:00:04.7,lcore6@0000:80:04.0,lcore6@0000:80:04.1,lcore7@0000:80:04.2,lcore7@0000:80:04.3,lcore8@0000:80:04.4,lcore8@0000:80:04.5,lcore9@0000:80:04.6,lcore9@0000:80:04.7]\n+\ttestpmd>start\n+\n+3. Launch VM1 and VM2 with qemu::\n+\n+\ttaskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \\\n+\t-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G0,share=on \\\n+\t-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \\\n+\t-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n+\t-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n+\t-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+\t-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \\\n+\t-chardev socket,id=char0,path=./vhost-net0 \\\n+\t-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=16 \\\n+\t-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\\\n+\tmq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10\n+\n+\ttaskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \\\n+\t-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge1G1,share=on \\\n+\t-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \\\n+\t-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n+\t-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n+\t-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+\t-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \\\n+\t-chardev socket,id=char0,path=./vhost-net1 \\\n+\t-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=16 \\\n+\t-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,\\\n+\tmq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12\n+\n+4. On VM1, set virtio device IP and run arp protocol::\n+\n+\tethtool -L ens5 combined 16\n+\tifconfig ens5 1.1.1.2\n+\tarp -s 1.1.1.8 52:54:00:00:00:02\n+\n+5. On VM2, set virtio device IP and run arp protocol::\n+\n+\tethtool -L ens5 combined 16\n+\tifconfig ens5 1.1.1.8\n+\tarp -s 1.1.1.2 52:54:00:00:00:01\n+\n+6. Scp 1MB file from VM1 to VM2::\n+\n+\tUnder VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name\n+\n+7. 
Check the iperf performance between two VMs by below commands::\n+\n+\tUnder VM1, run: `iperf -s -i 1`\n+\tUnder VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`\n+\n+8. Rerun step 5-6 five times.\n+\n+Test Case 9: VM2VM packed ring vhost-user/virtio-net CBDMA enable test with tcp traffic when set iova=pa\n+--------------------------------------------------------------------------------------------------------\n+This case uses testpmd and QEMU and iperf to test packed ring and CBDMA enable when set iova=pa mode to get throughput between 2 VMs.\n+\n+1. Bind 2 CBDMA channels to vfio-pci, as common step 1.\n+\n+2. Launch vhost by below command::\n+\n+\t./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --file-prefix=vhost \\\n+\t-a 0000:00:04.0 -a 0000:00:04.1 \\\n+\t--vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0]' \\\n+\t--vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0]' \\\n+\t--iova=pa -- -i --nb-cores=2 --txd=1024 --rxd=1024 --txq=1 --rxq=1 --lcore-dma=[lcore3@0000:00:04.0,lcore4@0000:00:04.1]\n+\ttestpmd>start\n+\n+3. Launch VM1 and VM2 on socket 1 with qemu::\n+\n+\ttaskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \\\n+\t-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n+\t-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \\\n+\t-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n+\t-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n+\t-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+\t-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \\\n+\t-chardev socket,id=char0,path=./vhost-net0 \\\n+\t-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \\\n+\t-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\\\n+\tcsum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10\n+\n+\ttaskset -c 33 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \\\n+\t-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n+\t-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \\\n+\t-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n+\t-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n+\t-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+\t-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \\\n+\t-chardev socket,id=char0,path=./vhost-net1 \\\n+\t-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \\\n+\t-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,\\\n+\tcsum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12\n+\n+4. On VM1, set virtio device IP and run arp protocol::\n+\n+\tifconfig ens5 1.1.1.2\n+\tarp -s 1.1.1.8 52:54:00:00:00:02\n+\n+5. On VM2, set virtio device IP and run arp protocol::\n+\n+\tifconfig ens5 1.1.1.8\n+\tarp -s 1.1.1.2 52:54:00:00:00:01\n+\n+6. Scp 1MB file from VM1 to VM2::\n+\n+\tUnder VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name\n+\n+7. Check the iperf performance between two VMs by below commands::\n+\n+\tUnder VM1, run: `iperf -s -i 1`\n+\tUnder VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`\n+\n+8. 
Check 2VMs can receive and send big packets to each other::\n+\n+\ttestpmd>show port xstats all\n+\tPort 0 should have tx packets above 1522\n+\tPort 1 should have rx packets above 1522\n+\n+Test Case 10: VM2VM virtio-net packed ring mergeable 8 queues CBDMA enable and PA mode test with large packet payload valid check\n+---------------------------------------------------------------------------------------------------------------------------------\n+This case uses testpmd and QEMU and iperf to test packed ring mergeable path with 8 queues and CBDMA enable when set iova=pa mode to get throughput between 2 VMs.\n+\n+1. Bind 16 CBDMA channels to vfio-pci, as common step 1.\n+\n+2. Launch vhost by below command::\n+\n+\t./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \\\n+\t-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \\\n+\t-a 0000:80:04.0 -a 0000:80:04.1 -a 0000:80:04.2 -a 0000:80:04.3 -a 0000:80:04.4 -a 0000:80:04.5 -a 0000:80:04.6 -a 0000:80:04.7 \\\n+\t--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \\\n+\t--vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6]' \\\n+\t--iova=pa -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \\\n+\t--lcore-dma=[lcore2@0000:00:04.0,lcore2@0000:00:04.1,lcore2@0000:00:04.2,lcore2@0000:00:04.3,lcore2@0000:00:04.4,lcore2@0000:00:04.5,lcore3@0000:00:04.6,lcore3@0000:00:04.7,lcore4@0000:80:04.0,lcore4@0000:80:04.1,lcore4@0000:80:04.2,lcore4@0000:80:04.3,lcore4@0000:80:04.4,lcore4@0000:80:04.5,lcore4@0000:80:04.6,lcore5@0000:80:04.7]\n+\ttestpmd>start\n+\n+3. Launch VM1 and VM2 with qemu::\n+\n+\ttaskset -c 32 qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \\\n+\t-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n+\t-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img  \\\n+\t-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n+\t-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n+\t-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+\t-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \\\n+\t-chardev socket,id=char0,path=./vhost-net0 \\\n+\t-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \\\n+\t-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,\\\n+\tmq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :10\n+\n+\ttaskset -c 40 qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \\\n+\t-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \\\n+\t-numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img  \\\n+\t-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \\\n+\t-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \\\n+\t-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+\t-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \\\n+\t-chardev socket,id=char0,path=./vhost-net1 \\\n+\t-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \\\n+\t-device 
virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,\\\n+\tmq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on,packed=on -vnc :12\n+\n+4. On VM1, set virtio device IP and run arp protocol::\n+\n+\tethtool -L ens5 combined 8\n+\tifconfig ens5 1.1.1.2\n+\tarp -s 1.1.1.8 52:54:00:00:00:02\n+\n+5. On VM2, set virtio device IP and run arp protocol::\n+\n+\tethtool -L ens5 combined 8\n+\tifconfig ens5 1.1.1.8\n+\tarp -s 1.1.1.2 52:54:00:00:00:01\n+\n+6. Scp 1MB file from VM1 to VM2::\n+\n+\tUnder VM1, run: `scp [xxx] root@1.1.1.8:/`   [xxx] is the file name\n+\n+7. Check the iperf performance between two VMs by below commands::\n+\n+\tUnder VM1, run: `iperf -s -i 1`\n+\tUnder VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`\n+\n+8. Rerun step 5-6 five times.\n\\ No newline at end of file\n",
    "prefixes": [
        "V1",
        "3/5"
    ]
}
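As a usage note, the "mbox" URL in the response above serves the raw patch email, which is the usual way to apply a patch locally. A minimal sketch (standard library only; the local file name is illustrative, not part of the API):

    import urllib.request

    # "mbox" is taken from the JSON response above.
    mbox_url = "http://patchwork.dpdk.org/project/dts/patch/20220406082136.25382-1-weix.ling@intel.com/mbox/"
    with urllib.request.urlopen(mbox_url) as resp:
        raw_patch = resp.read()

    # Illustrative file name; apply afterwards with `git am 109175.mbox`.
    with open("109175.mbox", "wb") as f:
        f.write(raw_patch)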