[V1,1/3] test_plans/basic_4k_pages_dsa_test_plan: modify the dmas parameter

Message ID 20221111073830.2424805-1-weix.ling@intel.com (mailing list archive)
State Superseded
Headers
Series modify the dmas parameter and upstream

Commit Message

Ling, WeiX Nov. 11, 2022, 7:38 a.m. UTC
  Since DPDK 22.11, the dmas parameter has changed from
`lcore-dma=[lcore1@0000:00:04.0]` to `dmas=[txq0@0000:00:04.0]` by a DPDK
local patch, so modify the dmas parameter accordingly.
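
For illustration, a minimal before/after sketch of a vhost-user testpmd launch (the queue
count, lcore IDs and DSA work-queue addresses below are example values taken from this test
plan; each test case uses its own mapping):

    # old style: DMA channels assigned per lcore through --lcore-dma
    <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --no-huge -m 1024 -a 0000:e7:01.0 \
    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=2,client=1,dmas=[txq0;txq1]' \
    -- -i --nb-cores=1 --txq=2 --rxq=2 \
    --lcore-dma=[lcore11@0000:e7:01.0-q0,lcore11@0000:e7:01.0-q1]

    # new style: DMA channels assigned per virtqueue inside dmas=
    <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --no-huge -m 1024 -a 0000:e7:01.0 \
    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=2,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q1]' \
    -- -i --nb-cores=1 --txq=2 --rxq=2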

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
 test_plans/basic_4k_pages_dsa_test_plan.rst | 502 ++++++++++----------
 1 file changed, 246 insertions(+), 256 deletions(-)
  

Patch

diff --git a/test_plans/basic_4k_pages_dsa_test_plan.rst b/test_plans/basic_4k_pages_dsa_test_plan.rst
index 69cf7fc1..c37a1a51 100644
--- a/test_plans/basic_4k_pages_dsa_test_plan.rst
+++ b/test_plans/basic_4k_pages_dsa_test_plan.rst
@@ -3,7 +3,7 @@ 
 
 =============================================
 Basic 4k-pages test with DSA driver test plan
-==============================================
+=============================================
 
 Description
 ===========
@@ -22,9 +22,9 @@  and packed ring vhost-user/virtio-net mergeable and non-mergeable path.
 5. Vhost-user using 1G hugepges and virtio-user using 4k-pages.
 
 Note:
-1. When DMA devices are bound to vfio driver, VA mode is the default and recommended. For PA mode, page by page mapping may
+1.When DMA devices are bound to vfio driver, VA mode is the default and recommended. For PA mode, page by page mapping may
 exceed IOMMU's max capability, better to use 1G guest hugepage.
-2. DPDK local patch that about vhost pmd is needed when testing Vhost asynchronous data path with testpmd.
+2.DPDK local patch about vhost pmd is needed when testing Vhost asynchronous data path with testpmd.
 
 Prerequisites
 =============
@@ -41,13 +41,13 @@  General set up
 	CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
 	ninja -C x86_64-native-linuxapp-gcc -j 110
 
-3. Get the PCI device ID and DSA device ID of DUT, for example, 0000:6a:00.0 is PCI device ID, 0000:6a:01.0 - 0000:f6:01.0 are DSA device IDs::
+3. Get the PCI device ID and DSA device ID of DUT, for example, 0000:27:00.0 is PCI device ID, 0000:6a:01.0 - 0000:f6:01.0 are DSA device IDs::
 
 	<dpdk dir># ./usertools/dpdk-devbind.py -s
 
 	Network devices using kernel driver
 	===================================
-	0000:6a:00.0 'Ethernet Controller E810-C for QSFP 1592' drv=ice unused=vfio-pci
+	0000:27:00.0 'Ethernet Controller E810-C for QSFP 1592' drv=ice unused=vfio-pci
 
 	DMA devices using kernel driver
 	===============================
@@ -112,28 +112,27 @@  Common steps
 	Check WQ by 'ls /dev/dsa' and can find "wq0.0 wq1.0 wq1.1 wq1.2 wq1.3"
 
 Test Case 1: PVP split ring multi-queues with 4K-pages and dsa dpdk driver
-------------------------------------------------------------------------------
+--------------------------------------------------------------------------
 This case tests split ring with multi-queues can work normally in 4k-pages environment when vhost uses the asynchronous operations with dsa dpdk driver.
 
 1. Bind 2 dsa device and one nic port to vfio-pci like common step 1-2::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 27:00.0
 	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0
 
 2. Launch vhost by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --no-huge -m 1024 -a 0000:6a:00.0 -a 0000:e7:01.0 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0 \
-	--lcore-dma=[lcore11@0000:e7:01.0-q0,lcore11@0000:e7:01.0-q1,lcore12@0000:e7:01.0-q2,lcore12@0000:e7:01.0-q3,lcore13@0000:e7:01.0-q4,lcore13@0000:e7:01.0-q5,lcore14@0000:e7:01.0-q6,lcore14@0000:e7:01.0-q7]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --no-huge -m 1024 -a 0000:27:00.0 -a 0000:e7:01.0 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;txq2@0000:e7:01.0-q0;txq3@0000:e7:01.0-q0;txq4@0000:e7:01.0-q1;txq5@0000:e7:01.0-q1;rxq2@0000:e7:01.0-q2;rxq3@0000:e7:01.0-q2;rxq4@0000:e7:01.0-q3;rxq5@0000:e7:01.0-q3;rxq6@0000:e7:01.0-q3;rxq7@0000:e7:01.0-q3]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0
 	testpmd>set fwd mac
 	testpmd>start
 
 3. Launch virtio-user with inorder mergeable path::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-huge -m 1024 --no-pci --file-prefix=virtio \
 	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	-- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd csum
 	testpmd>start
 
@@ -152,10 +151,9 @@  This case tests split ring with multi-queues can work normally in 4k-pages envir
 
 7. Quit and relaunch vhost with 1G hugepage::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 -a 0000:e7:01.0,max_queues=4 -a 0000:ec:01.0,max_queues=4 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
-	--lcore-dma=[lcore11@0000:e7:01.0-q0,lcore11@0000:e7:01.0-q1,lcore12@0000:e7:01.0-q2,lcore12@0000:e7:01.0-q3,lcore13@0000:ec:01.0-q0,lcore13@0000:ec:01.0-q1,lcore14@0000:ec:01.0-q2,lcore14@0000:ec:01.0-q3]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:27:00.0 -a 0000:e7:01.0,max_queues=4 -a 0000:ec:01.0,max_queues=4 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;txq2@0000:e7:01.0-q1;txq3@0000:e7:01.0-q1;txq4@0000:e7:01.0-q2;txq5@0000:e7:01.0-q2;txq6@0000:e7:01.0-q3;txq7@0000:e7:01.0-q3;rxq0@0000:ec:01.0-q0;rxq1@0000:ec:01.0-q0;rxq2@0000:ec:01.0-q1;rxq3@0000:ec:01.0-q1;rxq4@0000:ec:01.0-q2;rxq5@0000:ec:01.0-q2;rxq6@0000:ec:01.0-q3;rxq7@0000:ec:01.0-q3]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8
 	testpmd>set fwd mac
 	testpmd>start
 
@@ -163,38 +161,37 @@  This case tests split ring with multi-queues can work normally in 4k-pages envir
 
 9. Quit and relaunch virtio-user with mergeable path::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-huge -m 1024 --no-pci --file-prefix=virtio \
 	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=8,server=1 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	-- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd csum
 	testpmd>start
 
 10. Rerun step 4-6.
 
 Test Case 2: PVP packed ring multi-queues with 4K-pages and dsa dpdk driver
-------------------------------------------------------------------------------
+---------------------------------------------------------------------------
 This case tests packed ring with multi-queues can work normally in 4k-pages environment when vhost uses the asynchronous operations with dsa dpdk driver.
 
 1. Bind 2 dsa device and one nic port to vfio-pci like common step 1-2::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 f6:01.0
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 27:00.0
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci e7:01.0 ec:01.0
 
 2. Launch vhost by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --no-huge -m 1024 -a 0000:6a:00.0 -a 0000:f1:01.0 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0 \
-	--lcore-dma=[lcore11@0000:f1:01.0-q0,lcore11@0000:f1:01.0-q1,lcore12@0000:f1:01.0-q2,lcore12@0000:f1:01.0-q3,lcore13@0000:f1:01.0-q4,lcore13@0000:f1:01.0-q5,lcore14@0000:f1:01.0-q6,lcore14@0000:f1:01.0-q7]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 --no-huge -m 1024 -a 0000:27:00.0 -a 0000:e7:01.0 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;txq2@0000:e7:01.0-q0;txq3@0000:e7:01.0-q0;txq4@0000:e7:01.0-q1;txq5@0000:e7:01.0-q1;rxq2@0000:e7:01.0-q2;rxq3@0000:e7:01.0-q2;rxq4@0000:e7:01.0-q3;rxq5@0000:e7:01.0-q3;rxq6@0000:e7:01.0-q3;rxq7@0000:e7:01.0-q3]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0
 	testpmd>set fwd mac
 	testpmd>start
 
 3. Launch virtio-user with inorder mergeable path::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-huge -m 1024 --no-pci --file-prefix=virtio \
 	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8,server=1 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	-- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
 4. Send tcp imix packets [64,1518] from packet generator, check the throughput can get expected data::
@@ -212,10 +209,9 @@  This case tests packed ring with multi-queues can work normally in 4k-pages envi
 
 7. Quit and relaunch vhost with 1G hugepage::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 -a 0000:f1:01.0,max_queues=4 -a 0000:f6:01.0,max_queues=4 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
-	--lcore-dma=[lcore11@0000:f1:01.0-q0,lcore11@0000:f1:01.0-q1,lcore12@0000:f1:01.0-q2,lcore12@0000:f1:01.0-q3,lcore13@0000:f6:01.0-q0,lcore13@0000:f6:01.0-q1,lcore14@0000:f6:01.0-q2,lcore14@0000:f6:01.0-q3]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:27:00.0 -a 0000:e7:01.0,max_queues=4 -a 0000:ec:01.0,max_queues=4 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@0000:e7:01.0-q0;txq1@0000:e7:01.0-q0;txq2@0000:e7:01.0-q1;txq3@0000:e7:01.0-q1;txq4@0000:e7:01.0-q2;txq5@0000:e7:01.0-q2;txq6@0000:e7:01.0-q3;txq7@0000:e7:01.0-q3;rxq0@0000:ec:01.0-q0;rxq1@0000:ec:01.0-q0;rxq2@0000:ec:01.0-q1;rxq3@0000:ec:01.0-q1;rxq4@0000:ec:01.0-q2;rxq5@0000:ec:01.0-q2;rxq6@0000:ec:01.0-q3;rxq7@0000:ec:01.0-q3]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8
 	testpmd>set fwd mac
 	testpmd>start
 
@@ -223,17 +219,19 @@  This case tests packed ring with multi-queues can work normally in 4k-pages envi
 
 9. Quit and relaunch virtio-user with mergeable path::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-huge -m 1024 --no-pci --file-prefix=virtio \
 	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=8,server=1 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	-- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
 10.Rerun step 4-6.
 
 Test Case 3: VM2VM split ring vhost-user/virtio-net 4K-pages and dsa dpdk driver test with tcp traffic
---------------------------------------------------------------------------------------------------------
-This case test the function of Vhost tx offload in the topology of vhost-user/virtio-net split ring mergeable path by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous operations with dsa dpdk driver in 4k-pages environment.
+------------------------------------------------------------------------------------------------------
+This case tests the function of Vhost tx offload in the topology of vhost-user/virtio-net split ring mergeable path
+by verifying the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous operations with dsa dpdk driver
+in 4k-pages environment.
 
 1. Bind 1 dsa device to vfio-pci like common step 2::
 
@@ -242,16 +240,16 @@  This case test the function of Vhost tx offload in the topology of vhost-user/vi
 2. Launch vhost by below command::
 
 	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:f1:01.0,max_queues=2 \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0;rxq0]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0;rxq0]' \
-	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=0 --lcore-dma=[lcore3@0000:f1:01.0-q0,lcore4@0000:f1:01.0-q1]
+	--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0@0000:f1:01.0-q0;rxq0@0000:f1:01.0-q0]' \
+	--vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0@0000:f1:01.0-q1;rxq0@0000:f1:01.0-q1]' \
+	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=0
 	testpmd>start
 
 3. Launch VM1 and VM2::
 
-	taskset -c 10 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+	taskset -c 10 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
 	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04.img  \
+	-numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04.img  \
 	-chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
 	-device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
 	-monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -260,9 +258,9 @@  This case test the function of Vhost tx offload in the topology of vhost-user/vi
 	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
 	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
 
-	taskset -c 11 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+	taskset -c 11 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
 	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k_2,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04-2.img  \
+	-numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04-2.img  \
 	-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
 	-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
 	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -291,9 +289,10 @@  This case test the function of Vhost tx offload in the topology of vhost-user/vi
 	testpmd>show port xstats all
 
 Test Case 4: VM2VM packed ring vhost-user/virtio-net 4K-pages and dsa dpdk driver test with tcp traffic
----------------------------------------------------------------------------------------------------------
+-------------------------------------------------------------------------------------------------------
 This case test the function of Vhost tx offload in the topology of vhost-user/virtio-net packed ring mergeable path 
-by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous operations with dsa dpdk driver in 4k-pages environment.
+by verifying the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous operations with dsa dpdk driver
+in 4k-pages environment.
 
 1. Bind 1 dsa device to vfio-pci like common step 2::
 
@@ -302,16 +301,16 @@  by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous o
 2. Launch vhost by below command::
 
 	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost -a 0000:f1:01.0,max_queues=2 \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0;rxq0]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0;rxq0]' \
-	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=0 --lcore-dma=[lcore3@0000:f1:01.0-q0,lcore4@0000:f1:01.0-q1]
+	--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0@0000:f1:01.0-q0;rxq0@0000:f1:01.0-q1]' \
+	--vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0@0000:f1:01.0-q0;rxq0@0000:f1:01.0-q1]' \
+	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=0
 	testpmd>start
 
 3. Launch VM1 and VM2::
 
-	taskset -c 32 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+	taskset -c 32 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
 	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04.img  \
+	-numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04.img  \
 	-chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
 	-device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
 	-monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -320,9 +319,9 @@  by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous o
 	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
 	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
 
-	taskset -c 33 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+	taskset -c 33 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
 	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k_2,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04-2.img  \
+	-numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04-2.img  \
 	-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
 	-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
 	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -351,7 +350,7 @@  by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous o
 	testpmd>show port xstats all
 
 Test Case 5: VM2VM vhost/virtio-net split packed ring multi queues with 1G/4k-pages and dsa dpdk driver
----------------------------------------------------------------------------------------------------------
+-------------------------------------------------------------------------------------------------------
 This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in 
 vm2vm vhost-user/virtio-net multi-queues mergeable path when vhost uses the asynchronous operations with dsa dpdk driver.
 And one virtio-net is split ring, the other is packed ring. The vhost run in 1G hugepages and the virtio-user run in 4k-pages environment.
@@ -363,17 +362,16 @@  And one virtio-net is split ring, the other is packed ring. The vhost run in 1G
 2. Launch vhost::
 
 	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:f1:01.0,max_queues=4 -a 0000:f6:01.0,max_queues=4 \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
-	--lcore-dma=[lcore2@0000:f1:01.0-q0,lcore2@0000:f1:01.0-q1,lcore3@0000:f1:01.0-q2,lcore3@0000:f1:01.0-q3,lcore4@0000:f6:01.0-q0,lcore4@0000:f6:01.0-q1,lcore5@0000:f6:01.0-q2,lcore3@0000:f6:01.0-q3]
+	--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q0;txq2@0000:f1:01.0-q0;txq3@0000:f1:01.0-q0;txq4@0000:f1:01.0-q1;txq5@0000:f1:01.0-q1;rxq2@0000:f1:01.0-q2;rxq3@0000:f1:01.0-q2;rxq4@0000:f1:01.0-q3;rxq5@0000:f1:01.0-q3;rxq6@0000:f1:01.0-q3;rxq7@0000:f1:01.0-q3]' \
+	--vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@0000:f6:01.0-q0;txq1@0000:f6:01.0-q0;txq2@0000:f6:01.0-q0;txq3@0000:f6:01.0-q0;txq4@0000:f6:01.0-q1;txq5@0000:f6:01.0-q1;rxq2@0000:f6:01.0-q2;rxq3@0000:f6:01.0-q2;rxq4@0000:f6:01.0-q3;rxq5@0000:f6:01.0-q3;rxq6@0000:f6:01.0-q3;rxq7@0000:f6:01.0-q3]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
 	testpmd>start
 
 3. Launch VM qemu::
 
-	taskset -c 10 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+	taskset -c 10 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
 	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04.img  \
+	-numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04.img  \
 	-chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
 	-device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
 	-monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -382,9 +380,9 @@  And one virtio-net is split ring, the other is packed ring. The vhost run in 1G
 	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
 	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
 
-	taskset -c 11 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+	taskset -c 11 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
 	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k_2,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04-2.img  \
+	-numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04-2.img  \
 	-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
 	-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
 	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -417,102 +415,95 @@  And one virtio-net is split ring, the other is packed ring. The vhost run in 1G
 8. Relaunch vm1 and rerun step 4-7.
 
 Test Case 6: VM2VM vhost/virtio-net split ring multi queues with 1G/4k-pages and dsa dpdk driver
----------------------------------------------------------------------------------------------------
+------------------------------------------------------------------------------------------------
 This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in
 vm2vm vhost-user/virtio-net split ring mergeable path when vhost uses the asynchronous operations with
 dsa dpdk driver. The vhost run in 1G hugepages and the virtio-user run in 4k-pages environment.
 
-1. Bind 2 dsa channel to vfio-pci, launch vhost::
+1. Bind 2 dsa channels to vfio-pci::
 
-	ls /dev/dsa #check wq configure, reset if exist
-	./usertools/dpdk-devbind.py -u f1:01.0 f1:01.0
-	./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 f1:01.0
-
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:f1:01.0,max_queues=4 -a 0000:f6:01.0,max_queues=4 \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,tso=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,tso=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
-	--lcore-dma=[lcore2@0000:f1:01.0-q0,lcore2@0000:f1:01.0-q1,lcore3@0000:f1:01.0-q2,lcore3@0000:f1:01.0-q3,lcore4@0000:f6:01.0-q0,lcore4@0000:f6:01.0-q1,lcore5@0000:f6:01.0-q2,lcore5@0000:f6:01.0-q3]
-	testpmd>start
+    ls /dev/dsa #check wq configure, reset if exist
+    ./usertools/dpdk-devbind.py -u f1:01.0 f6:01.0
+    ./usertools/dpdk-devbind.py -b vfio-pci f1:01.0 f6:01.0
 
-2. Prepare tmpfs with 4K-pages::
+2. Launch vhost::
 
-	mkdir /mnt/tmpfs_4k
-	mkdir /mnt/tmpfs_4k_2
-	mount tmpfs /mnt/tmpfs_4k -t tmpfs -o size=4G
-	mount tmpfs /mnt/tmpfs_4k_2 -t tmpfs -o size=4G
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:f1:01.0,max_queues=4 -a 0000:f6:01.0,max_queues=4 \
+    --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,tso=1,dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q0;txq2@0000:f1:01.0-q0;txq3@0000:f1:01.0-q0;txq4@0000:f1:01.0-q1;txq5@0000:f1:01.0-q1;rxq2@0000:f1:01.0-q2;rxq3@0000:f1:01.0-q2;rxq4@0000:f1:01.0-q3;rxq5@0000:f1:01.0-q3;rxq6@0000:f1:01.0-q3;rxq7@0000:f1:01.0-q3]' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,tso=1,dmas=[txq0@0000:f6:01.0-q0;txq1@0000:f6:01.0-q0;txq2@0000:f6:01.0-q0;txq3@0000:f6:01.0-q0;txq4@0000:f6:01.0-q1;txq5@0000:f6:01.0-q1;rxq2@0000:f6:01.0-q2;rxq3@0000:f6:01.0-q2;rxq4@0000:f6:01.0-q3;rxq5@0000:f6:01.0-q3;rxq6@0000:f6:01.0-q3;rxq7@0000:f6:01.0-q3]' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+    testpmd>start
 
 3. Launch VM qemu::
 
-	taskset -c 32 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
-	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04.img  \
-	-chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
-	-device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
-	-monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-	-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-	-chardev socket,id=char0,path=./vhost-net0,server \
-	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
-
-	taskset -c 33 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
-	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k_2,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04-2.img  \
-	-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-	-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-	-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-	-chardev socket,id=char0,path=./vhost-net1,server \
-	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
+    taskset -c 32 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04.img  \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=./vhost-net0,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
+
+    taskset -c 33 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k_2,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04-2.img  \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net1,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
 
 4. On VM1, set virtio device IP and run arp protocal::
 
-	ethtool -L ens5 combined 8
-	ifconfig ens5 1.1.1.2
-	arp -s 1.1.1.8 52:54:00:00:00:02
+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.2
+    arp -s 1.1.1.8 52:54:00:00:00:02
 
 5. On VM2, set virtio device IP and run arp protocal::
 
-	ethtool -L ens5 combined 8
-	ifconfig ens5 1.1.1.8
-	arp -s 1.1.1.2 52:54:00:00:00:01
+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.8
+    arp -s 1.1.1.2 52:54:00:00:00:01
 
 6. Scp 1MB file form VM1 to VM2::
 
-	Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/`, where [xxx] is the file name
 
 7. Check the iperf performance between two VMs by below commands::
 
-	Under VM1, run: `iperf -s -i 1`
-	Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
 
 8. Quit and relaunch vhost w/ diff dsa channels::
 
-	./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:f1:01.0,max_queues=2 -a 0000:f6:01.0,max_queues=2 \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,tso=1,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,tso=1,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4 \
-	--lcore-dma=[lcore2@0000:f1:01.0-q0,lcore3@0000:f1:01.0-q1,lcore4@0000:f6:01.0-q0,lcore5@0000:f6:01.0-q1]
-	testpmd>start
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:f1:01.0,max_queues=2 -a 0000:f6:01.0,max_queues=2 \
+    --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,tso=1,dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q1;txq2@0000:f6:01.0-q0;txq3@0000:f6:01.0-q1;rxq0@0000:f1:01.0-q0;rxq1@0000:f1:01.0-q1;rxq2@0000:f6:01.0-q0;rxq3@0000:f6:01.0-q1]' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,tso=1,dmas=[txq0@0000:f1:01.0-q0;txq1@0000:f1:01.0-q1;txq2@0000:f6:01.0-q0;txq3@0000:f6:01.0-q1;rxq0@0000:f1:01.0-q0;rxq1@0000:f1:01.0-q1;rxq2@0000:f6:01.0-q0;rxq3@0000:f6:01.0-q1]' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4
+    testpmd>start
 
 9. On VM1, set virtio device::
 
-	<VM1># ethtool -L ens5 combined 4
+    <VM1># ethtool -L ens5 combined 4
 
 10. On VM2, set virtio device::
 
-	<VM2># ethtool -L ens5 combined 4
+    <VM2># ethtool -L ens5 combined 4
 
 11. Rerun step 6-7.
 
 Test Case 7: PVP split ring multi-queues with 4K-pages and dsa kernel driver
---------------------------------------------------------------------------------
+----------------------------------------------------------------------------
 This case tests split ring with multi-queues can work normally in 4k-pages environment when vhost uses the asynchronous operations with dsa kernel driver.
 
 1. Bind one nic port to vfio-pci and 2 dsa device to idxd like common step 1 and 3::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 27:00.0
 
 	.ls /dev/dsa #check wq configure, reset if exist
 	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
@@ -523,19 +514,18 @@  This case tests split ring with multi-queues can work normally in 4k-pages envir
 
 2. Launch vhost by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -m 1024 --no-huge -a 0000:6a:00.0 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0 \
-	--lcore-dma=[lcore11@wq0.0,lcore12@wq0.1,lcore13@wq0.2,lcore14@wq0.3]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -m 1024 --no-huge -a 0000:27:00.0 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0
 	testpmd>set fwd mac
 	testpmd>start
 
 3. Launch virtio-user with inorder mergeable path::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-huge -m 1024 --no-pci --file-prefix=virtio \
 	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=8,server=1 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	-- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
 4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
@@ -551,47 +541,55 @@  This case tests split ring with multi-queues can work normally in 4k-pages envir
 	testpmd>start
 	testpmd>show port stats all
 
-7. Quit and relaunch vhost with diff dsa virtual channels and 1G-page::::
+7. Quit and relaunch vhost with diff dsa virtual channels and 1G-page::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:6a:00.0 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
-	--lcore-dma=[lcore11@wq0.0,lcore12@wq0.0,lcore13@wq0.1,lcore13@wq1.0,lcore14@wq1.1,lcore14@wq1.2]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -a 0000:27:00.0 \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.1;txq3@wq0.1;txq4@wq0.2;txq5@wq0.2;txq6@wq0.3;txq7@wq0.3;rxq0@wq0.0;rxq1@wq0.0;rxq2@wq0.1;rxq3@wq0.1;rxq4@wq0.2;rxq5@wq0.2;rxq6@wq0.3;rxq7@wq0.3]' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8
 	testpmd>set fwd mac
 	testpmd>start
 
 8. Rerun step 4-6.
 
+9. Quit and relaunch virtio-user with mergeable path::
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-huge -m 1024 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=8,server=1 \
+	-- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
+	testpmd>start
+
+10. Rerun step 4-6.
+
 Test Case 8: PVP packed ring multi-queues with 4K-pages and dsa kernel driver
----------------------------------------------------------------------------------
+-----------------------------------------------------------------------------
 This case tests split ring with multi-queues can work normally in 4k-pages environment when vhost uses the asynchronous operations with dsa kernel driver.
 
 1. Bind one nic port to vfio-pci and 2 dsa device to idxd like common step 1 and 3::
 
-	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 6a:00.0
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 27:00.0
 
 	.ls /dev/dsa #check wq configure, reset if exist
-	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
+	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 
 	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
-	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
-	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1
+	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0
+	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 1
 	ls /dev/dsa #check wq configure success
 
 2. Launch vhost by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -m 1024 --no-huge -a 0000:6a:00.0 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0 \
-	--lcore-dma=[lcore11@wq0.0,lcore12@wq0.1,lcore13@wq0.2,lcore14@wq0.3]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18 -m 1024 --no-huge -a 0000:27:00.0 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --txq=8 --rxq=8 --no-numa --socket-num=0
 	testpmd>set fwd mac
 	testpmd>start
 
 3. Launch virtio-user with inorder mergeable path::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-huge -m 1024 --no-pci --file-prefix=virtio \
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-huge -m 1024 --no-pci --file-prefix=virtio \
 	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,packed_vq=1,queues=8,server=1 \
-	-- -i --nb-cores=1 --txq=8 --rxq=8 --txd=1024 --rxd=1024
-	testpmd>set fwd mac
+	-- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
 	testpmd>start
 
 4. Send imix packets [64,1518] from packet generator, check the throughput can get expected data::
@@ -609,17 +607,26 @@  This case tests split ring with multi-queues can work normally in 4k-pages envir
 
 7. Quit and relaunch vhost::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18  -a 0000:6a:00.0 \
-	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --txq=8 --rxq=8 \
-	--lcore-dma=[lcore11@wq0.0,lcore11@wq0.1,lcore12@wq1.0,lcore2@wq1.1]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 10-18  -a 0000:27:00.0 \
+	--file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;txq6@wq0.1;txq7@wq0.1;rxq0@wq0.0;rxq1@wq0.0;rxq2@wq0.0;rxq3@wq0.0;rxq4@wq0.1;rxq5@wq0.1;rxq6@wq0.1;rxq7@wq0.1]' \
+	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --txq=8 --rxq=8
 	testpmd>set fwd mac
 	testpmd>start
 
 8. Rerun step 4-6.
 
+9. Quit and relaunch virtio-user with mergeable path::
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 1-5 --no-huge -m 1024 --no-pci --file-prefix=virtio \
+	--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,packed_vq=1,queues=8,server=1 \
+	-- -i --nb-cores=4 --txq=8 --rxq=8 --txd=1024 --rxd=1024
+	testpmd>set fwd csum
+	testpmd>start
+
+10. Rerun step 4-6.
+
 Test Case 9: VM2VM split ring vhost-user/virtio-net 4K-pages and dsa kernel driver test with tcp traffic
----------------------------------------------------------------------------------------------------------
+--------------------------------------------------------------------------------------------------------
 This case test the function of Vhost tx offload in the topology of vhost-user/virtio-net split ring mergeable path
 by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous operations with dsa dpdk driver
 in 4k-pages environment.
@@ -629,22 +636,22 @@  in 4k-pages environment.
 	ls /dev/dsa #check wq configure, reset if exist
 	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0
 	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0
-	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0
+	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0
 	ls /dev/dsa #check wq configure success
 
 2. Launch the Vhost sample by below commands::
 
 	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-huge -m 1024 --file-prefix=vhost \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0;rxq0]' \
-	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --rxq=1 --txq=1 --no-numa --socket-num=0 --lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore3@wq0.2,lcore3@wq0.3]
+	--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0@wq0.0;rxq0@wq0.1]' \
+	--vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0@wq0.2;rxq0@wq0.3]' \
+	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --rxq=1 --txq=1 --no-numa --socket-num=0
 	testpmd>start
 
 3. Launch VM1 and VM2 on socket 1::
 
-	taskset -c 7 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
+	taskset -c 7 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 8 -m 4096 \
 	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04.img  \
+	-numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04.img  \
 	-chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
 	-device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
 	-monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -653,9 +660,9 @@  in 4k-pages environment.
 	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
 	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,guest_ufo=on,host_ufo=on -vnc :10
 
-	taskset -c 8 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
+	taskset -c 8 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 8 -m 4096 \
 	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k_2,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04-2.img  \
+	-numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04-2.img  \
 	-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
 	-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
 	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -684,33 +691,32 @@  in 4k-pages environment.
 	testpmd>show port xstats all
 
 Test Case 10: VM2VM packed ring vhost-user/virtio-net 4K-pages and dsa kernel driver test with tcp traffic
------------------------------------------------------------------------------------------------------------
+----------------------------------------------------------------------------------------------------------
 This case test the function of Vhost tx offload in the topology of vhost-user/virtio-net packed ring mergeable path
 by verifing the TSO/cksum in the TCP/IP stack when vhost uses the asynchronous operations with dsa dpdk driver
 in 4k-pages environment.
 
-1. Bind 2 dsa device to idxd like common step 2::
+1. Bind 1 dsa device to idxd like common step 2::
 
 	ls /dev/dsa #check wq configure, reset if exist
-	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
-	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
-	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
-	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1
+	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0
+	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0
+	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 2 0
 	ls /dev/dsa #check wq configure success
 
 2. Launch the Vhost sample by below commands::
 
 	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-4 -n 4 --no-huge -m 1024 --file-prefix=vhost \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=1,dmas=[txq0;rxq0]' \
-	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=0 --lcore-dma=[lcore3@wq0.0,lcore4@wq1.0]
+	--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0@wq0.0;rxq0@wq0.0]' \
+	--vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0@wq0.1;rxq0@wq0.1]' \
+	--iova=va -- -i --nb-cores=2 --txd=1024 --rxd=1024 --no-numa --socket-num=0
 	testpmd>start
 
 3. Launch VM1 and VM2 with qemu::
 
-	taskset -c 7 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+	taskset -c 7 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
 	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04.img  \
+	-numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04.img  \
 	-chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
 	-device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
 	-monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -719,9 +725,9 @@  in 4k-pages environment.
 	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
 	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :10
 
-	taskset -c 8 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+	taskset -c 8 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
 	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k_2,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04-2.img  \
+	-numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04-2.img  \
 	-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
 	-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
 	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -750,40 +756,33 @@  in 4k-pages environment.
 	testpmd>show port xstats all
 
 Test Case 11: VM2VM vhost/virtio-net split packed ring multi queues with 1G/4k-pages and dsa kernel driver
------------------------------------------------------------------------------------------------------------
+----------------------------------------------------------------------------------------------------------
 This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in
 vm2vm vhost-user/virtio-net split and packed ring mergeable path when vhost uses the asynchronous operations with
 dsa kernel driver. The vhost run in 1G hugepages and the virtio-user run in 4k-pages environment.
 
-1. Bind 8 dsa device to idxd like common step 3::
+1. Bind 2 dsa device to idxd like common step 3::
 
 	ls /dev/dsa #check wq configure, reset if exist
-	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
-	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0 74:01.0 79:01.0 e7:01.0 ec:01.0 f1:01.0 f6:01.0
-	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 0
-	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 1
-	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 2
-	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 3
-	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 4
-	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 5
-	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 6
-	<dpdk dir># ./<dpdk build dir>/drivers/dma/idxd/dpdk_idxd_cfg.py -q 8 7
+	<dpdk dir># ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
+	<dpdk dir># ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
+	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0
+	<dpdk dir># ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 1
 	ls /dev/dsa #check wq configure success
 
 2. Launch vhost::
 
 	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
-	--lcore-dma=[lcore2@wq0.0,lcore2@wq1.1,lcore2@wq2.2,lcore2@wq3.3,lcore3@wq0.0,lcore3@wq2.2,lcore3@wq4.4,lcore3@wq5.5,lcore3@wq6.6,lcore3@wq7.7,lcore4@wq1.1,lcore4@wq3.3,lcore4@wq0.1,lcore4@wq1.2,lcore4@wq2.3,lcore4@wq3.4,lcore4@wq4.5,lcore4@wq5.6,lcore4@wq6.7,lcore5@wq7.0]
+	--vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \
+	--vdev 'net_vhost1,iface=vhost-net1,queues=8,dmas=[txq0@wq0.2;txq1@wq0.2;txq2@wq0.2;txq3@wq0.2;txq4@wq0.3;txq5@wq0.3;rxq2@wq1.2;rxq3@wq1.2;rxq4@wq1.3;rxq5@wq1.3;rxq6@wq1.3;rxq7@wq1.3]' \
+	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
 	testpmd>start
 
 3. Launch VM qemu::
 
-	taskset -c 32 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+	taskset -c 32 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
 	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04.img  \
+	-numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04.img  \
 	-chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
 	-device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
 	-monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -792,9 +791,9 @@  dsa kernel driver. The vhost run in 1G hugepages and the virtio-user run in 4k-p
 	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
 	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
 
-	taskset -c 33 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+	taskset -c 33 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
 	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k_2,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04-2.img  \
+	-numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04-2.img  \
 	-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
 	-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
 	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
@@ -827,95 +826,86 @@  dsa kernel driver. The vhost run in 1G hugepages and the virtio-user run in 4k-p
 8. Relaunch vm1 and rerun step 4-7.
 
 Test Case 12: VM2VM vhost/virtio-net split ring multi queues with 1G/4k-pages and dsa kernel driver
------------------------------------------------------------------------------------------------------
+---------------------------------------------------------------------------------------------------
 This case uses iperf and scp to test the payload of large packet (larger than 1MB) is valid after packets forwarding in
 vm2vm vhost-user/virtio-net split ring mergeable path when vhost uses the asynchronous operations with
 dsa kernel driver. The vhost run in 1G hugepages and the virtio-user run in 4k-pages environment.
 
 1. Bind 2 dsa channel to idxd, launch vhost::
 
-	ls /dev/dsa #check wq configure, reset if exist
-	./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
-	./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
-	./drivers/raw/ioat/dpdk_idxd_cfg.py -q 4 0
-	./drivers/raw/ioat/dpdk_idxd_cfg.py -q 4 1
+    ls /dev/dsa #check wq configure, reset if exist
+    ./usertools/dpdk-devbind.py -u 6a:01.0 6f:01.0
+    ./usertools/dpdk-devbind.py -b idxd 6a:01.0 6f:01.0
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 0
+    ./drivers/dma/idxd/dpdk_idxd_cfg.py -q 4 1
 
 2. Launch vhost::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:6a:01.0,max_queues=4 -a 0000:6f:01.0,max_queues=4 \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8 \
-	--lcore-dma=[lcore2@wq0.0,lcore2@wq0.1,lcore3@wq0.2,lcore3@wq0.3,lcore4@wq1.0,lcore4@wq1.1,lcore5@wq1.2,lcore5@wq1.3]
-	testpmd>start
-
-3. Prepare tmpfs with 4K-pages::
-
-	mkdir /mnt/tmpfs_4k
-	mkdir /mnt/tmpfs_4k_2
-	mount tmpfs /mnt/tmpfs_4k -t tmpfs -o size=4G
-	mount tmpfs /mnt/tmpfs_4k_2 -t tmpfs -o size=4G
-
-4. Launch VM qemu::
+    <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost -a 0000:6a:01.0,max_queues=4 -a 0000:6f:01.0,max_queues=4 \
+    --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,tso=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,tso=1,dmas=[txq0@wq0.0;txq1@wq0.0;txq2@wq0.0;txq3@wq0.0;txq4@wq0.1;txq5@wq0.1;rxq2@wq1.0;rxq3@wq1.0;rxq4@wq1.1;rxq5@wq1.1;rxq6@wq1.1;rxq7@wq1.1]' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
+    testpmd>start
 
-	taskset -c 10 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
-	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04.img  \
-	-chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
-	-device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
-	-monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-	-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
-	-chardev socket,id=char0,path=./vhost-net0,server \
-	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
+3. Launch VM qemu::
 
-	taskset -c 11 /usr/local/qemu-7.0.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
-	-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k_2,share=on \
-	-numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu20-04-2.img  \
-	-chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-	-device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-	-monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
-	-netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
-	-chardev socket,id=char0,path=./vhost-net1,server \
-	-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
-	-device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
+    taskset -c 10 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 1 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04.img  \
+    -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=./vhost-net0,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
+
+    taskset -c 11 /usr/local/qemu-7.1.0/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 1 -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/tmpfs_4k_2,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/xingguang/osimg/ubuntu22-04-2.img  \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -chardev socket,id=char0,path=./vhost-net1,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=8 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,mq=on,vectors=40,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
 
-5. On VM1, set virtio device IP and run arp protocal::
+4. On VM1, set virtio device IP and run arp protocol::
 
-	ethtool -L ens5 combined 8
-	ifconfig ens5 1.1.1.2
-	arp -s 1.1.1.8 52:54:00:00:00:02
+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.2
+    arp -s 1.1.1.8 52:54:00:00:00:02
 
-6. On VM2, set virtio device IP and run arp protocal::
+5. On VM2, set virtio device IP and run arp protocol::
 
-	ethtool -L ens5 combined 8
-	ifconfig ens5 1.1.1.8
-	arp -s 1.1.1.2 52:54:00:00:00:01
+    ethtool -L ens5 combined 8
+    ifconfig ens5 1.1.1.8
+    arp -s 1.1.1.2 52:54:00:00:00:01
 
-7. Scp 1MB file form VM1 to VM2::
+6. Scp 1MB file from VM1 to VM2::
 
-	Under VM1, run: `scp [xxx] root@1.1.1.8:/` [xxx] is the file name
+    Under VM1, run: `scp [xxx] root@1.1.1.8:/`, where [xxx] is the file name
 
-8. Check the iperf performance between two VMs by below commands::
+7. Check the iperf performance between two VMs by below commands::
 
-	Under VM1, run: `iperf -s -i 1`
-	Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
+    Under VM1, run: `iperf -s -i 1`
+    Under VM2, run: `iperf -c 1.1.1.2 -i 1 -t 60`
 
-9. Quit and relaunch vhost w/ diff dsa channels::
+8. Quit and relaunch vhost w/ diff dsa channels::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
-	--vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \
-	--vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;rxq0;rxq1;rxq2;rxq3]' \
-	--iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4 \
-	--lcore-dma=[lcore2@wq0.0,lcore3@wq0.1,lcore4@wq1.0,lcore5@wq1.1]
-	testpmd>start
+    <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost \
+    --vdev 'net_vhost0,iface=vhost-net0,queues=8,client=1,tso=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;rxq0@wq0.0;rxq1@wq0.1;rxq2@wq0.2;rxq3@wq0.3]' \
+    --vdev 'net_vhost1,iface=vhost-net1,queues=8,client=1,tso=1,dmas=[txq0@wq0.0;txq1@wq0.1;txq2@wq0.2;txq3@wq0.3;rxq0@wq0.0;rxq1@wq0.1;rxq2@wq0.2;rxq3@wq0.3]' \
+    --iova=va -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4
+    testpmd>start
 
-10. On VM1, set virtio device::
+9. On VM1, set virtio device::
 
-	<VM1># ethtool -L ens5 combined 4
+    <VM1># ethtool -L ens5 combined 4
 
-11. On VM2, set virtio device::
+10. On VM2, set virtio device::
 
-	<VM2># ethtool -L ens5 combined 4
+    <VM2># ethtool -L ens5 combined 4
 
-12. Rerun step 6-7.
+11. Rerun step 6-7.