[V1,1/2] test_plans/loopback_virtio_user_server_mode_cbdma_test_plan: modify the dmas parameter

Message ID 20221109072149.1209496-1-weix.ling@intel.com (mailing list archive)
State Superseded
Series: modify the dmas parameter

Commit Message

Ling, WeiX Nov. 9, 2022, 7:21 a.m. UTC
  From DPDK 22.11, the dmas parameter has changed, so update the dmas
parameter in this test plan accordingly.

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
 ...irtio_user_server_mode_cbdma_test_plan.rst | 463 ++++++++++++++----
 1 file changed, 356 insertions(+), 107 deletions(-)
  

Patch

diff --git a/test_plans/loopback_virtio_user_server_mode_cbdma_test_plan.rst b/test_plans/loopback_virtio_user_server_mode_cbdma_test_plan.rst
index 7a0a8991..957ab7aa 100644
--- a/test_plans/loopback_virtio_user_server_mode_cbdma_test_plan.rst
+++ b/test_plans/loopback_virtio_user_server_mode_cbdma_test_plan.rst
@@ -8,10 +8,24 @@  Loopback vhost/virtio-user server mode with CBDMA test plan
 Description
 ===========
 
-Vhost asynchronous data path leverages DMA devices to offload memory copies from the CPU and it is implemented in an asynchronous way.
-In addition, vhost supports M:N mapping between vrings and DMA virtual channels. Specifically, one vring can use multiple different DMA
-channels and one DMA channel can be shared by multiple vrings at the same time. From DPDK22.07, Vhost enqueue and dequeue operation with
-CBDMA channels is supported in both split and packed ring.
+CBDMA is a kind of DMA engine. The vhost asynchronous data path leverages DMA
+devices to offload memory copies from the CPU and is implemented in an asynchronous way.
+As a result, large packet copies can be accelerated by the DMA engine, and vhost can
+free CPU cycles for higher-level functions.
+
+The asynchronous data path is enabled per tx/rx queue, and users need
+to specify the DMA device used by each tx/rx queue. A tx/rx queue
+can only use one DMA device, but one DMA device can be shared
+among multiple tx/rx queues of different vhostpmd ports.
+
+Two PMD parameters are added:
+- dmas: specify the DMA device used by a tx/rx queue
+(default: no queue enables the asynchronous data path)
+- dma-ring-size: DMA ring size
+(default: 4096)
+
+Here is an example:
+--vdev 'eth_vhost0,iface=./s0,dmas=[txq0@0000:00:01.0;rxq0@0000:00:01.1],dma-ring-size=4096'
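As a rough sketch of the syntax above (a hypothetical parser for illustration only, not part of DPDK), each dmas entry is a queue@device pair, with entries separated by ';':

```python
# Hypothetical illustration of the dmas syntax described above: each entry
# maps one tx/rx queue to the BDF of the DMA device it uses, separated by ';'.
def parse_dmas(value: str) -> dict:
    """Parse a bracketed dmas list into a queue -> DMA-device mapping."""
    inner = value.strip()
    if inner.startswith("[") and inner.endswith("]"):
        inner = inner[1:-1]
    mapping = {}
    for entry in inner.split(";"):
        queue, _, device = entry.partition("@")
        mapping[queue] = device  # each queue uses exactly one DMA device
    return mapping

# One DMA device may serve several queues, but each queue names one device.
print(parse_dmas("[txq0@0000:00:01.0;rxq0@0000:00:01.1]"))
# -> {'txq0': '0000:00:01.0', 'rxq0': '0000:00:01.1'}
```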
 
 This document provides the test plan for testing the following features when Vhost-user uses the asynchronous data path with
 CBDMA channels in loopback vhost-user/virtio-user topology.
@@ -49,7 +63,7 @@  General set up
     CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
     ninja -C x86_64-native-linuxapp-gcc -j 110
 
-2. Get the PCI device ID and DMA device ID of DUT, for example, 0000:18:00.0 is PCI device ID, 0000:00:04.0, 0000:00:04.1 is DMA device ID::
+2. Get the PCI device ID and DMA device ID of DUT, for example, 0000:18:00.0 is PCI device ID, 0000:00:01.0, 0000:00:01.1 is DMA device ID::
 
     <dpdk dir># ./usertools/dpdk-devbind.py -s
 
@@ -59,8 +73,8 @@  General set up
 
     DMA devices using kernel driver
     ===============================
-    0000:00:04.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
-    0000:00:04.1 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+    0000:00:01.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+    0000:00:01.1 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
 
 Test case
 =========
@@ -73,24 +87,22 @@  Common steps
     <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <DUT port DMA device id>
 
     For example, bind 2 CBDMA channels:
-    <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:04.0 0000:00:04.1
+    <dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 0000:00:01.0 0000:00:01.1
 
-Test Case 1: Loopback packed ring all path multi-queues payload check with server mode and cbdma enable
--------------------------------------------------------------------------------------------------------
+Test Case 1: Loopback packed ring inorder mergeable path multi-queues payload check with server mode and cbdma enable
+---------------------------------------------------------------------------------------------------------------------
 This case tests that the payload is valid after forwarding large chain packets in loopback vhost-user/virtio-user packed ring
-all path multi-queues with server mode when vhost uses the asynchronous operations with CBDMA channels. Both iova as VA and PA mode test.
+inorder mergeable path multi-queues with server mode when vhost uses asynchronous operations with CBDMA channels.
 
-1. Bind 8 CBDMA channel to vfio-pci, as common step 1.
+1. Bind 1 CBDMA port to vfio-pci, as common step 1.
 
 2. Launch vhost by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 \
-	-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
-	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore12@0000:00:04.2,lcore12@0000:00:04.3,lcore13@0000:00:04.4,lcore13@0000:00:04.5,lcore14@0000:00:04.6,lcore14@0000:00:04.7]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 -a 0000:00:01.0 \
+	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:00:01.0;txq1@0000:00:01.0;txq2@0000:00:01.0;txq3@0000:00:01.0;txq4@0000:00:01.0;txq5@0000:00:01.0;rxq2@0000:00:01.0;rxq3@0000:00:01.0;rxq4@0000:00:01.0;rxq5@0000:00:01.0;rxq6@0000:00:01.0;rxq7@0000:00:01.0]' \
+	--iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
 
-3. Launch virtio-user with packed ring mergeable inorder path::
+3. Launch virtio-user with packed ring inorder mergeable path::
 
 	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,packed_vq=1,server=1 \
@@ -104,19 +116,31 @@  all path multi-queues with server mode when vhost uses the asynchronous operatio
     --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-q0.pcap,mbuf-size=8000' \
     --pdump 'device_id=net_virtio_user0,queue=1,rx-dev=./pdump-virtio-rx-q1.pcap,mbuf-size=8000'
 
-5. Send large pkts from vhost, check loopback performance can get expected and each queue can receive packets::
+5. Send large packets from vhost, check that the loopback performance meets expectations and that each queue can receive packets::
 
 	testpmd> set fwd csum
 	testpmd> set txpkts 64,64,64,2000,2000,2000
 	testpmd> set burst 1
 	testpmd> start tx_first 1
+	testpmd> show port stats all
 	testpmd> stop
 
 6. Quit pdump, check that all packet lengths are 6192 bytes and that the payload of the received packets is the same in each pcap file.
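The 6192-byte figure follows directly from the txpkts segment sizes set in step 5; as a quick arithmetic sanity check (illustrative only, not part of the test plan):

```python
# The chained-packet length checked in the pcap files is the sum of the
# txpkts segment sizes configured in testpmd.
mergeable_segments = [64, 64, 64, 2000, 2000, 2000]  # "set txpkts 64,64,64,2000,2000,2000"
non_mergeable_segments = [64, 128, 256, 512]         # used by the later non-mergeable cases

print(sum(mergeable_segments))      # -> 6192
print(sum(non_mergeable_segments))  # -> 960
```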
 
-7. Quit and relaunch vhost and rerun step 4-6.
+Test Case 2: Loopback packed ring mergeable path multi-queues payload check with server mode and cbdma enable
+-------------------------------------------------------------------------------------------------------------
+This case tests that the payload is valid after forwarding large chain packets in loopback vhost-user/virtio-user packed ring
+mergeable path multi-queues with server mode when vhost uses asynchronous operations with CBDMA channels.
+
+1. Bind 2 CBDMA ports to vfio-pci, as common step 1.
 
-8. Quit and relaunch virtio with packed ring mergeable path as below::
+2. Launch vhost by below command::
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 -a 0000:00:01.0 -a 0000:00:01.1 \
+	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:00:01.0;txq1@0000:00:01.0;txq2@0000:00:01.0;txq3@0000:00:01.0;txq4@0000:00:01.1;txq5@0000:00:01.1;rxq2@0000:00:01.0;rxq3@0000:00:01.0;rxq4@0000:00:01.1;rxq5@0000:00:01.1;rxq6@0000:00:01.1;rxq7@0000:00:01.1]' \
+	--iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+
+3. Launch virtio-user with packed ring mergeable path::
 
 	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,packed_vq=1,server=1 \
@@ -124,41 +148,113 @@  all path multi-queues with server mode when vhost uses the asynchronous operatio
 	testpmd>set fwd csum
 	testpmd>start
 
-9. Rerun steps 4-7.
+4. Attach pdump secondary process to primary process by same file-prefix::
+
+    <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \
+    --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-q0.pcap,mbuf-size=8000' \
+    --pdump 'device_id=net_virtio_user0,queue=1,rx-dev=./pdump-virtio-rx-q1.pcap,mbuf-size=8000'
+
+5. Send large packets from vhost, check that the loopback performance meets expectations and that each queue can receive packets::
+
+	testpmd> set fwd csum
+	testpmd> set txpkts 64,64,64,2000,2000,2000
+	testpmd> set burst 1
+	testpmd> start tx_first 1
+	testpmd> show port stats all
+	testpmd> stop
+
+6. Quit pdump, check that all packet lengths are 6192 bytes and that the payload of the received packets is the same in each pcap file.
+
+Test Case 3: Loopback packed ring inorder non-mergeable path multi-queues payload check with server mode and cbdma enable
+-------------------------------------------------------------------------------------------------------------------------
+This case tests that the payload is valid after forwarding large chain packets in loopback vhost-user/virtio-user packed ring
+inorder non-mergeable path multi-queues with server mode when vhost uses asynchronous operations with CBDMA channels.
 
-10. Quit and relaunch virtio with packed ring non-mergeable path as below::
+1. Bind 4 CBDMA ports to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 -a 0000:00:01.0 -a 0000:00:01.1 -a 0000:00:01.2 -a 0000:00:01.3 \
+	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:00:01.0;txq1@0000:00:01.0;txq2@0000:00:01.1;txq3@0000:00:01.1;txq4@0000:00:01.2;txq5@0000:00:01.2;rxq2@0000:00:01.1;rxq3@0000:00:01.1;rxq4@0000:00:01.2;rxq5@0000:00:01.2;rxq6@0000:00:01.3;rxq7@0000:00:01.3]' \
+	--iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+
+3. Launch virtio-user with packed ring inorder non-mergeable path::
 
 	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \
-	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,packed_vq=1,server=1 \
+	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,server=1 \
 	-- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd csum
 	testpmd>start
 
-11. Rerun step 4.
+4. Attach pdump secondary process to primary process by same file-prefix::
 
-12. Send pkts from vhost, check loopback performance can get expected and each queue can receive packets::
+    <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \
+    --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-q0.pcap,mbuf-size=8000' \
+    --pdump 'device_id=net_virtio_user0,queue=1,rx-dev=./pdump-virtio-rx-q1.pcap,mbuf-size=8000'
+
+5. Send large packets from vhost, check that the loopback performance meets expectations and that each queue can receive packets::
 
 	testpmd> set fwd csum
 	testpmd> set txpkts 64,128,256,512
 	testpmd> set burst 1
 	testpmd> start tx_first 1
+	testpmd> show port stats all
 	testpmd> stop
 
-13. Quit pdump, check all the packets length are 960 Byte and the payload in receive packets are same in each pcap file.
+6. Quit pdump, check that all packet lengths are 960 bytes and that the payload of the received packets is the same in each pcap file.
 
-14. Quit and relaunch vhost and rerun step 11-13.
+Test Case 4: Loopback packed ring non-mergeable path multi-queues payload check with server mode and cbdma enable
+-----------------------------------------------------------------------------------------------------------------
+This case tests that the payload is valid after forwarding large chain packets in loopback vhost-user/virtio-user packed ring
+non-mergeable path multi-queues with server mode when vhost uses asynchronous operations with CBDMA channels.
 
-15. Quit and relaunch virtio with packed ring inorder non-mergeable path as below::
+1. Bind 8 CBDMA ports to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 -a 0000:00:01.0 -a 0000:00:01.1 -a 0000:00:01.2 -a 0000:00:01.3 -a 0000:00:01.4 -a 0000:00:01.5 -a 0000:00:01.6 -a 0000:00:01.7 \
+	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:00:01.0;txq1@0000:00:01.1;txq2@0000:00:01.2;txq3@0000:00:01.3;txq4@0000:00:01.4;txq5@0000:00:01.5;rxq2@0000:00:01.2;rxq3@0000:00:01.3;rxq4@0000:00:01.4;rxq5@0000:00:01.5;rxq6@0000:00:01.6;rxq7@0000:00:01.7]' \
+	--iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+
+3. Launch virtio-user with packed ring non-mergeable path::
 
 	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci \
-	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,server=1 \
+	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,packed_vq=1,server=1 \
 	-- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd csum
 	testpmd>start
 
-16. Rerun step 11-14.
+4. Attach pdump secondary process to primary process by same file-prefix::
+
+    <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \
+    --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-q0.pcap,mbuf-size=8000' \
+    --pdump 'device_id=net_virtio_user0,queue=1,rx-dev=./pdump-virtio-rx-q1.pcap,mbuf-size=8000'
 
-17. Quit and relaunch virtio with packed ring vectorized path as below::
+5. Send large packets from vhost, check that the loopback performance meets expectations and that each queue can receive packets::
+
+	testpmd> set fwd csum
+	testpmd> set txpkts 64,128,256,512
+	testpmd> set burst 1
+	testpmd> start tx_first 1
+	testpmd> show port stats all
+	testpmd> stop
+
+6. Quit pdump, check that all packet lengths are 960 bytes and that the payload of the received packets is the same in each pcap file.
+
+Test Case 5: Loopback packed ring vectorized path multi-queues payload check with server mode and cbdma enable
+--------------------------------------------------------------------------------------------------------------
+This case tests that the payload is valid after forwarding large chain packets in loopback vhost-user/virtio-user packed ring
+vectorized path multi-queues with server mode when vhost uses asynchronous operations with CBDMA channels.
+
+1. Bind 2 CBDMA ports to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 -a 0000:00:01.0 -a 0000:00:01.1 \
+	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:00:01.0;txq1@0000:00:01.0;txq2@0000:00:01.0;txq3@0000:00:01.0;txq4@0000:00:01.1;txq5@0000:00:01.1;rxq2@0000:00:01.0;rxq3@0000:00:01.0;rxq4@0000:00:01.1;rxq5@0000:00:01.1;rxq6@0000:00:01.1;rxq7@0000:00:01.1]' \
+	--iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+
+3. Launch virtio-user with packed ring vectorized path::
 
 	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,server=1 \
@@ -166,9 +262,37 @@  all path multi-queues with server mode when vhost uses the asynchronous operatio
 	testpmd>set fwd csum
 	testpmd>start
 
-18. Rerun step 11-14.
+4. Attach pdump secondary process to primary process by same file-prefix::
+
+    <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \
+    --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-q0.pcap,mbuf-size=8000' \
+    --pdump 'device_id=net_virtio_user0,queue=1,rx-dev=./pdump-virtio-rx-q1.pcap,mbuf-size=8000'
+
+5. Send large packets from vhost, check that the loopback performance meets expectations and that each queue can receive packets::
+
+	testpmd> set fwd csum
+	testpmd> set txpkts 64,128,256,512
+	testpmd> set burst 1
+	testpmd> start tx_first 1
+	testpmd> show port stats all
+	testpmd> stop
+
+6. Quit pdump, check that all packet lengths are 960 bytes and that the payload of the received packets is the same in each pcap file.
+
+Test Case 6: Loopback packed ring vectorized path and ring size is not power of 2 multi-queues payload check with server mode and cbdma enable
+----------------------------------------------------------------------------------------------------------------------------------------------
+This case tests that the payload is valid after forwarding large chain packets in loopback vhost-user/virtio-user packed ring vectorized path with a
+ring size that is not a power of 2, multi-queues with server mode when vhost uses asynchronous operations with CBDMA channels.
+
+1. Bind 2 CBDMA ports to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
 
-19. Quit and relaunch virtio with packed ring vectorized path and ring size is not power of 2 as below::
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 -a 0000:00:01.0 -a 0000:00:01.1 \
+	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:00:01.0;txq1@0000:00:01.0;txq2@0000:00:01.0;txq3@0000:00:01.0;txq4@0000:00:01.1;txq5@0000:00:01.1;rxq2@0000:00:01.0;rxq3@0000:00:01.0;rxq4@0000:00:01.1;rxq5@0000:00:01.1;rxq6@0000:00:01.1;rxq7@0000:00:01.1]' \
+	--iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+
+3. Launch virtio-user with packed ring vectorized path and ring size is not power of 2::
 
 	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-6 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,packed_vq=1,vectorized=1,queue_size=1025,server=1 \
@@ -176,33 +300,35 @@  all path multi-queues with server mode when vhost uses the asynchronous operatio
 	testpmd>set fwd csum
 	testpmd>start
 
-20. Rerun step 11-14.
+4. Attach pdump secondary process to primary process by same file-prefix::
 
-21. Quit and relaunch vhost w/ iova=pa::
+    <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \
+    --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-q0.pcap,mbuf-size=8000' \
+    --pdump 'device_id=net_virtio_user0,queue=1,rx-dev=./pdump-virtio-rx-q1.pcap,mbuf-size=8000'
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-14 -n 4 \
-	-a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 -a 0000:00:04.3 -a 0000:00:04.4 -a 0000:00:04.5 -a 0000:00:04.6 -a 0000:00:04.7 \
-	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq6;txq7;rxq0;rxq1;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=pa -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:00:04.0,lcore11@0000:00:04.1,lcore12@0000:00:04.2,lcore12@0000:00:04.3,lcore13@0000:00:04.4,lcore13@0000:00:04.5,lcore14@0000:00:04.6,lcore14@0000:00:04.7]
+5. Send large packets from vhost, check that the loopback performance meets expectations and that each queue can receive packets::
 
-22. Rerun steps 3-6.
+	testpmd> set fwd csum
+	testpmd> set txpkts 64,128,256,512
+	testpmd> set burst 1
+	testpmd> start tx_first 1
+	testpmd> show port stats all
+	testpmd> stop
+
+6. Quit pdump, check that all packet lengths are 960 bytes and that the payload of the received packets is the same in each pcap file.
 
-Test Case 2: Loopback split ring all path multi-queues payload check with server mode and cbdma enable
-------------------------------------------------------------------------------------------------------
-This case tests the payload is valid after forwading large chain packets in loopback vhost-user/virtio-user split ring
-all path multi-queues with server mode when vhost uses the asynchronous operations with CBDMA channels. Both iova as VA and PA mode test.
+Test Case 7: Loopback split ring inorder mergeable path multi-queues payload check with server mode and cbdma enable
+--------------------------------------------------------------------------------------------------------------------
 
-1. Bind 3 CBDMA channel to vfio-pci, as common step 1.
+1. Bind 1 CBDMA port to vfio-pci, as common step 1.
 
 2. Launch vhost by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 \
-	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=va -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.1,lcore14@0000:00:04.2,lcore15@0000:00:04.1,lcore15@0000:00:04.2]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:01.0 \
+	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:00:01.0;txq1@0000:00:01.0;txq2@0000:00:01.0;txq3@0000:00:01.0;txq4@0000:00:01.0;txq5@0000:00:01.0;rxq2@0000:00:01.0;rxq3@0000:00:01.0;rxq4@0000:00:01.0;rxq5@0000:00:01.0;rxq6@0000:00:01.0;rxq7@0000:00:01.0]' \
+	--iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
 
-3. Launch virtio-user with split ring mergeable inorder path::
+3. Launch virtio-user with split ring inorder mergeable path::
 
 	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,server=1 \
@@ -216,7 +342,7 @@  all path multi-queues with server mode when vhost uses the asynchronous operatio
     --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-q0.pcap,mbuf-size=8000' \
     --pdump 'device_id=net_virtio_user0,queue=1,rx-dev=./pdump-virtio-rx-q1.pcap,mbuf-size=8000'
 
-5. Send large pkts from vhost, check loopback performance can get expected and each queue can receive packets::
+5. Send large packets from vhost, check that the loopback performance meets expectations and that each queue can receive packets::
 
 	testpmd> set fwd csum
 	testpmd> set txpkts 64,64,64,2000,2000,2000
@@ -226,29 +352,67 @@  all path multi-queues with server mode when vhost uses the asynchronous operatio
 
 6. Quit pdump, check that all packet lengths are 6192 bytes and that the payload of the received packets is the same in each pcap file.
 
-7. Quit and relaunch vhost and rerun step 4-6.
+Test Case 8: Loopback split ring mergeable path multi-queues payload check with server mode and cbdma enable
+------------------------------------------------------------------------------------------------------------
 
-8. Quit and relaunch virtio with split ring mergeable path as below::
+1. Bind 2 CBDMA ports to vfio-pci, as common step 1.
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \
-	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,server=1 \
-	-- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+2. Launch vhost by below command::
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:01.0 -a 0000:00:01.1 \
+	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:00:01.0;txq1@0000:00:01.0;txq2@0000:00:01.0;txq3@0000:00:01.0;txq4@0000:00:01.1;txq5@0000:00:01.1;rxq2@0000:00:01.0;rxq3@0000:00:01.0;rxq4@0000:00:01.1;rxq5@0000:00:01.1;rxq6@0000:00:01.1;rxq7@0000:00:01.1]' \
+	--iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+
+3. Launch virtio-user with split ring mergeable path::
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \
+	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=0,server=1 \
+	-- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd csum
 	testpmd>start
 
-9. Rerun steps 4-7.
+4. Attach pdump secondary process to primary process by same file-prefix::
+
+    <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \
+    --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-q0.pcap,mbuf-size=8000' \
+    --pdump 'device_id=net_virtio_user0,queue=1,rx-dev=./pdump-virtio-rx-q1.pcap,mbuf-size=8000'
+
+5. Send large packets from vhost, check that the loopback performance meets expectations and that each queue can receive packets::
+
+	testpmd> set fwd csum
+	testpmd> set txpkts 64,64,64,2000,2000,2000
+	testpmd> set burst 1
+	testpmd> start tx_first 1
+	testpmd> stop
+
+6. Quit pdump, check that all packet lengths are 6192 bytes and that the payload of the received packets is the same in each pcap file.
 
-10. Quit and relaunch virtio with split ring non-mergeable path as below::
+Test Case 9: Loopback split ring inorder non-mergeable path multi-queues payload check with server mode and cbdma enable
+------------------------------------------------------------------------------------------------------------------------
+
+1. Bind 2 CBDMA ports to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:01.0 -a 0000:00:01.1 \
+	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:00:01.0;txq1@0000:00:01.0;txq2@0000:00:01.0;txq3@0000:00:01.0;txq4@0000:00:01.1;txq5@0000:00:01.1;rxq2@0000:00:01.0;rxq3@0000:00:01.0;rxq4@0000:00:01.1;rxq5@0000:00:01.1;rxq6@0000:00:01.1;rxq7@0000:00:01.1]' \
+	--iova=va -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+
+3. Launch virtio-user with split ring inorder non-mergeable path::
 
 	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \
-	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,server=1 \
-	-- -i --enable-hw-vlan-strip --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,server=1 \
+	-- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd csum
 	testpmd>start
 
-11. Rerun step 4.
+4. Attach pdump secondary process to primary process by same file-prefix::
 
-12. Send pkts from vhost, check loopback performance can get expected and each queue can receive packets::
+    <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \
+    --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-q0.pcap,mbuf-size=8000' \
+    --pdump 'device_id=net_virtio_user0,queue=1,rx-dev=./pdump-virtio-rx-q1.pcap,mbuf-size=8000'
+
+5. Send large packets from vhost, check that the loopback performance meets expectations and that each queue can receive packets::
 
 	testpmd> set fwd csum
 	testpmd> set txpkts 64,128,256,512
@@ -256,21 +420,53 @@  all path multi-queues with server mode when vhost uses the asynchronous operatio
 	testpmd> start tx_first 1
 	testpmd> stop
 
-13. Quit pdump, check all the packets length are 960 Byte and the payload in receive packets are same in each pcap file.
+6. Quit pdump, check that all packet lengths are 960 bytes and that the payload of the received packets is the same in each pcap file.
+
+Test Case 10: Loopback split ring non-mergeable path multi-queues payload check with server mode and cbdma enable
+-----------------------------------------------------------------------------------------------------------------
+1. Bind 2 CBDMA ports to vfio-pci, as common step 1.
 
-14. Quit and relaunch vhost and rerun step 11-13.
+2. Launch vhost by below command::
 
-15. Quit and relaunch virtio with split ring inorder non-mergeable path as below::
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:01.0 -a 0000:00:01.1 \
+	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:00:01.0;txq1@0000:00:01.0;txq2@0000:00:01.0;txq3@0000:00:01.0;txq4@0000:00:01.1;txq5@0000:00:01.1;rxq2@0000:00:01.0;rxq3@0000:00:01.0;rxq4@0000:00:01.1;rxq5@0000:00:01.1;rxq6@0000:00:01.1;rxq7@0000:00:01.1]' \
+	--iova=va -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+
+3. Launch virtio-user with split ring non-mergeable path::
 
 	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \
-	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=1,server=1 \
-	-- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,server=1 \
+	-- -i --enable-hw-vlan-strip --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
 	testpmd>set fwd csum
 	testpmd>start
 
-16. Rerun step 11-14.
+4. Attach pdump secondary process to primary process by same file-prefix::
+
+    <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \
+    --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-q0.pcap,mbuf-size=8000' \
+    --pdump 'device_id=net_virtio_user0,queue=1,rx-dev=./pdump-virtio-rx-q1.pcap,mbuf-size=8000'
+
+5. Send large packets from vhost, check that the loopback performance meets expectations and that each queue can receive packets::
+
+	testpmd> set fwd csum
+	testpmd> set txpkts 64,128,256,512
+	testpmd> set burst 1
+	testpmd> start tx_first 1
+	testpmd> stop
+
+6. Quit pdump, check that all packet lengths are 960 bytes and that the payload of the received packets is the same in each pcap file.
 
-17. Quit and relaunch virtio with split ring vectorized path as below::
+Test Case 11: Loopback split ring vectorized path multi-queues payload check with server mode and cbdma enable
+--------------------------------------------------------------------------------------------------------------
+1. Bind 2 CBDMA ports to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:01.0 -a 0000:00:01.1 \
+	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:00:01.0;txq1@0000:00:01.0;txq2@0000:00:01.0;txq3@0000:00:01.0;txq4@0000:00:01.1;txq5@0000:00:01.1;rxq2@0000:00:01.0;rxq3@0000:00:01.0;rxq4@0000:00:01.1;rxq5@0000:00:01.1;rxq6@0000:00:01.1;rxq7@0000:00:01.1]' \
+	--iova=va -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+
+3. Launch virtio-user with split ring vectorized path::
 
 	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \
 	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,vectorized=1,server=1 \
@@ -278,28 +474,33 @@  all path multi-queues with server mode when vhost uses the asynchronous operatio
 	testpmd>set fwd csum
 	testpmd>start
 
-18. Rerun step 11-14.
+4. Attach pdump secondary process to primary process by same file-prefix::
 
-19. Quit and relaunch vhost w/ iova=pa::
+    <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \
+    --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-q0.pcap,mbuf-size=8000' \
+    --pdump 'device_id=net_virtio_user0,queue=1,rx-dev=./pdump-virtio-rx-q1.pcap,mbuf-size=8000'
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:04.0 -a 0000:00:04.1 -a 0000:00:04.2 \
-	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0;txq1;txq2;txq3;txq4;txq5;rxq2;rxq3;rxq4;rxq5;rxq6;rxq7]' \
-	--iova=pa -- -i --nb-cores=5 --rxq=8 --txq=8 --txd=1024 --rxd=1024 \
-	--lcore-dma=[lcore11@0000:00:04.0,lcore12@0000:00:04.0,lcore13@0000:00:04.1,lcore13@0000:00:04.2,lcore14@0000:00:04.1,lcore14@0000:00:04.2,lcore15@0000:00:04.1,lcore15@0000:00:04.2]
+5. Send large packets from vhost, check that loopback performance meets expectations and that each queue can receive packets::
 
-20. Rerun steps 3-6.
+	testpmd> set fwd csum
+	testpmd> set txpkts 64,128,256,512
+	testpmd> set burst 1
+	testpmd> start tx_first 1
+	testpmd> stop
 
-Test Case 3: Loopback split ring large chain packets stress test with server mode and cbdma enable
---------------------------------------------------------------------------------------------------
-This is a stress test case about forwading large chain packets in loopback vhost-user/virtio-user split ring with server mode 
-when vhost uses the asynchronous operations with CBDMA channels. Both iova as VA and PA mode test.
+6. Quit pdump, check that all packets are 960 bytes long and that the payloads of the received packets are identical in each pcap file.
 
-1. Bind 1 CBDMA channel to vfio-pci, as common step 1.
+Test Case 12: Loopback split ring large chain packets stress test with server mode and cbdma enable
+---------------------------------------------------------------------------------------------------
+This is a stress test case about forwarding large chain packets in loopback vhost-user/virtio-user split ring with server mode
+when vhost uses the asynchronous operations with CBDMA channels.
+
+1. Bind 1 CBDMA port to vfio-pci, as common step 1.
 
 2. Launch vhost by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 -a 0000:00:04.0 \
-	--vdev 'eth_vhost0,iface=vhost-net0,queues=1,client=1,dmas=[txq0;rxq0]' --iova=va -- -i --nb-cores=1 --mbuf-size=65535 --lcore-dma=[lcore3@0000:00:04.0]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 -a 0000:00:01.0 \
+	--vdev 'eth_vhost0,iface=vhost-net0,queues=1,client=1,dmas=[txq0@0000:00:01.0;rxq0@0000:00:01.0]' --iova=va -- -i --nb-cores=1 --mbuf-size=65535
 
 3. Launch virtio and start testpmd::
 
@@ -308,30 +509,23 @@  when vhost uses the asynchronous operations with CBDMA channels. Both iova as VA
 	-- -i --rxq=1 --txq=1 --txd=2048 --rxd=2048 --nb-cores=1
 	testpmd>start
 
-4. Send large packets from vhost, check virtio can receive packets::
-4. Send large packets from vhost, check virtio can receive packets::
+4. Send large packets from vhost, check that virtio can receive packets and that packets can loop::
 
-	testpmd> set txpkts 65535,65535,65535,65535,65535
+	testpmd> set txpkts 65535,65535
 	testpmd> start tx_first 32
 	testpmd> show port stats all
 
-5. Stop and quit vhost testpmd and relaunch vhost with iova=pa::
-
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 -a 0000:00:04.0 \
-	--vdev 'eth_vhost0,iface=vhost-net0,queues=1,client=1,dmas=[txq0;rxq0]' --iova=pa -- -i --nb-cores=1 --mbuf-size=65535 --lcore-dma=[lcore3@0000:00:04.0]
-
-6. Rerun steps 4.
-
-Test Case 4: Loopback packed ring large chain packets stress test with server mode and cbdma enable
----------------------------------------------------------------------------------------------------
-This is a stress test case about forwading large chain packets in loopback vhost-user/virtio-user packed ring with server mode 
-when vhost uses the asynchronous operations with dsa dpdk driver. Both iova as VA and PA mode test.
+Test Case 13: Loopback packed ring large chain packets stress test with server mode and cbdma enable
+----------------------------------------------------------------------------------------------------
+This is a stress test case about forwarding large chain packets in loopback vhost-user/virtio-user
+packed ring with server mode when vhost uses the asynchronous operations with CBDMA channels.
 
-1. Bind 1 CBDMA channel to vfio-pci, as common step 1.
+1. Bind 1 CBDMA port to vfio-pci, as common step 1.
 
 2. Launch vhost by below command::
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 -a 0000:00:04.0 \
-	--vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0;rxq0],client=1' --iova=va -- -i --nb-cores=1 --mbuf-size=65535 --lcore-dma=[lcore3@0000:00:04.0]
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 -a 0000:00:01.0 \
+	--vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0@0000:00:01.0;rxq0@0000:00:01.0],client=1' --iova=va -- -i --nb-cores=1 --mbuf-size=65535
 
 3. Launch virtio and start testpmd::
 
@@ -340,15 +534,70 @@  when vhost uses the asynchronous operations with dsa dpdk driver. Both iova as V
 	-- -i --rxq=1 --txq=1 --txd=2048 --rxd=2048 --nb-cores=1
 	testpmd>start
 
-4. Send large packets from vhost, check virtio can receive packets::
-4. Send large packets from vhost, check virtio can receive packets::
+4. Send large packets from vhost, check that virtio can receive packets and that packets can loop::
 
-	testpmd> set txpkts 65535,65535,65535,65535,65535
+	testpmd> set txpkts 65535,65535
 	testpmd> start tx_first 32
 	testpmd> show port stats all
 
-5. Stop and quit vhost testpmd and relaunch vhost with iova=pa::
+Test Case 14: PV split and packed ring test txonly mode with cbdma enable
+-------------------------------------------------------------------------
+1. Bind 2 CBDMA ports to vfio-pci, as common step 1.
+
+2. Launch vhost by below command::
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 10-18 -n 4 -a 0000:00:01.0 -a 0000:00:01.1 \
+	--vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,dmas=[txq0@0000:00:01.0;txq1@0000:00:01.0;txq2@0000:00:01.0;txq3@0000:00:01.0;txq4@0000:00:01.1;txq5@0000:00:01.1;txq6@0000:00:01.1;txq7@0000:00:01.1]' \
+	--iova=va -- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+
+3. Launch virtio-user with split ring inorder mergeable path::
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci \
+	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,server=1 \
+	-- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1024 --rxd=1024
+	testpmd>set fwd rxonly
+	testpmd>start
+
+4. Attach pdump secondary process to primary process by same file-prefix::
+
+    <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \
+    --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-q0.pcap,mbuf-size=8000' \
+    --pdump 'device_id=net_virtio_user0,queue=1,rx-dev=./pdump-virtio-rx-q1.pcap,mbuf-size=8000'
+
+5. Send large packets from vhost::
+
+    testpmd> set fwd txonly
+    testpmd> async_vhost tx poll completed on
+    testpmd> set txpkts 64,64,64,2000,2000,2000
+    testpmd> set burst 1
+    testpmd> start tx_first 1
+
+6. Check that Rx-pps is greater than 0 and that each queue can receive 6192-byte packets from virtio-user.
+
+7. Quit pdump, check that the packets in each pcap file are 6192 bytes long.
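The expected sizes follow directly from the `set txpkts` segment lists: each comma-separated value is one mbuf segment, and the chained packet on the wire is their sum. A quick sanity check of the sizes used in this test case:

```python
# Each entry in "set txpkts" is one mbuf segment; the transmitted packet is
# the chained total of all segments.
print(sum((64, 64, 64, 2000, 2000, 2000)))  # step 5 txpkts -> 6192 bytes
print(sum((64, 128, 256, 512)))             # step 10 txpkts -> 960 bytes
```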
+
+8. Relaunch virtio-user with packed ring vectorized path with a ring size that is not a power of 2::
+
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=virtio-user0 --no-pci --force-max-simd-bitwidth=512 \
+	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=0,in_order=0,vectorized=1,packed_vq=1,server=1,queue_size=1025 \
+	-- -i --nb-cores=1 --rxq=8 --txq=8 --txd=1025 --rxd=1025
+	testpmd>set fwd rxonly
+	testpmd>start
+
+9. Attach pdump secondary process to primary process by same file-prefix::
+
+    <dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \
+    --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=./pdump-virtio-rx-q0.pcap,mbuf-size=8000' \
+    --pdump 'device_id=net_virtio_user0,queue=1,rx-dev=./pdump-virtio-rx-q1.pcap,mbuf-size=8000'
+
+10. Send packets from vhost::
+
+	testpmd> set fwd txonly
+	testpmd> async_vhost tx poll completed on
+	testpmd> set txpkts 64,128,256,512
+	testpmd> set burst 1
+	testpmd> start tx_first 1
 
-	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 2-3 -n 4 -a 0000:00:04.0 \
-	--vdev 'eth_vhost0,iface=vhost-net0,queues=1,dmas=[txq0],client=1' --iova=pa -- -i --nb-cores=1 --mbuf-size=65535 --lcore-dma=[lcore3@0000:00:04.0]
+11. Check that each queue can receive 960-byte packets from virtio-user.
 
-6. Rerun steps 4.
+12. Quit pdump, check that the packets in each pcap file are 960 bytes long.