[v1] test_plans/vhost_cbdma_test_plan.rst

Message ID 20201215234604.91346-1-yinan.wang@intel.com (mailing list archive)
State Accepted
Series [v1] test_plans/vhost_cbdma_test_plan.rst

Commit Message

Wang, Yinan Dec. 15, 2020, 11:46 p.m. UTC
  Add one new cbdma case and optimize the dynamic queue size test case.

Signed-off-by: Yinan Wang <yinan.wang@intel.com>
---
 test_plans/vhost_cbdma_test_plan.rst | 117 ++++++++++++++++++---------
 1 file changed, 80 insertions(+), 37 deletions(-)
  

Comments

Tu, Lijuan Dec. 21, 2020, 7:33 a.m. UTC | #1
> Add one new cbdma case and optimize the dynamic queue size test case.
> 
> Signed-off-by: Yinan Wang <yinan.wang@intel.com>

Applied with commit message changed.
  

Patch

diff --git a/test_plans/vhost_cbdma_test_plan.rst b/test_plans/vhost_cbdma_test_plan.rst
index b2230900..504b9aa0 100644
--- a/test_plans/vhost_cbdma_test_plan.rst
+++ b/test_plans/vhost_cbdma_test_plan.rst
@@ -61,7 +61,7 @@  operations of queues:
    otherwise, leverage librte_vhost to perform memory copy.
 
 Here is an example:
- $ ./testpmd -c f -n 4 \
+ $ ./dpdk-testpmd -c f -n 4 \
    --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0],dmathr=1024'
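
 As a hedged illustration of the vdev syntax above: ``dmas=[txq0@80:04.0]`` maps
 vhost TX queue 0 to the CBDMA channel at PCI address 80:04.0, and ``dmathr`` is
 the copy-length threshold in bytes (per the description above, larger copies go
 to the DMA engine, smaller ones stay with librte_vhost). The tiny parser below
 is hypothetical, shown only to make the mapping format concrete::

    # Hypothetical helper, not part of DPDK: decompose a "dmas" spec
    # into a queue -> PCI address map, for illustrating the format only.
    def parse_dmas(spec: str) -> dict:
        # spec looks like "txq0@80:04.5;txq1@80:04.6"
        return dict(entry.split("@", 1) for entry in spec.split(";"))

    print(parse_dmas("txq0@80:04.5;txq1@80:04.6"))
    # -> {'txq0': '80:04.5', 'txq1': '80:04.6'}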
 
 Test Case 1: PVP Split all path with DMA-accelerated vhost enqueue
@@ -73,14 +73,14 @@  TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
 1. Bind one cbdma port and one nic port to igb_uio, then launch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0],dmathr=1024' \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0],dmathr=1024' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
-     set fwd mac
-     start
+    >set fwd mac
+    >start
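
 For reference, the binding in step 1 is normally done with DPDK's
 ``dpdk-devbind.py``; the CBDMA address matches the vdev above, while the NIC
 address below is a placeholder for the port under test::

    ./usertools/dpdk-devbind.py --status            # locate CBDMA and NIC ports
    ./usertools/dpdk-devbind.py -b igb_uio 80:04.0  # CBDMA channel used above
    ./usertools/dpdk-devbind.py -b igb_uio af:00.0  # NIC port (placeholder address)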
 
 2. Launch virtio-user with inorder mergeable path::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -95,7 +95,7 @@  TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
 4. Relaunch virtio-user with mergeable path, then repeat step 3::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=0,queues=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -103,15 +103,15 @@  TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
 5. Relaunch virtio-user with inorder non-mergeable path, then repeat step 3::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
-    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0in_order=1,queues=1 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=1,queues=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start
 
 6. Relaunch virtio-user with non-mergeable path, then repeat step 3::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -119,7 +119,7 @@  TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
 7. Relaunch virtio-user with vector_rx path, then repeat step 3::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1 \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
@@ -129,54 +129,97 @@  TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 Test Case2: Dynamic queue number test for DMA-accelerated vhost Tx operations
 =============================================================================
 
-1. Bind two cbdma port and one nic port to igb_uio, then launch vhost by below command::
+1. Bind four cbdma ports and one nic port to igb_uio, then launch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 28-29  \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=2,client=1,dmas=[txq0@80:04.5;txq1@80:04.6],dmathr=1024' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
-     set fwd mac
-     start
+    >set fwd mac
+    >start
 
-2. Launch virtio-user by below ccd ommand::
+2. Launch virtio-user by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=2,server=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
     >set fwd mac
     >start
 
-3. Send packets with packet size [64,1518] from packet generator with random ip, check perforamnce can get target and RX/TX can work normally in two queues.
+3. Send packets with packet size [64,1518] from packet generator with random ip, check performance can reach the target.
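
 A minimal Scapy sketch of the traffic in step 3, assuming a generator host with
 Scapy installed; the interface name and destination MAC are placeholders for
 the TG port facing the NIC under test::

    # Hypothetical traffic sketch: frames across the [64, 1518] size range
    # with randomized source/destination IPs.
    from scapy.all import Ether, IP, UDP, Raw, RandIP, sendp

    FRAME_SIZES = [64, 128, 256, 512, 1024, 1280, 1518]

    for size in FRAME_SIZES:
        base = Ether(dst="00:01:02:03:04:05") / IP(src=RandIP(), dst=RandIP()) / UDP()
        pad = max(0, size - len(base) - 4)  # leave 4 bytes for the Ethernet FCS
        sendp([base / Raw(b"\x00" * pad) for _ in range(100)], iface="ens785f0")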
 
-4. On virtio-user side, dynamic change rx queue numbers from 2 queue to 1 queues, then check one queue RX/TX can work normally::
+4. Stop vhost port, check from the vhost log that packets exist in both RX and TX directions on two queues.
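
 The per-queue check in step 4 reads the forward stats testpmd prints when the
 vhost side is stopped; output along these lines is expected (port/queue
 numbering and counts are illustrative, not taken from a real run)::

    ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 1/Queue= 0 -------
    RX-packets: 251232         TX-packets: 251232        TX-dropped: 0
    ------- Forward Stats for RX Port= 0/Queue= 1 -> TX Port= 1/Queue= 1 -------
    RX-packets: 248768         TX-packets: 248768        TX-dropped: 0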
 
-     start
-     stop
-     port stop all
-     port config all rxq 1
-     port start all
-     start
+5. On virtio-user side, dynamically change the queue number from 2 to 1, then check RX/TX can work normally on one queue::
+
+    testpmd>port stop all
+    testpmd>port config all rxq 1
+    testpmd>port config all txq 1
+    testpmd>port start all
+    testpmd>start
+    testpmd>show port stats all
 
-5. Relaunch virtio-user with queues=2, check RX/TX can work normally in two queues::
+6. Relaunch virtio-user with 2 queues::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 30-31 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=2,server=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
     >set fwd mac
     >start
 
-4. On vhost side, dynamic change rx queue numbers from 2 queue to 1 queues, then check one queue RX/TX can work normally::
+7. Send packets with packet size [64,1518] from packet generator with random ip, check performance can reach the target.
 
-     start
-     stop
-     port stop all
-     port config all rxq 1
-     port start all
-     start
+8. Stop vhost port, check from the vhost log that packets exist in both RX and TX directions on queue0.
 
-6. Relaunch vhost with another two cbdma channels, check perforamnce can get target and RX/TX can work normally in two queueus::
+9. On vhost side, dynamically change the queue number from 2 to 1, then check RX/TX can work normally on one queue::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 28-29  \
-    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=2,client=1,dmas=[txq0@80:04.0],dmathr=512' \
+    testpmd>port stop all
+    testpmd>port config all rxq 1
+    testpmd>port config all txq 1
+    testpmd>port start all
+    testpmd>start
+    testpmd>show port stats all
+
+10. Relaunch vhost with another two cbdma channels and 2 queues, check performance can reach the target::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29  \
+    --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=2,client=1,dmas=[txq0@00:04.5;txq1@00:04.6],dmathr=512' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
     >set fwd mac
-    >start
\ No newline at end of file
+    >start
+
+11. Stop vhost port, check from the vhost log that packets exist in both RX and TX directions on two queues.
+
+Test Case3: CBDMA threshold value check
+========================================
+
+1. Bind four cbdma ports to igb_uio, then launch vhost by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-2 -n 4 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1,dmas=[txq0@80:04.0;txq1@80:04.1],dmathr=512' --vdev 'eth_vhost1,iface=vhost-net1,queues=2,client=1,dmas=[txq0@80:04.2;txq1@80:04.3],dmathr=4096' -- \
+    -i --nb-cores=1 --rxq=2 --txq=2
+    >start
+
+2. Launch virtio-user1::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 7-8 \
+    --no-pci --file-prefix=virtio1 \
+    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=4096 \
+    -- -i --nb-cores=1 --rxq=2 --txq=2
+    >start
+
+3. Launch virtio-user0::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0,queue_size=4096 \
+    -- -i --nb-cores=1 --rxq=2 --txq=2
+    >start
+
+4. Check from the vhost log that the cbdma threshold value for each vhost port is configured correctly (txq0 and txq1 map to virtqueue ids 0 and 2, hence qid0 and qid2)::
+
+    dma parameters: vid0,qid0,dma*,threshold:512
+    dma parameters: vid0,qid2,dma*,threshold:512
+    dma parameters: vid1,qid0,dma*,threshold:4096
+    dma parameters: vid1,qid2,dma*,threshold:4096
+
+