[V1,2/2] test_plans/*: Change igb_uio to vfio-pci

Message ID 20211125131311.134679-3-linglix.chen@intel.com (mailing list archive)
State Accepted
Series move ioat device IDs to DMA class: change misc to dma.

Checks

Context Check Description
ci/Intel-dts-doc-test success Testing OK
ci/Intel-dts-suite-test fail Testing issues

Commit Message

Lingli Chen Nov. 25, 2021, 1:13 p.m. UTC
  CBDMA is only tested with vfio-pci since 21.11, so remove igb_uio.

Signed-off-by: Lingli Chen <linglix.chen@intel.com>
---
 test_plans/cbdma_test_plan.rst                | 14 ++--
 test_plans/dpdk_gro_lib_test_plan.rst         | 24 +++---
 test_plans/dpdk_gso_lib_test_plan.rst         | 12 +--
 .../dpdk_hugetlbfs_mount_size_test_plan.rst   | 10 +--
 .../pvp_diff_qemu_version_test_plan.rst       |  8 +-
 .../pvp_multi_paths_performance_test_plan.rst | 20 ++---
 ...host_single_core_performance_test_plan.rst | 20 ++---
 ...rtio_single_core_performance_test_plan.rst | 20 ++---
 ...emu_multi_paths_port_restart_test_plan.rst | 24 +++---
 test_plans/pvp_share_lib_test_plan.rst        |  4 +-
 .../pvp_vhost_user_reconnect_test_plan.rst    | 40 +++++-----
 test_plans/pvp_virtio_bonding_test_plan.rst   | 12 +--
 ...pvp_virtio_user_2M_hugepages_test_plan.rst |  4 +-
 ...er_multi_queues_port_restart_test_plan.rst | 20 ++---
 .../vdev_primary_secondary_test_plan.rst      | 14 ++--
 test_plans/vhost_cbdma_test_plan.rst          | 10 +--
 .../vhost_event_idx_interrupt_test_plan.rst   |  8 +-
 .../vhost_multi_queue_qemu_test_plan.rst      | 12 +--
 test_plans/vhost_user_interrupt_test_plan.rst |  4 +-
 .../vhost_user_live_migration_test_plan.rst   | 80 +++++++++----------
 .../vhost_virtio_pmd_interrupt_test_plan.rst  | 14 ++--
 .../vhost_virtio_user_interrupt_test_plan.rst | 12 +--
 .../virtio_event_idx_interrupt_test_plan.rst  | 20 ++---
 .../virtio_pvp_regression_test_plan.rst       | 32 ++++----
 ...tio_user_as_exceptional_path_test_plan.rst | 12 +--
 ...ser_for_container_networking_test_plan.rst |  4 +-
 test_plans/vm2vm_virtio_pmd_test_plan.rst     | 54 ++++++-------
 test_plans/vswitch_sample_cbdma_test_plan.rst | 12 +--
 28 files changed, 260 insertions(+), 260 deletions(-)
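The diffstat above is a mechanical one-for-one substitution across 28 test plans. A change like this can be sketched as a tree-wide sed run (a minimal sketch, assuming it is run from the dts repository root; the actual patch may well have been prepared differently):

```shell
# migrate_driver: rewrite igb_uio to vfio-pci in every file under a directory.
# Review the resulting diff (git diff) before committing.
migrate_driver() {
    grep -rl 'igb_uio' "$1" | xargs -r sed -i 's/igb_uio/vfio-pci/g'
}
migrate_driver test_plans/   # run from the dts repository root
```

Note that a blind substitution misses misspellings such as the `ugb_uio` in cbdma_test_plan.rst, which the posted patch also rewrites, so the diff still needs a manual pass.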
  

Comments

Lingli Chen Nov. 25, 2021, 5:17 a.m. UTC | #1
> -----Original Message-----
> From: Chen, LingliX <linglix.chen@intel.com>
> Sent: Thursday, November 25, 2021 9:13 PM
> To: dts@dpdk.org
> Cc: Chen, LingliX <linglix.chen@intel.com>
> Subject: [dts][PATCH V1 2/2] test_plans/*: Change igb_uio to vfio-pci
> 
> Cbdma only tests vfio-pci from 21.11, so remove igb_uio.
> 
> Signed-off-by: Lingli Chen <linglix.chen@intel.com>
> ---

Tested-by: Lingli Chen <linglix.chen@intel.com>
  
Wang, Yinan Nov. 25, 2021, 5:38 a.m. UTC | #2
Acked-by:  Yinan Wang <yinan.wang@intel.com>

> -----Original Message-----
> From: Chen, LingliX <linglix.chen@intel.com>
> Sent: 2021年11月25日 13:17
> To: dts@dpdk.org
> Cc: Wang, Yinan <yinan.wang@intel.com>
> Subject: RE: [dts][PATCH V1 2/2] test_plans/*: Change igb_uio to vfio-pci
> 
> 
> > -----Original Message-----
> > From: Chen, LingliX <linglix.chen@intel.com>
> > Sent: Thursday, November 25, 2021 9:13 PM
> > To: dts@dpdk.org
> > Cc: Chen, LingliX <linglix.chen@intel.com>
> > Subject: [dts][PATCH V1 2/2] test_plans/*: Change igb_uio to vfio-pci
> >
> > Cbdma only tests vfio-pci from 21.11, so remove igb_uio.
> >
> > Signed-off-by: Lingli Chen <linglix.chen@intel.com>
> > ---
> 
> Tested-by: Lingli Chen <linglix.chen@intel.com>
  
Tu, Lijuan Nov. 30, 2021, 2:48 a.m. UTC | #3
> -----Original Message-----
> From: Wang, Yinan <yinan.wang@intel.com>
> Sent: 2021年11月25日 13:38
> To: Chen, LingliX <linglix.chen@intel.com>; dts@dpdk.org
> Subject: RE: [dts][PATCH V1 2/2] test_plans/*: Change igb_uio to vfio-pci
> 
> Acked-by:  Yinan Wang <yinan.wang@intel.com>
> 
> > -----Original Message-----
> > From: Chen, LingliX <linglix.chen@intel.com>
> > Sent: 2021年11月25日 13:17
> > To: dts@dpdk.org
> > Cc: Wang, Yinan <yinan.wang@intel.com>
> > Subject: RE: [dts][PATCH V1 2/2] test_plans/*: Change igb_uio to vfio-pci
> >
> >
> > > -----Original Message-----
> > > From: Chen, LingliX <linglix.chen@intel.com>
> > > Sent: Thursday, November 25, 2021 9:13 PM
> > > To: dts@dpdk.org
> > > Cc: Chen, LingliX <linglix.chen@intel.com>
> > > Subject: [dts][PATCH V1 2/2] test_plans/*: Change igb_uio to vfio-pci
> > >
> > > Cbdma only tests vfio-pci from 21.11, so remove igb_uio.
> > >
> > > Signed-off-by: Lingli Chen <linglix.chen@intel.com>
> > > ---
> >
> > Tested-by: Lingli Chen <linglix.chen@intel.com>

Applied, thanks
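Worth noting for anyone rerunning these plans: unlike the out-of-tree igb_uio module, vfio-pci ships with the kernel but expects a working IOMMU (e.g. intel_iommu=on iommu=pt on the kernel command line, or vfio's no-IOMMU mode). A minimal host-side sketch follows; the PCI addresses are placeholders, and the bind commands are only printed here so the sketch stays side-effect free:

```shell
# Load vfio-pci once, then bind each CBDMA/NIC port to it.
bind_vfio() {
    # Emit the devbind command for one device; in a real run, execute it instead.
    printf './usertools/dpdk-devbind.py -b vfio-pci %s\n' "$1"
}
modprobe vfio-pci 2>/dev/null || true   # harmless if already loaded or not root
bind_vfio 0000:80:04.0    # placeholder CBDMA port address
bind_vfio 0000:18:00.0    # placeholder NIC port address
```

`./usertools/dpdk-devbind.py -s` afterwards shows which driver each device is bound to.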
  

Patch

diff --git a/test_plans/cbdma_test_plan.rst b/test_plans/cbdma_test_plan.rst
index d9dcc193..53fe5eb7 100644
--- a/test_plans/cbdma_test_plan.rst
+++ b/test_plans/cbdma_test_plan.rst
@@ -93,7 +93,7 @@  NIC RX -> copy packet -> free original -> update mac addresses -> NIC TX
 Test Case1: CBDMA basic test with differnet size packets
 ========================================================
 
-1.Bind one cbdma port and one nic port to igb_uio driver.
+1.Bind one cbdma port and one nic port to vfio-pci driver.
 
 2.Launch dma app::
 
@@ -106,7 +106,7 @@  Test Case1: CBDMA basic test with differnet size packets
 Test Case2: CBDMA test with multi-threads
 =========================================
 
-1.Bind one cbdma port and one nic port to igb_uio driver.
+1.Bind one cbdma port and one nic port to vfio-pci driver.
 
 2.Launch dma app with three cores::
 
@@ -119,7 +119,7 @@  Test Case2: CBDMA test with multi-threads
 Test Case3: CBDMA test with multi nic ports
 ===========================================
 
-1.Bind two cbdma ports and two nic ports to igb_uio driver.
+1.Bind two cbdma ports and two nic ports to vfio-pci driver.
 
 2.Launch dma app with multi-ports::
 
@@ -132,7 +132,7 @@  Test Case3: CBDMA test with multi nic ports
 Test Case4: CBDMA test with multi-queues
 ========================================
 
-1.Bind two cbdma ports and one nic port to igb_uio driver.
+1.Bind two cbdma ports and one nic port to vfio-pci driver.
 
 2.Launch dma app with multi-queues::
 
@@ -148,7 +148,7 @@  Check performance gains status when queue numbers added.
 Test Case5: CBDMA performance cmparison between mac-updating and no-mac-updating
 ================================================================================
 
-1.Bind one cbdma ports and one nic port to igb_uio driver.
+1.Bind one cbdma ports and one nic port to vfio-pci driver.
 
 2.Launch dma app::
 
@@ -173,7 +173,7 @@  Test Case5: CBDMA performance cmparison between mac-updating and no-mac-updating
 Test Case6: CBDMA performance cmparison between HW copies and SW copies using different packet size
 ===================================================================================================
 
-1.Bind four cbdma pors and one nic port to igb_uio driver.
+1.Bind four cbdma pors and one nic port to vfio-pci driver.
 
 2.Launch dma app with three cores::
 
@@ -198,7 +198,7 @@  Test Case6: CBDMA performance cmparison between HW copies and SW copies using di
 Test Case7: CBDMA multi application mode test
 =============================================
 
-1.Bind four cbdma ports to ugb_uio driver.
+1.Bind four cbdma ports to vfio-pci driver.
 
 2.Launch test-pmd app with three cores and proc_type primary:
 
diff --git a/test_plans/dpdk_gro_lib_test_plan.rst b/test_plans/dpdk_gro_lib_test_plan.rst
index ef16d997..9685afc1 100644
--- a/test_plans/dpdk_gro_lib_test_plan.rst
+++ b/test_plans/dpdk_gro_lib_test_plan.rst
@@ -127,9 +127,9 @@  Test Case1: DPDK GRO lightmode test with tcp/ipv4 traffic
     ip netns exec ns1 ifconfig [enp216s0f0] 1.1.1.8 up
     ip netns exec ns1 ethtool -K [enp216s0f0] tso on
 
-2. Bind nic1 to igb_uio, launch vhost-user with testpmd and set flush interval to 1::
+2. Bind nic1 to vfio-pci, launch vhost-user with testpmd and set flush interval to 1::
 
-    ./dpdk-devbind.py -b igb_uio xx:xx.x
+    ./dpdk-devbind.py -b vfio-pci xx:xx.x
     ./testpmd -l 2-4 -n 4 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024
     testpmd>set fwd csum
@@ -179,9 +179,9 @@  Test Case2: DPDK GRO heavymode test with tcp/ipv4 traffic
     ip netns exec ns1 ifconfig [enp216s0f0] 1.1.1.8 up
     ip netns exec ns1 ethtool -K [enp216s0f0] tso on
 
-2. Bind nic1 to igb_uio, launch vhost-user with testpmd and set flush interval to 2::
+2. Bind nic1 to vfio-pci, launch vhost-user with testpmd and set flush interval to 2::
 
-    ./dpdk-devbind.py -b igb_uio xx:xx.x
+    ./dpdk-devbind.py -b vfio-pci xx:xx.x
     ./testpmd -l 2-4 -n 4 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024
     testpmd>set fwd csum
@@ -231,9 +231,9 @@  Test Case3: DPDK GRO heavymode_flush4 test with tcp/ipv4 traffic
     ip netns exec ns1 ifconfig [enp216s0f0] 1.1.1.8 up
     ip netns exec ns1 ethtool -K [enp216s0f0] tso on
 
-2. Bind nic1 to igb_uio, launch vhost-user with testpmd and set flush interval to 4::
+2. Bind nic1 to vfio-pci, launch vhost-user with testpmd and set flush interval to 4::
 
-    ./dpdk-devbind.py -b igb_uio xx:xx.x
+    ./dpdk-devbind.py -b vfio-pci xx:xx.x
     ./testpmd -l 2-4 -n 4 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024
     testpmd>set fwd csum
@@ -299,9 +299,9 @@  Vxlan topology
     ip netns exec t2 ip addr add $VXLAN_IP/24 dev $VXLAN_NAME
     ip netns exec t2 ip link set up dev $VXLAN_NAME
 
-2. Bind nic1 to igb_uio, launch vhost-user with testpmd and set flush interval to 4::
+2. Bind nic1 to vfio-pci, launch vhost-user with testpmd and set flush interval to 4::
 
-    ./dpdk-devbind.py -b igb_uio xx:xx.x
+    ./dpdk-devbind.py -b vfio-pci xx:xx.x
     ./testpmd -l 2-4 -n 4 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024
     testpmd>set fwd csum
@@ -363,9 +363,9 @@  NIC2(In kernel) -> NIC1(DPDK) -> testpmd(csum fwd) -> Vhost -> Virtio-net
     ip netns exec ns1 ifconfig enp26s0f0 1.1.1.8 up
     ip netns exec ns1 ethtool -K enp26s0f0 tso on
 
-2. Bind cbdma port and nic1 to igb_uio, launch vhost-user with testpmd and set flush interval to 1::
+2. Bind cbdma port and nic1 to vfio-pci, launch vhost-user with testpmd and set flush interval to 1::
 
-    ./dpdk-devbind.py -b igb_uio xx:xx.x
+    ./dpdk-devbind.py -b vfio-pci xx:xx.x
     ./x86_64-native-linuxapp-gcc/app/testpmd -l 29-31 -n 4 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2' -- -i --txd=1024 --rxd=1024 --txq=2 --rxq=2 --nb-cores=2
     set fwd csum
@@ -421,9 +421,9 @@  NIC2(In kernel) -> NIC1(DPDK) -> testpmd(csum fwd) -> Vhost -> Virtio-net
     ip netns exec ns1 ifconfig enp26s0f0 1.1.1.8 up
     ip netns exec ns1 ethtool -K enp26s0f0 tso on
 
-2. Bind cbdma port and nic1 to igb_uio, launch vhost-user with testpmd and set flush interval to 1::
+2. Bind cbdma port and nic1 to vfio-pci, launch vhost-user with testpmd and set flush interval to 1::
 
-    ./dpdk-devbind.py -b igb_uio xx:xx.x
+    ./dpdk-devbind.py -b vfio-pci xx:xx.x
     ./x86_64-native-linuxapp-gcc/app/testpmd -l 29-31 -n 4 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,dmas=[txq0@80:04.0;txq1@80:04.1]' -- -i --txd=1024 --rxd=1024 --txq=2 --rxq=2 --nb-cores=2
     set fwd csum
diff --git a/test_plans/dpdk_gso_lib_test_plan.rst b/test_plans/dpdk_gso_lib_test_plan.rst
index be1bdd20..2c8b32ad 100644
--- a/test_plans/dpdk_gso_lib_test_plan.rst
+++ b/test_plans/dpdk_gso_lib_test_plan.rst
@@ -96,9 +96,9 @@  Test Case1: DPDK GSO test with tcp traffic
     ip netns exec ns1 ifconfig [enp216s0f0] 1.1.1.8 up
     ip netns exec ns1 ethtool -K [enp216s0f0] gro on
 
-2. Bind nic1 to igb_uio, launch vhost-user with testpmd::
+2. Bind nic1 to vfio-pci, launch vhost-user with testpmd::
 
-    ./dpdk-devbind.py -b igb_uio xx:xx.x       # xx:xx.x is the pci addr of nic1
+    ./dpdk-devbind.py -b vfio-pci xx:xx.x       # xx:xx.x is the pci addr of nic1
     ./testpmd -l 2-4 -n 4 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024
     testpmd>set fwd csum
@@ -156,9 +156,9 @@  Test Case3: DPDK GSO test with vxlan traffic
     ip netns exec ns1 ip link add vxlan100 type vxlan id 1000 remote 188.0.0.2 local 188.0.0.1 dstport 4789 dev [enp216s0f0]
     ip netns exec ns1 ifconfig vxlan100 1.1.1.1/24 up
 
-2. Bind nic1 to igb_uio, launch vhost-user with testpmd::
+2. Bind nic1 to vfio-pci, launch vhost-user with testpmd::
 
-    ./dpdk-devbind.py -b igb_uio xx:xx.x
+    ./dpdk-devbind.py -b vfio-pci xx:xx.x
     ./testpmd -l 2-4 -n 4 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024
     testpmd>set fwd csum
@@ -210,9 +210,9 @@  Test Case4: DPDK GSO test with gre traffic
     ip netns exec ns1 ip tunnel add gre100 mode gre remote 188.0.0.2 local 188.0.0.1
     ip netns exec ns1 ifconfig gre100 1.1.1.1/24 up
 
-2. Bind nic1 to igb_uio, launch vhost-user with testpmd::
+2. Bind nic1 to vfio-pci, launch vhost-user with testpmd::
 
-    ./dpdk-devbind.py -b igb_uio xx:xx.x
+    ./dpdk-devbind.py -b vfio-pci xx:xx.x
     ./testpmd -l 2-4 -n 4 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024
     testpmd>set fwd csum
diff --git a/test_plans/dpdk_hugetlbfs_mount_size_test_plan.rst b/test_plans/dpdk_hugetlbfs_mount_size_test_plan.rst
index 218b9604..bda21d39 100644
--- a/test_plans/dpdk_hugetlbfs_mount_size_test_plan.rst
+++ b/test_plans/dpdk_hugetlbfs_mount_size_test_plan.rst
@@ -49,7 +49,7 @@  Test Case 1: default hugepage size w/ and w/o numa
 
     mount -t hugetlbfs hugetlbfs /mnt/huge
 
-2. Bind one nic port to igb_uio driver, launch testpmd::
+2. Bind one nic port to vfio-pci driver, launch testpmd::
 
     ./dpdk-testpmd -c 0x3 -n 4 --huge-dir /mnt/huge --file-prefix=abc -- -i
     testpmd>start
@@ -71,7 +71,7 @@  Test Case 2: mount size exactly match total hugepage size with two mount points
     mount -t hugetlbfs -o size=4G hugetlbfs /mnt/huge1
     mount -t hugetlbfs -o size=4G hugetlbfs /mnt/huge2
 
-2. Bind two nic ports to igb_uio driver, launch testpmd with numactl::
+2. Bind two nic ports to vfio-pci driver, launch testpmd with numactl::
 
     numactl --membind=1 ./dpdk-testpmd -l 31-32 -n 4 --legacy-mem --socket-mem 0,2048 --huge-dir /mnt/huge1 --file-prefix=abc -a 82:00.0 -- -i --socket-num=1 --no-numa
     testpmd>start
@@ -88,7 +88,7 @@  Test Case 3: mount size greater than total hugepage size with single mount point
 
     mount -t hugetlbfs -o size=9G hugetlbfs /mnt/huge
 
-2. Bind one nic port to igb_uio driver, launch testpmd::
+2. Bind one nic port to vfio-pci driver, launch testpmd::
 
     ./dpdk-testpmd -c 0x3 -n 4 --legacy-mem --huge-dir /mnt/huge --file-prefix=abc -- -i
     testpmd>start
@@ -104,7 +104,7 @@  Test Case 4: mount size greater than total hugepage size with multiple mount poi
     mount -t hugetlbfs -o size=4G hugetlbfs /mnt/huge2
     mount -t hugetlbfs -o size=1G hugetlbfs /mnt/huge3
 
-2. Bind one nic port to igb_uio driver, launch testpmd::
+2. Bind one nic port to vfio-pci driver, launch testpmd::
 
     numactl --membind=0 ./dpdk-testpmd -c 0x3 -n 4  --legacy-mem --socket-mem 2048,0 --huge-dir /mnt/huge1 --file-prefix=abc -- -i --socket-num=0 --no-numa
     testpmd>start
@@ -120,7 +120,7 @@  Test Case 4: mount size greater than total hugepage size with multiple mount poi
 Test Case 5: run dpdk app in limited hugepages controlled by cgroup
 ===================================================================
 
-1. Bind one nic port to igb_uio driver, launch testpmd in limited hugepages::
+1. Bind one nic port to vfio-pci driver, launch testpmd in limited hugepages::
 
     cgcreate -g hugetlb:/test-subgroup
     cgset -r hugetlb.1GB.limit_in_bytes=2147483648 test-subgroup
diff --git a/test_plans/pvp_diff_qemu_version_test_plan.rst b/test_plans/pvp_diff_qemu_version_test_plan.rst
index 612e0e26..125c9a65 100644
--- a/test_plans/pvp_diff_qemu_version_test_plan.rst
+++ b/test_plans/pvp_diff_qemu_version_test_plan.rst
@@ -47,7 +47,7 @@  TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 Test Case 1: PVP multi qemu version test with virtio 0.95 mergeable path
 ========================================================================
 
-1. Bind one port to igb_uio, then launch testpmd by below command::
+1. Bind one port to vfio-pci, then launch testpmd by below command::
 
     rm -rf vhost-net*
     ./testpmd -c 0xe -n 4 \
@@ -72,7 +72,7 @@  Test Case 1: PVP multi qemu version test with virtio 0.95 mergeable path
     -device virtio-net-pci,netdev=netdev1,mac=52:54:00:00:00:01,mrg_rxbuf=on \
     -vnc :10
 
-4. On VM, bind virtio net to igb_uio and run testpmd ::
+4. On VM, bind virtio net to vfio-pci and run testpmd ::
     ./testpmd -c 0x3 -n 3 -- -i \
     --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -85,7 +85,7 @@  Test Case 1: PVP multi qemu version test with virtio 0.95 mergeable path
 Test Case 2: PVP test with virtio 1.0 mergeable path
 ====================================================
 
-1. Bind one port to igb_uio, then launch testpmd by below command::
+1. Bind one port to vfio-pci, then launch testpmd by below command::
 
     rm -rf vhost-net*
     ./testpmd -c 0xe -n 4 \
@@ -110,7 +110,7 @@  Test Case 2: PVP test with virtio 1.0 mergeable path
     -device virtio-net-pci,netdev=netdev1,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on \
     -vnc :10
 
-3. On VM, bind virtio net to igb_uio and run testpmd::
+3. On VM, bind virtio net to vfio-pci and run testpmd::
 
     ./testpmd -c 0x3 -n 3 -- -i \
     --nb-cores=1 --txd=1024 --rxd=1024
diff --git a/test_plans/pvp_multi_paths_performance_test_plan.rst b/test_plans/pvp_multi_paths_performance_test_plan.rst
index 11a700f0..3a0a8a72 100644
--- a/test_plans/pvp_multi_paths_performance_test_plan.rst
+++ b/test_plans/pvp_multi_paths_performance_test_plan.rst
@@ -56,7 +56,7 @@  TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 Test Case 1: pvp test with virtio 1.1 mergeable path
 ====================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -n 4 -l 2-3 \
@@ -80,7 +80,7 @@  Test Case 1: pvp test with virtio 1.1 mergeable path
 Test Case 2: pvp test with virtio 1.1 non-mergeable path
 ========================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -n 4 -l 2-3 \
@@ -104,7 +104,7 @@  Test Case 2: pvp test with virtio 1.1 non-mergeable path
 Test Case 3: pvp test with inorder mergeable path
 =================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -n 4 -l 2-3 \
@@ -128,7 +128,7 @@  Test Case 3: pvp test with inorder mergeable path
 Test Case 4: pvp test with inorder non-mergeable path
 =====================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -n 4 -l 2-4 \
@@ -152,7 +152,7 @@  Test Case 4: pvp test with inorder non-mergeable path
 Test Case 5: pvp test with mergeable path
 =========================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -n 4 -l 2-4 \
@@ -176,7 +176,7 @@  Test Case 5: pvp test with mergeable path
 Test Case 6: pvp test with non-mergeable path
 =============================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -n 4 -l 2-4 \
@@ -200,7 +200,7 @@  Test Case 6: pvp test with non-mergeable path
 Test Case 7: pvp test with vectorized_rx path
 =============================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -n 4 -l 2-4 \
@@ -224,7 +224,7 @@  Test Case 7: pvp test with vectorized_rx path
 Test Case 8: pvp test with virtio 1.1 inorder mergeable path
 ============================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -n 4 -l 2-3 \
@@ -248,7 +248,7 @@  Test Case 8: pvp test with virtio 1.1 inorder mergeable path
 Test Case 9: pvp test with virtio 1.1 inorder non-mergeable path
 ================================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -n 4 -l 2-3 \
@@ -272,7 +272,7 @@  Test Case 9: pvp test with virtio 1.1 inorder non-mergeable path
 Test Case 10: pvp test with virtio 1.1 vectorized path
 ======================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' \
diff --git a/test_plans/pvp_multi_paths_vhost_single_core_performance_test_plan.rst b/test_plans/pvp_multi_paths_vhost_single_core_performance_test_plan.rst
index 00e6009c..b10c415a 100644
--- a/test_plans/pvp_multi_paths_vhost_single_core_performance_test_plan.rst
+++ b/test_plans/pvp_multi_paths_vhost_single_core_performance_test_plan.rst
@@ -47,7 +47,7 @@  TG --> NIC --> Virtio --> Vhost --> Virtio --> NIC --> TG
 Test Case 1: vhost single core performance test with virtio 1.1 mergeable path
 ==============================================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \
@@ -68,7 +68,7 @@  Test Case 1: vhost single core performance test with virtio 1.1 mergeable path
 Test Case 2: vhost single core performance test with virtio 1.1 non-mergeable path
 ==================================================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \
@@ -89,7 +89,7 @@  Test Case 2: vhost single core performance test with virtio 1.1 non-mergeable pa
 Test Case 3: vhost single core performance test with inorder mergeable path
 ===========================================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \
@@ -110,7 +110,7 @@  Test Case 3: vhost single core performance test with inorder mergeable path
 Test Case 4: vhost single core performance test with inorder non-mergeable path
 ===============================================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \
@@ -131,7 +131,7 @@  Test Case 4: vhost single core performance test with inorder non-mergeable path
 Test Case 5: vhost single core performance test with mergeable path
 ===================================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \
@@ -152,7 +152,7 @@  Test Case 5: vhost single core performance test with mergeable path
 Test Case 6: vhost single core performance test with non-mergeable path
 =======================================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \
@@ -173,7 +173,7 @@  Test Case 6: vhost single core performance test with non-mergeable path
 Test Case 7: vhost single core performance test with vectorized_rx path
 =======================================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \
@@ -194,7 +194,7 @@  Test Case 7: vhost single core performance test with vectorized_rx path
 Test Case 8: vhost single core performance test with virtio 1.1 inorder mergeable path
 ======================================================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \
@@ -215,7 +215,7 @@  Test Case 8: vhost single core performance test with virtio 1.1 inorder mergeabl
 Test Case 9: vhost single core performance test with virtio 1.1 inorder non-mergeable path
 ==========================================================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \
@@ -236,7 +236,7 @@  Test Case 9: vhost single core performance test with virtio 1.1 inorder non-merg
 Test Case 10: vhost single core performance test with virtio 1.1 vectorized path
 ================================================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \
diff --git a/test_plans/pvp_multi_paths_virtio_single_core_performance_test_plan.rst b/test_plans/pvp_multi_paths_virtio_single_core_performance_test_plan.rst
index 3a66cd12..ea7ff698 100644
--- a/test_plans/pvp_multi_paths_virtio_single_core_performance_test_plan.rst
+++ b/test_plans/pvp_multi_paths_virtio_single_core_performance_test_plan.rst
@@ -47,7 +47,7 @@  TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 Test Case 1: virtio single core performance test with virtio 1.1 mergeable path
 ===============================================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -n 4 -l 2-4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=2 --txd=1024 --rxd=1024
@@ -67,7 +67,7 @@  Test Case 1: virtio single core performance test with virtio 1.1 mergeable path
 Test Case 2: virtio single core performance test with virtio 1.1 non-mergeable path
 ===================================================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -n 4 -l 2-4  \
@@ -88,7 +88,7 @@  Test Case 2: virtio single core performance test with virtio 1.1 non-mergeable p
 Test Case 3: virtio single core performance test with inorder mergeable path
 ============================================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -n 4 -l 2-4  \
@@ -109,7 +109,7 @@  Test Case 3: virtio single core performance test with inorder mergeable path
 Test Case 4: virtio single core performance test with inorder non-mergeable path
 ================================================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -n 4 -l 2-4  \
@@ -130,7 +130,7 @@  Test Case 4: virtio single core performance test with inorder non-mergeable path
 Test Case 5: virtio single core performance test with mergeable path
 ====================================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -n 4 -l 2-4  \
@@ -151,7 +151,7 @@  Test Case 5: virtio single core performance test with mergeable path
 Test Case 6: virtio single core performance test with non-mergeable path
 ========================================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -n 4 -l 2-4  \
@@ -172,7 +172,7 @@  Test Case 6: virtio single core performance test with non-mergeable path
 Test Case 7: virtio single core performance test with vectorized_rx path
 ========================================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -n 4 -l 2-4  \
@@ -193,7 +193,7 @@  Test Case 7: virtio single core performance test with vectorized_rx path
 Test Case 8: virtio single core performance test with virtio 1.1 inorder mergeable path
 =======================================================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -n 4 -l 2-4 \
@@ -214,7 +214,7 @@  Test Case 8: virtio single core performance test with virtio 1.1 inorder mergeab
 Test Case 9: virtio single core performance test with virtio 1.1 inorder non-mergeable path
 ===========================================================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -n 4 -l 2-4  \
@@ -235,7 +235,7 @@  Test Case 9: virtio single core performance test with virtio 1.1 inorder non-mer
 Test Case 10: virtio single core performance test with virtio 1.1 vectorized path
 =================================================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -n 4 -l 2-4  --no-pci \
diff --git a/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst b/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst
index 9456fdc4..ddf8beca 100644
--- a/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst
+++ b/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst
@@ -48,7 +48,7 @@  TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 Test Case 1: pvp test with virtio 0.95 mergeable path
 =====================================================
 
-1. Bind one port to igb_uio, then launch testpmd by below command::
+1. Bind one port to vfio-pci, then launch testpmd by below command::
 
     rm -rf vhost-net*
     ./testpmd -c 0xe -n 4 \
@@ -71,7 +71,7 @@  Test Case 1: pvp test with virtio 0.95 mergeable path
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
     -vnc :10
 
-3. On VM, bind virtio net to igb_uio and run testpmd::
+3. On VM, bind virtio net to vfio-pci and run testpmd::
 
     ./testpmd -c 0x3 -n 3 -- -i \
     --nb-cores=1 --txd=1024 --rxd=1024
@@ -95,7 +95,7 @@  Test Case 1: pvp test with virtio 0.95 mergeable path
 Test Case 2: pvp test with virtio 0.95 normal path
 ==================================================
 
-1. Bind one port to igb_uio, then launch testpmd by below command::
+1. Bind one port to vfio-pci, then launch testpmd by below command::
 
     rm -rf vhost-net*
     ./testpmd -c 0xe -n 4 \
@@ -117,7 +117,7 @@  Test Case 2: pvp test with virtio 0.95 normal path
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
     -vnc :10
 
-3. On VM, bind virtio net to igb_uio and run testpmd with tx-offloads::
+3. On VM, bind virtio net to vfio-pci and run testpmd with tx-offloads::
 
     ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x0 --enable-hw-vlan-strip \
     --nb-cores=1 --txd=1024 --rxd=1024
@@ -141,7 +141,7 @@  Test Case 2: pvp test with virtio 0.95 normal path
 Test Case 3: pvp test with virtio 0.95 vrctor_rx path
 =====================================================
 
-1. Bind one port to igb_uio, then launch testpmd by below command::
+1. Bind one port to vfio-pci, then launch testpmd by below command::
 
     rm -rf vhost-net*
     ./testpmd -c 0xe -n 4 \
@@ -163,7 +163,7 @@  Test Case 3: pvp test with virtio 0.95 vrctor_rx path
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
     -vnc :10
 
-3. On VM, bind virtio net to igb_uio and run testpmd without ant tx-offloads::
+3. On VM, bind virtio net to vfio-pci and run testpmd without any tx-offloads::
 
     ./testpmd -c 0x3 -n 3 -- -i \
     --nb-cores=1 --txd=1024 --rxd=1024
@@ -187,7 +187,7 @@  Test Case 3: pvp test with virtio 0.95 vrctor_rx path
 Test Case 4: pvp test with virtio 1.0 mergeable path
 ====================================================
 
-1. Bind one port to igb_uio, then launch testpmd by below command::
+1. Bind one port to vfio-pci, then launch testpmd by below command::
 
     rm -rf vhost-net*
     ./testpmd -c 0xe -n 4 \
@@ -209,7 +209,7 @@  Test Case 4: pvp test with virtio 1.0 mergeable path
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
     -vnc :10
 
-3. On VM, bind virtio net to igb_uio and run testpmd::
+3. On VM, bind virtio net to vfio-pci and run testpmd::
 
     ./testpmd -c 0x3 -n 3 -- -i \
     --nb-cores=1 --txd=1024 --rxd=1024
@@ -233,7 +233,7 @@  Test Case 4: pvp test with virtio 1.0 mergeable path
 Test Case 5: pvp test with virtio 1.0 normal path
 =================================================
 
-1. Bind one port to igb_uio, then launch testpmd by below command::
+1. Bind one port to vfio-pci, then launch testpmd by below command::
 
     rm -rf vhost-net*
     ./testpmd -c 0xe -n 4 \
@@ -255,7 +255,7 @@  Test Case 5: pvp test with virtio 1.0 normal path
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
     -vnc :10
 
-3. On VM, bind virtio net to igb_uio and run testpmd with tx-offloads::
+3. On VM, bind virtio net to vfio-pci and run testpmd with tx-offloads::
 
     ./testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x0 --enable-hw-vlan-strip\
     --nb-cores=1 --txd=1024 --rxd=1024
@@ -279,7 +279,7 @@  Test Case 5: pvp test with virtio 1.0 normal path
 Test Case 6: pvp test with virtio 1.0 vrctor_rx path
 ====================================================
 
-1. Bind one port to igb_uio, then launch testpmd by below command::
+1. Bind one port to vfio-pci, then launch testpmd by below command::
 
     rm -rf vhost-net*
     ./testpmd -c 0xe -n 4 \
@@ -301,7 +301,7 @@  Test Case 6: pvp test with virtio 1.0 vrctor_rx path
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
     -vnc :10
 
-3. On VM, bind virtio net to igb_uio and run testpmd without tx-offloads::
+3. On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads::
 
     ./testpmd -c 0x3 -n 3 -- -i \
     --nb-cores=1 --txd=1024 --rxd=1024
diff --git a/test_plans/pvp_share_lib_test_plan.rst b/test_plans/pvp_share_lib_test_plan.rst
index f0610e90..a1a6c56f 100644
--- a/test_plans/pvp_share_lib_test_plan.rst
+++ b/test_plans/pvp_share_lib_test_plan.rst
@@ -54,7 +54,7 @@  Test Case1: Vhost/virtio-user pvp share lib test with niantic
 
     export LD_LIBRARY_PATH=/root/dpdk/x86_64-native-linuxapp-gcc/drivers:$LD_LIBRARY_PATH
 
-4. Bind niantic port with igb_uio, use option ``-d`` to load the dynamic pmd when launch vhost::
+4. Bind niantic port with vfio-pci, use option ``-d`` to load the dynamic pmd when launching vhost::
 
     ./testpmd  -c 0x03 -n 4 -d librte_net_vhost.so.21.0 -d librte_net_i40e.so.21.0 -d librte_mempool_ring.so.21.0 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i
@@ -75,7 +75,7 @@  Test Case2: Vhost/virtio-user pvp share lib test with fortville
 
 Similar as Test Case1, all steps are similar except step 4:
 
-4. Bind fortville port with igb_uio, use option ``-d`` to load the dynamic pmd when launch vhost::
+4. Bind fortville port with vfio-pci, use option ``-d`` to load the dynamic pmd when launching vhost::
 
     ./testpmd  -c 0x03 -n 4 -d librte_net_vhost.so -d librte_net_i40e.so -d librte_mempool_ring.so \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i
diff --git a/test_plans/pvp_vhost_user_reconnect_test_plan.rst b/test_plans/pvp_vhost_user_reconnect_test_plan.rst
index 6641d447..f13bbb0a 100644
--- a/test_plans/pvp_vhost_user_reconnect_test_plan.rst
+++ b/test_plans/pvp_vhost_user_reconnect_test_plan.rst
@@ -59,7 +59,7 @@  Test Case1: vhost-user/virtio-pmd pvp split ring reconnect from vhost-user
 ==========================================================================
 Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
 
-1. Bind one port to igb_uio, then launch vhost with client mode by below commands::
+1. Bind one port to vfio-pci, then launch vhost with client mode by below commands::
 
     ./testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
     testpmd>set fwd mac
@@ -79,7 +79,7 @@  Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
     -vnc :10
 
-3. On VM, bind virtio net to igb_uio and run testpmd::
+3. On VM, bind virtio net to vfio-pci and run testpmd::
 
     ./testpmd -c 0x3 -n 4 -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -104,7 +104,7 @@  Test Case2: vhost-user/virtio-pmd pvp split ring reconnect from VM
 ==================================================================
 Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
 
-1. Bind one port to igb_uio, then launch vhost with client mode by below commands::
+1. Bind one port to vfio-pci, then launch vhost with client mode by below commands::
 
     ./testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
     testpmd>set fwd mac
@@ -124,7 +124,7 @@  Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
     -vnc :10
 
-3. On VM, bind virtio net to igb_uio and run testpmd::
+3. On VM, bind virtio net to vfio-pci and run testpmd::
 
     ./testpmd -c 0x3 -n 4 -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -149,7 +149,7 @@  Similar as Test Case1, all steps are similar except step 5, 6.
 Test Case 4: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from vhost-user
 ==========================================================================================
 
-1. Bind one port to igb_uio, launch the vhost by below command::
+1. Bind one port to vfio-pci, launch the vhost by below command::
 
     ./testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -181,13 +181,13 @@  Test Case 4: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
     -vnc :11
 
-3. On VM1, bind virtio1 to igb_uio and run testpmd::
+3. On VM1, bind virtio1 to vfio-pci and run testpmd::
 
     ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
 
-4. On VM2, bind virtio2 to igb_uio and run testpmd::
+4. On VM2, bind virtio2 to vfio-pci and run testpmd::
 
     ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -211,7 +211,7 @@  Test Case 4: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from
 Test Case 5: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from VMs
 ===================================================================================
 
-1. Bind one port to igb_uio, launch the vhost by below command::
+1. Bind one port to vfio-pci, launch the vhost by below command::
 
     ./testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -243,13 +243,13 @@  Test Case 5: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
     -vnc :11
 
-3. On VM1, bind virtio1 to igb_uio and run testpmd::
+3. On VM1, bind virtio1 to vfio-pci and run testpmd::
 
     ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
 
-4. On VM2, bind virtio2 to igb_uio and run testpmd::
+4. On VM2, bind virtio2 to vfio-pci and run testpmd::
 
     ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --port-topology=chain --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -392,7 +392,7 @@  Test Case10: vhost-user/virtio-pmd pvp packed ring reconnect from vhost-user
 ============================================================================
 Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
 
-1. Bind one port to igb_uio, then launch vhost with client mode by below commands::
+1. Bind one port to vfio-pci, then launch vhost with client mode by below commands::
 
     ./testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
     testpmd>set fwd mac
@@ -412,7 +412,7 @@  Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
     -vnc :10
 
-3. On VM, bind virtio net to igb_uio and run testpmd::
+3. On VM, bind virtio net to vfio-pci and run testpmd::
 
     ./testpmd -c 0x3 -n 4 -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -437,7 +437,7 @@  Test Case11: vhost-user/virtio-pmd pvp packed ring reconnect from VM
 ====================================================================
 Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
 
-1. Bind one port to igb_uio, then launch vhost with client mode by below commands::
+1. Bind one port to vfio-pci, then launch vhost with client mode by below commands::
 
     ./testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
     testpmd>set fwd mac
@@ -457,7 +457,7 @@  Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
     -vnc :10
 
-3. On VM, bind virtio net to igb_uio and run testpmd::
+3. On VM, bind virtio net to vfio-pci and run testpmd::
 
     ./testpmd -c 0x3 -n 4 -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -482,7 +482,7 @@  Similar as Test Case1, all steps are similar except step 5, 6.
 Test Case 13: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect from vhost-user
 ============================================================================================
 
-1. Bind one port to igb_uio, launch the vhost by below command::
+1. Bind one port to vfio-pci, launch the vhost by below command::
 
     ./testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -514,13 +514,13 @@  Test Case 13: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect fro
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
     -vnc :11
 
-3. On VM1, bind virtio1 to igb_uio and run testpmd::
+3. On VM1, bind virtio1 to vfio-pci and run testpmd::
 
     ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
 
-4. On VM2, bind virtio2 to igb_uio and run testpmd::
+4. On VM2, bind virtio2 to vfio-pci and run testpmd::
 
     ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -544,7 +544,7 @@  Test Case 13: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect fro
 Test Case 14: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect from VMs
 =====================================================================================
 
-1. Bind one port to igb_uio, launch the vhost by below command::
+1. Bind one port to vfio-pci, launch the vhost by below command::
 
     ./testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -576,13 +576,13 @@  Test Case 14: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect fro
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
     -vnc :11
 
-3. On VM1, bind virtio1 to igb_uio and run testpmd::
+3. On VM1, bind virtio1 to vfio-pci and run testpmd::
 
     ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
 
-4. On VM2, bind virtio2 to igb_uio and run testpmd::
+4. On VM2, bind virtio2 to vfio-pci and run testpmd::
 
     ./testpmd -c 0x3 -n 4 -- -i --port-topology=chained --port-topology=chain --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
diff --git a/test_plans/pvp_virtio_bonding_test_plan.rst b/test_plans/pvp_virtio_bonding_test_plan.rst
index 90438cc9..2434802c 100644
--- a/test_plans/pvp_virtio_bonding_test_plan.rst
+++ b/test_plans/pvp_virtio_bonding_test_plan.rst
@@ -50,7 +50,7 @@  Test case 1: vhost-user/virtio-pmd pvp bonding test with mode 0
 ===============================================================
 Flow: TG--> NIC --> Vhost --> Virtio3 --> Virtio4 --> Vhost--> NIC--> TG
 
-1. Bind one port to igb_uio,launch vhost by below command::
+1. Bind one port to vfio-pci, launch vhost by below command::
 
     ./testpmd -l 1-6 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' --vdev 'net_vhost2,iface=vhost-net2,client=1,queues=1' --vdev 'net_vhost3,iface=vhost-net3,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=4 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -79,9 +79,9 @@  Flow: TG--> NIC --> Vhost --> Virtio3 --> Virtio4 --> Vhost--> NIC--> TG
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img -vnc :10
 
-3. On vm, bind four virtio-net devices to igb_uio::
+3. On vm, bind four virtio-net devices to vfio-pci::
 
-    ./dpdk-devbind.py -b igb_uio xx:xx.x xx:xx.x xx:xx.x xx:xx.x
+    ./dpdk-devbind.py -b vfio-pci xx:xx.x xx:xx.x xx:xx.x xx:xx.x
 
 4. Launch testpmd in VM::
 
@@ -112,7 +112,7 @@  Flow: TG--> NIC --> Vhost --> Virtio3 --> Virtio4 --> Vhost--> NIC--> TG
 Test case 2: vhost-user/virtio-pmd pvp bonding test with different mode from 1 to 6
 ===================================================================================
 
-1. Bind one port to igb_uio,launch vhost by below command::
+1. Bind one port to vfio-pci, launch vhost by below command::
 
     ./testpmd -l 1-6 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' --vdev 'net_vhost2,iface=vhost-net2,client=1,queues=1' --vdev 'net_vhost3,iface=vhost-net3,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=4 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -141,9 +141,9 @@  Test case 2: vhost-user/virtio-pmd pvp bonding test with different mode from 1 t
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img -vnc :10
 
-3. On vm, bind four virtio-net devices to igb_uio::
+3. On vm, bind four virtio-net devices to vfio-pci::
 
-    ./dpdk-devbind.py -b igb_uio xx:xx.x xx:xx.x xx:xx.x xx:xx.x
+    ./dpdk-devbind.py -b vfio-pci xx:xx.x xx:xx.x xx:xx.x xx:xx.x
 
 4. Launch testpmd in VM::
 
diff --git a/test_plans/pvp_virtio_user_2M_hugepages_test_plan.rst b/test_plans/pvp_virtio_user_2M_hugepages_test_plan.rst
index 89af30f7..a4ac1f18 100644
--- a/test_plans/pvp_virtio_user_2M_hugepages_test_plan.rst
+++ b/test_plans/pvp_virtio_user_2M_hugepages_test_plan.rst
@@ -44,7 +44,7 @@  Test Case1:  Basic test for virtio-user split ring 2M hugepage
 
 1. Before the test, plese make sure only 2M hugepage are mounted in host.
 
-2. Bind one port to igb_uio, launch vhost::
+2. Bind one port to vfio-pci, launch vhost::
 
     ./testpmd -l 3-4 -n 4 --file-prefix=vhost \
     --vdev 'net_vhost0,iface=/tmp/sock0,queues=1' -- -i
@@ -64,7 +64,7 @@  Test Case1:  Basic test for virtio-user packed ring 2M hugepage
 
 1. Before the test, plese make sure only 2M hugepage are mounted in host.
 
-2. Bind one port to igb_uio, launch vhost::
+2. Bind one port to vfio-pci, launch vhost::
 
     ./testpmd -l 3-4 -n 4 --file-prefix=vhost \
     --vdev 'net_vhost0,iface=/tmp/sock0,queues=1' -- -i
diff --git a/test_plans/pvp_virtio_user_multi_queues_port_restart_test_plan.rst b/test_plans/pvp_virtio_user_multi_queues_port_restart_test_plan.rst
index e877a791..629f6f42 100644
--- a/test_plans/pvp_virtio_user_multi_queues_port_restart_test_plan.rst
+++ b/test_plans/pvp_virtio_user_multi_queues_port_restart_test_plan.rst
@@ -51,7 +51,7 @@  TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 Test Case 1: pvp 2 queues test with packed ring mergeable path
 ===============================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -n 4 -l 2-4  \
@@ -89,7 +89,7 @@  Test Case 1: pvp 2 queues test with packed ring mergeable path
 Test Case 2: pvp 2 queues test with packed ring non-mergeable path
 ==================================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -n 4 -l 2-4  \
@@ -122,7 +122,7 @@  Test Case 2: pvp 2 queues test with packed ring non-mergeable path
 Test Case 3: pvp 2 queues test with split ring inorder mergeable path
 =====================================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -n 4 -l 2-4  \
@@ -155,7 +155,7 @@  Test Case 3: pvp 2 queues test with split ring inorder mergeable path
 Test Case 4: pvp 2 queues test with split ring inorder non-mergeable path
 ==========================================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -n 4 -l 2-4  \
@@ -188,7 +188,7 @@  Test Case 4: pvp 2 queues test with split ring inorder non-mergeable path
 Test Case 5: pvp 2 queues test with split ring mergeable path
 =============================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -n 4 -l 2-4  \
@@ -221,7 +221,7 @@  Test Case 5: pvp 2 queues test with split ring mergeable path
 Test Case 6: pvp 2 queues test with split ring non-mergeable path
 =================================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -n 4 -l 2-4  \
@@ -254,7 +254,7 @@  Test Case 6: pvp 2 queues test with split ring non-mergeable path
 Test Case 7: pvp 2 queues test with split ring vector_rx path
 =============================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -n 4 -l 2-4  \
@@ -287,7 +287,7 @@  Test Case 7: pvp 2 queues test with split ring vector_rx path
 Test Case 8: pvp 2 queues test with packed ring inorder mergeable path
 ======================================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -n 4 -l 2-4  \
@@ -320,7 +320,7 @@  Test Case 8: pvp 2 queues test with packed ring inorder mergeable path
 Test Case 9: pvp 2 queues test with packed ring inorder non-mergeable path
 ===========================================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -n 4 -l 2-4  \
@@ -353,7 +353,7 @@  Test Case 9: pvp 2 queues test with packed ring inorder non-mergeable path
 Test Case 10: pvp 2 queues test with packed ring vectorized path
 ================================================================
 
-1. Bind one port to igb_uio, then launch vhost by below command::
+1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
     ./testpmd -n 4 -l 2-4  \
diff --git a/test_plans/vdev_primary_secondary_test_plan.rst b/test_plans/vdev_primary_secondary_test_plan.rst
index a148fcbe..1e6cd2e0 100644
--- a/test_plans/vdev_primary_secondary_test_plan.rst
+++ b/test_plans/vdev_primary_secondary_test_plan.rst
@@ -141,7 +141,7 @@  SW preparation: Change one line of the symmetric_mp sample and rebuild::
     vi ./examples/multi_process/symmetric_mp/main.c
     -.offloads = DEV_RX_OFFLOAD_CHECKSUM,
 
-1. Bind one port to igb_uio, launch testpmd by below command::
+1. Bind one port to vfio-pci, launch testpmd by below command::
 
     ./testpmd -l 1-6 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=2,client=1' --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1'  -- -i --nb-cores=4 --rxq=2 --txq=2 --txd=1024 --rxd=1024
     testpmd>set fwd txonly
@@ -161,10 +161,10 @@  SW preparation: Change one line of the symmetric_mp sample and rebuild::
     -chardev socket,id=char1,path=./vhost-net1,server -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce,queues=2 \
     -device virtio-net-pci,mac=52:54:00:00:00:03,netdev=mynet2,mrg_rxbuf=on,csum=on,mq=on,vectors=15  -vnc :10 -daemonize
 
-3.  Bind virtio port to igb_uio::
+3.  Bind virtio port to vfio-pci::
 
-    ./usertools/dpdk-devbind.py --bind=igb_uio xx:xx.x
-    ./usertools/dpdk-devbind.py --bind=igb_uio xx:xx.x
+    ./usertools/dpdk-devbind.py --bind=vfio-pci xx:xx.x
+    ./usertools/dpdk-devbind.py --bind=vfio-pci xx:xx.x
 
 4. Launch two process by example::
 
@@ -199,10 +199,10 @@  Test Case 2: Virtio-pmd primary and secondary process hotplug test
     -chardev socket,id=char1,path=./vhost-net1,server -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce,queues=2 \
     -device virtio-net-pci,mac=52:54:00:00:00:03,netdev=mynet2,mrg_rxbuf=on,csum=on,mq=on,vectors=15  -vnc :10 -daemonize
 
-3.  Bind virtio port to igb_uio::
+3.  Bind virtio port to vfio-pci::
 
-    ./usertools/dpdk-devbind.py --bind=igb_uio xx:xx.x
-    ./usertools/dpdk-devbind.py --bind=igb_uio xx:xx.x
+    ./usertools/dpdk-devbind.py --bind=vfio-pci xx:xx.x
+    ./usertools/dpdk-devbind.py --bind=vfio-pci xx:xx.x
 
 4. Start sample code as primary process::
 
diff --git a/test_plans/vhost_cbdma_test_plan.rst b/test_plans/vhost_cbdma_test_plan.rst
index 7fe74f12..3d0e518a 100644
--- a/test_plans/vhost_cbdma_test_plan.rst
+++ b/test_plans/vhost_cbdma_test_plan.rst
@@ -69,7 +69,7 @@  Packet pipeline:
 ================
 TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
-1. Bind one cbdma port and one nic port to igb_uio, then launch vhost by below command::
+1. Bind one cbdma port and one nic port to vfio-pci, then launch vhost by below command::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0]' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
@@ -127,7 +127,7 @@  TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 Test Case 2: Split ring dynamic queue number test for DMA-accelerated vhost Tx operations
 =========================================================================================
 
-1. Bind 8 cbdma channels and one nic port to igb_uio, then launch vhost by below command::
+1. Bind 8 cbdma channels and one nic port to vfio-pci, then launch vhost by below command::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29  \
      --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \
@@ -178,7 +178,7 @@  Packet pipeline:
 ================
 TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
-1. Bind one cbdma port and one nic port to igb_uio, then launch vhost by below command::
+1. Bind one cbdma port and one nic port to vfio-pci, then launch vhost by below command::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0]' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
@@ -245,7 +245,7 @@  TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 Test Case 4: Packed ring dynamic queue number test for DMA-accelerated vhost Tx operations
 ==========================================================================================
 
-1. Bind 8 cbdma channels and one nic port to igb_uio, then launch vhost by below command::
+1. Bind 8 cbdma channels and one nic port to vfio-pci, then launch vhost by below command::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 28-29  \
      --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=8,client=1' \
@@ -292,7 +292,7 @@  Test Case 4: Packed ring dynamic queue number test for DMA-accelerated vhost Tx
 Test Case 5: Compare PVP split ring performance between CPU copy, CBDMA copy and Sync copy
 ==========================================================================================
 
-1. Bind one cbdma port and one nic port which on same numa to igb_uio, then launch vhost by below command::
+1. Bind one cbdma port and one nic port which on same numa to vfio-pci, then launch vhost by below command::
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,client=1,dmas=[txq0@00:01.0]' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
diff --git a/test_plans/vhost_event_idx_interrupt_test_plan.rst b/test_plans/vhost_event_idx_interrupt_test_plan.rst
index 0cf4834f..111de954 100644
--- a/test_plans/vhost_event_idx_interrupt_test_plan.rst
+++ b/test_plans/vhost_event_idx_interrupt_test_plan.rst
@@ -399,7 +399,7 @@  Test Case 6: wake up packed ring vhost-user cores by multi virtio-net in VMs wit
 Test Case 7: wake up split ring vhost-user cores with event idx interrupt mode and cbdma enabled 16 queues test
 ===============================================================================================================
 
-1. Bind 16 cbdma ports to igb_uio driver, then launch l3fwd-power example app with client mode::
+1. Bind 16 cbdma ports to vfio-pci driver, then launch l3fwd-power example app with client mode::
 
     ./l3fwd-power -l 1-16 -n 4 --log-level=9 \
     --vdev 'eth_vhost0,iface=/vhost-net0,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' \
@@ -460,7 +460,7 @@  Test Case 7: wake up split ring vhost-user cores with event idx interrupt mode a
 Test Case 8: wake up split ring vhost-user cores by multi virtio-net in VMs with event idx interrupt mode and cbdma enabled test
 ================================================================================================================================
 
-1. Bind two cbdma ports to igb_uio driver, then launch l3fwd-power example app with client mode::
+1. Bind two cbdma ports to vfio-pci driver, then launch l3fwd-power example app with client mode::
 
     ./l3fwd-power -l 1-2 -n 4 --log-level=9 \
     --vdev 'eth_vhost0,iface=/vhost-net0,queues=1,client=1,dmas=[txq0@00:04.0]' \
@@ -515,7 +515,7 @@  Test Case 8: wake up split ring vhost-user cores by multi virtio-net in VMs with
 Test Case 9: wake up packed ring vhost-user cores with event idx interrupt mode and cbdma enabled 16 queues test
 ================================================================================================================
 
-1. Bind 16 cbdma ports to igb_uio driver, then launch l3fwd-power example app with client mode::
+1. Bind 16 cbdma ports to vfio-pci driver, then launch l3fwd-power example app with client mode::
 
     ./l3fwd-power -l 1-16 -n 4 --log-level=9 \
     --vdev 'eth_vhost0,iface=/vhost-net0,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' \
@@ -576,7 +576,7 @@  Test Case 9: wake up packed ring vhost-user cores with event idx interrupt mode
 Test Case 10: wake up packed ring vhost-user cores by multi virtio-net in VMs with event idx interrupt mode and cbdma enabled test
 ==================================================================================================================================
 
-1. Bind two cbdma ports to igb_uio driver, then launch l3fwd-power example app with client mode::
+1. Bind two cbdma ports to vfio-pci driver, then launch l3fwd-power example app with client mode::
 
     ./l3fwd-power -l 1-2 -n 4 --log-level=9 \
     --vdev 'eth_vhost0,iface=/vhost-net0,queues=1,client=1,dmas=[txq0@00:04.0]' \
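
[Not part of the patch; an aside for reviewers.] The long 16-queue ``dmas=`` lists above (``txq0@80:04.0;...;txq15@00:04.7``) can be generated instead of hand-typed. A minimal shell sketch, assuming the CBDMA channel BDFs ``80:04.0-7`` and ``00:04.0-7`` used throughout this plan:

```shell
# Build the dmas= list for a 16-queue vdev: queues 0-7 map to the
# 80:04.x CBDMA channels, queues 8-15 to the 00:04.x channels.
build_dmas() {
    local out="" q bus dev
    for q in $(seq 0 15); do
        if [ "$q" -lt 8 ]; then bus=80; dev=$q; else bus=00; dev=$((q - 8)); fi
        out="${out}txq${q}@${bus}:04.${dev};"
    done
    printf '%s' "${out%;}"   # drop the trailing semicolon
}

echo "dmas=[$(build_dmas)]"
```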
diff --git a/test_plans/vhost_multi_queue_qemu_test_plan.rst b/test_plans/vhost_multi_queue_qemu_test_plan.rst
index abaf7af6..445848ff 100644
--- a/test_plans/vhost_multi_queue_qemu_test_plan.rst
+++ b/test_plans/vhost_multi_queue_qemu_test_plan.rst
@@ -43,7 +43,7 @@  Test Case: vhost pmd/virtio-pmd PVP 2queues mergeable path performance
 flow: 
 TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
-1. Bind one port to igb_uio, then launch testpmd by below command: 
+1. Bind one port to vfio-pci, then launch testpmd by below command:
     rm -rf vhost-net*
     ./testpmd -c 0xe -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=2' -- \
@@ -62,7 +62,7 @@  TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
     -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 \
     -vnc :2 -daemonize
 
-3. On VM, bind virtio net to igb_uio and run testpmd ::
+3. On VM, bind virtio net to vfio-pci and run testpmd ::
     ./testpmd -c 0x07 -n 3 -- -i \
     --rxq=2 --txq=2 --txqflags=0xf01 --rss-ip --nb-cores=2
     testpmd>set fwd mac
@@ -84,7 +84,7 @@  to RX/TX packets normally.
 flow: 
 TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
-1. Bind one port to igb_uio, then launch testpmd by below command, 
+1. Bind one port to vfio-pci, then launch testpmd by below command,
    ensure the vhost using 2 queues::
 
     rm -rf vhost-net*
@@ -106,7 +106,7 @@  TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
     -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 \
     -vnc :2 -daemonize
 
-3. On VM, bind virtio net to igb_uio and run testpmd,
+3. On VM, bind virtio net to vfio-pci and run testpmd,
    using one queue for testing at first::
  
     ./testpmd -c 0x7 -n 3 -- -i --rxq=1 --txq=1 --tx-offloads=0x0 \
@@ -160,7 +160,7 @@  packets.
 flow: 
 TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
-1. Bind one port to igb_uio, then launch testpmd by below command, 
+1. Bind one port to vfio-pci, then launch testpmd by below command,
    ensure the vhost using 2 queues::
 
     rm -rf vhost-net*
@@ -182,7 +182,7 @@  TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
     -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 \
     -vnc :2 -daemonize
 
-3. On VM, bind virtio net to igb_uio and run testpmd,
+3. On VM, bind virtio net to vfio-pci and run testpmd,
    using one queue for testing at first::
  
     ./testpmd -c 0x7 -n 4 -- -i --rxq=2 --txq=2 \
diff --git a/test_plans/vhost_user_interrupt_test_plan.rst b/test_plans/vhost_user_interrupt_test_plan.rst
index f8d35297..0fb2f3b6 100644
--- a/test_plans/vhost_user_interrupt_test_plan.rst
+++ b/test_plans/vhost_user_interrupt_test_plan.rst
@@ -136,7 +136,7 @@  Test Case5: Wake up split ring vhost-user cores with l3fwd-power sample when mul
     ./testpmd -l 1-5 -n 4 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,server=1,queues=4 -- -i --rxq=4 --txq=4 --rss-ip
 
-2. Bind 4 cbdma ports to igb_uio driver, then launch l3fwd-power with a virtual vhost device::
+2. Bind 4 cbdma ports to vfio-pci driver, then launch l3fwd-power with a virtual vhost device::
 
     ./l3fwd-power -l 9-12 -n 4 --log-level=9 \
     --vdev 'eth_vhost0,iface=/tmp/sock0,queues=4,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3]' -- -p 0x1 --parse-ptype 1 \
@@ -157,7 +157,7 @@  Test Case6: Wake up packed ring vhost-user cores with l3fwd-power sample when mu
     ./testpmd -l 1-5 -n 4 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,server=1,queues=4,packed_vq=1 -- -i --rxq=4 --txq=4 --rss-ip
 
-2. Bind 4 cbdma ports to igb_uio driver, then launch l3fwd-power with a virtual vhost device::
+2. Bind 4 cbdma ports to vfio-pci driver, then launch l3fwd-power with a virtual vhost device::
 
     ./l3fwd-power -l 9-12 -n 4 --log-level=9 \
     --vdev 'eth_vhost0,iface=/tmp/sock0,queues=4,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3]' -- -p 0x1 --parse-ptype 1 \
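
[Not part of the patch; an aside for reviewers.] Binding CBDMA ports and NICs to vfio-pci, as these plans now do, assumes the vfio-pci module is loaded and an IOMMU is available; unlike igb_uio, vfio-pci is an in-kernel module, not a DPDK-built .ko. A typical setup sketch (run as root; BDFs are the ones used in these plans, and the devbind path follows the old layout the plans use):

```shell
# vfio-pci ships with the kernel -- no insmod of a DPDK kmod needed.
modprobe vfio-pci
# On hosts or VMs without a (v)IOMMU, vfio no-iommu mode may be required:
#   echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
./tools/dpdk-devbind.py -b vfio-pci 80:04.0 80:04.1 80:04.2 80:04.3
```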
diff --git a/test_plans/vhost_user_live_migration_test_plan.rst b/test_plans/vhost_user_live_migration_test_plan.rst
index 7ee5fa87..276de3b9 100644
--- a/test_plans/vhost_user_live_migration_test_plan.rst
+++ b/test_plans/vhost_user_live_migration_test_plan.rst
@@ -74,9 +74,9 @@  On host server side:
     host server# mkdir /mnt/huge
     host server# mount -t hugetlbfs hugetlbfs /mnt/huge
 
-2. Bind host port to igb_uio and start testpmd with vhost port::
+2. Bind host port to vfio-pci and start testpmd with vhost port::
 
-    host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
+    host server# ./tools/dpdk-devbind.py -b vfio-pci 82:00.1
     host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
     host server# testpmd>start
 
@@ -95,11 +95,11 @@  On host server side:
 
 On the backup server, run the vhost testpmd on the host and launch VM:
 
-4. Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host::
+4. Set huge page, bind one port to vfio-pci and run testpmd on the backup server, the command is very similar to host::
 
     backup server # mkdir /mnt/huge
     backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
-    backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
+    backup server # ./tools/dpdk-devbind.py -b vfio-pci 82:00.0
     backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
     backup server # testpmd>start
 
@@ -127,8 +127,8 @@  On the backup server, run the vhost testpmd on the host and launch VM:
     host VM# cd /root/<dpdk_folder>
     host VM# make -j 110 install T=x86_64-native-linuxapp-gcc
     host VM# modprobe uio
-    host VM# insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
-    host VM# ./tools/dpdk_nic_bind.py --bind=igb_uio 00:03.0
+    host VM# modprobe vfio-pci
+    host VM# ./tools/dpdk_nic_bind.py --bind=vfio-pci 00:03.0
     host VM# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
     host VM# screen -S vm
     host VM# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i
@@ -174,9 +174,9 @@  On host server side:
     host server# mkdir /mnt/huge
     host server# mount -t hugetlbfs hugetlbfs /mnt/huge
 
-2. Bind host port to igb_uio and start testpmd with vhost port,note not start vhost port before launching qemu::
+2. Bind host port to vfio-pci and start testpmd with vhost port; note: do not start the vhost port before launching QEMU::
 
-    host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
+    host server# ./tools/dpdk-devbind.py -b vfio-pci 82:00.1
     host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
 
 3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port::
@@ -194,11 +194,11 @@  On host server side:
 
 On the backup server, run the vhost testpmd on the host and launch VM:
 
-4. Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host::
+4. Set huge page, bind one port to vfio-pci and run testpmd on the backup server, the command is very similar to host::
 
     backup server # mkdir /mnt/huge
     backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
-    backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
+    backup server # ./tools/dpdk-devbind.py -b vfio-pci 82:00.0
     backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
 
 5. Launch VM on the backup server, the script is similar to host, need add " -incoming tcp:0:4444 " for live migration and make sure the VM image is the NFS mounted folder, VM image is the exact one on host server::
@@ -225,8 +225,8 @@  On the backup server, run the vhost testpmd on the host and launch VM:
     host VM# cd /root/<dpdk_folder>
     host VM# make -j 110 install T=x86_64-native-linuxapp-gcc
     host VM# modprobe uio
-    host VM# insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
-    host VM# ./tools/dpdk_nic_bind.py --bind=igb_uio 00:03.0
+    host VM# modprobe vfio-pci
+    host VM# ./tools/dpdk_nic_bind.py --bind=vfio-pci 00:03.0
     host VM# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
     host VM# screen -S vm
     host VM# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i
@@ -274,9 +274,9 @@  On host server side:
     host server# mkdir /mnt/huge
     host server# mount -t hugetlbfs hugetlbfs /mnt/huge
 
-2. Bind host port to igb_uio and start testpmd with vhost port::
+2. Bind host port to vfio-pci and start testpmd with vhost port::
 
-    host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
+    host server# ./tools/dpdk-devbind.py -b vfio-pci 82:00.1
     host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
     host server# testpmd>start
 
@@ -295,11 +295,11 @@  On host server side:
 
 On the backup server, run the vhost testpmd on the host and launch VM:
 
-4. Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host::
+4. Set huge page, bind one port to vfio-pci and run testpmd on the backup server, the command is very similar to host::
 
     backup server # mkdir /mnt/huge
     backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
-    backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
+    backup server # ./tools/dpdk-devbind.py -b vfio-pci 82:00.0
     backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
     backup server # testpmd>start
 
@@ -362,9 +362,9 @@  On host server side:
     host server# mkdir /mnt/huge
     host server# mount -t hugetlbfs hugetlbfs /mnt/huge
 
-2. Bind host port to igb_uio and start testpmd with vhost port::
+2. Bind host port to vfio-pci and start testpmd with vhost port::
 
-    host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
+    host server# ./tools/dpdk-devbind.py -b vfio-pci 82:00.1
     host server# ./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4
     host server# testpmd>start
 
@@ -383,11 +383,11 @@  On host server side:
 
 On the backup server, run the vhost testpmd on the host and launch VM:
 
-4. Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host::
+4. Set huge page, bind one port to vfio-pci and run testpmd on the backup server, the command is very similar to host::
 
     backup server # mkdir /mnt/huge
     backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
-    backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
+    backup server # ./tools/dpdk-devbind.py -b vfio-pci 82:00.0
     backup server#./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4
     backup server # testpmd>start
 
@@ -454,9 +454,9 @@  On host server side:
     host server# mkdir /mnt/huge
     host server# mount -t hugetlbfs hugetlbfs /mnt/huge
 
-2. Bind host port to igb_uio and start testpmd with vhost port::
+2. Bind host port to vfio-pci and start testpmd with vhost port::
 
-    host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
+    host server# ./tools/dpdk-devbind.py -b vfio-pci 82:00.1
     host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
     host server# testpmd>start
 
@@ -475,11 +475,11 @@  On host server side:
 
 On the backup server, run the vhost testpmd on the host and launch VM:
 
-4. Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host::
+4. Set huge page, bind one port to vfio-pci and run testpmd on the backup server, the command is very similar to host::
 
     backup server # mkdir /mnt/huge
     backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
-    backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
+    backup server # ./tools/dpdk-devbind.py -b vfio-pci 82:00.0
     backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
     backup server # testpmd>start
 
@@ -507,8 +507,8 @@  On the backup server, run the vhost testpmd on the host and launch VM:
     host VM# cd /root/<dpdk_folder>
     host VM# make -j 110 install T=x86_64-native-linuxapp-gcc
     host VM# modprobe uio
-    host VM# insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
-    host VM# ./tools/dpdk_nic_bind.py --bind=igb_uio 00:03.0
+    host VM# modprobe vfio-pci
+    host VM# ./tools/dpdk_nic_bind.py --bind=vfio-pci 00:03.0
     host VM# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
     host VM# screen -S vm
     host VM# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i
@@ -554,9 +554,9 @@  On host server side:
     host server# mkdir /mnt/huge
     host server# mount -t hugetlbfs hugetlbfs /mnt/huge
 
-2. Bind host port to igb_uio and start testpmd with vhost port,note not start vhost port before launching qemu::
+2. Bind host port to vfio-pci and start testpmd with vhost port; note: do not start the vhost port before launching QEMU::
 
-    host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
+    host server# ./tools/dpdk-devbind.py -b vfio-pci 82:00.1
     host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
 
 3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port::
@@ -574,11 +574,11 @@  On host server side:
 
 On the backup server, run the vhost testpmd on the host and launch VM:
 
-4. Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host::
+4. Set huge page, bind one port to vfio-pci and run testpmd on the backup server, the command is very similar to host::
 
     backup server # mkdir /mnt/huge
     backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
-    backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
+    backup server # ./tools/dpdk-devbind.py -b vfio-pci 82:00.0
     backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
 
 5. Launch VM on the backup server, the script is similar to host, need add " -incoming tcp:0:4444 " for live migration and make sure the VM image is the NFS mounted folder, VM image is the exact one on host server::
@@ -605,8 +605,8 @@  On the backup server, run the vhost testpmd on the host and launch VM:
     host VM# cd /root/<dpdk_folder>
     host VM# make -j 110 install T=x86_64-native-linuxapp-gcc
     host VM# modprobe uio
-    host VM# insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
-    host VM# ./tools/dpdk_nic_bind.py --bind=igb_uio 00:03.0
+    host VM# modprobe vfio-pci
+    host VM# ./tools/dpdk_nic_bind.py --bind=vfio-pci 00:03.0
     host VM# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
     host VM# screen -S vm
     host VM# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i
@@ -654,9 +654,9 @@  On host server side:
     host server# mkdir /mnt/huge
     host server# mount -t hugetlbfs hugetlbfs /mnt/huge
 
-2. Bind host port to igb_uio and start testpmd with vhost port::
+2. Bind host port to vfio-pci and start testpmd with vhost port::
 
-    host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
+    host server# ./tools/dpdk-devbind.py -b vfio-pci 82:00.1
     host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
     host server# testpmd>start
 
@@ -675,11 +675,11 @@  On host server side:
 
 On the backup server, run the vhost testpmd on the host and launch VM:
 
-4. Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host::
+4. Set huge page, bind one port to vfio-pci and run testpmd on the backup server, the command is very similar to host::
 
     backup server # mkdir /mnt/huge
     backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
-    backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
+    backup server # ./tools/dpdk-devbind.py -b vfio-pci 82:00.0
     backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
     backup server # testpmd>start
 
@@ -742,9 +742,9 @@  On host server side:
     host server# mkdir /mnt/huge
     host server# mount -t hugetlbfs hugetlbfs /mnt/huge
 
-2. Bind host port to igb_uio and start testpmd with vhost port::
+2. Bind host port to vfio-pci and start testpmd with vhost port::
 
-    host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
+    host server# ./tools/dpdk-devbind.py -b vfio-pci 82:00.1
     host server# ./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4
     host server# testpmd>start
 
@@ -763,11 +763,11 @@  On host server side:
 
 On the backup server, run the vhost testpmd on the host and launch VM:
 
-4. Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host::
+4. Set huge page, bind one port to vfio-pci and run testpmd on the backup server, the command is very similar to host::
 
     backup server # mkdir /mnt/huge
     backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
-    backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
+    backup server # ./tools/dpdk-devbind.py -b vfio-pci 82:00.0
     backup server#./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4
     backup server # testpmd>start
 
diff --git a/test_plans/vhost_virtio_pmd_interrupt_test_plan.rst b/test_plans/vhost_virtio_pmd_interrupt_test_plan.rst
index 77c1946c..9a108b3d 100644
--- a/test_plans/vhost_virtio_pmd_interrupt_test_plan.rst
+++ b/test_plans/vhost_virtio_pmd_interrupt_test_plan.rst
@@ -52,7 +52,7 @@  TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 Test Case 1: Basic virtio interrupt test with 4 queues
 =======================================================
 
-1. Bind one NIC port to igb_uio, then launch testpmd by below command::
+1. Bind one NIC port to vfio-pci, then launch testpmd by below command::
 
     rm -rf vhost-net*
     ./testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4 --rss-ip
@@ -88,7 +88,7 @@  Test Case 1: Basic virtio interrupt test with 4 queues
 Test Case 2: Basic virtio interrupt test with 16 queues
 =======================================================
 
-1. Bind one NIC port to igb_uio, then launch testpmd by below command::
+1. Bind one NIC port to vfio-pci, then launch testpmd by below command::
 
     rm -rf vhost-net*
     ./testpmd -c 0x1ffff -n 4 --vdev 'eth_vhost0,iface=vhost-net,queues=16' -- -i --nb-cores=16 --rxq=16 --txq=16 --rss-ip
@@ -124,7 +124,7 @@  Test Case 2: Basic virtio interrupt test with 16 queues
 Test Case 3: Basic virtio-1.0 interrupt test with 4 queues
 ==========================================================
 
-1. Bind one NIC port to igb_uio, then launch testpmd by below command::
+1. Bind one NIC port to vfio-pci, then launch testpmd by below command::
 
     rm -rf vhost-net*
     ./testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4 --rss-ip
@@ -160,7 +160,7 @@  Test Case 3: Basic virtio-1.0 interrupt test with 4 queues
 Test Case 4: Packed ring virtio interrupt test with 16 queues
 =============================================================
 
-1. Bind one NIC port to igb_uio, then launch testpmd by below command::
+1. Bind one NIC port to vfio-pci, then launch testpmd by below command::
 
     rm -rf vhost-net*
     ./testpmd -c 0x1ffff -n 4 --vdev 'eth_vhost0,iface=vhost-net,queues=16' -- -i --nb-cores=16 --rxq=16 --txq=16 --rss-ip
@@ -196,7 +196,7 @@  Test Case 4: Packed ring virtio interrupt test with 16 queues
 Test Case 5: Basic virtio interrupt test with 16 queues and cbdma enabled
 =========================================================================
 
-1. Bind 16 cbdma channels and one NIC port to igb_uio, then launch testpmd by below command::
+1. Bind 16 cbdma channels and one NIC port to vfio-pci, then launch testpmd by below command::
 
     ./testpmd -c 0x1ffff -n 4 --vdev 'eth_vhost0,iface=vhost-net,queues=16,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' -- -i --nb-cores=16 --rxq=16 --txq=16 --rss-ip
 
@@ -231,7 +231,7 @@  Test Case 5: Basic virtio interrupt test with 16 queues and cbdma enabled
 Test Case 6: Basic virtio-1.0 interrupt test with 4 queues and cbdma enabled
 ============================================================================
 
-1. Bind four cbdma channels and one NIC port to igb_uio, then launch testpmd by below command::
+1. Bind four cbdma channels and one NIC port to vfio-pci, then launch testpmd by below command::
 
     ./testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=4,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3]' -- -i --nb-cores=4 --rxq=4 --txq=4 --rss-ip
 
@@ -266,7 +266,7 @@  Test Case 6: Basic virtio-1.0 interrupt test with 4 queues and cbdma enabled
 Test Case 7: Packed ring virtio interrupt test with 16 queues and cbdma enabled
 ===============================================================================
 
-1. Bind 16 cbdma channels ports and one NIC port to igb_uio, then launch testpmd by below command::
+1. Bind 16 cbdma channels and one NIC port to vfio-pci, then launch testpmd by below command::
 
     ./testpmd -c 0x1ffff -n 4 --vdev 'eth_vhost0,iface=vhost-net,queues=16,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' -- -i --nb-cores=16 --rxq=16 --txq=16 --rss-ip
 
diff --git a/test_plans/vhost_virtio_user_interrupt_test_plan.rst b/test_plans/vhost_virtio_user_interrupt_test_plan.rst
index e70ec91f..239b1671 100644
--- a/test_plans/vhost_virtio_user_interrupt_test_plan.rst
+++ b/test_plans/vhost_virtio_user_interrupt_test_plan.rst
@@ -44,7 +44,7 @@  Test Case1: Split ring virtio-user interrupt test with vhost-user as backend
 
 flow: TG --> NIC --> Vhost --> Virtio
 
-1. Bind one NIC port to igb_uio, launch testpmd with a virtual vhost device as backend::
+1. Bind one NIC port to vfio-pci, launch testpmd with a virtual vhost device as backend::
 
     ./testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i  --rxq=1 --txq=1
     testpmd>start
@@ -114,7 +114,7 @@  Test Case4: Packed ring virtio-user interrupt test with vhost-user as backend
 
 flow: TG --> NIC --> Vhost --> Virtio
 
-1. Bind one NIC port to igb_uio, launch testpmd with a virtual vhost device as backend::
+1. Bind one NIC port to vfio-pci, launch testpmd with a virtual vhost device as backend::
 
     ./testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i  --rxq=1 --txq=1
     testpmd>start
@@ -184,7 +184,7 @@  Test Case7: LSC event between vhost-user and virtio-user with split ring and cbd
 
 flow: Vhost <--> Virtio
 
-1. Bind one cbdma port to igb_uio driver, then start vhost-user side::
+1. Bind one cbdma port to vfio-pci driver, then start vhost-user side::
 
     ./testpmd -c 0x3000 -n 4 -a 00:04.0 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0,dmas=[txq0@00:04.0]' -- -i
     testpmd>set fwd mac
@@ -211,7 +211,7 @@  Test Case8: Split ring virtio-user interrupt test with vhost-user as backend and
 
 flow: TG --> NIC --> Vhost --> Virtio
 
-1. Bind one cbdma port and one NIC port to igb_uio, launch testpmd with a virtual vhost device as backend::
+1. Bind one cbdma port and one NIC port to vfio-pci, launch testpmd with a virtual vhost device as backend::
 
     ./testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0@00:04.0]' -- -i  --rxq=1 --txq=1
     testpmd>start
@@ -232,7 +232,7 @@  Test Case9: LSC event between vhost-user and virtio-user with packed ring and cb
 
 flow: Vhost <--> Virtio
 
-1. Bind one cbdma port to igb_uio driver, then start vhost-user side::
+1. Bind one cbdma port to vfio-pci driver, then start vhost-user side::
 
     ./testpmd -c 0x3000 -n 4 -a 00:04.0 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0,dmas=[txq0@00:04.0]' -- -i
     testpmd>set fwd mac
@@ -259,7 +259,7 @@  Test Case10: Packed ring virtio-user interrupt test with vhost-user as backend a
 
 flow: TG --> NIC --> Vhost --> Virtio
 
-1. Bind one cbdma port and one NIC port to igb_uio, launch testpmd with a virtual vhost device as backend::
+1. Bind one cbdma port and one NIC port to vfio-pci, launch testpmd with a virtual vhost device as backend::
 
     ./testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=1,dmas=[txq0@00:04.0]' -- -i  --rxq=1 --txq=1
     testpmd>start
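
[Not part of the patch; an aside for reviewers.] Many steps in these plans begin with ``rm -rf vhost-net*`` because vhost-user creates a unix socket at the ``iface=`` path, and a stale socket left by a previous run prevents the new vhost port from binding. An equivalent guarded cleanup, sketched as a function (name is illustrative):

```shell
# Remove stale vhost-user sockets (iface=vhost-net*) in the
# current directory before relaunching testpmd.
cleanup_vhost_sockets() {
    local f
    for f in vhost-net*; do
        [ -e "$f" ] && rm -rf "$f"
    done
    return 0
}

cleanup_vhost_sockets
```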
diff --git a/test_plans/virtio_event_idx_interrupt_test_plan.rst b/test_plans/virtio_event_idx_interrupt_test_plan.rst
index 064aa10e..4ffb4d20 100644
--- a/test_plans/virtio_event_idx_interrupt_test_plan.rst
+++ b/test_plans/virtio_event_idx_interrupt_test_plan.rst
@@ -49,7 +49,7 @@  TG --> NIC --> Vhost-user --> Virtio-net
 Test Case 1: Compare interrupt times with and without split ring virtio event idx enabled
 =========================================================================================
 
-1. Bind one nic port to igb_uio, then launch the vhost sample by below commands::
+1. Bind one nic port to vfio-pci, then launch the vhost sample by below commands::
 
     rm -rf vhost-net*
     ./testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i
@@ -82,7 +82,7 @@  Test Case 1: Compare interrupt times with and without split ring virtio event id
 Test Case 2: Split ring virtio-pci driver reload test
 =====================================================
 
-1. Bind one nic port to igb_uio, then launch the vhost sample by below commands::
+1. Bind one nic port to vfio-pci, then launch the vhost sample by below commands::
 
     rm -rf vhost-net*
     ./testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
@@ -120,7 +120,7 @@  Test Case 2: Split ring virtio-pci driver reload test
 Test Case 3: Wake up split ring virtio-net cores with event idx interrupt mode 16 queues test
 =============================================================================================
 
-1. Bind one nic port to igb_uio, then launch the vhost sample by below commands::
+1. Bind one nic port to vfio-pci, then launch the vhost sample by below commands::
 
     rm -rf vhost-net*
     ./testpmd -l 1-17 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=16' -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16
@@ -155,7 +155,7 @@  Test Case 3: Wake up split ring virtio-net cores with event idx interrupt mode 1
 Test Case 4: Compare interrupt times with and without packed ring virtio event idx enabled
 ==========================================================================================
 
-1. Bind one nic port to igb_uio, then launch the vhost sample by below commands::
+1. Bind one nic port to vfio-pci, then launch the vhost sample by below commands::
 
     rm -rf vhost-net*
     ./testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i
@@ -188,7 +188,7 @@  Test Case 4: Compare interrupt times with and without packed ring virtio event i
 Test Case 5: Packed ring virtio-pci driver reload test
 ======================================================
 
-1. Bind one nic port to igb_uio, then launch the vhost sample by below commands::
+1. Bind one nic port to vfio-pci, then launch the vhost sample by below commands::
 
     rm -rf vhost-net*
     ./testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
@@ -226,7 +226,7 @@  Test Case 5: Packed ring virtio-pci driver reload test
 Test Case 6: Wake up packed ring virtio-net cores with event idx interrupt mode 16 queues test
 ==============================================================================================
 
-1. Bind one nic port to igb_uio, then launch the vhost sample by below commands::
+1. Bind one nic port to vfio-pci, then launch the vhost sample by below commands::
 
     rm -rf vhost-net*
     ./testpmd -l 1-17 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=16' -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16
@@ -261,7 +261,7 @@  Test Case 6: Wake up packed ring virtio-net cores with event idx interrupt mode
 Test Case 7: Split ring virtio-pci driver reload test with CBDMA enabled
 ========================================================================
 
-1. Bind one nic port and one cbdma channel to igb_uio, then launch the vhost sample by below commands::
+1. Bind one nic port and one cbdma channel to vfio-pci, then launch the vhost sample by below commands::
 
     rm -rf vhost-net*
     ./testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[txq0@00:04.0]' -- -i --nb-cores=1 --txd=1024 --rxd=1024
@@ -299,7 +299,7 @@  Test Case 7: Split ring virtio-pci driver reload test with CBDMA enabled
 Test Case 8: Wake up split ring virtio-net cores with event idx interrupt mode and cbdma enabled 16 queues test
 ================================================================================================================
 
-1. Bind one nic port and 16 cbdma channels to igb_uio, then launch the vhost sample by below commands::
+1. Bind one nic port and 16 cbdma channels to vfio-pci, then launch the vhost sample by below commands::
 
     rm -rf vhost-net*
     ./testpmd -l 1-17 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16
@@ -334,7 +334,7 @@  Test Case 8: Wake up split ring virtio-net cores with event idx interrupt mode a
 Test Case 9: Packed ring virtio-pci driver reload test with CBDMA enabled
 =========================================================================
 
-1. Bind one nic port and one cbdma channel to igb_uio, then launch the vhost sample by below commands::
+1. Bind one nic port and one cbdma channel to vfio-pci, then launch the vhost sample by below commands::
 
     rm -rf vhost-net*
     ./testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1,dmas=[txq0@00:04.0]' -- -i --nb-cores=1 --txd=1024 --rxd=1024
@@ -372,7 +372,7 @@  Test Case 9: Packed ring virtio-pci driver reload test with CBDMA enabled
 Test Case 10: Wake up packed ring virtio-net cores with event idx interrupt mode and cbdma enabled 16 queues test
 =================================================================================================================
 
-1. Bind one nic port and 16 cbdma channels to igb_uio, then launch the vhost sample by below commands::
+1. Bind one nic port and 16 cbdma channels to vfio-pci, then launch the vhost sample by below commands::
 
     rm -rf vhost-net*
     ./testpmd -l 1-17 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7]' -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16
diff --git a/test_plans/virtio_pvp_regression_test_plan.rst b/test_plans/virtio_pvp_regression_test_plan.rst
index fb45c561..476b4274 100644
--- a/test_plans/virtio_pvp_regression_test_plan.rst
+++ b/test_plans/virtio_pvp_regression_test_plan.rst
@@ -49,7 +49,7 @@  TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 Test Case 1: pvp test with virtio 0.95 mergeable path
 =====================================================
 
-1. Bind one port to igb_uio, then launch testpmd by below command::
+1. Bind one port to vfio-pci, then launch testpmd by below command::
 
     rm -rf vhost-net*
     ./testpmd -l 1-3 -n 4 \
@@ -70,7 +70,7 @@  Test Case 1: pvp test with virtio 0.95 mergeable path
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,mq=on,vectors=15 \
     -vnc :10
 
-3. On VM, bind virtio net to igb_uio and run testpmd without tx-offloads::
+3. On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads::
 
     ./testpmd -c 0x7 -n 4 -- -i --enable-hw-vlan-strip\
     --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
@@ -88,7 +88,7 @@  Test Case 1: pvp test with virtio 0.95 mergeable path
 Test Case 2: pvp test with virtio 0.95 non-mergeable path
 =========================================================
 
-1. Bind one port to igb_uio, then launch testpmd by below command::
+1. Bind one port to vfio-pci, then launch testpmd by below command::
 
     rm -rf vhost-net*
     ./testpmd -l 1-3 -n 4 \
@@ -109,7 +109,7 @@  Test Case 2: pvp test with virtio 0.95 non-mergeable path
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024,mq=on,vectors=15 \
     -vnc :10
 
-3. On VM, bind virtio net to igb_uio and run testpmd without tx-offloads::
+3. On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads::
 
     ./testpmd -c 0x7 -n 4 -- -i --enable-hw-vlan-strip\
     --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
@@ -127,7 +127,7 @@  Test Case 2: pvp test with virtio 0.95 non-mergeable path
 Test Case 3: pvp test with virtio 0.95 vector_rx path
 =====================================================
 
-1. Bind one port to igb_uio, then launch testpmd by below command::
+1. Bind one port to vfio-pci, then launch testpmd by below command::
 
     rm -rf vhost-net*
     ./testpmd -l 1-3 -n 4 \
@@ -148,7 +148,7 @@  Test Case 3: pvp test with virtio 0.95 vector_rx path
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024,mq=on,vectors=15 \
     -vnc :10
 
-3. On VM, bind virtio net to igb_uio and run testpmd without tx-offloads, [0000:xx.00] is [Bus,Device,Function] of virtio-net::
+3. On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads, [0000:xx.00] is [Bus,Device,Function] of virtio-net::
 
     ./testpmd -c 0x7 -n 3 -a 0000:xx.00,vectorized -- -i \
     --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
@@ -166,7 +166,7 @@  Test Case 3: pvp test with virtio 0.95 vector_rx path
 Test Case 4: pvp test with virtio 1.0 mergeable path
 ====================================================
 
-1. Bind one port to igb_uio, then launch testpmd by below command::
+1. Bind one port to vfio-pci, then launch testpmd by below command::
 
     rm -rf vhost-net*
     ./testpmd -l 1-3 -n 4 \
@@ -187,7 +187,7 @@  Test Case 4: pvp test with virtio 1.0 mergeable path
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,mq=on,vectors=15 \
     -vnc :10
 
-3. On VM, bind virtio net to igb_uio and run testpmd without tx-offloads::
+3. On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads::
 
     ./testpmd -c 0x7 -n 4 -- -i --enable-hw-vlan-strip\
     --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
@@ -205,7 +205,7 @@  Test Case 4: pvp test with virtio 1.0 mergeable path
 Test Case 5: pvp test with virtio 1.0 non-mergeable path
 ========================================================
 
-1. Bind one port to igb_uio, then launch testpmd by below command::
+1. Bind one port to vfio-pci, then launch testpmd by below command::
 
     rm -rf vhost-net*
     ./testpmd -l 1-3 -n 4 \
@@ -226,7 +226,7 @@  Test Case 5: pvp test with virtio 1.0 non-mergeable path
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024,mq=on,vectors=15 \
     -vnc :10
 
-3. On VM, bind virtio net to igb_uio and run testpmd without tx-offloads::
+3. On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads::
 
     ./testpmd -c 0x7 -n 4 -- -i --enable-hw-vlan-strip\
     --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
@@ -244,7 +244,7 @@  Test Case 5: pvp test with virtio 1.0 non-mergeable path
 Test Case 6: pvp test with virtio 1.0 vector_rx path
 ====================================================
 
-1. Bind one port to igb_uio, then launch testpmd by below command::
+1. Bind one port to vfio-pci, then launch testpmd by below command::
 
     rm -rf vhost-net*
     ./testpmd -l 1-3 -n 4 \
@@ -265,7 +265,7 @@  Test Case 6: pvp test with virtio 1.0 vector_rx path
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024,mq=on,vectors=15 \
     -vnc :10
 
-3. On VM, bind virtio net to igb_uio and run testpmd without tx-offloads, [0000:xx.00] is [Bus,Device,Function] of virtio-net::
+3. On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads, [0000:xx.00] is [Bus,Device,Function] of virtio-net::
 
     ./testpmd -c 0x7 -n 3 -a 0000:xx.00,vectorized -- -i \
     --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
@@ -283,7 +283,7 @@  Test Case 6: pvp test with virtio 1.0 vector_rx path
 Test Case 7: pvp test with virtio 1.1 mergeable path
 ====================================================
 
-1. Bind one port to igb_uio, then launch testpmd by below command::
+1. Bind one port to vfio-pci, then launch testpmd by below command::
 
     rm -rf vhost-net*
     ./testpmd -l 1-3 -n 4 \
@@ -304,7 +304,7 @@  Test Case 7: pvp test with virtio 1.1 mergeable path
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=2 \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,mq=on,vectors=15,packed=on -vnc :10
 
-3. On VM, bind virtio net to igb_uio and run testpmd without tx-offloads::
+3. On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads::
 
     ./testpmd -c 0x7 -n 4 -- -i --enable-hw-vlan-strip \
     --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
@@ -322,7 +322,7 @@  Test Case 7: pvp test with virtio 1.1 mergeable path
 Test Case 8: pvp test with virtio 1.1 non-mergeable path
 =========================================================
 
-1. Bind one port to igb_uio, then launch testpmd by below command::
+1. Bind one port to vfio-pci, then launch testpmd by below command::
 
     rm -rf vhost-net*
     ./testpmd -l 1-3 -n 4 \
@@ -343,7 +343,7 @@  Test Case 8: pvp test with virtio 1.1 non-mergeable path
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=2 \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024,mq=on,vectors=15,packed=on -vnc :10
 
-3. On VM, bind virtio net to igb_uio and run testpmd without tx-offloads::
+3. On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads::
 
     ./testpmd -c 0x7 -n 4 -- -i --enable-hw-vlan-strip \
     --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
diff --git a/test_plans/virtio_user_as_exceptional_path_test_plan.rst b/test_plans/virtio_user_as_exceptional_path_test_plan.rst
index f04271fa..2dffa877 100644
--- a/test_plans/virtio_user_as_exceptional_path_test_plan.rst
+++ b/test_plans/virtio_user_as_exceptional_path_test_plan.rst
@@ -71,9 +71,9 @@  Flow:tap0-->vhost-net-->virtio_user-->nic0-->nic1
 
     modprobe vhost-net
 
-3. Bind nic0 to igb_uio and launch the virtio_user with testpmd::
+3. Bind nic0 to vfio-pci and launch the virtio_user with testpmd::
 
-    ./dpdk-devbind.py -b igb_uio xx:xx.x        # xx:xx.x is the pci addr of nic0
+    ./dpdk-devbind.py -b vfio-pci xx:xx.x        # xx:xx.x is the pci addr of nic0
     ./testpmd -c 0xc0000 -n 4 --file-prefix=test2 \
     --vdev=virtio_user0,mac=00:01:02:03:04:05,path=/dev/vhost-net,queue_size=1024 -- -i --rxd=1024 --txd=1024
     testpmd>set fwd csum
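Since `xx:xx.x` above is a placeholder for the real PCI address of nic0, a quick sanity check can catch a malformed BDF before invoking `dpdk-devbind.py`. A sketch (the example address `0000:03:00.0` is hypothetical):

```shell
# Sketch: validate the PCI address substituted for the xx:xx.x placeholder.
# BDFs look like [domain:]bus:device.function in hex; domain is optional.
addr="0000:03:00.0"   # hypothetical example address
if echo "$addr" | grep -Eq '^([0-9a-fA-F]{4}:)?[0-9a-fA-F]{2}:[0-9a-fA-F]{2}\.[0-7]$'; then
  echo "valid BDF: $addr"
else
  echo "invalid BDF: $addr"
fi
```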
@@ -123,9 +123,9 @@  Flow: tap0<-->vhost-net<-->virtio_user<-->nic0<-->TG
 
     ufw disable
 
-2. Bind the physical port to igb_uio, launch testpmd with one queue for virtio_user::
+2. Bind the physical port to vfio-pci, launch testpmd with one queue for virtio_user::
 
-    ./dpdk-devbind.py -b igb_uio xx:xx.x        # xx:xx.x is the pci addr of nic0
+    ./dpdk-devbind.py -b vfio-pci xx:xx.x        # xx:xx.x is the pci addr of nic0
     ./testpmd -l 1-2 -n 4  --file-prefix=test2 --vdev=virtio_user0,mac=00:01:02:03:04:05,path=/dev/vhost-net,queue_size=1024,queues=1 -- -i --rxd=1024 --txd=1024
 
 3. Check if there is a tap device generated::
@@ -153,9 +153,9 @@  Flow: tap0<-->vhost-net<-->virtio_user<-->nic0<-->TG
 
     ufw disable
 
-2. Bind the physical port to igb_uio, launch testpmd with two queues for virtio_user::
+2. Bind the physical port to vfio-pci, launch testpmd with two queues for virtio_user::
 
-    ./dpdk-devbind.py -b igb_uio xx:xx.x        # xx:xx.x is the pci addr of nic0
+    ./dpdk-devbind.py -b vfio-pci xx:xx.x        # xx:xx.x is the pci addr of nic0
     ./testpmd -l 1-2 -n 4  --file-prefix=test2 --vdev=virtio_user0,mac=00:01:02:03:04:05,path=/dev/vhost-net,queue_size=1024,queues=2 -- -i --rxd=1024 --txd=1024 --txq=2 --rxq=2 --nb-cores=1
 
 3. Check if there is a tap device generated::
diff --git a/test_plans/virtio_user_for_container_networking_test_plan.rst b/test_plans/virtio_user_for_container_networking_test_plan.rst
index 15c9c248..d28b30a1 100644
--- a/test_plans/virtio_user_for_container_networking_test_plan.rst
+++ b/test_plans/virtio_user_for_container_networking_test_plan.rst
@@ -70,7 +70,7 @@  Test Case 1: packet forward test for container networking
     mkdir /mnt/huge
     mount -t hugetlbfs nodev /mnt/huge
 
-2. Bind one port to igb_uio, launch vhost::
+2. Bind one port to vfio-pci, launch vhost::
 
     ./testpmd -l 1-2 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i
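As a side note, it can help to confirm that the hugepages mounted in step 1 were actually reserved before launching vhost; a minimal sketch reading `/proc/meminfo` (Linux only, prints 0 if the field is absent):

```shell
# Sketch: report how many hugepages the kernel has reserved.
total=$(awk '/HugePages_Total/ {print $2}' /proc/meminfo 2>/dev/null)
echo "HugePages_Total=${total:-0}"
```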
 
@@ -92,7 +92,7 @@  Test Case 2: packet forward with multi-queues for container networking
     mkdir /mnt/huge
     mount -t hugetlbfs nodev /mnt/huge
 
-2. Bind one port to igb_uio, launch vhost::
+2. Bind one port to vfio-pci, launch vhost::
 
     ./testpmd -l 1-3 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2
 
diff --git a/test_plans/vm2vm_virtio_pmd_test_plan.rst b/test_plans/vm2vm_virtio_pmd_test_plan.rst
index 7eeaa652..30499bcd 100644
--- a/test_plans/vm2vm_virtio_pmd_test_plan.rst
+++ b/test_plans/vm2vm_virtio_pmd_test_plan.rst
@@ -46,7 +46,7 @@  Virtio-pmd <-> Vhost-user <-> Testpmd <-> Vhost-user <-> Virtio-pmd
 Test Case 1: VM2VM vhost-user/virtio-pmd with vector_rx path
 ============================================================
 
-1. Bind one physical nic port to igb_uio, then launch the testpmd by below commands::
+1. Bind one physical nic port to vfio-pci, then launch the testpmd by below commands::
 
     rm -rf vhost-net*
     ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
@@ -77,13 +77,13 @@  Test Case 1: VM2VM vhost-user/virtio-pmd with vector_rx path
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=true,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
 
-3. On VM1, bind vdev with igb_uio driver,then run testpmd, set rxonly for virtio1, [0000:xx.00] is [Bus,Device,Function] of virtio-net::
+3. On VM1, bind vdev with vfio-pci driver, then run testpmd and set rxonly for virtio1; [0000:xx.00] is the [Bus,Device,Function] of virtio-net::
 
     ./dpdk-testpmd -c 0x3 -n 4 -a 0000:xx.00,vectorized=1 -- -i --txd=1024 --rxd=1024
     testpmd>set fwd rxonly
     testpmd>start
 
-4. On VM2, bind vdev with igb_uio driver,then run testpmd, set txonly for virtio2 and send 64B packets, [0000:xx.00] is [Bus,Device,Function] of virtio-net::
+4. On VM2, bind vdev with vfio-pci driver, then run testpmd, set txonly for virtio2 and send 64B packets; [0000:xx.00] is the [Bus,Device,Function] of virtio-net::
 
     ./dpdk-testpmd -c 0x3 -n 4 -a 0000:xx.00,vectorized=1 -- -i --txd=1024 --rxd=1024
     testpmd>set fwd txonly
@@ -101,7 +101,7 @@  Test Case 1: VM2VM vhost-user/virtio-pmd with vector_rx path
 Test Case 2: VM2VM vhost-user/virtio-pmd with normal path
 =========================================================
 
-1. Bind one physical nic port to igb_uio, then launch the testpmd by below commands::
+1. Bind one physical nic port to vfio-pci, then launch the testpmd by below commands::
 
     rm -rf vhost-net*
     ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
@@ -132,13 +132,13 @@  Test Case 2: VM2VM vhost-user/virtio-pmd with normal path
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=true,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
 
-3. On VM1, bind vdev with igb_uio driver,then run testpmd, set rxonly for virtio1 ::
+3. On VM1, bind vdev with vfio-pci driver, then run testpmd and set rxonly for virtio1::
 
     ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
     testpmd>set fwd rxonly
     testpmd>start
 
-4. On VM2, bind vdev with igb_uio driver,then run testpmd, set rxonly for virtio2 and send 64B packets ::
+4. On VM2, bind vdev with vfio-pci driver, then run testpmd, set txonly for virtio2 and send 64B packets::
 
     ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
     testpmd>set fwd txonly
@@ -156,7 +156,7 @@  Test Case 2: VM2VM vhost-user/virtio-pmd with normal path
 Test Case 3: VM2VM vhost-user/virtio1.0-pmd with vector_rx path
 ===============================================================
 
-1. Bind one physical nic port to igb_uio, then launch the testpmd by below commands::
+1. Bind one physical nic port to vfio-pci, then launch the testpmd by below commands::
 
     rm -rf vhost-net*
     ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
@@ -187,13 +187,13 @@  Test Case 3: VM2VM vhost-user/virtio1.0-pmd with vector_rx path
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
 
-3. On VM1, bind vdev with igb_uio driver,then run testpmd, set rxonly for virtio1, [0000:xx.00] is [Bus,Device,Function] of virtio-net::
+3. On VM1, bind vdev with vfio-pci driver, then run testpmd and set rxonly for virtio1; [0000:xx.00] is the [Bus,Device,Function] of virtio-net::
 
     ./dpdk-testpmd -c 0x3 -n 4 -a 0000:xx.00,vectorized=1 -- -i --txd=1024 --rxd=1024
     testpmd>set fwd rxonly
     testpmd>start
 
-4. On VM2, bind vdev with igb_uio driver,then run testpmd, set txonly for virtio2, [0000:xx.00] is [Bus,Device,Function] of virtio-net::
+4. On VM2, bind vdev with vfio-pci driver, then run testpmd and set txonly for virtio2; [0000:xx.00] is the [Bus,Device,Function] of virtio-net::
 
     ./dpdk-testpmd -c 0x3 -n 4 -a 0000:xx.00,vectorized=1 -- -i --txd=1024 --rxd=1024
     testpmd>set fwd txonly
@@ -211,7 +211,7 @@  Test Case 3: VM2VM vhost-user/virtio1.0-pmd with vector_rx path
 Test Case 4: VM2VM vhost-user/virtio1.0-pmd with normal path
 ============================================================
 
-1. Bind one physical nic port to igb_uio, then launch the testpmd by below commands::
+1. Bind one physical nic port to vfio-pci, then launch the testpmd by below commands::
 
     rm -rf vhost-net*
     ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
@@ -242,13 +242,13 @@  Test Case 4: VM2VM vhost-user/virtio1.0-pmd with normal path
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :12
 
-3. On VM1, bind vdev with igb_uio driver,then run testpmd, set rxonly for virtio1 ::
+3. On VM1, bind vdev with vfio-pci driver, then run testpmd and set rxonly for virtio1::
 
     ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
     testpmd>set fwd rxonly
     testpmd>start
 
-4. On VM2, bind vdev with igb_uio driver,then run testpmd, set txonly for virtio2 ::
+4. On VM2, bind vdev with vfio-pci driver, then run testpmd and set txonly for virtio2::
 
     ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
     testpmd>set fwd txonly
@@ -266,7 +266,7 @@  Test Case 4: VM2VM vhost-user/virtio1.0-pmd with normal path
 Test Case 5: VM2VM vhost-user/virtio-pmd mergeable path with payload valid check
 ================================================================================
 
-1. Bind virtio with igb_uio driver, launch the testpmd by below commands::
+1. Bind virtio with vfio-pci driver, launch the testpmd by below commands::
 
     ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -309,7 +309,7 @@  Test Case 5: VM2VM vhost-user/virtio-pmd mergeable path with payload valid check
     -CONFIG_RTE_LIBRTE_PMD_PCAP=n
     +CONFIG_RTE_LIBRTE_PMD_PCAP=y
 
-4. Bind virtio with igb_uio driver,then run testpmd, set rxonly mode for virtio-pmd on VM1::
+4. Bind virtio with vfio-pci driver, then run testpmd and set rxonly mode for virtio-pmd on VM1::
 
     ./dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
     testpmd>set fwd rxonly
@@ -319,7 +319,7 @@  Test Case 5: VM2VM vhost-user/virtio-pmd mergeable path with payload valid check
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=test -- --pdump  'port=0,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'
 
-6. On VM2, bind virtio with igb_uio driver,then run testpmd, config tx_packets to 8k length with chain mode::
+6. On VM2, bind virtio with vfio-pci driver, then run testpmd and configure tx_packets to 8k length with chain mode::
 
     ./dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
     testpmd>set fwd mac
@@ -354,7 +354,7 @@  Test Case 5: VM2VM vhost-user/virtio-pmd mergeable path with payload valid check
 Test Case 6: VM2VM vhost-user/virtio1.0-pmd mergeable path with payload valid check
 ===================================================================================
 
-1. Bind virtio with igb_uio driver, launch the testpmd by below commands::
+1. Bind virtio with vfio-pci driver, launch the testpmd by below commands::
 
     ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -397,7 +397,7 @@  Test Case 6: VM2VM vhost-user/virtio1.0-pmd mergeable path with payload valid ch
     -CONFIG_RTE_LIBRTE_PMD_PCAP=n
     +CONFIG_RTE_LIBRTE_PMD_PCAP=y
 
-4. Bind virtio with igb_uio driver,then run testpmd, set rxonly mode for virtio-pmd on VM1::
+4. Bind virtio with vfio-pci driver, then run testpmd and set rxonly mode for virtio-pmd on VM1::
 
     ./dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
     testpmd>set fwd rxonly
@@ -407,7 +407,7 @@  Test Case 6: VM2VM vhost-user/virtio1.0-pmd mergeable path with payload valid ch
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=test -- --pdump  'port=0,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'
 
-6. On VM2, bind virtio with igb_uio driver,then run testpmd, config tx_packets to 8k length with chain mode::
+6. On VM2, bind virtio with vfio-pci driver, then run testpmd and configure tx_packets to 8k length with chain mode::
 
     ./dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
     testpmd>set fwd mac
@@ -442,7 +442,7 @@  Test Case 6: VM2VM vhost-user/virtio1.0-pmd mergeable path with payload valid ch
 Test Case 7: VM2VM vhost-user/virtio1.1-pmd mergeable path with payload valid check
 ===================================================================================
 
-1. Bind virtio with igb_uio driver, launch the testpmd by below commands::
+1. Bind virtio with vfio-pci driver, launch the testpmd by below commands::
 
     ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -485,7 +485,7 @@  Test Case 7: VM2VM vhost-user/virtio1.1-pmd mergeable path with payload valid ch
     -CONFIG_RTE_LIBRTE_PMD_PCAP=n
     +CONFIG_RTE_LIBRTE_PMD_PCAP=y
 
-4. Bind virtio with igb_uio driver,then run testpmd, set rxonly mode for virtio-pmd on VM1::
+4. Bind virtio with vfio-pci driver, then run testpmd and set rxonly mode for virtio-pmd on VM1::
 
     ./dpdk-testpmd -c 0x3 -n 4 --file-prefix=test -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
     testpmd>set fwd rxonly
@@ -495,7 +495,7 @@  Test Case 7: VM2VM vhost-user/virtio1.1-pmd mergeable path with payload valid ch
 
     ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=test -- --pdump  'port=0,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'
 
-6. On VM2, bind virtio with igb_uio driver,then run testpmd, config tx_packets to 8k length with chain mode::
+6. On VM2, bind virtio with vfio-pci driver, then run testpmd and configure tx_packets to 8k length with chain mode::
 
     ./dpdk-testpmd -c 0x3 -n 4 -- -i --txd=1024 --rxd=1024 --max-pkt-len=9600 --rx-offloads=0x00002000
     testpmd>set fwd mac
@@ -530,7 +530,7 @@  Test Case 7: VM2VM vhost-user/virtio1.1-pmd mergeable path with payload valid ch
 Test Case 8: VM2VM vhost-user/virtio1.1-pmd with normal path
 ============================================================
 
-1. Bind one physical nic port to igb_uio, then launch the testpmd by below commands::
+1. Bind one physical nic port to vfio-pci, then launch the testpmd by below commands::
 
     rm -rf vhost-net*
     ./dpdk-testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
@@ -561,13 +561,13 @@  Test Case 8: VM2VM vhost-user/virtio1.1-pmd with normal path
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on -vnc :12
 
-3. On VM1, bind vdev with igb_uio driver,then run testpmd, set rxonly for virtio1 ::
+3. On VM1, bind vdev with vfio-pci driver, then run testpmd and set rxonly for virtio1::
 
     ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
     testpmd>set fwd rxonly
     testpmd>start
 
-4. On VM2, bind vdev with igb_uio driver,then run testpmd, set txonly for virtio2 ::
+4. On VM2, bind vdev with vfio-pci driver, then run testpmd and set txonly for virtio2::
 
     ./dpdk-testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x00 --enable-hw-vlan-strip --txd=1024 --rxd=1024
     testpmd>set fwd txonly
@@ -585,7 +585,7 @@  Test Case 8: VM2VM vhost-user/virtio1.1-pmd with normal path
 Test Case 9: VM2VM virtio-pmd split ring mergeable path 8 queues CBDMA enable with server mode stable test
 ==========================================================================================================
 
-1. Bind 16 cbdma channels to igb_uio driver, then launch the testpmd with 2 vhost port and 8 queues by below commands::
+1. Bind 16 cbdma channels to vfio-pci driver, then launch the testpmd with 2 vhost ports and 8 queues by below commands::
 
     ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \
     --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=8 --txq=8
@@ -659,7 +659,7 @@  Test Case 9: VM2VM virtio-pmd split ring mergeable path 8 queues CBDMA enable wi
 Test Case 10: VM2VM virtio-pmd split ring mergeable path dynamic queue size CBDMA enable with server mode test
 ==============================================================================================================
 
-1. Bind 16 cbdma channels to igb_uio driver, then launch the testpmd with 2 vhost ports below commands::
+1. Bind 16 cbdma channels to vfio-pci driver, then launch the testpmd with 2 vhost ports by below commands::
 
     ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,client=1,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \
     --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=8,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;txq4@0000:80:04.4;txq5@0000:80:04.5;txq6@0000:80:04.6;txq7@0000:80:04.7]'  -- -i --nb-cores=4 --txd=1024 --rxd=1024 --rxq=4 --txq=4
@@ -730,7 +730,7 @@  Test Case 10: VM2VM virtio-pmd split ring mergeable path dynamic queue size CBDM
 Test Case 11: VM2VM virtio-pmd packed ring mergeable path 8 queues CBDMA enable test
 =====================================================================================
 
-1. Bind 16 cbdma channels to igb_uio driver, then launch the testpmd with 2 vhost port and 8 queues by below commands::
+1. Bind 16 cbdma channels to vfio-pci driver, then launch the testpmd with 2 vhost ports and 8 queues by below commands::
 
     rm -rf vhost-net*
     ./dpdk-testpmd -l 1-5 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=8,dmas=[txq0@0000:00:04.0;txq1@0000:00:04.1;txq2@0000:00:04.2;txq3@0000:00:04.3;txq4@0000:00:04.4;txq5@0000:00:04.5;txq6@0000:00:04.6;txq7@0000:00:04.7]' \
diff --git a/test_plans/vswitch_sample_cbdma_test_plan.rst b/test_plans/vswitch_sample_cbdma_test_plan.rst
index 44518eec..9abc3a99 100644
--- a/test_plans/vswitch_sample_cbdma_test_plan.rst
+++ b/test_plans/vswitch_sample_cbdma_test_plan.rst
@@ -62,7 +62,7 @@  Modify the testpmd code as following::
 Test Case1: PVP performance check with CBDMA channel using vhost async driver
 =============================================================================
 
-1. Bind physical port to vfio-pci and CBDMA channel to igb_uio.
+1. Bind one physical port and one CBDMA channel to vfio-pci.
 
 2. On host, launch dpdk-vhost by below command::
 
@@ -98,7 +98,7 @@  Test Case1: PVP performance check with CBDMA channel using vhost async driver
 Test Case2: PVP test with two VM and two CBDMA channels using vhost async driver
 =================================================================================
 
-1. Bind one physical ports to vfio-pci and two CBDMA channels to igb_uio.
+1. Bind one physical port and two CBDMA channels to vfio-pci.
 
 2. On host, launch dpdk-vhost by below command::
 
@@ -136,7 +136,7 @@  Test Case2: PVP test with two VM and two CBDMA channels using vhost async driver
 Test Case3: VM2VM forwarding test with two CBDMA channels
 =========================================================
 
-1.Bind one physical ports to vfio-pci and two CBDMA channels to igb_uio.
+1. Bind one physical port and two CBDMA channels to vfio-pci.
 
 2. On host, launch dpdk-vhost by below command::
 
@@ -183,7 +183,7 @@  Test Case3: VM2VM forwarding test with two CBDMA channels
 Test Case4: VM2VM test with cbdma channels register/unregister stable check
 ============================================================================
 
-1. Bind one physical ports to vfio-pci and two CBDMA channels to igb_uio.
+1. Bind one physical port and two CBDMA channels to vfio-pci.
 
 2. On host, launch dpdk-vhost by below command::
 
@@ -263,7 +263,7 @@  Test Case4: VM2VM test with cbdma channels register/unregister stable check
 Test Case5: VM2VM split ring test with iperf and reconnect stable check
 =======================================================================
 
-1. Bind one physical ports to vfio-pci and two CBDMA channels to igb_uio.
+1. Bind one physical port and two CBDMA channels to vfio-pci.
 
 2. On host, launch dpdk-vhost by below command::
 
@@ -322,7 +322,7 @@  Test Case5: VM2VM split ring test with iperf and reconnect stable check
 Test Case6: VM2VM packed ring test with iperf and reconnect stable test
 =======================================================================
 
-1. Bind one physical ports to vfio-pci and two CBDMA channels to igb_uio.
+1. Bind one physical port and two CBDMA channels to vfio-pci.
 
 2. On host, launch dpdk-vhost by below command::