From patchwork Thu Jun 1 14:54:22 2023
X-Patchwork-Submitter: "Yuan, DukaiX"
X-Patchwork-Id: 127818
From: Dukai Yuan
To: dts@dpdk.org
Cc: Dukai Yuan
Subject: [dts][PATCH V1] test_plans/pvp_qemu_multi_paths_port_restart: optimize qemu start-up parameters
Date: Thu, 1 Jun 2023 14:54:22 +0000
Message-Id: <20230601145422.2618903-1-dukaix.yuan@intel.com>
X-Mailer: git-send-email 2.31.1

Starting from QEMU 5.2, the "-net" parameter has been removed and replaced
by the "-device" parameter. In newer QEMU versions, a network device is
configured with a "-device" front end paired with a "-netdev" backend, so
update the QEMU start-up commands in this test plan accordingly.

Signed-off-by: Dukai Yuan
---
 ...emu_multi_paths_port_restart_test_plan.rst | 197 ++++++++++--------
 1 file changed, 113 insertions(+), 84 deletions(-)
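
For review convenience, the core of the change is the swap of the legacy
"-net" options for an explicit "-device"/"-netdev" pair. The snippet below is
only a condensed excerpt of the full commands in the diff, showing the
network-related options side by side (all values are taken from this test
plan, nothing new is introduced here):

    # Old test plan, legacy syntax (removed per the description above):
    -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
    -net user,hostfwd=tcp:127.0.0.1:6000-:22

    # New test plan, emulated NIC front end plus user-mode network backend:
    -device e1000,netdev=nttsip1 \
    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22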
diff --git a/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst b/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst
index 84ee68de..5756ff58 100644
--- a/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst
+++ b/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst
@@ -30,17 +30,19 @@ Test Case 1: pvp test with virtio 0.95 mergeable path

 2. Launch VM with mrg_rxbuf feature on::

-    qemu-system-x86_64 -name vm0 -enable-kvm -cpu host -smp 2 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
-    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
-    -net user,hostfwd=tcp:127.0.0.1:6000-:22 \
-    -chardev socket,id=char0,path=./vhost-net \
+    qemu-system-x86_64 -name vm0 -enable-kvm -daemonize \
+    -drive file=/home/image/ubuntu2004.img -pidfile /tmp/.vm0.pid \
+    -cpu host -smp 8 -m 8192 -numa node,memdev=mem -mem-prealloc \
+    -object memory-backend-file,id=mem,size=8192M,mem-path=/dev/hugepages,share=on \
+    -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
+    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 \
+    -device virtio-serial \
+    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 \
+    -chardev socket,id=char0,path=/root/dpdk/vhost-net \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
-    -vnc :10
+    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -vnc :4

 3. On VM, bind virtio net to vfio-pci and run testpmd::

@@ -80,16 +82,19 @@ Test Case 2: pvp test with virtio 0.95 normal path

 2. Launch VM with mrg_rxbuf feature off::

-    qemu-system-x86_64 -name vm0 -enable-kvm -cpu host -smp 2 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
-    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6000-:22 \
-    -chardev socket,id=char0,path=./vhost-net \
+    qemu-system-x86_64 -name vm0 -enable-kvm -daemonize \
+    -drive file=/home/image/ubuntu2004.img -pidfile /tmp/.vm0.pid \
+    -cpu host -smp 8 -m 8192 -numa node,memdev=mem -mem-prealloc \
+    -object memory-backend-file,id=mem,size=8192M,mem-path=/dev/hugepages,share=on \
+    -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
+    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 \
+    -device virtio-serial \
+    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 \
+    -chardev socket,id=char0,path=/root/dpdk/vhost-net \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
-    -vnc :10
+    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -vnc :4

 3. On VM, bind virtio net to vfio-pci and run testpmd with tx-offloads::

@@ -129,16 +134,19 @@ Test Case 3: pvp test with virtio 0.95 vector_rx path

 2. Launch VM with mrg_rxbuf feature off::

-    qemu-system-x86_64 -name vm0 -enable-kvm -cpu host -smp 2 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
-    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6000-:22 \
-    -chardev socket,id=char0,path=./vhost-net \
+    qemu-system-x86_64 -name vm0 -enable-kvm -daemonize \
+    -drive file=/home/image/ubuntu2004.img -pidfile /tmp/.vm0.pid \
+    -cpu host -smp 8 -m 8192 -numa node,memdev=mem -mem-prealloc \
+    -object memory-backend-file,id=mem,size=8192M,mem-path=/dev/hugepages,share=on \
+    -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
+    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 \
+    -device virtio-serial \
+    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 \
+    -chardev socket,id=char0,path=/root/dpdk/vhost-net \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
-    -vnc :10
+    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -vnc :4

 3. On VM, bind virtio net to vfio-pci and run testpmd without any tx-offloads::

@@ -178,16 +186,19 @@ Test Case 4: pvp test with virtio 1.0 mergeable path

 2. Launch VM with 1 virtio, note: we need add "disable-modern=false" to enable virtio 1.0::

-    qemu-system-x86_64 -name vm0 -enable-kvm -cpu host -smp 2 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
-    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6000-:22 \
-    -chardev socket,id=char0,path=./vhost-net \
+    qemu-system-x86_64 -name vm0 -enable-kvm -daemonize \
+    -drive file=/home/image/ubuntu2004.img -pidfile /tmp/.vm0.pid \
+    -cpu host -smp 8 -m 8192 -numa node,memdev=mem -mem-prealloc \
+    -object memory-backend-file,id=mem,size=8192M,mem-path=/dev/hugepages,share=on \
+    -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
+    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 \
+    -device virtio-serial \
+    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 \
+    -chardev socket,id=char0,path=/root/dpdk/vhost-net \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
-    -vnc :10
+    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -vnc :4

 3. On VM, bind virtio net to vfio-pci and run testpmd::

@@ -227,16 +238,19 @@ Test Case 5: pvp test with virtio 1.0 normal path

 2. Launch VM with 1 virtio, note: we need add "disable-modern=false" to enable virtio 1.0::

-    qemu-system-x86_64 -name vm0 -enable-kvm -cpu host -smp 2 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
-    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6000-:22 \
-    -chardev socket,id=char0,path=./vhost-net \
+    qemu-system-x86_64 -name vm0 -enable-kvm -daemonize \
+    -drive file=/home/image/ubuntu2004.img -pidfile /tmp/.vm0.pid \
+    -cpu host -smp 8 -m 8192 -numa node,memdev=mem -mem-prealloc \
+    -object memory-backend-file,id=mem,size=8192M,mem-path=/dev/hugepages,share=on \
+    -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
+    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 \
+    -device virtio-serial \
+    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 \
+    -chardev socket,id=char0,path=/root/dpdk/vhost-net \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
-    -vnc :10
+    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -vnc :4

 3. On VM, bind virtio net to vfio-pci and run testpmd with tx-offloads::

@@ -276,16 +290,19 @@ Test Case 6: pvp test with virtio 1.0 vector_rx path

 2. Launch VM with 1 virtio, note: we need add "disable-modern=false" to enable virtio 1.0::

-    qemu-system-x86_64 -name vm0 -enable-kvm -cpu host -smp 2 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
-    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6000-:22 \
-    -chardev socket,id=char0,path=./vhost-net \
+    qemu-system-x86_64 -name vm0 -enable-kvm -daemonize \
+    -drive file=/home/image/ubuntu2004.img -pidfile /tmp/.vm0.pid \
+    -cpu host -smp 8 -m 8192 -numa node,memdev=mem -mem-prealloc \
+    -object memory-backend-file,id=mem,size=8192M,mem-path=/dev/hugepages,share=on \
+    -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
+    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 \
+    -device virtio-serial \
+    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 \
+    -chardev socket,id=char0,path=/root/dpdk/vhost-net \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
-    -vnc :10
+    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -vnc :4

 3. On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads::

@@ -325,16 +342,19 @@ Test Case 7: pvp test with virtio 1.1 mergeable path

 2. Launch VM with 1 virtio, note: we need add "disable-modern=false" to enable virtio 1.1::

-    qemu-system-x86_64 -name vm0 -enable-kvm -cpu host -smp 2 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
-    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6000-:22 \
-    -chardev socket,id=char0,path=./vhost-net \
+    qemu-system-x86_64 -name vm0 -enable-kvm -daemonize \
+    -drive file=/home/image/ubuntu2004.img -pidfile /tmp/.vm0.pid \
+    -cpu host -smp 8 -m 8192 -numa node,memdev=mem -mem-prealloc \
+    -object memory-backend-file,id=mem,size=8192M,mem-path=/dev/hugepages,share=on \
+    -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
+    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 \
+    -device virtio-serial \
+    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 \
+    -chardev socket,id=char0,path=/root/dpdk/vhost-net \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,packed=on,rx_queue_size=1024,tx_queue_size=1024 \
-    -vnc :10
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
+    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -vnc :4

 3. On VM, bind virtio net to vfio-pci and run testpmd::

@@ -374,16 +394,19 @@ Test Case 8: pvp test with virtio 1.1 normal path

 2. Launch VM with 1 virtio, note: we need add "disable-modern=false" to enable virtio 1.1::

-    qemu-system-x86_64 -name vm0 -enable-kvm -cpu host -smp 2 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
-    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6000-:22 \
-    -chardev socket,id=char0,path=./vhost-net \
+    qemu-system-x86_64 -name vm0 -enable-kvm -daemonize \
+    -drive file=/home/image/ubuntu2004.img -pidfile /tmp/.vm0.pid \
+    -cpu host -smp 8 -m 8192 -numa node,memdev=mem -mem-prealloc \
+    -object memory-backend-file,id=mem,size=8192M,mem-path=/dev/hugepages,share=on \
+    -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
+    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 \
+    -device virtio-serial \
+    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 \
+    -chardev socket,id=char0,path=/root/dpdk/vhost-net \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,packed=on,rx_queue_size=1024,tx_queue_size=1024 \
-    -vnc :10
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024,packed=on \
+    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -vnc :4

 3. On VM, bind virtio net to vfio-pci and run testpmd with tx-offloads::

@@ -423,16 +446,19 @@ Test Case 9: pvp test with virtio 1.1 vector_rx path

 2. Launch VM with 1 virtio, note: we need add "disable-modern=false" to enable virtio 1.1::

-    qemu-system-x86_64 -name vm0 -enable-kvm -cpu host -smp 2 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
-    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6000-:22 \
-    -chardev socket,id=char0,path=./vhost-net \
+    qemu-system-x86_64 -name vm0 -enable-kvm -daemonize \
+    -drive file=/home/image/ubuntu2004.img -pidfile /tmp/.vm0.pid \
+    -cpu host -smp 8 -m 8192 -numa node,memdev=mem -mem-prealloc \
+    -object memory-backend-file,id=mem,size=8192M,mem-path=/dev/hugepages,share=on \
+    -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
+    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 \
+    -device virtio-serial \
+    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 \
+    -chardev socket,id=char0,path=/root/dpdk/vhost-net \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,packed=on,rx_queue_size=1024,tx_queue_size=1024 \
-    -vnc :10
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024,packed=on \
+    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -vnc :4

 3. On VM, bind virtio net to vfio-pci and run testpmd without tx-offloads::

@@ -472,16 +498,19 @@ Test Case 10: pvp test with virtio 1.0 mergeable path restart 10 times

 2. Launch VM with 1 virtio, note: we need add "disable-modern=false" to enable virtio 1.0::

-    qemu-system-x86_64 -name vm0 -enable-kvm -cpu host -smp 2 -m 4096 \
-    -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img \
-    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial \
-    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6000-:22 \
-    -chardev socket,id=char0,path=./vhost-net \
+    qemu-system-x86_64 -name vm0 -enable-kvm -daemonize \
+    -drive file=/home/image/ubuntu2004.img -pidfile /tmp/.vm0.pid \
+    -cpu host -smp 8 -m 8192 -numa node,memdev=mem -mem-prealloc \
+    -object memory-backend-file,id=mem,size=8192M,mem-path=/dev/hugepages,share=on \
+    -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
+    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 \
+    -device virtio-serial \
+    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 \
+    -chardev socket,id=char0,path=/root/dpdk/vhost-net \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
-    -vnc :10
+    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -vnc :4

 3. On VM, bind virtio net to vfio-pci and run testpmd::