From patchwork Wed May 31 17:15:46 2023
X-Patchwork-Submitter: "Yuan, DukaiX"
X-Patchwork-Id: 127750
From: Dukai Yuan
To: dts@dpdk.org
Cc: Dukai Yuan
Subject: [dts][PATCH V1] test_plans/dpdk_gro_lib: optimize qemu startup parameters
Date: Wed, 31 May 2023 17:15:46 +0000
Message-Id: <20230531171546.3282600-1-dukaix.yuan@intel.com>

Starting from QEMU 5.2, the "-net" parameter used in this test plan has been
removed and replaced by the "-device" parameter. In newer QEMU versions,
network devices are configured with the "-netdev" and "-device" parameters,
so update the QEMU startup commands in the test plan accordingly.

Signed-off-by: Dukai Yuan
---
 test_plans/dpdk_gro_lib_test_plan.rst | 88 ++++++++++++++++-----------
 1 file changed, 52 insertions(+), 36 deletions(-)
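
For reviewers, a minimal before/after sketch of the change (taken from the hunks
below, not an additional modification): the legacy "-net ...,vlan=..." user
networking options, which per the commit message newer QEMU no longer accepts,
are replaced by an explicit "-netdev" backend bound to a "-device" front end:

    # old: legacy "-net" syntax with vlan=
    -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6001-:22

    # new: "-device" front end plus "-netdev" backend, as used in this patch
    -device e1000,netdev=nttsip1 \
    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22

The vhost-user port follows the same pattern: a "-chardev socket" plus a
"-netdev type=vhost-user" backend bound to a "-device virtio-net-pci" front end.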
diff --git a/test_plans/dpdk_gro_lib_test_plan.rst b/test_plans/dpdk_gro_lib_test_plan.rst
index a48286eb..f77f3bdd 100644
--- a/test_plans/dpdk_gro_lib_test_plan.rst
+++ b/test_plans/dpdk_gro_lib_test_plan.rst
@@ -80,15 +80,19 @@ Test Case1: DPDK GRO lightmode test with tcp/ipv4 traffic
 
 3. Set up vm with virto device and using kernel virtio-net driver::
 
-    taskset -c 13 qemu-system-x86_64 -name us-vhost-vm1 \
-    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem \
-    -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6001-:22 \
-    -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu2004.img \
-    -chardev socket,id=char0,path=./vhost-net \
-    -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
-    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,host_tso4=on,guest_tso4=on \
-    -vnc :10 -daemonize
+    qemu-system-x86_64 -name vm0 -enable-kvm -daemonize \
+    -drive file=/home/image/ubuntu2004.img -pidfile /tmp/.vm0.pid \
+    -cpu host -smp 1 -m 8192 -numa node,memdev=mem -mem-prealloc \
+    -object memory-backend-file,id=mem,size=8192M,mem-path=/dev/hugepages,share=on \
+    -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
+    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 \
+    -device virtio-serial \
+    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 \
+    -chardev socket,id=char0,path=/root/dpdk/vhost-net \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=2 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,csum=on,gso=on,host_tso4=on,guest_tso4=on,mq=on,vectors=6 \
+    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -vnc :4
 
 4. In vm, config the virtio-net device with ip and turn the kernel gro off::
 
@@ -134,15 +138,19 @@ Test Case2: DPDK GRO heavymode test with tcp/ipv4 traffic
 
 3. Set up vm with virto device and using kernel virtio-net driver::
 
-    taskset -c 13 qemu-system-x86_64 -name us-vhost-vm1 \
-    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem \
-    -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6001-:22 \
-    -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu2004.img \
-    -chardev socket,id=char0,path=./vhost-net \
-    -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
-    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,host_tso4=on,guest_tso4=on \
-    -vnc :10 -daemonize
+    qemu-system-x86_64 -name vm0 -enable-kvm -daemonize \
+    -drive file=/home/image/ubuntu2004.img -pidfile /tmp/.vm0.pid\
+    -cpu host -smp 1 -m 8192 -numa node,memdev=mem -mem-prealloc \
+    -object memory-backend-file,id=mem,size=8192M,mem-path=/dev/hugepages,share=on \
+    -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
+    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 \
+    -device virtio-serial \
+    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 \
+    -chardev socket,id=char0,path=/root/dpdk/vhost-net \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=1 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,csum=on,gso=on,host_tso4=on,guest_tso4=on \
+    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -vnc :4
 
 4. In vm, config the virtio-net device with ip and turn the kernel gro off::
 
@@ -188,15 +196,19 @@ Test Case3: DPDK GRO heavymode_flush4 test with tcp/ipv4 traffic
 
 3. Set up vm with virto device and using kernel virtio-net driver::
 
-    taskset -c 13 qemu-system-x86_64 -name us-vhost-vm1 \
-    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem \
-    -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6001-:22 \
-    -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu2004.img \
-    -chardev socket,id=char0,path=./vhost-net \
-    -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
-    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,host_tso4=on,guest_tso4=on \
-    -vnc :10 -daemonize
+    qemu-system-x86_64 -name vm0 -enable-kvm -daemonize \
+    -drive file=/home/image/ubuntu2004.img -pidfile /tmp/.vm0.pid \
+    -cpu host -smp 1 -m 8192 -numa node,memdev=mem -mem-prealloc \
+    -object memory-backend-file,id=mem,size=8192M,mem-path=/dev/hugepages,share=on \
+    -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
+    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 \
+    -device virtio-serial \
+    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 \
+    -chardev socket,id=char0,path=/root/dpdk/vhost-net \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=1 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,csum=on,gso=on,host_tso4=on,guest_tso4=on \
+    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -vnc :4
 
 4. In vm, config the virtio-net device with ip and turn the kernel gro off::
 
@@ -319,15 +331,19 @@ Test Case5: DPDK GRO test with 2 queues using tcp/ipv4 traffic
 
 3. Set up vm with virto device and using kernel virtio-net driver::
 
-    taskset -c 31 /home/qemu-install/qemu-4.2.1/bin/qemu-system-x86_64 -name us-vhost-vm1 \
-    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem \
-    -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -netdev user,id=yinan,hostfwd=tcp:127.0.0.1:6005-:22 -device e1000,netdev=yinan \
-    -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu2004.img \
-    -chardev socket,id=char0,path=./vhost-net \
-    -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=2 \
-    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,host_tso4=on,guest_tso4=on,mq=on,vectors=15 \
-    -vnc :10 -daemonize
+    qemu-system-x86_64 -name vm0 -enable-kvm -daemonize \
+    -drive file=/home/image/ubuntu2004.img -pidfile /tmp/.vm0.pid \
+    -cpu host -smp 1 -m 8192 -numa node,memdev=mem -mem-prealloc \
+    -object memory-backend-file,id=mem,size=8192M,mem-path=/dev/hugepages,share=on \
+    -device e1000,netdev=nttsip1 \
+    -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
+    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 \
+    -device virtio-serial \
+    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 \
+    -chardev socket,id=char0,path=/root/dpdk/vhost-net \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=2 \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,csum=on,gso=on,host_tso4=on,guest_tso4=on,mq=on,vectors=6 \
+    -monitor unix:/tmp/vm0_monitor.sock,server,nowait -vnc :4
 
 4. In vm, config the virtio-net device with ip and turn the kernel gro off::
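
(Not part of the patch, for context: in step 4 of each case the guest interface is
given an IP address and kernel GRO is turned off. With a placeholder interface name
and address, that configuration looks roughly like:

    ip addr add 192.168.1.2/24 dev ens4    # address and "ens4" are illustrative only
    ip link set ens4 up
    ethtool -K ens4 gro off

The actual interface names and addresses come from the unchanged parts of the test plan.)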