From patchwork Mon Jul 18 15:27:29 2022
X-Patchwork-Submitter: "Huang, ZhiminX"
X-Patchwork-Id: 114006
From: Zhimin Huang
To: dts@dpdk.org
Cc: Zhimin Huang
Subject: [dts][PATCH V1] tests/large_vf: modify test case to adapt ice change
Date: Mon, 18 Jul 2022 23:27:29 +0800
Message-Id: <20220718152729.5200-1-zhiminx.huang@intel.com>
X-Mailer: git-send-email 2.17.1
List-Id: test suite reviews and discussions

With the ice kernel driver update, the number of available queues on a
2-port E810 NIC is 767, so only 2 VFs can start testpmd with 256 queues
each.

Signed-off-by: Zhimin Huang
Acked-by: Lijuan Tu
---
 test_plans/large_vf_test_plan.rst | 11 ++++++++---
 tests/TestSuite_large_vf.py       | 10 ++++++----
 2 files changed, 14 insertions(+), 7 deletions(-)

diff --git a/test_plans/large_vf_test_plan.rst b/test_plans/large_vf_test_plan.rst
index c7f0d119..e043a660 100644
--- a/test_plans/large_vf_test_plan.rst
+++ b/test_plans/large_vf_test_plan.rst
@@ -36,7 +36,7 @@ Note::
 
     --total-num-mbufs=N, N is mbuf number, usually allocate 512 mbuf for one queue, if use 3 VFs, N >= 512*256*3=393216.
 
-Test case: 3 Max VFs + 256 queues
+Test case: multi VFs + 256 queues
 =================================
 
 Subcase 1 : multi fdir for 256 queues of consistent queue group
@@ -308,12 +308,17 @@ or::
 
     Fail to setup test.
 
-Subcase 7: negative: fail to setup 256 queues when more than 3 VFs
+Subcase 7: negative: fail to setup 256 queues when more than 2 VFs
 ------------------------------------------------------------------
-Create 4 VFs.
+Create 3 VFs.
 Bind all VFs to vfio-pci.
 Fail to start testpmd with "--txq=256 --rxq=256".
 
+.. note::
+
+   For SW4.0 + ice-1.9.5, the available queue count on a 2-port E810 NIC is 767, which supports 2 VFs starting testpmd with 256 queues each.
+
+   For SW3.2 + ice-1.8.3, the available queue count on a 2-port E810 NIC is 943, which supports 3 VFs starting testpmd with 256 queues each.
 
 Test case: 128 Max VFs + 4 queues (default)
 ===========================================
diff --git a/tests/TestSuite_large_vf.py b/tests/TestSuite_large_vf.py
index 00991790..4e2ff1d6 100644
--- a/tests/TestSuite_large_vf.py
+++ b/tests/TestSuite_large_vf.py
@@ -355,7 +355,9 @@ class TestLargeVf(TestCase):
         elif subcase_name == "test_more_than_3_vfs_256_queues":
             self.pmd_output.execute_cmd("quit", "#")
             # start testpmd use 256 queues
-            for i in range(self.vf_num + 1):
+            # for CVL 2 ports, the available queue count is 767 with ice-1.9.5 (SW4.0)
+            _vfs_num = self.vf_num - 1
+            for i in range(_vfs_num + 1):
                 if self.max_vf_num == 64:
                     self.pmdout_list[0].start_testpmd(
                         param=tv["param"],
@@ -379,7 +381,7 @@ class TestLargeVf(TestCase):
                     self.pmdout_list[0].execute_cmd("quit", "# ")
                     break
                 else:
-                    if i < self.vf_num:
+                    if i < _vfs_num:
                         self.pmdout_list[i].start_testpmd(
                             param=tv["param"],
                             ports=[self.sriov_vfs_port[i].pci],
@@ -406,7 +408,7 @@ class TestLargeVf(TestCase):
                 self.pmdout_list[0].execute_cmd("quit", "# ")
                 self.pmdout_list[1].execute_cmd("quit", "# ")
                 self.pmdout_list[2].execute_cmd("quit", "# ")
-                if self.vf_num > 3:
+                if _vfs_num > 3:
                     self.pmdout_list[3].execute_cmd("quit", "# ")
                     self.pmdout_list[4].execute_cmd("quit", "# ")
                     self.pmdout_list[5].execute_cmd("quit", "# ")
@@ -645,7 +647,7 @@ class TestLargeVf(TestCase):
         )
         self.check_txonly_pkts(rxtx_num)
 
-    def test_3_vfs_256_queues(self):
+    def test_multi_vfs_256_queues(self):
         self.create_iavf(self.vf_num + 1)
         self.launch_testpmd("--rxq=256 --txq=256", total=True)
         self.config_testpmd()
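
--
Reviewer note (not part of the patch): the VF limits above follow from
simple queue arithmetic, since each VF started with "--txq=256 --rxq=256"
consumes 256 queue pairs from the device-wide budget. The Python sketch
below only illustrates that arithmetic; the helper name is made up and
does not exist in the test suite.

    # Floor division of the device-wide queue budget by the per-VF
    # queue count gives the number of VFs that can start testpmd.
    def max_vfs_with_256_queues(available_queues, queues_per_vf=256):
        return available_queues // queues_per_vf

    assert max_vfs_with_256_queues(767) == 2  # ice-1.9.5 / SW4.0
    assert max_vfs_with_256_queues(943) == 3  # ice-1.8.3 / SW3.2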