From patchwork Fri Aug 19 10:09:40 2022
X-Patchwork-Submitter: "Zhang, Peng1X" <peng1x.zhang@intel.com>
X-Patchwork-Id: 115244
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: peng1x.zhang@intel.com
To: dev@dpdk.org
Cc: aman.deep.singh@intel.com, yuying.zhang@intel.com,
 Peng Zhang <peng1x.zhang@intel.com>, stable@dpdk.org
Subject: [PATCH v2] app/testpmd: fix incorrect queues state of secondary process
Date: Fri, 19 Aug 2022 18:09:40 +0800
Message-Id: <20220819100940.657437-1-peng1x.zhang@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220623181502.181567-1-peng1x.zhang@intel.com>
References: <20220623181502.181567-1-peng1x.zhang@intel.com>

From: Peng Zhang <peng1x.zhang@intel.com>

The primary process sets up the queue states correctly when starting a
port, but in a multi-process scenario the "stream_init" function reads
the wrong queue states in a secondary process.

Fix this by reading the queue states from the ethdev device data, which
is located in shared memory and is therefore valid in every process.

Fixes: 3c4426db54fc ("app/testpmd: do not poll stopped queues")
Cc: stable@dpdk.org

Signed-off-by: Peng Zhang <peng1x.zhang@intel.com>
---
 app/test-pmd/testpmd.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index addcbcac85..70f907d96b 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -75,6 +75,8 @@
 
 #include "testpmd.h"
 
+#include 
+
 #ifndef MAP_HUGETLB
 /* FreeBSD may not have MAP_HUGETLB (in fact, it probably doesn't) */
 #define HUGE_FLAG (0x40000)
@@ -2402,9 +2404,23 @@ start_packet_forwarding(int with_tx_first)
 	if (!pkt_fwd_shared_rxq_check())
 		return;
 
-	if (stream_init != NULL)
-		for (i = 0; i < cur_fwd_config.nb_fwd_streams; i++)
+	if (stream_init != NULL) {
+		for (i = 0; i < cur_fwd_config.nb_fwd_streams; i++) {
+			if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+				struct fwd_stream *fs = fwd_streams[i];
+				struct rte_eth_dev_data *dev_rx_data, *dev_tx_data;
+
+				dev_rx_data = (&rte_eth_devices[fs->rx_port])->data;
+				dev_tx_data = (&rte_eth_devices[fs->tx_port])->data;
+
+				uint8_t rx_state = dev_rx_data->rx_queue_state[fs->rx_queue];
+				ports[fs->rx_port].rxq[fs->rx_queue].state = rx_state;
+				uint8_t tx_state = dev_tx_data->tx_queue_state[fs->tx_queue];
+				ports[fs->tx_port].txq[fs->tx_queue].state = tx_state;
+			}
 			stream_init(fwd_streams[i]);
+		}
+	}
 
 	port_fwd_begin = cur_fwd_config.fwd_eng->port_fwd_begin;
 	if (port_fwd_begin != NULL) {