From patchwork Wed Jul 12 07:42:14 2023
X-Patchwork-Submitter: Qiming Yang
X-Patchwork-Id: 129492
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Qiming Yang
To: dev@dpdk.org
Cc: beilei.xing@intel.com, qi.z.zhang@intel.com, Qiming Yang, Mingjin Ye
Subject: [PATCH 2/3] net/igc: fix Rx and Tx queue status get
Date: Wed, 12 Jul 2023 07:42:14 +0000
Message-Id: <20230712074215.3249336-3-qiming.yang@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230712074215.3249336-1-qiming.yang@intel.com>
References: <20230712074215.3249336-1-qiming.yang@intel.com>
List-Id: DPDK patches and discussions

The igc driver does not implement the per-queue start/stop functions, so
the queue state is not updated when the hardware queues are enabled or
disabled. As a result, applications cannot get the correct queue status.
This patch fixes the issue by updating the software queue state whenever
a queue is enabled or disabled.
Fixes: a5aeb2b9e225 ("net/igc: support Rx and Tx")

Signed-off-by: Qiming Yang
Signed-off-by: Mingjin Ye
---
 drivers/net/igc/igc_txrx.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index c11b6f7f25..5c60e3e997 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -1215,6 +1215,7 @@ igc_rx_init(struct rte_eth_dev *dev)
 		dvmolr |= IGC_DVMOLR_STRCRC;
 
 		IGC_WRITE_REG(hw, IGC_DVMOLR(rxq->reg_idx), dvmolr);
+		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
 	}
 
 	return 0;
@@ -1888,6 +1889,7 @@ igc_dev_clear_queues(struct rte_eth_dev *dev)
 		if (txq != NULL) {
 			igc_tx_queue_release_mbufs(txq);
 			igc_reset_tx_queue(txq);
+			dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 		}
 	}
 
@@ -1896,6 +1898,7 @@
 		if (rxq != NULL) {
 			igc_rx_queue_release_mbufs(rxq);
 			igc_reset_rx_queue(rxq);
+			dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 		}
 	}
 }
@@ -2143,6 +2146,7 @@ igc_tx_init(struct rte_eth_dev *dev)
 			IGC_TXDCTL_WTHRESH_MSK;
 		txdctl |= IGC_TXDCTL_QUEUE_ENABLE;
 		IGC_WRITE_REG(hw, IGC_TXDCTL(txq->reg_idx), txdctl);
+		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
 	}
 
 	if (offloads & RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP) {
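
Usage note (not part of the patch): once the driver keeps
dev->data->rx_queue_state[] and tx_queue_state[] in sync with the hardware,
an application can observe the state through the generic ethdev queue-info
API, since rte_eth_rx_queue_info_get() reports that field as queue_state.
A minimal sketch follows; the helper name and the port/queue IDs passed by
the caller are placeholders, not anything defined by this patch.

#include <stdio.h>
#include <rte_ethdev.h>

/* Print whether an Rx queue is reported as started or stopped.
 * The queue_state field is filled by the ethdev layer from
 * dev->data->rx_queue_state[], which this patch now keeps up to date
 * for igc when queues are enabled in igc_rx_init()/igc_tx_init() and
 * cleared in igc_dev_clear_queues().
 */
static void
print_rxq_state(uint16_t port_id, uint16_t queue_id)
{
	struct rte_eth_rxq_info qinfo;

	if (rte_eth_rx_queue_info_get(port_id, queue_id, &qinfo) != 0) {
		printf("port %u rxq %u: queue info not available\n",
		       port_id, queue_id);
		return;
	}

	printf("port %u rxq %u: %s\n", port_id, queue_id,
	       qinfo.queue_state == RTE_ETH_QUEUE_STATE_STARTED ?
	       "started" : "stopped");
}

Without the fix, the reported state would not track the hardware queue
being enabled or disabled, which is the mismatch the commit message
describes.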