From patchwork Fri Nov 11 09:58:24 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Junfeng Guo
X-Patchwork-Id: 119798
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Junfeng Guo
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com,
 beilei.xing@intel.com
Cc: dev@dpdk.org, xiaoyun.li@intel.com, jeroendb@google.com,
 rushilg@google.com, jrkim@google.com, Junfeng Guo
Subject: [PATCH] net/gve: support queue release
Date: Fri, 11 Nov 2022 17:58:24 +0800
Message-Id: <20221111095824.71778-1-junfeng.guo@intel.com>
X-Mailer: git-send-email 2.34.1
List-Id: DPDK patches and discussions

Add support for queue operations:
- rx_queue_release
- tx_queue_release

The previous gve_tx_queue_release and gve_rx_queue_release functions were
only used internally to release Rx/Tx queue related resources. But when
queues or ports need to be reconfigured, the ethdev layer checks for and
calls the rx_queue_release and tx_queue_release dev ops. Without these two
dev ops registered, the Rx/Tx queue structs would simply be set to NULL,
leaking the resources they own.
Signed-off-by: Junfeng Guo
Reviewed-by: Ferruh Yigit
---
 drivers/net/gve/gve_ethdev.c | 16 ++++++----------
 drivers/net/gve/gve_ethdev.h |  4 ++--
 drivers/net/gve/gve_rx.c     |  6 +++---
 drivers/net/gve/gve_tx.c     |  6 +++---
 4 files changed, 14 insertions(+), 18 deletions(-)

diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index 0086162f63..274a183250 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -244,8 +244,6 @@ static int
 gve_dev_close(struct rte_eth_dev *dev)
 {
 	struct gve_priv *priv = dev->data->dev_private;
-	struct gve_tx_queue *txq;
-	struct gve_rx_queue *rxq;
 	int err = 0;
 	uint16_t i;
 
@@ -255,15 +253,11 @@ gve_dev_close(struct rte_eth_dev *dev)
 		PMD_DRV_LOG(ERR, "Failed to stop dev.");
 	}
 
-	for (i = 0; i < dev->data->nb_tx_queues; i++) {
-		txq = dev->data->tx_queues[i];
-		gve_tx_queue_release(txq);
-	}
+	for (i = 0; i < dev->data->nb_tx_queues; i++)
+		gve_tx_queue_release(dev, i);
 
-	for (i = 0; i < dev->data->nb_rx_queues; i++) {
-		rxq = dev->data->rx_queues[i];
-		gve_rx_queue_release(rxq);
-	}
+	for (i = 0; i < dev->data->nb_rx_queues; i++)
+		gve_rx_queue_release(dev, i);
 
 	gve_free_qpls(priv);
 	rte_free(priv->adminq);
@@ -362,6 +356,8 @@ static const struct eth_dev_ops gve_eth_dev_ops = {
 	.dev_infos_get = gve_dev_info_get,
 	.rx_queue_setup = gve_rx_queue_setup,
 	.tx_queue_setup = gve_tx_queue_setup,
+	.rx_queue_release = gve_rx_queue_release,
+	.tx_queue_release = gve_tx_queue_release,
 	.link_update = gve_link_update,
 	.mtu_set = gve_dev_mtu_set,
 };
diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index f6cac3ff2b..235e55899e 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -291,10 +291,10 @@ gve_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id, uint16_t nb_desc,
 		   unsigned int socket_id, const struct rte_eth_txconf *conf);
 
 void
-gve_tx_queue_release(void *txq);
+gve_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 
 void
-gve_rx_queue_release(void *rxq);
+gve_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 
 void
 gve_stop_tx_queues(struct rte_eth_dev *dev);
diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c
index 4c5b8c517d..518c9d109c 100644
--- a/drivers/net/gve/gve_rx.c
+++ b/drivers/net/gve/gve_rx.c
@@ -192,9 +192,9 @@ gve_release_rxq_mbufs(struct gve_rx_queue *rxq)
 }
 
 void
-gve_rx_queue_release(void *rxq)
+gve_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
 {
-	struct gve_rx_queue *q = rxq;
+	struct gve_rx_queue *q = dev->data->rx_queues[qid];
 
 	if (!q)
 		return;
@@ -232,7 +232,7 @@ gve_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id,
 
 	/* Free memory if needed. */
 	if (dev->data->rx_queues[queue_id]) {
-		gve_rx_queue_release(dev->data->rx_queues[queue_id]);
+		gve_rx_queue_release(dev, queue_id);
 		dev->data->rx_queues[queue_id] = NULL;
 	}
 
diff --git a/drivers/net/gve/gve_tx.c b/drivers/net/gve/gve_tx.c
index 4420a17192..bf4e8fea2c 100644
--- a/drivers/net/gve/gve_tx.c
+++ b/drivers/net/gve/gve_tx.c
@@ -514,9 +514,9 @@ gve_release_txq_mbufs(struct gve_tx_queue *txq)
 }
 
 void
-gve_tx_queue_release(void *txq)
+gve_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
 {
-	struct gve_tx_queue *q = txq;
+	struct gve_tx_queue *q = dev->data->tx_queues[qid];
 
 	if (!q)
 		return;
@@ -553,7 +553,7 @@ gve_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id, uint16_t nb_desc,
 
 	/* Free memory if needed. */
 	if (dev->data->tx_queues[queue_id]) {
-		gve_tx_queue_release(dev->data->tx_queues[queue_id]);
+		gve_tx_queue_release(dev, queue_id);
 		dev->data->tx_queues[queue_id] = NULL;
 	}