From patchwork Mon Jan 30 06:26:37 2023
X-Patchwork-Submitter: Junfeng Guo
X-Patchwork-Id: 122655
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Junfeng Guo
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com,
 beilei.xing@intel.com
Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com,
 Junfeng Guo, Rushil Gupta, Jordan Kimbrough, Jeroen de Borst
Subject: [RFC v2 4/9] net/gve: support queue release and stop for DQO
Date: Mon, 30 Jan 2023 14:26:37 +0800
Message-Id: <20230130062642.3337239-5-junfeng.guo@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230130062642.3337239-1-junfeng.guo@intel.com>
References: <20230118025347.1567078-1-junfeng.guo@intel.com>
 <20230130062642.3337239-1-junfeng.guo@intel.com>

Add support for the following queue operations for DQO:
 - gve_tx_queue_release_dqo
 - gve_rx_queue_release_dqo
 - gve_stop_tx_queues_dqo
 - gve_stop_rx_queues_dqo

Signed-off-by: Junfeng Guo
Signed-off-by: Rushil Gupta
Signed-off-by: Jordan Kimbrough
Signed-off-by: Jeroen de Borst
---
 drivers/net/gve/gve_ethdev.c | 18 +++++++++---
 drivers/net/gve/gve_ethdev.h | 12 ++++++++
 drivers/net/gve/gve_rx.c     |  3 ++
 drivers/net/gve/gve_rx_dqo.c | 57 ++++++++++++++++++++++++++++++++++++
 drivers/net/gve/gve_tx.c     |  3 ++
 drivers/net/gve/gve_tx_dqo.c | 55 ++++++++++++++++++++++++++++++++++
 6 files changed, 144 insertions(+), 4 deletions(-)
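
Note: the new handlers are not called directly by applications; they are
reached through the generic ethdev API. Below is a minimal sketch of the
application-side flow (illustrative only; it assumes port_id refers to an
already configured and started gve port, and error handling is omitted):

	uint16_t port_id = 0;	/* assumption: an initialized gve port */
	int ret;

	/* Stop lands in gve_stop_tx_queues()/gve_stop_rx_queues(), which
	 * now branch to gve_stop_tx_queues_dqo()/gve_stop_rx_queues_dqo()
	 * when the device is not in GQI mode (!gve_is_gqi()). */
	ret = rte_eth_dev_stop(port_id);

	/* Close lands in gve_dev_close(), which now releases each queue
	 * via gve_tx_queue_release_dqo()/gve_rx_queue_release_dqo():
	 * free the mbufs held in sw_ring, then the descriptor-ring,
	 * completion-ring and queue-resource memzones, then the queue. */
	ret = rte_eth_dev_close(port_id);
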
diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index 3543378978..7c4be3a1cb 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -292,11 +292,19 @@ gve_dev_close(struct rte_eth_dev *dev)
 			PMD_DRV_LOG(ERR, "Failed to stop dev.");
 	}
 
-	for (i = 0; i < dev->data->nb_tx_queues; i++)
-		gve_tx_queue_release(dev, i);
+	if (gve_is_gqi(priv)) {
+		for (i = 0; i < dev->data->nb_tx_queues; i++)
+			gve_tx_queue_release(dev, i);
+
+		for (i = 0; i < dev->data->nb_rx_queues; i++)
+			gve_rx_queue_release(dev, i);
+	} else {
+		for (i = 0; i < dev->data->nb_tx_queues; i++)
+			gve_tx_queue_release_dqo(dev, i);
 
-	for (i = 0; i < dev->data->nb_rx_queues; i++)
-		gve_rx_queue_release(dev, i);
+		for (i = 0; i < dev->data->nb_rx_queues; i++)
+			gve_rx_queue_release_dqo(dev, i);
+	}
 
 	gve_free_qpls(priv);
 	rte_free(priv->adminq);
@@ -408,6 +416,8 @@ gve_eth_dev_ops_override(struct eth_dev_ops *local_eth_dev_ops)
 	/* override eth_dev ops for DQO */
 	local_eth_dev_ops->tx_queue_setup = gve_tx_queue_setup_dqo;
 	local_eth_dev_ops->rx_queue_setup = gve_rx_queue_setup_dqo;
+	local_eth_dev_ops->tx_queue_release = gve_tx_queue_release_dqo;
+	local_eth_dev_ops->rx_queue_release = gve_rx_queue_release_dqo;
 }
 
 static void
diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index 0adfc90554..93314f2db3 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -353,4 +353,16 @@ gve_tx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 		       uint16_t nb_desc, unsigned int socket_id,
 		       const struct rte_eth_txconf *conf);
 
+void
+gve_tx_queue_release_dqo(struct rte_eth_dev *dev, uint16_t qid);
+
+void
+gve_rx_queue_release_dqo(struct rte_eth_dev *dev, uint16_t qid);
+
+void
+gve_stop_tx_queues_dqo(struct rte_eth_dev *dev);
+
+void
+gve_stop_rx_queues_dqo(struct rte_eth_dev *dev);
+
 #endif /* _GVE_ETHDEV_H_ */
diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c
index 518c9d109c..9ba975c9b4 100644
--- a/drivers/net/gve/gve_rx.c
+++ b/drivers/net/gve/gve_rx.c
@@ -343,6 +343,9 @@ gve_stop_rx_queues(struct rte_eth_dev *dev)
 	uint16_t i;
 	int err;
 
+	if (!gve_is_gqi(hw))
+		return gve_stop_rx_queues_dqo(dev);
+
 	err = gve_adminq_destroy_rx_queues(hw, dev->data->nb_rx_queues);
 	if (err != 0)
 		PMD_DRV_LOG(WARNING, "failed to destroy rxqs");
diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c
index e8a6d575fc..aca6f8ea2d 100644
--- a/drivers/net/gve/gve_rx_dqo.c
+++ b/drivers/net/gve/gve_rx_dqo.c
@@ -5,6 +5,38 @@
 #include "gve_ethdev.h"
 #include "base/gve_adminq.h"
 
+static inline void
+gve_release_rxq_mbufs_dqo(struct gve_rx_queue *rxq)
+{
+	uint16_t i;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		if (rxq->sw_ring[i]) {
+			rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+			rxq->sw_ring[i] = NULL;
+		}
+	}
+
+	rxq->nb_avail = rxq->nb_rx_desc;
+}
+
+void
+gve_rx_queue_release_dqo(struct rte_eth_dev *dev, uint16_t qid)
+{
+	struct gve_rx_queue *q = dev->data->rx_queues[qid];
+
+	if (q == NULL)
+		return;
+
+	gve_release_rxq_mbufs_dqo(q);
+	rte_free(q->sw_ring);
+	rte_memzone_free(q->compl_ring_mz);
+	rte_memzone_free(q->mz);
+	rte_memzone_free(q->qres_mz);
+	q->qres = NULL;
+	rte_free(q);
+}
+
 static void
 gve_reset_rxq_dqo(struct gve_rx_queue *rxq)
 {
@@ -54,6 +86,12 @@ gve_rx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 	}
 	nb_desc = hw->rx_desc_cnt;
 
+	/* Free memory if needed */
+	if (dev->data->rx_queues[queue_id]) {
+		gve_rx_queue_release_dqo(dev, queue_id);
+		dev->data->rx_queues[queue_id] = NULL;
+	}
+
 	/* Allocate the RX queue data structure. */
 	rxq = rte_zmalloc_socket("gve rxq",
 				 sizeof(struct gve_rx_queue),
@@ -146,3 +184,22 @@ gve_rx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 	rte_free(rxq);
 	return err;
 }
+
+void
+gve_stop_rx_queues_dqo(struct rte_eth_dev *dev)
+{
+	struct gve_priv *hw = dev->data->dev_private;
+	struct gve_rx_queue *rxq;
+	uint16_t i;
+	int err;
+
+	err = gve_adminq_destroy_rx_queues(hw, dev->data->nb_rx_queues);
+	if (err != 0)
+		PMD_DRV_LOG(WARNING, "failed to destroy rxqs");
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		gve_release_rxq_mbufs_dqo(rxq);
+		gve_reset_rxq_dqo(rxq);
+	}
+}
diff --git a/drivers/net/gve/gve_tx.c b/drivers/net/gve/gve_tx.c
index bf4e8fea2c..0eb42b1216 100644
--- a/drivers/net/gve/gve_tx.c
+++ b/drivers/net/gve/gve_tx.c
@@ -658,6 +658,9 @@ gve_stop_tx_queues(struct rte_eth_dev *dev)
 	uint16_t i;
 	int err;
 
+	if (!gve_is_gqi(hw))
+		return gve_stop_tx_queues_dqo(dev);
+
 	err = gve_adminq_destroy_tx_queues(hw, dev->data->nb_tx_queues);
 	if (err != 0)
 		PMD_DRV_LOG(WARNING, "failed to destroy txqs");
diff --git a/drivers/net/gve/gve_tx_dqo.c b/drivers/net/gve/gve_tx_dqo.c
index 4f8bad31bb..e2e4153f27 100644
--- a/drivers/net/gve/gve_tx_dqo.c
+++ b/drivers/net/gve/gve_tx_dqo.c
@@ -5,6 +5,36 @@
 #include "gve_ethdev.h"
 #include "base/gve_adminq.h"
 
+static inline void
+gve_release_txq_mbufs_dqo(struct gve_tx_queue *txq)
+{
+	uint16_t i;
+
+	for (i = 0; i < txq->sw_size; i++) {
+		if (txq->sw_ring[i]) {
+			rte_pktmbuf_free_seg(txq->sw_ring[i]);
+			txq->sw_ring[i] = NULL;
+		}
+	}
+}
+
+void
+gve_tx_queue_release_dqo(struct rte_eth_dev *dev, uint16_t qid)
+{
+	struct gve_tx_queue *q = dev->data->tx_queues[qid];
+
+	if (q == NULL)
+		return;
+
+	gve_release_txq_mbufs_dqo(q);
+	rte_free(q->sw_ring);
+	rte_memzone_free(q->mz);
+	rte_memzone_free(q->compl_ring_mz);
+	rte_memzone_free(q->qres_mz);
+	q->qres = NULL;
+	rte_free(q);
+}
+
 static int
 check_tx_thresh_dqo(uint16_t nb_desc, uint16_t tx_rs_thresh,
 		    uint16_t tx_free_thresh)
@@ -90,6 +120,12 @@ gve_tx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 	}
 	nb_desc = hw->tx_desc_cnt;
 
+	/* Free memory if needed. */
+	if (dev->data->tx_queues[queue_id]) {
+		gve_tx_queue_release_dqo(dev, queue_id);
+		dev->data->tx_queues[queue_id] = NULL;
+	}
+
 	/* Allocate the TX queue data structure. */
 	txq = rte_zmalloc_socket("gve txq",
 				 sizeof(struct gve_tx_queue),
@@ -176,3 +212,22 @@ gve_tx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 	rte_free(txq);
 	return err;
 }
+
+void
+gve_stop_tx_queues_dqo(struct rte_eth_dev *dev)
+{
+	struct gve_priv *hw = dev->data->dev_private;
+	struct gve_tx_queue *txq;
+	uint16_t i;
+	int err;
+
+	err = gve_adminq_destroy_tx_queues(hw, dev->data->nb_tx_queues);
+	if (err != 0)
+		PMD_DRV_LOG(WARNING, "failed to destroy txqs");
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		gve_release_txq_mbufs_dqo(txq);
+		gve_reset_txq_dqo(txq);
+	}
+}