From patchwork Mon Jan 30 06:26:34 2023
X-Patchwork-Submitter: Junfeng Guo
X-Patchwork-Id: 122652
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Junfeng Guo
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com, beilei.xing@intel.com
Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com, Junfeng Guo, Rushil Gupta, Jordan Kimbrough, Jeroen de Borst
Subject: [RFC v2 1/9] net/gve: add Tx queue setup for DQO
Date: Mon, 30 Jan 2023 14:26:34 +0800
Message-Id: <20230130062642.3337239-2-junfeng.guo@intel.com>
In-Reply-To: <20230130062642.3337239-1-junfeng.guo@intel.com>
References: <20230118025347.1567078-1-junfeng.guo@intel.com> <20230130062642.3337239-1-junfeng.guo@intel.com>

Add support for the tx_queue_setup_dqo ops. The DQO format uses a submission and a completion queue pair for each Tx/Rx queue. Note that with the DQO format, all descriptors, doorbells and counters are written in little-endian.
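The submission/completion split can be seen in miniature below. This is an illustrative sketch only, not part of the patch: it borrows the descriptor field and macro names introduced later in this series and assumes the queue state set up in this patch, to show how a single Tx descriptor is written in little-endian and how the doorbell is rung.

/*
 * Sketch: post one mbuf on a DQO Tx submission ring. The completion for
 * this descriptor arrives later on the separate completion ring and is
 * matched back via compl_tag. sw_ring bookkeeping and multi-segment
 * handling are omitted for brevity.
 */
static inline void
gve_dqo_post_one_sketch(struct gve_tx_queue *txq, struct rte_mbuf *mbuf)
{
	volatile union gve_tx_desc_dqo *txd = &txq->tx_ring[txq->tx_tail];

	/* Descriptor fields are little-endian on the wire. */
	txd->pkt.buf_addr = rte_cpu_to_le_64(rte_mbuf_data_iova(mbuf));
	txd->pkt.buf_size = mbuf->data_len;
	txd->pkt.dtype = GVE_TX_PKT_DESC_DTYPE_DQO;
	txd->pkt.compl_tag = rte_cpu_to_le_16(txq->sw_tail);
	txd->pkt.end_of_packet = 1;

	/* The doorbell is a plain 32-bit write telling the device how far
	 * the submission ring has been filled. */
	txq->tx_tail = (txq->tx_tail + 1) & (txq->nb_tx_desc - 1);
	rte_write32(txq->tx_tail, txq->qtx_tail);
}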
Signed-off-by: Junfeng Guo Signed-off-by: Rushil Gupta Signed-off-by: Jordan Kimbrough Signed-off-by: Jeroen de Borst --- .mailmap | 3 + MAINTAINERS | 3 + drivers/net/gve/base/gve.h | 1 + drivers/net/gve/base/gve_desc_dqo.h | 4 - drivers/net/gve/base/gve_osdep.h | 4 + drivers/net/gve/gve_ethdev.c | 16 ++- drivers/net/gve/gve_ethdev.h | 33 +++++- drivers/net/gve/gve_tx_dqo.c | 178 ++++++++++++++++++++++++++++ drivers/net/gve/meson.build | 1 + 9 files changed, 235 insertions(+), 8 deletions(-) create mode 100644 drivers/net/gve/gve_tx_dqo.c diff --git a/.mailmap b/.mailmap index 452267a567..553b9ce3ca 100644 --- a/.mailmap +++ b/.mailmap @@ -578,6 +578,7 @@ Jens Freimann Jeremy Plsek Jeremy Spewock Jerin Jacob +Jeroen de Borst Jerome Jutteau Jerry Hao OS Jerry Lilijun @@ -642,6 +643,7 @@ Jonathan Erb Jon DeVree Jon Loeliger Joongi Kim +Jordan Kimbrough Jørgen Østergaard Sloth Jörg Thalheim Joseph Richard @@ -1145,6 +1147,7 @@ Roy Franz Roy Pledge Roy Shterman Ruifeng Wang +Rushil Gupta Ryan E Hall Sabyasachi Sengupta Sachin Saxena diff --git a/MAINTAINERS b/MAINTAINERS index 9a0f416d2e..7ffa709b3b 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -703,6 +703,9 @@ F: doc/guides/nics/features/enic.ini Google Virtual Ethernet M: Junfeng Guo +M: Jeroen de Borst +M: Rushil Gupta +M: Jordan Kimbrough F: drivers/net/gve/ F: doc/guides/nics/gve.rst F: doc/guides/nics/features/gve.ini diff --git a/drivers/net/gve/base/gve.h b/drivers/net/gve/base/gve.h index 2dc4507acb..2b7cf7d99b 100644 --- a/drivers/net/gve/base/gve.h +++ b/drivers/net/gve/base/gve.h @@ -7,6 +7,7 @@ #define _GVE_H_ #include "gve_desc.h" +#include "gve_desc_dqo.h" #define GVE_VERSION "1.3.0" #define GVE_VERSION_PREFIX "GVE-" diff --git a/drivers/net/gve/base/gve_desc_dqo.h b/drivers/net/gve/base/gve_desc_dqo.h index ee1afdecb8..bb4a18d4d1 100644 --- a/drivers/net/gve/base/gve_desc_dqo.h +++ b/drivers/net/gve/base/gve_desc_dqo.h @@ -13,10 +13,6 @@ #define GVE_TX_MAX_HDR_SIZE_DQO 255 #define GVE_TX_MIN_TSO_MSS_DQO 88 -#ifndef __LITTLE_ENDIAN_BITFIELD -#error "Only little endian supported" -#endif - /* Basic TX descriptor (DTYPE 0x0C) */ struct gve_tx_pkt_desc_dqo { __le64 buf_addr; diff --git a/drivers/net/gve/base/gve_osdep.h b/drivers/net/gve/base/gve_osdep.h index 7cb73002f4..abf3d379ae 100644 --- a/drivers/net/gve/base/gve_osdep.h +++ b/drivers/net/gve/base/gve_osdep.h @@ -35,6 +35,10 @@ typedef rte_be16_t __be16; typedef rte_be32_t __be32; typedef rte_be64_t __be64; +typedef rte_le16_t __le16; +typedef rte_le32_t __le32; +typedef rte_le64_t __le64; + typedef rte_iova_t dma_addr_t; #define ETH_MIN_MTU RTE_ETHER_MIN_MTU diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c index 97781f0ed3..d03f2fba92 100644 --- a/drivers/net/gve/gve_ethdev.c +++ b/drivers/net/gve/gve_ethdev.c @@ -299,6 +299,7 @@ gve_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) dev_info->default_txconf = (struct rte_eth_txconf) { .tx_free_thresh = GVE_DEFAULT_TX_FREE_THRESH, + .tx_rs_thresh = GVE_DEFAULT_TX_RS_THRESH, .offloads = 0, }; @@ -360,6 +361,13 @@ static const struct eth_dev_ops gve_eth_dev_ops = { .mtu_set = gve_dev_mtu_set, }; +static void +gve_eth_dev_ops_override(struct eth_dev_ops *local_eth_dev_ops) +{ + /* override eth_dev ops for DQO */ + local_eth_dev_ops->tx_queue_setup = gve_tx_queue_setup_dqo; +} + static void gve_free_counter_array(struct gve_priv *priv) { @@ -595,6 +603,7 @@ gve_teardown_priv_resources(struct gve_priv *priv) static int gve_dev_init(struct rte_eth_dev *eth_dev) { + static struct 
eth_dev_ops gve_local_eth_dev_ops = gve_eth_dev_ops; struct gve_priv *priv = eth_dev->data->dev_private; int max_tx_queues, max_rx_queues; struct rte_pci_device *pci_dev; @@ -602,8 +611,6 @@ gve_dev_init(struct rte_eth_dev *eth_dev) rte_be32_t *db_bar; int err; - eth_dev->dev_ops = &gve_eth_dev_ops; - if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; @@ -642,9 +649,12 @@ gve_dev_init(struct rte_eth_dev *eth_dev) eth_dev->rx_pkt_burst = gve_rx_burst; eth_dev->tx_pkt_burst = gve_tx_burst; } else { - PMD_DRV_LOG(ERR, "DQO_RDA is not implemented and will be added in the future"); + /* override Tx/Rx setup/release eth_dev ops */ + gve_eth_dev_ops_override(&gve_local_eth_dev_ops); } + eth_dev->dev_ops = &gve_local_eth_dev_ops; + eth_dev->data->mac_addrs = &priv->dev_addr; return 0; diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h index 235e55899e..2dfcef6893 100644 --- a/drivers/net/gve/gve_ethdev.h +++ b/drivers/net/gve/gve_ethdev.h @@ -11,6 +11,9 @@ #include "base/gve.h" +/* TODO: this is a workaround to ensure that Tx complq is enough */ +#define DQO_TX_MULTIPLIER 4 + /* * Following macros are derived from linux/pci_regs.h, however, * we can't simply include that header here, as there is no such @@ -25,7 +28,8 @@ #define PCI_MSIX_FLAGS_QSIZE 0x07FF /* Table size */ #define GVE_DEFAULT_RX_FREE_THRESH 512 -#define GVE_DEFAULT_TX_FREE_THRESH 256 +#define GVE_DEFAULT_TX_FREE_THRESH 32 +#define GVE_DEFAULT_TX_RS_THRESH 32 #define GVE_TX_MAX_FREE_SZ 512 #define GVE_MIN_BUF_SIZE 1024 @@ -50,6 +54,13 @@ union gve_tx_desc { struct gve_tx_seg_desc seg; /* subsequent descs for a packet */ }; +/* Tx desc for DQO format */ +union gve_tx_desc_dqo { + struct gve_tx_pkt_desc_dqo pkt; + struct gve_tx_tso_context_desc_dqo tso_ctx; + struct gve_tx_general_context_desc_dqo general_ctx; +}; + /* Offload features */ union gve_tx_offload { uint64_t data; @@ -78,8 +89,10 @@ struct gve_tx_queue { uint32_t tx_tail; uint16_t nb_tx_desc; uint16_t nb_free; + uint16_t nb_used; uint32_t next_to_clean; uint16_t free_thresh; + uint16_t rs_thresh; /* Only valid for DQO_QPL queue format */ uint16_t sw_tail; @@ -102,6 +115,17 @@ struct gve_tx_queue { const struct rte_memzone *qres_mz; struct gve_queue_resources *qres; + /* newly added for DQO*/ + volatile union gve_tx_desc_dqo *tx_ring; + struct gve_tx_compl_desc *compl_ring; + const struct rte_memzone *compl_ring_mz; + uint64_t compl_ring_phys_addr; + uint32_t complq_tail; + uint16_t sw_size; + uint8_t cur_gen_bit; + uint32_t last_desc_cleaned; + void **txqs; + /* Only valid for DQO_RDA queue format */ struct gve_tx_queue *complq; @@ -308,4 +332,11 @@ gve_rx_burst(void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); uint16_t gve_tx_burst(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); +/* Below functions are used for DQO */ + +int +gve_tx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id, + uint16_t nb_desc, unsigned int socket_id, + const struct rte_eth_txconf *conf); + #endif /* _GVE_ETHDEV_H_ */ diff --git a/drivers/net/gve/gve_tx_dqo.c b/drivers/net/gve/gve_tx_dqo.c new file mode 100644 index 0000000000..4f8bad31bb --- /dev/null +++ b/drivers/net/gve/gve_tx_dqo.c @@ -0,0 +1,178 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2022 Intel Corporation + */ + +#include "gve_ethdev.h" +#include "base/gve_adminq.h" + +static int +check_tx_thresh_dqo(uint16_t nb_desc, uint16_t tx_rs_thresh, + uint16_t tx_free_thresh) +{ + if (tx_rs_thresh >= (nb_desc - 2)) { + PMD_DRV_LOG(ERR, "tx_rs_thresh (%u) must be less than 
the " + "number of TX descriptors (%u) minus 2", + tx_rs_thresh, nb_desc); + return -EINVAL; + } + if (tx_free_thresh >= (nb_desc - 3)) { + PMD_DRV_LOG(ERR, "tx_free_thresh (%u) must be less than the " + "number of TX descriptors (%u) minus 3.", + tx_free_thresh, nb_desc); + return -EINVAL; + } + if (tx_rs_thresh > tx_free_thresh) { + PMD_DRV_LOG(ERR, "tx_rs_thresh (%u) must be less than or " + "equal to tx_free_thresh (%u).", + tx_rs_thresh, tx_free_thresh); + return -EINVAL; + } + if ((nb_desc % tx_rs_thresh) != 0) { + PMD_DRV_LOG(ERR, "tx_rs_thresh (%u) must be a divisor of the " + "number of TX descriptors (%u).", + tx_rs_thresh, nb_desc); + return -EINVAL; + } + + return 0; +} + +static void +gve_reset_txq_dqo(struct gve_tx_queue *txq) +{ + struct rte_mbuf **sw_ring; + uint32_t size, i; + + if (txq == NULL) { + PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL"); + return; + } + + size = txq->nb_tx_desc * sizeof(union gve_tx_desc_dqo); + for (i = 0; i < size; i++) + ((volatile char *)txq->tx_ring)[i] = 0; + + size = txq->sw_size * sizeof(struct gve_tx_compl_desc); + for (i = 0; i < size; i++) + ((volatile char *)txq->compl_ring)[i] = 0; + + sw_ring = txq->sw_ring; + for (i = 0; i < txq->sw_size; i++) + sw_ring[i] = NULL; + + txq->tx_tail = 0; + txq->nb_used = 0; + + txq->last_desc_cleaned = 0; + txq->sw_tail = 0; + txq->nb_free = txq->nb_tx_desc - 1; + + txq->complq_tail = 0; + txq->cur_gen_bit = 1; +} + +int +gve_tx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id, + uint16_t nb_desc, unsigned int socket_id, + const struct rte_eth_txconf *conf) +{ + struct gve_priv *hw = dev->data->dev_private; + const struct rte_memzone *mz; + struct gve_tx_queue *txq; + uint16_t free_thresh; + uint16_t rs_thresh; + uint16_t sw_size; + int err = 0; + + if (nb_desc != hw->tx_desc_cnt) { + PMD_DRV_LOG(WARNING, "gve doesn't support nb_desc config, use hw nb_desc %u.", + hw->tx_desc_cnt); + } + nb_desc = hw->tx_desc_cnt; + + /* Allocate the TX queue data structure. */ + txq = rte_zmalloc_socket("gve txq", + sizeof(struct gve_tx_queue), + RTE_CACHE_LINE_SIZE, socket_id); + if (txq == NULL) { + PMD_DRV_LOG(ERR, "Failed to allocate memory for tx queue structure"); + return -ENOMEM; + } + + /* need to check free_thresh here */ + free_thresh = conf->tx_free_thresh ? + conf->tx_free_thresh : GVE_DEFAULT_TX_FREE_THRESH; + rs_thresh = conf->tx_rs_thresh ? + conf->tx_rs_thresh : GVE_DEFAULT_TX_RS_THRESH; + if (check_tx_thresh_dqo(nb_desc, rs_thresh, free_thresh)) + return -EINVAL; + + txq->nb_tx_desc = nb_desc; + txq->free_thresh = free_thresh; + txq->rs_thresh = rs_thresh; + txq->queue_id = queue_id; + txq->port_id = dev->data->port_id; + txq->ntfy_id = queue_id; + txq->hw = hw; + txq->ntfy_addr = &hw->db_bar2[rte_be_to_cpu_32(hw->irq_dbs[txq->ntfy_id].id)]; + + /* Allocate software ring */ + sw_size = nb_desc * DQO_TX_MULTIPLIER; + txq->sw_ring = rte_zmalloc_socket("gve tx sw ring", + sw_size * sizeof(struct rte_mbuf *), + RTE_CACHE_LINE_SIZE, socket_id); + if (txq->sw_ring == NULL) { + PMD_DRV_LOG(ERR, "Failed to allocate memory for SW TX ring"); + err = -ENOMEM; + goto err_txq; + } + txq->sw_size = sw_size; + + /* Allocate TX hardware ring descriptors. 
*/ + mz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_id, + nb_desc * sizeof(union gve_tx_desc_dqo), + PAGE_SIZE, socket_id); + if (mz == NULL) { + PMD_DRV_LOG(ERR, "Failed to reserve DMA memory for TX"); + err = -ENOMEM; + goto err_txq; + } + txq->tx_ring = (union gve_tx_desc_dqo *)mz->addr; + txq->tx_ring_phys_addr = mz->iova; + txq->mz = mz; + + /* Allocate TX completion ring descriptors. */ + mz = rte_eth_dma_zone_reserve(dev, "tx_compl_ring", queue_id, + sw_size * sizeof(struct gve_tx_compl_desc), + PAGE_SIZE, socket_id); + if (mz == NULL) { + PMD_DRV_LOG(ERR, "Failed to reserve DMA memory for TX completion queue"); + err = -ENOMEM; + goto err_txq; + } + txq->compl_ring = (struct gve_tx_compl_desc *)mz->addr; + txq->compl_ring_phys_addr = mz->iova; + txq->compl_ring_mz = mz; + txq->txqs = dev->data->tx_queues; + + mz = rte_eth_dma_zone_reserve(dev, "txq_res", queue_id, + sizeof(struct gve_queue_resources), + PAGE_SIZE, socket_id); + if (mz == NULL) { + PMD_DRV_LOG(ERR, "Failed to reserve DMA memory for TX resource"); + err = -ENOMEM; + goto err_txq; + } + txq->qres = (struct gve_queue_resources *)mz->addr; + txq->qres_mz = mz; + + gve_reset_txq_dqo(txq); + + dev->data->tx_queues[queue_id] = txq; + + return 0; + +err_txq: + rte_free(txq); + return err; +} diff --git a/drivers/net/gve/meson.build b/drivers/net/gve/meson.build index af0010c01c..2ddb0cbf9e 100644 --- a/drivers/net/gve/meson.build +++ b/drivers/net/gve/meson.build @@ -11,6 +11,7 @@ sources = files( 'base/gve_adminq.c', 'gve_rx.c', 'gve_tx.c', + 'gve_tx_dqo.c', 'gve_ethdev.c', ) includes += include_directories('base') From patchwork Mon Jan 30 06:26:35 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Junfeng Guo X-Patchwork-Id: 122653 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 0A9E5424BA; Mon, 30 Jan 2023 07:32:31 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 8A13140FDF; Mon, 30 Jan 2023 07:32:22 +0100 (CET) Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by mails.dpdk.org (Postfix) with ESMTP id EA420427F5 for ; Mon, 30 Jan 2023 07:32:20 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1675060341; x=1706596341; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=bmbOnResrsGUARKiW6/0WBWMPkVIAQTKhSz5uoXeLAs=; b=EcuSFULRWSrGrQFgaNg0FYQr4oRbrzGQ4jXxDY2u6hZY6UqbA7eiJ2aW 2lxO5hDsQ8H4o2memfWda0Ky7ZFWZ1SIMZqnNsSFO42iptOgOpEpcAspX VdlrMLKXyy7EEzw6Sfe6omgL6w6oK2QGyytQxYLU8FQufR/iG1eKxotYB DD7b6Jr6sOs4qbz+IJJfiZCkUQHwFT1P7DcSzBDi8VnV0bVlo5ZLE9gE/ fH6JVKdVrYeU/ROthZWkg1hgIS8XdReseNHJS2LFx0hvRW8WkkUsm9mPc 3jnFC0J/KjD8+rdJYRyQ+TI9UuPSrw9VwacoXy6SZBf3n/mC3inNtLJwT w==; X-IronPort-AV: E=McAfee;i="6500,9779,10605"; a="392035661" X-IronPort-AV: E=Sophos;i="5.97,257,1669104000"; d="scan'208";a="392035661" Received: from orsmga004.jf.intel.com ([10.7.209.38]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2023 22:32:20 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10605"; a="787906433" X-IronPort-AV: E=Sophos;i="5.97,257,1669104000"; d="scan'208";a="787906433" Received: from dpdk-jf-ntb-one.sh.intel.com ([10.67.111.104]) by 
orsmga004.jf.intel.com with ESMTP; 29 Jan 2023 22:32:16 -0800 From: Junfeng Guo To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com, beilei.xing@intel.com Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com, Junfeng Guo , Rushil Gupta , Jordan Kimbrough , Jeroen de Borst Subject: [RFC v2 2/9] net/gve: add Rx queue setup for DQO Date: Mon, 30 Jan 2023 14:26:35 +0800 Message-Id: <20230130062642.3337239-3-junfeng.guo@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230130062642.3337239-1-junfeng.guo@intel.com> References: <20230118025347.1567078-1-junfeng.guo@intel.com> <20230130062642.3337239-1-junfeng.guo@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add support for rx_queue_setup_dqo ops. Signed-off-by: Junfeng Guo Signed-off-by: Rushil Gupta Signed-off-by: Jordan Kimbrough Signed-off-by: Jeroen de Borst --- drivers/net/gve/gve_ethdev.c | 1 + drivers/net/gve/gve_ethdev.h | 14 ++++ drivers/net/gve/gve_rx_dqo.c | 148 +++++++++++++++++++++++++++++++++++ drivers/net/gve/meson.build | 1 + 4 files changed, 164 insertions(+) create mode 100644 drivers/net/gve/gve_rx_dqo.c diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c index d03f2fba92..26182b0422 100644 --- a/drivers/net/gve/gve_ethdev.c +++ b/drivers/net/gve/gve_ethdev.c @@ -366,6 +366,7 @@ gve_eth_dev_ops_override(struct eth_dev_ops *local_eth_dev_ops) { /* override eth_dev ops for DQO */ local_eth_dev_ops->tx_queue_setup = gve_tx_queue_setup_dqo; + local_eth_dev_ops->rx_queue_setup = gve_rx_queue_setup_dqo; } static void diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h index 2dfcef6893..0adfc90554 100644 --- a/drivers/net/gve/gve_ethdev.h +++ b/drivers/net/gve/gve_ethdev.h @@ -145,6 +145,7 @@ struct gve_rx_queue { uint16_t nb_rx_desc; uint16_t expected_seqno; /* the next expected seqno */ uint16_t free_thresh; + uint16_t nb_rx_hold; uint32_t next_avail; uint32_t nb_avail; @@ -163,6 +164,14 @@ struct gve_rx_queue { uint16_t ntfy_id; uint16_t rx_buf_len; + /* newly added for DQO*/ + volatile struct gve_rx_desc_dqo *rx_ring; + struct gve_rx_compl_desc_dqo *compl_ring; + const struct rte_memzone *compl_ring_mz; + uint64_t compl_ring_phys_addr; + uint8_t cur_gen_bit; + uint16_t bufq_tail; + /* Only valid for DQO_RDA queue format */ struct gve_rx_queue *bufq; @@ -334,6 +343,11 @@ gve_tx_burst(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); /* Below functions are used for DQO */ +int +gve_rx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id, + uint16_t nb_desc, unsigned int socket_id, + const struct rte_eth_rxconf *conf, + struct rte_mempool *pool); int gve_tx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id, uint16_t nb_desc, unsigned int socket_id, diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c new file mode 100644 index 0000000000..e8a6d575fc --- /dev/null +++ b/drivers/net/gve/gve_rx_dqo.c @@ -0,0 +1,148 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2022 Intel Corporation + */ + +#include "gve_ethdev.h" +#include "base/gve_adminq.h" + +static void +gve_reset_rxq_dqo(struct gve_rx_queue *rxq) +{ + struct rte_mbuf **sw_ring; + uint32_t size, i; + + if (rxq == NULL) { + PMD_DRV_LOG(ERR, "pointer to rxq is NULL"); + return; + } + + size = rxq->nb_rx_desc * sizeof(struct gve_rx_desc_dqo); + for (i = 0; i 
< size; i++) + ((volatile char *)rxq->rx_ring)[i] = 0; + + size = rxq->nb_rx_desc * sizeof(struct gve_rx_compl_desc_dqo); + for (i = 0; i < size; i++) + ((volatile char *)rxq->compl_ring)[i] = 0; + + sw_ring = rxq->sw_ring; + for (i = 0; i < rxq->nb_rx_desc; i++) + sw_ring[i] = NULL; + + rxq->bufq_tail = 0; + rxq->next_avail = 0; + rxq->nb_rx_hold = rxq->nb_rx_desc - 1; + + rxq->rx_tail = 0; + rxq->cur_gen_bit = 1; +} + +int +gve_rx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id, + uint16_t nb_desc, unsigned int socket_id, + const struct rte_eth_rxconf *conf, + struct rte_mempool *pool) +{ + struct gve_priv *hw = dev->data->dev_private; + const struct rte_memzone *mz; + struct gve_rx_queue *rxq; + uint16_t free_thresh; + int err = 0; + + if (nb_desc != hw->rx_desc_cnt) { + PMD_DRV_LOG(WARNING, "gve doesn't support nb_desc config, use hw nb_desc %u.", + hw->rx_desc_cnt); + } + nb_desc = hw->rx_desc_cnt; + + /* Allocate the RX queue data structure. */ + rxq = rte_zmalloc_socket("gve rxq", + sizeof(struct gve_rx_queue), + RTE_CACHE_LINE_SIZE, + socket_id); + if (rxq == NULL) { + PMD_DRV_LOG(ERR, "Failed to allocate memory for rx queue structure"); + return -ENOMEM; + } + + /* check free_thresh here */ + free_thresh = conf->rx_free_thresh ? + conf->rx_free_thresh : GVE_DEFAULT_RX_FREE_THRESH; + if (free_thresh >= nb_desc) { + PMD_DRV_LOG(ERR, "rx_free_thresh (%u) must be less than nb_desc (%u).", + free_thresh, rxq->nb_rx_desc); + err = -EINVAL; + goto err_rxq; + } + + rxq->nb_rx_desc = nb_desc; + rxq->free_thresh = free_thresh; + rxq->queue_id = queue_id; + rxq->port_id = dev->data->port_id; + rxq->ntfy_id = hw->num_ntfy_blks / 2 + queue_id; + + rxq->mpool = pool; + rxq->hw = hw; + rxq->ntfy_addr = &hw->db_bar2[rte_be_to_cpu_32(hw->irq_dbs[rxq->ntfy_id].id)]; + + rxq->rx_buf_len = + rte_pktmbuf_data_room_size(rxq->mpool) - RTE_PKTMBUF_HEADROOM; + + /* Allocate software ring */ + rxq->sw_ring = rte_zmalloc_socket("gve rx sw ring", + nb_desc * sizeof(struct rte_mbuf *), + RTE_CACHE_LINE_SIZE, socket_id); + if (rxq->sw_ring == NULL) { + PMD_DRV_LOG(ERR, "Failed to allocate memory for SW RX ring"); + err = -ENOMEM; + goto err_rxq; + } + + /* Allocate RX buffer queue */ + mz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_id, + nb_desc * sizeof(struct gve_rx_desc_dqo), + PAGE_SIZE, socket_id); + if (mz == NULL) { + PMD_DRV_LOG(ERR, "Failed to reserve DMA memory for RX buffer queue"); + err = -ENOMEM; + goto err_rxq; + } + rxq->rx_ring = (struct gve_rx_desc_dqo *)mz->addr; + rxq->rx_ring_phys_addr = mz->iova; + rxq->mz = mz; + + /* Allocate RX completion queue */ + mz = rte_eth_dma_zone_reserve(dev, "compl_ring", queue_id, + nb_desc * sizeof(struct gve_rx_compl_desc_dqo), + PAGE_SIZE, socket_id); + if (mz == NULL) { + PMD_DRV_LOG(ERR, "Failed to reserve DMA memory for RX completion queue"); + err = -ENOMEM; + goto err_rxq; + } + /* Zero all the descriptors in the ring */ + memset(mz->addr, 0, nb_desc * sizeof(struct gve_rx_compl_desc_dqo)); + rxq->compl_ring = (struct gve_rx_compl_desc_dqo *)mz->addr; + rxq->compl_ring_phys_addr = mz->iova; + rxq->compl_ring_mz = mz; + + mz = rte_eth_dma_zone_reserve(dev, "rxq_res", queue_id, + sizeof(struct gve_queue_resources), + PAGE_SIZE, socket_id); + if (mz == NULL) { + PMD_DRV_LOG(ERR, "Failed to reserve DMA memory for RX resource"); + err = -ENOMEM; + goto err_rxq; + } + rxq->qres = (struct gve_queue_resources *)mz->addr; + rxq->qres_mz = mz; + + gve_reset_rxq_dqo(rxq); + + dev->data->rx_queues[queue_id] = rxq; + + return 0; + +err_rxq: + 
rte_free(rxq); + return err; +} diff --git a/drivers/net/gve/meson.build b/drivers/net/gve/meson.build index 2ddb0cbf9e..c9d87903f9 100644 --- a/drivers/net/gve/meson.build +++ b/drivers/net/gve/meson.build @@ -11,6 +11,7 @@ sources = files( 'base/gve_adminq.c', 'gve_rx.c', 'gve_tx.c', + 'gve_rx_dqo.c', 'gve_tx_dqo.c', 'gve_ethdev.c', ) From patchwork Mon Jan 30 06:26:36 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Junfeng Guo X-Patchwork-Id: 122654 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 804C1424BA; Mon, 30 Jan 2023 07:32:36 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id C6D9942BC9; Mon, 30 Jan 2023 07:32:26 +0100 (CET) Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by mails.dpdk.org (Postfix) with ESMTP id 4A9E742B8E for ; Mon, 30 Jan 2023 07:32:24 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1675060344; x=1706596344; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=P+tfqKUjECT0v0kb7XTy5YLGMn2+ySTWEpDhUaQ/1oI=; b=Vuf17bNkQlMr9xYgzXAHAfEiW3guWRUI2sTTkKUWAjklpz2WtH2/yeV4 9ykCTySumdvr4KuFAs3yyrFw+proHuZ1/hgssJ7SVHB8EhjoR0N+dPfrm k/vZvOA59H9JWxWA5a70rvy5IgGjXRIeO2JKEUtB4+dwF+Lcx71zH7PLM jNagkgedaSBglyrY2bDW64auWcv2+l/37pPOZOUReSl1KhOgXLSBRg1ga NvYzzQSl0/AOYHe9GolMKVQjj6xCm5m5TdlRCLEZmwC9+6yi6782ZtcE3 dOLZCV9/bMep3k3+9hKWYT93PK/2xPkVX7YYGG0RAVQ+CciTMip4quoMt Q==; X-IronPort-AV: E=McAfee;i="6500,9779,10605"; a="392035676" X-IronPort-AV: E=Sophos;i="5.97,257,1669104000"; d="scan'208";a="392035676" Received: from orsmga004.jf.intel.com ([10.7.209.38]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2023 22:32:23 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10605"; a="787906443" X-IronPort-AV: E=Sophos;i="5.97,257,1669104000"; d="scan'208";a="787906443" Received: from dpdk-jf-ntb-one.sh.intel.com ([10.67.111.104]) by orsmga004.jf.intel.com with ESMTP; 29 Jan 2023 22:32:20 -0800 From: Junfeng Guo To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com, beilei.xing@intel.com Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com, Junfeng Guo , Rushil Gupta , Jordan Kimbrough , Jeroen de Borst Subject: [RFC v2 3/9] net/gve: support device start and close for DQO Date: Mon, 30 Jan 2023 14:26:36 +0800 Message-Id: <20230130062642.3337239-4-junfeng.guo@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230130062642.3337239-1-junfeng.guo@intel.com> References: <20230118025347.1567078-1-junfeng.guo@intel.com> <20230130062642.3337239-1-junfeng.guo@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add device start and close support for DQO. 
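For context, the sequence below is a minimal, hypothetical application-side bring-up that exercises the new start path: queue setup followed by rte_eth_dev_start(), which refills the DQO Rx buffer ring before enabling the queues. The port id, queue counts, descriptor counts and the mempool are assumptions for illustration, not part of the patch.

#include <rte_ethdev.h>
#include <rte_lcore.h>

/* Hypothetical helper: configure and start one gve port with a single
 * Rx/Tx queue pair, driving the DQO setup and start paths added in
 * this series. */
static int
gve_port_bringup_sketch(uint16_t port_id, struct rte_mempool *mb_pool)
{
	struct rte_eth_conf port_conf = {0};
	int ret;

	ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);
	if (ret != 0)
		return ret;

	/* nb_desc is capped to the device-reported ring size inside the PMD. */
	ret = rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(),
				     NULL, mb_pool);
	if (ret != 0)
		return ret;

	ret = rte_eth_tx_queue_setup(port_id, 0, 512, rte_socket_id(), NULL);
	if (ret != 0)
		return ret;

	/* gve_dev_start() creates the queues via the admin queue and, for
	 * DQO, fills the Rx buffer ring through gve_refill_dqo(). */
	return rte_eth_dev_start(port_id);
}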
Signed-off-by: Junfeng Guo Signed-off-by: Rushil Gupta Signed-off-by: Jordan Kimbrough Signed-off-by: Jeroen de Borst --- drivers/net/gve/base/gve_adminq.c | 10 +++---- drivers/net/gve/gve_ethdev.c | 43 ++++++++++++++++++++++++++++++- 2 files changed, 47 insertions(+), 6 deletions(-) diff --git a/drivers/net/gve/base/gve_adminq.c b/drivers/net/gve/base/gve_adminq.c index e745b709b2..e963f910a0 100644 --- a/drivers/net/gve/base/gve_adminq.c +++ b/drivers/net/gve/base/gve_adminq.c @@ -497,11 +497,11 @@ static int gve_adminq_create_tx_queue(struct gve_priv *priv, u32 queue_index) cmd.create_tx_queue.queue_page_list_id = cpu_to_be32(qpl_id); } else { cmd.create_tx_queue.tx_ring_size = - cpu_to_be16(txq->nb_tx_desc); + cpu_to_be16(priv->tx_desc_cnt); cmd.create_tx_queue.tx_comp_ring_addr = - cpu_to_be64(txq->complq->tx_ring_phys_addr); + cpu_to_be64(txq->compl_ring_phys_addr); cmd.create_tx_queue.tx_comp_ring_size = - cpu_to_be16(priv->tx_compq_size); + cpu_to_be16(priv->tx_compq_size * DQO_TX_MULTIPLIER); } return gve_adminq_issue_cmd(priv, &cmd); @@ -549,9 +549,9 @@ static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index) cmd.create_rx_queue.rx_ring_size = cpu_to_be16(priv->rx_desc_cnt); cmd.create_rx_queue.rx_desc_ring_addr = - cpu_to_be64(rxq->rx_ring_phys_addr); + cpu_to_be64(rxq->compl_ring_phys_addr); cmd.create_rx_queue.rx_data_ring_addr = - cpu_to_be64(rxq->bufq->rx_ring_phys_addr); + cpu_to_be64(rxq->rx_ring_phys_addr); cmd.create_rx_queue.packet_buffer_size = cpu_to_be16(rxq->rx_buf_len); cmd.create_rx_queue.rx_buff_ring_size = diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c index 26182b0422..3543378978 100644 --- a/drivers/net/gve/gve_ethdev.c +++ b/drivers/net/gve/gve_ethdev.c @@ -78,6 +78,9 @@ gve_free_qpls(struct gve_priv *priv) uint16_t nb_rxqs = priv->max_nb_rxq; uint32_t i; + if (priv->queue_format != GVE_GQI_QPL_FORMAT) + return; + for (i = 0; i < nb_txqs + nb_rxqs; i++) { if (priv->qpl[i].mz != NULL) rte_memzone_free(priv->qpl[i].mz); @@ -138,6 +141,41 @@ gve_refill_pages(struct gve_rx_queue *rxq) return 0; } +static int +gve_refill_dqo(struct gve_rx_queue *rxq) +{ + struct rte_mbuf *nmb; + uint16_t i; + int diag; + + diag = rte_pktmbuf_alloc_bulk(rxq->mpool, &rxq->sw_ring[0], rxq->nb_rx_desc); + if (diag < 0) { + for (i = 0; i < rxq->nb_rx_desc - 1; i++) { + nmb = rte_pktmbuf_alloc(rxq->mpool); + if (!nmb) + break; + rxq->sw_ring[i] = nmb; + } + if (i < rxq->nb_rx_desc - 1) + return -ENOMEM; + } + + for (i = 0; i < rxq->nb_rx_desc; i++) { + if (i == rxq->nb_rx_desc - 1) + break; + nmb = rxq->sw_ring[i]; + rxq->rx_ring[i].buf_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb)); + rxq->rx_ring[i].buf_id = rte_cpu_to_le_16(i); + } + + rxq->nb_rx_hold = 0; + rxq->bufq_tail = rxq->nb_rx_desc - 1; + + rte_write32(rxq->bufq_tail, rxq->qrx_tail); + + return 0; +} + static int gve_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete) { @@ -206,7 +244,10 @@ gve_dev_start(struct rte_eth_dev *dev) rte_write32(rte_cpu_to_be_32(GVE_IRQ_MASK), rxq->ntfy_addr); - err = gve_refill_pages(rxq); + if (gve_is_gqi(priv)) + err = gve_refill_pages(rxq); + else + err = gve_refill_dqo(rxq); if (err) { PMD_DRV_LOG(ERR, "Failed to refill for RX"); goto err_rx; From patchwork Mon Jan 30 06:26:37 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Junfeng Guo X-Patchwork-Id: 122655 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: 
patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 44370424BA; Mon, 30 Jan 2023 07:32:42 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 239EA42B8E; Mon, 30 Jan 2023 07:32:29 +0100 (CET) Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by mails.dpdk.org (Postfix) with ESMTP id F17C142D29 for ; Mon, 30 Jan 2023 07:32:27 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1675060348; x=1706596348; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=s4csOlqgcxrtmJhT5Ew7Sx7toMhq7kjfzK+NzcvnKRc=; b=WEIgN7PnEweg3HWGlPzSxQUzkbNh69sdTZ0otMzMeR5yyeg7vAVwAosl DSAvGHANH4C1StOdBLj6sA0clwZgkWuUOdAEMr8Z9oqGhESw5FAgeJS7X 3dZIA++nG9G6hplUjI4IZRaLGYklQGwFzc3jGSi1ZVPd38Zb0jx/8B9+P 91d09OxWl9YFm4K3hQgmrR7V+8mTvsKgN5FnnL0Qv7AAUdYLYQ7QNeJ9M yRrhxlv0tr5pjEoG/cZTA6ynE73o/TdFPxlNV/20QqBgq5MDp6y4se2xY pB0E6TpiW8w7PmwkJr3V83oljdgT1QHFyJwJJfitXRtmISo3z8K39B3bT Q==; X-IronPort-AV: E=McAfee;i="6500,9779,10605"; a="392035686" X-IronPort-AV: E=Sophos;i="5.97,257,1669104000"; d="scan'208";a="392035686" Received: from orsmga004.jf.intel.com ([10.7.209.38]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2023 22:32:27 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10605"; a="787906454" X-IronPort-AV: E=Sophos;i="5.97,257,1669104000"; d="scan'208";a="787906454" Received: from dpdk-jf-ntb-one.sh.intel.com ([10.67.111.104]) by orsmga004.jf.intel.com with ESMTP; 29 Jan 2023 22:32:24 -0800 From: Junfeng Guo To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com, beilei.xing@intel.com Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com, Junfeng Guo , Rushil Gupta , Jordan Kimbrough , Jeroen de Borst Subject: [RFC v2 4/9] net/gve: support queue release and stop for DQO Date: Mon, 30 Jan 2023 14:26:37 +0800 Message-Id: <20230130062642.3337239-5-junfeng.guo@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230130062642.3337239-1-junfeng.guo@intel.com> References: <20230118025347.1567078-1-junfeng.guo@intel.com> <20230130062642.3337239-1-junfeng.guo@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add support for queue operations: - gve_tx_queue_release_dqo - gve_rx_queue_release_dqo - gve_stop_tx_queues_dqo - gve_stop_rx_queues_dqo Signed-off-by: Junfeng Guo Signed-off-by: Rushil Gupta Signed-off-by: Jordan Kimbrough Signed-off-by: Jeroen de Borst --- drivers/net/gve/gve_ethdev.c | 18 +++++++++--- drivers/net/gve/gve_ethdev.h | 12 ++++++++ drivers/net/gve/gve_rx.c | 3 ++ drivers/net/gve/gve_rx_dqo.c | 57 ++++++++++++++++++++++++++++++++++++ drivers/net/gve/gve_tx.c | 3 ++ drivers/net/gve/gve_tx_dqo.c | 55 ++++++++++++++++++++++++++++++++++ 6 files changed, 144 insertions(+), 4 deletions(-) diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c index 3543378978..7c4be3a1cb 100644 --- a/drivers/net/gve/gve_ethdev.c +++ b/drivers/net/gve/gve_ethdev.c @@ -292,11 +292,19 @@ gve_dev_close(struct rte_eth_dev *dev) PMD_DRV_LOG(ERR, "Failed to stop dev."); } - for (i = 0; i < dev->data->nb_tx_queues; i++) - gve_tx_queue_release(dev, i); + if 
(gve_is_gqi(priv)) { + for (i = 0; i < dev->data->nb_tx_queues; i++) + gve_tx_queue_release(dev, i); + + for (i = 0; i < dev->data->nb_rx_queues; i++) + gve_rx_queue_release(dev, i); + } else { + for (i = 0; i < dev->data->nb_tx_queues; i++) + gve_tx_queue_release_dqo(dev, i); - for (i = 0; i < dev->data->nb_rx_queues; i++) - gve_rx_queue_release(dev, i); + for (i = 0; i < dev->data->nb_rx_queues; i++) + gve_rx_queue_release_dqo(dev, i); + } gve_free_qpls(priv); rte_free(priv->adminq); @@ -408,6 +416,8 @@ gve_eth_dev_ops_override(struct eth_dev_ops *local_eth_dev_ops) /* override eth_dev ops for DQO */ local_eth_dev_ops->tx_queue_setup = gve_tx_queue_setup_dqo; local_eth_dev_ops->rx_queue_setup = gve_rx_queue_setup_dqo; + local_eth_dev_ops->tx_queue_release = gve_tx_queue_release_dqo; + local_eth_dev_ops->rx_queue_release = gve_rx_queue_release_dqo; } static void diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h index 0adfc90554..93314f2db3 100644 --- a/drivers/net/gve/gve_ethdev.h +++ b/drivers/net/gve/gve_ethdev.h @@ -353,4 +353,16 @@ gve_tx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id, uint16_t nb_desc, unsigned int socket_id, const struct rte_eth_txconf *conf); +void +gve_tx_queue_release_dqo(struct rte_eth_dev *dev, uint16_t qid); + +void +gve_rx_queue_release_dqo(struct rte_eth_dev *dev, uint16_t qid); + +void +gve_stop_tx_queues_dqo(struct rte_eth_dev *dev); + +void +gve_stop_rx_queues_dqo(struct rte_eth_dev *dev); + #endif /* _GVE_ETHDEV_H_ */ diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c index 518c9d109c..9ba975c9b4 100644 --- a/drivers/net/gve/gve_rx.c +++ b/drivers/net/gve/gve_rx.c @@ -343,6 +343,9 @@ gve_stop_rx_queues(struct rte_eth_dev *dev) uint16_t i; int err; + if (!gve_is_gqi(hw)) + return gve_stop_rx_queues_dqo(dev); + err = gve_adminq_destroy_rx_queues(hw, dev->data->nb_rx_queues); if (err != 0) PMD_DRV_LOG(WARNING, "failed to destroy rxqs"); diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c index e8a6d575fc..aca6f8ea2d 100644 --- a/drivers/net/gve/gve_rx_dqo.c +++ b/drivers/net/gve/gve_rx_dqo.c @@ -5,6 +5,38 @@ #include "gve_ethdev.h" #include "base/gve_adminq.h" +static inline void +gve_release_rxq_mbufs_dqo(struct gve_rx_queue *rxq) +{ + uint16_t i; + + for (i = 0; i < rxq->nb_rx_desc; i++) { + if (rxq->sw_ring[i]) { + rte_pktmbuf_free_seg(rxq->sw_ring[i]); + rxq->sw_ring[i] = NULL; + } + } + + rxq->nb_avail = rxq->nb_rx_desc; +} + +void +gve_rx_queue_release_dqo(struct rte_eth_dev *dev, uint16_t qid) +{ + struct gve_rx_queue *q = dev->data->rx_queues[qid]; + + if (q == NULL) + return; + + gve_release_rxq_mbufs_dqo(q); + rte_free(q->sw_ring); + rte_memzone_free(q->compl_ring_mz); + rte_memzone_free(q->mz); + rte_memzone_free(q->qres_mz); + q->qres = NULL; + rte_free(q); +} + static void gve_reset_rxq_dqo(struct gve_rx_queue *rxq) { @@ -54,6 +86,12 @@ gve_rx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id, } nb_desc = hw->rx_desc_cnt; + /* Free memory if needed */ + if (dev->data->rx_queues[queue_id]) { + gve_rx_queue_release_dqo(dev, queue_id); + dev->data->rx_queues[queue_id] = NULL; + } + /* Allocate the RX queue data structure. 
*/ rxq = rte_zmalloc_socket("gve rxq", sizeof(struct gve_rx_queue), @@ -146,3 +184,22 @@ gve_rx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id, rte_free(rxq); return err; } + +void +gve_stop_rx_queues_dqo(struct rte_eth_dev *dev) +{ + struct gve_priv *hw = dev->data->dev_private; + struct gve_rx_queue *rxq; + uint16_t i; + int err; + + err = gve_adminq_destroy_rx_queues(hw, dev->data->nb_rx_queues); + if (err != 0) + PMD_DRV_LOG(WARNING, "failed to destroy rxqs"); + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + rxq = dev->data->rx_queues[i]; + gve_release_rxq_mbufs_dqo(rxq); + gve_reset_rxq_dqo(rxq); + } +} diff --git a/drivers/net/gve/gve_tx.c b/drivers/net/gve/gve_tx.c index bf4e8fea2c..0eb42b1216 100644 --- a/drivers/net/gve/gve_tx.c +++ b/drivers/net/gve/gve_tx.c @@ -658,6 +658,9 @@ gve_stop_tx_queues(struct rte_eth_dev *dev) uint16_t i; int err; + if (!gve_is_gqi(hw)) + return gve_stop_tx_queues_dqo(dev); + err = gve_adminq_destroy_tx_queues(hw, dev->data->nb_tx_queues); if (err != 0) PMD_DRV_LOG(WARNING, "failed to destroy txqs"); diff --git a/drivers/net/gve/gve_tx_dqo.c b/drivers/net/gve/gve_tx_dqo.c index 4f8bad31bb..e2e4153f27 100644 --- a/drivers/net/gve/gve_tx_dqo.c +++ b/drivers/net/gve/gve_tx_dqo.c @@ -5,6 +5,36 @@ #include "gve_ethdev.h" #include "base/gve_adminq.h" +static inline void +gve_release_txq_mbufs_dqo(struct gve_tx_queue *txq) +{ + uint16_t i; + + for (i = 0; i < txq->sw_size; i++) { + if (txq->sw_ring[i]) { + rte_pktmbuf_free_seg(txq->sw_ring[i]); + txq->sw_ring[i] = NULL; + } + } +} + +void +gve_tx_queue_release_dqo(struct rte_eth_dev *dev, uint16_t qid) +{ + struct gve_tx_queue *q = dev->data->tx_queues[qid]; + + if (q == NULL) + return; + + gve_release_txq_mbufs_dqo(q); + rte_free(q->sw_ring); + rte_memzone_free(q->mz); + rte_memzone_free(q->compl_ring_mz); + rte_memzone_free(q->qres_mz); + q->qres = NULL; + rte_free(q); +} + static int check_tx_thresh_dqo(uint16_t nb_desc, uint16_t tx_rs_thresh, uint16_t tx_free_thresh) @@ -90,6 +120,12 @@ gve_tx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id, } nb_desc = hw->tx_desc_cnt; + /* Free memory if needed. */ + if (dev->data->tx_queues[queue_id]) { + gve_tx_queue_release_dqo(dev, queue_id); + dev->data->tx_queues[queue_id] = NULL; + } + /* Allocate the TX queue data structure. 
*/ txq = rte_zmalloc_socket("gve txq", sizeof(struct gve_tx_queue), @@ -176,3 +212,22 @@ gve_tx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id, rte_free(txq); return err; } + +void +gve_stop_tx_queues_dqo(struct rte_eth_dev *dev) +{ + struct gve_priv *hw = dev->data->dev_private; + struct gve_tx_queue *txq; + uint16_t i; + int err; + + err = gve_adminq_destroy_tx_queues(hw, dev->data->nb_tx_queues); + if (err != 0) + PMD_DRV_LOG(WARNING, "failed to destroy txqs"); + + for (i = 0; i < dev->data->nb_tx_queues; i++) { + txq = dev->data->tx_queues[i]; + gve_release_txq_mbufs_dqo(txq); + gve_reset_txq_dqo(txq); + } +} From patchwork Mon Jan 30 06:26:38 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Junfeng Guo X-Patchwork-Id: 122656 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D1AD7424BA; Mon, 30 Jan 2023 07:32:47 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 6A21E427E9; Mon, 30 Jan 2023 07:32:33 +0100 (CET) Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by mails.dpdk.org (Postfix) with ESMTP id D90CC410FB for ; Mon, 30 Jan 2023 07:32:31 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1675060352; x=1706596352; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=EXYWc2RPYgYWSMwm0jcddbYfXxkRmCR3DDtmVgl770Q=; b=kndWCNET6Nkch2fiy+wHqazeChAjgUyFoEP/CYT33XJIbjgGxOYP3Mg4 KYd3dcmAdjtzRO0waEk+yqw7YmvIB5U8dF3hhQpojRN2ZJvph3Xc/4Qpf 5mt23ZCQ5be1BFI7aYlehrbSu6fvz6guR6jBEtx8vxgcGNCvM81yGSVjk +sfp/4G1/hEiTQngngDRXbibAQWmoBdNXz9cHv0ZsgGh2Gr15UFJozlDn W+HnyTEiSLFMu9o3UTesML4UfnG1SzPWO2hFOGy7aJNLRM700BTLXNwAR hA9Q6G7vO0VkdwkUuLQ0NxNTcFp8ERr8WXN550d+NWOy32DGfVFvB/vrR g==; X-IronPort-AV: E=McAfee;i="6500,9779,10605"; a="392035698" X-IronPort-AV: E=Sophos;i="5.97,257,1669104000"; d="scan'208";a="392035698" Received: from orsmga004.jf.intel.com ([10.7.209.38]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2023 22:32:31 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10605"; a="787906469" X-IronPort-AV: E=Sophos;i="5.97,257,1669104000"; d="scan'208";a="787906469" Received: from dpdk-jf-ntb-one.sh.intel.com ([10.67.111.104]) by orsmga004.jf.intel.com with ESMTP; 29 Jan 2023 22:32:27 -0800 From: Junfeng Guo To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com, beilei.xing@intel.com Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com, Junfeng Guo , Rushil Gupta , Jordan Kimbrough , Jeroen de Borst Subject: [RFC v2 5/9] net/gve: support basic Tx data path for DQO Date: Mon, 30 Jan 2023 14:26:38 +0800 Message-Id: <20230130062642.3337239-6-junfeng.guo@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230130062642.3337239-1-junfeng.guo@intel.com> References: <20230118025347.1567078-1-junfeng.guo@intel.com> <20230130062642.3337239-1-junfeng.guo@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add basic Tx data path support for DQO. 
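One detail worth calling out: DQO completions carry a completion tag rather than a descriptor count, so the clean routine has to derive how many descriptors were retired, including the ring wrap-around case. The helper below is an illustrative sketch of that arithmetic; it mirrors the logic of gve_tx_clean_dqo() in the diff that follows and is not itself part of the patch.

/* Sketch: number of descriptors retired when a GVE_COMPL_TYPE_DQO_DESC
 * completion reports tag 'compl_tag' on a ring of 'nb_desc' entries,
 * given the previously cleaned position 'last_cleaned'. */
static inline uint16_t
gve_dqo_descs_cleaned_sketch(uint16_t last_cleaned, uint16_t compl_tag,
			     uint16_t nb_desc)
{
	/* The tag may have wrapped past the end of the ring. */
	if (last_cleaned > compl_tag)
		return (uint16_t)(nb_desc - last_cleaned + compl_tag);
	return (uint16_t)(compl_tag - last_cleaned);
}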
Signed-off-by: Junfeng Guo Signed-off-by: Rushil Gupta Signed-off-by: Jordan Kimbrough Signed-off-by: Jeroen de Borst --- drivers/net/gve/gve_ethdev.c | 1 + drivers/net/gve/gve_ethdev.h | 4 + drivers/net/gve/gve_tx_dqo.c | 141 +++++++++++++++++++++++++++++++++++ 3 files changed, 146 insertions(+) diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c index 7c4be3a1cb..512a038968 100644 --- a/drivers/net/gve/gve_ethdev.c +++ b/drivers/net/gve/gve_ethdev.c @@ -703,6 +703,7 @@ gve_dev_init(struct rte_eth_dev *eth_dev) } else { /* override Tx/Rx setup/release eth_dev ops */ gve_eth_dev_ops_override(&gve_local_eth_dev_ops); + eth_dev->tx_pkt_burst = gve_tx_burst_dqo; } eth_dev->dev_ops = &gve_local_eth_dev_ops; diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h index 93314f2db3..ba657dd6c1 100644 --- a/drivers/net/gve/gve_ethdev.h +++ b/drivers/net/gve/gve_ethdev.h @@ -125,6 +125,7 @@ struct gve_tx_queue { uint8_t cur_gen_bit; uint32_t last_desc_cleaned; void **txqs; + uint16_t re_cnt; /* Only valid for DQO_RDA queue format */ struct gve_tx_queue *complq; @@ -365,4 +366,7 @@ gve_stop_tx_queues_dqo(struct rte_eth_dev *dev); void gve_stop_rx_queues_dqo(struct rte_eth_dev *dev); +uint16_t +gve_tx_burst_dqo(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); + #endif /* _GVE_ETHDEV_H_ */ diff --git a/drivers/net/gve/gve_tx_dqo.c b/drivers/net/gve/gve_tx_dqo.c index e2e4153f27..3583c82246 100644 --- a/drivers/net/gve/gve_tx_dqo.c +++ b/drivers/net/gve/gve_tx_dqo.c @@ -5,6 +5,147 @@ #include "gve_ethdev.h" #include "base/gve_adminq.h" +static inline void +gve_tx_clean_dqo(struct gve_tx_queue *txq) +{ + struct gve_tx_compl_desc *compl_ring; + struct gve_tx_compl_desc *compl_desc; + struct gve_tx_queue *aim_txq; + uint16_t nb_desc_clean; + struct rte_mbuf *txe; + uint16_t compl_tag; + uint16_t next; + + next = txq->complq_tail; + compl_ring = txq->compl_ring; + compl_desc = &compl_ring[next]; + + if (compl_desc->generation != txq->cur_gen_bit) + return; + + compl_tag = rte_le_to_cpu_16(compl_desc->completion_tag); + + aim_txq = txq->txqs[compl_desc->id]; + + switch (compl_desc->type) { + case GVE_COMPL_TYPE_DQO_DESC: + /* need to clean Descs from last_cleaned to compl_tag */ + if (aim_txq->last_desc_cleaned > compl_tag) + nb_desc_clean = aim_txq->nb_tx_desc - aim_txq->last_desc_cleaned + + compl_tag; + else + nb_desc_clean = compl_tag - aim_txq->last_desc_cleaned; + aim_txq->nb_free += nb_desc_clean; + aim_txq->last_desc_cleaned = compl_tag; + break; + case GVE_COMPL_TYPE_DQO_REINJECTION: + PMD_DRV_LOG(DEBUG, "GVE_COMPL_TYPE_DQO_REINJECTION !!!"); + /* FALLTHROUGH */ + case GVE_COMPL_TYPE_DQO_PKT: + txe = aim_txq->sw_ring[compl_tag]; + if (txe != NULL) { + rte_pktmbuf_free_seg(txe); + txe = NULL; + } + break; + case GVE_COMPL_TYPE_DQO_MISS: + rte_delay_us_sleep(1); + PMD_DRV_LOG(DEBUG, "GVE_COMPL_TYPE_DQO_MISS ignored !!!"); + break; + default: + PMD_DRV_LOG(ERR, "unknown completion type."); + return; + } + + next++; + if (next == txq->nb_tx_desc * DQO_TX_MULTIPLIER) { + next = 0; + txq->cur_gen_bit ^= 1; + } + + txq->complq_tail = next; +} + +uint16_t +gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) +{ + struct gve_tx_queue *txq = tx_queue; + volatile union gve_tx_desc_dqo *txr; + volatile union gve_tx_desc_dqo *txd; + struct rte_mbuf **sw_ring; + struct rte_mbuf *tx_pkt; + uint16_t mask, sw_mask; + uint16_t nb_to_clean; + uint16_t nb_tx = 0; + uint16_t nb_used; + uint16_t tx_id; + uint16_t sw_id; + + sw_ring = txq->sw_ring; + 
txr = txq->tx_ring; + + mask = txq->nb_tx_desc - 1; + sw_mask = txq->sw_size - 1; + tx_id = txq->tx_tail; + sw_id = txq->sw_tail; + + for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { + tx_pkt = tx_pkts[nb_tx]; + + if (txq->nb_free <= txq->free_thresh) { + nb_to_clean = DQO_TX_MULTIPLIER * txq->rs_thresh; + while (nb_to_clean--) + gve_tx_clean_dqo(txq); + } + + if (txq->nb_free < tx_pkt->nb_segs) + break; + + nb_used = tx_pkt->nb_segs; + + do { + txd = &txr[tx_id]; + + sw_ring[sw_id] = tx_pkt; + + /* fill Tx descriptor */ + txd->pkt.buf_addr = rte_cpu_to_le_64(rte_mbuf_data_iova(tx_pkt)); + txd->pkt.dtype = GVE_TX_PKT_DESC_DTYPE_DQO; + txd->pkt.compl_tag = rte_cpu_to_le_16(sw_id); + txd->pkt.buf_size = RTE_MIN(tx_pkt->data_len, GVE_TX_MAX_BUF_SIZE_DQO); + + /* size of desc_ring and sw_ring could be different */ + tx_id = (tx_id + 1) & mask; + sw_id = (sw_id + 1) & sw_mask; + + tx_pkt = tx_pkt->next; + } while (tx_pkt); + + /* fill the last descriptor with End of Packet (EOP) bit */ + txd->pkt.end_of_packet = 1; + + txq->nb_free -= nb_used; + txq->nb_used += nb_used; + } + + /* update the tail pointer if any packets were processed */ + if (nb_tx > 0) { + /* Request a descriptor completion on the last descriptor */ + txq->re_cnt += nb_tx; + if (txq->re_cnt >= GVE_TX_MIN_RE_INTERVAL) { + txd = &txr[(tx_id - 1) & mask]; + txd->pkt.report_event = true; + txq->re_cnt = 0; + } + + rte_write32(tx_id, txq->qtx_tail); + txq->tx_tail = tx_id; + txq->sw_tail = sw_id; + } + + return nb_tx; +} + static inline void gve_release_txq_mbufs_dqo(struct gve_tx_queue *txq) { From patchwork Mon Jan 30 06:26:39 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Junfeng Guo X-Patchwork-Id: 122657 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id AB3E2424BA; Mon, 30 Jan 2023 07:32:55 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 3C39342D4D; Mon, 30 Jan 2023 07:32:37 +0100 (CET) Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by mails.dpdk.org (Postfix) with ESMTP id B966642D3B for ; Mon, 30 Jan 2023 07:32:35 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1675060355; x=1706596355; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=O38K76eBW3merUeeJh5bJMfWxVYmsi4XflBS3s7dgQ8=; b=bF4FgdGQdEPIC4WKIJ9wt2SQI0rw6kYIGRzpZTYgv0wOuZCXcW/yinw3 +VljQw3wtkmpJbvX0ByOIm8W4tqATaIJ11/hSZoHrljdwp2nrRPz8pekI ck6IGF/Yg28g0Gyzkk+IpvlXh2X8z6QdJNswMEh1vJhp0UmcR359iWw0p nsXilj/Y8Cy1crQS6tHDgMUdXVxptgBNF80S9+fHmbZ8uHarx/9DTY+vU x16n6pWdwjlYLX9qnta2C6v1FSJc1XZAIRQ+9YoAuIA8nXv4XFes5oMUy 74dYwZoAaR0lEGj9oHU0MPhcWDVpKJESGhCO28a/noM4ThY8jEVG5rViu Q==; X-IronPort-AV: E=McAfee;i="6500,9779,10605"; a="392035709" X-IronPort-AV: E=Sophos;i="5.97,257,1669104000"; d="scan'208";a="392035709" Received: from orsmga004.jf.intel.com ([10.7.209.38]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2023 22:32:35 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10605"; a="787906479" X-IronPort-AV: E=Sophos;i="5.97,257,1669104000"; d="scan'208";a="787906479" Received: from dpdk-jf-ntb-one.sh.intel.com ([10.67.111.104]) by orsmga004.jf.intel.com with ESMTP; 29 Jan 2023 22:32:31 
-0800 From: Junfeng Guo To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com, beilei.xing@intel.com Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com, Junfeng Guo , Rushil Gupta , Jordan Kimbrough , Jeroen de Borst Subject: [RFC v2 6/9] net/gve: support basic Rx data path for DQO Date: Mon, 30 Jan 2023 14:26:39 +0800 Message-Id: <20230130062642.3337239-7-junfeng.guo@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230130062642.3337239-1-junfeng.guo@intel.com> References: <20230118025347.1567078-1-junfeng.guo@intel.com> <20230130062642.3337239-1-junfeng.guo@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add basic Rx data path support for DQO. Signed-off-by: Junfeng Guo Signed-off-by: Rushil Gupta Signed-off-by: Jordan Kimbrough Signed-off-by: Jeroen de Borst --- drivers/net/gve/gve_ethdev.c | 1 + drivers/net/gve/gve_ethdev.h | 3 + drivers/net/gve/gve_rx_dqo.c | 128 +++++++++++++++++++++++++++++++++++ 3 files changed, 132 insertions(+) diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c index 512a038968..89e3f09c37 100644 --- a/drivers/net/gve/gve_ethdev.c +++ b/drivers/net/gve/gve_ethdev.c @@ -703,6 +703,7 @@ gve_dev_init(struct rte_eth_dev *eth_dev) } else { /* override Tx/Rx setup/release eth_dev ops */ gve_eth_dev_ops_override(&gve_local_eth_dev_ops); + eth_dev->rx_pkt_burst = gve_rx_burst_dqo; eth_dev->tx_pkt_burst = gve_tx_burst_dqo; } diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h index ba657dd6c1..d434f9babe 100644 --- a/drivers/net/gve/gve_ethdev.h +++ b/drivers/net/gve/gve_ethdev.h @@ -366,6 +366,9 @@ gve_stop_tx_queues_dqo(struct rte_eth_dev *dev); void gve_stop_rx_queues_dqo(struct rte_eth_dev *dev); +uint16_t +gve_rx_burst_dqo(void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); + uint16_t gve_tx_burst_dqo(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c index aca6f8ea2d..244517ce5d 100644 --- a/drivers/net/gve/gve_rx_dqo.c +++ b/drivers/net/gve/gve_rx_dqo.c @@ -5,6 +5,134 @@ #include "gve_ethdev.h" #include "base/gve_adminq.h" +static inline void +gve_rx_refill_dqo(struct gve_rx_queue *rxq) +{ + volatile struct gve_rx_desc_dqo *rx_buf_ring; + volatile struct gve_rx_desc_dqo *rx_buf_desc; + struct rte_mbuf *nmb[rxq->free_thresh]; + uint16_t nb_refill = rxq->free_thresh; + uint16_t nb_desc = rxq->nb_rx_desc; + uint16_t next_avail = rxq->bufq_tail; + struct rte_eth_dev *dev; + uint64_t dma_addr; + uint16_t delta; + int i; + + if (rxq->nb_rx_hold < rxq->free_thresh) + return; + + rx_buf_ring = rxq->rx_ring; + delta = nb_desc - next_avail; + if (unlikely(delta < nb_refill)) { + if (likely(rte_pktmbuf_alloc_bulk(rxq->mpool, nmb, delta) == 0)) { + for (i = 0; i < delta; i++) { + rx_buf_desc = &rx_buf_ring[next_avail + i]; + rxq->sw_ring[next_avail + i] = nmb[i]; + dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i])); + rx_buf_desc->header_buf_addr = 0; + rx_buf_desc->buf_addr = dma_addr; + } + nb_refill -= delta; + next_avail = 0; + rxq->nb_rx_hold -= delta; + } else { + dev = &rte_eth_devices[rxq->port_id]; + dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail; + PMD_DRV_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u", + rxq->port_id, rxq->queue_id); + return; + } + } + + if (nb_desc - next_avail >= 
nb_refill) { + if (likely(rte_pktmbuf_alloc_bulk(rxq->mpool, nmb, nb_refill) == 0)) { + for (i = 0; i < nb_refill; i++) { + rx_buf_desc = &rx_buf_ring[next_avail + i]; + rxq->sw_ring[next_avail + i] = nmb[i]; + dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i])); + rx_buf_desc->header_buf_addr = 0; + rx_buf_desc->buf_addr = dma_addr; + } + next_avail += nb_refill; + rxq->nb_rx_hold -= nb_refill; + } else { + dev = &rte_eth_devices[rxq->port_id]; + dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail; + PMD_DRV_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u", + rxq->port_id, rxq->queue_id); + } + } + + rte_write32(next_avail, rxq->qrx_tail); + + rxq->bufq_tail = next_avail; +} + +uint16_t +gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) +{ + volatile struct gve_rx_compl_desc_dqo *rx_compl_ring; + volatile struct gve_rx_compl_desc_dqo *rx_desc; + struct gve_rx_queue *rxq; + struct rte_mbuf *rxm; + uint16_t rx_id_bufq; + uint16_t pkt_len; + uint16_t rx_id; + uint16_t nb_rx; + + nb_rx = 0; + rxq = rx_queue; + rx_id = rxq->rx_tail; + rx_id_bufq = rxq->next_avail; + rx_compl_ring = rxq->compl_ring; + + while (nb_rx < nb_pkts) { + rx_desc = &rx_compl_ring[rx_id]; + + /* check status */ + if (rx_desc->generation != rxq->cur_gen_bit) + break; + + if (unlikely(rx_desc->rx_error)) + continue; + + pkt_len = rx_desc->packet_len; + + rx_id++; + if (rx_id == rxq->nb_rx_desc) { + rx_id = 0; + rxq->cur_gen_bit ^= 1; + } + + rxm = rxq->sw_ring[rx_id_bufq]; + rx_id_bufq++; + if (rx_id_bufq == rxq->nb_rx_desc) + rx_id_bufq = 0; + rxq->nb_rx_hold++; + + rxm->pkt_len = pkt_len; + rxm->data_len = pkt_len; + rxm->port = rxq->port_id; + rxm->ol_flags = 0; + + rxm->ol_flags |= RTE_MBUF_F_RX_RSS_HASH; + rxm->hash.rss = rte_be_to_cpu_32(rx_desc->hash); + + rx_pkts[nb_rx++] = rxm; + } + + if (nb_rx > 0) { + rxq->rx_tail = rx_id; + if (rx_id_bufq != rxq->next_avail) + rxq->next_avail = rx_id_bufq; + + gve_rx_refill_dqo(rxq); + } + + return nb_rx; +} + static inline void gve_release_rxq_mbufs_dqo(struct gve_rx_queue *rxq) { From patchwork Mon Jan 30 06:26:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Junfeng Guo X-Patchwork-Id: 122658 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 0311C424BA; Mon, 30 Jan 2023 07:33:02 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 619AC42D30; Mon, 30 Jan 2023 07:32:41 +0100 (CET) Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by mails.dpdk.org (Postfix) with ESMTP id 3317A40C35 for ; Mon, 30 Jan 2023 07:32:40 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1675060360; x=1706596360; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=f1oh1P2U3l6PWwQKo75ZV2M+hQy/BWnK+GN9J7civvY=; b=STYL3zBlPWOsXx/V14/2JjdIuVvF7M9Flvv9hlauH1rNKUXwVgS34Kje iH3k9vVFWtZIDN1LpKtQIx0HBMGG79yMcr33wYus3Kc9XAK+onwpY7HKE 8fThFo04brCpDHeTYHByOR7F+syP3Q1IE+eQpLWlFe/DwliQas2iTUahg e/gmwUKr67XF8ZKLIMJyQ+xjAcZQxIncOB+0ZGPM4PqlDISvqnrnjeuiI 4eE4AJm3i6KfUury59sUaUZb+sKAX7DJ2UsdcqtBGL97DTc++IDEQCW1X mHFxuv4WQeSwNkXFgGTerq3PAg2SfxhZum4ouHpPDhVJVyGbzDHOtkWPJ g==; X-IronPort-AV: E=McAfee;i="6500,9779,10605"; 
a="392035722" X-IronPort-AV: E=Sophos;i="5.97,257,1669104000"; d="scan'208";a="392035722" Received: from orsmga004.jf.intel.com ([10.7.209.38]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2023 22:32:39 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10605"; a="787906485" X-IronPort-AV: E=Sophos;i="5.97,257,1669104000"; d="scan'208";a="787906485" Received: from dpdk-jf-ntb-one.sh.intel.com ([10.67.111.104]) by orsmga004.jf.intel.com with ESMTP; 29 Jan 2023 22:32:35 -0800 From: Junfeng Guo To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com, beilei.xing@intel.com Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com, Junfeng Guo , Rushil Gupta , Jordan Kimbrough , Jeroen de Borst Subject: [RFC v2 7/9] net/gve: support basic stats for DQO Date: Mon, 30 Jan 2023 14:26:40 +0800 Message-Id: <20230130062642.3337239-8-junfeng.guo@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230130062642.3337239-1-junfeng.guo@intel.com> References: <20230118025347.1567078-1-junfeng.guo@intel.com> <20230130062642.3337239-1-junfeng.guo@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add basic stats support for DQO. Signed-off-by: Junfeng Guo Signed-off-by: Rushil Gupta Signed-off-by: Jordan Kimbrough Signed-off-by: Jeroen de Borst --- drivers/net/gve/gve_ethdev.c | 60 ++++++++++++++++++++++++++++++++++++ drivers/net/gve/gve_ethdev.h | 11 +++++++ drivers/net/gve/gve_rx_dqo.c | 12 +++++++- drivers/net/gve/gve_tx_dqo.c | 6 ++++ 4 files changed, 88 insertions(+), 1 deletion(-) diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c index 89e3f09c37..fae00305f9 100644 --- a/drivers/net/gve/gve_ethdev.c +++ b/drivers/net/gve/gve_ethdev.c @@ -369,6 +369,64 @@ gve_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) return 0; } +static int +gve_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) +{ + uint16_t i; + + for (i = 0; i < dev->data->nb_tx_queues; i++) { + struct gve_tx_queue *txq = dev->data->tx_queues[i]; + if (txq == NULL) + continue; + + stats->opackets += txq->packets; + stats->obytes += txq->bytes; + stats->oerrors += txq->errors; + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + struct gve_rx_queue *rxq = dev->data->rx_queues[i]; + if (rxq == NULL) + continue; + + stats->ipackets += rxq->packets; + stats->ibytes += rxq->bytes; + stats->ierrors += rxq->errors; + stats->rx_nombuf += rxq->no_mbufs; + } + + return 0; +} + +static int +gve_dev_stats_reset(struct rte_eth_dev *dev) +{ + uint16_t i; + + for (i = 0; i < dev->data->nb_tx_queues; i++) { + struct gve_tx_queue *txq = dev->data->tx_queues[i]; + if (txq == NULL) + continue; + + txq->packets = 0; + txq->bytes = 0; + txq->errors = 0; + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + struct gve_rx_queue *rxq = dev->data->rx_queues[i]; + if (rxq == NULL) + continue; + + rxq->packets = 0; + rxq->bytes = 0; + rxq->errors = 0; + rxq->no_mbufs = 0; + } + + return 0; +} + static int gve_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) { @@ -407,6 +465,8 @@ static const struct eth_dev_ops gve_eth_dev_ops = { .rx_queue_release = gve_rx_queue_release, .tx_queue_release = gve_tx_queue_release, .link_update = gve_link_update, + .stats_get = gve_dev_stats_get, + .stats_reset = gve_dev_stats_reset, .mtu_set = gve_dev_mtu_set, }; 
diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h index d434f9babe..2e0f96499d 100644 --- a/drivers/net/gve/gve_ethdev.h +++ b/drivers/net/gve/gve_ethdev.h @@ -105,6 +105,11 @@ struct gve_tx_queue { struct gve_queue_page_list *qpl; struct gve_tx_iovec *iov_ring; + /* stats items */ + uint64_t packets; + uint64_t bytes; + uint64_t errors; + uint16_t port_id; uint16_t queue_id; @@ -156,6 +161,12 @@ struct gve_rx_queue { /* only valid for GQI_QPL queue format */ struct gve_queue_page_list *qpl; + /* stats items */ + uint64_t packets; + uint64_t bytes; + uint64_t errors; + uint64_t no_mbufs; + struct gve_priv *hw; const struct rte_memzone *qres_mz; struct gve_queue_resources *qres; diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c index 244517ce5d..41ead5bd98 100644 --- a/drivers/net/gve/gve_rx_dqo.c +++ b/drivers/net/gve/gve_rx_dqo.c @@ -37,6 +37,7 @@ gve_rx_refill_dqo(struct gve_rx_queue *rxq) next_avail = 0; rxq->nb_rx_hold -= delta; } else { + rxq->no_mbufs += nb_desc - next_avail; dev = &rte_eth_devices[rxq->port_id]; dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail; PMD_DRV_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u", @@ -57,6 +58,7 @@ gve_rx_refill_dqo(struct gve_rx_queue *rxq) next_avail += nb_refill; rxq->nb_rx_hold -= nb_refill; } else { + rxq->no_mbufs += nb_desc - next_avail; dev = &rte_eth_devices[rxq->port_id]; dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail; PMD_DRV_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u", @@ -80,7 +82,9 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) uint16_t pkt_len; uint16_t rx_id; uint16_t nb_rx; + uint64_t bytes; + bytes = 0; nb_rx = 0; rxq = rx_queue; rx_id = rxq->rx_tail; @@ -94,8 +98,10 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) if (rx_desc->generation != rxq->cur_gen_bit) break; - if (unlikely(rx_desc->rx_error)) + if (unlikely(rx_desc->rx_error)) { + rxq->errors++; continue; + } pkt_len = rx_desc->packet_len; @@ -120,6 +126,7 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) rxm->hash.rss = rte_be_to_cpu_32(rx_desc->hash); rx_pkts[nb_rx++] = rxm; + bytes += pkt_len; } if (nb_rx > 0) { @@ -128,6 +135,9 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) rxq->next_avail = rx_id_bufq; gve_rx_refill_dqo(rxq); + + rxq->packets += nb_rx; + rxq->bytes += bytes; } return nb_rx; diff --git a/drivers/net/gve/gve_tx_dqo.c b/drivers/net/gve/gve_tx_dqo.c index 3583c82246..9c1361c894 100644 --- a/drivers/net/gve/gve_tx_dqo.c +++ b/drivers/net/gve/gve_tx_dqo.c @@ -80,10 +80,12 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) uint16_t nb_used; uint16_t tx_id; uint16_t sw_id; + uint64_t bytes; sw_ring = txq->sw_ring; txr = txq->tx_ring; + bytes = 0; mask = txq->nb_tx_desc - 1; sw_mask = txq->sw_size - 1; tx_id = txq->tx_tail; @@ -118,6 +120,7 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) tx_id = (tx_id + 1) & mask; sw_id = (sw_id + 1) & sw_mask; + bytes += tx_pkt->pkt_len; tx_pkt = tx_pkt->next; } while (tx_pkt); @@ -141,6 +144,9 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) rte_write32(tx_id, txq->qtx_tail); txq->tx_tail = tx_id; txq->sw_tail = sw_id; + + txq->packets += nb_tx; + txq->bytes += bytes; } return nb_tx; From patchwork Mon Jan 30 06:26:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Submitter: Junfeng Guo X-Patchwork-Id: 122659 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 377C7424BA; Mon, 30 Jan 2023 07:33:08 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 9882542D38; Mon, 30 Jan 2023 07:32:45 +0100 (CET) Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by mails.dpdk.org (Postfix) with ESMTP id 3875942D49 for ; Mon, 30 Jan 2023 07:32:44 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1675060364; x=1706596364; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=3ztCh2ZPPctYTGF9UOOMn7a52dNfWwqxrmS7OrHE7rQ=; b=d8R6pI/tk6pLMQZQjEVxdPJnrzzjD1+kV5cm7oAX1hRPSST7d51Sn10S dJd1+lrHl3IEkjK/Ov1oCabpIp/ZHwfcJwU9UKaW4JI0R0ZAGuVFZOZW8 kosjMGrvmp1xSa0hfgsg3xT0BR5rv582nmQrYb6q44ZZAv0NVTEPsAycC RQaRsIhatcGsImF+Jo8/qnNGK7ws9MvZWAq77r/hdm2WtN0/0PxXidCWJ uizYalvWVUDanXp9uFaWGBdnX6mchOrZWGXgYdK2JGnTUJJg9BbT5o83N m4AyDa6+NteBw6ecs/LqXVXjG3AAK6OzbM8AcuIvuxk9P5zcmVblAdz35 A==; X-IronPort-AV: E=McAfee;i="6500,9779,10605"; a="392035730" X-IronPort-AV: E=Sophos;i="5.97,257,1669104000"; d="scan'208";a="392035730" Received: from orsmga004.jf.intel.com ([10.7.209.38]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2023 22:32:43 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10605"; a="787906512" X-IronPort-AV: E=Sophos;i="5.97,257,1669104000"; d="scan'208";a="787906512" Received: from dpdk-jf-ntb-one.sh.intel.com ([10.67.111.104]) by orsmga004.jf.intel.com with ESMTP; 29 Jan 2023 22:32:39 -0800 From: Junfeng Guo To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com, beilei.xing@intel.com Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com, Junfeng Guo , Jordan Kimbrough , Rushil Gupta , Jeroen de Borst Subject: [RFC v2 8/9] net/gve: support jumbo frame for GQI Date: Mon, 30 Jan 2023 14:26:41 +0800 Message-Id: <20230130062642.3337239-9-junfeng.guo@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230130062642.3337239-1-junfeng.guo@intel.com> References: <20230118025347.1567078-1-junfeng.guo@intel.com> <20230130062642.3337239-1-junfeng.guo@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add multi-segment support to enable GQI Rx Jumbo Frame. 
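With multi-segment support, gve_rx_burst() can return a chained mbuf: the head carries pkt_len and nb_segs, each segment carries its own data_len, and segments are linked through ->next by gve_rx_mbuf(). The helper below is an editorial illustration, not part of the patch; it assumes m is a packet returned by rte_eth_rx_burst() on a gve port configured with a jumbo MTU:

#include <stdint.h>
#include <rte_mbuf.h>

static uint32_t
count_payload_bytes(const struct rte_mbuf *m)
{
	const struct rte_mbuf *seg;
	uint32_t total = 0;

	/* Walk the chain built by gve_rx_mbuf(): the head plus
	 * (nb_segs - 1) continuation segments.
	 */
	for (seg = m; seg != NULL; seg = seg->next)
		total += seg->data_len;

	return total; /* expected to equal m->pkt_len */
}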
Signed-off-by: Jordan Kimbrough Signed-off-by: Rushil Gupta Signed-off-by: Junfeng Guo Signed-off-by: Jeroen de Borst --- drivers/net/gve/gve_ethdev.h | 8 +++ drivers/net/gve/gve_rx.c | 128 ++++++++++++++++++++++++++--------- 2 files changed, 105 insertions(+), 31 deletions(-) diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h index 2e0f96499d..608a2f2fb4 100644 --- a/drivers/net/gve/gve_ethdev.h +++ b/drivers/net/gve/gve_ethdev.h @@ -138,6 +138,13 @@ struct gve_tx_queue { uint8_t is_gqi_qpl; }; +struct gve_rx_ctx { + struct rte_mbuf *mbuf_head; + struct rte_mbuf *mbuf_tail; + uint16_t total_frags; + bool drop_pkt; +}; + struct gve_rx_queue { volatile struct gve_rx_desc *rx_desc_ring; volatile union gve_rx_data_slot *rx_data_ring; @@ -146,6 +153,7 @@ struct gve_rx_queue { uint64_t rx_ring_phys_addr; struct rte_mbuf **sw_ring; struct rte_mempool *mpool; + struct gve_rx_ctx ctx; uint16_t rx_tail; uint16_t nb_rx_desc; diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c index 9ba975c9b4..2468fc70ee 100644 --- a/drivers/net/gve/gve_rx.c +++ b/drivers/net/gve/gve_rx.c @@ -5,6 +5,8 @@ #include "gve_ethdev.h" #include "base/gve_adminq.h" +#define GVE_PKT_CONT_BIT_IS_SET(x) (GVE_RXF_PKT_CONT & (x)) + static inline void gve_rx_refill(struct gve_rx_queue *rxq) { @@ -80,40 +82,70 @@ gve_rx_refill(struct gve_rx_queue *rxq) } } -uint16_t -gve_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) +/* + * This method processes a single rte_mbuf and handles packet segmentation + * In QPL mode it copies data from the mbuf to the gve_rx_queue. + */ +static void +gve_rx_mbuf(struct gve_rx_queue *rxq, struct rte_mbuf *rxe, uint16_t len, + uint16_t rx_id) { - volatile struct gve_rx_desc *rxr, *rxd; - struct gve_rx_queue *rxq = rx_queue; - uint16_t rx_id = rxq->rx_tail; - struct rte_mbuf *rxe; - uint16_t nb_rx, len; + uint16_t padding = 0; uint64_t addr; - uint16_t i; - - rxr = rxq->rx_desc_ring; - nb_rx = 0; - - for (i = 0; i < nb_pkts; i++) { - rxd = &rxr[rx_id]; - if (GVE_SEQNO(rxd->flags_seq) != rxq->expected_seqno) - break; - if (rxd->flags_seq & GVE_RXF_ERR) - continue; - - len = rte_be_to_cpu_16(rxd->len) - GVE_RX_PAD; - rxe = rxq->sw_ring[rx_id]; - if (rxq->is_gqi_qpl) { - addr = (uint64_t)(rxq->qpl->mz->addr) + rx_id * PAGE_SIZE + GVE_RX_PAD; - rte_memcpy((void *)((size_t)rxe->buf_addr + rxe->data_off), - (void *)(size_t)addr, len); - } + rxe->data_len = len; + if (!rxq->ctx.mbuf_head) { + rxq->ctx.mbuf_head = rxe; + rxq->ctx.mbuf_tail = rxe; + rxe->nb_segs = 1; rxe->pkt_len = len; rxe->data_len = len; rxe->port = rxq->port_id; rxe->ol_flags = 0; + padding = GVE_RX_PAD; + } else { + rxq->ctx.mbuf_head->pkt_len += len; + rxq->ctx.mbuf_head->nb_segs += 1; + rxq->ctx.mbuf_tail->next = rxe; + rxq->ctx.mbuf_tail = rxe; + } + if (rxq->is_gqi_qpl) { + addr = (uint64_t)(rxq->qpl->mz->addr) + rx_id * PAGE_SIZE + padding; + rte_memcpy((void *)((size_t)rxe->buf_addr + rxe->data_off), + (void *)(size_t)addr, len); + } +} + +/* + * This method processes a single packet fragment associated with the + * passed packet descriptor. + * This methods returns whether the fragment is the last fragment + * of a packet. 
+ */ +static bool +gve_rx(struct gve_rx_queue *rxq, volatile struct gve_rx_desc *rxd, uint16_t rx_id) +{ + bool is_last_frag = !GVE_PKT_CONT_BIT_IS_SET(rxd->flags_seq); + uint16_t frag_size = rte_be_to_cpu_16(rxd->len); + struct gve_rx_ctx *ctx = &rxq->ctx; + bool is_first_frag = ctx->total_frags == 0; + struct rte_mbuf *rxe; + + if (ctx->drop_pkt) + goto finish_frag; + if (rxd->flags_seq & GVE_RXF_ERR) { + ctx->drop_pkt = true; + goto finish_frag; + } + + if (is_first_frag) + frag_size -= GVE_RX_PAD; + + rxe = rxq->sw_ring[rx_id]; + gve_rx_mbuf(rxq, rxe, frag_size, rx_id); + + if (is_first_frag) { if (rxd->flags_seq & GVE_RXF_TCP) rxe->packet_type |= RTE_PTYPE_L4_TCP; if (rxd->flags_seq & GVE_RXF_UDP) @@ -127,18 +159,52 @@ gve_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) rxe->ol_flags |= RTE_MBUF_F_RX_RSS_HASH; rxe->hash.rss = rte_be_to_cpu_32(rxd->rss_hash); } + } - rxq->expected_seqno = gve_next_seqno(rxq->expected_seqno); +finish_frag: + ctx->total_frags++; + return is_last_frag; +} + +static void +gve_rx_ctx_clear(struct gve_rx_ctx *ctx) +{ + ctx->mbuf_head = NULL; + ctx->mbuf_tail = NULL; + ctx->drop_pkt = false; + ctx->total_frags = 0; +} + +uint16_t +gve_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) +{ + volatile struct gve_rx_desc *rxr, *rxd; + struct gve_rx_queue *rxq = rx_queue; + struct gve_rx_ctx *ctx = &rxq->ctx; + uint16_t rx_id = rxq->rx_tail; + uint16_t nb_rx; + + rxr = rxq->rx_desc_ring; + nb_rx = 0; + + while (nb_rx < nb_pkts) { + rxd = &rxr[rx_id]; + if (GVE_SEQNO(rxd->flags_seq) != rxq->expected_seqno) + break; + + if (gve_rx(rxq, rxd, rx_id)) { + if (!ctx->drop_pkt) + rx_pkts[nb_rx++] = ctx->mbuf_head; + rxq->nb_avail += ctx->total_frags; + gve_rx_ctx_clear(ctx); + } rx_id++; if (rx_id == rxq->nb_rx_desc) rx_id = 0; - - rx_pkts[nb_rx] = rxe; - nb_rx++; + rxq->expected_seqno = gve_next_seqno(rxq->expected_seqno); } - rxq->nb_avail += nb_rx; rxq->rx_tail = rx_id; if (rxq->nb_avail > rxq->free_thresh) From patchwork Mon Jan 30 06:26:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Junfeng Guo X-Patchwork-Id: 122660 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 4C903424BA; Mon, 30 Jan 2023 07:33:14 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id DD41A42D49; Mon, 30 Jan 2023 07:32:50 +0100 (CET) Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by mails.dpdk.org (Postfix) with ESMTP id E354842F84 for ; Mon, 30 Jan 2023 07:32:48 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1675060369; x=1706596369; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Hm/X8AiMEW7Ljz7tEybdLkGjss+84jKquKyg8OVVbU4=; b=ItpJk6pQY5KMLNXGhRyEU8m14QjlIyp4cJWuGmJxhf6K++F4KAPQ4PmQ +0sXK1Op0iGcf/c1ap0qhTKgENOM8CfqH10iWd3azdu5JS0oONdLDhPFW U4ISW/iJLwHFtCFvj83HGW+355J8QdF/M98lvw0wZJznDybh92slaEhr/ D4o9GryMjtmcvpqw2YhAeHg2JDynxzo58xssBXA2gMh/dO40FovzxcFlJ 2KvFSbWbnbCj17aA7Xl160HMm9S/xT/OfY5WuNfDAsx2x+4WOX6HUror7 hncr+Xy+Y3CBw2CY8cslG2N05mxmd5RZM2targ3zk0E80xvFOE+RGYdk5 A==; X-IronPort-AV: E=McAfee;i="6500,9779,10605"; a="392035739" X-IronPort-AV: E=Sophos;i="5.97,257,1669104000"; 
d="scan'208";a="392035739" Received: from orsmga004.jf.intel.com ([10.7.209.38]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2023 22:32:47 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10605"; a="787906542" X-IronPort-AV: E=Sophos;i="5.97,257,1669104000"; d="scan'208";a="787906542" Received: from dpdk-jf-ntb-one.sh.intel.com ([10.67.111.104]) by orsmga004.jf.intel.com with ESMTP; 29 Jan 2023 22:32:44 -0800 From: Junfeng Guo To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com, beilei.xing@intel.com Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com, Junfeng Guo , Rushil Gupta , Jordan Kimbrough , Jeroen de Borst Subject: [RFC v2 9/9] net/gve: add AdminQ command to verify driver compatibility Date: Mon, 30 Jan 2023 14:26:42 +0800 Message-Id: <20230130062642.3337239-10-junfeng.guo@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230130062642.3337239-1-junfeng.guo@intel.com> References: <20230118025347.1567078-1-junfeng.guo@intel.com> <20230130062642.3337239-1-junfeng.guo@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Check whether the driver is compatible with the device presented. Signed-off-by: Rushil Gupta Signed-off-by: Jordan Kimbrough Signed-off-by: Junfeng Guo Signed-off-by: Jeroen de Borst --- drivers/net/gve/base/gve_adminq.c | 19 ++++++++++ drivers/net/gve/base/gve_adminq.h | 48 +++++++++++++++++++++++++ drivers/net/gve/base/gve_osdep.h | 8 +++++ drivers/net/gve/gve_ethdev.c | 60 +++++++++++++++++++++++++++++++ drivers/net/gve/gve_ethdev.h | 1 + 5 files changed, 136 insertions(+) diff --git a/drivers/net/gve/base/gve_adminq.c b/drivers/net/gve/base/gve_adminq.c index e963f910a0..5576990cb1 100644 --- a/drivers/net/gve/base/gve_adminq.c +++ b/drivers/net/gve/base/gve_adminq.c @@ -401,6 +401,9 @@ static int gve_adminq_issue_cmd(struct gve_priv *priv, case GVE_ADMINQ_GET_PTYPE_MAP: priv->adminq_get_ptype_map_cnt++; break; + case GVE_ADMINQ_VERIFY_DRIVER_COMPATIBILITY: + priv->adminq_verify_driver_compatibility_cnt++; + break; default: PMD_DRV_LOG(ERR, "unknown AQ command opcode %d", opcode); } @@ -859,6 +862,22 @@ int gve_adminq_report_stats(struct gve_priv *priv, u64 stats_report_len, return gve_adminq_execute_cmd(priv, &cmd); } +int gve_adminq_verify_driver_compatibility(struct gve_priv *priv, + u64 driver_info_len, + dma_addr_t driver_info_addr) +{ + union gve_adminq_command cmd; + + memset(&cmd, 0, sizeof(cmd)); + cmd.opcode = cpu_to_be32(GVE_ADMINQ_VERIFY_DRIVER_COMPATIBILITY); + cmd.verify_driver_compatibility = (struct gve_adminq_verify_driver_compatibility) { + .driver_info_len = cpu_to_be64(driver_info_len), + .driver_info_addr = cpu_to_be64(driver_info_addr), + }; + + return gve_adminq_execute_cmd(priv, &cmd); +} + int gve_adminq_report_link_speed(struct gve_priv *priv) { struct gve_dma_mem link_speed_region_dma_mem; diff --git a/drivers/net/gve/base/gve_adminq.h b/drivers/net/gve/base/gve_adminq.h index 05550119de..c82e02405c 100644 --- a/drivers/net/gve/base/gve_adminq.h +++ b/drivers/net/gve/base/gve_adminq.h @@ -23,6 +23,7 @@ enum gve_adminq_opcodes { GVE_ADMINQ_REPORT_STATS = 0xC, GVE_ADMINQ_REPORT_LINK_SPEED = 0xD, GVE_ADMINQ_GET_PTYPE_MAP = 0xE, + GVE_ADMINQ_VERIFY_DRIVER_COMPATIBILITY = 0xF, }; /* Admin queue status codes */ @@ -145,6 +146,48 @@ enum gve_sup_feature_mask { }; #define 
GVE_DEV_OPT_LEN_GQI_RAW_ADDRESSING 0x0 +#define GVE_VERSION_STR_LEN 128 + +enum gve_driver_capbility { + gve_driver_capability_gqi_qpl = 0, + gve_driver_capability_gqi_rda = 1, + gve_driver_capability_dqo_qpl = 2, /* reserved for future use */ + gve_driver_capability_dqo_rda = 3, +}; + +#define GVE_CAP1(a) BIT((int)a) +#define GVE_CAP2(a) BIT(((int)a) - 64) +#define GVE_CAP3(a) BIT(((int)a) - 128) +#define GVE_CAP4(a) BIT(((int)a) - 192) + +#define GVE_DRIVER_CAPABILITY_FLAGS1 \ + (GVE_CAP1(gve_driver_capability_gqi_qpl) | \ + GVE_CAP1(gve_driver_capability_gqi_rda) | \ + GVE_CAP1(gve_driver_capability_dqo_rda)) + +#define GVE_DRIVER_CAPABILITY_FLAGS2 0x0 +#define GVE_DRIVER_CAPABILITY_FLAGS3 0x0 +#define GVE_DRIVER_CAPABILITY_FLAGS4 0x0 + +struct gve_driver_info { + u8 os_type; /* 0x01 = Linux */ + u8 driver_major; + u8 driver_minor; + u8 driver_sub; + __be32 os_version_major; + __be32 os_version_minor; + __be32 os_version_sub; + __be64 driver_capability_flags[4]; + u8 os_version_str1[GVE_VERSION_STR_LEN]; + u8 os_version_str2[GVE_VERSION_STR_LEN]; +}; + +struct gve_adminq_verify_driver_compatibility { + __be64 driver_info_len; + __be64 driver_info_addr; +}; + +GVE_CHECK_STRUCT_LEN(16, gve_adminq_verify_driver_compatibility); struct gve_adminq_configure_device_resources { __be64 counter_array; @@ -345,6 +388,8 @@ union gve_adminq_command { struct gve_adminq_report_stats report_stats; struct gve_adminq_report_link_speed report_link_speed; struct gve_adminq_get_ptype_map get_ptype_map; + struct gve_adminq_verify_driver_compatibility + verify_driver_compatibility; }; }; u8 reserved[64]; @@ -377,5 +422,8 @@ int gve_adminq_report_link_speed(struct gve_priv *priv); struct gve_ptype_lut; int gve_adminq_get_ptype_map_dqo(struct gve_priv *priv, struct gve_ptype_lut *ptype_lut); +int gve_adminq_verify_driver_compatibility(struct gve_priv *priv, + u64 driver_info_len, + dma_addr_t driver_info_addr); #endif /* _GVE_ADMINQ_H */ diff --git a/drivers/net/gve/base/gve_osdep.h b/drivers/net/gve/base/gve_osdep.h index abf3d379ae..a8feae18f4 100644 --- a/drivers/net/gve/base/gve_osdep.h +++ b/drivers/net/gve/base/gve_osdep.h @@ -21,6 +21,9 @@ #include #include #include +#include +#include +#include #include "../gve_logs.h" @@ -82,6 +85,11 @@ typedef rte_iova_t dma_addr_t; { gve_static_assert_##X = (n) / ((sizeof(struct X) == (n)) ? 1 : 0) } #define GVE_CHECK_UNION_LEN(n, X) enum gve_static_asset_enum_##X \ { gve_static_assert_##X = (n) / ((sizeof(union X) == (n)) ? 
1 : 0) } +#ifndef LINUX_VERSION_MAJOR +#define LINUX_VERSION_MAJOR (((LINUX_VERSION_CODE) >> 16) & 0xff) +#define LINUX_VERSION_SUBLEVEL (((LINUX_VERSION_CODE) >> 8) & 0xff) +#define LINUX_VERSION_PATCHLEVEL ((LINUX_VERSION_CODE) & 0xff) +#endif static __rte_always_inline u8 readb(volatile void *addr) diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c index fae00305f9..096f7c2d60 100644 --- a/drivers/net/gve/gve_ethdev.c +++ b/drivers/net/gve/gve_ethdev.c @@ -314,6 +314,60 @@ gve_dev_close(struct rte_eth_dev *dev) return err; } +static int +gve_verify_driver_compatibility(struct gve_priv *priv) +{ + const struct rte_memzone *driver_info_bus; + struct gve_driver_info *driver_info; + struct utsname uts; + char *release; + int err; + + driver_info_bus = rte_memzone_reserve_aligned("verify_driver_compatibility", + sizeof(struct gve_driver_info), + rte_socket_id(), + RTE_MEMZONE_IOVA_CONTIG, + PAGE_SIZE); + if (driver_info_bus == NULL) { + PMD_DRV_LOG(ERR, "Could not alloc memzone for driver compatibility"); + return -ENOMEM; + } + driver_info = (struct gve_driver_info *)driver_info_bus->addr; + *driver_info = (struct gve_driver_info) { + .os_type = 1, /* Linux */ + .os_version_major = cpu_to_be32(LINUX_VERSION_MAJOR), + .os_version_minor = cpu_to_be32(LINUX_VERSION_SUBLEVEL), + .os_version_sub = cpu_to_be32(LINUX_VERSION_PATCHLEVEL), + .driver_capability_flags = { + cpu_to_be64(GVE_DRIVER_CAPABILITY_FLAGS1), + cpu_to_be64(GVE_DRIVER_CAPABILITY_FLAGS2), + cpu_to_be64(GVE_DRIVER_CAPABILITY_FLAGS3), + cpu_to_be64(GVE_DRIVER_CAPABILITY_FLAGS4), + }, + }; + + if (uname(&uts) > 0) + release = uts.release; + + /* OS version */ + rte_strscpy((char *)driver_info->os_version_str1, release, + sizeof(driver_info->os_version_str1)); + /* DPDK version */ + rte_strscpy((char *)driver_info->os_version_str2, rte_version(), + sizeof(driver_info->os_version_str2)); + + err = gve_adminq_verify_driver_compatibility(priv, + sizeof(struct gve_driver_info), + (dma_addr_t)driver_info_bus); + + /* It's ok if the device doesn't support this */ + if (err == -EOPNOTSUPP) + err = 0; + + rte_memzone_free(driver_info_bus); + return err; +} + static int gve_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) { @@ -625,6 +679,12 @@ gve_init_priv(struct gve_priv *priv, bool skip_describe_device) return err; } + err = gve_verify_driver_compatibility(priv); + if (err) { + PMD_DRV_LOG(ERR, "Could not verify driver compatibility: err=%d", err); + goto free_adminq; + } + if (skip_describe_device) goto setup_device; diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h index 608a2f2fb4..cd26225c19 100644 --- a/drivers/net/gve/gve_ethdev.h +++ b/drivers/net/gve/gve_ethdev.h @@ -250,6 +250,7 @@ struct gve_priv { uint32_t adminq_report_stats_cnt; uint32_t adminq_report_link_speed_cnt; uint32_t adminq_get_ptype_map_cnt; + uint32_t adminq_verify_driver_compatibility_cnt; volatile uint32_t state_flags;
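A closing note on the compatibility hunk above: per POSIX, uname(2) returns 0 on success and -1 on failure, so the `if (uname(&uts) > 0)` check in gve_verify_driver_compatibility() appears inverted and would leave `release` unset on the normal path. A hedged sketch of the intended OS-version lookup, reusing the gve_driver_info layout added by this patch (editorial illustration, not part of the patch):

#include <sys/utsname.h>
#include <rte_string_fns.h>
#include "base/gve_adminq.h" /* struct gve_driver_info, added by this patch */

static void
gve_fill_os_version(struct gve_driver_info *info)
{
	struct utsname uts;
	const char *release = "unknown";

	if (uname(&uts) == 0) /* uname(2) returns 0 on success */
		release = uts.release;

	rte_strscpy((char *)info->os_version_str1, release,
		    sizeof(info->os_version_str1));
}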