From patchwork Fri Feb 17 07:32:19 2023
X-Patchwork-Submitter: Junfeng Guo
X-Patchwork-Id: 124108
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Junfeng Guo
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com, beilei.xing@intel.com
Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com, Junfeng Guo , Rushil Gupta , Jordan Kimbrough , Jeroen de Borst
Subject: [RFC v3 01/10] net/gve: add Tx queue setup for DQO
Date: Fri, 17 Feb 2023 15:32:19 +0800
Message-Id: <20230217073228.340815-2-junfeng.guo@intel.com>
In-Reply-To: <20230217073228.340815-1-junfeng.guo@intel.com>
References: <20230130062642.3337239-1-junfeng.guo@intel.com> <20230217073228.340815-1-junfeng.guo@intel.com>

Add support for the tx_queue_setup_dqo ops. The DQO format pairs a submission queue with a completion queue for each Tx/Rx queue. Note that with the DQO format, all descriptors, doorbells, and counters are written in little-endian.
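As a minimal sketch of that little-endian contract (illustrative only, not part of this patch), any field read from a DQO completion descriptor goes through DPDK's byteorder helpers, so the driver behaves the same on big- and little-endian CPUs:

    #include <rte_byteorder.h>

    /* Sketch: completion entries are little-endian in memory, so the
     * completion tag must be converted before it is used as an index.
     */
    static inline uint16_t
    gve_read_compl_tag_sketch(const struct gve_tx_compl_desc *desc)
    {
        return rte_le_to_cpu_16(desc->completion_tag);
    }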
Signed-off-by: Junfeng Guo Signed-off-by: Rushil Gupta Signed-off-by: Jordan Kimbrough Signed-off-by: Jeroen de Borst --- .mailmap | 3 + MAINTAINERS | 3 + drivers/net/gve/base/gve.h | 3 +- drivers/net/gve/base/gve_desc_dqo.h | 6 +- drivers/net/gve/base/gve_osdep.h | 6 +- drivers/net/gve/gve_ethdev.c | 19 ++- drivers/net/gve/gve_ethdev.h | 35 +++++- drivers/net/gve/gve_tx_dqo.c | 184 ++++++++++++++++++++++++++++ drivers/net/gve/meson.build | 3 +- 9 files changed, 248 insertions(+), 14 deletions(-) create mode 100644 drivers/net/gve/gve_tx_dqo.c diff --git a/.mailmap b/.mailmap index 2af8606181..abfb09039e 100644 --- a/.mailmap +++ b/.mailmap @@ -579,6 +579,7 @@ Jens Freimann Jeremy Plsek Jeremy Spewock Jerin Jacob +Jeroen de Borst Jerome Jutteau Jerry Hao OS Jerry Lilijun @@ -643,6 +644,7 @@ Jonathan Erb Jon DeVree Jon Loeliger Joongi Kim +Jordan Kimbrough Jørgen Østergaard Sloth Jörg Thalheim Joseph Richard @@ -1148,6 +1150,7 @@ Roy Franz Roy Pledge Roy Shterman Ruifeng Wang +Rushil Gupta Ryan E Hall Sabyasachi Sengupta Sachin Saxena diff --git a/MAINTAINERS b/MAINTAINERS index 3495946d0f..0b04fe20f2 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -703,6 +703,9 @@ F: doc/guides/nics/features/enic.ini Google Virtual Ethernet M: Junfeng Guo +M: Jeroen de Borst +M: Rushil Gupta +M: Jordan Kimbrough F: drivers/net/gve/ F: doc/guides/nics/gve.rst F: doc/guides/nics/features/gve.ini diff --git a/drivers/net/gve/base/gve.h b/drivers/net/gve/base/gve.h index 2dc4507acb..22d175910d 100644 --- a/drivers/net/gve/base/gve.h +++ b/drivers/net/gve/base/gve.h @@ -1,12 +1,13 @@ /* SPDX-License-Identifier: MIT * Google Virtual Ethernet (gve) driver - * Copyright (C) 2015-2022 Google, Inc. + * Copyright (C) 2015-2023 Google, Inc. */ #ifndef _GVE_H_ #define _GVE_H_ #include "gve_desc.h" +#include "gve_desc_dqo.h" #define GVE_VERSION "1.3.0" #define GVE_VERSION_PREFIX "GVE-" diff --git a/drivers/net/gve/base/gve_desc_dqo.h b/drivers/net/gve/base/gve_desc_dqo.h index ee1afdecb8..431abac424 100644 --- a/drivers/net/gve/base/gve_desc_dqo.h +++ b/drivers/net/gve/base/gve_desc_dqo.h @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: MIT * Google Virtual Ethernet (gve) driver - * Copyright (C) 2015-2022 Google, Inc. + * Copyright (C) 2015-2023 Google, Inc. 
*/ /* GVE DQO Descriptor formats */ @@ -13,10 +13,6 @@ #define GVE_TX_MAX_HDR_SIZE_DQO 255 #define GVE_TX_MIN_TSO_MSS_DQO 88 -#ifndef __LITTLE_ENDIAN_BITFIELD -#error "Only little endian supported" -#endif - /* Basic TX descriptor (DTYPE 0x0C) */ struct gve_tx_pkt_desc_dqo { __le64 buf_addr; diff --git a/drivers/net/gve/base/gve_osdep.h b/drivers/net/gve/base/gve_osdep.h index 7cb73002f4..71759d254f 100644 --- a/drivers/net/gve/base/gve_osdep.h +++ b/drivers/net/gve/base/gve_osdep.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2022 Intel Corporation + * Copyright(C) 2022-2023 Intel Corporation */ #ifndef _GVE_OSDEP_H_ @@ -35,6 +35,10 @@ typedef rte_be16_t __be16; typedef rte_be32_t __be32; typedef rte_be64_t __be64; +typedef rte_le16_t __le16; +typedef rte_le32_t __le32; +typedef rte_le64_t __le64; + typedef rte_iova_t dma_addr_t; #define ETH_MIN_MTU RTE_ETHER_MIN_MTU diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c index 06d1b796c8..a02a48ef11 100644 --- a/drivers/net/gve/gve_ethdev.c +++ b/drivers/net/gve/gve_ethdev.c @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2022 Intel Corporation + * Copyright(C) 2022-2023 Intel Corporation */ #include "gve_ethdev.h" @@ -299,6 +299,7 @@ gve_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) dev_info->default_txconf = (struct rte_eth_txconf) { .tx_free_thresh = GVE_DEFAULT_TX_FREE_THRESH, + .tx_rs_thresh = GVE_DEFAULT_TX_RS_THRESH, .offloads = 0, }; @@ -420,6 +421,17 @@ static const struct eth_dev_ops gve_eth_dev_ops = { .mtu_set = gve_dev_mtu_set, }; +static const struct eth_dev_ops gve_eth_dev_ops_dqo = { + .dev_configure = gve_dev_configure, + .dev_start = gve_dev_start, + .dev_stop = gve_dev_stop, + .dev_close = gve_dev_close, + .dev_infos_get = gve_dev_info_get, + .tx_queue_setup = gve_tx_queue_setup_dqo, + .link_update = gve_link_update, + .mtu_set = gve_dev_mtu_set, +}; + static void gve_free_counter_array(struct gve_priv *priv) { @@ -662,8 +674,6 @@ gve_dev_init(struct rte_eth_dev *eth_dev) rte_be32_t *db_bar; int err; - eth_dev->dev_ops = &gve_eth_dev_ops; - if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; @@ -699,10 +709,11 @@ gve_dev_init(struct rte_eth_dev *eth_dev) return err; if (gve_is_gqi(priv)) { + eth_dev->dev_ops = &gve_eth_dev_ops; eth_dev->rx_pkt_burst = gve_rx_burst; eth_dev->tx_pkt_burst = gve_tx_burst; } else { - PMD_DRV_LOG(ERR, "DQO_RDA is not implemented and will be added in the future"); + eth_dev->dev_ops = &gve_eth_dev_ops_dqo; } eth_dev->data->mac_addrs = &priv->dev_addr; diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h index 64e571bcae..c4b66acb0a 100644 --- a/drivers/net/gve/gve_ethdev.h +++ b/drivers/net/gve/gve_ethdev.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2022 Intel Corporation + * Copyright(C) 2022-2023 Intel Corporation */ #ifndef _GVE_ETHDEV_H_ @@ -11,6 +11,9 @@ #include "base/gve.h" +/* TODO: this is a workaround to ensure that Tx complq is enough */ +#define DQO_TX_MULTIPLIER 4 + /* * Following macros are derived from linux/pci_regs.h, however, * we can't simply include that header here, as there is no such @@ -25,7 +28,8 @@ #define PCI_MSIX_FLAGS_QSIZE 0x07FF /* Table size */ #define GVE_DEFAULT_RX_FREE_THRESH 512 -#define GVE_DEFAULT_TX_FREE_THRESH 256 +#define GVE_DEFAULT_TX_FREE_THRESH 32 +#define GVE_DEFAULT_TX_RS_THRESH 32 #define GVE_TX_MAX_FREE_SZ 512 #define GVE_MIN_BUF_SIZE 1024 @@ -50,6 +54,13 @@ union gve_tx_desc { struct 
gve_tx_seg_desc seg; /* subsequent descs for a packet */ }; +/* Tx desc for DQO format */ +union gve_tx_desc_dqo { + struct gve_tx_pkt_desc_dqo pkt; + struct gve_tx_tso_context_desc_dqo tso_ctx; + struct gve_tx_general_context_desc_dqo general_ctx; +}; + /* Offload features */ union gve_tx_offload { uint64_t data; @@ -78,8 +89,10 @@ struct gve_tx_queue { uint32_t tx_tail; uint16_t nb_tx_desc; uint16_t nb_free; + uint16_t nb_used; uint32_t next_to_clean; uint16_t free_thresh; + uint16_t rs_thresh; /* Only valid for DQO_QPL queue format */ uint16_t sw_tail; @@ -107,6 +120,17 @@ struct gve_tx_queue { const struct rte_memzone *qres_mz; struct gve_queue_resources *qres; + /* newly added for DQO */ + volatile union gve_tx_desc_dqo *tx_ring; + struct gve_tx_compl_desc *compl_ring; + const struct rte_memzone *compl_ring_mz; + uint64_t compl_ring_phys_addr; + uint32_t complq_tail; + uint16_t sw_size; + uint8_t cur_gen_bit; + uint32_t last_desc_cleaned; + void **txqs; + /* Only valid for DQO_RDA queue format */ struct gve_tx_queue *complq; @@ -319,4 +343,11 @@ gve_rx_burst(void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); uint16_t gve_tx_burst(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); +/* Below functions are used for DQO */ + +int +gve_tx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id, + uint16_t nb_desc, unsigned int socket_id, + const struct rte_eth_txconf *conf); + #endif /* _GVE_ETHDEV_H_ */ diff --git a/drivers/net/gve/gve_tx_dqo.c b/drivers/net/gve/gve_tx_dqo.c new file mode 100644 index 0000000000..acf4ee2952 --- /dev/null +++ b/drivers/net/gve/gve_tx_dqo.c @@ -0,0 +1,184 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2022-2023 Intel Corporation + */ + +#include "gve_ethdev.h" +#include "base/gve_adminq.h" + +static int +check_tx_thresh_dqo(uint16_t nb_desc, uint16_t tx_rs_thresh, + uint16_t tx_free_thresh) +{ + if (tx_rs_thresh >= (nb_desc - 2)) { + PMD_DRV_LOG(ERR, "tx_rs_thresh (%u) must be less than the " + "number of TX descriptors (%u) minus 2", + tx_rs_thresh, nb_desc); + return -EINVAL; + } + if (tx_free_thresh >= (nb_desc - 3)) { + PMD_DRV_LOG(ERR, "tx_free_thresh (%u) must be less than the " + "number of TX descriptors (%u) minus 3.", + tx_free_thresh, nb_desc); + return -EINVAL; + } + if (tx_rs_thresh > tx_free_thresh) { + PMD_DRV_LOG(ERR, "tx_rs_thresh (%u) must be less than or " + "equal to tx_free_thresh (%u).", + tx_rs_thresh, tx_free_thresh); + return -EINVAL; + } + if ((nb_desc % tx_rs_thresh) != 0) { + PMD_DRV_LOG(ERR, "tx_rs_thresh (%u) must be a divisor of the " + "number of TX descriptors (%u).", + tx_rs_thresh, nb_desc); + return -EINVAL; + } + + return 0; +} + +static void +gve_reset_txq_dqo(struct gve_tx_queue *txq) +{ + struct rte_mbuf **sw_ring; + uint32_t size, i; + + if (txq == NULL) { + PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL"); + return; + } + + size = txq->nb_tx_desc * sizeof(union gve_tx_desc_dqo); + for (i = 0; i < size; i++) + ((volatile char *)txq->tx_ring)[i] = 0; + + size = txq->sw_size * sizeof(struct gve_tx_compl_desc); + for (i = 0; i < size; i++) + ((volatile char *)txq->compl_ring)[i] = 0; + + sw_ring = txq->sw_ring; + for (i = 0; i < txq->sw_size; i++) + sw_ring[i] = NULL; + + txq->tx_tail = 0; + txq->nb_used = 0; + + txq->last_desc_cleaned = 0; + txq->sw_tail = 0; + txq->nb_free = txq->nb_tx_desc - 1; + + txq->complq_tail = 0; + txq->cur_gen_bit = 1; +} + +int +gve_tx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id, + uint16_t nb_desc, unsigned int socket_id, + const struct 
rte_eth_txconf *conf) +{ + struct gve_priv *hw = dev->data->dev_private; + const struct rte_memzone *mz; + struct gve_tx_queue *txq; + uint16_t free_thresh; + uint16_t rs_thresh; + uint16_t sw_size; + int err = 0; + + if (nb_desc != hw->tx_desc_cnt) { + PMD_DRV_LOG(WARNING, "gve doesn't support nb_desc config, use hw nb_desc %u.", + hw->tx_desc_cnt); + } + nb_desc = hw->tx_desc_cnt; + + /* Allocate the TX queue data structure. */ + txq = rte_zmalloc_socket("gve txq", + sizeof(struct gve_tx_queue), + RTE_CACHE_LINE_SIZE, socket_id); + if (txq == NULL) { + PMD_DRV_LOG(ERR, "Failed to allocate memory for tx queue structure"); + return -ENOMEM; + } + + /* need to check free_thresh here */ + free_thresh = conf->tx_free_thresh ? + conf->tx_free_thresh : GVE_DEFAULT_TX_FREE_THRESH; + rs_thresh = conf->tx_rs_thresh ? + conf->tx_rs_thresh : GVE_DEFAULT_TX_RS_THRESH; + if (check_tx_thresh_dqo(nb_desc, rs_thresh, free_thresh)) + return -EINVAL; + + txq->nb_tx_desc = nb_desc; + txq->free_thresh = free_thresh; + txq->rs_thresh = rs_thresh; + txq->queue_id = queue_id; + txq->port_id = dev->data->port_id; + txq->ntfy_id = queue_id; + txq->hw = hw; + txq->ntfy_addr = &hw->db_bar2[rte_be_to_cpu_32(hw->irq_dbs[txq->ntfy_id].id)]; + + /* Allocate software ring */ + sw_size = nb_desc * DQO_TX_MULTIPLIER; + txq->sw_ring = rte_zmalloc_socket("gve tx sw ring", + sw_size * sizeof(struct rte_mbuf *), + RTE_CACHE_LINE_SIZE, socket_id); + if (txq->sw_ring == NULL) { + PMD_DRV_LOG(ERR, "Failed to allocate memory for SW TX ring"); + err = -ENOMEM; + goto free_txq; + } + txq->sw_size = sw_size; + + /* Allocate TX hardware ring descriptors. */ + mz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_id, + nb_desc * sizeof(union gve_tx_desc_dqo), + PAGE_SIZE, socket_id); + if (mz == NULL) { + PMD_DRV_LOG(ERR, "Failed to reserve DMA memory for TX"); + err = -ENOMEM; + goto free_txq_sw_ring; + } + txq->tx_ring = (union gve_tx_desc_dqo *)mz->addr; + txq->tx_ring_phys_addr = mz->iova; + txq->mz = mz; + + /* Allocate TX completion ring descriptors. 
*/ + mz = rte_eth_dma_zone_reserve(dev, "tx_compl_ring", queue_id, + sw_size * sizeof(struct gve_tx_compl_desc), + PAGE_SIZE, socket_id); + if (mz == NULL) { + PMD_DRV_LOG(ERR, "Failed to reserve DMA memory for TX completion queue"); + err = -ENOMEM; + goto free_txq_mz; + } + txq->compl_ring = (struct gve_tx_compl_desc *)mz->addr; + txq->compl_ring_phys_addr = mz->iova; + txq->compl_ring_mz = mz; + txq->txqs = dev->data->tx_queues; + + mz = rte_eth_dma_zone_reserve(dev, "txq_res", queue_id, + sizeof(struct gve_queue_resources), + PAGE_SIZE, socket_id); + if (mz == NULL) { + PMD_DRV_LOG(ERR, "Failed to reserve DMA memory for TX resource"); + err = -ENOMEM; + goto free_txq_cq_mz; + } + txq->qres = (struct gve_queue_resources *)mz->addr; + txq->qres_mz = mz; + + gve_reset_txq_dqo(txq); + + dev->data->tx_queues[queue_id] = txq; + + return 0; + +free_txq_cq_mz: + rte_memzone_free(txq->compl_ring_mz); +free_txq_mz: + rte_memzone_free(txq->mz); +free_txq_sw_ring: + rte_free(txq->sw_ring); +free_txq: + rte_free(txq); + return err; +} diff --git a/drivers/net/gve/meson.build b/drivers/net/gve/meson.build index af0010c01c..a699432160 100644 --- a/drivers/net/gve/meson.build +++ b/drivers/net/gve/meson.build @@ -1,5 +1,5 @@ # SPDX-License-Identifier: BSD-3-Clause -# Copyright(C) 2022 Intel Corporation +# Copyright(C) 2022-2023 Intel Corporation if is_windows build = false @@ -11,6 +11,7 @@ sources = files( 'base/gve_adminq.c', 'gve_rx.c', 'gve_tx.c', + 'gve_tx_dqo.c', 'gve_ethdev.c', ) includes += include_directories('base')

From patchwork Fri Feb 17 07:32:20 2023
X-Patchwork-Submitter: Junfeng Guo
X-Patchwork-Id: 124109
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Junfeng Guo
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com, beilei.xing@intel.com
Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com, Junfeng Guo , Rushil Gupta , Jordan Kimbrough , Jeroen de Borst
Subject: [RFC v3 02/10] net/gve: add Rx queue setup for DQO
Date: Fri, 17 Feb 2023 15:32:20 +0800
Message-Id: <20230217073228.340815-3-junfeng.guo@intel.com>
In-Reply-To: <20230217073228.340815-1-junfeng.guo@intel.com>
References: <20230130062642.3337239-1-junfeng.guo@intel.com> <20230217073228.340815-1-junfeng.guo@intel.com>

Add support for rx_queue_setup_dqo ops.

Signed-off-by: Junfeng Guo Signed-off-by: Rushil Gupta Signed-off-by: Jordan Kimbrough Signed-off-by: Jeroen de Borst --- drivers/net/gve/gve_ethdev.c | 1 + drivers/net/gve/gve_ethdev.h | 14 ++++ drivers/net/gve/gve_rx_dqo.c | 154 +++++++++++++++++++++++++++++++++++ drivers/net/gve/meson.build | 1 + 4 files changed, 170 insertions(+) create mode 100644 drivers/net/gve/gve_rx_dqo.c diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c index a02a48ef11..0f55d028f5 100644 --- a/drivers/net/gve/gve_ethdev.c +++ b/drivers/net/gve/gve_ethdev.c @@ -427,6 +427,7 @@ static const struct eth_dev_ops gve_eth_dev_ops_dqo = { .dev_stop = gve_dev_stop, .dev_close = gve_dev_close, .dev_infos_get = gve_dev_info_get, + .rx_queue_setup = gve_rx_queue_setup_dqo, .tx_queue_setup = gve_tx_queue_setup_dqo, .link_update = gve_link_update, .mtu_set = gve_dev_mtu_set, diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h index c4b66acb0a..c4e5b8cb43 100644 --- a/drivers/net/gve/gve_ethdev.h +++ b/drivers/net/gve/gve_ethdev.h @@ -150,6 +150,7 @@ struct gve_rx_queue { uint16_t nb_rx_desc; uint16_t expected_seqno; /* the next expected seqno */ uint16_t free_thresh; + uint16_t nb_rx_hold; uint32_t next_avail; uint32_t nb_avail; @@ -174,6 +175,14 @@ struct gve_rx_queue { uint16_t ntfy_id; uint16_t rx_buf_len; + /* newly added for DQO */ + volatile struct gve_rx_desc_dqo *rx_ring; + struct gve_rx_compl_desc_dqo *compl_ring; + const struct rte_memzone *compl_ring_mz; + uint64_t compl_ring_phys_addr; + uint8_t cur_gen_bit; + uint16_t bufq_tail; + /* Only valid for DQO_RDA queue format */ struct gve_rx_queue *bufq; @@ -345,6 +354,11 @@ gve_tx_burst(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); /* Below functions are used for DQO */ +int +gve_rx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id, + uint16_t nb_desc, unsigned int socket_id, + const struct rte_eth_rxconf *conf, + struct rte_mempool *pool); int gve_tx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id, uint16_t nb_desc, unsigned int socket_id, diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c new file mode 100644 index 0000000000..9c412c1481 --- /dev/null +++ b/drivers/net/gve/gve_rx_dqo.c @@ -0,0 +1,154 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2022-2023 Intel Corporation + */ + +#include "gve_ethdev.h" +#include "base/gve_adminq.h" + +static void +gve_reset_rxq_dqo(struct gve_rx_queue *rxq) +{ + struct rte_mbuf **sw_ring; + uint32_t size, i; + + if (rxq == NULL) { + PMD_DRV_LOG(ERR, "pointer to rxq is NULL"); + return; + } + + size = rxq->nb_rx_desc * sizeof(struct gve_rx_desc_dqo); + for (i = 0; i < size; i++) +
((volatile char *)rxq->rx_ring)[i] = 0; + + size = rxq->nb_rx_desc * sizeof(struct gve_rx_compl_desc_dqo); + for (i = 0; i < size; i++) + ((volatile char *)rxq->compl_ring)[i] = 0; + + sw_ring = rxq->sw_ring; + for (i = 0; i < rxq->nb_rx_desc; i++) + sw_ring[i] = NULL; + + rxq->bufq_tail = 0; + rxq->next_avail = 0; + rxq->nb_rx_hold = rxq->nb_rx_desc - 1; + + rxq->rx_tail = 0; + rxq->cur_gen_bit = 1; +} + +int +gve_rx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id, + uint16_t nb_desc, unsigned int socket_id, + const struct rte_eth_rxconf *conf, + struct rte_mempool *pool) +{ + struct gve_priv *hw = dev->data->dev_private; + const struct rte_memzone *mz; + struct gve_rx_queue *rxq; + uint16_t free_thresh; + int err = 0; + + if (nb_desc != hw->rx_desc_cnt) { + PMD_DRV_LOG(WARNING, "gve doesn't support nb_desc config, use hw nb_desc %u.", + hw->rx_desc_cnt); + } + nb_desc = hw->rx_desc_cnt; + + /* Allocate the RX queue data structure. */ + rxq = rte_zmalloc_socket("gve rxq", + sizeof(struct gve_rx_queue), + RTE_CACHE_LINE_SIZE, + socket_id); + if (rxq == NULL) { + PMD_DRV_LOG(ERR, "Failed to allocate memory for rx queue structure"); + return -ENOMEM; + } + + /* check free_thresh here */ + free_thresh = conf->rx_free_thresh ? + conf->rx_free_thresh : GVE_DEFAULT_RX_FREE_THRESH; + if (free_thresh >= nb_desc) { + PMD_DRV_LOG(ERR, "rx_free_thresh (%u) must be less than nb_desc (%u).", + free_thresh, rxq->nb_rx_desc); + err = -EINVAL; + goto free_rxq; + } + + rxq->nb_rx_desc = nb_desc; + rxq->free_thresh = free_thresh; + rxq->queue_id = queue_id; + rxq->port_id = dev->data->port_id; + rxq->ntfy_id = hw->num_ntfy_blks / 2 + queue_id; + + rxq->mpool = pool; + rxq->hw = hw; + rxq->ntfy_addr = &hw->db_bar2[rte_be_to_cpu_32(hw->irq_dbs[rxq->ntfy_id].id)]; + + rxq->rx_buf_len = + rte_pktmbuf_data_room_size(rxq->mpool) - RTE_PKTMBUF_HEADROOM; + + /* Allocate software ring */ + rxq->sw_ring = rte_zmalloc_socket("gve rx sw ring", + nb_desc * sizeof(struct rte_mbuf *), + RTE_CACHE_LINE_SIZE, socket_id); + if (rxq->sw_ring == NULL) { + PMD_DRV_LOG(ERR, "Failed to allocate memory for SW RX ring"); + err = -ENOMEM; + goto free_rxq; + } + + /* Allocate RX buffer queue */ + mz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_id, + nb_desc * sizeof(struct gve_rx_desc_dqo), + PAGE_SIZE, socket_id); + if (mz == NULL) { + PMD_DRV_LOG(ERR, "Failed to reserve DMA memory for RX buffer queue"); + err = -ENOMEM; + goto free_rxq_sw_ring; + } + rxq->rx_ring = (struct gve_rx_desc_dqo *)mz->addr; + rxq->rx_ring_phys_addr = mz->iova; + rxq->mz = mz; + + /* Allocate RX completion queue */ + mz = rte_eth_dma_zone_reserve(dev, "compl_ring", queue_id, + nb_desc * sizeof(struct gve_rx_compl_desc_dqo), + PAGE_SIZE, socket_id); + if (mz == NULL) { + PMD_DRV_LOG(ERR, "Failed to reserve DMA memory for RX completion queue"); + err = -ENOMEM; + goto free_rxq_mz; + } + /* Zero all the descriptors in the ring */ + memset(mz->addr, 0, nb_desc * sizeof(struct gve_rx_compl_desc_dqo)); + rxq->compl_ring = (struct gve_rx_compl_desc_dqo *)mz->addr; + rxq->compl_ring_phys_addr = mz->iova; + rxq->compl_ring_mz = mz; + + mz = rte_eth_dma_zone_reserve(dev, "rxq_res", queue_id, + sizeof(struct gve_queue_resources), + PAGE_SIZE, socket_id); + if (mz == NULL) { + PMD_DRV_LOG(ERR, "Failed to reserve DMA memory for RX resource"); + err = -ENOMEM; + goto free_rxq_cq_mz; + } + rxq->qres = (struct gve_queue_resources *)mz->addr; + rxq->qres_mz = mz; + + gve_reset_rxq_dqo(rxq); + + dev->data->rx_queues[queue_id] = rxq; + + return 0; + 
+free_rxq_cq_mz: + rte_memzone_free(rxq->compl_ring_mz); +free_rxq_mz: + rte_memzone_free(rxq->mz); +free_rxq_sw_ring: + rte_free(rxq->sw_ring); +free_rxq: + rte_free(rxq); + return err; +} diff --git a/drivers/net/gve/meson.build b/drivers/net/gve/meson.build index a699432160..8caee3714b 100644 --- a/drivers/net/gve/meson.build +++ b/drivers/net/gve/meson.build @@ -11,6 +11,7 @@ sources = files( 'base/gve_adminq.c', 'gve_rx.c', 'gve_tx.c', + 'gve_rx_dqo.c', 'gve_tx_dqo.c', 'gve_ethdev.c', )

From patchwork Fri Feb 17 07:32:21 2023
X-Patchwork-Submitter: Junfeng Guo
X-Patchwork-Id: 124110
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Junfeng Guo
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com, beilei.xing@intel.com
Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com, Junfeng Guo , Rushil Gupta , Jordan Kimbrough , Jeroen de Borst
Subject: [RFC v3 03/10] net/gve: support device start and close for DQO
Date: Fri, 17 Feb 2023 15:32:21 +0800
Message-Id: <20230217073228.340815-4-junfeng.guo@intel.com>
In-Reply-To: <20230217073228.340815-1-junfeng.guo@intel.com>
References: <20230130062642.3337239-1-junfeng.guo@intel.com> <20230217073228.340815-1-junfeng.guo@intel.com>

Add device start and close support for DQO.
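As a condensed sketch of what this patch's gve_refill_dqo() does at start time (the bulk-allocation fast path and error accounting are omitted here): every buffer-queue descriptor but the last is armed with an mbuf, its fields are written in little-endian, and the tail doorbell is rung once at the end.

    /* Sketch only: a simplified refill loop under the same field names
     * as gve_refill_dqo(); not the function added by the patch itself.
     */
    static int
    gve_refill_sketch(struct gve_rx_queue *rxq)
    {
        uint16_t i;

        for (i = 0; i < rxq->nb_rx_desc - 1; i++) {
            struct rte_mbuf *nmb = rte_pktmbuf_alloc(rxq->mpool);

            if (nmb == NULL)
                return -ENOMEM;
            rxq->sw_ring[i] = nmb;
            /* DQO descriptors are written in little-endian */
            rxq->rx_ring[i].buf_addr =
                rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
            rxq->rx_ring[i].buf_id = rte_cpu_to_le_16(i);
        }
        rxq->bufq_tail = rxq->nb_rx_desc - 1;
        rte_write32(rxq->bufq_tail, rxq->qrx_tail);
        return 0;
    }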
Signed-off-by: Junfeng Guo Signed-off-by: Rushil Gupta Signed-off-by: Jordan Kimbrough Signed-off-by: Jeroen de Borst --- drivers/net/gve/base/gve_adminq.c | 12 ++++----- drivers/net/gve/gve_ethdev.c | 43 ++++++++++++++++++++++++++++++- 2 files changed, 48 insertions(+), 7 deletions(-) diff --git a/drivers/net/gve/base/gve_adminq.c b/drivers/net/gve/base/gve_adminq.c index e745b709b2..650d520e3d 100644 --- a/drivers/net/gve/base/gve_adminq.c +++ b/drivers/net/gve/base/gve_adminq.c @@ -1,6 +1,6 @@ /* SPDX-License-Identifier: MIT * Google Virtual Ethernet (gve) driver - * Copyright (C) 2015-2022 Google, Inc. + * Copyright (C) 2015-2023 Google, Inc. */ #include "../gve_ethdev.h" @@ -497,11 +497,11 @@ static int gve_adminq_create_tx_queue(struct gve_priv *priv, u32 queue_index) cmd.create_tx_queue.queue_page_list_id = cpu_to_be32(qpl_id); } else { cmd.create_tx_queue.tx_ring_size = - cpu_to_be16(txq->nb_tx_desc); + cpu_to_be16(priv->tx_desc_cnt); cmd.create_tx_queue.tx_comp_ring_addr = - cpu_to_be64(txq->complq->tx_ring_phys_addr); + cpu_to_be64(txq->compl_ring_phys_addr); cmd.create_tx_queue.tx_comp_ring_size = - cpu_to_be16(priv->tx_compq_size); + cpu_to_be16(priv->tx_compq_size * DQO_TX_MULTIPLIER); } return gve_adminq_issue_cmd(priv, &cmd); @@ -549,9 +549,9 @@ static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index) cmd.create_rx_queue.rx_ring_size = cpu_to_be16(priv->rx_desc_cnt); cmd.create_rx_queue.rx_desc_ring_addr = - cpu_to_be64(rxq->rx_ring_phys_addr); + cpu_to_be64(rxq->compl_ring_phys_addr); cmd.create_rx_queue.rx_data_ring_addr = - cpu_to_be64(rxq->bufq->rx_ring_phys_addr); + cpu_to_be64(rxq->rx_ring_phys_addr); cmd.create_rx_queue.packet_buffer_size = cpu_to_be16(rxq->rx_buf_len); cmd.create_rx_queue.rx_buff_ring_size = diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c index 0f55d028f5..413696890f 100644 --- a/drivers/net/gve/gve_ethdev.c +++ b/drivers/net/gve/gve_ethdev.c @@ -78,6 +78,9 @@ gve_free_qpls(struct gve_priv *priv) uint16_t nb_rxqs = priv->max_nb_rxq; uint32_t i; + if (priv->queue_format != GVE_GQI_QPL_FORMAT) + return; + for (i = 0; i < nb_txqs + nb_rxqs; i++) { if (priv->qpl[i].mz != NULL) rte_memzone_free(priv->qpl[i].mz); @@ -138,6 +141,41 @@ gve_refill_pages(struct gve_rx_queue *rxq) return 0; } +static int +gve_refill_dqo(struct gve_rx_queue *rxq) +{ + struct rte_mbuf *nmb; + uint16_t i; + int diag; + + diag = rte_pktmbuf_alloc_bulk(rxq->mpool, &rxq->sw_ring[0], rxq->nb_rx_desc); + if (diag < 0) { + for (i = 0; i < rxq->nb_rx_desc - 1; i++) { + nmb = rte_pktmbuf_alloc(rxq->mpool); + if (!nmb) + break; + rxq->sw_ring[i] = nmb; + } + if (i < rxq->nb_rx_desc - 1) + return -ENOMEM; + } + + for (i = 0; i < rxq->nb_rx_desc; i++) { + if (i == rxq->nb_rx_desc - 1) + break; + nmb = rxq->sw_ring[i]; + rxq->rx_ring[i].buf_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb)); + rxq->rx_ring[i].buf_id = rte_cpu_to_le_16(i); + } + + rxq->nb_rx_hold = 0; + rxq->bufq_tail = rxq->nb_rx_desc - 1; + + rte_write32(rxq->bufq_tail, rxq->qrx_tail); + + return 0; +} + static int gve_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete) { @@ -206,7 +244,10 @@ gve_dev_start(struct rte_eth_dev *dev) rte_write32(rte_cpu_to_be_32(GVE_IRQ_MASK), rxq->ntfy_addr); - err = gve_refill_pages(rxq); + if (gve_is_gqi(priv)) + err = gve_refill_pages(rxq); + else + err = gve_refill_dqo(rxq); if (err) { PMD_DRV_LOG(ERR, "Failed to refill for RX"); goto err_rx; From patchwork Fri Feb 17 07:32:22 2023 Content-Type: text/plain; 
charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Junfeng Guo X-Patchwork-Id: 124111 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id ED08041CBC; Fri, 17 Feb 2023 08:39:23 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id C70BF42D17; Fri, 17 Feb 2023 08:39:12 +0100 (CET) Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by mails.dpdk.org (Postfix) with ESMTP id 9F45842D1A for ; Fri, 17 Feb 2023 08:39:11 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1676619551; x=1708155551; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=AhWCJEmWSRMwNO17L2Cb9F25sU/UFhb9a83uAFdBhM4=; b=ikyiu2JPV9P7GLV85YT1X3n4xaWw3xoysAcW4MHPoUU60ur+A75/wLrJ wbnElMSWrARQdXziqz4LPGfxZDAIR77G7URKfPnJQdf8iJ68eMFbarC3+ QySXyMMxTJ5WfFNs3sMftsQKb5pkjj6Vsx2gAQ/VO6/R9P0cKzBmQqLUH 4SZNdLJdD1T0NsequOn1jiKQQSb/eQXggZvN39XQpR0NbQN64M0ikt5es +vjGXB1iuL1iJUKSo4M+oAHd3iT21n0f1G47TciK8DvRubgfLHSnENKl+ NB5E/BioPr5ESQAPXPAf4fisw/YQ4UXtiOpKdaQ3E/mRsVgpPWWJB5tKJ g==; X-IronPort-AV: E=McAfee;i="6500,9779,10623"; a="418153035" X-IronPort-AV: E=Sophos;i="5.97,304,1669104000"; d="scan'208";a="418153035" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Feb 2023 23:39:11 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10623"; a="670458672" X-IronPort-AV: E=Sophos;i="5.97,304,1669104000"; d="scan'208";a="670458672" Received: from dpdk-jf-ntb-one.sh.intel.com ([10.67.111.104]) by orsmga002.jf.intel.com with ESMTP; 16 Feb 2023 23:39:07 -0800 From: Junfeng Guo To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com, beilei.xing@intel.com Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com, Junfeng Guo , Rushil Gupta , Jordan Kimbrough , Jeroen de Borst Subject: [RFC v3 04/10] net/gve: support queue release and stop for DQO Date: Fri, 17 Feb 2023 15:32:22 +0800 Message-Id: <20230217073228.340815-5-junfeng.guo@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230217073228.340815-1-junfeng.guo@intel.com> References: <20230130062642.3337239-1-junfeng.guo@intel.com> <20230217073228.340815-1-junfeng.guo@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add support for queue operations: - gve_tx_queue_release_dqo - gve_rx_queue_release_dqo - gve_stop_tx_queues_dqo - gve_stop_rx_queues_dqo Signed-off-by: Junfeng Guo Signed-off-by: Rushil Gupta Signed-off-by: Jordan Kimbrough Signed-off-by: Jeroen de Borst --- drivers/net/gve/gve_ethdev.c | 18 +++++++++--- drivers/net/gve/gve_ethdev.h | 12 ++++++++ drivers/net/gve/gve_rx.c | 5 +++- drivers/net/gve/gve_rx_dqo.c | 57 ++++++++++++++++++++++++++++++++++++ drivers/net/gve/gve_tx.c | 5 +++- drivers/net/gve/gve_tx_dqo.c | 55 ++++++++++++++++++++++++++++++++++ 6 files changed, 146 insertions(+), 6 deletions(-) diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c index 413696890f..efa121ca4d 100644 --- a/drivers/net/gve/gve_ethdev.c +++ b/drivers/net/gve/gve_ethdev.c @@ 
-292,11 +292,19 @@ gve_dev_close(struct rte_eth_dev *dev) PMD_DRV_LOG(ERR, "Failed to stop dev."); } - for (i = 0; i < dev->data->nb_tx_queues; i++) - gve_tx_queue_release(dev, i); + if (gve_is_gqi(priv)) { + for (i = 0; i < dev->data->nb_tx_queues; i++) + gve_tx_queue_release(dev, i); + + for (i = 0; i < dev->data->nb_rx_queues; i++) + gve_rx_queue_release(dev, i); + } else { + for (i = 0; i < dev->data->nb_tx_queues; i++) + gve_tx_queue_release_dqo(dev, i); - for (i = 0; i < dev->data->nb_rx_queues; i++) - gve_rx_queue_release(dev, i); + for (i = 0; i < dev->data->nb_rx_queues; i++) + gve_rx_queue_release_dqo(dev, i); + } gve_free_qpls(priv); rte_free(priv->adminq); @@ -470,6 +478,8 @@ static const struct eth_dev_ops gve_eth_dev_ops_dqo = { .dev_infos_get = gve_dev_info_get, .rx_queue_setup = gve_rx_queue_setup_dqo, .tx_queue_setup = gve_tx_queue_setup_dqo, + .rx_queue_release = gve_rx_queue_release_dqo, + .tx_queue_release = gve_tx_queue_release_dqo, .link_update = gve_link_update, .mtu_set = gve_dev_mtu_set, }; diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h index c4e5b8cb43..5cc57afdb9 100644 --- a/drivers/net/gve/gve_ethdev.h +++ b/drivers/net/gve/gve_ethdev.h @@ -364,4 +364,16 @@ gve_tx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id, uint16_t nb_desc, unsigned int socket_id, const struct rte_eth_txconf *conf); +void +gve_tx_queue_release_dqo(struct rte_eth_dev *dev, uint16_t qid); + +void +gve_rx_queue_release_dqo(struct rte_eth_dev *dev, uint16_t qid); + +void +gve_stop_tx_queues_dqo(struct rte_eth_dev *dev); + +void +gve_stop_rx_queues_dqo(struct rte_eth_dev *dev); + #endif /* _GVE_ETHDEV_H_ */ diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c index 66fbcf3930..e264bcadad 100644 --- a/drivers/net/gve/gve_rx.c +++ b/drivers/net/gve/gve_rx.c @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2022 Intel Corporation + * Copyright(C) 2022-2023 Intel Corporation */ #include "gve_ethdev.h" @@ -354,6 +354,9 @@ gve_stop_rx_queues(struct rte_eth_dev *dev) uint16_t i; int err; + if (!gve_is_gqi(hw)) + return gve_stop_rx_queues_dqo(dev); + err = gve_adminq_destroy_rx_queues(hw, dev->data->nb_rx_queues); if (err != 0) PMD_DRV_LOG(WARNING, "failed to destroy rxqs"); diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c index 9c412c1481..8236cd7b50 100644 --- a/drivers/net/gve/gve_rx_dqo.c +++ b/drivers/net/gve/gve_rx_dqo.c @@ -5,6 +5,38 @@ #include "gve_ethdev.h" #include "base/gve_adminq.h" +static inline void +gve_release_rxq_mbufs_dqo(struct gve_rx_queue *rxq) +{ + uint16_t i; + + for (i = 0; i < rxq->nb_rx_desc; i++) { + if (rxq->sw_ring[i]) { + rte_pktmbuf_free_seg(rxq->sw_ring[i]); + rxq->sw_ring[i] = NULL; + } + } + + rxq->nb_avail = rxq->nb_rx_desc; +} + +void +gve_rx_queue_release_dqo(struct rte_eth_dev *dev, uint16_t qid) +{ + struct gve_rx_queue *q = dev->data->rx_queues[qid]; + + if (q == NULL) + return; + + gve_release_rxq_mbufs_dqo(q); + rte_free(q->sw_ring); + rte_memzone_free(q->compl_ring_mz); + rte_memzone_free(q->mz); + rte_memzone_free(q->qres_mz); + q->qres = NULL; + rte_free(q); +} + static void gve_reset_rxq_dqo(struct gve_rx_queue *rxq) { @@ -54,6 +86,12 @@ gve_rx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id, } nb_desc = hw->rx_desc_cnt; + /* Free memory if needed */ + if (dev->data->rx_queues[queue_id]) { + gve_rx_queue_release_dqo(dev, queue_id); + dev->data->rx_queues[queue_id] = NULL; + } + /* Allocate the RX queue data structure. 
*/ rxq = rte_zmalloc_socket("gve rxq", sizeof(struct gve_rx_queue), @@ -152,3 +190,22 @@ gve_rx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id, rte_free(rxq); return err; } + +void +gve_stop_rx_queues_dqo(struct rte_eth_dev *dev) +{ + struct gve_priv *hw = dev->data->dev_private; + struct gve_rx_queue *rxq; + uint16_t i; + int err; + + err = gve_adminq_destroy_rx_queues(hw, dev->data->nb_rx_queues); + if (err != 0) + PMD_DRV_LOG(WARNING, "failed to destroy rxqs"); + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + rxq = dev->data->rx_queues[i]; + gve_release_rxq_mbufs_dqo(rxq); + gve_reset_rxq_dqo(rxq); + } +} diff --git a/drivers/net/gve/gve_tx.c b/drivers/net/gve/gve_tx.c index 9b41c59358..86f558d7a0 100644 --- a/drivers/net/gve/gve_tx.c +++ b/drivers/net/gve/gve_tx.c @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(C) 2022 Intel Corporation + * Copyright(C) 2022-2023 Intel Corporation */ #include "gve_ethdev.h" @@ -671,6 +671,9 @@ gve_stop_tx_queues(struct rte_eth_dev *dev) uint16_t i; int err; + if (!gve_is_gqi(hw)) + return gve_stop_tx_queues_dqo(dev); + err = gve_adminq_destroy_tx_queues(hw, dev->data->nb_tx_queues); if (err != 0) PMD_DRV_LOG(WARNING, "failed to destroy txqs"); diff --git a/drivers/net/gve/gve_tx_dqo.c b/drivers/net/gve/gve_tx_dqo.c index acf4ee2952..34f131cd7e 100644 --- a/drivers/net/gve/gve_tx_dqo.c +++ b/drivers/net/gve/gve_tx_dqo.c @@ -5,6 +5,36 @@ #include "gve_ethdev.h" #include "base/gve_adminq.h" +static inline void +gve_release_txq_mbufs_dqo(struct gve_tx_queue *txq) +{ + uint16_t i; + + for (i = 0; i < txq->sw_size; i++) { + if (txq->sw_ring[i]) { + rte_pktmbuf_free_seg(txq->sw_ring[i]); + txq->sw_ring[i] = NULL; + } + } +} + +void +gve_tx_queue_release_dqo(struct rte_eth_dev *dev, uint16_t qid) +{ + struct gve_tx_queue *q = dev->data->tx_queues[qid]; + + if (q == NULL) + return; + + gve_release_txq_mbufs_dqo(q); + rte_free(q->sw_ring); + rte_memzone_free(q->mz); + rte_memzone_free(q->compl_ring_mz); + rte_memzone_free(q->qres_mz); + q->qres = NULL; + rte_free(q); +} + static int check_tx_thresh_dqo(uint16_t nb_desc, uint16_t tx_rs_thresh, uint16_t tx_free_thresh) @@ -90,6 +120,12 @@ gve_tx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id, } nb_desc = hw->tx_desc_cnt; + /* Free memory if needed. */ + if (dev->data->tx_queues[queue_id]) { + gve_tx_queue_release_dqo(dev, queue_id); + dev->data->tx_queues[queue_id] = NULL; + } + /* Allocate the TX queue data structure. 
*/ txq = rte_zmalloc_socket("gve txq", sizeof(struct gve_tx_queue), @@ -182,3 +218,22 @@ gve_tx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id, rte_free(txq); return err; } + +void +gve_stop_tx_queues_dqo(struct rte_eth_dev *dev) +{ + struct gve_priv *hw = dev->data->dev_private; + struct gve_tx_queue *txq; + uint16_t i; + int err; + + err = gve_adminq_destroy_tx_queues(hw, dev->data->nb_tx_queues); + if (err != 0) + PMD_DRV_LOG(WARNING, "failed to destroy txqs"); + + for (i = 0; i < dev->data->nb_tx_queues; i++) { + txq = dev->data->tx_queues[i]; + gve_release_txq_mbufs_dqo(txq); + gve_reset_txq_dqo(txq); + } +}

From patchwork Fri Feb 17 07:32:23 2023
X-Patchwork-Submitter: Junfeng Guo
X-Patchwork-Id: 124112
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Junfeng Guo
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com, beilei.xing@intel.com
Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com, Junfeng Guo , Rushil Gupta , Jordan Kimbrough , Jeroen de Borst
Subject: [RFC v3 05/10] net/gve: support basic Tx data path for DQO
Date: Fri, 17 Feb 2023 15:32:23 +0800
Message-Id: <20230217073228.340815-6-junfeng.guo@intel.com>
In-Reply-To: <20230217073228.340815-1-junfeng.guo@intel.com>
References: <20230130062642.3337239-1-junfeng.guo@intel.com> <20230217073228.340815-1-junfeng.guo@intel.com>

Add basic Tx data path support for DQO.
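The completion-based cleaning this patch introduces can be summarized by the wrap-aware count below, a sketch of the arithmetic in gve_tx_clean_dqo(): a GVE_COMPL_TYPE_DQO_DESC completion carries the last hardware-consumed slot in its tag, and everything between the previous clean point and that tag is freed, modulo the ring size.

    /* Sketch: number of Tx descriptors freed by one descriptor-type
     * completion, accounting for the completion tag wrapping past 0.
     */
    static inline uint16_t
    gve_desc_clean_count_sketch(uint16_t last_cleaned, uint16_t compl_tag,
                                uint16_t nb_tx_desc)
    {
        if (last_cleaned > compl_tag)
            return nb_tx_desc - last_cleaned + compl_tag;
        return compl_tag - last_cleaned;
    }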
Signed-off-by: Junfeng Guo Signed-off-by: Rushil Gupta Signed-off-by: Jordan Kimbrough Signed-off-by: Jeroen de Borst --- drivers/net/gve/gve_ethdev.c | 1 + drivers/net/gve/gve_ethdev.h | 4 + drivers/net/gve/gve_tx_dqo.c | 141 +++++++++++++++++++++++++++++++++++ 3 files changed, 146 insertions(+) diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c index efa121ca4d..1197194e41 100644 --- a/drivers/net/gve/gve_ethdev.c +++ b/drivers/net/gve/gve_ethdev.c @@ -766,6 +766,7 @@ gve_dev_init(struct rte_eth_dev *eth_dev) eth_dev->tx_pkt_burst = gve_tx_burst; } else { eth_dev->dev_ops = &gve_eth_dev_ops_dqo; + eth_dev->tx_pkt_burst = gve_tx_burst_dqo; } eth_dev->data->mac_addrs = &priv->dev_addr; diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h index 5cc57afdb9..f39a0884f2 100644 --- a/drivers/net/gve/gve_ethdev.h +++ b/drivers/net/gve/gve_ethdev.h @@ -130,6 +130,7 @@ struct gve_tx_queue { uint8_t cur_gen_bit; uint32_t last_desc_cleaned; void **txqs; + uint16_t re_cnt; /* Only valid for DQO_RDA queue format */ struct gve_tx_queue *complq; @@ -376,4 +377,7 @@ gve_stop_tx_queues_dqo(struct rte_eth_dev *dev); void gve_stop_rx_queues_dqo(struct rte_eth_dev *dev); +uint16_t +gve_tx_burst_dqo(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); + #endif /* _GVE_ETHDEV_H_ */ diff --git a/drivers/net/gve/gve_tx_dqo.c b/drivers/net/gve/gve_tx_dqo.c index 34f131cd7e..af43ff870a 100644 --- a/drivers/net/gve/gve_tx_dqo.c +++ b/drivers/net/gve/gve_tx_dqo.c @@ -5,6 +5,147 @@ #include "gve_ethdev.h" #include "base/gve_adminq.h" +static inline void +gve_tx_clean_dqo(struct gve_tx_queue *txq) +{ + struct gve_tx_compl_desc *compl_ring; + struct gve_tx_compl_desc *compl_desc; + struct gve_tx_queue *aim_txq; + uint16_t nb_desc_clean; + struct rte_mbuf *txe; + uint16_t compl_tag; + uint16_t next; + + next = txq->complq_tail; + compl_ring = txq->compl_ring; + compl_desc = &compl_ring[next]; + + if (compl_desc->generation != txq->cur_gen_bit) + return; + + compl_tag = rte_le_to_cpu_16(compl_desc->completion_tag); + + aim_txq = txq->txqs[compl_desc->id]; + + switch (compl_desc->type) { + case GVE_COMPL_TYPE_DQO_DESC: + /* need to clean Descs from last_cleaned to compl_tag */ + if (aim_txq->last_desc_cleaned > compl_tag) + nb_desc_clean = aim_txq->nb_tx_desc - aim_txq->last_desc_cleaned + + compl_tag; + else + nb_desc_clean = compl_tag - aim_txq->last_desc_cleaned; + aim_txq->nb_free += nb_desc_clean; + aim_txq->last_desc_cleaned = compl_tag; + break; + case GVE_COMPL_TYPE_DQO_REINJECTION: + PMD_DRV_LOG(DEBUG, "GVE_COMPL_TYPE_DQO_REINJECTION !!!"); + /* FALLTHROUGH */ + case GVE_COMPL_TYPE_DQO_PKT: + txe = aim_txq->sw_ring[compl_tag]; + if (txe != NULL) { + rte_pktmbuf_free_seg(txe); + txe = NULL; + } + break; + case GVE_COMPL_TYPE_DQO_MISS: + rte_delay_us_sleep(1); + PMD_DRV_LOG(DEBUG, "GVE_COMPL_TYPE_DQO_MISS ignored !!!"); + break; + default: + PMD_DRV_LOG(ERR, "unknown completion type."); + return; + } + + next++; + if (next == txq->nb_tx_desc * DQO_TX_MULTIPLIER) { + next = 0; + txq->cur_gen_bit ^= 1; + } + + txq->complq_tail = next; +} + +uint16_t +gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) +{ + struct gve_tx_queue *txq = tx_queue; + volatile union gve_tx_desc_dqo *txr; + volatile union gve_tx_desc_dqo *txd; + struct rte_mbuf **sw_ring; + struct rte_mbuf *tx_pkt; + uint16_t mask, sw_mask; + uint16_t nb_to_clean; + uint16_t nb_tx = 0; + uint16_t nb_used; + uint16_t tx_id; + uint16_t sw_id; + + sw_ring = txq->sw_ring; + txr = 
txq->tx_ring; + + mask = txq->nb_tx_desc - 1; + sw_mask = txq->sw_size - 1; + tx_id = txq->tx_tail; + sw_id = txq->sw_tail; + + for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { + tx_pkt = tx_pkts[nb_tx]; + + if (txq->nb_free <= txq->free_thresh) { + nb_to_clean = DQO_TX_MULTIPLIER * txq->rs_thresh; + while (nb_to_clean--) + gve_tx_clean_dqo(txq); + } + + if (txq->nb_free < tx_pkt->nb_segs) + break; + + nb_used = tx_pkt->nb_segs; + + do { + txd = &txr[tx_id]; + + sw_ring[sw_id] = tx_pkt; + + /* fill Tx descriptor */ + txd->pkt.buf_addr = rte_cpu_to_le_64(rte_mbuf_data_iova(tx_pkt)); + txd->pkt.dtype = GVE_TX_PKT_DESC_DTYPE_DQO; + txd->pkt.compl_tag = rte_cpu_to_le_16(sw_id); + txd->pkt.buf_size = RTE_MIN(tx_pkt->data_len, GVE_TX_MAX_BUF_SIZE_DQO); + + /* size of desc_ring and sw_ring could be different */ + tx_id = (tx_id + 1) & mask; + sw_id = (sw_id + 1) & sw_mask; + + tx_pkt = tx_pkt->next; + } while (tx_pkt); + + /* fill the last descriptor with End of Packet (EOP) bit */ + txd->pkt.end_of_packet = 1; + + txq->nb_free -= nb_used; + txq->nb_used += nb_used; + } + + /* update the tail pointer if any packets were processed */ + if (nb_tx > 0) { + /* Request a descriptor completion on the last descriptor */ + txq->re_cnt += nb_tx; + if (txq->re_cnt >= GVE_TX_MIN_RE_INTERVAL) { + txd = &txr[(tx_id - 1) & mask]; + txd->pkt.report_event = true; + txq->re_cnt = 0; + } + + rte_write32(tx_id, txq->qtx_tail); + txq->tx_tail = tx_id; + txq->sw_tail = sw_id; + } + + return nb_tx; +} + static inline void gve_release_txq_mbufs_dqo(struct gve_tx_queue *txq) {

From patchwork Fri Feb 17 07:32:24 2023
X-Patchwork-Submitter: Junfeng Guo
X-Patchwork-Id: 124113
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Junfeng Guo
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com, beilei.xing@intel.com
Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com, Junfeng Guo , Rushil Gupta , Jordan Kimbrough , Jeroen de Borst
Subject: [RFC v3 06/10] net/gve: support basic Rx data path for DQO
Date: Fri, 17 Feb 2023 15:32:24 +0800
Message-Id: <20230217073228.340815-7-junfeng.guo@intel.com>
In-Reply-To: <20230217073228.340815-1-junfeng.guo@intel.com>
References: <20230130062642.3337239-1-junfeng.guo@intel.com> <20230217073228.340815-1-junfeng.guo@intel.com>

Add basic Rx data path support for DQO.

Signed-off-by: Junfeng Guo Signed-off-by: Rushil Gupta Signed-off-by: Jordan Kimbrough Signed-off-by: Jeroen de Borst --- drivers/net/gve/gve_ethdev.c | 1 + drivers/net/gve/gve_ethdev.h | 3 + drivers/net/gve/gve_rx_dqo.c | 128 +++++++++++++++++++++++++++++++++++ 3 files changed, 132 insertions(+) diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c index 1197194e41..1c9d272c2b 100644 --- a/drivers/net/gve/gve_ethdev.c +++ b/drivers/net/gve/gve_ethdev.c @@ -766,6 +766,7 @@ gve_dev_init(struct rte_eth_dev *eth_dev) eth_dev->tx_pkt_burst = gve_tx_burst; } else { eth_dev->dev_ops = &gve_eth_dev_ops_dqo; + eth_dev->rx_pkt_burst = gve_rx_burst_dqo; eth_dev->tx_pkt_burst = gve_tx_burst_dqo; } diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h index f39a0884f2..a8e0dd5f3d 100644 --- a/drivers/net/gve/gve_ethdev.h +++ b/drivers/net/gve/gve_ethdev.h @@ -377,6 +377,9 @@ gve_stop_tx_queues_dqo(struct rte_eth_dev *dev); void gve_stop_rx_queues_dqo(struct rte_eth_dev *dev); +uint16_t +gve_rx_burst_dqo(void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); + uint16_t gve_tx_burst_dqo(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c index 8236cd7b50..a281b237a4 100644 --- a/drivers/net/gve/gve_rx_dqo.c +++ b/drivers/net/gve/gve_rx_dqo.c @@ -5,6 +5,134 @@ #include "gve_ethdev.h" #include "base/gve_adminq.h" +static inline void +gve_rx_refill_dqo(struct gve_rx_queue *rxq) +{ + volatile struct gve_rx_desc_dqo *rx_buf_ring; + volatile struct gve_rx_desc_dqo *rx_buf_desc; + struct rte_mbuf *nmb[rxq->free_thresh]; + uint16_t nb_refill = rxq->free_thresh; + uint16_t nb_desc = rxq->nb_rx_desc; + uint16_t next_avail = rxq->bufq_tail; + struct rte_eth_dev *dev; + uint64_t dma_addr; + uint16_t delta; + int i; + + if (rxq->nb_rx_hold < rxq->free_thresh) + return; + + rx_buf_ring = rxq->rx_ring; + delta = nb_desc - next_avail; + if (unlikely(delta < nb_refill)) { + if (likely(rte_pktmbuf_alloc_bulk(rxq->mpool, nmb, delta) == 0)) { + for (i = 0; i < delta; i++) { + rx_buf_desc = &rx_buf_ring[next_avail + i]; + rxq->sw_ring[next_avail + i] = nmb[i]; + dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i])); + rx_buf_desc->header_buf_addr = 0; + rx_buf_desc->buf_addr = dma_addr; + } + nb_refill -= delta; + next_avail = 0; + rxq->nb_rx_hold -= delta; + } else { + dev = &rte_eth_devices[rxq->port_id]; + dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail; + PMD_DRV_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u", + rxq->port_id, rxq->queue_id); + return; + } + } + + if (nb_desc - next_avail >= nb_refill) { + if
(likely(rte_pktmbuf_alloc_bulk(rxq->mpool, nmb, nb_refill) == 0)) { + for (i = 0; i < nb_refill; i++) { + rx_buf_desc = &rx_buf_ring[next_avail + i]; + rxq->sw_ring[next_avail + i] = nmb[i]; + dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i])); + rx_buf_desc->header_buf_addr = 0; + rx_buf_desc->buf_addr = dma_addr; + } + next_avail += nb_refill; + rxq->nb_rx_hold -= nb_refill; + } else { + dev = &rte_eth_devices[rxq->port_id]; + dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail; + PMD_DRV_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u", + rxq->port_id, rxq->queue_id); + } + } + + rte_write32(next_avail, rxq->qrx_tail); + + rxq->bufq_tail = next_avail; +} + +uint16_t +gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) +{ + volatile struct gve_rx_compl_desc_dqo *rx_compl_ring; + volatile struct gve_rx_compl_desc_dqo *rx_desc; + struct gve_rx_queue *rxq; + struct rte_mbuf *rxm; + uint16_t rx_id_bufq; + uint16_t pkt_len; + uint16_t rx_id; + uint16_t nb_rx; + + nb_rx = 0; + rxq = rx_queue; + rx_id = rxq->rx_tail; + rx_id_bufq = rxq->next_avail; + rx_compl_ring = rxq->compl_ring; + + while (nb_rx < nb_pkts) { + rx_desc = &rx_compl_ring[rx_id]; + + /* check status */ + if (rx_desc->generation != rxq->cur_gen_bit) + break; + + if (unlikely(rx_desc->rx_error)) + continue; + + pkt_len = rx_desc->packet_len; + + rx_id++; + if (rx_id == rxq->nb_rx_desc) { + rx_id = 0; + rxq->cur_gen_bit ^= 1; + } + + rxm = rxq->sw_ring[rx_id_bufq]; + rx_id_bufq++; + if (rx_id_bufq == rxq->nb_rx_desc) + rx_id_bufq = 0; + rxq->nb_rx_hold++; + + rxm->pkt_len = pkt_len; + rxm->data_len = pkt_len; + rxm->port = rxq->port_id; + rxm->ol_flags = 0; + + rxm->ol_flags |= RTE_MBUF_F_RX_RSS_HASH; + rxm->hash.rss = rte_be_to_cpu_32(rx_desc->hash); + + rx_pkts[nb_rx++] = rxm; + } + + if (nb_rx > 0) { + rxq->rx_tail = rx_id; + if (rx_id_bufq != rxq->next_avail) + rxq->next_avail = rx_id_bufq; + + gve_rx_refill_dqo(rxq); + } + + return nb_rx; +} + static inline void gve_release_rxq_mbufs_dqo(struct gve_rx_queue *rxq) {

From patchwork Fri Feb 17 07:32:25 2023
X-Patchwork-Submitter: Junfeng Guo
X-Patchwork-Id: 124114
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Junfeng Guo
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com, beilei.xing@intel.com
Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com, Junfeng Guo, Rushil Gupta, Jordan Kimbrough, Jeroen de Borst
Subject: [RFC v3 07/10] net/gve: support basic stats for DQO
Date: Fri, 17 Feb 2023 15:32:25 +0800
Message-Id: <20230217073228.340815-8-junfeng.guo@intel.com>
In-Reply-To: <20230217073228.340815-1-junfeng.guo@intel.com>
References: <20230130062642.3337239-1-junfeng.guo@intel.com> <20230217073228.340815-1-junfeng.guo@intel.com>

Add basic stats support (packets, bytes, errors and mbuf allocation failures) for the DQO data path.

Signed-off-by: Junfeng Guo
Signed-off-by: Rushil Gupta
Signed-off-by: Jordan Kimbrough
Signed-off-by: Jeroen de Borst
---
 drivers/net/gve/gve_ethdev.c |  2 ++
 drivers/net/gve/gve_rx_dqo.c | 12 +++++++++++-
 drivers/net/gve/gve_tx_dqo.c |  6 ++++++
 3 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index 1c9d272c2b..2541738da1 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -481,6 +481,8 @@ static const struct eth_dev_ops gve_eth_dev_ops_dqo = {
     .rx_queue_release = gve_rx_queue_release_dqo,
     .tx_queue_release = gve_tx_queue_release_dqo,
     .link_update = gve_link_update,
+    .stats_get = gve_dev_stats_get,
+    .stats_reset = gve_dev_stats_reset,
     .mtu_set = gve_dev_mtu_set,
 };

diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c
index a281b237a4..2a540b1ba5 100644
--- a/drivers/net/gve/gve_rx_dqo.c
+++ b/drivers/net/gve/gve_rx_dqo.c
@@ -37,6 +37,7 @@ gve_rx_refill_dqo(struct gve_rx_queue *rxq)
             next_avail = 0;
             rxq->nb_rx_hold -= delta;
         } else {
+            rxq->no_mbufs += nb_desc - next_avail;
             dev = &rte_eth_devices[rxq->port_id];
             dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
             PMD_DRV_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
@@ -57,6 +58,7 @@ gve_rx_refill_dqo(struct gve_rx_queue *rxq)
             next_avail += nb_refill;
             rxq->nb_rx_hold -= nb_refill;
         } else {
+            rxq->no_mbufs += nb_desc - next_avail;
             dev = &rte_eth_devices[rxq->port_id];
             dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
             PMD_DRV_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
@@ -80,7 +82,9 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
     uint16_t pkt_len;
     uint16_t rx_id;
     uint16_t nb_rx;
+    uint64_t bytes;

+    bytes = 0;
     nb_rx = 0;
     rxq = rx_queue;
     rx_id = rxq->rx_tail;
@@ -94,8 +98,10 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
         if (rx_desc->generation != rxq->cur_gen_bit)
             break;

-        if (unlikely(rx_desc->rx_error))
+        if (unlikely(rx_desc->rx_error)) {
+            rxq->errors++;
             continue;
+        }

         pkt_len = rx_desc->packet_len;

@@ -120,6 +126,7 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
         rxm->hash.rss = rte_be_to_cpu_32(rx_desc->hash);

         rx_pkts[nb_rx++] = rxm;
+        bytes += pkt_len;
     }

     if (nb_rx > 0) {
@@ -128,6 +135,9 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
             rxq->next_avail = rx_id_bufq;

         gve_rx_refill_dqo(rxq);
+
+        rxq->packets += nb_rx;
+        rxq->bytes += bytes;
     }

     return nb_rx;

diff --git a/drivers/net/gve/gve_tx_dqo.c b/drivers/net/gve/gve_tx_dqo.c
index af43ff870a..450cf71a6b 100644
--- a/drivers/net/gve/gve_tx_dqo.c
+++ b/drivers/net/gve/gve_tx_dqo.c
@@ -80,10 +80,12 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
     uint16_t nb_used;
     uint16_t tx_id;
     uint16_t sw_id;
+    uint64_t bytes;

     sw_ring = txq->sw_ring;
     txr = txq->tx_ring;

+    bytes = 0;
     mask = txq->nb_tx_desc - 1;
     sw_mask = txq->sw_size - 1;
     tx_id = txq->tx_tail;
@@ -118,6 +120,7 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
             tx_id = (tx_id + 1) & mask;
             sw_id = (sw_id + 1) & sw_mask;

+            bytes += tx_pkt->pkt_len;
             tx_pkt = tx_pkt->next;
         } while (tx_pkt);

@@ -141,6 +144,9 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
         rte_write32(tx_id, txq->qtx_tail);
         txq->tx_tail = tx_id;
         txq->sw_tail = sw_id;
+
+        txq->packets += nb_tx;
+        txq->bytes += bytes;
     }

     return nb_tx;
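The per-queue counters added here are the usual raw material for an ethdev stats_get callback, which sums them into device-wide totals (and stats_reset simply zeroes them). A hedged sketch of that aggregation pattern, using illustrative types rather than the driver's real gve_rx_queue/gve_tx_queue:

#include <stdint.h>

/* Illustrative per-queue and device-level counter types. */
struct q_stats {
    uint64_t packets;
    uint64_t bytes;
    uint64_t errors;
};

struct eth_stats {
    uint64_t ipackets, ibytes, ierrors;
    uint64_t opackets, obytes;
};

/* stats_get pattern: device totals are just sums over the queues. */
static void
sum_queue_stats(struct eth_stats *out,
                const struct q_stats *rxq, uint16_t nb_rxq,
                const struct q_stats *txq, uint16_t nb_txq)
{
    uint16_t i;

    *out = (struct eth_stats){ 0 };
    for (i = 0; i < nb_rxq; i++) {
        out->ipackets += rxq[i].packets;
        out->ibytes += rxq[i].bytes;
        out->ierrors += rxq[i].errors;
    }
    for (i = 0; i < nb_txq; i++) {
        out->opackets += txq[i].packets;
        out->obytes += txq[i].bytes;
    }
}

Keeping the counters per queue means the hot Rx/Tx loops touch only queue-local memory; the cross-queue summation cost is paid once per stats query instead of once per packet.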
From patchwork Fri Feb 17 07:32:26 2023
From: Junfeng Guo
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com, beilei.xing@intel.com
Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com, Junfeng Guo, Rushil Gupta, Jordan Kimbrough, Jeroen de Borst
Subject: [RFC v3 08/10] net/gve: enable Tx checksum offload for DQO
Date: Fri, 17 Feb 2023 15:32:26 +0800
Message-Id: <20230217073228.340815-9-junfeng.guo@intel.com>
In-Reply-To: <20230217073228.340815-1-junfeng.guo@intel.com>
References: <20230130062642.3337239-1-junfeng.guo@intel.com> <20230217073228.340815-1-junfeng.guo@intel.com>

Enable Tx checksum offload whenever any L4 checksum flag (or the TSO flag) is set in the mbuf's ol_flags.

Signed-off-by: Junfeng Guo
Signed-off-by: Rushil Gupta
Signed-off-by: Jordan Kimbrough
Signed-off-by: Jeroen de Borst
---
 drivers/net/gve/gve_ethdev.h | 4 ++++
 drivers/net/gve/gve_tx_dqo.c | 5 +++++
 2 files changed, 9 insertions(+)

diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index a8e0dd5f3d..bca6e86ef0 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -38,6 +38,10 @@
 #define GVE_MAX_MTU RTE_ETHER_MTU
 #define GVE_MIN_MTU RTE_ETHER_MIN_MTU

+#define GVE_TX_CKSUM_OFFLOAD_MASK ( \
+        RTE_MBUF_F_TX_L4_MASK | \
+        RTE_MBUF_F_TX_TCP_SEG)
+
 /* A list of pages registered with the device during setup and used by a queue
  * as buffers
  */

diff --git a/drivers/net/gve/gve_tx_dqo.c b/drivers/net/gve/gve_tx_dqo.c
index 450cf71a6b..e925d6c3d0 100644
--- a/drivers/net/gve/gve_tx_dqo.c
+++ b/drivers/net/gve/gve_tx_dqo.c
@@ -77,6 +77,7 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
     uint16_t mask, sw_mask;
     uint16_t nb_to_clean;
     uint16_t nb_tx = 0;
+    uint64_t ol_flags;
     uint16_t nb_used;
     uint16_t tx_id;
     uint16_t sw_id;
@@ -103,6 +104,7 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
         if (txq->nb_free < tx_pkt->nb_segs)
             break;

+        ol_flags = tx_pkt->ol_flags;
         nb_used = tx_pkt->nb_segs;

         do {
@@ -127,6 +129,9 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
         /* fill the last descriptor with End of Packet (EOP) bit */
         txd->pkt.end_of_packet = 1;

+        if (ol_flags & GVE_TX_CKSUM_OFFLOAD_MASK)
+            txd->pkt.checksum_offload_enable = 1;
+
         txq->nb_free -= nb_used;
         txq->nb_used += nb_used;
     }
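The offload test above works because DPDK encodes all L4 checksum requests inside one mask of mbuf ol_flags, so a single AND covers TCP, UDP and SCTP checksums plus TSO. A small sketch of the same pattern with illustrative flag values (the real RTE_MBUF_F_TX_* constants differ):

#include <stdbool.h>
#include <stdint.h>

/* Illustrative flag bits; the real RTE_MBUF_F_TX_* values differ. */
#define F_TX_TCP_CKSUM   (UINT64_C(1) << 0)
#define F_TX_UDP_CKSUM   (UINT64_C(1) << 1)
#define F_TX_SCTP_CKSUM  (UINT64_C(1) << 2)
#define F_TX_L4_MASK     (F_TX_TCP_CKSUM | F_TX_UDP_CKSUM | F_TX_SCTP_CKSUM)
#define F_TX_TCP_SEG     (UINT64_C(1) << 3)   /* TSO */

#define TX_CKSUM_OFFLOAD_MASK (F_TX_L4_MASK | F_TX_TCP_SEG)

/* One AND decides whether the descriptor needs checksum offload. */
static bool
needs_cksum_offload(uint64_t ol_flags)
{
    return (ol_flags & TX_CKSUM_OFFLOAD_MASK) != 0;
}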
From patchwork Fri Feb 17 07:32:27 2023
From: Junfeng Guo
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com, beilei.xing@intel.com
Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com, Junfeng Guo, Rushil Gupta, Jeroen de Borst
Subject: [RFC v3 09/10] net/gve: support jumbo frame for GQI
Date: Fri, 17 Feb 2023 15:32:27 +0800
Message-Id: <20230217073228.340815-10-junfeng.guo@intel.com>
In-Reply-To: <20230217073228.340815-1-junfeng.guo@intel.com>
References: <20230130062642.3337239-1-junfeng.guo@intel.com> <20230217073228.340815-1-junfeng.guo@intel.com>

Add multi-segment mbuf support to enable jumbo frames on the GQI Rx path.

Signed-off-by: Rushil Gupta
Signed-off-by: Junfeng Guo
Signed-off-by: Jeroen de Borst
---
 drivers/net/gve/gve_ethdev.h |   8 ++
 drivers/net/gve/gve_rx.c     | 137 +++++++++++++++++++++++++----------
 2 files changed, 108 insertions(+), 37 deletions(-)

diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index bca6e86ef0..02b997312c 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -142,6 +142,13 @@ struct gve_tx_queue {
     uint8_t is_gqi_qpl;
 };

+struct gve_rx_ctx {
+    struct rte_mbuf *mbuf_head;
+    struct rte_mbuf *mbuf_tail;
+    uint16_t total_frags;
+    bool drop_pkt;
+};
+
 struct gve_rx_queue {
     volatile struct gve_rx_desc *rx_desc_ring;
     volatile union gve_rx_data_slot *rx_data_ring;
@@ -150,6 +157,7 @@ struct gve_rx_queue {
     uint64_t rx_ring_phys_addr;
     struct rte_mbuf **sw_ring;
     struct rte_mempool *mpool;
+    struct gve_rx_ctx ctx;

     uint16_t rx_tail;
     uint16_t nb_rx_desc;
diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c
index e264bcadad..ecef0c4a86 100644
--- a/drivers/net/gve/gve_rx.c
+++ b/drivers/net/gve/gve_rx.c
@@ -5,6 +5,8 @@
 #include "gve_ethdev.h"
 #include "base/gve_adminq.h"

+#define GVE_PKT_CONT_BIT_IS_SET(x) (GVE_RXF_PKT_CONT & (x))
+
 static inline void
 gve_rx_refill(struct gve_rx_queue *rxq)
 {
@@ -82,43 +84,72 @@ gve_rx_refill(struct gve_rx_queue *rxq)
     }
 }

-uint16_t
-gve_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+/*
+ * This function processes a single rte_mbuf and handles packet segmentation.
+ * In QPL mode it copies the packet data from the QPL pages registered with
+ * the device into the mbuf.
+ */
+static void
+gve_rx_mbuf(struct gve_rx_queue *rxq, struct rte_mbuf *rxe, uint16_t len,
+            uint16_t rx_id)
 {
-    volatile struct gve_rx_desc *rxr, *rxd;
-    struct gve_rx_queue *rxq = rx_queue;
-    uint16_t rx_id = rxq->rx_tail;
-    struct rte_mbuf *rxe;
-    uint16_t nb_rx, len;
-    uint64_t bytes = 0;
+    uint16_t padding = 0;
     uint64_t addr;
-    uint16_t i;
-
-    rxr = rxq->rx_desc_ring;
-    nb_rx = 0;
-    for (i = 0; i < nb_pkts; i++) {
-        rxd = &rxr[rx_id];
-        if (GVE_SEQNO(rxd->flags_seq) != rxq->expected_seqno)
-            break;
-
-        if (rxd->flags_seq & GVE_RXF_ERR) {
-            rxq->errors++;
-            continue;
-        }
-
-        len = rte_be_to_cpu_16(rxd->len) - GVE_RX_PAD;
-        rxe = rxq->sw_ring[rx_id];
-        if (rxq->is_gqi_qpl) {
-            addr = (uint64_t)(rxq->qpl->mz->addr) + rx_id * PAGE_SIZE + GVE_RX_PAD;
-            rte_memcpy((void *)((size_t)rxe->buf_addr + rxe->data_off),
-                       (void *)(size_t)addr, len);
-        }
+    rxe->data_len = len;
+    if (!rxq->ctx.mbuf_head) {
+        rxq->ctx.mbuf_head = rxe;
+        rxq->ctx.mbuf_tail = rxe;
+        rxe->nb_segs = 1;
         rxe->pkt_len = len;
         rxe->data_len = len;
         rxe->port = rxq->port_id;
         rxe->ol_flags = 0;
+        padding = GVE_RX_PAD;
+    } else {
+        rxq->ctx.mbuf_head->pkt_len += len;
+        rxq->ctx.mbuf_head->nb_segs += 1;
+        rxq->ctx.mbuf_tail->next = rxe;
+        rxq->ctx.mbuf_tail = rxe;
+    }
+    if (rxq->is_gqi_qpl) {
+        addr = (uint64_t)(rxq->qpl->mz->addr) + rx_id * PAGE_SIZE + padding;
+        rte_memcpy((void *)((size_t)rxe->buf_addr + rxe->data_off),
+                   (void *)(size_t)addr, len);
+    }
+}
+
+/*
+ * This function processes a single packet fragment associated with the
+ * passed packet descriptor and returns whether the fragment is the last
+ * fragment of a packet.
+ */
+static bool
+gve_rx(struct gve_rx_queue *rxq, volatile struct gve_rx_desc *rxd, uint16_t rx_id)
+{
+    bool is_last_frag = !GVE_PKT_CONT_BIT_IS_SET(rxd->flags_seq);
+    uint16_t frag_size = rte_be_to_cpu_16(rxd->len);
+    struct gve_rx_ctx *ctx = &rxq->ctx;
+    bool is_first_frag = ctx->total_frags == 0;
+    struct rte_mbuf *rxe;
+
+    if (ctx->drop_pkt)
+        goto finish_frag;
+
+    if (rxd->flags_seq & GVE_RXF_ERR) {
+        ctx->drop_pkt = true;
+        rxq->errors++;
+        goto finish_frag;
+    }
+
+    if (is_first_frag)
+        frag_size -= GVE_RX_PAD;
+
+    rxe = rxq->sw_ring[rx_id];
+    gve_rx_mbuf(rxq, rxe, frag_size, rx_id);
+    rxq->bytes += frag_size;
+
+    if (is_first_frag) {
         if (rxd->flags_seq & GVE_RXF_TCP)
             rxe->packet_type |= RTE_PTYPE_L4_TCP;
         if (rxd->flags_seq & GVE_RXF_UDP)
@@ -132,28 +163,60 @@ gve_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
             rxe->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
             rxe->hash.rss = rte_be_to_cpu_32(rxd->rss_hash);
         }
+    }

-        rxq->expected_seqno = gve_next_seqno(rxq->expected_seqno);
+finish_frag:
+    ctx->total_frags++;
+    return is_last_frag;
+}
+
+static void
+gve_rx_ctx_clear(struct gve_rx_ctx *ctx)
+{
+    ctx->mbuf_head = NULL;
+    ctx->mbuf_tail = NULL;
+    ctx->drop_pkt = false;
+    ctx->total_frags = 0;
+}
+
+uint16_t
+gve_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+    volatile struct gve_rx_desc *rxr, *rxd;
+    struct gve_rx_queue *rxq = rx_queue;
+    struct gve_rx_ctx *ctx = &rxq->ctx;
+    uint16_t rx_id = rxq->rx_tail;
+    uint16_t nb_rx;
+
+    rxr = rxq->rx_desc_ring;
+    nb_rx = 0;
+
+    while (nb_rx < nb_pkts) {
+        rxd = &rxr[rx_id];
+        if (GVE_SEQNO(rxd->flags_seq) != rxq->expected_seqno)
+            break;
+
+        if (gve_rx(rxq, rxd, rx_id)) {
+            if (!ctx->drop_pkt)
+                rx_pkts[nb_rx++] = ctx->mbuf_head;
+            rxq->nb_avail += ctx->total_frags;
+            gve_rx_ctx_clear(ctx);
+        }

         rx_id++;
         if (rx_id == rxq->nb_rx_desc)
             rx_id = 0;

-        rx_pkts[nb_rx] = rxe;
-        bytes += len;
-        nb_rx++;
+        rxq->expected_seqno = gve_next_seqno(rxq->expected_seqno);
     }

-    rxq->nb_avail += nb_rx;
     rxq->rx_tail = rx_id;

     if (rxq->nb_avail > rxq->free_thresh)
         gve_rx_refill(rxq);

-    if (nb_rx) {
+    if (nb_rx)
         rxq->packets += nb_rx;
-        rxq->bytes += bytes;
-    }

     return nb_rx;
 }
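The reassembly context above carries head/tail pointers across bursts, so a packet whose fragments straddle two polls is still chained correctly. A compact sketch of that head/tail chaining, using a simplified segment type in place of rte_mbuf:

#include <stddef.h>
#include <stdint.h>

/* Simplified segment type; rte_mbuf carries the same linkage fields. */
struct seg {
    struct seg *next;
    uint16_t data_len;   /* bytes in this segment */
    uint32_t pkt_len;    /* packet total, valid on the head only */
    uint16_t nb_segs;    /* valid on the head only */
};

struct chain_ctx {
    struct seg *head;
    struct seg *tail;
};

/* Append one received fragment to the packet under reassembly. */
static void
chain_frag(struct chain_ctx *ctx, struct seg *s, uint16_t len)
{
    s->data_len = len;
    s->next = NULL;
    if (ctx->head == NULL) {          /* first fragment starts a packet */
        s->pkt_len = len;
        s->nb_segs = 1;
        ctx->head = ctx->tail = s;
    } else {                          /* continuation fragment */
        ctx->head->pkt_len += len;
        ctx->head->nb_segs++;
        ctx->tail->next = s;
        ctx->tail = s;
    }
}

Only the head segment's totals are maintained, which is why the burst loop hands the application ctx->mbuf_head and resets the context once the last fragment arrives.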
From patchwork Fri Feb 17 07:32:28 2023
From: Junfeng Guo
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com, beilei.xing@intel.com
Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com, Junfeng Guo, Rushil Gupta, Joshua Washington, Jordan Kimbrough, Jeroen de Borst
Subject: [RFC v3 10/10] net/gve: add AdminQ command to verify driver compatibility
Date: Fri, 17 Feb 2023 15:32:28 +0800
Message-Id: <20230217073228.340815-11-junfeng.guo@intel.com>
In-Reply-To: <20230217073228.340815-1-junfeng.guo@intel.com>
References: <20230130062642.3337239-1-junfeng.guo@intel.com> <20230217073228.340815-1-junfeng.guo@intel.com>

Check whether the driver is compatible with the device it is presented with. Populate the gve_driver_info fields to report DPDK as the OS type and the DPDK RTE version as the OS version, reserving the driver_version fields for a GVE driver version based on supported features.
Signed-off-by: Rushil Gupta
Signed-off-by: Joshua Washington
Signed-off-by: Jordan Kimbrough
Signed-off-by: Junfeng Guo
Signed-off-by: Jeroen de Borst
---
 drivers/net/gve/base/gve.h        |  3 --
 drivers/net/gve/base/gve_adminq.c | 19 +++++++++
 drivers/net/gve/base/gve_adminq.h | 49 ++++++++++++++++++++++-
 drivers/net/gve/base/gve_osdep.h  | 34 ++++++++++++++++
 drivers/net/gve/gve_ethdev.c      | 66 +++++++++++++++++++++++++------
 drivers/net/gve/gve_ethdev.h      |  1 +
 drivers/net/gve/gve_version.c     | 14 +++++++
 drivers/net/gve/gve_version.h     | 25 ++++++++++++
 drivers/net/gve/meson.build       |  1 +
 9 files changed, 197 insertions(+), 15 deletions(-)
 create mode 100644 drivers/net/gve/gve_version.c
 create mode 100644 drivers/net/gve/gve_version.h

diff --git a/drivers/net/gve/base/gve.h b/drivers/net/gve/base/gve.h
index 22d175910d..52ffc057a2 100644
--- a/drivers/net/gve/base/gve.h
+++ b/drivers/net/gve/base/gve.h
@@ -9,9 +9,6 @@
 #include "gve_desc.h"
 #include "gve_desc_dqo.h"

-#define GVE_VERSION "1.3.0"
-#define GVE_VERSION_PREFIX "GVE-"
-
 #ifndef GOOGLE_VENDOR_ID
 #define GOOGLE_VENDOR_ID 0x1ae0
 #endif

diff --git a/drivers/net/gve/base/gve_adminq.c b/drivers/net/gve/base/gve_adminq.c
index 650d520e3d..380bccf18a 100644
--- a/drivers/net/gve/base/gve_adminq.c
+++ b/drivers/net/gve/base/gve_adminq.c
@@ -401,6 +401,9 @@ static int gve_adminq_issue_cmd(struct gve_priv *priv,
     case GVE_ADMINQ_GET_PTYPE_MAP:
         priv->adminq_get_ptype_map_cnt++;
         break;
+    case GVE_ADMINQ_VERIFY_DRIVER_COMPATIBILITY:
+        priv->adminq_verify_driver_compatibility_cnt++;
+        break;
     default:
         PMD_DRV_LOG(ERR, "unknown AQ command opcode %d", opcode);
     }
@@ -859,6 +862,22 @@ int gve_adminq_report_stats(struct gve_priv *priv, u64 stats_report_len,
     return gve_adminq_execute_cmd(priv, &cmd);
 }

+int gve_adminq_verify_driver_compatibility(struct gve_priv *priv,
+                                           u64 driver_info_len,
+                                           dma_addr_t driver_info_addr)
+{
+    union gve_adminq_command cmd;
+
+    memset(&cmd, 0, sizeof(cmd));
+    cmd.opcode = cpu_to_be32(GVE_ADMINQ_VERIFY_DRIVER_COMPATIBILITY);
+    cmd.verify_driver_compatibility = (struct gve_adminq_verify_driver_compatibility) {
+        .driver_info_len = cpu_to_be64(driver_info_len),
+        .driver_info_addr = cpu_to_be64(driver_info_addr),
+    };
+
+    return gve_adminq_execute_cmd(priv, &cmd);
+}
+
 int gve_adminq_report_link_speed(struct gve_priv *priv)
 {
     struct gve_dma_mem link_speed_region_dma_mem;
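Each admin command follows the same shape: a zeroed 64-byte command slot, a big-endian opcode, and, for commands with a payload, the length and IOVA of a DMA-visible buffer the device reads. A hedged sketch of that framing with illustrative type names (the byte-wise stores keep it portable across host endianness):

#include <stdint.h>
#include <string.h>

/* Store a 64-bit value big-endian into a wire buffer, portably. */
static void
put_be64(uint8_t *dst, uint64_t v)
{
    int i;

    for (i = 0; i < 8; i++)
        dst[i] = (uint8_t)(v >> (56 - 8 * i));
}

/* Illustrative 64-byte command slot: opcode plus payload length/address. */
struct cmd_slot {
    uint8_t opcode_be[4];
    uint8_t rsvd[4];
    uint8_t info_len_be[8];
    uint8_t info_addr_be[8];
    uint8_t pad[40];
};

static void
build_verify_cmd(struct cmd_slot *c, uint64_t len, uint64_t iova)
{
    memset(c, 0, sizeof(*c));   /* unused fields must read back as zero */
    c->opcode_be[3] = 0xF;      /* VERIFY_DRIVER_COMPATIBILITY */
    put_be64(c->info_len_be, len);
    put_be64(c->info_addr_be, iova);
}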
diff --git a/drivers/net/gve/base/gve_adminq.h b/drivers/net/gve/base/gve_adminq.h
index 05550119de..89722eef7a 100644
--- a/drivers/net/gve/base/gve_adminq.h
+++ b/drivers/net/gve/base/gve_adminq.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: MIT
  * Google Virtual Ethernet (gve) driver
- * Copyright (C) 2015-2022 Google, Inc.
+ * Copyright (C) 2015-2023 Google, Inc.
  */

 #ifndef _GVE_ADMINQ_H
@@ -23,6 +23,7 @@ enum gve_adminq_opcodes {
     GVE_ADMINQ_REPORT_STATS                 = 0xC,
     GVE_ADMINQ_REPORT_LINK_SPEED            = 0xD,
     GVE_ADMINQ_GET_PTYPE_MAP                = 0xE,
+    GVE_ADMINQ_VERIFY_DRIVER_COMPATIBILITY  = 0xF,
 };

 /* Admin queue status codes */
@@ -146,6 +147,47 @@ enum gve_sup_feature_mask {

 #define GVE_DEV_OPT_LEN_GQI_RAW_ADDRESSING 0x0

+enum gve_driver_capability {
+    gve_driver_capability_gqi_qpl = 0,
+    gve_driver_capability_gqi_rda = 1,
+    gve_driver_capability_dqo_qpl = 2, /* reserved for future use */
+    gve_driver_capability_dqo_rda = 3,
+};
+
+#define GVE_CAP1(a) BIT((int)a)
+#define GVE_CAP2(a) BIT(((int)a) - 64)
+#define GVE_CAP3(a) BIT(((int)a) - 128)
+#define GVE_CAP4(a) BIT(((int)a) - 192)
+
+#define GVE_DRIVER_CAPABILITY_FLAGS1 \
+    (GVE_CAP1(gve_driver_capability_gqi_qpl) | \
+     GVE_CAP1(gve_driver_capability_gqi_rda) | \
+     GVE_CAP1(gve_driver_capability_dqo_rda))
+
+#define GVE_DRIVER_CAPABILITY_FLAGS2 0x0
+#define GVE_DRIVER_CAPABILITY_FLAGS3 0x0
+#define GVE_DRIVER_CAPABILITY_FLAGS4 0x0
+
+struct gve_driver_info {
+    u8 os_type;    /* 0x05 = DPDK */
+    u8 driver_major;
+    u8 driver_minor;
+    u8 driver_sub;
+    __be32 os_version_major;
+    __be32 os_version_minor;
+    __be32 os_version_sub;
+    __be64 driver_capability_flags[4];
+    u8 os_version_str1[OS_VERSION_STRLEN];
+    u8 os_version_str2[OS_VERSION_STRLEN];
+};
+
+struct gve_adminq_verify_driver_compatibility {
+    __be64 driver_info_len;
+    __be64 driver_info_addr;
+};
+
+GVE_CHECK_STRUCT_LEN(16, gve_adminq_verify_driver_compatibility);
+
 struct gve_adminq_configure_device_resources {
     __be64 counter_array;
     __be64 irq_db_addr;
@@ -345,6 +387,8 @@ union gve_adminq_command {
         struct gve_adminq_report_stats report_stats;
         struct gve_adminq_report_link_speed report_link_speed;
         struct gve_adminq_get_ptype_map get_ptype_map;
+        struct gve_adminq_verify_driver_compatibility
+                verify_driver_compatibility;
     };
 };
 u8 reserved[64];
@@ -377,5 +421,8 @@ int gve_adminq_report_link_speed(struct gve_priv *priv);
 struct gve_ptype_lut;
 int gve_adminq_get_ptype_map_dqo(struct gve_priv *priv,
                                  struct gve_ptype_lut *ptype_lut);
+int gve_adminq_verify_driver_compatibility(struct gve_priv *priv,
+                                           u64 driver_info_len,
+                                           dma_addr_t driver_info_addr);

 #endif /* _GVE_ADMINQ_H */
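The GVE_CAP1..GVE_CAP4 macros above place each capability into one of four 64-bit flag words by its numeric ID: the word index is ID / 64 and the bit position is ID % 64. A one-function sketch of the same indexing, written generically:

#include <stdint.h>

/*
 * Capability IDs index a 256-bit bitmap carried as four 64-bit words
 * in gve_driver_info: word = id / 64, bit = id % 64.
 */
static void
set_capability(uint64_t flags[4], unsigned int cap_id)
{
    flags[cap_id / 64] |= UINT64_C(1) << (cap_id % 64);
}

With only four capabilities defined so far, everything lands in FLAGS1 and the other three words stay zero, but the layout leaves room for 256 features without changing the wire format.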
diff --git a/drivers/net/gve/base/gve_osdep.h b/drivers/net/gve/base/gve_osdep.h
index 71759d254f..e642c9557e 100644
--- a/drivers/net/gve/base/gve_osdep.h
+++ b/drivers/net/gve/base/gve_osdep.h
@@ -21,9 +21,14 @@
 #include
 #include
 #include
+#include

 #include "../gve_logs.h"

+#ifdef __linux__
+#include <sys/utsname.h>
+#endif
+
 typedef uint8_t u8;
 typedef uint16_t u16;
 typedef uint32_t u32;
@@ -73,6 +78,12 @@ typedef rte_iova_t dma_addr_t;

 #define msleep(ms) rte_delay_ms(ms)

+#define OS_VERSION_STRLEN 128
+struct os_version_string {
+    char os_version_str1[OS_VERSION_STRLEN];
+    char os_version_str2[OS_VERSION_STRLEN];
+};
+
 /* These macros are used to generate compilation errors if a struct/union
  * is not exactly the correct length. It gives a divide by zero error if
  * the struct/union is not of the correct size, otherwise it creates an
@@ -82,6 +93,11 @@ typedef rte_iova_t dma_addr_t;
     { gve_static_assert_##X = (n) / ((sizeof(struct X) == (n)) ? 1 : 0) }
 #define GVE_CHECK_UNION_LEN(n, X) enum gve_static_asset_enum_##X \
     { gve_static_assert_##X = (n) / ((sizeof(union X) == (n)) ? 1 : 0) }
+#ifndef LINUX_VERSION_MAJOR
+#define LINUX_VERSION_MAJOR (((LINUX_VERSION_CODE) >> 16) & 0xff)
+#define LINUX_VERSION_SUBLEVEL (((LINUX_VERSION_CODE) >> 8) & 0xff)
+#define LINUX_VERSION_PATCHLEVEL ((LINUX_VERSION_CODE) & 0xff)
+#endif

 static __rte_always_inline u8
 readb(volatile void *addr)
@@ -160,4 +176,22 @@ gve_free_dma_mem(struct gve_dma_mem *mem)
     mem->pa = 0;
 }

+
+static inline void
+populate_driver_version_strings(struct os_version_string *os_version_str)
+{
+#ifdef __linux__
+    struct utsname uts;
+
+    if (uname(&uts) >= 0) {
+        /* OS version */
+        rte_strscpy(os_version_str->os_version_str1, uts.release,
+                    sizeof(os_version_str->os_version_str1));
+        /* DPDK version */
+        rte_strscpy(os_version_str->os_version_str2, uts.version,
+                    sizeof(os_version_str->os_version_str2));
+    }
+#endif
+}
+
+
 #endif /* _GVE_OSDEP_H_ */

diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index 2541738da1..0fed32fbe0 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -5,21 +5,13 @@
 #include "gve_ethdev.h"
 #include "base/gve_adminq.h"
 #include "base/gve_register.h"
-
-const char gve_version_str[] = GVE_VERSION;
-static const char gve_version_prefix[] = GVE_VERSION_PREFIX;
+#include "base/gve_osdep.h"
+#include "gve_version.h"

 static void
 gve_write_version(uint8_t *driver_version_register)
 {
-    const char *c = gve_version_prefix;
-
-    while (*c) {
-        writeb(*c, driver_version_register);
-        c++;
-    }
-
-    c = gve_version_str;
+    const char *c = gve_version_string();

     while (*c) {
         writeb(*c, driver_version_register);
         c++;
@@ -314,6 +306,52 @@ gve_dev_close(struct rte_eth_dev *dev)
     return err;
 }

+static int
+gve_verify_driver_compatibility(struct gve_priv *priv)
+{
+    const struct rte_memzone *driver_info_bus;
+    struct gve_driver_info *driver_info;
+    int err;
+
+    driver_info_bus = rte_memzone_reserve_aligned("verify_driver_compatibility",
+                                                  sizeof(struct gve_driver_info),
+                                                  rte_socket_id(),
+                                                  RTE_MEMZONE_IOVA_CONTIG, PAGE_SIZE);
+    if (driver_info_bus == NULL) {
+        PMD_DRV_LOG(ERR,
+                    "Could not alloc memzone for driver compatibility");
+        return -ENOMEM;
+    }
+    driver_info = (struct gve_driver_info *)driver_info_bus->addr;
+    *driver_info = (struct gve_driver_info) {
+        .os_type = 5, /* DPDK */
+        .driver_major = GVE_VERSION_MAJOR,
+        .driver_minor = GVE_VERSION_MINOR,
+        .driver_sub = GVE_VERSION_SUB,
+        .os_version_major = cpu_to_be32(DPDK_VERSION_MAJOR),
+        .os_version_minor = cpu_to_be32(DPDK_VERSION_MINOR),
+        .os_version_sub = cpu_to_be32(DPDK_VERSION_SUB),
+        .driver_capability_flags = {
+            cpu_to_be64(GVE_DRIVER_CAPABILITY_FLAGS1),
+            cpu_to_be64(GVE_DRIVER_CAPABILITY_FLAGS2),
+            cpu_to_be64(GVE_DRIVER_CAPABILITY_FLAGS3),
+            cpu_to_be64(GVE_DRIVER_CAPABILITY_FLAGS4),
+        },
+    };
+
+    populate_driver_version_strings((struct os_version_string *)&driver_info->os_version_str1);
+
+    err = gve_adminq_verify_driver_compatibility(priv,
+                                                 sizeof(struct gve_driver_info),
+                                                 (dma_addr_t)driver_info_bus);
+
+    /* It's ok if the device doesn't support this */
+    if (err == -EOPNOTSUPP)
+        err = 0;
+
+    rte_memzone_free(driver_info_bus);
+    return err;
+}
+
 static int
 gve_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 {
@@ -631,6 +669,12 @@ gve_init_priv(struct gve_priv *priv, bool skip_describe_device)
         return err;
     }

+    err = gve_verify_driver_compatibility(priv);
+    if (err) {
+        PMD_DRV_LOG(ERR, "Could not verify driver compatibility: err=%d", err);
+        goto free_adminq;
+    }
+
     if (skip_describe_device)
         goto setup_device;
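Note the -EOPNOTSUPP handling above: the verify call is best-effort, so a device that predates the command is not treated as a failure. That optional-feature negotiation pattern looks roughly like this (a sketch with illustrative names, not the driver's API):

#include <errno.h>

/* Illustrative device-command callback; a real driver issues an adminq
 * command here. */
typedef int (*dev_cmd_fn)(void *ctx);

/* Best-effort feature negotiation: "not supported" is not a failure. */
static int
negotiate_optional(dev_cmd_fn cmd, void *ctx)
{
    int err = cmd(ctx);

    if (err == -EOPNOTSUPP)   /* older device: silently continue */
        err = 0;
    return err;
}

Any other error still aborts initialization, since it indicates a real admin-queue failure rather than a missing capability.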
diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index 02b997312c..9e220cf4dc 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -254,6 +254,7 @@ struct gve_priv {
     uint32_t adminq_report_stats_cnt;
     uint32_t adminq_report_link_speed_cnt;
     uint32_t adminq_get_ptype_map_cnt;
+    uint32_t adminq_verify_driver_compatibility_cnt;

     volatile uint32_t state_flags;

diff --git a/drivers/net/gve/gve_version.c b/drivers/net/gve/gve_version.c
new file mode 100644
index 0000000000..7eaa689d90
--- /dev/null
+++ b/drivers/net/gve/gve_version.c
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: MIT
+ * Google Virtual Ethernet (gve) driver
+ * Copyright (C) 2015-2023 Google, Inc.
+ */
+#include "gve_version.h"
+
+const char *gve_version_string(void)
+{
+    static char gve_version[10];
+    snprintf(gve_version, sizeof(gve_version), "%s%d.%d.%d",
+             GVE_VERSION_PREFIX, GVE_VERSION_MAJOR, GVE_VERSION_MINOR,
+             GVE_VERSION_SUB);
+    return gve_version;
+}

diff --git a/drivers/net/gve/gve_version.h b/drivers/net/gve/gve_version.h
new file mode 100644
index 0000000000..b1f626f95e
--- /dev/null
+++ b/drivers/net/gve/gve_version.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: MIT
+ * Google Virtual Ethernet (gve) driver
+ * Copyright (C) 2015-2023 Google, Inc.
+ */
+
+#ifndef _GVE_VERSION_H_
+#define _GVE_VERSION_H_
+
+#include <rte_version.h>
+
+#define GVE_VERSION_PREFIX "GVE-"
+#define GVE_VERSION_MAJOR 0
+#define GVE_VERSION_MINOR 9
+#define GVE_VERSION_SUB 0
+
+#define DPDK_VERSION_MAJOR (100 * RTE_VER_YEAR + RTE_VER_MONTH)
+#define DPDK_VERSION_MINOR RTE_VER_MINOR
+#define DPDK_VERSION_SUB RTE_VER_RELEASE
+
+
+const char *
+gve_version_string(void);
+
+
+#endif /* GVE_VERSION_H */

diff --git a/drivers/net/gve/meson.build b/drivers/net/gve/meson.build
index 8caee3714b..9f10e98bf6 100644
--- a/drivers/net/gve/meson.build
+++ b/drivers/net/gve/meson.build
@@ -14,5 +14,6 @@ sources = files(
         'gve_rx_dqo.c',
         'gve_tx_dqo.c',
         'gve_ethdev.c',
+        'gve_version.c',
 )
 includes += include_directories('base')
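For reference, the version string written byte-by-byte to the device register is composed at runtime from the prefix and the three numeric components. A standalone sketch of that composition (with a roomier buffer than the driver's 10 bytes, which exactly fits "GVE-0.9.0" plus the terminator):

#include <stdio.h>

#define VER_PREFIX "GVE-"

/*
 * Compose "<prefix><major>.<minor>.<sub>". A wider buffer than strictly
 * necessary leaves headroom if any component ever reaches two digits.
 */
static const char *
version_string(int major, int minor, int sub)
{
    static char buf[32];

    snprintf(buf, sizeof(buf), "%s%d.%d.%d",
             VER_PREFIX, major, minor, sub);
    return buf;
}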