From patchwork Fri Feb 17 07:32:20 2023
X-Patchwork-Submitter: Junfeng Guo
X-Patchwork-Id: 124109
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Junfeng Guo
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com,
 beilei.xing@intel.com
Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com,
 Junfeng Guo, Rushil Gupta, Jordan Kimbrough, Jeroen de Borst
Subject: [RFC v3 02/10] net/gve: add Rx queue setup for DQO
Date: Fri, 17 Feb 2023 15:32:20 +0800
Message-Id: <20230217073228.340815-3-junfeng.guo@intel.com>
In-Reply-To: <20230217073228.340815-1-junfeng.guo@intel.com>
References: <20230130062642.3337239-1-junfeng.guo@intel.com>
 <20230217073228.340815-1-junfeng.guo@intel.com>
List-Id: DPDK patches and discussions

Add support for the rx_queue_setup_dqo op, which sets up Rx queues for
the DQO queue format.
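The op is not called directly: applications reach it through
rte_eth_rx_queue_setup(), which dispatches via the DQO eth_dev_ops
table patched below. A minimal application-side sketch (port_id and
mbuf_pool are hypothetical names; note the driver overrides nb_rx_desc
with the hardware descriptor count):

#include <stdlib.h>
#include <rte_ethdev.h>
#include <rte_debug.h>

/* Hypothetical application code; the call below lands in
 * gve_rx_queue_setup_dqo() through gve_eth_dev_ops_dqo.
 */
struct rte_eth_rxconf rxconf = {
	/* 0 would make the driver fall back to GVE_DEFAULT_RX_FREE_THRESH */
	.rx_free_thresh = 32,
};
int ret = rte_eth_rx_queue_setup(port_id, 0 /* rx_queue_id */,
				 1024 /* overridden to hw->rx_desc_cnt */,
				 rte_eth_dev_socket_id(port_id),
				 &rxconf, mbuf_pool);
if (ret != 0)
	rte_exit(EXIT_FAILURE, "Rx queue setup failed: %d\n", ret);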
Signed-off-by: Junfeng Guo
Signed-off-by: Rushil Gupta
Signed-off-by: Jordan Kimbrough
Signed-off-by: Jeroen de Borst
---
 drivers/net/gve/gve_ethdev.c |   1 +
 drivers/net/gve/gve_ethdev.h |  14 ++++
 drivers/net/gve/gve_rx_dqo.c | 154 +++++++++++++++++++++++++++++++++++
 drivers/net/gve/meson.build  |   1 +
 4 files changed, 170 insertions(+)
 create mode 100644 drivers/net/gve/gve_rx_dqo.c

diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index a02a48ef11..0f55d028f5 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -427,6 +427,7 @@ static const struct eth_dev_ops gve_eth_dev_ops_dqo = {
 	.dev_stop = gve_dev_stop,
 	.dev_close = gve_dev_close,
 	.dev_infos_get = gve_dev_info_get,
+	.rx_queue_setup = gve_rx_queue_setup_dqo,
 	.tx_queue_setup = gve_tx_queue_setup_dqo,
 	.link_update = gve_link_update,
 	.mtu_set = gve_dev_mtu_set,
diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index c4b66acb0a..c4e5b8cb43 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -150,6 +150,7 @@ struct gve_rx_queue {
 	uint16_t nb_rx_desc;
 	uint16_t expected_seqno; /* the next expected seqno */
 	uint16_t free_thresh;
+	uint16_t nb_rx_hold;
 	uint32_t next_avail;
 	uint32_t nb_avail;
 
@@ -174,6 +175,14 @@ struct gve_rx_queue {
 	uint16_t ntfy_id;
 	uint16_t rx_buf_len;
 
+	/* newly added for DQO */
+	volatile struct gve_rx_desc_dqo *rx_ring;
+	struct gve_rx_compl_desc_dqo *compl_ring;
+	const struct rte_memzone *compl_ring_mz;
+	uint64_t compl_ring_phys_addr;
+	uint8_t cur_gen_bit;
+	uint16_t bufq_tail;
+
 	/* Only valid for DQO_RDA queue format */
 	struct gve_rx_queue *bufq;
 
@@ -345,6 +354,11 @@ gve_tx_burst(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
 
 /* Below functions are used for DQO */
+int
+gve_rx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
+		       uint16_t nb_desc, unsigned int socket_id,
+		       const struct rte_eth_rxconf *conf,
+		       struct rte_mempool *pool);
 int
 gve_tx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 		       uint16_t nb_desc, unsigned int socket_id,
diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c
new file mode 100644
index 0000000000..9c412c1481
--- /dev/null
+++ b/drivers/net/gve/gve_rx_dqo.c
@@ -0,0 +1,154 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2022-2023 Intel Corporation
+ */
+
+#include "gve_ethdev.h"
+#include "base/gve_adminq.h"
+
+static void
+gve_reset_rxq_dqo(struct gve_rx_queue *rxq)
+{
+	struct rte_mbuf **sw_ring;
+	uint32_t size, i;
+
+	if (rxq == NULL) {
+		PMD_DRV_LOG(ERR, "pointer to rxq is NULL");
+		return;
+	}
+
+	size = rxq->nb_rx_desc * sizeof(struct gve_rx_desc_dqo);
+	for (i = 0; i < size; i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+	size = rxq->nb_rx_desc * sizeof(struct gve_rx_compl_desc_dqo);
+	for (i = 0; i < size; i++)
+		((volatile char *)rxq->compl_ring)[i] = 0;
+
+	sw_ring = rxq->sw_ring;
+	for (i = 0; i < rxq->nb_rx_desc; i++)
+		sw_ring[i] = NULL;
+
+	rxq->bufq_tail = 0;
+	rxq->next_avail = 0;
+	rxq->nb_rx_hold = rxq->nb_rx_desc - 1;
+
+	rxq->rx_tail = 0;
+	rxq->cur_gen_bit = 1;
+}
+
+int
+gve_rx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
+		       uint16_t nb_desc, unsigned int socket_id,
+		       const struct rte_eth_rxconf *conf,
+		       struct rte_mempool *pool)
+{
+	struct gve_priv *hw = dev->data->dev_private;
+	const struct rte_memzone *mz;
+	struct gve_rx_queue *rxq;
+	uint16_t free_thresh;
+	int err = 0;
+
+	if (nb_desc != hw->rx_desc_cnt) {
+		PMD_DRV_LOG(WARNING, "gve doesn't support nb_desc config, use hw nb_desc %u.",
+			    hw->rx_desc_cnt);
+	}
+	nb_desc = hw->rx_desc_cnt;
+
+	/* Allocate the RX queue data structure. */
+	rxq = rte_zmalloc_socket("gve rxq",
+				 sizeof(struct gve_rx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (rxq == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to allocate memory for rx queue structure");
+		return -ENOMEM;
+	}
+
+	/* check free_thresh here */
+	free_thresh = conf->rx_free_thresh ?
+			conf->rx_free_thresh : GVE_DEFAULT_RX_FREE_THRESH;
+	if (free_thresh >= nb_desc) {
+		PMD_DRV_LOG(ERR, "rx_free_thresh (%u) must be less than nb_desc (%u).",
+			    free_thresh, nb_desc);
+		err = -EINVAL;
+		goto free_rxq;
+	}
+
+	rxq->nb_rx_desc = nb_desc;
+	rxq->free_thresh = free_thresh;
+	rxq->queue_id = queue_id;
+	rxq->port_id = dev->data->port_id;
+	rxq->ntfy_id = hw->num_ntfy_blks / 2 + queue_id;
+
+	rxq->mpool = pool;
+	rxq->hw = hw;
+	rxq->ntfy_addr = &hw->db_bar2[rte_be_to_cpu_32(hw->irq_dbs[rxq->ntfy_id].id)];
+
+	rxq->rx_buf_len =
+		rte_pktmbuf_data_room_size(rxq->mpool) - RTE_PKTMBUF_HEADROOM;
+
+	/* Allocate software ring */
+	rxq->sw_ring = rte_zmalloc_socket("gve rx sw ring",
+					  nb_desc * sizeof(struct rte_mbuf *),
+					  RTE_CACHE_LINE_SIZE, socket_id);
+	if (rxq->sw_ring == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to allocate memory for SW RX ring");
+		err = -ENOMEM;
+		goto free_rxq;
+	}
+
+	/* Allocate RX buffer queue */
+	mz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_id,
+				      nb_desc * sizeof(struct gve_rx_desc_dqo),
+				      PAGE_SIZE, socket_id);
+	if (mz == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to reserve DMA memory for RX buffer queue");
+		err = -ENOMEM;
+		goto free_rxq_sw_ring;
+	}
+	rxq->rx_ring = (struct gve_rx_desc_dqo *)mz->addr;
+	rxq->rx_ring_phys_addr = mz->iova;
+	rxq->mz = mz;
+
+	/* Allocate RX completion queue */
+	mz = rte_eth_dma_zone_reserve(dev, "compl_ring", queue_id,
+				      nb_desc * sizeof(struct gve_rx_compl_desc_dqo),
+				      PAGE_SIZE, socket_id);
+	if (mz == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to reserve DMA memory for RX completion queue");
+		err = -ENOMEM;
+		goto free_rxq_mz;
+	}
+	/* Zero all the descriptors in the ring */
+	memset(mz->addr, 0, nb_desc * sizeof(struct gve_rx_compl_desc_dqo));
+	rxq->compl_ring = (struct gve_rx_compl_desc_dqo *)mz->addr;
+	rxq->compl_ring_phys_addr = mz->iova;
+	rxq->compl_ring_mz = mz;
+
+	mz = rte_eth_dma_zone_reserve(dev, "rxq_res", queue_id,
+				      sizeof(struct gve_queue_resources),
+				      PAGE_SIZE, socket_id);
+	if (mz == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to reserve DMA memory for RX resource");
+		err = -ENOMEM;
+		goto free_rxq_cq_mz;
+	}
+	rxq->qres = (struct gve_queue_resources *)mz->addr;
+	rxq->qres_mz = mz;
+
+	gve_reset_rxq_dqo(rxq);
+
+	dev->data->rx_queues[queue_id] = rxq;
+
+	return 0;
+
+free_rxq_cq_mz:
+	rte_memzone_free(rxq->compl_ring_mz);
+free_rxq_mz:
+	rte_memzone_free(rxq->mz);
+free_rxq_sw_ring:
+	rte_free(rxq->sw_ring);
+free_rxq:
+	rte_free(rxq);
+	return err;
+}
diff --git a/drivers/net/gve/meson.build b/drivers/net/gve/meson.build
index a699432160..8caee3714b 100644
--- a/drivers/net/gve/meson.build
+++ b/drivers/net/gve/meson.build
@@ -11,6 +11,7 @@ sources = files(
         'base/gve_adminq.c',
         'gve_rx.c',
         'gve_tx.c',
+        'gve_rx_dqo.c',
         'gve_tx_dqo.c',
         'gve_ethdev.c',
 )