From patchwork Wed Jan 18 02:53:40 2023
From: Junfeng Guo <junfeng.guo@intel.com>
X-Patchwork-Id: 122226
X-Patchwork-Delegate: ferruh.yigit@amd.com
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com,
 beilei.xing@intel.com
Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com,
 Junfeng Guo, Rushil Gupta, Jordan Kimbrough, Jeroen de Borst
Subject: [RFC 1/8] net/gve: add Rx queue setup for DQO
Date: Wed, 18 Jan 2023 10:53:40 +0800
Message-Id: <20230118025347.1567078-2-junfeng.guo@intel.com>
In-Reply-To: <20230118025347.1567078-1-junfeng.guo@intel.com>
References: <20230118025347.1567078-1-junfeng.guo@intel.com>

Add support for the rx_queue_setup_dqo ops.
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Rushil Gupta
Signed-off-by: Jordan Kimbrough
Signed-off-by: Jeroen de Borst
---
 drivers/net/gve/gve_ethdev.c |   1 +
 drivers/net/gve/gve_ethdev.h |  14 ++++
 drivers/net/gve/gve_rx_dqo.c | 148 +++++++++++++++++++++++++++++++++++
 drivers/net/gve/meson.build  |   1 +
 4 files changed, 164 insertions(+)
 create mode 100644 drivers/net/gve/gve_rx_dqo.c

diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index d03f2fba92..26182b0422 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -366,6 +366,7 @@ gve_eth_dev_ops_override(struct eth_dev_ops *local_eth_dev_ops)
 {
 	/* override eth_dev ops for DQO */
 	local_eth_dev_ops->tx_queue_setup = gve_tx_queue_setup_dqo;
+	local_eth_dev_ops->rx_queue_setup = gve_rx_queue_setup_dqo;
 }
 
 static void
diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index 2dfcef6893..0adfc90554 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -145,6 +145,7 @@ struct gve_rx_queue {
 	uint16_t nb_rx_desc;
 	uint16_t expected_seqno; /* the next expected seqno */
 	uint16_t free_thresh;
+	uint16_t nb_rx_hold;
 	uint32_t next_avail;
 	uint32_t nb_avail;
 
@@ -163,6 +164,14 @@ struct gve_rx_queue {
 	uint16_t ntfy_id;
 	uint16_t rx_buf_len;
 
+	/* newly added for DQO */
+	volatile struct gve_rx_desc_dqo *rx_ring;
+	struct gve_rx_compl_desc_dqo *compl_ring;
+	const struct rte_memzone *compl_ring_mz;
+	uint64_t compl_ring_phys_addr;
+	uint8_t cur_gen_bit;
+	uint16_t bufq_tail;
+
 	/* Only valid for DQO_RDA queue format */
 	struct gve_rx_queue *bufq;
 
@@ -334,6 +343,11 @@ gve_tx_burst(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
 
 /* Below functions are used for DQO */
 
+int
+gve_rx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
+		       uint16_t nb_desc, unsigned int socket_id,
+		       const struct rte_eth_rxconf *conf,
+		       struct rte_mempool *pool);
 int
 gve_tx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 		       uint16_t nb_desc, unsigned int socket_id,
diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c
new file mode 100644
index 0000000000..e8a6d575fc
--- /dev/null
+++ b/drivers/net/gve/gve_rx_dqo.c
@@ -0,0 +1,148 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2022 Intel Corporation
+ */
+
+#include "gve_ethdev.h"
+#include "base/gve_adminq.h"
+
+static void
+gve_reset_rxq_dqo(struct gve_rx_queue *rxq)
+{
+	struct rte_mbuf **sw_ring;
+	uint32_t size, i;
+
+	if (rxq == NULL) {
+		PMD_DRV_LOG(ERR, "pointer to rxq is NULL");
+		return;
+	}
+
+	size = rxq->nb_rx_desc * sizeof(struct gve_rx_desc_dqo);
+	for (i = 0; i < size; i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+	size = rxq->nb_rx_desc * sizeof(struct gve_rx_compl_desc_dqo);
+	for (i = 0; i < size; i++)
+		((volatile char *)rxq->compl_ring)[i] = 0;
+
+	sw_ring = rxq->sw_ring;
+	for (i = 0; i < rxq->nb_rx_desc; i++)
+		sw_ring[i] = NULL;
+
+	rxq->bufq_tail = 0;
+	rxq->next_avail = 0;
+	rxq->nb_rx_hold = rxq->nb_rx_desc - 1;
+
+	rxq->rx_tail = 0;
+	rxq->cur_gen_bit = 1;
+}
+
+int
+gve_rx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
+		       uint16_t nb_desc, unsigned int socket_id,
+		       const struct rte_eth_rxconf *conf,
+		       struct rte_mempool *pool)
+{
+	struct gve_priv *hw = dev->data->dev_private;
+	const struct rte_memzone *mz;
+	struct gve_rx_queue *rxq;
+	uint16_t free_thresh;
+	int err = 0;
+
+	if (nb_desc != hw->rx_desc_cnt) {
+		PMD_DRV_LOG(WARNING, "gve doesn't support nb_desc config, use hw nb_desc %u.",
+			    hw->rx_desc_cnt);
+	}
+	nb_desc = hw->rx_desc_cnt;
+
+	/* Allocate the RX queue data structure. */
+	rxq = rte_zmalloc_socket("gve rxq",
+				 sizeof(struct gve_rx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (rxq == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to allocate memory for rx queue structure");
+		return -ENOMEM;
+	}
+
+	/* check free_thresh here */
+	free_thresh = conf->rx_free_thresh ?
+			conf->rx_free_thresh : GVE_DEFAULT_RX_FREE_THRESH;
+	if (free_thresh >= nb_desc) {
+		PMD_DRV_LOG(ERR, "rx_free_thresh (%u) must be less than nb_desc (%u).",
+			    free_thresh, rxq->nb_rx_desc);
+		err = -EINVAL;
+		goto err_rxq;
+	}
+
+	rxq->nb_rx_desc = nb_desc;
+	rxq->free_thresh = free_thresh;
+	rxq->queue_id = queue_id;
+	rxq->port_id = dev->data->port_id;
+	rxq->ntfy_id = hw->num_ntfy_blks / 2 + queue_id;
+
+	rxq->mpool = pool;
+	rxq->hw = hw;
+	rxq->ntfy_addr = &hw->db_bar2[rte_be_to_cpu_32(hw->irq_dbs[rxq->ntfy_id].id)];
+
+	rxq->rx_buf_len =
+		rte_pktmbuf_data_room_size(rxq->mpool) - RTE_PKTMBUF_HEADROOM;
+
+	/* Allocate software ring */
+	rxq->sw_ring = rte_zmalloc_socket("gve rx sw ring",
+					  nb_desc * sizeof(struct rte_mbuf *),
+					  RTE_CACHE_LINE_SIZE, socket_id);
+	if (rxq->sw_ring == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to allocate memory for SW RX ring");
+		err = -ENOMEM;
+		goto err_rxq;
+	}
+
+	/* Allocate RX buffer queue */
+	mz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_id,
+				      nb_desc * sizeof(struct gve_rx_desc_dqo),
+				      PAGE_SIZE, socket_id);
+	if (mz == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to reserve DMA memory for RX buffer queue");
+		err = -ENOMEM;
+		goto err_rxq;
+	}
+	rxq->rx_ring = (struct gve_rx_desc_dqo *)mz->addr;
+	rxq->rx_ring_phys_addr = mz->iova;
+	rxq->mz = mz;
+
+	/* Allocate RX completion queue */
+	mz = rte_eth_dma_zone_reserve(dev, "compl_ring", queue_id,
+				      nb_desc * sizeof(struct gve_rx_compl_desc_dqo),
+				      PAGE_SIZE, socket_id);
+	if (mz == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to reserve DMA memory for RX completion queue");
+		err = -ENOMEM;
+		goto err_rxq;
+	}
+	/* Zero all the descriptors in the ring */
+	memset(mz->addr, 0, nb_desc * sizeof(struct gve_rx_compl_desc_dqo));
+	rxq->compl_ring = (struct gve_rx_compl_desc_dqo *)mz->addr;
+	rxq->compl_ring_phys_addr = mz->iova;
+	rxq->compl_ring_mz = mz;
+
+	mz = rte_eth_dma_zone_reserve(dev, "rxq_res", queue_id,
+				      sizeof(struct gve_queue_resources),
+				      PAGE_SIZE, socket_id);
+	if (mz == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to reserve DMA memory for RX resource");
+		err = -ENOMEM;
+		goto err_rxq;
+	}
+	rxq->qres = (struct gve_queue_resources *)mz->addr;
+	rxq->qres_mz = mz;
+
+	gve_reset_rxq_dqo(rxq);
+
+	dev->data->rx_queues[queue_id] = rxq;
+
+	return 0;
+
+err_rxq:
+	rte_free(rxq);
+	return err;
+}
diff --git a/drivers/net/gve/meson.build b/drivers/net/gve/meson.build
index 2ddb0cbf9e..c9d87903f9 100644
--- a/drivers/net/gve/meson.build
+++ b/drivers/net/gve/meson.build
@@ -11,6 +11,7 @@ sources = files(
         'base/gve_adminq.c',
         'gve_rx.c',
         'gve_tx.c',
+        'gve_rx_dqo.c',
         'gve_tx_dqo.c',
         'gve_ethdev.c',
 )
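For orientation, the new hook is reached through the standard ethdev API rather than called directly. A minimal application-side sketch follows; the pool name, ring size and port/queue ids are illustrative assumptions, not part of the patch:

#include <errno.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Set up Rx queue 0 of a gve port. The PMD ignores the requested
 * nb_desc and uses the hardware value, and falls back to
 * GVE_DEFAULT_RX_FREE_THRESH when rxconf does not set rx_free_thresh. */
static int
setup_gve_rxq(uint16_t port_id, unsigned int socket_id)
{
	struct rte_mempool *mp;

	mp = rte_pktmbuf_pool_create("gve_rx_pool", 4096, 256, 0,
				     RTE_MBUF_DEFAULT_BUF_SIZE, socket_id);
	if (mp == NULL)
		return -ENOMEM;

	/* NULL rxconf selects the driver defaults reported in dev_info;
	 * this lands in gve_rx_queue_setup_dqo() on a DQO device. */
	return rte_eth_rx_queue_setup(port_id, 0, 1024, socket_id, NULL, mp);
}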
From patchwork Wed Jan 18 02:53:41 2023
From: Junfeng Guo <junfeng.guo@intel.com>
X-Patchwork-Id: 122227
X-Patchwork-Delegate: ferruh.yigit@amd.com
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com,
 beilei.xing@intel.com
Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com,
 Junfeng Guo, Rushil Gupta, Jordan Kimbrough, Jeroen de Borst
Subject: [RFC 2/8] net/gve: support device start and close for DQO
Date: Wed, 18 Jan 2023 10:53:41 +0800
Message-Id: <20230118025347.1567078-3-junfeng.guo@intel.com>
In-Reply-To: <20230118025347.1567078-1-junfeng.guo@intel.com>
References: <20230118025347.1567078-1-junfeng.guo@intel.com>

Add device start and close support for DQO.
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Rushil Gupta
Signed-off-by: Jordan Kimbrough
Signed-off-by: Jeroen de Borst
---
 drivers/net/gve/base/gve_adminq.c | 10 +++----
 drivers/net/gve/gve_ethdev.c      | 43 ++++++++++++++++++++++++++++++-
 2 files changed, 47 insertions(+), 6 deletions(-)

diff --git a/drivers/net/gve/base/gve_adminq.c b/drivers/net/gve/base/gve_adminq.c
index e745b709b2..e963f910a0 100644
--- a/drivers/net/gve/base/gve_adminq.c
+++ b/drivers/net/gve/base/gve_adminq.c
@@ -497,11 +497,11 @@ static int gve_adminq_create_tx_queue(struct gve_priv *priv, u32 queue_index)
 		cmd.create_tx_queue.queue_page_list_id = cpu_to_be32(qpl_id);
 	} else {
 		cmd.create_tx_queue.tx_ring_size =
-			cpu_to_be16(txq->nb_tx_desc);
+			cpu_to_be16(priv->tx_desc_cnt);
 		cmd.create_tx_queue.tx_comp_ring_addr =
-			cpu_to_be64(txq->complq->tx_ring_phys_addr);
+			cpu_to_be64(txq->compl_ring_phys_addr);
 		cmd.create_tx_queue.tx_comp_ring_size =
-			cpu_to_be16(priv->tx_compq_size);
+			cpu_to_be16(priv->tx_compq_size * DQO_TX_MULTIPLIER);
 	}
 
 	return gve_adminq_issue_cmd(priv, &cmd);
@@ -549,9 +549,9 @@ static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index)
 		cmd.create_rx_queue.rx_ring_size =
 			cpu_to_be16(priv->rx_desc_cnt);
 		cmd.create_rx_queue.rx_desc_ring_addr =
-			cpu_to_be64(rxq->rx_ring_phys_addr);
+			cpu_to_be64(rxq->compl_ring_phys_addr);
 		cmd.create_rx_queue.rx_data_ring_addr =
-			cpu_to_be64(rxq->bufq->rx_ring_phys_addr);
+			cpu_to_be64(rxq->rx_ring_phys_addr);
 		cmd.create_rx_queue.packet_buffer_size =
 			cpu_to_be16(rxq->rx_buf_len);
 		cmd.create_rx_queue.rx_buff_ring_size =
diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index 26182b0422..3543378978 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -78,6 +78,9 @@ gve_free_qpls(struct gve_priv *priv)
 	uint16_t nb_rxqs = priv->max_nb_rxq;
 	uint32_t i;
 
+	if (priv->queue_format != GVE_GQI_QPL_FORMAT)
+		return;
+
 	for (i = 0; i < nb_txqs + nb_rxqs; i++) {
 		if (priv->qpl[i].mz != NULL)
 			rte_memzone_free(priv->qpl[i].mz);
@@ -138,6 +141,41 @@ gve_refill_pages(struct gve_rx_queue *rxq)
 	return 0;
 }
 
+static int
+gve_refill_dqo(struct gve_rx_queue *rxq)
+{
+	struct rte_mbuf *nmb;
+	uint16_t i;
+	int diag;
+
+	diag = rte_pktmbuf_alloc_bulk(rxq->mpool, &rxq->sw_ring[0], rxq->nb_rx_desc);
+	if (diag < 0) {
+		for (i = 0; i < rxq->nb_rx_desc - 1; i++) {
+			nmb = rte_pktmbuf_alloc(rxq->mpool);
+			if (!nmb)
+				break;
+			rxq->sw_ring[i] = nmb;
+		}
+		if (i < rxq->nb_rx_desc - 1)
+			return -ENOMEM;
+	}
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		if (i == rxq->nb_rx_desc - 1)
+			break;
+		nmb = rxq->sw_ring[i];
+		rxq->rx_ring[i].buf_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+		rxq->rx_ring[i].buf_id = rte_cpu_to_le_16(i);
+	}
+
+	rxq->nb_rx_hold = 0;
+	rxq->bufq_tail = rxq->nb_rx_desc - 1;
+
+	rte_write32(rxq->bufq_tail, rxq->qrx_tail);
+
+	return 0;
+}
+
 static int
 gve_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
 {
@@ -206,7 +244,10 @@ gve_dev_start(struct rte_eth_dev *dev)
 
 		rte_write32(rte_cpu_to_be_32(GVE_IRQ_MASK), rxq->ntfy_addr);
 
-		err = gve_refill_pages(rxq);
+		if (gve_is_gqi(priv))
+			err = gve_refill_pages(rxq);
+		else
+			err = gve_refill_dqo(rxq);
 		if (err) {
 			PMD_DRV_LOG(ERR, "Failed to refill for RX");
 			goto err_rx;
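Note that gve_refill_dqo() posts buffers for all but one descriptor slot, so the buffer-queue tail never catches up with the head. From the application side this path is driven by the usual start sequence; a hedged sketch, with configuration values that are assumptions:

#include <rte_ethdev.h>

static int
start_gve_port(uint16_t port_id)
{
	struct rte_eth_conf port_conf = {0};	/* defaults suffice here */
	int ret;

	ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);
	if (ret != 0)
		return ret;

	/* ... Rx/Tx queue setup as in the earlier patches ... */

	/* gve_dev_start() unmasks the queue interrupts and calls
	 * gve_refill_pages() or gve_refill_dqo() per Rx queue. */
	return rte_eth_dev_start(port_id);
}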
From patchwork Wed Jan 18 02:53:42 2023
From: Junfeng Guo <junfeng.guo@intel.com>
X-Patchwork-Id: 122228
X-Patchwork-Delegate: ferruh.yigit@amd.com
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com,
 beilei.xing@intel.com
Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com,
 Junfeng Guo, Rushil Gupta, Jordan Kimbrough, Jeroen de Borst
Subject: [RFC 3/8] net/gve: support queue release and stop for DQO
Date: Wed, 18 Jan 2023 10:53:42 +0800
Message-Id: <20230118025347.1567078-4-junfeng.guo@intel.com>
In-Reply-To: <20230118025347.1567078-1-junfeng.guo@intel.com>
References: <20230118025347.1567078-1-junfeng.guo@intel.com>

Add support for queue operations:
 - gve_tx_queue_release_dqo
 - gve_rx_queue_release_dqo
 - gve_stop_tx_queues_dqo
 - gve_stop_rx_queues_dqo

Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Rushil Gupta
Signed-off-by: Jordan Kimbrough
Signed-off-by: Jeroen de Borst
---
 drivers/net/gve/gve_ethdev.c | 18 +++++++++---
 drivers/net/gve/gve_ethdev.h | 12 ++++++++
 drivers/net/gve/gve_rx.c     |  3 ++
 drivers/net/gve/gve_rx_dqo.c | 57 ++++++++++++++++++++++++++++++++++++
 drivers/net/gve/gve_tx.c     |  3 ++
 drivers/net/gve/gve_tx_dqo.c | 55 ++++++++++++++++++++++++++++++++++
 6 files changed, 144 insertions(+), 4 deletions(-)

diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index 3543378978..7c4be3a1cb 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -292,11 +292,19 @@ gve_dev_close(struct rte_eth_dev *dev)
 			PMD_DRV_LOG(ERR, "Failed to stop dev.");
 	}
 
-	for (i = 0; i < dev->data->nb_tx_queues; i++)
-		gve_tx_queue_release(dev, i);
+	if (gve_is_gqi(priv)) {
+		for (i = 0; i < dev->data->nb_tx_queues; i++)
+			gve_tx_queue_release(dev, i);
+
+		for (i = 0; i < dev->data->nb_rx_queues; i++)
+			gve_rx_queue_release(dev, i);
+	} else {
+		for (i = 0; i < dev->data->nb_tx_queues; i++)
+			gve_tx_queue_release_dqo(dev, i);
 
-	for (i = 0; i < dev->data->nb_rx_queues; i++)
-		gve_rx_queue_release(dev, i);
+		for (i = 0; i < dev->data->nb_rx_queues; i++)
+			gve_rx_queue_release_dqo(dev, i);
+	}
 
 	gve_free_qpls(priv);
 	rte_free(priv->adminq);
@@ -408,6 +416,8 @@ gve_eth_dev_ops_override(struct eth_dev_ops *local_eth_dev_ops)
 	/* override eth_dev ops for DQO */
 	local_eth_dev_ops->tx_queue_setup = gve_tx_queue_setup_dqo;
 	local_eth_dev_ops->rx_queue_setup = gve_rx_queue_setup_dqo;
+	local_eth_dev_ops->tx_queue_release = gve_tx_queue_release_dqo;
+	local_eth_dev_ops->rx_queue_release = gve_rx_queue_release_dqo;
 }
 
 static void
diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index 0adfc90554..93314f2db3 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -353,4 +353,16 @@ gve_tx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 		       uint16_t nb_desc, unsigned int socket_id,
 		       const struct rte_eth_txconf *conf);
 
+void
+gve_tx_queue_release_dqo(struct rte_eth_dev *dev, uint16_t qid);
+
+void
+gve_rx_queue_release_dqo(struct rte_eth_dev *dev, uint16_t qid);
+
+void
+gve_stop_tx_queues_dqo(struct rte_eth_dev *dev);
+
+void
+gve_stop_rx_queues_dqo(struct rte_eth_dev *dev);
+
 #endif /* _GVE_ETHDEV_H_ */
diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c
index 518c9d109c..9ba975c9b4 100644
--- a/drivers/net/gve/gve_rx.c
+++ b/drivers/net/gve/gve_rx.c
@@ -343,6 +343,9 @@ gve_stop_rx_queues(struct rte_eth_dev *dev)
 	uint16_t i;
 	int err;
 
+	if (!gve_is_gqi(hw))
+		return gve_stop_rx_queues_dqo(dev);
+
 	err = gve_adminq_destroy_rx_queues(hw, dev->data->nb_rx_queues);
 	if (err != 0)
 		PMD_DRV_LOG(WARNING, "failed to destroy rxqs");
diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c
index e8a6d575fc..aca6f8ea2d 100644
--- a/drivers/net/gve/gve_rx_dqo.c
+++ b/drivers/net/gve/gve_rx_dqo.c
@@ -5,6 +5,38 @@
 #include "gve_ethdev.h"
 #include "base/gve_adminq.h"
 
+static inline void
+gve_release_rxq_mbufs_dqo(struct gve_rx_queue *rxq)
+{
+	uint16_t i;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		if (rxq->sw_ring[i]) {
+			rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+			rxq->sw_ring[i] = NULL;
+		}
+	}
+
+	rxq->nb_avail = rxq->nb_rx_desc;
+}
+
+void
+gve_rx_queue_release_dqo(struct rte_eth_dev *dev, uint16_t qid)
+{
+	struct gve_rx_queue *q = dev->data->rx_queues[qid];
+
+	if (q == NULL)
+		return;
+
+	gve_release_rxq_mbufs_dqo(q);
+	rte_free(q->sw_ring);
+	rte_memzone_free(q->compl_ring_mz);
+	rte_memzone_free(q->mz);
+	rte_memzone_free(q->qres_mz);
+	q->qres = NULL;
+	rte_free(q);
+}
+
 static void
 gve_reset_rxq_dqo(struct gve_rx_queue *rxq)
 {
@@ -54,6 +86,12 @@ gve_rx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 	}
 	nb_desc = hw->rx_desc_cnt;
 
+	/* Free memory if needed */
+	if (dev->data->rx_queues[queue_id]) {
+		gve_rx_queue_release_dqo(dev, queue_id);
+		dev->data->rx_queues[queue_id] = NULL;
+	}
+
 	/* Allocate the RX queue data structure. */
 	rxq = rte_zmalloc_socket("gve rxq",
 				 sizeof(struct gve_rx_queue),
@@ -146,3 +184,22 @@ gve_rx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 	rte_free(rxq);
 	return err;
 }
+
+void
+gve_stop_rx_queues_dqo(struct rte_eth_dev *dev)
+{
+	struct gve_priv *hw = dev->data->dev_private;
+	struct gve_rx_queue *rxq;
+	uint16_t i;
+	int err;
+
+	err = gve_adminq_destroy_rx_queues(hw, dev->data->nb_rx_queues);
+	if (err != 0)
+		PMD_DRV_LOG(WARNING, "failed to destroy rxqs");
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		gve_release_rxq_mbufs_dqo(rxq);
+		gve_reset_rxq_dqo(rxq);
+	}
+}
diff --git a/drivers/net/gve/gve_tx.c b/drivers/net/gve/gve_tx.c
index bf4e8fea2c..0eb42b1216 100644
--- a/drivers/net/gve/gve_tx.c
+++ b/drivers/net/gve/gve_tx.c
@@ -658,6 +658,9 @@ gve_stop_tx_queues(struct rte_eth_dev *dev)
 	uint16_t i;
 	int err;
 
+	if (!gve_is_gqi(hw))
+		return gve_stop_tx_queues_dqo(dev);
+
 	err = gve_adminq_destroy_tx_queues(hw, dev->data->nb_tx_queues);
 	if (err != 0)
 		PMD_DRV_LOG(WARNING, "failed to destroy txqs");
diff --git a/drivers/net/gve/gve_tx_dqo.c b/drivers/net/gve/gve_tx_dqo.c
index 4f8bad31bb..e2e4153f27 100644
--- a/drivers/net/gve/gve_tx_dqo.c
+++ b/drivers/net/gve/gve_tx_dqo.c
@@ -5,6 +5,36 @@
 #include "gve_ethdev.h"
 #include "base/gve_adminq.h"
 
+static inline void
+gve_release_txq_mbufs_dqo(struct gve_tx_queue *txq)
+{
+	uint16_t i;
+
+	for (i = 0; i < txq->sw_size; i++) {
+		if (txq->sw_ring[i]) {
+			rte_pktmbuf_free_seg(txq->sw_ring[i]);
+			txq->sw_ring[i] = NULL;
+		}
+	}
+}
+
+void
+gve_tx_queue_release_dqo(struct rte_eth_dev *dev, uint16_t qid)
+{
+	struct gve_tx_queue *q = dev->data->tx_queues[qid];
+
+	if (q == NULL)
+		return;
+
+	gve_release_txq_mbufs_dqo(q);
+	rte_free(q->sw_ring);
+	rte_memzone_free(q->mz);
+	rte_memzone_free(q->compl_ring_mz);
+	rte_memzone_free(q->qres_mz);
+	q->qres = NULL;
+	rte_free(q);
+}
+
 static int
 check_tx_thresh_dqo(uint16_t nb_desc, uint16_t tx_rs_thresh,
 		    uint16_t tx_free_thresh)
@@ -90,6 +120,12 @@ gve_tx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 	}
 	nb_desc = hw->tx_desc_cnt;
 
+	/* Free memory if needed. */
+	if (dev->data->tx_queues[queue_id]) {
+		gve_tx_queue_release_dqo(dev, queue_id);
+		dev->data->tx_queues[queue_id] = NULL;
+	}
+
 	/* Allocate the TX queue data structure. */
 	txq = rte_zmalloc_socket("gve txq",
 				 sizeof(struct gve_tx_queue),
@@ -176,3 +212,22 @@ gve_tx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 	rte_free(txq);
 	return err;
 }
+
+void
+gve_stop_tx_queues_dqo(struct rte_eth_dev *dev)
+{
+	struct gve_priv *hw = dev->data->dev_private;
+	struct gve_tx_queue *txq;
+	uint16_t i;
+	int err;
+
+	err = gve_adminq_destroy_tx_queues(hw, dev->data->nb_tx_queues);
+	if (err != 0)
+		PMD_DRV_LOG(WARNING, "failed to destroy txqs");
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		gve_release_txq_mbufs_dqo(txq);
+		gve_reset_txq_dqo(txq);
+	}
+}
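With stop and release now wired into the ops table, teardown follows the usual ethdev order; a minimal sketch, error handling elided:

#include <rte_ethdev.h>

static void
shutdown_gve_port(uint16_t port_id)
{
	/* stop destroys the queues over the adminq and resets software
	 * state via gve_stop_tx_queues_dqo()/gve_stop_rx_queues_dqo() */
	(void)rte_eth_dev_stop(port_id);

	/* close then frees each queue through the new release_dqo callbacks */
	(void)rte_eth_dev_close(port_id);
}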
From patchwork Wed Jan 18 02:53:43 2023
From: Junfeng Guo <junfeng.guo@intel.com>
X-Patchwork-Id: 122229
X-Patchwork-Delegate: ferruh.yigit@amd.com
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com,
 beilei.xing@intel.com
Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com,
 Junfeng Guo, Rushil Gupta, Jordan Kimbrough, Jeroen de Borst
Subject: [RFC 4/8] net/gve: support basic Tx data path for DQO
Date: Wed, 18 Jan 2023 10:53:43 +0800
Message-Id: <20230118025347.1567078-5-junfeng.guo@intel.com>
In-Reply-To: <20230118025347.1567078-1-junfeng.guo@intel.com>
References: <20230118025347.1567078-1-junfeng.guo@intel.com>

Add basic Tx data path support for DQO.

Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Rushil Gupta
Signed-off-by: Jordan Kimbrough
Signed-off-by: Jeroen de Borst
---
 drivers/net/gve/gve_ethdev.c |   1 +
 drivers/net/gve/gve_ethdev.h |   4 +
 drivers/net/gve/gve_tx_dqo.c | 141 +++++++++++++++++++++++++++++++++++
 3 files changed, 146 insertions(+)

diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index 7c4be3a1cb..512a038968 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -703,6 +703,7 @@ gve_dev_init(struct rte_eth_dev *eth_dev)
 	} else {
 		/* override Tx/Rx setup/release eth_dev ops */
 		gve_eth_dev_ops_override(&gve_local_eth_dev_ops);
+		eth_dev->tx_pkt_burst = gve_tx_burst_dqo;
 	}
 
 	eth_dev->dev_ops = &gve_local_eth_dev_ops;
diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index 93314f2db3..ba657dd6c1 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -125,6 +125,7 @@ struct gve_tx_queue {
 	uint8_t cur_gen_bit;
 	uint32_t last_desc_cleaned;
 	void **txqs;
+	uint16_t re_cnt;
 
 	/* Only valid for DQO_RDA queue format */
 	struct gve_tx_queue *complq;
@@ -365,4 +366,7 @@ gve_stop_tx_queues_dqo(struct rte_eth_dev *dev);
 void
 gve_stop_rx_queues_dqo(struct rte_eth_dev *dev);
 
+uint16_t
+gve_tx_burst_dqo(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+
 #endif /* _GVE_ETHDEV_H_ */
diff --git a/drivers/net/gve/gve_tx_dqo.c b/drivers/net/gve/gve_tx_dqo.c
index e2e4153f27..3583c82246 100644
--- a/drivers/net/gve/gve_tx_dqo.c
+++ b/drivers/net/gve/gve_tx_dqo.c
@@ -5,6 +5,147 @@
 #include "gve_ethdev.h"
 #include "base/gve_adminq.h"
 
+static inline void
+gve_tx_clean_dqo(struct gve_tx_queue *txq)
+{
+	struct gve_tx_compl_desc *compl_ring;
+	struct gve_tx_compl_desc *compl_desc;
+	struct gve_tx_queue *aim_txq;
+	uint16_t nb_desc_clean;
+	struct rte_mbuf *txe;
+	uint16_t compl_tag;
+	uint16_t next;
+
+	next = txq->complq_tail;
+	compl_ring = txq->compl_ring;
+	compl_desc = &compl_ring[next];
+
+	if (compl_desc->generation != txq->cur_gen_bit)
+		return;
+
+	compl_tag = rte_le_to_cpu_16(compl_desc->completion_tag);
+
+	aim_txq = txq->txqs[compl_desc->id];
+
+	switch (compl_desc->type) {
+	case GVE_COMPL_TYPE_DQO_DESC:
+		/* need to clean Descs from last_cleaned to compl_tag */
+		if (aim_txq->last_desc_cleaned > compl_tag)
+			nb_desc_clean = aim_txq->nb_tx_desc - aim_txq->last_desc_cleaned +
+					compl_tag;
+		else
+			nb_desc_clean = compl_tag - aim_txq->last_desc_cleaned;
+		aim_txq->nb_free += nb_desc_clean;
+		aim_txq->last_desc_cleaned = compl_tag;
+		break;
+	case GVE_COMPL_TYPE_DQO_REINJECTION:
+		PMD_DRV_LOG(DEBUG, "GVE_COMPL_TYPE_DQO_REINJECTION !!!");
+		/* FALLTHROUGH */
+	case GVE_COMPL_TYPE_DQO_PKT:
+		txe = aim_txq->sw_ring[compl_tag];
+		if (txe != NULL) {
+			rte_pktmbuf_free_seg(txe);
+			txe = NULL;
+		}
+		break;
+	case GVE_COMPL_TYPE_DQO_MISS:
+		rte_delay_us_sleep(1);
+		PMD_DRV_LOG(DEBUG, "GVE_COMPL_TYPE_DQO_MISS ignored !!!");
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "unknown completion type.");
+		return;
+	}
+
+	next++;
+	if (next == txq->nb_tx_desc * DQO_TX_MULTIPLIER) {
+		next = 0;
+		txq->cur_gen_bit ^= 1;
+	}
+
+	txq->complq_tail = next;
+}
+
+uint16_t
+gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	struct gve_tx_queue *txq = tx_queue;
+	volatile union gve_tx_desc_dqo *txr;
+	volatile union gve_tx_desc_dqo *txd;
+	struct rte_mbuf **sw_ring;
+	struct rte_mbuf *tx_pkt;
+	uint16_t mask, sw_mask;
+	uint16_t nb_to_clean;
+	uint16_t nb_tx = 0;
+	uint16_t nb_used;
+	uint16_t tx_id;
+	uint16_t sw_id;
+
+	sw_ring = txq->sw_ring;
+	txr = txq->tx_ring;
+
+	mask = txq->nb_tx_desc - 1;
+	sw_mask = txq->sw_size - 1;
+	tx_id = txq->tx_tail;
+	sw_id = txq->sw_tail;
+
+	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+		tx_pkt = tx_pkts[nb_tx];
+
+		if (txq->nb_free <= txq->free_thresh) {
+			nb_to_clean = DQO_TX_MULTIPLIER * txq->rs_thresh;
+			while (nb_to_clean--)
+				gve_tx_clean_dqo(txq);
+		}
+
+		if (txq->nb_free < tx_pkt->nb_segs)
+			break;
+
+		nb_used = tx_pkt->nb_segs;
+
+		do {
+			txd = &txr[tx_id];
+
+			sw_ring[sw_id] = tx_pkt;
+
+			/* fill Tx descriptor */
+			txd->pkt.buf_addr = rte_cpu_to_le_64(rte_mbuf_data_iova(tx_pkt));
+			txd->pkt.dtype = GVE_TX_PKT_DESC_DTYPE_DQO;
+			txd->pkt.compl_tag = rte_cpu_to_le_16(sw_id);
+			txd->pkt.buf_size = RTE_MIN(tx_pkt->data_len, GVE_TX_MAX_BUF_SIZE_DQO);
+
+			/* size of desc_ring and sw_ring could be different */
+			tx_id = (tx_id + 1) & mask;
+			sw_id = (sw_id + 1) & sw_mask;
+
+			tx_pkt = tx_pkt->next;
+		} while (tx_pkt);
+
+		/* fill the last descriptor with End of Packet (EOP) bit */
+		txd->pkt.end_of_packet = 1;
+
+		txq->nb_free -= nb_used;
+		txq->nb_used += nb_used;
+	}
+
+	/* update the tail pointer if any packets were processed */
+	if (nb_tx > 0) {
+		/* Request a descriptor completion on the last descriptor */
+		txq->re_cnt += nb_tx;
+		if (txq->re_cnt >= GVE_TX_MIN_RE_INTERVAL) {
+			txd = &txr[(tx_id - 1) & mask];
+			txd->pkt.report_event = true;
+			txq->re_cnt = 0;
+		}
+
+		rte_write32(tx_id, txq->qtx_tail);
+		txq->tx_tail = tx_id;
+		txq->sw_tail = sw_id;
+	}
+
+	return nb_tx;
+}
+
 static inline void
 gve_release_txq_mbufs_dqo(struct gve_tx_queue *txq)
 {
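Since cleaning happens lazily inside the burst function once nb_free drops to free_thresh, a caller simply retries the standard API until everything is queued. A hedged usage sketch; the queue id and retry policy are assumptions:

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_pause.h>

static void
send_all(uint16_t port_id, struct rte_mbuf **pkts, uint16_t n)
{
	uint16_t sent = 0;

	while (sent < n) {
		/* dispatches to gve_tx_burst_dqo() on a DQO device */
		uint16_t done = rte_eth_tx_burst(port_id, 0,
						 pkts + sent, n - sent);

		if (done == 0)	/* ring full; the next call cleans first */
			rte_pause();
		sent += done;
	}
}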
From patchwork Wed Jan 18 02:53:44 2023
From: Junfeng Guo <junfeng.guo@intel.com>
X-Patchwork-Id: 122230
X-Patchwork-Delegate: ferruh.yigit@amd.com
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com,
 beilei.xing@intel.com
Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com,
 Junfeng Guo, Rushil Gupta, Jordan Kimbrough, Jeroen de Borst
Subject: [RFC 5/8] net/gve: support basic Rx data path for DQO
Date: Wed, 18 Jan 2023 10:53:44 +0800
Message-Id: <20230118025347.1567078-6-junfeng.guo@intel.com>
In-Reply-To: <20230118025347.1567078-1-junfeng.guo@intel.com>
References: <20230118025347.1567078-1-junfeng.guo@intel.com>

Add basic Rx data path support for DQO.

Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Rushil Gupta
Signed-off-by: Jordan Kimbrough
Signed-off-by: Jeroen de Borst
---
 drivers/net/gve/gve_ethdev.c |   1 +
 drivers/net/gve/gve_ethdev.h |   3 +
 drivers/net/gve/gve_rx_dqo.c | 128 +++++++++++++++++++++++++++++++++++
 3 files changed, 132 insertions(+)

diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index 512a038968..89e3f09c37 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -703,6 +703,7 @@ gve_dev_init(struct rte_eth_dev *eth_dev)
 	} else {
 		/* override Tx/Rx setup/release eth_dev ops */
 		gve_eth_dev_ops_override(&gve_local_eth_dev_ops);
+		eth_dev->rx_pkt_burst = gve_rx_burst_dqo;
 		eth_dev->tx_pkt_burst = gve_tx_burst_dqo;
 	}
 
diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index ba657dd6c1..d434f9babe 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -366,6 +366,9 @@ gve_stop_tx_queues_dqo(struct rte_eth_dev *dev);
 void
 gve_stop_rx_queues_dqo(struct rte_eth_dev *dev);
 
+uint16_t
+gve_rx_burst_dqo(void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+
 uint16_t
 gve_tx_burst_dqo(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
 
diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c
index aca6f8ea2d..244517ce5d 100644
--- a/drivers/net/gve/gve_rx_dqo.c
+++ b/drivers/net/gve/gve_rx_dqo.c
@@ -5,6 +5,134 @@
 #include "gve_ethdev.h"
 #include "base/gve_adminq.h"
 
+static inline void
+gve_rx_refill_dqo(struct gve_rx_queue *rxq)
+{
+	volatile struct gve_rx_desc_dqo *rx_buf_ring;
+	volatile struct gve_rx_desc_dqo *rx_buf_desc;
+	struct rte_mbuf *nmb[rxq->free_thresh];
+	uint16_t nb_refill = rxq->free_thresh;
+	uint16_t nb_desc = rxq->nb_rx_desc;
+	uint16_t next_avail = rxq->bufq_tail;
+	struct rte_eth_dev *dev;
+	uint64_t dma_addr;
+	uint16_t delta;
+	int i;
+
+	if (rxq->nb_rx_hold < rxq->free_thresh)
+		return;
+
+	rx_buf_ring = rxq->rx_ring;
+	delta = nb_desc - next_avail;
+	if (unlikely(delta < nb_refill)) {
+		if (likely(rte_pktmbuf_alloc_bulk(rxq->mpool, nmb, delta) == 0)) {
+			for (i = 0; i < delta; i++) {
+				rx_buf_desc = &rx_buf_ring[next_avail + i];
+				rxq->sw_ring[next_avail + i] = nmb[i];
+				dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
+				rx_buf_desc->header_buf_addr = 0;
+				rx_buf_desc->buf_addr = dma_addr;
+			}
+			nb_refill -= delta;
+			next_avail = 0;
+			rxq->nb_rx_hold -= delta;
+		} else {
+			dev = &rte_eth_devices[rxq->port_id];
+			dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
+			PMD_DRV_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
+				    rxq->port_id, rxq->queue_id);
+			return;
+		}
+	}
+
+	if (nb_desc - next_avail >= nb_refill) {
+		if (likely(rte_pktmbuf_alloc_bulk(rxq->mpool, nmb, nb_refill) == 0)) {
+			for (i = 0; i < nb_refill; i++) {
+				rx_buf_desc = &rx_buf_ring[next_avail + i];
+				rxq->sw_ring[next_avail + i] = nmb[i];
+				dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
+				rx_buf_desc->header_buf_addr = 0;
+				rx_buf_desc->buf_addr = dma_addr;
+			}
+			next_avail += nb_refill;
+			rxq->nb_rx_hold -= nb_refill;
+		} else {
+			dev = &rte_eth_devices[rxq->port_id];
+			dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
+			PMD_DRV_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
+				    rxq->port_id, rxq->queue_id);
+		}
+	}
+
+	rte_write32(next_avail, rxq->qrx_tail);
+
+	rxq->bufq_tail = next_avail;
+}
+
+uint16_t
+gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	volatile struct gve_rx_compl_desc_dqo *rx_compl_ring;
+	volatile struct gve_rx_compl_desc_dqo *rx_desc;
+	struct gve_rx_queue *rxq;
+	struct rte_mbuf *rxm;
+	uint16_t rx_id_bufq;
+	uint16_t pkt_len;
+	uint16_t rx_id;
+	uint16_t nb_rx;
+
+	nb_rx = 0;
+	rxq = rx_queue;
+	rx_id = rxq->rx_tail;
+	rx_id_bufq = rxq->next_avail;
+	rx_compl_ring = rxq->compl_ring;
+
+	while (nb_rx < nb_pkts) {
+		rx_desc = &rx_compl_ring[rx_id];
+
+		/* check status */
+		if (rx_desc->generation != rxq->cur_gen_bit)
+			break;
+
+		if (unlikely(rx_desc->rx_error))
+			continue;
+
+		pkt_len = rx_desc->packet_len;
+
+		rx_id++;
+		if (rx_id == rxq->nb_rx_desc) {
+			rx_id = 0;
+			rxq->cur_gen_bit ^= 1;
+		}
+
+		rxm = rxq->sw_ring[rx_id_bufq];
+		rx_id_bufq++;
+		if (rx_id_bufq == rxq->nb_rx_desc)
+			rx_id_bufq = 0;
+		rxq->nb_rx_hold++;
+
+		rxm->pkt_len = pkt_len;
+		rxm->data_len = pkt_len;
+		rxm->port = rxq->port_id;
+		rxm->ol_flags = 0;
+
+		rxm->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
+		rxm->hash.rss = rte_be_to_cpu_32(rx_desc->hash);
+
+		rx_pkts[nb_rx++] = rxm;
+	}
+
+	if (nb_rx > 0) {
+		rxq->rx_tail = rx_id;
+		if (rx_id_bufq != rxq->next_avail)
+			rxq->next_avail = rx_id_bufq;
+
+		gve_rx_refill_dqo(rxq);
+	}
+
+	return nb_rx;
+}
+
 static inline void
 gve_release_rxq_mbufs_dqo(struct gve_rx_queue *rxq)
 {
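The receive side is likewise consumed through the generic burst API; descriptors are recycled to the buffer queue by gve_rx_refill_dqo() once free_thresh mbufs are held back. A small polling sketch, with a burst size that is an assumption:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

static void
poll_gve_rx_once(uint16_t port_id)
{
	struct rte_mbuf *pkts[BURST_SIZE];
	uint16_t nb, i;

	/* dispatches to gve_rx_burst_dqo() on a DQO device */
	nb = rte_eth_rx_burst(port_id, 0, pkts, BURST_SIZE);

	for (i = 0; i < nb; i++)
		rte_pktmbuf_free(pkts[i]);	/* process, then free */
}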
From patchwork Wed Jan 18 02:53:45 2023
From: Junfeng Guo <junfeng.guo@intel.com>
X-Patchwork-Id: 122231
X-Patchwork-Delegate: ferruh.yigit@amd.com
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com,
 beilei.xing@intel.com
Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com,
 Junfeng Guo, Rushil Gupta, Jordan Kimbrough, Jeroen de Borst
Subject: [RFC 6/8] net/gve: support basic stats for DQO
Date: Wed, 18 Jan 2023 10:53:45 +0800
Message-Id: <20230118025347.1567078-7-junfeng.guo@intel.com>
In-Reply-To: <20230118025347.1567078-1-junfeng.guo@intel.com>
References: <20230118025347.1567078-1-junfeng.guo@intel.com>

Add basic stats support for DQO.

Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Rushil Gupta
Signed-off-by: Jordan Kimbrough
Signed-off-by: Jeroen de Borst
---
 drivers/net/gve/gve_ethdev.c | 60 ++++++++++++++++++++++++++++++++
 drivers/net/gve/gve_ethdev.h | 11 ++++++
 drivers/net/gve/gve_rx_dqo.c | 12 ++++++-
 drivers/net/gve/gve_tx_dqo.c |  6 ++++
 4 files changed, 88 insertions(+), 1 deletion(-)

diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index 89e3f09c37..fae00305f9 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -369,6 +369,64 @@ gve_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	return 0;
 }
 
+static int
+gve_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	uint16_t i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		struct gve_tx_queue *txq = dev->data->tx_queues[i];
+		if (txq == NULL)
+			continue;
+
+		stats->opackets += txq->packets;
+		stats->obytes += txq->bytes;
+		stats->oerrors += txq->errors;
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		struct gve_rx_queue *rxq = dev->data->rx_queues[i];
+		if (rxq == NULL)
+			continue;
+
+		stats->ipackets += rxq->packets;
+		stats->ibytes += rxq->bytes;
+		stats->ierrors += rxq->errors;
+		stats->rx_nombuf += rxq->no_mbufs;
+	}
+
+	return 0;
+}
+
+static int
+gve_dev_stats_reset(struct rte_eth_dev *dev)
+{
+	uint16_t i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		struct gve_tx_queue *txq = dev->data->tx_queues[i];
+		if (txq == NULL)
+			continue;
+
+		txq->packets = 0;
+		txq->bytes = 0;
+		txq->errors = 0;
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		struct gve_rx_queue *rxq = dev->data->rx_queues[i];
+		if (rxq == NULL)
+			continue;
+
+		rxq->packets = 0;
+		rxq->bytes = 0;
+		rxq->errors = 0;
+		rxq->no_mbufs = 0;
+	}
+
+	return 0;
+}
+
 static int
 gve_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 {
@@ -407,6 +465,8 @@ static const struct eth_dev_ops gve_eth_dev_ops = {
 	.rx_queue_release = gve_rx_queue_release,
 	.tx_queue_release = gve_tx_queue_release,
 	.link_update = gve_link_update,
+	.stats_get = gve_dev_stats_get,
+	.stats_reset = gve_dev_stats_reset,
 	.mtu_set = gve_dev_mtu_set,
 };
 
diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index d434f9babe..2e0f96499d 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -105,6 +105,11 @@ struct gve_tx_queue {
 	struct gve_queue_page_list *qpl;
 	struct gve_tx_iovec *iov_ring;
 
+	/* stats items */
+	uint64_t packets;
+	uint64_t bytes;
+	uint64_t errors;
+
 	uint16_t port_id;
 	uint16_t queue_id;
@@ -156,6 +161,12 @@ struct gve_rx_queue {
 	/* only valid for GQI_QPL queue format */
 	struct gve_queue_page_list *qpl;
 
+	/* stats items */
+	uint64_t packets;
+	uint64_t bytes;
+	uint64_t errors;
+	uint64_t no_mbufs;
+
 	struct gve_priv *hw;
 	const struct rte_memzone *qres_mz;
 	struct gve_queue_resources *qres;
diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c
index 244517ce5d..41ead5bd98 100644
--- a/drivers/net/gve/gve_rx_dqo.c
+++ b/drivers/net/gve/gve_rx_dqo.c
@@ -37,6 +37,7 @@ gve_rx_refill_dqo(struct gve_rx_queue *rxq)
 			next_avail = 0;
 			rxq->nb_rx_hold -= delta;
 		} else {
+			rxq->no_mbufs += nb_desc - next_avail;
 			dev = &rte_eth_devices[rxq->port_id];
 			dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
 			PMD_DRV_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
@@ -57,6 +58,7 @@ gve_rx_refill_dqo(struct gve_rx_queue *rxq)
 			next_avail += nb_refill;
 			rxq->nb_rx_hold -= nb_refill;
 		} else {
+			rxq->no_mbufs += nb_desc - next_avail;
 			dev = &rte_eth_devices[rxq->port_id];
 			dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
 			PMD_DRV_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
@@ -80,7 +82,9 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	uint16_t pkt_len;
 	uint16_t rx_id;
 	uint16_t nb_rx;
+	uint64_t bytes;
 
+	bytes = 0;
 	nb_rx = 0;
 	rxq = rx_queue;
 	rx_id = rxq->rx_tail;
@@ -94,8 +98,10 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		if (rx_desc->generation != rxq->cur_gen_bit)
 			break;
 
-		if (unlikely(rx_desc->rx_error))
+		if (unlikely(rx_desc->rx_error)) {
+			rxq->errors++;
 			continue;
+		}
 
 		pkt_len = rx_desc->packet_len;
 
@@ -120,6 +126,7 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		rxm->hash.rss = rte_be_to_cpu_32(rx_desc->hash);
 
 		rx_pkts[nb_rx++] = rxm;
+		bytes += pkt_len;
 	}
 
 	if (nb_rx > 0) {
@@ -128,6 +135,9 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			rxq->next_avail = rx_id_bufq;
 
 		gve_rx_refill_dqo(rxq);
+
+		rxq->packets += nb_rx;
+		rxq->bytes += bytes;
 	}
 
 	return nb_rx;
diff --git a/drivers/net/gve/gve_tx_dqo.c b/drivers/net/gve/gve_tx_dqo.c
index 3583c82246..9c1361c894 100644
--- a/drivers/net/gve/gve_tx_dqo.c
+++ b/drivers/net/gve/gve_tx_dqo.c
@@ -80,10 +80,12 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	uint16_t nb_used;
 	uint16_t tx_id;
 	uint16_t sw_id;
+	uint64_t bytes;
 
 	sw_ring = txq->sw_ring;
 	txr = txq->tx_ring;
 
+	bytes = 0;
 	mask = txq->nb_tx_desc - 1;
 	sw_mask = txq->sw_size - 1;
 	tx_id = txq->tx_tail;
@@ -118,6 +120,7 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			tx_id = (tx_id + 1) & mask;
 			sw_id = (sw_id + 1) & sw_mask;
 
+			bytes += tx_pkt->pkt_len;
 			tx_pkt = tx_pkt->next;
 		} while (tx_pkt);
 
@@ -141,6 +144,9 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		rte_write32(tx_id, txq->qtx_tail);
 		txq->tx_tail = tx_id;
 		txq->sw_tail = sw_id;
+
+		txq->packets += nb_tx;
+		txq->bytes += bytes;
 	}
 
 	return nb_tx;
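These per-queue counters are summed into the generic ethdev stats. A minimal reader sketch; the output format is an assumption:

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

static void
print_gve_stats(uint16_t port_id)
{
	struct rte_eth_stats st;

	if (rte_eth_stats_get(port_id, &st) != 0)
		return;

	/* ipackets/ibytes come from rxq->packets/bytes above;
	 * rx_nombuf is fed by the new rxq->no_mbufs counter */
	printf("rx %" PRIu64 " pkts, tx %" PRIu64 " pkts, no_mbuf %" PRIu64 "\n",
	       st.ipackets, st.opackets, st.rx_nombuf);
}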
From patchwork Wed Jan 18 02:53:46 2023
From: Junfeng Guo <junfeng.guo@intel.com>
X-Patchwork-Id: 122232
X-Patchwork-Delegate: ferruh.yigit@amd.com
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com,
 beilei.xing@intel.com
Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com,
 Junfeng Guo, Jordan Kimbrough, Rushil Gupta, Jeroen de Borst
Subject: [RFC 7/8] net/gve: support jumbo frame for GQI
Date: Wed, 18 Jan 2023 10:53:46 +0800
Message-Id: <20230118025347.1567078-8-junfeng.guo@intel.com>
In-Reply-To: <20230118025347.1567078-1-junfeng.guo@intel.com>
References: <20230118025347.1567078-1-junfeng.guo@intel.com>

Add multi-segment Rx support to enable jumbo frames for the GQI queue format.
Signed-off-by: Jordan Kimbrough
Signed-off-by: Rushil Gupta
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Jeroen de Borst
---
 drivers/net/gve/gve_ethdev.h |   8 +++
 drivers/net/gve/gve_rx.c     | 128 ++++++++++++++++++++++++++---------
 2 files changed, 105 insertions(+), 31 deletions(-)

diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index 2e0f96499d..608a2f2fb4 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -138,6 +138,13 @@ struct gve_tx_queue {
 	uint8_t is_gqi_qpl;
 };
 
+struct gve_rx_ctx {
+	struct rte_mbuf *mbuf_head;
+	struct rte_mbuf *mbuf_tail;
+	uint16_t total_frags;
+	bool drop_pkt;
+};
+
 struct gve_rx_queue {
 	volatile struct gve_rx_desc *rx_desc_ring;
 	volatile union gve_rx_data_slot *rx_data_ring;
@@ -146,6 +153,7 @@ struct gve_rx_queue {
 	uint64_t rx_ring_phys_addr;
 	struct rte_mbuf **sw_ring;
 	struct rte_mempool *mpool;
+	struct gve_rx_ctx ctx;
 
 	uint16_t rx_tail;
 	uint16_t nb_rx_desc;
diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c
index 9ba975c9b4..2468fc70ee 100644
--- a/drivers/net/gve/gve_rx.c
+++ b/drivers/net/gve/gve_rx.c
@@ -5,6 +5,8 @@
 #include "gve_ethdev.h"
 #include "base/gve_adminq.h"
 
+#define GVE_PKT_CONT_BIT_IS_SET(x) (GVE_RXF_PKT_CONT & (x))
+
 static inline void
 gve_rx_refill(struct gve_rx_queue *rxq)
 {
@@ -80,40 +82,70 @@ gve_rx_refill(struct gve_rx_queue *rxq)
 	}
 }
 
-uint16_t
-gve_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+/*
+ * This method processes a single rte_mbuf and handles packet segmentation.
+ * In QPL mode it copies data from the mbuf to the gve_rx_queue.
+ */
+static void
+gve_rx_mbuf(struct gve_rx_queue *rxq, struct rte_mbuf *rxe, uint16_t len,
+	    uint16_t rx_id)
 {
-	volatile struct gve_rx_desc *rxr, *rxd;
-	struct gve_rx_queue *rxq = rx_queue;
-	uint16_t rx_id = rxq->rx_tail;
-	struct rte_mbuf *rxe;
-	uint16_t nb_rx, len;
+	uint16_t padding = 0;
 	uint64_t addr;
-	uint16_t i;
-
-	rxr = rxq->rx_desc_ring;
-	nb_rx = 0;
-
-	for (i = 0; i < nb_pkts; i++) {
-		rxd = &rxr[rx_id];
-		if (GVE_SEQNO(rxd->flags_seq) != rxq->expected_seqno)
-			break;
-		if (rxd->flags_seq & GVE_RXF_ERR)
-			continue;
-
-		len = rte_be_to_cpu_16(rxd->len) - GVE_RX_PAD;
-		rxe = rxq->sw_ring[rx_id];
-		if (rxq->is_gqi_qpl) {
-			addr = (uint64_t)(rxq->qpl->mz->addr) + rx_id * PAGE_SIZE + GVE_RX_PAD;
-			rte_memcpy((void *)((size_t)rxe->buf_addr + rxe->data_off),
-				   (void *)(size_t)addr, len);
-		}
+
+	rxe->data_len = len;
+	if (!rxq->ctx.mbuf_head) {
+		rxq->ctx.mbuf_head = rxe;
+		rxq->ctx.mbuf_tail = rxe;
+		rxe->nb_segs = 1;
 		rxe->pkt_len = len;
 		rxe->data_len = len;
 		rxe->port = rxq->port_id;
 		rxe->ol_flags = 0;
+		padding = GVE_RX_PAD;
+	} else {
+		rxq->ctx.mbuf_head->pkt_len += len;
+		rxq->ctx.mbuf_head->nb_segs += 1;
+		rxq->ctx.mbuf_tail->next = rxe;
+		rxq->ctx.mbuf_tail = rxe;
+	}
+	if (rxq->is_gqi_qpl) {
+		addr = (uint64_t)(rxq->qpl->mz->addr) + rx_id * PAGE_SIZE + padding;
+		rte_memcpy((void *)((size_t)rxe->buf_addr + rxe->data_off),
+			   (void *)(size_t)addr, len);
+	}
+}
+
+/*
+ * This method processes a single packet fragment associated with the
+ * passed packet descriptor.
+ * It returns whether the fragment is the last fragment of a packet.
+ */
+static bool
+gve_rx(struct gve_rx_queue *rxq, volatile struct gve_rx_desc *rxd, uint16_t rx_id)
+{
+	bool is_last_frag = !GVE_PKT_CONT_BIT_IS_SET(rxd->flags_seq);
+	uint16_t frag_size = rte_be_to_cpu_16(rxd->len);
+	struct gve_rx_ctx *ctx = &rxq->ctx;
+	bool is_first_frag = ctx->total_frags == 0;
+	struct rte_mbuf *rxe;
+
+	if (ctx->drop_pkt)
+		goto finish_frag;
+
+	if (rxd->flags_seq & GVE_RXF_ERR) {
+		ctx->drop_pkt = true;
+		goto finish_frag;
+	}
+
+	if (is_first_frag)
+		frag_size -= GVE_RX_PAD;
+
+	rxe = rxq->sw_ring[rx_id];
+	gve_rx_mbuf(rxq, rxe, frag_size, rx_id);
+
+	if (is_first_frag) {
 		if (rxd->flags_seq & GVE_RXF_TCP)
 			rxe->packet_type |= RTE_PTYPE_L4_TCP;
 		if (rxd->flags_seq & GVE_RXF_UDP)
@@ -127,18 +159,52 @@ gve_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			rxe->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 			rxe->hash.rss = rte_be_to_cpu_32(rxd->rss_hash);
 		}
+	}
 
-		rxq->expected_seqno = gve_next_seqno(rxq->expected_seqno);
+finish_frag:
+	ctx->total_frags++;
+	return is_last_frag;
+}
+
+static void
+gve_rx_ctx_clear(struct gve_rx_ctx *ctx)
+{
+	ctx->mbuf_head = NULL;
+	ctx->mbuf_tail = NULL;
+	ctx->drop_pkt = false;
+	ctx->total_frags = 0;
+}
+
+uint16_t
+gve_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	volatile struct gve_rx_desc *rxr, *rxd;
+	struct gve_rx_queue *rxq = rx_queue;
+	struct gve_rx_ctx *ctx = &rxq->ctx;
+	uint16_t rx_id = rxq->rx_tail;
+	uint16_t nb_rx;
+
+	rxr = rxq->rx_desc_ring;
+	nb_rx = 0;
+
+	while (nb_rx < nb_pkts) {
+		rxd = &rxr[rx_id];
+		if (GVE_SEQNO(rxd->flags_seq) != rxq->expected_seqno)
+			break;
+
+		if (gve_rx(rxq, rxd, rx_id)) {
+			if (!ctx->drop_pkt)
+				rx_pkts[nb_rx++] = ctx->mbuf_head;
+			rxq->nb_avail += ctx->total_frags;
+			gve_rx_ctx_clear(ctx);
+		}
 
 		rx_id++;
 		if (rx_id == rxq->nb_rx_desc)
 			rx_id = 0;
-
-		rx_pkts[nb_rx] = rxe;
-		nb_rx++;
+		rxq->expected_seqno = gve_next_seqno(rxq->expected_seqno);
 	}
 
-	rxq->nb_avail += nb_rx;
 	rxq->rx_tail = rx_id;
 
 	if (rxq->nb_avail > rxq->free_thresh)
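With multi-segment Rx enabled, a jumbo frame is delivered as a chained mbuf whose head carries the totals accumulated in gve_rx_mbuf(). A sketch of walking such a chain; the helper name is illustrative:

#include <rte_mbuf.h>

static uint32_t
gve_example_payload_len(const struct rte_mbuf *m)
{
	const struct rte_mbuf *seg;
	uint32_t total = 0;

	/* pkt_len and nb_segs live on the head mbuf; each segment
	 * carries its own data_len */
	for (seg = m; seg != NULL; seg = seg->next)
		total += seg->data_len;

	return total;	/* equals m->pkt_len for a well-formed chain */
}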
d="scan'208";a="322575543" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 17 Jan 2023 18:59:22 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10593"; a="722911237" X-IronPort-AV: E=Sophos;i="5.97,224,1669104000"; d="scan'208";a="722911237" Received: from dpdk-jf-ntb-one.sh.intel.com ([10.67.111.104]) by fmsmga008.fm.intel.com with ESMTP; 17 Jan 2023 18:59:19 -0800 From: Junfeng Guo To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com, beilei.xing@intel.com Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com, Junfeng Guo , Rushil Gupta , Jordan Kimbrough , Jeroen de Borst Subject: [RFC 8/8] net/gve: add AdminQ command to verify driver compatibility Date: Wed, 18 Jan 2023 10:53:47 +0800 Message-Id: <20230118025347.1567078-9-junfeng.guo@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230118025347.1567078-1-junfeng.guo@intel.com> References: <20230118025347.1567078-1-junfeng.guo@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Check whether the driver is compatible with the device presented. Signed-off-by: Rushil Gupta Signed-off-by: Jordan Kimbrough Signed-off-by: Junfeng Guo Signed-off-by: Jeroen de Borst --- drivers/net/gve/base/gve_adminq.c | 19 ++++++++++ drivers/net/gve/base/gve_adminq.h | 48 +++++++++++++++++++++++++ drivers/net/gve/base/gve_osdep.h | 8 +++++ drivers/net/gve/gve_ethdev.c | 60 +++++++++++++++++++++++++++++++ drivers/net/gve/gve_ethdev.h | 1 + 5 files changed, 136 insertions(+) diff --git a/drivers/net/gve/base/gve_adminq.c b/drivers/net/gve/base/gve_adminq.c index e963f910a0..5576990cb1 100644 --- a/drivers/net/gve/base/gve_adminq.c +++ b/drivers/net/gve/base/gve_adminq.c @@ -401,6 +401,9 @@ static int gve_adminq_issue_cmd(struct gve_priv *priv, case GVE_ADMINQ_GET_PTYPE_MAP: priv->adminq_get_ptype_map_cnt++; break; + case GVE_ADMINQ_VERIFY_DRIVER_COMPATIBILITY: + priv->adminq_verify_driver_compatibility_cnt++; + break; default: PMD_DRV_LOG(ERR, "unknown AQ command opcode %d", opcode); } @@ -859,6 +862,22 @@ int gve_adminq_report_stats(struct gve_priv *priv, u64 stats_report_len, return gve_adminq_execute_cmd(priv, &cmd); } +int gve_adminq_verify_driver_compatibility(struct gve_priv *priv, + u64 driver_info_len, + dma_addr_t driver_info_addr) +{ + union gve_adminq_command cmd; + + memset(&cmd, 0, sizeof(cmd)); + cmd.opcode = cpu_to_be32(GVE_ADMINQ_VERIFY_DRIVER_COMPATIBILITY); + cmd.verify_driver_compatibility = (struct gve_adminq_verify_driver_compatibility) { + .driver_info_len = cpu_to_be64(driver_info_len), + .driver_info_addr = cpu_to_be64(driver_info_addr), + }; + + return gve_adminq_execute_cmd(priv, &cmd); +} + int gve_adminq_report_link_speed(struct gve_priv *priv) { struct gve_dma_mem link_speed_region_dma_mem; diff --git a/drivers/net/gve/base/gve_adminq.h b/drivers/net/gve/base/gve_adminq.h index 05550119de..c82e02405c 100644 --- a/drivers/net/gve/base/gve_adminq.h +++ b/drivers/net/gve/base/gve_adminq.h @@ -23,6 +23,7 @@ enum gve_adminq_opcodes { GVE_ADMINQ_REPORT_STATS = 0xC, GVE_ADMINQ_REPORT_LINK_SPEED = 0xD, GVE_ADMINQ_GET_PTYPE_MAP = 0xE, + GVE_ADMINQ_VERIFY_DRIVER_COMPATIBILITY = 0xF, }; /* Admin queue status codes */ @@ -145,6 +146,48 @@ enum gve_sup_feature_mask { }; #define GVE_DEV_OPT_LEN_GQI_RAW_ADDRESSING 0x0 +#define 
diff --git a/drivers/net/gve/base/gve_osdep.h b/drivers/net/gve/base/gve_osdep.h
index abf3d379ae..a8feae18f4 100644
--- a/drivers/net/gve/base/gve_osdep.h
+++ b/drivers/net/gve/base/gve_osdep.h
@@ -21,6 +21,9 @@
 #include
 #include
 #include
+#include <linux/version.h>
+#include <sys/utsname.h>
+#include <rte_version.h>
 
 #include "../gve_logs.h"
 
@@ -82,6 +85,11 @@ typedef rte_iova_t dma_addr_t;
 	{ gve_static_assert_##X = (n) / ((sizeof(struct X) == (n)) ? 1 : 0) }
 #define GVE_CHECK_UNION_LEN(n, X) enum gve_static_asset_enum_##X \
 	{ gve_static_assert_##X = (n) / ((sizeof(union X) == (n)) ? 1 : 0) }
+#ifndef LINUX_VERSION_MAJOR
+#define LINUX_VERSION_MAJOR (((LINUX_VERSION_CODE) >> 16) & 0xff)
+#define LINUX_VERSION_SUBLEVEL (((LINUX_VERSION_CODE) >> 8) & 0xff)
+#define LINUX_VERSION_PATCHLEVEL ((LINUX_VERSION_CODE) & 0xff)
+#endif
 
 static __rte_always_inline u8
 readb(volatile void *addr)
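LINUX_VERSION_CODE packs the kernel version as (major << 16) | (sublevel << 8)
| patchlevel, which is exactly what the fallback macros above unpack. A quick
worked example with an illustrative constant:

#include <stdio.h>

int main(void)
{
	/* KERNEL_VERSION(5, 19, 10) packs to 0x05130a */
	unsigned int code = (5U << 16) | (19U << 8) | 10U;

	printf("%u.%u.%u\n",
	       (code >> 16) & 0xff,	/* 5  */
	       (code >> 8) & 0xff,	/* 19 */
	       code & 0xff);		/* 10 */
	return 0;
}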
diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index fae00305f9..096f7c2d60 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -314,6 +314,60 @@ gve_dev_close(struct rte_eth_dev *dev)
 	return err;
 }
 
+static int
+gve_verify_driver_compatibility(struct gve_priv *priv)
+{
+	const struct rte_memzone *driver_info_bus;
+	struct gve_driver_info *driver_info;
+	struct utsname uts;
+	char *release = "";
+	int err;
+
+	driver_info_bus = rte_memzone_reserve_aligned("verify_driver_compatibility",
+						      sizeof(struct gve_driver_info),
+						      rte_socket_id(),
+						      RTE_MEMZONE_IOVA_CONTIG,
+						      PAGE_SIZE);
+	if (driver_info_bus == NULL) {
+		PMD_DRV_LOG(ERR, "Could not alloc memzone for driver compatibility");
+		return -ENOMEM;
+	}
+	driver_info = (struct gve_driver_info *)driver_info_bus->addr;
+	*driver_info = (struct gve_driver_info) {
+		.os_type = 1, /* Linux */
+		.os_version_major = cpu_to_be32(LINUX_VERSION_MAJOR),
+		.os_version_minor = cpu_to_be32(LINUX_VERSION_SUBLEVEL),
+		.os_version_sub = cpu_to_be32(LINUX_VERSION_PATCHLEVEL),
+		.driver_capability_flags = {
+			cpu_to_be64(GVE_DRIVER_CAPABILITY_FLAGS1),
+			cpu_to_be64(GVE_DRIVER_CAPABILITY_FLAGS2),
+			cpu_to_be64(GVE_DRIVER_CAPABILITY_FLAGS3),
+			cpu_to_be64(GVE_DRIVER_CAPABILITY_FLAGS4),
+		},
+	};
+
+	/* uname() returns 0 on success */
+	if (uname(&uts) == 0)
+		release = uts.release;
+
+	/* OS version */
+	rte_strscpy((char *)driver_info->os_version_str1, release,
+		    sizeof(driver_info->os_version_str1));
+	/* DPDK version */
+	rte_strscpy((char *)driver_info->os_version_str2, rte_version(),
+		    sizeof(driver_info->os_version_str2));
+
+	err = gve_adminq_verify_driver_compatibility(priv,
+						     sizeof(struct gve_driver_info),
+						     (dma_addr_t)driver_info_bus->iova);
+
+	/* It's ok if the device doesn't support this */
+	if (err == -EOPNOTSUPP)
+		err = 0;
+
+	rte_memzone_free(driver_info_bus);
+	return err;
+}
+
 static int
 gve_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 {
@@ -625,6 +679,12 @@ gve_init_priv(struct gve_priv *priv, bool skip_describe_device)
 		return err;
 	}
 
+	err = gve_verify_driver_compatibility(priv);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Could not verify driver compatibility: err=%d", err);
+		goto free_adminq;
+	}
+
 	if (skip_describe_device)
 		goto setup_device;
 
diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index 608a2f2fb4..cd26225c19 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -250,6 +250,7 @@ struct gve_priv {
 	uint32_t adminq_report_stats_cnt;
 	uint32_t adminq_report_link_speed_cnt;
 	uint32_t adminq_get_ptype_map_cnt;
+	uint32_t adminq_verify_driver_compatibility_cnt;
 
 	volatile uint32_t state_flags;
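gve_verify_driver_compatibility() above follows the usual DPDK recipe for
handing the device a DMA-visible buffer: reserve an IOVA-contiguous memzone,
fill it through its virtual address (mz->addr), and pass the device the bus
address (mz->iova). A stripped-down sketch of that recipe; alloc_dma_buf() is
an illustrative wrapper, not part of the patch:

#include <rte_memzone.h>
#include <rte_lcore.h>

static const struct rte_memzone *
alloc_dma_buf(const char *name, size_t len)
{
	/* RTE_MEMZONE_IOVA_CONTIG requests IOVA-contiguous memory, so
	 * mz->iova can be handed to the device as a single DMA address;
	 * 4096 here is an assumed page-sized alignment. */
	return rte_memzone_reserve_aligned(name, len, rte_socket_id(),
					   RTE_MEMZONE_IOVA_CONTIG, 4096);
}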