From patchwork Sat Nov 11 00:34:09 2023
From: Joshua Washington
Date: Fri, 10 Nov 2023 16:34:09 -0800
Subject: [PATCH] net/gve: fix RX buffer size alignment
Message-ID: <20231111003410.2950594-1-joshwash@google.com>
To: Junfeng Guo, Jeroen de Borst, Rushil Gupta, Joshua Washington, Xiaoyun Li
Cc: dev@dpdk.org, stable@dpdk.org, Ferruh Yigit

In GVE, both queue formats have RX buffer size alignment requirements
which are not respected whenever the mbuf size is greater than the
minimum required by DPDK (2048 + 128).
This causes the driver to fail silently during initialization: no
queues are created, so no network traffic can flow. This change
remedies the problem by restricting the RX buffer sizes to values
that are valid for their respective queue formats.

Fixes: 4bec2d0b5572 ("net/gve: support queue operations")
Fixes: 1dc00f4fc74b ("net/gve: add Rx queue setup for DQO")
Cc: junfeng.guo@intel.com
Cc: stable@dpdk.org

Signed-off-by: Joshua Washington
Reviewed-by: Rushil Gupta
---
 drivers/net/gve/gve_ethdev.c |  5 ++++-
 drivers/net/gve/gve_ethdev.h | 22 +++++++++++++++++++++-
 drivers/net/gve/gve_rx.c     | 10 +++++++++-
 drivers/net/gve/gve_rx_dqo.c |  9 ++++++++-
 4 files changed, 42 insertions(+), 4 deletions(-)

diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index eb3bc7e151..43b4ab523d 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -296,7 +296,10 @@ gve_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mac_addrs = 1;
 	dev_info->max_rx_queues = priv->max_nb_rxq;
 	dev_info->max_tx_queues = priv->max_nb_txq;
-	dev_info->min_rx_bufsize = GVE_MIN_BUF_SIZE;
+	if (gve_is_gqi(priv))
+		dev_info->min_rx_bufsize = GVE_RX_MIN_BUF_SIZE_GQI;
+	else
+		dev_info->min_rx_bufsize = GVE_RX_MIN_BUF_SIZE_DQO;
 	dev_info->max_rx_pktlen = priv->max_mtu + RTE_ETHER_HDR_LEN;
 	dev_info->max_mtu = priv->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index 755ee8ad15..0cc3b176f9 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -20,7 +20,13 @@
 #define GVE_DEFAULT_TX_RS_THRESH 32
 #define GVE_TX_MAX_FREE_SZ 512
 
-#define GVE_MIN_BUF_SIZE 1024
+#define GVE_RX_BUF_ALIGN_DQO 128
+#define GVE_RX_MIN_BUF_SIZE_DQO 1024
+#define GVE_RX_MAX_BUF_SIZE_DQO ((16 * 1024) - GVE_RX_BUF_ALIGN_DQO)
+
+#define GVE_RX_BUF_ALIGN_GQI 2048
+#define GVE_RX_MIN_BUF_SIZE_GQI 2048
+#define GVE_RX_MAX_BUF_SIZE_GQI 4096
 
 #define GVE_TX_CKSUM_OFFLOAD_MASK ( \
 	RTE_MBUF_F_TX_L4_MASK | \
@@ -337,6 +343,20 @@ gve_clear_device_rings_ok(struct gve_priv *priv)
 			      &priv->state_flags);
 }
 
+static inline int
+gve_validate_rx_buffer_size(struct gve_priv *priv, uint16_t rx_buffer_size)
+{
+	uint16_t min_rx_buffer_size = gve_is_gqi(priv) ?
+		GVE_RX_MIN_BUF_SIZE_GQI : GVE_RX_MIN_BUF_SIZE_DQO;
+	if (rx_buffer_size < min_rx_buffer_size) {
+		PMD_DRV_LOG(ERR, "mbuf size must be at least %hu bytes",
+			    min_rx_buffer_size);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 int
 gve_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id, uint16_t nb_desc,
 		   unsigned int socket_id, const struct rte_eth_rxconf *conf,
diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c
index b8c92ccda0..0049c6428d 100644
--- a/drivers/net/gve/gve_rx.c
+++ b/drivers/net/gve/gve_rx.c
@@ -301,6 +301,7 @@ gve_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id,
 	const struct rte_memzone *mz;
 	struct gve_rx_queue *rxq;
 	uint16_t free_thresh;
+	uint32_t mbuf_len;
 	int err = 0;
 
 	if (nb_desc != hw->rx_desc_cnt) {
@@ -344,7 +345,14 @@ gve_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id,
 	rxq->hw = hw;
 	rxq->ntfy_addr = &hw->db_bar2[rte_be_to_cpu_32(hw->irq_dbs[rxq->ntfy_id].id)];
 
-	rxq->rx_buf_len = rte_pktmbuf_data_room_size(rxq->mpool) - RTE_PKTMBUF_HEADROOM;
+	mbuf_len =
+		rte_pktmbuf_data_room_size(rxq->mpool) - RTE_PKTMBUF_HEADROOM;
+	err = gve_validate_rx_buffer_size(hw, mbuf_len);
+	if (err)
+		goto err_rxq;
+	rxq->rx_buf_len =
+		RTE_MIN((uint16_t)GVE_RX_MAX_BUF_SIZE_GQI,
+			RTE_ALIGN_FLOOR(mbuf_len, GVE_RX_BUF_ALIGN_GQI));
 
 	/* Allocate software ring */
 	rxq->sw_ring = rte_zmalloc_socket("gve rx sw ring", sizeof(struct rte_mbuf *) * nb_desc,
diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c
index 7e7ddac48e..2ec6135705 100644
--- a/drivers/net/gve/gve_rx_dqo.c
+++ b/drivers/net/gve/gve_rx_dqo.c
@@ -220,6 +220,7 @@ gve_rx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 	const struct rte_memzone *mz;
 	struct gve_rx_queue *rxq;
 	uint16_t free_thresh;
+	uint32_t mbuf_len;
 	int err = 0;
 
 	if (nb_desc != hw->rx_desc_cnt) {
@@ -264,8 +265,14 @@ gve_rx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 	rxq->hw = hw;
 	rxq->ntfy_addr = &hw->db_bar2[rte_be_to_cpu_32(hw->irq_dbs[rxq->ntfy_id].id)];
 
-	rxq->rx_buf_len =
+	mbuf_len =
 		rte_pktmbuf_data_room_size(rxq->mpool) - RTE_PKTMBUF_HEADROOM;
+	err = gve_validate_rx_buffer_size(hw, mbuf_len);
+	if (err)
+		goto free_rxq;
+	rxq->rx_buf_len =
+		RTE_MIN((uint16_t)GVE_RX_MAX_BUF_SIZE_DQO,
+			RTE_ALIGN_FLOOR(mbuf_len, GVE_RX_BUF_ALIGN_DQO));
 
 	/* Allocate software ring */
 	rxq->sw_ring = rte_zmalloc_socket("gve rx sw ring",
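
For reviewers who want to sanity-check the clamping above, here is a
minimal standalone sketch (not part of the patch). The mbuf data
lengths in main() are example values only, and align_floor()/min_u16()
are local stand-ins that mirror what RTE_ALIGN_FLOOR()/RTE_MIN()
compute for the power-of-two alignments used by the driver.

/* Illustration of the GQI/DQO RX buffer size clamping (example values). */
#include <stdint.h>
#include <stdio.h>

#define GVE_RX_BUF_ALIGN_DQO    128
#define GVE_RX_MAX_BUF_SIZE_DQO ((16 * 1024) - GVE_RX_BUF_ALIGN_DQO)
#define GVE_RX_BUF_ALIGN_GQI    2048
#define GVE_RX_MAX_BUF_SIZE_GQI 4096

/* Round v down to a multiple of the power-of-two 'align'. */
static uint16_t align_floor(uint16_t v, uint16_t align)
{
	return (uint16_t)(v & ~(align - 1));
}

static uint16_t min_u16(uint16_t a, uint16_t b)
{
	return a < b ? a : b;
}

int main(void)
{
	/* Example mbuf data lengths (data room size minus 128 B headroom). */
	uint16_t mbuf_len[] = { 2048, 4096, 9000 };

	for (unsigned int i = 0; i < 3; i++) {
		uint16_t gqi = min_u16(GVE_RX_MAX_BUF_SIZE_GQI,
				       align_floor(mbuf_len[i], GVE_RX_BUF_ALIGN_GQI));
		uint16_t dqo = min_u16(GVE_RX_MAX_BUF_SIZE_DQO,
				       align_floor(mbuf_len[i], GVE_RX_BUF_ALIGN_DQO));
		printf("mbuf_len %5u -> rx_buf_len GQI %5u, DQO %5u\n",
		       mbuf_len[i], gqi, dqo);
	}
	return 0;
}

With these values, a 2048-byte mbuf maps to a 2048-byte buffer in both
formats, while a 9000-byte mbuf is clamped to 4096 bytes for GQI and
floored to 8960 bytes for DQO.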