From patchwork Thu Feb 15 09:52:50 2024
X-Patchwork-Submitter: Rushil Gupta
X-Patchwork-Id: 136824
X-Patchwork-Delegate: ferruh.yigit@amd.com
Date: Thu, 15 Feb 2024 09:52:50 +0000
Message-ID: <20240215095250.233572-1-rushilg@google.com>
Subject: [PATCH] net/gve: fix dqo bug for chained descriptors
From: Rushil Gupta
To: junfeng.guo@intel.com, jeroendb@google.com, joshwash@google.com,
    ferruh.yigit@amd.com
Cc: dev@dpdk.org, stable@dpdk.org, Rushil Gupta

net/gve: fix completion path for chained segments

The DQO Tx path had a bug where the driver was overwriting mbufs in the
sw_ring. Fix this by cleaning the sw_ring slot of every segment of a
packet when its completion arrives, not just the first one.
Fixes: 4022f9 ("net/gve: support basic Tx data path for DQO")
Cc: stable@dpdk.org

Signed-off-by: Rushil Gupta
Reviewed-by: Joshua Washington
---
 drivers/net/gve/gve_tx_dqo.c | 23 ++++++++++++++++-------
 1 file changed, 16 insertions(+), 7 deletions(-)

diff --git a/drivers/net/gve/gve_tx_dqo.c b/drivers/net/gve/gve_tx_dqo.c
index 16101de..d060664 100644
--- a/drivers/net/gve/gve_tx_dqo.c
+++ b/drivers/net/gve/gve_tx_dqo.c
@@ -13,7 +13,7 @@ gve_tx_clean_dqo(struct gve_tx_queue *txq)
 	struct gve_tx_compl_desc *compl_desc;
 	struct gve_tx_queue *aim_txq;
 	uint16_t nb_desc_clean;
-	struct rte_mbuf *txe;
+	struct rte_mbuf *txe, *txe_next;
 	uint16_t compl_tag;
 	uint16_t next;
 
@@ -43,10 +43,15 @@ gve_tx_clean_dqo(struct gve_tx_queue *txq)
 		PMD_DRV_LOG(DEBUG, "GVE_COMPL_TYPE_DQO_REINJECTION !!!");
 		/* FALLTHROUGH */
 	case GVE_COMPL_TYPE_DQO_PKT:
+		/* free all segments. */
 		txe = aim_txq->sw_ring[compl_tag];
-		if (txe != NULL) {
+		while (txe != NULL) {
+			txe_next = txe->next;
 			rte_pktmbuf_free_seg(txe);
-			txe = NULL;
+			if (aim_txq->sw_ring[compl_tag] == txe)
+				aim_txq->sw_ring[compl_tag] = NULL;
+			txe = txe_next;
+			compl_tag = (compl_tag + 1) & (aim_txq->sw_size - 1);
 		}
 		break;
 	case GVE_COMPL_TYPE_DQO_MISS:
@@ -83,6 +88,7 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	uint16_t tx_id;
 	uint16_t sw_id;
 	uint64_t bytes;
+	uint16_t first_sw_id;
 
 	sw_ring = txq->sw_ring;
 	txr = txq->tx_ring;
@@ -107,23 +113,26 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		ol_flags = tx_pkt->ol_flags;
 		nb_used = tx_pkt->nb_segs;
-
+		first_sw_id = sw_id;
 		do {
-			txd = &txr[tx_id];
+			if (sw_ring[sw_id] != NULL) {
+				PMD_DRV_LOG(ERR, "Overwriting an entry in sw_ring");
+			}
 
+			txd = &txr[tx_id];
 			sw_ring[sw_id] = tx_pkt;
 
 			/* fill Tx descriptor */
 			txd->pkt.buf_addr = rte_cpu_to_le_64(rte_mbuf_data_iova(tx_pkt));
 			txd->pkt.dtype = GVE_TX_PKT_DESC_DTYPE_DQO;
-			txd->pkt.compl_tag = rte_cpu_to_le_16(sw_id);
+			txd->pkt.compl_tag = rte_cpu_to_le_16(first_sw_id);
 			txd->pkt.buf_size = RTE_MIN(tx_pkt->data_len, GVE_TX_MAX_BUF_SIZE_DQO);
 
 			/* size of desc_ring and sw_ring could be different */
 			tx_id = (tx_id + 1) & mask;
 			sw_id = (sw_id + 1) & sw_mask;
 
-			bytes += tx_pkt->pkt_len;
+			bytes += tx_pkt->data_len;
 			tx_pkt = tx_pkt->next;
 		} while (tx_pkt);
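
For reviewers, a minimal standalone sketch of the new completion-path loop
outside the diff context. This is not part of the patch: the names
sw_ring_view and free_chained_segs are hypothetical, and only the
sw_ring/sw_size fields of the real struct gve_tx_queue are modelled here.

    #include <rte_mbuf.h>

    /* Hypothetical, trimmed-down view of the Tx queue state touched by the
     * fix; the real driver keeps these fields in struct gve_tx_queue. */
    struct sw_ring_view {
        struct rte_mbuf **sw_ring; /* one mbuf segment per descriptor slot */
        uint16_t sw_size;          /* power of two, so (sw_size - 1) masks the index */
    };

    /* Free every segment of the packet whose first segment sits in slot
     * 'compl_tag', clearing each sw_ring slot as it goes. This mirrors the
     * while loop added to gve_tx_clean_dqo() above: since the burst path
     * stores one mbuf segment per slot, the completion path has to walk the
     * chain instead of freeing only the first slot. */
    static void
    free_chained_segs(struct sw_ring_view *q, uint16_t compl_tag)
    {
        struct rte_mbuf *txe = q->sw_ring[compl_tag];
        struct rte_mbuf *txe_next;

        while (txe != NULL) {
            txe_next = txe->next;      /* save the link before freeing */
            rte_pktmbuf_free_seg(txe);
            if (q->sw_ring[compl_tag] == txe)
                q->sw_ring[compl_tag] = NULL;
            txe = txe_next;
            compl_tag = (compl_tag + 1) & (q->sw_size - 1);
        }
    }

The matching change on the transmit side is that every segment's descriptor
now carries the completion tag of the packet's first slot (first_sw_id), so a
single packet completion is enough to locate and release the whole chain.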