From patchwork Tue May 14 08:41:53 2024
X-Patchwork-Submitter: "Loftus, Ciara"
X-Patchwork-Id: 140049
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Ciara Loftus
To: dev@dpdk.org
Cc: Ciara Loftus, stable@dpdk.org, Stephen Hemminger
Subject: [PATCH v2 2/4] net/af_xdp: fix mbuf alloc failed statistic
Date: Tue, 14 May 2024 08:41:53 +0000
Message-Id: <20240514084155.50673-3-ciara.loftus@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240514084155.50673-1-ciara.loftus@intel.com>
References: <20240514084155.50673-1-ciara.loftus@intel.com>
List-Id: DPDK patches and discussions

Failures to allocate mbufs in the receive path were not being accounted
for in the ethdev statistics. Fix this.
Bugzilla ID: 1429
Fixes: f1debd77efaf ("net/af_xdp: introduce AF_XDP PMD")
Cc: stable@dpdk.org

Reported-by: Stephen Hemminger
Signed-off-by: Ciara Loftus
---
v2:
* Fixed typo in commit message
* Remove unnecessary local stat for alloc_failed
---
 drivers/net/af_xdp/rte_eth_af_xdp.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index fee0d5d5f3..9bcf971ae5 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -312,6 +312,7 @@ af_xdp_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	unsigned long rx_bytes = 0;
 	int i;
 	struct rte_mbuf *fq_bufs[ETH_AF_XDP_RX_BATCH_SIZE];
+	struct rte_eth_dev *dev = &rte_eth_devices[rxq->port];
 
 	nb_pkts = xsk_ring_cons__peek(rx, nb_pkts, &idx_rx);
 
@@ -339,6 +340,8 @@ af_xdp_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		 * xsk_ring_cons__peek
 		 */
 		rx->cached_cons -= nb_pkts;
+		dev->data->rx_mbuf_alloc_failed += nb_pkts;
+
 		return 0;
 	}
 
@@ -390,6 +393,7 @@ af_xdp_rx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	int i;
 	uint32_t free_thresh = fq->size >> 1;
 	struct rte_mbuf *mbufs[ETH_AF_XDP_RX_BATCH_SIZE];
+	struct rte_eth_dev *dev = &rte_eth_devices[rxq->port];
 
 	if (xsk_prod_nb_free(fq, free_thresh) >= free_thresh)
 		(void)reserve_fill_queue(umem, nb_pkts, NULL, fq);
 
@@ -408,6 +412,7 @@ af_xdp_rx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		 * xsk_ring_cons__peek
 		 */
 		rx->cached_cons -= nb_pkts;
+		dev->data->rx_mbuf_alloc_failed += nb_pkts;
 		return 0;
 	}
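
Usage note (not part of the patch): the per-device counter the PMD now
increments, dev->data->rx_mbuf_alloc_failed, is reported to applications as
the rx_nombuf field of struct rte_eth_stats. A minimal sketch of how an
application would observe the fixed statistic, assuming a configured and
started port and a hypothetical helper name:

#include <inttypes.h>
#include <stdio.h>

#include <rte_ethdev.h>

/* Hypothetical helper (illustration only): print how many Rx mbuf
 * allocations have failed on a port. The generic ethdev stats expose the
 * driver's rx_mbuf_alloc_failed counter as rx_nombuf.
 */
static void
print_rx_alloc_failures(uint16_t port_id)
{
	struct rte_eth_stats stats;

	if (rte_eth_stats_get(port_id, &stats) != 0)
		return;

	printf("port %" PRIu16 ": rx_nombuf=%" PRIu64 "\n",
	       port_id, stats.rx_nombuf);
}

Because the fix only touches the standard per-device counter, the failures
become visible through rte_eth_stats_get() (and are cleared by
rte_eth_stats_reset()) without any extra xstats plumbing in the driver.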