From patchwork Fri Apr 26 00:51:28 2024
X-Patchwork-Submitter: "Du, Frank"
X-Patchwork-Id: 139687
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Frank Du
To: dev@dpdk.org
Subject: [PATCH] net/af_xdp: fix umem map size for zero copy
Date: Fri, 26 Apr 2024 08:51:28 +0800
Message-Id: <20240426005128.148730-1-frank.du@intel.com>

The current calculation assumes that the mbufs are contiguous. However,
this assumption does not hold when the memory spans across a huge page.
Fix this by reading the size directly from the mempool memory chunks.

Signed-off-by: Frank Du
---
 drivers/net/af_xdp/rte_eth_af_xdp.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index 268a130c49..cb95d17d13 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -1039,7 +1039,7 @@ eth_link_update(struct rte_eth_dev *dev __rte_unused,
 }
 
 #if defined(XDP_UMEM_UNALIGNED_CHUNK_FLAG)
-static inline uintptr_t get_base_addr(struct rte_mempool *mp, uint64_t *align)
+static inline uintptr_t get_memhdr_info(struct rte_mempool *mp, uint64_t *align, size_t *len)
 {
 	struct rte_mempool_memhdr *memhdr;
 	uintptr_t memhdr_addr, aligned_addr;
@@ -1048,6 +1048,7 @@ static inline uintptr_t get_base_addr(struct rte_mempool *mp, uint64_t *align)
 	memhdr_addr = (uintptr_t)memhdr->addr;
 	aligned_addr = memhdr_addr & ~(getpagesize() - 1);
 	*align = memhdr_addr - aligned_addr;
+	*len = memhdr->len;
 	return aligned_addr;
 }
 
@@ -1125,6 +1126,7 @@ xsk_umem_info *xdp_umem_configure(struct pmd_internals *internals,
 	void *base_addr = NULL;
 	struct rte_mempool *mb_pool = rxq->mb_pool;
 	uint64_t umem_size, align = 0;
+	size_t len = 0;
 
 	if (internals->shared_umem) {
 		if (get_shared_umem(rxq, internals->if_name, &umem) < 0)
@@ -1156,10 +1158,8 @@ xsk_umem_info *xdp_umem_configure(struct pmd_internals *internals,
 	}
 
 	umem->mb_pool = mb_pool;
-	base_addr = (void *)get_base_addr(mb_pool, &align);
-	umem_size = (uint64_t)mb_pool->populated_size *
-			(uint64_t)usr_config.frame_size +
-			align;
+	base_addr = (void *)get_memhdr_info(mb_pool, &align, &len);
+	umem_size = (uint64_t)len + align;
 
 	ret = xsk_umem__create(&umem->umem, base_addr, umem_size,
 			       &rxq->fq, &rxq->cq, &usr_config);