From patchwork Fri Apr 14 05:47:43 2023
X-Patchwork-Submitter: Wenjun Wu
X-Patchwork-Id: 126070
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: 
Wenjun Wu
To: dev@dpdk.org, Yuying.Zhang@intel.com, beilei.xing@intel.com,
 jingjing.wu@intel.com, qiming.yang@intel.com, qi.z.zhang@intel.com
Cc: Wenjun Wu, stable@dpdk.org
Subject: [PATCH v2 4/5] net/idpf: fix Rx data buffer size
Date: Fri, 14 Apr 2023 13:47:43 +0800
Message-Id: <20230414054744.1399735-5-wenjun1.wu@intel.com>
In-Reply-To: <20230414054744.1399735-1-wenjun1.wu@intel.com>
References: <20230414035151.1377726-1-wenjun1.wu@intel.com>
 <20230414054744.1399735-1-wenjun1.wu@intel.com>
List-Id: DPDK patches and discussions

This patch does two fixes:

1. No matter what the mbuf size is, the data buffer size should not be
   greater than 16K - 128.
2. Align the data buffer size down to a multiple of 128.

Fixes: 9c47c29739a1 ("net/idpf: add Rx queue setup")
Cc: stable@dpdk.org

Signed-off-by: Wenjun Wu
---
 drivers/common/idpf/idpf_common_rxtx.h | 3 +++
 drivers/net/idpf/idpf_rxtx.c           | 6 ++++--
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index 11260d07f9..6cb83fc0a6 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -34,6 +34,9 @@
 #define IDPF_MAX_TSO_FRAME_SIZE	262143
 #define IDPF_TX_MAX_MTU_SEG	10
 
+#define IDPF_RLAN_CTX_DBUF_S	7
+#define IDPF_RX_MAX_DATA_BUF_SIZE	(16 * 1024 - 128)
+
 #define IDPF_TX_CKSUM_OFFLOAD_MASK (		\
 		RTE_MBUF_F_TX_IP_CKSUM |	\
 		RTE_MBUF_F_TX_L4_MASK |		\
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 414f9a37f6..3e3d81ca6d 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -155,7 +155,8 @@ idpf_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *rxq,
 	bufq->adapter = adapter;
 
 	len = rte_pktmbuf_data_room_size(bufq->mp) -
		RTE_PKTMBUF_HEADROOM;
-	bufq->rx_buf_len = len;
+	bufq->rx_buf_len = RTE_ALIGN_FLOOR(len, (1 << IDPF_RLAN_CTX_DBUF_S));
+	bufq->rx_buf_len = RTE_MIN(bufq->rx_buf_len, IDPF_RX_MAX_DATA_BUF_SIZE);
 
 	/* Allocate a little more to support bulk allocate. */
 	len = nb_desc + IDPF_RX_MAX_BURST;
@@ -275,7 +276,8 @@ idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	rxq->offloads = idpf_rx_offload_convert(offloads);
 
 	len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
-	rxq->rx_buf_len = len;
+	rxq->rx_buf_len = RTE_ALIGN_FLOOR(len, (1 << IDPF_RLAN_CTX_DBUF_S));
+	rxq->rx_buf_len = RTE_MIN(rxq->rx_buf_len, IDPF_RX_MAX_DATA_BUF_SIZE);
 
 	/* Allocate a little more to support bulk allocate. */
 	len = nb_desc + IDPF_RX_MAX_BURST;