From patchwork Fri Mar 25 13:34:26 2022
X-Patchwork-Submitter: Gaoxiang Liu
X-Patchwork-Id: 108874
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Gaoxiang Liu
To: chas3@att.com, humin29@huawei.com
Cc: dev@dpdk.org, liugaoxiang@huawei.com, Gaoxiang Liu
Subject: [PATCH v6] net/bonding: another fix to LACP mempool size
Date: Fri, 25 Mar 2022 21:34:26 +0800
Message-Id: <20220325133426.2916-1-gaoxiangliu0@163.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20220325130135.2207-1-gaoxiangliu0@163.com>
References: <20220325130135.2207-1-gaoxiangliu0@163.com>
The following log message may appear after a slave is idle (or nearly idle)
for a few minutes:
"PMD: Failed to allocate LACP packet from pool".
And bond mode 4 negotiation may fail.

Problem: When bond mode 4 has been chosen and a dedicated queue has not been
enabled, all mbufs from a slave's private pool (used exclusively for
transmitting LACPDUs) have been allocated in the interrupt thread, and are
still sitting in the device's tx descriptor ring and other cores' mempool
caches in the fwd thread. Thus the interrupt thread cannot allocate an LACP
packet from the pool.

Solution: Ensure that each slave's tx (LACPDU) mempool owns more than
n-tx-queues * n-tx-descriptors + fwd_core_num *
per-core-mempool-flush-threshold mbufs.

Note that the LACP tx machine function is the only code that allocates from
a slave's private pool. It runs in the context of the interrupt thread, and
thus it has no mempool cache of its own.

Signed-off-by: Gaoxiang Liu
Acked-by: Morten Brørup

---
v2:
* Fixed compile issues.

v3:
* Deleted duplicate code.

v4:
* Fixed some issues.
  1. total_tx_desc should use +=
  2. add detailed logs

v5:
* Fixed some issues.
  1. move CACHE_FLUSHTHRESH_MULTIPLIER to rte_eth_bond_8023ad.c
  2.
use RTE_MIN

v6:
* add a comment of CACHE_FLUSHTHRESH_MULTIPLIER macro
---
 drivers/net/bonding/rte_eth_bond_8023ad.c | 11 ++++++++---
 lib/mempool/rte_mempool.c                 |  2 ++
 2 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index ca50583d62..2c39b0d062 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -1050,6 +1050,7 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
 	uint32_t total_tx_desc;
 	struct bond_tx_queue *bd_tx_q;
 	uint16_t q_id;
+	uint32_t cache_size;
 
 	/* Given slave mus not be in active list */
 	RTE_ASSERT(find_slave_by_id(internals->active_slaves,
@@ -1100,11 +1101,15 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
 		total_tx_desc += bd_tx_q->nb_tx_desc;
 	}
 
+/* BONDING_8023AD_CACHE_FLUSHTHRESH_MULTIPLIER is the same as
+ * CACHE_FLUSHTHRESH_MULTIPLIER already defined in rte_mempool.c */
+#define BONDING_8023AD_CACHE_FLUSHTHRESH_MULTIPLIER 1.5
+
+	cache_size = RTE_MIN(RTE_MEMPOOL_CACHE_MAX_SIZE, 32);
+	total_tx_desc += rte_lcore_count() * cache_size * BONDING_8023AD_CACHE_FLUSHTHRESH_MULTIPLIER;
 	snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_pool", slave_id);
 	port->mbuf_pool = rte_pktmbuf_pool_create(mem_name, total_tx_desc,
-		RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ?
-			32 : RTE_MEMPOOL_CACHE_MAX_SIZE,
-		0, element_size, socket_id);
+		cache_size, 0, element_size, socket_id);
 
 	/* Any memory allocation failure in initialization is critical because
 	 * resources can't be free, so reinitialization is impossible.
 */
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index c5a699b1d6..6126067628 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -55,6 +55,8 @@ static void
 mempool_event_callback_invoke(enum rte_mempool_event event,
 			      struct rte_mempool *mp);
 
+/* CACHE_FLUSHTHRESH_MULTIPLIER is the same as
+ * BONDING_8023AD_CACHE_FLUSHTHRESH_MULTIPLIER in rte_eth_bond_8023ad.c */
 #define CACHE_FLUSHTHRESH_MULTIPLIER 1.5
 #define CALC_CACHE_FLUSHTHRESH(c) \
 	((typeof(c))((c) * CACHE_FLUSHTHRESH_MULTIPLIER))