From patchwork Wed Nov 1 03:32:41 2023
X-Patchwork-Submitter: Jiawen Wu <jiawenwu@trustnetic.com>
X-Patchwork-Id: 133695
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Jiawen Wu <jiawenwu@trustnetic.com>
To: dev@dpdk.org
Cc: Jiawen Wu <jiawenwu@trustnetic.com>, stable@dpdk.org
Subject: [PATCH v2 2/2] net/ngbe: add proper memory barriers in Rx
Date: Wed, 1 Nov 2023 11:32:41 +0800
Message-Id: <20231101033241.623190-2-jiawenwu@trustnetic.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20231101033241.623190-1-jiawenwu@trustnetic.com>
References: <20231030105144.595502-1-jiawenwu@trustnetic.com>
 <20231101033241.623190-1-jiawenwu@trustnetic.com>

Refer to commit 85e46c532bc7 ("net/ixgbe: add proper memory barriers in
Rx"), which fixes the same issue in ixgbe. Although testing has not yet
uncovered this problem in ngbe, owing to the testing schedule, apply the
same fix to ensure correct read ordering.

Fixes: 79f3128d4d98 ("net/ngbe: support scattered Rx")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ngbe/ngbe_rxtx.c | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index ec353a30b1..8a873b858e 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -980,7 +980,7 @@ ngbe_rx_scan_hw_ring(struct ngbe_rx_queue *rxq)
 		for (j = 0; j < LOOK_AHEAD; j++)
 			s[j] = rte_le_to_cpu_32(rxdp[j].qw1.lo.status);
 
-		rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+		rte_atomic_thread_fence(rte_memory_order_acquire);
 
 		/* Compute how many status bits were set */
 		for (nb_dd = 0; nb_dd < LOOK_AHEAD &&
@@ -1223,11 +1223,22 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		 * of accesses cannot be reordered by the compiler. If they were
 		 * not volatile, they could be reordered which could lead to
 		 * using invalid descriptor fields when read from rxd.
+		 *
+		 * Meanwhile, to prevent the CPU from executing out of order, we
+		 * need to use a proper memory barrier to ensure the memory
+		 * ordering below.
 		 */
 		rxdp = &rx_ring[rx_id];
 		staterr = rxdp->qw1.lo.status;
 		if (!(staterr & rte_cpu_to_le_32(NGBE_RXD_STAT_DD)))
 			break;
+
+		/*
+		 * Use acquire fence to ensure that status_error which includes
+		 * DD bit is loaded before loading of other descriptor words.
+		 */
+		rte_atomic_thread_fence(rte_memory_order_acquire);
+
 		rxd = *rxdp;
 
 		/*
@@ -1454,6 +1465,12 @@ ngbe_recv_pkts_sc(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts,
 		if (!(staterr & NGBE_RXD_STAT_DD))
 			break;
 
+		/*
+		 * Use acquire fence to ensure that status_error which includes
+		 * DD bit is loaded before loading of other descriptor words.
+		 */
+		rte_atomic_thread_fence(rte_memory_order_acquire);
+
 		rxd = *rxdp;
 
 		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_id=%u "
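
[Editor's note: the fence pattern this patch adds can be sketched outside
the driver. The following is a minimal illustration, not ngbe code: it
uses C11 atomic_thread_fence() in place of DPDK's rte_atomic_thread_fence()
wrapper, and the descriptor layout (struct rx_desc, STAT_DD, poll_desc) is
hypothetical, standing in for ngbe's real qw0/qw1 descriptor words.]

#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical two-word descriptor, for illustration only; the real
 * ngbe Rx descriptor has a different layout. The NIC writes the data
 * words first and sets the DD (descriptor done) bit last. */
struct rx_desc {
	uint64_t qw0;      /* packet metadata, written by the NIC first */
	uint32_t status;   /* holds the DD bit, written by the NIC last */
};

#define STAT_DD 0x1u

/* Poll one descriptor: return 1 and copy it into *out once the DD bit
 * shows the NIC has finished writing it, 0 otherwise. */
static int
poll_desc(volatile struct rx_desc *rxdp, struct rx_desc *out)
{
	uint32_t staterr = rxdp->status;

	if (!(staterr & STAT_DD))
		return 0;

	/*
	 * Acquire fence: the status load above must complete before the
	 * loads of the remaining descriptor words below. Without it, a
	 * weakly ordered CPU (e.g. Arm) may load qw0 from before the
	 * NIC's writeback even though DD already reads back as set,
	 * which is the stale-descriptor bug this patch closes.
	 */
	atomic_thread_fence(memory_order_acquire);

	out->qw0 = rxdp->qw0;
	out->status = staterr;
	return 1;
}

On strongly ordered x86, an acquire fence emits no instruction and only
constrains the compiler, since loads are not reordered with other loads;
on Arm it emits a load barrier. That asymmetry is likely why the commit
message notes that testing had not yet caught the problem.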