From patchwork Mon Oct 30 10:51:44 2023
X-Patchwork-Submitter: Jiawen Wu
X-Patchwork-Id: 133626
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Jiawen Wu
To: dev@dpdk.org
Cc: Jiawen Wu, stable@dpdk.org
Subject: [PATCH 2/2] net/ngbe: add proper memory barriers in Rx
Date: Mon, 30 Oct 2023 18:51:44 +0800
Message-Id: <20231030105144.595502-2-jiawenwu@trustnetic.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20231030105144.595502-1-jiawenwu@trustnetic.com>
References: <20231030105144.595502-1-jiawenwu@trustnetic.com>

Refer to commit 85e46c532bc7 ("net/ixgbe: add proper memory barriers in
Rx") and fix the same issue in ngbe. Although, due to the testing
schedule, this problem has not yet been observed in testing, apply the
same fix in ngbe to ensure that the descriptor status (DD bit) is read
before the other descriptor words.

Fixes: 79f3128d4d98 ("net/ngbe: support scattered Rx")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu
---
 drivers/net/ngbe/ngbe_rxtx.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index ec353a30b1..54a6f6a887 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -1223,11 +1223,22 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		 * of accesses cannot be reordered by the compiler. If they were
 		 * not volatile, they could be reordered which could lead to
 		 * using invalid descriptor fields when read from rxd.
+		 *
+		 * Meanwhile, to prevent the CPU from executing out of order, we
+		 * need to use a proper memory barrier to ensure the memory
+		 * ordering below.
 		 */
 		rxdp = &rx_ring[rx_id];
 		staterr = rxdp->qw1.lo.status;
 		if (!(staterr & rte_cpu_to_le_32(NGBE_RXD_STAT_DD)))
 			break;
+
+		/*
+		 * Use acquire fence to ensure that status_error which includes
+		 * DD bit is loaded before loading of other descriptor words.
+		 */
+		rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+
 		rxd = *rxdp;
 
 		/*
@@ -1454,6 +1465,12 @@ ngbe_recv_pkts_sc(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts,
 		if (!(staterr & NGBE_RXD_STAT_DD))
 			break;
 
+		/*
+		 * Use acquire fence to ensure that status_error which includes
+		 * DD bit is loaded before loading of other descriptor words.
+		 */
+		rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+
 		rxd = *rxdp;
 
 		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_id=%u "
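
For readers unfamiliar with the pattern being applied here, the sketch below
shows the same "check DD bit, then acquire fence, then read the rest of the
descriptor" sequence outside the driver. It is a minimal, hypothetical
example: the rx_desc layout, the RXD_STAT_DD value and poll_rx_desc() are
invented for illustration and are not the real ngbe definitions; only the
ordering of the loads mirrors the patch. It uses the C11 fence, which
corresponds to rte_atomic_thread_fence(__ATOMIC_ACQUIRE) in DPDK.

```c
#include <stdatomic.h>
#include <stdint.h>

/*
 * Hypothetical descriptor written by the NIC via DMA. The hardware
 * writes the payload fields first and sets the DD bit in the status
 * word last.
 */
struct rx_desc {
	uint64_t pkt_addr;
	uint32_t length;
	uint32_t status;	/* DD bit set when the descriptor is done */
};

#define RXD_STAT_DD 0x1u	/* illustrative value, not the ngbe define */

/* Copy the descriptor into *out and return 1 if it is ready, else 0. */
static int poll_rx_desc(volatile struct rx_desc *rxdp, struct rx_desc *out)
{
	uint32_t status = rxdp->status;

	if (!(status & RXD_STAT_DD))
		return 0;

	/*
	 * Acquire fence: the loads of the other descriptor words below
	 * must not be reordered before the status load above. In the
	 * driver this is rte_atomic_thread_fence(__ATOMIC_ACQUIRE).
	 */
	atomic_thread_fence(memory_order_acquire);

	*out = *rxdp;
	return 1;
}
```

Without the fence, a weakly ordered CPU (for example Arm) may already have
loaded stale copies of the other descriptor words before it observes the DD
bit set, so the copied descriptor can mix old and new data even though the
DD check passed; the compiler barrier provided by volatile alone does not
prevent this hardware reordering.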