From patchwork Mon Oct 30 10:51:43 2023
X-Patchwork-Submitter: Jiawen Wu
X-Patchwork-Id: 133625
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Jiawen Wu
To: dev@dpdk.org
Cc: Jiawen Wu, stable@dpdk.org
Subject: [PATCH 1/2] net/txgbe: add proper memory barriers in Rx
Date: Mon, 30 Oct 2023 18:51:43 +0800
Message-Id: <20231030105144.595502-1-jiawenwu@trustnetic.com>
X-Mailer: git-send-email 2.27.0

Refer to commit 85e46c532bc7 ("net/ixgbe: add proper memory barriers in
Rx") and fix the same issue as ixgbe.

A segmentation fault has been observed while running the
txgbe_recv_pkts_lro() function to receive packets on the Loongson 3A5000
processor. It is caused by out-of-order execution in the CPU, so add a
proper memory barrier to ensure that the reads are correctly ordered.
Apply the same fix in the txgbe_recv_pkts() function to keep the rxd
data valid, even though no segmentation fault has been observed in that
function.

Fixes: 0e484278c85f ("net/txgbe: support Rx")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu
---
 drivers/net/txgbe/txgbe_rxtx.c | 47 +++++++++++++++------------------
 1 file changed, 21 insertions(+), 26 deletions(-)

diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 834ada886a..24fc34d3c4 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -1476,11 +1476,22 @@ txgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		 * of accesses cannot be reordered by the compiler. If they were
 		 * not volatile, they could be reordered which could lead to
 		 * using invalid descriptor fields when read from rxd.
+		 *
+		 * Meanwhile, to prevent the CPU from executing out of order, we
+		 * need to use a proper memory barrier to ensure the memory
+		 * ordering below.
 		 */
 		rxdp = &rx_ring[rx_id];
 		staterr = rxdp->qw1.lo.status;
 		if (!(staterr & rte_cpu_to_le_32(TXGBE_RXD_STAT_DD)))
 			break;
+
+		/*
+		 * Use an acquire fence to ensure that status_error, which
+		 * includes the DD bit, is loaded before other descriptor words.
+		 */
+		rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+
 		rxd = *rxdp;
 
 		/*
@@ -1726,32 +1737,10 @@ txgbe_recv_pkts_lro(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts,
 
 next_desc:
 		/*
-		 * The code in this whole file uses the volatile pointer to
-		 * ensure the read ordering of the status and the rest of the
-		 * descriptor fields (on the compiler level only!!!). This is so
-		 * UGLY - why not to just use the compiler barrier instead? DPDK
-		 * even has the rte_compiler_barrier() for that.
-		 *
-		 * But most importantly this is just wrong because this doesn't
-		 * ensure memory ordering in a general case at all. For
-		 * instance, DPDK is supposed to work on Power CPUs where
-		 * compiler barrier may just not be enough!
-		 *
-		 * I tried to write only this function properly to have a
-		 * starting point (as a part of an LRO/RSC series) but the
-		 * compiler cursed at me when I tried to cast away the
-		 * "volatile" from rx_ring (yes, it's volatile too!!!). So, I'm
-		 * keeping it the way it is for now.
-		 *
-		 * The code in this file is broken in so many other places and
-		 * will just not work on a big endian CPU anyway therefore the
-		 * lines below will have to be revisited together with the rest
-		 * of the txgbe PMD.
-		 *
-		 * TODO:
-		 *    - Get rid of "volatile" and let the compiler do its job.
-		 *    - Use the proper memory barrier (rte_rmb()) to ensure the
-		 *      memory ordering below.
+		 * "Volatile" only prevents caching of the variable marked
+		 * volatile. Most importantly, "volatile" cannot prevent the CPU
+		 * from executing out of order. So, it is necessary to use a
+		 * proper memory barrier to ensure the memory ordering below.
 		 */
 		rxdp = &rx_ring[rx_id];
 		staterr = rte_le_to_cpu_32(rxdp->qw1.lo.status);
@@ -1759,6 +1748,12 @@ txgbe_recv_pkts_lro(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts,
 		if (!(staterr & TXGBE_RXD_STAT_DD))
 			break;
 
+		/*
+		 * Use an acquire fence to ensure that status_error, which
+		 * includes the DD bit, is loaded before other descriptor words.
+		 */
+		rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+
 		rxd = *rxdp;
 
 		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_id=%u "
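
Taken together, the txgbe hunks establish one pattern at every descriptor
read. Condensed into a sketch (the descriptor type name and the enclosing
Rx burst loop are assumed from the txgbe driver; queue bookkeeping and
error paths are elided):

	/* Inside the Rx burst loop, for each candidate descriptor. */
	volatile struct txgbe_rx_desc *rxdp = &rx_ring[rx_id];
	struct txgbe_rx_desc rxd;
	uint32_t staterr;

	/* 1. Load the status word and test the Descriptor Done (DD) bit. */
	staterr = rxdp->qw1.lo.status;
	if (!(staterr & rte_cpu_to_le_32(TXGBE_RXD_STAT_DD)))
		break;	/* NIC has not written this descriptor back yet */

	/*
	 * 2. Order the loads. "volatile" only restrains the compiler; a
	 * weakly ordered CPU such as the Loongson 3A5000 in the report can
	 * still issue the loads of the other descriptor words before the
	 * status load above, so the copy below could capture stale fields.
	 */
	rte_atomic_thread_fence(__ATOMIC_ACQUIRE);

	/* 3. Only now copy the descriptor; its fields match the DD bit seen. */
	rxd = *rxdp;

Without step 2, nothing stops the CPU from performing the load in step 3
before the load in step 1, which is exactly the reordering that produced
the segmentation fault in txgbe_recv_pkts_lro().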
From patchwork Mon Oct 30 10:51:44 2023
X-Patchwork-Submitter: Jiawen Wu
X-Patchwork-Id: 133626
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Jiawen Wu
To: dev@dpdk.org
Cc: Jiawen Wu, stable@dpdk.org
Subject: [PATCH 2/2] net/ngbe: add proper memory barriers in Rx
Date: Mon, 30 Oct 2023 18:51:44 +0800
Message-Id: <20231030105144.595502-2-jiawenwu@trustnetic.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20231030105144.595502-1-jiawenwu@trustnetic.com>
References: <20231030105144.595502-1-jiawenwu@trustnetic.com>

Refer to commit 85e46c532bc7 ("net/ixgbe: add proper memory barriers in
Rx") and fix the same issue as ixgbe. Although testing has not yet hit
this problem on ngbe, apply the same fix to ensure that the reads are
correctly ordered.

Fixes: 79f3128d4d98 ("net/ngbe: support scattered Rx")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu
---
 drivers/net/ngbe/ngbe_rxtx.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index ec353a30b1..54a6f6a887 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -1223,11 +1223,22 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		 * of accesses cannot be reordered by the compiler. If they were
 		 * not volatile, they could be reordered which could lead to
 		 * using invalid descriptor fields when read from rxd.
+		 *
+		 * Meanwhile, to prevent the CPU from executing out of order, we
+		 * need to use a proper memory barrier to ensure the memory
+		 * ordering below.
 		 */
 		rxdp = &rx_ring[rx_id];
 		staterr = rxdp->qw1.lo.status;
 		if (!(staterr & rte_cpu_to_le_32(NGBE_RXD_STAT_DD)))
 			break;
+
+		/*
+		 * Use an acquire fence to ensure that status_error, which
+		 * includes the DD bit, is loaded before other descriptor words.
+		 */
+		rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+
 		rxd = *rxdp;
 
 		/*
@@ -1454,6 +1465,12 @@ ngbe_recv_pkts_sc(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts,
 		if (!(staterr & NGBE_RXD_STAT_DD))
 			break;
 
+		/*
+		 * Use an acquire fence to ensure that status_error, which
+		 * includes the DD bit, is loaded before other descriptor words.
+		 */
+		rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+
 		rxd = *rxdp;
 
 		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_id=%u "
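
For reference, the same ordering requirement could also be written with
DPDK's classic SMP barrier macro instead of the C11-style fence. This is
a sketch only; the patches themselves use rte_atomic_thread_fence(), and
on strongly ordered ISAs such as x86 both forms should reduce to a mere
compiler barrier, while on weakly ordered CPUs they emit a load barrier:

	staterr = rxdp->qw1.lo.status;
	if (!(staterr & rte_cpu_to_le_32(NGBE_RXD_STAT_DD)))
		break;

	rte_smp_rmb();	/* read barrier between the DD-bit load and the copy */

	rxd = *rxdp;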