[v2,2/6] eal: use rte atomic thread fence

Message ID 1707979859-3984-3-git-send-email-roretzla@linux.microsoft.com (mailing list archive)
State Accepted, archived
Delegated to: David Marchand
Series: use rte atomic thread fence

Checks

Context        Check    Description
ci/checkpatch  success  coding style OK

Commit Message

Tyler Retzlaff Feb. 15, 2024, 6:50 a.m. UTC
Use rte_atomic_thread_fence instead of directly using the GCC
__atomic_thread_fence builtin intrinsic.

Update rte_mcslock.h to use rte_atomic_thread_fence instead of
directly using the internal __rte_atomic_thread_fence.
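
For illustration, a minimal sketch (not part of this patch; the
publish()/payload names are hypothetical) of the wrapper in use,
publishing data behind a release fence the same way the
eal_common_trace.c hunk does:

  #include <rte_stdatomic.h>

  static struct { int a; int b; } payload;  /* hypothetical data */
  static RTE_ATOMIC(int) ready;

  static void
  publish(void)
  {
  	payload.a = 1;
  	payload.b = 2;
  	/* Order the payload stores before the flag store. */
  	rte_atomic_thread_fence(rte_memory_order_release);
  	rte_atomic_store_explicit(&ready, 1, rte_memory_order_relaxed);
  }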

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/eal/common/eal_common_trace.c | 2 +-
 lib/eal/include/rte_mcslock.h     | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)
  

Patch

diff --git a/lib/eal/common/eal_common_trace.c b/lib/eal/common/eal_common_trace.c
index 6ad87fc..918f49b 100644
--- a/lib/eal/common/eal_common_trace.c
+++ b/lib/eal/common/eal_common_trace.c
@@ -526,7 +526,7 @@  rte_trace_mode rte_trace_mode_get(void)
 
 	/* Add the trace point at tail */
 	STAILQ_INSERT_TAIL(&tp_list, tp, next);
-	__atomic_thread_fence(rte_memory_order_release);
+	rte_atomic_thread_fence(rte_memory_order_release);
 
 	/* All Good !!! */
 	return 0;
diff --git a/lib/eal/include/rte_mcslock.h b/lib/eal/include/rte_mcslock.h
index 2ca967f..0aeb1a0 100644
--- a/lib/eal/include/rte_mcslock.h
+++ b/lib/eal/include/rte_mcslock.h
@@ -83,7 +83,7 @@ 
 	 * store to prev->next. Otherwise it will cause a deadlock. Need a
 	 * store-load barrier.
 	 */
-	__rte_atomic_thread_fence(rte_memory_order_acq_rel);
+	rte_atomic_thread_fence(rte_memory_order_acq_rel);
 	/* If the lock has already been acquired, it first atomically
 	 * places the node at the end of the queue and then proceeds
 	 * to spin on me->locked until the previous lock holder resets
@@ -117,7 +117,7 @@ 
 		 * while-loop first. This has the potential to cause a
 		 * deadlock. Need a load barrier.
 		 */
-		__rte_atomic_thread_fence(rte_memory_order_acquire);
+		rte_atomic_thread_fence(rte_memory_order_acquire);
 		/* More nodes added to the queue by other CPUs.
 		 * Wait until the next pointer is set.
 		 */
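
For context, the two rte_mcslock.h hunks sit on the contended lock and
unlock paths and only swap the fence spelling; callers are unaffected.
A minimal usage sketch, assuming the documented rte_mcslock.h API (a
lock pointer plus a per-thread queue node); variable names are
illustrative:

  #include <rte_mcslock.h>

  static rte_mcslock_t *p_ml;  /* tail of the waiter queue, NULL when free */

  static void
  critical_section(void)
  {
  	rte_mcslock_t ml_me;  /* this thread's queue node, lives on its stack */

  	rte_mcslock_lock(&p_ml, &ml_me);
  	/* ... protected work ... */
  	rte_mcslock_unlock(&p_ml, &ml_me);
  }

Per the surrounding comments in the diff, the acq_rel fence keeps the
spin-load of me->locked from moving above the store to prev->next
(a store-load ordering), and the acquire fence in the unlock path keeps
the wait on me->next from being speculated ahead of the failed
compare-exchange.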