From patchwork Fri Apr 19 23:06:32 2024
X-Patchwork-Submitter: Tyler Retzlaff
X-Patchwork-Id: 139593
X-Patchwork-Delegate: thomas@monjalon.net
From: Tyler Retzlaff
To: dev@dpdk.org
Cc: Mattias Rönnblom, Morten Brørup, Abdullah Sevincer, Ajit Khaparde,
 Alok Prasad, Anatoly Burakov, Andrew Rybchenko, Anoob Joseph,
 Bruce Richardson, Byron Marohn, Chenbo Xia, Chengwen Feng, Ciara Loftus,
 Ciara Power, Dariusz Sosnowski, David Hunt, Devendra Singh Rawat,
 Erik Gabriel Carrillo, Guoyang Zhou, Harman Kalra, Harry van Haaren,
 Honnappa Nagarahalli, Jakub Grajciar, Jerin Jacob, Jeroen de Borst,
 Jian Wang, Jiawen Wu, Jie Hai, Jingjing Wu, Joshua Washington, Joyce Kong,
 Junfeng Guo, Kevin Laatz, Konstantin Ananyev, Liang Ma, Long Li,
 Maciej Czekaj, Matan Azrad, Maxime Coquelin, Nicolas Chautru, Ori Kam,
 Pavan Nikhilesh, Peter Mccarthy, Rahul Lakkireddy, Reshma Pattan, Rosen Xu,
 Ruifeng Wang, Rushil Gupta, Sameh Gobriel, Sivaprasad Tummala,
 Somnath Kotur, Stephen Hemminger, Suanming Mou, Sunil Kumar Kori,
 Sunil Uttarwar, Tetsuya Mukawa, Vamsi Attunuru, Viacheslav Ovsiienko,
 Vladimir Medvedkin, Xiaoyun Wang, Yipeng Wang, Yisen Zhuang, Yuying Zhang,
 Ziyang Xuan, Tyler Retzlaff
Subject: [PATCH v4 34/45] event/dlb2: use rte stdatomic API
Date: Fri, 19 Apr 2024 16:06:32 -0700
Message-Id: <1713568003-30453-35-git-send-email-roretzla@linux.microsoft.com>
In-Reply-To: <1713568003-30453-1-git-send-email-roretzla@linux.microsoft.com>
References: <1710967892-7046-1-git-send-email-roretzla@linux.microsoft.com>
 <1713568003-30453-1-git-send-email-roretzla@linux.microsoft.com>
List-Id: DPDK patches and discussions

Replace the use of the gcc builtin __atomic_xxx intrinsics with the
corresponding rte_atomic_xxx optional rte stdatomic API.
Signed-off-by: Tyler Retzlaff
Acked-by: Stephen Hemminger
---
 drivers/event/dlb2/dlb2.c        | 34 +++++++++++++++++-----------------
 drivers/event/dlb2/dlb2_priv.h   | 13 +++++--------
 drivers/event/dlb2/dlb2_xstats.c |  2 +-
 3 files changed, 23 insertions(+), 26 deletions(-)

diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index 628ddef..0b91f03 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -1005,7 +1005,7 @@ struct process_local_port_data
 	}
 
 	dlb2->new_event_limit = config->nb_events_limit;
-	__atomic_store_n(&dlb2->inflights, 0, __ATOMIC_SEQ_CST);
+	rte_atomic_store_explicit(&dlb2->inflights, 0, rte_memory_order_seq_cst);
 
 	/* Save number of ports/queues for this event dev */
 	dlb2->num_ports = config->nb_event_ports;
@@ -2668,10 +2668,10 @@ static int dlb2_num_dir_queues_setup(struct dlb2_eventdev *dlb2)
 		batch_size = credits;
 
 	if (likely(credits &&
-		   __atomic_compare_exchange_n(
+		   rte_atomic_compare_exchange_strong_explicit(
 			qm_port->credit_pool[type],
-			&credits, credits - batch_size, false,
-			__ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST)))
+			&credits, credits - batch_size,
+			rte_memory_order_seq_cst, rte_memory_order_seq_cst)))
 		return batch_size;
 	else
 		return 0;
@@ -2687,7 +2687,7 @@ static int dlb2_num_dir_queues_setup(struct dlb2_eventdev *dlb2)
 		/* Replenish credits, saving one quanta for enqueues */
 		uint16_t val = ev_port->inflight_credits - quanta;
 
-		__atomic_fetch_sub(&dlb2->inflights, val, __ATOMIC_SEQ_CST);
+		rte_atomic_fetch_sub_explicit(&dlb2->inflights, val, rte_memory_order_seq_cst);
 		ev_port->inflight_credits -= val;
 	}
 }
@@ -2696,8 +2696,8 @@ static int dlb2_num_dir_queues_setup(struct dlb2_eventdev *dlb2)
 dlb2_check_enqueue_sw_credits(struct dlb2_eventdev *dlb2,
 			      struct dlb2_eventdev_port *ev_port)
 {
-	uint32_t sw_inflights = __atomic_load_n(&dlb2->inflights,
-						__ATOMIC_SEQ_CST);
+	uint32_t sw_inflights = rte_atomic_load_explicit(&dlb2->inflights,
+						rte_memory_order_seq_cst);
 	const int num = 1;
 
 	if (unlikely(ev_port->inflight_max < sw_inflights)) {
@@ -2719,8 +2719,8 @@ static int dlb2_num_dir_queues_setup(struct dlb2_eventdev *dlb2)
 		return 1;
 	}
 
-	__atomic_fetch_add(&dlb2->inflights, credit_update_quanta,
-			   __ATOMIC_SEQ_CST);
+	rte_atomic_fetch_add_explicit(&dlb2->inflights, credit_update_quanta,
+			   rte_memory_order_seq_cst);
 	ev_port->inflight_credits += (credit_update_quanta);
 
 	if (ev_port->inflight_credits < num) {
@@ -3234,17 +3234,17 @@ static int dlb2_num_dir_queues_setup(struct dlb2_eventdev *dlb2)
 	if (qm_port->dlb2->version == DLB2_HW_V2) {
 		qm_port->cached_ldb_credits += num;
 		if (qm_port->cached_ldb_credits >= 2 * batch_size) {
-			__atomic_fetch_add(
+			rte_atomic_fetch_add_explicit(
 				qm_port->credit_pool[DLB2_LDB_QUEUE],
-				batch_size, __ATOMIC_SEQ_CST);
+				batch_size, rte_memory_order_seq_cst);
 			qm_port->cached_ldb_credits -= batch_size;
 		}
 	} else {
 		qm_port->cached_credits += num;
 		if (qm_port->cached_credits >= 2 * batch_size) {
-			__atomic_fetch_add(
+			rte_atomic_fetch_add_explicit(
 				qm_port->credit_pool[DLB2_COMBINED_POOL],
-				batch_size, __ATOMIC_SEQ_CST);
+				batch_size, rte_memory_order_seq_cst);
 			qm_port->cached_credits -= batch_size;
 		}
 	}
@@ -3252,17 +3252,17 @@ static int dlb2_num_dir_queues_setup(struct dlb2_eventdev *dlb2)
 	if (qm_port->dlb2->version == DLB2_HW_V2) {
 		qm_port->cached_dir_credits += num;
 		if (qm_port->cached_dir_credits >= 2 * batch_size) {
-			__atomic_fetch_add(
+			rte_atomic_fetch_add_explicit(
 				qm_port->credit_pool[DLB2_DIR_QUEUE],
-				batch_size, __ATOMIC_SEQ_CST);
+				batch_size, rte_memory_order_seq_cst);
 			qm_port->cached_dir_credits -= batch_size;
 		}
 	} else {
 		qm_port->cached_credits += num;
 		if (qm_port->cached_credits >= 2 * batch_size) {
-			__atomic_fetch_add(
+			rte_atomic_fetch_add_explicit(
 				qm_port->credit_pool[DLB2_COMBINED_POOL],
-				batch_size, __ATOMIC_SEQ_CST);
+				batch_size, rte_memory_order_seq_cst);
 			qm_port->cached_credits -= batch_size;
 		}
 	}
diff --git a/drivers/event/dlb2/dlb2_priv.h b/drivers/event/dlb2/dlb2_priv.h
index 49f1c66..2470ae0 100644
--- a/drivers/event/dlb2/dlb2_priv.h
+++ b/drivers/event/dlb2/dlb2_priv.h
@@ -348,7 +348,7 @@ struct dlb2_port {
 	uint32_t dequeue_depth;
 	enum dlb2_token_pop_mode token_pop_mode;
 	union dlb2_port_config cfg;
-	uint32_t *credit_pool[DLB2_NUM_QUEUE_TYPES]; /* use __atomic builtins */
+	RTE_ATOMIC(uint32_t) *credit_pool[DLB2_NUM_QUEUE_TYPES];
 	union {
 		struct {
 			uint16_t cached_ldb_credits;
@@ -586,7 +586,7 @@ struct dlb2_eventdev {
 	uint32_t xstats_count_mode_dev;
 	uint32_t xstats_count_mode_port;
 	uint32_t xstats_count;
-	uint32_t inflights; /* use __atomic builtins */
+	RTE_ATOMIC(uint32_t) inflights;
 	uint32_t new_event_limit;
 	int max_num_events_override;
 	int num_dir_credits_override;
@@ -623,15 +623,12 @@ struct dlb2_eventdev {
 		struct {
 			uint16_t max_ldb_credits;
 			uint16_t max_dir_credits;
-			/* use __atomic builtins */ /* shared hw cred */
-			alignas(RTE_CACHE_LINE_SIZE) uint32_t ldb_credit_pool;
-			/* use __atomic builtins */ /* shared hw cred */
-			alignas(RTE_CACHE_LINE_SIZE) uint32_t dir_credit_pool;
+			alignas(RTE_CACHE_LINE_SIZE) RTE_ATOMIC(uint32_t) ldb_credit_pool;
+			alignas(RTE_CACHE_LINE_SIZE) RTE_ATOMIC(uint32_t) dir_credit_pool;
 		};
 		struct {
 			uint16_t max_credits;
-			/* use __atomic builtins */ /* shared hw cred */
-			alignas(RTE_CACHE_LINE_SIZE) uint32_t credit_pool;
+			alignas(RTE_CACHE_LINE_SIZE) RTE_ATOMIC(uint32_t) credit_pool;
 		};
 	};
 	uint32_t cos_ports[DLB2_COS_NUM_VALS]; /* total ldb ports in each class */
diff --git a/drivers/event/dlb2/dlb2_xstats.c b/drivers/event/dlb2/dlb2_xstats.c
index ff15271..22094f3 100644
--- a/drivers/event/dlb2/dlb2_xstats.c
+++ b/drivers/event/dlb2/dlb2_xstats.c
@@ -173,7 +173,7 @@ struct dlb2_xstats_entry {
 	case nb_events_limit:
 		return dlb2->new_event_limit;
 	case inflight_events:
-		return __atomic_load_n(&dlb2->inflights, __ATOMIC_SEQ_CST);
+		return rte_atomic_load_explicit(&dlb2->inflights, rte_memory_order_seq_cst);
 	case ldb_pool_size:
 		return dlb2->num_ldb_credits;
 	case dir_pool_size: