From patchwork Fri Apr 19 23:06:27 2024
X-Patchwork-Submitter: Tyler Retzlaff
X-Patchwork-Id: 139588
X-Patchwork-Delegate: thomas@monjalon.net
From: Tyler Retzlaff
To: dev@dpdk.org
Cc: Mattias Rönnblom, Morten Brørup, Abdullah Sevincer, Ajit Khaparde, Alok Prasad, Anatoly Burakov, Andrew Rybchenko, Anoob Joseph, Bruce Richardson, Byron Marohn, Chenbo Xia, Chengwen Feng, Ciara Loftus, Ciara Power, Dariusz Sosnowski, David Hunt, Devendra Singh Rawat, Erik Gabriel Carrillo, Guoyang Zhou, Harman Kalra, Harry van Haaren, Honnappa Nagarahalli, Jakub Grajciar, Jerin Jacob, Jeroen de Borst, Jian Wang, Jiawen Wu, Jie Hai, Jingjing Wu, Joshua Washington, Joyce Kong, Junfeng Guo, Kevin Laatz, Konstantin Ananyev, Liang Ma, Long Li, Maciej Czekaj, Matan Azrad, Maxime Coquelin, Nicolas Chautru, Ori Kam, Pavan Nikhilesh, Peter Mccarthy, Rahul Lakkireddy, Reshma Pattan, Rosen Xu, Ruifeng Wang, Rushil Gupta, Sameh Gobriel, Sivaprasad Tummala, Somnath Kotur, Stephen Hemminger, Suanming Mou, Sunil Kumar Kori, Sunil Uttarwar, Tetsuya Mukawa, Vamsi Attunuru, Viacheslav Ovsiienko, Vladimir Medvedkin, Xiaoyun Wang, Yipeng Wang, Yisen Zhuang, Yuying Zhang, Ziyang Xuan, Tyler Retzlaff
Subject: [PATCH v4 29/45] common/idpf: use rte stdatomic API
Date: Fri, 19 Apr 2024 16:06:27 -0700
Message-Id: <1713568003-30453-30-git-send-email-roretzla@linux.microsoft.com>
In-Reply-To: <1713568003-30453-1-git-send-email-roretzla@linux.microsoft.com>
References: <1710967892-7046-1-git-send-email-roretzla@linux.microsoft.com> <1713568003-30453-1-git-send-email-roretzla@linux.microsoft.com>
List-Id: DPDK patches and discussions

Replace the use of the gcc builtin __atomic_xxx intrinsics with the corresponding rte_atomic_xxx optional rte stdatomic API.
Signed-off-by: Tyler Retzlaff
Acked-by: Stephen Hemminger
---
 drivers/common/idpf/idpf_common_device.h      |  6 +++---
 drivers/common/idpf/idpf_common_rxtx.c        | 14 ++++++++------
 drivers/common/idpf/idpf_common_rxtx.h        |  2 +-
 drivers/common/idpf/idpf_common_rxtx_avx512.c | 16 ++++++++--------
 4 files changed, 20 insertions(+), 18 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 3834c1f..bfa927a 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -48,7 +48,7 @@ struct idpf_adapter {
 	struct idpf_hw hw;
 	struct virtchnl2_version_info virtchnl_version;
 	struct virtchnl2_get_capabilities caps;
-	volatile uint32_t pend_cmd; /* pending command not finished */
+	volatile RTE_ATOMIC(uint32_t) pend_cmd; /* pending command not finished */
 	uint32_t cmd_retval; /* return value of the cmd response from cp */
 	uint8_t *mbx_resp; /* buffer to store the mailbox response from cp */

@@ -179,8 +179,8 @@ struct idpf_cmd_info {
 atomic_set_cmd(struct idpf_adapter *adapter, uint32_t ops)
 {
 	uint32_t op_unk = VIRTCHNL2_OP_UNKNOWN;
-	bool ret = __atomic_compare_exchange(&adapter->pend_cmd, &op_unk, &ops,
-					0, __ATOMIC_ACQUIRE, __ATOMIC_ACQUIRE);
+	bool ret = rte_atomic_compare_exchange_strong_explicit(&adapter->pend_cmd, &op_unk, ops,
+					rte_memory_order_acquire, rte_memory_order_acquire);

 	if (!ret)
 		DRV_LOG(ERR, "There is incomplete cmd %d", adapter->pend_cmd);

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index 83b131e..b09c58c 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -592,8 +592,8 @@
 		next_avail = 0;
 		rx_bufq->nb_rx_hold -= delta;
 	} else {
-		__atomic_fetch_add(&rx_bufq->rx_stats.mbuf_alloc_failed,
-				   nb_desc - next_avail, __ATOMIC_RELAXED);
+		rte_atomic_fetch_add_explicit(&rx_bufq->rx_stats.mbuf_alloc_failed,
+				   nb_desc - next_avail, rte_memory_order_relaxed);
 		RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
 		       rx_bufq->port_id, rx_bufq->queue_id);
 		return;
@@ -612,8 +612,8 @@
 		next_avail += nb_refill;
 		rx_bufq->nb_rx_hold -= nb_refill;
 	} else {
-		__atomic_fetch_add(&rx_bufq->rx_stats.mbuf_alloc_failed,
-				   nb_desc - next_avail, __ATOMIC_RELAXED);
+		rte_atomic_fetch_add_explicit(&rx_bufq->rx_stats.mbuf_alloc_failed,
+				   nb_desc - next_avail, rte_memory_order_relaxed);
 		RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
 		       rx_bufq->port_id, rx_bufq->queue_id);
 	}
@@ -1093,7 +1093,8 @@
 		nmb = rte_mbuf_raw_alloc(rxq->mp);
 		if (unlikely(nmb == NULL)) {
-			__atomic_fetch_add(&rxq->rx_stats.mbuf_alloc_failed, 1, __ATOMIC_RELAXED);
+			rte_atomic_fetch_add_explicit(&rxq->rx_stats.mbuf_alloc_failed, 1,
+					rte_memory_order_relaxed);
 			RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
 			       "queue_id=%u", rxq->port_id, rxq->queue_id);
 			break;
@@ -1203,7 +1204,8 @@
 		nmb = rte_mbuf_raw_alloc(rxq->mp);
 		if (unlikely(!nmb)) {
-			__atomic_fetch_add(&rxq->rx_stats.mbuf_alloc_failed, 1, __ATOMIC_RELAXED);
+			rte_atomic_fetch_add_explicit(&rxq->rx_stats.mbuf_alloc_failed, 1,
+					rte_memory_order_relaxed);
 			RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
 			       "queue_id=%u", rxq->port_id, rxq->queue_id);
 			break;

diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index b49b1ed..eeeeed1 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -97,7 +97,7 @@
 #define IDPF_RX_SPLIT_BUFQ2_ID	2

 struct idpf_rx_stats {
-	uint64_t mbuf_alloc_failed;
+	RTE_ATOMIC(uint64_t) mbuf_alloc_failed;
 };

 struct idpf_rx_queue {

diff --git a/drivers/common/idpf/idpf_common_rxtx_avx512.c b/drivers/common/idpf/idpf_common_rxtx_avx512.c
index f65e8d5..3b5e124 100644
--- a/drivers/common/idpf/idpf_common_rxtx_avx512.c
+++ b/drivers/common/idpf/idpf_common_rxtx_avx512.c
@@ -38,8 +38,8 @@
 					dma_addr0);
 			}
 		}
-		__atomic_fetch_add(&rxq->rx_stats.mbuf_alloc_failed,
-				   IDPF_RXQ_REARM_THRESH, __ATOMIC_RELAXED);
+		rte_atomic_fetch_add_explicit(&rxq->rx_stats.mbuf_alloc_failed,
+				   IDPF_RXQ_REARM_THRESH, rte_memory_order_relaxed);
 		return;
 	}

 	struct rte_mbuf *mb0, *mb1, *mb2, *mb3;
@@ -168,8 +168,8 @@
 					dma_addr0);
 			}
 		}
-		__atomic_fetch_add(&rxq->rx_stats.mbuf_alloc_failed,
-				   IDPF_RXQ_REARM_THRESH, __ATOMIC_RELAXED);
+		rte_atomic_fetch_add_explicit(&rxq->rx_stats.mbuf_alloc_failed,
+				   IDPF_RXQ_REARM_THRESH, rte_memory_order_relaxed);
 		return;
 	}
 }
@@ -564,8 +564,8 @@
 					dma_addr0);
 			}
 		}
-		__atomic_fetch_add(&rx_bufq->rx_stats.mbuf_alloc_failed,
-				   IDPF_RXQ_REARM_THRESH, __ATOMIC_RELAXED);
+		rte_atomic_fetch_add_explicit(&rx_bufq->rx_stats.mbuf_alloc_failed,
+				   IDPF_RXQ_REARM_THRESH, rte_memory_order_relaxed);
 		return;
 	}
@@ -638,8 +638,8 @@
 					dma_addr0);
 			}
 		}
-		__atomic_fetch_add(&rx_bufq->rx_stats.mbuf_alloc_failed,
-				   IDPF_RXQ_REARM_THRESH, __ATOMIC_RELAXED);
+		rte_atomic_fetch_add_explicit(&rx_bufq->rx_stats.mbuf_alloc_failed,
+				   IDPF_RXQ_REARM_THRESH, rte_memory_order_relaxed);
 		return;
 	}
 }