From patchwork Mon May 13 18:52:11 2024
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 140040
X-Patchwork-Delegate: thomas@monjalon.net
From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger
Subject: [RFC v2 1/7] eal: generic 64 bit counter
Date: Mon, 13 May 2024 11:52:11 -0700
Message-ID: <20240513185448.120356-2-stephen@networkplumber.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240513185448.120356-1-stephen@networkplumber.org>
References: <20240510050507.14381-1-stephen@networkplumber.org>
 <20240513185448.120356-1-stephen@networkplumber.org>

This header implements 64 bit counters that are NOT atomic, but are
safe against load/store splits on 32 bit platforms.

Signed-off-by: Stephen Hemminger
Acked-by: Morten Brørup
---
 lib/eal/include/meson.build   |  1 +
 lib/eal/include/rte_counter.h | 91 +++++++++++++++++++++++++++++++++++
 2 files changed, 92 insertions(+)
 create mode 100644 lib/eal/include/rte_counter.h

diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index e94b056d46..c070dd0079 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -12,6 +12,7 @@ headers += files(
         'rte_class.h',
         'rte_common.h',
         'rte_compat.h',
+        'rte_counter.h',
         'rte_debug.h',
         'rte_dev.h',
         'rte_devargs.h',
diff --git a/lib/eal/include/rte_counter.h b/lib/eal/include/rte_counter.h
new file mode 100644
index 0000000000..1c1c34c2fb
--- /dev/null
+++ b/lib/eal/include/rte_counter.h
@@ -0,0 +1,91 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) Stephen Hemminger
+ */
+
+#ifndef _RTE_COUNTER_H_
+#define _RTE_COUNTER_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * @file
+ * RTE Counter
+ *
+ * A counter is a 64 bit value that is safe from split read/write
+ * on 32 bit platforms. It assumes that only one CPU at a time
+ * will update the counter, while another CPU may read it.
+ *
+ * This is a much weaker guarantee than @rte_atomic but is faster
+ * since no locked operations are required for updates.
+ */
+
+#include <stdint.h>
+
+#ifdef RTE_ARCH_64
+/*
+ * On a platform with a native 64 bit type, no special handling is needed.
+ * These are just thin wrappers around a 64 bit value.
+ */
+typedef uint64_t rte_counter64_t;
+
+/**
+ * Add value to counter.
+ */
+__rte_experimental
+static inline void
+rte_counter64_add(rte_counter64_t *counter, uint32_t val)
+{
+	*counter += val;
+}
+
+__rte_experimental
+static inline uint64_t
+rte_counter64_fetch(const rte_counter64_t *counter)
+{
+	return *counter;
+}
+
+__rte_experimental
+static inline void
+rte_counter64_reset(rte_counter64_t *counter)
+{
+	*counter = 0;
+}
+
+#else
+/*
+ * On a 32 bit platform, atomics are needed to keep the compiler
+ * from splitting 64 bit reads and writes.
+ */
+typedef RTE_ATOMIC(uint64_t) rte_counter64_t;
+
+__rte_experimental
+static inline void
+rte_counter64_add(rte_counter64_t *counter, uint32_t val)
+{
+	rte_atomic_fetch_add_explicit(counter, val, rte_memory_order_relaxed);
+}
+
+__rte_experimental
+static inline uint64_t
+rte_counter64_fetch(rte_counter64_t *counter)
+{
+	return rte_atomic_load_explicit(counter, rte_memory_order_relaxed);
+}
+
+__rte_experimental
+static inline void
+rte_counter64_reset(rte_counter64_t *counter)
+{
+	rte_atomic_store_explicit(counter, 0, rte_memory_order_relaxed);
+}
+#endif
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_COUNTER_H_ */
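
For readers placing the API: below is a minimal usage sketch, not part of the
patch, showing the intended single-writer pattern. The struct and function
names (pkt_stats, stats_update, stats_read) are hypothetical; only the
rte_counter64_* calls come from the header above.

/* Hypothetical usage sketch, not part of this patch. */
#include <stdint.h>
#include <rte_counter.h>

/* Per-queue stats that are written by exactly one lcore. */
struct pkt_stats {
	rte_counter64_t packets;
	rte_counter64_t bytes;
};

/* Datapath (single writer): plain 64 bit adds on 64 bit archs,
 * relaxed atomic adds on 32 bit archs.
 */
static inline void
stats_update(struct pkt_stats *st, uint32_t nb_pkts, uint32_t nb_bytes)
{
	rte_counter64_add(&st->packets, nb_pkts);
	rte_counter64_add(&st->bytes, nb_bytes);
}

/* Control plane (possibly another lcore): reads are never torn, but are
 * not otherwise synchronized with the writer. Non-const pointer because
 * the 32 bit variant of rte_counter64_fetch() is not const-qualified
 * in this revision.
 */
static void
stats_read(struct pkt_stats *st, uint64_t *packets, uint64_t *bytes)
{
	*packets = rte_counter64_fetch(&st->packets);
	*bytes = rte_counter64_fetch(&st->bytes);
}

If more than one lcore can update the same counter, this API is not enough;
per-lcore counter instances or real atomics would be needed, in line with the
single-writer assumption stated in the @file comment.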