From patchwork Tue May 21 20:16:34 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 140229
X-Patchwork-Delegate: thomas@monjalon.net
Received: from hermes.lan (204-195-96-226.wavecable.com.
[204.195.96.226]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-1f30fb5303csm16119075ad.166.2024.05.21.13.18.03 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 21 May 2024 13:18:03 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger , =?utf-8?q?Morten_Br?= =?utf-8?q?=C3=B8rup?= , Tyler Retzlaff Subject: [PATCH v9 1/8] eal: generic 64 bit counter Date: Tue, 21 May 2024 13:16:34 -0700 Message-ID: <20240521201801.126886-2-stephen@networkplumber.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240521201801.126886-1-stephen@networkplumber.org> References: <20240510050507.14381-1-stephen@networkplumber.org> <20240521201801.126886-1-stephen@networkplumber.org> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org This header implements 64 bit counters using atomic operations but with a weak memory ordering so that they are safe against load/store splits on 32 bit platforms. Signed-off-by: Stephen Hemminger Acked-by: Morten Brørup --- lib/eal/include/meson.build | 1 + lib/eal/include/rte_counter.h | 150 ++++++++++++++++++++++++++++++++++ 2 files changed, 151 insertions(+) create mode 100644 lib/eal/include/rte_counter.h diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build index e94b056d46..c070dd0079 100644 --- a/lib/eal/include/meson.build +++ b/lib/eal/include/meson.build @@ -12,6 +12,7 @@ headers += files( 'rte_class.h', 'rte_common.h', 'rte_compat.h', + 'rte_counter.h', 'rte_debug.h', 'rte_dev.h', 'rte_devargs.h', diff --git a/lib/eal/include/rte_counter.h b/lib/eal/include/rte_counter.h new file mode 100644 index 0000000000..f0c2b71a6c --- /dev/null +++ b/lib/eal/include/rte_counter.h @@ -0,0 +1,150 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) Stephen Hemminger + */ + +#ifndef _RTE_COUNTER_H_ +#define _RTE_COUNTER_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include +#include +#include +#include + +/** + * @file + * RTE Counter + * + * A counter is 64 bit value that is safe from split read/write. + * It assumes that only one cpu at a time will update the counter, + * and another CPU may want to read it. + * + * This is a weaker subset of full atomic variables. + * + * The counters are subject to the restrictions of atomic variables + * in packed structures or unaligned. + */ + +#ifdef RTE_ARCH_64 + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * On native 64 bit platform, counter is implemented as basic + * 64 bit unsigned integer that only increases. + */ +typedef struct { + uint64_t current; + uint64_t offset; +} rte_counter64_t; + +/** + * @internal + * Macro to implement read once (compiler barrier) using stdatomic. + * This is compiler barrier only. + */ +#define __rte_read_once(var) \ + rte_atomic_load_explicit((__rte_atomic typeof(&(var)))&(var), \ + rte_memory_order_consume) + +/** + * @internal + * Macro to implement write once (compiler barrier) using stdatomic. + * This is compiler barrier only. + */ +#define __rte_write_once(var, val) \ + rte_atomic_store_explicit((__rte_atomic typeof(&(var)))&(var), val, \ + rte_memory_order_release) + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Add value to counter. + * Assumes this operation is only done by one thread on the object. + * + * @param counter + * A pointer to the counter. 
+ * @param val + * The value to add to the counter. + */ +static inline void +rte_counter64_add(rte_counter64_t *counter, uint32_t val) +{ + counter->current += val; +} + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Read a counter. + * This operation can be done by any thread. + * + * @param counter + * A pointer to the counter. + * @return + * The current value of the counter. + */ +__rte_experimental +static inline uint64_t +rte_counter64_read(const rte_counter64_t *counter) +{ + return __rte_read_once(counter->current) - __rte_read_once(counter->offset); +} + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Reset a counter to zero. + * This operation can be done by any thread. + * + * @param counter + * A pointer to the counter. + */ +__rte_experimental +static inline void +rte_counter64_reset(rte_counter64_t *counter) +{ + __rte_write_once(counter->offset, __rte_read_once(counter->current)); +} + +#else + +/* On 32 bit platform, need to use atomic to avoid load/store tearing */ +typedef RTE_ATOMIC(uint64_t) rte_counter64_t; + +__rte_experimental +static inline void +rte_counter64_add(rte_counter64_t *counter, uint32_t val) +{ + rte_atomic_fetch_add_explicit(counter, val, rte_memory_order_relaxed); +} + +__rte_experimental +static inline uint64_t +rte_counter64_read(const rte_counter64_t *counter) +{ + return rte_atomic_load_explicit(counter, rte_memory_order_consume); +} + +__rte_experimental +static inline void +rte_counter64_reset(rte_counter64_t *counter) +{ + rte_atomic_store_explicit(counter, 0, rte_memory_order_release); +} + +#endif /* RTE_ARCH_64 */ + + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_COUNTER_H_ */ From patchwork Tue May 21 20:16:35 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 140230 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 0F3174408D; Tue, 21 May 2024 22:18:33 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B1E20400D7; Tue, 21 May 2024 22:18:25 +0200 (CEST) Received: from mail-pl1-f175.google.com (mail-pl1-f175.google.com [209.85.214.175]) by mails.dpdk.org (Postfix) with ESMTP id E435D400D7 for ; Tue, 21 May 2024 22:18:05 +0200 (CEST) Received: by mail-pl1-f175.google.com with SMTP id d9443c01a7336-1ed0abbf706so39846535ad.2 for ; Tue, 21 May 2024 13:18:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20230601.gappssmtp.com; s=20230601; t=1716322685; x=1716927485; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=mahlEKlaju+N8dq10rmGQNkzoHzMYO9fSaO2j4DXlPQ=; b=k7DPIb0WHigz9XDn4q7PBOthtSXV9I2/+jHzED6NYdKBjJoQY5d3Ti2j7y3FxSsCtI X1OsmiUPKW4Jd6tpiZvR3TdnOeJiKgEVDGATMa6S1lCDiNwrshOISOelKp/lXxCbC6fY ecBUszceDF6rDOs3Vsx+f+kEVCnp/g8/78Mtt73BEjgI40/R5iMbUahavoQl7boQpcTP 4ybYP3KRDN6aUdXIdKIDGLyC4qyAYpe4l3CKzw1sqBf09eqHIIa37lUxyAZIbzT0q5y6 gtsNnq7iWwfafp0j+qUtJ+l3auxph/yoqJDKLkqgXsPknx74xJFw3jCsIlLaq9J5ID/s ElpQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1716322685; x=1716927485; 
h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=mahlEKlaju+N8dq10rmGQNkzoHzMYO9fSaO2j4DXlPQ=; b=h9A5vKudvfXilpastYs4wh6jhvIEbIwP81lttmIX4taCMrKFbpHefPS9f/X1bsFtDa KO5JAABEV7eueQRLZn+S/KX8GvpMQynIPI3msxGCabEk5PnVcN7azaSBJAGv8h8LnVTr 6w+ntUKwwtkw+wotosgflhXD2knE6lQEjkWEa2U3kjTCyb7l22G5vlgJjCz81Db2GrpO vkXGJCV/B2BaHbjvXdwoOGh1PA4e1quZmo3VhPPzJtO08Tq3IQm3fGWCcLT1GiBj7xNx D0X/i58c5dk2zWM1QC9TNjeLC4lPCtq24CSOogq0QBfbroRRiG0hvGXfs/gOY+ANrERz pFew== X-Gm-Message-State: AOJu0YzwuBPPATSaxpZN2NEF7Z965kaeJEew5maTXjDkIJxpDxMfchb3 n2KEBfT71H6UV1mlw84k5TPDWO020ec1hy2dYZE79nyML6us4tOFpIyE5XhpGCtBUMSZvxUVIkW a X-Google-Smtp-Source: AGHT+IHSx90X+0oG190RKbSCUiupeH8EVMs9AtvbumNrSWA0EVFh9r3bvm44tx14/Vxf17vBsY9abw== X-Received: by 2002:a17:902:ce81:b0:1f3:1184:7795 with SMTP id d9443c01a7336-1f31c9f8a80mr101845ad.60.1716322685125; Tue, 21 May 2024 13:18:05 -0700 (PDT) Received: from hermes.lan (204-195-96-226.wavecable.com. [204.195.96.226]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-1f30fb5303csm16119075ad.166.2024.05.21.13.18.04 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 21 May 2024 13:18:04 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger , Thomas Monjalon , Ferruh Yigit , Andrew Rybchenko Subject: [PATCH v9 2/8] ethdev: add common counters for statistics Date: Tue, 21 May 2024 13:16:35 -0700 Message-ID: <20240521201801.126886-3-stephen@networkplumber.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240521201801.126886-1-stephen@networkplumber.org> References: <20240510050507.14381-1-stephen@networkplumber.org> <20240521201801.126886-1-stephen@networkplumber.org> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Introduce common helper routines for keeping track of per-queue statistics in SW PMD's. The code in several drivers had copy/pasted the same code for this, but had common issues with 64 bit counters on 32 bit platforms. 
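As a reference for PMD maintainers, here is a minimal usage sketch, modelled on the driver conversions later in this series. The structure name "sketch_queue", the function names and the elided receive loop are illustrative only and are not part of this patch; it assumes the driver build already has the internal ethdev_driver.h and ethdev_swstats.h headers on its include path.

#include <stddef.h>
#include <stdint.h>

#include <rte_mbuf.h>
#include <ethdev_driver.h>
#include <ethdev_swstats.h>

/* Illustrative per-queue state embedding the common counters. */
struct sketch_queue {
        void *hw;                       /* driver-specific queue state */
        struct rte_eth_counters stats;  /* packets, bytes, errors */
};

/* Burst handlers update the counters from the owning lcore only. */
static uint16_t
sketch_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
{
        struct sketch_queue *q = queue;
        uint16_t nb_rx = 0;
        uint32_t nb_bytes = 0;

        /* ... driver-specific receive loop fills bufs[], nb_rx and nb_bytes;
         * on a bad packet it would call rte_eth_count_error(&q->stats) ... */
        (void)bufs;
        (void)nb_pkts;

        rte_eth_count_packets(&q->stats, nb_rx, nb_bytes);
        return nb_rx;
}

/* The stats callbacks collapse to one call each; the offsets locate the
 * embedded counters inside the TX and RX queue structures. */
static int
sketch_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
{
        return rte_eth_counters_stats_get(dev,
                        offsetof(struct sketch_queue, stats),
                        offsetof(struct sketch_queue, stats), stats);
}

static int
sketch_stats_reset(struct rte_eth_dev *dev)
{
        return rte_eth_counters_reset(dev,
                        offsetof(struct sketch_queue, stats),
                        offsetof(struct sketch_queue, stats));
}

The af_packet, af_xdp, pcap, ring and tap conversions that follow in this series all reduce their stats_get/stats_reset callbacks to this same pattern.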
Signed-off-by: Stephen Hemminger --- lib/ethdev/ethdev_swstats.c | 101 ++++++++++++++++++++++++++++++++ lib/ethdev/ethdev_swstats.h | 111 ++++++++++++++++++++++++++++++++++++ lib/ethdev/meson.build | 2 + lib/ethdev/version.map | 3 + 4 files changed, 217 insertions(+) create mode 100644 lib/ethdev/ethdev_swstats.c create mode 100644 lib/ethdev/ethdev_swstats.h diff --git a/lib/ethdev/ethdev_swstats.c b/lib/ethdev/ethdev_swstats.c new file mode 100644 index 0000000000..f7975bdea7 --- /dev/null +++ b/lib/ethdev/ethdev_swstats.c @@ -0,0 +1,101 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) Stephen Hemminger + */ + + +#include +#include +#include + +#include "rte_ethdev.h" +#include "ethdev_driver.h" +#include "ethdev_swstats.h" + +int +rte_eth_counters_stats_get(const struct rte_eth_dev *dev, + size_t tx_offset, size_t rx_offset, + struct rte_eth_stats *stats) +{ + uint64_t packets, bytes, errors; + unsigned int i; + + for (i = 0; i < dev->data->nb_tx_queues; i++) { + const void *txq = dev->data->tx_queues[i]; + const struct rte_eth_counters *counters; + + if (txq == NULL) + continue; + + counters = (const struct rte_eth_counters *)((const char *)txq + tx_offset); + packets = rte_counter64_read(&counters->packets); + bytes = rte_counter64_read(&counters->bytes); + errors = rte_counter64_read(&counters->errors); + + stats->opackets += packets; + stats->obytes += bytes; + stats->oerrors += errors; + + if (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) { + stats->q_opackets[i] = packets; + stats->q_obytes[i] = bytes; + } + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + const void *rxq = dev->data->rx_queues[i]; + const struct rte_eth_counters *counters; + + if (rxq == NULL) + continue; + + counters = (const struct rte_eth_counters *)((const char *)rxq + rx_offset); + packets = rte_counter64_read(&counters->packets); + bytes = rte_counter64_read(&counters->bytes); + errors = rte_counter64_read(&counters->errors); + + stats->ipackets += packets; + stats->ibytes += bytes; + stats->ierrors += errors; + + if (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) { + stats->q_ipackets[i] = packets; + stats->q_ibytes[i] = bytes; + } + } + + stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed; + return 0; +} + +int +rte_eth_counters_reset(struct rte_eth_dev *dev, size_t tx_offset, size_t rx_offset) +{ + struct rte_eth_counters *counters; + unsigned int i; + + for (i = 0; i < dev->data->nb_tx_queues; i++) { + void *txq = dev->data->tx_queues[i]; + + if (txq == NULL) + continue; + + counters = (struct rte_eth_counters *)((char *)txq + tx_offset); + rte_counter64_reset(&counters->packets); + rte_counter64_reset(&counters->bytes); + rte_counter64_reset(&counters->errors); + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + void *rxq = dev->data->rx_queues[i]; + + if (rxq == NULL) + continue; + + counters = (struct rte_eth_counters *)((char *)rxq + rx_offset); + rte_counter64_reset(&counters->packets); + rte_counter64_reset(&counters->bytes); + rte_counter64_reset(&counters->errors); + } + + return 0; +} diff --git a/lib/ethdev/ethdev_swstats.h b/lib/ethdev/ethdev_swstats.h new file mode 100644 index 0000000000..37b8e43eb0 --- /dev/null +++ b/lib/ethdev/ethdev_swstats.h @@ -0,0 +1,111 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) Stephen Hemminger + */ + +#ifndef _RTE_ETHDEV_SWSTATS_H_ +#define _RTE_ETHDEV_SWSTATS_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +/** + * @file + * + * Internal statistics counters for software based devices. 
+ * Hardware PMD's should use the hardware counters instead. + * + * This provides a library for PMD's to keep track of packets and bytes. + * It is assumed that this will be used per queue and queues are not + * shared by lcores. + */ + +#include + +/** + * A structure to be embedded in the device driver per-queue data. + */ +struct rte_eth_counters { + rte_counter64_t packets; /**< Total number of packets. */ + rte_counter64_t bytes; /**< Total number of bytes. */ + rte_counter64_t errors; /**< Total number of packets with errors. */ +}; + +/** + * @internal + * Increment counters for a single packet. + * + * @param counters + * Pointer to queue structure containing counters. + * @param packets + * Number of packets to count + * @param bytes + * Total size of all packet in bytes. + */ +__rte_internal +static inline void +rte_eth_count_packets(struct rte_eth_counters *counters, + uint16_t packets, uint32_t bytes) +{ + rte_counter64_add(&counters->packets, packets); + rte_counter64_add(&counters->bytes, bytes); +} + +/** + * @internal + * Increment error counter. + * + * @param counters + * Pointer to queue structure containing counters. + */ +__rte_internal +static inline void +rte_eth_count_error(struct rte_eth_counters *counters) +{ + rte_counter64_add(&counters->errors, 1); +} + +/** + * @internal + * Retrieve the general statistics for all queues. + * @see rte_eth_stats_get. + * + * @param dev + * Pointer to the Ethernet device structure. + * @param tx_offset + * Offset from the tx_queue structure where stats are located. + * @param rx_offset + * Offset from the rx_queue structure where stats are located. + * @param stats + * A pointer to a structure of type *rte_eth_stats* to be filled + * @return + * Zero if successful. Non-zero otherwise. + */ +__rte_internal +int rte_eth_counters_stats_get(const struct rte_eth_dev *dev, + size_t tx_offset, size_t rx_offset, + struct rte_eth_stats *stats); + +/** + * @internal + * Reset the statistics for all queues. + * @see rte_eth_stats_reset. + * + * @param dev + * Pointer to the Ethernet device structure. + * @param tx_offset + * Offset from the tx_queue structure where stats are located. + * @param rx_offset + * Offset from the rx_queue structure where stats are located. + * @return + * Zero if successful. Non-zero otherwise. 
+ */ +__rte_internal +int rte_eth_counters_reset(struct rte_eth_dev *dev, + size_t tx_offset, size_t rx_offset); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_ETHDEV_SWSTATS_H_ */ diff --git a/lib/ethdev/meson.build b/lib/ethdev/meson.build index f1d2586591..7ce29a46d4 100644 --- a/lib/ethdev/meson.build +++ b/lib/ethdev/meson.build @@ -3,6 +3,7 @@ sources = files( 'ethdev_driver.c', + 'ethdev_swstats.c', 'ethdev_private.c', 'ethdev_profile.c', 'ethdev_trace_points.c', @@ -42,6 +43,7 @@ driver_sdk_headers += files( 'ethdev_driver.h', 'ethdev_pci.h', 'ethdev_vdev.h', + 'ethdev_swstats.h', ) if is_linux diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index 79f6f5293b..fc595be278 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -358,4 +358,7 @@ INTERNAL { rte_eth_switch_domain_alloc; rte_eth_switch_domain_free; rte_flow_fp_default_ops; + + rte_eth_counters_reset; + rte_eth_counters_stats_get; }; From patchwork Tue May 21 20:16:36 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 140231 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 1EE044408D; Tue, 21 May 2024 22:18:39 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 1D44B40A8B; Tue, 21 May 2024 22:18:27 +0200 (CEST) Received: from mail-pl1-f176.google.com (mail-pl1-f176.google.com [209.85.214.176]) by mails.dpdk.org (Postfix) with ESMTP id D3624400D7 for ; Tue, 21 May 2024 22:18:06 +0200 (CEST) Received: by mail-pl1-f176.google.com with SMTP id d9443c01a7336-1eeabda8590so4712205ad.0 for ; Tue, 21 May 2024 13:18:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20230601.gappssmtp.com; s=20230601; t=1716322686; x=1716927486; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=MjalrBySr7qzjIIpZNrJKVRuPrkLUZnmMi1fl292L1w=; b=YnT1ndz0Ek3S/4zqg3AwUvrYxevNVYC99Lg/BLIr0zwex4eHuQrS4NIb1GFoJndXU8 XzJKOGmBNYKKUfPq4bfSONviw/kC/Rxjkm+5E3+1U7ZhfF5uq84cy6dpMkKYW2qjUUQM eDKzeBVihU4UHMgfNNMY30SX87eHJ/7yclou43wnd+nycgE7zis/yfeaNyqRi/UqhQ00 oGUEiyx8d/Rt9SeL5PqCZquPnQ0GtO5zaXH6SB+UWjCbOMtrUPmzf/k5aK/2f1ocB/QS EPBXnplLQREWYwxhI3JHulaoUHHhz7rRNl0VSbNuZaxeb2hVNOF0kXOGKevshP4OHixt Cm0g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1716322686; x=1716927486; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=MjalrBySr7qzjIIpZNrJKVRuPrkLUZnmMi1fl292L1w=; b=ikhuBhQCXWOc1EogIJvFILVHKqQSwC0FMpJs7ttWcn6YwSsTjGQBTI5z26JYKkZKMW oTdI6RXmm93VVEgXcxcbyqvEiKIP0Jmj6O6TQaSBHfs4cM6+Si1PiApgcaberkrhq78h ht/oSUvAkmRtQj5LDu4APtXkEWwaVSEm8jeY17SmySa3p99Cl/OPPIfcpUscEAeuzQUh 6QUVDhfVHRsHBLJSJPELcUl/nRUVfxat7RwHD3KMcYd28YQ3LX2Z3uqeVkUeAwYo238W ZdOh0VFQB3g4rjHq39PS4FKr7VaDgDn96ODXgYaraU8WD5Mnh71b33LARG3aX2WAfNZ1 xnzg== X-Gm-Message-State: AOJu0YzgGbbVluqFdDe4v7hxjCYqFOWzVNt3lj8YY+RdUsRvS64a/mZx jkO49wZVRarMPUHLLSS01XSkLbwg+CJLaoMCbq5KzgnBOxkYMEzL+SdS3N/iERNeuKqvRrkrTTb O X-Google-Smtp-Source: AGHT+IHjapkilgeErt/8AAOcDjukUcC7DnCdkcvP8j/wvV//L11KZtB7/4sBORasauLUFgaf4j6suw== X-Received: by 
2002:a17:903:22cb:b0:1ec:31f5:16d5 with SMTP id d9443c01a7336-1f31c98f63emr318615ad.33.1716322685958; Tue, 21 May 2024 13:18:05 -0700 (PDT) Received: from hermes.lan (204-195-96-226.wavecable.com. [204.195.96.226]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-1f30fb5303csm16119075ad.166.2024.05.21.13.18.05 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 21 May 2024 13:18:05 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger , "John W. Linville" Subject: [PATCH v9 3/8] net/af_packet: use generic SW stats Date: Tue, 21 May 2024 13:16:36 -0700 Message-ID: <20240521201801.126886-4-stephen@networkplumber.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240521201801.126886-1-stephen@networkplumber.org> References: <20240510050507.14381-1-stephen@networkplumber.org> <20240521201801.126886-1-stephen@networkplumber.org> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Use the new generic SW stats. Add a note about how errors and kernel full should be handled. Signed-off-by: Stephen Hemminger --- drivers/net/af_packet/rte_eth_af_packet.c | 78 +++++------------------ 1 file changed, 17 insertions(+), 61 deletions(-) diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c index 397a32db58..64fa519812 100644 --- a/drivers/net/af_packet/rte_eth_af_packet.c +++ b/drivers/net/af_packet/rte_eth_af_packet.c @@ -10,6 +10,7 @@ #include #include #include +#include #include #include #include @@ -51,8 +52,7 @@ struct pkt_rx_queue { uint16_t in_port; uint8_t vlan_strip; - volatile unsigned long rx_pkts; - volatile unsigned long rx_bytes; + struct rte_eth_counters stats; }; struct pkt_tx_queue { @@ -64,11 +64,10 @@ struct pkt_tx_queue { unsigned int framecount; unsigned int framenum; - volatile unsigned long tx_pkts; - volatile unsigned long err_pkts; - volatile unsigned long tx_bytes; + struct rte_eth_counters stats; }; + struct pmd_internals { unsigned nb_queues; @@ -168,9 +167,10 @@ eth_af_packet_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) num_rx_bytes += mbuf->pkt_len; } pkt_q->framenum = framenum; - pkt_q->rx_pkts += num_rx; - pkt_q->rx_bytes += num_rx_bytes; - return num_rx; + + rte_eth_count_packets(&pkt_q->stats, num_rx, num_rx_bytes); + + return i; } /* @@ -294,19 +294,16 @@ eth_af_packet_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) if (sendto(pkt_q->sockfd, NULL, 0, MSG_DONTWAIT, NULL, 0) == -1 && errno != ENOBUFS && errno != EAGAIN) { /* - * In case of a ENOBUFS/EAGAIN error all of the enqueued - * packets will be considered successful even though only some - * are sent. + * FIXME: if sendto fails kernel is busy should return 0 + * and not free the mbufs. Other errors should free the + * buts and increment the tx error count. 
*/ - num_tx = 0; num_tx_bytes = 0; } pkt_q->framenum = framenum; - pkt_q->tx_pkts += num_tx; - pkt_q->err_pkts += i - num_tx; - pkt_q->tx_bytes += num_tx_bytes; + rte_eth_count_packets(&pkt_q->stats, num_tx, num_tx_bytes); return i; } @@ -386,58 +383,17 @@ eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) } static int -eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *igb_stats) +eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { - unsigned i, imax; - unsigned long rx_total = 0, tx_total = 0, tx_err_total = 0; - unsigned long rx_bytes_total = 0, tx_bytes_total = 0; - const struct pmd_internals *internal = dev->data->dev_private; - - imax = (internal->nb_queues < RTE_ETHDEV_QUEUE_STAT_CNTRS ? - internal->nb_queues : RTE_ETHDEV_QUEUE_STAT_CNTRS); - for (i = 0; i < imax; i++) { - igb_stats->q_ipackets[i] = internal->rx_queue[i].rx_pkts; - igb_stats->q_ibytes[i] = internal->rx_queue[i].rx_bytes; - rx_total += igb_stats->q_ipackets[i]; - rx_bytes_total += igb_stats->q_ibytes[i]; - } - - imax = (internal->nb_queues < RTE_ETHDEV_QUEUE_STAT_CNTRS ? - internal->nb_queues : RTE_ETHDEV_QUEUE_STAT_CNTRS); - for (i = 0; i < imax; i++) { - igb_stats->q_opackets[i] = internal->tx_queue[i].tx_pkts; - igb_stats->q_obytes[i] = internal->tx_queue[i].tx_bytes; - tx_total += igb_stats->q_opackets[i]; - tx_err_total += internal->tx_queue[i].err_pkts; - tx_bytes_total += igb_stats->q_obytes[i]; - } - - igb_stats->ipackets = rx_total; - igb_stats->ibytes = rx_bytes_total; - igb_stats->opackets = tx_total; - igb_stats->oerrors = tx_err_total; - igb_stats->obytes = tx_bytes_total; - return 0; + return rte_eth_counters_stats_get(dev, offsetof(struct pkt_tx_queue, stats), + offsetof(struct pkt_rx_queue, stats), stats); } static int eth_stats_reset(struct rte_eth_dev *dev) { - unsigned i; - struct pmd_internals *internal = dev->data->dev_private; - - for (i = 0; i < internal->nb_queues; i++) { - internal->rx_queue[i].rx_pkts = 0; - internal->rx_queue[i].rx_bytes = 0; - } - - for (i = 0; i < internal->nb_queues; i++) { - internal->tx_queue[i].tx_pkts = 0; - internal->tx_queue[i].err_pkts = 0; - internal->tx_queue[i].tx_bytes = 0; - } - - return 0; + return rte_eth_counters_reset(dev, offsetof(struct pkt_tx_queue, stats), + offsetof(struct pkt_rx_queue, stats)); } static int From patchwork Tue May 21 20:16:37 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 140232 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id A95D24408D; Tue, 21 May 2024 22:18:44 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 6005A40A77; Tue, 21 May 2024 22:18:28 +0200 (CEST) Received: from mail-pg1-f169.google.com (mail-pg1-f169.google.com [209.85.215.169]) by mails.dpdk.org (Postfix) with ESMTP id 99F80400D7 for ; Tue, 21 May 2024 22:18:07 +0200 (CEST) Received: by mail-pg1-f169.google.com with SMTP id 41be03b00d2f7-5e8470c1cb7so1586167a12.2 for ; Tue, 21 May 2024 13:18:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20230601.gappssmtp.com; s=20230601; t=1716322687; x=1716927487; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to 
:message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=mBzpjYE8DoSG3Siz752I1pLevMRX7Gd2+d2mJF6DBuM=; b=tUCiQR+sT8VMpsLcv1Ch8p4y9xx4yRnbn8rIx7NQkK4Pw3zxtjoQ/4OSns/esDRtTj HsNdulDQgrcJ10E0NGG9V3Ld+PXocryNP9XkyAAH+Noky62nrMlumxlq5fR6z6T45KZl jTw02NDRUvq58PL5tEBG4mtlnXrdZXRddO8gJVIWzvmxbkwoeLzmhg/7x3A+0f1E1ETv xdKol9/jtOoaVfovPbA9ugEYjlSZxtuMzq8gB7U2bDznhTI5xWb73EBJuD0BKGIxmMCL XpwDN5Ib/yHBCwlSksp4gNmwFCtAkBhudylh/rAgpR8gwKdI9FXHz0yRT7YJjWelDdsE qLOg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1716322687; x=1716927487; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=mBzpjYE8DoSG3Siz752I1pLevMRX7Gd2+d2mJF6DBuM=; b=PsXfQaEA2dFYEml39lDOfbacDgyQoIfd1I0Kk4ti6ZCIxqI94ysSasrNU/y0L15VAG QSUrvyFQex6up7LMTHAIgrDJeLMtWN5aswv8nf7GASgJWqv7l6sJ1MH88p6fsPd8EgoV ovTnDX0kRZxlr+y4DAGxT0IKYGAyqusQCRGbo8/1p1oFdw7I0v6srUHsABlLCxBGYe38 7p3FlVIWMeisDiZnoLDf9SfD78INBYCM4Axxu/bEdLfEfcGRjgksniAfIEtZhWGNgssU su568Y2dneEPN267w0QHYBObSogZMJW1+Biyj/1Yfkm7/5hzEllrdY1Ds1HTu3nG++Am x6Cw== X-Gm-Message-State: AOJu0Yz80k7V10AGIXBvN1c8NsnK6MAm0XfQBGNlMldJnuboVF03y+hT fOQ6rwY/g1OvKlpSD4fuirZW5chgCapdoBgBSJW0BCh4CGFteCibl5ezduCHQGSJp9cpYXKa72g M X-Google-Smtp-Source: AGHT+IHi2BRps6THIztGwcPTG7pRuS992Cd9MHkF9h9HLx5fYs6PLBHPrAX5cwjBrtQvYx9Ap/rodw== X-Received: by 2002:a17:903:94b:b0:1f2:fbda:8671 with SMTP id d9443c01a7336-1f31c95c500mr382375ad.6.1716322686818; Tue, 21 May 2024 13:18:06 -0700 (PDT) Received: from hermes.lan (204-195-96-226.wavecable.com. [204.195.96.226]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-1f30fb5303csm16119075ad.166.2024.05.21.13.18.06 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 21 May 2024 13:18:06 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger , Ciara Loftus Subject: [PATCH v9 4/8] net/af_xdp: use generic SW stats Date: Tue, 21 May 2024 13:16:37 -0700 Message-ID: <20240521201801.126886-5-stephen@networkplumber.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240521201801.126886-1-stephen@networkplumber.org> References: <20240510050507.14381-1-stephen@networkplumber.org> <20240521201801.126886-1-stephen@networkplumber.org> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Use common code for all SW stats. 
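One point worth noting in this conversion: the generic helper only covers the per-queue packet/byte/error counters, so the driver still folds the kernel-side drop counts from each XDP socket into imissed. Below is a condensed sketch of that layering, taken from the new eth_stats_get() in the diff that follows; logging and the driver's own struct definitions (pkt_rx_queue, pkt_tx_queue, pmd_process_private) are omitted here and can be seen in the diff.

#include <stddef.h>
#include <sys/socket.h>
#include <linux/if_xdp.h>       /* struct xdp_statistics, XDP_STATISTICS */

static int
sketch_xdp_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
{
        struct pmd_process_private *process_private = dev->process_private;
        unsigned int i;

        /* Generic per-queue packets/bytes/errors first... */
        rte_eth_counters_stats_get(dev, offsetof(struct pkt_tx_queue, stats),
                                   offsetof(struct pkt_rx_queue, stats), stats);

        /* ...then add what the kernel dropped before the socket. */
        for (i = 0; i < dev->data->nb_rx_queues; i++) {
                struct xdp_statistics xdp_stats;
                socklen_t optlen = sizeof(xdp_stats);
                int fd = process_private->rxq_xsk_fds[i];

                if (fd < 0)
                        continue;
                if (getsockopt(fd, SOL_XDP, XDP_STATISTICS,
                               &xdp_stats, &optlen) < 0)
                        return -1;
                stats->imissed += xdp_stats.rx_dropped;
        }
        return 0;
}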
Signed-off-by: Stephen Hemminger --- drivers/net/af_xdp/rte_eth_af_xdp.c | 85 ++++++++--------------------- 1 file changed, 22 insertions(+), 63 deletions(-) diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c index 6ba455bb9b..c563621798 100644 --- a/drivers/net/af_xdp/rte_eth_af_xdp.c +++ b/drivers/net/af_xdp/rte_eth_af_xdp.c @@ -20,6 +20,7 @@ #include #include #include +#include #include #include #include @@ -120,19 +121,13 @@ struct xsk_umem_info { uint32_t max_xsks; }; -struct rx_stats { - uint64_t rx_pkts; - uint64_t rx_bytes; - uint64_t rx_dropped; -}; - struct pkt_rx_queue { struct xsk_ring_cons rx; struct xsk_umem_info *umem; struct xsk_socket *xsk; struct rte_mempool *mb_pool; - struct rx_stats stats; + struct rte_eth_counters stats; struct xsk_ring_prod fq; struct xsk_ring_cons cq; @@ -143,17 +138,11 @@ struct pkt_rx_queue { int busy_budget; }; -struct tx_stats { - uint64_t tx_pkts; - uint64_t tx_bytes; - uint64_t tx_dropped; -}; - struct pkt_tx_queue { struct xsk_ring_prod tx; struct xsk_umem_info *umem; - struct tx_stats stats; + struct rte_eth_counters stats; struct pkt_rx_queue *pair; int xsk_queue_idx; @@ -369,9 +358,7 @@ af_xdp_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) xsk_ring_cons__release(rx, nb_pkts); (void)reserve_fill_queue(umem, nb_pkts, fq_bufs, fq); - /* statistics */ - rxq->stats.rx_pkts += nb_pkts; - rxq->stats.rx_bytes += rx_bytes; + rte_eth_count_packets(&rxq->stats, nb_pkts, rx_bytes); return nb_pkts; } @@ -429,10 +416,7 @@ af_xdp_rx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) } xsk_ring_cons__release(rx, nb_pkts); - - /* statistics */ - rxq->stats.rx_pkts += nb_pkts; - rxq->stats.rx_bytes += rx_bytes; + rte_eth_count_packets(&rxq->stats, nb_pkts, rx_bytes); return nb_pkts; } @@ -558,6 +542,7 @@ af_xdp_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) umem->mb_pool->header_size; offset = offset << XSK_UNALIGNED_BUF_OFFSET_SHIFT; desc->addr = addr | offset; + tx_bytes += mbuf->pkt_len; count++; } else { struct rte_mbuf *local_mbuf = @@ -585,20 +570,17 @@ af_xdp_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) desc->addr = addr | offset; rte_memcpy(pkt, rte_pktmbuf_mtod(mbuf, void *), desc->len); + tx_bytes += mbuf->pkt_len; rte_pktmbuf_free(mbuf); count++; } - - tx_bytes += mbuf->pkt_len; } out: xsk_ring_prod__submit(&txq->tx, count); kick_tx(txq, cq); - txq->stats.tx_pkts += count; - txq->stats.tx_bytes += tx_bytes; - txq->stats.tx_dropped += nb_pkts - count; + rte_eth_count_packets(&txq->stats, count, tx_bytes); return count; } @@ -648,8 +630,7 @@ af_xdp_tx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) kick_tx(txq, cq); - txq->stats.tx_pkts += nb_pkts; - txq->stats.tx_bytes += tx_bytes; + rte_eth_count_packets(&txq->stats, nb_pkts, tx_bytes); return nb_pkts; } @@ -847,39 +828,26 @@ eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) static int eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { - struct pmd_internals *internals = dev->data->dev_private; struct pmd_process_private *process_private = dev->process_private; - struct xdp_statistics xdp_stats; - struct pkt_rx_queue *rxq; - struct pkt_tx_queue *txq; - socklen_t optlen; - int i, ret, fd; + unsigned int i; - for (i = 0; i < dev->data->nb_rx_queues; i++) { - optlen = sizeof(struct xdp_statistics); - rxq = &internals->rx_queues[i]; - txq = rxq->pair; - stats->q_ipackets[i] = rxq->stats.rx_pkts; - stats->q_ibytes[i] = rxq->stats.rx_bytes; + 
rte_eth_counters_stats_get(dev, offsetof(struct pkt_tx_queue, stats), + offsetof(struct pkt_rx_queue, stats), stats); - stats->q_opackets[i] = txq->stats.tx_pkts; - stats->q_obytes[i] = txq->stats.tx_bytes; + for (i = 0; i < dev->data->nb_rx_queues; i++) { + struct xdp_statistics xdp_stats; + socklen_t optlen = sizeof(xdp_stats); + int fd; - stats->ipackets += stats->q_ipackets[i]; - stats->ibytes += stats->q_ibytes[i]; - stats->imissed += rxq->stats.rx_dropped; - stats->oerrors += txq->stats.tx_dropped; fd = process_private->rxq_xsk_fds[i]; - ret = fd >= 0 ? getsockopt(fd, SOL_XDP, XDP_STATISTICS, - &xdp_stats, &optlen) : -1; - if (ret != 0) { + if (fd < 0) + continue; + if (getsockopt(fd, SOL_XDP, XDP_STATISTICS, + &xdp_stats, &optlen) < 0) { AF_XDP_LOG(ERR, "getsockopt() failed for XDP_STATISTICS.\n"); return -1; } stats->imissed += xdp_stats.rx_dropped; - - stats->opackets += stats->q_opackets[i]; - stats->obytes += stats->q_obytes[i]; } return 0; @@ -888,17 +856,8 @@ eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) static int eth_stats_reset(struct rte_eth_dev *dev) { - struct pmd_internals *internals = dev->data->dev_private; - int i; - - for (i = 0; i < internals->queue_cnt; i++) { - memset(&internals->rx_queues[i].stats, 0, - sizeof(struct rx_stats)); - memset(&internals->tx_queues[i].stats, 0, - sizeof(struct tx_stats)); - } - - return 0; + return rte_eth_counters_reset(dev, offsetof(struct pkt_tx_queue, stats), + offsetof(struct pkt_rx_queue, stats)); } #ifdef RTE_NET_AF_XDP_LIBBPF_XDP_ATTACH From patchwork Tue May 21 20:16:38 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 140233 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id E53B74408D; Tue, 21 May 2024 22:18:50 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B2A4040DDB; Tue, 21 May 2024 22:18:29 +0200 (CEST) Received: from mail-pl1-f181.google.com (mail-pl1-f181.google.com [209.85.214.181]) by mails.dpdk.org (Postfix) with ESMTP id 6F69F400D7 for ; Tue, 21 May 2024 22:18:08 +0200 (CEST) Received: by mail-pl1-f181.google.com with SMTP id d9443c01a7336-1eecc71311eso4895325ad.3 for ; Tue, 21 May 2024 13:18:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20230601.gappssmtp.com; s=20230601; t=1716322687; x=1716927487; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=+EszQcYtvxNgZfYaRAS4ls3d9SF5yFMlXdQa2iCOZGQ=; b=efm1JUC/dQgO0VqliLPC/D0sgngLDTkCBe6I8awEqfDv3lXkhoRcviNDSLOEeuxfhG xxx2Ptds23Su5C5nkqb3G5zrYxwBQK6XFvBPKpROXdWtFDusJLR5fdoj2c4QLKFG9lLU gXiJBlJEkLi5VGI8PU0rH/kfJRV9pe5kwbu2nsU8ccnymAbvG3dL+tu4sJQBFmLYoyrS 1KQn6Mb04oY1nAjKmt2/4bXVT9ik9AyxbbzYUs4TyezKI1X9zGJH2DcyzyX9rwTzRgJ9 RsZub2Eqg+uuvUxWDzLbHEht+1sZtCEL0OocbUrPu78fSjlVr9ZIOl1I0QyFUcbDdsKS 0RuQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1716322687; x=1716927487; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=+EszQcYtvxNgZfYaRAS4ls3d9SF5yFMlXdQa2iCOZGQ=; 
b=GrZKNeurfd6Y/CZuXER2lLUPNTm/kQ5n3Td9iNSQxVHthQv6Gr16RNdJ7/1Gq6AzkU jz38eVnWr6az+vmqEtvlSZQJ8WA61sDqm3KER5mDFWYqZntUMDaKYHsQxJCRmdBqJN9t +RmxhHU3X9yeKKYW9Aekxbvq34CfWytzJoyQnEFkSTOxDOHnPNi3+MCF9nF5v0DuJZYm foC3o+nE8QGivjYiuCc95k01mNFecPiQCjyiO6a/mmvbyLr/f13TTJBF5/Dr7FiLVvwX 4uG++NQRTkbzCUhtf4L+q2b5AA9lASeIPMK6bP+AUDUoA6GiOgo1HOoTU4MK9SiwkN1G /iLA== X-Gm-Message-State: AOJu0Yz88ox3omxc9pGw7rIi0EzRFotGeAhGwwxFZbHKC/FDrm0oMxr+ PkNLa/dsDLXF90AmLZtQV5MwzElcDBtEHdBWB/pjZn0rrioVUE03xL4V87zGSLzfPMh6pcgUzMM n X-Google-Smtp-Source: AGHT+IF+oHsBGiOz2itRhq8KcY/OONuOecSaglw+u4mJY9L/X0BZuyRiifj/QlnwweHjJKQ6obkthw== X-Received: by 2002:a17:902:d4cc:b0:1e5:d021:cf58 with SMTP id d9443c01a7336-1f31c994b04mr481155ad.36.1716322687617; Tue, 21 May 2024 13:18:07 -0700 (PDT) Received: from hermes.lan (204-195-96-226.wavecable.com. [204.195.96.226]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-1f30fb5303csm16119075ad.166.2024.05.21.13.18.07 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 21 May 2024 13:18:07 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger Subject: [PATCH v9 5/8] net/pcap: use generic SW stats Date: Tue, 21 May 2024 13:16:38 -0700 Message-ID: <20240521201801.126886-6-stephen@networkplumber.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240521201801.126886-1-stephen@networkplumber.org> References: <20240510050507.14381-1-stephen@networkplumber.org> <20240521201801.126886-1-stephen@networkplumber.org> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Use common statistics for SW drivers. Signed-off-by: Stephen Hemminger --- drivers/net/pcap/pcap_ethdev.c | 100 ++++++++------------------------- 1 file changed, 22 insertions(+), 78 deletions(-) diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c index bfec085045..4689359527 100644 --- a/drivers/net/pcap/pcap_ethdev.c +++ b/drivers/net/pcap/pcap_ethdev.c @@ -11,6 +11,7 @@ #include #include +#include #include #include #include @@ -48,13 +49,6 @@ static uint8_t iface_idx; static uint64_t timestamp_rx_dynflag; static int timestamp_dynfield_offset = -1; -struct queue_stat { - volatile unsigned long pkts; - volatile unsigned long bytes; - volatile unsigned long err_pkts; - volatile unsigned long rx_nombuf; -}; - struct queue_missed_stat { /* last value retrieved from pcap */ unsigned int pcap; @@ -68,7 +62,7 @@ struct pcap_rx_queue { uint16_t port_id; uint16_t queue_id; struct rte_mempool *mb_pool; - struct queue_stat rx_stat; + struct rte_eth_counters rx_stat; struct queue_missed_stat missed_stat; char name[PATH_MAX]; char type[ETH_PCAP_ARG_MAXLEN]; @@ -80,7 +74,7 @@ struct pcap_rx_queue { struct pcap_tx_queue { uint16_t port_id; uint16_t queue_id; - struct queue_stat tx_stat; + struct rte_eth_counters tx_stat; char name[PATH_MAX]; char type[ETH_PCAP_ARG_MAXLEN]; }; @@ -258,14 +252,13 @@ eth_pcap_rx_infinite(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) bufs[i]->data_len = pcap_buf->data_len; bufs[i]->pkt_len = pcap_buf->pkt_len; bufs[i]->port = pcap_q->port_id; - rx_bytes += pcap_buf->data_len; + rx_bytes += pcap_buf->pkt_len; /* Enqueue packet back on ring to allow infinite rx. 
*/ rte_ring_enqueue(pcap_q->pkts, pcap_buf); } - pcap_q->rx_stat.pkts += i; - pcap_q->rx_stat.bytes += rx_bytes; + rte_eth_count_packets(&pcap_q->rx_stat, i, rx_bytes); return i; } @@ -300,7 +293,9 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) mbuf = rte_pktmbuf_alloc(pcap_q->mb_pool); if (unlikely(mbuf == NULL)) { - pcap_q->rx_stat.rx_nombuf++; + struct rte_eth_dev *dev = &rte_eth_devices[pcap_q->port_id]; + + ++dev->data->rx_mbuf_alloc_failed; break; } @@ -315,7 +310,7 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) mbuf, packet, header.caplen) == -1)) { - pcap_q->rx_stat.err_pkts++; + rte_eth_count_error(&pcap_q->rx_stat); rte_pktmbuf_free(mbuf); break; } @@ -332,9 +327,8 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) num_rx++; rx_bytes += header.caplen; } - pcap_q->rx_stat.pkts += num_rx; - pcap_q->rx_stat.bytes += rx_bytes; + rte_eth_count_packets(&pcap_q->rx_stat, num_rx, rx_bytes); return num_rx; } @@ -423,9 +417,8 @@ eth_pcap_tx_dumper(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) * we flush the pcap dumper within each burst. */ pcap_dump_flush(dumper); - dumper_q->tx_stat.pkts += num_tx; - dumper_q->tx_stat.bytes += tx_bytes; - dumper_q->tx_stat.err_pkts += nb_pkts - num_tx; + + rte_eth_count_packets(&dumper_q->tx_stat, num_tx, tx_bytes); return nb_pkts; } @@ -448,9 +441,7 @@ eth_tx_drop(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) rte_pktmbuf_free(bufs[i]); } - tx_queue->tx_stat.pkts += nb_pkts; - tx_queue->tx_stat.bytes += tx_bytes; - + rte_eth_count_packets(&tx_queue->tx_stat, nb_pkts, tx_bytes); return i; } @@ -502,9 +493,7 @@ eth_pcap_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) rte_pktmbuf_free(mbuf); } - tx_queue->tx_stat.pkts += num_tx; - tx_queue->tx_stat.bytes += tx_bytes; - tx_queue->tx_stat.err_pkts += i - num_tx; + rte_eth_count_packets(&tx_queue->tx_stat, num_tx, tx_bytes); return i; } @@ -746,41 +735,12 @@ static int eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { unsigned int i; - unsigned long rx_packets_total = 0, rx_bytes_total = 0; - unsigned long rx_missed_total = 0; - unsigned long rx_nombuf_total = 0, rx_err_total = 0; - unsigned long tx_packets_total = 0, tx_bytes_total = 0; - unsigned long tx_packets_err_total = 0; - const struct pmd_internals *internal = dev->data->dev_private; - - for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS && - i < dev->data->nb_rx_queues; i++) { - stats->q_ipackets[i] = internal->rx_queue[i].rx_stat.pkts; - stats->q_ibytes[i] = internal->rx_queue[i].rx_stat.bytes; - rx_nombuf_total += internal->rx_queue[i].rx_stat.rx_nombuf; - rx_err_total += internal->rx_queue[i].rx_stat.err_pkts; - rx_packets_total += stats->q_ipackets[i]; - rx_bytes_total += stats->q_ibytes[i]; - rx_missed_total += queue_missed_stat_get(dev, i); - } - for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS && - i < dev->data->nb_tx_queues; i++) { - stats->q_opackets[i] = internal->tx_queue[i].tx_stat.pkts; - stats->q_obytes[i] = internal->tx_queue[i].tx_stat.bytes; - tx_packets_total += stats->q_opackets[i]; - tx_bytes_total += stats->q_obytes[i]; - tx_packets_err_total += internal->tx_queue[i].tx_stat.err_pkts; - } + rte_eth_counters_stats_get(dev, offsetof(struct pcap_tx_queue, tx_stat), + offsetof(struct pcap_rx_queue, rx_stat), stats); - stats->ipackets = rx_packets_total; - stats->ibytes = rx_bytes_total; - stats->imissed = rx_missed_total; - stats->ierrors = rx_err_total; - stats->rx_nombuf = rx_nombuf_total; - stats->opackets = tx_packets_total; - 
stats->obytes = tx_bytes_total; - stats->oerrors = tx_packets_err_total; + for (i = 0; i < dev->data->nb_rx_queues; i++) + stats->imissed += queue_missed_stat_get(dev, i); return 0; } @@ -789,21 +749,12 @@ static int eth_stats_reset(struct rte_eth_dev *dev) { unsigned int i; - struct pmd_internals *internal = dev->data->dev_private; - for (i = 0; i < dev->data->nb_rx_queues; i++) { - internal->rx_queue[i].rx_stat.pkts = 0; - internal->rx_queue[i].rx_stat.bytes = 0; - internal->rx_queue[i].rx_stat.err_pkts = 0; - internal->rx_queue[i].rx_stat.rx_nombuf = 0; - queue_missed_stat_reset(dev, i); - } + rte_eth_counters_reset(dev, offsetof(struct pcap_tx_queue, tx_stat), + offsetof(struct pcap_rx_queue, rx_stat)); - for (i = 0; i < dev->data->nb_tx_queues; i++) { - internal->tx_queue[i].tx_stat.pkts = 0; - internal->tx_queue[i].tx_stat.bytes = 0; - internal->tx_queue[i].tx_stat.err_pkts = 0; - } + for (i = 0; i < dev->data->nb_rx_queues; i++) + queue_missed_stat_reset(dev, i); return 0; } @@ -929,13 +880,6 @@ eth_rx_queue_setup(struct rte_eth_dev *dev, pcap_pkt_count); return -EINVAL; } - - /* - * Reset the stats for this queue since eth_pcap_rx calls above - * didn't result in the application receiving packets. - */ - pcap_q->rx_stat.pkts = 0; - pcap_q->rx_stat.bytes = 0; } return 0; From patchwork Tue May 21 20:16:39 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 140234 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 924CA4408D; Tue, 21 May 2024 22:18:57 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 1110E40DF8; Tue, 21 May 2024 22:18:31 +0200 (CEST) Received: from mail-pl1-f175.google.com (mail-pl1-f175.google.com [209.85.214.175]) by mails.dpdk.org (Postfix) with ESMTP id 49F4E400D7 for ; Tue, 21 May 2024 22:18:09 +0200 (CEST) Received: by mail-pl1-f175.google.com with SMTP id d9443c01a7336-1ed96772f92so5107375ad.0 for ; Tue, 21 May 2024 13:18:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20230601.gappssmtp.com; s=20230601; t=1716322688; x=1716927488; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=bN+U3MlFvHBy6Y5Ch8EA2M37+2VY2m8EGSXVGyhuBX4=; b=gqDl5DVo0UQ16ssCVwN1oRSex8mrNSVUAA8M9TF49JMUMhCcYhzw9rndD85K9VO1Hd 9nlb7KER9KQSuDcRnad59PpxFSy+l4mvP0KY3mRSiAnAM/BOc25girbRMUAx7+kIr+7Q CyiOOYROS6TdzRaaNi4yqxrdrbpFaVgLU4hdwnAI3UXnTVxgomDSmVF0RfMLbc2pyFMz +oruFYoNxzbSkzMgBRKk7WzcgVGsvZyT5DtgbgGNJ2WFc+5EeC6NpXqTJQ8Zt5PuF8jJ KwRWLtadkkf9oCTkb9WTJAMRc7uA5VC4HGo2ZUXoI0G0P0lBBsMxJLUNIDkmSrsD3LGl Gkpw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1716322688; x=1716927488; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=bN+U3MlFvHBy6Y5Ch8EA2M37+2VY2m8EGSXVGyhuBX4=; b=lcXY4u8bYTi5nJDdSMy4zsIRbGSDSTwisbk+d+AaH3daCb7P1U/KJx0VQVpwkGCpBY 54JtFuaV0+0+6tts0J9hyuxtSrNf/xPXErkGZsqQa/p6+u8NoTFjo78g0qMnUa+UojOh NUO2L2jAavx3CKJ9tJgpHVhLOOvtJd0MRpqdUveCTiQhqCDFbFKG6XGgABGZ6/h+Cmje 
tuAaJXMiJ9zCnO+0iEJc9o/5tw2vRlCVUC4Fwy6mceFLM0/YnLxDRtVHi4TYOLQ0f5Bo zhwLx4ZPG5Gz7Oz1zw6Klc3m07HmjVq7u5BQTjYSZpCtsZ1tTECB7gtUpYPvACfBx32g pdow== X-Gm-Message-State: AOJu0Yzz6TvyFL5lO/pRBUC3SZYNDpoEt2ZlkL5q6SpCv9nG+axPofVq kI0mCusCOa50+qYKvquSR5sxtySrUrBEitYbxnaNOTj2N2i7nc+eJpj5LI41hw3gS7kk9sFnrjk b X-Google-Smtp-Source: AGHT+IEbTbUgXJKKo7IhyeLS8LPjLAovcZDkBhMZoLQ0nn9xoE6dTD0pgw5VqzuO+pc/iOsy4RpGSw== X-Received: by 2002:a17:902:7809:b0:1ef:4600:835d with SMTP id d9443c01a7336-1f31c969183mr394915ad.21.1716322688395; Tue, 21 May 2024 13:18:08 -0700 (PDT) Received: from hermes.lan (204-195-96-226.wavecable.com. [204.195.96.226]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-1f30fb5303csm16119075ad.166.2024.05.21.13.18.07 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 21 May 2024 13:18:08 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger , Bruce Richardson Subject: [PATCH v9 6/8] net/ring: use generic SW stats Date: Tue, 21 May 2024 13:16:39 -0700 Message-ID: <20240521201801.126886-7-stephen@networkplumber.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240521201801.126886-1-stephen@networkplumber.org> References: <20240510050507.14381-1-stephen@networkplumber.org> <20240521201801.126886-1-stephen@networkplumber.org> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Use generic per-queue infrastructure. Signed-off-by: Stephen Hemminger --- drivers/net/ring/rte_eth_ring.c | 63 +++++++++++---------------------- 1 file changed, 20 insertions(+), 43 deletions(-) diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c index b16f5d55f2..36053e4038 100644 --- a/drivers/net/ring/rte_eth_ring.c +++ b/drivers/net/ring/rte_eth_ring.c @@ -7,6 +7,7 @@ #include "rte_eth_ring.h" #include #include +#include #include #include #include @@ -44,8 +45,8 @@ enum dev_action { struct ring_queue { struct rte_ring *rng; - RTE_ATOMIC(uint64_t) rx_pkts; - RTE_ATOMIC(uint64_t) tx_pkts; + + struct rte_eth_counters stats; }; struct pmd_internals { @@ -77,12 +78,12 @@ eth_ring_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs) { void **ptrs = (void *)&bufs[0]; struct ring_queue *r = q; - const uint16_t nb_rx = (uint16_t)rte_ring_dequeue_burst(r->rng, - ptrs, nb_bufs, NULL); - if (r->rng->flags & RING_F_SC_DEQ) - r->rx_pkts += nb_rx; - else - rte_atomic_fetch_add_explicit(&r->rx_pkts, nb_rx, rte_memory_order_relaxed); + uint16_t nb_rx; + + nb_rx = (uint16_t)rte_ring_dequeue_burst(r->rng, ptrs, nb_bufs, NULL); + + rte_counter64_add(&r->stats.packets, nb_rx); + return nb_rx; } @@ -91,12 +92,12 @@ eth_ring_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs) { void **ptrs = (void *)&bufs[0]; struct ring_queue *r = q; - const uint16_t nb_tx = (uint16_t)rte_ring_enqueue_burst(r->rng, - ptrs, nb_bufs, NULL); - if (r->rng->flags & RING_F_SP_ENQ) - r->tx_pkts += nb_tx; - else - rte_atomic_fetch_add_explicit(&r->tx_pkts, nb_tx, rte_memory_order_relaxed); + uint16_t nb_tx; + + nb_tx = (uint16_t)rte_ring_enqueue_burst(r->rng, ptrs, nb_bufs, NULL); + + rte_counter64_add(&r->stats.packets, nb_tx); + return nb_tx; } @@ -193,40 +194,16 @@ eth_dev_info(struct rte_eth_dev *dev, static int eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { - unsigned int i; - unsigned long rx_total = 0, tx_total = 0; - const struct pmd_internals *internal = dev->data->dev_private; - - for 
(i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS && - i < dev->data->nb_rx_queues; i++) { - stats->q_ipackets[i] = internal->rx_ring_queues[i].rx_pkts; - rx_total += stats->q_ipackets[i]; - } - - for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS && - i < dev->data->nb_tx_queues; i++) { - stats->q_opackets[i] = internal->tx_ring_queues[i].tx_pkts; - tx_total += stats->q_opackets[i]; - } - - stats->ipackets = rx_total; - stats->opackets = tx_total; - - return 0; + return rte_eth_counters_stats_get(dev, offsetof(struct ring_queue, stats), + offsetof(struct ring_queue, stats), + stats); } static int eth_stats_reset(struct rte_eth_dev *dev) { - unsigned int i; - struct pmd_internals *internal = dev->data->dev_private; - - for (i = 0; i < dev->data->nb_rx_queues; i++) - internal->rx_ring_queues[i].rx_pkts = 0; - for (i = 0; i < dev->data->nb_tx_queues; i++) - internal->tx_ring_queues[i].tx_pkts = 0; - - return 0; + return rte_eth_counters_reset(dev, offsetof(struct ring_queue, stats), + offsetof(struct ring_queue, stats)); } static void From patchwork Tue May 21 20:16:40 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 140235 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 52B474408D; Tue, 21 May 2024 22:19:03 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 6864240E28; Tue, 21 May 2024 22:18:32 +0200 (CEST) Received: from mail-pg1-f174.google.com (mail-pg1-f174.google.com [209.85.215.174]) by mails.dpdk.org (Postfix) with ESMTP id 27D57400D7 for ; Tue, 21 May 2024 22:18:10 +0200 (CEST) Received: by mail-pg1-f174.google.com with SMTP id 41be03b00d2f7-517ab9a4a13so1574164a12.1 for ; Tue, 21 May 2024 13:18:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20230601.gappssmtp.com; s=20230601; t=1716322689; x=1716927489; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=yy8YDZY/MRPSLR7fI/dVHNGEOfgFcJ9MlJeosQDKby0=; b=rTHEHhSmLYafPlAjSlfkbyZQprV8fjrIDMFX2L5KYWMEx/RsqwONvjFpiul8mREphT 7EtHP0N5OHA2F6r3boFDeKG+A0y+wKwFK3q1n4lP3LZ6Zy+47sUv4cDLZ7l2zQZL0Cyp vLMIaPyNgn9X31gIYnCQxCA2ofrlDPZU3mgtnNY+fHiYzK36YCfySETMQnzI/vaSgEdZ 9mRjJcx1G/EN/7jtv/TwLGsRnwipVu+Vbcr2CXiYLw7Ac8RcLDc8qQhmCxlKoBRtpSnr TRfTrg/zkK5cxwsWVgD0k/HA/fXQ/11wIg9PISslCA4M+YGEFfriKC0+AeiWP9D0rGPR hoqw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1716322689; x=1716927489; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=yy8YDZY/MRPSLR7fI/dVHNGEOfgFcJ9MlJeosQDKby0=; b=q7uGPuzUwM9Kt7Blqq1VWEFonc3bZhjDsZtnMn0D4WCxsbtl5ZM0jxqjbmbHAjPRy0 PA//e847ysCMfRfbw9oDppD7DvTaPG7+5P65dqTKZQ5X1fdeVxW9uQR4WdUOr+7mX+XI ZpXg/Ps2Fzt+/t++s56UvaTTl4gX0P0plYcQrPTIL7+8XOrWp2rUCX0ln0HaSb3WMpBM b/GH6/mHAYUhd8etdcOWn4SEpPXTwlk5QBaKXZEa2g/o1MqFUQQziIhnqs3ChWRv7/AC 0lkWTn8AT0v8LTvls6Q+/XaLWORHgIz9CoDMhXQHZ056niTW4PKI8E4RBVvnnxvII35w cO2g== X-Gm-Message-State: AOJu0Yys7c9t+DMgRz+jn1xjoz8m80D6vJaTjaWwbZ0d+FZit6zSF5J4 jzcWoQwzfRpHm9e/gPS4D4vP8YXorp9SMlggWJ8IGax5Xv5vC4OJ/8gyBv40LXnYMY/ZnkCwOzl M X-Google-Smtp-Source: 
AGHT+IG6Coj9ljiFl1haIbtjcnCfMm5VIlQmswkge/vmjKcVRRlCgTmTO/ISUqfDu5wio8rR6OhRlw== X-Received: by 2002:a17:902:dac7:b0:1f3:61c:30a2 with SMTP id d9443c01a7336-1f31c96530bmr506515ad.2.1716322689324; Tue, 21 May 2024 13:18:09 -0700 (PDT) Received: from hermes.lan (204-195-96-226.wavecable.com. [204.195.96.226]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-1f30fb5303csm16119075ad.166.2024.05.21.13.18.08 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 21 May 2024 13:18:08 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger Subject: [PATCH v9 7/8] net/tap: use generic SW stats Date: Tue, 21 May 2024 13:16:40 -0700 Message-ID: <20240521201801.126886-8-stephen@networkplumber.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240521201801.126886-1-stephen@networkplumber.org> References: <20240510050507.14381-1-stephen@networkplumber.org> <20240521201801.126886-1-stephen@networkplumber.org> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Use new common sw statistics. Signed-off-by: Stephen Hemminger --- drivers/net/tap/rte_eth_tap.c | 75 ++++++----------------------------- drivers/net/tap/rte_eth_tap.h | 15 ++----- 2 files changed, 16 insertions(+), 74 deletions(-) diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c index 69d9da695b..9cc923fd0c 100644 --- a/drivers/net/tap/rte_eth_tap.c +++ b/drivers/net/tap/rte_eth_tap.c @@ -455,7 +455,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) /* Packet couldn't fit in the provided mbuf */ if (unlikely(rxq->pi.flags & TUN_PKT_STRIP)) { - rxq->stats.ierrors++; + rte_eth_count_error(&rxq->stats); continue; } @@ -467,7 +467,9 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) struct rte_mbuf *buf = rte_pktmbuf_alloc(rxq->mp); if (unlikely(!buf)) { - rxq->stats.rx_nombuf++; + struct rte_eth_dev *dev = &rte_eth_devices[rxq->in_port]; + ++dev->data->rx_mbuf_alloc_failed; + /* No new buf has been allocated: do nothing */ if (!new_tail || !seg) goto end; @@ -512,8 +514,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) num_rx_bytes += mbuf->pkt_len; } end: - rxq->stats.ipackets += num_rx; - rxq->stats.ibytes += num_rx_bytes; + rte_eth_count_packets(&rxq->stats, num_rx, num_rx_bytes); if (trigger && num_rx < nb_pkts) rxq->trigger_seen = trigger; @@ -693,7 +694,7 @@ pmd_tx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) tso_segsz = mbuf_in->tso_segsz + hdrs_len; if (unlikely(tso_segsz == hdrs_len) || tso_segsz > *txq->mtu) { - txq->stats.errs++; + rte_eth_count_error(&txq->stats); break; } gso_ctx->gso_size = tso_segsz; @@ -731,7 +732,8 @@ pmd_tx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) ret = tap_write_mbufs(txq, num_mbufs, mbuf, &num_packets, &num_tx_bytes); if (ret == -1) { - txq->stats.errs++; + rte_eth_count_error(&txq->stats); + /* free tso mbufs */ if (num_tso_mbufs > 0) rte_pktmbuf_free_bulk(mbuf, num_tso_mbufs); @@ -749,9 +751,7 @@ pmd_tx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) } } - txq->stats.opackets += num_packets; - txq->stats.errs += nb_pkts - num_tx; - txq->stats.obytes += num_tx_bytes; + rte_eth_count_packets(&txq->stats, num_packets, num_tx_bytes); return num_tx; } @@ -1055,64 +1055,15 @@ tap_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) static int 
 tap_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *tap_stats)
 {
-	unsigned int i, imax;
-	unsigned long rx_total = 0, tx_total = 0, tx_err_total = 0;
-	unsigned long rx_bytes_total = 0, tx_bytes_total = 0;
-	unsigned long rx_nombuf = 0, ierrors = 0;
-	const struct pmd_internals *pmd = dev->data->dev_private;
-
-	/* rx queue statistics */
-	imax = (dev->data->nb_rx_queues < RTE_ETHDEV_QUEUE_STAT_CNTRS) ?
-		dev->data->nb_rx_queues : RTE_ETHDEV_QUEUE_STAT_CNTRS;
-	for (i = 0; i < imax; i++) {
-		tap_stats->q_ipackets[i] = pmd->rxq[i].stats.ipackets;
-		tap_stats->q_ibytes[i] = pmd->rxq[i].stats.ibytes;
-		rx_total += tap_stats->q_ipackets[i];
-		rx_bytes_total += tap_stats->q_ibytes[i];
-		rx_nombuf += pmd->rxq[i].stats.rx_nombuf;
-		ierrors += pmd->rxq[i].stats.ierrors;
-	}
-
-	/* tx queue statistics */
-	imax = (dev->data->nb_tx_queues < RTE_ETHDEV_QUEUE_STAT_CNTRS) ?
-		dev->data->nb_tx_queues : RTE_ETHDEV_QUEUE_STAT_CNTRS;
-
-	for (i = 0; i < imax; i++) {
-		tap_stats->q_opackets[i] = pmd->txq[i].stats.opackets;
-		tap_stats->q_obytes[i] = pmd->txq[i].stats.obytes;
-		tx_total += tap_stats->q_opackets[i];
-		tx_err_total += pmd->txq[i].stats.errs;
-		tx_bytes_total += tap_stats->q_obytes[i];
-	}
-
-	tap_stats->ipackets = rx_total;
-	tap_stats->ibytes = rx_bytes_total;
-	tap_stats->ierrors = ierrors;
-	tap_stats->rx_nombuf = rx_nombuf;
-	tap_stats->opackets = tx_total;
-	tap_stats->oerrors = tx_err_total;
-	tap_stats->obytes = tx_bytes_total;
-	return 0;
+	return rte_eth_counters_stats_get(dev, offsetof(struct tx_queue, stats),
+					  offsetof(struct rx_queue, stats), tap_stats);
 }
 
 static int
 tap_stats_reset(struct rte_eth_dev *dev)
 {
-	int i;
-	struct pmd_internals *pmd = dev->data->dev_private;
-
-	for (i = 0; i < RTE_PMD_TAP_MAX_QUEUES; i++) {
-		pmd->rxq[i].stats.ipackets = 0;
-		pmd->rxq[i].stats.ibytes = 0;
-		pmd->rxq[i].stats.ierrors = 0;
-		pmd->rxq[i].stats.rx_nombuf = 0;
-
-		pmd->txq[i].stats.opackets = 0;
-		pmd->txq[i].stats.errs = 0;
-		pmd->txq[i].stats.obytes = 0;
-	}
-
-	return 0;
+	return rte_eth_counters_reset(dev, offsetof(struct tx_queue, stats),
+				      offsetof(struct rx_queue, stats));
 }
 
 static int
diff --git a/drivers/net/tap/rte_eth_tap.h b/drivers/net/tap/rte_eth_tap.h
index 5ac93f93e9..8cba9ea410 100644
--- a/drivers/net/tap/rte_eth_tap.h
+++ b/drivers/net/tap/rte_eth_tap.h
@@ -14,6 +14,7 @@
 #include
 #include
+#include
 #include
 #include
 #include "tap_log.h"
@@ -32,23 +33,13 @@ enum rte_tuntap_type {
 	ETH_TUNTAP_TYPE_MAX,
 };
 
-struct pkt_stats {
-	uint64_t opackets;	/* Number of output packets */
-	uint64_t ipackets;	/* Number of input packets */
-	uint64_t obytes;	/* Number of bytes on output */
-	uint64_t ibytes;	/* Number of bytes on input */
-	uint64_t errs;		/* Number of TX error packets */
-	uint64_t ierrors;	/* Number of RX error packets */
-	uint64_t rx_nombuf;	/* Nb of RX mbuf alloc failures */
-};
-
 struct rx_queue {
 	struct rte_mempool *mp;		/* Mempool for RX packets */
 	uint32_t trigger_seen;		/* Last seen Rx trigger value */
 	uint16_t in_port;		/* Port ID */
 	uint16_t queue_id;		/* queue ID*/
-	struct pkt_stats stats;		/* Stats for this RX queue */
 	uint16_t nb_rx_desc;		/* max number of mbufs available */
+	struct rte_eth_counters stats;	/* Stats for this RX queue */
 	struct rte_eth_rxmode *rxmode;	/* RX features */
 	struct rte_mbuf *pool;		/* mbufs pool for this queue */
 	struct iovec (*iovecs)[];	/* descriptors for this queue */
@@ -59,7 +50,7 @@ struct tx_queue {
 	int type;			/* Type field - TUN|TAP */
 	uint16_t *mtu;			/* Pointer to MTU from dev_data */
 	uint16_t csum:1;		/* Enable checksum offloading */
-	struct pkt_stats stats;		/* Stats for this TX queue */
+	struct rte_eth_counters stats;	/* Stats for this TX queue */
 	struct rte_gso_ctx gso_ctx;	/* GSO context */
 	uint16_t out_port;		/* Port ID */
 	uint16_t queue_id;		/* queue ID*/
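The datapath side of the tap conversion above is equally mechanical: errors are counted as they are detected, and packets/bytes are recorded once per burst instead of by incrementing several separate fields. A short sketch of that accounting, under the same assumptions as the earlier example (the foo_* names and the rte_ethdev_swstats.h header are illustrative; rte_eth_count_packets() and rte_eth_count_error() are used exactly as in the tap diff):

#include <stdint.h>
#include <rte_ether.h>			/* RTE_ETHER_MIN_LEN */
#include <rte_mbuf.h>
#include <rte_ethdev_swstats.h>		/* assumed location of the new helpers */

struct foo_rxq {
	struct rte_eth_counters stats;	/* per-queue SW counters */
};

/* Account one completed RX burst: drop runt frames, count the rest once. */
static uint16_t
foo_account_rx(struct foo_rxq *rxq, struct rte_mbuf **bufs, uint16_t nb_rx)
{
	uint16_t kept = 0;
	uint64_t bytes = 0;

	for (uint16_t i = 0; i < nb_rx; i++) {
		if (bufs[i]->pkt_len < RTE_ETHER_MIN_LEN) {
			rte_eth_count_error(&rxq->stats);	/* count an RX error for this queue */
			rte_pktmbuf_free(bufs[i]);
			continue;
		}
		bufs[kept++] = bufs[i];
		bytes += bufs[i]->pkt_len;
	}

	/* one call per burst updates the packet and byte counters together */
	rte_eth_count_packets(&rxq->stats, kept, bytes);
	return kept;
}
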
From patchwork Tue May 21 20:16:41 2024
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 140236
X-Patchwork-Delegate: thomas@monjalon.net
From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger, Tetsuya Mukawa
Subject: [PATCH v9 8/8] net/null: use generic SW stats
Date: Tue, 21 May 2024 13:16:41 -0700
Message-ID: <20240521201801.126886-9-stephen@networkplumber.org>
In-Reply-To: <20240521201801.126886-1-stephen@networkplumber.org>
References: <20240510050507.14381-1-stephen@networkplumber.org>
 <20240521201801.126886-1-stephen@networkplumber.org>

Use the new common code for statistics. This also fixes the bug that
this driver was not accounting for bytes.

Signed-off-by: Stephen Hemminger
---
 drivers/net/null/rte_eth_null.c | 83 +++++++++------------------------
 1 file changed, 21 insertions(+), 62 deletions(-)

diff --git a/drivers/net/null/rte_eth_null.c b/drivers/net/null/rte_eth_null.c
index f4ed3b8a7f..83add9c819 100644
--- a/drivers/net/null/rte_eth_null.c
+++ b/drivers/net/null/rte_eth_null.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -37,8 +38,8 @@ struct null_queue {
 	struct rte_mempool *mb_pool;
 	struct rte_mbuf *dummy_packet;
 
-	RTE_ATOMIC(uint64_t) rx_pkts;
-	RTE_ATOMIC(uint64_t) tx_pkts;
+	struct rte_eth_counters tx_stats;
+	struct rte_eth_counters rx_stats;
 };
 
 struct pmd_options {
@@ -101,9 +102,7 @@ eth_null_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 		bufs[i]->port = h->internals->port_id;
 	}
 
-	/* NOTE: review for potential ordering optimization */
-	rte_atomic_fetch_add_explicit(&h->rx_pkts, i, rte_memory_order_seq_cst);
-
+	rte_eth_count_packets(&h->rx_stats, i, i * packet_size);
 	return i;
 }
 
@@ -129,8 +128,7 @@ eth_null_copy_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 		bufs[i]->port = h->internals->port_id;
 	}
 
-	/* NOTE: review for potential ordering optimization */
-	rte_atomic_fetch_add_explicit(&h->rx_pkts, i, rte_memory_order_seq_cst);
+	rte_eth_count_packets(&h->rx_stats, i, i * packet_size);
 
 	return i;
 }
@@ -147,16 +145,17 @@ eth_null_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 {
 	int i;
 	struct null_queue *h = q;
+	uint32_t tx_bytes = 0;
 
 	if ((q == NULL) || (bufs == NULL))
 		return 0;
 
-	for (i = 0; i < nb_bufs; i++)
+	for (i = 0; i < nb_bufs; i++) {
+		tx_bytes += rte_pktmbuf_pkt_len(bufs[i]);
 		rte_pktmbuf_free(bufs[i]);
+	}
 
-	/* NOTE: review for potential ordering optimization */
-	rte_atomic_fetch_add_explicit(&h->tx_pkts, i, rte_memory_order_seq_cst);
-
+	rte_eth_count_packets(&h->tx_stats, i, tx_bytes);
 	return i;
 }
 
@@ -166,20 +165,20 @@ eth_null_copy_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 	int i;
 	struct null_queue *h = q;
 	unsigned int packet_size;
+	uint32_t tx_bytes = 0;
 
 	if ((q == NULL) || (bufs == NULL))
 		return 0;
 
 	packet_size = h->internals->packet_size;
 	for (i = 0; i < nb_bufs; i++) {
+		tx_bytes += rte_pktmbuf_pkt_len(bufs[i]);
 		rte_memcpy(h->dummy_packet, rte_pktmbuf_mtod(bufs[i], void *),
 			   packet_size);
 		rte_pktmbuf_free(bufs[i]);
 	}
 
-	/* NOTE: review for potential ordering optimization */
-	rte_atomic_fetch_add_explicit(&h->tx_pkts, i, rte_memory_order_seq_cst);
-
+	rte_eth_count_packets(&h->tx_stats, i, tx_bytes);
 	return i;
 }
 
@@ -322,60 +321,20 @@ eth_dev_info(struct rte_eth_dev *dev,
 }
 
 static int
-eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *igb_stats)
+eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 {
-	unsigned int i, num_stats;
-	unsigned long rx_total = 0, tx_total = 0;
-	const struct pmd_internals *internal;
-
-	if ((dev == NULL) || (igb_stats == NULL))
-		return -EINVAL;
-
-	internal = dev->data->dev_private;
-	num_stats = RTE_MIN((unsigned int)RTE_ETHDEV_QUEUE_STAT_CNTRS,
-			RTE_MIN(dev->data->nb_rx_queues,
-				RTE_DIM(internal->rx_null_queues)));
-	for (i = 0; i < num_stats; i++) {
-		/* NOTE: review for atomic access */
-		igb_stats->q_ipackets[i] =
-			internal->rx_null_queues[i].rx_pkts;
-		rx_total += igb_stats->q_ipackets[i];
-	}
-
-	num_stats = RTE_MIN((unsigned int)RTE_ETHDEV_QUEUE_STAT_CNTRS,
-			RTE_MIN(dev->data->nb_tx_queues,
-				RTE_DIM(internal->tx_null_queues)));
-	for (i = 0; i < num_stats; i++) {
-		/* NOTE: review for atomic access */
-		igb_stats->q_opackets[i] =
-			internal->tx_null_queues[i].tx_pkts;
-		tx_total += igb_stats->q_opackets[i];
-	}
-
-	igb_stats->ipackets = rx_total;
-	igb_stats->opackets = tx_total;
-
-	return 0;
+	return rte_eth_counters_stats_get(dev,
+			offsetof(struct null_queue, tx_stats),
+			offsetof(struct null_queue, rx_stats),
+			stats);
 }
 
 static int
 eth_stats_reset(struct rte_eth_dev *dev)
 {
-	unsigned int i;
-	struct pmd_internals *internal;
-
-	if (dev == NULL)
-		return -EINVAL;
-
-	internal = dev->data->dev_private;
-	for (i = 0; i < RTE_DIM(internal->rx_null_queues); i++)
-		/* NOTE: review for atomic access */
-		internal->rx_null_queues[i].rx_pkts = 0;
-	for (i = 0; i < RTE_DIM(internal->tx_null_queues); i++)
-		/* NOTE: review for atomic access */
-		internal->tx_null_queues[i].tx_pkts = 0;
-
-	return 0;
+	return rte_eth_counters_reset(dev,
+			offsetof(struct null_queue, tx_stats),
+			offsetof(struct null_queue, rx_stats));
 }
 
 static void
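With all three drivers aggregating through the same helpers, nothing changes for applications except that fields which previously stayed at zero (for net/null, the byte counts) are now filled in. A small usage sketch against the standard ethdev API, assuming an already started port; the port id handling and the function name are illustrative:

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Print the basic SW stats of a port served by one of the converted PMDs. */
static void
show_port_stats(uint16_t port_id)
{
	struct rte_eth_stats stats;

	if (rte_eth_stats_get(port_id, &stats) != 0)
		return;

	printf("port %" PRIu16 ": rx %" PRIu64 " pkts / %" PRIu64 " bytes, "
	       "tx %" PRIu64 " pkts / %" PRIu64 " bytes, %" PRIu64 " oerrors\n",
	       port_id, stats.ipackets, stats.ibytes,
	       stats.opackets, stats.obytes, stats.oerrors);

	/* clearing goes through the same generic reset path shown above */
	rte_eth_stats_reset(port_id);
}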