From patchwork Fri May 17 00:12:01 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 140149 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D9DDF44044; Fri, 17 May 2024 02:13:21 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 666B0402ED; Fri, 17 May 2024 02:13:17 +0200 (CEST) Received: from mail-pf1-f181.google.com (mail-pf1-f181.google.com [209.85.210.181]) by mails.dpdk.org (Postfix) with ESMTP id 26E774027F for ; Fri, 17 May 2024 02:13:14 +0200 (CEST) Received: by mail-pf1-f181.google.com with SMTP id d2e1a72fcca58-6f43ee95078so780572b3a.1 for ; Thu, 16 May 2024 17:13:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20230601.gappssmtp.com; s=20230601; t=1715904793; x=1716509593; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=it8vQ7zCnyKycuWg3oTjgKRI/75M1tfp8aWkSfvtLSE=; b=yBPOnqyb13zIcquimSHGi3wR/zPQ9wYMunweozaWhmwq6wn9zhkQoJAWmtrsWcCXtM r1akPdJNU34kJm5aD5XCKrhWWrEgP/3PFh76fkWIAuRBV2s6B2AVnue5t07a9pKbK8y2 McrIfSyeYPHDwTY3viAy5cVmcRaI4TsWvqbgfnoFov4+j7EhGzhteNCGiUCQ2Ia3h+42 Y5t7XBtonHw6AhaYogk8PvGC8jbZtv11l3FSFDRahYjkdz84sBHa0ltK3Xcydfd4IiSC 0AnLWJ/rdk8cW8e0Bj5g/fnTBBQ/fW3nsIzXK7Vh4kDVHXcXsgVC3kFe8pjSYQzmnomw 7oIg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1715904793; x=1716509593; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=it8vQ7zCnyKycuWg3oTjgKRI/75M1tfp8aWkSfvtLSE=; b=Q2Qbqf/BnZKKeaR3GDOcNFAoqrNiZmadfdcCMQ5ydhTS8IqkdOoqWQPqDSuSy3Pw9n etl6jGDCdeguvR1VhG7Te6osa/uagjJ71a2wZQmzb03ZepDUWp9MX8iSUdjnL9a4oJzo uF+E9c9B6BEUA1LC9XWiAW/Av538GHL3K6dFZ6epbdPBSLFwaQ6j9LqpIZ7fVd2nl2+K i8aEsU/ytmxDpPid1lBzBrwo2hbJiDFGRlhb7KZMagG0Zvpdbx+4ca1944FrQNAy3CED bHS3/DrJMllkMmHqdqCcaLBNpFD+vTR9miZE1Zje9uZRNtq49FZ5Q2y/q2qgv1SzE0Lo pL4g== X-Gm-Message-State: AOJu0YzRJfwlMGb9cZDH7vjcMJy9rFv8gJkWmiV0i8jXiXxwiWUb5OA0 KJQGXd8myUDvMfnNR3lK5uHALacPAswCQ8PcnszHxAPufQZpGpmGANhKyalOMFNicZoQcGN6n0X FGrs= X-Google-Smtp-Source: AGHT+IEQ+34crnoyxX8RxHJm3dmVZQQi4uAY20f/wc6/KBwGbWc5ydCkeDbtxDYkp9OSybrVasZ2oQ== X-Received: by 2002:a05:6a20:12d5:b0:1af:66e6:b1b2 with SMTP id adf61e73a8af0-1afde0b7152mr23824295637.1.1715904793104; Thu, 16 May 2024 17:13:13 -0700 (PDT) Received: from hermes.local (204-195-96-226.wavecable.com. 
[204.195.96.226]) by smtp.gmail.com with ESMTPSA id 98e67ed59e1d1-2b9f5820ce9sm3337495a91.56.2024.05.16.17.13.12 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 16 May 2024 17:13:12 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger , =?utf-8?q?Morten_Br?= =?utf-8?q?=C3=B8rup?= Subject: [PATCH v6 1/9] eal: generic 64 bit counter Date: Thu, 16 May 2024 17:12:01 -0700 Message-ID: <20240517001302.65514-2-stephen@networkplumber.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240517001302.65514-1-stephen@networkplumber.org> References: <20240510050507.14381-1-stephen@networkplumber.org> <20240517001302.65514-1-stephen@networkplumber.org> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org This header implements 64 bit counters that are NOT atomic but are safe against load/store splits on 32 bit platforms. Signed-off-by: Stephen Hemminger Acked-by: Morten Brørup --- lib/eal/include/meson.build | 1 + lib/eal/include/rte_counter.h | 98 +++++++++++++++++++++++++++++++++++ 2 files changed, 99 insertions(+) create mode 100644 lib/eal/include/rte_counter.h diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build index e94b056d46..c070dd0079 100644 --- a/lib/eal/include/meson.build +++ b/lib/eal/include/meson.build @@ -12,6 +12,7 @@ headers += files( 'rte_class.h', 'rte_common.h', 'rte_compat.h', + 'rte_counter.h', 'rte_debug.h', 'rte_dev.h', 'rte_devargs.h', diff --git a/lib/eal/include/rte_counter.h b/lib/eal/include/rte_counter.h new file mode 100644 index 0000000000..d623195d63 --- /dev/null +++ b/lib/eal/include/rte_counter.h @@ -0,0 +1,98 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) Stephen Hemminger + */ + +#ifndef _RTE_COUNTER_H_ +#define _RTE_COUNTER_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +/** + * @file + * RTE Counter + * + * A counter is 64 bit value that is safe from split read/write + * on 32 bit platforms. It assumes that only one cpu at a time + * will update the counter, and another CPU may want to read it. + * + * This is a much weaker guarantee than full atomic variables + * but is faster since no locked operations are required for update. + */ + +#ifdef RTE_ARCH_64 +/* + * On a platform that can support native 64 bit type, no special handling. + * These are just wrapper around 64 bit value. + */ +typedef uint64_t rte_counter64_t; + +/** + * Add value to counter. + */ +__rte_experimental +static inline void +rte_counter64_add(rte_counter64_t *counter, uint32_t val) +{ + *counter += val; +} + +__rte_experimental +static inline uint64_t +rte_counter64_fetch(const rte_counter64_t *counter) +{ + return *counter; +} + +__rte_experimental +static inline void +rte_counter64_set(rte_counter64_t *counter, uint64_t val) +{ + *counter = val; +} + +#else + +#include + +/* + * On a 32 bit platform need to use atomic to force the compler to not + * split 64 bit read/write. 
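For illustration, a minimal usage sketch of the counter API above (not part of the patch; the demo_queue structure and function names are invented): one lcore owns and updates the counters on the fast path, while any other thread may fetch or reset them without risking a torn 64-bit read on a 32-bit build.

    #include <stdint.h>

    #include <rte_counter.h>

    /* Hypothetical per-queue statistics block, updated by exactly one lcore. */
    struct demo_queue {
            rte_counter64_t packets;
            rte_counter64_t bytes;
    };

    /* Fast path: called only from the lcore that owns the queue. */
    static inline void
    demo_queue_count(struct demo_queue *q, uint32_t pkt_len)
    {
            rte_counter64_add(&q->packets, 1);
            rte_counter64_add(&q->bytes, pkt_len);
    }

    /* Control path: may run on another thread; fetch never observes a
     * half-written 64-bit value. */
    static void
    demo_queue_read(const struct demo_queue *q, uint64_t *pkts, uint64_t *octets)
    {
            *pkts = rte_counter64_fetch(&q->packets);
            *octets = rte_counter64_fetch(&q->bytes);
    }

    static void
    demo_queue_clear(struct demo_queue *q)
    {
            rte_counter64_reset(&q->packets);
            rte_counter64_reset(&q->bytes);
    }

On 64-bit targets these calls compile down to plain loads and stores; only the 32-bit build pays for the relaxed atomic operations.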
+ */ +typedef RTE_ATOMIC(uint64_t) rte_counter64_t; + +__rte_experimental +static inline void +rte_counter64_add(rte_counter64_t *counter, uint32_t val) +{ + rte_atomic_fetch_add_explicit(counter, val, rte_memory_order_relaxed); +} + +__rte_experimental +static inline uint64_t +rte_counter64_fetch(const rte_counter64_t *counter) +{ + return rte_atomic_load_explicit(counter, rte_memory_order_consume); +} + +__rte_experimental +static inline void +rte_counter64_set(rte_counter64_t *counter, uint64_t val) +{ + rte_atomic_store_explicit(counter, val, rte_memory_order_release); +} +#endif + +__rte_experimental +static inline void +rte_counter64_reset(rte_counter64_t *counter) +{ + rte_counter64_set(counter, 0); +} + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_COUNTER_H_ */ From patchwork Fri May 17 00:12:02 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 140150 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 2830244044; Fri, 17 May 2024 02:13:29 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 317844064C; Fri, 17 May 2024 02:13:19 +0200 (CEST) Received: from mail-pg1-f174.google.com (mail-pg1-f174.google.com [209.85.215.174]) by mails.dpdk.org (Postfix) with ESMTP id C43664027F for ; Fri, 17 May 2024 02:13:14 +0200 (CEST) Received: by mail-pg1-f174.google.com with SMTP id 41be03b00d2f7-65894e58b8aso624453a12.0 for ; Thu, 16 May 2024 17:13:14 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20230601.gappssmtp.com; s=20230601; t=1715904794; x=1716509594; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Z8eKqf5lNQlmO6QqgxdqNWvOCC2KYo8qD/rGb5b3ZPM=; b=tFosqydPvPtHp+kMqb3UUMM1DRlsWiUItsqfmsa1JxflxOA5Qp29zaEC6n3X1i+Zdx aWuwn0Wolennk/M1N0D0uaezNcbHxrzbVL8RNDeUWQFEY2x7fTmA7nlvO9koK+O8USAB 9gRKmLv7JDag00nR81rNSxys1lqp4XZ/UBOJIx6wlMdO9Ru5iz8EtyYTXO4TiVd2vUYd nSqXF77fhVEtVUfMKNfxQ33ahblZ4hnpHpj3LewWgOwv8nalpkpUqCmdV/AhM/vDEHWV Vmjuyn91V0uMmQeLqGg7escnh1nvWvjl5dfvLqurW2o0vVOEatH2+sa9GLt9YaOAbOVa +oqw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1715904794; x=1716509594; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Z8eKqf5lNQlmO6QqgxdqNWvOCC2KYo8qD/rGb5b3ZPM=; b=ERiZaOgU+H12dgqgO6NoT+i8WWcEWK/l1Z0sutsDVesvTPH3pJoLfAyBmY3JSR1XZU fR4Gb1q1hwa7sIzBzm0Rp8WzEw1lTyoFbT2xromPpxkr8ULbNqEWLXveFAw0MAUnrCJp BZvm/vvfTQ7964LKuSJe8pZjAxwJDhGQKqZTKga4TFFqjZygTkCiPBN3EalIAv+iu1Hk SUoTsz4z8C0g1MaRdAUw0y+mSEVkyECzZB2OVbFcYlJ/LTdyduocSxSDSHmslHs7wCM6 3rkblJpgVvvCyw6tbKSpbNbAHenLcWuJmoUp8XknSpGzht+dhZ/8YdN/nUwQZ+OZKXef w9qg== X-Gm-Message-State: AOJu0YwsRvi0X6DOnlu4GAWyuFp8IRyrNtSfJdhYy4g56xdcUqyrASmx jom/fhP32FEV1XzU4wNOVSX7qaVtPjQun6PUNTeBy6UY7wddw70JvZ2oRq1fMGg3rpweIm8F/k7 8gK8= X-Google-Smtp-Source: AGHT+IHK1Xkb8rb5eTU9rzm6XiYy2CLa+Op6JFMjAkx1qn7q6SHafStWLGYTg3uEyiIaFAJIjk3E8Q== X-Received: by 2002:a05:6a20:7343:b0:1af:dbe7:c976 with SMTP id adf61e73a8af0-1afde115bc1mr23203122637.36.1715904793965; Thu, 16 May 2024 17:13:13 -0700 (PDT) Received: from 
hermes.local (204-195-96-226.wavecable.com. [204.195.96.226]) by smtp.gmail.com with ESMTPSA id 98e67ed59e1d1-2b9f5820ce9sm3337495a91.56.2024.05.16.17.13.13 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 16 May 2024 17:13:13 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger , Thomas Monjalon , Ferruh Yigit , Andrew Rybchenko Subject: [PATCH v6 2/9] ethdev: add common counters for statistics Date: Thu, 16 May 2024 17:12:02 -0700 Message-ID: <20240517001302.65514-3-stephen@networkplumber.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240517001302.65514-1-stephen@networkplumber.org> References: <20240510050507.14381-1-stephen@networkplumber.org> <20240517001302.65514-1-stephen@networkplumber.org> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Introduce common helper routines for keeping track of per-queue statistics in SW PMD's. The code in several drivers had copy/pasted the same code for this, but had common issues with 64 bit counters on 32 bit platforms. Signed-off-by: Stephen Hemminger --- lib/ethdev/ethdev_swstats.c | 109 +++++++++++++++++++++++++++++++ lib/ethdev/ethdev_swstats.h | 124 ++++++++++++++++++++++++++++++++++++ lib/ethdev/meson.build | 2 + lib/ethdev/version.map | 3 + 4 files changed, 238 insertions(+) create mode 100644 lib/ethdev/ethdev_swstats.c create mode 100644 lib/ethdev/ethdev_swstats.h diff --git a/lib/ethdev/ethdev_swstats.c b/lib/ethdev/ethdev_swstats.c new file mode 100644 index 0000000000..7892b2180b --- /dev/null +++ b/lib/ethdev/ethdev_swstats.c @@ -0,0 +1,109 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) Stephen Hemminger + */ + + +#include +#include +#include + +#include "rte_ethdev.h" +#include "ethdev_driver.h" +#include "ethdev_swstats.h" + +int +rte_eth_counters_stats_get(const struct rte_eth_dev *dev, + size_t tx_offset, size_t rx_offset, + struct rte_eth_stats *stats) +{ + uint64_t packets, bytes, errors; + unsigned int i; + + for (i = 0; i < dev->data->nb_tx_queues; i++) { + const void *txq = dev->data->tx_queues[i]; + const struct rte_eth_counters *counters; + + if (txq == NULL) + continue; + + counters = (const struct rte_eth_counters *)((const char *)txq + tx_offset); + packets = rte_counter64_fetch(&counters->packets); + bytes = rte_counter64_fetch(&counters->bytes); + errors = rte_counter64_fetch(&counters->errors); + + /* + * Prevent compiler from fetching values twice which would + * cause per-queue and global statistics to not match. 
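As a reading aid for this helper, a condensed driver-side sketch (the demo_* structures and callbacks are invented, not part of the patch): a software PMD embeds struct rte_eth_counters in its own per-queue structures and passes the member offsets to rte_eth_counters_stats_get()/rte_eth_counters_reset() from its stats callbacks, which is the pattern the later patches in this series apply to the af_packet, af_xdp, pcap and ring drivers.

    #include <stddef.h>

    #include <rte_ethdev.h>
    #include <ethdev_driver.h>
    #include <ethdev_swstats.h>

    /* Hypothetical per-queue state of a software PMD. */
    struct demo_rx_queue {
            void *ring;                     /* driver-specific receive state */
            struct rte_eth_counters stats;  /* embedded SW counters */
    };

    struct demo_tx_queue {
            void *ring;                     /* driver-specific transmit state */
            struct rte_eth_counters stats;
    };

    /* .stats_get callback: the helper walks all queues using these offsets. */
    static int
    demo_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
    {
            return rte_eth_counters_stats_get(dev,
                                              offsetof(struct demo_tx_queue, stats),
                                              offsetof(struct demo_rx_queue, stats),
                                              stats);
    }

    /* .stats_reset callback: clears the embedded counters of every queue. */
    static int
    demo_stats_reset(struct rte_eth_dev *dev)
    {
            return rte_eth_counters_reset(dev,
                                          offsetof(struct demo_tx_queue, stats),
                                          offsetof(struct demo_rx_queue, stats));
    }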
+ */ + rte_compiler_barrier(); + + stats->opackets += packets; + stats->obytes += bytes; + stats->oerrors += errors; + + if (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) { + stats->q_opackets[i] = packets; + stats->q_obytes[i] = bytes; + } + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + const void *rxq = dev->data->rx_queues[i]; + const struct rte_eth_counters *counters; + + if (rxq == NULL) + continue; + + counters = (const struct rte_eth_counters *)((const char *)rxq + rx_offset); + packets = rte_counter64_fetch(&counters->packets); + bytes = rte_counter64_fetch(&counters->bytes); + errors = rte_counter64_fetch(&counters->errors); + + rte_compiler_barrier(); + + stats->ipackets += packets; + stats->ibytes += bytes; + stats->ierrors += errors; + + if (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) { + stats->q_ipackets[i] = packets; + stats->q_ibytes[i] = bytes; + } + } + + stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed; + return 0; +} + +int +rte_eth_counters_reset(struct rte_eth_dev *dev, size_t tx_offset, size_t rx_offset) +{ + struct rte_eth_counters *counters; + unsigned int i; + + for (i = 0; i < dev->data->nb_tx_queues; i++) { + void *txq = dev->data->tx_queues[i]; + + if (txq == NULL) + continue; + + counters = (struct rte_eth_counters *)((char *)txq + tx_offset); + rte_counter64_reset(&counters->packets); + rte_counter64_reset(&counters->bytes); + rte_counter64_reset(&counters->errors); + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + void *rxq = dev->data->rx_queues[i]; + + if (rxq == NULL) + continue; + + counters = (struct rte_eth_counters *)((char *)rxq + rx_offset); + rte_counter64_reset(&counters->packets); + rte_counter64_reset(&counters->bytes); + rte_counter64_reset(&counters->errors); + } + + return 0; +} diff --git a/lib/ethdev/ethdev_swstats.h b/lib/ethdev/ethdev_swstats.h new file mode 100644 index 0000000000..808c540640 --- /dev/null +++ b/lib/ethdev/ethdev_swstats.h @@ -0,0 +1,124 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) Stephen Hemminger + */ + +#ifndef _RTE_ETHDEV_SWSTATS_H_ +#define _RTE_ETHDEV_SWSTATS_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +/** + * @file + * + * Internal statistics counters for software based devices. + * Hardware PMD's should use the hardware counters instead. + * + * This provides a library for PMD's to keep track of packets and bytes. + * It is assumed that this will be used per queue and queues are not + * shared by lcores. + */ + +#include + +/** + * A structure to be embedded in the device driver per-queue data. + */ +struct rte_eth_counters { + rte_counter64_t packets; /**< Total number of packets. */ + rte_counter64_t bytes; /**< Total number of bytes. */ + rte_counter64_t errors; /**< Total number of packets with errors. */ +}; + +/** + * @internal + * Increment counters for a single packet. + * + * @param counters + * Pointer to queue structure containing counters. + * @param sz + * Size of the packet in bytes. + */ +__rte_internal +static inline void +rte_eth_count_packet(struct rte_eth_counters *counters, uint32_t sz) +{ + rte_counter64_add(&counters->packets, 1); + rte_counter64_add(&counters->bytes, sz); +} + +/** + * @internal + * Increment counters based on mbuf. + * + * @param counters + * Pointer to queue structure containing counters. + * @param mbuf + * Received or transmitted mbuf. 
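To make the intended call sites concrete, a small receive-path sketch (invented for this note; it reuses the hypothetical demo_rx_queue from the earlier sketch, and demo_pull_packet()/demo_packet_is_bad() stand in for real driver code) showing where rte_eth_count_mbuf() and rte_eth_count_error() are meant to be called; the actual conversions follow in the af_packet, af_xdp, pcap and ring patches.

    /* Sketch of an rx_pkt_burst implementation using the counting helpers. */
    static uint16_t
    demo_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
    {
            struct demo_rx_queue *rxq = queue;
            uint16_t nb_rx = 0;

            while (nb_rx < nb_pkts) {
                    struct rte_mbuf *m = demo_pull_packet(rxq);     /* hypothetical */

                    if (m == NULL)
                            break;

                    if (unlikely(demo_packet_is_bad(m))) {          /* hypothetical */
                            /* Malformed frame: count it, drop it, keep going. */
                            rte_eth_count_error(&rxq->stats);
                            rte_pktmbuf_free(m);
                            continue;
                    }

                    /* One call updates both the packet and byte counters. */
                    rte_eth_count_mbuf(&rxq->stats, m);
                    bufs[nb_rx++] = m;
            }

            return nb_rx;
    }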
+ */ +__rte_internal +static inline void +rte_eth_count_mbuf(struct rte_eth_counters *counters, const struct rte_mbuf *mbuf) +{ + rte_eth_count_packet(counters, rte_pktmbuf_pkt_len(mbuf)); +} + +/** + * @internal + * Increment error counter. + * + * @param counters + * Pointer to queue structure containing counters. + */ +__rte_internal +static inline void +rte_eth_count_error(struct rte_eth_counters *counters) +{ + rte_counter64_add(&counters->errors, 1); +} + +/** + * @internal + * Retrieve the general statistics for all queues. + * @see rte_eth_stats_get. + * + * @param dev + * Pointer to the Ethernet device structure. + * @param tx_offset + * Offset from the tx_queue structure where stats are located. + * @param rx_offset + * Offset from the rx_queue structure where stats are located. + * @param stats + * A pointer to a structure of type *rte_eth_stats* to be filled + * @return + * Zero if successful. Non-zero otherwise. + */ +__rte_internal +int rte_eth_counters_stats_get(const struct rte_eth_dev *dev, + size_t tx_offset, size_t rx_offset, + struct rte_eth_stats *stats); + +/** + * @internal + * Reset the statistics for all queues. + * @see rte_eth_stats_reset. + * + * @param dev + * Pointer to the Ethernet device structure. + * @param tx_offset + * Offset from the tx_queue structure where stats are located. + * @param rx_offset + * Offset from the rx_queue structure where stats are located. + * @return + * Zero if successful. Non-zero otherwise. + */ +__rte_internal +int rte_eth_counters_reset(struct rte_eth_dev *dev, + size_t tx_offset, size_t rx_offset); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_ETHDEV_SWSTATS_H_ */ diff --git a/lib/ethdev/meson.build b/lib/ethdev/meson.build index f1d2586591..7ce29a46d4 100644 --- a/lib/ethdev/meson.build +++ b/lib/ethdev/meson.build @@ -3,6 +3,7 @@ sources = files( 'ethdev_driver.c', + 'ethdev_swstats.c', 'ethdev_private.c', 'ethdev_profile.c', 'ethdev_trace_points.c', @@ -42,6 +43,7 @@ driver_sdk_headers += files( 'ethdev_driver.h', 'ethdev_pci.h', 'ethdev_vdev.h', + 'ethdev_swstats.h', ) if is_linux diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index 79f6f5293b..fc595be278 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -358,4 +358,7 @@ INTERNAL { rte_eth_switch_domain_alloc; rte_eth_switch_domain_free; rte_flow_fp_default_ops; + + rte_eth_counters_reset; + rte_eth_counters_stats_get; }; From patchwork Fri May 17 00:12:03 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 140151 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 4BE1944044; Fri, 17 May 2024 02:13:36 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 481E940684; Fri, 17 May 2024 02:13:22 +0200 (CEST) Received: from mail-pg1-f175.google.com (mail-pg1-f175.google.com [209.85.215.175]) by mails.dpdk.org (Postfix) with ESMTP id A5687402DC for ; Fri, 17 May 2024 02:13:15 +0200 (CEST) Received: by mail-pg1-f175.google.com with SMTP id 41be03b00d2f7-65be7c29cb1so579302a12.0 for ; Thu, 16 May 2024 17:13:15 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20230601.gappssmtp.com; s=20230601; t=1715904795; x=1716509595; darn=dpdk.org; 
h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=SUE9fO9/yU42C4wqedNCLbGs+mpPzKdD4hF5j4IP+8c=; b=ooS+W3UsqZO8PPPkV4FWsA3OjQUGgLh7lf4y1NBXe8Hv+m2u2seW1cFsXZiIVSdpPH dSMbSpJONWocKMpGaiLa3YI/M3XkXxurXMYHbtONup3xchfXDFRPIb/F+LUrvPbXV9y1 Cxzjv303Y3QwP7B5/jATk/UBRNBfkVu4U/vrk6BQx6O0rAHxBBIYOOJzxvGP7s6lg8XF V6DTYEUgL8ll1eMH1Tt3SXtW5M0PXIRRiUBz4A/8v4iZKaxYfN1PHGv9b6puWvSB8JFv GKn/dRXDfIre1G6OrJo9Yx+JUhLLaWGxn/ITbpAVWDf6pUqhtwrgoY9cT14/3JZ6UDpf Zlkg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1715904795; x=1716509595; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=SUE9fO9/yU42C4wqedNCLbGs+mpPzKdD4hF5j4IP+8c=; b=VcnL6xBUivOoAEjraall7QjJhUNXLT5p36S4BfzsZjvjQaCShjZXD/N49m6T4+5Uar aCwjWdHgGF9wK+cV0FcOuClKgX9W8/lJnMMm1DiFsDbzOkLGe6xjFn4KiQBomVRtaoUN AtQI4gL/lsCG2Jm36GvxyJfFKBCqLEoYbd5a0y32x4iz41rQGtfWR1/VBe+j1x8NpMHw CjyNsuc0OabbtO2BAUu4PFanwQtp8glJbzPJhLn19usL4OYgMSW49mzC8dpRSLihVmMx tTb1O2EjSvSpnW7POoWQKAOKW2XUEuij5f6/wGbl4Ij0vdyMBJHY30+S8H05uuoDknp8 SIHg== X-Gm-Message-State: AOJu0YzagcFfzWmMwilYaN8BgYYaMjtaHFdtaXxD4NYMbAsppmoTcPeF 1y3P0i5BNYo1Y7R1bdEvl3eFmIhumOINymX8b9jFT8OVRFp9KtJImiFV6LlBdoZuS6oWGXg67AZ 226c= X-Google-Smtp-Source: AGHT+IG64+jy5VEZ9rCds+EfYTFl8vyii2S8apHNskR3PEVA2UYyZS+DLC02sKq3rewtEM4mbtMlUg== X-Received: by 2002:a17:90a:51a5:b0:2b9:e009:e47a with SMTP id 98e67ed59e1d1-2b9e009e50amr8728199a91.10.1715904794731; Thu, 16 May 2024 17:13:14 -0700 (PDT) Received: from hermes.local (204-195-96-226.wavecable.com. [204.195.96.226]) by smtp.gmail.com with ESMTPSA id 98e67ed59e1d1-2b9f5820ce9sm3337495a91.56.2024.05.16.17.13.14 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 16 May 2024 17:13:14 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger , "John W. Linville" Subject: [PATCH v6 3/9] net/af_packet: use generic SW stats Date: Thu, 16 May 2024 17:12:03 -0700 Message-ID: <20240517001302.65514-4-stephen@networkplumber.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240517001302.65514-1-stephen@networkplumber.org> References: <20240510050507.14381-1-stephen@networkplumber.org> <20240517001302.65514-1-stephen@networkplumber.org> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Use the new generic SW stats. 
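For the transmit direction, a minimal sketch of the same idea (not from this patch; demo_tx_queue and demo_push_packet() are invented names): the accounting helper is called while the driver still owns the mbuf, i.e. before it is freed or handed off, because the packet length can no longer be read back afterwards.

    static uint16_t
    demo_tx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
    {
            struct demo_tx_queue *txq = queue;
            uint16_t nb_tx;

            for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
                    struct rte_mbuf *m = bufs[nb_tx];

                    if (demo_push_packet(txq, m) < 0) {     /* hypothetical send */
                            /* Could not send: record an error, stop the burst. */
                            rte_eth_count_error(&txq->stats);
                            break;
                    }

                    /* Count packet and bytes before the mbuf is released. */
                    rte_eth_count_mbuf(&txq->stats, m);
                    rte_pktmbuf_free(m);
            }

            return nb_tx;
    }

Drivers that pass the mbuf on instead of freeing it (as the net/ring conversion later in this series does) have to capture the packet length before the hand-off for the same reason.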
Signed-off-by: Stephen Hemminger --- drivers/net/af_packet/rte_eth_af_packet.c | 82 ++++------------------- 1 file changed, 14 insertions(+), 68 deletions(-) diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c index 397a32db58..89b737e7dc 100644 --- a/drivers/net/af_packet/rte_eth_af_packet.c +++ b/drivers/net/af_packet/rte_eth_af_packet.c @@ -10,6 +10,7 @@ #include #include #include +#include #include #include #include @@ -29,6 +30,7 @@ #include #include + #define ETH_AF_PACKET_IFACE_ARG "iface" #define ETH_AF_PACKET_NUM_Q_ARG "qpairs" #define ETH_AF_PACKET_BLOCKSIZE_ARG "blocksz" @@ -51,8 +53,7 @@ struct pkt_rx_queue { uint16_t in_port; uint8_t vlan_strip; - volatile unsigned long rx_pkts; - volatile unsigned long rx_bytes; + struct rte_eth_counters stats; }; struct pkt_tx_queue { @@ -64,11 +65,10 @@ struct pkt_tx_queue { unsigned int framecount; unsigned int framenum; - volatile unsigned long tx_pkts; - volatile unsigned long err_pkts; - volatile unsigned long tx_bytes; + struct rte_eth_counters stats; }; + struct pmd_internals { unsigned nb_queues; @@ -118,8 +118,6 @@ eth_af_packet_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) struct rte_mbuf *mbuf; uint8_t *pbuf; struct pkt_rx_queue *pkt_q = queue; - uint16_t num_rx = 0; - unsigned long num_rx_bytes = 0; unsigned int framecount, framenum; if (unlikely(nb_pkts == 0)) @@ -164,13 +162,11 @@ eth_af_packet_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) /* account for the receive frame */ bufs[i] = mbuf; - num_rx++; - num_rx_bytes += mbuf->pkt_len; + rte_eth_count_mbuf(&pkt_q->stats, mbuf); } pkt_q->framenum = framenum; - pkt_q->rx_pkts += num_rx; - pkt_q->rx_bytes += num_rx_bytes; - return num_rx; + + return i; } /* @@ -205,8 +201,6 @@ eth_af_packet_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) unsigned int framecount, framenum; struct pollfd pfd; struct pkt_tx_queue *pkt_q = queue; - uint16_t num_tx = 0; - unsigned long num_tx_bytes = 0; int i; if (unlikely(nb_pkts == 0)) @@ -285,8 +279,7 @@ eth_af_packet_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) framenum = 0; ppd = (struct tpacket2_hdr *) pkt_q->rd[framenum].iov_base; - num_tx++; - num_tx_bytes += mbuf->pkt_len; + rte_eth_count_mbuf(&pkt_q->stats, mbuf); rte_pktmbuf_free(mbuf); } @@ -298,15 +291,9 @@ eth_af_packet_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) * packets will be considered successful even though only some * are sent. */ - - num_tx = 0; - num_tx_bytes = 0; } pkt_q->framenum = framenum; - pkt_q->tx_pkts += num_tx; - pkt_q->err_pkts += i - num_tx; - pkt_q->tx_bytes += num_tx_bytes; return i; } @@ -386,58 +373,17 @@ eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) } static int -eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *igb_stats) +eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { - unsigned i, imax; - unsigned long rx_total = 0, tx_total = 0, tx_err_total = 0; - unsigned long rx_bytes_total = 0, tx_bytes_total = 0; - const struct pmd_internals *internal = dev->data->dev_private; - - imax = (internal->nb_queues < RTE_ETHDEV_QUEUE_STAT_CNTRS ? - internal->nb_queues : RTE_ETHDEV_QUEUE_STAT_CNTRS); - for (i = 0; i < imax; i++) { - igb_stats->q_ipackets[i] = internal->rx_queue[i].rx_pkts; - igb_stats->q_ibytes[i] = internal->rx_queue[i].rx_bytes; - rx_total += igb_stats->q_ipackets[i]; - rx_bytes_total += igb_stats->q_ibytes[i]; - } - - imax = (internal->nb_queues < RTE_ETHDEV_QUEUE_STAT_CNTRS ? 
- internal->nb_queues : RTE_ETHDEV_QUEUE_STAT_CNTRS); - for (i = 0; i < imax; i++) { - igb_stats->q_opackets[i] = internal->tx_queue[i].tx_pkts; - igb_stats->q_obytes[i] = internal->tx_queue[i].tx_bytes; - tx_total += igb_stats->q_opackets[i]; - tx_err_total += internal->tx_queue[i].err_pkts; - tx_bytes_total += igb_stats->q_obytes[i]; - } - - igb_stats->ipackets = rx_total; - igb_stats->ibytes = rx_bytes_total; - igb_stats->opackets = tx_total; - igb_stats->oerrors = tx_err_total; - igb_stats->obytes = tx_bytes_total; - return 0; + return rte_eth_counters_stats_get(dev, offsetof(struct pkt_tx_queue, stats), + offsetof(struct pkt_rx_queue, stats), stats); } static int eth_stats_reset(struct rte_eth_dev *dev) { - unsigned i; - struct pmd_internals *internal = dev->data->dev_private; - - for (i = 0; i < internal->nb_queues; i++) { - internal->rx_queue[i].rx_pkts = 0; - internal->rx_queue[i].rx_bytes = 0; - } - - for (i = 0; i < internal->nb_queues; i++) { - internal->tx_queue[i].tx_pkts = 0; - internal->tx_queue[i].err_pkts = 0; - internal->tx_queue[i].tx_bytes = 0; - } - - return 0; + return rte_eth_counters_reset(dev, offsetof(struct pkt_tx_queue, stats), + offsetof(struct pkt_rx_queue, stats)); } static int From patchwork Fri May 17 00:12:04 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 140152 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8CF5E44044; Fri, 17 May 2024 02:13:44 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 1B55340691; Fri, 17 May 2024 02:13:24 +0200 (CEST) Received: from mail-pj1-f49.google.com (mail-pj1-f49.google.com [209.85.216.49]) by mails.dpdk.org (Postfix) with ESMTP id 53E7C402EA for ; Fri, 17 May 2024 02:13:16 +0200 (CEST) Received: by mail-pj1-f49.google.com with SMTP id 98e67ed59e1d1-2b620677a4fso588727a91.1 for ; Thu, 16 May 2024 17:13:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20230601.gappssmtp.com; s=20230601; t=1715904795; x=1716509595; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=CgxU9wpKRaiPBphXq4/2tz424NNVDmS90mZx5EIq/+E=; b=vUp74WgUz6danEEOgidCIzw4t9/6122u8xPw45P2h6k9ESk94xTVMCaM5emJMA8Iz+ 8/WAtT0EJTvpmkX5SeS4Zp5cavQ4ZZVgcpOzfzEZtXYg3d2D6H6eYzF1vcAufuj9knxe 4S0r5HlzcaVsFzrMbWwvdsBq/tE86s/rjif/fZT6XHjcvmZ7ixbboPgx1yrCylXUMyaI 3eBETVpzJobGnabVJ8CQwvcrNkwF7lexoMmwY8dWJhDjB4XU9txsuXzqDEbj+eiPwTDK Nd49KVdXzaK8wbh42K3zCZ85KUlL/4mixEEMcx5RiVNmPEqiSqr9Qm62axWq+SAZSZCl 55iA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1715904795; x=1716509595; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=CgxU9wpKRaiPBphXq4/2tz424NNVDmS90mZx5EIq/+E=; b=anAWtZpi+g9RZ4lBDyMzm73L0PdLpwxGz/or0sxitcD836bQji3jrYKPECQFhqMwjN ucCAtm9iNRYp6tp6sfmed0XiWcOkOpX4Wg92eie3Vuya1RsXU+P4pnOxkxp8IR5Bu8Ga QNGfqI4Z+y4A9X4TIlOCPZGB6sBQZQ1wW4VP1PPktpNsKwrP7pK5DWu30JWSeixOObQb R6FNjSHuDWexz3E/eIhzKcrcKQopCJ10NrIAbeUkz5GkOaxMYD5JncmRTyYrxDUOMM0n W6FhuCVBxrKhhZIqkMnskGklQcfFdTVPe54zJmbcC2vZZDFPIeUp1RtASaFk6303DAbm 
kapw== X-Gm-Message-State: AOJu0YzJlzMSYP04lFHT993WRzsSI48xhoDwv0VxMO4ZZZQsl6nY+eMl FclAo3+MlBOIkJi9jDPCaPbaaOABwjTlIVNKIrviKyfX4B+N92CVa2TX4RDUxb7ofhJCuT+XSqb C+Zw= X-Google-Smtp-Source: AGHT+IELxT0eW94lY/SK+b2pzZ1pm/6S0CONuetpmecySp0je2ICHw1VmfrVZjfjZn57SZp1nthmMA== X-Received: by 2002:a17:90b:3602:b0:2b2:4c3f:cf08 with SMTP id 98e67ed59e1d1-2b6cb7d0dd6mr23173810a91.0.1715904795528; Thu, 16 May 2024 17:13:15 -0700 (PDT) Received: from hermes.local (204-195-96-226.wavecable.com. [204.195.96.226]) by smtp.gmail.com with ESMTPSA id 98e67ed59e1d1-2b9f5820ce9sm3337495a91.56.2024.05.16.17.13.14 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 16 May 2024 17:13:15 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger , Ciara Loftus Subject: [PATCH v6 4/9] net/af_xdp: use generic SW stats Date: Thu, 16 May 2024 17:12:04 -0700 Message-ID: <20240517001302.65514-5-stephen@networkplumber.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240517001302.65514-1-stephen@networkplumber.org> References: <20240510050507.14381-1-stephen@networkplumber.org> <20240517001302.65514-1-stephen@networkplumber.org> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Use common code for all SW stats. Signed-off-by: Stephen Hemminger --- drivers/net/af_xdp/rte_eth_af_xdp.c | 98 ++++++++--------------------- 1 file changed, 25 insertions(+), 73 deletions(-) diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c index 268a130c49..65fc2f478f 100644 --- a/drivers/net/af_xdp/rte_eth_af_xdp.c +++ b/drivers/net/af_xdp/rte_eth_af_xdp.c @@ -20,6 +20,7 @@ #include #include #include +#include #include #include #include @@ -120,19 +121,13 @@ struct xsk_umem_info { uint32_t max_xsks; }; -struct rx_stats { - uint64_t rx_pkts; - uint64_t rx_bytes; - uint64_t rx_dropped; -}; - struct pkt_rx_queue { struct xsk_ring_cons rx; struct xsk_umem_info *umem; struct xsk_socket *xsk; struct rte_mempool *mb_pool; - struct rx_stats stats; + struct rte_eth_counters stats; struct xsk_ring_prod fq; struct xsk_ring_cons cq; @@ -143,17 +138,11 @@ struct pkt_rx_queue { int busy_budget; }; -struct tx_stats { - uint64_t tx_pkts; - uint64_t tx_bytes; - uint64_t tx_dropped; -}; - struct pkt_tx_queue { struct xsk_ring_prod tx; struct xsk_umem_info *umem; - struct tx_stats stats; + struct rte_eth_counters stats; struct pkt_rx_queue *pair; int xsk_queue_idx; @@ -308,7 +297,6 @@ af_xdp_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) struct xsk_ring_prod *fq = &rxq->fq; struct xsk_umem_info *umem = rxq->umem; uint32_t idx_rx = 0; - unsigned long rx_bytes = 0; int i; struct rte_mbuf *fq_bufs[ETH_AF_XDP_RX_BATCH_SIZE]; @@ -363,16 +351,13 @@ af_xdp_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) rte_pktmbuf_pkt_len(bufs[i]) = len; rte_pktmbuf_data_len(bufs[i]) = len; - rx_bytes += len; + + rte_eth_count_mbuf(&rxq->stats, bufs[i]); } xsk_ring_cons__release(rx, nb_pkts); (void)reserve_fill_queue(umem, nb_pkts, fq_bufs, fq); - /* statistics */ - rxq->stats.rx_pkts += nb_pkts; - rxq->stats.rx_bytes += rx_bytes; - return nb_pkts; } #else @@ -384,7 +369,6 @@ af_xdp_rx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) struct xsk_umem_info *umem = rxq->umem; struct xsk_ring_prod *fq = &rxq->fq; uint32_t idx_rx = 0; - unsigned long rx_bytes = 0; int i; uint32_t free_thresh = fq->size >> 1; 
struct rte_mbuf *mbufs[ETH_AF_XDP_RX_BATCH_SIZE]; @@ -424,16 +408,13 @@ af_xdp_rx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) rte_ring_enqueue(umem->buf_ring, (void *)addr); rte_pktmbuf_pkt_len(mbufs[i]) = len; rte_pktmbuf_data_len(mbufs[i]) = len; - rx_bytes += len; + rte_eth_count_mbuf(&rxq->stats, mbufs[i]); + bufs[i] = mbufs[i]; } xsk_ring_cons__release(rx, nb_pkts); - /* statistics */ - rxq->stats.rx_pkts += nb_pkts; - rxq->stats.rx_bytes += rx_bytes; - return nb_pkts; } #endif @@ -527,9 +508,8 @@ af_xdp_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) struct pkt_tx_queue *txq = queue; struct xsk_umem_info *umem = txq->umem; struct rte_mbuf *mbuf; - unsigned long tx_bytes = 0; int i; - uint32_t idx_tx; + uint32_t idx_tx, pkt_len; uint16_t count = 0; struct xdp_desc *desc; uint64_t addr, offset; @@ -541,6 +521,7 @@ af_xdp_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) for (i = 0; i < nb_pkts; i++) { mbuf = bufs[i]; + pkt_len = rte_pktmbuf_pkt_len(mbuf); if (mbuf->pool == umem->mb_pool) { if (!xsk_ring_prod__reserve(&txq->tx, 1, &idx_tx)) { @@ -589,17 +570,13 @@ af_xdp_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) count++; } - tx_bytes += mbuf->pkt_len; + rte_eth_count_packet(&txq->stats, pkt_len); } out: xsk_ring_prod__submit(&txq->tx, count); kick_tx(txq, cq); - txq->stats.tx_pkts += count; - txq->stats.tx_bytes += tx_bytes; - txq->stats.tx_dropped += nb_pkts - count; - return count; } #else @@ -610,7 +587,6 @@ af_xdp_tx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) struct xsk_umem_info *umem = txq->umem; struct rte_mbuf *mbuf; void *addrs[ETH_AF_XDP_TX_BATCH_SIZE]; - unsigned long tx_bytes = 0; int i; uint32_t idx_tx; struct xsk_ring_cons *cq = &txq->pair->cq; @@ -640,7 +616,8 @@ af_xdp_tx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) pkt = xsk_umem__get_data(umem->mz->addr, desc->addr); rte_memcpy(pkt, rte_pktmbuf_mtod(mbuf, void *), desc->len); - tx_bytes += mbuf->pkt_len; + rte_eth_qsw_update(&txq->stats, mbuf); + rte_pktmbuf_free(mbuf); } @@ -648,9 +625,6 @@ af_xdp_tx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) kick_tx(txq, cq); - txq->stats.tx_pkts += nb_pkts; - txq->stats.tx_bytes += tx_bytes; - return nb_pkts; } @@ -847,39 +821,26 @@ eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) static int eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { - struct pmd_internals *internals = dev->data->dev_private; struct pmd_process_private *process_private = dev->process_private; - struct xdp_statistics xdp_stats; - struct pkt_rx_queue *rxq; - struct pkt_tx_queue *txq; - socklen_t optlen; - int i, ret, fd; + unsigned int i; - for (i = 0; i < dev->data->nb_rx_queues; i++) { - optlen = sizeof(struct xdp_statistics); - rxq = &internals->rx_queues[i]; - txq = rxq->pair; - stats->q_ipackets[i] = rxq->stats.rx_pkts; - stats->q_ibytes[i] = rxq->stats.rx_bytes; + rte_eth_counters_stats_get(dev, offsetof(struct pkt_tx_queue, stats), + offsetof(struct pkt_rx_queue, stats), stats); - stats->q_opackets[i] = txq->stats.tx_pkts; - stats->q_obytes[i] = txq->stats.tx_bytes; + for (i = 0; i < dev->data->nb_rx_queues; i++) { + struct xdp_statistics xdp_stats; + socklen_t optlen = sizeof(xdp_stats); + int fd; - stats->ipackets += stats->q_ipackets[i]; - stats->ibytes += stats->q_ibytes[i]; - stats->imissed += rxq->stats.rx_dropped; - stats->oerrors += txq->stats.tx_dropped; fd = process_private->rxq_xsk_fds[i]; - ret = fd >= 0 ? 
getsockopt(fd, SOL_XDP, XDP_STATISTICS, - &xdp_stats, &optlen) : -1; - if (ret != 0) { + if (fd < 0) + continue; + if (getsockopt(fd, SOL_XDP, XDP_STATISTICS, + &xdp_stats, &optlen) < 0) { AF_XDP_LOG(ERR, "getsockopt() failed for XDP_STATISTICS.\n"); return -1; } stats->imissed += xdp_stats.rx_dropped; - - stats->opackets += stats->q_opackets[i]; - stats->obytes += stats->q_obytes[i]; } return 0; @@ -888,17 +849,8 @@ eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) static int eth_stats_reset(struct rte_eth_dev *dev) { - struct pmd_internals *internals = dev->data->dev_private; - int i; - - for (i = 0; i < internals->queue_cnt; i++) { - memset(&internals->rx_queues[i].stats, 0, - sizeof(struct rx_stats)); - memset(&internals->tx_queues[i].stats, 0, - sizeof(struct tx_stats)); - } - - return 0; + return rte_eth_counters_reset(dev, offsetof(struct pkt_tx_queue, stats), + offsetof(struct pkt_rx_queue, stats)); } #ifdef RTE_NET_AF_XDP_LIBBPF_XDP_ATTACH From patchwork Fri May 17 00:12:05 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 140153 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 953C344044; Fri, 17 May 2024 02:13:55 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id D7534409FA; Fri, 17 May 2024 02:13:26 +0200 (CEST) Received: from mail-pj1-f48.google.com (mail-pj1-f48.google.com [209.85.216.48]) by mails.dpdk.org (Postfix) with ESMTP id 395BB402EA for ; Fri, 17 May 2024 02:13:17 +0200 (CEST) Received: by mail-pj1-f48.google.com with SMTP id 98e67ed59e1d1-2b2b42b5126so691443a91.3 for ; Thu, 16 May 2024 17:13:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20230601.gappssmtp.com; s=20230601; t=1715904796; x=1716509596; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=AxNG1Ahfsh5mUGzqqQYD8sMjy9Y6CWI7ENjvPxua/xw=; b=Q2Xnetz8r4Q0KF6GROy3n/yYsEdBrnmFNZfITJRaZaUJ7FW++LCE7QWEnL2IIKhyck 4LnQYM2UAO6HapZiMA8cEUU+/jQi3n7h9K07d4Gs1E+T2REea+ftjQcpGXi6XfhR3nHG h1j+Nt1xZr6fIvNQgDIXDkVOXKcKygRCcCji/oFO+fuyn7ppBxE6fnT3t56hWKRrdsWo aCYJJ0hG18dqLxYUfNmKKKnM33b4zNshnk7QZrigamS/UGNkJLj2kkDPwBJ9hOyQpG7H r6gkPQNTEEPD3/cDbcJ2CBL1NgFh3CZRWX1Ds0PWSeu3V6tOa3byYiWGdrkg3+dhc+iH R1qA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1715904796; x=1716509596; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=AxNG1Ahfsh5mUGzqqQYD8sMjy9Y6CWI7ENjvPxua/xw=; b=hi0pgGbNpyenUBFVqj7yP3/TxTf6mNUDYYSG7QHfWIx8HJjGAGyUH6SIF6BUT6nmG1 8/cESQWJlmocYnIyRovTgOmDbDh2UMIm56BylSNfz/FxBtoB9zzsvAekDOAVeAArX8RR iO3l0eEMYLHSek4f10XnYu9uyW31LM4JhNs7SDnS4iPMBj5D8dqEALxvmByQOhl7ueDS zVQtf0gG9MWkbxwA5TKN4JljlYkoAQrcC+088fKNCiHCbiQBnyXsnNeqlhBHEDliFbOt QlU8G4MmYu7Afwmj4tM8YukRs5+ndrykVVpolmDtVrXo8WkjjzshrxwSvS1Mm4Y9o/5r uOuA== X-Gm-Message-State: AOJu0YwlouHjmwwprYrC0s6GGXAN7R3BfWCrM9CTSWZ7UeXPKscwbXDL lbxfZkJasBfx0bzJSQ852zm9stNxzMwUe7a1QWRk79K6UNF9DAjoTyO/F3JpEM8ukKnIlwytVTa Gsnw= X-Google-Smtp-Source: 
AGHT+IGkZRsBiGN25G6fjG1507q46J6+1VDomXR0xcj9iSrmptdKX9T5LXkZEvN+jd6sZlh41O7s8w== X-Received: by 2002:a17:90a:dc82:b0:2ad:f278:77b4 with SMTP id 98e67ed59e1d1-2b6cc76f6e2mr20765977a91.23.1715904796315; Thu, 16 May 2024 17:13:16 -0700 (PDT) Received: from hermes.local (204-195-96-226.wavecable.com. [204.195.96.226]) by smtp.gmail.com with ESMTPSA id 98e67ed59e1d1-2b9f5820ce9sm3337495a91.56.2024.05.16.17.13.15 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 16 May 2024 17:13:15 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger Subject: [PATCH v6 5/9] net/pcap: use generic SW stats Date: Thu, 16 May 2024 17:12:05 -0700 Message-ID: <20240517001302.65514-6-stephen@networkplumber.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240517001302.65514-1-stephen@networkplumber.org> References: <20240510050507.14381-1-stephen@networkplumber.org> <20240517001302.65514-1-stephen@networkplumber.org> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Use common statistics for SW drivers. Signed-off-by: Stephen Hemminger --- drivers/net/pcap/pcap_ethdev.c | 125 +++++++-------------------------- 1 file changed, 26 insertions(+), 99 deletions(-) diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c index bfec085045..b1a983f871 100644 --- a/drivers/net/pcap/pcap_ethdev.c +++ b/drivers/net/pcap/pcap_ethdev.c @@ -11,6 +11,7 @@ #include #include +#include #include #include #include @@ -48,13 +49,6 @@ static uint8_t iface_idx; static uint64_t timestamp_rx_dynflag; static int timestamp_dynfield_offset = -1; -struct queue_stat { - volatile unsigned long pkts; - volatile unsigned long bytes; - volatile unsigned long err_pkts; - volatile unsigned long rx_nombuf; -}; - struct queue_missed_stat { /* last value retrieved from pcap */ unsigned int pcap; @@ -68,7 +62,7 @@ struct pcap_rx_queue { uint16_t port_id; uint16_t queue_id; struct rte_mempool *mb_pool; - struct queue_stat rx_stat; + struct rte_eth_counters rx_stat; struct queue_missed_stat missed_stat; char name[PATH_MAX]; char type[ETH_PCAP_ARG_MAXLEN]; @@ -80,7 +74,7 @@ struct pcap_rx_queue { struct pcap_tx_queue { uint16_t port_id; uint16_t queue_id; - struct queue_stat tx_stat; + struct rte_eth_counters tx_stat; char name[PATH_MAX]; char type[ETH_PCAP_ARG_MAXLEN]; }; @@ -238,7 +232,6 @@ eth_pcap_rx_infinite(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) { int i; struct pcap_rx_queue *pcap_q = queue; - uint32_t rx_bytes = 0; if (unlikely(nb_pkts == 0)) return 0; @@ -252,39 +245,35 @@ eth_pcap_rx_infinite(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) if (err) return i; + rte_eth_count_mbuf(&pcap_q->rx_stat, pcap_buf); + rte_memcpy(rte_pktmbuf_mtod(bufs[i], void *), rte_pktmbuf_mtod(pcap_buf, void *), pcap_buf->data_len); bufs[i]->data_len = pcap_buf->data_len; bufs[i]->pkt_len = pcap_buf->pkt_len; bufs[i]->port = pcap_q->port_id; - rx_bytes += pcap_buf->data_len; + /* Enqueue packet back on ring to allow infinite rx. 
*/ rte_ring_enqueue(pcap_q->pkts, pcap_buf); } - pcap_q->rx_stat.pkts += i; - pcap_q->rx_stat.bytes += rx_bytes; - return i; } static uint16_t eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) { + struct pcap_rx_queue *pcap_q = queue; + struct rte_eth_dev *dev = &rte_eth_devices[pcap_q->port_id]; + struct pmd_process_private *pp = dev->process_private; + pcap_t *pcap = pp->rx_pcap[pcap_q->queue_id]; unsigned int i; struct pcap_pkthdr header; - struct pmd_process_private *pp; const u_char *packet; struct rte_mbuf *mbuf; - struct pcap_rx_queue *pcap_q = queue; uint16_t num_rx = 0; - uint32_t rx_bytes = 0; - pcap_t *pcap; - - pp = rte_eth_devices[pcap_q->port_id].process_private; - pcap = pp->rx_pcap[pcap_q->queue_id]; if (unlikely(pcap == NULL || nb_pkts == 0)) return 0; @@ -300,7 +289,7 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) mbuf = rte_pktmbuf_alloc(pcap_q->mb_pool); if (unlikely(mbuf == NULL)) { - pcap_q->rx_stat.rx_nombuf++; + ++dev->data->rx_mbuf_alloc_failed; break; } @@ -315,7 +304,7 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) mbuf, packet, header.caplen) == -1)) { - pcap_q->rx_stat.err_pkts++; + rte_eth_count_error(&pcap_q->rx_stat); rte_pktmbuf_free(mbuf); break; } @@ -329,11 +318,10 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) mbuf->ol_flags |= timestamp_rx_dynflag; mbuf->port = pcap_q->port_id; bufs[num_rx] = mbuf; + + rte_eth_count_mbuf(&pcap_q->rx_stat, mbuf); num_rx++; - rx_bytes += header.caplen; } - pcap_q->rx_stat.pkts += num_rx; - pcap_q->rx_stat.bytes += rx_bytes; return num_rx; } @@ -379,8 +367,6 @@ eth_pcap_tx_dumper(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) struct rte_mbuf *mbuf; struct pmd_process_private *pp; struct pcap_tx_queue *dumper_q = queue; - uint16_t num_tx = 0; - uint32_t tx_bytes = 0; struct pcap_pkthdr header; pcap_dumper_t *dumper; unsigned char temp_data[RTE_ETH_PCAP_SNAPLEN]; @@ -412,8 +398,7 @@ eth_pcap_tx_dumper(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) pcap_dump((u_char *)dumper, &header, rte_pktmbuf_read(mbuf, 0, caplen, temp_data)); - num_tx++; - tx_bytes += caplen; + rte_eth_count_mbuf(&dumper_q->tx_stat, mbuf); rte_pktmbuf_free(mbuf); } @@ -423,9 +408,6 @@ eth_pcap_tx_dumper(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) * we flush the pcap dumper within each burst. 
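One more illustration related to this conversion (a sketch with invented demo_* names, not part of the patch): struct rte_eth_counters has no per-queue rx_nombuf field; on allocation failure a driver bumps the existing dev->data->rx_mbuf_alloc_failed counter, and rte_eth_counters_stats_get() reports that value as stats->rx_nombuf.

    /* Sketch of the allocation-failure path in a receive burst. */
    static uint16_t
    demo_rx_burst_alloc(struct rte_eth_dev *dev, struct demo_rx_queue *rxq,
                        struct rte_mbuf **bufs, uint16_t nb_pkts)
    {
            uint16_t nb_rx;

            for (nb_rx = 0; nb_rx < nb_pkts; nb_rx++) {
                    /* Assumes a hypothetical mb_pool member in demo_rx_queue. */
                    struct rte_mbuf *m = rte_pktmbuf_alloc(rxq->mb_pool);

                    if (unlikely(m == NULL)) {
                            /* Surfaces as stats->rx_nombuf via the common helper. */
                            ++dev->data->rx_mbuf_alloc_failed;
                            break;
                    }

                    demo_fill_mbuf(rxq, m);                 /* hypothetical */
                    rte_eth_count_mbuf(&rxq->stats, m);
                    bufs[nb_rx] = m;
            }

            return nb_rx;
    }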
*/ pcap_dump_flush(dumper); - dumper_q->tx_stat.pkts += num_tx; - dumper_q->tx_stat.bytes += tx_bytes; - dumper_q->tx_stat.err_pkts += nb_pkts - num_tx; return nb_pkts; } @@ -437,20 +419,16 @@ static uint16_t eth_tx_drop(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) { unsigned int i; - uint32_t tx_bytes = 0; struct pcap_tx_queue *tx_queue = queue; if (unlikely(nb_pkts == 0)) return 0; for (i = 0; i < nb_pkts; i++) { - tx_bytes += bufs[i]->pkt_len; + rte_eth_count_mbuf(&tx_queue->tx_stat, bufs[i]); rte_pktmbuf_free(bufs[i]); } - tx_queue->tx_stat.pkts += nb_pkts; - tx_queue->tx_stat.bytes += tx_bytes; - return i; } @@ -465,8 +443,6 @@ eth_pcap_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) struct rte_mbuf *mbuf; struct pmd_process_private *pp; struct pcap_tx_queue *tx_queue = queue; - uint16_t num_tx = 0; - uint32_t tx_bytes = 0; pcap_t *pcap; unsigned char temp_data[RTE_ETH_PCAP_SNAPLEN]; size_t len; @@ -497,15 +473,11 @@ eth_pcap_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) rte_pktmbuf_read(mbuf, 0, len, temp_data), len); if (unlikely(ret != 0)) break; - num_tx++; - tx_bytes += len; + + rte_eth_count_mbuf(&tx_queue->tx_stat, mbuf); rte_pktmbuf_free(mbuf); } - tx_queue->tx_stat.pkts += num_tx; - tx_queue->tx_stat.bytes += tx_bytes; - tx_queue->tx_stat.err_pkts += i - num_tx; - return i; } @@ -746,41 +718,12 @@ static int eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { unsigned int i; - unsigned long rx_packets_total = 0, rx_bytes_total = 0; - unsigned long rx_missed_total = 0; - unsigned long rx_nombuf_total = 0, rx_err_total = 0; - unsigned long tx_packets_total = 0, tx_bytes_total = 0; - unsigned long tx_packets_err_total = 0; - const struct pmd_internals *internal = dev->data->dev_private; - - for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS && - i < dev->data->nb_rx_queues; i++) { - stats->q_ipackets[i] = internal->rx_queue[i].rx_stat.pkts; - stats->q_ibytes[i] = internal->rx_queue[i].rx_stat.bytes; - rx_nombuf_total += internal->rx_queue[i].rx_stat.rx_nombuf; - rx_err_total += internal->rx_queue[i].rx_stat.err_pkts; - rx_packets_total += stats->q_ipackets[i]; - rx_bytes_total += stats->q_ibytes[i]; - rx_missed_total += queue_missed_stat_get(dev, i); - } - for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS && - i < dev->data->nb_tx_queues; i++) { - stats->q_opackets[i] = internal->tx_queue[i].tx_stat.pkts; - stats->q_obytes[i] = internal->tx_queue[i].tx_stat.bytes; - tx_packets_total += stats->q_opackets[i]; - tx_bytes_total += stats->q_obytes[i]; - tx_packets_err_total += internal->tx_queue[i].tx_stat.err_pkts; - } + rte_eth_counters_stats_get(dev, offsetof(struct pcap_tx_queue, tx_stat), + offsetof(struct pcap_rx_queue, rx_stat), stats); - stats->ipackets = rx_packets_total; - stats->ibytes = rx_bytes_total; - stats->imissed = rx_missed_total; - stats->ierrors = rx_err_total; - stats->rx_nombuf = rx_nombuf_total; - stats->opackets = tx_packets_total; - stats->obytes = tx_bytes_total; - stats->oerrors = tx_packets_err_total; + for (i = 0; i < dev->data->nb_rx_queues; i++) + stats->imissed += queue_missed_stat_get(dev, i); return 0; } @@ -789,21 +732,12 @@ static int eth_stats_reset(struct rte_eth_dev *dev) { unsigned int i; - struct pmd_internals *internal = dev->data->dev_private; - for (i = 0; i < dev->data->nb_rx_queues; i++) { - internal->rx_queue[i].rx_stat.pkts = 0; - internal->rx_queue[i].rx_stat.bytes = 0; - internal->rx_queue[i].rx_stat.err_pkts = 0; - internal->rx_queue[i].rx_stat.rx_nombuf = 0; - queue_missed_stat_reset(dev, i); - 
} + rte_eth_counters_reset(dev, offsetof(struct pcap_tx_queue, tx_stat), + offsetof(struct pcap_rx_queue, rx_stat)); - for (i = 0; i < dev->data->nb_tx_queues; i++) { - internal->tx_queue[i].tx_stat.pkts = 0; - internal->tx_queue[i].tx_stat.bytes = 0; - internal->tx_queue[i].tx_stat.err_pkts = 0; - } + for (i = 0; i < dev->data->nb_rx_queues; i++) + queue_missed_stat_reset(dev, i); return 0; } @@ -929,13 +863,6 @@ eth_rx_queue_setup(struct rte_eth_dev *dev, pcap_pkt_count); return -EINVAL; } - - /* - * Reset the stats for this queue since eth_pcap_rx calls above - * didn't result in the application receiving packets. - */ - pcap_q->rx_stat.pkts = 0; - pcap_q->rx_stat.bytes = 0; } return 0; From patchwork Fri May 17 00:12:06 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 140154 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id BA6BE44044; Fri, 17 May 2024 02:14:04 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 625BC4069F; Fri, 17 May 2024 02:13:28 +0200 (CEST) Received: from mail-pj1-f45.google.com (mail-pj1-f45.google.com [209.85.216.45]) by mails.dpdk.org (Postfix) with ESMTP id 0256940285 for ; Fri, 17 May 2024 02:13:18 +0200 (CEST) Received: by mail-pj1-f45.google.com with SMTP id 98e67ed59e1d1-2b5388087f9so632548a91.0 for ; Thu, 16 May 2024 17:13:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20230601.gappssmtp.com; s=20230601; t=1715904797; x=1716509597; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=emfW8pqQWo/kkBypu+P+JiAY/UdpbCFtvVnSZa+FuiE=; b=PY0I4K52qLXrBxj5O0bYjldPARJkw1C0kagFuaXT2xsG/9Hdpih7WrdbC1YtkivGwH BW1PLc6GhJdN+mYuXru2MfLUoFow5/RGki0UbBrUgntfPBUI6ZN5UEGRD/G3wvmkvtNX JwI796toUg+gSDoZG3Y2BFBneI214rWhj79tof6Uexysk4RDDiQCwSK9cchPeiIH7ZV8 uNZyKSicjeoHjkWf9fNWeWcYm6LPfnoufmiKp4XtGcwsf4G4XBycWskN+EVXcqnYaiKv HwU2bncA4SSQpF82xRyYvlvr93hHk1IIWy8jaNy+LEjtgdd/oE0uMUqvcmDWOqYvsY/s zcmg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1715904797; x=1716509597; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=emfW8pqQWo/kkBypu+P+JiAY/UdpbCFtvVnSZa+FuiE=; b=lxQiknu0DZVKuT0bv/haD7wGz6zqdd3vibVOOtCOfPdsJyxtEPXe3aBXmfO/h+m04U mZDXGwYOuhsi5cuo+1PHMOlqScre2K9uUBxJ1aAXo/9EmyQgX8xtcvlVSDN5sYk4cJBv bftgIKom4VmoPN1/PeZDlgV55dGGmpZGA/qx0NxB6/p11r+jckVhCnQ892IH4YjIzo4r 6uAulOjLX5CsrdQsR5c6dO5yO/8QkrE/28WkBMImiZ1oBCd9EHxZJDX/aiPrx4Ua295q K2uH53bq18KhtCWcwg+udB3oefabqV9EkqPJ9AVsJN+rD1cOZXv0tNJcipVMnWYjRFm/ p9Sw== X-Gm-Message-State: AOJu0YxQmNLDLt6hhCtWYkrD9XDolPKWwmFswJprNoE5tUdOYfoT3HKx E+42+hstDfxqzBGmii4RApwlXdAhWmHKA6oJNw54cB/IOxnzHHaG9bWerrYo+jZIdT4XPjwMRKn oKy4= X-Google-Smtp-Source: AGHT+IFzZu7xurOSNjuzDQBBjSRK18nJ+hMRk5M6xl7v2dfdvXctfWrF80BTn//5qnXvT5/yO1WFhg== X-Received: by 2002:a17:90a:ce8f:b0:2b4:9e2d:3cab with SMTP id 98e67ed59e1d1-2b6cc76d230mr20756957a91.23.1715904797205; Thu, 16 May 2024 17:13:17 -0700 (PDT) Received: from hermes.local (204-195-96-226.wavecable.com. 
[204.195.96.226]) by smtp.gmail.com with ESMTPSA id 98e67ed59e1d1-2b9f5820ce9sm3337495a91.56.2024.05.16.17.13.16 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 16 May 2024 17:13:16 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger , Bruce Richardson Subject: [PATCH v6 6/9] test/pmd_ring: initialize mbufs Date: Thu, 16 May 2024 17:12:06 -0700 Message-ID: <20240517001302.65514-7-stephen@networkplumber.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240517001302.65514-1-stephen@networkplumber.org> References: <20240510050507.14381-1-stephen@networkplumber.org> <20240517001302.65514-1-stephen@networkplumber.org> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Do not pass uninitialized data into the ring PMD. The mbufs should be initialized first so that length is zero. Signed-off-by: Stephen Hemminger --- app/test/test_pmd_ring.c | 26 +++++++++++++++++++++----- 1 file changed, 21 insertions(+), 5 deletions(-) diff --git a/app/test/test_pmd_ring.c b/app/test/test_pmd_ring.c index e83b9dd6b8..55455ece7f 100644 --- a/app/test/test_pmd_ring.c +++ b/app/test/test_pmd_ring.c @@ -19,6 +19,14 @@ static struct rte_mempool *mp; struct rte_ring *rxtx[NUM_RINGS]; static int tx_porta, rx_portb, rxtx_portc, rxtx_portd, rxtx_porte; +/* make a valid zero sized mbuf */ +static void +test_mbuf_init(struct rte_mbuf *mbuf) +{ + memset(mbuf, 0, sizeof(*mbuf)); + rte_pktmbuf_reset(mbuf); +} + static int test_ethdev_configure_port(int port) { @@ -68,14 +76,16 @@ test_ethdev_configure_port(int port) static int test_send_basic_packets(void) { - struct rte_mbuf bufs[RING_SIZE]; + struct rte_mbuf bufs[RING_SIZE]; struct rte_mbuf *pbufs[RING_SIZE]; int i; printf("Testing send and receive RING_SIZE/2 packets (tx_porta -> rx_portb)\n"); - for (i = 0; i < RING_SIZE/2; i++) + for (i = 0; i < RING_SIZE / 2; i++) { + test_mbuf_init(&bufs[i]); pbufs[i] = &bufs[i]; + } if (rte_eth_tx_burst(tx_porta, 0, pbufs, RING_SIZE/2) < RING_SIZE/2) { printf("Failed to transmit packet burst port %d\n", tx_porta); @@ -99,14 +109,16 @@ test_send_basic_packets(void) static int test_send_basic_packets_port(int port) { - struct rte_mbuf bufs[RING_SIZE]; + struct rte_mbuf bufs[RING_SIZE]; struct rte_mbuf *pbufs[RING_SIZE]; int i; printf("Testing send and receive RING_SIZE/2 packets (cmdl_port0 -> cmdl_port0)\n"); - for (i = 0; i < RING_SIZE/2; i++) + for (i = 0; i < RING_SIZE / 2; i++) { + test_mbuf_init(&bufs[i]); pbufs[i] = &bufs[i]; + } if (rte_eth_tx_burst(port, 0, pbufs, RING_SIZE/2) < RING_SIZE/2) { printf("Failed to transmit packet burst port %d\n", port); @@ -134,10 +146,11 @@ test_get_stats(int port) struct rte_eth_stats stats; struct rte_mbuf buf, *pbuf = &buf; + test_mbuf_init(&buf); + printf("Testing ring PMD stats_get port %d\n", port); /* check stats of RXTX port, should all be zero */ - rte_eth_stats_get(port, &stats); if (stats.ipackets != 0 || stats.opackets != 0 || stats.ibytes != 0 || stats.obytes != 0 || @@ -173,6 +186,8 @@ test_stats_reset(int port) struct rte_eth_stats stats; struct rte_mbuf buf, *pbuf = &buf; + test_mbuf_init(&buf); + printf("Testing ring PMD stats_reset port %d\n", port); rte_eth_stats_reset(port); @@ -228,6 +243,7 @@ test_pmd_ring_pair_create_attach(void) int ret; memset(&null_conf, 0, sizeof(struct rte_eth_conf)); + test_mbuf_init(&buf); if ((rte_eth_dev_configure(rxtx_portd, 
1, 1, &null_conf) < 0) || (rte_eth_dev_configure(rxtx_porte, 1, 1, From patchwork Fri May 17 00:12:07 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 140155 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 60FAC44044; Fri, 17 May 2024 02:14:12 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 55D6E40A6D; Fri, 17 May 2024 02:13:30 +0200 (CEST) Received: from mail-pj1-f53.google.com (mail-pj1-f53.google.com [209.85.216.53]) by mails.dpdk.org (Postfix) with ESMTP id CAC1340649 for ; Fri, 17 May 2024 02:13:18 +0200 (CEST) Received: by mail-pj1-f53.google.com with SMTP id 98e67ed59e1d1-2a2d82537efso584700a91.2 for ; Thu, 16 May 2024 17:13:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20230601.gappssmtp.com; s=20230601; t=1715904798; x=1716509598; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=ARyQNNqf2hZNrQietIAGIkeF6zKTVmpArCfhMPOZfO4=; b=0qmu6H4YkEu4Ld0jWzNcvjz5Az076zdwmwkPKa5hK+K9ubQru3qfYScyOaGzeSDqh6 GJ1dU4lwmZiv75tCgTpRvFjaGf6MFxpOcz5tB/hGyqgmvX6fAggybvcweeFYQ1RhdgOl WM4e1b7dBAW7WJ4CjHkwbnQtGKg1TiLTWE4iK7H05zFE71NUCJ5fPYAT+7x63g1cxBeK sN8d+UbtsoMvsq1lOLGOc0TZT+mJXqwGQrbBI/RlTG+PSnSMG/o4QNy0o6bf8qFXvGeC AEt7BYr19KQLfTRcMFfjWmLQqYIOWFVAWolVvJ3Oo624r1riykkPgexFn8OSOrmuRQdC T7WQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1715904798; x=1716509598; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=ARyQNNqf2hZNrQietIAGIkeF6zKTVmpArCfhMPOZfO4=; b=c4/OVVkeXR2dV2vPuSuJb5j4uYuCbqj5KHkp+E5H1ZD5SRoCJ49Ow9v3veLnlOeOZp DmcfjdzZuf6681evtkuVuSFfEuZCT0COArwuo/VaA7t5VX08q0/AP2hwE3lCj3jyxJ4r AWD/nhtk7XtCRd87j7eb5DcECo2Ysie6QXC/GUub3JYbAhwFlXE4PlZnFjO+rzujtWYk 7AhThJLzpM0AxkLTHHV8U5do63NJCOWzp4KWqmI7CdAX4MlslN87AZPOnvqgx1Mjwjem Oq4NY1LZhVG9M+9UtiV/SUkuPiEV3eUZ+Kldrb6Dn5Y2nx9LASbp8Zsae9HF6Jt/wdPE e05w== X-Gm-Message-State: AOJu0YyyTXNNthe/t4wPjDVfhOgB8IkqkwmmW69ldieYh3F5iCndKxt3 NInc43sB6qp6J4sjBJ4lH5nDhbpWod4lwGQZg+0wvVrpPQz32p/+cemrnwpN8IUfoU4vxVj/Chp IVOQ= X-Google-Smtp-Source: AGHT+IEG8cMvwGmBlnWAwJKC09cPlj3tijsNnnZmfc8Ch4kH7cSRaPn5U16FEy5fUOVrfVBwaVE9KA== X-Received: by 2002:a17:90a:d09:b0:2af:b977:363a with SMTP id 98e67ed59e1d1-2b6cd1e352cmr20488185a91.43.1715904797989; Thu, 16 May 2024 17:13:17 -0700 (PDT) Received: from hermes.local (204-195-96-226.wavecable.com. 
[204.195.96.226]) by smtp.gmail.com with ESMTPSA id 98e67ed59e1d1-2b9f5820ce9sm3337495a91.56.2024.05.16.17.13.17 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 16 May 2024 17:13:17 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger , Bruce Richardson Subject: [PATCH v6 7/9] net/ring: use generic SW stats Date: Thu, 16 May 2024 17:12:07 -0700 Message-ID: <20240517001302.65514-8-stephen@networkplumber.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240517001302.65514-1-stephen@networkplumber.org> References: <20240510050507.14381-1-stephen@networkplumber.org> <20240517001302.65514-1-stephen@networkplumber.org> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Use generic per-queue infrastructure. This also fixes bug where ring code was not accounting for bytes. Signed-off-by: Stephen Hemminger --- drivers/net/ring/rte_eth_ring.c | 71 +++++++++++++-------------------- 1 file changed, 28 insertions(+), 43 deletions(-) diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c index 48953dd7a0..85f14dd679 100644 --- a/drivers/net/ring/rte_eth_ring.c +++ b/drivers/net/ring/rte_eth_ring.c @@ -7,6 +7,7 @@ #include "rte_eth_ring.h" #include #include +#include #include #include #include @@ -44,8 +45,8 @@ enum dev_action { struct ring_queue { struct rte_ring *rng; - uint64_t rx_pkts; - uint64_t tx_pkts; + + struct rte_eth_counters stats; }; struct pmd_internals { @@ -77,12 +78,13 @@ eth_ring_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs) { void **ptrs = (void *)&bufs[0]; struct ring_queue *r = q; - const uint16_t nb_rx = (uint16_t)rte_ring_dequeue_burst(r->rng, - ptrs, nb_bufs, NULL); - if (r->rng->flags & RING_F_SC_DEQ) - r->rx_pkts += nb_rx; - else - __atomic_fetch_add(&r->rx_pkts, nb_rx, __ATOMIC_RELAXED); + uint16_t i, nb_rx; + + nb_rx = (uint16_t)rte_ring_dequeue_burst(r->rng, ptrs, nb_bufs, NULL); + + for (i = 0; i < nb_rx; i++) + rte_eth_count_mbuf(&r->stats, bufs[i]); + return nb_rx; } @@ -90,13 +92,20 @@ static uint16_t eth_ring_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs) { void **ptrs = (void *)&bufs[0]; + uint32_t *sizes; struct ring_queue *r = q; - const uint16_t nb_tx = (uint16_t)rte_ring_enqueue_burst(r->rng, - ptrs, nb_bufs, NULL); - if (r->rng->flags & RING_F_SP_ENQ) - r->tx_pkts += nb_tx; - else - __atomic_fetch_add(&r->tx_pkts, nb_tx, __ATOMIC_RELAXED); + uint16_t i, nb_tx; + + sizes = alloca(sizeof(uint32_t) * nb_bufs); + + for (i = 0; i < nb_bufs; i++) + sizes[i] = rte_pktmbuf_pkt_len(bufs[i]); + + nb_tx = (uint16_t)rte_ring_enqueue_burst(r->rng, ptrs, nb_bufs, NULL); + + for (i = 0; i < nb_tx; i++) + rte_eth_count_packet(&r->stats, sizes[i]); + return nb_tx; } @@ -193,40 +202,16 @@ eth_dev_info(struct rte_eth_dev *dev, static int eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { - unsigned int i; - unsigned long rx_total = 0, tx_total = 0; - const struct pmd_internals *internal = dev->data->dev_private; - - for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS && - i < dev->data->nb_rx_queues; i++) { - stats->q_ipackets[i] = internal->rx_ring_queues[i].rx_pkts; - rx_total += stats->q_ipackets[i]; - } - - for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS && - i < dev->data->nb_tx_queues; i++) { - stats->q_opackets[i] = internal->tx_ring_queues[i].tx_pkts; - tx_total += stats->q_opackets[i]; - } - - stats->ipackets = 
rx_total; - stats->opackets = tx_total; - - return 0; + return rte_eth_counters_stats_get(dev, offsetof(struct ring_queue, stats), + offsetof(struct ring_queue, stats), + stats); } static int eth_stats_reset(struct rte_eth_dev *dev) { - unsigned int i; - struct pmd_internals *internal = dev->data->dev_private; - - for (i = 0; i < dev->data->nb_rx_queues; i++) - internal->rx_ring_queues[i].rx_pkts = 0; - for (i = 0; i < dev->data->nb_tx_queues; i++) - internal->tx_ring_queues[i].tx_pkts = 0; - - return 0; + return rte_eth_counters_reset(dev, offsetof(struct ring_queue, stats), + offsetof(struct ring_queue, stats)); } static void From patchwork Fri May 17 00:12:08 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 140156 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 101AD44044; Fri, 17 May 2024 02:14:19 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 9A97A40A75; Fri, 17 May 2024 02:13:33 +0200 (CEST) Received: from mail-pj1-f49.google.com (mail-pj1-f49.google.com [209.85.216.49]) by mails.dpdk.org (Postfix) with ESMTP id 9F5994067A for ; Fri, 17 May 2024 02:13:19 +0200 (CEST) Received: by mail-pj1-f49.google.com with SMTP id 98e67ed59e1d1-2b4952a1aecso615046a91.3 for ; Thu, 16 May 2024 17:13:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20230601.gappssmtp.com; s=20230601; t=1715904799; x=1716509599; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=KwEt8rCLbyDQEc29jCoUGhiylEpiJNl7eErC5B0q55M=; b=wU74QJT+u309JmHaJ0FcAwEqFAibod7N33h4G0igfNOtdtYsJ6q3VC/Dciny18sxHm zE4pGnC46f813QYZe1xuRMjlo0jFs1CfOnZr0+S3nYgC5bu542i7dh4VDF4dQP/2+JMb geU43wcgDmjMpPSAEI03y2l58rOh8QJncFQ/sTcO8bfgdp1wlI9b9uWTgP+/A5lht7TW If6oOkKLy2xVjTXDB8OW3vdjEZivmj3Ek+ucP2o2YcEpGHsjUyIhO4cf0ouPrhCvL3e0 FxH9S5j7HbYSKsYR5jC0bARI3y73qOZF7roqyhHlmgENt1rHav2pMYeHtnpmE06Facz+ X1EQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1715904799; x=1716509599; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=KwEt8rCLbyDQEc29jCoUGhiylEpiJNl7eErC5B0q55M=; b=KXUkLrG3DBeLOadrnz2RgxigUav1z92pDIlxMOyQjFymX5vaNjSrbjQ88tvCK2oOCW FkJNF7suU2R69ugSPE51qw96lRVKefYNQEgPEZjTbQyI7fXQkDawFUOdaP1HQOw2NHWJ rhaamAjqkuac+1iMd9SD2ItfB4ol4nAjORd65Mx7PxiDA51C00kz6qMNZ+AT5hPRDkLO VtwMy/x+ZaxGLHhkPD43JXJRvdn2HjLftXczxIgrZ+dw7CQhNpvhPFxlI6BMB2Oe6Kzj GWdf5vLuW6108GeKJ8Y0uDkJJKZzUxrbz2EVSTiumIyst++zHf+wLKpHkP+SKlkayHs1 1Log== X-Gm-Message-State: AOJu0YzqDIOLUEeivqZ7rYu6wbPKYAwUCFG1ZZqhUmVgWsAtJAIXHtdC 6hTFXqeBmtcQhutrZeYxMy5lnQRukS6XGp3kAp1Ad+XTndQYZbyUvmWQa/5jwN2cPEY5M7hEFZm Behw= X-Google-Smtp-Source: AGHT+IE1hvhwIkM67gNQ7BDo6nHmS8FN47CxRs4ZjcdqSkfAV1O8deADn54jcuPAGTLQa/YZnOSUMw== X-Received: by 2002:a17:90a:8415:b0:2b5:91d1:3ae9 with SMTP id 98e67ed59e1d1-2b6cc44f99fmr17301880a91.19.1715904798735; Thu, 16 May 2024 17:13:18 -0700 (PDT) Received: from hermes.local (204-195-96-226.wavecable.com. 
[204.195.96.226]) by smtp.gmail.com with ESMTPSA id 98e67ed59e1d1-2b9f5820ce9sm3337495a91.56.2024.05.16.17.13.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 16 May 2024 17:13:18 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger Subject: [PATCH v6 8/9] net/tap: use generic SW stats Date: Thu, 16 May 2024 17:12:08 -0700 Message-ID: <20240517001302.65514-9-stephen@networkplumber.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240517001302.65514-1-stephen@networkplumber.org> References: <20240510050507.14381-1-stephen@networkplumber.org> <20240517001302.65514-1-stephen@networkplumber.org> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Use new common sw statistics. Signed-off-by: Stephen Hemminger --- drivers/net/tap/rte_eth_tap.c | 88 ++++++----------------------------- drivers/net/tap/rte_eth_tap.h | 15 ++---- 2 files changed, 18 insertions(+), 85 deletions(-) diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c index 69d9da695b..f87979da4f 100644 --- a/drivers/net/tap/rte_eth_tap.c +++ b/drivers/net/tap/rte_eth_tap.c @@ -432,7 +432,6 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) struct rx_queue *rxq = queue; struct pmd_process_private *process_private; uint16_t num_rx; - unsigned long num_rx_bytes = 0; uint32_t trigger = tap_trigger; if (trigger == rxq->trigger_seen) @@ -455,7 +454,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) /* Packet couldn't fit in the provided mbuf */ if (unlikely(rxq->pi.flags & TUN_PKT_STRIP)) { - rxq->stats.ierrors++; + rte_eth_count_error(&rxq->stats); continue; } @@ -467,7 +466,9 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) struct rte_mbuf *buf = rte_pktmbuf_alloc(rxq->mp); if (unlikely(!buf)) { - rxq->stats.rx_nombuf++; + struct rte_eth_dev *dev = &rte_eth_devices[rxq->in_port]; + ++dev->data->rx_mbuf_alloc_failed; + /* No new buf has been allocated: do nothing */ if (!new_tail || !seg) goto end; @@ -509,11 +510,9 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) /* account for the receive frame */ bufs[num_rx++] = mbuf; - num_rx_bytes += mbuf->pkt_len; + rte_eth_count_mbuf(&rxq->stats, mbuf); } end: - rxq->stats.ipackets += num_rx; - rxq->stats.ibytes += num_rx_bytes; if (trigger && num_rx < nb_pkts) rxq->trigger_seen = trigger; @@ -523,8 +522,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) static inline int tap_write_mbufs(struct tx_queue *txq, uint16_t num_mbufs, - struct rte_mbuf **pmbufs, - uint16_t *num_packets, unsigned long *num_tx_bytes) + struct rte_mbuf **pmbufs) { struct pmd_process_private *process_private; int i; @@ -647,8 +645,7 @@ tap_write_mbufs(struct tx_queue *txq, uint16_t num_mbufs, if (n <= 0) return -1; - (*num_packets)++; - (*num_tx_bytes) += rte_pktmbuf_pkt_len(mbuf); + rte_eth_count_mbuf(&txq->stats, mbuf); } return 0; } @@ -660,8 +657,6 @@ pmd_tx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) { struct tx_queue *txq = queue; uint16_t num_tx = 0; - uint16_t num_packets = 0; - unsigned long num_tx_bytes = 0; uint32_t max_size; int i; @@ -693,7 +688,7 @@ pmd_tx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) tso_segsz = mbuf_in->tso_segsz + hdrs_len; if (unlikely(tso_segsz == hdrs_len) || tso_segsz > *txq->mtu) { - txq->stats.errs++; + 
rte_eth_count_error(&txq->stats); break; } gso_ctx->gso_size = tso_segsz; @@ -728,10 +723,10 @@ pmd_tx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) num_mbufs = 1; } - ret = tap_write_mbufs(txq, num_mbufs, mbuf, - &num_packets, &num_tx_bytes); + ret = tap_write_mbufs(txq, num_mbufs, mbuf); if (ret == -1) { - txq->stats.errs++; + rte_eth_count_error(&txq->stats); + /* free tso mbufs */ if (num_tso_mbufs > 0) rte_pktmbuf_free_bulk(mbuf, num_tso_mbufs); @@ -749,10 +744,6 @@ pmd_tx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) } } - txq->stats.opackets += num_packets; - txq->stats.errs += nb_pkts - num_tx; - txq->stats.obytes += num_tx_bytes; - return num_tx; } @@ -1055,64 +1046,15 @@ tap_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) static int tap_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *tap_stats) { - unsigned int i, imax; - unsigned long rx_total = 0, tx_total = 0, tx_err_total = 0; - unsigned long rx_bytes_total = 0, tx_bytes_total = 0; - unsigned long rx_nombuf = 0, ierrors = 0; - const struct pmd_internals *pmd = dev->data->dev_private; - - /* rx queue statistics */ - imax = (dev->data->nb_rx_queues < RTE_ETHDEV_QUEUE_STAT_CNTRS) ? - dev->data->nb_rx_queues : RTE_ETHDEV_QUEUE_STAT_CNTRS; - for (i = 0; i < imax; i++) { - tap_stats->q_ipackets[i] = pmd->rxq[i].stats.ipackets; - tap_stats->q_ibytes[i] = pmd->rxq[i].stats.ibytes; - rx_total += tap_stats->q_ipackets[i]; - rx_bytes_total += tap_stats->q_ibytes[i]; - rx_nombuf += pmd->rxq[i].stats.rx_nombuf; - ierrors += pmd->rxq[i].stats.ierrors; - } - - /* tx queue statistics */ - imax = (dev->data->nb_tx_queues < RTE_ETHDEV_QUEUE_STAT_CNTRS) ? - dev->data->nb_tx_queues : RTE_ETHDEV_QUEUE_STAT_CNTRS; - - for (i = 0; i < imax; i++) { - tap_stats->q_opackets[i] = pmd->txq[i].stats.opackets; - tap_stats->q_obytes[i] = pmd->txq[i].stats.obytes; - tx_total += tap_stats->q_opackets[i]; - tx_err_total += pmd->txq[i].stats.errs; - tx_bytes_total += tap_stats->q_obytes[i]; - } - - tap_stats->ipackets = rx_total; - tap_stats->ibytes = rx_bytes_total; - tap_stats->ierrors = ierrors; - tap_stats->rx_nombuf = rx_nombuf; - tap_stats->opackets = tx_total; - tap_stats->oerrors = tx_err_total; - tap_stats->obytes = tx_bytes_total; - return 0; + return rte_eth_counters_stats_get(dev, offsetof(struct tx_queue, stats), + offsetof(struct rx_queue, stats), tap_stats); } static int tap_stats_reset(struct rte_eth_dev *dev) { - int i; - struct pmd_internals *pmd = dev->data->dev_private; - - for (i = 0; i < RTE_PMD_TAP_MAX_QUEUES; i++) { - pmd->rxq[i].stats.ipackets = 0; - pmd->rxq[i].stats.ibytes = 0; - pmd->rxq[i].stats.ierrors = 0; - pmd->rxq[i].stats.rx_nombuf = 0; - - pmd->txq[i].stats.opackets = 0; - pmd->txq[i].stats.errs = 0; - pmd->txq[i].stats.obytes = 0; - } - - return 0; + return rte_eth_counters_reset(dev, offsetof(struct tx_queue, stats), + offsetof(struct rx_queue, stats)); } static int diff --git a/drivers/net/tap/rte_eth_tap.h b/drivers/net/tap/rte_eth_tap.h index 5ac93f93e9..8cba9ea410 100644 --- a/drivers/net/tap/rte_eth_tap.h +++ b/drivers/net/tap/rte_eth_tap.h @@ -14,6 +14,7 @@ #include #include +#include #include #include #include "tap_log.h" @@ -32,23 +33,13 @@ enum rte_tuntap_type { ETH_TUNTAP_TYPE_MAX, }; -struct pkt_stats { - uint64_t opackets; /* Number of output packets */ - uint64_t ipackets; /* Number of input packets */ - uint64_t obytes; /* Number of bytes on output */ - uint64_t ibytes; /* Number of bytes on input */ - uint64_t errs; /* Number of TX error packets */ 
- uint64_t ierrors; /* Number of RX error packets */ - uint64_t rx_nombuf; /* Nb of RX mbuf alloc failures */ -}; - struct rx_queue { struct rte_mempool *mp; /* Mempool for RX packets */ uint32_t trigger_seen; /* Last seen Rx trigger value */ uint16_t in_port; /* Port ID */ uint16_t queue_id; /* queue ID*/ - struct pkt_stats stats; /* Stats for this RX queue */ uint16_t nb_rx_desc; /* max number of mbufs available */ + struct rte_eth_counters stats; /* Stats for this RX queue */ struct rte_eth_rxmode *rxmode; /* RX features */ struct rte_mbuf *pool; /* mbufs pool for this queue */ struct iovec (*iovecs)[]; /* descriptors for this queue */ @@ -59,7 +50,7 @@ struct tx_queue { int type; /* Type field - TUN|TAP */ uint16_t *mtu; /* Pointer to MTU from dev_data */ uint16_t csum:1; /* Enable checksum offloading */ - struct pkt_stats stats; /* Stats for this TX queue */ + struct rte_eth_counters stats; /* Stats for this TX queue */ struct rte_gso_ctx gso_ctx; /* GSO context */ uint16_t out_port; /* Port ID */ uint16_t queue_id; /* queue ID*/ From patchwork Fri May 17 00:12:09 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 140157 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 55EB144044; Fri, 17 May 2024 02:14:25 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 384EE40A7F; Fri, 17 May 2024 02:13:35 +0200 (CEST) Received: from mail-pg1-f178.google.com (mail-pg1-f178.google.com [209.85.215.178]) by mails.dpdk.org (Postfix) with ESMTP id 59AA44067D for ; Fri, 17 May 2024 02:13:20 +0200 (CEST) Received: by mail-pg1-f178.google.com with SMTP id 41be03b00d2f7-5c6bd3100fcso552963a12.3 for ; Thu, 16 May 2024 17:13:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20230601.gappssmtp.com; s=20230601; t=1715904799; x=1716509599; darn=dpdk.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=F1v3feu5BR4POSxHAthIeXNE69YI+TXuZU380J4Tk78=; b=dwNTnaLDHVrKv3S+nQvYgSSq6uXpOT+tXkCKdXn5vBFg8IdE9XFu8Zk8VTk6cDrUOr de5h+VFckNYCL5FJjaOD34BWESBef6of3sNKoRS1HeiybFIwR1DJs9WW2d3N9n0Bq2lU ohdwh23b7UGAW1db/a6fGy0SOvgTwGQ8bBOmDjxe9+CPWMz/RVPI2q1DcM3mBDj0qnBl lr9loaZ+tbKEAIbc//VC7EvNRSu5+7bw/3noxUf9BEFIa1ctS7qGgR9Q/TLB3claab42 aoeW2fuBuvpBdBcG869en7IwA6lmSGVaUNVZVke8vGKBmpdMCD+tNRheRkAGC737K45N u7gQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1715904799; x=1716509599; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=F1v3feu5BR4POSxHAthIeXNE69YI+TXuZU380J4Tk78=; b=U9VaDJ15Co8VnLSt9EWhxawcEQde1tzv7PPWk9lkkWSM0bx/54e7GEK2nRB7OIULwI kqaDVNsVHnPJOe8AsbwO9wGzk63mFjxubUQMGfWCiHNO2FH8QY7FaeTH22D+vJvYpFSK EqTED4iritDwzBnNx9BridhidGVuq3+MUKEr+D3uHLRN+21qOoF1785gEp5xJ8WVXe77 vF3ozJUWHxk7WqmUaZqqezpiwoPwXSwmRgRUkdT/D3q5wJeZfTZ0Mcnsgu16wMuV9pQn NKANvgJcxsz10u/GDz0KGZNM4wUAsEUO6iKf4f+hhf5SxDRQN1seL/itTu1/XAh1LFcp A25w== X-Gm-Message-State: AOJu0YzqSTJCIW1xbJfhs030MB++ndtfCDkqpzeAP6aBW6b5nXXj6tKP YaPw6ZJRxl6J1USD41mwQVhQ3Gp1UZ+kw56ufwocSPOk6YrnwZ3ZAvDRNBetYIKOegdzIooO/Hg zxdU= 
X-Google-Smtp-Source: AGHT+IFbZxZhQrCC/BTVe7VPoFv9B+seQoGLAwoNHOylu2i8UN4bSM5FMy7/aP8T0ARlUh+7nhk5dA== X-Received: by 2002:a17:90a:f2d8:b0:2b9:f6bc:e4c1 with SMTP id 98e67ed59e1d1-2b9f6bceda6mr4783545a91.28.1715904799495; Thu, 16 May 2024 17:13:19 -0700 (PDT) Received: from hermes.local (204-195-96-226.wavecable.com. [204.195.96.226]) by smtp.gmail.com with ESMTPSA id 98e67ed59e1d1-2b9f5820ce9sm3337495a91.56.2024.05.16.17.13.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 16 May 2024 17:13:19 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger , Tetsuya Mukawa Subject: [PATCH v6 9/9] net/null: use generic SW stats Date: Thu, 16 May 2024 17:12:09 -0700 Message-ID: <20240517001302.65514-10-stephen@networkplumber.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240517001302.65514-1-stephen@networkplumber.org> References: <20240510050507.14381-1-stephen@networkplumber.org> <20240517001302.65514-1-stephen@networkplumber.org> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Use the new common code for statistics. This also fixes the bug that this driver was not accounting for bytes. Signed-off-by: Stephen Hemminger --- drivers/net/null/rte_eth_null.c | 80 +++++++-------------------------- 1 file changed, 17 insertions(+), 63 deletions(-) diff --git a/drivers/net/null/rte_eth_null.c b/drivers/net/null/rte_eth_null.c index 7c46004f1e..7786982732 100644 --- a/drivers/net/null/rte_eth_null.c +++ b/drivers/net/null/rte_eth_null.c @@ -8,6 +8,7 @@ #include #include #include +#include #include #include #include @@ -37,8 +38,8 @@ struct null_queue { struct rte_mempool *mb_pool; struct rte_mbuf *dummy_packet; - uint64_t rx_pkts; - uint64_t tx_pkts; + struct rte_eth_counters tx_stats; + struct rte_eth_counters rx_stats; }; struct pmd_options { @@ -99,11 +100,9 @@ eth_null_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs) bufs[i]->data_len = (uint16_t)packet_size; bufs[i]->pkt_len = packet_size; bufs[i]->port = h->internals->port_id; + rte_eth_count_mbuf(&h->rx_stats, bufs[i]); } - /* NOTE: review for potential ordering optimization */ - __atomic_fetch_add(&h->rx_pkts, i, __ATOMIC_SEQ_CST); - return i; } @@ -127,11 +126,9 @@ eth_null_copy_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs) bufs[i]->data_len = (uint16_t)packet_size; bufs[i]->pkt_len = packet_size; bufs[i]->port = h->internals->port_id; + rte_eth_count_mbuf(&h->rx_stats, bufs[i]); } - /* NOTE: review for potential ordering optimization */ - __atomic_fetch_add(&h->rx_pkts, i, __ATOMIC_SEQ_CST); - return i; } @@ -151,11 +148,10 @@ eth_null_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs) if ((q == NULL) || (bufs == NULL)) return 0; - for (i = 0; i < nb_bufs; i++) + for (i = 0; i < nb_bufs; i++) { + rte_eth_count_mbuf(&h->tx_stats, bufs[i]); rte_pktmbuf_free(bufs[i]); - - /* NOTE: review for potential ordering optimization */ - __atomic_fetch_add(&h->tx_pkts, i, __ATOMIC_SEQ_CST); + } return i; } @@ -174,12 +170,10 @@ eth_null_copy_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs) for (i = 0; i < nb_bufs; i++) { rte_memcpy(h->dummy_packet, rte_pktmbuf_mtod(bufs[i], void *), packet_size); + rte_eth_count_mbuf(&h->tx_stats, bufs[i]); rte_pktmbuf_free(bufs[i]); } - /* NOTE: review for potential ordering optimization */ - __atomic_fetch_add(&h->tx_pkts, i, __ATOMIC_SEQ_CST); - return i; } @@ -322,60 +316,20 
@@ eth_dev_info(struct rte_eth_dev *dev, } static int -eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *igb_stats) +eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { - unsigned int i, num_stats; - unsigned long rx_total = 0, tx_total = 0; - const struct pmd_internals *internal; - - if ((dev == NULL) || (igb_stats == NULL)) - return -EINVAL; - - internal = dev->data->dev_private; - num_stats = RTE_MIN((unsigned int)RTE_ETHDEV_QUEUE_STAT_CNTRS, - RTE_MIN(dev->data->nb_rx_queues, - RTE_DIM(internal->rx_null_queues))); - for (i = 0; i < num_stats; i++) { - /* NOTE: review for atomic access */ - igb_stats->q_ipackets[i] = - internal->rx_null_queues[i].rx_pkts; - rx_total += igb_stats->q_ipackets[i]; - } - - num_stats = RTE_MIN((unsigned int)RTE_ETHDEV_QUEUE_STAT_CNTRS, - RTE_MIN(dev->data->nb_tx_queues, - RTE_DIM(internal->tx_null_queues))); - for (i = 0; i < num_stats; i++) { - /* NOTE: review for atomic access */ - igb_stats->q_opackets[i] = - internal->tx_null_queues[i].tx_pkts; - tx_total += igb_stats->q_opackets[i]; - } - - igb_stats->ipackets = rx_total; - igb_stats->opackets = tx_total; - - return 0; + return rte_eth_counters_stats_get(dev, + offsetof(struct null_queue, tx_stats), + offsetof(struct null_queue, rx_stats), + stats); } static int eth_stats_reset(struct rte_eth_dev *dev) { - unsigned int i; - struct pmd_internals *internal; - - if (dev == NULL) - return -EINVAL; - - internal = dev->data->dev_private; - for (i = 0; i < RTE_DIM(internal->rx_null_queues); i++) - /* NOTE: review for atomic access */ - internal->rx_null_queues[i].rx_pkts = 0; - for (i = 0; i < RTE_DIM(internal->tx_null_queues); i++) - /* NOTE: review for atomic access */ - internal->tx_null_queues[i].tx_pkts = 0; - - return 0; + return rte_eth_counters_reset(dev, + offsetof(struct null_queue, tx_stats), + offsetof(struct null_queue, rx_stats)); } static void
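
The ring, tap and null conversions above all follow the same shape: each queue structure embeds a struct rte_eth_counters, the datapath counts every completed packet with rte_eth_count_mbuf() (or rte_eth_count_error() when a packet is dropped), and the stats_get/stats_reset callbacks collapse into rte_eth_counters_stats_get()/rte_eth_counters_reset(), keyed by the offsetof() of the counter block inside the Tx and Rx queue structures. Below is a minimal sketch of that pattern, not a drop-in driver: the counter helpers are declared by a header added earlier in this series (the include line is stripped in this archive), their prototypes are inferred from the call sites above, and struct sketch_queue and the sketch_* functions are invented names.

/*
 * Sketch only: assumes the rte_eth_counters helpers introduced earlier
 * in this patch series; include their header here once it is available.
 */
#include <stddef.h>
#include <stdint.h>

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_ring.h>

/* one counter block per queue; this sketch uses the same struct for Rx and Tx */
struct sketch_queue {
	struct rte_ring *rng;
	struct rte_eth_counters stats;
};

static uint16_t
sketch_rx_burst(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
{
	void **ptrs = (void *)&bufs[0];
	struct sketch_queue *r = q;
	uint16_t i, nb_rx;

	nb_rx = (uint16_t)rte_ring_dequeue_burst(r->rng, ptrs, nb_bufs, NULL);

	/* packet and byte counts are taken from each received mbuf */
	for (i = 0; i < nb_rx; i++)
		rte_eth_count_mbuf(&r->stats, bufs[i]);

	return nb_rx;
}

/* the ethdev callbacks reduce to the generic helpers, which walk all
 * Rx/Tx queues using the offset of the counter block in each queue */
static int
sketch_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
{
	return rte_eth_counters_stats_get(dev,
			offsetof(struct sketch_queue, stats),	/* Tx offset */
			offsetof(struct sketch_queue, stats),	/* Rx offset */
			stats);
}

static int
sketch_stats_reset(struct rte_eth_dev *dev)
{
	return rte_eth_counters_reset(dev,
			offsetof(struct sketch_queue, stats),
			offsetof(struct sketch_queue, stats));
}

Passing the Tx and Rx offsets separately is what lets tap, which keeps distinct struct tx_queue and struct rx_queue types, share the same helpers as ring and null, which use one queue type for both directions. For byte counting on Tx, where the driver loses ownership of the mbuf on enqueue, the ring patch records the packet lengths up front and then calls rte_eth_count_packet() only for the entries actually enqueued.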
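
The pmd_ring test fix (patch 6/9 above) is the other half of the same cleanup: once the PMD starts reading pkt_len for byte counters, a stack-allocated mbuf has to be zeroed and reset before it is handed to rte_eth_tx_burst(), or the counters accumulate whatever garbage happened to be on the stack. The following is a small usage sketch along the lines of the test's test_mbuf_init() helper; the sketch_* names, the queue index 0 and the ignored return value are simplifications for illustration only.

#include <string.h>

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* make a valid zero-length mbuf, as the test helper does */
static void
sketch_mbuf_init(struct rte_mbuf *mbuf)
{
	memset(mbuf, 0, sizeof(*mbuf));
	rte_pktmbuf_reset(mbuf);	/* zero pkt_len/data_len, single segment */
}

static void
sketch_send_one(uint16_t port_id)
{
	struct rte_mbuf buf, *pbuf = &buf;

	sketch_mbuf_init(&buf);
	/* the ring PMD only stores the mbuf pointer, so a stack mbuf is
	 * acceptable in a test like this */
	(void)rte_eth_tx_burst(port_id, 0, &pbuf, 1);
}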