From patchwork Wed Aug 9 15:51:29 2023
X-Patchwork-Submitter: "Xing, Beilei"
X-Patchwork-Id: 130027
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: beilei.xing@intel.com
To: jingjing.wu@intel.com, mingxia.liu@intel.com
Cc: dev@dpdk.org, Beilei Xing
Subject: [PATCH 14/19] net/cpfl: add stats ops for representor
Date: Wed, 9 Aug 2023 15:51:29 +0000
Message-Id: <20230809155134.539287-15-beilei.xing@intel.com>
In-Reply-To: <20230809155134.539287-1-beilei.xing@intel.com>
References: <20230809155134.539287-1-beilei.xing@intel.com>

From: Beilei Xing <beilei.xing@intel.com>

Support stats_get and stats_reset ops for port representor.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.h      |  8 +++++
 drivers/net/cpfl/cpfl_representor.c | 54 +++++++++++++++++++++++++++++
 2 files changed, 62 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
index 7813b9173e..33e810408b 100644
--- a/drivers/net/cpfl/cpfl_ethdev.h
+++ b/drivers/net/cpfl/cpfl_ethdev.h
@@ -171,15 +171,23 @@ struct cpfl_repr {
 	bool func_up; /* If the represented function is up */
 };
 
+struct cpfl_repr_stats {
+	uint64_t packets;
+	uint64_t bytes;
+	uint64_t errors;
+};
+
 struct cpfl_repr_rx_queue {
 	struct cpfl_repr *repr;
 	struct rte_mempool *mb_pool;
 	struct rte_ring *rx_ring;
+	struct cpfl_repr_stats stats; /* Statistics */
 };
 
 struct cpfl_repr_tx_queue {
 	struct cpfl_repr *repr;
 	struct cpfl_tx_queue *txq;
+	struct cpfl_repr_stats stats; /* Statistics */
 };
 
 struct cpfl_adapter_ext {
diff --git a/drivers/net/cpfl/cpfl_representor.c b/drivers/net/cpfl/cpfl_representor.c
index 862464602f..79cb7f76d4 100644
--- a/drivers/net/cpfl/cpfl_representor.c
+++ b/drivers/net/cpfl/cpfl_representor.c
@@ -425,6 +425,58 @@ cpfl_repr_link_update(struct rte_eth_dev *ethdev,
 	return 0;
 }
 
+static int
+idpf_repr_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct cpfl_repr_tx_queue *txq;
+	struct cpfl_repr_rx_queue *rxq;
+	uint16_t i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (!txq)
+			continue;
+		stats->opackets += __atomic_load_n(&txq->stats.packets, __ATOMIC_RELAXED);
+		stats->obytes += __atomic_load_n(&txq->stats.bytes, __ATOMIC_RELAXED);
+	}
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (!rxq)
+			continue;
+		stats->ipackets += __atomic_load_n(&rxq->stats.packets, __ATOMIC_RELAXED);
+		stats->ibytes += __atomic_load_n(&rxq->stats.bytes, __ATOMIC_RELAXED);
+		stats->ierrors += __atomic_load_n(&rxq->stats.errors, __ATOMIC_RELAXED);
+	}
+	stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
+	return 0;
+}
+
+static int
+idpf_repr_stats_reset(struct rte_eth_dev *dev)
+{
+	struct cpfl_repr_tx_queue *txq;
+	struct cpfl_repr_rx_queue *rxq;
+	uint16_t i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (!txq)
+			continue;
+		__atomic_store_n(&txq->stats.packets, 0, __ATOMIC_RELAXED);
+		__atomic_store_n(&txq->stats.bytes, 0, __ATOMIC_RELAXED);
+		__atomic_store_n(&txq->stats.errors, 0, __ATOMIC_RELAXED);
+	}
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (!rxq)
+			continue;
+		__atomic_store_n(&rxq->stats.packets, 0, __ATOMIC_RELAXED);
+		__atomic_store_n(&rxq->stats.bytes, 0, __ATOMIC_RELAXED);
+		__atomic_store_n(&rxq->stats.errors, 0, __ATOMIC_RELAXED);
+	}
+	return 0;
+}
+
 static const struct eth_dev_ops cpfl_repr_dev_ops = {
 	.dev_start = cpfl_repr_dev_start,
 	.dev_stop = cpfl_repr_dev_stop,
@@ -435,6 +487,8 @@ static const struct eth_dev_ops cpfl_repr_dev_ops = {
 	.rx_queue_setup = cpfl_repr_rx_queue_setup,
 	.tx_queue_setup = cpfl_repr_tx_queue_setup,
 	.link_update = cpfl_repr_link_update,
+	.stats_get = idpf_repr_stats_get,
+	.stats_reset = idpf_repr_stats_reset,
 };
 
 static int
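
Editor's note: the patch only adds the reader side (stats_get/stats_reset); the
datapath code that increments the new per-queue counters is not part of this
diff and presumably lands elsewhere in the series. The sketch below shows how a
representor Rx burst routine could feed these counters with the same relaxed
atomics the ops above read. The function name cpfl_repr_rx_burst and the
dequeue from rxq->rx_ring are assumptions based on the struct layout in
cpfl_ethdev.h, not code from this patch.

#include <rte_mbuf.h>
#include <rte_ring.h>

/* Hypothetical representor Rx burst path updating the per-queue stats. */
static uint16_t
cpfl_repr_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
{
	struct cpfl_repr_rx_queue *rxq = rx_queue;
	uint64_t bytes = 0;
	uint16_t nb_rx, i;

	/* Pull the mbufs that were queued for this representor. */
	nb_rx = rte_ring_dequeue_burst(rxq->rx_ring, (void **)rx_pkts, nb_pkts, NULL);

	for (i = 0; i < nb_rx; i++)
		bytes += rx_pkts[i]->pkt_len;

	/* Writer side of the relaxed atomics used by stats_get/stats_reset. */
	__atomic_fetch_add(&rxq->stats.packets, nb_rx, __ATOMIC_RELAXED);
	__atomic_fetch_add(&rxq->stats.bytes, bytes, __ATOMIC_RELAXED);

	return nb_rx;
}

With the new ops registered, an application reads the representor counters
through the standard ethdev API: rte_eth_stats_get(port_id, &stats) returns the
aggregated ipackets/ibytes/ierrors/opackets/obytes plus rx_nombuf, and
rte_eth_stats_reset(port_id) zeroes the per-queue counters via the new
stats_reset op.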