From patchwork Fri Dec 23 01:55:55 2022
X-Patchwork-Submitter: "Liu, Mingxia"
X-Patchwork-Id: 121330
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Mingxia Liu
To: dev@dpdk.org
Cc: jingjing.wu@intel.com, beilei.xing@intel.com, qi.z.zhang@intel.com, Mingxia Liu
Subject: [PATCH 18/21] net/cpfl: add hw statistics
Date: Fri, 23 Dec 2022 01:55:55 +0000
Message-Id: <20221223015558.3143279-19-mingxia.liu@intel.com>
In-Reply-To: <20221223015558.3143279-1-mingxia.liu@intel.com>
References: <20221223015558.3143279-1-mingxia.liu@intel.com>

This patch adds hardware packet/byte statistics, exposed through the
stats_get and stats_reset ethdev ops.
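For illustration only (not part of the diff below), a minimal sketch of how
an application is expected to reach the new callbacks through the generic
ethdev statistics API; the port id and the printed fields are assumptions
made for the example:

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Illustrative sketch: rte_eth_stats_get()/rte_eth_stats_reset() dispatch
 * to the cpfl stats_get/stats_reset dev ops added by this patch.
 * port_id 0 is assumed; error handling is trimmed.
 */
static void
show_port_stats(uint16_t port_id)
{
        struct rte_eth_stats stats;

        if (rte_eth_stats_get(port_id, &stats) == 0) {
                printf("rx packets: %" PRIu64 ", rx bytes: %" PRIu64 "\n",
                       stats.ipackets, stats.ibytes);
                printf("tx packets: %" PRIu64 ", tx bytes: %" PRIu64 "\n",
                       stats.opackets, stats.obytes);
                printf("rx missed: %" PRIu64 ", rx no-mbuf: %" PRIu64 "\n",
                       stats.imissed, stats.rx_nombuf);
        }

        /* Re-baseline the counters; maps to cpfl_dev_stats_reset(). */
        rte_eth_stats_reset(port_id);
}

In this flow, stats_get folds the per-queue mbuf allocation failures into
rx_nombuf and subtracts the 4-byte CRC per packet from ibytes unless
RTE_ETH_RX_OFFLOAD_KEEP_CRC is enabled, while stats_reset re-baselines
eth_stats_offset from the current hardware counters.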
Signed-off-by: Mingxia Liu
---
 drivers/net/cpfl/cpfl_ethdev.c | 88 ++++++++++++++++++++++++++++++++++
 1 file changed, 88 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 4e5d4e124a..026ac52997 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -169,6 +169,87 @@ cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 	return ptypes;
 }
 
+static uint64_t
+cpfl_get_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	uint64_t mbuf_alloc_failed = 0;
+	struct idpf_rx_queue *rxq;
+	int i = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		mbuf_alloc_failed +=
+			rte_atomic64_read(&(rxq->rx_stats.mbuf_alloc_failed));
+	}
+
+	return mbuf_alloc_failed;
+}
+
+static int
+cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_query_stats(vport, &pstats);
+	if (ret == 0) {
+		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
+					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
+					 RTE_ETHER_CRC_LEN;
+
+		idpf_update_stats(&vport->eth_stats_offset, pstats);
+		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
+				pstats->rx_broadcast - pstats->rx_discards;
+		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
+				pstats->tx_unicast;
+		stats->imissed = pstats->rx_discards;
+		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
+		stats->ibytes = pstats->rx_bytes;
+		stats->ibytes -= stats->ipackets * crc_stats_len;
+		stats->obytes = pstats->tx_bytes;
+
+		dev->data->rx_mbuf_alloc_failed = cpfl_get_mbuf_alloc_failed_stats(dev);
+		stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
+	} else {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+	}
+	return ret;
+}
+
+static void
+cpfl_reset_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		rte_atomic64_set(&(rxq->rx_stats.mbuf_alloc_failed), 0);
+	}
+}
+
+static int
+cpfl_dev_stats_reset(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_query_stats(vport, &pstats);
+	if (ret != 0)
+		return ret;
+
+	/* set stats offset based on current values */
+	vport->eth_stats_offset = *pstats;
+
+	cpfl_reset_mbuf_alloc_failed_stats(dev);
+
+	return 0;
+}
+
 static int
 cpfl_init_rss(struct idpf_vport *vport)
 {
@@ -362,6 +443,11 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 		goto err_vport;
 	}
 
+	if (cpfl_dev_stats_reset(dev)) {
+		PMD_DRV_LOG(ERR, "Failed to reset stats");
+		goto err_vport;
+	}
+
 	return 0;
 
 err_vport:
@@ -757,6 +843,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.tx_queue_release		= cpfl_dev_tx_queue_release,
 	.mtu_set			= cpfl_dev_mtu_set,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
+	.stats_get			= cpfl_dev_stats_get,
+	.stats_reset			= cpfl_dev_stats_reset,
 };
 
 static uint16_t