From patchwork Fri Feb 17 07:32:25 2023
X-Patchwork-Submitter: Junfeng Guo
X-Patchwork-Id: 124114
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Junfeng Guo <junfeng.guo@intel.com>
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, ferruh.yigit@amd.com,
 beilei.xing@intel.com
Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com,
 Junfeng Guo, Rushil Gupta, Jordan Kimbrough, Jeroen de Borst
Subject: [RFC v3 07/10] net/gve: support basic stats for DQO
Date: Fri, 17 Feb 2023 15:32:25 +0800
Message-Id: <20230217073228.340815-8-junfeng.guo@intel.com>
In-Reply-To: <20230217073228.340815-1-junfeng.guo@intel.com>
References: <20230130062642.3337239-1-junfeng.guo@intel.com>
 <20230217073228.340815-1-junfeng.guo@intel.com>

Add basic stats support for the DQO queue format: accumulate per-queue
packet, byte and error counters in the Rx/Tx burst functions, count mbuf
allocation failures in the Rx refill path, and register the
stats_get/stats_reset callbacks in the DQO ops table.
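The DQO ops table reuses the gve_dev_stats_get()/gve_dev_stats_reset()
callbacks that gve_ethdev.c already registers for the GQI queue format;
they fold the per-queue counters maintained below into struct
rte_eth_stats. A rough sketch of that aggregation, for reviewers'
reference (illustrative only, not code from this diff):

    static int
    gve_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
    {
            uint16_t i;

            for (i = 0; i < dev->data->nb_tx_queues; i++) {
                    struct gve_tx_queue *txq = dev->data->tx_queues[i];

                    if (txq == NULL)
                            continue;
                    /* accumulated in gve_tx_burst_dqo() below */
                    stats->opackets += txq->packets;
                    stats->obytes += txq->bytes;
            }

            for (i = 0; i < dev->data->nb_rx_queues; i++) {
                    struct gve_rx_queue *rxq = dev->data->rx_queues[i];

                    if (rxq == NULL)
                            continue;
                    /* accumulated in gve_rx_burst_dqo() and
                     * gve_rx_refill_dqo() below */
                    stats->ipackets += rxq->packets;
                    stats->ibytes += rxq->bytes;
                    stats->ierrors += rxq->errors;
                    stats->rx_nombuf += rxq->no_mbufs;
            }

            return 0;
    }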
Signed-off-by: Junfeng Guo
Signed-off-by: Rushil Gupta
Signed-off-by: Jordan Kimbrough
Signed-off-by: Jeroen de Borst
---
 drivers/net/gve/gve_ethdev.c |  2 ++
 drivers/net/gve/gve_rx_dqo.c | 12 +++++++++++-
 drivers/net/gve/gve_tx_dqo.c |  6 ++++++
 3 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index 1c9d272c2b..2541738da1 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -481,6 +481,8 @@ static const struct eth_dev_ops gve_eth_dev_ops_dqo = {
 	.rx_queue_release = gve_rx_queue_release_dqo,
 	.tx_queue_release = gve_tx_queue_release_dqo,
 	.link_update = gve_link_update,
+	.stats_get = gve_dev_stats_get,
+	.stats_reset = gve_dev_stats_reset,
 	.mtu_set = gve_dev_mtu_set,
 };
 
diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c
index a281b237a4..2a540b1ba5 100644
--- a/drivers/net/gve/gve_rx_dqo.c
+++ b/drivers/net/gve/gve_rx_dqo.c
@@ -37,6 +37,7 @@ gve_rx_refill_dqo(struct gve_rx_queue *rxq)
 		next_avail = 0;
 		rxq->nb_rx_hold -= delta;
 	} else {
+		rxq->no_mbufs += nb_desc - next_avail;
 		dev = &rte_eth_devices[rxq->port_id];
 		dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
 		PMD_DRV_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
@@ -57,6 +58,7 @@ gve_rx_refill_dqo(struct gve_rx_queue *rxq)
 			next_avail += nb_refill;
 			rxq->nb_rx_hold -= nb_refill;
 		} else {
+			rxq->no_mbufs += nb_desc - next_avail;
 			dev = &rte_eth_devices[rxq->port_id];
 			dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
 			PMD_DRV_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
@@ -80,7 +82,9 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	uint16_t pkt_len;
 	uint16_t rx_id;
 	uint16_t nb_rx;
+	uint64_t bytes;
 
+	bytes = 0;
 	nb_rx = 0;
 	rxq = rx_queue;
 	rx_id = rxq->rx_tail;
@@ -94,8 +98,10 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		if (rx_desc->generation != rxq->cur_gen_bit)
 			break;
 
-		if (unlikely(rx_desc->rx_error))
+		if (unlikely(rx_desc->rx_error)) {
+			rxq->errors++;
 			continue;
+		}
 
 		pkt_len = rx_desc->packet_len;
 
@@ -120,6 +126,7 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		rxm->hash.rss = rte_be_to_cpu_32(rx_desc->hash);
 
 		rx_pkts[nb_rx++] = rxm;
+		bytes += pkt_len;
 	}
 
 	if (nb_rx > 0) {
@@ -128,6 +135,9 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		rxq->next_avail = rx_id_bufq;
 
 		gve_rx_refill_dqo(rxq);
+
+		rxq->packets += nb_rx;
+		rxq->bytes += bytes;
 	}
 
 	return nb_rx;
diff --git a/drivers/net/gve/gve_tx_dqo.c b/drivers/net/gve/gve_tx_dqo.c
index af43ff870a..450cf71a6b 100644
--- a/drivers/net/gve/gve_tx_dqo.c
+++ b/drivers/net/gve/gve_tx_dqo.c
@@ -80,10 +80,12 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	uint16_t nb_used;
 	uint16_t tx_id;
 	uint16_t sw_id;
+	uint64_t bytes;
 
 	sw_ring = txq->sw_ring;
 	txr = txq->tx_ring;
 
+	bytes = 0;
 	mask = txq->nb_tx_desc - 1;
 	sw_mask = txq->sw_size - 1;
 	tx_id = txq->tx_tail;
@@ -118,6 +120,7 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 			tx_id = (tx_id + 1) & mask;
 			sw_id = (sw_id + 1) & sw_mask;
+			bytes += tx_pkt->pkt_len;
 
 			tx_pkt = tx_pkt->next;
 		} while (tx_pkt);
@@ -141,6 +144,9 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		rte_write32(tx_id, txq->qtx_tail);
 		txq->tx_tail = tx_id;
 		txq->sw_tail = sw_id;
+
+		txq->packets += nb_tx;
+		txq->bytes += bytes;
 	}
 
 	return nb_tx;
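
Once applied, the new counters are reachable through the standard ethdev
stats API. A minimal application-side check might look like the snippet
below (hypothetical, not part of the patch; the function name is made up
for illustration):

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_ethdev.h>

    static void
    print_gve_basic_stats(uint16_t port_id)
    {
            struct rte_eth_stats stats;

            if (rte_eth_stats_get(port_id, &stats) != 0)
                    return;

            /* ipackets/ibytes/ierrors/rx_nombuf come from the Rx DQO
             * path, opackets/obytes from the Tx DQO path above. */
            printf("rx: %" PRIu64 " pkts, %" PRIu64 " bytes, %" PRIu64
                   " errors, %" PRIu64 " no-mbuf\n",
                   stats.ipackets, stats.ibytes, stats.ierrors,
                   stats.rx_nombuf);
            printf("tx: %" PRIu64 " pkts, %" PRIu64 " bytes\n",
                   stats.opackets, stats.obytes);

            /* Exercise the new .stats_reset callback as well. */
            rte_eth_stats_reset(port_id);
    }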