From patchwork Fri Sep 29 11:50:48 2023
X-Patchwork-Submitter: Amit Prakash Shukla
X-Patchwork-Id: 132212
X-Patchwork-Delegate: jerinj@marvell.com
From: Amit Prakash Shukla
To: Amit Prakash Shukla, Jerin Jacob
Subject: [PATCH v8 09/12] eventdev/dma: support adapter stats
Date: Fri, 29 Sep 2023 17:20:48 +0530
Message-ID: <20230929115051.564063-10-amitprakashs@marvell.com>
In-Reply-To: <20230929115051.564063-1-amitprakashs@marvell.com>
References: <20230929081309.464565-1-amitprakashs@marvell.com>
 <20230929115051.564063-1-amitprakashs@marvell.com>
List-Id: DPDK patches and discussions

Added DMA adapter stats API support to get and reset stats. The get API
reports the DMA SW adapter stats together with the enqueue and dequeue
stats supported by the eventdev driver.
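As a usage sketch (not part of this patch): an application that has created a
DMA adapter could read and clear these counters roughly as below. The adapter
id DMA_ADAPTER_ID and the choice of printed fields are illustrative
assumptions; the fields shown are the counters updated by this patch.

#include <stdio.h>

#include <rte_event_dma_adapter.h>

/* Hypothetical adapter id, assumed to have been created earlier with
 * rte_event_dma_adapter_create().
 */
#define DMA_ADAPTER_ID 0

static void
dump_and_clear_dma_adapter_stats(void)
{
	struct rte_event_dma_adapter_stats stats;

	/* Aggregates the SW adapter counters with any counters reported by
	 * the eventdev driver (internal event port case).
	 */
	if (rte_event_dma_adapter_stats_get(DMA_ADAPTER_ID, &stats) != 0) {
		printf("failed to get DMA adapter stats\n");
		return;
	}

	printf("events dequeued:  %llu\n", (unsigned long long)stats.event_deq_count);
	printf("DMA ops enqueued: %llu\n", (unsigned long long)stats.dma_enq_count);
	printf("DMA ops dequeued: %llu\n", (unsigned long long)stats.dma_deq_count);
	printf("events enqueued:  %llu\n", (unsigned long long)stats.event_enq_count);

	/* Clears both the driver counters and the SW adapter counters. */
	rte_event_dma_adapter_stats_reset(DMA_ADAPTER_ID);
}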
Signed-off-by: Amit Prakash Shukla
---
 lib/eventdev/rte_event_dma_adapter.c | 95 ++++++++++++++++++++++++++++
 1 file changed, 95 insertions(+)

diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c
index 632169a7c2..6c67e6d499 100644
--- a/lib/eventdev/rte_event_dma_adapter.c
+++ b/lib/eventdev/rte_event_dma_adapter.c
@@ -142,6 +142,9 @@ struct event_dma_adapter {
 
 	/* Loop counter to flush dma ops */
 	uint16_t transmit_loop_count;
+
+	/* Per instance stats structure */
+	struct rte_event_dma_adapter_stats dma_stats;
 } __rte_cache_aligned;
 
 static struct event_dma_adapter **event_dma_adapter;
@@ -475,6 +478,7 @@ rte_event_dma_adapter_free(uint8_t id)
 static inline unsigned int
 edma_enq_to_dma_dev(struct event_dma_adapter *adapter, struct rte_event *ev, unsigned int cnt)
 {
+	struct rte_event_dma_adapter_stats *stats = &adapter->dma_stats;
 	struct dma_vchan_info *vchan_qinfo = NULL;
 	struct rte_event_dma_adapter_op *dma_op;
 	uint16_t vchan, nb_enqueued = 0;
@@ -484,6 +488,7 @@ edma_enq_to_dma_dev(struct event_dma_adapter *adapter, struct rte_event *ev, unsigned int cnt)
 
 	ret = 0;
 	n = 0;
+	stats->event_deq_count += cnt;
 
 	for (i = 0; i < cnt; i++) {
 		dma_op = ev[i].event_ptr;
@@ -506,6 +511,7 @@ edma_enq_to_dma_dev(struct event_dma_adapter *adapter, struct rte_event *ev, unsigned int cnt)
 			ret = edma_circular_buffer_flush_to_dma_dev(adapter, &vchan_qinfo->dma_buf,
 								    dma_dev_id, vchan,
 								    &nb_enqueued);
+			stats->dma_enq_count += nb_enqueued;
 			n += nb_enqueued;
 
 			/**
@@ -552,6 +558,7 @@ edma_adapter_dev_flush(struct event_dma_adapter *adapter, int16_t dma_dev_id,
 static unsigned int
 edma_adapter_enq_flush(struct event_dma_adapter *adapter)
 {
+	struct rte_event_dma_adapter_stats *stats = &adapter->dma_stats;
 	int16_t dma_dev_id;
 	uint16_t nb_enqueued = 0;
 	uint16_t nb_ops_flushed = 0;
@@ -566,6 +573,8 @@ edma_adapter_enq_flush(struct event_dma_adapter *adapter)
 	if (!nb_ops_flushed)
 		adapter->stop_enq_to_dma_dev = false;
 
+	stats->dma_enq_count += nb_enqueued;
+
 	return nb_enqueued;
 }
 
@@ -577,6 +586,7 @@ edma_adapter_enq_flush(struct event_dma_adapter *adapter)
 static int
 edma_adapter_enq_run(struct event_dma_adapter *adapter, unsigned int max_enq)
 {
+	struct rte_event_dma_adapter_stats *stats = &adapter->dma_stats;
 	uint8_t event_port_id = adapter->event_port_id;
 	uint8_t event_dev_id = adapter->eventdev_id;
 	struct rte_event ev[DMA_BATCH_SIZE];
@@ -596,6 +606,7 @@ edma_adapter_enq_run(struct event_dma_adapter *adapter, unsigned int max_enq)
 				break;
 		}
 
+		stats->event_poll_count++;
 		n = rte_event_dequeue_burst(event_dev_id, event_port_id, ev, DMA_BATCH_SIZE, 0);
 
 		if (!n)
@@ -616,6 +627,7 @@ static inline uint16_t
 edma_ops_enqueue_burst(struct event_dma_adapter *adapter, struct rte_event_dma_adapter_op **ops,
 		       uint16_t num)
 {
+	struct rte_event_dma_adapter_stats *stats = &adapter->dma_stats;
 	uint8_t event_port_id = adapter->event_port_id;
 	uint8_t event_dev_id = adapter->eventdev_id;
 	struct rte_event events[DMA_BATCH_SIZE];
@@ -655,6 +667,10 @@ edma_ops_enqueue_burst(struct event_dma_adapter *adapter, struct rte_event_dma_adapter_op **ops,
 
 	} while (retry++ < DMA_ADAPTER_MAX_EV_ENQ_RETRIES && nb_enqueued < nb_ev);
 
+	stats->event_enq_fail_count += nb_ev - nb_enqueued;
+	stats->event_enq_count += nb_enqueued;
+	stats->event_enq_retry_count += retry - 1;
+
 	return nb_enqueued;
 }
 
@@ -709,6 +725,7 @@ edma_ops_buffer_flush(struct event_dma_adapter *adapter)
 static inline unsigned int
 edma_adapter_deq_run(struct event_dma_adapter *adapter, unsigned int max_deq)
 {
+	struct rte_event_dma_adapter_stats *stats = &adapter->dma_stats;
 	struct dma_vchan_info *vchan_info;
 	struct dma_ops_circular_buffer *tq_buf;
 	struct rte_event_dma_adapter_op *ops;
@@ -746,6 +763,7 @@ edma_adapter_deq_run(struct event_dma_adapter *adapter, unsigned int max_deq)
 				continue;
 
 			done = false;
+			stats->dma_deq_count += n;
 
 			tq_buf = &dev_info->tqmap[vchan].dma_buf;
 
@@ -1308,3 +1326,80 @@ rte_event_dma_adapter_runtime_params_get(uint8_t id,
 
 	return 0;
 }
+
+int
+rte_event_dma_adapter_stats_get(uint8_t id, struct rte_event_dma_adapter_stats *stats)
+{
+	struct rte_event_dma_adapter_stats dev_stats_sum = {0};
+	struct rte_event_dma_adapter_stats dev_stats;
+	struct event_dma_adapter *adapter;
+	struct dma_device_info *dev_info;
+	struct rte_eventdev *dev;
+	uint16_t num_dma_dev;
+	uint32_t i;
+	int ret;
+
+	EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+	adapter = edma_id_to_adapter(id);
+	if (adapter == NULL || stats == NULL)
+		return -EINVAL;
+
+	num_dma_dev = rte_dma_count_avail();
+	dev = &rte_eventdevs[adapter->eventdev_id];
+	memset(stats, 0, sizeof(*stats));
+	for (i = 0; i < num_dma_dev; i++) {
+		dev_info = &adapter->dma_devs[i];
+
+		if (dev_info->internal_event_port == 0 ||
+		    dev->dev_ops->dma_adapter_stats_get == NULL)
+			continue;
+
+		ret = (*dev->dev_ops->dma_adapter_stats_get)(dev, i, &dev_stats);
+		if (ret)
+			continue;
+
+		dev_stats_sum.dma_deq_count += dev_stats.dma_deq_count;
+		dev_stats_sum.event_enq_count += dev_stats.event_enq_count;
+	}
+
+	if (adapter->service_initialized)
+		*stats = adapter->dma_stats;
+
+	stats->dma_deq_count += dev_stats_sum.dma_deq_count;
+	stats->event_enq_count += dev_stats_sum.event_enq_count;
+
+	return 0;
+}
+
+int
+rte_event_dma_adapter_stats_reset(uint8_t id)
+{
+	struct event_dma_adapter *adapter;
+	struct dma_device_info *dev_info;
+	struct rte_eventdev *dev;
+	uint16_t num_dma_dev;
+	uint32_t i;
+
+	EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+	adapter = edma_id_to_adapter(id);
+	if (adapter == NULL)
+		return -EINVAL;
+
+	num_dma_dev = rte_dma_count_avail();
+	dev = &rte_eventdevs[adapter->eventdev_id];
+	for (i = 0; i < num_dma_dev; i++) {
+		dev_info = &adapter->dma_devs[i];
+
+		if (dev_info->internal_event_port == 0 ||
+		    dev->dev_ops->dma_adapter_stats_reset == NULL)
+			continue;
+
+		(*dev->dev_ops->dma_adapter_stats_reset)(dev, i);
+	}
+
+	memset(&adapter->dma_stats, 0, sizeof(adapter->dma_stats));
+
+	return 0;
+}