From patchwork Fri Dec  3 16:36:27 2021
X-Patchwork-Submitter: Rahul Bhansali
X-Patchwork-Id: 104857
X-Patchwork-Delegate: jerinj@marvell.com
From: Rahul Bhansali
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Cc: Rahul Bhansali
Subject: [PATCH 2/2] net/cnxk: ethdev Rx/Tx queue status callbacks
Date: Fri, 3 Dec 2021 22:06:27 +0530
Message-ID: <20211203163627.3254236-2-rbhansali@marvell.com>
In-Reply-To: <20211203163627.3254236-1-rbhansali@marvell.com>
References: <20211203163627.3254236-1-rbhansali@marvell.com>

Provide ethdev callback support for rx_queue_count, rx_descriptor_status
and tx_descriptor_status.
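As context for reviewers: once these callbacks are registered, applications
reach them through the generic ethdev API rather than the driver directly.
A minimal usage sketch follows (the function name query_queue_status and the
port/queue/offset values are hypothetical, for illustration only):

#include <stdio.h>
#include <rte_ethdev.h>

/* Sketch: query queue fill level and descriptor state via the generic
 * ethdev API, which dispatches to the driver callbacks added by this
 * patch. Port/queue/offset values are placeholders.
 */
static void
query_queue_status(uint16_t port_id, uint16_t queue_id, uint16_t offset)
{
	int used, rx_state, tx_state;

	/* Number of descriptors currently used on the Rx queue */
	used = rte_eth_rx_queue_count(port_id, queue_id);
	if (used >= 0)
		printf("Rx queue %u: %d descriptors in use\n", queue_id, used);

	/* State of a single Rx descriptor at the given offset */
	rx_state = rte_eth_rx_descriptor_status(port_id, queue_id, offset);
	if (rx_state == RTE_ETH_RX_DESC_DONE)
		printf("Rx offset %u holds a received packet\n", offset);

	/* State of a single Tx descriptor at the given offset */
	tx_state = rte_eth_tx_descriptor_status(port_id, queue_id, offset);
	if (tx_state == RTE_ETH_TX_DESC_DONE)
		printf("Tx offset %u is completed and reusable\n", offset);
}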
Signed-off-by: Rahul Bhansali
---
 drivers/net/cnxk/cnxk_ethdev.c     |  3 ++
 drivers/net/cnxk/cnxk_ethdev.h     |  5 +++
 drivers/net/cnxk/cnxk_ethdev_ops.c | 60 ++++++++++++++++++++++++++++++
 3 files changed, 68 insertions(+)

diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 74f625553d..183fd241d8 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -1595,6 +1595,9 @@ cnxk_eth_dev_init(struct rte_eth_dev *eth_dev)
 	int rc, max_entries;
 
 	eth_dev->dev_ops = &cnxk_eth_dev_ops;
+	eth_dev->rx_queue_count = cnxk_nix_rx_queue_count;
+	eth_dev->rx_descriptor_status = cnxk_nix_rx_descriptor_status;
+	eth_dev->tx_descriptor_status = cnxk_nix_tx_descriptor_status;
 
 	/* Alloc security context */
 	sec_ctx = plt_zmalloc(sizeof(struct rte_security_ctx), 0);
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 5bfda3d815..43814a81fc 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -559,6 +559,11 @@ void cnxk_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t qid,
 void cnxk_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t qid,
 			   struct rte_eth_txq_info *qinfo);
 
+/* Queue status */
+int cnxk_nix_rx_descriptor_status(void *rxq, uint16_t offset);
+int cnxk_nix_tx_descriptor_status(void *txq, uint16_t offset);
+uint32_t cnxk_nix_rx_queue_count(void *rxq);
+
 /* Lookup configuration */
 const uint32_t *cnxk_nix_supported_ptypes_get(struct rte_eth_dev *eth_dev);
 void *cnxk_nix_fastpath_lookup_mem_get(void);
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index ce5f1f7240..1255d6b40f 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -694,6 +694,66 @@ cnxk_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t qid,
 	memcpy(&qinfo->conf, &txq_sp->qconf.conf.tx, sizeof(qinfo->conf));
 }
 
+uint32_t
+cnxk_nix_rx_queue_count(void *rxq)
+{
+	struct cnxk_eth_rxq_sp *rxq_sp = cnxk_eth_rxq_to_sp(rxq);
+	struct roc_nix *nix = &rxq_sp->dev->nix;
+	uint32_t head, tail;
+
+	roc_nix_cq_head_tail_get(nix, rxq_sp->qid, &head, &tail);
+	return (tail - head) % (rxq_sp->qconf.nb_desc);
+}
+
+static inline int
+nix_offset_has_packet(uint32_t head, uint32_t tail, uint16_t offset, bool is_rx)
+{
+	/* Check given offset(queue index) has packet filled/xmit by HW
+	 * in case of Rx or Tx.
+	 * Also, checks for wrap around case.
+	 */
+	return ((tail > head && offset <= tail && offset >= head) ||
+		(head > tail && (offset >= head || offset <= tail))) ?
+		       is_rx :
+		       !is_rx;
+}
+
+int
+cnxk_nix_rx_descriptor_status(void *rxq, uint16_t offset)
+{
+	struct cnxk_eth_rxq_sp *rxq_sp = cnxk_eth_rxq_to_sp(rxq);
+	struct roc_nix *nix = &rxq_sp->dev->nix;
+	uint32_t head, tail;
+
+	if (rxq_sp->qconf.nb_desc <= offset)
+		return -EINVAL;
+
+	roc_nix_cq_head_tail_get(nix, rxq_sp->qid, &head, &tail);
+
+	if (nix_offset_has_packet(head, tail, offset, 1))
+		return RTE_ETH_RX_DESC_DONE;
+	else
+		return RTE_ETH_RX_DESC_AVAIL;
+}
+
+int
+cnxk_nix_tx_descriptor_status(void *txq, uint16_t offset)
+{
+	struct cnxk_eth_txq_sp *txq_sp = cnxk_eth_txq_to_sp(txq);
+	struct roc_nix *nix = &txq_sp->dev->nix;
+	uint32_t head = 0, tail = 0;
+
+	if (txq_sp->qconf.nb_desc <= offset)
+		return -EINVAL;
+
+	roc_nix_sq_head_tail_get(nix, txq_sp->qid, &head, &tail);
+
+	if (nix_offset_has_packet(head, tail, offset, 0))
+		return RTE_ETH_TX_DESC_DONE;
+	else
+		return RTE_ETH_TX_DESC_FULL;
+}
+
 /* It is a NOP for cnxk as HW frees the buffer on xmit */
 int
 cnxk_nix_tx_done_cleanup(void *txq, uint32_t free_cnt)
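Not part of the patch: the wrap-around check in nix_offset_has_packet() is
easier to follow with concrete head/tail values. The standalone sketch below
restates the same predicate (the helper name offset_in_hw_window is invented
for illustration) and exercises both the non-wrapped and wrapped cases:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Re-statement of the wrap-around predicate from nix_offset_has_packet():
 * an offset counts as "processed by HW" when it lies inside the [head, tail]
 * window of the ring, taking wrap-around into account.
 */
static bool
offset_in_hw_window(uint32_t head, uint32_t tail, uint16_t offset)
{
	return (tail > head && offset <= tail && offset >= head) ||
	       (head > tail && (offset >= head || offset <= tail));
}

int
main(void)
{
	/* No wrap: head=10, tail=20 -> offsets 10..20 are inside the window */
	printf("%d\n", offset_in_hw_window(10, 20, 15));  /* 1 */
	printf("%d\n", offset_in_hw_window(10, 20, 5));   /* 0 */
	/* Wrapped: head=500, tail=3 -> offsets 500..end and 0..3 are inside */
	printf("%d\n", offset_in_hw_window(500, 3, 510)); /* 1 */
	printf("%d\n", offset_in_hw_window(500, 3, 100)); /* 0 */
	return 0;
}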