From patchwork Fri Jun 18 10:36:57 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 94427
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Fri, 18 Jun 2021 16:06:57 +0530
Message-ID: <20210618103741.26526-19-ndabilpuram@marvell.com>
In-Reply-To: <20210618103741.26526-1-ndabilpuram@marvell.com>
References: <20210306153404.10781-1-ndabilpuram@marvell.com>
 <20210618103741.26526-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 18/62] net/cnxk: add queue start and stop support

Add Rx/Tx queue start and stop callbacks for CN9K and CN10K.
Signed-off-by: Nithin Dabilpuram
---
 doc/guides/nics/features/cnxk.ini     |  1 +
 doc/guides/nics/features/cnxk_vec.ini |  1 +
 doc/guides/nics/features/cnxk_vf.ini  |  1 +
 drivers/net/cnxk/cn10k_ethdev.c       | 16 ++++++
 drivers/net/cnxk/cn9k_ethdev.c        | 16 ++++++
 drivers/net/cnxk/cnxk_ethdev.c        | 92 +++++++++++++++++++++++++++++++++++
 drivers/net/cnxk/cnxk_ethdev.h        |  1 +
 7 files changed, 128 insertions(+)

diff --git a/doc/guides/nics/features/cnxk.ini b/doc/guides/nics/features/cnxk.ini
index 503582c..712f8d5 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -12,6 +12,7 @@ Link status = Y
 Link status event = Y
 Runtime Rx queue setup = Y
 Runtime Tx queue setup = Y
+Queue start/stop = Y
 RSS hash = Y
 Inner RSS = Y
 Packet type parsing = Y
diff --git a/doc/guides/nics/features/cnxk_vec.ini b/doc/guides/nics/features/cnxk_vec.ini
index 9ad225a..82f2af0 100644
--- a/doc/guides/nics/features/cnxk_vec.ini
+++ b/doc/guides/nics/features/cnxk_vec.ini
@@ -12,6 +12,7 @@ Link status = Y
 Link status event = Y
 Runtime Rx queue setup = Y
 Runtime Tx queue setup = Y
+Queue start/stop = Y
 RSS hash = Y
 Inner RSS = Y
 Packet type parsing = Y
diff --git a/doc/guides/nics/features/cnxk_vf.ini b/doc/guides/nics/features/cnxk_vf.ini
index 8c93ba7..61fed11 100644
--- a/doc/guides/nics/features/cnxk_vf.ini
+++ b/doc/guides/nics/features/cnxk_vf.ini
@@ -11,6 +11,7 @@ Link status = Y
 Link status event = Y
 Runtime Rx queue setup = Y
 Runtime Tx queue setup = Y
+Queue start/stop = Y
 RSS hash = Y
 Inner RSS = Y
 Packet type parsing = Y
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index f79d03c..d70ab00 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -138,6 +138,21 @@ cn10k_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 }
 
 static int
+cn10k_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx)
+{
+	struct cn10k_eth_txq *txq = eth_dev->data->tx_queues[qidx];
+	int rc;
+
+	rc = cnxk_nix_tx_queue_stop(eth_dev, qidx);
+	if (rc)
+		return rc;
+
+	/* Clear fc cache pkts to trigger worker stop */
+	txq->fc_cache_pkts = 0;
+	return 0;
+}
+
+static int
 cn10k_nix_configure(struct rte_eth_dev *eth_dev)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
@@ -169,6 +184,7 @@ nix_eth_dev_ops_override(void)
 	cnxk_eth_dev_ops.dev_configure = cn10k_nix_configure;
 	cnxk_eth_dev_ops.tx_queue_setup = cn10k_nix_tx_queue_setup;
 	cnxk_eth_dev_ops.rx_queue_setup = cn10k_nix_rx_queue_setup;
+	cnxk_eth_dev_ops.tx_queue_stop = cn10k_nix_tx_queue_stop;
 	cnxk_eth_dev_ops.dev_ptypes_set = cn10k_nix_ptypes_set;
 }
 
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index b5a4bd7..1dab096 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -136,6 +136,21 @@ cn9k_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 }
 
 static int
+cn9k_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx)
+{
+	struct cn9k_eth_txq *txq = eth_dev->data->tx_queues[qidx];
+	int rc;
+
+	rc = cnxk_nix_tx_queue_stop(eth_dev, qidx);
+	if (rc)
+		return rc;
+
+	/* Clear fc cache pkts to trigger worker stop */
+	txq->fc_cache_pkts = 0;
+	return 0;
+}
+
+static int
 cn9k_nix_configure(struct rte_eth_dev *eth_dev)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
@@ -178,6 +193,7 @@ nix_eth_dev_ops_override(void)
 	cnxk_eth_dev_ops.dev_configure = cn9k_nix_configure;
 	cnxk_eth_dev_ops.tx_queue_setup = cn9k_nix_tx_queue_setup;
 	cnxk_eth_dev_ops.rx_queue_setup = cn9k_nix_rx_queue_setup;
+	cnxk_eth_dev_ops.tx_queue_stop = cn9k_nix_tx_queue_stop;
 	cnxk_eth_dev_ops.dev_ptypes_set = cn9k_nix_ptypes_set;
 }
 
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 6a0bd50..98e0ba0 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -854,12 +854,104 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 	return rc;
 }
 
+static int
+cnxk_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qid)
+{
+	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct rte_eth_dev_data *data = eth_dev->data;
+	struct roc_nix_sq *sq = &dev->sqs[qid];
+	int rc = -EINVAL;
+
+	if (data->tx_queue_state[qid] == RTE_ETH_QUEUE_STATE_STARTED)
+		return 0;
+
+	rc = roc_nix_tm_sq_aura_fc(sq, true);
+	if (rc) {
+		plt_err("Failed to enable sq aura fc, txq=%u, rc=%d", qid, rc);
+		goto done;
+	}
+
+	data->tx_queue_state[qid] = RTE_ETH_QUEUE_STATE_STARTED;
+done:
+	return rc;
+}
+
+int
+cnxk_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qid)
+{
+	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct rte_eth_dev_data *data = eth_dev->data;
+	struct roc_nix_sq *sq = &dev->sqs[qid];
+	int rc;
+
+	if (data->tx_queue_state[qid] == RTE_ETH_QUEUE_STATE_STOPPED)
+		return 0;
+
+	rc = roc_nix_tm_sq_aura_fc(sq, false);
+	if (rc) {
+		plt_err("Failed to disable sqb aura fc, txq=%u, rc=%d", qid,
+			rc);
+		goto done;
+	}
+
+	data->tx_queue_state[qid] = RTE_ETH_QUEUE_STATE_STOPPED;
+done:
+	return rc;
+}
+
+static int
+cnxk_nix_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qid)
+{
+	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct rte_eth_dev_data *data = eth_dev->data;
+	struct roc_nix_rq *rq = &dev->rqs[qid];
+	int rc;
+
+	if (data->rx_queue_state[qid] == RTE_ETH_QUEUE_STATE_STARTED)
+		return 0;
+
+	rc = roc_nix_rq_ena_dis(rq, true);
+	if (rc) {
+		plt_err("Failed to enable rxq=%u, rc=%d", qid, rc);
+		goto done;
+	}
+
+	data->rx_queue_state[qid] = RTE_ETH_QUEUE_STATE_STARTED;
+done:
+	return rc;
+}
+
+static int
+cnxk_nix_rx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qid)
+{
+	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct rte_eth_dev_data *data = eth_dev->data;
+	struct roc_nix_rq *rq = &dev->rqs[qid];
+	int rc;
+
+	if (data->rx_queue_state[qid] == RTE_ETH_QUEUE_STATE_STOPPED)
+		return 0;
+
+	rc = roc_nix_rq_ena_dis(rq, false);
+	if (rc) {
+		plt_err("Failed to disable rxq=%u, rc=%d", qid, rc);
+		goto done;
+	}
+
+	data->rx_queue_state[qid] = RTE_ETH_QUEUE_STATE_STOPPED;
+done:
+	return rc;
+}
+
 /* CNXK platform independent eth dev ops */
 struct eth_dev_ops cnxk_eth_dev_ops = {
 	.dev_infos_get = cnxk_nix_info_get,
 	.link_update = cnxk_nix_link_update,
 	.tx_queue_release = cnxk_nix_tx_queue_release,
 	.rx_queue_release = cnxk_nix_rx_queue_release,
+	.tx_queue_start = cnxk_nix_tx_queue_start,
+	.rx_queue_start = cnxk_nix_rx_queue_start,
+	.rx_queue_stop = cnxk_nix_rx_queue_stop,
 	.dev_supported_ptypes_get = cnxk_nix_supported_ptypes_get,
 };
 
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index b23df4a..5a52489 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -214,6 +214,7 @@ int cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 			    uint16_t nb_desc, uint16_t fp_rx_q_sz,
 			    const struct rte_eth_rxconf *rx_conf,
 			    struct rte_mempool *mp);
+int cnxk_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qid);
 uint64_t cnxk_nix_rxq_mbuf_setup(struct cnxk_eth_dev *dev);
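
For context, a minimal sketch (not part of the patch) of how an application reaches these new callbacks through the public ethdev API. It assumes a cnxk port that is already configured and started; port_id, qid and restart_queue_pair() are placeholder names chosen here for illustration.

#include <rte_ethdev.h>

/* Illustrative only; assumes a configured and started cnxk port. */
static int
restart_queue_pair(uint16_t port_id, uint16_t qid)
{
	int rc;

	/* The stop calls land in cnxk_nix_rx_queue_stop() and the
	 * cn9k/cn10k Tx stop callbacks, which disable the RQ or SQB aura
	 * flow control and mark the queue RTE_ETH_QUEUE_STATE_STOPPED.
	 */
	rc = rte_eth_dev_rx_queue_stop(port_id, qid);
	if (rc != 0)
		return rc;
	rc = rte_eth_dev_tx_queue_stop(port_id, qid);
	if (rc != 0)
		return rc;

	/* ... reconfigure or drain while the queue pair is quiesced ... */

	/* The start calls land in cnxk_nix_rx_queue_start() and
	 * cnxk_nix_tx_queue_start(), which re-enable the queues and mark
	 * them RTE_ETH_QUEUE_STATE_STARTED.
	 */
	rc = rte_eth_dev_rx_queue_start(port_id, qid);
	if (rc != 0)
		return rc;
	return rte_eth_dev_tx_queue_start(port_id, qid);
}

Queues can also be set up with rx_deferred_start/tx_deferred_start in their rte_eth_rxconf/rte_eth_txconf so that rte_eth_dev_start() leaves them stopped, and then be brought up individually with the same start calls.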