From patchwork Tue Jul 27 03:41:34 2021
X-Patchwork-Submitter: "Xueming(Steven) Li"
X-Patchwork-Id: 96301
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Xueming Li
CC: Viacheslav Ovsiienko, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
Date: Tue, 27 Jul 2021 11:41:34 +0800
Message-ID: <20210727034134.20556-1-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.25.1
Subject: [dpdk-dev] [RFC] ethdev: change queue release callback
List-Id: DPDK patches and discussions

To align with the other eth device queue configuration callbacks, change the
RX and TX queue release callback API parameter from the queue object to the
device and queue index.

Signed-off-by: Xueming Li
=========================
In a formal patch, there would be many changes to update all PMDs.
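As a sketch of the driver-side impact (hypothetical PMD names, not part of this patch), a queue release callback would change from receiving the queue object directly to looking it up by device and index:

```c
#include <stdint.h>
#include <stdlib.h>

/* Minimal stand-ins for the ethdev structures involved (illustration only). */
struct rte_eth_dev_data {
	void **rx_queues; /* per-queue private objects owned by the PMD */
};
struct rte_eth_dev {
	struct rte_eth_dev_data *data;
};

/* Hypothetical PMD receive queue object. */
struct my_rxq {
	void *ring;
};

/* Current callback shape: the ethdev layer passes the queue object. */
static void
my_rx_queue_release_old(void *queue)
{
	struct my_rxq *rxq = queue;

	if (rxq == NULL)
		return;
	free(rxq->ring);
	free(rxq);
}

/* Proposed callback shape: the PMD fetches its object by index,
 * matching the other per-queue callbacks that take (dev, queue_id).
 */
static void
my_rx_queue_release_new(struct rte_eth_dev *dev, uint16_t rx_queue_id)
{
	struct my_rxq *rxq = dev->data->rx_queues[rx_queue_id];

	if (rxq == NULL)
		return;
	free(rxq->ring);
	free(rxq);
	dev->data->rx_queues[rx_queue_id] = NULL;
}
```

With the (dev, queue_id) shape the PMD can also clear its own slot in dev->data->rx_queues, which the object-only signature could not express.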
---
 lib/ethdev/ethdev_driver.h |  3 ++-
 lib/ethdev/rte_ethdev.c    | 49 +++++++++++++++-----------------------
 2 files changed, 21 insertions(+), 31 deletions(-)

diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 40e474aa7e..838e8468e6 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -282,7 +282,8 @@ typedef int (*eth_rx_disable_intr_t)(struct rte_eth_dev *dev,
 				    uint16_t rx_queue_id);
 /**< @internal Disable interrupt of a receive queue of an Ethernet device. */
 
-typedef void (*eth_queue_release_t)(void *queue);
+typedef void (*eth_queue_release_t)(struct rte_eth_dev *dev,
+				    uint16_t rx_queue_id);
 /**< @internal Release memory resources allocated by given RX/TX queue. */
 
 typedef int (*eth_fw_version_get_t)(struct rte_eth_dev *dev,
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 9d95cd11e1..a1106f5896 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -906,12 +906,10 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 			return -(ENOMEM);
 		}
 	} else if (dev->data->rx_queues != NULL && nb_queues != 0) { /* re-configure */
-		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release, -ENOTSUP);
-
+		if (dev->dev_ops->rx_queue_release != NULL)
+			for (i = nb_queues; i < old_nb_queues; i++)
+				(*dev->dev_ops->rx_queue_release)(dev, i);
 		rxq = dev->data->rx_queues;
-
-		for (i = nb_queues; i < old_nb_queues; i++)
-			(*dev->dev_ops->rx_queue_release)(rxq[i]);
 		rxq = rte_realloc(rxq, sizeof(rxq[0]) * nb_queues,
 				  RTE_CACHE_LINE_SIZE);
 		if (rxq == NULL)
@@ -926,12 +924,10 @@ eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 
 		dev->data->rx_queues = rxq;
 	} else if (dev->data->rx_queues != NULL && nb_queues == 0) {
-		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release, -ENOTSUP);
-
-		rxq = dev->data->rx_queues;
-		for (i = nb_queues; i < old_nb_queues; i++)
-			(*dev->dev_ops->rx_queue_release)(rxq[i]);
+		if (dev->dev_ops->rx_queue_release != NULL)
+			for (i = nb_queues; i < old_nb_queues; i++)
+				(*dev->dev_ops->rx_queue_release)(dev, i);
 
 		rte_free(dev->data->rx_queues);
 		dev->data->rx_queues = NULL;
@@ -1146,12 +1142,11 @@ eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 			return -(ENOMEM);
 		}
 	} else if (dev->data->tx_queues != NULL && nb_queues != 0) { /* re-configure */
-		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release, -ENOTSUP);
-		txq = dev->data->tx_queues;
-		for (i = nb_queues; i < old_nb_queues; i++)
-			(*dev->dev_ops->tx_queue_release)(txq[i]);
+		if (dev->dev_ops->tx_queue_release != NULL)
+			for (i = nb_queues; i < old_nb_queues; i++)
+				(*dev->dev_ops->tx_queue_release)(dev, i);
 		txq = rte_realloc(txq, sizeof(txq[0]) * nb_queues,
 				  RTE_CACHE_LINE_SIZE);
 		if (txq == NULL)
@@ -1166,12 +1161,11 @@ eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 
 		dev->data->tx_queues = txq;
 	} else if (dev->data->tx_queues != NULL && nb_queues == 0) {
-		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release, -ENOTSUP);
-		txq = dev->data->tx_queues;
-		for (i = nb_queues; i < old_nb_queues; i++)
-			(*dev->dev_ops->tx_queue_release)(txq[i]);
+		if (dev->dev_ops->tx_queue_release != NULL)
+			for (i = nb_queues; i < old_nb_queues; i++)
+				(*dev->dev_ops->tx_queue_release)(dev, i);
 
 		rte_free(dev->data->tx_queues);
 		dev->data->tx_queues = NULL;
@@ -2113,9 +2107,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 
 	rxq = dev->data->rx_queues;
 	if (rxq[rx_queue_id]) {
-		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release,
-					-ENOTSUP);
-		(*dev->dev_ops->rx_queue_release)(rxq[rx_queue_id]);
+		(*dev->dev_ops->rx_queue_release)(dev, rx_queue_id);
 		rxq[rx_queue_id] = NULL;
 	}
 
@@ -2249,9 +2241,8 @@ rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 		return -EBUSY;
 	rxq = dev->data->rx_queues;
 	if (rxq[rx_queue_id] != NULL) {
-		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release,
-					-ENOTSUP);
-		(*dev->dev_ops->rx_queue_release)(rxq[rx_queue_id]);
+		if (dev->dev_ops->rx_queue_release != NULL)
+			(*dev->dev_ops->rx_queue_release)(dev, rx_queue_id);
 		rxq[rx_queue_id] = NULL;
 	}
 	ret = (*dev->dev_ops->rx_hairpin_queue_setup)(dev, rx_queue_id,
@@ -2317,9 +2308,8 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
 
 	txq = dev->data->tx_queues;
 	if (txq[tx_queue_id]) {
-		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release,
-					-ENOTSUP);
-		(*dev->dev_ops->tx_queue_release)(txq[tx_queue_id]);
+		if (dev->dev_ops->tx_queue_release != NULL)
+			(*dev->dev_ops->tx_queue_release)(dev, tx_queue_id);
 		txq[tx_queue_id] = NULL;
 	}
 
@@ -2429,9 +2419,8 @@ rte_eth_tx_hairpin_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
 		return -EBUSY;
	txq = dev->data->tx_queues;
 	if (txq[tx_queue_id] != NULL) {
-		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release,
-					-ENOTSUP);
-		(*dev->dev_ops->tx_queue_release)(txq[tx_queue_id]);
+		if (dev->dev_ops->tx_queue_release != NULL)
+			(*dev->dev_ops->tx_queue_release)(dev, tx_queue_id);
 		txq[tx_queue_id] = NULL;
 	}
 	ret = (*dev->dev_ops->tx_hairpin_queue_setup)
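One behavioral consequence of the hunks above: where RTE_FUNC_PTR_OR_ERR_RET() used to return -ENOTSUP for a missing release callback, the new NULL check makes release a silent no-op. A minimal sketch of that dispatch pattern (simplified stand-in structures, not the real ethdev definitions):

```c
#include <stddef.h>
#include <stdint.h>

struct rte_eth_dev; /* forward declaration for the callback type */

typedef void (*eth_queue_release_t)(struct rte_eth_dev *dev,
				    uint16_t queue_id);

struct eth_dev_ops {
	eth_queue_release_t rx_queue_release;
};
struct rte_eth_dev {
	const struct eth_dev_ops *dev_ops;
};

static int release_calls; /* instrumentation for this sketch only */

static void
counting_release(struct rte_eth_dev *dev, uint16_t queue_id)
{
	(void)dev;
	(void)queue_id;
	release_calls++;
}

/* Old behavior: a missing callback was an error (-ENOTSUP).
 * New behavior: a missing callback means "nothing to release",
 * so the call degrades to a no-op and always succeeds.
 */
static int
release_rx_queue(struct rte_eth_dev *dev, uint16_t queue_id)
{
	if (dev->dev_ops->rx_queue_release != NULL)
		(*dev->dev_ops->rx_queue_release)(dev, queue_id);
	return 0;
}
```

This matches how the patch treats the queue-release op as optional rather than mandatory in eth_dev_rx_queue_config() and eth_dev_tx_queue_config().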