From patchwork Tue Feb 27 13:37:13 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dariusz Sosnowski
X-Patchwork-Id: 137349
X-Patchwork-Delegate: rasland@nvidia.com
From: Dariusz Sosnowski
To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad
CC:
Subject: [PATCH v4 1/2] net/mlx5: move meter init functions
Date: Tue, 27 Feb 2024 15:37:13 +0200
Message-ID: <20240227133714.12705-2-dsosnowski@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240227133714.12705-1-dsosnowski@nvidia.com>
References: <20240222180059.50597-1-dsosnowski@nvidia.com>
 <20240227133714.12705-1-dsosnowski@nvidia.com>
List-Id: DPDK patches and discussions

Move mlx5_flow_meter_init() and mlx5_flow_meter_uninit() from
mlx5_flow_hw.c to mlx5_flow_meter.c, the module for meter operations.
Signed-off-by: Dariusz Sosnowski
Acked-by: Ori Kam
---
 drivers/net/mlx5/mlx5_flow_hw.c    | 203 ----------------------------
 drivers/net/mlx5/mlx5_flow_meter.c | 207 +++++++++++++++++++++++++++++
 2 files changed, 207 insertions(+), 203 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 769ec9ff94..49c164060b 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -13139,209 +13139,6 @@ mlx5_flow_hw_ctrl_flows(struct rte_eth_dev *dev, uint32_t flags)
 	return 0;
 }
 
-void
-mlx5_flow_meter_uninit(struct rte_eth_dev *dev)
-{
-	struct mlx5_priv *priv = dev->data->dev_private;
-
-	if (priv->mtr_policy_arr) {
-		mlx5_free(priv->mtr_policy_arr);
-		priv->mtr_policy_arr = NULL;
-	}
-	if (priv->mtr_profile_arr) {
-		mlx5_free(priv->mtr_profile_arr);
-		priv->mtr_profile_arr = NULL;
-	}
-	if (priv->hws_mpool) {
-		mlx5_aso_mtr_queue_uninit(priv->sh, priv->hws_mpool, NULL);
-		mlx5_ipool_destroy(priv->hws_mpool->idx_pool);
-		mlx5_free(priv->hws_mpool);
-		priv->hws_mpool = NULL;
-	}
-	if (priv->mtr_bulk.aso) {
-		mlx5_free(priv->mtr_bulk.aso);
-		priv->mtr_bulk.aso = NULL;
-		priv->mtr_bulk.size = 0;
-		mlx5_aso_queue_uninit(priv->sh, ASO_OPC_MOD_POLICER);
-	}
-	if (priv->mtr_bulk.action) {
-		mlx5dr_action_destroy(priv->mtr_bulk.action);
-		priv->mtr_bulk.action = NULL;
-	}
-	if (priv->mtr_bulk.devx_obj) {
-		claim_zero(mlx5_devx_cmd_destroy(priv->mtr_bulk.devx_obj));
-		priv->mtr_bulk.devx_obj = NULL;
-	}
-}
-
-int
-mlx5_flow_meter_init(struct rte_eth_dev *dev,
-		     uint32_t nb_meters,
-		     uint32_t nb_meter_profiles,
-		     uint32_t nb_meter_policies,
-		     uint32_t nb_queues)
-{
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_devx_obj *dcs = NULL;
-	uint32_t log_obj_size;
-	int ret = 0;
-	int reg_id;
-	struct mlx5_aso_mtr *aso;
-	uint32_t i;
-	struct rte_flow_error error;
-	uint32_t flags;
-	uint32_t nb_mtrs = rte_align32pow2(nb_meters);
-	struct mlx5_indexed_pool_config cfg = {
-		.size = sizeof(struct mlx5_aso_mtr),
-		.trunk_size = 1 << 12,
-		.per_core_cache = 1 << 13,
-		.need_lock = 1,
-		.release_mem_en = !!priv->sh->config.reclaim_mode,
-		.malloc = mlx5_malloc,
-		.max_idx = nb_meters,
-		.free = mlx5_free,
-		.type = "mlx5_hw_mtr_mark_action",
-	};
-
-	if (!nb_meters) {
-		ret = ENOTSUP;
-		rte_flow_error_set(&error, ENOMEM,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				   NULL, "Meter configuration is invalid.");
-		goto err;
-	}
-	if (!priv->mtr_en || !priv->sh->meter_aso_en) {
-		ret = ENOTSUP;
-		rte_flow_error_set(&error, ENOMEM,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				   NULL, "Meter ASO is not supported.");
-		goto err;
-	}
-	priv->mtr_config.nb_meters = nb_meters;
-	log_obj_size = rte_log2_u32(nb_meters >> 1);
-	dcs = mlx5_devx_cmd_create_flow_meter_aso_obj
-		(priv->sh->cdev->ctx, priv->sh->cdev->pdn,
-		 log_obj_size);
-	if (!dcs) {
-		ret = ENOMEM;
-		rte_flow_error_set(&error, ENOMEM,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				   NULL, "Meter ASO object allocation failed.");
-		goto err;
-	}
-	priv->mtr_bulk.devx_obj = dcs;
-	reg_id = mlx5_flow_get_reg_id(dev, MLX5_MTR_COLOR, 0, NULL);
-	if (reg_id < 0) {
-		ret = ENOTSUP;
-		rte_flow_error_set(&error, ENOMEM,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				   NULL, "Meter register is not available.");
-		goto err;
-	}
-	flags = MLX5DR_ACTION_FLAG_HWS_RX | MLX5DR_ACTION_FLAG_HWS_TX;
-	if (priv->sh->config.dv_esw_en && priv->master)
-		flags |= MLX5DR_ACTION_FLAG_HWS_FDB;
-	priv->mtr_bulk.action = mlx5dr_action_create_aso_meter
-		(priv->dr_ctx, (struct mlx5dr_devx_obj *)dcs,
-		 reg_id - REG_C_0, flags);
-	if (!priv->mtr_bulk.action) {
-		ret = ENOMEM;
-		rte_flow_error_set(&error, ENOMEM,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				   NULL, "Meter action creation failed.");
-		goto err;
-	}
-	priv->mtr_bulk.aso = mlx5_malloc(MLX5_MEM_ZERO,
-					 sizeof(struct mlx5_aso_mtr) *
-					 nb_meters,
-					 RTE_CACHE_LINE_SIZE,
-					 SOCKET_ID_ANY);
-	if (!priv->mtr_bulk.aso) {
-		ret = ENOMEM;
-		rte_flow_error_set(&error, ENOMEM,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				   NULL, "Meter bulk ASO allocation failed.");
-		goto err;
-	}
-	priv->mtr_bulk.size = nb_meters;
-	aso = priv->mtr_bulk.aso;
-	for (i = 0; i < priv->mtr_bulk.size; i++) {
-		aso->type = ASO_METER_DIRECT;
-		aso->state = ASO_METER_WAIT;
-		aso->offset = i;
-		aso++;
-	}
-	priv->hws_mpool = mlx5_malloc(MLX5_MEM_ZERO,
-				      sizeof(struct mlx5_aso_mtr_pool),
-				      RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
-	if (!priv->hws_mpool) {
-		ret = ENOMEM;
-		rte_flow_error_set(&error, ENOMEM,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				   NULL, "Meter ipool allocation failed.");
-		goto err;
-	}
-	priv->hws_mpool->devx_obj = priv->mtr_bulk.devx_obj;
-	priv->hws_mpool->action = priv->mtr_bulk.action;
-	priv->hws_mpool->nb_sq = nb_queues;
-	if (mlx5_aso_mtr_queue_init(priv->sh, priv->hws_mpool,
-				    &priv->sh->mtrmng->pools_mng, nb_queues)) {
-		ret = ENOMEM;
-		rte_flow_error_set(&error, ENOMEM,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				   NULL, "Meter ASO queue allocation failed.");
-		goto err;
-	}
-	/*
-	 * No need for local cache if Meter number is a small number.
-	 * Since flow insertion rate will be very limited in that case.
-	 * Here let's set the number to less than default trunk size 4K.
-	 */
-	if (nb_mtrs <= cfg.trunk_size) {
-		cfg.per_core_cache = 0;
-		cfg.trunk_size = nb_mtrs;
-	} else if (nb_mtrs <= MLX5_HW_IPOOL_SIZE_THRESHOLD) {
-		cfg.per_core_cache = MLX5_HW_IPOOL_CACHE_MIN;
-	}
-	priv->hws_mpool->idx_pool = mlx5_ipool_create(&cfg);
-	if (nb_meter_profiles) {
-		priv->mtr_config.nb_meter_profiles = nb_meter_profiles;
-		priv->mtr_profile_arr =
-			mlx5_malloc(MLX5_MEM_ZERO,
-				    sizeof(struct mlx5_flow_meter_profile) *
-				    nb_meter_profiles,
-				    RTE_CACHE_LINE_SIZE,
-				    SOCKET_ID_ANY);
-		if (!priv->mtr_profile_arr) {
-			ret = ENOMEM;
-			rte_flow_error_set(&error, ENOMEM,
-					   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-					   NULL, "Meter profile allocation failed.");
-			goto err;
-		}
-	}
-	if (nb_meter_policies) {
-		priv->mtr_config.nb_meter_policies = nb_meter_policies;
-		priv->mtr_policy_arr =
-			mlx5_malloc(MLX5_MEM_ZERO,
-				    sizeof(struct mlx5_flow_meter_policy) *
-				    nb_meter_policies,
-				    RTE_CACHE_LINE_SIZE,
-				    SOCKET_ID_ANY);
-		if (!priv->mtr_policy_arr) {
-			ret = ENOMEM;
-			rte_flow_error_set(&error, ENOMEM,
-					   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-					   NULL, "Meter policy allocation failed.");
-			goto err;
-		}
-	}
-	return 0;
-err:
-	mlx5_flow_meter_uninit(dev);
-	return ret;
-}
-
 static __rte_always_inline uint32_t
 mlx5_reformat_domain_to_tbl_type(const struct rte_flow_indir_action_conf *domain)
 {
diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c
index 7cbf772ea4..9cb4614436 100644
--- a/drivers/net/mlx5/mlx5_flow_meter.c
+++ b/drivers/net/mlx5/mlx5_flow_meter.c
@@ -15,6 +15,213 @@
 #include "mlx5.h"
 #include "mlx5_flow.h"
 
+#ifdef HAVE_MLX5_HWS_SUPPORT
+
+void
+mlx5_flow_meter_uninit(struct rte_eth_dev *dev)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	if (priv->mtr_policy_arr) {
+		mlx5_free(priv->mtr_policy_arr);
+		priv->mtr_policy_arr = NULL;
+	}
+	if (priv->mtr_profile_arr) {
+		mlx5_free(priv->mtr_profile_arr);
+		priv->mtr_profile_arr = NULL;
+	}
+	if (priv->hws_mpool) {
+		mlx5_aso_mtr_queue_uninit(priv->sh, priv->hws_mpool, NULL);
+		mlx5_ipool_destroy(priv->hws_mpool->idx_pool);
+		mlx5_free(priv->hws_mpool);
+		priv->hws_mpool = NULL;
+	}
+	if (priv->mtr_bulk.aso) {
+		mlx5_free(priv->mtr_bulk.aso);
+		priv->mtr_bulk.aso = NULL;
+		priv->mtr_bulk.size = 0;
+		mlx5_aso_queue_uninit(priv->sh, ASO_OPC_MOD_POLICER);
+	}
+	if (priv->mtr_bulk.action) {
+		mlx5dr_action_destroy(priv->mtr_bulk.action);
+		priv->mtr_bulk.action = NULL;
+	}
+	if (priv->mtr_bulk.devx_obj) {
+		claim_zero(mlx5_devx_cmd_destroy(priv->mtr_bulk.devx_obj));
+		priv->mtr_bulk.devx_obj = NULL;
+	}
+}
+
+int
+mlx5_flow_meter_init(struct rte_eth_dev *dev,
+		     uint32_t nb_meters,
+		     uint32_t nb_meter_profiles,
+		     uint32_t nb_meter_policies,
+		     uint32_t nb_queues)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_devx_obj *dcs = NULL;
+	uint32_t log_obj_size;
+	int ret = 0;
+	int reg_id;
+	struct mlx5_aso_mtr *aso;
+	uint32_t i;
+	struct rte_flow_error error;
+	uint32_t flags;
+	uint32_t nb_mtrs = rte_align32pow2(nb_meters);
+	struct mlx5_indexed_pool_config cfg = {
+		.size = sizeof(struct mlx5_aso_mtr),
+		.trunk_size = 1 << 12,
+		.per_core_cache = 1 << 13,
+		.need_lock = 1,
+		.release_mem_en = !!priv->sh->config.reclaim_mode,
+		.malloc = mlx5_malloc,
+		.max_idx = nb_meters,
+		.free = mlx5_free,
+		.type = "mlx5_hw_mtr_mark_action",
+	};
+
+	if (!nb_meters) {
+		ret = ENOTSUP;
+		rte_flow_error_set(&error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL, "Meter configuration is invalid.");
+		goto err;
+	}
+	if (!priv->mtr_en || !priv->sh->meter_aso_en) {
+		ret = ENOTSUP;
+		rte_flow_error_set(&error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL, "Meter ASO is not supported.");
+		goto err;
+	}
+	priv->mtr_config.nb_meters = nb_meters;
+	log_obj_size = rte_log2_u32(nb_meters >> 1);
+	dcs = mlx5_devx_cmd_create_flow_meter_aso_obj
+		(priv->sh->cdev->ctx, priv->sh->cdev->pdn,
+		 log_obj_size);
+	if (!dcs) {
+		ret = ENOMEM;
+		rte_flow_error_set(&error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL, "Meter ASO object allocation failed.");
+		goto err;
+	}
+	priv->mtr_bulk.devx_obj = dcs;
+	reg_id = mlx5_flow_get_reg_id(dev, MLX5_MTR_COLOR, 0, NULL);
+	if (reg_id < 0) {
+		ret = ENOTSUP;
+		rte_flow_error_set(&error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL, "Meter register is not available.");
+		goto err;
+	}
+	flags = MLX5DR_ACTION_FLAG_HWS_RX | MLX5DR_ACTION_FLAG_HWS_TX;
+	if (priv->sh->config.dv_esw_en && priv->master)
+		flags |= MLX5DR_ACTION_FLAG_HWS_FDB;
+	priv->mtr_bulk.action = mlx5dr_action_create_aso_meter
+		(priv->dr_ctx, (struct mlx5dr_devx_obj *)dcs,
+		 reg_id - REG_C_0, flags);
+	if (!priv->mtr_bulk.action) {
+		ret = ENOMEM;
+		rte_flow_error_set(&error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL, "Meter action creation failed.");
+		goto err;
+	}
+	priv->mtr_bulk.aso = mlx5_malloc(MLX5_MEM_ZERO,
+					 sizeof(struct mlx5_aso_mtr) *
+					 nb_meters,
+					 RTE_CACHE_LINE_SIZE,
+					 SOCKET_ID_ANY);
+	if (!priv->mtr_bulk.aso) {
+		ret = ENOMEM;
+		rte_flow_error_set(&error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL, "Meter bulk ASO allocation failed.");
+		goto err;
+	}
+	priv->mtr_bulk.size = nb_meters;
+	aso = priv->mtr_bulk.aso;
+	for (i = 0; i < priv->mtr_bulk.size; i++) {
+		aso->type = ASO_METER_DIRECT;
+		aso->state = ASO_METER_WAIT;
+		aso->offset = i;
+		aso++;
+	}
+	priv->hws_mpool = mlx5_malloc(MLX5_MEM_ZERO,
+				      sizeof(struct mlx5_aso_mtr_pool),
+				      RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
+	if (!priv->hws_mpool) {
+		ret = ENOMEM;
+		rte_flow_error_set(&error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL, "Meter ipool allocation failed.");
+		goto err;
+	}
+	priv->hws_mpool->devx_obj = priv->mtr_bulk.devx_obj;
+	priv->hws_mpool->action = priv->mtr_bulk.action;
+	priv->hws_mpool->nb_sq = nb_queues;
+	if (mlx5_aso_mtr_queue_init(priv->sh, priv->hws_mpool,
+				    &priv->sh->mtrmng->pools_mng, nb_queues)) {
+		ret = ENOMEM;
+		rte_flow_error_set(&error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL, "Meter ASO queue allocation failed.");
+		goto err;
+	}
+	/*
+	 * No need for local cache if Meter number is a small number.
+	 * Since flow insertion rate will be very limited in that case.
+	 * Here let's set the number to less than default trunk size 4K.
+	 */
+	if (nb_mtrs <= cfg.trunk_size) {
+		cfg.per_core_cache = 0;
+		cfg.trunk_size = nb_mtrs;
+	} else if (nb_mtrs <= MLX5_HW_IPOOL_SIZE_THRESHOLD) {
+		cfg.per_core_cache = MLX5_HW_IPOOL_CACHE_MIN;
+	}
+	priv->hws_mpool->idx_pool = mlx5_ipool_create(&cfg);
+	if (nb_meter_profiles) {
+		priv->mtr_config.nb_meter_profiles = nb_meter_profiles;
+		priv->mtr_profile_arr =
+			mlx5_malloc(MLX5_MEM_ZERO,
+				    sizeof(struct mlx5_flow_meter_profile) *
+				    nb_meter_profiles,
+				    RTE_CACHE_LINE_SIZE,
+				    SOCKET_ID_ANY);
+		if (!priv->mtr_profile_arr) {
+			ret = ENOMEM;
+			rte_flow_error_set(&error, ENOMEM,
+					   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					   NULL, "Meter profile allocation failed.");
+			goto err;
+		}
+	}
+	if (nb_meter_policies) {
+		priv->mtr_config.nb_meter_policies = nb_meter_policies;
+		priv->mtr_policy_arr =
+			mlx5_malloc(MLX5_MEM_ZERO,
+				    sizeof(struct mlx5_flow_meter_policy) *
+				    nb_meter_policies,
+				    RTE_CACHE_LINE_SIZE,
+				    SOCKET_ID_ANY);
+		if (!priv->mtr_policy_arr) {
+			ret = ENOMEM;
+			rte_flow_error_set(&error, ENOMEM,
+					   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					   NULL, "Meter policy allocation failed.");
+			goto err;
+		}
+	}
+	return 0;
+err:
+	mlx5_flow_meter_uninit(dev);
+	return ret;
+}
+
+#endif /* HAVE_MLX5_HWS_SUPPORT */
+
 static int mlx5_flow_meter_disable(struct rte_eth_dev *dev,
 		uint32_t meter_id, struct rte_mtr_error *error);
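For readers outside the mlx5 tree: the moved functions follow a common DPDK driver idiom, where init bails to a single `err:` label on any failure and uninit is written to be safe on a partially initialized state (each branch checks for NULL and resets the pointer, so err-path rollback and normal teardown share one function). A minimal self-contained sketch of that pattern follows; all names here are illustrative, not mlx5 APIs.

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Illustrative context holding two independently allocated resources. */
struct meter_ctx {
	uint32_t *profile_arr;
	uint32_t *policy_arr;
};

/*
 * Safe on partially initialized contexts: frees only what was
 * allocated and resets pointers, so a second call is a no-op.
 */
static void meter_uninit(struct meter_ctx *ctx)
{
	if (ctx->policy_arr) {
		free(ctx->policy_arr);
		ctx->policy_arr = NULL;
	}
	if (ctx->profile_arr) {
		free(ctx->profile_arr);
		ctx->profile_arr = NULL;
	}
}

/*
 * Every failure jumps to one err: label, which reuses meter_uninit()
 * to roll back whatever was already set up before returning.
 */
static int meter_init(struct meter_ctx *ctx,
		      uint32_t nb_profiles, uint32_t nb_policies)
{
	int ret = 0;

	if (!nb_profiles || !nb_policies) {
		ret = ENOTSUP;
		goto err;
	}
	ctx->profile_arr = calloc(nb_profiles, sizeof(*ctx->profile_arr));
	if (!ctx->profile_arr) {
		ret = ENOMEM;
		goto err;
	}
	ctx->policy_arr = calloc(nb_policies, sizeof(*ctx->policy_arr));
	if (!ctx->policy_arr) {
		ret = ENOMEM;
		goto err;
	}
	return 0;
err:
	meter_uninit(ctx);
	return ret;
}
```

One design consequence, visible in the real `mlx5_flow_meter_uninit()` as well, is that moving the pair into one translation unit keeps the allocation and release order side by side, which makes it easier to verify that the error path releases everything the success path acquired.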