From patchwork Fri Sep 30 12:53:10 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 117222
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
To: Matan Azrad, Viacheslav Ovsiienko
CC: "Alexander Kozyrev"
Subject: [PATCH v3 12/17] net/mlx5: implement METER MARK indirect action for HWS
Date: Fri, 30 Sep 2022 15:53:10 +0300
Message-ID: <20220930125315.5079-13-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20220930125315.5079-1-suanmingm@nvidia.com>
References: <20220923144334.27736-1-suanmingm@nvidia.com> <20220930125315.5079-1-suanmingm@nvidia.com>
MIME-Version: 1.0
X-BeenThere: dev@dpdk.org
List-Id: DPDK patches and discussions

From: Alexander Kozyrev

Add the ability to create an indirect action handle for METER_MARK.
This allows one meter to be shared between several different actions.

Signed-off-by: Alexander Kozyrev
---
 drivers/net/mlx5/mlx5.c            |   4 +-
 drivers/net/mlx5/mlx5.h            |  33 ++-
 drivers/net/mlx5/mlx5_flow.c       |   6 +
 drivers/net/mlx5/mlx5_flow.h       |  19 +-
 drivers/net/mlx5/mlx5_flow_aso.c   | 139 +++++++--
 drivers/net/mlx5/mlx5_flow_dv.c    | 145 +++++++++-
 drivers/net/mlx5/mlx5_flow_hw.c    | 438 +++++++++++++++++++++++++----
 drivers/net/mlx5/mlx5_flow_meter.c |  79 +++++-
 8 files changed, 764 insertions(+), 99 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 925e19bcd5..383a789dfa 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -442,7 +442,7 @@ mlx5_flow_aso_age_mng_init(struct mlx5_dev_ctx_shared *sh)
 		rte_errno = ENOMEM;
 		return -ENOMEM;
 	}
-	err = mlx5_aso_queue_init(sh, ASO_OPC_MOD_FLOW_HIT);
+	err = mlx5_aso_queue_init(sh, ASO_OPC_MOD_FLOW_HIT, 1);
 	if (err) {
 		mlx5_free(sh->aso_age_mng);
 		return -1;
@@ -763,7 +763,7 @@ mlx5_flow_aso_ct_mng_init(struct mlx5_dev_ctx_shared *sh)
 		rte_errno = ENOMEM;
 		return -rte_errno;
 	}
-	err = mlx5_aso_queue_init(sh, ASO_OPC_MOD_CONNECTION_TRACKING);
+	err = mlx5_aso_queue_init(sh, ASO_OPC_MOD_CONNECTION_TRACKING, MLX5_ASO_CT_SQ_NUM);
 	if (err) {
 		mlx5_free(sh->ct_mng);
 		/* rte_errno should be extracted from the failure.
*/ diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 7ec5f6a352..89dc8441dc 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -971,12 +971,16 @@ enum mlx5_aso_mtr_type { /* Generic aso_flow_meter information. */ struct mlx5_aso_mtr { - LIST_ENTRY(mlx5_aso_mtr) next; + union { + LIST_ENTRY(mlx5_aso_mtr) next; + struct mlx5_aso_mtr_pool *pool; + }; enum mlx5_aso_mtr_type type; struct mlx5_flow_meter_info fm; /**< Pointer to the next aso flow meter structure. */ uint8_t state; /**< ASO flow meter state. */ uint32_t offset; + enum rte_color init_color; }; /* Generic aso_flow_meter pool structure. */ @@ -985,7 +989,11 @@ struct mlx5_aso_mtr_pool { /*Must be the first in pool*/ struct mlx5_devx_obj *devx_obj; /* The devx object of the minimum aso flow meter ID. */ + struct mlx5dr_action *action; /* HWS action. */ + struct mlx5_indexed_pool *idx_pool; /* HWS index pool. */ uint32_t index; /* Pool index in management structure. */ + uint32_t nb_sq; /* Number of ASO SQ. */ + struct mlx5_aso_sq *sq; /* ASO SQs. */ }; LIST_HEAD(aso_meter_list, mlx5_aso_mtr); @@ -1678,6 +1686,7 @@ struct mlx5_priv { struct mlx5_aso_ct_pools_mng *ct_mng; /* Management data for ASO connection tracking. */ struct mlx5_aso_ct_pool *hws_ctpool; /* HW steering's CT pool. */ + struct mlx5_aso_mtr_pool *hws_mpool; /* HW steering's Meter pool. 
*/ #endif }; @@ -1998,7 +2007,8 @@ void mlx5_pmd_socket_uninit(void); int mlx5_flow_meter_init(struct rte_eth_dev *dev, uint32_t nb_meters, uint32_t nb_meter_profiles, - uint32_t nb_meter_policies); + uint32_t nb_meter_policies, + uint32_t nb_queues); void mlx5_flow_meter_uninit(struct rte_eth_dev *dev); int mlx5_flow_meter_ops_get(struct rte_eth_dev *dev, void *arg); struct mlx5_flow_meter_info *mlx5_flow_meter_find(struct mlx5_priv *priv, @@ -2067,15 +2077,24 @@ eth_tx_burst_t mlx5_select_tx_function(struct rte_eth_dev *dev); /* mlx5_flow_aso.c */ +int mlx5_aso_mtr_queue_init(struct mlx5_dev_ctx_shared *sh, + struct mlx5_aso_mtr_pool *hws_pool, + struct mlx5_aso_mtr_pools_mng *pool_mng, + uint32_t nb_queues); +void mlx5_aso_mtr_queue_uninit(struct mlx5_dev_ctx_shared *sh, + struct mlx5_aso_mtr_pool *hws_pool, + struct mlx5_aso_mtr_pools_mng *pool_mng); int mlx5_aso_queue_init(struct mlx5_dev_ctx_shared *sh, - enum mlx5_access_aso_opc_mod aso_opc_mod); + enum mlx5_access_aso_opc_mod aso_opc_mode, + uint32_t nb_queues); int mlx5_aso_flow_hit_queue_poll_start(struct mlx5_dev_ctx_shared *sh); int mlx5_aso_flow_hit_queue_poll_stop(struct mlx5_dev_ctx_shared *sh); void mlx5_aso_queue_uninit(struct mlx5_dev_ctx_shared *sh, - enum mlx5_access_aso_opc_mod aso_opc_mod); -int mlx5_aso_meter_update_by_wqe(struct mlx5_dev_ctx_shared *sh, - struct mlx5_aso_mtr *mtr, struct mlx5_mtr_bulk *bulk); -int mlx5_aso_mtr_wait(struct mlx5_dev_ctx_shared *sh, + enum mlx5_access_aso_opc_mod aso_opc_mod); +int mlx5_aso_meter_update_by_wqe(struct mlx5_dev_ctx_shared *sh, uint32_t queue, + struct mlx5_aso_mtr *mtr, + struct mlx5_mtr_bulk *bulk); +int mlx5_aso_mtr_wait(struct mlx5_dev_ctx_shared *sh, uint32_t queue, struct mlx5_aso_mtr *mtr); int mlx5_aso_ct_update_by_wqe(struct mlx5_dev_ctx_shared *sh, uint32_t queue, struct mlx5_aso_ct_action *ct, diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index cbf9c31984..9627ffc979 100644 --- a/drivers/net/mlx5/mlx5_flow.c 
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -4221,6 +4221,12 @@ flow_action_handles_translate(struct rte_eth_dev *dev,
 				MLX5_RTE_FLOW_ACTION_TYPE_COUNT;
 			translated[handle->index].conf = (void *)(uintptr_t)idx;
 			break;
+		case MLX5_INDIRECT_ACTION_TYPE_METER_MARK:
+			translated[handle->index].type =
+				(enum rte_flow_action_type)
+				MLX5_RTE_FLOW_ACTION_TYPE_METER_MARK;
+			translated[handle->index].conf = (void *)(uintptr_t)idx;
+			break;
 		case MLX5_INDIRECT_ACTION_TYPE_AGE:
 			if (priv->sh->flow_hit_aso_en) {
 				translated[handle->index].type =
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 6d928b477e..ffa4f28255 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -46,6 +46,7 @@ enum mlx5_rte_flow_action_type {
 	MLX5_RTE_FLOW_ACTION_TYPE_COUNT,
 	MLX5_RTE_FLOW_ACTION_TYPE_JUMP,
 	MLX5_RTE_FLOW_ACTION_TYPE_RSS,
+	MLX5_RTE_FLOW_ACTION_TYPE_METER_MARK,
 };

 /* Private (internal) Field IDs for MODIFY_FIELD action. */
@@ -54,22 +55,23 @@ enum mlx5_rte_flow_field_id {
 	MLX5_RTE_FLOW_FIELD_META_REG,
 };

-#define MLX5_INDIRECT_ACTION_TYPE_OFFSET 30
+#define MLX5_INDIRECT_ACTION_TYPE_OFFSET 29

 enum {
 	MLX5_INDIRECT_ACTION_TYPE_RSS,
 	MLX5_INDIRECT_ACTION_TYPE_AGE,
 	MLX5_INDIRECT_ACTION_TYPE_COUNT,
 	MLX5_INDIRECT_ACTION_TYPE_CT,
+	MLX5_INDIRECT_ACTION_TYPE_METER_MARK,
 };

-/* Now, the maximal ports will be supported is 256, action number is 4M. */
-#define MLX5_INDIRECT_ACT_CT_MAX_PORT 0x100
+/* Now, the maximal ports will be supported is 16, action number is 32M. */
+#define MLX5_INDIRECT_ACT_CT_MAX_PORT 0x10
 #define MLX5_INDIRECT_ACT_CT_OWNER_SHIFT 22
 #define MLX5_INDIRECT_ACT_CT_OWNER_MASK (MLX5_INDIRECT_ACT_CT_MAX_PORT - 1)

-/* 30-31: type, 22-29: owner port, 0-21: index. */
+/* 29-31: type, 25-28: owner port, 0-24: index */
 #define MLX5_INDIRECT_ACT_CT_GEN_IDX(owner, index) \
 	((MLX5_INDIRECT_ACTION_TYPE_CT << MLX5_INDIRECT_ACTION_TYPE_OFFSET) | \
 	 (((owner) & MLX5_INDIRECT_ACT_CT_OWNER_MASK) << \
@@ -207,6 +209,9 @@ enum mlx5_feature_name {
 #define MLX5_FLOW_ITEM_PORT_REPRESENTOR (UINT64_C(1) << 41)
 #define MLX5_FLOW_ITEM_REPRESENTED_PORT (UINT64_C(1) << 42)

+/* Meter color item */
+#define MLX5_FLOW_ITEM_METER_COLOR (UINT64_C(1) << 44)
+
 /* Outer Masks. */
 #define MLX5_FLOW_LAYER_OUTER_L3 \
 	(MLX5_FLOW_LAYER_OUTER_L3_IPV4 | MLX5_FLOW_LAYER_OUTER_L3_IPV6)
@@ -1108,6 +1113,7 @@ struct rte_flow_hw {
 	struct rte_flow_template_table *table; /* The table flow allcated from. */
 	struct mlx5dr_rule rule; /* HWS layer data struct. */
 	uint32_t cnt_id;
+	uint32_t mtr_id;
 } __rte_packed;

 /* rte flow action translate to DR action struct. */
@@ -1154,6 +1160,9 @@ struct mlx5_action_construct_data {
 		struct {
 			uint32_t id;
 		} shared_counter;
+		struct {
+			uint32_t id;
+		} shared_meter;
 	};
 };

@@ -1237,6 +1246,7 @@ struct mlx5_hw_actions {
 	uint16_t encap_decap_pos; /* Encap/Decap action position. */
 	uint32_t mark:1; /* Indicate the mark action. */
 	uint32_t cnt_id; /* Counter id. */
+	uint32_t mtr_id; /* Meter id. */
 	/* Translated DR action array from action template.
*/ struct mlx5dr_rule_action rule_acts[MLX5_HW_MAX_ACTS]; }; @@ -1524,6 +1534,7 @@ flow_hw_get_reg_id(enum rte_flow_item_type type, uint32_t id) */ return REG_A; case RTE_FLOW_ITEM_TYPE_CONNTRACK: + case RTE_FLOW_ITEM_TYPE_METER_COLOR: return mlx5_flow_hw_aso_tag; case RTE_FLOW_ITEM_TYPE_TAG: MLX5_ASSERT(id < MLX5_FLOW_HW_TAGS_MAX); diff --git a/drivers/net/mlx5/mlx5_flow_aso.c b/drivers/net/mlx5/mlx5_flow_aso.c index c00c07b891..f371fff2e2 100644 --- a/drivers/net/mlx5/mlx5_flow_aso.c +++ b/drivers/net/mlx5/mlx5_flow_aso.c @@ -275,6 +275,65 @@ mlx5_aso_sq_create(struct mlx5_common_device *cdev, struct mlx5_aso_sq *sq, return -1; } +void +mlx5_aso_mtr_queue_uninit(struct mlx5_dev_ctx_shared *sh __rte_unused, + struct mlx5_aso_mtr_pool *hws_pool, + struct mlx5_aso_mtr_pools_mng *pool_mng) +{ + uint32_t i; + + if (hws_pool) { + for (i = 0; i < hws_pool->nb_sq; i++) + mlx5_aso_destroy_sq(hws_pool->sq + i); + mlx5_free(hws_pool->sq); + return; + } + if (pool_mng) + mlx5_aso_destroy_sq(&pool_mng->sq); +} + +int +mlx5_aso_mtr_queue_init(struct mlx5_dev_ctx_shared *sh, + struct mlx5_aso_mtr_pool *hws_pool, + struct mlx5_aso_mtr_pools_mng *pool_mng, + uint32_t nb_queues) +{ + struct mlx5_common_device *cdev = sh->cdev; + struct mlx5_aso_sq *sq; + uint32_t i; + + if (hws_pool) { + sq = mlx5_malloc(MLX5_MEM_ZERO, + sizeof(struct mlx5_aso_sq) * nb_queues, + RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY); + if (!sq) + return -1; + hws_pool->sq = sq; + for (i = 0; i < nb_queues; i++) { + if (mlx5_aso_sq_create(cdev, hws_pool->sq + i, + sh->tx_uar.obj, + MLX5_ASO_QUEUE_LOG_DESC)) + goto error; + mlx5_aso_mtr_init_sq(hws_pool->sq + i); + } + hws_pool->nb_sq = nb_queues; + } + if (pool_mng) { + if (mlx5_aso_sq_create(cdev, &pool_mng->sq, + sh->tx_uar.obj, + MLX5_ASO_QUEUE_LOG_DESC)) + return -1; + mlx5_aso_mtr_init_sq(&pool_mng->sq); + } + return 0; +error: + do { + if (&hws_pool->sq[i]) + mlx5_aso_destroy_sq(hws_pool->sq + i); + } while (i--); + return -1; +} + /** * API to create and 
initialize Send Queue used for ASO access. * @@ -282,13 +341,16 @@ mlx5_aso_sq_create(struct mlx5_common_device *cdev, struct mlx5_aso_sq *sq, * Pointer to shared device context. * @param[in] aso_opc_mod * Mode of ASO feature. + * @param[in] nb_queues + * Number of Send Queues to create. * * @return * 0 on success, a negative errno value otherwise and rte_errno is set. */ int mlx5_aso_queue_init(struct mlx5_dev_ctx_shared *sh, - enum mlx5_access_aso_opc_mod aso_opc_mod) + enum mlx5_access_aso_opc_mod aso_opc_mod, + uint32_t nb_queues) { uint32_t sq_desc_n = 1 << MLX5_ASO_QUEUE_LOG_DESC; struct mlx5_common_device *cdev = sh->cdev; @@ -307,10 +369,9 @@ mlx5_aso_queue_init(struct mlx5_dev_ctx_shared *sh, mlx5_aso_age_init_sq(&sh->aso_age_mng->aso_sq); break; case ASO_OPC_MOD_POLICER: - if (mlx5_aso_sq_create(cdev, &sh->mtrmng->pools_mng.sq, - sh->tx_uar.obj, MLX5_ASO_QUEUE_LOG_DESC)) + if (mlx5_aso_mtr_queue_init(sh, NULL, + &sh->mtrmng->pools_mng, nb_queues)) return -1; - mlx5_aso_mtr_init_sq(&sh->mtrmng->pools_mng.sq); break; case ASO_OPC_MOD_CONNECTION_TRACKING: if (mlx5_aso_ct_queue_init(sh, sh->ct_mng, MLX5_ASO_CT_SQ_NUM)) @@ -343,7 +404,7 @@ mlx5_aso_queue_uninit(struct mlx5_dev_ctx_shared *sh, sq = &sh->aso_age_mng->aso_sq; break; case ASO_OPC_MOD_POLICER: - sq = &sh->mtrmng->pools_mng.sq; + mlx5_aso_mtr_queue_uninit(sh, NULL, &sh->mtrmng->pools_mng); break; case ASO_OPC_MOD_CONNECTION_TRACKING: mlx5_aso_ct_queue_uninit(sh, sh->ct_mng); @@ -666,7 +727,8 @@ static uint16_t mlx5_aso_mtr_sq_enqueue_single(struct mlx5_dev_ctx_shared *sh, struct mlx5_aso_sq *sq, struct mlx5_aso_mtr *aso_mtr, - struct mlx5_mtr_bulk *bulk) + struct mlx5_mtr_bulk *bulk, + bool need_lock) { volatile struct mlx5_aso_wqe *wqe = NULL; struct mlx5_flow_meter_info *fm = NULL; @@ -679,11 +741,13 @@ mlx5_aso_mtr_sq_enqueue_single(struct mlx5_dev_ctx_shared *sh, uint32_t param_le; int id; - rte_spinlock_lock(&sq->sqsl); + if (need_lock) + rte_spinlock_lock(&sq->sqsl); res = size - 
(uint16_t)(sq->head - sq->tail); if (unlikely(!res)) { DRV_LOG(ERR, "Fail: SQ is full and no free WQE to send"); - rte_spinlock_unlock(&sq->sqsl); + if (need_lock) + rte_spinlock_unlock(&sq->sqsl); return 0; } wqe = &sq->sq_obj.aso_wqes[sq->head & mask]; @@ -692,8 +756,11 @@ mlx5_aso_mtr_sq_enqueue_single(struct mlx5_dev_ctx_shared *sh, fm = &aso_mtr->fm; sq->elts[sq->head & mask].mtr = aso_mtr; if (aso_mtr->type == ASO_METER_INDIRECT) { - pool = container_of(aso_mtr, struct mlx5_aso_mtr_pool, - mtrs[aso_mtr->offset]); + if (likely(sh->config.dv_flow_en == 2)) + pool = aso_mtr->pool; + else + pool = container_of(aso_mtr, struct mlx5_aso_mtr_pool, + mtrs[aso_mtr->offset]); id = pool->devx_obj->id; } else { id = bulk->devx_obj->id; @@ -756,7 +823,8 @@ mlx5_aso_mtr_sq_enqueue_single(struct mlx5_dev_ctx_shared *sh, mlx5_doorbell_ring(&sh->tx_uar.bf_db, *(volatile uint64_t *)wqe, sq->pi, &sq->sq_obj.db_rec[MLX5_SND_DBR], !sh->tx_uar.dbnc); - rte_spinlock_unlock(&sq->sqsl); + if (need_lock) + rte_spinlock_unlock(&sq->sqsl); return 1; } @@ -779,7 +847,7 @@ mlx5_aso_mtrs_status_update(struct mlx5_aso_sq *sq, uint16_t aso_mtrs_nums) } static void -mlx5_aso_mtr_completion_handle(struct mlx5_aso_sq *sq) +mlx5_aso_mtr_completion_handle(struct mlx5_aso_sq *sq, bool need_lock) { struct mlx5_aso_cq *cq = &sq->cq; volatile struct mlx5_cqe *restrict cqe; @@ -791,7 +859,8 @@ mlx5_aso_mtr_completion_handle(struct mlx5_aso_sq *sq) uint16_t n = 0; int ret; - rte_spinlock_lock(&sq->sqsl); + if (need_lock) + rte_spinlock_lock(&sq->sqsl); max = (uint16_t)(sq->head - sq->tail); if (unlikely(!max)) { rte_spinlock_unlock(&sq->sqsl); @@ -823,7 +892,8 @@ mlx5_aso_mtr_completion_handle(struct mlx5_aso_sq *sq) rte_io_wmb(); cq->cq_obj.db_rec[0] = rte_cpu_to_be_32(cq->cq_ci); } - rte_spinlock_unlock(&sq->sqsl); + if (need_lock) + rte_spinlock_unlock(&sq->sqsl); } /** @@ -840,16 +910,30 @@ mlx5_aso_mtr_completion_handle(struct mlx5_aso_sq *sq) * 0 on success, a negative errno value otherwise and 
rte_errno is set. */ int -mlx5_aso_meter_update_by_wqe(struct mlx5_dev_ctx_shared *sh, +mlx5_aso_meter_update_by_wqe(struct mlx5_dev_ctx_shared *sh, uint32_t queue, struct mlx5_aso_mtr *mtr, struct mlx5_mtr_bulk *bulk) { - struct mlx5_aso_sq *sq = &sh->mtrmng->pools_mng.sq; + struct mlx5_aso_sq *sq; uint32_t poll_wqe_times = MLX5_MTR_POLL_WQE_CQE_TIMES; + bool need_lock; + if (likely(sh->config.dv_flow_en == 2)) { + if (queue == MLX5_HW_INV_QUEUE) { + sq = &mtr->pool->sq[mtr->pool->nb_sq - 1]; + need_lock = true; + } else { + sq = &mtr->pool->sq[queue]; + need_lock = false; + } + } else { + sq = &sh->mtrmng->pools_mng.sq; + need_lock = true; + } do { - mlx5_aso_mtr_completion_handle(sq); - if (mlx5_aso_mtr_sq_enqueue_single(sh, sq, mtr, bulk)) + mlx5_aso_mtr_completion_handle(sq, need_lock); + if (mlx5_aso_mtr_sq_enqueue_single(sh, sq, mtr, + bulk, need_lock)) return 0; /* Waiting for wqe resource. */ rte_delay_us_sleep(MLX5_ASO_WQE_CQE_RESPONSE_DELAY); @@ -873,17 +957,30 @@ mlx5_aso_meter_update_by_wqe(struct mlx5_dev_ctx_shared *sh, * 0 on success, a negative errno value otherwise and rte_errno is set. 
*/ int -mlx5_aso_mtr_wait(struct mlx5_dev_ctx_shared *sh, +mlx5_aso_mtr_wait(struct mlx5_dev_ctx_shared *sh, uint32_t queue, struct mlx5_aso_mtr *mtr) { - struct mlx5_aso_sq *sq = &sh->mtrmng->pools_mng.sq; + struct mlx5_aso_sq *sq; uint32_t poll_cqe_times = MLX5_MTR_POLL_WQE_CQE_TIMES; + bool need_lock; + if (likely(sh->config.dv_flow_en == 2)) { + if (queue == MLX5_HW_INV_QUEUE) { + sq = &mtr->pool->sq[mtr->pool->nb_sq - 1]; + need_lock = true; + } else { + sq = &mtr->pool->sq[queue]; + need_lock = false; + } + } else { + sq = &sh->mtrmng->pools_mng.sq; + need_lock = true; + } if (__atomic_load_n(&mtr->state, __ATOMIC_RELAXED) == ASO_METER_READY) return 0; do { - mlx5_aso_mtr_completion_handle(sq); + mlx5_aso_mtr_completion_handle(sq, need_lock); if (__atomic_load_n(&mtr->state, __ATOMIC_RELAXED) == ASO_METER_READY) return 0; diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 7f81272150..a42eb99154 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -1387,6 +1387,7 @@ mlx5_flow_item_field_width(struct rte_eth_dev *dev, return inherit < 0 ? 
0 : inherit; case RTE_FLOW_FIELD_IPV4_ECN: case RTE_FLOW_FIELD_IPV6_ECN: + case RTE_FLOW_FIELD_METER_COLOR: return 2; default: MLX5_ASSERT(false); @@ -1856,6 +1857,31 @@ mlx5_flow_field_id_to_modify_info info[idx].offset = data->offset; } break; + case RTE_FLOW_FIELD_METER_COLOR: + { + const uint32_t color_mask = + (UINT32_C(1) << MLX5_MTR_COLOR_BITS) - 1; + int reg; + + if (priv->sh->config.dv_flow_en == 2) + reg = flow_hw_get_reg_id + (RTE_FLOW_ITEM_TYPE_METER_COLOR, 0); + else + reg = mlx5_flow_get_reg_id(dev, MLX5_MTR_COLOR, + 0, error); + if (reg < 0) + return; + MLX5_ASSERT(reg != REG_NON); + MLX5_ASSERT((unsigned int)reg < RTE_DIM(reg_to_field)); + info[idx] = (struct field_modify_info){4, 0, + reg_to_field[reg]}; + if (mask) + mask[idx] = flow_modify_info_mask_32_masked + (width, data->offset, color_mask); + else + info[idx].offset = data->offset; + } + break; case RTE_FLOW_FIELD_POINTER: case RTE_FLOW_FIELD_VALUE: default: @@ -1913,7 +1939,9 @@ flow_dv_convert_action_modify_field item.spec = conf->src.field == RTE_FLOW_FIELD_POINTER ? (void *)(uintptr_t)conf->src.pvalue : (void *)(uintptr_t)&conf->src.value; - if (conf->dst.field == RTE_FLOW_FIELD_META) { + if (conf->dst.field == RTE_FLOW_FIELD_META || + conf->dst.field == RTE_FLOW_FIELD_TAG || + conf->dst.field == RTE_FLOW_FIELD_METER_COLOR) { meta = *(const unaligned_uint32_t *)item.spec; meta = rte_cpu_to_be_32(meta); item.spec = &meta; @@ -3687,6 +3715,69 @@ flow_dv_validate_action_aso_ct(struct rte_eth_dev *dev, return 0; } +/** + * Validate METER_COLOR item. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in] item + * Item specification. + * @param[in] attr + * Attributes of flow that includes this item. + * @param[out] error + * Pointer to error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
+ */ +static int +flow_dv_validate_item_meter_color(struct rte_eth_dev *dev, + const struct rte_flow_item *item, + const struct rte_flow_attr *attr __rte_unused, + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + const struct rte_flow_item_meter_color *spec = item->spec; + const struct rte_flow_item_meter_color *mask = item->mask; + struct rte_flow_item_meter_color nic_mask = { + .color = RTE_COLORS + }; + int ret; + + if (priv->mtr_color_reg == REG_NON) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "meter color register" + " isn't available"); + ret = mlx5_flow_get_reg_id(dev, MLX5_MTR_COLOR, 0, error); + if (ret < 0) + return ret; + if (!spec) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM_SPEC, + item->spec, + "data cannot be empty"); + if (spec->color > RTE_COLORS) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, + &spec->color, + "meter color is invalid"); + if (!mask) + mask = &rte_flow_item_meter_color_mask; + if (!mask->color) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM_SPEC, NULL, + "mask cannot be zero"); + + ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask, + (const uint8_t *)&nic_mask, + sizeof(struct rte_flow_item_meter_color), + MLX5_ITEM_RANGE_NOT_ACCEPTED, error); + if (ret < 0) + return ret; + return 0; +} + int flow_dv_encap_decap_match_cb(void *tool_ctx __rte_unused, struct mlx5_list_entry *entry, void *cb_ctx) @@ -6519,7 +6610,7 @@ flow_dv_mtr_container_resize(struct rte_eth_dev *dev) return -ENOMEM; } if (!pools_mng->n) - if (mlx5_aso_queue_init(priv->sh, ASO_OPC_MOD_POLICER)) { + if (mlx5_aso_queue_init(priv->sh, ASO_OPC_MOD_POLICER, 1)) { mlx5_free(pools); return -ENOMEM; } @@ -7421,6 +7512,13 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, if (ret < 0) return ret; break; + case RTE_FLOW_ITEM_TYPE_METER_COLOR: + ret = 
flow_dv_validate_item_meter_color(dev, items, + attr, error); + if (ret < 0) + return ret; + last_item = MLX5_FLOW_ITEM_METER_COLOR; + break; default: return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, @@ -10508,6 +10606,45 @@ flow_dv_translate_item_flex(struct rte_eth_dev *dev, void *matcher, void *key, mlx5_flex_flow_translate_item(dev, matcher, key, item, is_inner); } +/** + * Add METER_COLOR item to matcher + * + * @param[in] dev + * The device to configure through. + * @param[in, out] key + * Flow matcher value. + * @param[in] item + * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. + */ +static void +flow_dv_translate_item_meter_color(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + uint32_t key_type) +{ + const struct rte_flow_item_meter_color *color_m = item->mask; + const struct rte_flow_item_meter_color *color_v = item->spec; + uint32_t value, mask; + int reg = REG_NON; + + MLX5_ASSERT(color_v); + if (MLX5_ITEM_VALID(item, key_type)) + return; + MLX5_ITEM_UPDATE(item, key_type, color_v, color_m, + &rte_flow_item_meter_color_mask); + value = rte_col_2_mlx5_col(color_v->color); + mask = color_m ? + color_m->color : (UINT32_C(1) << MLX5_MTR_COLOR_BITS) - 1; + if (!!(key_type & MLX5_SET_MATCHER_SW)) + reg = mlx5_flow_get_reg_id(dev, MLX5_MTR_COLOR, 0, NULL); + else + reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_METER_COLOR, 0); + if (reg == REG_NON) + return; + flow_dv_match_meta_reg(key, (enum modify_reg)reg, value, mask); +} + static uint32_t matcher_zero[MLX5_ST_SZ_DW(fte_match_param)] = { 0 }; #define HEADER_IS_ZERO(match_criteria, headers) \ @@ -13260,6 +13397,10 @@ flow_dv_translate_items(struct rte_eth_dev *dev, /* No other protocol should follow eCPRI layer. 
*/ last_item = MLX5_FLOW_LAYER_ECPRI; break; + case RTE_FLOW_ITEM_TYPE_METER_COLOR: + flow_dv_translate_item_meter_color(dev, key, items, key_type); + last_item = MLX5_FLOW_ITEM_METER_COLOR; + break; default: break; } diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 71a134f224..d498d203d5 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -412,6 +412,10 @@ __flow_hw_action_template_destroy(struct rte_eth_dev *dev, mlx5_hws_cnt_shared_put(priv->hws_cpool, &acts->cnt_id); acts->cnt_id = 0; } + if (acts->mtr_id) { + mlx5_ipool_free(priv->hws_mpool->idx_pool, acts->mtr_id); + acts->mtr_id = 0; + } } /** @@ -628,6 +632,42 @@ __flow_hw_act_data_shared_cnt_append(struct mlx5_priv *priv, return 0; } +/** + * Append shared meter_mark action to the dynamic action list. + * + * @param[in] priv + * Pointer to the port private data structure. + * @param[in] acts + * Pointer to the template HW steering DR actions. + * @param[in] type + * Action type. + * @param[in] action_src + * Offset of source rte flow action. + * @param[in] action_dst + * Offset of destination DR action. + * @param[in] mtr_id + * Shared meter id. + * + * @return + * 0 on success, negative value otherwise and rte_errno is set. + */ +static __rte_always_inline int +__flow_hw_act_data_shared_mtr_append(struct mlx5_priv *priv, + struct mlx5_hw_actions *acts, + enum rte_flow_action_type type, + uint16_t action_src, + uint16_t action_dst, + cnt_id_t mtr_id) +{ struct mlx5_action_construct_data *act_data; + + act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst); + if (!act_data) + return -1; + act_data->type = type; + act_data->shared_meter.id = mtr_id; + LIST_INSERT_HEAD(&acts->act_list, act_data, next); + return 0; +} /** * Translate shared indirect action. 
@@ -682,6 +722,13 @@ flow_hw_shared_action_translate(struct rte_eth_dev *dev, idx, &acts->rule_acts[action_dst])) return -1; break; + case MLX5_INDIRECT_ACTION_TYPE_METER_MARK: + if (__flow_hw_act_data_shared_mtr_append(priv, acts, + (enum rte_flow_action_type) + MLX5_RTE_FLOW_ACTION_TYPE_METER_MARK, + action_src, action_dst, idx)) + return -1; + break; default: DRV_LOG(WARNING, "Unsupported shared action type:%d", type); break; @@ -888,6 +935,7 @@ flow_hw_modify_field_compile(struct rte_eth_dev *dev, (void *)(uintptr_t)&conf->src.value; if (conf->dst.field == RTE_FLOW_FIELD_META || conf->dst.field == RTE_FLOW_FIELD_TAG || + conf->dst.field == RTE_FLOW_FIELD_METER_COLOR || conf->dst.field == (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG) { value = *(const unaligned_uint32_t *)item.spec; value = rte_cpu_to_be_32(value); @@ -1047,7 +1095,7 @@ flow_hw_meter_compile(struct rte_eth_dev *dev, acts->rule_acts[jump_pos].action = (!!group) ? acts->jump->hws_action : acts->jump->root_action; - if (mlx5_aso_mtr_wait(priv->sh, aso_mtr)) + if (mlx5_aso_mtr_wait(priv->sh, MLX5_HW_INV_QUEUE, aso_mtr)) return -ENOMEM; return 0; } @@ -1121,6 +1169,74 @@ static rte_be32_t vlan_hdr_to_be32(const struct rte_flow_action *actions) #endif } +static __rte_always_inline struct mlx5_aso_mtr * +flow_hw_meter_mark_alloc(struct rte_eth_dev *dev, + const struct rte_flow_action *action, + uint32_t queue) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_aso_mtr_pool *pool = priv->hws_mpool; + const struct rte_flow_action_meter_mark *meter_mark = action->conf; + struct mlx5_aso_mtr *aso_mtr; + struct mlx5_flow_meter_info *fm; + uint32_t mtr_id; + + aso_mtr = mlx5_ipool_malloc(priv->hws_mpool->idx_pool, &mtr_id); + if (!aso_mtr) + return NULL; + /* Fill the flow meter parameters. 
 */
+	aso_mtr->type = ASO_METER_INDIRECT;
+	fm = &aso_mtr->fm;
+	fm->meter_id = mtr_id;
+	fm->profile = (struct mlx5_flow_meter_profile *)(meter_mark->profile);
+	fm->is_enable = meter_mark->state;
+	fm->color_aware = meter_mark->color_mode;
+	aso_mtr->pool = pool;
+	aso_mtr->state = ASO_METER_WAIT;
+	aso_mtr->offset = mtr_id - 1;
+	aso_mtr->init_color = (meter_mark->color_mode) ?
+		meter_mark->init_color : RTE_COLOR_GREEN;
+	/* Update ASO flow meter by wqe. */
+	if (mlx5_aso_meter_update_by_wqe(priv->sh, queue, aso_mtr,
+					 &priv->mtr_bulk)) {
+		mlx5_ipool_free(pool->idx_pool, mtr_id);
+		return NULL;
+	}
+	/* Wait for ASO object completion. */
+	if (queue == MLX5_HW_INV_QUEUE &&
+	    mlx5_aso_mtr_wait(priv->sh, MLX5_HW_INV_QUEUE, aso_mtr)) {
+		mlx5_ipool_free(pool->idx_pool, mtr_id);
+		return NULL;
+	}
+	return aso_mtr;
+}
+
+static __rte_always_inline int
+flow_hw_meter_mark_compile(struct rte_eth_dev *dev,
+			   uint16_t aso_mtr_pos,
+			   const struct rte_flow_action *action,
+			   struct mlx5dr_rule_action *acts,
+			   uint32_t *index,
+			   uint32_t queue)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_aso_mtr_pool *pool = priv->hws_mpool;
+	struct mlx5_aso_mtr *aso_mtr;
+
+	aso_mtr = flow_hw_meter_mark_alloc(dev, action, queue);
+	if (!aso_mtr)
+		return -1;
+
+	/* Compile METER_MARK action */
+	acts[aso_mtr_pos].action = pool->action;
+	acts[aso_mtr_pos].aso_meter.offset = aso_mtr->offset;
+	acts[aso_mtr_pos].aso_meter.init_color =
+		(enum mlx5dr_action_aso_meter_color)
+		rte_col_2_mlx5_col(aso_mtr->init_color);
+	*index = aso_mtr->fm.meter_id;
+	return 0;
+}
+
 /**
  * Translate rte_flow actions to DR action.
 *
@@ -1428,6 +1544,24 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 			goto err;
 		}
 		break;
+	case RTE_FLOW_ACTION_TYPE_METER_MARK:
+		action_pos = at->actions_off[actions - at->actions];
+		if (actions->conf && masks->conf &&
+		    ((const struct rte_flow_action_meter_mark *)
+		     masks->conf)->profile) {
+			err = flow_hw_meter_mark_compile(dev,
+					action_pos, actions,
+					acts->rule_acts,
+					&acts->mtr_id,
+					MLX5_HW_INV_QUEUE);
+			if (err)
+				goto err;
+		} else if (__flow_hw_act_data_general_append(priv, acts,
+						actions->type,
+						actions - action_start,
+						action_pos))
+			goto err;
+		break;
 	case RTE_FLOW_ACTION_TYPE_END:
 		actions_end = true;
 		break;
@@ -1624,8 +1758,10 @@ flow_hw_shared_action_construct(struct rte_eth_dev *dev, uint32_t queue,
 			       struct mlx5dr_rule_action *rule_act)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_aso_mtr_pool *pool = priv->hws_mpool;
 	struct mlx5_action_construct_data act_data;
 	struct mlx5_shared_action_rss *shared_rss;
+	struct mlx5_aso_mtr *aso_mtr;
 	uint32_t act_idx = (uint32_t)(uintptr_t)action->conf;
 	uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET;
 	uint32_t idx = act_idx &
@@ -1661,6 +1797,17 @@ flow_hw_shared_action_construct(struct rte_eth_dev *dev, uint32_t queue,
 		if (flow_hw_ct_compile(dev, queue, idx, rule_act))
 			return -1;
 		break;
+	case MLX5_INDIRECT_ACTION_TYPE_METER_MARK:
+		/* Find ASO object.
 */
+		aso_mtr = mlx5_ipool_get(pool->idx_pool, idx);
+		if (!aso_mtr)
+			return -1;
+		rule_act->action = pool->action;
+		rule_act->aso_meter.offset = aso_mtr->offset;
+		rule_act->aso_meter.init_color =
+			(enum mlx5dr_action_aso_meter_color)
+			rte_col_2_mlx5_col(aso_mtr->init_color);
+		break;
 	default:
 		DRV_LOG(WARNING, "Unsupported shared action type:%d", type);
 		break;
@@ -1730,6 +1877,7 @@ flow_hw_modify_field_construct(struct mlx5_hw_q_job *job,
 		rte_memcpy(values, mhdr_action->src.pvalue, sizeof(values));
 	if (mhdr_action->dst.field == RTE_FLOW_FIELD_META ||
 	    mhdr_action->dst.field == RTE_FLOW_FIELD_TAG ||
+	    mhdr_action->dst.field == RTE_FLOW_FIELD_METER_COLOR ||
 	    mhdr_action->dst.field == (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG) {
 		value_p = (unaligned_uint32_t *)values;
 		*value_p = rte_cpu_to_be_32(*value_p);
@@ -1807,6 +1955,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			  uint32_t queue)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_aso_mtr_pool *pool = priv->hws_mpool;
 	struct rte_flow_template_table *table = job->flow->table;
 	struct mlx5_action_construct_data *act_data;
 	const struct rte_flow_actions_template *at = hw_at->action_template;
@@ -1823,8 +1972,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 	uint32_t ft_flag;
 	size_t encap_len = 0;
 	int ret;
-	struct mlx5_aso_mtr *mtr;
-	uint32_t mtr_id;
+	struct mlx5_aso_mtr *aso_mtr;
 
 	rte_memcpy(rule_acts, hw_acts->rule_acts, sizeof(*rule_acts) * at->dr_actions_num);
 	attr.group = table->grp->group_id;
@@ -1858,6 +2006,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 		struct mlx5_hrxq *hrxq;
 		uint32_t ct_idx;
 		cnt_id_t cnt_id;
+		uint32_t mtr_id;
 
 		action = &actions[act_data->action_src];
 		/*
@@ -1964,13 +2113,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 		case RTE_FLOW_ACTION_TYPE_METER:
 			meter = action->conf;
 			mtr_id = meter->mtr_id;
-			mtr = mlx5_aso_meter_by_idx(priv, mtr_id);
+			aso_mtr = mlx5_aso_meter_by_idx(priv, mtr_id);
 			rule_acts[act_data->action_dst].action =
 				priv->mtr_bulk.action;
 			rule_acts[act_data->action_dst].aso_meter.offset =
-						mtr->offset;
+						aso_mtr->offset;
 			jump = flow_hw_jump_action_register
-				(dev, &table->cfg, mtr->fm.group, NULL);
+				(dev, &table->cfg, aso_mtr->fm.group, NULL);
 			if (!jump)
 				return -1;
 			MLX5_ASSERT
@@ -1980,7 +2129,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 					jump->root_action;
 			job->flow->jump = jump;
 			job->flow->fate_type = MLX5_FLOW_FATE_JUMP;
-			if (mlx5_aso_mtr_wait(priv->sh, mtr))
+			if (mlx5_aso_mtr_wait(priv->sh, MLX5_HW_INV_QUEUE, aso_mtr))
 				return -1;
 			break;
 		case RTE_FLOW_ACTION_TYPE_COUNT:
@@ -2016,6 +2165,28 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 				&rule_acts[act_data->action_dst]))
 				return -1;
 			break;
+		case MLX5_RTE_FLOW_ACTION_TYPE_METER_MARK:
+			mtr_id = act_data->shared_meter.id &
+				((1u << MLX5_INDIRECT_ACTION_TYPE_OFFSET) - 1);
+			/* Find ASO object. */
+			aso_mtr = mlx5_ipool_get(pool->idx_pool, mtr_id);
+			if (!aso_mtr)
+				return -1;
+			rule_acts[act_data->action_dst].action =
+					pool->action;
+			rule_acts[act_data->action_dst].aso_meter.offset =
+					aso_mtr->offset;
+			rule_acts[act_data->action_dst].aso_meter.init_color =
+					(enum mlx5dr_action_aso_meter_color)
+					rte_col_2_mlx5_col(aso_mtr->init_color);
+			break;
+		case RTE_FLOW_ACTION_TYPE_METER_MARK:
+			ret = flow_hw_meter_mark_compile(dev,
+				act_data->action_dst, action,
+				rule_acts, &job->flow->mtr_id, queue);
+			if (ret != 0)
+				return ret;
+			break;
 		default:
 			break;
 		}
@@ -2283,6 +2454,7 @@ flow_hw_pull(struct rte_eth_dev *dev,
 	     struct rte_flow_error *error)
{
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_aso_mtr_pool *pool = priv->hws_mpool;
 	struct mlx5_hw_q_job *job;
 	int ret, i;
@@ -2307,6 +2479,10 @@ flow_hw_pull(struct rte_eth_dev *dev,
 				&job->flow->cnt_id);
 			job->flow->cnt_id = 0;
 		}
+		if (job->flow->mtr_id) {
+			mlx5_ipool_free(pool->idx_pool, job->flow->mtr_id);
+			job->flow->mtr_id = 0;
+		}
 		mlx5_ipool_free(job->flow->table->flow, job->flow->idx);
 	}
 	priv->hw_q[queue].job[priv->hw_q[queue].job_idx++] =
 			job;
@@ -3189,6 +3365,9 @@ flow_hw_actions_validate(struct rte_eth_dev *dev,
 		case RTE_FLOW_ACTION_TYPE_METER:
 			/* TODO: Validation logic */
 			break;
+		case RTE_FLOW_ACTION_TYPE_METER_MARK:
+			/* TODO: Validation logic */
+			break;
 		case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
 			ret = flow_hw_validate_action_modify_field(action,
 								   mask,
@@ -3282,6 +3461,11 @@ flow_hw_dr_actions_template_handle_shared(const struct rte_flow_action *mask,
 		action_types[*curr_off] = MLX5DR_ACTION_TYP_ASO_CT;
 		*curr_off = *curr_off + 1;
 		break;
+	case RTE_FLOW_ACTION_TYPE_METER_MARK:
+		at->actions_off[action_src] = *curr_off;
+		action_types[*curr_off] = MLX5DR_ACTION_TYP_ASO_METER;
+		*curr_off = *curr_off + 1;
+		break;
 	default:
 		DRV_LOG(WARNING, "Unsupported shared action type: %d", type);
 		return -EINVAL;
@@ -3373,6 +3557,12 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at)
 				MLX5_HW_VLAN_PUSH_PCP_IDX :
 				MLX5_HW_VLAN_PUSH_VID_IDX;
 			break;
+		case RTE_FLOW_ACTION_TYPE_METER_MARK:
+			at->actions_off[i] = curr_off;
+			action_types[curr_off++] = MLX5DR_ACTION_TYP_ASO_METER;
+			if (curr_off >= MLX5_HW_MAX_ACTS)
+				goto err_actions_num;
+			break;
 		default:
 			type = mlx5_hw_dr_action_types[at->actions[i].type];
 			at->actions_off[i] = curr_off;
@@ -3848,6 +4038,16 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
 					  " attribute");
 			}
 			break;
+		case RTE_FLOW_ITEM_TYPE_METER_COLOR:
+		{
+			int reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_METER_COLOR, 0);
+			if (reg == REG_NON)
+				return rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					NULL,
+					"Unsupported meter color register");
+			break;
+		}
 		case RTE_FLOW_ITEM_TYPE_VOID:
 		case RTE_FLOW_ITEM_TYPE_ETH:
 		case RTE_FLOW_ITEM_TYPE_VLAN:
@@ -5363,7 +5563,8 @@ flow_hw_configure(struct rte_eth_dev *dev,
 	if (mlx5_flow_meter_init(dev,
 				 port_attr->nb_meters,
 				 port_attr->nb_meter_profiles,
-				 port_attr->nb_meter_policies)
+				 port_attr->nb_meter_policies,
+				 nb_q_updated))
 		goto err;
 	/* Add global actions.
 */
 	for (i = 0; i < MLX5_HW_ACTION_FLAG_MAX; i++) {
@@ -5867,7 +6068,9 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
 {
 	struct rte_flow_action_handle *handle = NULL;
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_aso_mtr *aso_mtr;
 	cnt_id_t cnt_id;
+	uint32_t mtr_id;
 
 	RTE_SET_USED(queue);
 	RTE_SET_USED(attr);
@@ -5886,6 +6089,14 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
 	case RTE_FLOW_ACTION_TYPE_CONNTRACK:
 		handle = flow_hw_conntrack_create(dev, queue, action->conf, error);
 		break;
+	case RTE_FLOW_ACTION_TYPE_METER_MARK:
+		aso_mtr = flow_hw_meter_mark_alloc(dev, action, queue);
+		if (!aso_mtr)
+			break;
+		mtr_id = (MLX5_INDIRECT_ACTION_TYPE_METER_MARK <<
+			MLX5_INDIRECT_ACTION_TYPE_OFFSET) | (aso_mtr->fm.meter_id);
+		handle = (struct rte_flow_action_handle *)(uintptr_t)mtr_id;
+		break;
 	default:
 		handle = flow_dv_action_create(dev, conf, action, error);
 	}
@@ -5921,18 +6132,59 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
 			     void *user_data,
 			     struct rte_flow_error *error)
 {
-	uint32_t act_idx = (uint32_t)(uintptr_t)handle;
-	uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET;
-
 	RTE_SET_USED(queue);
 	RTE_SET_USED(attr);
 	RTE_SET_USED(user_data);
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_aso_mtr_pool *pool = priv->hws_mpool;
+	const struct rte_flow_update_meter_mark *upd_meter_mark =
+		(const struct rte_flow_update_meter_mark *)update;
+	const struct rte_flow_action_meter_mark *meter_mark;
+	struct mlx5_aso_mtr *aso_mtr;
+	struct mlx5_flow_meter_info *fm;
+	uint32_t act_idx = (uint32_t)(uintptr_t)handle;
+	uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET;
+	uint32_t idx = act_idx & ((1u << MLX5_INDIRECT_ACTION_TYPE_OFFSET) - 1);
+
 	switch (type) {
 	case MLX5_INDIRECT_ACTION_TYPE_CT:
 		return flow_hw_conntrack_update(dev, queue, update, act_idx, error);
+	case MLX5_INDIRECT_ACTION_TYPE_METER_MARK:
+		meter_mark = &upd_meter_mark->meter_mark;
+		/*
Find ASO object. */
+		aso_mtr = mlx5_ipool_get(pool->idx_pool, idx);
+		if (!aso_mtr)
+			return rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				NULL, "Invalid meter_mark update index");
+		fm = &aso_mtr->fm;
+		if (upd_meter_mark->profile_valid)
+			fm->profile = (struct mlx5_flow_meter_profile *)
+				(meter_mark->profile);
+		if (upd_meter_mark->color_mode_valid)
+			fm->color_aware = meter_mark->color_mode;
+		if (upd_meter_mark->init_color_valid)
+			aso_mtr->init_color = (meter_mark->color_mode) ?
+				meter_mark->init_color : RTE_COLOR_GREEN;
+		if (upd_meter_mark->state_valid)
+			fm->is_enable = meter_mark->state;
+		/* Update ASO flow meter by wqe. */
+		if (mlx5_aso_meter_update_by_wqe(priv->sh, queue,
+						 aso_mtr, &priv->mtr_bulk))
+			return rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				NULL, "Unable to update ASO meter WQE");
+		/* Wait for ASO object completion. */
+		if (queue == MLX5_HW_INV_QUEUE &&
+		    mlx5_aso_mtr_wait(priv->sh, MLX5_HW_INV_QUEUE, aso_mtr))
+			return rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				NULL, "Unable to wait for ASO meter CQE");
+		return 0;
 	default:
-		return flow_dv_action_update(dev, handle, update, error);
+		break;
 	}
+	return flow_dv_action_update(dev, handle, update, error);
 }
 
 /**
@@ -5963,7 +6215,11 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
 {
 	uint32_t act_idx = (uint32_t)(uintptr_t)handle;
 	uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET;
+	uint32_t idx = act_idx & ((1u << MLX5_INDIRECT_ACTION_TYPE_OFFSET) - 1);
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_aso_mtr_pool *pool = priv->hws_mpool;
+	struct mlx5_aso_mtr *aso_mtr;
+	struct mlx5_flow_meter_info *fm;
 
 	RTE_SET_USED(queue);
 	RTE_SET_USED(attr);
@@ -5973,6 +6229,28 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
 		return mlx5_hws_cnt_shared_put(priv->hws_cpool, &act_idx);
 	case MLX5_INDIRECT_ACTION_TYPE_CT:
 		return
 		flow_hw_conntrack_destroy(dev, act_idx, error);
+	case MLX5_INDIRECT_ACTION_TYPE_METER_MARK:
+		aso_mtr = mlx5_ipool_get(pool->idx_pool, idx);
+		if (!aso_mtr)
+			return rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				NULL, "Invalid meter_mark destroy index");
+		fm = &aso_mtr->fm;
+		fm->is_enable = 0;
+		/* Update ASO flow meter by wqe. */
+		if (mlx5_aso_meter_update_by_wqe(priv->sh, queue, aso_mtr,
+						 &priv->mtr_bulk))
+			return rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				NULL, "Unable to update ASO meter WQE");
+		/* Wait for ASO object completion. */
+		if (queue == MLX5_HW_INV_QUEUE &&
+		    mlx5_aso_mtr_wait(priv->sh, MLX5_HW_INV_QUEUE, aso_mtr))
+			return rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				NULL, "Unable to wait for ASO meter CQE");
+		mlx5_ipool_free(pool->idx_pool, idx);
+		return 0;
 	default:
 		return flow_dv_action_destroy(dev, handle, error);
 	}
@@ -6056,8 +6334,8 @@ flow_hw_action_create(struct rte_eth_dev *dev,
 		       const struct rte_flow_action *action,
 		       struct rte_flow_error *err)
 {
-	return flow_hw_action_handle_create(dev, UINT32_MAX, NULL, conf, action,
-			NULL, err);
+	return flow_hw_action_handle_create(dev, MLX5_HW_INV_QUEUE,
+			NULL, conf, action, NULL, err);
 }
 
 /**
@@ -6082,8 +6360,8 @@ flow_hw_action_destroy(struct rte_eth_dev *dev,
 			struct rte_flow_action_handle *handle,
 			struct rte_flow_error *error)
 {
-	return flow_hw_action_handle_destroy(dev, UINT32_MAX, NULL, handle,
-			NULL, error);
+	return flow_hw_action_handle_destroy(dev, MLX5_HW_INV_QUEUE,
+			NULL, handle, NULL, error);
 }
 
 /**
@@ -6111,8 +6389,8 @@ flow_hw_action_update(struct rte_eth_dev *dev,
 		       const void *update,
 		       struct rte_flow_error *err)
 {
-	return flow_hw_action_handle_update(dev, UINT32_MAX, NULL, handle,
-			update, NULL, err);
+	return flow_hw_action_handle_update(dev, MLX5_HW_INV_QUEUE,
+			NULL, handle, update, NULL, err);
 }
 
 static int
@@ -6642,6 +6920,12 @@ mlx5_flow_meter_uninit(struct rte_eth_dev *dev)
 		mlx5_free(priv->mtr_profile_arr);
 		priv->mtr_profile_arr = NULL;
 	}
+	if (priv->hws_mpool) {
+		mlx5_aso_mtr_queue_uninit(priv->sh, priv->hws_mpool, NULL);
+		mlx5_ipool_destroy(priv->hws_mpool->idx_pool);
+		mlx5_free(priv->hws_mpool);
+		priv->hws_mpool = NULL;
+	}
 	if (priv->mtr_bulk.aso) {
 		mlx5_free(priv->mtr_bulk.aso);
 		priv->mtr_bulk.aso = NULL;
@@ -6662,7 +6946,8 @@ int
 mlx5_flow_meter_init(struct rte_eth_dev *dev,
 		     uint32_t nb_meters,
 		     uint32_t nb_meter_profiles,
-		     uint32_t nb_meter_policies)
+		     uint32_t nb_meter_policies,
+		     uint32_t nb_queues)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_devx_obj *dcs = NULL;
@@ -6672,29 +6957,35 @@ mlx5_flow_meter_init(struct rte_eth_dev *dev,
 	struct mlx5_aso_mtr *aso;
 	uint32_t i;
 	struct rte_flow_error error;
+	uint32_t flags;
+	uint32_t nb_mtrs = rte_align32pow2(nb_meters);
+	struct mlx5_indexed_pool_config cfg = {
+		.size = sizeof(struct mlx5_aso_mtr),
+		.trunk_size = 1 << 12,
+		.per_core_cache = 1 << 13,
+		.need_lock = 1,
+		.release_mem_en = !!priv->sh->config.reclaim_mode,
+		.malloc = mlx5_malloc,
+		.max_idx = nb_meters,
+		.free = mlx5_free,
+		.type = "mlx5_hw_mtr_mark_action",
+	};
 
 	if (!nb_meters || !nb_meter_profiles || !nb_meter_policies) {
 		ret = ENOTSUP;
 		rte_flow_error_set(&error, ENOMEM,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				   NULL, "Meter configuration is invalid.");
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				NULL, "Meter configuration is invalid.");
 		goto err;
 	}
 	if (!priv->mtr_en || !priv->sh->meter_aso_en) {
 		ret = ENOTSUP;
 		rte_flow_error_set(&error, ENOMEM,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				   NULL, "Meter ASO is not supported.");
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				NULL, "Meter ASO is not supported.");
 		goto err;
 	}
 	priv->mtr_config.nb_meters = nb_meters;
-	if (mlx5_aso_queue_init(priv->sh, ASO_OPC_MOD_POLICER)) {
-		ret = ENOMEM;
-		rte_flow_error_set(&error, ENOMEM,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				   NULL, "Meter ASO queue allocation failed.");
-		goto err;
-	}
 	log_obj_size = rte_log2_u32(nb_meters >> 1);
 	dcs =
 mlx5_devx_cmd_create_flow_meter_aso_obj
 				(priv->sh->cdev->ctx, priv->sh->cdev->pdn,
@@ -6702,8 +6993,8 @@ mlx5_flow_meter_init(struct rte_eth_dev *dev,
 	if (!dcs) {
 		ret = ENOMEM;
 		rte_flow_error_set(&error, ENOMEM,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				   NULL, "Meter ASO object allocation failed.");
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				NULL, "Meter ASO object allocation failed.");
 		goto err;
 	}
 	priv->mtr_bulk.devx_obj = dcs;
@@ -6711,31 +7002,33 @@ mlx5_flow_meter_init(struct rte_eth_dev *dev,
 	if (reg_id < 0) {
 		ret = ENOTSUP;
 		rte_flow_error_set(&error, ENOMEM,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				   NULL, "Meter register is not available.");
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				NULL, "Meter register is not available.");
 		goto err;
 	}
+	flags = MLX5DR_ACTION_FLAG_HWS_RX | MLX5DR_ACTION_FLAG_HWS_TX;
+	if (priv->sh->config.dv_esw_en && priv->master)
+		flags |= MLX5DR_ACTION_FLAG_HWS_FDB;
 	priv->mtr_bulk.action = mlx5dr_action_create_aso_meter
 			(priv->dr_ctx, (struct mlx5dr_devx_obj *)dcs,
-			 reg_id - REG_C_0, MLX5DR_ACTION_FLAG_HWS_RX |
-			 MLX5DR_ACTION_FLAG_HWS_TX |
-			 MLX5DR_ACTION_FLAG_HWS_FDB);
+			 reg_id - REG_C_0, flags);
 	if (!priv->mtr_bulk.action) {
 		ret = ENOMEM;
 		rte_flow_error_set(&error, ENOMEM,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				   NULL, "Meter action creation failed.");
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				NULL, "Meter action creation failed.");
 		goto err;
 	}
 	priv->mtr_bulk.aso = mlx5_malloc(MLX5_MEM_ZERO,
-					 sizeof(struct mlx5_aso_mtr) * nb_meters,
-					 RTE_CACHE_LINE_SIZE,
-					 SOCKET_ID_ANY);
+					 sizeof(struct mlx5_aso_mtr) *
+					 nb_meters,
+					 RTE_CACHE_LINE_SIZE,
+					 SOCKET_ID_ANY);
 	if (!priv->mtr_bulk.aso) {
 		ret = ENOMEM;
 		rte_flow_error_set(&error, ENOMEM,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				   NULL, "Meter bulk ASO allocation failed.");
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				NULL, "Meter bulk ASO allocation failed.");
 		goto err;
 	}
 	priv->mtr_bulk.size = nb_meters;
@@ -6746,32 +7039,65 @@ mlx5_flow_meter_init(struct rte_eth_dev *dev,
 		aso->offset = i;
 		aso++;
 	}
+	priv->hws_mpool =
mlx5_malloc(MLX5_MEM_ZERO,
+			sizeof(struct mlx5_aso_mtr_pool),
+			RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
+	if (!priv->hws_mpool) {
+		ret = ENOMEM;
+		rte_flow_error_set(&error, ENOMEM,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				NULL, "Meter ipool allocation failed.");
+		goto err;
+	}
+	priv->hws_mpool->devx_obj = priv->mtr_bulk.devx_obj;
+	priv->hws_mpool->action = priv->mtr_bulk.action;
+	priv->hws_mpool->nb_sq = nb_queues;
+	if (mlx5_aso_mtr_queue_init(priv->sh, priv->hws_mpool,
+				    NULL, nb_queues)) {
+		ret = ENOMEM;
+		rte_flow_error_set(&error, ENOMEM,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				NULL, "Meter ASO queue allocation failed.");
+		goto err;
+	}
+	/*
+	 * No need for local cache if Meter number is a small number.
+	 * Since flow insertion rate will be very limited in that case.
+	 * Here let's set the number to less than default trunk size 4K.
+	 */
+	if (nb_mtrs <= cfg.trunk_size) {
+		cfg.per_core_cache = 0;
+		cfg.trunk_size = nb_mtrs;
+	} else if (nb_mtrs <= MLX5_HW_IPOOL_SIZE_THRESHOLD) {
+		cfg.per_core_cache = MLX5_HW_IPOOL_CACHE_MIN;
+	}
+	priv->hws_mpool->idx_pool = mlx5_ipool_create(&cfg);
 	priv->mtr_config.nb_meter_profiles = nb_meter_profiles;
 	priv->mtr_profile_arr = mlx5_malloc(MLX5_MEM_ZERO,
-					    sizeof(struct mlx5_flow_meter_profile) *
-					    nb_meter_profiles,
-					    RTE_CACHE_LINE_SIZE,
-					    SOCKET_ID_ANY);
+					sizeof(struct mlx5_flow_meter_profile) *
+					nb_meter_profiles,
+					RTE_CACHE_LINE_SIZE,
+					SOCKET_ID_ANY);
 	if (!priv->mtr_profile_arr) {
 		ret = ENOMEM;
 		rte_flow_error_set(&error, ENOMEM,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				   NULL, "Meter profile allocation failed.");
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				NULL, "Meter profile allocation failed.");
 		goto err;
 	}
 	priv->mtr_config.nb_meter_policies = nb_meter_policies;
 	priv->mtr_policy_arr = mlx5_malloc(MLX5_MEM_ZERO,
-					   sizeof(struct mlx5_flow_meter_policy) *
-					   nb_meter_policies,
-					   RTE_CACHE_LINE_SIZE,
-					   SOCKET_ID_ANY);
+					sizeof(struct mlx5_flow_meter_policy) *
+					nb_meter_policies,
+					RTE_CACHE_LINE_SIZE,
+					SOCKET_ID_ANY);
 	if
 (!priv->mtr_policy_arr) {
 		ret = ENOMEM;
 		rte_flow_error_set(&error, ENOMEM,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				   NULL, "Meter policy allocation failed.");
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				NULL, "Meter policy allocation failed.");
 		goto err;
 	}
 	return 0;
diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c
index 792b945c98..fd1337ae73 100644
--- a/drivers/net/mlx5/mlx5_flow_meter.c
+++ b/drivers/net/mlx5/mlx5_flow_meter.c
@@ -588,6 +588,36 @@ mlx5_flow_meter_profile_delete(struct rte_eth_dev *dev,
 	return 0;
 }
 
+/**
+ * Callback to get MTR profile.
+ *
+ * @param[in] dev
+ *   Pointer to Ethernet device.
+ * @param[in] meter_profile_id
+ *   Meter profile id.
+ * @param[out] error
+ *   Pointer to the error structure.
+ *
+ * @return
+ *   A valid handle in case of success, NULL otherwise.
+ */
+static struct rte_flow_meter_profile *
+mlx5_flow_meter_profile_get(struct rte_eth_dev *dev,
+			    uint32_t meter_profile_id,
+			    struct rte_mtr_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	if (!priv->mtr_en) {
+		rte_mtr_error_set(error, ENOTSUP,
+				  RTE_MTR_ERROR_TYPE_UNSPECIFIED, NULL,
+				  "Meter is not supported");
+		return NULL;
+	}
+	return (void *)(uintptr_t)mlx5_flow_meter_profile_find(priv,
+							meter_profile_id);
+}
+
 /**
  * Callback to add MTR profile with HWS.
 *
@@ -1150,6 +1180,37 @@ mlx5_flow_meter_policy_delete(struct rte_eth_dev *dev,
 	return 0;
 }
 
+/**
+ * Callback to get MTR policy.
+ *
+ * @param[in] dev
+ *   Pointer to Ethernet device.
+ * @param[in] policy_id
+ *   Meter policy id.
+ * @param[out] error
+ *   Pointer to the error structure.
+ *
+ * @return
+ *   A valid handle in case of success, NULL otherwise.
+ */
+static struct rte_flow_meter_policy *
+mlx5_flow_meter_policy_get(struct rte_eth_dev *dev,
+			   uint32_t policy_id,
+			   struct rte_mtr_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	uint32_t policy_idx;
+
+	if (!priv->mtr_en) {
+		rte_mtr_error_set(error, ENOTSUP,
+				  RTE_MTR_ERROR_TYPE_UNSPECIFIED, NULL,
+				  "Meter is not supported");
+		return NULL;
+	}
+	return (void *)(uintptr_t)mlx5_flow_meter_policy_find(dev, policy_id,
+							      &policy_idx);
+}
+
 /**
  * Callback to delete MTR policy for HWS.
 *
@@ -1565,11 +1626,11 @@ mlx5_flow_meter_action_modify(struct mlx5_priv *priv,
 	if (priv->sh->meter_aso_en) {
 		fm->is_enable = !!is_enable;
 		aso_mtr = container_of(fm, struct mlx5_aso_mtr, fm);
-		ret = mlx5_aso_meter_update_by_wqe(priv->sh, aso_mtr,
-						   &priv->mtr_bulk);
+		ret = mlx5_aso_meter_update_by_wqe(priv->sh, MLX5_HW_INV_QUEUE,
+						   aso_mtr, &priv->mtr_bulk);
 		if (ret)
 			return ret;
-		ret = mlx5_aso_mtr_wait(priv->sh, aso_mtr);
+		ret = mlx5_aso_mtr_wait(priv->sh, MLX5_HW_INV_QUEUE, aso_mtr);
 		if (ret)
 			return ret;
 	} else {
@@ -1815,8 +1876,8 @@ mlx5_flow_meter_create(struct rte_eth_dev *dev, uint32_t meter_id,
 	/* If ASO meter supported, update ASO flow meter by wqe. */
 	if (priv->sh->meter_aso_en) {
 		aso_mtr = container_of(fm, struct mlx5_aso_mtr, fm);
-		ret = mlx5_aso_meter_update_by_wqe(priv->sh, aso_mtr,
-						   &priv->mtr_bulk);
+		ret = mlx5_aso_meter_update_by_wqe(priv->sh, MLX5_HW_INV_QUEUE,
+						   aso_mtr, &priv->mtr_bulk);
 		if (ret)
 			goto error;
 		if (!priv->mtr_idx_tbl) {
@@ -1921,7 +1982,7 @@ mlx5_flow_meter_hws_create(struct rte_eth_dev *dev, uint32_t meter_id,
 	fm->shared = !!shared;
 	fm->initialized = 1;
 	/* Update ASO flow meter by wqe.
 */
-	ret = mlx5_aso_meter_update_by_wqe(priv->sh, aso_mtr,
+	ret = mlx5_aso_meter_update_by_wqe(priv->sh, MLX5_HW_INV_QUEUE, aso_mtr,
 					   &priv->mtr_bulk);
 	if (ret)
 		return -rte_mtr_error_set(error, ENOTSUP,
@@ -2401,9 +2462,11 @@ static const struct rte_mtr_ops mlx5_flow_mtr_ops = {
 	.capabilities_get = mlx5_flow_mtr_cap_get,
 	.meter_profile_add = mlx5_flow_meter_profile_add,
 	.meter_profile_delete = mlx5_flow_meter_profile_delete,
+	.meter_profile_get = mlx5_flow_meter_profile_get,
 	.meter_policy_validate = mlx5_flow_meter_policy_validate,
 	.meter_policy_add = mlx5_flow_meter_policy_add,
 	.meter_policy_delete = mlx5_flow_meter_policy_delete,
+	.meter_policy_get = mlx5_flow_meter_policy_get,
 	.create = mlx5_flow_meter_create,
 	.destroy = mlx5_flow_meter_destroy,
 	.meter_enable = mlx5_flow_meter_enable,
@@ -2418,9 +2481,11 @@ static const struct rte_mtr_ops mlx5_flow_mtr_hws_ops = {
 	.capabilities_get = mlx5_flow_mtr_cap_get,
 	.meter_profile_add = mlx5_flow_meter_profile_hws_add,
 	.meter_profile_delete = mlx5_flow_meter_profile_hws_delete,
+	.meter_profile_get = mlx5_flow_meter_profile_get,
 	.meter_policy_validate = mlx5_flow_meter_policy_hws_validate,
 	.meter_policy_add = mlx5_flow_meter_policy_hws_add,
 	.meter_policy_delete = mlx5_flow_meter_policy_hws_delete,
+	.meter_policy_get = mlx5_flow_meter_policy_get,
 	.create = mlx5_flow_meter_hws_create,
 	.destroy = mlx5_flow_meter_hws_destroy,
 	.meter_enable = mlx5_flow_meter_enable,
@@ -2566,7 +2631,7 @@ mlx5_flow_meter_attach(struct mlx5_priv *priv,
 	struct mlx5_aso_mtr *aso_mtr;
 
 	aso_mtr = container_of(fm, struct mlx5_aso_mtr, fm);
-	if (mlx5_aso_mtr_wait(priv->sh, aso_mtr)) {
+	if (mlx5_aso_mtr_wait(priv->sh, MLX5_HW_INV_QUEUE, aso_mtr)) {
 		return rte_flow_error_set(error, ENOENT,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,