From patchwork Wed Jan 18 12:55:55 2023
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 122309
X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson 
To: 
CC: , , , Viacheslav Ovsiienko 
Subject: [PATCH 4/5] net/mlx5: add indirect QUOTA create/query/modify
Date: Wed, 18 Jan 2023 14:55:55 +0200
Message-ID: <20230118125556.23622-5-getelson@nvidia.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230118125556.23622-1-getelson@nvidia.com>
References: <20230118125556.23622-1-getelson@nvidia.com>
MIME-Version: 1.0
List-Id: DPDK patches and discussions 

Implement HWS functions for indirect QUOTA creation,
modification and query.

Signed-off-by: Gregory Etelson 
---
 drivers/net/mlx5/meson.build       |   1 +
 drivers/net/mlx5/mlx5.h            |  72 +++
 drivers/net/mlx5/mlx5_flow.c       |  62 +++
 drivers/net/mlx5/mlx5_flow.h       |  20 +-
 drivers/net/mlx5/mlx5_flow_aso.c   |   8 +-
 drivers/net/mlx5/mlx5_flow_hw.c    | 343 +++++++++++---
 drivers/net/mlx5/mlx5_flow_quota.c | 726 +++++++++++++++++++++++++++++
 7 files changed, 1151 insertions(+), 81 deletions(-)
 create mode 100644 drivers/net/mlx5/mlx5_flow_quota.c

diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build
index abd507bd88..323c381d2b 100644
--- a/drivers/net/mlx5/meson.build
+++ b/drivers/net/mlx5/meson.build
@@ -23,6 +23,7 @@ sources = files(
         'mlx5_flow_dv.c',
         'mlx5_flow_aso.c',
         'mlx5_flow_flex.c',
+        'mlx5_flow_quota.c',
         'mlx5_mac.c',
         'mlx5_rss.c',
         'mlx5_rx.c',
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 7c6bc91ddf..c18dffeab5 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -46,6 +46,14 @@

 #define MLX5_HW_INV_QUEUE UINT32_MAX

+/*
+ * The default ipool threshold value indicates which per_core_cache
+ * value to set.
+ */
+#define MLX5_HW_IPOOL_SIZE_THRESHOLD (1 << 19)
+/* The default min local cache size. */
+#define MLX5_HW_IPOOL_CACHE_MIN (1 << 9)
+
 /*
  * Number of modification commands.
  * The maximal actions amount in FW is some constant, and it is 16 in the
@@ -349,6 +357,7 @@ enum mlx5_hw_job_type {
     MLX5_HW_Q_JOB_TYPE_DESTROY, /* Flow destroy job type. */
     MLX5_HW_Q_JOB_TYPE_UPDATE, /* Flow update job type.
      */
     MLX5_HW_Q_JOB_TYPE_QUERY, /* Flow query job type. */
+    MLX5_HW_Q_JOB_TYPE_UPDATE_QUERY, /* Flow update and query job type. */
 };

 #define MLX5_HW_MAX_ITEMS (16)
@@ -590,6 +599,7 @@ struct mlx5_aso_sq_elem {
             char *query_data;
         };
         void *user_data;
+        struct mlx5_quota *quota_obj;
     };
 };
@@ -1645,6 +1655,33 @@ struct mlx5_hw_ctrl_flow {

 struct mlx5_flow_hw_ctrl_rx;

+enum mlx5_quota_state {
+    MLX5_QUOTA_STATE_FREE, /* quota not in use */
+    MLX5_QUOTA_STATE_READY, /* quota is ready */
+    MLX5_QUOTA_STATE_WAIT /* quota waits WR completion */
+};
+
+struct mlx5_quota {
+    uint8_t state; /* object state */
+    uint8_t mode; /* metering mode */
+    /**
+     * Keep track of application update types.
+     * PMD does not allow 2 consecutive ADD updates.
+     */
+    enum rte_flow_update_quota_op last_update;
+};
+
+/* Bulk management structure for flow quota. */
+struct mlx5_quota_ctx {
+    uint32_t nb_quotas; /* Total number of quota objects */
+    struct mlx5dr_action *dr_action; /* HWS action */
+    struct mlx5_devx_obj *devx_obj; /* DEVX ranged object. */
+    struct mlx5_pmd_mr mr; /* MR for READ from MTR ASO */
+    struct mlx5_aso_mtr_dseg **read_buf; /* Buffers for READ */
+    struct mlx5_aso_sq *sq; /* SQs for sync/async ACCESS_ASO WRs */
+    struct mlx5_indexed_pool *quota_ipool; /* Manage quota objects */
+};
+
 struct mlx5_priv {
     struct rte_eth_dev_data *dev_data; /* Pointer to device data. */
     struct mlx5_dev_ctx_shared *sh; /* Shared device context. */
@@ -1734,6 +1771,7 @@ struct mlx5_priv {
     struct mlx5_flow_meter_policy *mtr_policy_arr; /* Policy array. */
     struct mlx5_l3t_tbl *mtr_idx_tbl; /* Meter index lookup table. */
     struct mlx5_mtr_bulk mtr_bulk; /* Meter index mapping for HWS */
+    struct mlx5_quota_ctx quota_ctx; /* Quota index mapping for HWS */
     uint8_t skip_default_rss_reta; /* Skip configuration of default reta. */
     uint8_t fdb_def_rule; /* Whether fdb jump to table 1 is configured.
      */
     struct mlx5_mp_id mp_id; /* ID of a multi-process process */
@@ -2227,6 +2265,15 @@ int mlx5_aso_ct_queue_init(struct mlx5_dev_ctx_shared *sh,
                            uint32_t nb_queues);
 int mlx5_aso_ct_queue_uninit(struct mlx5_dev_ctx_shared *sh,
                              struct mlx5_aso_ct_pools_mng *ct_mng);
+int
+mlx5_aso_sq_create(struct mlx5_common_device *cdev, struct mlx5_aso_sq *sq,
+                   void *uar, uint16_t log_desc_n);
+void
+mlx5_aso_destroy_sq(struct mlx5_aso_sq *sq);
+void
+mlx5_aso_mtr_init_sq(struct mlx5_aso_sq *sq);
+void
+mlx5_aso_cqe_err_handle(struct mlx5_aso_sq *sq);

 /* mlx5_flow_flex.c */

@@ -2257,4 +2304,29 @@ struct mlx5_list_entry *mlx5_flex_parser_clone_cb(void *list_ctx, void *ctx);
 void mlx5_flex_parser_clone_free_cb(void *tool_ctx,
                                     struct mlx5_list_entry *entry);
+
+int
+mlx5_flow_quota_destroy(struct rte_eth_dev *dev);
+int
+mlx5_flow_quota_init(struct rte_eth_dev *dev, uint32_t nb_quotas);
+struct rte_flow_action_handle *
+mlx5_quota_alloc(struct rte_eth_dev *dev, uint32_t queue,
+                 const struct rte_flow_action_quota *conf,
+                 struct mlx5_hw_q_job *job, bool push,
+                 struct rte_flow_error *error);
+void
+mlx5_quota_async_completion(struct rte_eth_dev *dev, uint32_t queue,
+                            struct mlx5_hw_q_job *job);
+int
+mlx5_quota_query_update(struct rte_eth_dev *dev, uint32_t queue,
+                        struct rte_flow_action_handle *handle,
+                        const struct rte_flow_action *update,
+                        struct rte_flow_query_quota *query,
+                        struct mlx5_hw_q_job *async_job, bool push,
+                        struct rte_flow_error *error);
+int mlx5_quota_query(struct rte_eth_dev *dev, uint32_t queue,
+                     const struct rte_flow_action_handle *handle,
+                     struct rte_flow_query_quota *query,
+                     struct mlx5_hw_q_job *async_job, bool push,
+                     struct rte_flow_error *error);
 #endif /* RTE_PMD_MLX5_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index f5e2831480..768c4c4ae6 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1075,6 +1075,20 @@ mlx5_flow_async_action_handle_query(struct rte_eth_dev *dev, uint32_t
                     queue,
                     void *data, void *user_data,
                     struct rte_flow_error *error);
+static int
+mlx5_action_handle_query_update(struct rte_eth_dev *dev,
+                                struct rte_flow_action_handle *handle,
+                                const void *update, void *query,
+                                enum rte_flow_query_update_mode qu_mode,
+                                struct rte_flow_error *error);
+static int
+mlx5_flow_async_action_handle_query_update
+        (struct rte_eth_dev *dev, uint32_t queue_id,
+         const struct rte_flow_op_attr *op_attr,
+         struct rte_flow_action_handle *action_handle,
+         const void *update, void *query,
+         enum rte_flow_query_update_mode qu_mode,
+         void *user_data, struct rte_flow_error *error);

 static const struct rte_flow_ops mlx5_flow_ops = {
     .validate = mlx5_flow_validate,
@@ -1090,6 +1104,7 @@ static const struct rte_flow_ops mlx5_flow_ops = {
     .action_handle_destroy = mlx5_action_handle_destroy,
     .action_handle_update = mlx5_action_handle_update,
     .action_handle_query = mlx5_action_handle_query,
+    .action_handle_query_update = mlx5_action_handle_query_update,
     .tunnel_decap_set = mlx5_flow_tunnel_decap_set,
     .tunnel_match = mlx5_flow_tunnel_match,
     .tunnel_action_decap_release = mlx5_flow_tunnel_action_release,
@@ -1112,6 +1127,8 @@ static const struct rte_flow_ops mlx5_flow_ops = {
     .push = mlx5_flow_push,
     .async_action_handle_create = mlx5_flow_async_action_handle_create,
     .async_action_handle_update = mlx5_flow_async_action_handle_update,
+    .async_action_handle_query_update =
+        mlx5_flow_async_action_handle_query_update,
     .async_action_handle_query = mlx5_flow_async_action_handle_query,
     .async_action_handle_destroy = mlx5_flow_async_action_handle_destroy,
 };
@@ -9031,6 +9048,27 @@ mlx5_flow_async_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
                                      update, user_data, error);
 }

+static int
+mlx5_flow_async_action_handle_query_update
+        (struct rte_eth_dev *dev, uint32_t queue_id,
+         const struct rte_flow_op_attr *op_attr,
+         struct rte_flow_action_handle *action_handle,
+         const void *update, void *query,
+         enum rte_flow_query_update_mode qu_mode,
+         void
         *user_data, struct rte_flow_error *error)
+{
+    const struct mlx5_flow_driver_ops *fops =
+            flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
+
+    if (!fops || !fops->async_action_query_update)
+        return rte_flow_error_set(error, ENOTSUP,
+                                  RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+                                  "async query_update not supported");
+    return fops->async_action_query_update
+            (dev, queue_id, op_attr, action_handle,
+             update, query, qu_mode, user_data, error);
+}
+
 /**
  * Query shared action.
  *
@@ -10163,6 +10201,30 @@ mlx5_action_handle_query(struct rte_eth_dev *dev,
     return flow_drv_action_query(dev, handle, data, fops, error);
 }

+static int
+mlx5_action_handle_query_update(struct rte_eth_dev *dev,
+                                struct rte_flow_action_handle *handle,
+                                const void *update, void *query,
+                                enum rte_flow_query_update_mode qu_mode,
+                                struct rte_flow_error *error)
+{
+    struct rte_flow_attr attr = { .transfer = 0 };
+    enum mlx5_flow_drv_type drv_type = flow_get_drv_type(dev, &attr);
+    const struct mlx5_flow_driver_ops *fops;
+
+    if (drv_type == MLX5_FLOW_TYPE_MIN || drv_type == MLX5_FLOW_TYPE_MAX)
+        return rte_flow_error_set(error, ENOTSUP,
+                                  RTE_FLOW_ERROR_TYPE_ACTION,
+                                  NULL, "invalid driver type");
+    fops = flow_get_drv_ops(drv_type);
+    if (!fops || !fops->action_query_update)
+        return rte_flow_error_set(error, ENOTSUP,
+                                  RTE_FLOW_ERROR_TYPE_ACTION,
+                                  NULL, "no query_update handler");
+    return fops->action_query_update(dev, handle, update,
+                                     query, qu_mode, error);
+}
+
 /**
  * Destroy all indirect actions (shared RSS).
  *
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index e376dcae93..9235af960d 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -70,6 +70,7 @@ enum {
     MLX5_INDIRECT_ACTION_TYPE_COUNT,
     MLX5_INDIRECT_ACTION_TYPE_CT,
     MLX5_INDIRECT_ACTION_TYPE_METER_MARK,
+    MLX5_INDIRECT_ACTION_TYPE_QUOTA,
 };

 /* Now, the maximal ports will be supported is 16, action number is 32M.
  */
@@ -218,6 +219,8 @@ enum mlx5_feature_name {
 /* Meter color item */
 #define MLX5_FLOW_ITEM_METER_COLOR (UINT64_C(1) << 44)

+#define MLX5_FLOW_ITEM_QUOTA (UINT64_C(1) << 45)
+
 /* Outer Masks. */
 #define MLX5_FLOW_LAYER_OUTER_L3 \
@@ -303,6 +306,7 @@ enum mlx5_feature_name {
 #define MLX5_FLOW_ACTION_SEND_TO_KERNEL (1ull << 42)
 #define MLX5_FLOW_ACTION_INDIRECT_COUNT (1ull << 43)
 #define MLX5_FLOW_ACTION_INDIRECT_AGE (1ull << 44)
+#define MLX5_FLOW_ACTION_QUOTA (1ull << 46)

 #define MLX5_FLOW_DROP_INCLUSIVE_ACTIONS \
     (MLX5_FLOW_ACTION_COUNT | MLX5_FLOW_ACTION_SAMPLE | MLX5_FLOW_ACTION_AGE)
@@ -1699,6 +1703,12 @@ typedef int (*mlx5_flow_action_query_t)
             (struct rte_eth_dev *dev,
              const struct rte_flow_action_handle *action,
              void *data,
              struct rte_flow_error *error);
+typedef int (*mlx5_flow_action_query_update_t)
+            (struct rte_eth_dev *dev,
+             struct rte_flow_action_handle *handle,
+             const void *update, void *data,
+             enum rte_flow_query_update_mode qu_mode,
+             struct rte_flow_error *error);
 typedef int (*mlx5_flow_sync_domain_t)
             (struct rte_eth_dev *dev,
              uint32_t domains,
@@ -1845,7 +1855,13 @@ typedef int (*mlx5_flow_async_action_handle_update_t)
              const void *update,
              void *user_data,
              struct rte_flow_error *error);
-
+typedef int (*mlx5_flow_async_action_handle_query_update_t)
+            (struct rte_eth_dev *dev, uint32_t queue_id,
+             const struct rte_flow_op_attr *op_attr,
+             struct rte_flow_action_handle *action_handle,
+             const void *update, void *data,
+             enum rte_flow_query_update_mode qu_mode,
+             void *user_data, struct rte_flow_error *error);
 typedef int (*mlx5_flow_async_action_handle_query_t)
             (struct rte_eth_dev *dev, uint32_t queue,
@@ -1896,6 +1912,7 @@ struct mlx5_flow_driver_ops {
     mlx5_flow_action_destroy_t action_destroy;
     mlx5_flow_action_update_t action_update;
     mlx5_flow_action_query_t action_query;
+    mlx5_flow_action_query_update_t action_query_update;
     mlx5_flow_sync_domain_t sync_domain;
     mlx5_flow_discover_priorities_t discover_priorities;
     mlx5_flow_item_create_t item_create;
@@ -1917,6 +1934,7
@@ struct mlx5_flow_driver_ops {
     mlx5_flow_push_t push;
     mlx5_flow_async_action_handle_create_t async_action_create;
     mlx5_flow_async_action_handle_update_t async_action_update;
+    mlx5_flow_async_action_handle_query_update_t async_action_query_update;
     mlx5_flow_async_action_handle_query_t async_action_query;
     mlx5_flow_async_action_handle_destroy_t async_action_destroy;
 };
diff --git a/drivers/net/mlx5/mlx5_flow_aso.c b/drivers/net/mlx5/mlx5_flow_aso.c
index 0eb91c570f..3c08da0614 100644
--- a/drivers/net/mlx5/mlx5_flow_aso.c
+++ b/drivers/net/mlx5/mlx5_flow_aso.c
@@ -74,7 +74,7 @@ mlx5_aso_reg_mr(struct mlx5_common_device *cdev, size_t length,
  * @param[in] sq
  *   ASO SQ to destroy.
  */
-static void
+void
 mlx5_aso_destroy_sq(struct mlx5_aso_sq *sq)
 {
     mlx5_devx_sq_destroy(&sq->sq_obj);
@@ -148,7 +148,7 @@ mlx5_aso_age_init_sq(struct mlx5_aso_sq *sq)
  * @param[in] sq
  *   ASO SQ to initialize.
  */
-static void
+void
 mlx5_aso_mtr_init_sq(struct mlx5_aso_sq *sq)
 {
     volatile struct mlx5_aso_wqe *restrict wqe;
@@ -219,7 +219,7 @@ mlx5_aso_ct_init_sq(struct mlx5_aso_sq *sq)
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-static int
+int
 mlx5_aso_sq_create(struct mlx5_common_device *cdev, struct mlx5_aso_sq *sq,
                    void *uar, uint16_t log_desc_n)
 {
@@ -504,7 +504,7 @@ mlx5_aso_dump_err_objs(volatile uint32_t *cqe, volatile uint32_t *wqe)
  * @param[in] sq
  *   ASO SQ to use.
  */
-static void
+void
 mlx5_aso_cqe_err_handle(struct mlx5_aso_sq *sq)
 {
     struct mlx5_aso_cq *cq = &sq->cq;
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 04d0612ee1..5815310ba6 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -68,6 +68,9 @@ flow_hw_set_vlan_vid_construct(struct rte_eth_dev *dev,
                                struct mlx5_action_construct_data *act_data,
                                const struct mlx5_hw_actions *hw_acts,
                                const struct rte_flow_action *action);
+static void
+flow_hw_construct_quota(struct mlx5_priv *priv,
+                        struct mlx5dr_rule_action *rule_act, uint32_t qid);
 static __rte_always_inline uint32_t flow_hw_tx_tag_regc_mask(struct rte_eth_dev *dev);
 static __rte_always_inline uint32_t flow_hw_tx_tag_regc_value(struct rte_eth_dev *dev);
@@ -791,6 +794,9 @@ flow_hw_shared_action_translate(struct rte_eth_dev *dev,
                         action_src, action_dst, idx))
             return -1;
         break;
+    case MLX5_INDIRECT_ACTION_TYPE_QUOTA:
+        flow_hw_construct_quota(priv, &acts->rule_acts[action_dst], idx);
+        break;
     default:
         DRV_LOG(WARNING, "Unsupported shared action type:%d", type);
         break;
@@ -1834,6 +1840,16 @@ flow_hw_shared_action_get(struct rte_eth_dev *dev,
     return -1;
 }

+static void
+flow_hw_construct_quota(struct mlx5_priv *priv,
+                        struct mlx5dr_rule_action *rule_act, uint32_t qid)
+{
+    rule_act->action = priv->quota_ctx.dr_action;
+    rule_act->aso_meter.offset = qid - 1;
+    rule_act->aso_meter.init_color =
+        MLX5DR_ACTION_ASO_METER_COLOR_GREEN;
+}
+
 /**
  * Construct shared indirect action.
  *
@@ -1957,6 +1973,9 @@ flow_hw_shared_action_construct(struct rte_eth_dev *dev, uint32_t queue,
             (enum mlx5dr_action_aso_meter_color)
             rte_col_2_mlx5_col(aso_mtr->init_color);
         break;
+    case MLX5_INDIRECT_ACTION_TYPE_QUOTA:
+        flow_hw_construct_quota(priv, rule_act, idx);
+        break;
     default:
         DRV_LOG(WARNING, "Unsupported shared action type:%d", type);
         break;
@@ -2263,6 +2282,11 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
             rule_acts[act_data->action_dst].action =
             priv->hw_vport[port_action->port_id];
             break;
+        case RTE_FLOW_ACTION_TYPE_QUOTA:
+            flow_hw_construct_quota(priv,
+                                    rule_acts + act_data->action_dst,
+                                    act_data->shared_meter.id);
+            break;
         case RTE_FLOW_ACTION_TYPE_METER:
             meter = action->conf;
             mtr_id = meter->mtr_id;
@@ -2702,11 +2726,18 @@ __flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev,
     if (ret_comp < n_res && priv->hws_ctpool)
         ret_comp += mlx5_aso_pull_completion(&priv->ct_mng->aso_sqs[queue],
                 &res[ret_comp], n_res - ret_comp);
+    if (ret_comp < n_res && priv->quota_ctx.sq)
+        ret_comp += mlx5_aso_pull_completion(&priv->quota_ctx.sq[queue],
+                                             &res[ret_comp],
+                                             n_res - ret_comp);
     for (i = 0; i < ret_comp; i++) {
         job = (struct mlx5_hw_q_job *)res[i].user_data;
         /* Restore user data.
          */
         res[i].user_data = job->user_data;
-        if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY) {
+        if (MLX5_INDIRECT_ACTION_TYPE_GET(job->action) ==
+            MLX5_INDIRECT_ACTION_TYPE_QUOTA) {
+            mlx5_quota_async_completion(dev, queue, job);
+        } else if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY) {
             type = MLX5_INDIRECT_ACTION_TYPE_GET(job->action);
             if (type == MLX5_INDIRECT_ACTION_TYPE_METER_MARK) {
                 idx = MLX5_INDIRECT_ACTION_IDX_GET(job->action);
@@ -3687,6 +3718,10 @@ flow_hw_validate_action_indirect(struct rte_eth_dev *dev,
             return ret;
         *action_flags |= MLX5_FLOW_ACTION_INDIRECT_AGE;
         break;
+    case RTE_FLOW_ACTION_TYPE_QUOTA:
+        /* TODO: add proper quota verification */
+        *action_flags |= MLX5_FLOW_ACTION_QUOTA;
+        break;
     default:
         DRV_LOG(WARNING, "Unsupported shared action type: %d", type);
         return rte_flow_error_set(error, ENOTSUP,
@@ -3724,19 +3759,17 @@ flow_hw_validate_action_raw_encap(struct rte_eth_dev *dev __rte_unused,
 }

 static inline uint16_t
-flow_hw_template_expand_modify_field(const struct rte_flow_action actions[],
-                                     const struct rte_flow_action masks[],
-                                     const struct rte_flow_action *mf_action,
-                                     const struct rte_flow_action *mf_mask,
-                                     struct rte_flow_action *new_actions,
-                                     struct rte_flow_action *new_masks,
-                                     uint64_t flags, uint32_t act_num)
+flow_hw_template_expand_modify_field(struct rte_flow_action actions[],
+                                     struct rte_flow_action masks[],
+                                     const struct rte_flow_action *mf_actions,
+                                     const struct rte_flow_action *mf_masks,
+                                     uint64_t flags, uint32_t act_num,
+                                     uint32_t mf_num)
 {
     uint32_t i, tail;

     MLX5_ASSERT(actions && masks);
-    MLX5_ASSERT(new_actions && new_masks);
-    MLX5_ASSERT(mf_action && mf_mask);
+    MLX5_ASSERT(mf_num > 0);
     if (flags & MLX5_FLOW_ACTION_MODIFY_FIELD) {
         /*
          * Application action template already has Modify Field.
@@ -3787,12 +3820,10 @@ flow_hw_template_expand_modify_field(const struct rte_flow_action actions[],
         i = 0;
 insert:
     tail = act_num - i; /* num action to move */
-    memcpy(new_actions, actions, sizeof(actions[0]) * i);
-    new_actions[i] = *mf_action;
-    memcpy(new_actions + i + 1, actions + i, sizeof(actions[0]) * tail);
-    memcpy(new_masks, masks, sizeof(masks[0]) * i);
-    new_masks[i] = *mf_mask;
-    memcpy(new_masks + i + 1, masks + i, sizeof(masks[0]) * tail);
+    memmove(actions + i + mf_num, actions + i, sizeof(actions[0]) * tail);
+    memcpy(actions + i, mf_actions, sizeof(actions[0]) * mf_num);
+    memmove(masks + i + mf_num, masks + i, sizeof(masks[0]) * tail);
+    memcpy(masks + i, mf_masks, sizeof(masks[0]) * mf_num);
     return i;
 }

@@ -4102,6 +4133,7 @@ flow_hw_dr_actions_template_handle_shared(const struct rte_flow_action *mask,
         action_types[*curr_off] = MLX5DR_ACTION_TYP_ASO_CT;
         *curr_off = *curr_off + 1;
         break;
+    case RTE_FLOW_ACTION_TYPE_QUOTA:
     case RTE_FLOW_ACTION_TYPE_METER_MARK:
         at->actions_off[action_src] = *curr_off;
         action_types[*curr_off] = MLX5DR_ACTION_TYP_ASO_METER;
@@ -4331,6 +4363,96 @@ flow_hw_set_vlan_vid_construct(struct rte_eth_dev *dev,
                     &modify_action);
 }

+static __rte_always_inline void
+flow_hw_actions_template_replace_container(const
+                                           struct rte_flow_action *actions,
+                                           const
+                                           struct rte_flow_action *masks,
+                                           struct rte_flow_action *new_actions,
+                                           struct rte_flow_action *new_masks,
+                                           struct rte_flow_action **ra,
+                                           struct rte_flow_action **rm,
+                                           uint32_t act_num)
+{
+    memcpy(new_actions, actions, sizeof(actions[0]) * act_num);
+    memcpy(new_masks, masks, sizeof(masks[0]) * act_num);
+    *ra = (void *)(uintptr_t)new_actions;
+    *rm = (void *)(uintptr_t)new_masks;
+}
+
+#define RX_META_COPY_ACTION ((const struct rte_flow_action) { \
+    .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, \
+    .conf = &(struct rte_flow_action_modify_field){ \
+        .operation = RTE_FLOW_MODIFY_SET, \
+        .dst = { \
+            .field = (enum rte_flow_field_id) \
+                     MLX5_RTE_FLOW_FIELD_META_REG, \
+            .level = REG_B, \
+        }, \
+        .src = { \
+            .field = (enum rte_flow_field_id) \
+                     MLX5_RTE_FLOW_FIELD_META_REG, \
+            .level = REG_C_1, \
+        }, \
+        .width = 32, \
+    } \
+})
+
+#define RX_META_COPY_MASK ((const struct rte_flow_action) { \
+    .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, \
+    .conf = &(struct rte_flow_action_modify_field){ \
+        .operation = RTE_FLOW_MODIFY_SET, \
+        .dst = { \
+            .field = (enum rte_flow_field_id) \
+                     MLX5_RTE_FLOW_FIELD_META_REG, \
+            .level = UINT32_MAX, \
+            .offset = UINT32_MAX, \
+        }, \
+        .src = { \
+            .field = (enum rte_flow_field_id) \
+                     MLX5_RTE_FLOW_FIELD_META_REG, \
+            .level = UINT32_MAX, \
+            .offset = UINT32_MAX, \
+        }, \
+        .width = UINT32_MAX, \
+    } \
+})
+
+#define QUOTA_COLOR_INC_ACTION ((const struct rte_flow_action) { \
+    .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, \
+    .conf = &(struct rte_flow_action_modify_field) { \
+        .operation = RTE_FLOW_MODIFY_ADD, \
+        .dst = { \
+            .field = RTE_FLOW_FIELD_METER_COLOR, \
+            .level = 0, .offset = 0 \
+        }, \
+        .src = { \
+            .field = RTE_FLOW_FIELD_VALUE, \
+            .level = 1, \
+            .offset = 0, \
+        }, \
+        .width = 2 \
+    } \
+})
+
+#define QUOTA_COLOR_INC_MASK ((const struct rte_flow_action) { \
+    .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, \
+    .conf = &(struct rte_flow_action_modify_field) { \
+        .operation = RTE_FLOW_MODIFY_ADD, \
+        .dst = { \
+            .field = RTE_FLOW_FIELD_METER_COLOR, \
+            .level = UINT32_MAX, \
+            .offset = UINT32_MAX, \
+        }, \
+        .src = { \
+            .field = RTE_FLOW_FIELD_VALUE, \
+            .level = 3, \
+            .offset = 0 \
+        }, \
+        .width = UINT32_MAX \
+    } \
+})
+
 /**
  * Create flow action template.
  *
@@ -4369,40 +4491,9 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
     int set_vlan_vid_ix = -1;
     struct rte_flow_action_modify_field set_vlan_vid_spec = {0, };
     struct rte_flow_action_modify_field set_vlan_vid_mask = {0, };
-    const struct rte_flow_action_modify_field rx_mreg = {
-        .operation = RTE_FLOW_MODIFY_SET,
-        .dst = {
-            .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-            .level = REG_B,
-        },
-        .src = {
-            .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-            .level = REG_C_1,
-        },
-        .width = 32,
-    };
-    const struct rte_flow_action_modify_field rx_mreg_mask = {
-        .operation = RTE_FLOW_MODIFY_SET,
-        .dst = {
-            .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-            .level = UINT32_MAX,
-            .offset = UINT32_MAX,
-        },
-        .src = {
-            .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-            .level = UINT32_MAX,
-            .offset = UINT32_MAX,
-        },
-        .width = UINT32_MAX,
-    };
-    const struct rte_flow_action rx_cpy = {
-        .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
-        .conf = &rx_mreg,
-    };
-    const struct rte_flow_action rx_cpy_mask = {
-        .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
-        .conf = &rx_mreg_mask,
-    };
+    struct rte_flow_action mf_actions[MLX5_HW_MAX_ACTS];
+    struct rte_flow_action mf_masks[MLX5_HW_MAX_ACTS];
+    uint32_t expand_mf_num = 0;

     if (mlx5_flow_hw_actions_validate(dev, attr, actions, masks,
                                       &action_flags, error))
@@ -4432,44 +4523,57 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
                    RTE_FLOW_ERROR_TYPE_ACTION, NULL, "Too many actions");
         return NULL;
     }
+    if (set_vlan_vid_ix != -1) {
+        /* If temporary action buffer was not used, copy template actions to it */
+        if (ra == actions)
+            flow_hw_actions_template_replace_container(actions,
+                                                       masks,
+                                                       tmp_action,
+                                                       tmp_mask,
+                                                       &ra, &rm,
+                                                       act_num);
+        flow_hw_set_vlan_vid(dev, ra, rm,
+                             &set_vlan_vid_spec, &set_vlan_vid_mask,
+                             set_vlan_vid_ix);
+        action_flags |= MLX5_FLOW_ACTION_MODIFY_FIELD;
+    }
+    if (action_flags & MLX5_FLOW_ACTION_QUOTA) {
+        mf_actions[expand_mf_num] = QUOTA_COLOR_INC_ACTION;
+        mf_masks[expand_mf_num] = QUOTA_COLOR_INC_MASK;
+        expand_mf_num++;
+    }
     if (priv->sh->config.dv_xmeta_en == MLX5_XMETA_MODE_META32_HWS &&
         priv->sh->config.dv_esw_en &&
         (action_flags & (MLX5_FLOW_ACTION_QUEUE | MLX5_FLOW_ACTION_RSS))) {
         /* Insert META copy */
-        if (act_num + 1 > MLX5_HW_MAX_ACTS) {
+        mf_actions[expand_mf_num] = RX_META_COPY_ACTION;
+        mf_masks[expand_mf_num] = RX_META_COPY_MASK;
+        expand_mf_num++;
+    }
+    if (expand_mf_num) {
+        if (act_num + expand_mf_num > MLX5_HW_MAX_ACTS) {
             rte_flow_error_set(error, E2BIG,
                                RTE_FLOW_ERROR_TYPE_ACTION, NULL,
                                "cannot expand: too many actions");
             return NULL;
         }
+        if (ra == actions)
+            flow_hw_actions_template_replace_container(actions,
+                                                       masks,
+                                                       tmp_action,
+                                                       tmp_mask,
+                                                       &ra, &rm,
+                                                       act_num);
         /* Application should make sure only one Q/RSS exist in one rule. */
-        pos = flow_hw_template_expand_modify_field(actions, masks,
-                                                   &rx_cpy,
-                                                   &rx_cpy_mask,
-                                                   tmp_action, tmp_mask,
+        pos = flow_hw_template_expand_modify_field(ra, rm,
+                                                   mf_actions,
+                                                   mf_masks,
                                                    action_flags,
-                                                   act_num);
-        ra = tmp_action;
-        rm = tmp_mask;
-        act_num++;
+                                                   act_num,
+                                                   expand_mf_num);
+        act_num += expand_mf_num;
         action_flags |= MLX5_FLOW_ACTION_MODIFY_FIELD;
     }
-    if (set_vlan_vid_ix != -1) {
-        /* If temporary action buffer was not used, copy template actions to it */
-        if (ra == actions && rm == masks) {
-            for (i = 0; i < act_num; ++i) {
-                tmp_action[i] = actions[i];
-                tmp_mask[i] = masks[i];
-                if (actions[i].type == RTE_FLOW_ACTION_TYPE_END)
-                    break;
-            }
-            ra = tmp_action;
-            rm = tmp_mask;
-        }
-        flow_hw_set_vlan_vid(dev, ra, rm,
-                             &set_vlan_vid_spec, &set_vlan_vid_mask,
-                             set_vlan_vid_ix);
-    }
     act_len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, NULL, 0, ra, error);
     if (act_len <= 0)
         return NULL;
@@ -4732,6 +4836,7 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
     case RTE_FLOW_ITEM_TYPE_ICMP:
     case RTE_FLOW_ITEM_TYPE_ICMP6:
     case RTE_FLOW_ITEM_TYPE_CONNTRACK:
+    case RTE_FLOW_ITEM_TYPE_QUOTA:
         break;
     case
RTE_FLOW_ITEM_TYPE_INTEGRITY: /* @@ -6932,6 +7037,12 @@ flow_hw_configure(struct rte_eth_dev *dev, "Failed to set up Rx control flow templates"); goto err; } + /* Initialize quotas */ + if (port_attr->nb_quotas) { + ret = mlx5_flow_quota_init(dev, port_attr->nb_quotas); + if (ret) + goto err; + } /* Initialize meter library*/ if (port_attr->nb_meters) if (mlx5_flow_meter_init(dev, port_attr->nb_meters, 1, 1, nb_q_updated)) @@ -7031,6 +7142,7 @@ flow_hw_configure(struct rte_eth_dev *dev, mlx5_hws_cnt_pool_destroy(priv->sh, priv->hws_cpool); priv->hws_cpool = NULL; } + mlx5_flow_quota_destroy(dev); flow_hw_free_vport_actions(priv); for (i = 0; i < MLX5_HW_ACTION_FLAG_MAX; i++) { if (priv->hw_drop[i]) @@ -7124,6 +7236,7 @@ flow_hw_resource_release(struct rte_eth_dev *dev) flow_hw_ct_mng_destroy(dev, priv->ct_mng); priv->ct_mng = NULL; } + mlx5_flow_quota_destroy(dev); for (i = 0; i < priv->nb_queue; i++) { rte_ring_free(priv->hw_q[i].indir_iq); rte_ring_free(priv->hw_q[i].indir_cq); @@ -7524,6 +7637,8 @@ flow_hw_action_handle_validate(struct rte_eth_dev *dev, uint32_t queue, return flow_hw_validate_action_meter_mark(dev, action, error); case RTE_FLOW_ACTION_TYPE_RSS: return flow_dv_action_validate(dev, conf, action, error); + case RTE_FLOW_ACTION_TYPE_QUOTA: + return 0; default: return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, NULL, @@ -7695,6 +7810,11 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue, case RTE_FLOW_ACTION_TYPE_RSS: handle = flow_dv_action_create(dev, conf, action, error); break; + case RTE_FLOW_ACTION_TYPE_QUOTA: + aso = true; + handle = mlx5_quota_alloc(dev, queue, action->conf, + job, push, error); + break; default: rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, NULL, "action type not supported"); @@ -7815,6 +7935,11 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue, case MLX5_INDIRECT_ACTION_TYPE_RSS: ret = flow_dv_action_update(dev, handle, update, error); break; + 
case MLX5_INDIRECT_ACTION_TYPE_QUOTA: + aso = true; + ret = mlx5_quota_query_update(dev, queue, handle, update, NULL, + job, push, error); + break; default: ret = -ENOTSUP; rte_flow_error_set(error, ENOTSUP, @@ -7927,6 +8052,8 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue, case MLX5_INDIRECT_ACTION_TYPE_RSS: ret = flow_dv_action_destroy(dev, handle, error); break; + case MLX5_INDIRECT_ACTION_TYPE_QUOTA: + break; default: ret = -ENOTSUP; rte_flow_error_set(error, ENOTSUP, @@ -8196,6 +8323,11 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue, ret = flow_hw_conntrack_query(dev, queue, act_idx, data, job, push, error); break; + case MLX5_INDIRECT_ACTION_TYPE_QUOTA: + aso = true; + ret = mlx5_quota_query(dev, queue, handle, data, + job, push, error); + break; default: ret = -ENOTSUP; rte_flow_error_set(error, ENOTSUP, @@ -8205,7 +8337,51 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue, } if (job) flow_hw_action_finalize(dev, queue, job, push, aso, ret == 0); - return 0; + return ret; +} + +static int +flow_hw_async_action_handle_query_update + (struct rte_eth_dev *dev, uint32_t queue, + const struct rte_flow_op_attr *attr, + struct rte_flow_action_handle *handle, + const void *update, void *query, + enum rte_flow_query_update_mode qu_mode, + void *user_data, struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + bool push = flow_hw_action_push(attr); + bool aso = false; + struct mlx5_hw_q_job *job = NULL; + int ret = 0; + + if (attr) { + job = flow_hw_action_job_init(priv, queue, handle, user_data, + query, + MLX5_HW_Q_JOB_TYPE_UPDATE_QUERY, + error); + if (!job) + return -rte_errno; + } + switch (MLX5_INDIRECT_ACTION_TYPE_GET(handle)) { + case MLX5_INDIRECT_ACTION_TYPE_QUOTA: + if (qu_mode != RTE_FLOW_QU_QUERY_FIRST) { + ret = rte_flow_error_set + (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION_CONF, + NULL, "quota action must query before update"); + break; + } + aso = 
true; + ret = mlx5_quota_query_update(dev, queue, handle, + update, query, job, push, error); + break; + default: + ret = rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL, "update and query not supported"); + } + if (job) + flow_hw_action_finalize(dev, queue, job, push, aso, ret == 0); + return ret; } static int @@ -8217,6 +8393,19 @@ flow_hw_action_query(struct rte_eth_dev *dev, handle, data, NULL, error); } +static int +flow_hw_action_query_update(struct rte_eth_dev *dev, + struct rte_flow_action_handle *handle, + const void *update, void *query, + enum rte_flow_query_update_mode qu_mode, + struct rte_flow_error *error) +{ + return flow_hw_async_action_handle_query_update(dev, MLX5_HW_INV_QUEUE, + NULL, handle, update, + query, qu_mode, NULL, + error); +} + /** * Get aged-out flows of a given port on the given HWS flow queue. * @@ -8329,12 +8518,14 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = { .async_action_create = flow_hw_action_handle_create, .async_action_destroy = flow_hw_action_handle_destroy, .async_action_update = flow_hw_action_handle_update, + .async_action_query_update = flow_hw_async_action_handle_query_update, .async_action_query = flow_hw_action_handle_query, .action_validate = flow_hw_action_validate, .action_create = flow_hw_action_create, .action_destroy = flow_hw_action_destroy, .action_update = flow_hw_action_update, .action_query = flow_hw_action_query, + .action_query_update = flow_hw_action_query_update, .query = flow_hw_query, .get_aged_flows = flow_hw_get_aged_flows, .get_q_aged_flows = flow_hw_get_q_aged_flows, diff --git a/drivers/net/mlx5/mlx5_flow_quota.c b/drivers/net/mlx5/mlx5_flow_quota.c new file mode 100644 index 0000000000..0639620848 --- /dev/null +++ b/drivers/net/mlx5/mlx5_flow_quota.c @@ -0,0 +1,726 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2022 Nvidia Inc. All rights reserved. 
+ */ +#include +#include + +#include + +#include "mlx5.h" +#include "mlx5_malloc.h" +#include "mlx5_flow.h" + +typedef void (*quota_wqe_cmd_t)(volatile struct mlx5_aso_wqe *restrict, + struct mlx5_quota_ctx *, uint32_t, uint32_t, + void *); + +#define MLX5_ASO_MTR1_INIT_MASK 0xffffffffULL +#define MLX5_ASO_MTR0_INIT_MASK ((MLX5_ASO_MTR1_INIT_MASK) << 32) + +static __rte_always_inline bool +is_aso_mtr1_obj(uint32_t qix) +{ + return (qix & 1) != 0; +} + +static __rte_always_inline bool +is_quota_sync_queue(const struct mlx5_priv *priv, uint32_t queue) +{ + return queue >= priv->nb_queue - 1; +} + +static __rte_always_inline uint32_t +quota_sync_queue(const struct mlx5_priv *priv) +{ + return priv->nb_queue - 1; +} + +static __rte_always_inline uint32_t +mlx5_quota_wqe_read_offset(uint32_t qix, uint32_t sq_index) +{ + return 2 * sq_index + (qix & 1); +} + +static int32_t +mlx5_quota_fetch_tokens(const struct mlx5_aso_mtr_dseg *rd_buf) +{ + int c_tok = (int)rte_be_to_cpu_32(rd_buf->c_tokens); + int e_tok = (int)rte_be_to_cpu_32(rd_buf->e_tokens); + int result; + + DRV_LOG(DEBUG, "c_tokens %d e_tokens %d\n", + rte_be_to_cpu_32(rd_buf->c_tokens), + rte_be_to_cpu_32(rd_buf->e_tokens)); + /* Query after SET ignores negative E tokens */ + if (c_tok >= 0 && e_tok < 0) + result = c_tok; + /** + * If number of tokens in Meter bucket is zero or above, + * Meter hardware will use that bucket and can set number of tokens to + * negative value. + * Quota can discard negative C tokens in query report. + * That is a known hardware limitation. 
+ * Use case example: + * + * C E Result + * 250 250 500 + * 50 250 300 + * -150 250 100 + * -150 50 50 * + * -150 -150 -300 + * + */ + else if (c_tok < 0 && e_tok >= 0 && (c_tok + e_tok) < 0) + result = e_tok; + else + result = c_tok + e_tok; + + return result; +} + +static void +mlx5_quota_query_update_async_cmpl(struct mlx5_hw_q_job *job) +{ + struct rte_flow_query_quota *query = job->query.user; + + query->quota = mlx5_quota_fetch_tokens(job->query.hw); +} + +void +mlx5_quota_async_completion(struct rte_eth_dev *dev, uint32_t queue, + struct mlx5_hw_q_job *job) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_quota_ctx *qctx = &priv->quota_ctx; + uint32_t qix = MLX5_INDIRECT_ACTION_IDX_GET(job->action); + struct mlx5_quota *qobj = mlx5_ipool_get(qctx->quota_ipool, qix); + + RTE_SET_USED(queue); + qobj->state = MLX5_QUOTA_STATE_READY; + switch (job->type) { + case MLX5_HW_Q_JOB_TYPE_CREATE: + break; + case MLX5_HW_Q_JOB_TYPE_QUERY: + case MLX5_HW_Q_JOB_TYPE_UPDATE_QUERY: + mlx5_quota_query_update_async_cmpl(job); + break; + default: + break; + } +} + +static __rte_always_inline void +mlx5_quota_wqe_set_aso_read(volatile struct mlx5_aso_wqe *restrict wqe, + struct mlx5_quota_ctx *qctx, uint32_t queue) +{ + struct mlx5_aso_sq *sq = qctx->sq + queue; + uint32_t sq_mask = (1 << sq->log_desc_n) - 1; + uint32_t sq_head = sq->head & sq_mask; + uintptr_t rd_addr = (uintptr_t)(qctx->read_buf[queue] + 2 * sq_head); + + wqe->aso_cseg.lkey = rte_cpu_to_be_32(qctx->mr.lkey); + wqe->aso_cseg.va_h = rte_cpu_to_be_32((uint32_t)(rd_addr >> 32)); + wqe->aso_cseg.va_l_r = rte_cpu_to_be_32(((uint32_t)rd_addr) | + MLX5_ASO_CSEG_READ_ENABLE); +} + +#define MLX5_ASO_MTR1_ADD_MASK 0x00000F00ULL +#define MLX5_ASO_MTR1_SET_MASK 0x000F0F00ULL +#define MLX5_ASO_MTR0_ADD_MASK ((MLX5_ASO_MTR1_ADD_MASK) << 32) +#define MLX5_ASO_MTR0_SET_MASK ((MLX5_ASO_MTR1_SET_MASK) << 32) + +static __rte_always_inline void +mlx5_quota_wqe_set_mtr_tokens(volatile struct mlx5_aso_wqe 
*restrict wqe, + uint32_t qix, void *arg) +{ + volatile struct mlx5_aso_mtr_dseg *mtr_dseg; + const struct rte_flow_update_quota *conf = arg; + bool set_op = (conf->op == RTE_FLOW_UPDATE_QUOTA_SET); + + if (is_aso_mtr1_obj(qix)) { + wqe->aso_cseg.data_mask = set_op ? + RTE_BE64(MLX5_ASO_MTR1_SET_MASK) : + RTE_BE64(MLX5_ASO_MTR1_ADD_MASK); + mtr_dseg = wqe->aso_dseg.mtrs + 1; + } else { + wqe->aso_cseg.data_mask = set_op ? + RTE_BE64(MLX5_ASO_MTR0_SET_MASK) : + RTE_BE64(MLX5_ASO_MTR0_ADD_MASK); + mtr_dseg = wqe->aso_dseg.mtrs; + } + if (set_op) { + /* prevent using E tokens when C tokens exhausted */ + mtr_dseg->e_tokens = -1; + mtr_dseg->c_tokens = rte_cpu_to_be_32(conf->quota); + } else { + mtr_dseg->e_tokens = rte_cpu_to_be_32(conf->quota); + } +} + +static __rte_always_inline void +mlx5_quota_wqe_query(volatile struct mlx5_aso_wqe *restrict wqe, + struct mlx5_quota_ctx *qctx, __rte_unused uint32_t qix, + uint32_t queue, __rte_unused void *arg) +{ + mlx5_quota_wqe_set_aso_read(wqe, qctx, queue); + wqe->aso_cseg.data_mask = 0ull; /* clear MTR ASO data modification */ +} + +static __rte_always_inline void +mlx5_quota_wqe_update(volatile struct mlx5_aso_wqe *restrict wqe, + __rte_unused struct mlx5_quota_ctx *qctx, uint32_t qix, + __rte_unused uint32_t queue, void *arg) +{ + mlx5_quota_wqe_set_mtr_tokens(wqe, qix, arg); + wqe->aso_cseg.va_l_r = 0; /* clear READ flag */ +} + +static __rte_always_inline void +mlx5_quota_wqe_query_update(volatile struct mlx5_aso_wqe *restrict wqe, + struct mlx5_quota_ctx *qctx, uint32_t qix, + uint32_t queue, void *arg) +{ + mlx5_quota_wqe_set_aso_read(wqe, qctx, queue); + mlx5_quota_wqe_set_mtr_tokens(wqe, qix, arg); +} + +static __rte_always_inline void +mlx5_quota_set_init_wqe(volatile struct mlx5_aso_wqe *restrict wqe, + __rte_unused struct mlx5_quota_ctx *qctx, uint32_t qix, + __rte_unused uint32_t queue, void *arg) +{ + volatile struct mlx5_aso_mtr_dseg *mtr_dseg; + const struct rte_flow_action_quota *conf = arg; + const struct 
mlx5_quota *qobj = mlx5_ipool_get(qctx->quota_ipool, qix + 1); + + if (is_aso_mtr1_obj(qix)) { + wqe->aso_cseg.data_mask = + rte_cpu_to_be_64(MLX5_ASO_MTR1_INIT_MASK); + mtr_dseg = wqe->aso_dseg.mtrs + 1; + } else { + wqe->aso_cseg.data_mask = + rte_cpu_to_be_64(MLX5_ASO_MTR0_INIT_MASK); + mtr_dseg = wqe->aso_dseg.mtrs; + } + mtr_dseg->e_tokens = -1; + mtr_dseg->c_tokens = rte_cpu_to_be_32(conf->quota); + mtr_dseg->v_bo_sc_bbog_mm |= rte_cpu_to_be_32 + (qobj->mode << ASO_DSEG_MTR_MODE); +} + +static __rte_always_inline void +mlx5_quota_cmd_completed_status(struct mlx5_aso_sq *sq, uint16_t n) +{ + uint16_t i, mask = (1 << sq->log_desc_n) - 1; + + for (i = 0; i < n; i++) { + uint8_t state = MLX5_QUOTA_STATE_WAIT; + struct mlx5_quota *quota_obj = + sq->elts[(sq->tail + i) & mask].quota_obj; + + __atomic_compare_exchange_n("a_obj->state, &state, + MLX5_QUOTA_STATE_READY, false, + __ATOMIC_RELAXED, __ATOMIC_RELAXED); + } +} + +static void +mlx5_quota_cmd_completion_handle(struct mlx5_aso_sq *sq) +{ + struct mlx5_aso_cq *cq = &sq->cq; + volatile struct mlx5_cqe *restrict cqe; + const unsigned int cq_size = 1 << cq->log_desc_n; + const unsigned int mask = cq_size - 1; + uint32_t idx; + uint32_t next_idx = cq->cq_ci & mask; + uint16_t max; + uint16_t n = 0; + int ret; + + MLX5_ASSERT(rte_spinlock_is_locked(&sq->sqsl)); + max = (uint16_t)(sq->head - sq->tail); + if (unlikely(!max)) + return; + do { + idx = next_idx; + next_idx = (cq->cq_ci + 1) & mask; + rte_prefetch0(&cq->cq_obj.cqes[next_idx]); + cqe = &cq->cq_obj.cqes[idx]; + ret = check_cqe(cqe, cq_size, cq->cq_ci); + /* + * Be sure owner read is done before any other cookie field or + * opaque field. 
+ */ + rte_io_rmb(); + if (ret != MLX5_CQE_STATUS_SW_OWN) { + if (likely(ret == MLX5_CQE_STATUS_HW_OWN)) + break; + mlx5_aso_cqe_err_handle(sq); + } else { + n++; + } + cq->cq_ci++; + } while (1); + if (likely(n)) { + mlx5_quota_cmd_completed_status(sq, n); + sq->tail += n; + rte_io_wmb(); + cq->cq_obj.db_rec[0] = rte_cpu_to_be_32(cq->cq_ci); + } +} + +static int +mlx5_quota_cmd_wait_cmpl(struct mlx5_aso_sq *sq, struct mlx5_quota *quota_obj) +{ + uint32_t poll_cqe_times = MLX5_MTR_POLL_WQE_CQE_TIMES; + + do { + rte_spinlock_lock(&sq->sqsl); + mlx5_quota_cmd_completion_handle(sq); + rte_spinlock_unlock(&sq->sqsl); + if (__atomic_load_n("a_obj->state, __ATOMIC_RELAXED) == + MLX5_QUOTA_STATE_READY) + return 0; + } while (poll_cqe_times -= MLX5_ASO_WQE_CQE_RESPONSE_DELAY); + DRV_LOG(ERR, "QUOTA: failed to poll command CQ"); + return -1; +} + +static int +mlx5_quota_cmd_wqe(struct rte_eth_dev *dev, struct mlx5_quota *quota_obj, + quota_wqe_cmd_t wqe_cmd, uint32_t qix, uint32_t queue, + struct mlx5_hw_q_job *job, bool push, void *arg) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_dev_ctx_shared *sh = priv->sh; + struct mlx5_quota_ctx *qctx = &priv->quota_ctx; + struct mlx5_aso_sq *sq = qctx->sq + queue; + uint32_t head, sq_mask = (1 << sq->log_desc_n) - 1; + bool sync_queue = is_quota_sync_queue(priv, queue); + volatile struct mlx5_aso_wqe *restrict wqe; + int ret = 0; + + if (sync_queue) + rte_spinlock_lock(&sq->sqsl); + head = sq->head & sq_mask; + wqe = &sq->sq_obj.aso_wqes[head]; + wqe_cmd(wqe, qctx, qix, queue, arg); + wqe->general_cseg.misc = rte_cpu_to_be_32(qctx->devx_obj->id + (qix >> 1)); + wqe->general_cseg.opcode = rte_cpu_to_be_32 + (ASO_OPC_MOD_POLICER << WQE_CSEG_OPC_MOD_OFFSET | + sq->pi << WQE_CSEG_WQE_INDEX_OFFSET | MLX5_OPCODE_ACCESS_ASO); + sq->head++; + sq->pi += 2; /* Each WQE contains 2 WQEBB */ + if (push) { + mlx5_doorbell_ring(&sh->tx_uar.bf_db, *(volatile uint64_t *)wqe, + sq->pi, &sq->sq_obj.db_rec[MLX5_SND_DBR], + 
!sh->tx_uar.dbnc); + sq->db_pi = sq->pi; + } + sq->db = wqe; + job->query.hw = qctx->read_buf[queue] + + mlx5_quota_wqe_read_offset(qix, head); + sq->elts[head].quota_obj = sync_queue ? + quota_obj : (typeof(quota_obj))job; + if (sync_queue) { + rte_spinlock_unlock(&sq->sqsl); + ret = mlx5_quota_cmd_wait_cmpl(sq, quota_obj); + } + return ret; +} + +static void +mlx5_quota_destroy_sq(struct mlx5_priv *priv) +{ + struct mlx5_quota_ctx *qctx = &priv->quota_ctx; + uint32_t i, nb_queues = priv->nb_queue; + + if (!qctx->sq) + return; + for (i = 0; i < nb_queues; i++) + mlx5_aso_destroy_sq(qctx->sq + i); + mlx5_free(qctx->sq); +} + +static __rte_always_inline void +mlx5_quota_wqe_init_common(struct mlx5_aso_sq *sq, + volatile struct mlx5_aso_wqe *restrict wqe) +{ +#define ASO_MTR_DW0 RTE_BE32(1 << ASO_DSEG_VALID_OFFSET | \ + MLX5_FLOW_COLOR_GREEN << ASO_DSEG_SC_OFFSET) + + memset((void *)(uintptr_t)wqe, 0, sizeof(*wqe)); + wqe->general_cseg.sq_ds = rte_cpu_to_be_32((sq->sqn << 8) | + (sizeof(*wqe) >> 4)); + wqe->aso_cseg.operand_masks = RTE_BE32 + (0u | (ASO_OPER_LOGICAL_OR << ASO_CSEG_COND_OPER_OFFSET) | + (ASO_OP_ALWAYS_TRUE << ASO_CSEG_COND_1_OPER_OFFSET) | + (ASO_OP_ALWAYS_TRUE << ASO_CSEG_COND_0_OPER_OFFSET) | + (BYTEWISE_64BYTE << ASO_CSEG_DATA_MASK_MODE_OFFSET)); + wqe->general_cseg.flags = RTE_BE32 + (MLX5_COMP_ALWAYS << MLX5_COMP_MODE_OFFSET); + wqe->aso_dseg.mtrs[0].v_bo_sc_bbog_mm = ASO_MTR_DW0; + /** + * ASO Meter tokens auto-update must be disabled in quota action. 
+ * Tokens auto-update is disabled when the Meter *IR values are set to + * ((0x1u << 16) | (0x1Eu << 24)), **NOT** 0x00 + */ + wqe->aso_dseg.mtrs[0].cbs_cir = RTE_BE32((0x1u << 16) | (0x1Eu << 24)); + wqe->aso_dseg.mtrs[0].ebs_eir = RTE_BE32((0x1u << 16) | (0x1Eu << 24)); + wqe->aso_dseg.mtrs[1].v_bo_sc_bbog_mm = ASO_MTR_DW0; + wqe->aso_dseg.mtrs[1].cbs_cir = RTE_BE32((0x1u << 16) | (0x1Eu << 24)); + wqe->aso_dseg.mtrs[1].ebs_eir = RTE_BE32((0x1u << 16) | (0x1Eu << 24)); +#undef ASO_MTR_DW0 +} + +static void +mlx5_quota_init_sq(struct mlx5_aso_sq *sq) +{ + uint32_t i, size = 1 << sq->log_desc_n; + + for (i = 0; i < size; i++) + mlx5_quota_wqe_init_common(sq, sq->sq_obj.aso_wqes + i); +} + +static int +mlx5_quota_alloc_sq(struct mlx5_priv *priv) +{ + struct mlx5_dev_ctx_shared *sh = priv->sh; + struct mlx5_quota_ctx *qctx = &priv->quota_ctx; + uint32_t i, nb_queues = priv->nb_queue; + + qctx->sq = mlx5_malloc(MLX5_MEM_ZERO, + sizeof(qctx->sq[0]) * nb_queues, + 0, SOCKET_ID_ANY); + if (!qctx->sq) { + DRV_LOG(DEBUG, "QUOTA: failed to allocate SQ pool"); + return -ENOMEM; + } + for (i = 0; i < nb_queues; i++) { + int ret = mlx5_aso_sq_create + (sh->cdev, qctx->sq + i, sh->tx_uar.obj, + rte_log2_u32(priv->hw_q[i].size)); + if (ret) { + DRV_LOG(DEBUG, "QUOTA: failed to allocate SQ[%u]", i); + return -ENOMEM; + } + mlx5_quota_init_sq(qctx->sq + i); + } + return 0; +} + +static void +mlx5_quota_destroy_read_buf(struct mlx5_priv *priv) +{ + struct mlx5_dev_ctx_shared *sh = priv->sh; + struct mlx5_quota_ctx *qctx = &priv->quota_ctx; + + if (qctx->mr.lkey) { + void *addr = qctx->mr.addr; + sh->cdev->mr_scache.dereg_mr_cb(&qctx->mr); + mlx5_free(addr); + } + if (qctx->read_buf) + mlx5_free(qctx->read_buf); +} + +static int +mlx5_quota_alloc_read_buf(struct mlx5_priv *priv) +{ + struct mlx5_dev_ctx_shared *sh = priv->sh; + struct mlx5_quota_ctx *qctx = &priv->quota_ctx; + uint32_t i, nb_queues = priv->nb_queue; + uint32_t sq_size_sum; + size_t page_size = rte_mem_page_size(); + 
struct mlx5_aso_mtr_dseg *buf; + size_t rd_buf_size; + int ret; + + for (i = 0, sq_size_sum = 0; i < nb_queues; i++) + sq_size_sum += priv->hw_q[i].size; + /* ACCESS MTR ASO WQE reads 2 MTR objects */ + rd_buf_size = 2 * sq_size_sum * sizeof(buf[0]); + buf = mlx5_malloc(MLX5_MEM_ANY | MLX5_MEM_ZERO, rd_buf_size, + page_size, SOCKET_ID_ANY); + if (!buf) { + DRV_LOG(DEBUG, "QUOTA: failed to allocate MTR ASO READ buffer [1]"); + return -ENOMEM; + } + ret = sh->cdev->mr_scache.reg_mr_cb(sh->cdev->pd, buf, + rd_buf_size, &qctx->mr); + if (ret) { + DRV_LOG(DEBUG, "QUOTA: failed to register MTR ASO READ MR"); + return -errno; + } + qctx->read_buf = mlx5_malloc(MLX5_MEM_ZERO, + sizeof(qctx->read_buf[0]) * nb_queues, + 0, SOCKET_ID_ANY); + if (!qctx->read_buf) { + DRV_LOG(DEBUG, "QUOTA: failed to allocate MTR ASO READ buffer [2]"); + return -ENOMEM; + } + for (i = 0; i < nb_queues; i++) { + qctx->read_buf[i] = buf; + buf += 2 * priv->hw_q[i].size; + } + return 0; +} + +static __rte_always_inline int +mlx5_quota_check_ready(struct mlx5_quota *qobj, struct rte_flow_error *error) +{ + uint8_t state = MLX5_QUOTA_STATE_READY; + bool verdict = __atomic_compare_exchange_n + (&qobj->state, &state, MLX5_QUOTA_STATE_WAIT, false, + __ATOMIC_RELAXED, __ATOMIC_RELAXED); + + if (!verdict) + return rte_flow_error_set(error, EBUSY, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, "action is busy"); + return 0; +} + +int +mlx5_quota_query(struct rte_eth_dev *dev, uint32_t queue, + const struct rte_flow_action_handle *handle, + struct rte_flow_query_quota *query, + struct mlx5_hw_q_job *async_job, bool push, + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_quota_ctx *qctx = &priv->quota_ctx; + uint32_t work_queue = !is_quota_sync_queue(priv, queue) ? 
+ queue : quota_sync_queue(priv); + uint32_t id = MLX5_INDIRECT_ACTION_IDX_GET(handle); + uint32_t qix = id - 1; + struct mlx5_quota *qobj = mlx5_ipool_get(qctx->quota_ipool, id); + struct mlx5_hw_q_job sync_job; + int ret; + + if (!qobj) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_HANDLE, NULL, + "invalid query handle"); + ret = mlx5_quota_check_ready(qobj, error); + if (ret) + return ret; + ret = mlx5_quota_cmd_wqe(dev, qobj, mlx5_quota_wqe_query, qix, work_queue, + async_job ? async_job : &sync_job, push, NULL); + if (ret) { + __atomic_store_n(&qobj->state, MLX5_QUOTA_STATE_READY, + __ATOMIC_RELAXED); + return rte_flow_error_set(error, EAGAIN, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, "try again"); + } + if (is_quota_sync_queue(priv, queue)) + query->quota = mlx5_quota_fetch_tokens(sync_job.query.hw); + return 0; +} + +int +mlx5_quota_query_update(struct rte_eth_dev *dev, uint32_t queue, + struct rte_flow_action_handle *handle, + const struct rte_flow_action *update, + struct rte_flow_query_quota *query, + struct mlx5_hw_q_job *async_job, bool push, + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_quota_ctx *qctx = &priv->quota_ctx; + const struct rte_flow_update_quota *conf = update->conf; + uint32_t work_queue = !is_quota_sync_queue(priv, queue) ? + queue : quota_sync_queue(priv); + uint32_t id = MLX5_INDIRECT_ACTION_IDX_GET(handle); + uint32_t qix = id - 1; + struct mlx5_quota *qobj = mlx5_ipool_get(qctx->quota_ipool, id); + struct mlx5_hw_q_job sync_job; + quota_wqe_cmd_t wqe_cmd = query ? 
+ mlx5_quota_wqe_query_update : + mlx5_quota_wqe_update; + int ret; + + if (conf->quota > MLX5_MTR_MAX_TOKEN_VALUE) + return rte_flow_error_set(error, E2BIG, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, "update value too big"); + if (!qobj) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_HANDLE, NULL, + "invalid query_update handle"); + if (conf->op == RTE_FLOW_UPDATE_QUOTA_ADD && + qobj->last_update == RTE_FLOW_UPDATE_QUOTA_ADD) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, "cannot add twice"); + ret = mlx5_quota_check_ready(qobj, error); + if (ret) + return ret; + ret = mlx5_quota_cmd_wqe(dev, qobj, wqe_cmd, qix, work_queue, + async_job ? async_job : &sync_job, push, + (void *)(uintptr_t)update->conf); + if (ret) { + __atomic_store_n(&qobj->state, MLX5_QUOTA_STATE_READY, + __ATOMIC_RELAXED); + return rte_flow_error_set(error, EAGAIN, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, "try again"); + } + qobj->last_update = conf->op; + if (query && is_quota_sync_queue(priv, queue)) + query->quota = mlx5_quota_fetch_tokens(sync_job.query.hw); + return 0; +} + +struct rte_flow_action_handle * +mlx5_quota_alloc(struct rte_eth_dev *dev, uint32_t queue, + const struct rte_flow_action_quota *conf, + struct mlx5_hw_q_job *job, bool push, + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_quota_ctx *qctx = &priv->quota_ctx; + uint32_t id; + struct mlx5_quota *qobj; + uintptr_t handle = (uintptr_t)MLX5_INDIRECT_ACTION_TYPE_QUOTA << + MLX5_INDIRECT_ACTION_TYPE_OFFSET; + uint32_t work_queue = !is_quota_sync_queue(priv, queue) ? 
+ queue : quota_sync_queue(priv); + struct mlx5_hw_q_job sync_job; + uint8_t state = MLX5_QUOTA_STATE_FREE; + bool verdict; + int ret; + + qobj = mlx5_ipool_malloc(qctx->quota_ipool, &id); + if (!qobj) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "quota: failed to allocate quota object"); + return NULL; + } + verdict = __atomic_compare_exchange_n + (&qobj->state, &state, MLX5_QUOTA_STATE_WAIT, false, + __ATOMIC_RELAXED, __ATOMIC_RELAXED); + if (!verdict) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "quota: new quota object has invalid state"); + return NULL; + } + switch (conf->mode) { + case RTE_FLOW_QUOTA_MODE_L2: + qobj->mode = MLX5_METER_MODE_L2_LEN; + break; + case RTE_FLOW_QUOTA_MODE_PACKET: + qobj->mode = MLX5_METER_MODE_PKT; + break; + default: + qobj->mode = MLX5_METER_MODE_IP_LEN; + } + ret = mlx5_quota_cmd_wqe(dev, qobj, mlx5_quota_set_init_wqe, id - 1, + work_queue, job ? job : &sync_job, push, + (void *)(uintptr_t)conf); + if (ret) { + mlx5_ipool_free(qctx->quota_ipool, id); + __atomic_store_n(&qobj->state, MLX5_QUOTA_STATE_FREE, + __ATOMIC_RELAXED); + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "quota: WR failure"); + return 0; + } + return (struct rte_flow_action_handle *)(handle | id); +} + +int +mlx5_flow_quota_destroy(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_quota_ctx *qctx = &priv->quota_ctx; + int ret; + + if (qctx->quota_ipool) + mlx5_ipool_destroy(qctx->quota_ipool); + mlx5_quota_destroy_sq(priv); + mlx5_quota_destroy_read_buf(priv); + if (qctx->dr_action) { + ret = mlx5dr_action_destroy(qctx->dr_action); + if (ret) + DRV_LOG(ERR, "QUOTA: failed to destroy DR action"); + } + if (qctx->devx_obj) { + ret = mlx5_devx_cmd_destroy(qctx->devx_obj); + if (ret) + DRV_LOG(ERR, "QUOTA: failed to destroy MTR ASO object"); + } + memset(qctx, 0, sizeof(*qctx)); + return 0; +} + +#define MLX5_QUOTA_IPOOL_TRUNK_SIZE (1u << 
12) +#define MLX5_QUOTA_IPOOL_CACHE_SIZE (1u << 13) +int +mlx5_flow_quota_init(struct rte_eth_dev *dev, uint32_t nb_quotas) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_dev_ctx_shared *sh = priv->sh; + struct mlx5_quota_ctx *qctx = &priv->quota_ctx; + int reg_id = mlx5_flow_get_reg_id(dev, MLX5_MTR_COLOR, 0, NULL); + uint32_t flags = MLX5DR_ACTION_FLAG_HWS_RX | MLX5DR_ACTION_FLAG_HWS_TX; + struct mlx5_indexed_pool_config quota_ipool_cfg = { + .size = sizeof(struct mlx5_quota), + .trunk_size = RTE_MIN(nb_quotas, MLX5_QUOTA_IPOOL_TRUNK_SIZE), + .need_lock = 1, + .release_mem_en = !!priv->sh->config.reclaim_mode, + .malloc = mlx5_malloc, + .max_idx = nb_quotas, + .free = mlx5_free, + .type = "mlx5_flow_quota_index_pool" + }; + int ret; + + if (!nb_quotas) { + DRV_LOG(DEBUG, "QUOTA: cannot create quota with 0 objects"); + return -EINVAL; + } + if (!priv->mtr_en || !sh->meter_aso_en) { + DRV_LOG(DEBUG, "QUOTA: no MTR support"); + return -ENOTSUP; + } + if (reg_id < 0) { + DRV_LOG(DEBUG, "QUOTA: MRT register not available"); + return -ENOTSUP; + } + qctx->devx_obj = mlx5_devx_cmd_create_flow_meter_aso_obj + (sh->cdev->ctx, sh->cdev->pdn, rte_log2_u32(nb_quotas >> 1)); + if (!qctx->devx_obj) { + DRV_LOG(DEBUG, "QUOTA: cannot allocate MTR ASO objects"); + return -ENOMEM; + } + if (sh->config.dv_esw_en && priv->master) + flags |= MLX5DR_ACTION_FLAG_HWS_FDB; + qctx->dr_action = mlx5dr_action_create_aso_meter + (priv->dr_ctx, (struct mlx5dr_devx_obj *)qctx->devx_obj, + reg_id - REG_C_0, flags); + if (!qctx->dr_action) { + DRV_LOG(DEBUG, "QUOTA: failed to create DR action"); + ret = -ENOMEM; + goto err; + } + ret = mlx5_quota_alloc_read_buf(priv); + if (ret) + goto err; + ret = mlx5_quota_alloc_sq(priv); + if (ret) + goto err; + if (nb_quotas < MLX5_QUOTA_IPOOL_TRUNK_SIZE) + quota_ipool_cfg.per_core_cache = 0; + else if (nb_quotas < MLX5_HW_IPOOL_SIZE_THRESHOLD) + quota_ipool_cfg.per_core_cache = MLX5_HW_IPOOL_CACHE_MIN; + else + 
quota_ipool_cfg.per_core_cache = MLX5_QUOTA_IPOOL_CACHE_SIZE; + qctx->quota_ipool = mlx5_ipool_create("a_ipool_cfg); + if (!qctx->quota_ipool) { + DRV_LOG(DEBUG, "QUOTA: failed to allocate quota pool"); + ret = -ENOMEM; + goto err; + } + qctx->nb_quotas = nb_quotas; + return 0; +err: + mlx5_flow_quota_destroy(dev); + return ret; +}