From patchwork Wed Jan 18 12:55:52 2023
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 122306
X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson
CC: Viacheslav Ovsiienko
Subject: [PATCH 1/5] net/mlx5: update query fields in async job structure
Date: Wed, 18 Jan 2023 14:55:52 +0200
Message-ID: <20230118125556.23622-2-getelson@nvidia.com>
In-Reply-To: <20230118125556.23622-1-getelson@nvidia.com>
References: <20230118125556.23622-1-getelson@nvidia.com>
List-Id: DPDK patches and discussions

Query fields defined in `mlx5_hw_q_job` target CT type only.
The patch updates `mlx5_hw_q_job` for other query types as well.

Signed-off-by: Gregory Etelson
---
 drivers/net/mlx5/mlx5.h          | 10 +++++-----
 drivers/net/mlx5/mlx5_flow_aso.c |  2 +-
 drivers/net/mlx5/mlx5_flow_hw.c  |  6 +++---
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 16b33e1548..eaf2ad69fb 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -366,11 +366,11 @@ struct mlx5_hw_q_job {
 	struct rte_flow_item *items;
 	union {
 		struct {
-			/* Pointer to ct query user memory. */
-			struct rte_flow_action_conntrack *profile;
-			/* Pointer to ct ASO query out memory. */
-			void *out_data;
-		} __rte_packed;
+			/* User memory for query output */
+			void *user;
+			/* Data extracted from hardware */
+			void *hw;
+		} __rte_packed query;
 		struct rte_flow_item_ethdev port_spec;
 		struct rte_flow_item_tag tag_spec;
 	} __rte_packed;
diff --git a/drivers/net/mlx5/mlx5_flow_aso.c b/drivers/net/mlx5/mlx5_flow_aso.c
index 29bd7ce9e8..0eb91c570f 100644
--- a/drivers/net/mlx5/mlx5_flow_aso.c
+++ b/drivers/net/mlx5/mlx5_flow_aso.c
@@ -1389,7 +1389,7 @@ mlx5_aso_ct_sq_query_single(struct mlx5_dev_ctx_shared *sh,
 		struct mlx5_hw_q_job *job = (struct mlx5_hw_q_job *)user_data;
 
 		sq->elts[wqe_idx].ct = user_data;
-		job->out_data = (char *)((uintptr_t)sq->mr.addr + wqe_idx * 64);
+		job->query.hw = (char *)((uintptr_t)sq->mr.addr + wqe_idx * 64);
 	} else {
 		sq->elts[wqe_idx].query_data = data;
 		sq->elts[wqe_idx].ct = ct;
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 20c71ff7f0..df5883f340 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -2730,8 +2730,8 @@ __flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev,
 			idx = MLX5_ACTION_CTX_CT_GET_IDX
 				((uint32_t)(uintptr_t)job->action);
 			aso_ct = mlx5_ipool_get(priv->hws_ctpool->cts, idx);
-			mlx5_aso_ct_obj_analyze(job->profile,
-						job->out_data);
+			mlx5_aso_ct_obj_analyze(job->query.user,
+						job->query.hw);
 			aso_ct->state = ASO_CONNTRACK_READY;
 		}
 	}
@@ -8179,7 +8179,7 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
 	case MLX5_INDIRECT_ACTION_TYPE_CT:
 		aso = true;
 		if (job)
-			job->profile = (struct rte_flow_action_conntrack *)data;
+			job->query.user = data;
 		ret = flow_hw_conntrack_query(dev, queue, act_idx, data, job,
 					      push, error);
 		break;

From patchwork Wed Jan 18 12:55:53 2023
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 122307
X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson
CC: Viacheslav Ovsiienko
Subject: [PATCH 2/5] net/mlx5: remove code duplication
Date: Wed, 18 Jan 2023 14:55:53 +0200
Message-ID: <20230118125556.23622-3-getelson@nvidia.com>
In-Reply-To: <20230118125556.23622-1-getelson@nvidia.com>
References: <20230118125556.23622-1-getelson@nvidia.com>
Replace duplicated code with dedicated functions.

Signed-off-by: Gregory Etelson
---
 drivers/net/mlx5/mlx5.h         |   6 +-
 drivers/net/mlx5/mlx5_flow_hw.c | 182 ++++++++++++++++----------------
 2 files changed, 95 insertions(+), 93 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index eaf2ad69fb..7c6bc91ddf 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -344,11 +344,11 @@ struct mlx5_lb_ctx {
 };
 
 /* HW steering queue job descriptor type. */
-enum {
+enum mlx5_hw_job_type {
 	MLX5_HW_Q_JOB_TYPE_CREATE, /* Flow create job type. */
 	MLX5_HW_Q_JOB_TYPE_DESTROY, /* Flow destroy job type. */
-	MLX5_HW_Q_JOB_TYPE_UPDATE,
-	MLX5_HW_Q_JOB_TYPE_QUERY,
+	MLX5_HW_Q_JOB_TYPE_UPDATE, /* Flow update job type. */
+	MLX5_HW_Q_JOB_TYPE_QUERY, /* Flow query job type. */
 };
 
 #define MLX5_HW_MAX_ITEMS (16)
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index df5883f340..04d0612ee1 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -7532,6 +7532,67 @@ flow_hw_action_handle_validate(struct rte_eth_dev *dev, uint32_t queue,
 	return 0;
 }
 
+static __rte_always_inline bool
+flow_hw_action_push(const struct rte_flow_op_attr *attr)
+{
+	return attr ? !attr->postpone : true;
+}
+
+static __rte_always_inline struct mlx5_hw_q_job *
+flow_hw_job_get(struct mlx5_priv *priv, uint32_t queue)
+{
+	return priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
+}
+
+static __rte_always_inline void
+flow_hw_job_put(struct mlx5_priv *priv, uint32_t queue)
+{
+	priv->hw_q[queue].job_idx++;
+}
+
+static __rte_always_inline struct mlx5_hw_q_job *
+flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue,
+			const struct rte_flow_action_handle *handle,
+			void *user_data, void *query_data,
+			enum mlx5_hw_job_type type,
+			struct rte_flow_error *error)
+{
+	struct mlx5_hw_q_job *job;
+
+	MLX5_ASSERT(queue != MLX5_HW_INV_QUEUE);
+	if (unlikely(!priv->hw_q[queue].job_idx)) {
+		rte_flow_error_set(error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_ACTION_NUM, NULL,
+				   "Action destroy failed due to queue full.");
+		return NULL;
+	}
+	job = flow_hw_job_get(priv, queue);
+	job->type = type;
+	job->action = handle;
+	job->user_data = user_data;
+	job->query.user = query_data;
+	return job;
+}
+
+static __rte_always_inline void
+flow_hw_action_finalize(struct rte_eth_dev *dev, uint32_t queue,
+			struct mlx5_hw_q_job *job,
+			bool push, bool aso, bool status)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	if (likely(status)) {
+		if (push)
+			__flow_hw_push_action(dev, queue);
+		if (!aso)
+			rte_ring_enqueue(push ?
+					 priv->hw_q[queue].indir_cq :
+					 priv->hw_q[queue].indir_iq,
+					 job);
+	} else {
+		flow_hw_job_put(priv, queue);
+	}
+}
+
 /**
  * Create shared action.
  *
@@ -7569,21 +7630,15 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
 	cnt_id_t cnt_id;
 	uint32_t mtr_id;
 	uint32_t age_idx;
-	bool push = true;
+	bool push = flow_hw_action_push(attr);
 	bool aso = false;
 
 	if (attr) {
-		MLX5_ASSERT(queue != MLX5_HW_INV_QUEUE);
-		if (unlikely(!priv->hw_q[queue].job_idx)) {
-			rte_flow_error_set(error, ENOMEM,
-				RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-				"Flow queue full.");
+		job = flow_hw_action_job_init(priv, queue, NULL, user_data,
+					      NULL, MLX5_HW_Q_JOB_TYPE_CREATE,
+					      error);
+		if (!job)
 			return NULL;
-		}
-		job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
-		job->type = MLX5_HW_Q_JOB_TYPE_CREATE;
-		job->user_data = user_data;
-		push = !attr->postpone;
 	}
 	switch (action->type) {
 	case RTE_FLOW_ACTION_TYPE_AGE:
@@ -7646,17 +7701,9 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
 		break;
 	}
 	if (job) {
-		if (!handle) {
-			priv->hw_q[queue].job_idx++;
-			return NULL;
-		}
 		job->action = handle;
-		if (push)
-			__flow_hw_push_action(dev, queue);
-		if (aso)
-			return handle;
-		rte_ring_enqueue(push ? priv->hw_q[queue].indir_cq :
-				 priv->hw_q[queue].indir_iq, job);
+		flow_hw_action_finalize(dev, queue, job, push, aso,
+					handle != NULL);
 	}
 	return handle;
 }
@@ -7704,19 +7751,15 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
 	uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET;
 	uint32_t idx = act_idx & ((1u << MLX5_INDIRECT_ACTION_TYPE_OFFSET) - 1);
 	int ret = 0;
-	bool push = true;
+	bool push = flow_hw_action_push(attr);
 	bool aso = false;
 
 	if (attr) {
-		MLX5_ASSERT(queue != MLX5_HW_INV_QUEUE);
-		if (unlikely(!priv->hw_q[queue].job_idx))
-			return rte_flow_error_set(error, ENOMEM,
-				RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-				"Action update failed due to queue full.");
-		job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
-		job->type = MLX5_HW_Q_JOB_TYPE_UPDATE;
-		job->user_data = user_data;
-		push = !attr->postpone;
+		job = flow_hw_action_job_init(priv, queue, handle, user_data,
+					      NULL, MLX5_HW_Q_JOB_TYPE_UPDATE,
+					      error);
+		if (!job)
+			return -rte_errno;
 	}
 	switch (type) {
 	case MLX5_INDIRECT_ACTION_TYPE_AGE:
@@ -7779,19 +7822,8 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
 			"action type not supported");
 		break;
 	}
-	if (job) {
-		if (ret) {
-			priv->hw_q[queue].job_idx++;
-			return ret;
-		}
-		job->action = handle;
-		if (push)
-			__flow_hw_push_action(dev, queue);
-		if (aso)
-			return 0;
-		rte_ring_enqueue(push ? priv->hw_q[queue].indir_cq :
-				 priv->hw_q[queue].indir_iq, job);
-	}
+	if (job)
+		flow_hw_action_finalize(dev, queue, job, push, aso, ret == 0);
 	return ret;
 }
@@ -7830,20 +7862,16 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
 	struct mlx5_hw_q_job *job = NULL;
 	struct mlx5_aso_mtr *aso_mtr;
 	struct mlx5_flow_meter_info *fm;
-	bool push = true;
+	bool push = flow_hw_action_push(attr);
 	bool aso = false;
 	int ret = 0;
 
 	if (attr) {
-		MLX5_ASSERT(queue != MLX5_HW_INV_QUEUE);
-		if (unlikely(!priv->hw_q[queue].job_idx))
-			return rte_flow_error_set(error, ENOMEM,
-				RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-				"Action destroy failed due to queue full.");
-		job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
-		job->type = MLX5_HW_Q_JOB_TYPE_DESTROY;
-		job->user_data = user_data;
-		push = !attr->postpone;
+		job = flow_hw_action_job_init(priv, queue, handle, user_data,
+					      NULL, MLX5_HW_Q_JOB_TYPE_DESTROY,
+					      error);
+		if (!job)
+			return -rte_errno;
 	}
 	switch (type) {
 	case MLX5_INDIRECT_ACTION_TYPE_AGE:
@@ -7906,19 +7934,8 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
 			"action type not supported");
 		break;
 	}
-	if (job) {
-		if (ret) {
-			priv->hw_q[queue].job_idx++;
-			return ret;
-		}
-		job->action = handle;
-		if (push)
-			__flow_hw_push_action(dev, queue);
-		if (aso)
-			return ret;
-		rte_ring_enqueue(push ? priv->hw_q[queue].indir_cq :
-				 priv->hw_q[queue].indir_iq, job);
-	}
+	if (job)
+		flow_hw_action_finalize(dev, queue, job, push, aso, ret == 0);
 	return ret;
 }
@@ -8155,19 +8172,15 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
 	uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET;
 	uint32_t age_idx = act_idx & MLX5_HWS_AGE_IDX_MASK;
 	int ret;
-	bool push = true;
+	bool push = flow_hw_action_push(attr);
 	bool aso = false;
 
 	if (attr) {
-		MLX5_ASSERT(queue != MLX5_HW_INV_QUEUE);
-		if (unlikely(!priv->hw_q[queue].job_idx))
-			return rte_flow_error_set(error, ENOMEM,
-				RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-				"Action destroy failed due to queue full.");
-		job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
-		job->type = MLX5_HW_Q_JOB_TYPE_QUERY;
-		job->user_data = user_data;
-		push = !attr->postpone;
+		job = flow_hw_action_job_init(priv, queue, handle, user_data,
+					      data, MLX5_HW_Q_JOB_TYPE_QUERY,
+					      error);
+		if (!job)
+			return -rte_errno;
 	}
 	switch (type) {
 	case MLX5_INDIRECT_ACTION_TYPE_AGE:
@@ -8190,19 +8203,8 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
 			"action type not supported");
 		break;
 	}
-	if (job) {
-		if (ret) {
-			priv->hw_q[queue].job_idx++;
-			return ret;
-		}
-		job->action = handle;
-		if (push)
-			__flow_hw_push_action(dev, queue);
-		if (aso)
-			return ret;
-		rte_ring_enqueue(push ? priv->hw_q[queue].indir_cq :
-				 priv->hw_q[queue].indir_iq, job);
-	}
+	if (job)
+		flow_hw_action_finalize(dev, queue, job, push, aso, ret == 0);
 	return 0;
 }

From patchwork Wed Jan 18 12:55:54 2023
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 122308
X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson
CC: Viacheslav Ovsiienko
Subject: [PATCH 3/5] common/mlx5: update MTR ASO definitions
Date: Wed, 18 Jan 2023 14:55:54 +0200
Message-ID: <20230118125556.23622-4-getelson@nvidia.com>
In-Reply-To: <20230118125556.23622-1-getelson@nvidia.com>
References: <20230118125556.23622-1-getelson@nvidia.com>
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org

Update MTR ASO definitions for QUOTA flow action.
Quota flow action requires WQE READ capability and
access to token fields.

Signed-off-by: Gregory Etelson
---
 drivers/common/mlx5/mlx5_prm.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 3790dc84b8..c25eb6b8c3 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -3814,6 +3814,8 @@ enum mlx5_aso_op {
 	ASO_OPER_LOGICAL_OR = 0x1,
 };
 
+#define MLX5_ASO_CSEG_READ_ENABLE 1
+
 /* ASO WQE CTRL segment. */
 struct mlx5_aso_cseg {
 	uint32_t va_h;
@@ -3828,6 +3830,8 @@ struct mlx5_aso_cseg {
 	uint64_t data_mask;
 } __rte_packed;
 
+#define MLX5_MTR_MAX_TOKEN_VALUE INT32_MAX
+
 /* A meter data segment - 2 per ASO WQE. */
 struct mlx5_aso_mtr_dseg {
 	uint32_t v_bo_sc_bbog_mm;

From patchwork Wed Jan 18 12:55:55 2023
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 122309
X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson
To:
CC: , , , Viacheslav Ovsiienko
Subject: [PATCH 4/5] net/mlx5: add indirect QUOTA create/query/modify
Date: Wed, 18 Jan 2023 14:55:55 +0200
Message-ID: <20230118125556.23622-5-getelson@nvidia.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230118125556.23622-1-getelson@nvidia.com>
References: <20230118125556.23622-1-getelson@nvidia.com>
MIME-Version: 1.0
Implement HWS functions for indirect QUOTA creation,
modification and query.

Signed-off-by: Gregory Etelson
---
 drivers/net/mlx5/meson.build       |   1 +
 drivers/net/mlx5/mlx5.h            |  72 +++
 drivers/net/mlx5/mlx5_flow.c       |  62 +++
 drivers/net/mlx5/mlx5_flow.h       |  20 +-
 drivers/net/mlx5/mlx5_flow_aso.c   |   8 +-
 drivers/net/mlx5/mlx5_flow_hw.c    | 343 +++++++++++---
 drivers/net/mlx5/mlx5_flow_quota.c | 726 +++++++++++++++++++++++++++++
 7 files changed, 1151 insertions(+), 81 deletions(-)
 create mode 100644 drivers/net/mlx5/mlx5_flow_quota.c

diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build
index abd507bd88..323c381d2b 100644
--- a/drivers/net/mlx5/meson.build
+++ b/drivers/net/mlx5/meson.build
@@ -23,6 +23,7 @@ sources = files(
         'mlx5_flow_dv.c',
         'mlx5_flow_aso.c',
         'mlx5_flow_flex.c',
+        'mlx5_flow_quota.c',
         'mlx5_mac.c',
         'mlx5_rss.c',
         'mlx5_rx.c',
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 7c6bc91ddf..c18dffeab5 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -46,6 +46,14 @@
 
 #define MLX5_HW_INV_QUEUE UINT32_MAX
 
+/*
+ * The default ipool threshold value indicates which per_core_cache
+ * value to set.
+ */
+#define MLX5_HW_IPOOL_SIZE_THRESHOLD (1 << 19)
+/* The default min local cache size. */
+#define MLX5_HW_IPOOL_CACHE_MIN (1 << 9)
+
 /*
  * Number of modification commands.
  * The maximal actions amount in FW is some constant, and it is 16 in the
@@ -349,6 +357,7 @@ enum mlx5_hw_job_type {
 	MLX5_HW_Q_JOB_TYPE_DESTROY, /* Flow destroy job type. */
 	MLX5_HW_Q_JOB_TYPE_UPDATE, /* Flow update job type. */
 	MLX5_HW_Q_JOB_TYPE_QUERY, /* Flow query job type. */
+	MLX5_HW_Q_JOB_TYPE_UPDATE_QUERY, /* Flow update and query job type. */
 };
 
 #define MLX5_HW_MAX_ITEMS (16)
@@ -590,6 +599,7 @@ struct mlx5_aso_sq_elem {
 			char *query_data;
 		};
 		void *user_data;
+		struct mlx5_quota *quota_obj;
 	};
 };
@@ -1645,6 +1655,33 @@ struct mlx5_hw_ctrl_flow {
 
 struct mlx5_flow_hw_ctrl_rx;
 
+enum mlx5_quota_state {
+	MLX5_QUOTA_STATE_FREE, /* quota not in use */
+	MLX5_QUOTA_STATE_READY, /* quota is ready */
+	MLX5_QUOTA_STATE_WAIT /* quota waits WR completion */
+};
+
+struct mlx5_quota {
+	uint8_t state; /* object state */
+	uint8_t mode; /* metering mode */
+	/**
+	 * Keep track of application update types.
+	 * PMD does not allow 2 consecutive ADD updates.
+	 */
+	enum rte_flow_update_quota_op last_update;
+};
+
+/* Bulk management structure for flow quota. */
+struct mlx5_quota_ctx {
+	uint32_t nb_quotas; /* Total number of quota objects */
+	struct mlx5dr_action *dr_action; /* HWS action */
+	struct mlx5_devx_obj *devx_obj; /* DEVX ranged object. */
+	struct mlx5_pmd_mr mr; /* MR for READ from MTR ASO */
+	struct mlx5_aso_mtr_dseg **read_buf; /* Buffers for READ */
+	struct mlx5_aso_sq *sq; /* SQs for sync/async ACCESS_ASO WRs */
+	struct mlx5_indexed_pool *quota_ipool; /* Manage quota objects */
+};
+
 struct mlx5_priv {
 	struct rte_eth_dev_data *dev_data; /* Pointer to device data. */
 	struct mlx5_dev_ctx_shared *sh; /* Shared device context. */
@@ -1734,6 +1771,7 @@ struct mlx5_priv {
 	struct mlx5_flow_meter_policy *mtr_policy_arr; /* Policy array. */
 	struct mlx5_l3t_tbl *mtr_idx_tbl; /* Meter index lookup table. */
 	struct mlx5_mtr_bulk mtr_bulk; /* Meter index mapping for HWS */
+	struct mlx5_quota_ctx quota_ctx; /* Quota index mapping for HWS */
 	uint8_t skip_default_rss_reta; /* Skip configuration of default reta. */
 	uint8_t fdb_def_rule; /* Whether fdb jump to table 1 is configured. */
 	struct mlx5_mp_id mp_id; /* ID of a multi-process process */
@@ -2227,6 +2265,15 @@ int mlx5_aso_ct_queue_init(struct mlx5_dev_ctx_shared *sh,
 			   uint32_t nb_queues);
 int mlx5_aso_ct_queue_uninit(struct mlx5_dev_ctx_shared *sh,
 			     struct mlx5_aso_ct_pools_mng *ct_mng);
+int
+mlx5_aso_sq_create(struct mlx5_common_device *cdev, struct mlx5_aso_sq *sq,
+		   void *uar, uint16_t log_desc_n);
+void
+mlx5_aso_destroy_sq(struct mlx5_aso_sq *sq);
+void
+mlx5_aso_mtr_init_sq(struct mlx5_aso_sq *sq);
+void
+mlx5_aso_cqe_err_handle(struct mlx5_aso_sq *sq);
 
 /* mlx5_flow_flex.c */
 
@@ -2257,4 +2304,29 @@ struct mlx5_list_entry *mlx5_flex_parser_clone_cb(void *list_ctx, void *ctx);
 void mlx5_flex_parser_clone_free_cb(void *tool_ctx,
 				    struct mlx5_list_entry *entry);
+
+int
+mlx5_flow_quota_destroy(struct rte_eth_dev *dev);
+int
+mlx5_flow_quota_init(struct rte_eth_dev *dev, uint32_t nb_quotas);
+struct rte_flow_action_handle *
+mlx5_quota_alloc(struct rte_eth_dev *dev, uint32_t queue,
+		 const struct rte_flow_action_quota *conf,
+		 struct mlx5_hw_q_job *job, bool push,
+		 struct rte_flow_error *error);
+void
+mlx5_quota_async_completion(struct rte_eth_dev *dev, uint32_t queue,
+			    struct mlx5_hw_q_job *job);
+int
+mlx5_quota_query_update(struct rte_eth_dev *dev, uint32_t queue,
+			struct rte_flow_action_handle *handle,
+			const struct rte_flow_action *update,
+			struct rte_flow_query_quota *query,
+			struct mlx5_hw_q_job *async_job, bool push,
+			struct rte_flow_error *error);
+int mlx5_quota_query(struct rte_eth_dev *dev, uint32_t queue,
+		     const struct rte_flow_action_handle *handle,
+		     struct rte_flow_query_quota *query,
+		     struct mlx5_hw_q_job *async_job, bool push,
+		     struct rte_flow_error *error);
 #endif /* RTE_PMD_MLX5_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index f5e2831480..768c4c4ae6 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1075,6 +1075,20 @@ mlx5_flow_async_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
 				    void *data, void *user_data,
 				    struct rte_flow_error *error);
+static int
+mlx5_action_handle_query_update(struct rte_eth_dev *dev,
+				struct rte_flow_action_handle *handle,
+				const void *update, void *query,
+				enum rte_flow_query_update_mode qu_mode,
+				struct rte_flow_error *error);
+static int
+mlx5_flow_async_action_handle_query_update
+	(struct rte_eth_dev *dev, uint32_t queue_id,
+	 const struct rte_flow_op_attr *op_attr,
+	 struct rte_flow_action_handle *action_handle,
+	 const void *update, void *query,
+	 enum rte_flow_query_update_mode qu_mode,
+	 void *user_data, struct rte_flow_error *error);
 
 static const struct rte_flow_ops mlx5_flow_ops = {
 	.validate = mlx5_flow_validate,
@@ -1090,6 +1104,7 @@ static const struct rte_flow_ops mlx5_flow_ops = {
 	.action_handle_destroy = mlx5_action_handle_destroy,
 	.action_handle_update = mlx5_action_handle_update,
 	.action_handle_query = mlx5_action_handle_query,
+	.action_handle_query_update = mlx5_action_handle_query_update,
 	.tunnel_decap_set = mlx5_flow_tunnel_decap_set,
 	.tunnel_match = mlx5_flow_tunnel_match,
 	.tunnel_action_decap_release = mlx5_flow_tunnel_action_release,
@@ -1112,6 +1127,8 @@ static const struct rte_flow_ops mlx5_flow_ops = {
 	.push = mlx5_flow_push,
 	.async_action_handle_create = mlx5_flow_async_action_handle_create,
 	.async_action_handle_update = mlx5_flow_async_action_handle_update,
+	.async_action_handle_query_update =
+		mlx5_flow_async_action_handle_query_update,
 	.async_action_handle_query = mlx5_flow_async_action_handle_query,
 	.async_action_handle_destroy = mlx5_flow_async_action_handle_destroy,
 };
@@ -9031,6 +9048,27 @@ mlx5_flow_async_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
 					    update, user_data, error);
 }
 
+static int
+mlx5_flow_async_action_handle_query_update
+	(struct rte_eth_dev *dev, uint32_t queue_id,
+	 const struct rte_flow_op_attr *op_attr,
+	 struct rte_flow_action_handle *action_handle,
+	 const void *update, void *query,
+	 enum rte_flow_query_update_mode qu_mode,
+	 void *user_data, struct rte_flow_error *error)
+{
+	const struct mlx5_flow_driver_ops *fops =
+		flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
+
+	if (!fops || !fops->async_action_query_update)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+					  "async query_update not supported");
+	return fops->async_action_query_update
+			(dev, queue_id, op_attr, action_handle,
+			 update, query, qu_mode, user_data, error);
+}
+
 /**
  * Query shared action.
 *
@@ -10163,6 +10201,30 @@ mlx5_action_handle_query(struct rte_eth_dev *dev,
 	return flow_drv_action_query(dev, handle, data, fops, error);
 }
 
+static int
+mlx5_action_handle_query_update(struct rte_eth_dev *dev,
+				struct rte_flow_action_handle *handle,
+				const void *update, void *query,
+				enum rte_flow_query_update_mode qu_mode,
+				struct rte_flow_error *error)
+{
+	struct rte_flow_attr attr = { .transfer = 0 };
+	enum mlx5_flow_drv_type drv_type = flow_get_drv_type(dev, &attr);
+	const struct mlx5_flow_driver_ops *fops;
+
+	if (drv_type == MLX5_FLOW_TYPE_MIN || drv_type == MLX5_FLOW_TYPE_MAX)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ACTION,
+					  NULL, "invalid driver type");
+	fops = flow_get_drv_ops(drv_type);
+	if (!fops || !fops->action_query_update)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ACTION,
+					  NULL, "no query_update handler");
+	return fops->action_query_update(dev, handle, update,
+					 query, qu_mode, error);
+}
+
 /**
  * Destroy all indirect actions (shared RSS).
  *
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index e376dcae93..9235af960d 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -70,6 +70,7 @@ enum {
 	MLX5_INDIRECT_ACTION_TYPE_COUNT,
 	MLX5_INDIRECT_ACTION_TYPE_CT,
 	MLX5_INDIRECT_ACTION_TYPE_METER_MARK,
+	MLX5_INDIRECT_ACTION_TYPE_QUOTA,
 };
 
 /* Now, the maximal ports will be supported is 16, action number is 32M. */
@@ -218,6 +219,8 @@ enum mlx5_feature_name {
 /* Meter color item */
 #define MLX5_FLOW_ITEM_METER_COLOR (UINT64_C(1) << 44)
 
+#define MLX5_FLOW_ITEM_QUOTA (UINT64_C(1) << 45)
+
 /* Outer Masks. */
 #define MLX5_FLOW_LAYER_OUTER_L3 \
@@ -303,6 +306,7 @@ enum mlx5_feature_name {
 #define MLX5_FLOW_ACTION_SEND_TO_KERNEL (1ull << 42)
 #define MLX5_FLOW_ACTION_INDIRECT_COUNT (1ull << 43)
 #define MLX5_FLOW_ACTION_INDIRECT_AGE (1ull << 44)
+#define MLX5_FLOW_ACTION_QUOTA (1ull << 46)
 
 #define MLX5_FLOW_DROP_INCLUSIVE_ACTIONS \
 	(MLX5_FLOW_ACTION_COUNT | MLX5_FLOW_ACTION_SAMPLE | MLX5_FLOW_ACTION_AGE)
@@ -1699,6 +1703,12 @@ typedef int (*mlx5_flow_action_query_t)
 			(struct rte_eth_dev *dev,
 			 const struct rte_flow_action_handle *action,
 			 void *data,
 			 struct rte_flow_error *error);
+typedef int (*mlx5_flow_action_query_update_t)
+			(struct rte_eth_dev *dev,
+			 struct rte_flow_action_handle *handle,
+			 const void *update, void *data,
+			 enum rte_flow_query_update_mode qu_mode,
+			 struct rte_flow_error *error);
 typedef int (*mlx5_flow_sync_domain_t)
 			(struct rte_eth_dev *dev,
 			 uint32_t domains,
@@ -1845,7 +1855,13 @@ typedef int (*mlx5_flow_async_action_handle_update_t)
 			 const void *update,
 			 void *user_data,
 			 struct rte_flow_error *error);
-
+typedef int (*mlx5_flow_async_action_handle_query_update_t)
+			(struct rte_eth_dev *dev, uint32_t queue_id,
+			 const struct rte_flow_op_attr *op_attr,
+			 struct rte_flow_action_handle *action_handle,
+			 const void *update, void *data,
+			 enum rte_flow_query_update_mode qu_mode,
+			 void *user_data, struct rte_flow_error *error);
 typedef int (*mlx5_flow_async_action_handle_query_t)
 			(struct rte_eth_dev *dev, uint32_t queue,
@@ -1896,6 +1912,7 @@ struct mlx5_flow_driver_ops {
 	mlx5_flow_action_destroy_t action_destroy;
 	mlx5_flow_action_update_t action_update;
 	mlx5_flow_action_query_t action_query;
+	mlx5_flow_action_query_update_t action_query_update;
 	mlx5_flow_sync_domain_t sync_domain;
 	mlx5_flow_discover_priorities_t discover_priorities;
 	mlx5_flow_item_create_t item_create;
@@ -1917,6 +1934,7 @@ struct mlx5_flow_driver_ops {
 	mlx5_flow_push_t push;
 	mlx5_flow_async_action_handle_create_t async_action_create;
 	mlx5_flow_async_action_handle_update_t async_action_update;
+	mlx5_flow_async_action_handle_query_update_t async_action_query_update;
 	mlx5_flow_async_action_handle_query_t async_action_query;
 	mlx5_flow_async_action_handle_destroy_t async_action_destroy;
 };
diff --git a/drivers/net/mlx5/mlx5_flow_aso.c b/drivers/net/mlx5/mlx5_flow_aso.c
index 0eb91c570f..3c08da0614 100644
--- a/drivers/net/mlx5/mlx5_flow_aso.c
+++ b/drivers/net/mlx5/mlx5_flow_aso.c
@@ -74,7 +74,7 @@ mlx5_aso_reg_mr(struct mlx5_common_device *cdev, size_t length,
  * @param[in] sq
  *   ASO SQ to destroy.
  */
-static void
+void
 mlx5_aso_destroy_sq(struct mlx5_aso_sq *sq)
 {
 	mlx5_devx_sq_destroy(&sq->sq_obj);
@@ -148,7 +148,7 @@ mlx5_aso_age_init_sq(struct mlx5_aso_sq *sq)
  * @param[in] sq
  *   ASO SQ to initialize.
  */
-static void
+void
 mlx5_aso_mtr_init_sq(struct mlx5_aso_sq *sq)
 {
 	volatile struct mlx5_aso_wqe *restrict wqe;
@@ -219,7 +219,7 @@ mlx5_aso_ct_init_sq(struct mlx5_aso_sq *sq)
  * @return
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
-static int
+int
 mlx5_aso_sq_create(struct mlx5_common_device *cdev, struct mlx5_aso_sq *sq,
 		   void *uar, uint16_t log_desc_n)
 {
@@ -504,7 +504,7 @@ mlx5_aso_dump_err_objs(volatile uint32_t *cqe, volatile uint32_t *wqe)
  * @param[in] sq
  *   ASO SQ to use.
  */
-static void
+void
 mlx5_aso_cqe_err_handle(struct mlx5_aso_sq *sq)
 {
 	struct mlx5_aso_cq *cq = &sq->cq;
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 04d0612ee1..5815310ba6 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -68,6 +68,9 @@ flow_hw_set_vlan_vid_construct(struct rte_eth_dev *dev,
 			       struct mlx5_action_construct_data *act_data,
 			       const struct mlx5_hw_actions *hw_acts,
 			       const struct rte_flow_action *action);
+static void
+flow_hw_construct_quota(struct mlx5_priv *priv,
+			struct mlx5dr_rule_action *rule_act, uint32_t qid);
 static __rte_always_inline uint32_t flow_hw_tx_tag_regc_mask(struct rte_eth_dev *dev);
 static __rte_always_inline uint32_t flow_hw_tx_tag_regc_value(struct rte_eth_dev *dev);
@@ -791,6 +794,9 @@ flow_hw_shared_action_translate(struct rte_eth_dev *dev,
 						action_src, action_dst, idx))
 			return -1;
 		break;
+	case MLX5_INDIRECT_ACTION_TYPE_QUOTA:
+		flow_hw_construct_quota(priv, &acts->rule_acts[action_dst], idx);
+		break;
 	default:
 		DRV_LOG(WARNING, "Unsupported shared action type:%d", type);
 		break;
@@ -1834,6 +1840,16 @@ flow_hw_shared_action_get(struct rte_eth_dev *dev,
 	return -1;
 }
 
+static void
+flow_hw_construct_quota(struct mlx5_priv *priv,
+			struct mlx5dr_rule_action *rule_act, uint32_t qid)
+{
+	rule_act->action = priv->quota_ctx.dr_action;
+	rule_act->aso_meter.offset = qid - 1;
+	rule_act->aso_meter.init_color =
+		MLX5DR_ACTION_ASO_METER_COLOR_GREEN;
+}
+
 /**
  * Construct shared indirect action.
 *
@@ -1957,6 +1973,9 @@ flow_hw_shared_action_construct(struct rte_eth_dev *dev, uint32_t queue,
 				(enum mlx5dr_action_aso_meter_color)
 				rte_col_2_mlx5_col(aso_mtr->init_color);
 		break;
+	case MLX5_INDIRECT_ACTION_TYPE_QUOTA:
+		flow_hw_construct_quota(priv, rule_act, idx);
+		break;
 	default:
 		DRV_LOG(WARNING, "Unsupported shared action type:%d", type);
 		break;
@@ -2263,6 +2282,11 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			rule_acts[act_data->action_dst].action =
 					priv->hw_vport[port_action->port_id];
 			break;
+		case RTE_FLOW_ACTION_TYPE_QUOTA:
+			flow_hw_construct_quota(priv,
+						rule_acts + act_data->action_dst,
+						act_data->shared_meter.id);
+			break;
 		case RTE_FLOW_ACTION_TYPE_METER:
 			meter = action->conf;
 			mtr_id = meter->mtr_id;
@@ -2702,11 +2726,18 @@ __flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev,
 	if (ret_comp < n_res && priv->hws_ctpool)
 		ret_comp += mlx5_aso_pull_completion(&priv->ct_mng->aso_sqs[queue],
 				&res[ret_comp], n_res - ret_comp);
+	if (ret_comp < n_res && priv->quota_ctx.sq)
+		ret_comp += mlx5_aso_pull_completion(&priv->quota_ctx.sq[queue],
+						     &res[ret_comp],
+						     n_res - ret_comp);
 	for (i = 0; i < ret_comp; i++) {
 		job = (struct mlx5_hw_q_job *)res[i].user_data;
 		/* Restore user data. */
 		res[i].user_data = job->user_data;
-		if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY) {
+		if (MLX5_INDIRECT_ACTION_TYPE_GET(job->action) ==
+		    MLX5_INDIRECT_ACTION_TYPE_QUOTA) {
+			mlx5_quota_async_completion(dev, queue, job);
+		} else if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY) {
 			type = MLX5_INDIRECT_ACTION_TYPE_GET(job->action);
 			if (type == MLX5_INDIRECT_ACTION_TYPE_METER_MARK) {
 				idx = MLX5_INDIRECT_ACTION_IDX_GET(job->action);
@@ -3687,6 +3718,10 @@ flow_hw_validate_action_indirect(struct rte_eth_dev *dev,
 			return ret;
 		*action_flags |= MLX5_FLOW_ACTION_INDIRECT_AGE;
 		break;
+	case RTE_FLOW_ACTION_TYPE_QUOTA:
+		/* TODO: add proper quota verification */
+		*action_flags |= MLX5_FLOW_ACTION_QUOTA;
+		break;
 	default:
 		DRV_LOG(WARNING, "Unsupported shared action type: %d", type);
 		return rte_flow_error_set(error, ENOTSUP,
@@ -3724,19 +3759,17 @@ flow_hw_validate_action_raw_encap(struct rte_eth_dev *dev __rte_unused,
 }
 
 static inline uint16_t
-flow_hw_template_expand_modify_field(const struct rte_flow_action actions[],
-				     const struct rte_flow_action masks[],
-				     const struct rte_flow_action *mf_action,
-				     const struct rte_flow_action *mf_mask,
-				     struct rte_flow_action *new_actions,
-				     struct rte_flow_action *new_masks,
-				     uint64_t flags, uint32_t act_num)
+flow_hw_template_expand_modify_field(struct rte_flow_action actions[],
+				     struct rte_flow_action masks[],
+				     const struct rte_flow_action *mf_actions,
+				     const struct rte_flow_action *mf_masks,
+				     uint64_t flags, uint32_t act_num,
+				     uint32_t mf_num)
 {
 	uint32_t i, tail;
 
 	MLX5_ASSERT(actions && masks);
-	MLX5_ASSERT(new_actions && new_masks);
-	MLX5_ASSERT(mf_action && mf_mask);
+	MLX5_ASSERT(mf_num > 0);
 	if (flags & MLX5_FLOW_ACTION_MODIFY_FIELD) {
 		/*
 		 * Application action template already has Modify Field.
@@ -3787,12 +3820,10 @@ flow_hw_template_expand_modify_field(const struct rte_flow_action actions[],
 		i = 0;
 insert:
 	tail = act_num - i; /* num action to move */
-	memcpy(new_actions, actions, sizeof(actions[0]) * i);
-	new_actions[i] = *mf_action;
-	memcpy(new_actions + i + 1, actions + i, sizeof(actions[0]) * tail);
-	memcpy(new_masks, masks, sizeof(masks[0]) * i);
-	new_masks[i] = *mf_mask;
-	memcpy(new_masks + i + 1, masks + i, sizeof(masks[0]) * tail);
+	memmove(actions + i + mf_num, actions + i, sizeof(actions[0]) * tail);
+	memcpy(actions + i, mf_actions, sizeof(actions[0]) * mf_num);
+	memmove(masks + i + mf_num, masks + i, sizeof(masks[0]) * tail);
+	memcpy(masks + i, mf_masks, sizeof(masks[0]) * mf_num);
 	return i;
 }
@@ -4102,6 +4133,7 @@ flow_hw_dr_actions_template_handle_shared(const struct rte_flow_action *mask,
 		action_types[*curr_off] = MLX5DR_ACTION_TYP_ASO_CT;
 		*curr_off = *curr_off + 1;
 		break;
+	case RTE_FLOW_ACTION_TYPE_QUOTA:
 	case RTE_FLOW_ACTION_TYPE_METER_MARK:
 		at->actions_off[action_src] = *curr_off;
 		action_types[*curr_off] = MLX5DR_ACTION_TYP_ASO_METER;
@@ -4331,6 +4363,96 @@ flow_hw_set_vlan_vid_construct(struct rte_eth_dev *dev,
 				     &modify_action);
 }
 
+static __rte_always_inline void
+flow_hw_actions_template_replace_container(const
+					   struct rte_flow_action *actions,
+					   const
+					   struct rte_flow_action *masks,
+					   struct rte_flow_action *new_actions,
+					   struct rte_flow_action *new_masks,
+					   struct rte_flow_action **ra,
+					   struct rte_flow_action **rm,
+					   uint32_t act_num)
+{
+	memcpy(new_actions, actions, sizeof(actions[0]) * act_num);
+	memcpy(new_masks, masks, sizeof(masks[0]) * act_num);
+	*ra = (void *)(uintptr_t)new_actions;
+	*rm = (void *)(uintptr_t)new_masks;
+}
+
+#define RX_META_COPY_ACTION ((const struct rte_flow_action) { \
+	.type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, \
+	.conf = &(struct rte_flow_action_modify_field){ \
+		.operation = RTE_FLOW_MODIFY_SET, \
+		.dst = { \
+			.field = (enum rte_flow_field_id) \
+				 MLX5_RTE_FLOW_FIELD_META_REG, \
+			.level = REG_B, \
+		}, \
+		.src = { \
+			.field = (enum rte_flow_field_id) \
+				 MLX5_RTE_FLOW_FIELD_META_REG, \
+			.level = REG_C_1, \
+		}, \
+		.width = 32, \
+	} \
+})
+
+#define RX_META_COPY_MASK ((const struct rte_flow_action) { \
+	.type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, \
+	.conf = &(struct rte_flow_action_modify_field){ \
+		.operation = RTE_FLOW_MODIFY_SET, \
+		.dst = { \
+			.field = (enum rte_flow_field_id) \
+				 MLX5_RTE_FLOW_FIELD_META_REG, \
+			.level = UINT32_MAX, \
+			.offset = UINT32_MAX, \
+		}, \
+		.src = { \
+			.field = (enum rte_flow_field_id) \
+				 MLX5_RTE_FLOW_FIELD_META_REG, \
+			.level = UINT32_MAX, \
+			.offset = UINT32_MAX, \
+		}, \
+		.width = UINT32_MAX, \
+	} \
+})
+
+#define QUOTA_COLOR_INC_ACTION ((const struct rte_flow_action) { \
+	.type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, \
+	.conf = &(struct rte_flow_action_modify_field) { \
+		.operation = RTE_FLOW_MODIFY_ADD, \
+		.dst = { \
+			.field = RTE_FLOW_FIELD_METER_COLOR, \
+			.level = 0, .offset = 0 \
+		}, \
+		.src = { \
+			.field = RTE_FLOW_FIELD_VALUE, \
+			.level = 1, \
+			.offset = 0, \
+		}, \
+		.width = 2 \
+	} \
+})
+
+#define QUOTA_COLOR_INC_MASK ((const struct rte_flow_action) { \
+	.type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, \
+	.conf = &(struct rte_flow_action_modify_field) { \
+		.operation = RTE_FLOW_MODIFY_ADD, \
+		.dst = { \
+			.field = RTE_FLOW_FIELD_METER_COLOR, \
+			.level = UINT32_MAX, \
+			.offset = UINT32_MAX, \
+		}, \
+		.src = { \
+			.field = RTE_FLOW_FIELD_VALUE, \
+			.level = 3, \
+			.offset = 0 \
+		}, \
+		.width = UINT32_MAX \
+	} \
+})
+
 /**
  * Create flow action template.
 *
@@ -4369,40 +4491,9 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
 	int set_vlan_vid_ix = -1;
 	struct rte_flow_action_modify_field set_vlan_vid_spec = {0, };
 	struct rte_flow_action_modify_field set_vlan_vid_mask = {0, };
-	const struct rte_flow_action_modify_field rx_mreg = {
-		.operation = RTE_FLOW_MODIFY_SET,
-		.dst = {
-			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = REG_B,
-		},
-		.src = {
-			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = REG_C_1,
-		},
-		.width = 32,
-	};
-	const struct rte_flow_action_modify_field rx_mreg_mask = {
-		.operation = RTE_FLOW_MODIFY_SET,
-		.dst = {
-			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
-			.offset = UINT32_MAX,
-		},
-		.src = {
-			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
-			.level = UINT32_MAX,
-			.offset = UINT32_MAX,
-		},
-		.width = UINT32_MAX,
-	};
-	const struct rte_flow_action rx_cpy = {
-		.type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
-		.conf = &rx_mreg,
-	};
-	const struct rte_flow_action rx_cpy_mask = {
-		.type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
-		.conf = &rx_mreg_mask,
-	};
+	struct rte_flow_action mf_actions[MLX5_HW_MAX_ACTS];
+	struct rte_flow_action mf_masks[MLX5_HW_MAX_ACTS];
+	uint32_t expand_mf_num = 0;
 
 	if (mlx5_flow_hw_actions_validate(dev, attr, actions, masks,
 					  &action_flags, error))
@@ -4432,44 +4523,57 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
 				   RTE_FLOW_ERROR_TYPE_ACTION, NULL, "Too many actions");
 		return NULL;
 	}
+	if (set_vlan_vid_ix != -1) {
+		/* If temporary action buffer was not used, copy template actions to it */
+		if (ra == actions)
+			flow_hw_actions_template_replace_container(actions,
+								   masks,
+								   tmp_action,
+								   tmp_mask,
+								   &ra, &rm,
+								   act_num);
+		flow_hw_set_vlan_vid(dev, ra, rm,
+				     &set_vlan_vid_spec, &set_vlan_vid_mask,
+				     set_vlan_vid_ix);
+		action_flags |= MLX5_FLOW_ACTION_MODIFY_FIELD;
+	}
+	if (action_flags & MLX5_FLOW_ACTION_QUOTA) {
+		mf_actions[expand_mf_num] = QUOTA_COLOR_INC_ACTION;
+		mf_masks[expand_mf_num] = QUOTA_COLOR_INC_MASK;
+		expand_mf_num++;
+	}
 	if (priv->sh->config.dv_xmeta_en == MLX5_XMETA_MODE_META32_HWS &&
 	    priv->sh->config.dv_esw_en &&
 	    (action_flags & (MLX5_FLOW_ACTION_QUEUE | MLX5_FLOW_ACTION_RSS))) {
 		/* Insert META copy */
-		if (act_num + 1 > MLX5_HW_MAX_ACTS) {
+		mf_actions[expand_mf_num] = RX_META_COPY_ACTION;
+		mf_masks[expand_mf_num] = RX_META_COPY_MASK;
+		expand_mf_num++;
+	}
+	if (expand_mf_num) {
+		if (act_num + expand_mf_num > MLX5_HW_MAX_ACTS) {
 			rte_flow_error_set(error, E2BIG,
 					   RTE_FLOW_ERROR_TYPE_ACTION,
 					   NULL, "cannot expand: too many actions");
 			return NULL;
 		}
+		if (ra == actions)
+			flow_hw_actions_template_replace_container(actions,
+								   masks,
+								   tmp_action,
+								   tmp_mask,
+								   &ra, &rm,
+								   act_num);
 		/* Application should make sure only one Q/RSS exist in one rule. */
-		pos = flow_hw_template_expand_modify_field(actions, masks,
-							   &rx_cpy,
-							   &rx_cpy_mask,
-							   tmp_action, tmp_mask,
+		pos = flow_hw_template_expand_modify_field(ra, rm,
+							   mf_actions,
+							   mf_masks,
							   action_flags,
-							   act_num);
-		ra = tmp_action;
-		rm = tmp_mask;
-		act_num++;
+							   act_num,
+							   expand_mf_num);
+		act_num += expand_mf_num;
 		action_flags |= MLX5_FLOW_ACTION_MODIFY_FIELD;
 	}
-	if (set_vlan_vid_ix != -1) {
-		/* If temporary action buffer was not used, copy template actions to it */
-		if (ra == actions && rm == masks) {
-			for (i = 0; i < act_num; ++i) {
-				tmp_action[i] = actions[i];
-				tmp_mask[i] = masks[i];
-				if (actions[i].type == RTE_FLOW_ACTION_TYPE_END)
-					break;
-			}
-			ra = tmp_action;
-			rm = tmp_mask;
-		}
-		flow_hw_set_vlan_vid(dev, ra, rm,
-				     &set_vlan_vid_spec, &set_vlan_vid_mask,
-				     set_vlan_vid_ix);
-	}
 	act_len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, NULL, 0, ra, error);
 	if (act_len <= 0)
 		return NULL;
@@ -4732,6 +4836,7 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
 		case RTE_FLOW_ITEM_TYPE_ICMP:
 		case RTE_FLOW_ITEM_TYPE_ICMP6:
 		case RTE_FLOW_ITEM_TYPE_CONNTRACK:
+		case RTE_FLOW_ITEM_TYPE_QUOTA:
 			break;
 		case RTE_FLOW_ITEM_TYPE_INTEGRITY:
 			/*
@@ -6932,6 +7037,12 @@ flow_hw_configure(struct rte_eth_dev *dev,
 			 "Failed to set up Rx control flow templates");
 		goto err;
 	}
+	/* Initialize quotas */
+	if (port_attr->nb_quotas) {
+		ret = mlx5_flow_quota_init(dev, port_attr->nb_quotas);
+		if (ret)
+			goto err;
+	}
 	/* Initialize meter library*/
 	if (port_attr->nb_meters)
 		if (mlx5_flow_meter_init(dev, port_attr->nb_meters, 1, 1, nb_q_updated))
@@ -7031,6 +7142,7 @@ flow_hw_configure(struct rte_eth_dev *dev,
 		mlx5_hws_cnt_pool_destroy(priv->sh, priv->hws_cpool);
 		priv->hws_cpool = NULL;
 	}
+	mlx5_flow_quota_destroy(dev);
 	flow_hw_free_vport_actions(priv);
 	for (i = 0; i < MLX5_HW_ACTION_FLAG_MAX; i++) {
 		if (priv->hw_drop[i])
@@ -7124,6 +7236,7 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
 		flow_hw_ct_mng_destroy(dev, priv->ct_mng);
 		priv->ct_mng = NULL;
 	}
+	mlx5_flow_quota_destroy(dev);
 	for (i = 0; i < priv->nb_queue; i++) {
 		rte_ring_free(priv->hw_q[i].indir_iq);
 		rte_ring_free(priv->hw_q[i].indir_cq);
@@ -7524,6 +7637,8 @@ flow_hw_action_handle_validate(struct rte_eth_dev *dev, uint32_t queue,
 		return flow_hw_validate_action_meter_mark(dev, action, error);
 	case RTE_FLOW_ACTION_TYPE_RSS:
 		return flow_dv_action_validate(dev, conf, action, error);
+	case RTE_FLOW_ACTION_TYPE_QUOTA:
+		return 0;
 	default:
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ACTION, NULL,
@@ -7695,6 +7810,11 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
 	case RTE_FLOW_ACTION_TYPE_RSS:
 		handle = flow_dv_action_create(dev, conf, action, error);
 		break;
+	case RTE_FLOW_ACTION_TYPE_QUOTA:
+		aso = true;
+		handle = mlx5_quota_alloc(dev, queue, action->conf,
+					  job, push, error);
+		break;
 	default:
 		rte_flow_error_set(error, ENOTSUP,
 				   RTE_FLOW_ERROR_TYPE_ACTION, NULL,
 				   "action type not supported");
@@ -7815,6 +7935,11 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
 	case MLX5_INDIRECT_ACTION_TYPE_RSS:
 		ret = flow_dv_action_update(dev, handle, update, error);
 		break;
+	case MLX5_INDIRECT_ACTION_TYPE_QUOTA:
+		aso = true;
+		ret = mlx5_quota_query_update(dev, queue, handle, update, NULL,
+					      job, push, error);
+		break;
 	default:
 		ret = -ENOTSUP;
 		rte_flow_error_set(error, ENOTSUP,
@@ -7927,6 +8052,8 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
 	case MLX5_INDIRECT_ACTION_TYPE_RSS:
 		ret = flow_dv_action_destroy(dev, handle, error);
 		break;
+	case MLX5_INDIRECT_ACTION_TYPE_QUOTA:
+		break;
 	default:
 		ret = -ENOTSUP;
 		rte_flow_error_set(error, ENOTSUP,
@@ -8196,6 +8323,11 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
 		ret = flow_hw_conntrack_query(dev, queue, act_idx, data,
 					      job, push, error);
 		break;
+	case MLX5_INDIRECT_ACTION_TYPE_QUOTA:
+		aso = true;
+		ret = mlx5_quota_query(dev, queue, handle, data,
+				       job, push, error);
+		break;
 	default:
 		ret = -ENOTSUP;
 		rte_flow_error_set(error, ENOTSUP,
@@ -8205,7 +8337,51 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
 	}
 	if (job)
 		flow_hw_action_finalize(dev, queue, job, push, aso, ret == 0);
-	return 0;
+	return ret;
+}
+
+static int
+flow_hw_async_action_handle_query_update
+	(struct rte_eth_dev *dev, uint32_t queue,
+	 const struct rte_flow_op_attr *attr,
+	 struct rte_flow_action_handle *handle,
+	 const void *update, void *query,
+	 enum rte_flow_query_update_mode qu_mode,
+	 void *user_data, struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	bool push = flow_hw_action_push(attr);
+	bool aso = false;
+	struct mlx5_hw_q_job *job = NULL;
+	int ret = 0;
+
+	if (attr) {
+		job = flow_hw_action_job_init(priv, queue, handle, user_data,
+					      query,
+					      MLX5_HW_Q_JOB_TYPE_UPDATE_QUERY,
+					      error);
+		if (!job)
+			return -rte_errno;
+	}
+	switch (MLX5_INDIRECT_ACTION_TYPE_GET(handle)) {
+	case MLX5_INDIRECT_ACTION_TYPE_QUOTA:
+		if (qu_mode != RTE_FLOW_QU_QUERY_FIRST) {
+			ret = rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+				 NULL, "quota action must query before update");
+			break;
+		}
+		aso =
true; + ret = mlx5_quota_query_update(dev, queue, handle, + update, query, job, push, error); + break; + default: + ret = rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL, "update and query not supported"); + } + if (job) + flow_hw_action_finalize(dev, queue, job, push, aso, ret == 0); + return ret; } static int @@ -8217,6 +8393,19 @@ flow_hw_action_query(struct rte_eth_dev *dev, handle, data, NULL, error); } +static int +flow_hw_action_query_update(struct rte_eth_dev *dev, + struct rte_flow_action_handle *handle, + const void *update, void *query, + enum rte_flow_query_update_mode qu_mode, + struct rte_flow_error *error) +{ + return flow_hw_async_action_handle_query_update(dev, MLX5_HW_INV_QUEUE, + NULL, handle, update, + query, qu_mode, NULL, + error); +} + /** * Get aged-out flows of a given port on the given HWS flow queue. * @@ -8329,12 +8518,14 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = { .async_action_create = flow_hw_action_handle_create, .async_action_destroy = flow_hw_action_handle_destroy, .async_action_update = flow_hw_action_handle_update, + .async_action_query_update = flow_hw_async_action_handle_query_update, .async_action_query = flow_hw_action_handle_query, .action_validate = flow_hw_action_validate, .action_create = flow_hw_action_create, .action_destroy = flow_hw_action_destroy, .action_update = flow_hw_action_update, .action_query = flow_hw_action_query, + .action_query_update = flow_hw_action_query_update, .query = flow_hw_query, .get_aged_flows = flow_hw_get_aged_flows, .get_q_aged_flows = flow_hw_get_q_aged_flows, diff --git a/drivers/net/mlx5/mlx5_flow_quota.c b/drivers/net/mlx5/mlx5_flow_quota.c new file mode 100644 index 0000000000..0639620848 --- /dev/null +++ b/drivers/net/mlx5/mlx5_flow_quota.c @@ -0,0 +1,726 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2022 Nvidia Inc. All rights reserved.
+ */ +#include +#include + +#include + +#include "mlx5.h" +#include "mlx5_malloc.h" +#include "mlx5_flow.h" + +typedef void (*quota_wqe_cmd_t)(volatile struct mlx5_aso_wqe *restrict, + struct mlx5_quota_ctx *, uint32_t, uint32_t, + void *); + +#define MLX5_ASO_MTR1_INIT_MASK 0xffffffffULL +#define MLX5_ASO_MTR0_INIT_MASK ((MLX5_ASO_MTR1_INIT_MASK) << 32) + +static __rte_always_inline bool +is_aso_mtr1_obj(uint32_t qix) +{ + return (qix & 1) != 0; +} + +static __rte_always_inline bool +is_quota_sync_queue(const struct mlx5_priv *priv, uint32_t queue) +{ + return queue >= priv->nb_queue - 1; +} + +static __rte_always_inline uint32_t +quota_sync_queue(const struct mlx5_priv *priv) +{ + return priv->nb_queue - 1; +} + +static __rte_always_inline uint32_t +mlx5_quota_wqe_read_offset(uint32_t qix, uint32_t sq_index) +{ + return 2 * sq_index + (qix & 1); +} + +static int32_t +mlx5_quota_fetch_tokens(const struct mlx5_aso_mtr_dseg *rd_buf) +{ + int c_tok = (int)rte_be_to_cpu_32(rd_buf->c_tokens); + int e_tok = (int)rte_be_to_cpu_32(rd_buf->e_tokens); + int result; + + DRV_LOG(DEBUG, "c_tokens %d e_tokens %d\n", + rte_be_to_cpu_32(rd_buf->c_tokens), + rte_be_to_cpu_32(rd_buf->e_tokens)); + /* Query after SET ignores negative E tokens */ + if (c_tok >= 0 && e_tok < 0) + result = c_tok; + /** + * If number of tokens in Meter bucket is zero or above, + * Meter hardware will use that bucket and can set number of tokens to + * negative value. + * Quota can discard negative C tokens in query report. + * That is a known hardware limitation. 
+ * Use case example: + * + * C E Result + * 250 250 500 + * 50 250 300 + * -150 250 100 + * -150 50 50 * + * -150 -150 -300 + * + */ + else if (c_tok < 0 && e_tok >= 0 && (c_tok + e_tok) < 0) + result = e_tok; + else + result = c_tok + e_tok; + + return result; +} + +static void +mlx5_quota_query_update_async_cmpl(struct mlx5_hw_q_job *job) +{ + struct rte_flow_query_quota *query = job->query.user; + + query->quota = mlx5_quota_fetch_tokens(job->query.hw); +} + +void +mlx5_quota_async_completion(struct rte_eth_dev *dev, uint32_t queue, + struct mlx5_hw_q_job *job) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_quota_ctx *qctx = &priv->quota_ctx; + uint32_t qix = MLX5_INDIRECT_ACTION_IDX_GET(job->action); + struct mlx5_quota *qobj = mlx5_ipool_get(qctx->quota_ipool, qix); + + RTE_SET_USED(queue); + qobj->state = MLX5_QUOTA_STATE_READY; + switch (job->type) { + case MLX5_HW_Q_JOB_TYPE_CREATE: + break; + case MLX5_HW_Q_JOB_TYPE_QUERY: + case MLX5_HW_Q_JOB_TYPE_UPDATE_QUERY: + mlx5_quota_query_update_async_cmpl(job); + break; + default: + break; + } +} + +static __rte_always_inline void +mlx5_quota_wqe_set_aso_read(volatile struct mlx5_aso_wqe *restrict wqe, + struct mlx5_quota_ctx *qctx, uint32_t queue) +{ + struct mlx5_aso_sq *sq = qctx->sq + queue; + uint32_t sq_mask = (1 << sq->log_desc_n) - 1; + uint32_t sq_head = sq->head & sq_mask; + uintptr_t rd_addr = (uintptr_t)(qctx->read_buf[queue] + 2 * sq_head); + + wqe->aso_cseg.lkey = rte_cpu_to_be_32(qctx->mr.lkey); + wqe->aso_cseg.va_h = rte_cpu_to_be_32((uint32_t)(rd_addr >> 32)); + wqe->aso_cseg.va_l_r = rte_cpu_to_be_32(((uint32_t)rd_addr) | + MLX5_ASO_CSEG_READ_ENABLE); +} + +#define MLX5_ASO_MTR1_ADD_MASK 0x00000F00ULL +#define MLX5_ASO_MTR1_SET_MASK 0x000F0F00ULL +#define MLX5_ASO_MTR0_ADD_MASK ((MLX5_ASO_MTR1_ADD_MASK) << 32) +#define MLX5_ASO_MTR0_SET_MASK ((MLX5_ASO_MTR1_SET_MASK) << 32) + +static __rte_always_inline void +mlx5_quota_wqe_set_mtr_tokens(volatile struct mlx5_aso_wqe 
*restrict wqe, + uint32_t qix, void *arg) +{ + volatile struct mlx5_aso_mtr_dseg *mtr_dseg; + const struct rte_flow_update_quota *conf = arg; + bool set_op = (conf->op == RTE_FLOW_UPDATE_QUOTA_SET); + + if (is_aso_mtr1_obj(qix)) { + wqe->aso_cseg.data_mask = set_op ? + RTE_BE64(MLX5_ASO_MTR1_SET_MASK) : + RTE_BE64(MLX5_ASO_MTR1_ADD_MASK); + mtr_dseg = wqe->aso_dseg.mtrs + 1; + } else { + wqe->aso_cseg.data_mask = set_op ? + RTE_BE64(MLX5_ASO_MTR0_SET_MASK) : + RTE_BE64(MLX5_ASO_MTR0_ADD_MASK); + mtr_dseg = wqe->aso_dseg.mtrs; + } + if (set_op) { + /* prevent using E tokens when C tokens exhausted */ + mtr_dseg->e_tokens = -1; + mtr_dseg->c_tokens = rte_cpu_to_be_32(conf->quota); + } else { + mtr_dseg->e_tokens = rte_cpu_to_be_32(conf->quota); + } +} + +static __rte_always_inline void +mlx5_quota_wqe_query(volatile struct mlx5_aso_wqe *restrict wqe, + struct mlx5_quota_ctx *qctx, __rte_unused uint32_t qix, + uint32_t queue, __rte_unused void *arg) +{ + mlx5_quota_wqe_set_aso_read(wqe, qctx, queue); + wqe->aso_cseg.data_mask = 0ull; /* clear MTR ASO data modification */ +} + +static __rte_always_inline void +mlx5_quota_wqe_update(volatile struct mlx5_aso_wqe *restrict wqe, + __rte_unused struct mlx5_quota_ctx *qctx, uint32_t qix, + __rte_unused uint32_t queue, void *arg) +{ + mlx5_quota_wqe_set_mtr_tokens(wqe, qix, arg); + wqe->aso_cseg.va_l_r = 0; /* clear READ flag */ +} + +static __rte_always_inline void +mlx5_quota_wqe_query_update(volatile struct mlx5_aso_wqe *restrict wqe, + struct mlx5_quota_ctx *qctx, uint32_t qix, + uint32_t queue, void *arg) +{ + mlx5_quota_wqe_set_aso_read(wqe, qctx, queue); + mlx5_quota_wqe_set_mtr_tokens(wqe, qix, arg); +} + +static __rte_always_inline void +mlx5_quota_set_init_wqe(volatile struct mlx5_aso_wqe *restrict wqe, + __rte_unused struct mlx5_quota_ctx *qctx, uint32_t qix, + __rte_unused uint32_t queue, void *arg) +{ + volatile struct mlx5_aso_mtr_dseg *mtr_dseg; + const struct rte_flow_action_quota *conf = arg; + const struct 
mlx5_quota *qobj = mlx5_ipool_get(qctx->quota_ipool, qix + 1); + + if (is_aso_mtr1_obj(qix)) { + wqe->aso_cseg.data_mask = + rte_cpu_to_be_64(MLX5_ASO_MTR1_INIT_MASK); + mtr_dseg = wqe->aso_dseg.mtrs + 1; + } else { + wqe->aso_cseg.data_mask = + rte_cpu_to_be_64(MLX5_ASO_MTR0_INIT_MASK); + mtr_dseg = wqe->aso_dseg.mtrs; + } + mtr_dseg->e_tokens = -1; + mtr_dseg->c_tokens = rte_cpu_to_be_32(conf->quota); + mtr_dseg->v_bo_sc_bbog_mm |= rte_cpu_to_be_32 + (qobj->mode << ASO_DSEG_MTR_MODE); +} + +static __rte_always_inline void +mlx5_quota_cmd_completed_status(struct mlx5_aso_sq *sq, uint16_t n) +{ + uint16_t i, mask = (1 << sq->log_desc_n) - 1; + + for (i = 0; i < n; i++) { + uint8_t state = MLX5_QUOTA_STATE_WAIT; + struct mlx5_quota *quota_obj = + sq->elts[(sq->tail + i) & mask].quota_obj; + + __atomic_compare_exchange_n(&quota_obj->state, &state, + MLX5_QUOTA_STATE_READY, false, + __ATOMIC_RELAXED, __ATOMIC_RELAXED); + } +} + +static void +mlx5_quota_cmd_completion_handle(struct mlx5_aso_sq *sq) +{ + struct mlx5_aso_cq *cq = &sq->cq; + volatile struct mlx5_cqe *restrict cqe; + const unsigned int cq_size = 1 << cq->log_desc_n; + const unsigned int mask = cq_size - 1; + uint32_t idx; + uint32_t next_idx = cq->cq_ci & mask; + uint16_t max; + uint16_t n = 0; + int ret; + + MLX5_ASSERT(rte_spinlock_is_locked(&sq->sqsl)); + max = (uint16_t)(sq->head - sq->tail); + if (unlikely(!max)) + return; + do { + idx = next_idx; + next_idx = (cq->cq_ci + 1) & mask; + rte_prefetch0(&cq->cq_obj.cqes[next_idx]); + cqe = &cq->cq_obj.cqes[idx]; + ret = check_cqe(cqe, cq_size, cq->cq_ci); + /* + * Be sure owner read is done before any other cookie field or + * opaque field.
+ */ + rte_io_rmb(); + if (ret != MLX5_CQE_STATUS_SW_OWN) { + if (likely(ret == MLX5_CQE_STATUS_HW_OWN)) + break; + mlx5_aso_cqe_err_handle(sq); + } else { + n++; + } + cq->cq_ci++; + } while (1); + if (likely(n)) { + mlx5_quota_cmd_completed_status(sq, n); + sq->tail += n; + rte_io_wmb(); + cq->cq_obj.db_rec[0] = rte_cpu_to_be_32(cq->cq_ci); + } +} + +static int +mlx5_quota_cmd_wait_cmpl(struct mlx5_aso_sq *sq, struct mlx5_quota *quota_obj) +{ + uint32_t poll_cqe_times = MLX5_MTR_POLL_WQE_CQE_TIMES; + + do { + rte_spinlock_lock(&sq->sqsl); + mlx5_quota_cmd_completion_handle(sq); + rte_spinlock_unlock(&sq->sqsl); + if (__atomic_load_n(&quota_obj->state, __ATOMIC_RELAXED) == + MLX5_QUOTA_STATE_READY) + return 0; + } while (poll_cqe_times -= MLX5_ASO_WQE_CQE_RESPONSE_DELAY); + DRV_LOG(ERR, "QUOTA: failed to poll command CQ"); + return -1; +} + +static int +mlx5_quota_cmd_wqe(struct rte_eth_dev *dev, struct mlx5_quota *quota_obj, + quota_wqe_cmd_t wqe_cmd, uint32_t qix, uint32_t queue, + struct mlx5_hw_q_job *job, bool push, void *arg) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_dev_ctx_shared *sh = priv->sh; + struct mlx5_quota_ctx *qctx = &priv->quota_ctx; + struct mlx5_aso_sq *sq = qctx->sq + queue; + uint32_t head, sq_mask = (1 << sq->log_desc_n) - 1; + bool sync_queue = is_quota_sync_queue(priv, queue); + volatile struct mlx5_aso_wqe *restrict wqe; + int ret = 0; + + if (sync_queue) + rte_spinlock_lock(&sq->sqsl); + head = sq->head & sq_mask; + wqe = &sq->sq_obj.aso_wqes[head]; + wqe_cmd(wqe, qctx, qix, queue, arg); + wqe->general_cseg.misc = rte_cpu_to_be_32(qctx->devx_obj->id + (qix >> 1)); + wqe->general_cseg.opcode = rte_cpu_to_be_32 + (ASO_OPC_MOD_POLICER << WQE_CSEG_OPC_MOD_OFFSET | + sq->pi << WQE_CSEG_WQE_INDEX_OFFSET | MLX5_OPCODE_ACCESS_ASO); + sq->head++; + sq->pi += 2; /* Each WQE contains 2 WQEBB */ + if (push) { + mlx5_doorbell_ring(&sh->tx_uar.bf_db, *(volatile uint64_t *)wqe, + sq->pi, &sq->sq_obj.db_rec[MLX5_SND_DBR],
!sh->tx_uar.dbnc); + sq->db_pi = sq->pi; + } + sq->db = wqe; + job->query.hw = qctx->read_buf[queue] + + mlx5_quota_wqe_read_offset(qix, head); + sq->elts[head].quota_obj = sync_queue ? + quota_obj : (typeof(quota_obj))job; + if (sync_queue) { + rte_spinlock_unlock(&sq->sqsl); + ret = mlx5_quota_cmd_wait_cmpl(sq, quota_obj); + } + return ret; +} + +static void +mlx5_quota_destroy_sq(struct mlx5_priv *priv) +{ + struct mlx5_quota_ctx *qctx = &priv->quota_ctx; + uint32_t i, nb_queues = priv->nb_queue; + + if (!qctx->sq) + return; + for (i = 0; i < nb_queues; i++) + mlx5_aso_destroy_sq(qctx->sq + i); + mlx5_free(qctx->sq); +} + +static __rte_always_inline void +mlx5_quota_wqe_init_common(struct mlx5_aso_sq *sq, + volatile struct mlx5_aso_wqe *restrict wqe) +{ +#define ASO_MTR_DW0 RTE_BE32(1 << ASO_DSEG_VALID_OFFSET | \ + MLX5_FLOW_COLOR_GREEN << ASO_DSEG_SC_OFFSET) + + memset((void *)(uintptr_t)wqe, 0, sizeof(*wqe)); + wqe->general_cseg.sq_ds = rte_cpu_to_be_32((sq->sqn << 8) | + (sizeof(*wqe) >> 4)); + wqe->aso_cseg.operand_masks = RTE_BE32 + (0u | (ASO_OPER_LOGICAL_OR << ASO_CSEG_COND_OPER_OFFSET) | + (ASO_OP_ALWAYS_TRUE << ASO_CSEG_COND_1_OPER_OFFSET) | + (ASO_OP_ALWAYS_TRUE << ASO_CSEG_COND_0_OPER_OFFSET) | + (BYTEWISE_64BYTE << ASO_CSEG_DATA_MASK_MODE_OFFSET)); + wqe->general_cseg.flags = RTE_BE32 + (MLX5_COMP_ALWAYS << MLX5_COMP_MODE_OFFSET); + wqe->aso_dseg.mtrs[0].v_bo_sc_bbog_mm = ASO_MTR_DW0; + /** + * ASO Meter tokens auto-update must be disabled in quota action. 
+ * Tokens auto-update is disabled when the Meter *IR values are set to + ((0x1u << 16) | (0x1Eu << 24)), **NOT** 0x00 + */ + wqe->aso_dseg.mtrs[0].cbs_cir = RTE_BE32((0x1u << 16) | (0x1Eu << 24)); + wqe->aso_dseg.mtrs[0].ebs_eir = RTE_BE32((0x1u << 16) | (0x1Eu << 24)); + wqe->aso_dseg.mtrs[1].v_bo_sc_bbog_mm = ASO_MTR_DW0; + wqe->aso_dseg.mtrs[1].cbs_cir = RTE_BE32((0x1u << 16) | (0x1Eu << 24)); + wqe->aso_dseg.mtrs[1].ebs_eir = RTE_BE32((0x1u << 16) | (0x1Eu << 24)); +#undef ASO_MTR_DW0 +} + +static void +mlx5_quota_init_sq(struct mlx5_aso_sq *sq) +{ + uint32_t i, size = 1 << sq->log_desc_n; + + for (i = 0; i < size; i++) + mlx5_quota_wqe_init_common(sq, sq->sq_obj.aso_wqes + i); +} + +static int +mlx5_quota_alloc_sq(struct mlx5_priv *priv) +{ + struct mlx5_dev_ctx_shared *sh = priv->sh; + struct mlx5_quota_ctx *qctx = &priv->quota_ctx; + uint32_t i, nb_queues = priv->nb_queue; + + qctx->sq = mlx5_malloc(MLX5_MEM_ZERO, + sizeof(qctx->sq[0]) * nb_queues, + 0, SOCKET_ID_ANY); + if (!qctx->sq) { + DRV_LOG(DEBUG, "QUOTA: failed to allocate SQ pool"); + return -ENOMEM; + } + for (i = 0; i < nb_queues; i++) { + int ret = mlx5_aso_sq_create + (sh->cdev, qctx->sq + i, sh->tx_uar.obj, + rte_log2_u32(priv->hw_q[i].size)); + if (ret) { + DRV_LOG(DEBUG, "QUOTA: failed to allocate SQ[%u]", i); + return -ENOMEM; + } + mlx5_quota_init_sq(qctx->sq + i); + } + return 0; +} + +static void +mlx5_quota_destroy_read_buf(struct mlx5_priv *priv) +{ + struct mlx5_dev_ctx_shared *sh = priv->sh; + struct mlx5_quota_ctx *qctx = &priv->quota_ctx; + + if (qctx->mr.lkey) { + void *addr = qctx->mr.addr; + sh->cdev->mr_scache.dereg_mr_cb(&qctx->mr); + mlx5_free(addr); + } + if (qctx->read_buf) + mlx5_free(qctx->read_buf); +} + +static int +mlx5_quota_alloc_read_buf(struct mlx5_priv *priv) +{ + struct mlx5_dev_ctx_shared *sh = priv->sh; + struct mlx5_quota_ctx *qctx = &priv->quota_ctx; + uint32_t i, nb_queues = priv->nb_queue; + uint32_t sq_size_sum; + size_t page_size = rte_mem_page_size(); +
struct mlx5_aso_mtr_dseg *buf; + size_t rd_buf_size; + int ret; + + for (i = 0, sq_size_sum = 0; i < nb_queues; i++) + sq_size_sum += priv->hw_q[i].size; + /* ACCESS MTR ASO WQE reads 2 MTR objects */ + rd_buf_size = 2 * sq_size_sum * sizeof(buf[0]); + buf = mlx5_malloc(MLX5_MEM_ANY | MLX5_MEM_ZERO, rd_buf_size, + page_size, SOCKET_ID_ANY); + if (!buf) { + DRV_LOG(DEBUG, "QUOTA: failed to allocate MTR ASO READ buffer [1]"); + return -ENOMEM; + } + ret = sh->cdev->mr_scache.reg_mr_cb(sh->cdev->pd, buf, + rd_buf_size, &qctx->mr); + if (ret) { + DRV_LOG(DEBUG, "QUOTA: failed to register MTR ASO READ MR"); + return -errno; + } + qctx->read_buf = mlx5_malloc(MLX5_MEM_ZERO, + sizeof(qctx->read_buf[0]) * nb_queues, + 0, SOCKET_ID_ANY); + if (!qctx->read_buf) { + DRV_LOG(DEBUG, "QUOTA: failed to allocate MTR ASO READ buffer [2]"); + return -ENOMEM; + } + for (i = 0; i < nb_queues; i++) { + qctx->read_buf[i] = buf; + buf += 2 * priv->hw_q[i].size; + } + return 0; +} + +static __rte_always_inline int +mlx5_quota_check_ready(struct mlx5_quota *qobj, struct rte_flow_error *error) +{ + uint8_t state = MLX5_QUOTA_STATE_READY; + bool verdict = __atomic_compare_exchange_n + (&qobj->state, &state, MLX5_QUOTA_STATE_WAIT, false, + __ATOMIC_RELAXED, __ATOMIC_RELAXED); + + if (!verdict) + return rte_flow_error_set(error, EBUSY, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, "action is busy"); + return 0; +} + +int +mlx5_quota_query(struct rte_eth_dev *dev, uint32_t queue, + const struct rte_flow_action_handle *handle, + struct rte_flow_query_quota *query, + struct mlx5_hw_q_job *async_job, bool push, + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_quota_ctx *qctx = &priv->quota_ctx; + uint32_t work_queue = !is_quota_sync_queue(priv, queue) ? 
+ queue : quota_sync_queue(priv); + uint32_t id = MLX5_INDIRECT_ACTION_IDX_GET(handle); + uint32_t qix = id - 1; + struct mlx5_quota *qobj = mlx5_ipool_get(qctx->quota_ipool, id); + struct mlx5_hw_q_job sync_job; + int ret; + + if (!qobj) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_HANDLE, NULL, + "invalid query handle"); + ret = mlx5_quota_check_ready(qobj, error); + if (ret) + return ret; + ret = mlx5_quota_cmd_wqe(dev, qobj, mlx5_quota_wqe_query, qix, work_queue, + async_job ? async_job : &sync_job, push, NULL); + if (ret) { + __atomic_store_n(&qobj->state, MLX5_QUOTA_STATE_READY, + __ATOMIC_RELAXED); + return rte_flow_error_set(error, EAGAIN, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, "try again"); + } + if (is_quota_sync_queue(priv, queue)) + query->quota = mlx5_quota_fetch_tokens(sync_job.query.hw); + return 0; +} + +int +mlx5_quota_query_update(struct rte_eth_dev *dev, uint32_t queue, + struct rte_flow_action_handle *handle, + const struct rte_flow_action *update, + struct rte_flow_query_quota *query, + struct mlx5_hw_q_job *async_job, bool push, + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_quota_ctx *qctx = &priv->quota_ctx; + const struct rte_flow_update_quota *conf = update->conf; + uint32_t work_queue = !is_quota_sync_queue(priv, queue) ? + queue : quota_sync_queue(priv); + uint32_t id = MLX5_INDIRECT_ACTION_IDX_GET(handle); + uint32_t qix = id - 1; + struct mlx5_quota *qobj = mlx5_ipool_get(qctx->quota_ipool, id); + struct mlx5_hw_q_job sync_job; + quota_wqe_cmd_t wqe_cmd = query ? 
+ mlx5_quota_wqe_query_update : + mlx5_quota_wqe_update; + int ret; + + if (conf->quota > MLX5_MTR_MAX_TOKEN_VALUE) + return rte_flow_error_set(error, E2BIG, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, "update value too big"); + if (!qobj) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_HANDLE, NULL, + "invalid query_update handle"); + if (conf->op == RTE_FLOW_UPDATE_QUOTA_ADD && + qobj->last_update == RTE_FLOW_UPDATE_QUOTA_ADD) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, "cannot add twice"); + ret = mlx5_quota_check_ready(qobj, error); + if (ret) + return ret; + ret = mlx5_quota_cmd_wqe(dev, qobj, wqe_cmd, qix, work_queue, + async_job ? async_job : &sync_job, push, + (void *)(uintptr_t)update->conf); + if (ret) { + __atomic_store_n(&qobj->state, MLX5_QUOTA_STATE_READY, + __ATOMIC_RELAXED); + return rte_flow_error_set(error, EAGAIN, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, "try again"); + } + qobj->last_update = conf->op; + if (query && is_quota_sync_queue(priv, queue)) + query->quota = mlx5_quota_fetch_tokens(sync_job.query.hw); + return 0; +} + +struct rte_flow_action_handle * +mlx5_quota_alloc(struct rte_eth_dev *dev, uint32_t queue, + const struct rte_flow_action_quota *conf, + struct mlx5_hw_q_job *job, bool push, + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_quota_ctx *qctx = &priv->quota_ctx; + uint32_t id; + struct mlx5_quota *qobj; + uintptr_t handle = (uintptr_t)MLX5_INDIRECT_ACTION_TYPE_QUOTA << + MLX5_INDIRECT_ACTION_TYPE_OFFSET; + uint32_t work_queue = !is_quota_sync_queue(priv, queue) ? 
+ queue : quota_sync_queue(priv); + struct mlx5_hw_q_job sync_job; + uint8_t state = MLX5_QUOTA_STATE_FREE; + bool verdict; + int ret; + + qobj = mlx5_ipool_malloc(qctx->quota_ipool, &id); + if (!qobj) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "quota: failed to allocate quota object"); + return NULL; + } + verdict = __atomic_compare_exchange_n + (&qobj->state, &state, MLX5_QUOTA_STATE_WAIT, false, + __ATOMIC_RELAXED, __ATOMIC_RELAXED); + if (!verdict) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "quota: new quota object has invalid state"); + return NULL; + } + switch (conf->mode) { + case RTE_FLOW_QUOTA_MODE_L2: + qobj->mode = MLX5_METER_MODE_L2_LEN; + break; + case RTE_FLOW_QUOTA_MODE_PACKET: + qobj->mode = MLX5_METER_MODE_PKT; + break; + default: + qobj->mode = MLX5_METER_MODE_IP_LEN; + } + ret = mlx5_quota_cmd_wqe(dev, qobj, mlx5_quota_set_init_wqe, id - 1, + work_queue, job ? job : &sync_job, push, + (void *)(uintptr_t)conf); + if (ret) { + mlx5_ipool_free(qctx->quota_ipool, id); + __atomic_store_n(&qobj->state, MLX5_QUOTA_STATE_FREE, + __ATOMIC_RELAXED); + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "quota: WR failure"); + return 0; + } + return (struct rte_flow_action_handle *)(handle | id); +} + +int +mlx5_flow_quota_destroy(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_quota_ctx *qctx = &priv->quota_ctx; + int ret; + + if (qctx->quota_ipool) + mlx5_ipool_destroy(qctx->quota_ipool); + mlx5_quota_destroy_sq(priv); + mlx5_quota_destroy_read_buf(priv); + if (qctx->dr_action) { + ret = mlx5dr_action_destroy(qctx->dr_action); + if (ret) + DRV_LOG(ERR, "QUOTA: failed to destroy DR action"); + } + if (qctx->devx_obj) { + ret = mlx5_devx_cmd_destroy(qctx->devx_obj); + if (ret) + DRV_LOG(ERR, "QUOTA: failed to destroy MTR ASO object"); + } + memset(qctx, 0, sizeof(*qctx)); + return 0; +} + +#define MLX5_QUOTA_IPOOL_TRUNK_SIZE (1u << 
12) +#define MLX5_QUOTA_IPOOL_CACHE_SIZE (1u << 13) +int +mlx5_flow_quota_init(struct rte_eth_dev *dev, uint32_t nb_quotas) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_dev_ctx_shared *sh = priv->sh; + struct mlx5_quota_ctx *qctx = &priv->quota_ctx; + int reg_id = mlx5_flow_get_reg_id(dev, MLX5_MTR_COLOR, 0, NULL); + uint32_t flags = MLX5DR_ACTION_FLAG_HWS_RX | MLX5DR_ACTION_FLAG_HWS_TX; + struct mlx5_indexed_pool_config quota_ipool_cfg = { + .size = sizeof(struct mlx5_quota), + .trunk_size = RTE_MIN(nb_quotas, MLX5_QUOTA_IPOOL_TRUNK_SIZE), + .need_lock = 1, + .release_mem_en = !!priv->sh->config.reclaim_mode, + .malloc = mlx5_malloc, + .max_idx = nb_quotas, + .free = mlx5_free, + .type = "mlx5_flow_quota_index_pool" + }; + int ret; + + if (!nb_quotas) { + DRV_LOG(DEBUG, "QUOTA: cannot create quota with 0 objects"); + return -EINVAL; + } + if (!priv->mtr_en || !sh->meter_aso_en) { + DRV_LOG(DEBUG, "QUOTA: no MTR support"); + return -ENOTSUP; + } + if (reg_id < 0) { + DRV_LOG(DEBUG, "QUOTA: MRT register not available"); + return -ENOTSUP; + } + qctx->devx_obj = mlx5_devx_cmd_create_flow_meter_aso_obj + (sh->cdev->ctx, sh->cdev->pdn, rte_log2_u32(nb_quotas >> 1)); + if (!qctx->devx_obj) { + DRV_LOG(DEBUG, "QUOTA: cannot allocate MTR ASO objects"); + return -ENOMEM; + } + if (sh->config.dv_esw_en && priv->master) + flags |= MLX5DR_ACTION_FLAG_HWS_FDB; + qctx->dr_action = mlx5dr_action_create_aso_meter + (priv->dr_ctx, (struct mlx5dr_devx_obj *)qctx->devx_obj, + reg_id - REG_C_0, flags); + if (!qctx->dr_action) { + DRV_LOG(DEBUG, "QUOTA: failed to create DR action"); + ret = -ENOMEM; + goto err; + } + ret = mlx5_quota_alloc_read_buf(priv); + if (ret) + goto err; + ret = mlx5_quota_alloc_sq(priv); + if (ret) + goto err; + if (nb_quotas < MLX5_QUOTA_IPOOL_TRUNK_SIZE) + quota_ipool_cfg.per_core_cache = 0; + else if (nb_quotas < MLX5_HW_IPOOL_SIZE_THRESHOLD) + quota_ipool_cfg.per_core_cache = MLX5_HW_IPOOL_CACHE_MIN; + else + 
quota_ipool_cfg.per_core_cache = MLX5_QUOTA_IPOOL_CACHE_SIZE; + qctx->quota_ipool = mlx5_ipool_create(&quota_ipool_cfg); + if (!qctx->quota_ipool) { + DRV_LOG(DEBUG, "QUOTA: failed to allocate quota pool"); + ret = -ENOMEM; + goto err; + } + qctx->nb_quotas = nb_quotas; + return 0; +err: + mlx5_flow_quota_destroy(dev); + return ret; +} From patchwork Wed Jan 18 12:55:56 2023 X-Patchwork-Submitter: Gregory Etelson X-Patchwork-Id: 122310 X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson CC: Viacheslav Ovsiienko Subject: [PATCH 5/5] mlx5dr: Definer, translate RTE quota item Date: Wed, 18 Jan 2023 14:55:56 +0200 Message-ID: <20230118125556.23622-6-getelson@nvidia.com> In-Reply-To: <20230118125556.23622-1-getelson@nvidia.com> References: <20230118125556.23622-1-getelson@nvidia.com>
LQgAe+bhRQLSiO7gzrs7RHbD83wODcA0mlbe0nAzn/HBg1+QNfXnW7fY//oNWXjY0xSluj+TXKt23Wut1nLEo/X2N2BRAY8z+goVaQspZsiND2Q6d4v6SWRRSyXdFvnIG1FwyvpvjIyPs7wK3I9ltDYIB4RBLDsXc0Jx5SP0cQZMk0ouYqQ8eXyrlFhqmsJTlUjQnZN8V6uRUlwPDvlx4AFYJHVcnqC22aSmBu/DR4rkrDFpZ/g+DloZnwlZujAOzF16VFYRT18+Hw7MegGYUEJO1yrbJDBrtRShH3C3okmNPPQgK5KFfEWIcuy2ccCW6H5qsrAQQJDhF+VMRpKIYCj4Yb/meWKMp9OG37qGp4OwvApWx7qnzBvg1Xh4G3vYQHjFCGy2LN8IITKl9RLsug1UHAADrDvHhAcItee1mQ2QxjVy+2LAmI52T5D6lKDOy5RD44Tyu41Rcw3GWCzYwujUvKCQvvOzzvrl/u4MB0ZVakTb3As6rygxYyzFrn5JBvEinMg4cSbyqcMLESPUuZXxlKOuSDFEmNQ6BKubZf1RnDuRC9RcnImgfMd4aaz+gBg8L3NfKdST7aIH8NpPHikBRx/0fTQ3BemgmPoLxth3fMh4ttaoQmZOy31yxDwbcSSBk/XrR/Q6R0Ejtzg0AW5eZsBN2qmql/wHnMKgL47uPYkGzVHoMxMHbPdolyc9E4LNWnvAcsTIfsyhQzpICx77YrLZb96y+FisoVm/7f4= X-Forefront-Antispam-Report: CIP:216.228.117.160; CTRY:US; LANG:en; SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:mail.nvidia.com; PTR:dc6edge1.nvidia.com; CAT:NONE; SFS:(13230022)(4636009)(39860400002)(376002)(396003)(346002)(136003)(451199015)(36840700001)(40470700004)(46966006)(1076003)(54906003)(2616005)(186003)(8676002)(4326008)(336012)(6286002)(316002)(7696005)(70586007)(26005)(478600001)(70206006)(6916009)(41300700001)(8936002)(15650500001)(47076005)(426003)(5660300002)(6666004)(107886003)(2906002)(83380400001)(36860700001)(356005)(7636003)(82740400003)(16526019)(86362001)(55016003)(40480700001)(36756003)(40460700003)(82310400005)(32563001); DIR:OUT; SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jan 2023 12:56:32.6465 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 0f62c320-1d80-49fd-dd3c-08daf9536cb6 X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[216.228.117.160]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: DS1PEPF0000B074.namprd05.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: 
MLX5 PMD implements QUOTA with Meter object.
PMD Quota action translation implicitly increments
Meter register value after HW assigns it.

Meter register values are:
        HW    QUOTA(HW+1)  QUOTA state
RED     0     1 (01b)      BLOCK
YELLOW  1     2 (10b)      PASS
GREEN   2     3 (11b)      PASS

Quota item checks Meter register bit 1 value to determine state:
        SPEC      MASK
PASS    2 (10b)   2 (10b)
BLOCK   0 (00b)   2 (10b)

Signed-off-by: Gregory Etelson
---
 drivers/net/mlx5/hws/mlx5dr_definer.c | 61 +++++++++++++++++++++++++++
 1 file changed, 61 insertions(+)

diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 6b98eb8c96..40ffb02be0 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -19,6 +19,9 @@
 #define STE_UDP		0x2
 #define STE_ICMP	0x3
 
+#define MLX5DR_DEFINER_QUOTA_BLOCK 0
+#define MLX5DR_DEFINER_QUOTA_PASS 2
+
 /* Setter function based on bit offset and mask, for 32bit DW*/
 #define _DR_SET_32(p, v, byte_off, bit_off, mask) \
 	do { \
@@ -1134,6 +1137,60 @@ mlx5dr_definer_conv_item_tag(struct mlx5dr_definer_conv_data *cd,
 	return 0;
 }
 
+static void
+mlx5dr_definer_quota_set(struct mlx5dr_definer_fc *fc,
+			 const void *item_data, uint8_t *tag)
+{
+	/**
+	 * MLX5 PMD implements QUOTA with Meter object.
+	 * PMD Quota action translation implicitly increments
+	 * Meter register value after HW assigns it.
+	 * Meter register values are:
+	 *           HW     QUOTA(HW+1)  QUOTA state
+	 * RED       0      1 (01b)      BLOCK
+	 * YELLOW    1      2 (10b)      PASS
+	 * GREEN     2      3 (11b)      PASS
+	 *
+	 * Quota item checks Meter register bit 1 value to determine state:
+	 *           SPEC       MASK
+	 * PASS      2 (10b)    2 (10b)
+	 * BLOCK     0 (00b)    2 (10b)
+	 *
+	 * item_data is NULL when template quota item is non-masked:
+	 * .. / quota / ..
+	 */
+	const struct rte_flow_item_quota *quota = item_data;
+	uint32_t val;
+
+	if (quota && (quota->state == RTE_FLOW_QUOTA_STATE_BLOCK))
+		val = MLX5DR_DEFINER_QUOTA_BLOCK;
+	else
+		val = MLX5DR_DEFINER_QUOTA_PASS;
+
+	DR_SET(tag, val, fc->byte_off, fc->bit_off, fc->bit_mask);
+}
+
+static int
+mlx5dr_definer_conv_item_quota(struct mlx5dr_definer_conv_data *cd,
+			       __rte_unused struct rte_flow_item *item,
+			       int item_idx)
+{
+	int mtr_reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_METER_COLOR, 0);
+	struct mlx5dr_definer_fc *fc;
+
+	if (mtr_reg < 0)
+		return EINVAL;
+
+	fc = mlx5dr_definer_get_register_fc(cd, mtr_reg);
+	if (!fc)
+		return rte_errno;
+
+	fc->tag_set = &mlx5dr_definer_quota_set;
+	fc->item_idx = item_idx;
+	return 0;
+}
+
 static int
 mlx5dr_definer_conv_item_metadata(struct mlx5dr_definer_conv_data *cd,
 				  struct rte_flow_item *item,
@@ -1581,6 +1638,10 @@ mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
 		ret = mlx5dr_definer_conv_item_meter_color(&cd, items, i);
 		item_flags |= MLX5_FLOW_ITEM_METER_COLOR;
 		break;
+	case RTE_FLOW_ITEM_TYPE_QUOTA:
+		ret = mlx5dr_definer_conv_item_quota(&cd, items, i);
+		item_flags |= MLX5_FLOW_ITEM_QUOTA;
+		break;
 	default:
 		DR_LOG(ERR, "Unsupported item type %d", items->type);
 		rte_errno = ENOTSUP;