From patchwork Mon Jun 12 20:05:49 2023
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 128518
X-Patchwork-Delegate: rasland@nvidia.com
From: Alexander Kozyrev
Subject: [PATCH 1/4] net/mlx5/hws: use the same function to check rule
Date: Mon, 12 Jun 2023 23:05:49 +0300
Message-ID: <20230612200552.3450964-2-akozyrev@nvidia.com>
In-Reply-To: <20230612200552.3450964-1-akozyrev@nvidia.com>
References: <20230612200552.3450964-1-akozyrev@nvidia.com>
List-Id: DPDK patches and discussions

From: Erez Shitrit

Use the same function to check a rule before handling its insertion or
deletion, instead of duplicating the checks in both paths.

Signed-off-by: Erez Shitrit
Reviewed-by: Alex Vesker
Acked-by: Ori Kam
---
 drivers/net/mlx5/hws/mlx5dr_rule.c | 38 +++++++++++++++---------------
 1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c
index 2418ca0b26..e0c4a6a91a 100644
--- a/drivers/net/mlx5/hws/mlx5dr_rule.c
+++ b/drivers/net/mlx5/hws/mlx5dr_rule.c
@@ -630,6 +630,23 @@ static int mlx5dr_rule_destroy_root(struct mlx5dr_rule *rule,
 	return 0;
 }
 
+static int mlx5dr_rule_enqueue_precheck(struct mlx5dr_context *ctx,
+					struct mlx5dr_rule_attr *attr)
+{
+	if (unlikely(!attr->user_data)) {
+		rte_errno = EINVAL;
+		return rte_errno;
+	}
+
+	/* Check if there is room in queue */
+	if (unlikely(mlx5dr_send_engine_full(&ctx->send_queue[attr->queue_id]))) {
+		rte_errno = EBUSY;
+		return rte_errno;
+	}
+
+	return 0;
+}
+
 int mlx5dr_rule_create(struct mlx5dr_matcher *matcher,
 		       uint8_t mt_idx,
 		       const struct rte_flow_item items[],
@@ -644,16 +661,8 @@ int mlx5dr_rule_create(struct mlx5dr_matcher *matcher,
 	rule_handle->matcher = matcher;
 	ctx = matcher->tbl->ctx;
 
-	if (unlikely(!attr->user_data)) {
-		rte_errno = EINVAL;
+	if (mlx5dr_rule_enqueue_precheck(ctx, attr))
 		return -rte_errno;
-	}
-
-	/* Check if there is room in queue */
-	if (unlikely(mlx5dr_send_engine_full(&ctx->send_queue[attr->queue_id]))) {
-		rte_errno = EBUSY;
-		return -rte_errno;
-	}
 
 	assert(matcher->num_of_mt >= mt_idx);
 	assert(matcher->num_of_at >= at_idx);
@@ -677,19 +686,10 @@ int mlx5dr_rule_create(struct mlx5dr_matcher *matcher,
 int mlx5dr_rule_destroy(struct mlx5dr_rule *rule,
 			struct mlx5dr_rule_attr *attr)
 {
-	struct mlx5dr_context *ctx = rule->matcher->tbl->ctx;
 	int ret;
 
-	if (unlikely(!attr->user_data)) {
-		rte_errno = EINVAL;
-		return -rte_errno;
-	}
-
-	/* Check if there is room in queue */
-	if (unlikely(mlx5dr_send_engine_full(&ctx->send_queue[attr->queue_id]))) {
-		rte_errno = EBUSY;
+	if (mlx5dr_rule_enqueue_precheck(rule->matcher->tbl->ctx, attr))
 		return -rte_errno;
-	}
 
 	if (unlikely(mlx5dr_table_is_root(rule->matcher->tbl)))
 		ret = mlx5dr_rule_destroy_root(rule, attr);
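The patch above folds two duplicated validations (a non-NULL `user_data` and room in the send queue) into one precheck helper shared by the create and destroy paths. A minimal, self-contained sketch of the same pattern, using invented mock types rather than the real mlx5dr structures or `rte_errno` handling:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

/* Mock stand-ins for the driver objects; the real mlx5dr structures differ,
 * this only illustrates the consolidation pattern. */
struct mock_queue { int depth; int used; };
struct mock_ctx { struct mock_queue send_queue[4]; };
struct mock_attr { int queue_id; void *user_data; };

static bool mock_send_engine_full(const struct mock_queue *q)
{
        return q->used >= q->depth;
}

/* One shared precheck, analogous to mlx5dr_rule_enqueue_precheck():
 * both the create and destroy paths call this instead of repeating the
 * user_data and queue-capacity checks. Returns 0 on success or an errno. */
static int rule_enqueue_precheck(struct mock_ctx *ctx,
                                 const struct mock_attr *attr)
{
        if (attr->user_data == NULL)
                return EINVAL;

        /* Check if there is room in the queue */
        if (mock_send_engine_full(&ctx->send_queue[attr->queue_id]))
                return EBUSY;

        return 0;
}
```

The benefit is the one in the diffstat: both call sites shrink to a single `if`, and any future precondition is added in exactly one place.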
From patchwork Mon Jun 12 20:05:50 2023
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 128519
X-Patchwork-Delegate: rasland@nvidia.com
From: Alexander Kozyrev
Subject: [PATCH 2/4] net/mlx5/hws: use union in the wqe-data struct
Date: Mon, 12 Jun 2023 23:05:50 +0300
Message-ID: <20230612200552.3450964-3-akozyrev@nvidia.com>
In-Reply-To: <20230612200552.3450964-1-akozyrev@nvidia.com>
From: Erez Shitrit

To be clear about which field we are going to set.

Signed-off-by: Erez Shitrit
Reviewed-by: Alex Vesker
Acked-by: Ori Kam
---
 drivers/net/mlx5/hws/mlx5dr_send.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr_send.c b/drivers/net/mlx5/hws/mlx5dr_send.c
index d650c55124..e58fdeb117 100644
--- a/drivers/net/mlx5/hws/mlx5dr_send.c
+++ b/drivers/net/mlx5/hws/mlx5dr_send.c
@@ -110,7 +110,7 @@ mlx5dr_send_wqe_set_tag(struct mlx5dr_wqe_gta_data_seg_ste *wqe_data,
 	if (is_jumbo) {
 		/* Clear previous possibly dirty control */
 		memset(wqe_data, 0, MLX5DR_STE_CTRL_SZ);
-		memcpy(wqe_data->action, tag->jumbo, MLX5DR_JUMBO_TAG_SZ);
+		memcpy(wqe_data->jumbo, tag->jumbo, MLX5DR_JUMBO_TAG_SZ);
 	} else {
 		/* Clear previous possibly dirty control and actions */
 		memset(wqe_data, 0, MLX5DR_STE_CTRL_SZ + MLX5DR_ACTIONS_SZ);
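The one-line fix above writes the jumbo tag through the union member that names it, instead of through the overlapping `action` view. A small self-contained sketch of why this matters for readability; the struct, function, and size below are invented stand-ins, not the real `mlx5dr_wqe_gta_data_seg_ste` layout or `MLX5DR_*` constants:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define MOCK_JUMBO_TAG_SZ 44    /* invented size, not MLX5DR_JUMBO_TAG_SZ */

/* Mock of the idea behind the fix: the same bytes of the WQE data
 * segment are visible both as action words and as a jumbo tag, so
 * writing through ->action produced correct bytes but hid which view
 * of the union the code meant to fill. */
struct mock_wqe_data {
        union {
                uint8_t action[MOCK_JUMBO_TAG_SZ];
                uint8_t jumbo[MOCK_JUMBO_TAG_SZ];
        };
};

static void mock_set_jumbo_tag(struct mock_wqe_data *wqe, const uint8_t *tag)
{
        /* Same bytes as memcpy(wqe->action, ...), but the intent is
         * explicit: the jumbo-tag view is being written. */
        memcpy(wqe->jumbo, tag, MOCK_JUMBO_TAG_SZ);
}
```

Since the members alias the same storage, the change is behavior-neutral; it only documents intent at the write site.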
From patchwork Mon Jun 12 20:05:51 2023
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 128520
X-Patchwork-Delegate: rasland@nvidia.com
From: Alexander Kozyrev
Subject: [PATCH 3/4] net/mlx5/hws: support rule update after its creation
Date: Mon, 12 Jun 2023 23:05:51 +0300
Message-ID: <20230612200552.3450964-4-akozyrev@nvidia.com>
In-Reply-To: <20230612200552.3450964-1-akozyrev@nvidia.com>
2023 20:06:31.4072 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 2f2e43ff-ed57-4b33-f17a-08db6b8083d7 X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[216.228.117.161]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: MWH0EPF000989E6.namprd02.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA0PR12MB9010 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Erez Shitrit Add the ability to change rule's actions after the rule already created. The new actions should be one of the action template list. That support is only for matcher that uses the optimization of using rule insertion by index (optimize_using_rule_idx) Signed-off-by: Erez Shitrit Reviewed-by: Alex Vesker Acked-by: Ori Kam --- drivers/net/mlx5/hws/mlx5dr.h | 17 ++++++ drivers/net/mlx5/hws/mlx5dr_rule.c | 85 ++++++++++++++++++++++++++---- 2 files changed, 93 insertions(+), 9 deletions(-) diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h index c14fef7a6b..f881d7c961 100644 --- a/drivers/net/mlx5/hws/mlx5dr.h +++ b/drivers/net/mlx5/hws/mlx5dr.h @@ -365,6 +365,23 @@ int mlx5dr_rule_create(struct mlx5dr_matcher *matcher, int mlx5dr_rule_destroy(struct mlx5dr_rule *rule, struct mlx5dr_rule_attr *attr); +/* Enqueue update actions on an existing rule. + * + * @param[in, out] rule_handle + * A valid rule handle to update. + * @param[in] at_idx + * Action template index to update the actions with. + * @param[in] rule_actions + * Rule action to be executed on match. + * @param[in] attr + * Rule update attributes. 
+ * @return zero on successful enqueue non zero otherwise. + */ +int mlx5dr_rule_action_update(struct mlx5dr_rule *rule_handle, + uint8_t at_idx, + struct mlx5dr_rule_action rule_actions[], + struct mlx5dr_rule_attr *attr); + /* Create direct rule drop action. * * @param[in] ctx diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c index e0c4a6a91a..071e1ad769 100644 --- a/drivers/net/mlx5/hws/mlx5dr_rule.c +++ b/drivers/net/mlx5/hws/mlx5dr_rule.c @@ -40,6 +40,17 @@ static void mlx5dr_rule_skip(struct mlx5dr_matcher *matcher, } } +static void +mlx5dr_rule_update_copy_tag(struct mlx5dr_rule *rule, + struct mlx5dr_wqe_gta_data_seg_ste *wqe_data, + bool is_jumbo) +{ + if (is_jumbo) + memcpy(wqe_data->jumbo, rule->tag.jumbo, MLX5DR_JUMBO_TAG_SZ); + else + memcpy(wqe_data->tag, rule->tag.match, MLX5DR_MATCH_TAG_SZ); +} + static void mlx5dr_rule_init_dep_wqe(struct mlx5dr_send_ring_dep_wqe *dep_wqe, struct mlx5dr_rule *rule, const struct rte_flow_item *items, @@ -53,6 +64,14 @@ static void mlx5dr_rule_init_dep_wqe(struct mlx5dr_send_ring_dep_wqe *dep_wqe, dep_wqe->rule = rule; dep_wqe->user_data = user_data; + if (!items) { /* rule update */ + dep_wqe->rtc_0 = rule->rtc_0; + dep_wqe->rtc_1 = rule->rtc_1; + dep_wqe->retry_rtc_1 = 0; + dep_wqe->retry_rtc_0 = 0; + return; + } + switch (tbl->type) { case MLX5DR_TABLE_TYPE_NIC_RX: case MLX5DR_TABLE_TYPE_NIC_TX: @@ -213,15 +232,20 @@ void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule) static void mlx5dr_rule_create_init(struct mlx5dr_rule *rule, struct mlx5dr_send_ste_attr *ste_attr, - struct mlx5dr_actions_apply_data *apply) + struct mlx5dr_actions_apply_data *apply, + bool is_update) { struct mlx5dr_matcher *matcher = rule->matcher; struct mlx5dr_table *tbl = matcher->tbl; struct mlx5dr_context *ctx = tbl->ctx; /* Init rule before reuse */ - rule->rtc_0 = 0; - rule->rtc_1 = 0; + if (!is_update) { + /* In update we use these rtc's */ + rule->rtc_0 = 0; + rule->rtc_1 = 0; + } + 
rule->pending_wqes = 0; rule->action_ste_idx = -1; rule->status = MLX5DR_RULE_STATUS_CREATING; @@ -264,7 +288,7 @@ static int mlx5dr_rule_create_hws_fw_wqe(struct mlx5dr_rule *rule, return rte_errno; } - mlx5dr_rule_create_init(rule, &ste_attr, &apply); + mlx5dr_rule_create_init(rule, &ste_attr, &apply, false); mlx5dr_rule_init_dep_wqe(&match_wqe, rule, items, mt, attr->user_data); mlx5dr_rule_init_dep_wqe(&range_wqe, rule, items, mt, attr->user_data); @@ -348,10 +372,13 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule, struct mlx5dr_actions_apply_data apply; struct mlx5dr_send_engine *queue; uint8_t total_stes, action_stes; + bool is_update; int i, ret; + is_update = (items == NULL); + /* Insert rule using FW WQE if cannot use GTA WQE */ - if (unlikely(mlx5dr_matcher_req_fw_wqe(matcher))) + if (unlikely(mlx5dr_matcher_req_fw_wqe(matcher) && !is_update)) return mlx5dr_rule_create_hws_fw_wqe(rule, attr, mt_idx, items, at_idx, rule_actions); @@ -361,7 +388,7 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule, return rte_errno; } - mlx5dr_rule_create_init(rule, &ste_attr, &apply); + mlx5dr_rule_create_init(rule, &ste_attr, &apply, is_update); /* Allocate dependent match WQE since rule might have dependent writes. * The queued dependent WQE can be later aborted or kept as a dependency. @@ -408,9 +435,11 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule, * will always match and perform the specified actions, which * makes the tag irrelevant. 
*/ - if (likely(!mlx5dr_matcher_is_insert_by_idx(matcher))) + if (likely(!mlx5dr_matcher_is_insert_by_idx(matcher) && !is_update)) mlx5dr_definer_create_tag(items, mt->fc, mt->fc_sz, (uint8_t *)dep_wqe->wqe_data.action); + else if (unlikely(is_update)) + mlx5dr_rule_update_copy_tag(rule, &dep_wqe->wqe_data, is_jumbo); /* Rule has dependent WQEs, match dep_wqe is queued */ if (action_stes || apply.require_dep) @@ -437,8 +466,10 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule, mlx5dr_send_ste(queue, &ste_attr); } - /* Backup TAG on the rule for deletion */ - mlx5dr_rule_save_delete_info(rule, &ste_attr); + /* Backup TAG on the rule for deletion, only after insertion */ + if (!is_update) + mlx5dr_rule_save_delete_info(rule, &ste_attr); + mlx5dr_send_engine_inc_rule(queue); /* Send dependent WQEs */ @@ -666,6 +697,7 @@ int mlx5dr_rule_create(struct mlx5dr_matcher *matcher, assert(matcher->num_of_mt >= mt_idx); assert(matcher->num_of_at >= at_idx); + assert(items); if (unlikely(mlx5dr_table_is_root(matcher->tbl))) ret = mlx5dr_rule_create_root(rule_handle, @@ -699,6 +731,41 @@ int mlx5dr_rule_destroy(struct mlx5dr_rule *rule, return -ret; } +int mlx5dr_rule_action_update(struct mlx5dr_rule *rule_handle, + uint8_t at_idx, + struct mlx5dr_rule_action rule_actions[], + struct mlx5dr_rule_attr *attr) +{ + struct mlx5dr_matcher *matcher = rule_handle->matcher; + int ret; + + if (unlikely(mlx5dr_table_is_root(matcher->tbl) || + unlikely(mlx5dr_matcher_req_fw_wqe(matcher)))) { + DR_LOG(ERR, "Rule update not supported on cureent matcher"); + rte_errno = ENOTSUP; + return -rte_errno; + } + + if (!matcher->attr.optimize_using_rule_idx && + !mlx5dr_matcher_is_insert_by_idx(matcher)) { + DR_LOG(ERR, "Rule update requires optimize by idx matcher"); + rte_errno = ENOTSUP; + return -rte_errno; + } + + if (mlx5dr_rule_enqueue_precheck(matcher->tbl->ctx, attr)) + return -rte_errno; + + ret = mlx5dr_rule_create_hws(rule_handle, + attr, + 0, + NULL, + at_idx, + 
rule_actions); + + return -ret; +} + size_t mlx5dr_rule_get_handle_size(void) { return sizeof(struct mlx5dr_rule); From patchwork Mon Jun 12 20:05:52 2023 X-Patchwork-Submitter: Alexander Kozyrev X-Patchwork-Id: 128521 X-Patchwork-Delegate: rasland@nvidia.com From: Alexander Kozyrev To: CC: , , , , Subject: [PATCH 4/4] net/mlx5: implement Flow update API Date: Mon, 12 Jun 2023 23:05:52 +0300 Message-ID: <20230612200552.3450964-5-akozyrev@nvidia.com> In-Reply-To: <20230612200552.3450964-1-akozyrev@nvidia.com> References: <20230612200552.3450964-1-akozyrev@nvidia.com> Add the implementation for the rte_flow_async_actions_update() API. Construct the new actions and replace them for the Flow handle. Old resources are freed during the rte_flow_pull() invocation. Signed-off-by: Alexander Kozyrev --- drivers/net/mlx5/mlx5.h | 1 + drivers/net/mlx5/mlx5_flow.c | 56 +++++++ drivers/net/mlx5/mlx5_flow.h | 13 +++ drivers/net/mlx5/mlx5_flow_hw.c | 194 +++++++++++++++++++++++++++++--- 4 files changed, 249 insertions(+), 15 deletions(-) diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 021049ad2b..2715d6c0be 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -385,6 +385,7 @@ struct mlx5_hw_q_job { struct rte_flow_item_ethdev port_spec; struct rte_flow_item_tag tag_spec; } __rte_packed; + struct rte_flow_hw *upd_flow; /* Flow with updated values. */ }; /* HW steering job descriptor LIFO pool. 
*/ diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index eb1d7a6be2..20d896dbe3 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -1048,6 +1048,15 @@ mlx5_flow_async_flow_create_by_index(struct rte_eth_dev *dev, void *user_data, struct rte_flow_error *error); static int +mlx5_flow_async_flow_update(struct rte_eth_dev *dev, + uint32_t queue, + const struct rte_flow_op_attr *attr, + struct rte_flow *flow, + const struct rte_flow_action actions[], + uint8_t action_template_index, + void *user_data, + struct rte_flow_error *error); +static int mlx5_flow_async_flow_destroy(struct rte_eth_dev *dev, uint32_t queue, const struct rte_flow_op_attr *attr, @@ -1152,6 +1161,7 @@ static const struct rte_flow_ops mlx5_flow_ops = { mlx5_flow_async_action_handle_query_update, .async_action_handle_query = mlx5_flow_async_action_handle_query, .async_action_handle_destroy = mlx5_flow_async_action_handle_destroy, + .async_actions_update = mlx5_flow_async_flow_update, }; /* Tunnel information. */ @@ -9349,6 +9359,52 @@ mlx5_flow_async_flow_create_by_index(struct rte_eth_dev *dev, user_data, error); } +/** + * Enqueue flow update. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in] queue + * The queue used to update the flow. + * @param[in] attr + * Pointer to the flow operation attributes. + * @param[in] flow + * Pointer to the flow to be updated. + * @param[in] actions + * Actions with flow spec values. + * @param[in] action_template_index + * The action template index in the table. + * @param[in] user_data + * Pointer to the user_data. + * @param[out] error + * Pointer to error structure. + * + * @return + * 0 on success, negative value otherwise and rte_errno is set. 
+ */ +static int +mlx5_flow_async_flow_update(struct rte_eth_dev *dev, + uint32_t queue, + const struct rte_flow_op_attr *attr, + struct rte_flow *flow, + const struct rte_flow_action actions[], + uint8_t action_template_index, + void *user_data, + struct rte_flow_error *error) +{ + const struct mlx5_flow_driver_ops *fops; + struct rte_flow_attr fattr = {0}; + + if (flow_get_drv_type(dev, &fattr) != MLX5_FLOW_TYPE_HW) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "flow_q update with incorrect steering mode"); + fops = flow_get_drv_ops(MLX5_FLOW_TYPE_HW); + return fops->async_flow_update(dev, queue, attr, flow, + actions, action_template_index, user_data, error); +} + /** * Enqueue flow destruction. * diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 02e33c7fb3..e3247fb011 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1173,6 +1173,7 @@ typedef uint32_t cnt_id_t; /* HWS flow struct. */ struct rte_flow_hw { uint32_t idx; /* Flow index from indexed pool. */ + uint32_t res_idx; /* Resource index from indexed pool. */ uint32_t fate_type; /* Fate action type. */ union { /* Jump action. */ @@ -1180,6 +1181,7 @@ struct rte_flow_hw { struct mlx5_hrxq *hrxq; /* TIR action. */ }; struct rte_flow_template_table *table; /* The table flow allcated from. */ + uint8_t mt_idx; uint32_t age_idx; cnt_id_t cnt_id; uint32_t mtr_id; @@ -1371,6 +1373,7 @@ struct rte_flow_template_table { /* Action templates bind to the table. */ struct mlx5_hw_action_template ats[MLX5_HW_TBL_MAX_ACTION_TEMPLATE]; struct mlx5_indexed_pool *flow; /* The table's flow ipool. */ + struct mlx5_indexed_pool *resource; /* The table's resource ipool. */ struct mlx5_flow_template_table_cfg cfg; uint32_t type; /* Flow table type RX/TX/FDB. */ uint8_t nb_item_templates; /* Item template number. 
*/ @@ -1865,6 +1868,15 @@ typedef struct rte_flow *(*mlx5_flow_async_flow_create_by_index_t) uint8_t action_template_index, void *user_data, struct rte_flow_error *error); +typedef int (*mlx5_flow_async_flow_update_t) + (struct rte_eth_dev *dev, + uint32_t queue, + const struct rte_flow_op_attr *attr, + struct rte_flow *flow, + const struct rte_flow_action actions[], + uint8_t action_template_index, + void *user_data, + struct rte_flow_error *error); typedef int (*mlx5_flow_async_flow_destroy_t) (struct rte_eth_dev *dev, uint32_t queue, @@ -1975,6 +1987,7 @@ struct mlx5_flow_driver_ops { mlx5_flow_table_destroy_t template_table_destroy; mlx5_flow_async_flow_create_t async_flow_create; mlx5_flow_async_flow_create_by_index_t async_flow_create_by_index; + mlx5_flow_async_flow_update_t async_flow_update; mlx5_flow_async_flow_destroy_t async_flow_destroy; mlx5_flow_pull_t pull; mlx5_flow_push_t push; diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index f17a2a0522..949e9dfb95 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -2248,7 +2248,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, if (!hw_acts->mhdr->shared) { rule_acts[pos].modify_header.offset = - job->flow->idx - 1; + job->flow->res_idx - 1; rule_acts[pos].modify_header.data = (uint8_t *)job->mhdr_cmd; rte_memcpy(job->mhdr_cmd, hw_acts->mhdr->mhdr_cmds, @@ -2405,7 +2405,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, */ age_idx = mlx5_hws_age_action_create(priv, queue, 0, age, - job->flow->idx, + job->flow->res_idx, error); if (age_idx == 0) return -rte_errno; @@ -2504,7 +2504,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev, } if (hw_acts->encap_decap && !hw_acts->encap_decap->shared) { rule_acts[hw_acts->encap_decap_pos].reformat.offset = - job->flow->idx - 1; + job->flow->res_idx - 1; rule_acts[hw_acts->encap_decap_pos].reformat.data = buf; } if (mlx5_hws_cnt_id_valid(hw_acts->cnt_id)) @@ -2612,6 +2612,7 @@ 
flow_hw_async_flow_create(struct rte_eth_dev *dev, struct mlx5_hw_q_job *job; const struct rte_flow_item *rule_items; uint32_t flow_idx; + uint32_t res_idx = 0; int ret; if (unlikely((!dev->data->dev_started))) { @@ -2625,12 +2626,17 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev, flow = mlx5_ipool_zmalloc(table->flow, &flow_idx); if (!flow) goto error; + mlx5_ipool_malloc(table->resource, &res_idx); + if (!res_idx) + goto flow_free; /* * Set the table here in order to know the destination table * when free the flow afterwards. */ flow->table = table; + flow->mt_idx = pattern_template_index; flow->idx = flow_idx; + flow->res_idx = res_idx; job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx]; /* * Set the job type here in order to know if the flow memory @@ -2644,8 +2650,9 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev, * Indexed pool returns 1-based indices, but mlx5dr expects 0-based indices for rule * insertion hints. */ - MLX5_ASSERT(flow_idx > 0); - rule_attr.rule_idx = flow_idx - 1; + MLX5_ASSERT(res_idx > 0); + flow->rule_idx = res_idx - 1; + rule_attr.rule_idx = flow->rule_idx; /* * Construct the flow actions based on the input actions. * The implicitly appended action is always fixed, like metadata @@ -2672,8 +2679,10 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev, return (struct rte_flow *)flow; free: /* Flow created fail, return the descriptor and flow memory. 
*/ - mlx5_ipool_free(table->flow, flow_idx); priv->hw_q[queue].job_idx++; + mlx5_ipool_free(table->resource, res_idx); +flow_free: + mlx5_ipool_free(table->flow, flow_idx); error: rte_flow_error_set(error, rte_errno, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, @@ -2729,6 +2738,7 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev, struct rte_flow_hw *flow; struct mlx5_hw_q_job *job; uint32_t flow_idx; + uint32_t res_idx = 0; int ret; if (unlikely(rule_index >= table->cfg.attr.nb_flows)) { @@ -2742,12 +2752,17 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev, flow = mlx5_ipool_zmalloc(table->flow, &flow_idx); if (!flow) goto error; + mlx5_ipool_malloc(table->resource, &res_idx); + if (!res_idx) + goto flow_free; /* * Set the table here in order to know the destination table * when free the flow afterwards. */ flow->table = table; + flow->mt_idx = 0; flow->idx = flow_idx; + flow->res_idx = res_idx; job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx]; /* * Set the job type here in order to know if the flow memory @@ -2760,9 +2775,8 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev, /* * Set the rule index. */ - MLX5_ASSERT(flow_idx > 0); - rule_attr.rule_idx = rule_index; flow->rule_idx = rule_index; + rule_attr.rule_idx = flow->rule_idx; /* * Construct the flow actions based on the input actions. * The implicitly appended action is always fixed, like metadata @@ -2784,8 +2798,10 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev, return (struct rte_flow *)flow; free: /* Flow created fail, return the descriptor and flow memory. */ - mlx5_ipool_free(table->flow, flow_idx); priv->hw_q[queue].job_idx++; + mlx5_ipool_free(table->resource, res_idx); +flow_free: + mlx5_ipool_free(table->flow, flow_idx); error: rte_flow_error_set(error, rte_errno, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, @@ -2793,6 +2809,123 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev, return NULL; } +/** + * Enqueue HW steering flow update. 
+ * + * The flow will be applied to the HW only if the postpone bit is not set or + * the extra push function is called. + * The flow update status should be checked from the dequeue result. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + * @param[in] queue + * The queue used to update the flow. + * @param[in] attr + * Pointer to the flow operation attributes. + * @param[in] flow + * Pointer to the flow to be updated. + * @param[in] actions + * Actions with flow spec values. + * @param[in] action_template_index + * The action template index in the table. + * @param[in] user_data + * Pointer to the user_data. + * @param[out] error + * Pointer to error structure. + * + * @return + * 0 on success, negative value otherwise and rte_errno is set. + */ +static int +flow_hw_async_flow_update(struct rte_eth_dev *dev, + uint32_t queue, + const struct rte_flow_op_attr *attr, + struct rte_flow *flow, + const struct rte_flow_action actions[], + uint8_t action_template_index, + void *user_data, + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5dr_rule_attr rule_attr = { + .queue_id = queue, + .user_data = user_data, + .burst = attr->postpone, + }; + struct mlx5dr_rule_action rule_acts[MLX5_HW_MAX_ACTS]; + struct rte_flow_hw *of = (struct rte_flow_hw *)flow; + struct rte_flow_hw *nf; + struct rte_flow_template_table *table = of->table; + struct mlx5_hw_q_job *job; + uint32_t res_idx = 0; + int ret; + + if (unlikely(!priv->hw_q[queue].job_idx)) { + rte_errno = ENOMEM; + goto error; + } + mlx5_ipool_malloc(table->resource, &res_idx); + if (!res_idx) + goto error; + job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx]; + nf = job->upd_flow; + memset(nf, 0, sizeof(struct rte_flow_hw)); + /* + * Set the table here in order to know the destination table + * when free the flow afterwards. 
+ */ + nf->table = table; + nf->mt_idx = of->mt_idx; + nf->idx = of->idx; + nf->res_idx = res_idx; + /* + * Set the job type here in order to know if the flow memory + * should be freed or not when get the result from dequeue. + */ + job->type = MLX5_HW_Q_JOB_TYPE_UPDATE; + job->flow = nf; + job->user_data = user_data; + rule_attr.user_data = job; + /* + * Indexed pool returns 1-based indices, but mlx5dr expects 0-based indices for rule + * insertion hints. + */ + MLX5_ASSERT(res_idx > 0); + nf->rule_idx = res_idx - 1; + rule_attr.rule_idx = nf->rule_idx; + /* + * Construct the flow actions based on the input actions. + * The implicitly appended action is always fixed, like metadata + * copy action from FDB to NIC Rx. + * No need to copy and construct a new "actions" list based on the + * user's input, in order to save the cost. + */ + if (flow_hw_actions_construct(dev, job, + &table->ats[action_template_index], + nf->mt_idx, actions, + rule_acts, queue, error)) { + rte_errno = EINVAL; + goto free; + } + /* + * Switch the old flow and the new flow. + */ + job->flow = of; + job->upd_flow = nf; + ret = mlx5dr_rule_action_update((struct mlx5dr_rule *)of->rule, + action_template_index, rule_acts, &rule_attr); + if (likely(!ret)) + return 0; +free: + /* Flow update failed, return the descriptor and flow memory. */ + priv->hw_q[queue].job_idx++; + mlx5_ipool_free(table->resource, res_idx); +error: + return rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "failed to update rte flow"); +} + /** * Enqueue HW steering flow destruction. * @@ -3002,6 +3135,7 @@ flow_hw_pull(struct rte_eth_dev *dev, struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_aso_mtr_pool *pool = priv->hws_mpool; struct mlx5_hw_q_job *job; + uint32_t res_idx; int ret, i; /* 1. Pull the flow completion. 
*/ @@ -3012,9 +3146,12 @@ flow_hw_pull(struct rte_eth_dev *dev, "fail to query flow queue"); for (i = 0; i < ret; i++) { job = (struct mlx5_hw_q_job *)res[i].user_data; + /* Release the original resource index in case of update. */ + res_idx = job->flow->res_idx; /* Restore user data. */ res[i].user_data = job->user_data; - if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY) { + if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY || + job->type == MLX5_HW_Q_JOB_TYPE_UPDATE) { if (job->flow->fate_type == MLX5_FLOW_FATE_JUMP) flow_hw_jump_release(dev, job->flow->jump); else if (job->flow->fate_type == MLX5_FLOW_FATE_QUEUE) @@ -3026,7 +3163,14 @@ flow_hw_pull(struct rte_eth_dev *dev, mlx5_ipool_free(pool->idx_pool, job->flow->mtr_id); job->flow->mtr_id = 0; } - mlx5_ipool_free(job->flow->table->flow, job->flow->idx); + if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY) { + mlx5_ipool_free(job->flow->table->resource, res_idx); + mlx5_ipool_free(job->flow->table->flow, job->flow->idx); + } else { + rte_memcpy(job->flow, job->upd_flow, + offsetof(struct rte_flow_hw, rule)); + mlx5_ipool_free(job->flow->table->resource, res_idx); + } } priv->hw_q[queue].job[priv->hw_q[queue].job_idx++] = job; } @@ -3315,6 +3459,13 @@ flow_hw_table_create(struct rte_eth_dev *dev, tbl->flow = mlx5_ipool_create(&cfg); if (!tbl->flow) goto error; + /* Allocate rule indexed pool. */ + cfg.size = 0; + cfg.type = "mlx5_hw_table_rule"; + cfg.max_idx += priv->hw_q[0].size; + tbl->resource = mlx5_ipool_create(&cfg); + if (!tbl->resource) + goto error; /* Register the flow group. 
*/ ge = mlx5_hlist_register(priv->sh->groups, attr->flow_attr.group, &ctx); if (!ge) @@ -3417,6 +3568,8 @@ flow_hw_table_create(struct rte_eth_dev *dev, if (tbl->grp) mlx5_hlist_unregister(priv->sh->groups, &tbl->grp->entry); + if (tbl->resource) + mlx5_ipool_destroy(tbl->resource); if (tbl->flow) mlx5_ipool_destroy(tbl->flow); mlx5_free(tbl); @@ -3593,16 +3746,20 @@ flow_hw_table_destroy(struct rte_eth_dev *dev, struct mlx5_priv *priv = dev->data->dev_private; int i; uint32_t fidx = 1; + uint32_t ridx = 1; /* Build ipool allocated object bitmap. */ + mlx5_ipool_flush_cache(table->resource); mlx5_ipool_flush_cache(table->flow); /* Check if ipool has allocated objects. */ - if (table->refcnt || mlx5_ipool_get_next(table->flow, &fidx)) { - DRV_LOG(WARNING, "Table %p is still in using.", (void *)table); + if (table->refcnt || + mlx5_ipool_get_next(table->flow, &fidx) || + mlx5_ipool_get_next(table->resource, &ridx)) { + DRV_LOG(WARNING, "Table %p is still in use.", (void *)table); return rte_flow_error_set(error, EBUSY, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, - "table in using"); + "table in use"); } LIST_REMOVE(table, next); for (i = 0; i < table->nb_item_templates; i++) @@ -3615,6 +3772,7 @@ flow_hw_table_destroy(struct rte_eth_dev *dev, } mlx5dr_matcher_destroy(table->matcher); mlx5_hlist_unregister(priv->sh->groups, &table->grp->entry); + mlx5_ipool_destroy(table->resource); mlx5_ipool_destroy(table->flow); mlx5_free(table); return 0; @@ -7416,7 +7574,8 @@ flow_hw_configure(struct rte_eth_dev *dev, sizeof(struct mlx5_modification_cmd) * MLX5_MHDR_MAX_CMD + sizeof(struct rte_flow_item) * - MLX5_HW_MAX_ITEMS) * + MLX5_HW_MAX_ITEMS + + sizeof(struct rte_flow_hw)) * _queue_attr[i]->size; } priv->hw_q = mlx5_malloc(MLX5_MEM_ZERO, mem_size, @@ -7430,6 +7589,7 @@ flow_hw_configure(struct rte_eth_dev *dev, uint8_t *encap = NULL; struct mlx5_modification_cmd *mhdr_cmd = NULL; struct rte_flow_item *items = NULL; + struct rte_flow_hw *upd_flow = NULL; priv->hw_q[i].job_idx = 
_queue_attr[i]->size; priv->hw_q[i].size = _queue_attr[i]->size; @@ -7448,10 +7608,13 @@ flow_hw_configure(struct rte_eth_dev *dev, &mhdr_cmd[_queue_attr[i]->size * MLX5_MHDR_MAX_CMD]; items = (struct rte_flow_item *) &encap[_queue_attr[i]->size * MLX5_ENCAP_MAX_LEN]; + upd_flow = (struct rte_flow_hw *) + &items[_queue_attr[i]->size * MLX5_HW_MAX_ITEMS]; for (j = 0; j < _queue_attr[i]->size; j++) { job[j].mhdr_cmd = &mhdr_cmd[j * MLX5_MHDR_MAX_CMD]; job[j].encap_data = &encap[j * MLX5_ENCAP_MAX_LEN]; job[j].items = &items[j * MLX5_HW_MAX_ITEMS]; + job[j].upd_flow = &upd_flow[j]; priv->hw_q[i].job[j] = &job[j]; } snprintf(mz_name, sizeof(mz_name), "port_%u_indir_act_cq_%u", @@ -9031,6 +9194,7 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = { .template_table_destroy = flow_hw_table_destroy, .async_flow_create = flow_hw_async_flow_create, .async_flow_create_by_index = flow_hw_async_flow_create_by_index, + .async_flow_update = flow_hw_async_flow_update, .async_flow_destroy = flow_hw_async_flow_destroy, .pull = flow_hw_pull, .push = flow_hw_push,
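The core trick of this patch is the per-job scratch flow: flow_hw_async_flow_update() builds the new action state into job->upd_flow under a fresh resource index, swaps the pointers so the completion sees the old flow, and flow_hw_pull() copies the new values over the original handle (up to offsetof(struct rte_flow_hw, rule)) before returning the old resource index to the pool. Below is a minimal standalone sketch of that swap; the struct layout, function names, and the toy bump allocator are illustrative stand-ins, not the driver's mlx5_ipool API.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative stand-ins for rte_flow_hw and mlx5_hw_q_job. */
struct flow_hw {
	uint32_t idx;      /* flow index, stable across updates */
	uint32_t res_idx;  /* resource index, replaced on every update */
	uint32_t rule_idx; /* 0-based index handed to the rule engine */
	uint64_t rule;     /* placeholder for the HW rule; must stay last */
};

struct q_job {
	struct flow_hw *flow;     /* flow whose old resources get freed on pull */
	struct flow_hw *upd_flow; /* pre-allocated scratch flow */
};

/* Toy 1-based index allocator standing in for mlx5_ipool. */
static uint32_t next_idx = 1;
static bool freed[16];
static uint32_t ipool_malloc(void) { return next_idx++; }
static void ipool_free(uint32_t i) { freed[i] = true; }

/* Enqueue an update: build the new flow in scratch memory, then swap
 * so the job tracks the old flow whose resources are released later. */
static void enqueue_update(struct q_job *job, struct flow_hw *of)
{
	struct flow_hw *nf = job->upd_flow;

	memset(nf, 0, sizeof(*nf));
	nf->idx = of->idx;              /* same flow handle identity */
	nf->res_idx = ipool_malloc();   /* fresh resource slot */
	nf->rule_idx = nf->res_idx - 1; /* pool is 1-based, engine 0-based */
	job->flow = of;
	job->upd_flow = nf;
}

/* Completion: copy the new values into the original handle, everything
 * up to (not including) the rule, then free the old resource index. */
static void complete_update(struct q_job *job)
{
	uint32_t old_res = job->flow->res_idx;

	memcpy(job->flow, job->upd_flow, offsetof(struct flow_hw, rule));
	ipool_free(old_res);
}
```

The offsetof() bound mirrors the patch's rte_memcpy(job->flow, job->upd_flow, offsetof(struct rte_flow_hw, rule)) in flow_hw_pull(): the application's flow pointer and the HW rule stay put, while the fields in front of the rule are refreshed and only the superseded resource index goes back to the pool.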