From patchwork Sun Oct 29 16:31:46 2023
X-Patchwork-Submitter: Gregory Etelson <getelson@nvidia.com>
X-Patchwork-Id: 133587
X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson <getelson@nvidia.com>
To: dev@dpdk.org
Cc: Ori Kam, Matan Azrad, Viacheslav Ovsiienko, Suanming Mou
Subject: [PATCH 14/30] net/mlx5: reuse reformat and modify header actions in a table
Date: Sun, 29 Oct 2023 18:31:46 +0200
Message-ID: <20231029163202.216450-14-getelson@nvidia.com>
In-Reply-To: <20231029163202.216450-1-getelson@nvidia.com>
References: <20231029163202.216450-1-getelson@nvidia.com>
List-Id: DPDK patches and discussions
If an application defines several actions templates with non-shared
reformat or modify header actions and uses these templates to create a
table, HWS can share the reformat or modify header resources instead of
creating a separate resource for each actions template.

The patch activates the HWS code path that shares reformat and modify
header resources between the actions templates of a table.

The patch also updates the modify field and raw encap template action
validations:
- modify field no longer accepts an empty actions template mask.
- raw encap now validates the actions template mask.
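To illustrate the new raw encap mask rules from the application side, a
minimal sketch (not part of the patch; buffer size and variable names are
hypothetical): the template mask must set the encap size, while a NULL
mask data pointer keeps the encap data per-flow, which makes the action a
candidate for the shared multi-pattern reformat resource introduced
below. Arrays like these would be passed to
rte_flow_actions_template_create():

	/* Hypothetical encap header bytes prepared by the application. */
	static uint8_t encap_hdr[64];

	static const struct rte_flow_action_raw_encap encap_conf = {
		.data = encap_hdr,
		.size = sizeof(encap_hdr),
	};
	static const struct rte_flow_action_raw_encap encap_mask = {
		.data = NULL,              /* unmasked data: per-flow encap */
		.size = sizeof(encap_hdr), /* size must be masked */
	};
	static const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RAW_ENCAP, .conf = &encap_conf },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	static const struct rte_flow_action masks[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RAW_ENCAP, .conf = &encap_mask },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};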
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Ori Kam
---
 drivers/net/mlx5/mlx5_flow.h    |   8 +-
 drivers/net/mlx5/mlx5_flow_dv.c |   3 +-
 drivers/net/mlx5/mlx5_flow_hw.c | 583 ++++++++++++++++++++++++--------
 3 files changed, 451 insertions(+), 143 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 64e2fc6f04..ddb3b7b6fd 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1368,7 +1368,9 @@ struct mlx5_hw_jump_action {
 struct mlx5_hw_encap_decap_action {
 	struct mlx5dr_action *action; /* Action object. */
 	/* Is header_reformat action shared across flows in table. */
-	bool shared;
+	uint32_t shared:1;
+	uint32_t multi_pattern:1;
+	volatile uint32_t *multi_pattern_refcnt;
 	size_t data_size; /* Action metadata size. */
 	uint8_t data[]; /* Action data. */
 };
@@ -1382,7 +1384,9 @@ struct mlx5_hw_modify_header_action {
 	/* Modify header action position in action rule table. */
 	uint16_t pos;
 	/* Is MODIFY_HEADER action shared across flows in table. */
-	bool shared;
+	uint32_t shared:1;
+	uint32_t multi_pattern:1;
+	volatile uint32_t *multi_pattern_refcnt;
 	/* Amount of modification commands stored in the precompiled buffer. */
 	uint32_t mhdr_cmds_num;
 	/* Precompiled modification commands. */
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index bdc8d0076a..84b94a9815 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -4579,7 +4579,8 @@ flow_dv_convert_encap_data(const struct rte_flow_item *items, uint8_t *buf,
 						  (void *)items->type,
 						  "items total size is too big"
 						  " for encap action");
-		rte_memcpy((void *)&buf[temp_size], items->spec, len);
+		if (items->spec)
+			rte_memcpy(&buf[temp_size], items->spec, len);
 		switch (items->type) {
 		case RTE_FLOW_ITEM_TYPE_ETH:
 			eth = (struct rte_ether_hdr *)&buf[temp_size];
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 6fc649d736..84c78ba19c 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -71,6 +71,95 @@ struct mlx5_indlst_legacy {
 	enum rte_flow_action_type legacy_type;
 };
 
+#define MLX5_CONST_ENCAP_ITEM(encap_type, ptr) \
+(((const struct encap_type *)(ptr))->definition)
+
+struct mlx5_multi_pattern_ctx {
+	union {
+		struct mlx5dr_action_reformat_header reformat_hdr;
+		struct mlx5dr_action_mh_pattern mh_pattern;
+	};
+	union {
+		/* action template auxiliary structures for object destruction */
+		struct mlx5_hw_encap_decap_action *encap;
+		struct mlx5_hw_modify_header_action *mhdr;
+	};
+	/* multi pattern action */
+	struct mlx5dr_rule_action *rule_action;
+};
+
+#define MLX5_MULTIPATTERN_ENCAP_NUM 4
+
+struct mlx5_tbl_multi_pattern_ctx {
+	struct {
+		uint32_t elements_num;
+		struct mlx5_multi_pattern_ctx ctx[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+	} reformat[MLX5_MULTIPATTERN_ENCAP_NUM];
+
+	struct {
+		uint32_t elements_num;
+		struct mlx5_multi_pattern_ctx ctx[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+	} mh;
+};
+
+#define MLX5_EMPTY_MULTI_PATTERN_CTX {{{0,}},}
+
+static int
+mlx5_tbl_multi_pattern_process(struct rte_eth_dev *dev,
+			       struct rte_flow_template_table *tbl,
+			       struct mlx5_tbl_multi_pattern_ctx *mpat,
+			       struct rte_flow_error *error);
+
+static __rte_always_inline int
+mlx5_multi_pattern_reformat_to_index(enum mlx5dr_action_type type)
+{
+	switch (type) {
+	case MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2:
+		return 0;
+	case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2:
+		return 1;
+	case MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2:
+		return 2;
+	case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3:
+		return 3;
+	default:
+		break;
+	}
+	return -1;
+}
+
+static __rte_always_inline enum mlx5dr_action_type
+mlx5_multi_pattern_reformat_index_to_type(uint32_t ix)
+{
+	switch (ix) {
+	case 0:
+		return MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2;
+	case 1:
+		return MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2;
+	case 2:
+		return MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2;
+	case 3:
+		return MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3;
+	default:
+		break;
+	}
+	return MLX5DR_ACTION_TYP_MAX;
+}
+
+static inline enum mlx5dr_table_type
+get_mlx5dr_table_type(const struct rte_flow_attr *attr)
+{
+	enum mlx5dr_table_type type;
+
+	if (attr->transfer)
+		type = MLX5DR_TABLE_TYPE_FDB;
+	else if (attr->egress)
+		type = MLX5DR_TABLE_TYPE_NIC_TX;
+	else
+		type = MLX5DR_TABLE_TYPE_NIC_RX;
+	return type;
+}
+
 struct mlx5_mirror_clone {
 	enum rte_flow_action_type type;
 	void *action_ctx;
@@ -462,6 +551,34 @@ flow_hw_ct_compile(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static void
+flow_hw_template_destroy_reformat_action(struct mlx5_hw_encap_decap_action *encap_decap)
+{
+	if (encap_decap->multi_pattern) {
+		uint32_t refcnt = __atomic_sub_fetch(encap_decap->multi_pattern_refcnt,
+						     1, __ATOMIC_RELAXED);
+		if (refcnt)
+			return;
+		mlx5_free((void *)(uintptr_t)encap_decap->multi_pattern_refcnt);
+	}
+	if (encap_decap->action)
+		mlx5dr_action_destroy(encap_decap->action);
+}
+
+static void
+flow_hw_template_destroy_mhdr_action(struct mlx5_hw_modify_header_action *mhdr)
+{
+	if (mhdr->multi_pattern) {
+		uint32_t refcnt = __atomic_sub_fetch(mhdr->multi_pattern_refcnt,
+						     1, __ATOMIC_RELAXED);
+		if (refcnt)
+			return;
+		mlx5_free((void *)(uintptr_t)mhdr->multi_pattern_refcnt);
+	}
+	if (mhdr->action)
+		mlx5dr_action_destroy(mhdr->action);
+}
+
 /**
  * Destroy DR actions created by action template.
  *
@@ -503,14 +620,12 @@ __flow_hw_action_template_destroy(struct rte_eth_dev *dev,
 		acts->tir = NULL;
 	}
 	if (acts->encap_decap) {
-		if (acts->encap_decap->action)
-			mlx5dr_action_destroy(acts->encap_decap->action);
+		flow_hw_template_destroy_reformat_action(acts->encap_decap);
 		mlx5_free(acts->encap_decap);
 		acts->encap_decap = NULL;
 	}
 	if (acts->mhdr) {
-		if (acts->mhdr->action)
-			mlx5dr_action_destroy(acts->mhdr->action);
+		flow_hw_template_destroy_mhdr_action(acts->mhdr);
 		mlx5_free(acts->mhdr);
 		acts->mhdr = NULL;
 	}
@@ -881,8 +996,6 @@ flow_hw_action_modify_field_is_shared(const struct rte_flow_action *action,
 	if (v->src.field == RTE_FLOW_FIELD_VALUE) {
 		uint32_t j;
 
-		if (m == NULL)
-			return false;
 		for (j = 0; j < RTE_DIM(m->src.value); ++j) {
 			/*
 			 * Immediate value is considered to be masked
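Both destructors above follow the same release protocol, condensed here
into a single sketch (illustrative only, not patch code): every template
that shares a multi-pattern action decrements the heap-allocated
reference counter, and only the caller that drops it to zero frees the
counter and destroys the DR action.

	static void
	shared_action_put(volatile uint32_t *refcnt,
			  struct mlx5dr_action *action)
	{
		/* Not the last reference: the action is still in use. */
		if (__atomic_sub_fetch(refcnt, 1, __ATOMIC_RELAXED) != 0)
			return;
		mlx5_free((void *)(uintptr_t)refcnt);
		if (action)
			mlx5dr_action_destroy(action);
	}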
@@ -1630,6 +1743,137 @@ table_template_translate_indirect_list(struct rte_eth_dev *dev,
 	return ret;
 }
 
+static int
+mlx5_tbl_translate_reformat(struct mlx5_priv *priv,
+			    const struct rte_flow_template_table_attr *table_attr,
+			    struct mlx5_hw_actions *acts,
+			    struct rte_flow_actions_template *at,
+			    const struct rte_flow_item *enc_item,
+			    const struct rte_flow_item *enc_item_m,
+			    uint8_t *encap_data, uint8_t *encap_data_m,
+			    struct mlx5_tbl_multi_pattern_ctx *mp_ctx,
+			    size_t data_size, uint16_t reformat_src,
+			    enum mlx5dr_action_type refmt_type,
+			    struct rte_flow_error *error)
+{
+	int mp_reformat_ix = mlx5_multi_pattern_reformat_to_index(refmt_type);
+	const struct rte_flow_attr *attr = &table_attr->flow_attr;
+	enum mlx5dr_table_type tbl_type = get_mlx5dr_table_type(attr);
+	struct mlx5dr_action_reformat_header hdr;
+	uint8_t buf[MLX5_ENCAP_MAX_LEN];
+	bool shared_rfmt = false;
+	int ret;
+
+	MLX5_ASSERT(at->reformat_off != UINT16_MAX);
+	if (enc_item) {
+		MLX5_ASSERT(!encap_data);
+		ret = flow_dv_convert_encap_data(enc_item, buf, &data_size, error);
+		if (ret)
+			return ret;
+		encap_data = buf;
+		if (enc_item_m)
+			shared_rfmt = true;
+	} else if (encap_data && encap_data_m) {
+		shared_rfmt = true;
+	}
+	acts->encap_decap = mlx5_malloc(MLX5_MEM_ZERO,
+					sizeof(*acts->encap_decap) + data_size,
+					0, SOCKET_ID_ANY);
+	if (!acts->encap_decap)
+		return rte_flow_error_set(error, ENOMEM,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  NULL, "no memory for reformat context");
+	hdr.sz = data_size;
+	hdr.data = encap_data;
+	if (shared_rfmt || mp_reformat_ix < 0) {
+		uint16_t reformat_ix = at->reformat_off;
+		uint32_t flags = mlx5_hw_act_flag[!!attr->group][tbl_type] |
+				 MLX5DR_ACTION_FLAG_SHARED;
+
+		acts->encap_decap->action =
+			mlx5dr_action_create_reformat(priv->dr_ctx, refmt_type,
+						      1, &hdr, 0, flags);
+		if (!acts->encap_decap->action)
+			return -rte_errno;
+		acts->rule_acts[reformat_ix].action = acts->encap_decap->action;
+		acts->rule_acts[reformat_ix].reformat.data = acts->encap_decap->data;
+		acts->rule_acts[reformat_ix].reformat.offset = 0;
+		acts->encap_decap->shared = true;
+	} else {
+		uint32_t ix;
+		typeof(mp_ctx->reformat[0]) *reformat_ctx = mp_ctx->reformat +
+							    mp_reformat_ix;
+
+		ix = reformat_ctx->elements_num++;
+		reformat_ctx->ctx[ix].reformat_hdr = hdr;
+		reformat_ctx->ctx[ix].rule_action = &acts->rule_acts[at->reformat_off];
+		reformat_ctx->ctx[ix].encap = acts->encap_decap;
+		acts->rule_acts[at->reformat_off].reformat.hdr_idx = ix;
+		acts->encap_decap_pos = at->reformat_off;
+		acts->encap_decap->data_size = data_size;
+		ret = __flow_hw_act_data_encap_append
+			(priv, acts, (at->actions + reformat_src)->type,
+			 reformat_src, at->reformat_off, data_size);
+		if (ret)
+			return -rte_errno;
+	}
+	return 0;
+}
+
+static int
+mlx5_tbl_translate_modify_header(struct rte_eth_dev *dev,
+				 const struct mlx5_flow_template_table_cfg *cfg,
+				 struct mlx5_hw_actions *acts,
+				 struct mlx5_tbl_multi_pattern_ctx *mp_ctx,
+				 struct mlx5_hw_modify_header_action *mhdr,
+				 struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct rte_flow_template_table_attr *table_attr = &cfg->attr;
+	const struct rte_flow_attr *attr = &table_attr->flow_attr;
+	enum mlx5dr_table_type tbl_type = get_mlx5dr_table_type(attr);
+	uint16_t mhdr_ix = mhdr->pos;
+	struct mlx5dr_action_mh_pattern pattern = {
+		.sz = sizeof(struct mlx5_modification_cmd) * mhdr->mhdr_cmds_num
+	};
+
+	if (flow_hw_validate_compiled_modify_field(dev, cfg, mhdr, error)) {
+		__flow_hw_action_template_destroy(dev, acts);
+		return -rte_errno;
+	}
+	acts->mhdr = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*acts->mhdr),
+				 0, SOCKET_ID_ANY);
+	if (!acts->mhdr)
+		return rte_flow_error_set(error, ENOMEM,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  NULL, "translate modify_header: no memory for modify header context");
+	rte_memcpy(acts->mhdr, mhdr, sizeof(*mhdr));
+	pattern.data = (__be64 *)acts->mhdr->mhdr_cmds;
+	if (mhdr->shared) {
+		uint32_t flags = mlx5_hw_act_flag[!!attr->group][tbl_type] |
+				 MLX5DR_ACTION_FLAG_SHARED;
+
+		acts->mhdr->action = mlx5dr_action_create_modify_header
+			(priv->dr_ctx, 1, &pattern, 0,
+			 flags);
+		if (!acts->mhdr->action)
+			return rte_flow_error_set(error, rte_errno,
+						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+						  NULL, "translate modify_header: failed to create DR action");
+		acts->rule_acts[mhdr_ix].action = acts->mhdr->action;
+	} else {
+		typeof(mp_ctx->mh) *mh = &mp_ctx->mh;
+		uint32_t idx = mh->elements_num;
+		struct mlx5_multi_pattern_ctx *mh_ctx = mh->ctx + mh->elements_num++;
+
+		mh_ctx->mh_pattern = pattern;
+		mh_ctx->mhdr = acts->mhdr;
+		mh_ctx->rule_action = &acts->rule_acts[mhdr_ix];
+		acts->rule_acts[mhdr_ix].modify_header.pattern_idx = idx;
+	}
+	return 0;
+}
+
 /**
  * Translate rte_flow actions to DR action.
  *
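The decision logic in mlx5_tbl_translate_reformat() above can be
summarized by the following predicate (a sketch under the same naming,
not patch code): fully masked encap data selects a private SHARED DR
action, while everything else is queued in the table-wide multi-pattern
context and resolved later by mlx5_tbl_multi_pattern_process().

	static bool
	reformat_takes_shared_path(const struct rte_flow_item *enc_item,
				   const struct rte_flow_item *enc_item_m,
				   const uint8_t *encap_data,
				   const uint8_t *encap_data_m,
				   int mp_reformat_ix)
	{
		bool masked = enc_item ? enc_item_m != NULL :
			      encap_data && encap_data_m;

		/* Types without a multi-pattern bucket (negative index)
		 * always take the shared path. */
		return masked || mp_reformat_ix < 0;
	}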
@@ -1658,6 +1902,7 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 			    const struct mlx5_flow_template_table_cfg *cfg,
 			    struct mlx5_hw_actions *acts,
 			    struct rte_flow_actions_template *at,
+			    struct mlx5_tbl_multi_pattern_ctx *mp_ctx,
 			    struct rte_flow_error *error)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
@@ -1820,32 +2065,26 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 			break;
 		case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
 			MLX5_ASSERT(!reformat_used);
-			enc_item = ((const struct rte_flow_action_vxlan_encap *)
-				   actions->conf)->definition;
+			enc_item = MLX5_CONST_ENCAP_ITEM(rte_flow_action_vxlan_encap,
+							 actions->conf);
 			if (masks->conf)
-				enc_item_m = ((const struct rte_flow_action_vxlan_encap *)
-					     masks->conf)->definition;
+				enc_item_m = MLX5_CONST_ENCAP_ITEM(rte_flow_action_vxlan_encap,
+								   masks->conf);
 			reformat_used = true;
 			reformat_src = src_pos;
 			refmt_type = MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2;
 			break;
 		case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
 			MLX5_ASSERT(!reformat_used);
-			enc_item = ((const struct rte_flow_action_nvgre_encap *)
-				   actions->conf)->definition;
+			enc_item = MLX5_CONST_ENCAP_ITEM(rte_flow_action_nvgre_encap,
+							 actions->conf);
 			if (masks->conf)
-				enc_item_m = ((const struct rte_flow_action_nvgre_encap *)
-					     masks->conf)->definition;
+				enc_item_m = MLX5_CONST_ENCAP_ITEM(rte_flow_action_nvgre_encap,
+								   masks->conf);
 			reformat_used = true;
 			reformat_src = src_pos;
 			refmt_type = MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2;
 			break;
-		case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
-		case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP:
-			MLX5_ASSERT(!reformat_used);
-			reformat_used = true;
-			refmt_type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2;
-			break;
 		case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
 			raw_encap_data =
 				(const struct rte_flow_action_raw_encap *)
@@ -1869,6 +2108,12 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 			}
 			reformat_src = src_pos;
 			break;
+		case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
+		case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP:
+			MLX5_ASSERT(!reformat_used);
+			reformat_used = true;
+			refmt_type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2;
+			break;
 		case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
 			reformat_used = true;
 			refmt_type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2;
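Similarly, for the VXLAN_ENCAP case above, whether the reformat stays
shared is controlled by the actions template mask supplying an item
list. A hypothetical application-side fragment (vxlan_items and
vxlan_items_mask are assumed to be valid struct rte_flow_item arrays):

	struct rte_flow_action_vxlan_encap vx_conf = {
		.definition = vxlan_items,      /* encap header spec */
	};
	struct rte_flow_action_vxlan_encap vx_mask = {
		.definition = vxlan_items_mask, /* non-NULL => shared reformat */
	};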
@@ -2005,83 +2250,20 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 		}
 	}
 	if (mhdr.pos != UINT16_MAX) {
-		struct mlx5dr_action_mh_pattern pattern;
-		uint32_t flags;
-		uint32_t bulk_size;
-		size_t mhdr_len;
-
-		if (flow_hw_validate_compiled_modify_field(dev, cfg, &mhdr, error)) {
-			__flow_hw_action_template_destroy(dev, acts);
-			return -rte_errno;
-		}
-		acts->mhdr = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*acts->mhdr),
-					 0, SOCKET_ID_ANY);
-		if (!acts->mhdr)
-			goto err;
-		rte_memcpy(acts->mhdr, &mhdr, sizeof(*acts->mhdr));
-		mhdr_len = sizeof(struct mlx5_modification_cmd) * acts->mhdr->mhdr_cmds_num;
-		flags = mlx5_hw_act_flag[!!attr->group][type];
-		if (acts->mhdr->shared) {
-			flags |= MLX5DR_ACTION_FLAG_SHARED;
-			bulk_size = 0;
-		} else {
-			bulk_size = rte_log2_u32(table_attr->nb_flows);
-		}
-		pattern.data = (__be64 *)acts->mhdr->mhdr_cmds;
-		pattern.sz = mhdr_len;
-		acts->mhdr->action = mlx5dr_action_create_modify_header
-			(priv->dr_ctx, 1, &pattern,
-			 bulk_size, flags);
-		if (!acts->mhdr->action)
+		ret = mlx5_tbl_translate_modify_header(dev, cfg, acts, mp_ctx,
+						       &mhdr, error);
+		if (ret)
 			goto err;
-		acts->rule_acts[acts->mhdr->pos].action = acts->mhdr->action;
 	}
 	if (reformat_used) {
-		struct mlx5dr_action_reformat_header hdr;
-		uint8_t buf[MLX5_ENCAP_MAX_LEN];
-		bool shared_rfmt = true;
-
-		MLX5_ASSERT(at->reformat_off != UINT16_MAX);
-		if (enc_item) {
-			MLX5_ASSERT(!encap_data);
-			if (flow_dv_convert_encap_data(enc_item, buf, &data_size, error))
-				goto err;
-			encap_data = buf;
-			if (!enc_item_m)
-				shared_rfmt = false;
-		} else if (encap_data && !encap_data_m) {
-			shared_rfmt = false;
-		}
-		acts->encap_decap = mlx5_malloc(MLX5_MEM_ZERO,
-						sizeof(*acts->encap_decap) + data_size,
-						0, SOCKET_ID_ANY);
-		if (!acts->encap_decap)
-			goto err;
-		if (data_size) {
-			acts->encap_decap->data_size = data_size;
-			memcpy(acts->encap_decap->data, encap_data, data_size);
-		}
-
-		hdr.sz = data_size;
-		hdr.data = encap_data;
-		acts->encap_decap->action = mlx5dr_action_create_reformat
-			(priv->dr_ctx, refmt_type,
-			 1, &hdr,
-			 shared_rfmt ? 0 : rte_log2_u32(table_attr->nb_flows),
-			 mlx5_hw_act_flag[!!attr->group][type] |
-			 (shared_rfmt ? MLX5DR_ACTION_FLAG_SHARED : 0));
-		if (!acts->encap_decap->action)
-			goto err;
-		acts->rule_acts[at->reformat_off].action = acts->encap_decap->action;
-		acts->rule_acts[at->reformat_off].reformat.data = acts->encap_decap->data;
-		if (shared_rfmt)
-			acts->rule_acts[at->reformat_off].reformat.offset = 0;
-		else if (__flow_hw_act_data_encap_append(priv, acts,
-				(at->actions + reformat_src)->type,
-				reformat_src, at->reformat_off, data_size))
+		ret = mlx5_tbl_translate_reformat(priv, table_attr, acts, at,
+						  enc_item, enc_item_m,
+						  encap_data, encap_data_m,
+						  mp_ctx, data_size,
+						  reformat_src,
+						  refmt_type, error);
+		if (ret)
 			goto err;
-		acts->encap_decap->shared = shared_rfmt;
-		acts->encap_decap_pos = at->reformat_off;
 	}
 	return 0;
 err:
@@ -2110,15 +2292,20 @@ flow_hw_actions_translate(struct rte_eth_dev *dev,
 			  struct rte_flow_template_table *tbl,
 			  struct rte_flow_error *error)
 {
+	int ret;
 	uint32_t i;
+	struct mlx5_tbl_multi_pattern_ctx mpat = MLX5_EMPTY_MULTI_PATTERN_CTX;
 
 	for (i = 0; i < tbl->nb_action_templates; i++) {
 		if (__flow_hw_actions_translate(dev, &tbl->cfg,
 						&tbl->ats[i].acts,
 						tbl->ats[i].action_template,
-						error))
+						&mpat, error))
 			goto err;
 	}
+	ret = mlx5_tbl_multi_pattern_process(dev, tbl, &mpat, error);
+	if (ret)
+		goto err;
 	return 0;
 err:
 	while (i--)
@@ -3627,6 +3814,143 @@ flow_hw_q_flow_flush(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int
+mlx5_tbl_multi_pattern_process(struct rte_eth_dev *dev,
+			       struct rte_flow_template_table *tbl,
+			       struct mlx5_tbl_multi_pattern_ctx *mpat,
+			       struct rte_flow_error *error)
+{
+	uint32_t i;
+	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct rte_flow_template_table_attr *table_attr = &tbl->cfg.attr;
+	const struct rte_flow_attr *attr = &table_attr->flow_attr;
+	enum mlx5dr_table_type type = get_mlx5dr_table_type(attr);
+	uint32_t flags = mlx5_hw_act_flag[!!attr->group][type];
+	struct mlx5dr_action *dr_action;
+	uint32_t bulk_size = rte_log2_u32(table_attr->nb_flows);
+
+	for (i = 0; i < MLX5_MULTIPATTERN_ENCAP_NUM; i++) {
+		uint32_t j;
+		uint32_t *reformat_refcnt;
+		typeof(mpat->reformat[0]) *reformat = mpat->reformat + i;
+		struct mlx5dr_action_reformat_header hdr[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+		enum mlx5dr_action_type reformat_type =
+			mlx5_multi_pattern_reformat_index_to_type(i);
+
+		if (!reformat->elements_num)
+			continue;
+		for (j = 0; j < reformat->elements_num; j++)
+			hdr[j] = reformat->ctx[j].reformat_hdr;
+		reformat_refcnt = mlx5_malloc(MLX5_MEM_ZERO, sizeof(uint32_t), 0,
+					      rte_socket_id());
+		if (!reformat_refcnt)
+			return rte_flow_error_set(error, ENOMEM,
+						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+						  NULL, "failed to allocate multi-pattern encap counter");
+		*reformat_refcnt = reformat->elements_num;
+		dr_action = mlx5dr_action_create_reformat
+			(priv->dr_ctx, reformat_type, reformat->elements_num, hdr,
+			 bulk_size, flags);
+		if (!dr_action) {
+			mlx5_free(reformat_refcnt);
+			return rte_flow_error_set(error, rte_errno,
+						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+						  NULL,
+						  "failed to create multi-pattern encap action");
+		}
+		for (j = 0; j < reformat->elements_num; j++) {
+			reformat->ctx[j].rule_action->action = dr_action;
+			reformat->ctx[j].encap->action = dr_action;
+			reformat->ctx[j].encap->multi_pattern = 1;
+			reformat->ctx[j].encap->multi_pattern_refcnt = reformat_refcnt;
+		}
+	}
+	if (mpat->mh.elements_num) {
+		typeof(mpat->mh) *mh = &mpat->mh;
+		struct mlx5dr_action_mh_pattern pattern[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+		uint32_t *mh_refcnt = mlx5_malloc(MLX5_MEM_ZERO, sizeof(uint32_t),
+						  0, rte_socket_id());
+
+		if (!mh_refcnt)
+			return rte_flow_error_set(error, ENOMEM,
+						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+						  NULL, "failed to allocate modify header counter");
+		*mh_refcnt = mpat->mh.elements_num;
+		for (i = 0; i < mpat->mh.elements_num; i++)
+			pattern[i] = mh->ctx[i].mh_pattern;
+		dr_action = mlx5dr_action_create_modify_header
+			(priv->dr_ctx, mpat->mh.elements_num, pattern,
+			 bulk_size, flags);
+		if (!dr_action) {
+			mlx5_free(mh_refcnt);
+			return rte_flow_error_set(error, rte_errno,
+						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+						  NULL,
+						  "failed to create multi-pattern header modify action");
+		}
+		for (i = 0; i < mpat->mh.elements_num; i++) {
+			mh->ctx[i].rule_action->action = dr_action;
+			mh->ctx[i].mhdr->action = dr_action;
+			mh->ctx[i].mhdr->multi_pattern = 1;
+			mh->ctx[i].mhdr->multi_pattern_refcnt = mh_refcnt;
+		}
+	}
+
+	return 0;
+}
+
+static int
+mlx5_hw_build_template_table(struct rte_eth_dev *dev,
+			     uint8_t nb_action_templates,
+			     struct rte_flow_actions_template *action_templates[],
+			     struct mlx5dr_action_template *at[],
+			     struct rte_flow_template_table *tbl,
+			     struct rte_flow_error *error)
+{
+	int ret;
+	uint8_t i;
+	struct mlx5_tbl_multi_pattern_ctx mpat = MLX5_EMPTY_MULTI_PATTERN_CTX;
+
+	for (i = 0; i < nb_action_templates; i++) {
+		uint32_t refcnt = __atomic_add_fetch(&action_templates[i]->refcnt, 1,
+						     __ATOMIC_RELAXED);
+
+		if (refcnt <= 1) {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ACTION,
+					   &action_templates[i], "invalid AT refcount");
+			goto at_error;
+		}
+		at[i] = action_templates[i]->tmpl;
+		tbl->ats[i].action_template = action_templates[i];
+		LIST_INIT(&tbl->ats[i].acts.act_list);
+		/* do NOT translate table action if `dev` was not started */
+		if (!dev->data->dev_started)
+			continue;
+		ret = __flow_hw_actions_translate(dev, &tbl->cfg,
+						  &tbl->ats[i].acts,
+						  action_templates[i],
+						  &mpat, error);
+		if (ret) {
+			i++;
+			goto at_error;
+		}
+	}
+	tbl->nb_action_templates = nb_action_templates;
+	ret = mlx5_tbl_multi_pattern_process(dev, tbl, &mpat, error);
+	if (ret)
+		goto at_error;
+	return 0;
+
+at_error:
+	while (i--) {
+		__flow_hw_action_template_destroy(dev, &tbl->ats[i].acts);
+		__atomic_sub_fetch(&action_templates[i]->refcnt,
+				   1, __ATOMIC_RELAXED);
+	}
+	return rte_errno;
+}
+
 /**
  * Create flow table.
  *
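At the application level, the effect of the two functions above shows up
when one table uses several actions templates (all names hypothetical):
each per-template translation queues its header per reformat type while
the table is built, and mlx5_tbl_multi_pattern_process() then creates a
single bulk DR action covering all of them instead of one action per
template.

	struct rte_flow_pattern_template *pt[1] = { pattern_template };
	struct rte_flow_actions_template *at[2] = { at_encap_a, at_encap_b };
	struct rte_flow_error err;
	struct rte_flow_template_table *tbl;

	tbl = rte_flow_template_table_create(port_id, &table_attr,
					     pt, 1, at, 2, &err);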
@@ -3779,29 +4103,12 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 	}
 	tbl->nb_item_templates = nb_item_templates;
 	/* Build the action template. */
-	for (i = 0; i < nb_action_templates; i++) {
-		uint32_t ret;
-
-		ret = __atomic_fetch_add(&action_templates[i]->refcnt, 1,
-					 __ATOMIC_RELAXED) + 1;
-		if (ret <= 1) {
-			rte_errno = EINVAL;
-			goto at_error;
-		}
-		at[i] = action_templates[i]->tmpl;
-		tbl->ats[i].action_template = action_templates[i];
-		LIST_INIT(&tbl->ats[i].acts.act_list);
-		if (!port_started)
-			continue;
-		err = __flow_hw_actions_translate(dev, &tbl->cfg,
-						  &tbl->ats[i].acts,
-						  action_templates[i], &sub_error);
-		if (err) {
-			i++;
-			goto at_error;
-		}
+	err = mlx5_hw_build_template_table(dev, nb_action_templates,
+					   action_templates, at, tbl, &sub_error);
+	if (err) {
+		i = nb_item_templates;
+		goto it_error;
 	}
-	tbl->nb_action_templates = nb_action_templates;
 	tbl->matcher = mlx5dr_matcher_create
 		(tbl->grp->tbl, mt, nb_item_templates, at, nb_action_templates, &matcher_attr);
 	if (!tbl->matcher)
@@ -3815,7 +4122,7 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 	LIST_INSERT_HEAD(&priv->flow_hw_tbl_ongo, tbl, next);
 	return tbl;
 at_error:
-	while (i--) {
+	for (i = 0; i < nb_action_templates; i++) {
 		__flow_hw_action_template_destroy(dev, &tbl->ats[i].acts);
 		__atomic_fetch_sub(&action_templates[i]->refcnt,
 				   1, __ATOMIC_RELAXED);
@@ -4058,6 +4365,10 @@ flow_hw_validate_action_modify_field(struct rte_eth_dev *dev,
 	const struct rte_flow_action_modify_field *mask_conf = mask->conf;
 	int ret;
 
+	if (!mask_conf)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION, action,
+					  "modify_field mask conf is missing");
 	if (action_conf->operation != mask_conf->operation)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION, action,
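A hedged application-side sketch of the new MODIFY_FIELD rule above
(field choices and mask conventions are illustrative; the exact rules
are in flow_hw_validate_action_modify_field()): the template mask conf
must now be present, and fully masking an immediate source keeps the
action shared across the table.

	const struct rte_flow_action_modify_field mf_conf = {
		.operation = RTE_FLOW_MODIFY_SET,
		.dst = { .field = RTE_FLOW_FIELD_TAG },
		.src = { .field = RTE_FLOW_FIELD_VALUE,
			 .value = { 0x12, 0x34, 0x56, 0x78 } },
		.width = 32,
	};
	struct rte_flow_action_modify_field mf_mask = {
		.operation = RTE_FLOW_MODIFY_SET, /* must match the action */
		.dst = { .field = RTE_FLOW_FIELD_TAG },
		.src = { .field = RTE_FLOW_FIELD_VALUE },
		.width = UINT32_MAX,
	};

	/* Fully masked immediate: flow_hw_action_modify_field_is_shared()
	 * treats the value as fixed, so the action can stay shared. */
	memset(mf_mask.src.value, 0xff, sizeof(mf_mask.src.value));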
@@ -4434,16 +4745,25 @@ flow_hw_validate_action_indirect(struct rte_eth_dev *dev,
  *    0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
-flow_hw_validate_action_raw_encap(struct rte_eth_dev *dev __rte_unused,
-				  const struct rte_flow_action *action,
+flow_hw_validate_action_raw_encap(const struct rte_flow_action *action,
+				  const struct rte_flow_action *mask,
 				  struct rte_flow_error *error)
 {
-	const struct rte_flow_action_raw_encap *raw_encap_data = action->conf;
+	const struct rte_flow_action_raw_encap *mask_conf = mask->conf;
+	const struct rte_flow_action_raw_encap *action_conf = action->conf;
 
-	if (!raw_encap_data || !raw_encap_data->size || !raw_encap_data->data)
+	if (!mask_conf || !mask_conf->size)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION, mask,
+					  "raw_encap: size must be masked");
+	if (!action_conf || !action_conf->size)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION, action,
-					  "invalid raw_encap_data");
+					  "raw_encap: invalid action configuration");
+	if (mask_conf->data && !action_conf->data)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION, action,
+					  "raw_encap: masked data is missing");
 	return 0;
 }
@@ -4724,7 +5044,7 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev,
 			action_flags |= MLX5_FLOW_ACTION_DECAP;
 			break;
 		case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
-			ret = flow_hw_validate_action_raw_encap(dev, action, error);
+			ret = flow_hw_validate_action_raw_encap(action, mask, error);
 			if (ret < 0)
 				return ret;
 			action_flags |= MLX5_FLOW_ACTION_ENCAP;
@@ -9599,20 +9919,6 @@ mlx5_hw_mirror_destroy(struct rte_eth_dev *dev, struct mlx5_mirror *mirror)
 	mlx5_free(mirror);
 }
 
-static inline enum mlx5dr_table_type
-get_mlx5dr_table_type(const struct rte_flow_attr *attr)
-{
-	enum mlx5dr_table_type type;
-
-	if (attr->transfer)
-		type = MLX5DR_TABLE_TYPE_FDB;
-	else if (attr->egress)
-		type = MLX5DR_TABLE_TYPE_NIC_TX;
-	else
-		type = MLX5DR_TABLE_TYPE_NIC_RX;
-	return type;
-}
-
 static __rte_always_inline bool
 mlx5_mirror_terminal_action(const struct rte_flow_action *action)
 {
@@ -9751,9 +10057,6 @@ mirror_format_port(struct rte_eth_dev *dev,
 	return 0;
 }
 
-#define MLX5_CONST_ENCAP_ITEM(encap_type, ptr) \
-(((const struct encap_type *)(ptr))->definition)
-
 static int
 hw_mirror_clone_reformat(const struct rte_flow_action *actions,
 			 struct mlx5dr_action_dest_attr *dest_attr,