From patchwork Fri Sep 30 12:53:01 2022
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 117213
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
To: Matan Azrad, Viacheslav Ovsiienko
CC: , ,
Subject: [PATCH v3 03/17] net/mlx5: add shared header reformat support
Date: Fri, 30 Sep 2022 15:53:01 +0300
Message-ID: <20220930125315.5079-4-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20220930125315.5079-1-suanmingm@nvidia.com>
References: <20220923144334.27736-1-suanmingm@nvidia.com> <20220930125315.5079-1-suanmingm@nvidia.com>
List-Id: DPDK patches and discussions

As the rte_flow_async API defines, an action whose mask has a non-zero
field value is used as a shared action by all the flows in the table.
A header reformat action whose action mask field is not 0 is therefore
created as a constant shared action.

For the encapsulation header reformat action there are two kinds of
encapsulation data: raw_encap_data and rte_flow_item encap_data. For
both kinds, the action mask conf identifies whether the data is
constant or not.

Examples:
1. VXLAN encap (encap_data: rte_flow_item)
   action conf (eth/ipv4/udp/vxlan_hdr)

   a. action mask conf (eth/ipv4/udp/vxlan_hdr) - items are constant.
   b. action mask conf (NULL) - items will change.

2. RAW encap (encap_data: raw)
   action conf (raw_data)

   a. action mask conf (not NULL) - encap_data is constant.
   b. action mask conf (NULL) - encap_data will change.

Signed-off-by: Suanming Mou
---
 drivers/net/mlx5/mlx5_flow.h    |   6 +-
 drivers/net/mlx5/mlx5_flow_hw.c | 124 ++++++++++----------------
 2 files changed, 39 insertions(+), 91 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 4b53912b79..1c9f5fc1d5 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1064,10 +1064,6 @@ struct mlx5_action_construct_data {
 	uint16_t action_dst; /* mlx5dr_rule_action dst offset. */
 	union {
 		struct {
-			/* encap src(item) offset. */
-			uint16_t src;
-			/* encap dst data offset. */
-			uint16_t dst;
 			/* encap data len. */
 			uint16_t len;
 		} encap;
@@ -1110,6 +1106,8 @@ struct mlx5_hw_jump_action {
 /* Encap decap action struct. */
 struct mlx5_hw_encap_decap_action {
 	struct mlx5dr_action *action; /* Action object. */
+	/* Is header_reformat action shared across flows in table. */
+	bool shared;
 	size_t data_size; /* Action metadata size. */
 	uint8_t data[]; /* Action data. */
 };
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 46c4169b4f..b6978bd051 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -402,10 +402,6 @@ __flow_hw_act_data_general_append(struct mlx5_priv *priv,
  *   Offset of source rte flow action.
  * @param[in] action_dst
  *   Offset of destination DR action.
- * @param[in] encap_src
- *   Offset of source encap raw data.
- * @param[in] encap_dst
- *   Offset of destination encap raw data.
  * @param[in] len
  *   Length of the data to be updated.
* @@ -418,16 +414,12 @@ __flow_hw_act_data_encap_append(struct mlx5_priv *priv, enum rte_flow_action_type type, uint16_t action_src, uint16_t action_dst, - uint16_t encap_src, - uint16_t encap_dst, uint16_t len) { struct mlx5_action_construct_data *act_data; act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst); if (!act_data) return -1; - act_data->encap.src = encap_src; - act_data->encap.dst = encap_dst; act_data->encap.len = len; LIST_INSERT_HEAD(&acts->act_list, act_data, next); return 0; @@ -523,53 +515,6 @@ flow_hw_shared_action_translate(struct rte_eth_dev *dev, return 0; } -/** - * Translate encap items to encapsulation list. - * - * @param[in] dev - * Pointer to the rte_eth_dev data structure. - * @param[in] acts - * Pointer to the template HW steering DR actions. - * @param[in] type - * Action type. - * @param[in] action_src - * Offset of source rte flow action. - * @param[in] action_dst - * Offset of destination DR action. - * @param[in] items - * Encap item pattern. - * @param[in] items_m - * Encap item mask indicates which part are constant and dynamic. - * - * @return - * 0 on success, negative value otherwise and rte_errno is set. - */ -static __rte_always_inline int -flow_hw_encap_item_translate(struct rte_eth_dev *dev, - struct mlx5_hw_actions *acts, - enum rte_flow_action_type type, - uint16_t action_src, - uint16_t action_dst, - const struct rte_flow_item *items, - const struct rte_flow_item *items_m) -{ - struct mlx5_priv *priv = dev->data->dev_private; - size_t len, total_len = 0; - uint32_t i = 0; - - for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++, items_m++, i++) { - len = flow_dv_get_item_hdr_len(items->type); - if ((!items_m->spec || - memcmp(items_m->spec, items->spec, len)) && - __flow_hw_act_data_encap_append(priv, acts, type, - action_src, action_dst, i, - total_len, len)) - return -1; - total_len += len; - } - return 0; -} - /** * Translate rte_flow actions to DR action. 
* @@ -611,7 +556,7 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, const struct rte_flow_action_raw_encap *raw_encap_data; const struct rte_flow_item *enc_item = NULL, *enc_item_m = NULL; uint16_t reformat_pos = MLX5_HW_MAX_ACTS, reformat_src = 0; - uint8_t *encap_data = NULL; + uint8_t *encap_data = NULL, *encap_data_m = NULL; size_t data_size = 0; bool actions_end = false; uint32_t type, i; @@ -718,9 +663,9 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, MLX5_ASSERT(reformat_pos == MLX5_HW_MAX_ACTS); enc_item = ((const struct rte_flow_action_vxlan_encap *) actions->conf)->definition; - enc_item_m = - ((const struct rte_flow_action_vxlan_encap *) - masks->conf)->definition; + if (masks->conf) + enc_item_m = ((const struct rte_flow_action_vxlan_encap *) + masks->conf)->definition; reformat_pos = i++; reformat_src = actions - action_start; refmt_type = MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L2; @@ -729,9 +674,9 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, MLX5_ASSERT(reformat_pos == MLX5_HW_MAX_ACTS); enc_item = ((const struct rte_flow_action_nvgre_encap *) actions->conf)->definition; - enc_item_m = - ((const struct rte_flow_action_nvgre_encap *) - actions->conf)->definition; + if (masks->conf) + enc_item_m = ((const struct rte_flow_action_nvgre_encap *) + masks->conf)->definition; reformat_pos = i++; reformat_src = actions - action_start; refmt_type = MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L2; @@ -743,6 +688,11 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, refmt_type = MLX5DR_ACTION_REFORMAT_TYPE_TNL_L2_TO_L2; break; case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: + raw_encap_data = + (const struct rte_flow_action_raw_encap *) + masks->conf; + if (raw_encap_data) + encap_data_m = raw_encap_data->data; raw_encap_data = (const struct rte_flow_action_raw_encap *) actions->conf; @@ -773,22 +723,17 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, } if (reformat_pos != MLX5_HW_MAX_ACTS) { uint8_t buf[MLX5_ENCAP_MAX_LEN]; + bool shared_rfmt = true; if (enc_item) { MLX5_ASSERT(!encap_data); - if (flow_dv_convert_encap_data - (enc_item, buf, &data_size, error) || - flow_hw_encap_item_translate - (dev, acts, (action_start + reformat_src)->type, - reformat_src, reformat_pos, - enc_item, enc_item_m)) + if (flow_dv_convert_encap_data(enc_item, buf, &data_size, error)) goto err; encap_data = buf; - } else if (encap_data && __flow_hw_act_data_encap_append - (priv, acts, - (action_start + reformat_src)->type, - reformat_src, reformat_pos, 0, 0, data_size)) { - goto err; + if (!enc_item_m) + shared_rfmt = false; + } else if (encap_data && !encap_data_m) { + shared_rfmt = false; } acts->encap_decap = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*acts->encap_decap) + data_size, @@ -802,12 +747,22 @@ flow_hw_actions_translate(struct rte_eth_dev *dev, acts->encap_decap->action = mlx5dr_action_create_reformat (priv->dr_ctx, refmt_type, data_size, encap_data, - rte_log2_u32(table_attr->nb_flows), - mlx5_hw_act_flag[!!attr->group][type]); + shared_rfmt ? 0 : rte_log2_u32(table_attr->nb_flows), + mlx5_hw_act_flag[!!attr->group][type] | + (shared_rfmt ? 
 		if (!acts->encap_decap->action)
 			goto err;
 		acts->rule_acts[reformat_pos].action = acts->encap_decap->action;
+		acts->rule_acts[reformat_pos].reformat.data =
+			acts->encap_decap->data;
+		if (shared_rfmt)
+			acts->rule_acts[reformat_pos].reformat.offset = 0;
+		else if (__flow_hw_act_data_encap_append(priv, acts,
+				(action_start + reformat_src)->type,
+				reformat_src, reformat_pos, data_size))
+			goto err;
+		acts->encap_decap->shared = shared_rfmt;
 		acts->encap_decap_pos = reformat_pos;
 	}
 	acts->acts_num = i;
@@ -972,6 +927,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 		.ingress = 1,
 	};
 	uint32_t ft_flag;
+	size_t encap_len = 0;
 
 	memcpy(rule_acts, hw_acts->rule_acts,
 	       sizeof(*rule_acts) * hw_acts->acts_num);
@@ -989,9 +945,6 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 	} else {
 		attr.ingress = 1;
 	}
-	if (hw_acts->encap_decap && hw_acts->encap_decap->data_size)
-		memcpy(buf, hw_acts->encap_decap->data,
-		       hw_acts->encap_decap->data_size);
 	LIST_FOREACH(act_data, &hw_acts->act_list, next) {
 		uint32_t jump_group;
 		uint32_t tag;
@@ -1050,23 +1003,20 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 		case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
 			enc_item = ((const struct rte_flow_action_vxlan_encap *)
 				   action->conf)->definition;
-			rte_memcpy((void *)&buf[act_data->encap.dst],
-				   enc_item[act_data->encap.src].spec,
-				   act_data->encap.len);
+			if (flow_dv_convert_encap_data(enc_item, buf, &encap_len, NULL))
+				return -1;
 			break;
 		case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
 			enc_item = ((const struct rte_flow_action_nvgre_encap *)
 				   action->conf)->definition;
-			rte_memcpy((void *)&buf[act_data->encap.dst],
-				   enc_item[act_data->encap.src].spec,
-				   act_data->encap.len);
+			if (flow_dv_convert_encap_data(enc_item, buf, &encap_len, NULL))
+				return -1;
 			break;
 		case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
 			raw_encap_data =
 				(const struct rte_flow_action_raw_encap *)
 				 action->conf;
-			rte_memcpy((void *)&buf[act_data->encap.dst],
-				   raw_encap_data->data, act_data->encap.len);
+			rte_memcpy((void *)buf, raw_encap_data->data, act_data->encap.len);
 			MLX5_ASSERT(raw_encap_data->size ==
 				    act_data->encap.len);
 			break;
@@ -1074,7 +1024,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			break;
 		}
 	}
-	if (hw_acts->encap_decap) {
+	if (hw_acts->encap_decap && !hw_acts->encap_decap->shared) {
 		rule_acts[hw_acts->encap_decap_pos].reformat.offset =
 				job->flow->idx - 1;
 		rule_acts[hw_acts->encap_decap_pos].reformat.data = buf;
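
Not part of the patch: a minimal application-side sketch of example 2 above
(RAW encap), using the generic rte_flow async template API
(rte_flow_actions_template_create()). The helper name, port id, template
attributes and header buffer below are illustrative only. A non-NULL mask
conf (case 2a) tells the PMD the encap data is constant, so one shared
reformat action can serve the whole table; a NULL mask conf (case 2b) keeps
the data per flow and it is filled in at rule creation time.

#include <stdbool.h>
#include <rte_flow.h>

/* Hypothetical helper, not part of the patch. */
static struct rte_flow_actions_template *
create_raw_encap_template(uint16_t port_id, bool constant_header,
			  uint8_t *hdr, size_t hdr_len,
			  struct rte_flow_error *error)
{
	const struct rte_flow_actions_template_attr tmpl_attr = {
		.flow_attr = { .ingress = 1 },
	};
	struct rte_flow_action_raw_encap encap_conf = {
		.data = hdr,
		.size = hdr_len,
	};
	const struct rte_flow_action actions[] = {
		{
			.type = RTE_FLOW_ACTION_TYPE_RAW_ENCAP,
			.conf = &encap_conf,
		},
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	/*
	 * Case 2a: non-NULL mask conf - encap_data is constant for all flows.
	 * Case 2b: NULL mask conf - encap_data changes per flow.
	 */
	const struct rte_flow_action masks[] = {
		{
			.type = RTE_FLOW_ACTION_TYPE_RAW_ENCAP,
			.conf = constant_header ? &encap_conf : NULL,
		},
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_actions_template_create(port_id, &tmpl_attr,
						actions, masks, error);
}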