From patchwork Wed Feb 28 17:00:41 2024
X-Patchwork-Submitter: Dariusz Sosnowski
X-Patchwork-Id: 137444
X-Patchwork-Delegate: rasland@nvidia.com
From: Dariusz Sosnowski
To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad
CC: Raslan Darawsheh, Bing Zhao
Subject: [PATCH 06/11] net/mlx5: remove flow pattern from job
Date: Wed, 28 Feb 2024 18:00:41 +0100
Message-ID: <20240228170046.176600-7-dsosnowski@nvidia.com>
In-Reply-To: <20240228170046.176600-1-dsosnowski@nvidia.com>
References: <20240228170046.176600-1-dsosnowski@nvidia.com>
List-Id: DPDK patches and discussions

The mlx5_hw_q_job struct held a reference to a temporary flow rule pattern and contained temporary REPRESENTED_PORT and TAG item structs.
These are used whenever the flow rule pattern provided by the application must be prepended with one of such items. If prepending is required, the flow rule pattern is copied to a temporary buffer and a new item is added internally by the PMD. The resulting buffer is passed to the HWS layer when the flow create operation is enqueued. Once the operation is enqueued, the temporary flow pattern can be safely discarded, so there is no need to store it for the whole lifecycle of mlx5_hw_q_job.

This patch removes all references to the flow rule pattern and items stored inside mlx5_hw_q_job and removes the relevant allocations to reduce the job memory footprint. The temporary pattern and items stored per job are replaced with stack-allocated ones, contained in the mlx5_flow_hw_pattern_params struct.

Signed-off-by: Dariusz Sosnowski
---
 drivers/net/mlx5/mlx5.h         | 17 ++++-------
 drivers/net/mlx5/mlx5_flow.h    | 10 +++++++
 drivers/net/mlx5/mlx5_flow_hw.c | 51 ++++++++++++++-------------------
 3 files changed, 37 insertions(+), 41 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index bd0846d6bf..fc3d28e6f2 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -401,17 +401,12 @@ struct mlx5_hw_q_job {
 		const void *action; /* Indirect action attached to the job. */
 	};
 	void *user_data; /* Job user data. */
-	struct rte_flow_item *items;
-	union {
-		struct {
-			/* User memory for query output */
-			void *user;
-			/* Data extracted from hardware */
-			void *hw;
-		} __rte_packed query;
-		struct rte_flow_item_ethdev port_spec;
-		struct rte_flow_item_tag tag_spec;
-	} __rte_packed;
+	struct {
+		/* User memory for query output */
+		void *user;
+		/* Data extracted from hardware */
+		void *hw;
+	} query;
 	struct rte_flow_hw *upd_flow; /* Flow with updated values. */
 };

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index df1c913017..96b43ce61e 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1304,6 +1304,16 @@ struct mlx5_flow_hw_action_params {
 	uint8_t ipv6_push_data[MLX5_PUSH_MAX_LEN];
 };

+/** Container for dynamically generated flow items used during flow rule creation. */
+struct mlx5_flow_hw_pattern_params {
+	/** Array of dynamically generated flow items. */
+	struct rte_flow_item items[MLX5_HW_MAX_ITEMS];
+	/** Temporary REPRESENTED_PORT item generated by PMD. */
+	struct rte_flow_item_ethdev port_spec;
+	/** Temporary TAG item generated by PMD. */
+	struct rte_flow_item_tag tag_spec;
+};
+
 /* rte flow action translate to DR action struct. */
 struct mlx5_action_construct_data {
 	LIST_ENTRY(mlx5_action_construct_data) next;

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 7160477c83..c3d9eef999 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -3253,44 +3253,44 @@ flow_hw_get_rule_items(struct rte_eth_dev *dev,
 		       const struct rte_flow_template_table *table,
 		       const struct rte_flow_item items[],
 		       uint8_t pattern_template_index,
-		       struct mlx5_hw_q_job *job)
+		       struct mlx5_flow_hw_pattern_params *pp)
 {
 	struct rte_flow_pattern_template *pt = table->its[pattern_template_index];

 	/* Only one implicit item can be added to flow rule pattern. */
 	MLX5_ASSERT(!pt->implicit_port || !pt->implicit_tag);
-	/* At least one item was allocated in job descriptor for items. */
+	/* At least one item was allocated in pattern params for items. */
 	MLX5_ASSERT(MLX5_HW_MAX_ITEMS >= 1);
 	if (pt->implicit_port) {
 		if (pt->orig_item_nb + 1 > MLX5_HW_MAX_ITEMS) {
 			rte_errno = ENOMEM;
 			return NULL;
 		}
-		/* Set up represented port item in job descriptor. */
-		job->port_spec = (struct rte_flow_item_ethdev){
+		/* Set up represented port item in pattern params. */
+		pp->port_spec = (struct rte_flow_item_ethdev){
 			.port_id = dev->data->port_id,
 		};
-		job->items[0] = (struct rte_flow_item){
+		pp->items[0] = (struct rte_flow_item){
 			.type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT,
-			.spec = &job->port_spec,
+			.spec = &pp->port_spec,
 		};
-		rte_memcpy(&job->items[1], items, sizeof(*items) * pt->orig_item_nb);
-		return job->items;
+		rte_memcpy(&pp->items[1], items, sizeof(*items) * pt->orig_item_nb);
+		return pp->items;
 	} else if (pt->implicit_tag) {
 		if (pt->orig_item_nb + 1 > MLX5_HW_MAX_ITEMS) {
 			rte_errno = ENOMEM;
 			return NULL;
 		}
-		/* Set up tag item in job descriptor. */
-		job->tag_spec = (struct rte_flow_item_tag){
+		/* Set up tag item in pattern params. */
+		pp->tag_spec = (struct rte_flow_item_tag){
 			.data = flow_hw_tx_tag_regc_value(dev),
 		};
-		job->items[0] = (struct rte_flow_item){
+		pp->items[0] = (struct rte_flow_item){
 			.type = (enum rte_flow_item_type)MLX5_RTE_FLOW_ITEM_TYPE_TAG,
-			.spec = &job->tag_spec,
+			.spec = &pp->tag_spec,
 		};
-		rte_memcpy(&job->items[1], items, sizeof(*items) * pt->orig_item_nb);
-		return job->items;
+		rte_memcpy(&pp->items[1], items, sizeof(*items) * pt->orig_item_nb);
+		return pp->items;
 	} else {
 		return items;
 	}
@@ -3345,6 +3345,7 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 	};
 	struct mlx5dr_rule_action *rule_acts;
 	struct mlx5_flow_hw_action_params ap;
+	struct mlx5_flow_hw_pattern_params pp;
 	struct rte_flow_hw *flow = NULL;
 	struct mlx5_hw_q_job *job = NULL;
 	const struct rte_flow_item *rule_items;
@@ -3409,7 +3410,7 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 		goto error;
 	}
 	rule_items = flow_hw_get_rule_items(dev, table, items,
-					    pattern_template_index, job);
+					    pattern_template_index, &pp);
 	if (!rule_items)
 		goto error;
 	if (likely(!rte_flow_template_table_resizable(dev->data->port_id, &table->cfg.attr))) {
@@ -9990,11 +9991,8 @@ flow_hw_configure(struct rte_eth_dev *dev,
 			goto err;
 		}
 		mem_size += (sizeof(struct mlx5_hw_q_job *) +
-			     sizeof(struct mlx5_hw_q_job) +
-			     sizeof(struct rte_flow_item) *
-			     MLX5_HW_MAX_ITEMS +
-			     sizeof(struct rte_flow_hw)) *
-			     _queue_attr[i]->size;
+			     sizeof(struct mlx5_hw_q_job) +
+			     sizeof(struct rte_flow_hw)) * _queue_attr[i]->size;
 	}
 	priv->hw_q = mlx5_malloc(MLX5_MEM_ZERO, mem_size,
 				 64, SOCKET_ID_ANY);
@@ -10003,7 +10001,6 @@ flow_hw_configure(struct rte_eth_dev *dev,
 		goto err;
 	}
 	for (i = 0; i < nb_q_updated; i++) {
-		struct rte_flow_item *items = NULL;
 		struct rte_flow_hw *upd_flow = NULL;

 		priv->hw_q[i].job_idx = _queue_attr[i]->size;
@@ -10016,12 +10013,8 @@ flow_hw_configure(struct rte_eth_dev *dev,
 			    &job[_queue_attr[i - 1]->size - 1].upd_flow[1];
 		job = (struct mlx5_hw_q_job *)
 		      &priv->hw_q[i].job[_queue_attr[i]->size];
-		items = (struct rte_flow_item *)
-			&job[_queue_attr[i]->size];
-		upd_flow = (struct rte_flow_hw *)
-			   &items[_queue_attr[i]->size * MLX5_HW_MAX_ITEMS];
+		upd_flow = (struct rte_flow_hw *)&job[_queue_attr[i]->size];
 		for (j = 0; j < _queue_attr[i]->size; j++) {
-			job[j].items = &items[j * MLX5_HW_MAX_ITEMS];
 			job[j].upd_flow = &upd_flow[j];
 			priv->hw_q[i].job[j] = &job[j];
 		}
@@ -12193,14 +12186,12 @@ flow_hw_calc_table_hash(struct rte_eth_dev *dev,
 			uint32_t *hash, struct rte_flow_error *error)
 {
 	const struct rte_flow_item *items;
-	/* Temp job to allow adding missing items */
-	static struct rte_flow_item tmp_items[MLX5_HW_MAX_ITEMS];
-	static struct mlx5_hw_q_job job = {.items = tmp_items};
+	struct mlx5_flow_hw_pattern_params pp;
 	int res;

 	items = flow_hw_get_rule_items(dev, table, pattern,
 				       pattern_template_index,
-				       &job);
+				       &pp);
 	res = mlx5dr_rule_hash_calculate(mlx5_table_matcher(table), items,
 					 pattern_template_index,
 					 MLX5DR_RULE_HASH_CALC_MODE_RAW,