From patchwork Sun Dec 3 11:25:39 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 134768
X-Patchwork-Delegate: rasland@nvidia.com
From: Michael Baum
To: dev@dpdk.org
CC: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko, Ori Kam,
 Suanming Mou
Subject: [PATCH v1 19/23] net/mlx5: add support for GENEVE and option item
 in HWS
Date: Sun, 3 Dec 2023 13:25:39 +0200
Message-ID: <20231203112543.844014-20-michaelba@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20231203112543.844014-1-michaelba@nvidia.com>
References: <20231203112543.844014-1-michaelba@nvidia.com>
Add HW steering support for both "RTE_FLOW_ITEM_TYPE_GENEVE" and
"RTE_FLOW_ITEM_TYPE_GENEVE_OPT".

Signed-off-by: Michael Baum
---
 doc/guides/nics/mlx5.rst               |  15 ++-
 doc/guides/rel_notes/release_24_03.rst |   5 +
 drivers/net/mlx5/mlx5_flow.h           |  21 +++++
 drivers/net/mlx5/mlx5_flow_geneve.c    | 121 ++++++++++++++++++++++++-
 drivers/net/mlx5/mlx5_flow_hw.c        |  44 ++++++++-
 5 files changed, 199 insertions(+), 7 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index b0f2cdcd62..645b566d80 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -329,12 +329,25 @@ Limitations
     - Length
     - Data
 
-  Only one Class/Type/Length Geneve TLV option is supported per shared device.
   Class/Type/Length fields must be specified as well as masks.
   Class/Type/Length specified masks must be full.
   Matching Geneve TLV option without specifying data is not supported.
   Matching Geneve TLV option with ``data & mask == 0`` is not supported.
 
+  In SW steering (``dv_flow_en`` = 1):
+
+  - Only one Class/Type/Length Geneve TLV option is supported per shared
+    device.
+  - Supported only when ``FLEX_PARSER_PROFILE_ENABLE`` = 0.
+
+  In HW steering (``dv_flow_en`` = 2):
+
+  - Multiple Class/Type/Length Geneve TLV options are supported per physical
+    device. See :ref:`geneve_parser_api` for more information.
+  - Multiple occurrences of the same Geneve TLV option are not supported in
+    the same pattern template.
+  - Supported only when ``FLEX_PARSER_PROFILE_ENABLE`` = 8.
+
 - VF: flow rules created on VF devices can only match traffic targeted at
   the configured MAC addresses (see ``rte_eth_dev_mac_addr_add()``).

diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index e9c9717706..bedef2a4c0 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -55,6 +55,11 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Updated NVIDIA mlx5 net driver.**
+
+  * Added HW steering support for ``RTE_FLOW_ITEM_TYPE_GENEVE`` flow item.
+  * Added HW steering support for ``RTE_FLOW_ITEM_TYPE_GENEVE_OPT`` flow item.
+
 
 Removed Items
 -------------
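For context, a minimal sketch of the intended usage flow on the application
side: GENEVE TLV options are registered through the parser API referenced by
:ref:`geneve_parser_api` before any pattern template is created. The
rte_pmd_mlx5_create_geneve_tlv_parser() entry point and the full layout of
struct rte_pmd_mlx5_geneve_tlv belong to earlier patches in this series; only
the match_on_class_mode, sample_len and match_data_mask fields are visible in
this patch, so the remaining field names below are assumptions, and port_id is
assumed to be an already-initialized port.

    #include <rte_byteorder.h>
    #include <rte_pmd_mlx5.h>

    /* Sketch only: one TLV option with a fixed class, sampling one data DW. */
    static rte_be32_t geneve_opt_data_mask = RTE_BE32(0xffffffff);
    static struct rte_pmd_mlx5_geneve_tlv tlv = {
    	.option_class = RTE_BE16(0x0102),  /* assumed field name */
    	.option_type = 0x42,               /* assumed field name */
    	.option_len = 1,                   /* assumed field name, DW units */
    	.match_on_class_mode = 1,          /* class is fixed for this option */
    	.offset = 0,                       /* assumed field name */
    	.sample_len = 1,                   /* sample one data DW */
    	.match_data_mask = &geneve_opt_data_mask, /* match all bits of it */
    };

    /* Register once per physical device, before creating pattern templates. */
    void *tlv_handle = rte_pmd_mlx5_create_geneve_tlv_parser(port_id, &tlv, 1);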
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index dca3cacb65..04a2eb0b0c 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1332,6 +1332,15 @@ struct mlx5_action_construct_data {
 
 #define MAX_GENEVE_OPTIONS_RESOURCES 7
 
+/* GENEVE TLV options manager structure. */
+struct mlx5_geneve_tlv_options_mng {
+	uint8_t nb_options; /* Number of options inside the template. */
+	struct {
+		uint8_t opt_type;
+		uint16_t opt_class;
+	} options[MAX_GENEVE_OPTIONS_RESOURCES];
+};
+
 /* Flow item template struct. */
 struct rte_flow_pattern_template {
 	LIST_ENTRY(rte_flow_pattern_template) next;
@@ -1351,6 +1360,8 @@ struct rte_flow_pattern_template {
 	 * tag pattern item for representor matching.
 	 */
 	bool implicit_tag;
+	/* Manages all GENEVE TLV options used by this pattern template. */
+	struct mlx5_geneve_tlv_options_mng geneve_opt_mng;
 	uint8_t flex_item; /* flex item index. */
 };
 
@@ -1799,6 +1810,16 @@ mlx5_geneve_tlv_parser_create(uint16_t port_id,
 			      const struct rte_pmd_mlx5_geneve_tlv tlv_list[],
 			      uint8_t nb_options);
 int mlx5_geneve_tlv_parser_destroy(void *handle);
+int mlx5_flow_geneve_tlv_option_validate(struct mlx5_priv *priv,
+					 const struct rte_flow_item *geneve_opt,
+					 struct rte_flow_error *error);
+
+struct mlx5_geneve_tlv_options_mng;
+int mlx5_geneve_tlv_option_register(struct mlx5_priv *priv,
+				    const struct rte_flow_item_geneve_opt *spec,
+				    struct mlx5_geneve_tlv_options_mng *mng);
+void mlx5_geneve_tlv_options_unregister(struct mlx5_priv *priv,
+					struct mlx5_geneve_tlv_options_mng *mng);
 
 void flow_hw_set_port_info(struct rte_eth_dev *dev);
 void flow_hw_clear_port_info(struct rte_eth_dev *dev);
diff --git a/drivers/net/mlx5/mlx5_flow_geneve.c b/drivers/net/mlx5/mlx5_flow_geneve.c
index 2d593b70ba..2c8dc39e74 100644
--- a/drivers/net/mlx5/mlx5_flow_geneve.c
+++ b/drivers/net/mlx5/mlx5_flow_geneve.c
@@ -152,6 +152,106 @@ mlx5_get_geneve_hl_data(const void *dr_ctx, uint8_t type, uint16_t class,
 	return -EINVAL;
 }
 
+/**
+ * Validate GENEVE TLV option item against the registered parser configuration.
+ *
+ * @param[in] priv
+ *   Pointer to port's private data.
+ * @param[in] geneve_opt
+ *   Pointer to GENEVE option item structure.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_flow_geneve_tlv_option_validate(struct mlx5_priv *priv,
+				     const struct rte_flow_item *geneve_opt,
+				     struct rte_flow_error *error)
+{
+	const struct rte_flow_item_geneve_opt *spec = geneve_opt->spec;
+	const struct rte_flow_item_geneve_opt *mask = geneve_opt->mask;
+	struct mlx5_geneve_tlv_option *option;
+
+	option = mlx5_geneve_tlv_option_get(priv, spec->option_type,
+					    spec->option_class);
+	if (option == NULL)
+		return rte_flow_error_set(error, rte_errno,
+					  RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+					  "Unregistered GENEVE option");
+	if (mask->option_type != UINT8_MAX)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+					  "GENEVE option type must be fully masked");
+	if (option->class_mode == 1 && mask->option_class != UINT16_MAX)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+					  "GENEVE option class must be fully masked");
+	return 0;
+}
+
+/**
+ * Register single GENEVE TLV option as used by pattern template.
+ *
+ * @param[in] priv
+ *   Pointer to port's private data.
+ * @param[in] spec
+ *   Pointer to GENEVE option item structure.
+ * @param[out] mng
+ *   Pointer to GENEVE option manager.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_geneve_tlv_option_register(struct mlx5_priv *priv,
+				const struct rte_flow_item_geneve_opt *spec,
+				struct mlx5_geneve_tlv_options_mng *mng)
+{
+	struct mlx5_geneve_tlv_option *option;
+
+	option = mlx5_geneve_tlv_option_get(priv, spec->option_type,
+					    spec->option_class);
+	if (option == NULL)
+		return -rte_errno;
+	/* Increase the option reference counter. */
+	rte_atomic_fetch_add_explicit(&option->refcnt, 1,
+				      rte_memory_order_relaxed);
+	/* Update the manager with option information. */
+	mng->options[mng->nb_options].opt_type = spec->option_type;
+	mng->options[mng->nb_options].opt_class = spec->option_class;
+	mng->nb_options++;
+	return 0;
+}
+
+/**
+ * Unregister all GENEVE TLV options used by pattern template.
+ *
+ * @param[in] priv
+ *   Pointer to port's private data.
+ * @param[in] mng
+ *   Pointer to GENEVE option manager.
+ */
+void
+mlx5_geneve_tlv_options_unregister(struct mlx5_priv *priv,
+				   struct mlx5_geneve_tlv_options_mng *mng)
+{
+	struct mlx5_geneve_tlv_option *option;
+	uint8_t i;
+
+	for (i = 0; i < mng->nb_options; ++i) {
+		option = mlx5_geneve_tlv_option_get(priv,
+						    mng->options[i].opt_type,
+						    mng->options[i].opt_class);
+		MLX5_ASSERT(option != NULL);
+		/* Decrease the option reference counter. */
+		rte_atomic_fetch_sub_explicit(&option->refcnt, 1,
+					      rte_memory_order_relaxed);
+		mng->options[i].opt_type = 0;
+		mng->options[i].opt_class = 0;
+	}
+	mng->nb_options = 0;
+}
+
 /**
  * Create single GENEVE TLV option sample.
  *
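The validation above requires the option to be pre-registered and its type
(and, for class-matching options, its class) to be fully masked. A hedged
example of a GENEVE option item that passes these checks, built on the
standard struct rte_flow_item_geneve_opt from rte_flow; the concrete values
are illustrative only:

    uint32_t opt_data = RTE_BE32(0xdeadbeef);
    uint32_t opt_data_mask = RTE_BE32(0xffffffff);
    struct rte_flow_item_geneve_opt opt_spec = {
    	.option_class = RTE_BE16(0x0102),
    	.option_type = 0x42,
    	.option_len = 1,
    	.data = &opt_data,
    };
    struct rte_flow_item_geneve_opt opt_mask = {
    	.option_class = RTE_BE16(0xffff), /* full mask, as class_mode == 1 */
    	.option_type = 0xff,              /* type must always be fully masked */
    	.option_len = 1,
    	.data = &opt_data_mask,
    };
    struct rte_flow_item item = {
    	.type = RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
    	.spec = &opt_spec,
    	.mask = &opt_mask,
    };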
@@ -208,6 +308,24 @@ mlx5_geneve_tlv_option_destroy_sample(struct mlx5_geneve_tlv_resource *resource)
 	resource->obj = NULL;
 }
 
+/*
+ * A sample for DW0 is created when one of two conditions is met:
+ * 1. The header is matchable.
+ * 2. This option doesn't configure any data DW.
+ */
+static bool
+should_configure_sample_for_dw0(const struct rte_pmd_mlx5_geneve_tlv *spec)
+{
+	uint8_t i;
+
+	if (spec->match_on_class_mode == 2)
+		return true;
+	for (i = 0; i < spec->sample_len; ++i)
+		if (spec->match_data_mask[i] != 0)
+			return false;
+	return true;
+}
+
 /**
  * Create single GENEVE TLV option.
  *
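This helper changes when the DW0 sample is allocated: it is now created not
only for mode-2 options (matchable header) but also for options that sample
no data DW at all, so such options stay identifiable by their header alone.
Two illustrative configurations, reusing the hypothetical mutable tlv from
the earlier sketch (the helper is static, so the expected results are noted
in comments):

    /* Case 1: class is matched per flow, the header DW must be sampled. */
    tlv.match_on_class_mode = 2;
    /* should_configure_sample_for_dw0(&tlv) -> true */

    /* Case 2: all-zero data mask, no data DW is sampled at all. */
    rte_be32_t zero_mask = RTE_BE32(0);
    tlv.match_on_class_mode = 1;
    tlv.sample_len = 1;
    tlv.match_data_mask = &zero_mask;
    /* should_configure_sample_for_dw0(&tlv) -> true */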
@@ -237,8 +355,7 @@ mlx5_geneve_tlv_option_create(void *ctx, const struct rte_pmd_mlx5_geneve_tlv *s
 	uint8_t i, resource_id = 0;
 	int ret;
 
-	if (spec->match_on_class_mode == 2) {
-		/* Header is matchable, create sample for DW0. */
+	if (should_configure_sample_for_dw0(spec)) {
 		attr.sample_offset = 0;
 		resource = &option->resources[resource_id];
 		ret = mlx5_geneve_tlv_option_create_sample(ctx, &attr,
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index da873ae2e2..7c786c432f 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -6781,6 +6781,17 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
 				   " attribute");
 			break;
 		}
+		case RTE_FLOW_ITEM_TYPE_GENEVE_OPT:
+		{
+			int ret;
+
+			ret = mlx5_flow_geneve_tlv_option_validate(priv,
+								   &items[i],
+								   error);
+			if (ret < 0)
+				return ret;
+			break;
+		}
 		case RTE_FLOW_ITEM_TYPE_VOID:
 		case RTE_FLOW_ITEM_TYPE_ETH:
 		case RTE_FLOW_ITEM_TYPE_VLAN:
@@ -6792,6 +6803,7 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
 		case RTE_FLOW_ITEM_TYPE_GTP_PSC:
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
 		case RTE_FLOW_ITEM_TYPE_MPLS:
+		case RTE_FLOW_ITEM_TYPE_GENEVE:
 		case MLX5_RTE_FLOW_ITEM_TYPE_SQ:
 		case RTE_FLOW_ITEM_TYPE_GRE:
 		case RTE_FLOW_ITEM_TYPE_GRE_KEY:
@@ -6959,24 +6971,45 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
 		}
 	}
 	for (i = 0; items[i].type != RTE_FLOW_ITEM_TYPE_END; ++i) {
-		if (items[i].type == RTE_FLOW_ITEM_TYPE_FLEX) {
+		switch (items[i].type) {
+		case RTE_FLOW_ITEM_TYPE_FLEX: {
 			const struct rte_flow_item_flex *spec =
 				(const struct rte_flow_item_flex *)items[i].spec;
 			struct rte_flow_item_flex_handle *handle = spec->handle;
 
 			if (flow_hw_flex_item_acquire(dev, handle,
 						      &it->flex_item)) {
-				claim_zero(mlx5dr_match_template_destroy(it->mt));
-				mlx5_free(it);
 				rte_flow_error_set(error, rte_errno,
 						   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
 						   "Failed to acquire flex item");
-				return NULL;
+				goto error;
 			}
+			break;
+		}
+		case RTE_FLOW_ITEM_TYPE_GENEVE_OPT: {
+			const struct rte_flow_item_geneve_opt *spec =
+				items[i].spec;
+
+			if (mlx5_geneve_tlv_option_register(priv, spec,
+							    &it->geneve_opt_mng)) {
+				rte_flow_error_set(error, rte_errno,
+						   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+						   "Failed to register GENEVE TLV option");
+				goto error;
+			}
+			break;
+		}
+		default:
+			break;
 		}
 	}
 	__atomic_fetch_add(&it->refcnt, 1, __ATOMIC_RELAXED);
 	LIST_INSERT_HEAD(&priv->flow_hw_itt, it, next);
 	return it;
+error:
+	flow_hw_flex_item_release(dev, &it->flex_item);
+	mlx5_geneve_tlv_options_unregister(priv, &it->geneve_opt_mng);
+	claim_zero(mlx5dr_match_template_destroy(it->mt));
+	mlx5_free(it);
+	return NULL;
 }
 
 /**
@@ -6997,6 +7030,8 @@ flow_hw_pattern_template_destroy(struct rte_eth_dev *dev,
 			   struct rte_flow_pattern_template *template,
 			   struct rte_flow_error *error __rte_unused)
 {
+	struct mlx5_priv *priv = dev->data->dev_private;
+
 	if (__atomic_load_n(&template->refcnt, __ATOMIC_RELAXED) > 1) {
 		DRV_LOG(WARNING, "Item template %p is still in use.",
 			(void *)template);
@@ -7010,6 +7045,7 @@ flow_hw_pattern_template_destroy(struct rte_eth_dev *dev,
 		mlx5_free_srh_flex_parser(dev);
 	LIST_REMOVE(template, next);
 	flow_hw_flex_item_release(dev, &template->flex_item);
+	mlx5_geneve_tlv_options_unregister(priv, &template->geneve_opt_mng);
 	claim_zero(mlx5dr_match_template_destroy(template->mt));
 	mlx5_free(template);
 	return 0;
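Finally, a sketch of the template-API path exercised by the hooks above:
creating a pattern template that matches GENEVE and one registered TLV
option. rte_flow_pattern_template_create() is the standard asynchronous flow
API; opt_spec/opt_mask refer to the earlier sketch, port_id is assumed
initialized, and error handling is trimmed for brevity:

    const struct rte_flow_pattern_template_attr pt_attr = {
    	.relaxed_matching = 1,
    	.ingress = 1,
    };
    const struct rte_flow_item pattern[] = {
    	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
    	{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
    	{ .type = RTE_FLOW_ITEM_TYPE_UDP },
    	{ .type = RTE_FLOW_ITEM_TYPE_GENEVE },
    	{
    		.type = RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
    		.spec = &opt_spec,
    		.mask = &opt_mask,
    	},
    	{ .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_error flow_err;
    struct rte_flow_pattern_template *pt =
    	rte_flow_pattern_template_create(port_id, &pt_attr, pattern, &flow_err);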