From patchwork Thu Jan 25 13:30:39 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 136165
X-Patchwork-Delegate: rasland@nvidia.com
From: Michael Baum
To: dev@dpdk.org
CC: Matan Azrad, Raslan Darawsheh, Dariusz Sosnowski,
 Viacheslav Ovsiienko, Ori Kam, Suanming Mou
Subject: [PATCH v2 19/23] net/mlx5: add support for GENEVE and option item in HWS
Date: Thu, 25 Jan 2024 15:30:39 +0200
Message-ID: <20240125133043.575860-20-michaelba@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240125133043.575860-1-michaelba@nvidia.com>
References: <20231203112543.844014-1-michaelba@nvidia.com>
 <20240125133043.575860-1-michaelba@nvidia.com>
Add HW steering support for both "RTE_FLOW_ITEM_TYPE_GENEVE" and
"RTE_FLOW_ITEM_TYPE_GENEVE_OPT".

Signed-off-by: Michael Baum
Acked-by: Suanming Mou
---
 doc/guides/nics/mlx5.rst               |  15 ++-
 doc/guides/rel_notes/release_24_03.rst |   5 +
 drivers/net/mlx5/mlx5_flow.h           |  21 +++++
 drivers/net/mlx5/mlx5_flow_geneve.c    | 121 ++++++++++++++++++++++++-
 drivers/net/mlx5/mlx5_flow_hw.c        |  44 ++++++++-
 5 files changed, 199 insertions(+), 7 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 2e5274edb8..62fd27d859 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -337,12 +337,25 @@ Limitations
    - Length
    - Data
 
-  Only one Class/Type/Length Geneve TLV option is supported per shared device.
   Class/Type/Length fields must be specified as well as masks.
   Class/Type/Length specified masks must be full.
   Matching Geneve TLV option without specifying data is not supported.
   Matching Geneve TLV option with ``data & mask == 0`` is not supported.
 
+  In SW steering (``dv_flow_en`` = 1):
+
+  - Only one Class/Type/Length Geneve TLV option is supported per shared
+    device.
+  - Supported only when ``FLEX_PARSER_PROFILE_ENABLE`` = 0.
+
+  In HW steering (``dv_flow_en`` = 2):
+
+  - Multiple Class/Type/Length Geneve TLV options are supported per physical
+    device. See :ref:`geneve_parser_api` for more information.
+  - Matching several instances of the same Geneve TLV option in the same
+    pattern template is not supported.
+  - Supported only when ``FLEX_PARSER_PROFILE_ENABLE`` = 8.
+
 - VF: flow rules created on VF devices can only match traffic targeted at
   the configured MAC addresses (see ``rte_eth_dev_mac_addr_add()``).
diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index a1dfea263c..0c8491ce37 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -77,6 +77,11 @@ New Features
 
   * Added support for ``RTE_FLOW_ITEM_TYPE_RANDOM`` flow item.
 
+  * Added HW steering support for ``RTE_FLOW_ITEM_TYPE_GENEVE`` flow item.
+
+  * Added HW steering support for ``RTE_FLOW_ITEM_TYPE_GENEVE_OPT`` flow item.
+
+
 Removed Items
 -------------
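The HW steering limitations documented above rely on the GENEVE TLV parser API
introduced earlier in this series: an option must be registered on the physical
device before any pattern template can match on it. A minimal sketch of such a
registration (field names per the ``rte_pmd_mlx5_geneve_tlv`` definition from
this series; the class/type/length values are purely illustrative):

    #include <rte_byteorder.h>
    #include <rte_pmd_mlx5.h>

    static void *
    register_geneve_opt(uint16_t port_id)
    {
            /* The whole first data DW is matchable; values are examples only. */
            static rte_be32_t first_dw_mask = RTE_BE32(0xffffffff);
            const struct rte_pmd_mlx5_geneve_tlv tlv = {
                    .option_class = RTE_BE16(0x0107), /* example class */
                    .option_type = 1,                 /* example type */
                    .option_len = 2,                  /* option data length, in DWs */
                    .match_on_class_mode = 1,         /* class is fixed per type */
                    .offset = 0,                      /* sample from the first data DW */
                    .sample_len = 1,                  /* sample a single DW */
                    .match_data_mask = &first_dw_mask,
            };

            /* One parser handle serves all ports on the same physical device. */
            return rte_pmd_mlx5_create_geneve_tlv_parser(port_id, &tlv, 1);
    }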
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 14806fa78e..0459472fe4 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1338,6 +1338,15 @@ struct mlx5_action_construct_data {
 
 #define MAX_GENEVE_OPTIONS_RESOURCES 7
 
+/* GENEVE TLV options manager structure. */
+struct mlx5_geneve_tlv_options_mng {
+	uint8_t nb_options; /* Number of options inside the template. */
+	struct {
+		uint8_t opt_type;
+		uint16_t opt_class;
+	} options[MAX_GENEVE_OPTIONS_RESOURCES];
+};
+
 /* Flow item template struct. */
 struct rte_flow_pattern_template {
 	LIST_ENTRY(rte_flow_pattern_template) next;
@@ -1357,6 +1366,8 @@ struct rte_flow_pattern_template {
 	 * tag pattern item for representor matching.
 	 */
 	bool implicit_tag;
+	/* Manages all GENEVE TLV options used by this pattern template. */
+	struct mlx5_geneve_tlv_options_mng geneve_opt_mng;
 	uint8_t flex_item; /* flex item index. */
 };
 
@@ -1805,6 +1816,16 @@ mlx5_geneve_tlv_parser_create(uint16_t port_id,
 			      const struct rte_pmd_mlx5_geneve_tlv tlv_list[],
 			      uint8_t nb_options);
 int mlx5_geneve_tlv_parser_destroy(void *handle);
+int mlx5_flow_geneve_tlv_option_validate(struct mlx5_priv *priv,
+					 const struct rte_flow_item *geneve_opt,
+					 struct rte_flow_error *error);
+
+struct mlx5_geneve_tlv_options_mng;
+int mlx5_geneve_tlv_option_register(struct mlx5_priv *priv,
+				    const struct rte_flow_item_geneve_opt *spec,
+				    struct mlx5_geneve_tlv_options_mng *mng);
+void mlx5_geneve_tlv_options_unregister(struct mlx5_priv *priv,
+					struct mlx5_geneve_tlv_options_mng *mng);
 
 void flow_hw_set_port_info(struct rte_eth_dev *dev);
 void flow_hw_clear_port_info(struct rte_eth_dev *dev);
diff --git a/drivers/net/mlx5/mlx5_flow_geneve.c b/drivers/net/mlx5/mlx5_flow_geneve.c
index 2d593b70ba..2c8dc39e74 100644
--- a/drivers/net/mlx5/mlx5_flow_geneve.c
+++ b/drivers/net/mlx5/mlx5_flow_geneve.c
@@ -152,6 +152,106 @@ mlx5_get_geneve_hl_data(const void *dr_ctx, uint8_t type, uint16_t class,
 	return -EINVAL;
 }
 
+/**
+ * Validate a GENEVE TLV option item against the registered options.
+ *
+ * @param[in] priv
+ *   Pointer to port's private data.
+ * @param[in] geneve_opt
+ *   Pointer to GENEVE option item structure.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_flow_geneve_tlv_option_validate(struct mlx5_priv *priv,
+				     const struct rte_flow_item *geneve_opt,
+				     struct rte_flow_error *error)
+{
+	const struct rte_flow_item_geneve_opt *spec = geneve_opt->spec;
+	const struct rte_flow_item_geneve_opt *mask = geneve_opt->mask;
+	struct mlx5_geneve_tlv_option *option;
+
+	option = mlx5_geneve_tlv_option_get(priv, spec->option_type, spec->option_class);
+	if (option == NULL)
+		return rte_flow_error_set(error, rte_errno,
+					  RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+					  "Unregistered GENEVE option");
+	if (mask->option_type != UINT8_MAX)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+					  "GENEVE option type must be fully masked");
+	if (option->class_mode == 1 && mask->option_class != UINT16_MAX)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+					  "GENEVE option class must be fully masked");
+	return 0;
+}
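The three checks above define the only item shape that passes validation: the
option must already be registered, the type mask must be full, and the class
mask must be full whenever the option was registered with a fixed class
(class_mode == 1). A sketch of a conforming spec/mask pair, using the existing
``rte_flow_item_geneve_opt`` definition (data values illustrative):

    /* One DW of option data plus its mask. */
    static uint32_t opt_data[1] = { 0xdeadbeef };
    static uint32_t opt_data_mask[1] = { 0xffffffff };

    static const struct rte_flow_item_geneve_opt geneve_opt_spec = {
            .option_class = RTE_BE16(0x0107),
            .option_type = 1,
            .option_len = 2,
            .data = opt_data,
    };
    static const struct rte_flow_item_geneve_opt geneve_opt_mask = {
            .option_class = RTE_BE16(0xffff), /* full mask, needed when class_mode == 1 */
            .option_type = 0xff,              /* full mask, always required */
            .data = opt_data_mask,
    };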
+
+/**
+ * Register single GENEVE TLV option as used by pattern template.
+ *
+ * @param[in] priv
+ *   Pointer to port's private data.
+ * @param[in] spec
+ *   Pointer to GENEVE option item structure.
+ * @param[out] mng
+ *   Pointer to GENEVE option manager.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_geneve_tlv_option_register(struct mlx5_priv *priv,
+				const struct rte_flow_item_geneve_opt *spec,
+				struct mlx5_geneve_tlv_options_mng *mng)
+{
+	struct mlx5_geneve_tlv_option *option;
+
+	option = mlx5_geneve_tlv_option_get(priv, spec->option_type, spec->option_class);
+	if (option == NULL)
+		return -rte_errno;
+	/* Increase the option reference counter. */
+	rte_atomic_fetch_add_explicit(&option->refcnt, 1,
+				      rte_memory_order_relaxed);
+	/* Update the manager with option information. */
+	mng->options[mng->nb_options].opt_type = spec->option_type;
+	mng->options[mng->nb_options].opt_class = spec->option_class;
+	mng->nb_options++;
+	return 0;
+}
+
+/**
+ * Unregister all GENEVE TLV options used by pattern template.
+ *
+ * @param[in] priv
+ *   Pointer to port's private data.
+ * @param[in] mng
+ *   Pointer to GENEVE option manager.
+ */
+void
+mlx5_geneve_tlv_options_unregister(struct mlx5_priv *priv,
+				   struct mlx5_geneve_tlv_options_mng *mng)
+{
+	struct mlx5_geneve_tlv_option *option;
+	uint8_t i;
+
+	for (i = 0; i < mng->nb_options; ++i) {
+		option = mlx5_geneve_tlv_option_get(priv,
+						    mng->options[i].opt_type,
+						    mng->options[i].opt_class);
+		MLX5_ASSERT(option != NULL);
+		/* Decrease the option reference counter. */
+		rte_atomic_fetch_sub_explicit(&option->refcnt, 1,
+					      rte_memory_order_relaxed);
+		mng->options[i].opt_type = 0;
+		mng->options[i].opt_class = 0;
+	}
+	mng->nb_options = 0;
+}
+
 /**
  * Create single GENEVE TLV option sample.
  *
@@ -208,6 +308,24 @@ mlx5_geneve_tlv_option_destroy_sample(struct mlx5_geneve_tlv_resource *resource)
 	resource->obj = NULL;
 }
 
+/*
+ * A sample for DW0 is created when one of two conditions is met:
+ * 1. The header is matchable.
+ * 2. This option doesn't configure any data DW.
+ */
+static bool
+should_configure_sample_for_dw0(const struct rte_pmd_mlx5_geneve_tlv *spec)
+{
+	uint8_t i;
+
+	if (spec->match_on_class_mode == 2)
+		return true;
+	for (i = 0; i < spec->sample_len; ++i)
+		if (spec->match_data_mask[i] != 0)
+			return false;
+	return true;
+}
+
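The helper's second condition covers presence-only matching: when no data DW
carries a non-zero mask, sampling DW0 is the only way to prove the option
exists in the packet. A sketch of the expected results for the three possible
inputs (illustrative values, assuming the static helper is visible, e.g. from
a unit test placed in this file):

    static void
    dw0_sample_examples(void)
    {
            rte_be32_t no_data[2] = { 0, 0 };
            rte_be32_t second_dw[2] = { 0, RTE_BE32(0xffffffff) };
            struct rte_pmd_mlx5_geneve_tlv tlv = {
                    .match_on_class_mode = 1,
                    .sample_len = 2,
                    .match_data_mask = no_data,
            };

            /* Presence-only matching: DW0 sample proves the option exists. */
            MLX5_ASSERT(should_configure_sample_for_dw0(&tlv));
            /* A data DW is matchable: the data samples are sufficient. */
            tlv.match_data_mask = second_dw;
            MLX5_ASSERT(!should_configure_sample_for_dw0(&tlv));
            /* Matchable class: DW0 must always be sampled. */
            tlv.match_on_class_mode = 2;
            MLX5_ASSERT(should_configure_sample_for_dw0(&tlv));
    }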
 /**
  * Create single GENEVE TLV option.
  *
@@ -237,8 +355,7 @@ mlx5_geneve_tlv_option_create(void *ctx, const struct rte_pmd_mlx5_geneve_tlv *s
 	uint8_t i, resource_id = 0;
 	int ret;
 
-	if (spec->match_on_class_mode == 2) {
-		/* Header is matchable, create sample for DW0. */
+	if (should_configure_sample_for_dw0(spec)) {
 		attr.sample_offset = 0;
 		resource = &option->resources[resource_id];
 		ret = mlx5_geneve_tlv_option_create_sample(ctx, &attr,
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index f06d2ce273..00dc9bc890 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -6828,6 +6828,17 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
 					   " attribute");
 			break;
 		}
+		case RTE_FLOW_ITEM_TYPE_GENEVE_OPT:
+		{
+			int ret;
+
+			ret = mlx5_flow_geneve_tlv_option_validate(priv,
+								   &items[i],
+								   error);
+			if (ret < 0)
+				return ret;
+			break;
+		}
 		case RTE_FLOW_ITEM_TYPE_VOID:
 		case RTE_FLOW_ITEM_TYPE_ETH:
 		case RTE_FLOW_ITEM_TYPE_VLAN:
@@ -6840,6 +6851,7 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
 		case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
 		case RTE_FLOW_ITEM_TYPE_MPLS:
+		case RTE_FLOW_ITEM_TYPE_GENEVE:
 		case MLX5_RTE_FLOW_ITEM_TYPE_SQ:
 		case RTE_FLOW_ITEM_TYPE_GRE:
 		case RTE_FLOW_ITEM_TYPE_GRE_KEY:
@@ -7008,24 +7020,45 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
 		}
 	}
 	for (i = 0; items[i].type != RTE_FLOW_ITEM_TYPE_END; ++i) {
-		if (items[i].type == RTE_FLOW_ITEM_TYPE_FLEX) {
+		switch (items[i].type) {
+		case RTE_FLOW_ITEM_TYPE_FLEX: {
 			const struct rte_flow_item_flex *spec =
 				(const struct rte_flow_item_flex *)items[i].spec;
 			struct rte_flow_item_flex_handle *handle = spec->handle;
 
 			if (flow_hw_flex_item_acquire(dev, handle,
 						      &it->flex_item)) {
-				claim_zero(mlx5dr_match_template_destroy(it->mt));
-				mlx5_free(it);
 				rte_flow_error_set(error, rte_errno,
 						   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
 						   "Failed to acquire flex item");
-				return NULL;
+				goto error;
 			}
+			break;
+		}
+		case RTE_FLOW_ITEM_TYPE_GENEVE_OPT: {
+			const struct rte_flow_item_geneve_opt *spec =
+				items[i].spec;
+
+			if (mlx5_geneve_tlv_option_register(priv, spec,
+							    &it->geneve_opt_mng)) {
+				rte_flow_error_set(error, rte_errno,
+						   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+						   "Failed to register GENEVE TLV option");
+				goto error;
+			}
+			break;
+		}
+		default:
+			break;
 		}
 	}
 	__atomic_fetch_add(&it->refcnt, 1, __ATOMIC_RELAXED);
 	LIST_INSERT_HEAD(&priv->flow_hw_itt, it, next);
 	return it;
+error:
+	flow_hw_flex_item_release(dev, &it->flex_item);
+	mlx5_geneve_tlv_options_unregister(priv, &it->geneve_opt_mng);
+	claim_zero(mlx5dr_match_template_destroy(it->mt));
+	mlx5_free(it);
+	return NULL;
 }
 
 /**
@@ -7046,6 +7079,8 @@ flow_hw_pattern_template_destroy(struct rte_eth_dev *dev,
 				 struct rte_flow_pattern_template *template,
 				 struct rte_flow_error *error __rte_unused)
 {
+	struct mlx5_priv *priv = dev->data->dev_private;
+
 	if (__atomic_load_n(&template->refcnt, __ATOMIC_RELAXED) > 1) {
 		DRV_LOG(WARNING, "Item template %p is still in use.",
 			(void *)template);
@@ -7059,6 +7094,7 @@ flow_hw_pattern_template_destroy(struct rte_eth_dev *dev,
 		mlx5_free_srh_flex_parser(dev);
 	LIST_REMOVE(template, next);
 	flow_hw_flex_item_release(dev, &template->flex_item);
+	mlx5_geneve_tlv_options_unregister(priv, &template->geneve_opt_mng);
 	claim_zero(mlx5dr_match_template_destroy(template->mt));
 	mlx5_free(template);
 	return 0;
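End to end, the feature added by this patch is consumed as follows: register
the option layouts once per physical device, then reference them from HWS
pattern templates, which drives the validate/register path above. A hedged
sketch reusing the illustrative ``geneve_opt_spec``/``geneve_opt_mask``
definitions from earlier:

    static struct rte_flow_pattern_template *
    create_geneve_opt_template(uint16_t port_id, struct rte_flow_error *error)
    {
            const struct rte_flow_pattern_template_attr attr = { .ingress = 1 };
            const struct rte_flow_item pattern[] = {
                    { .type = RTE_FLOW_ITEM_TYPE_ETH },
                    { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
                    { .type = RTE_FLOW_ITEM_TYPE_UDP },
                    { .type = RTE_FLOW_ITEM_TYPE_GENEVE },
                    /* Validated and registered by the code added in this patch. */
                    { .type = RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
                      .spec = &geneve_opt_spec, .mask = &geneve_opt_mask },
                    { .type = RTE_FLOW_ITEM_TYPE_END },
            };

            return rte_flow_pattern_template_create(port_id, &attr, pattern, error);
    }

Destroying the template (or failing template creation) drops the option
reference counters through mlx5_geneve_tlv_options_unregister(), so the parser
handle can be destroyed once all templates that use it are gone.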