From patchwork Thu Sep 22 19:03:41 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alex Vesker
X-Patchwork-Id: 116679
X-Patchwork-Delegate: rasland@nvidia.com
From: Alex Vesker
To: , , , , , Matan Azrad
CC: ,
Subject: [v1 16/19] net/mlx5/hws: Add HWS rule object
Date: Thu, 22 Sep 2022 22:03:41 +0300
Message-ID: <20220922190345.394-17-valex@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20220922190345.394-1-valex@nvidia.com>
References: <20220922190345.394-1-valex@nvidia.com>
MIME-Version: 1.0

HWS rule objects reside under the matcher. Each rule holds the
configuration of the packet fields to match on and the set of actions
to execute on packets that match those fields. Rules can be created
asynchronously and in parallel over multiple queues to different
matchers. Each rule is configured directly in the HW.

Signed-off-by: Erez Shitrit
Signed-off-by: Alex Vesker
---
 drivers/net/mlx5/hws/mlx5dr_rule.c | 528 +++++++++++++++++++++++++++++
 drivers/net/mlx5/hws/mlx5dr_rule.h |  50 +++
 2 files changed, 578 insertions(+)
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_rule.c
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_rule.h

diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c
new file mode 100644
index 0000000000..e393080c2b
--- /dev/null
+++ b/drivers/net/mlx5/hws/mlx5dr_rule.c
@@ -0,0 +1,528 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2022 NVIDIA Corporation & Affiliates
+ */
+
+#include "mlx5dr_internal.h"
+
+static void mlx5dr_rule_skip(struct mlx5dr_matcher *matcher,
+			     const struct rte_flow_item *items,
+			     bool *skip_rx, bool *skip_tx)
+{
+	struct mlx5dr_match_template *mt = matcher->mt[0];
+	const struct rte_flow_item_ethdev *v;
+	const struct flow_hw_port_info *vport;
+
+	/* flow_src is the 1st priority */
+	if (matcher->attr.optimize_flow_src) {
+		*skip_tx = matcher->attr.optimize_flow_src == MLX5DR_MATCHER_FLOW_SRC_WIRE;
+		*skip_rx = matcher->attr.optimize_flow_src == MLX5DR_MATCHER_FLOW_SRC_VPORT;
+		return;
+	}
+
+	/* By default FDB rules are added to both RX and TX */
+	*skip_rx = false;
+	*skip_tx = false;
+
+	if (mt->item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) {
+		v = items[mt->vport_item_id].spec;
+		vport = flow_hw_conv_port_id(v->port_id);
+		if (unlikely(!vport)) {
+			DR_LOG(ERR, "Fail to map port ID %d, ignoring", v->port_id);
+			return;
+		}
+
+		if (!vport->is_wire)
+			/* Match vport ID is not WIRE -> Skip RX */
+			*skip_rx = true;
+		else
+			/* Match vport ID is WIRE -> Skip TX */
+			*skip_tx = true;
+	}
+}
+
+static void mlx5dr_rule_init_dep_wqe(struct mlx5dr_send_ring_dep_wqe *dep_wqe,
+				     struct mlx5dr_rule *rule,
+				     const struct rte_flow_item *items,
+				     void *user_data)
+{
+	struct mlx5dr_matcher *matcher = rule->matcher;
+	struct mlx5dr_table *tbl = matcher->tbl;
+	bool skip_rx, skip_tx;
+
+	dep_wqe->rule = rule;
+	dep_wqe->user_data = user_data;
+
+	switch (tbl->type) {
+	case MLX5DR_TABLE_TYPE_NIC_RX:
+	case MLX5DR_TABLE_TYPE_NIC_TX:
+		dep_wqe->rtc_0 = matcher->match_ste.rtc_0->id;
+		dep_wqe->retry_rtc_0 = matcher->col_matcher ?
+				       matcher->col_matcher->match_ste.rtc_0->id : 0;
+		dep_wqe->rtc_1 = 0;
+		dep_wqe->retry_rtc_1 = 0;
+		break;
+
+	case MLX5DR_TABLE_TYPE_FDB:
+		mlx5dr_rule_skip(matcher, items, &skip_rx, &skip_tx);
+
+		if (!skip_rx) {
+			dep_wqe->rtc_0 = matcher->match_ste.rtc_0->id;
+			dep_wqe->retry_rtc_0 = matcher->col_matcher ?
+					       matcher->col_matcher->match_ste.rtc_0->id : 0;
+		} else {
+			dep_wqe->rtc_0 = 0;
+			dep_wqe->retry_rtc_0 = 0;
+		}
+
+		if (!skip_tx) {
+			dep_wqe->rtc_1 = matcher->match_ste.rtc_1->id;
+			dep_wqe->retry_rtc_1 = matcher->col_matcher ?
+					       matcher->col_matcher->match_ste.rtc_1->id : 0;
+		} else {
+			dep_wqe->rtc_1 = 0;
+			dep_wqe->retry_rtc_1 = 0;
+		}
+
+		break;
+
+	default:
+		assert(false);
+		break;
+	}
+}
+
+static void mlx5dr_rule_gen_comp(struct mlx5dr_send_engine *queue,
+				 struct mlx5dr_rule *rule,
+				 bool err,
+				 void *user_data,
+				 enum mlx5dr_rule_status rule_status_on_succ)
+{
+	enum rte_flow_op_status comp_status;
+
+	if (!err) {
+		comp_status = RTE_FLOW_OP_SUCCESS;
+		rule->status = rule_status_on_succ;
+	} else {
+		comp_status = RTE_FLOW_OP_ERROR;
+		rule->status = MLX5DR_RULE_STATUS_FAILED;
+	}
+
+	mlx5dr_send_engine_inc_rule(queue);
+	mlx5dr_send_engine_gen_comp(queue, user_data, comp_status);
+}
+
+static int mlx5dr_rule_alloc_action_ste(struct mlx5dr_rule *rule,
+					struct mlx5dr_rule_attr *attr)
+{
+	struct mlx5dr_matcher *matcher = rule->matcher;
+	int ret;
+
+	/* Use rule_idx for locking optimization, otherwise allocate from pool */
+	if (matcher->attr.optimize_using_rule_idx) {
+		rule->action_ste_idx = attr->rule_idx * matcher->action_ste.max_stes;
+	} else {
+		struct mlx5dr_pool_chunk ste = {0};
+
+		ste.order = rte_log2_u32(matcher->action_ste.max_stes);
+		ret = mlx5dr_pool_chunk_alloc(matcher->action_ste.pool, &ste);
+		if (ret) {
+			DR_LOG(ERR, "Failed to allocate STE for rule actions");
+			return ret;
+		}
+		rule->action_ste_idx = ste.offset;
+	}
+	return 0;
+}
+
+void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule)
+{
+	struct mlx5dr_matcher *matcher = rule->matcher;
+
+	if (rule->action_ste_idx > -1 && !matcher->attr.optimize_using_rule_idx) {
+		struct mlx5dr_pool_chunk ste = {0};
+
+		/* This release is safe only when the rule match part was deleted */
+		ste.order = rte_log2_u32(matcher->action_ste.max_stes);
+		ste.offset = rule->action_ste_idx;
+		mlx5dr_pool_chunk_free(matcher->action_ste.pool, &ste);
+	}
+}
+
+static void mlx5dr_rule_create_init(struct mlx5dr_rule *rule,
+				    struct mlx5dr_send_ste_attr *ste_attr,
+				    struct mlx5dr_actions_apply_data *apply)
+{
+	struct mlx5dr_matcher *matcher = rule->matcher;
+	struct mlx5dr_table *tbl = matcher->tbl;
+	struct mlx5dr_context *ctx = tbl->ctx;
+
+	/* Init rule before reuse */
+	rule->rtc_0 = 0;
+	rule->rtc_1 = 0;
+	rule->pending_wqes = 0;
+	rule->action_ste_idx = -1;
+	rule->status = MLX5DR_RULE_STATUS_CREATING;
+
+	/* Init default send STE attributes */
+	ste_attr->gta_opcode = MLX5DR_WQE_GTA_OP_ACTIVATE;
+	ste_attr->send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE;
+	ste_attr->send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS;
+	ste_attr->send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA;
+
+	/* Init default action apply */
+	apply->tbl_type = tbl->type;
+	apply->common_res = &ctx->common_res[tbl->type];
+	apply->jump_to_action_stc = matcher->action_ste.stc.offset;
+	apply->require_dep = 0;
+}
+
+static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule,
+				  struct mlx5dr_rule_attr *attr,
+				  uint8_t mt_idx,
+				  const struct rte_flow_item items[],
+				  uint8_t at_idx,
+				  struct mlx5dr_rule_action rule_actions[])
+{
+	struct mlx5dr_action_template *at = rule->matcher->at[at_idx];
+	struct mlx5dr_match_template *mt = rule->matcher->mt[mt_idx];
+	bool is_jumbo = mlx5dr_definer_is_jumbo(mt->definer);
+	struct mlx5dr_matcher *matcher = rule->matcher;
+	struct mlx5dr_context *ctx = matcher->tbl->ctx;
+	struct mlx5dr_send_ste_attr ste_attr = {0};
+	struct mlx5dr_send_ring_dep_wqe *dep_wqe;
+	struct mlx5dr_actions_wqe_setter *setter;
+	struct mlx5dr_actions_apply_data apply;
+	struct mlx5dr_send_engine *queue;
+	uint8_t total_stes, action_stes;
+	int i, ret;
+
+	queue = &ctx->send_queue[attr->queue_id];
+	if (unlikely(mlx5dr_send_engine_err(queue))) {
+		rte_errno = EIO;
+		return rte_errno;
+	}
+
+	mlx5dr_rule_create_init(rule, &ste_attr, &apply);
+
+	/* Allocate dependent match WQE since rule might have dependent writes.
+	 * The queued dependent WQE can be later aborted or kept as a dependency.
+	 * dep_wqe buffers (ctrl, data) are also reused for all STE writes.
+	 */
+	dep_wqe = mlx5dr_send_add_new_dep_wqe(queue);
+	mlx5dr_rule_init_dep_wqe(dep_wqe, rule, items, attr->user_data);
+
+	ste_attr.wqe_ctrl = &dep_wqe->wqe_ctrl;
+	ste_attr.wqe_data = &dep_wqe->wqe_data;
+	apply.wqe_ctrl = &dep_wqe->wqe_ctrl;
+	apply.wqe_data = (uint32_t *)&dep_wqe->wqe_data;
+	apply.rule_action = rule_actions;
+	apply.queue = queue;
+
+	setter = &at->setters[at->num_of_action_stes];
+	total_stes = at->num_of_action_stes + (is_jumbo && !at->only_term);
+	action_stes = total_stes - 1;
+
+	if (action_stes) {
+		/* Allocate action STEs for complex rules */
+		ret = mlx5dr_rule_alloc_action_ste(rule, attr);
+		if (ret) {
+			DR_LOG(ERR, "Failed to allocate action memory %d", ret);
+			mlx5dr_send_abort_new_dep_wqe(queue);
+			return ret;
+		}
+		/* Skip RX/TX based on the dep_wqe init */
+		ste_attr.rtc_0 = dep_wqe->rtc_0 ? matcher->action_ste.rtc_0->id : 0;
+		ste_attr.rtc_1 = dep_wqe->rtc_1 ? matcher->action_ste.rtc_1->id : 0;
+		/* Action STEs are written to a specific index last to first */
+		ste_attr.direct_index = rule->action_ste_idx + action_stes;
+		apply.next_direct_idx = ste_attr.direct_index;
+	} else {
+		apply.next_direct_idx = 0;
+	}
+
+	for (i = total_stes; i-- > 0;) {
+		mlx5dr_action_apply_setter(&apply, setter--, !i && is_jumbo);
+
+		if (i == 0) {
+			/* Handle last match STE */
+			mlx5dr_definer_create_tag(items, mt->fc, mt->fc_sz,
+						  (uint8_t *)dep_wqe->wqe_data.action);
+
+			/* Rule has dependent WQEs, match dep_wqe is queued */
+			if (action_stes || apply.require_dep)
+				break;
+
+			/* Rule has no dependencies, abort dep_wqe and send WQE now */
+			mlx5dr_send_abort_new_dep_wqe(queue);
+			ste_attr.wqe_tag_is_jumbo = is_jumbo;
+			ste_attr.send_attr.notify_hw = !attr->burst;
+			ste_attr.send_attr.user_data = dep_wqe->user_data;
+			ste_attr.send_attr.rule = dep_wqe->rule;
+			ste_attr.direct_index = 0;
+			ste_attr.rtc_0 = dep_wqe->rtc_0;
+			ste_attr.rtc_1 = dep_wqe->rtc_1;
+			ste_attr.used_id_rtc_0 = &rule->rtc_0;
+			ste_attr.used_id_rtc_1 = &rule->rtc_1;
+			ste_attr.retry_rtc_0 = dep_wqe->retry_rtc_0;
+			ste_attr.retry_rtc_1 = dep_wqe->retry_rtc_1;
+		} else {
+			apply.next_direct_idx = --ste_attr.direct_index;
+		}
+
+		mlx5dr_send_ste(queue, &ste_attr);
+	}
+
+	/* Backup TAG on the rule for deletion */
+	if (is_jumbo)
+		memcpy(rule->tag.jumbo, dep_wqe->wqe_data.action, MLX5DR_JUMBO_TAG_SZ);
+	else
+		memcpy(rule->tag.match, dep_wqe->wqe_data.tag, MLX5DR_MATCH_TAG_SZ);
+
+	mlx5dr_send_engine_inc_rule(queue);
+
+	/* Send dependent WQEs */
+	if (!attr->burst)
+		mlx5dr_send_all_dep_wqe(queue);
+
+	return 0;
+}
+
+static void mlx5dr_rule_destroy_failed_hws(struct mlx5dr_rule *rule,
+					   struct mlx5dr_rule_attr *attr)
+{
+	struct mlx5dr_context *ctx = rule->matcher->tbl->ctx;
+	struct mlx5dr_send_engine *queue;
+
+	queue = &ctx->send_queue[attr->queue_id];
+
+	mlx5dr_rule_gen_comp(queue, rule, false,
+			     attr->user_data, MLX5DR_RULE_STATUS_DELETED);
+
+	/* Rule failed, now we can safely release the action STEs */
+	mlx5dr_rule_free_action_ste_idx(rule);
+
+	/* If a rule that was indicated as burst (need to trigger HW) has failed
+	 * insertion, we don't ring the HW since nothing was written to the WQ.
+	 * Otherwise update the last WQE and ring the HW with that work.
+	 */
+	if (attr->burst)
+		return;
+
+	mlx5dr_send_all_dep_wqe(queue);
+	mlx5dr_send_engine_flush_queue(queue);
+}
+
+static int mlx5dr_rule_destroy_hws(struct mlx5dr_rule *rule,
+				   struct mlx5dr_rule_attr *attr)
+{
+	struct mlx5dr_context *ctx = rule->matcher->tbl->ctx;
+	struct mlx5dr_matcher *matcher = rule->matcher;
+	struct mlx5dr_wqe_gta_ctrl_seg wqe_ctrl = {0};
+	struct mlx5dr_send_ste_attr ste_attr = {0};
+	struct mlx5dr_send_engine *queue;
+
+	queue = &ctx->send_queue[attr->queue_id];
+
+	/* Rule is not completed yet */
+	if (rule->status == MLX5DR_RULE_STATUS_CREATING) {
+		rte_errno = EBUSY;
+		return rte_errno;
+	}
+
+	/* Rule failed and doesn't require cleanup */
+	if (rule->status == MLX5DR_RULE_STATUS_FAILED) {
+		mlx5dr_rule_destroy_failed_hws(rule, attr);
+		return 0;
+	}
+
+	if (unlikely(mlx5dr_send_engine_err(queue))) {
+		mlx5dr_rule_destroy_failed_hws(rule, attr);
+		return 0;
+	}
+
+	mlx5dr_send_engine_inc_rule(queue);
+
+	/* Send dependent WQE */
+	if (!attr->burst)
+		mlx5dr_send_all_dep_wqe(queue);
+
+	rule->status = MLX5DR_RULE_STATUS_DELETING;
+
+	ste_attr.send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE;
+	ste_attr.send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS;
+	ste_attr.send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA;
+
+	ste_attr.send_attr.rule = rule;
+	ste_attr.send_attr.notify_hw = !attr->burst;
+	ste_attr.send_attr.user_data = attr->user_data;
+
+	ste_attr.rtc_0 = rule->rtc_0;
+	ste_attr.rtc_1 = rule->rtc_1;
+	ste_attr.used_id_rtc_0 = &rule->rtc_0;
+	ste_attr.used_id_rtc_1 = &rule->rtc_1;
+	ste_attr.wqe_ctrl = &wqe_ctrl;
+	ste_attr.wqe_tag = &rule->tag;
+	ste_attr.wqe_tag_is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt[0]->definer);
+	ste_attr.gta_opcode = MLX5DR_WQE_GTA_OP_DEACTIVATE;
+
+	mlx5dr_send_ste(queue, &ste_attr);
+
+	return 0;
+}
+
+static int mlx5dr_rule_create_root(struct mlx5dr_rule *rule,
+				   struct mlx5dr_rule_attr *rule_attr,
+				   const struct rte_flow_item items[],
+				   uint8_t at_idx,
+				   struct mlx5dr_rule_action rule_actions[])
+{
+	struct mlx5dv_flow_matcher *dv_matcher = rule->matcher->dv_matcher;
+	uint8_t num_actions = rule->matcher->at[at_idx]->num_actions;
+	struct mlx5dr_context *ctx = rule->matcher->tbl->ctx;
+	struct mlx5dv_flow_match_parameters *value;
+	struct mlx5_flow_attr flow_attr = {0};
+	struct mlx5dv_flow_action_attr *attr;
+	struct rte_flow_error error;
+	uint8_t match_criteria;
+	int ret;
+
+	attr = simple_calloc(num_actions, sizeof(*attr));
+	if (!attr) {
+		rte_errno = ENOMEM;
+		return rte_errno;
+	}
+
+	value = simple_calloc(1, MLX5_ST_SZ_BYTES(fte_match_param) +
+			      offsetof(struct mlx5dv_flow_match_parameters, match_buf));
+	if (!value) {
+		rte_errno = ENOMEM;
+		goto free_attr;
+	}
+
+	flow_attr.tbl_type = rule->matcher->tbl->type;
+
+	ret = flow_dv_translate_items_hws(items, &flow_attr, value->match_buf,
+					  MLX5_SET_MATCHER_HS_V, NULL,
+					  &match_criteria,
+					  &error);
+	if (ret) {
+		DR_LOG(ERR, "Failed to convert items to PRM [%s]", error.message);
+		goto free_value;
+	}
+
+	/* Convert actions to verb action attr */
+	ret = mlx5dr_action_root_build_attr(rule_actions, num_actions, attr);
+	if (ret)
+		goto free_value;
+
+	/* Create verb flow */
+	value->match_sz = MLX5_ST_SZ_BYTES(fte_match_param);
+	rule->flow = mlx5_glue->dv_create_flow_root(dv_matcher,
+						    value,
+						    num_actions,
+						    attr);
+
+	mlx5dr_rule_gen_comp(&ctx->send_queue[rule_attr->queue_id], rule, !rule->flow,
+			     rule_attr->user_data, MLX5DR_RULE_STATUS_CREATED);
+
+	simple_free(value);
+	simple_free(attr);
+
+	return 0;
+
+free_value:
+	simple_free(value);
+free_attr:
+	simple_free(attr);
+
+	return -rte_errno;
+}
+
+static int mlx5dr_rule_destroy_root(struct mlx5dr_rule *rule,
+				    struct mlx5dr_rule_attr *attr)
+{
+	struct mlx5dr_context *ctx = rule->matcher->tbl->ctx;
+	int err = 0;
+
+	if (rule->flow)
+		err = ibv_destroy_flow(rule->flow);
+
+	mlx5dr_rule_gen_comp(&ctx->send_queue[attr->queue_id], rule, err,
+			     attr->user_data, MLX5DR_RULE_STATUS_DELETED);
+
+	return 0;
+}
+
+int mlx5dr_rule_create(struct mlx5dr_matcher *matcher,
+		       uint8_t mt_idx,
+		       const struct rte_flow_item items[],
+		       uint8_t at_idx,
+		       struct mlx5dr_rule_action rule_actions[],
+		       struct mlx5dr_rule_attr *attr,
+		       struct mlx5dr_rule *rule_handle)
+{
+	struct mlx5dr_context *ctx;
+	int ret;
+
+	rule_handle->matcher = matcher;
+	ctx = matcher->tbl->ctx;
+
+	if (unlikely(!attr->user_data)) {
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+
+	/* Check if there is room in queue */
+	if (unlikely(mlx5dr_send_engine_full(&ctx->send_queue[attr->queue_id]))) {
+		rte_errno = EBUSY;
+		return -rte_errno;
+	}
+
+	assert(matcher->num_of_mt >= mt_idx);
+	assert(matcher->num_of_at >= at_idx);
+
+	if (unlikely(mlx5dr_table_is_root(matcher->tbl)))
+		ret = mlx5dr_rule_create_root(rule_handle,
+					      attr,
+					      items,
+					      at_idx,
+					      rule_actions);
+	else
+		ret = mlx5dr_rule_create_hws(rule_handle,
+					     attr,
+					     mt_idx,
+					     items,
+					     at_idx,
+					     rule_actions);
+	return -ret;
+}
+
+int mlx5dr_rule_destroy(struct mlx5dr_rule *rule,
+			struct mlx5dr_rule_attr *attr)
+{
+	struct mlx5dr_context *ctx = rule->matcher->tbl->ctx;
+	int ret;
+
+	if (unlikely(!attr->user_data)) {
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+
+	/* Check if there is room in queue */
+	if (unlikely(mlx5dr_send_engine_full(&ctx->send_queue[attr->queue_id]))) {
+		rte_errno = EBUSY;
+		return -rte_errno;
+	}
+
+	if (unlikely(mlx5dr_table_is_root(rule->matcher->tbl)))
+		ret = mlx5dr_rule_destroy_root(rule, attr);
+	else
+		ret = mlx5dr_rule_destroy_hws(rule, attr);
+
+	return -ret;
+}
+
+size_t mlx5dr_rule_get_handle_size(void)
+{
+	return sizeof(struct mlx5dr_rule);
+}
diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.h b/drivers/net/mlx5/hws/mlx5dr_rule.h
new file mode 100644
index 0000000000..88ecfb3e6c
--- /dev/null
+++ b/drivers/net/mlx5/hws/mlx5dr_rule.h
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2022 NVIDIA Corporation & Affiliates
+ */
+
+#ifndef MLX5DR_RULE_H_
+#define MLX5DR_RULE_H_
+
+enum {
+	MLX5DR_STE_CTRL_SZ = 20,
+	MLX5DR_ACTIONS_SZ = 12,
+	MLX5DR_MATCH_TAG_SZ = 32,
+	MLX5DR_JUMBO_TAG_SZ = 44,
+};
+
+enum mlx5dr_rule_status {
+	MLX5DR_RULE_STATUS_UNKNOWN,
+	MLX5DR_RULE_STATUS_CREATING,
+	MLX5DR_RULE_STATUS_CREATED,
+	MLX5DR_RULE_STATUS_DELETING,
+	MLX5DR_RULE_STATUS_DELETED,
+	MLX5DR_RULE_STATUS_FAILING,
+	MLX5DR_RULE_STATUS_FAILED,
+};
+
+struct mlx5dr_rule_match_tag {
+	union {
+		uint8_t jumbo[MLX5DR_JUMBO_TAG_SZ];
+		struct {
+			uint8_t reserved[MLX5DR_ACTIONS_SZ];
+			uint8_t match[MLX5DR_MATCH_TAG_SZ];
+		};
+	};
+};
+
+struct mlx5dr_rule {
+	struct mlx5dr_matcher *matcher;
+	union {
+		struct mlx5dr_rule_match_tag tag;
+		struct ibv_flow *flow;
+	};
+	uint32_t rtc_0; /* The RTC into which the STE was inserted */
+	uint32_t rtc_1; /* The RTC into which the STE was inserted */
+	int action_ste_idx; /* Action STE pool ID */
+	uint8_t status; /* enum mlx5dr_rule_status */
+	uint8_t pending_wqes;
+};
+
+void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule);
+
+#endif