From patchwork Tue Jan 31 09:33:35 2023
X-Patchwork-Submitter: Alex Vesker
X-Patchwork-Id: 122726
X-Patchwork-Delegate: rasland@nvidia.com
From: Alex Vesker
To: , , , "Matan Azrad"
CC: ,
Subject: [v1 06/16] net/mlx5/hws: add send FW match STE using gen WQE
Date: Tue, 31 Jan 2023 11:33:35 +0200
Message-ID: <20230131093346.1261066-7-valex@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20230131093346.1261066-1-valex@nvidia.com>
References: <20230131093346.1261066-1-valex@nvidia.com>
Send STE WQE function wraps the send WQE command to support WQE build and
FDB abstraction. Sending through FW differs from sending through HW: FW
returns the completion immediately, so the send path must retry on failure
and prepare the completion itself as part of the send process. (A minimal
standalone sketch of this retry-and-complete flow follows the diff below.)

Signed-off-by: Alex Vesker
---
 drivers/net/mlx5/hws/mlx5dr_send.c | 134 +++++++++++++++++++++++++++++
 drivers/net/mlx5/hws/mlx5dr_send.h |   7 +-
 2 files changed, 140 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr_send.c b/drivers/net/mlx5/hws/mlx5dr_send.c
index a507e5f626..a9958df4f2 100644
--- a/drivers/net/mlx5/hws/mlx5dr_send.c
+++ b/drivers/net/mlx5/hws/mlx5dr_send.c
@@ -235,6 +235,140 @@ void mlx5dr_send_ste(struct mlx5dr_send_engine *queue,
 	send_attr->fence = fence;
 }
 
+static
+int mlx5dr_send_wqe_fw(struct ibv_context *ibv_ctx,
+		       uint32_t pd_num,
+		       struct mlx5dr_send_engine_post_attr *send_attr,
+		       struct mlx5dr_wqe_gta_ctrl_seg *send_wqe_ctrl,
+		       void *send_wqe_match_data,
+		       void *send_wqe_match_tag,
+		       bool is_jumbo,
+		       uint8_t gta_opcode)
+{
+	bool has_match = send_wqe_match_data || send_wqe_match_tag;
+	struct mlx5dr_wqe_gta_data_seg_ste gta_wqe_data0 = {0};
+	struct mlx5dr_wqe_gta_ctrl_seg gta_wqe_ctrl = {0};
+	struct mlx5dr_cmd_generate_wqe_attr attr = {0};
+	struct mlx5dr_wqe_ctrl_seg wqe_ctrl = {0};
+	struct mlx5_cqe64 cqe;
+	uint32_t flags = 0;
+	int ret;
+
+	/* Set WQE control */
+	wqe_ctrl.opmod_idx_opcode =
+		rte_cpu_to_be_32((send_attr->opmod << 24) | send_attr->opcode);
+	wqe_ctrl.qpn_ds =
+		rte_cpu_to_be_32((send_attr->len + sizeof(struct mlx5dr_wqe_ctrl_seg)) / 16);
+	flags |= send_attr->notify_hw ? MLX5_WQE_CTRL_CQ_UPDATE : 0;
+	wqe_ctrl.flags = rte_cpu_to_be_32(flags);
+	wqe_ctrl.imm = rte_cpu_to_be_32(send_attr->id);
+
+	/* Set GTA WQE CTRL */
+	memcpy(gta_wqe_ctrl.stc_ix, send_wqe_ctrl->stc_ix, sizeof(send_wqe_ctrl->stc_ix));
+	gta_wqe_ctrl.op_dirix = htobe32(gta_opcode << 28);
+
+	/* Set GTA match WQE DATA */
+	if (has_match) {
+		if (send_wqe_match_data)
+			memcpy(&gta_wqe_data0, send_wqe_match_data, sizeof(gta_wqe_data0));
+		else
+			mlx5dr_send_wqe_set_tag(&gta_wqe_data0, send_wqe_match_tag, is_jumbo);
+
+		gta_wqe_data0.rsvd1_definer = htobe32(send_attr->match_definer_id << 8);
+		attr.gta_data_0 = (uint8_t *)&gta_wqe_data0;
+	}
+
+	attr.pdn = pd_num;
+	attr.wqe_ctrl = (uint8_t *)&wqe_ctrl;
+	attr.gta_ctrl = (uint8_t *)&gta_wqe_ctrl;
+
+send_wqe:
+	ret = mlx5dr_cmd_generate_wqe(ibv_ctx, &attr, &cqe);
+	if (ret) {
+		DR_LOG(ERR, "Failed to write WQE using command");
+		return ret;
+	}
+
+	if ((mlx5dv_get_cqe_opcode(&cqe) == MLX5_CQE_REQ) &&
+	    (rte_be_to_cpu_32(cqe.byte_cnt) >> 31 == 0)) {
+		*send_attr->used_id = send_attr->id;
+		return 0;
+	}
+
+	/* Retry if rule failed */
+	if (send_attr->retry_id) {
+		wqe_ctrl.imm = rte_cpu_to_be_32(send_attr->retry_id);
+		send_attr->id = send_attr->retry_id;
+		send_attr->retry_id = 0;
+		goto send_wqe;
+	}
+
+	return -1;
+}
+
+void mlx5dr_send_stes_fw(struct mlx5dr_send_engine *queue,
+			 struct mlx5dr_send_ste_attr *ste_attr)
+{
+	struct mlx5dr_send_engine_post_attr *send_attr = &ste_attr->send_attr;
+	struct mlx5dr_rule *rule = send_attr->rule;
+	struct ibv_context *ibv_ctx;
+	struct mlx5dr_context *ctx;
+	uint16_t queue_id;
+	uint32_t pdn;
+	int ret;
+
+	ctx = rule->matcher->tbl->ctx;
+	queue_id = queue - ctx->send_queue;
+	ibv_ctx = ctx->ibv_ctx;
+	pdn = ctx->pd_num;
+
+	/* Writing through FW cannot be HW fenced, therefore we drain the queue */
+	if (send_attr->fence)
+		mlx5dr_send_queue_action(ctx,
+					 queue_id,
+					 MLX5DR_SEND_QUEUE_ACTION_DRAIN_SYNC);
+
+	if (ste_attr->rtc_1) {
+		send_attr->id = ste_attr->rtc_1;
+		send_attr->used_id = ste_attr->used_id_rtc_1;
+		send_attr->retry_id = ste_attr->retry_rtc_1;
+		ret = mlx5dr_send_wqe_fw(ibv_ctx, pdn, send_attr,
+					 ste_attr->wqe_ctrl,
+					 ste_attr->wqe_data,
+					 ste_attr->wqe_tag,
+					 ste_attr->wqe_tag_is_jumbo,
+					 ste_attr->gta_opcode);
+		if (ret)
+			goto fail_rule;
+	}
+
+	if (ste_attr->rtc_0) {
+		send_attr->id = ste_attr->rtc_0;
+		send_attr->used_id = ste_attr->used_id_rtc_0;
+		send_attr->retry_id = ste_attr->retry_rtc_0;
+		ret = mlx5dr_send_wqe_fw(ibv_ctx, pdn, send_attr,
+					 ste_attr->wqe_ctrl,
+					 ste_attr->wqe_data,
+					 ste_attr->wqe_tag,
+					 ste_attr->wqe_tag_is_jumbo,
+					 ste_attr->gta_opcode);
+		if (ret)
+			goto fail_rule;
+	}
+
+	/* Increase the status; this only works on the good flow since the enum
+	 * is ordered: creating -> created -> deleting -> deleted
+	 */
+	rule->status++;
+	mlx5dr_send_engine_gen_comp(queue, send_attr->user_data, RTE_FLOW_OP_SUCCESS);
+	return;
+
+fail_rule:
+	rule->status = !rule->rtc_0 && !rule->rtc_1 ?
+		MLX5DR_RULE_STATUS_FAILED : MLX5DR_RULE_STATUS_FAILING;
+	mlx5dr_send_engine_gen_comp(queue, send_attr->user_data, RTE_FLOW_OP_ERROR);
+}
+
 static void mlx5dr_send_engine_retry_post_send(struct mlx5dr_send_engine *queue,
 					       struct mlx5dr_send_ring_priv *priv,
 					       uint16_t wqe_cnt)
diff --git a/drivers/net/mlx5/hws/mlx5dr_send.h b/drivers/net/mlx5/hws/mlx5dr_send.h
index fcddcc6366..1e845b1c7a 100644
--- a/drivers/net/mlx5/hws/mlx5dr_send.h
+++ b/drivers/net/mlx5/hws/mlx5dr_send.h
@@ -52,7 +52,8 @@ struct mlx5dr_wqe_gta_ctrl_seg {
 
 struct mlx5dr_wqe_gta_data_seg_ste {
 	__be32 rsvd0_ctr_id;
-	__be32 rsvd1[4];
+	__be32 rsvd1_definer;
+	__be32 rsvd2[3];
 	__be32 action[3];
 	__be32 tag[8];
 };
@@ -159,6 +160,7 @@ struct mlx5dr_send_engine_post_attr {
 	uint8_t opmod;
 	uint8_t notify_hw;
 	uint8_t fence;
+	uint8_t match_definer_id;
 	size_t len;
 	struct mlx5dr_rule *rule;
 	uint32_t id;
@@ -238,6 +240,9 @@ void mlx5dr_send_engine_post_end(struct mlx5dr_send_engine_post_ctrl *ctrl,
 void mlx5dr_send_ste(struct mlx5dr_send_engine *queue,
 		     struct mlx5dr_send_ste_attr *ste_attr);
 
+void mlx5dr_send_stes_fw(struct mlx5dr_send_engine *queue,
+			 struct mlx5dr_send_ste_attr *ste_attr);
+
 void mlx5dr_send_engine_flush_queue(struct mlx5dr_send_engine *queue);
 
 static inline bool mlx5dr_send_engine_empty(struct mlx5dr_send_engine *queue)
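
---

For readers following the logic rather than the diff, here is a minimal,
standalone C sketch of the retry-and-complete flow the commit message
describes. It is not part of the patch: fw_post_wqe(), struct
post_attr_stub and send_wqe_fw_stub() are hypothetical stand-ins for
mlx5dr_cmd_generate_wqe(), struct mlx5dr_send_engine_post_attr and
mlx5dr_send_wqe_fw(), and the CQE status check is reduced to a boolean.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct post_attr_stub {
	uint32_t id;        /* primary resource id to write to */
	uint32_t retry_id;  /* fallback id, 0 if no retry is possible */
	uint32_t *used_id;  /* records which id was actually written */
};

/* Hypothetical stand-in for the FW command: returns 0 and reports the
 * completion status in *cqe_ok. */
static int fw_post_wqe(uint32_t id, bool *cqe_ok)
{
	/* Pretend the first id fails and the retry id succeeds. */
	*cqe_ok = (id % 2 == 0);
	return 0;
}

/* Mirrors the shape of mlx5dr_send_wqe_fw(): FW completes synchronously,
 * so on a bad completion we retry immediately with the alternate id
 * instead of waiting for an asynchronous CQE as the HW path does. */
static int send_wqe_fw_stub(struct post_attr_stub *attr)
{
	bool ok;

send_wqe:
	if (fw_post_wqe(attr->id, &ok))
		return -1;              /* the command itself failed */

	if (ok) {
		*attr->used_id = attr->id;
		return 0;
	}

	if (attr->retry_id) {           /* retry once on a bad completion */
		attr->id = attr->retry_id;
		attr->retry_id = 0;
		goto send_wqe;
	}
	return -1;
}

int main(void)
{
	uint32_t used = 0;
	struct post_attr_stub attr = { .id = 1, .retry_id = 2, .used_id = &used };

	/* The caller generates the completion itself, much like
	 * mlx5dr_send_stes_fw() does via mlx5dr_send_engine_gen_comp(). */
	if (send_wqe_fw_stub(&attr) == 0)
		printf("completion: success, wrote id %u\n", used);
	else
		printf("completion: error\n");
	return 0;
}

The point mirrored here is that the FW path completes synchronously: both
the retry with the alternate id and the user-visible completion are
produced by the caller, whereas the HW path relies on asynchronous CQEs.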