From patchwork Tue Nov 23 18:38:03 2021
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 104623
X-Patchwork-Delegate: rasland@nvidia.com
Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko, Michael Baum
Subject: [PATCH 1/3] common/mlx5: add min WQE size for striding RQ
Date: Tue, 23 Nov 2021 20:38:03 +0200
Message-ID: <20211123183805.2905792-2-michaelba@nvidia.com>
In-Reply-To: <20211123183805.2905792-1-michaelba@nvidia.com>
References: <20211123183805.2905792-1-michaelba@nvidia.com>
List-Id: DPDK patches and discussions

From: Michael Baum

Some devices have a WQE size limit for striding RQ. On some newer devices
this limit is smaller, and its value is reported by the firmware.

This patch adds a query for this attribute from firmware: the minimum
required size of a WQE in a striding RQ, in byte granularity.
Cc: stable@dpdk.org
Signed-off-by: Michael Baum
Acked-by: Matan Azrad
---
 drivers/common/mlx5/mlx5_devx_cmds.c | 16 ++++++++++++++++
 drivers/common/mlx5/mlx5_devx_cmds.h |  1 +
 drivers/common/mlx5/mlx5_prm.h       | 11 +++++++++--
 3 files changed, 26 insertions(+), 2 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index e52b995ee3..a8efdbe1ae 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -823,6 +823,7 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
 {
 	uint32_t in[MLX5_ST_SZ_DW(query_hca_cap_in)] = {0};
 	uint32_t out[MLX5_ST_SZ_DW(query_hca_cap_out)] = {0};
+	bool hca_cap_2_sup;
 	uint64_t general_obj_types_supported = 0;
 	void *hcattr;
 	int rc, i;
@@ -832,6 +833,7 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
 			MLX5_HCA_CAP_OPMOD_GET_CUR);
 	if (!hcattr)
 		return rc;
+	hca_cap_2_sup = MLX5_GET(cmd_hca_cap, hcattr, hca_cap_2);
 	attr->max_wqe_sz_sq = MLX5_GET(cmd_hca_cap, hcattr, max_wqe_sz_sq);
 	attr->flow_counter_bulk_alloc_bitmap =
 			MLX5_GET(cmd_hca_cap, hcattr, flow_counter_bulk_alloc);
@@ -967,6 +969,20 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
 			general_obj_types) &
 			MLX5_GENERAL_OBJ_TYPES_CAP_CONN_TRACK_OFFLOAD);
 	attr->rq_delay_drop = MLX5_GET(cmd_hca_cap, hcattr, rq_delay_drop);
+	if (hca_cap_2_sup) {
+		hcattr = mlx5_devx_get_hca_cap(ctx, in, out, &rc,
+				MLX5_GET_HCA_CAP_OP_MOD_GENERAL_DEVICE_2 |
+				MLX5_HCA_CAP_OPMOD_GET_CUR);
+		if (!hcattr) {
+			DRV_LOG(DEBUG,
+				"Failed to query DevX HCA capabilities 2.");
+			return rc;
+		}
+		attr->log_min_stride_wqe_sz = MLX5_GET(cmd_hca_cap_2, hcattr,
+						       log_min_stride_wqe_sz);
+	}
+	if (attr->log_min_stride_wqe_sz == 0)
+		attr->log_min_stride_wqe_sz = MLX5_MPRQ_LOG_MIN_STRIDE_WQE_SIZE;
 	if (attr->qos.sup) {
 		hcattr = mlx5_devx_get_hca_cap(ctx, in, out, &rc,
 				MLX5_GET_HCA_CAP_OP_MOD_QOS_CAP |
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index d7f71646a3..37821b493e 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -251,6 +251,7 @@ struct mlx5_hca_attr {
 	uint32_t log_max_mmo_decompress:5;
 	uint32_t umr_modify_entity_size_disabled:1;
 	uint32_t umr_indirect_mkey_disabled:1;
+	uint32_t log_min_stride_wqe_sz:5;
 	uint16_t max_wqe_sz_sq;
 };
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 2ded67e85e..8a7cb0e673 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -264,6 +264,9 @@
 /* The maximum log value of segments per RQ WQE. */
 #define MLX5_MAX_LOG_RQ_SEGS 5u

+/* Log 2 of the default size of a WQE for Multi-Packet RQ. */
+#define MLX5_MPRQ_LOG_MIN_STRIDE_WQE_SIZE 14U
+
 /* The alignment needed for WQ buffer. */
 #define MLX5_WQE_BUF_ALIGNMENT rte_mem_page_size()

@@ -1342,7 +1345,9 @@ enum {
 #define MLX5_STEERING_LOGIC_FORMAT_CONNECTX_6DX 0x1

 struct mlx5_ifc_cmd_hca_cap_bits {
-	u8 reserved_at_0[0x30];
+	u8 reserved_at_0[0x20];
+	u8 hca_cap_2[0x1];
+	u8 reserved_at_21[0xf];
 	u8 vhca_id[0x10];
 	u8 reserved_at_40[0x20];
 	u8 reserved_at_60[0x3];
@@ -1909,7 +1914,8 @@ struct mlx5_ifc_cmd_hca_cap_2_bits {
 	u8 max_reformat_insert_offset[0x8];
 	u8 max_reformat_remove_size[0x8];
 	u8 max_reformat_remove_offset[0x8]; /* End of DW6. */
-	u8 aso_conntrack_reg_id[0x8];
+	u8 reserved_at_c0[0x3];
+	u8 log_min_stride_wqe_sz[0x5];
 	u8 reserved_at_c8[0x3];
 	u8 log_conn_track_granularity[0x5];
 	u8 reserved_at_d0[0x3];
@@ -1922,6 +1928,7 @@ struct mlx5_ifc_cmd_hca_cap_2_bits {

 union mlx5_ifc_hca_cap_union_bits {
 	struct mlx5_ifc_cmd_hca_cap_bits cmd_hca_cap;
+	struct mlx5_ifc_cmd_hca_cap_2_bits cmd_hca_cap_2;
 	struct mlx5_ifc_per_protocol_networking_offload_caps_bits
 	       per_protocol_networking_offload_caps;
 	struct mlx5_ifc_qos_cap_bits qos_cap;

From patchwork Tue Nov 23 18:38:04 2021
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 104624
X-Patchwork-Delegate: rasland@nvidia.com
Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko, Michael Baum
Subject: [PATCH 2/3] net/mlx5: improve stride parameter names
Date: Tue, 23 Nov 2021 20:38:04 +0200
Message-ID: <20211123183805.2905792-3-michaelba@nvidia.com>
In-Reply-To: <20211123183805.2905792-1-michaelba@nvidia.com>
References: <20211123183805.2905792-1-michaelba@nvidia.com>
From: Michael Baum

Striding RQ management has two important parameters: the size of a single
stride in bytes and the number of strides. Both the data-path structure and
the configuration structure store the log of these parameters, but nothing
in the field names indicates that the value is a log, which can mislead
readers into thinking the fields hold the values themselves.

This patch renames the fields to describe the values more accurately.

Cc: stable@dpdk.org
Signed-off-by: Michael Baum
Acked-by: Matan Azrad
---
 drivers/net/mlx5/linux/mlx5_os.c    |  38 +++++------
 drivers/net/mlx5/linux/mlx5_verbs.c |   4 +-
 drivers/net/mlx5/mlx5.c             |   4 +-
 drivers/net/mlx5/mlx5.h             |   8 +--
 drivers/net/mlx5/mlx5_defs.h        |   4 +-
 drivers/net/mlx5/mlx5_devx.c        |   4 +-
 drivers/net/mlx5/mlx5_rx.c          |  22 +++---
 drivers/net/mlx5/mlx5_rx.h          |  12 ++--
 drivers/net/mlx5/mlx5_rxq.c         | 102 +++++++++++++-------------
 drivers/net/mlx5/mlx5_rxtx_vec.c    |   8 +--
 10 files changed, 106 insertions(+), 100 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index c29fe3d92b..70472efc29 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1549,34 +1549,34 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	DRV_LOG(DEBUG, "FCS stripping configuration is %ssupported",
 		(config->hw_fcs_strip ? "" : "not "));
 	if (config->mprq.enabled && mprq) {
-		if (config->mprq.stride_num_n &&
-		    (config->mprq.stride_num_n > mprq_max_stride_num_n ||
-		     config->mprq.stride_num_n < mprq_min_stride_num_n)) {
-			config->mprq.stride_num_n =
-				RTE_MIN(RTE_MAX(MLX5_MPRQ_STRIDE_NUM_N,
-						mprq_min_stride_num_n),
-					mprq_max_stride_num_n);
+		if (config->mprq.log_stride_num &&
+		    (config->mprq.log_stride_num > mprq_max_stride_num_n ||
+		     config->mprq.log_stride_num < mprq_min_stride_num_n)) {
+			config->mprq.log_stride_num =
+				RTE_MIN(RTE_MAX(MLX5_MPRQ_DEFAULT_LOG_STRIDE_NUM,
+						mprq_min_stride_num_n),
+					mprq_max_stride_num_n);
 			DRV_LOG(WARNING,
 				"the number of strides"
 				" for Multi-Packet RQ is out of range,"
 				" setting default value (%u)",
-				1 << config->mprq.stride_num_n);
-		}
-		if (config->mprq.stride_size_n &&
-		    (config->mprq.stride_size_n > mprq_max_stride_size_n ||
-		     config->mprq.stride_size_n < mprq_min_stride_size_n)) {
-			config->mprq.stride_size_n =
-				RTE_MIN(RTE_MAX(MLX5_MPRQ_STRIDE_SIZE_N,
-						mprq_min_stride_size_n),
-					mprq_max_stride_size_n);
+				1 << config->mprq.log_stride_num);
+		}
+		if (config->mprq.log_stride_size &&
+		    (config->mprq.log_stride_size > mprq_max_stride_size_n ||
+		     config->mprq.log_stride_size < mprq_min_stride_size_n)) {
+			config->mprq.log_stride_size =
+				RTE_MIN(RTE_MAX(MLX5_MPRQ_DEFAULT_LOG_STRIDE_SIZE,
+						mprq_min_stride_size_n),
+					mprq_max_stride_size_n);
 			DRV_LOG(WARNING,
 				"the size of a stride"
 				" for Multi-Packet RQ is out of range,"
 				" setting default value (%u)",
-				1 << config->mprq.stride_size_n);
+				1 << config->mprq.log_stride_size);
 		}
-		config->mprq.min_stride_size_n = mprq_min_stride_size_n;
-		config->mprq.max_stride_size_n = mprq_max_stride_size_n;
+		config->mprq.log_min_stride_size = mprq_min_stride_size_n;
+		config->mprq.log_max_stride_size = mprq_max_stride_size_n;
 	} else if (config->mprq.enabled && !mprq) {
 		DRV_LOG(WARNING, "Multi-Packet RQ isn't supported");
 		config->mprq.enabled = 0;
diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index 58556d2bf0..2b6eef44a7 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -272,8 +272,8 @@ mlx5_rxq_ibv_wq_create(struct mlx5_rxq_priv *rxq)
 		wq_attr.mlx5.comp_mask |= MLX5DV_WQ_INIT_ATTR_MASK_STRIDING_RQ;
 		*mprq_attr = (struct mlx5dv_striding_rq_init_attr){
-			.single_stride_log_num_of_bytes = rxq_data->strd_sz_n,
-			.single_wqe_log_num_of_strides = rxq_data->strd_num_n,
+			.single_stride_log_num_of_bytes = rxq_data->log_strd_sz,
+			.single_wqe_log_num_of_strides = rxq_data->log_strd_num,
 			.two_byte_shift_en = MLX5_MPRQ_TWO_BYTE_SHIFT,
 		};
 	}
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 4e04817d11..8c654045c6 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1884,9 +1884,9 @@ mlx5_args_check(const char *key, const char *val, void *opaque)
 	} else if (strcmp(MLX5_RX_MPRQ_EN, key) == 0) {
 		config->mprq.enabled = !!tmp;
 	} else if (strcmp(MLX5_RX_MPRQ_LOG_STRIDE_NUM, key) == 0) {
-		config->mprq.stride_num_n = tmp;
+		config->mprq.log_stride_num = tmp;
 	} else if (strcmp(MLX5_RX_MPRQ_LOG_STRIDE_SIZE, key) == 0) {
-		config->mprq.stride_size_n = tmp;
+		config->mprq.log_stride_size = tmp;
 	} else if (strcmp(MLX5_RX_MPRQ_MAX_MEMCPY_LEN, key) == 0) {
 		config->mprq.max_memcpy_len = tmp;
 	} else if (strcmp(MLX5_RXQS_MIN_MPRQ, key) == 0) {
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 8466531060..4ba90db816 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -275,10 +275,10 @@ struct mlx5_dev_config {
 	unsigned int hp_delay_drop:1; /* Enable hairpin Rxq delay drop. */
 	struct {
 		unsigned int enabled:1; /* Whether MPRQ is enabled. */
-		unsigned int stride_num_n; /* Number of strides. */
-		unsigned int stride_size_n; /* Size of a stride. */
-		unsigned int min_stride_size_n; /* Min size of a stride. */
-		unsigned int max_stride_size_n; /* Max size of a stride. */
+		unsigned int log_stride_num; /* Log number of strides. */
+		unsigned int log_stride_size; /* Log size of a stride. */
+		unsigned int log_min_stride_size; /* Log min size of a stride.*/
+		unsigned int log_max_stride_size; /* Log max size of a stride.*/
 		unsigned int max_memcpy_len;
 		/* Maximum packet size to memcpy Rx packets. */
 		unsigned int min_rxqs_num;
diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index 258475ed2c..36b384fa08 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -113,10 +113,10 @@
 #define MLX5_UAR_PAGE_NUM_MASK ((MLX5_UAR_PAGE_NUM_MAX) - 1)

 /* Log 2 of the default number of strides per WQE for Multi-Packet RQ. */
-#define MLX5_MPRQ_STRIDE_NUM_N 6U
+#define MLX5_MPRQ_DEFAULT_LOG_STRIDE_NUM 6U

 /* Log 2 of the default size of a stride per WQE for Multi-Packet RQ. */
-#define MLX5_MPRQ_STRIDE_SIZE_N 11U
+#define MLX5_MPRQ_DEFAULT_LOG_STRIDE_SIZE 11U

 /* Two-byte shift is disabled for Multi-Packet RQ. */
 #define MLX5_MPRQ_TWO_BYTE_SHIFT 0
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 105c3d67f0..91243f684f 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -257,11 +257,11 @@ mlx5_rxq_create_devx_rq_resources(struct mlx5_rxq_priv *rxq)
 		 * 512*2^single_wqe_log_num_of_strides.
 		 */
 		rq_attr.wq_attr.single_wqe_log_num_of_strides =
-				rxq_data->strd_num_n -
+				rxq_data->log_strd_num -
 				MLX5_MIN_SINGLE_WQE_LOG_NUM_STRIDES;
 		/* Stride size = (2^single_stride_log_num_of_bytes)*64B. */
 		rq_attr.wq_attr.single_stride_log_num_of_bytes =
-				rxq_data->strd_sz_n -
+				rxq_data->log_strd_sz -
 				MLX5_MIN_SINGLE_STRIDE_LOG_NUM_BYTES;
 		wqe_size = sizeof(struct mlx5_wqe_mprq);
 	} else {
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index e8215f7381..6b169b33c9 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -73,7 +73,7 @@ rx_queue_count(struct mlx5_rxq_data *rxq)
 	const unsigned int cqe_n = (1 << rxq->cqe_n);
 	const unsigned int sges_n = (1 << rxq->sges_n);
 	const unsigned int elts_n = (1 << rxq->elts_n);
-	const unsigned int strd_n = (1 << rxq->strd_num_n);
+	const unsigned int strd_n = RTE_BIT32(rxq->log_strd_num);
 	const unsigned int cqe_cnt = cqe_n - 1;
 	unsigned int cq_ci, used;

@@ -167,8 +167,8 @@ mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 	qinfo->conf.offloads = dev->data->dev_conf.rxmode.offloads;
 	qinfo->scattered_rx = dev->data->scattered_rx;
 	qinfo->nb_desc = mlx5_rxq_mprq_enabled(rxq) ?
-		(1 << rxq->elts_n) * (1 << rxq->strd_num_n) :
-		(1 << rxq->elts_n);
+		RTE_BIT32(rxq->elts_n) * RTE_BIT32(rxq->log_strd_num) :
+		RTE_BIT32(rxq->elts_n);
 }

 /**
@@ -354,10 +354,10 @@ mlx5_rxq_initialize(struct mlx5_rxq_data *rxq)
 			scat = &((volatile struct mlx5_wqe_mprq *)
 				 rxq->wqes)[i].dseg;
-			addr = (uintptr_t)mlx5_mprq_buf_addr(buf,
-					1 << rxq->strd_num_n);
-			byte_count = (1 << rxq->strd_sz_n) *
-					(1 << rxq->strd_num_n);
+			addr = (uintptr_t)mlx5_mprq_buf_addr
+					(buf, RTE_BIT32(rxq->log_strd_num));
+			byte_count = RTE_BIT32(rxq->log_strd_sz) *
+				     RTE_BIT32(rxq->log_strd_num);
 			lkey = mlx5_rx_addr2mr(rxq, addr);
 		} else {
 			struct rte_mbuf *buf = (*rxq->elts)[i];
@@ -383,7 +383,7 @@ mlx5_rxq_initialize(struct mlx5_rxq_data *rxq)
 		.ai = 0,
 	};
 	rxq->elts_ci = mlx5_rxq_mprq_enabled(rxq) ?
-		(wqe_n >> rxq->sges_n) * (1 << rxq->strd_num_n) : 0;
+		(wqe_n >> rxq->sges_n) * RTE_BIT32(rxq->log_strd_num) : 0;
 	/* Update doorbell counter. */
 	rxq->rq_ci = wqe_n >> rxq->sges_n;
 	rte_io_wmb();
@@ -412,7 +412,7 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec)
 	const uint16_t cqe_n = 1 << rxq->cqe_n;
 	const uint16_t cqe_mask = cqe_n - 1;
 	const uint16_t wqe_n = 1 << rxq->elts_n;
-	const uint16_t strd_n = 1 << rxq->strd_num_n;
+	const uint16_t strd_n = RTE_BIT32(rxq->log_strd_num);
 	struct mlx5_rxq_ctrl *rxq_ctrl =
 			container_of(rxq, struct mlx5_rxq_ctrl, rxq);
 	union {
@@ -1045,8 +1045,8 @@ uint16_t
 mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 {
 	struct mlx5_rxq_data *rxq = dpdk_rxq;
-	const uint32_t strd_n = 1 << rxq->strd_num_n;
-	const uint32_t strd_sz = 1 << rxq->strd_sz_n;
+	const uint32_t strd_n = RTE_BIT32(rxq->log_strd_num);
+	const uint32_t strd_sz = RTE_BIT32(rxq->log_strd_sz);
 	const uint32_t cq_mask = (1 << rxq->cqe_n) - 1;
 	const uint32_t wq_mask = (1 << rxq->elts_n) - 1;
 	volatile struct mlx5_cqe *cqe = &(*rxq->cqes)[rxq->cq_ci & cq_mask];
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 9cc1a2703b..4651826a1d 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -88,8 +88,8 @@ struct mlx5_rxq_data {
 	unsigned int elts_n:4; /* Log 2 of Mbufs. */
 	unsigned int rss_hash:1; /* RSS hash result is enabled. */
 	unsigned int mark:1; /* Marked flow available on the queue. */
-	unsigned int strd_num_n:5; /* Log 2 of the number of stride. */
-	unsigned int strd_sz_n:4; /* Log 2 of stride size. */
+	unsigned int log_strd_num:5; /* Log 2 of the number of stride. */
+	unsigned int log_strd_sz:4; /* Log 2 of stride size. */
 	unsigned int strd_shift_en:1; /* Enable 2bytes shift on a stride. */
 	unsigned int err_state:2; /* enum mlx5_rxq_err_state. */
 	unsigned int strd_scatter_en:1; /* Scattered packets from a stride. */
@@ -395,7 +395,7 @@ mlx5_timestamp_set(struct rte_mbuf *mbuf, int offset,
 static __rte_always_inline void
 mprq_buf_replace(struct mlx5_rxq_data *rxq, uint16_t rq_idx)
 {
-	const uint32_t strd_n = 1 << rxq->strd_num_n;
+	const uint32_t strd_n = RTE_BIT32(rxq->log_strd_num);
 	struct mlx5_mprq_buf *rep = rxq->mprq_repl;
 	volatile struct mlx5_wqe_data_seg *wqe =
 		&((volatile struct mlx5_wqe_mprq *)rxq->wqes)[rq_idx].dseg;
@@ -453,8 +453,8 @@ static __rte_always_inline enum mlx5_rqx_code
 mprq_buf_to_pkt(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt, uint32_t len,
 		struct mlx5_mprq_buf *buf, uint16_t strd_idx, uint16_t strd_cnt)
 {
-	const uint32_t strd_n = 1 << rxq->strd_num_n;
-	const uint16_t strd_sz = 1 << rxq->strd_sz_n;
+	const uint32_t strd_n = RTE_BIT32(rxq->log_strd_num);
+	const uint16_t strd_sz = RTE_BIT32(rxq->log_strd_sz);
 	const uint16_t strd_shift =
 		MLX5_MPRQ_STRIDE_SHIFT_BYTE * rxq->strd_shift_en;
 	const int32_t hdrm_overlap =
@@ -599,7 +599,7 @@ mlx5_check_mprq_support(struct rte_eth_dev *dev)
 static __rte_always_inline int
 mlx5_rxq_mprq_enabled(struct mlx5_rxq_data *rxq)
 {
-	return rxq->strd_num_n > 0;
+	return rxq->log_strd_num > 0;
 }

 /**
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index e406779faf..e76bfaa000 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -67,7 +67,7 @@ mlx5_rxq_cqe_num(struct mlx5_rxq_data *rxq_data)
 	unsigned int wqe_n = 1 << rxq_data->elts_n;

 	if (mlx5_rxq_mprq_enabled(rxq_data))
-		cqe_n = wqe_n * (1 << rxq_data->strd_num_n) - 1;
+		cqe_n = wqe_n * RTE_BIT32(rxq_data->log_strd_num) - 1;
 	else
 		cqe_n = wqe_n - 1;
 	return cqe_n;
@@ -137,8 +137,9 @@ rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl)
 {
 	const unsigned int sges_n = 1 << rxq_ctrl->rxq.sges_n;
 	unsigned int elts_n = mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq) ?
-		(1 << rxq_ctrl->rxq.elts_n) * (1 << rxq_ctrl->rxq.strd_num_n) :
-		(1 << rxq_ctrl->rxq.elts_n);
+		RTE_BIT32(rxq_ctrl->rxq.elts_n) *
+		RTE_BIT32(rxq_ctrl->rxq.log_strd_num) :
+		RTE_BIT32(rxq_ctrl->rxq.elts_n);
 	unsigned int i;
 	int err;

@@ -293,8 +294,8 @@ rxq_free_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl)
 {
 	struct mlx5_rxq_data *rxq = &rxq_ctrl->rxq;
 	const uint16_t q_n = mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq) ?
-		(1 << rxq->elts_n) * (1 << rxq->strd_num_n) :
-		(1 << rxq->elts_n);
+		RTE_BIT32(rxq->elts_n) * RTE_BIT32(rxq->log_strd_num) :
+		RTE_BIT32(rxq->elts_n);
 	const uint16_t q_mask = q_n - 1;
 	uint16_t elts_ci = mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq) ?
 		rxq->elts_ci : rxq->rq_ci;
@@ -1373,8 +1374,8 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
 	unsigned int buf_len;
 	unsigned int obj_num;
 	unsigned int obj_size;
-	unsigned int strd_num_n = 0;
-	unsigned int strd_sz_n = 0;
+	unsigned int log_strd_num = 0;
+	unsigned int log_strd_sz = 0;
 	unsigned int i;
 	unsigned int n_ibv = 0;
 	int ret;
@@ -1393,16 +1394,18 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
 		n_ibv++;
 		desc += 1 << rxq->elts_n;
 		/* Get the max number of strides. */
-		if (strd_num_n < rxq->strd_num_n)
-			strd_num_n = rxq->strd_num_n;
+		if (log_strd_num < rxq->log_strd_num)
+			log_strd_num = rxq->log_strd_num;
 		/* Get the max size of a stride. */
-		if (strd_sz_n < rxq->strd_sz_n)
-			strd_sz_n = rxq->strd_sz_n;
-	}
-	MLX5_ASSERT(strd_num_n && strd_sz_n);
-	buf_len = (1 << strd_num_n) * (1 << strd_sz_n);
-	obj_size = sizeof(struct mlx5_mprq_buf) + buf_len + (1 << strd_num_n) *
-		sizeof(struct rte_mbuf_ext_shared_info) + RTE_PKTMBUF_HEADROOM;
+		if (log_strd_sz < rxq->log_strd_sz)
+			log_strd_sz = rxq->log_strd_sz;
+	}
+	MLX5_ASSERT(log_strd_num && log_strd_sz);
+	buf_len = RTE_BIT32(log_strd_num) * RTE_BIT32(log_strd_sz);
+	obj_size = sizeof(struct mlx5_mprq_buf) + buf_len +
+		   RTE_BIT32(log_strd_num) *
+		   sizeof(struct rte_mbuf_ext_shared_info) +
+		   RTE_PKTMBUF_HEADROOM;
 	/*
 	 * Received packets can be either memcpy'd or externally referenced. In
 	 * case that the packet is attached to an mbuf as an external buffer, as
@@ -1448,7 +1451,7 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
 	snprintf(name, sizeof(name), "port-%u-mprq", dev->data->port_id);
 	mp = rte_mempool_create(name, obj_num, obj_size, MLX5_MPRQ_MP_CACHE_SZ,
 				0, NULL, NULL, mlx5_mprq_buf_init,
-				(void *)((uintptr_t)1 << strd_num_n),
+				(void *)((uintptr_t)1 << log_strd_num),
 				dev->device->numa_node, 0);
 	if (mp == NULL) {
 		DRV_LOG(ERR,
@@ -1564,15 +1567,18 @@ mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 	unsigned int first_mb_free_size = mb_len - RTE_PKTMBUF_HEADROOM;
 	const int mprq_en = mlx5_check_mprq_support(dev) > 0 && n_seg == 1 &&
 		!rx_seg[0].offset && !rx_seg[0].length;
-	unsigned int mprq_stride_nums = config->mprq.stride_num_n ?
-		config->mprq.stride_num_n : MLX5_MPRQ_STRIDE_NUM_N;
-	unsigned int mprq_stride_size = non_scatter_min_mbuf_size <=
-		(1U << config->mprq.max_stride_size_n) ?
-		log2above(non_scatter_min_mbuf_size) : MLX5_MPRQ_STRIDE_SIZE_N;
-	unsigned int mprq_stride_cap = (config->mprq.stride_num_n ?
-		(1U << config->mprq.stride_num_n) : (1U << mprq_stride_nums)) *
-		(config->mprq.stride_size_n ?
-		(1U << config->mprq.stride_size_n) : (1U << mprq_stride_size));
+	unsigned int log_mprq_stride_nums = config->mprq.log_stride_num ?
+		config->mprq.log_stride_num : MLX5_MPRQ_DEFAULT_LOG_STRIDE_NUM;
+	unsigned int log_mprq_stride_size = non_scatter_min_mbuf_size <=
+		RTE_BIT32(config->mprq.log_max_stride_size) ?
+		log2above(non_scatter_min_mbuf_size) :
+		MLX5_MPRQ_DEFAULT_LOG_STRIDE_SIZE;
+	unsigned int mprq_stride_cap = (config->mprq.log_stride_num ?
+		RTE_BIT32(config->mprq.log_stride_num) :
+		RTE_BIT32(log_mprq_stride_nums)) *
+		(config->mprq.log_stride_size ?
+		RTE_BIT32(config->mprq.log_stride_size) :
+		RTE_BIT32(log_mprq_stride_size));
 	/*
 	 * Always allocate extra slots, even if eventually
 	 * the vector Rx will not be used.
@@ -1584,7 +1590,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 	tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
 		sizeof(*tmpl) + desc_n * sizeof(struct rte_mbuf *) +
 		(!!mprq_en) *
-		(desc >> mprq_stride_nums) * sizeof(struct mlx5_mprq_buf *),
+		(desc >> log_mprq_stride_nums) * sizeof(struct mlx5_mprq_buf *),
 		0, socket);
 	if (!tmpl) {
 		rte_errno = ENOMEM;
@@ -1689,37 +1695,37 @@ mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
 	 *  - MPRQ is enabled.
 	 *  - The number of descs is more than the number of strides.
	 *  - max_rx_pktlen plus overhead is less than the max size
-	 *    of a stride or mprq_stride_size is specified by a user.
+	 *    of a stride or log_mprq_stride_size is specified by a user.
	 *    Need to make sure that there are enough strides to encap
-	 *    the maximum packet size in case mprq_stride_size is set.
+	 *    the maximum packet size in case log_mprq_stride_size is set.
	 *  Otherwise, enable Rx scatter if necessary.
*/ - if (mprq_en && desc > (1U << mprq_stride_nums) && + if (mprq_en && desc > RTE_BIT32(log_mprq_stride_nums) && (non_scatter_min_mbuf_size <= - (1U << config->mprq.max_stride_size_n) || - (config->mprq.stride_size_n && + RTE_BIT32(config->mprq.log_max_stride_size) || + (config->mprq.log_stride_size && non_scatter_min_mbuf_size <= mprq_stride_cap))) { /* TODO: Rx scatter isn't supported yet. */ tmpl->rxq.sges_n = 0; /* Trim the number of descs needed. */ - desc >>= mprq_stride_nums; - tmpl->rxq.strd_num_n = config->mprq.stride_num_n ? - config->mprq.stride_num_n : mprq_stride_nums; - tmpl->rxq.strd_sz_n = config->mprq.stride_size_n ? - config->mprq.stride_size_n : mprq_stride_size; + desc >>= log_mprq_stride_nums; + tmpl->rxq.log_strd_num = config->mprq.log_stride_num ? + config->mprq.log_stride_num : log_mprq_stride_nums; + tmpl->rxq.log_strd_sz = config->mprq.log_stride_size ? + config->mprq.log_stride_size : log_mprq_stride_size; tmpl->rxq.strd_shift_en = MLX5_MPRQ_TWO_BYTE_SHIFT; tmpl->rxq.strd_scatter_en = !!(offloads & RTE_ETH_RX_OFFLOAD_SCATTER); tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(first_mb_free_size, config->mprq.max_memcpy_len); max_lro_size = RTE_MIN(max_rx_pktlen, - (1u << tmpl->rxq.strd_num_n) * - (1u << tmpl->rxq.strd_sz_n)); + RTE_BIT32(tmpl->rxq.log_strd_num) * + RTE_BIT32(tmpl->rxq.log_strd_sz)); DRV_LOG(DEBUG, "port %u Rx queue %u: Multi-Packet RQ is enabled" " strd_num_n = %u, strd_sz_n = %u", dev->data->port_id, idx, - tmpl->rxq.strd_num_n, tmpl->rxq.strd_sz_n); + tmpl->rxq.log_strd_num, tmpl->rxq.log_strd_sz); } else if (tmpl->rxq.rxseg_n == 1) { MLX5_ASSERT(max_rx_pktlen <= first_mb_free_size); tmpl->rxq.sges_n = 0; @@ -1762,15 +1768,15 @@ mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, " min_stride_sz = %u, max_stride_sz = %u).", dev->data->port_id, non_scatter_min_mbuf_size, desc, priv->rxqs_n, - config->mprq.stride_size_n ? - (1U << config->mprq.stride_size_n) : - (1U << mprq_stride_size), - config->mprq.stride_num_n ? 
- (1U << config->mprq.stride_num_n) : - (1U << mprq_stride_nums), + config->mprq.log_stride_size ? + RTE_BIT32(config->mprq.log_stride_size) : + RTE_BIT32(log_mprq_stride_size), + config->mprq.log_stride_num ? + RTE_BIT32(config->mprq.log_stride_num) : + RTE_BIT32(log_mprq_stride_nums), config->mprq.min_rxqs_num, - (1U << config->mprq.min_stride_size_n), - (1U << config->mprq.max_stride_size_n)); + RTE_BIT32(config->mprq.log_min_stride_size), + RTE_BIT32(config->mprq.log_max_stride_size)); DRV_LOG(DEBUG, "port %u maximum number of segments per packet: %u", dev->data->port_id, 1 << tmpl->rxq.sges_n); if (desc % (1 << tmpl->rxq.sges_n)) { diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c index 6212ce8247..0e2eab068a 100644 --- a/drivers/net/mlx5/mlx5_rxtx_vec.c +++ b/drivers/net/mlx5/mlx5_rxtx_vec.c @@ -148,7 +148,7 @@ static inline void mlx5_rx_mprq_replenish_bulk_mbuf(struct mlx5_rxq_data *rxq) { const uint16_t wqe_n = 1 << rxq->elts_n; - const uint32_t strd_n = 1 << rxq->strd_num_n; + const uint32_t strd_n = RTE_BIT32(rxq->log_strd_num); const uint32_t elts_n = wqe_n * strd_n; const uint32_t wqe_mask = elts_n - 1; uint32_t n = elts_n - (rxq->elts_ci - rxq->rq_pi); @@ -197,8 +197,8 @@ rxq_copy_mprq_mbuf_v(struct mlx5_rxq_data *rxq, { const uint16_t wqe_n = 1 << rxq->elts_n; const uint16_t wqe_mask = wqe_n - 1; - const uint16_t strd_sz = 1 << rxq->strd_sz_n; - const uint32_t strd_n = 1 << rxq->strd_num_n; + const uint16_t strd_sz = RTE_BIT32(rxq->log_strd_sz); + const uint32_t strd_n = RTE_BIT32(rxq->log_strd_num); const uint32_t elts_n = wqe_n * strd_n; const uint32_t elts_mask = elts_n - 1; uint32_t elts_idx = rxq->rq_pi & elts_mask; @@ -428,7 +428,7 @@ rxq_burst_mprq_v(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts, const uint16_t q_n = 1 << rxq->cqe_n; const uint16_t q_mask = q_n - 1; const uint16_t wqe_n = 1 << rxq->elts_n; - const uint32_t strd_n = 1 << rxq->strd_num_n; + const uint32_t strd_n = RTE_BIT32(rxq->log_strd_num); 
 const uint32_t elts_n = wqe_n * strd_n;
 const uint32_t elts_mask = elts_n - 1;
 volatile struct mlx5_cqe *cq;

From patchwork Tue Nov 23 18:38:05 2021
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 104625
X-Patchwork-Delegate: rasland@nvidia.com
From:
To:
CC: Matan Azrad , Raslan Darawsheh , Viacheslav Ovsiienko , Michael Baum ,
Subject: [PATCH 3/3] net/mlx5: fix missing adjustment MPRQ stride devargs
Date: Tue, 23 Nov 2021 20:38:05 +0200
Message-ID: <20211123183805.2905792-4-michaelba@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20211123183805.2905792-1-michaelba@nvidia.com>
References: <20211123183805.2905792-1-michaelba@nvidia.com>
MIME-Version: 1.0
From: Michael Baum

In Multi-Packet RQ creation, the user can choose the number of strides and their size in bytes, setting each of these parameters through a dedicated devarg. Together, the two parameters determine the WQE size, which is their product. If the user selects values outside the supported range, the PMD replaces them with default values. However, besides the range limitation on each parameter individually, there is also a minimum value for their product. When the user selects values whose product is below this minimum, no adjustment is made and the WQE creation fails.

This patch adds an adjustment for these cases as well: when the user selects values whose product is below the minimum, both are replaced with the default values.
Fixes: ecb160456aed ("net/mlx5: add device parameter for MPRQ stride size") Cc: stable@dpdk.org Signed-off-by: Michael Baum Acked-by: Matan Azrad --- drivers/net/mlx5/linux/mlx5_os.c | 56 +++------ drivers/net/mlx5/mlx5.h | 4 + drivers/net/mlx5/mlx5_rxq.c | 209 +++++++++++++++++++++---------- 3 files changed, 159 insertions(+), 110 deletions(-) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 70472efc29..3e496d68ea 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -881,10 +881,6 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, unsigned int mpls_en = 0; unsigned int swp = 0; unsigned int mprq = 0; - unsigned int mprq_min_stride_size_n = 0; - unsigned int mprq_max_stride_size_n = 0; - unsigned int mprq_min_stride_num_n = 0; - unsigned int mprq_max_stride_num_n = 0; struct rte_ether_addr mac; char name[RTE_ETH_NAME_MAX_LEN]; int own_domain_id = 0; @@ -1039,15 +1035,17 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, mprq_caps.max_single_wqe_log_num_of_strides); DRV_LOG(DEBUG, "\tsupported_qpts: %d", mprq_caps.supported_qpts); + DRV_LOG(DEBUG, "\tmin_stride_wqe_log_size: %d", + config->mprq.log_min_stride_wqe_size); DRV_LOG(DEBUG, "device supports Multi-Packet RQ"); mprq = 1; - mprq_min_stride_size_n = + config->mprq.log_min_stride_size = mprq_caps.min_single_stride_log_num_of_bytes; - mprq_max_stride_size_n = + config->mprq.log_max_stride_size = mprq_caps.max_single_stride_log_num_of_bytes; - mprq_min_stride_num_n = + config->mprq.log_min_stride_num = mprq_caps.min_single_wqe_log_num_of_strides; - mprq_max_stride_num_n = + config->mprq.log_max_stride_num = mprq_caps.max_single_wqe_log_num_of_strides; } #endif @@ -1548,36 +1546,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, config->hw_fcs_strip = 0; DRV_LOG(DEBUG, "FCS stripping configuration is %ssupported", (config->hw_fcs_strip ? 
"" : "not ")); - if (config->mprq.enabled && mprq) { - if (config->mprq.log_stride_num && - (config->mprq.log_stride_num > mprq_max_stride_num_n || - config->mprq.log_stride_num < mprq_min_stride_num_n)) { - config->mprq.log_stride_num = - RTE_MIN(RTE_MAX(MLX5_MPRQ_DEFAULT_LOG_STRIDE_NUM, - mprq_min_stride_num_n), - mprq_max_stride_num_n); - DRV_LOG(WARNING, - "the number of strides" - " for Multi-Packet RQ is out of range," - " setting default value (%u)", - 1 << config->mprq.log_stride_num); - } - if (config->mprq.log_stride_size && - (config->mprq.log_stride_size > mprq_max_stride_size_n || - config->mprq.log_stride_size < mprq_min_stride_size_n)) { - config->mprq.log_stride_size = - RTE_MIN(RTE_MAX(MLX5_MPRQ_DEFAULT_LOG_STRIDE_SIZE, - mprq_min_stride_size_n), - mprq_max_stride_size_n); - DRV_LOG(WARNING, - "the size of a stride" - " for Multi-Packet RQ is out of range," - " setting default value (%u)", - 1 << config->mprq.log_stride_size); - } - config->mprq.log_min_stride_size = mprq_min_stride_size_n; - config->mprq.log_max_stride_size = mprq_max_stride_size_n; - } else if (config->mprq.enabled && !mprq) { + if (config->mprq.enabled && !mprq) { DRV_LOG(WARNING, "Multi-Packet RQ isn't supported"); config->mprq.enabled = 0; } @@ -2068,7 +2037,8 @@ mlx5_device_bond_pci_match(const char *ibdev_name, } static void -mlx5_os_config_default(struct mlx5_dev_config *config) +mlx5_os_config_default(struct mlx5_dev_config *config, + struct mlx5_common_dev_config *cconf) { memset(config, 0, sizeof(*config)); config->mps = MLX5_ARG_UNSET; @@ -2080,6 +2050,10 @@ mlx5_os_config_default(struct mlx5_dev_config *config) config->vf_nl_en = 1; config->mprq.max_memcpy_len = MLX5_MPRQ_MEMCPY_DEFAULT_LEN; config->mprq.min_rxqs_num = MLX5_MPRQ_MIN_RXQS; + config->mprq.log_min_stride_wqe_size = cconf->devx ? 
+ cconf->hca_attr.log_min_stride_wqe_sz : + MLX5_MPRQ_LOG_MIN_STRIDE_WQE_SIZE; + config->mprq.log_stride_num = MLX5_MPRQ_DEFAULT_LOG_STRIDE_NUM; config->dv_esw_en = 1; config->dv_flow_en = 1; config->decap_en = 1; @@ -2496,7 +2470,7 @@ mlx5_os_pci_probe_pf(struct mlx5_common_device *cdev, uint32_t restore; /* Default configuration. */ - mlx5_os_config_default(&dev_config); + mlx5_os_config_default(&dev_config, &cdev->config); dev_config.vf = dev_config_vf; list[i].eth_dev = mlx5_dev_spawn(cdev->dev, &list[i], &dev_config, ð_da); @@ -2666,7 +2640,7 @@ mlx5_os_auxiliary_probe(struct mlx5_common_device *cdev) if (ret != 0) return ret; /* Set default config data. */ - mlx5_os_config_default(&config); + mlx5_os_config_default(&config, &cdev->config); config.sf = 1; /* Init spawn data. */ spawn.max_port = 1; diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 4ba90db816..c01fb9566e 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -279,6 +279,10 @@ struct mlx5_dev_config { unsigned int log_stride_size; /* Log size of a stride. */ unsigned int log_min_stride_size; /* Log min size of a stride.*/ unsigned int log_max_stride_size; /* Log max size of a stride.*/ + unsigned int log_min_stride_num; /* Log min num of strides. */ + unsigned int log_max_stride_num; /* Log max num of strides. */ + unsigned int log_min_stride_wqe_size; + /* Log min WQE size, (size of single stride)*(num of strides).*/ unsigned int max_memcpy_len; /* Maximum packet size to memcpy Rx packets. */ unsigned int min_rxqs_num; diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index e76bfaa000..891ac3d874 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -1528,6 +1528,126 @@ mlx5_max_lro_msg_size_adjust(struct rte_eth_dev *dev, uint16_t idx, priv->max_lro_msg_size * MLX5_LRO_SEG_CHUNK_SIZE); } +/** + * Prepare both size and number of stride for Multi-Packet RQ. + * + * @param dev + * Pointer to Ethernet device. 
+ * @param idx
+ *   RX queue index.
+ * @param desc
+ *   Number of descriptors to configure in queue.
+ * @param rx_seg_en
+ *   Indicator whether Rx segments are enabled; if so, Multi-Packet RQ
+ *   is not enabled.
+ * @param min_mbuf_size
+ *   Non-scatter min mbuf size, max_rx_pktlen plus overhead.
+ * @param actual_log_stride_num
+ *   Log number of strides to configure for this queue.
+ * @param actual_log_stride_size
+ *   Log stride size to configure for this queue.
+ *
+ * @return
+ *   0 if Multi-Packet RQ is supported, otherwise -1.
+ */
+static int
+mlx5_mprq_prepare(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
+		  bool rx_seg_en, uint32_t min_mbuf_size,
+		  uint32_t *actual_log_stride_num,
+		  uint32_t *actual_log_stride_size)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_dev_config *config = &priv->config;
+	uint32_t log_min_stride_num = config->mprq.log_min_stride_num;
+	uint32_t log_max_stride_num = config->mprq.log_max_stride_num;
+	uint32_t log_def_stride_num =
+			RTE_MIN(RTE_MAX(MLX5_MPRQ_DEFAULT_LOG_STRIDE_NUM,
+					log_min_stride_num),
+				log_max_stride_num);
+	uint32_t log_min_stride_size = config->mprq.log_min_stride_size;
+	uint32_t log_max_stride_size = config->mprq.log_max_stride_size;
+	uint32_t log_def_stride_size =
+			RTE_MIN(RTE_MAX(MLX5_MPRQ_DEFAULT_LOG_STRIDE_SIZE,
+					log_min_stride_size),
+				log_max_stride_size);
+	uint32_t log_stride_wqe_size;
+
+	if (mlx5_check_mprq_support(dev) != 1 || rx_seg_en)
+		goto unsupport;
+	/* Checks if chosen number of strides is in supported range.
*/ + if (config->mprq.log_stride_num > log_max_stride_num || + config->mprq.log_stride_num < log_min_stride_num) { + *actual_log_stride_num = log_def_stride_num; + DRV_LOG(WARNING, + "Port %u Rx queue %u number of strides for Multi-Packet RQ is out of range, setting default value (%u)", + dev->data->port_id, idx, RTE_BIT32(log_def_stride_num)); + } else { + *actual_log_stride_num = config->mprq.log_stride_num; + } + if (config->mprq.log_stride_size) { + /* Checks if chosen size of stride is in supported range. */ + if (config->mprq.log_stride_size > log_max_stride_size || + config->mprq.log_stride_size < log_min_stride_size) { + *actual_log_stride_size = log_def_stride_size; + DRV_LOG(WARNING, + "Port %u Rx queue %u size of a stride for Multi-Packet RQ is out of range, setting default value (%u)", + dev->data->port_id, idx, + RTE_BIT32(log_def_stride_size)); + } else { + *actual_log_stride_size = config->mprq.log_stride_size; + } + } else { + if (min_mbuf_size <= RTE_BIT32(log_max_stride_size)) + *actual_log_stride_size = log2above(min_mbuf_size); + else + goto unsupport; + } + log_stride_wqe_size = *actual_log_stride_num + *actual_log_stride_size; + /* Check if WQE buffer size is supported by hardware. 
*/
+	if (log_stride_wqe_size < config->mprq.log_min_stride_wqe_size) {
+		*actual_log_stride_num = log_def_stride_num;
+		*actual_log_stride_size = log_def_stride_size;
+		DRV_LOG(WARNING,
+			"Port %u Rx queue %u size of WQE buffer for Multi-Packet RQ is too small, setting default values (stride_num_n=%u, stride_size_n=%u)",
+			dev->data->port_id, idx, RTE_BIT32(log_def_stride_num),
+			RTE_BIT32(log_def_stride_size));
+		log_stride_wqe_size = log_def_stride_num + log_def_stride_size;
+	}
+	MLX5_ASSERT(log_stride_wqe_size >= config->mprq.log_min_stride_wqe_size);
+	if (desc <= RTE_BIT32(*actual_log_stride_num))
+		goto unsupport;
+	if (min_mbuf_size > RTE_BIT32(log_stride_wqe_size)) {
+		DRV_LOG(WARNING, "Port %u Rx queue %u "
+			"Multi-Packet RQ is unsupported, WQE buffer size (%u) "
+			"is smaller than min mbuf size (%u)",
+			dev->data->port_id, idx, RTE_BIT32(log_stride_wqe_size),
+			min_mbuf_size);
+		goto unsupport;
+	}
+	DRV_LOG(DEBUG, "Port %u Rx queue %u "
+		"Multi-Packet RQ is enabled strd_num_n = %u, strd_sz_n = %u",
+		dev->data->port_id, idx, RTE_BIT32(*actual_log_stride_num),
+		RTE_BIT32(*actual_log_stride_size));
+	return 0;
+unsupport:
+	if (config->mprq.enabled)
+		DRV_LOG(WARNING,
+			"Port %u MPRQ is requested but cannot be enabled\n"
+			" (requested: pkt_sz = %u, desc_num = %u,"
+			" rxq_num = %u, stride_sz = %u, stride_num = %u\n"
+			" supported: min_rxqs_num = %u, min_buf_wqe_sz = %u"
+			" min_stride_sz = %u, max_stride_sz = %u).\n"
+			"Rx segment is %senabled.",
+			dev->data->port_id, min_mbuf_size, desc, priv->rxqs_n,
+			RTE_BIT32(config->mprq.log_stride_size),
+			RTE_BIT32(config->mprq.log_stride_num),
+			config->mprq.min_rxqs_num,
+			RTE_BIT32(config->mprq.log_min_stride_wqe_size),
+			RTE_BIT32(config->mprq.log_min_stride_size),
+			RTE_BIT32(config->mprq.log_max_stride_size),
+			rx_seg_en ? "" : "not ");
+	return -1;
+}
+
 /**
 * Create a DPDK Rx queue.
* @@ -1565,33 +1685,28 @@ mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, RTE_PKTMBUF_HEADROOM; unsigned int max_lro_size = 0; unsigned int first_mb_free_size = mb_len - RTE_PKTMBUF_HEADROOM; - const int mprq_en = mlx5_check_mprq_support(dev) > 0 && n_seg == 1 && - !rx_seg[0].offset && !rx_seg[0].length; - unsigned int log_mprq_stride_nums = config->mprq.log_stride_num ? - config->mprq.log_stride_num : MLX5_MPRQ_DEFAULT_LOG_STRIDE_NUM; - unsigned int log_mprq_stride_size = non_scatter_min_mbuf_size <= - RTE_BIT32(config->mprq.log_max_stride_size) ? - log2above(non_scatter_min_mbuf_size) : - MLX5_MPRQ_DEFAULT_LOG_STRIDE_SIZE; - unsigned int mprq_stride_cap = (config->mprq.log_stride_num ? - RTE_BIT32(config->mprq.log_stride_num) : - RTE_BIT32(log_mprq_stride_nums)) * - (config->mprq.log_stride_size ? - RTE_BIT32(config->mprq.log_stride_size) : - RTE_BIT32(log_mprq_stride_size)); + uint32_t mprq_log_actual_stride_num = 0; + uint32_t mprq_log_actual_stride_size = 0; + bool rx_seg_en = n_seg != 1 || rx_seg[0].offset || rx_seg[0].length; + const int mprq_en = !mlx5_mprq_prepare(dev, idx, desc, rx_seg_en, + non_scatter_min_mbuf_size, + &mprq_log_actual_stride_num, + &mprq_log_actual_stride_size); /* * Always allocate extra slots, even if eventually * the vector Rx will not be used. */ uint16_t desc_n = desc + config->rx_vec_en * MLX5_VPMD_DESCS_PER_LOOP; + size_t alloc_size = sizeof(*tmpl) + desc_n * sizeof(struct rte_mbuf *); const struct rte_eth_rxseg_split *qs_seg = rx_seg; unsigned int tail_len; - tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, - sizeof(*tmpl) + desc_n * sizeof(struct rte_mbuf *) + - (!!mprq_en) * - (desc >> log_mprq_stride_nums) * sizeof(struct mlx5_mprq_buf *), - 0, socket); + if (mprq_en) { + /* Trim the number of descs needed. 
*/ + desc >>= mprq_log_actual_stride_num; + alloc_size += desc * sizeof(struct mlx5_mprq_buf *); + } + tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, alloc_size, 0, socket); if (!tmpl) { rte_errno = ENOMEM; return NULL; @@ -1689,30 +1804,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, tmpl->socket = socket; if (dev->data->dev_conf.intr_conf.rxq) tmpl->irq = 1; - /* - * This Rx queue can be configured as a Multi-Packet RQ if all of the - * following conditions are met: - * - MPRQ is enabled. - * - The number of descs is more than the number of strides. - * - max_rx_pktlen plus overhead is less than the max size - * of a stride or log_mprq_stride_size is specified by a user. - * Need to make sure that there are enough strides to encap - * the maximum packet size in case log_mprq_stride_size is set. - * Otherwise, enable Rx scatter if necessary. - */ - if (mprq_en && desc > RTE_BIT32(log_mprq_stride_nums) && - (non_scatter_min_mbuf_size <= - RTE_BIT32(config->mprq.log_max_stride_size) || - (config->mprq.log_stride_size && - non_scatter_min_mbuf_size <= mprq_stride_cap))) { + if (mprq_en) { /* TODO: Rx scatter isn't supported yet. */ tmpl->rxq.sges_n = 0; - /* Trim the number of descs needed. */ - desc >>= log_mprq_stride_nums; - tmpl->rxq.log_strd_num = config->mprq.log_stride_num ? - config->mprq.log_stride_num : log_mprq_stride_nums; - tmpl->rxq.log_strd_sz = config->mprq.log_stride_size ? 
- config->mprq.log_stride_size : log_mprq_stride_size; + tmpl->rxq.log_strd_num = mprq_log_actual_stride_num; + tmpl->rxq.log_strd_sz = mprq_log_actual_stride_size; tmpl->rxq.strd_shift_en = MLX5_MPRQ_TWO_BYTE_SHIFT; tmpl->rxq.strd_scatter_en = !!(offloads & RTE_ETH_RX_OFFLOAD_SCATTER); @@ -1721,11 +1817,6 @@ mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, max_lro_size = RTE_MIN(max_rx_pktlen, RTE_BIT32(tmpl->rxq.log_strd_num) * RTE_BIT32(tmpl->rxq.log_strd_sz)); - DRV_LOG(DEBUG, - "port %u Rx queue %u: Multi-Packet RQ is enabled" - " strd_num_n = %u, strd_sz_n = %u", - dev->data->port_id, idx, - tmpl->rxq.log_strd_num, tmpl->rxq.log_strd_sz); } else if (tmpl->rxq.rxseg_n == 1) { MLX5_ASSERT(max_rx_pktlen <= first_mb_free_size); tmpl->rxq.sges_n = 0; @@ -1759,24 +1850,6 @@ mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, tmpl->rxq.sges_n = sges_n; max_lro_size = max_rx_pktlen; } - if (config->mprq.enabled && !mlx5_rxq_mprq_enabled(&tmpl->rxq)) - DRV_LOG(WARNING, - "port %u MPRQ is requested but cannot be enabled\n" - " (requested: pkt_sz = %u, desc_num = %u," - " rxq_num = %u, stride_sz = %u, stride_num = %u\n" - " supported: min_rxqs_num = %u," - " min_stride_sz = %u, max_stride_sz = %u).", - dev->data->port_id, non_scatter_min_mbuf_size, - desc, priv->rxqs_n, - config->mprq.log_stride_size ? - RTE_BIT32(config->mprq.log_stride_size) : - RTE_BIT32(log_mprq_stride_size), - config->mprq.log_stride_num ? - RTE_BIT32(config->mprq.log_stride_num) : - RTE_BIT32(log_mprq_stride_nums), - config->mprq.min_rxqs_num, - RTE_BIT32(config->mprq.log_min_stride_size), - RTE_BIT32(config->mprq.log_max_stride_size)); DRV_LOG(DEBUG, "port %u maximum number of segments per packet: %u", dev->data->port_id, 1 << tmpl->rxq.sges_n); if (desc % (1 << tmpl->rxq.sges_n)) { @@ -1834,17 +1907,15 @@ mlx5_rxq_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq, dev->data->port_id, tmpl->rxq.crc_present ? 
"disabled" : "enabled", tmpl->rxq.crc_present << 2); - /* Save port ID. */ tmpl->rxq.rss_hash = !!priv->rss_conf.rss_hf && (!!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS)); + /* Save port ID. */ tmpl->rxq.port_id = dev->data->port_id; tmpl->sh = priv->sh; tmpl->rxq.mp = rx_seg[0].mp; tmpl->rxq.elts_n = log2above(desc); - tmpl->rxq.rq_repl_thresh = - MLX5_VPMD_RXQ_RPLNSH_THRESH(desc_n); - tmpl->rxq.elts = - (struct rte_mbuf *(*)[desc_n])(tmpl + 1); + tmpl->rxq.rq_repl_thresh = MLX5_VPMD_RXQ_RPLNSH_THRESH(desc_n); + tmpl->rxq.elts = (struct rte_mbuf *(*)[desc_n])(tmpl + 1); tmpl->rxq.mprq_bufs = (struct mlx5_mprq_buf *(*)[desc])(*tmpl->rxq.elts + desc_n); tmpl->rxq.idx = idx;