From patchwork Thu Jan 27 15:39:45 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Michael Baum <michaelba@nvidia.com>
X-Patchwork-Id: 106634
X-Patchwork-Delegate: rasland@nvidia.com
From: Michael Baum <michaelba@nvidia.com>
To: dev@dpdk.org
Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Subject: [PATCH 15/20] net/mlx5: concentrate all device configurations
Date: Thu, 27 Jan 2022 17:39:45 +0200
Message-ID: <20220127153950.812953-16-michaelba@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220127153950.812953-1-michaelba@nvidia.com>
References: <20220127153950.812953-1-michaelba@nvidia.com>
Move all device configuration to be performed by the
mlx5_os_capabilities_prepare() function instead of the spawn function.

In addition, move all relevant fields from the mlx5_dev_config structure
to mlx5_dev_cap.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad
---
 drivers/net/mlx5/linux/mlx5_os.c      | 497 +++++++++++++-------------
 drivers/net/mlx5/linux/mlx5_vlan_os.c |   3 +-
 drivers/net/mlx5/mlx5.c               |  11 +-
 drivers/net/mlx5/mlx5.h               |  78 ++--
 drivers/net/mlx5/mlx5_devx.c          |   6 +-
 drivers/net/mlx5/mlx5_ethdev.c        |  12 +-
 drivers/net/mlx5/mlx5_flow.c          |   4 +-
 drivers/net/mlx5/mlx5_rxmode.c        |   8 +-
 drivers/net/mlx5/mlx5_rxq.c           |  34 +-
 drivers/net/mlx5/mlx5_trigger.c       |   5 +-
 drivers/net/mlx5/mlx5_txq.c           |  36 +-
 drivers/net/mlx5/mlx5_vlan.c          |   4 +-
 drivers/net/mlx5/windows/mlx5_os.c    | 101 +++---
 13 files changed, 380 insertions(+), 419 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index b6848fc34c..13db399b5e 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -141,11 +141,12 @@ int
 mlx5_os_capabilities_prepare(struct mlx5_dev_ctx_shared *sh)
 {
 	int err;
-	struct ibv_context *ctx = sh->cdev->ctx;
+	struct mlx5_common_device *cdev = sh->cdev;
+	struct mlx5_hca_attr *hca_attr = &cdev->config.hca_attr;
 	struct ibv_device_attr_ex attr_ex;
 	struct mlx5dv_context dv_attr = { .comp_mask = 0 };
 
-	err = mlx5_glue->query_device_ex(ctx, NULL, &attr_ex);
+	err = mlx5_glue->query_device_ex(cdev->ctx, NULL, &attr_ex);
 	if (err) {
 		rte_errno = errno;
 		return -rte_errno;
@@ -159,45 +160,229 @@ mlx5_os_capabilities_prepare(struct mlx5_dev_ctx_shared *sh)
 #ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT
 	dv_attr.comp_mask |= MLX5DV_CONTEXT_MASK_STRIDING_RQ;
 #endif
-	err = mlx5_glue->dv_query_device(ctx, &dv_attr);
+	err = mlx5_glue->dv_query_device(cdev->ctx, &dv_attr);
 	if (err) {
 		rte_errno = errno;
 		return -rte_errno;
 	}
 	memset(&sh->dev_cap, 0, sizeof(struct mlx5_dev_cap));
-	sh->dev_cap.device_cap_flags_ex = attr_ex.device_cap_flags_ex;
+	if (mlx5_dev_is_pci(cdev->dev))
+		sh->dev_cap.vf = mlx5_dev_is_vf_pci(RTE_DEV_TO_PCI(cdev->dev));
+	else
+		sh->dev_cap.sf = 1;
 	sh->dev_cap.max_qp_wr = attr_ex.orig_attr.max_qp_wr;
 	sh->dev_cap.max_sge = attr_ex.orig_attr.max_sge;
 	sh->dev_cap.max_cq = attr_ex.orig_attr.max_cq;
 	sh->dev_cap.max_qp = attr_ex.orig_attr.max_qp;
-	sh->dev_cap.raw_packet_caps = attr_ex.raw_packet_caps;
-	sh->dev_cap.max_rwq_indirection_table_size =
-		attr_ex.rss_caps.max_rwq_indirection_table_size;
-	sh->dev_cap.max_tso = attr_ex.tso_caps.max_tso;
-	sh->dev_cap.tso_supported_qpts = attr_ex.tso_caps.supported_qpts;
+#ifdef HAVE_MLX5DV_DR_ACTION_DEST_DEVX_TIR
+	sh->dev_cap.dest_tir = 1;
+#endif
+#if defined(HAVE_IBV_FLOW_DV_SUPPORT) && defined(HAVE_MLX5DV_DR)
+	DRV_LOG(DEBUG, "DV flow is supported.");
+	sh->dev_cap.dv_flow_en = 1;
+#endif
+#ifdef HAVE_MLX5DV_DR_ESWITCH
+	if (hca_attr->eswitch_manager && sh->dev_cap.dv_flow_en && sh->esw_mode)
+		sh->dev_cap.dv_esw_en = 1;
+#endif
+	/*
+	 * Multi-packet send is supported by ConnectX-4 Lx PF as well
+	 * as all ConnectX-5 devices.
+	 */
+	if (dv_attr.flags & MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED) {
+		if (dv_attr.flags & MLX5DV_CONTEXT_FLAGS_ENHANCED_MPW) {
+			DRV_LOG(DEBUG, "Enhanced MPW is supported.");
+			sh->dev_cap.mps = MLX5_MPW_ENHANCED;
+		} else {
+			DRV_LOG(DEBUG, "MPW is supported.");
+			sh->dev_cap.mps = MLX5_MPW;
+		}
+	} else {
+		DRV_LOG(DEBUG, "MPW isn't supported.");
+		sh->dev_cap.mps = MLX5_MPW_DISABLED;
+	}
+#if (RTE_CACHE_LINE_SIZE == 128)
+	if (dv_attr.flags & MLX5DV_CONTEXT_FLAGS_CQE_128B_COMP)
+		sh->dev_cap.cqe_comp = 1;
+	DRV_LOG(DEBUG, "Rx CQE 128B compression is %ssupported.",
+		sh->dev_cap.cqe_comp ? "" : "not ");
+#else
+	sh->dev_cap.cqe_comp = 1;
+#endif
+#ifdef HAVE_IBV_DEVICE_MPLS_SUPPORT
+	sh->dev_cap.mpls_en =
+		((dv_attr.tunnel_offloads_caps &
+		  MLX5DV_RAW_PACKET_CAP_TUNNELED_OFFLOAD_CW_MPLS_OVER_GRE) &&
+		 (dv_attr.tunnel_offloads_caps &
+		  MLX5DV_RAW_PACKET_CAP_TUNNELED_OFFLOAD_CW_MPLS_OVER_UDP));
+	DRV_LOG(DEBUG, "MPLS over GRE/UDP tunnel offloading is %ssupported.",
+		sh->dev_cap.mpls_en ? "" : "not ");
+#else
+	DRV_LOG(WARNING,
+		"MPLS over GRE/UDP tunnel offloading disabled due to old OFED/rdma-core version or firmware configuration");
+#endif
+#if defined(HAVE_IBV_WQ_FLAG_RX_END_PADDING)
+	sh->dev_cap.hw_padding = !!attr_ex.rx_pad_end_addr_align;
+#elif defined(HAVE_IBV_WQ_FLAGS_PCI_WRITE_END_PADDING)
+	sh->dev_cap.hw_padding = !!(attr_ex.device_cap_flags_ex &
+				    IBV_DEVICE_PCI_WRITE_END_PADDING);
+#endif
+	sh->dev_cap.hw_csum =
+		!!(attr_ex.device_cap_flags_ex & IBV_DEVICE_RAW_IP_CSUM);
+	DRV_LOG(DEBUG, "Checksum offloading is %ssupported.",
+		sh->dev_cap.hw_csum ? "" : "not ");
+	sh->dev_cap.hw_vlan_strip = !!(attr_ex.raw_packet_caps &
+				       IBV_RAW_PACKET_CAP_CVLAN_STRIPPING);
+	DRV_LOG(DEBUG, "VLAN stripping is %ssupported.",
+		(sh->dev_cap.hw_vlan_strip ? "" : "not "));
+	sh->dev_cap.hw_fcs_strip = !!(attr_ex.raw_packet_caps &
+				      IBV_RAW_PACKET_CAP_SCATTER_FCS);
+#if !defined(HAVE_IBV_DEVICE_COUNTERS_SET_V42) && \
+	!defined(HAVE_IBV_DEVICE_COUNTERS_SET_V45)
+	DRV_LOG(DEBUG, "Counters are not supported.");
+#endif
+	/*
+	 * DPDK doesn't support larger/variable indirection tables.
+	 * Once DPDK supports it, take max size from device attr.
+	 */
+	sh->dev_cap.ind_table_max_size =
+		RTE_MIN(attr_ex.rss_caps.max_rwq_indirection_table_size,
+			(unsigned int)RTE_ETH_RSS_RETA_SIZE_512);
+	DRV_LOG(DEBUG, "Maximum Rx indirection table size is %u",
+		sh->dev_cap.ind_table_max_size);
+	sh->dev_cap.tso = (attr_ex.tso_caps.max_tso > 0 &&
+			   (attr_ex.tso_caps.supported_qpts &
+			    (1 << IBV_QPT_RAW_PACKET)));
+	if (sh->dev_cap.tso)
+		sh->dev_cap.tso_max_payload_sz = attr_ex.tso_caps.max_tso;
 	strlcpy(sh->dev_cap.fw_ver, attr_ex.orig_attr.fw_ver,
 		sizeof(sh->dev_cap.fw_ver));
-	sh->dev_cap.flags = dv_attr.flags;
-	sh->dev_cap.comp_mask = dv_attr.comp_mask;
 #ifdef HAVE_IBV_MLX5_MOD_SWP
-	sh->dev_cap.sw_parsing_offloads =
-		dv_attr.sw_parsing_caps.sw_parsing_offloads;
+	if (dv_attr.comp_mask & MLX5DV_CONTEXT_MASK_SWP)
+		sh->dev_cap.swp = dv_attr.sw_parsing_caps.sw_parsing_offloads &
+				  (MLX5_SW_PARSING_CAP |
+				   MLX5_SW_PARSING_CSUM_CAP |
+				   MLX5_SW_PARSING_TSO_CAP);
+	DRV_LOG(DEBUG, "SWP support: %u", sh->dev_cap.swp);
 #endif
 #ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT
-	sh->dev_cap.min_single_stride_log_num_of_bytes =
-		dv_attr.striding_rq_caps.min_single_stride_log_num_of_bytes;
-	sh->dev_cap.max_single_stride_log_num_of_bytes =
-		dv_attr.striding_rq_caps.max_single_stride_log_num_of_bytes;
-	sh->dev_cap.min_single_wqe_log_num_of_strides =
-		dv_attr.striding_rq_caps.min_single_wqe_log_num_of_strides;
-	sh->dev_cap.max_single_wqe_log_num_of_strides =
-		dv_attr.striding_rq_caps.max_single_wqe_log_num_of_strides;
-	sh->dev_cap.stride_supported_qpts =
-		dv_attr.striding_rq_caps.supported_qpts;
+	if (dv_attr.comp_mask & MLX5DV_CONTEXT_MASK_STRIDING_RQ) {
+		struct mlx5dv_striding_rq_caps *strd_rq_caps =
+			&dv_attr.striding_rq_caps;
+
+		sh->dev_cap.mprq.enabled = 1;
+		sh->dev_cap.mprq.log_min_stride_size =
+			strd_rq_caps->min_single_stride_log_num_of_bytes;
+		sh->dev_cap.mprq.log_max_stride_size =
+			strd_rq_caps->max_single_stride_log_num_of_bytes;
+		sh->dev_cap.mprq.log_min_stride_num =
+			strd_rq_caps->min_single_wqe_log_num_of_strides;
+		sh->dev_cap.mprq.log_max_stride_num =
+			strd_rq_caps->max_single_wqe_log_num_of_strides;
+		sh->dev_cap.mprq.log_min_stride_wqe_size =
+			cdev->config.devx ?
+			hca_attr->log_min_stride_wqe_sz :
+			MLX5_MPRQ_LOG_MIN_STRIDE_WQE_SIZE;
+		DRV_LOG(DEBUG, "\tmin_single_stride_log_num_of_bytes: %u",
+			sh->dev_cap.mprq.log_min_stride_size);
+		DRV_LOG(DEBUG, "\tmax_single_stride_log_num_of_bytes: %u",
+			sh->dev_cap.mprq.log_max_stride_size);
+		DRV_LOG(DEBUG, "\tmin_single_wqe_log_num_of_strides: %u",
+			sh->dev_cap.mprq.log_min_stride_num);
+		DRV_LOG(DEBUG, "\tmax_single_wqe_log_num_of_strides: %u",
+			sh->dev_cap.mprq.log_max_stride_num);
+		DRV_LOG(DEBUG, "\tmin_stride_wqe_log_size: %u",
+			sh->dev_cap.mprq.log_min_stride_wqe_size);
+		DRV_LOG(DEBUG, "\tsupported_qpts: %d",
+			strd_rq_caps->supported_qpts);
+		DRV_LOG(DEBUG, "Device supports Multi-Packet RQ.");
+	}
 #endif
 #ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT
-	sh->dev_cap.tunnel_offloads_caps = dv_attr.tunnel_offloads_caps;
+	if (dv_attr.comp_mask & MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS) {
+		sh->dev_cap.tunnel_en = dv_attr.tunnel_offloads_caps &
+					(MLX5_TUNNELED_OFFLOADS_VXLAN_CAP |
+					 MLX5_TUNNELED_OFFLOADS_GRE_CAP |
+					 MLX5_TUNNELED_OFFLOADS_GENEVE_CAP);
+	}
+	if (sh->dev_cap.tunnel_en) {
+		DRV_LOG(DEBUG, "Tunnel offloading is supported for %s%s%s",
+			sh->dev_cap.tunnel_en &
+			MLX5_TUNNELED_OFFLOADS_VXLAN_CAP ? "[VXLAN]" : "",
+			sh->dev_cap.tunnel_en &
+			MLX5_TUNNELED_OFFLOADS_GRE_CAP ? "[GRE]" : "",
+			sh->dev_cap.tunnel_en &
+			MLX5_TUNNELED_OFFLOADS_GENEVE_CAP ? "[GENEVE]" : "");
"[GENEVE]" : ""); + } else { + DRV_LOG(DEBUG, "Tunnel offloading is not supported."); + } +#else + DRV_LOG(WARNING, + "Tunnel offloading disabled due to old OFED/rdma-core version"); #endif + if (!sh->cdev->config.devx) + return 0; + /* Check capabilities for Packet Pacing. */ + DRV_LOG(DEBUG, "Timestamp counter frequency %u kHz.", + hca_attr->dev_freq_khz); + DRV_LOG(DEBUG, "Packet pacing is %ssupported.", + hca_attr->qos.packet_pacing ? "" : "not "); + DRV_LOG(DEBUG, "Cross channel ops are %ssupported.", + hca_attr->cross_channel ? "" : "not "); + DRV_LOG(DEBUG, "WQE index ignore is %ssupported.", + hca_attr->wqe_index_ignore ? "" : "not "); + DRV_LOG(DEBUG, "Non-wire SQ feature is %ssupported.", + hca_attr->non_wire_sq ? "" : "not "); + DRV_LOG(DEBUG, "Static WQE SQ feature is %ssupported (%d)", + hca_attr->log_max_static_sq_wq ? "" : "not ", + hca_attr->log_max_static_sq_wq); + DRV_LOG(DEBUG, "WQE rate PP mode is %ssupported.", + hca_attr->qos.wqe_rate_pp ? "" : "not "); + sh->dev_cap.txpp_en = hca_attr->qos.packet_pacing; + if (!hca_attr->cross_channel) { + DRV_LOG(DEBUG, + "Cross channel operations are required for packet pacing."); + sh->dev_cap.txpp_en = 0; + } + if (!hca_attr->wqe_index_ignore) { + DRV_LOG(DEBUG, + "WQE index ignore feature is required for packet pacing."); + sh->dev_cap.txpp_en = 0; + } + if (!hca_attr->non_wire_sq) { + DRV_LOG(DEBUG, + "Non-wire SQ feature is required for packet pacing."); + sh->dev_cap.txpp_en = 0; + } + if (!hca_attr->log_max_static_sq_wq) { + DRV_LOG(DEBUG, + "Static WQE SQ feature is required for packet pacing."); + sh->dev_cap.txpp_en = 0; + } + if (!hca_attr->qos.wqe_rate_pp) { + DRV_LOG(DEBUG, + "WQE rate mode is required for packet pacing."); + sh->dev_cap.txpp_en = 0; + } +#ifndef HAVE_MLX5DV_DEVX_UAR_OFFSET + DRV_LOG(DEBUG, + "DevX does not provide UAR offset, can't create queues for packet pacing."); + sh->dev_cap.txpp_en = 0; +#endif + /* Check for LRO support. */ + if (sh->dev_cap.dest_tir && sh->dev_cap.dv_flow_en && + hca_attr->lro_cap) { + /* TBD check tunnel lro caps. */ + sh->dev_cap.lro_supported = 1; + DRV_LOG(DEBUG, "Device supports LRO."); + DRV_LOG(DEBUG, + "LRO minimal size of TCP segment required for coalescing is %d bytes.", + hca_attr->lro_min_mss_size); + } + sh->dev_cap.scatter_fcs_w_decap_disable = + hca_attr->scatter_fcs_w_decap_disable; + sh->dev_cap.rq_delay_drop_en = hca_attr->rq_delay_drop; + mlx5_rt_timestamp_config(sh, hca_attr); return 0; } @@ -840,11 +1025,6 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, struct rte_eth_dev *eth_dev = NULL; struct mlx5_priv *priv = NULL; int err = 0; - unsigned int hw_padding = 0; - unsigned int mps; - unsigned int mpls_en = 0; - unsigned int swp = 0; - unsigned int mprq = 0; struct rte_ether_addr mac; char name[RTE_ETH_NAME_MAX_LEN]; int own_domain_id = 0; @@ -939,18 +1119,14 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, if (!sh) return NULL; /* Update final values for devargs before check sibling config. 
-#if !defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_MLX5DV_DR)
-	if (config->dv_flow_en) {
+	if (config->dv_flow_en && !sh->dev_cap.dv_flow_en) {
 		DRV_LOG(WARNING, "DV flow is not supported.");
 		config->dv_flow_en = 0;
 	}
-#endif
-#ifdef HAVE_MLX5DV_DR_ESWITCH
-	if (!(hca_attr->eswitch_manager && config->dv_flow_en && sh->esw_mode))
+	if (config->dv_esw_en && !sh->dev_cap.dv_esw_en) {
+		DRV_LOG(WARNING, "E-Switch DV flow is not supported.");
 		config->dv_esw_en = 0;
-#else
-	config->dv_esw_en = 0;
-#endif
+	}
 	if (config->dv_miss_info && config->dv_esw_en)
 		config->dv_xmeta_en = MLX5_XMETA_MODE_META16;
 	if (!config->dv_esw_en &&
@@ -964,93 +1140,6 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	err = mlx5_dev_check_sibling_config(sh, config, dpdk_dev);
 	if (err)
 		goto error;
-#ifdef HAVE_MLX5DV_DR_ACTION_DEST_DEVX_TIR
-	config->dest_tir = 1;
-#endif
-	/*
-	 * Multi-packet send is supported by ConnectX-4 Lx PF as well
-	 * as all ConnectX-5 devices.
-	 */
-	if (sh->dev_cap.flags & MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED) {
-		if (sh->dev_cap.flags & MLX5DV_CONTEXT_FLAGS_ENHANCED_MPW) {
-			DRV_LOG(DEBUG, "enhanced MPW is supported");
-			mps = MLX5_MPW_ENHANCED;
-		} else {
-			DRV_LOG(DEBUG, "MPW is supported");
-			mps = MLX5_MPW;
-		}
-	} else {
-		DRV_LOG(DEBUG, "MPW isn't supported");
-		mps = MLX5_MPW_DISABLED;
-	}
-#ifdef HAVE_IBV_MLX5_MOD_SWP
-	if (sh->dev_cap.comp_mask & MLX5DV_CONTEXT_MASK_SWP)
-		swp = sh->dev_cap.sw_parsing_offloads;
-	DRV_LOG(DEBUG, "SWP support: %u", swp);
-#endif
-	config->swp = swp & (MLX5_SW_PARSING_CAP | MLX5_SW_PARSING_CSUM_CAP |
-		MLX5_SW_PARSING_TSO_CAP);
-#ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT
-	if (sh->dev_cap.comp_mask & MLX5DV_CONTEXT_MASK_STRIDING_RQ) {
-		DRV_LOG(DEBUG, "\tmin_single_stride_log_num_of_bytes: %d",
-			sh->dev_cap.min_single_stride_log_num_of_bytes);
-		DRV_LOG(DEBUG, "\tmax_single_stride_log_num_of_bytes: %d",
-			sh->dev_cap.max_single_stride_log_num_of_bytes);
-		DRV_LOG(DEBUG, "\tmin_single_wqe_log_num_of_strides: %d",
-			sh->dev_cap.min_single_wqe_log_num_of_strides);
-		DRV_LOG(DEBUG, "\tmax_single_wqe_log_num_of_strides: %d",
-			sh->dev_cap.max_single_wqe_log_num_of_strides);
-		DRV_LOG(DEBUG, "\tsupported_qpts: %d",
-			sh->dev_cap.stride_supported_qpts);
-		DRV_LOG(DEBUG, "\tmin_stride_wqe_log_size: %d",
-			config->mprq.log_min_stride_wqe_size);
-		DRV_LOG(DEBUG, "device supports Multi-Packet RQ");
-		mprq = 1;
-		config->mprq.log_min_stride_size =
-			sh->dev_cap.min_single_stride_log_num_of_bytes;
-		config->mprq.log_max_stride_size =
-			sh->dev_cap.max_single_stride_log_num_of_bytes;
-		config->mprq.log_min_stride_num =
-			sh->dev_cap.min_single_wqe_log_num_of_strides;
-		config->mprq.log_max_stride_num =
-			sh->dev_cap.max_single_wqe_log_num_of_strides;
-	}
-#endif
-#ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT
-	if (sh->dev_cap.comp_mask & MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS) {
-		config->tunnel_en = sh->dev_cap.tunnel_offloads_caps &
-			(MLX5DV_RAW_PACKET_CAP_TUNNELED_OFFLOAD_VXLAN |
-			 MLX5DV_RAW_PACKET_CAP_TUNNELED_OFFLOAD_GRE |
-			 MLX5DV_RAW_PACKET_CAP_TUNNELED_OFFLOAD_GENEVE);
-	}
-	if (config->tunnel_en) {
-		DRV_LOG(DEBUG, "tunnel offloading is supported for %s%s%s",
-			config->tunnel_en &
-			MLX5DV_RAW_PACKET_CAP_TUNNELED_OFFLOAD_VXLAN ? "[VXLAN]" : "",
-			config->tunnel_en &
-			MLX5DV_RAW_PACKET_CAP_TUNNELED_OFFLOAD_GRE ? "[GRE]" : "",
-			config->tunnel_en &
-			MLX5DV_RAW_PACKET_CAP_TUNNELED_OFFLOAD_GENEVE ? "[GENEVE]" : ""
"[GENEVE]" : "" - ); - } else { - DRV_LOG(DEBUG, "tunnel offloading is not supported"); - } -#else - DRV_LOG(WARNING, - "tunnel offloading disabled due to old OFED/rdma-core version"); -#endif -#ifdef HAVE_IBV_DEVICE_MPLS_SUPPORT - mpls_en = ((sh->dev_cap.tunnel_offloads_caps & - MLX5DV_RAW_PACKET_CAP_TUNNELED_OFFLOAD_CW_MPLS_OVER_GRE) && - (sh->dev_cap.tunnel_offloads_caps & - MLX5DV_RAW_PACKET_CAP_TUNNELED_OFFLOAD_CW_MPLS_OVER_UDP)); - DRV_LOG(DEBUG, "MPLS over GRE/UDP tunnel offloading is %ssupported", - mpls_en ? "" : "not "); -#else - DRV_LOG(WARNING, "MPLS over GRE/UDP tunnel offloading disabled due to" - " old OFED/rdma-core version or firmware configuration"); -#endif - config->mpls_en = mpls_en; nl_rdma = mlx5_nl_init(NETLINK_RDMA); /* Check port status. */ if (spawn->phys_port <= UINT8_MAX) { @@ -1203,80 +1292,40 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, DRV_LOG(DEBUG, "dev_port-%u new domain_id=%u\n", priv->dev_port, priv->domain_id); } - config->hw_csum = !!(sh->dev_cap.device_cap_flags_ex & - IBV_DEVICE_RAW_IP_CSUM); - DRV_LOG(DEBUG, "checksum offloading is %ssupported", - (config->hw_csum ? "" : "not ")); -#if !defined(HAVE_IBV_DEVICE_COUNTERS_SET_V42) && \ - !defined(HAVE_IBV_DEVICE_COUNTERS_SET_V45) - DRV_LOG(DEBUG, "counters are not supported"); -#endif - config->ind_table_max_size = - sh->dev_cap.max_rwq_indirection_table_size; - /* - * Remove this check once DPDK supports larger/variable - * indirection tables. - */ - if (config->ind_table_max_size > (unsigned int)RTE_ETH_RSS_RETA_SIZE_512) - config->ind_table_max_size = RTE_ETH_RSS_RETA_SIZE_512; - DRV_LOG(DEBUG, "maximum Rx indirection table size is %u", - config->ind_table_max_size); - config->hw_vlan_strip = !!(sh->dev_cap.raw_packet_caps & - IBV_RAW_PACKET_CAP_CVLAN_STRIPPING); - DRV_LOG(DEBUG, "VLAN stripping is %ssupported", - (config->hw_vlan_strip ? "" : "not ")); - config->hw_fcs_strip = !!(sh->dev_cap.raw_packet_caps & - IBV_RAW_PACKET_CAP_SCATTER_FCS); -#if defined(HAVE_IBV_WQ_FLAG_RX_END_PADDING) - hw_padding = !!sh->dev_cap.rx_pad_end_addr_align; -#elif defined(HAVE_IBV_WQ_FLAGS_PCI_WRITE_END_PADDING) - hw_padding = !!(sh->dev_cap.device_cap_flags_ex & - IBV_DEVICE_PCI_WRITE_END_PADDING); -#endif - if (config->hw_padding && !hw_padding) { + if (config->hw_padding && !sh->dev_cap.hw_padding) { DRV_LOG(DEBUG, "Rx end alignment padding isn't supported"); config->hw_padding = 0; } else if (config->hw_padding) { DRV_LOG(DEBUG, "Rx end alignment padding is enabled"); } - config->tso = (sh->dev_cap.max_tso > 0 && - (sh->dev_cap.tso_supported_qpts & - (1 << IBV_QPT_RAW_PACKET))); - if (config->tso) - config->tso_max_payload_sz = sh->dev_cap.max_tso; /* * MPW is disabled by default, while the Enhanced MPW is enabled * by default. */ if (config->mps == MLX5_ARG_UNSET) - config->mps = (mps == MLX5_MPW_ENHANCED) ? MLX5_MPW_ENHANCED : - MLX5_MPW_DISABLED; + config->mps = (sh->dev_cap.mps == MLX5_MPW_ENHANCED) ? + MLX5_MPW_ENHANCED : MLX5_MPW_DISABLED; else - config->mps = config->mps ? mps : MLX5_MPW_DISABLED; + config->mps = config->mps ? sh->dev_cap.mps : MLX5_MPW_DISABLED; DRV_LOG(INFO, "%sMPS is %s", config->mps == MLX5_MPW_ENHANCED ? "enhanced " : config->mps == MLX5_MPW ? "legacy " : "", config->mps != MLX5_MPW_DISABLED ? "enabled" : "disabled"); if (sh->cdev->config.devx) { sh->steering_format_version = hca_attr->steering_format_version; - /* Check for LRO support. */ - if (config->dest_tir && hca_attr->lro_cap && - config->dv_flow_en) { - /* TBD check tunnel lro caps. 
-			config->lro.supported = hca_attr->lro_cap;
-			DRV_LOG(DEBUG, "Device supports LRO");
+		/* LRO is supported only when DV flow enabled. */
+		if (sh->dev_cap.lro_supported && !config->dv_flow_en)
+			sh->dev_cap.lro_supported = 0;
+		if (sh->dev_cap.lro_supported) {
 			/*
 			 * If LRO timeout is not configured by application,
 			 * use the minimal supported value.
 			 */
-			if (!config->lro.timeout)
-				config->lro.timeout =
+			if (!config->lro_timeout)
+				config->lro_timeout =
 				       hca_attr->lro_timer_supported_periods[0];
 			DRV_LOG(DEBUG, "LRO session timeout set to %d usec",
-				config->lro.timeout);
-			DRV_LOG(DEBUG, "LRO minimal size of TCP segment "
-				"required for coalescing is %d bytes",
-				hca_attr->lro_min_mss_size);
+				config->lro_timeout);
 		}
 #if defined(HAVE_MLX5DV_DR) && \
 	(defined(HAVE_MLX5_DR_CREATE_ACTION_FLOW_METER) || \
@@ -1369,9 +1418,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 		}
 #endif
 	}
-	if (config->cqe_comp && RTE_CACHE_LINE_SIZE == 128 &&
-	    !(sh->dev_cap.flags & MLX5DV_CONTEXT_FLAGS_CQE_128B_COMP)) {
-		DRV_LOG(WARNING, "Rx CQE 128B compression is not supported");
+	if (config->cqe_comp && !sh->dev_cap.cqe_comp) {
+		DRV_LOG(WARNING, "Rx CQE 128B compression is not supported.");
 		config->cqe_comp = 0;
 	}
 	if (config->cqe_comp_fmt == MLX5_CQE_RESP_FORMAT_FTAG_STRIDX &&
@@ -1388,68 +1436,10 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	}
 	DRV_LOG(DEBUG, "Rx CQE compression is %ssupported",
 		config->cqe_comp ? "" : "not ");
-	if (config->tx_pp) {
-		DRV_LOG(DEBUG, "Timestamp counter frequency %u kHz",
-			hca_attr->dev_freq_khz);
-		DRV_LOG(DEBUG, "Packet pacing is %ssupported",
-			hca_attr->qos.packet_pacing ? "" : "not ");
-		DRV_LOG(DEBUG, "Cross channel ops are %ssupported",
-			hca_attr->cross_channel ? "" : "not ");
-		DRV_LOG(DEBUG, "WQE index ignore is %ssupported",
-			hca_attr->wqe_index_ignore ? "" : "not ");
-		DRV_LOG(DEBUG, "Non-wire SQ feature is %ssupported",
-			hca_attr->non_wire_sq ? "" : "not ");
-		DRV_LOG(DEBUG, "Static WQE SQ feature is %ssupported (%d)",
-			hca_attr->log_max_static_sq_wq ? "" : "not ",
-			hca_attr->log_max_static_sq_wq);
-		DRV_LOG(DEBUG, "WQE rate PP mode is %ssupported",
-			hca_attr->qos.wqe_rate_pp ? "" : "not ");
"" : "not "); - if (!sh->cdev->config.devx) { - DRV_LOG(ERR, "DevX is required for packet pacing"); - err = ENODEV; - goto error; - } - if (!hca_attr->qos.packet_pacing) { - DRV_LOG(ERR, "Packet pacing is not supported"); - err = ENODEV; - goto error; - } - if (!hca_attr->cross_channel) { - DRV_LOG(ERR, "Cross channel operations are" - " required for packet pacing"); - err = ENODEV; - goto error; - } - if (!hca_attr->wqe_index_ignore) { - DRV_LOG(ERR, "WQE index ignore feature is" - " required for packet pacing"); - err = ENODEV; - goto error; - } - if (!hca_attr->non_wire_sq) { - DRV_LOG(ERR, "Non-wire SQ feature is" - " required for packet pacing"); - err = ENODEV; - goto error; - } - if (!hca_attr->log_max_static_sq_wq) { - DRV_LOG(ERR, "Static WQE SQ feature is" - " required for packet pacing"); - err = ENODEV; - goto error; - } - if (!hca_attr->qos.wqe_rate_pp) { - DRV_LOG(ERR, "WQE rate mode is required" - " for packet pacing"); - err = ENODEV; - goto error; - } -#ifndef HAVE_MLX5DV_DEVX_UAR_OFFSET - DRV_LOG(ERR, "DevX does not provide UAR offset," - " can't create queues for packet pacing"); + if (config->tx_pp && !sh->dev_cap.txpp_en) { + DRV_LOG(ERR, "Packet pacing is not supported."); err = ENODEV; goto error; -#endif } if (config->std_delay_drop || config->hp_delay_drop) { if (!hca_attr->rq_delay_drop) { @@ -1460,19 +1450,19 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, priv->dev_port); } } - if (sh->cdev->config.devx) - mlx5_rt_timestamp_config(sh, config, hca_attr); /* * If HW has bug working with tunnel packet decapsulation and * scatter FCS, and decapsulation is needed, clear the hw_fcs_strip * bit. Then RTE_ETH_RX_OFFLOAD_KEEP_CRC bit will not be set anymore. */ - if (hca_attr->scatter_fcs_w_decap_disable && config->decap_en) + if (sh->dev_cap.scatter_fcs_w_decap_disable && config->decap_en) config->hw_fcs_strip = 0; + else + config->hw_fcs_strip = sh->dev_cap.hw_fcs_strip; DRV_LOG(DEBUG, "FCS stripping configuration is %ssupported", (config->hw_fcs_strip ? "" : "not ")); - if (config->mprq.enabled && !mprq) { - DRV_LOG(WARNING, "Multi-Packet RQ isn't supported"); + if (config->mprq.enabled && !sh->dev_cap.mprq.enabled) { + DRV_LOG(WARNING, "Multi-Packet RQ isn't supported."); config->mprq.enabled = 0; } if (config->max_dump_files_num == 0) @@ -1556,7 +1546,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, eth_dev->rx_queue_count = mlx5_rx_queue_count; /* Register MAC address. 
 	claim_zero(mlx5_mac_addr_add(eth_dev, &mac, 0, 0));
-	if (config->vf && config->vf_nl_en)
+	if (sh->dev_cap.vf && config->vf_nl_en)
 		mlx5_nl_mac_addr_sync(priv->nl_socket_route,
 				      mlx5_ifindex(eth_dev),
 				      eth_dev->data->mac_addrs,
@@ -1598,7 +1588,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 		if (mlx5_flex_item_port_init(eth_dev) < 0)
 			goto error;
 	}
-	if (sh->cdev->config.devx && config->dv_flow_en && config->dest_tir) {
+	if (sh->cdev->config.devx && config->dv_flow_en &&
+	    sh->dev_cap.dest_tir) {
 		priv->obj_ops = devx_obj_ops;
 		mlx5_queue_counter_id_prepare(eth_dev);
 		priv->obj_ops.lb_dummy_queue_create =
@@ -1949,8 +1940,7 @@ mlx5_device_bond_pci_match(const char *ibdev_name,
 }
 
 static void
-mlx5_os_config_default(struct mlx5_dev_config *config,
-		       struct mlx5_common_dev_config *cconf)
+mlx5_os_config_default(struct mlx5_dev_config *config)
 {
 	memset(config, 0, sizeof(*config));
 	config->mps = MLX5_ARG_UNSET;
@@ -1963,9 +1953,6 @@ mlx5_os_config_default(struct mlx5_dev_config *config,
 	config->vf_nl_en = 1;
 	config->mprq.max_memcpy_len = MLX5_MPRQ_MEMCPY_DEFAULT_LEN;
 	config->mprq.min_rxqs_num = MLX5_MPRQ_MIN_RXQS;
-	config->mprq.log_min_stride_wqe_size = cconf->devx ?
-		cconf->hca_attr.log_min_stride_wqe_sz :
-		MLX5_MPRQ_LOG_MIN_STRIDE_WQE_SIZE;
 	config->mprq.log_stride_num = MLX5_MPRQ_DEFAULT_LOG_STRIDE_NUM;
 	config->dv_esw_en = 1;
 	config->dv_flow_en = 1;
@@ -2367,8 +2354,7 @@ mlx5_os_pci_probe_pf(struct mlx5_common_device *cdev,
 		uint32_t restore;
 
 		/* Default configuration. */
-		mlx5_os_config_default(&dev_config, &cdev->config);
-		dev_config.vf = mlx5_dev_is_vf_pci(pci_dev);
+		mlx5_os_config_default(&dev_config);
 		list[i].eth_dev = mlx5_dev_spawn(cdev->dev, &list[i],
 						 &dev_config, &eth_da);
 		if (!list[i].eth_dev) {
@@ -2537,8 +2523,7 @@ mlx5_os_auxiliary_probe(struct mlx5_common_device *cdev)
 	if (ret != 0)
 		return ret;
 	/* Set default config data. */
-	mlx5_os_config_default(&config, &cdev->config);
-	config.sf = 1;
+	mlx5_os_config_default(&config);
 	/* Init spawn data. */
 	spawn.max_port = 1;
 	spawn.phys_port = 1;
@@ -2769,7 +2754,7 @@ void
 mlx5_os_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	const int vf = priv->config.vf;
+	const int vf = priv->sh->dev_cap.vf;
 
 	if (vf)
 		mlx5_nl_mac_addr_remove(priv->nl_socket_route,
@@ -2795,7 +2780,7 @@ mlx5_os_mac_addr_add(struct rte_eth_dev *dev, struct rte_ether_addr *mac,
 		     uint32_t index)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	const int vf = priv->config.vf;
+	const int vf = priv->sh->dev_cap.vf;
 	int ret = 0;
 
 	if (vf)
diff --git a/drivers/net/mlx5/linux/mlx5_vlan_os.c b/drivers/net/mlx5/linux/mlx5_vlan_os.c
index 005904bdfe..80ccd5a460 100644
--- a/drivers/net/mlx5/linux/mlx5_vlan_os.c
+++ b/drivers/net/mlx5/linux/mlx5_vlan_os.c
@@ -103,12 +103,11 @@ void *
 mlx5_vlan_vmwa_init(struct rte_eth_dev *dev, uint32_t ifindex)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_dev_config *config = &priv->config;
 	struct mlx5_nl_vlan_vmwa_context *vmwa;
 	enum rte_hypervisor hv_type;
 
 	/* Do not engage workaround over PF. */
-	if (!config->vf)
+	if (!priv->sh->dev_cap.vf)
 		return NULL;
 	/* Check whether there is desired virtual environment */
 	hv_type = rte_hypervisor_get();
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index b33dc0e7b4..bd23ce5afd 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1174,14 +1174,11 @@ mlx5_setup_tis(struct mlx5_dev_ctx_shared *sh)
  *
  * @param sh
  *   Pointer to mlx5_dev_ctx_shared object.
- * @param config
- *   Device configuration parameters.
  * @param hca_attr
  *   Pointer to DevX HCA capabilities structure.
  */
 void
 mlx5_rt_timestamp_config(struct mlx5_dev_ctx_shared *sh,
-			 struct mlx5_dev_config *config,
 			 struct mlx5_hca_attr *hca_attr)
 {
 	uint32_t dw_cnt = MLX5_ST_SZ_DW(register_mtutc);
@@ -1198,11 +1195,11 @@ mlx5_rt_timestamp_config(struct mlx5_dev_ctx_shared *sh,
 		/* MTUTC register is read successfully. */
 		ts_mode = MLX5_GET(register_mtutc, reg, time_stamp_mode);
 		if (ts_mode == MLX5_MTUTC_TIMESTAMP_MODE_REAL_TIME)
-			config->rt_timestamp = 1;
+			sh->dev_cap.rt_timestamp = 1;
 	} else {
 		/* Kernel does not support register reading. */
 		if (hca_attr->dev_freq_khz == (NS_PER_S / MS_PER_S))
-			config->rt_timestamp = 1;
+			sh->dev_cap.rt_timestamp = 1;
 	}
 }
 
@@ -1676,7 +1673,7 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 		mlx5_free(priv->rss_conf.rss_key);
 	if (priv->reta_idx != NULL)
 		mlx5_free(priv->reta_idx);
-	if (priv->config.vf)
+	if (priv->sh->dev_cap.vf)
 		mlx5_os_mac_addr_flush(dev);
 	if (priv->nl_socket_route >= 0)
 		close(priv->nl_socket_route);
@@ -2028,7 +2025,7 @@ mlx5_args_check(const char *key, const char *val, void *opaque)
 	} else if (strcmp(MLX5_MAX_DUMP_FILES_NUM, key) == 0) {
 		config->max_dump_files_num = tmp;
 	} else if (strcmp(MLX5_LRO_TIMEOUT_USEC, key) == 0) {
-		config->lro.timeout = tmp;
+		config->lro_timeout = tmp;
 	} else if (strcmp(RTE_DEVARGS_KEY_CLASS, key) == 0) {
 		DRV_LOG(DEBUG, "class argument is %s.", val);
 	} else if (strcmp(MLX5_HP_BUF_SIZE, key) == 0) {
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index fd6350eee7..bda09cf96e 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -116,7 +116,6 @@ struct mlx5_flow_cb_ctx {
 
 /* Device capabilities structure which isn't changed in any stage. */
 struct mlx5_dev_cap {
-	uint64_t device_cap_flags_ex;
 	int max_cq; /* Maximum number of supported CQs */
 	int max_qp; /* Maximum number of supported QPs. */
 	int max_qp_wr; /* Maximum number of outstanding WR on any WQ. */
@@ -124,20 +123,40 @@ struct mlx5_dev_cap {
 	/* Maximum number of s/g per WR for SQ & RQ of QP for non RDMA Read
 	 * operations.
 	 */
-	uint32_t raw_packet_caps;
-	uint32_t max_rwq_indirection_table_size;
+	int mps; /* Multi-packet send supported mode. */
+	uint32_t vf:1; /* This is a VF. */
+	uint32_t sf:1; /* This is a SF. */
+	uint32_t txpp_en:1; /* Tx packet pacing is supported. */
+	uint32_t mpls_en:1; /* MPLS over GRE/UDP is supported. */
+	uint32_t cqe_comp:1; /* CQE compression is supported. */
+	uint32_t hw_csum:1; /* Checksum offload is supported. */
+	uint32_t hw_padding:1; /* End alignment padding is supported. */
+	uint32_t dest_tir:1; /* Whether advanced DR API is available. */
+	uint32_t dv_esw_en:1; /* E-Switch DV flow is supported. */
+	uint32_t dv_flow_en:1; /* DV flow is supported. */
+	uint32_t swp:3; /* Tx generic tunnel checksum and TSO offload. */
+	uint32_t hw_vlan_strip:1; /* VLAN stripping is supported. */
+	uint32_t scatter_fcs_w_decap_disable:1;
+	/* HW has bug working with tunnel packet decap and scatter FCS. */
+	uint32_t hw_fcs_strip:1; /* FCS stripping is supported. */
+	uint32_t rt_timestamp:1; /* Realtime timestamp format. */
+	uint32_t lro_supported:1; /* Whether LRO is supported. */
+	uint32_t rq_delay_drop_en:1; /* Enable RxQ delay drop. */
+	uint32_t tunnel_en:3;
+	/* Whether tunnel stateless offloads are supported. */
+	uint32_t ind_table_max_size;
 	/* Maximum receive WQ indirection table size. */
-	uint32_t max_tso; /* Maximum TCP payload for TSO. */
-	uint32_t tso_supported_qpts;
-	uint64_t flags;
-	uint64_t comp_mask;
-	uint32_t sw_parsing_offloads;
-	uint32_t min_single_stride_log_num_of_bytes;
-	uint32_t max_single_stride_log_num_of_bytes;
-	uint32_t min_single_wqe_log_num_of_strides;
-	uint32_t max_single_wqe_log_num_of_strides;
-	uint32_t stride_supported_qpts;
-	uint32_t tunnel_offloads_caps;
+	uint32_t tso:1; /* Whether TSO is supported. */
+	uint32_t tso_max_payload_sz; /* Maximum TCP payload for TSO. */
+	struct {
+		uint32_t enabled:1; /* Whether MPRQ is enabled. */
+		uint32_t log_min_stride_size; /* Log min size of a stride. */
+		uint32_t log_max_stride_size; /* Log max size of a stride. */
+		uint32_t log_min_stride_num; /* Log min num of strides. */
+		uint32_t log_max_stride_num; /* Log max num of strides. */
+		uint32_t log_min_stride_wqe_size;
+		/* Log min WQE size, (size of single stride)*(num of strides).*/
+	} mprq; /* Capability for Multi-Packet RQ. */
 	char fw_ver[64]; /* Firmware version of this device. */
 };
 
@@ -214,9 +233,6 @@ struct mlx5_stats_ctrl {
 	uint64_t imissed;
 };
 
-#define MLX5_LRO_SUPPORTED(dev) \
-	(((struct mlx5_priv *)((dev)->data->dev_private))->config.lro.supported)
-
 /* Maximal size of coalesced segment for LRO is set in chunks of 256 Bytes. */
 #define MLX5_LRO_SEG_CHUNK_SIZE	256u
 
@@ -226,12 +242,6 @@ struct mlx5_stats_ctrl {
 /* Maximal number of segments to split. */
 #define MLX5_MAX_RXQ_NSEG (1u << MLX5_MAX_LOG_RQ_SEGS)
 
-/* LRO configurations structure. */
-struct mlx5_lro_config {
-	uint32_t supported:1; /* Whether LRO is supported. */
-	uint32_t timeout; /* User configuration. */
-};
-
 /*
  * Device configuration structure.
  *
@@ -241,19 +251,11 @@ struct mlx5_lro_config {
  * - User device parameters disabled features.
 */
 struct mlx5_dev_config {
-	unsigned int hw_csum:1; /* Checksum offload is supported. */
-	unsigned int hw_vlan_strip:1; /* VLAN stripping is supported. */
 	unsigned int hw_vlan_insert:1; /* VLAN insertion in WQE is supported. */
 	unsigned int hw_fcs_strip:1; /* FCS stripping is supported. */
 	unsigned int hw_padding:1; /* End alignment padding is supported. */
-	unsigned int vf:1; /* This is a VF. */
-	unsigned int sf:1; /* This is a SF. */
-	unsigned int tunnel_en:3;
-	/* Whether tunnel stateless offloads are supported. */
-	unsigned int mpls_en:1; /* MPLS over GRE/UDP is enabled. */
 	unsigned int cqe_comp:1; /* CQE compression is enabled. */
 	unsigned int cqe_comp_fmt:3; /* CQE compression format. */
-	unsigned int tso:1; /* Whether TSO is supported. */
 	unsigned int rx_vec_en:1; /* Rx vector is enabled. */
 	unsigned int l3_vxlan_en:1; /* Enable L3 VXLAN flow creation. */
 	unsigned int vf_nl_en:1; /* Enable Netlink requests in VF mode. */
@@ -262,10 +264,7 @@ struct mlx5_dev_config {
 	unsigned int dv_xmeta_en:2; /* Enable extensive flow metadata. */
 	unsigned int lacp_by_user:1; /* Enable user to manage LACP traffic. */
-	unsigned int swp:3; /* Tx generic tunnel checksum and TSO offload. */
-	unsigned int dest_tir:1; /* Whether advanced DR API is available. */
 	unsigned int reclaim_mode:2; /* Memory reclaim mode. */
-	unsigned int rt_timestamp:1; /* realtime timestamp format. */
 	unsigned int decap_en:1; /* Whether decap will be used or not. */
 	unsigned int dv_miss_info:1; /* restore packet after partial hw miss */
 	unsigned int allow_duplicate_pattern:1;
@@ -276,29 +275,21 @@ struct mlx5_dev_config {
 		unsigned int enabled:1; /* Whether MPRQ is enabled. */
 		unsigned int log_stride_num; /* Log number of strides. */
 		unsigned int log_stride_size; /* Log size of a stride. */
-		unsigned int log_min_stride_size; /* Log min size of a stride.*/
-		unsigned int log_max_stride_size; /* Log max size of a stride.*/
-		unsigned int log_min_stride_num; /* Log min num of strides. */
-		unsigned int log_max_stride_num; /* Log max num of strides. */
-		unsigned int log_min_stride_wqe_size;
-		/* Log min WQE size, (size of single stride)*(num of strides).*/
 		unsigned int max_memcpy_len;
 		/* Maximum packet size to memcpy Rx packets. */
 		unsigned int min_rxqs_num;
 		/* Rx queue count threshold to enable MPRQ. */
 	} mprq; /* Configurations for Multi-Packet RQ. */
 	int mps; /* Multi-packet send supported mode. */
-	unsigned int tso_max_payload_sz; /* Maximum TCP payload for TSO. */
-	unsigned int ind_table_max_size; /* Maximum indirection table size. */
 	unsigned int max_dump_files_num; /* Maximum dump files per queue. */
 	unsigned int log_hp_size; /* Single hairpin queue data size in total. */
+	unsigned int lro_timeout; /* LRO user configuration. */
 	int txqs_inline; /* Queue number threshold for inlining. */
 	int txq_inline_min; /* Minimal amount of data bytes to inline. */
 	int txq_inline_max; /* Max packet size for inlining with SEND. */
 	int txq_inline_mpw; /* Max packet size for inlining with eMPW. */
 	int tx_pp; /* Timestamp scheduling granularity in nanoseconds. */
 	int tx_skew; /* Tx scheduling skew between WQE and data on wire. */
-	struct mlx5_lro_config lro; /* LRO configuration. */
 };
 
 
@@ -1518,7 +1509,6 @@ void mlx5_age_event_prepare(struct mlx5_dev_ctx_shared *sh);
 	     port_id = mlx5_eth_find_next(port_id + 1, dev))
 int mlx5_args(struct mlx5_dev_config *config, struct rte_devargs *devargs);
 void mlx5_rt_timestamp_config(struct mlx5_dev_ctx_shared *sh,
-			      struct mlx5_dev_config *config,
 			      struct mlx5_hca_attr *hca_attr);
 struct mlx5_dev_ctx_shared *
 mlx5_alloc_shared_dev_ctx(const struct mlx5_dev_spawn_data *spawn,
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 553df6424d..de0f3672c1 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -571,7 +571,7 @@ mlx5_devx_ind_table_create_rqt_attr(struct rte_eth_dev *dev,
 		rte_errno = ENOMEM;
 		return NULL;
 	}
-	rqt_attr->rqt_max_size = priv->config.ind_table_max_size;
+	rqt_attr->rqt_max_size = priv->sh->dev_cap.ind_table_max_size;
 	rqt_attr->rqt_actual_size = rqt_n;
 	if (queues == NULL) {
 		for (i = 0; i < rqt_n; i++)
@@ -769,7 +769,7 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
 	tir_attr->self_lb_block = MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST;
 	if (lro) {
-		tir_attr->lro_timeout_period_usecs = priv->config.lro.timeout;
+		tir_attr->lro_timeout_period_usecs = priv->config.lro_timeout;
 		tir_attr->lro_max_msg_sz = priv->max_lro_msg_size;
 		tir_attr->lro_enable_mask =
 			MLX5_TIRC_LRO_ENABLE_MASK_IPV4_LRO |
@@ -1196,7 +1196,7 @@ mlx5_txq_create_devx_sq_resources(struct rte_eth_dev *dev, uint16_t idx,
 		.flush_in_error_en = 1,
 		.allow_multi_pkt_send_wqe = !!priv->config.mps,
 		.min_wqe_inline_mode = cdev->config.hca_attr.vport_inline_mode,
-		.allow_swp = !!priv->config.swp,
+		.allow_swp = !!priv->sh->dev_cap.swp,
 		.cqn = txq_obj->cq_obj.cq->id,
 		.tis_lst_sz = 1,
 		.wq_attr = (struct mlx5_devx_wq_attr){
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index d970eb6904..b7fe781d3a 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -121,7 +121,7 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
 			dev->data->port_id, priv->txqs_n, txqs_n);
 		priv->txqs_n = txqs_n;
 	}
-	if (rxqs_n > priv->config.ind_table_max_size) {
+	if (rxqs_n > priv->sh->dev_cap.ind_table_max_size) {
 		DRV_LOG(ERR, "port %u cannot handle this many Rx queues (%u)",
 			dev->data->port_id, rxqs_n);
 		rte_errno = EINVAL;
@@ -177,7 +177,7 @@ mlx5_dev_configure_rss_reta(struct rte_eth_dev *dev)
 		rss_queue_arr[j++] = i;
 	}
 	rss_queue_n = j;
-	if (rss_queue_n > priv->config.ind_table_max_size) {
+	if (rss_queue_n > priv->sh->dev_cap.ind_table_max_size) {
 		DRV_LOG(ERR, "port %u cannot handle this many Rx queues (%u)",
 			dev->data->port_id, rss_queue_n);
 		rte_errno = EINVAL;
@@ -193,8 +193,8 @@ mlx5_dev_configure_rss_reta(struct rte_eth_dev *dev)
 	 * The result is always rounded to the next power of two.
 	 */
 	reta_idx_n = (1 << log2above((rss_queue_n & (rss_queue_n - 1)) ?
-				     priv->config.ind_table_max_size :
-				     rss_queue_n));
+				     priv->sh->dev_cap.ind_table_max_size :
+				     rss_queue_n));
 	ret = mlx5_rss_reta_index_resize(dev, reta_idx_n);
 	if (ret) {
 		mlx5_free(rss_queue_arr);
@@ -330,7 +330,7 @@ mlx5_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
 	info->dev_capa = RTE_ETH_DEV_CAPA_FLOW_SHARED_OBJECT_KEEP;
 	info->if_index = mlx5_ifindex(dev);
 	info->reta_size = priv->reta_idx_n ?
-		priv->reta_idx_n : config->ind_table_max_size;
+		priv->reta_idx_n : priv->sh->dev_cap.ind_table_max_size;
 	info->hash_key_size = MLX5_RSS_HASH_KEY_LEN;
 	info->speed_capa = priv->link_speed_capa;
 	info->flow_type_rss_offloads = ~MLX5_RSS_HF_MASK;
@@ -722,7 +722,7 @@ mlx5_hairpin_cap_get(struct rte_eth_dev *dev, struct rte_eth_hairpin_cap *cap)
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_dev_config *config = &priv->config;
 
-	if (!priv->sh->cdev->config.devx || !config->dest_tir ||
+	if (!priv->sh->cdev->config.devx || !priv->sh->dev_cap.dest_tir ||
 	    !config->dv_flow_en) {
 		rte_errno = ENOTSUP;
 		return -rte_errno;
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 907f3fd75a..8bb9a72ba5 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1759,7 +1759,7 @@ mlx5_validate_action_rss(struct rte_eth_dev *dev,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
 					  &rss->key_len,
 					  "RSS hash key too large");
-	if (rss->queue_num > priv->config.ind_table_max_size)
+	if (rss->queue_num > priv->sh->dev_cap.ind_table_max_size)
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
 					  &rss->queue_num,
@@ -3138,7 +3138,7 @@ mlx5_flow_validate_item_mpls(struct rte_eth_dev *dev __rte_unused,
 	struct mlx5_priv *priv = dev->data->dev_private;
 	int ret;
 
-	if (!priv->config.mpls_en)
+	if (!priv->sh->dev_cap.mpls_en)
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "MPLS not supported or"
diff --git a/drivers/net/mlx5/mlx5_rxmode.c b/drivers/net/mlx5/mlx5_rxmode.c
index 7f19b235c2..f44906e1a7 100644
--- a/drivers/net/mlx5/mlx5_rxmode.c
+++ b/drivers/net/mlx5/mlx5_rxmode.c
@@ -36,7 +36,7 @@ mlx5_promiscuous_enable(struct rte_eth_dev *dev)
 			dev->data->port_id);
 		return 0;
 	}
-	if (priv->config.vf || priv->config.sf) {
+	if (priv->sh->dev_cap.vf || priv->sh->dev_cap.sf) {
 		ret = mlx5_os_set_promisc(dev, 1);
 		if (ret)
 			return ret;
@@ -69,7 +69,7 @@ mlx5_promiscuous_disable(struct rte_eth_dev *dev)
 	int ret;
 
 	dev->data->promiscuous = 0;
-	if (priv->config.vf || priv->config.sf) {
+	if (priv->sh->dev_cap.vf || priv->sh->dev_cap.sf) {
 		ret = mlx5_os_set_promisc(dev, 0);
 		if (ret)
 			return ret;
@@ -109,7 +109,7 @@ mlx5_allmulticast_enable(struct rte_eth_dev *dev)
 			dev->data->port_id);
 		return 0;
 	}
-	if (priv->config.vf || priv->config.sf) {
+	if (priv->sh->dev_cap.vf || priv->sh->dev_cap.sf) {
 		ret = mlx5_os_set_allmulti(dev, 1);
 		if (ret)
 			goto error;
@@ -142,7 +142,7 @@ mlx5_allmulticast_disable(struct rte_eth_dev *dev)
 	int ret;
 
 	dev->data->all_multicast = 0;
-	if (priv->config.vf || priv->config.sf) {
+	if (priv->sh->dev_cap.vf || priv->sh->dev_cap.sf) {
 		ret = mlx5_os_set_allmulti(dev, 0);
 		if (ret)
 			goto error;
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 0ede46aa43..bcb04018f8 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -368,13 +368,13 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
 		offloads |= RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT;
 	if (config->hw_fcs_strip)
 		offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
-	if (config->hw_csum)
+	if (priv->sh->dev_cap.hw_csum)
 		offloads |= (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
 			     RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
 			     RTE_ETH_RX_OFFLOAD_TCP_CKSUM);
-	if (config->hw_vlan_strip)
+	if (priv->sh->dev_cap.hw_vlan_strip)
 		offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
-	if (MLX5_LRO_SUPPORTED(dev))
+	if (priv->sh->dev_cap.lro_supported)
 		offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 	return offloads;
 }
@@ -1564,14 +1564,15 @@ mlx5_mprq_prepare(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_dev_config *config = &priv->config;
-	uint32_t log_min_stride_num = config->mprq.log_min_stride_num;
-	uint32_t log_max_stride_num = config->mprq.log_max_stride_num;
+	struct mlx5_dev_cap *dev_cap = &priv->sh->dev_cap;
+	uint32_t log_min_stride_num = dev_cap->mprq.log_min_stride_num;
+	uint32_t log_max_stride_num = dev_cap->mprq.log_max_stride_num;
 	uint32_t log_def_stride_num =
 			RTE_MIN(RTE_MAX(MLX5_MPRQ_DEFAULT_LOG_STRIDE_NUM,
 					log_min_stride_num),
 				log_max_stride_num);
-	uint32_t log_min_stride_size = config->mprq.log_min_stride_size;
-	uint32_t log_max_stride_size = config->mprq.log_max_stride_size;
+	uint32_t log_min_stride_size = dev_cap->mprq.log_min_stride_size;
+	uint32_t log_max_stride_size = dev_cap->mprq.log_max_stride_size;
 	uint32_t log_def_stride_size =
 			RTE_MIN(RTE_MAX(MLX5_MPRQ_DEFAULT_LOG_STRIDE_SIZE,
 					log_min_stride_size),
@@ -1610,7 +1611,7 @@ mlx5_mprq_prepare(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	}
 	log_stride_wqe_size = *actual_log_stride_num + *actual_log_stride_size;
 	/* Check if WQE buffer size is supported by hardware. */
-	if (log_stride_wqe_size < config->mprq.log_min_stride_wqe_size) {
+	if (log_stride_wqe_size < dev_cap->mprq.log_min_stride_wqe_size) {
 		*actual_log_stride_num = log_def_stride_num;
 		*actual_log_stride_size = log_def_stride_size;
 		DRV_LOG(WARNING,
@@ -1619,7 +1620,8 @@ mlx5_mprq_prepare(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 			RTE_BIT32(log_def_stride_size));
 		log_stride_wqe_size = log_def_stride_num + log_def_stride_size;
 	}
-	MLX5_ASSERT(log_stride_wqe_size >= config->mprq.log_min_stride_wqe_size);
+	MLX5_ASSERT(log_stride_wqe_size >=
+		    dev_cap->mprq.log_min_stride_wqe_size);
 	if (desc <= RTE_BIT32(*actual_log_stride_num))
 		goto unsupport;
 	if (min_mbuf_size > RTE_BIT32(log_stride_wqe_size)) {
@@ -1648,9 +1650,9 @@ mlx5_mprq_prepare(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 			RTE_BIT32(config->mprq.log_stride_size),
 			RTE_BIT32(config->mprq.log_stride_num),
 			config->mprq.min_rxqs_num,
-			RTE_BIT32(config->mprq.log_min_stride_wqe_size),
-			RTE_BIT32(config->mprq.log_min_stride_size),
-			RTE_BIT32(config->mprq.log_max_stride_size),
+			RTE_BIT32(dev_cap->mprq.log_min_stride_wqe_size),
+			RTE_BIT32(dev_cap->mprq.log_min_stride_size),
+			RTE_BIT32(dev_cap->mprq.log_max_stride_size),
 			rx_seg_en ? "" : "not ");
"" : "not "); return -1; } @@ -2370,7 +2372,7 @@ mlx5_ind_table_obj_setup(struct rte_eth_dev *dev, int ret = 0, err; const unsigned int n = rte_is_power_of_2(queues_n) ? log2above(queues_n) : - log2above(priv->config.ind_table_max_size); + log2above(priv->sh->dev_cap.ind_table_max_size); if (ref_qs) for (i = 0; i != queues_n; ++i) { @@ -2495,7 +2497,7 @@ mlx5_ind_table_obj_modify(struct rte_eth_dev *dev, int ret = 0, err; const unsigned int n = rte_is_power_of_2(queues_n) ? log2above(queues_n) : - log2above(priv->config.ind_table_max_size); + log2above(priv->sh->dev_cap.ind_table_max_size); MLX5_ASSERT(standalone); RTE_SET_USED(standalone); @@ -2576,7 +2578,7 @@ mlx5_ind_table_obj_detach(struct rte_eth_dev *dev, struct mlx5_priv *priv = dev->data->dev_private; const unsigned int n = rte_is_power_of_2(ind_tbl->queues_n) ? log2above(ind_tbl->queues_n) : - log2above(priv->config.ind_table_max_size); + log2above(priv->sh->dev_cap.ind_table_max_size); unsigned int i; int ret; @@ -2994,6 +2996,6 @@ mlx5_rxq_timestamp_set(struct rte_eth_dev *dev) if (data == NULL) continue; data->sh = sh; - data->rt_timestamp = priv->config.rt_timestamp; + data->rt_timestamp = sh->dev_cap.rt_timestamp; } } diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c index cd8c451286..72dfb2128a 100644 --- a/drivers/net/mlx5/mlx5_trigger.c +++ b/drivers/net/mlx5/mlx5_trigger.c @@ -1105,7 +1105,8 @@ mlx5_dev_start(struct rte_eth_dev *dev) goto error; } if ((priv->sh->cdev->config.devx && priv->config.dv_flow_en && - priv->config.dest_tir) && priv->obj_ops.lb_dummy_queue_create) { + priv->sh->dev_cap.dest_tir) && + priv->obj_ops.lb_dummy_queue_create) { ret = priv->obj_ops.lb_dummy_queue_create(dev); if (ret) goto error; @@ -1117,7 +1118,7 @@ mlx5_dev_start(struct rte_eth_dev *dev) goto error; } if (priv->config.std_delay_drop || priv->config.hp_delay_drop) { - if (!priv->config.vf && !priv->config.sf && + if (!priv->sh->dev_cap.vf && !priv->sh->dev_cap.sf && !priv->representor) { ret = mlx5_get_flag_dropless_rq(dev); if (ret < 0) diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c index 56e0937ca3..47bca9e3ea 100644 --- a/drivers/net/mlx5/mlx5_txq.c +++ b/drivers/net/mlx5/mlx5_txq.c @@ -101,33 +101,34 @@ mlx5_get_tx_port_offloads(struct rte_eth_dev *dev) uint64_t offloads = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS | RTE_ETH_TX_OFFLOAD_VLAN_INSERT); struct mlx5_dev_config *config = &priv->config; + struct mlx5_dev_cap *dev_cap = &priv->sh->dev_cap; - if (config->hw_csum) + if (dev_cap->hw_csum) offloads |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_UDP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_CKSUM); - if (config->tso) + if (dev_cap->tso) offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO; if (config->tx_pp) offloads |= RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP; - if (config->swp) { - if (config->swp & MLX5_SW_PARSING_CSUM_CAP) + if (dev_cap->swp) { + if (dev_cap->swp & MLX5_SW_PARSING_CSUM_CAP) offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM; - if (config->swp & MLX5_SW_PARSING_TSO_CAP) + if (dev_cap->swp & MLX5_SW_PARSING_TSO_CAP) offloads |= (RTE_ETH_TX_OFFLOAD_IP_TNL_TSO | RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO); } - if (config->tunnel_en) { - if (config->hw_csum) + if (dev_cap->tunnel_en) { + if (dev_cap->hw_csum) offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM; - if (config->tso) { - if (config->tunnel_en & + if (dev_cap->tso) { + if (dev_cap->tunnel_en & MLX5_TUNNELED_OFFLOADS_VXLAN_CAP) offloads |= RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO; - if (config->tunnel_en & + if (dev_cap->tunnel_en & 
 			    MLX5_TUNNELED_OFFLOADS_GRE_CAP)
 				offloads |= RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO;
-			if (config->tunnel_en &
+			if (dev_cap->tunnel_en &
 			    MLX5_TUNNELED_OFFLOADS_GENEVE_CAP)
 				offloads |= RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
 		}
 	}
@@ -741,6 +742,7 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
 {
 	struct mlx5_priv *priv = txq_ctrl->priv;
 	struct mlx5_dev_config *config = &priv->config;
+	struct mlx5_dev_cap *dev_cap = &priv->sh->dev_cap;
 	unsigned int inlen_send; /* Inline data for ordinary SEND.*/
 	unsigned int inlen_empw; /* Inline data for enhanced MPW. */
 	unsigned int inlen_mode; /* Minimal required Inline data. */
@@ -924,19 +926,19 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
 		txq_ctrl->txq.tso_en = 1;
 	}
 	if (((RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO & txq_ctrl->txq.offloads) &&
-	    (config->tunnel_en & MLX5_TUNNELED_OFFLOADS_VXLAN_CAP)) |
+	    (dev_cap->tunnel_en & MLX5_TUNNELED_OFFLOADS_VXLAN_CAP)) |
 	   ((RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO & txq_ctrl->txq.offloads) &&
-	    (config->tunnel_en & MLX5_TUNNELED_OFFLOADS_GRE_CAP)) |
+	    (dev_cap->tunnel_en & MLX5_TUNNELED_OFFLOADS_GRE_CAP)) |
 	   ((RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO & txq_ctrl->txq.offloads) &&
-	    (config->tunnel_en & MLX5_TUNNELED_OFFLOADS_GENEVE_CAP)) |
-	   (config->swp & MLX5_SW_PARSING_TSO_CAP))
+	    (dev_cap->tunnel_en & MLX5_TUNNELED_OFFLOADS_GENEVE_CAP)) |
+	   (dev_cap->swp & MLX5_SW_PARSING_TSO_CAP))
 		txq_ctrl->txq.tunnel_en = 1;
 	txq_ctrl->txq.swp_en = (((RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
 				  RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO) &
-				 txq_ctrl->txq.offloads) && (config->swp &
+				 txq_ctrl->txq.offloads) && (dev_cap->swp &
 				 MLX5_SW_PARSING_TSO_CAP)) |
 				((RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM &
-				 txq_ctrl->txq.offloads) && (config->swp &
+				 txq_ctrl->txq.offloads) && (dev_cap->swp &
 				 MLX5_SW_PARSING_CSUM_CAP));
 }
diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c
index ea841bb32f..e7161b66fe 100644
--- a/drivers/net/mlx5/mlx5_vlan.c
+++ b/drivers/net/mlx5/mlx5_vlan.c
@@ -97,7 +97,7 @@ mlx5_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
 	MLX5_ASSERT(rxq != NULL && rxq->ctrl != NULL);
 	/* Validate hw support */
-	if (!priv->config.hw_vlan_strip) {
+	if (!priv->sh->dev_cap.hw_vlan_strip) {
 		DRV_LOG(ERR, "port %u VLAN stripping is not supported",
 			dev->data->port_id);
 		return;
@@ -146,7 +146,7 @@ mlx5_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		int hw_vlan_strip = !!(dev->data->dev_conf.rxmode.offloads &
 				       RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
-		if (!priv->config.hw_vlan_strip) {
+		if (!priv->sh->dev_cap.hw_vlan_strip) {
 			DRV_LOG(ERR, "port %u VLAN stripping is not supported",
 				dev->data->port_id);
 			return 0;
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index 16fd54091e..dfcd28901a 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -159,6 +159,8 @@ mlx5_os_capabilities_prepare(struct mlx5_dev_ctx_shared *sh)
 	void *pv_iseg = NULL;
 	u32 cb_iseg = 0;
 
+	MLX5_ASSERT(sh->cdev->config.devx);
+	MLX5_ASSERT(mlx5_dev_is_pci(sh->cdev->dev));
 	pv_iseg = mlx5_glue->query_hca_iseg(mlx5_ctx, &cb_iseg);
 	if (pv_iseg == NULL) {
 		DRV_LOG(ERR, "Failed to get device hca_iseg.");
@@ -166,22 +168,55 @@ mlx5_os_capabilities_prepare(struct mlx5_dev_ctx_shared *sh)
 		return -rte_errno;
 	}
 	memset(&sh->dev_cap, 0, sizeof(struct mlx5_dev_cap));
+	sh->dev_cap.vf = mlx5_dev_is_vf_pci(RTE_DEV_TO_PCI(sh->cdev->dev));
 	sh->dev_cap.max_cq = 1 << hca_attr->log_max_cq;
 	sh->dev_cap.max_qp = 1 << hca_attr->log_max_qp;
 	sh->dev_cap.max_qp_wr = 1 << hca_attr->log_max_qp_sz;
-	sh->dev_cap.max_tso = 1 << hca_attr->max_lso_cap;
+ sh->dev_cap.dv_flow_en = 1; + sh->dev_cap.mps = MLX5_MPW_DISABLED; + DRV_LOG(DEBUG, "MPW isn't supported."); + DRV_LOG(DEBUG, "MPLS over GRE/UDP tunnel offloading is no supported."); + sh->dev_cap.hw_csum = hca_attr->csum_cap; + DRV_LOG(DEBUG, "Checksum offloading is %ssupported.", + (sh->dev_cap.hw_csum ? "" : "not ")); + sh->dev_cap.hw_vlan_strip = hca_attr->vlan_cap; + DRV_LOG(DEBUG, "VLAN stripping is %ssupported.", + (sh->dev_cap.hw_vlan_strip ? "" : "not ")); + sh->dev_cap.hw_fcs_strip = hca_attr->scatter_fcs; + sh->dev_cap.tso = ((1 << hca_attr->max_lso_cap) > 0); + if (sh->dev_cap.tso) + sh->dev_cap.tso_max_payload_sz = 1 << hca_attr->max_lso_cap; + DRV_LOG(DEBUG, "Counters are not supported."); if (hca_attr->rss_ind_tbl_cap) { - sh->dev_cap.max_rwq_indirection_table_size = - 1 << hca_attr->rss_ind_tbl_cap; + /* + * DPDK doesn't support larger/variable indirection tables. + * Once DPDK supports it, take max size from device attr. + */ + sh->dev_cap.ind_table_max_size = + RTE_MIN(1 << hca_attr->rss_ind_tbl_cap, + (unsigned int)RTE_ETH_RSS_RETA_SIZE_512); + DRV_LOG(DEBUG, "Maximum Rx indirection table size is %u", + sh->dev_cap.ind_table_max_size); + } + sh->dev_cap.swp = mlx5_get_supported_sw_parsing_offloads(hca_attr); + sh->dev_cap.tunnel_en = mlx5_get_supported_tunneling_offloads(hca_attr); + if (sh->dev_cap.tunnel_en) { + DRV_LOG(DEBUG, "Tunnel offloading is supported for %s%s%s", + sh->dev_cap.tunnel_en & + MLX5_TUNNELED_OFFLOADS_VXLAN_CAP ? "[VXLAN]" : "", + sh->dev_cap.tunnel_en & + MLX5_TUNNELED_OFFLOADS_GRE_CAP ? "[GRE]" : "", + sh->dev_cap.tunnel_en & + MLX5_TUNNELED_OFFLOADS_GENEVE_CAP ? "[GENEVE]" : ""); + } else { + DRV_LOG(DEBUG, "Tunnel offloading is not supported."); } - sh->dev_cap.sw_parsing_offloads = - mlx5_get_supported_sw_parsing_offloads(hca_attr); - sh->dev_cap.tunnel_offloads_caps = - mlx5_get_supported_tunneling_offloads(hca_attr); snprintf(sh->dev_cap.fw_ver, 64, "%x.%x.%04x", MLX5_GET(initial_seg, pv_iseg, fw_rev_major), MLX5_GET(initial_seg, pv_iseg, fw_rev_minor), MLX5_GET(initial_seg, pv_iseg, fw_rev_subminor)); + DRV_LOG(DEBUG, "Packet pacing is not supported."); + mlx5_rt_timestamp_config(sh, hca_attr); return 0; } @@ -265,7 +300,6 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, { const struct mlx5_switch_info *switch_info = &spawn->info; struct mlx5_dev_ctx_shared *sh = NULL; - struct mlx5_hca_attr *hca_attr; struct rte_eth_dev *eth_dev = NULL; struct mlx5_priv *priv = NULL; int err = 0; @@ -321,30 +355,6 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, strerror(errno)); goto error; } - DRV_LOG(DEBUG, "MPW isn't supported"); - config->swp = sh->dev_cap.sw_parsing_offloads & - (MLX5_SW_PARSING_CAP | MLX5_SW_PARSING_CSUM_CAP | - MLX5_SW_PARSING_TSO_CAP); - config->ind_table_max_size = - sh->dev_cap.max_rwq_indirection_table_size; - config->tunnel_en = sh->dev_cap.tunnel_offloads_caps & - (MLX5_TUNNELED_OFFLOADS_VXLAN_CAP | - MLX5_TUNNELED_OFFLOADS_GRE_CAP | - MLX5_TUNNELED_OFFLOADS_GENEVE_CAP); - if (config->tunnel_en) { - DRV_LOG(DEBUG, "tunnel offloading is supported for %s%s%s", - config->tunnel_en & - MLX5_TUNNELED_OFFLOADS_VXLAN_CAP ? "[VXLAN]" : "", - config->tunnel_en & - MLX5_TUNNELED_OFFLOADS_GRE_CAP ? "[GRE]" : "", - config->tunnel_en & - MLX5_TUNNELED_OFFLOADS_GENEVE_CAP ? "[GENEVE]" : "" - ); - } else { - DRV_LOG(DEBUG, "tunnel offloading is not supported"); - } - DRV_LOG(DEBUG, "MPLS over GRE/UDP tunnel offloading is no supported"); - config->mpls_en = 0; /* Allocate private eth device data. 
*/ priv = mlx5_malloc(MLX5_MEM_ZERO | MLX5_MEM_RTE, sizeof(*priv), @@ -395,24 +405,10 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, } own_domain_id = 1; } - DRV_LOG(DEBUG, "counters are not supported"); - config->ind_table_max_size = - sh->dev_cap.max_rwq_indirection_table_size; - /* - * Remove this check once DPDK supports larger/variable - * indirection tables. - */ - if (config->ind_table_max_size > (unsigned int)RTE_ETH_RSS_RETA_SIZE_512) - config->ind_table_max_size = RTE_ETH_RSS_RETA_SIZE_512; - DRV_LOG(DEBUG, "maximum Rx indirection table size is %u", - config->ind_table_max_size); if (config->hw_padding) { DRV_LOG(DEBUG, "Rx end alignment padding isn't supported"); config->hw_padding = 0; } - config->tso = (sh->dev_cap.max_tso > 0); - if (config->tso) - config->tso_max_payload_sz = sh->dev_cap.max_tso; DRV_LOG(DEBUG, "%sMPS is %s.", config->mps == MLX5_MPW_ENHANCED ? "enhanced " : config->mps == MLX5_MPW ? "legacy " : "", @@ -421,17 +417,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, DRV_LOG(WARNING, "Rx CQE compression isn't supported."); config->cqe_comp = 0; } - if (sh->cdev->config.devx) { - hca_attr = &sh->cdev->config.hca_attr; - config->hw_csum = hca_attr->csum_cap; - DRV_LOG(DEBUG, "checksum offloading is %ssupported", - (config->hw_csum ? "" : "not ")); - config->hw_vlan_strip = hca_attr->vlan_cap; - DRV_LOG(DEBUG, "VLAN stripping is %ssupported", - (config->hw_vlan_strip ? "" : "not ")); - config->hw_fcs_strip = hca_attr->scatter_fcs; - mlx5_rt_timestamp_config(sh, config, hca_attr); - } + config->hw_fcs_strip = sh->dev_cap.hw_fcs_strip; if (config->mprq.enabled) { DRV_LOG(WARNING, "Multi-Packet RQ isn't supported"); config->mprq.enabled = 0; @@ -853,7 +839,6 @@ mlx5_os_net_probe(struct mlx5_common_device *cdev) }, .dv_flow_en = 1, .log_hp_size = MLX5_ARG_UNSET, - .vf = mlx5_dev_is_vf_pci(pci_dev), }; int ret; uint32_t restore;
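
Note (illustrative sketch, not part of the patch): the pattern this series converges on -- query device capabilities once into the shared context, then gate each per-queue offload on both the user request and the capability bit, as txq_set_params() does above -- can be shown standalone. All names below (struct dev_cap, CAP_*, OFFLOAD_*, txq_set_params_sketch) are simplified stand-ins for mlx5_dev_cap, MLX5_TUNNELED_OFFLOADS_*_CAP and RTE_ETH_TX_OFFLOAD_*_TNL_TSO, not the driver's actual definitions.

#include <stdint.h>
#include <stdio.h>

/* Stand-ins for MLX5_TUNNELED_OFFLOADS_*_CAP (device capability bits). */
#define CAP_VXLAN	(1u << 0)
#define CAP_GRE		(1u << 1)
#define CAP_GENEVE	(1u << 2)

/* Stand-ins for RTE_ETH_TX_OFFLOAD_*_TNL_TSO (requested queue offloads). */
#define OFFLOAD_VXLAN_TNL_TSO	(1u << 0)
#define OFFLOAD_GRE_TNL_TSO	(1u << 1)
#define OFFLOAD_GENEVE_TNL_TSO	(1u << 2)

/* Filled once per shared device context, like sh->dev_cap. */
struct dev_cap {
	uint32_t tunnel_en;	/* bitmask of CAP_* the device supports */
};

/* Per-queue state, configured at queue setup time. */
struct txq {
	uint32_t offloads;	/* bitmask of OFFLOAD_* the user requested */
	unsigned int tunnel_en:1;
};

/*
 * Mirrors the txq_set_params() logic: enable tunnel TSO on the queue
 * only when the application requested it AND the device capability
 * bit for that tunnel type is set.
 */
static void
txq_set_params_sketch(struct txq *q, const struct dev_cap *cap)
{
	q->tunnel_en =
		(((q->offloads & OFFLOAD_VXLAN_TNL_TSO) &&
		  (cap->tunnel_en & CAP_VXLAN)) ||
		 ((q->offloads & OFFLOAD_GRE_TNL_TSO) &&
		  (cap->tunnel_en & CAP_GRE)) ||
		 ((q->offloads & OFFLOAD_GENEVE_TNL_TSO) &&
		  (cap->tunnel_en & CAP_GENEVE)));
}

int
main(void)
{
	struct dev_cap cap = { .tunnel_en = CAP_VXLAN | CAP_GRE };
	struct txq q = { .offloads = OFFLOAD_GENEVE_TNL_TSO };

	txq_set_params_sketch(&q, &cap);
	/* Prints 0: GENEVE TSO was requested but the device lacks the cap. */
	printf("tunnel_en=%u\n", q.tunnel_en);
	return 0;
}

Keeping the mask in one per-device structure, as the patch does with sh->dev_cap, avoids the duplicated per-port copies that the removed config->tunnel_en/config->swp assignments had to maintain on every spawn path.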