From patchwork Sun Jan 2 06:59:24 2022
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 105540
X-Patchwork-Delegate: thomas@monjalon.net
CC: Matan Azrad, Thomas Monjalon, Michael Baum
Subject: [RFC 1/3] net/mlx5: remove some duplications
Date: Sun, 2 Jan 2022 08:59:24 +0200
Message-ID: <20220102065927.2210733-2-michaelba@nvidia.com>
In-Reply-To: <20220102065927.2210733-1-michaelba@nvidia.com>
References: <20220102065927.2210733-1-michaelba@nvidia.com>
List-Id: DPDK patches and discussions
From: Michael Baum

Remove a few kinds of duplication:
- the same function or operation implemented separately for Linux and
  Windows;
- the same variable/structure held in both the common and net drivers;
- a function called twice during the spawn function;
- the device queried twice through Verbs during probing.

Signed-off-by: Michael Baum
---
 drivers/common/mlx5/mlx5_common.h     |  15 ++
 drivers/common/mlx5/mlx5_common_pci.c |  18 ++
 drivers/common/mlx5/version.map       |   1 +
 drivers/net/mlx5/linux/mlx5_os.c      | 304 +++++++++-----------
 drivers/net/mlx5/linux/mlx5_verbs.c   |   4 +-
 drivers/net/mlx5/mlx5.c               | 117 ++++++++--
 drivers/net/mlx5/mlx5.h               |   8 +-
 drivers/net/mlx5/mlx5_devx.c          |   8 +-
 drivers/net/mlx5/mlx5_ethdev.c        |   5 +-
 drivers/net/mlx5/mlx5_flow.c          |  18 +-
 drivers/net/mlx5/mlx5_flow_dv.c       |  36 +--
 drivers/net/mlx5/mlx5_flow_flex.c     |   4 +-
 drivers/net/mlx5/mlx5_flow_meter.c    |   4 +-
 drivers/net/mlx5/mlx5_rxq.c           |   4 +-
 drivers/net/mlx5/mlx5_trigger.c       |  14 +-
 drivers/net/mlx5/mlx5_txpp.c          |   2 +-
 drivers/net/mlx5/windows/mlx5_os.c    | 148 +++----------
 17 files changed, 327 insertions(+), 383 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common.h b/drivers/common/mlx5/mlx5_common.h
index e8809844af..80f59c81fb 100644
--- a/drivers/common/mlx5/mlx5_common.h
+++ b/drivers/common/mlx5/mlx5_common.h
@@ -8,6 +8,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -487,6 +488,20 @@
 __rte_internal
 bool
 mlx5_dev_is_pci(const struct rte_device *dev);

+/**
+ * Test PCI device is a VF device.
+ *
+ * @param pci_dev
+ *   Pointer to PCI device.
+ *
+ * @return
+ *   - True on PCI device is a VF device.
+ *   - False otherwise.
+ */
+__rte_internal
+bool
+mlx5_dev_is_vf_pci(struct rte_pci_device *pci_dev);
+
 __rte_internal
 int
 mlx5_dev_mempool_subscribe(struct mlx5_common_device *cdev);

diff --git a/drivers/common/mlx5/mlx5_common_pci.c b/drivers/common/mlx5/mlx5_common_pci.c
index 8b38091d87..8fd2cb076c 100644
--- a/drivers/common/mlx5/mlx5_common_pci.c
+++ b/drivers/common/mlx5/mlx5_common_pci.c
@@ -108,6 +108,24 @@ mlx5_dev_is_pci(const struct rte_device *dev)
 	return strcmp(dev->bus->name, "pci") == 0;
 }

+bool
+mlx5_dev_is_vf_pci(struct rte_pci_device *pci_dev)
+{
+	switch (pci_dev->id.device_id) {
+	case PCI_DEVICE_ID_MELLANOX_CONNECTX4VF:
+	case PCI_DEVICE_ID_MELLANOX_CONNECTX4LXVF:
+	case PCI_DEVICE_ID_MELLANOX_CONNECTX5VF:
+	case PCI_DEVICE_ID_MELLANOX_CONNECTX5EXVF:
+	case PCI_DEVICE_ID_MELLANOX_CONNECTX5BFVF:
+	case PCI_DEVICE_ID_MELLANOX_CONNECTX6VF:
+	case PCI_DEVICE_ID_MELLANOX_CONNECTXVF:
+		return true;
+	default:
+		break;
+	}
+	return false;
+}
+
 bool
 mlx5_dev_pci_match(const struct mlx5_class_driver *drv,
 		   const struct rte_device *dev)

diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map
index 34e86004a0..30caa090fd 100644
--- a/drivers/common/mlx5/version.map
+++ b/drivers/common/mlx5/version.map
@@ -13,6 +13,7 @@ INTERNAL {
 	mlx5_common_verbs_dereg_mr; # WINDOWS_NO_EXPORT
 	mlx5_dev_is_pci;
+	mlx5_dev_is_vf_pci;
 	mlx5_dev_mempool_unregister;
 	mlx5_dev_mempool_subscribe;

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index e5e745adbe..7c503cceec 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -171,6 +171,15 @@ mlx5_os_get_dev_attr(struct mlx5_common_device *cdev,
 	device_attr->tso_supported_qpts = attr_ex.tso_caps.supported_qpts;
 	struct mlx5dv_context dv_attr = { .comp_mask = 0 };
+#ifdef HAVE_IBV_MLX5_MOD_SWP
+	dv_attr.comp_mask |= MLX5DV_CONTEXT_MASK_SWP;
+#endif
+#ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT
+	dv_attr.comp_mask |= MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS;
+#endif
+#ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT
+	dv_attr.comp_mask |= MLX5DV_CONTEXT_MASK_STRIDING_RQ;
+#endif
 	err = mlx5_glue->dv_query_device(ctx, &dv_attr);
 	if (err) {
 		rte_errno = errno;
@@ -183,6 +192,7 @@
 	device_attr->sw_parsing_offloads =
 		dv_attr.sw_parsing_caps.sw_parsing_offloads;
 #endif
+#ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT
 	device_attr->min_single_stride_log_num_of_bytes =
 		dv_attr.striding_rq_caps.min_single_stride_log_num_of_bytes;
 	device_attr->max_single_stride_log_num_of_bytes =
@@ -193,6 +203,7 @@
 		dv_attr.striding_rq_caps.max_single_wqe_log_num_of_strides;
 	device_attr->stride_supported_qpts =
 		dv_attr.striding_rq_caps.supported_qpts;
+#endif
 #ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT
 	device_attr->tunnel_offloads_caps = dv_attr.tunnel_offloads_caps;
 #endif
@@ -662,45 +673,6 @@ mlx5_init_once(void)
 	return ret;
 }

-/**
- * DV flow counter mode detect and config.
- *
- * @param dev
- *   Pointer to rte_eth_dev structure.
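
The new mlx5_dev_is_vf_pci() helper above centralizes the VF device-ID test so each OS probe path stops keeping its own copy of the switch (the Linux copy is deleted further down). As a rough standalone sketch of the pattern (the struct layout and device IDs here are placeholders, not the real rte_bus_pci definitions):

    /*
     * Standalone sketch of the centralized VF test. The IDs below are
     * placeholders, not the real PCI_DEVICE_ID_MELLANOX_* values.
     */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define EXAMPLE_ID_PF 0x1017 /* hypothetical PF device ID */
    #define EXAMPLE_ID_VF 0x1018 /* hypothetical VF device ID */

    struct example_pci_id { uint16_t device_id; };
    struct example_pci_device { struct example_pci_id id; };

    /* One switch shared by every probe path instead of a copy per OS. */
    static bool
    example_dev_is_vf_pci(const struct example_pci_device *pci_dev)
    {
        switch (pci_dev->id.device_id) {
        case EXAMPLE_ID_VF:
            return true;
        default:
            return false;
        }
    }

    int
    main(void)
    {
        struct example_pci_device dev = { .id = { .device_id = EXAMPLE_ID_VF } };

        printf("vf=%d\n", example_dev_is_vf_pci(&dev));
        return 0;
    }

In the mlx5_os.c hunk further down, the helper's result replaces the removed per-OS switch directly: dev_config.vf = mlx5_dev_is_vf_pci(pci_dev).
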
- * - */ -static void -mlx5_flow_counter_mode_config(struct rte_eth_dev *dev __rte_unused) -{ -#ifdef HAVE_IBV_FLOW_DV_SUPPORT - struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_dev_ctx_shared *sh = priv->sh; - bool fallback; - -#ifndef HAVE_IBV_DEVX_ASYNC - fallback = true; -#else - fallback = false; - if (!sh->devx || !priv->config.dv_flow_en || - !priv->config.hca_attr.flow_counters_dump || - !(priv->config.hca_attr.flow_counter_bulk_alloc_bitmap & 0x4) || - (mlx5_flow_dv_discover_counter_offset_support(dev) == -ENOTSUP)) - fallback = true; -#endif - if (fallback) - DRV_LOG(INFO, "Use fall-back DV counter management. Flow " - "counter dump:%d, bulk_alloc_bitmap:0x%hhx.", - priv->config.hca_attr.flow_counters_dump, - priv->config.hca_attr.flow_counter_bulk_alloc_bitmap); - /* Initialize fallback mode only on the port initializes sh. */ - if (sh->refcnt == 1) - sh->cmng.counter_fallback = fallback; - else if (fallback != sh->cmng.counter_fallback) - DRV_LOG(WARNING, "Port %d in sh has different fallback mode " - "with others:%d.", PORT_ID(priv), fallback); -#endif -} - /** * DR flow drop action support detect. * @@ -875,8 +847,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, { const struct mlx5_switch_info *switch_info = &spawn->info; struct mlx5_dev_ctx_shared *sh = NULL; + struct mlx5_hca_attr *hca_attr = &spawn->cdev->config.hca_attr; struct ibv_port_attr port_attr = { .state = IBV_PORT_NOP }; - struct mlx5dv_context dv_attr = { .comp_mask = 0 }; struct rte_eth_dev *eth_dev = NULL; struct mlx5_priv *priv = NULL; int err = 0; @@ -968,41 +940,54 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, mlx5_dev_close(eth_dev); return NULL; } - /* - * Some parameters ("tx_db_nc" in particularly) are needed in - * advance to create dv/verbs device context. We proceed the - * devargs here to get ones, and later proceed devargs again - * to override some hardware settings. - */ + /* Process parameters. */ err = mlx5_args(config, dpdk_dev->devargs); if (err) { - err = rte_errno; DRV_LOG(ERR, "failed to process device arguments: %s", strerror(rte_errno)); - goto error; + return NULL; } sh = mlx5_alloc_shared_dev_ctx(spawn, config); if (!sh) return NULL; + /* Update final values for devargs before check sibling config. */ + if (config->dv_miss_info) { + if (switch_info->master || switch_info->representor) + config->dv_xmeta_en = MLX5_XMETA_MODE_META16; + } +#if !defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_MLX5DV_DR) + if (config->dv_flow_en) { + DRV_LOG(WARNING, "DV flow is not supported."); + config->dv_flow_en = 0; + } +#endif +#ifdef HAVE_MLX5DV_DR_ESWITCH + if (!(hca_attr->eswitch_manager && config->dv_flow_en && + (switch_info->representor || switch_info->master))) + config->dv_esw_en = 0; +#else + config->dv_esw_en = 0; +#endif + if (!config->dv_esw_en && + config->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) { + DRV_LOG(WARNING, + "Metadata mode %u is not supported (no E-Switch).", + config->dv_xmeta_en); + config->dv_xmeta_en = MLX5_XMETA_MODE_LEGACY; + } + /* Check sibling device configurations. */ + err = mlx5_dev_check_sibling_config(sh, config, dpdk_dev); + if (err) + goto error; #ifdef HAVE_MLX5DV_DR_ACTION_DEST_DEVX_TIR config->dest_tir = 1; -#endif -#ifdef HAVE_IBV_MLX5_MOD_SWP - dv_attr.comp_mask |= MLX5DV_CONTEXT_MASK_SWP; #endif /* * Multi-packet send is supported by ConnectX-4 Lx PF as well * as all ConnectX-5 devices. 
*/ -#ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT - dv_attr.comp_mask |= MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS; -#endif -#ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT - dv_attr.comp_mask |= MLX5DV_CONTEXT_MASK_STRIDING_RQ; -#endif - mlx5_glue->dv_query_device(sh->cdev->ctx, &dv_attr); - if (dv_attr.flags & MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED) { - if (dv_attr.flags & MLX5DV_CONTEXT_FLAGS_ENHANCED_MPW) { + if (sh->device_attr.flags & MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED) { + if (sh->device_attr.flags & MLX5DV_CONTEXT_FLAGS_ENHANCED_MPW) { DRV_LOG(DEBUG, "enhanced MPW is supported"); mps = MLX5_MPW_ENHANCED; } else { @@ -1014,46 +999,41 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, mps = MLX5_MPW_DISABLED; } #ifdef HAVE_IBV_MLX5_MOD_SWP - if (dv_attr.comp_mask & MLX5DV_CONTEXT_MASK_SWP) - swp = dv_attr.sw_parsing_caps.sw_parsing_offloads; + if (sh->device_attr.comp_mask & MLX5DV_CONTEXT_MASK_SWP) + swp = sh->device_attr.sw_parsing_offloads; DRV_LOG(DEBUG, "SWP support: %u", swp); #endif config->swp = swp & (MLX5_SW_PARSING_CAP | MLX5_SW_PARSING_CSUM_CAP | MLX5_SW_PARSING_TSO_CAP); #ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT - if (dv_attr.comp_mask & MLX5DV_CONTEXT_MASK_STRIDING_RQ) { - struct mlx5dv_striding_rq_caps mprq_caps = - dv_attr.striding_rq_caps; - + if (sh->device_attr.comp_mask & MLX5DV_CONTEXT_MASK_STRIDING_RQ) { DRV_LOG(DEBUG, "\tmin_single_stride_log_num_of_bytes: %d", - mprq_caps.min_single_stride_log_num_of_bytes); + sh->device_attr.min_single_stride_log_num_of_bytes); DRV_LOG(DEBUG, "\tmax_single_stride_log_num_of_bytes: %d", - mprq_caps.max_single_stride_log_num_of_bytes); + sh->device_attr.max_single_stride_log_num_of_bytes); DRV_LOG(DEBUG, "\tmin_single_wqe_log_num_of_strides: %d", - mprq_caps.min_single_wqe_log_num_of_strides); + sh->device_attr.min_single_wqe_log_num_of_strides); DRV_LOG(DEBUG, "\tmax_single_wqe_log_num_of_strides: %d", - mprq_caps.max_single_wqe_log_num_of_strides); + sh->device_attr.max_single_wqe_log_num_of_strides); DRV_LOG(DEBUG, "\tsupported_qpts: %d", - mprq_caps.supported_qpts); + sh->device_attr.stride_supported_qpts); DRV_LOG(DEBUG, "\tmin_stride_wqe_log_size: %d", config->mprq.log_min_stride_wqe_size); DRV_LOG(DEBUG, "device supports Multi-Packet RQ"); mprq = 1; config->mprq.log_min_stride_size = - mprq_caps.min_single_stride_log_num_of_bytes; + sh->device_attr.min_single_stride_log_num_of_bytes; config->mprq.log_max_stride_size = - mprq_caps.max_single_stride_log_num_of_bytes; + sh->device_attr.max_single_stride_log_num_of_bytes; config->mprq.log_min_stride_num = - mprq_caps.min_single_wqe_log_num_of_strides; + sh->device_attr.min_single_wqe_log_num_of_strides; config->mprq.log_max_stride_num = - mprq_caps.max_single_wqe_log_num_of_strides; + sh->device_attr.max_single_wqe_log_num_of_strides; } #endif - /* Rx CQE compression is enabled by default. 
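
With the capability masks now requested inside mlx5_os_get_dev_attr(), mlx5dv_query_device() runs once per device at probe time, and the spawn hunks above read the cached sh->device_attr instead of calling it a second time. A compilable sketch of that query-once idea, with invented names standing in for the glue layer:

    /*
     * Sketch of the query-once pattern: request every optional capability
     * section in one call, cache the answer, and let later consumers read
     * the cache. All names are illustrative stand-ins for the glue layer.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define EXAMPLE_MASK_SWP         (1u << 0)
    #define EXAMPLE_MASK_TUNNEL      (1u << 1)
    #define EXAMPLE_MASK_STRIDING_RQ (1u << 2)

    struct example_dev_attr {
        uint32_t comp_mask;      /* sections the query filled in */
        uint32_t swp_offloads;
        uint32_t tunnel_caps;
    };

    /* Stand-in for the single mlx5dv_query_device() call at probe time. */
    static int
    example_query_device(struct example_dev_attr *attr)
    {
        attr->swp_offloads = 0x7;
        attr->tunnel_caps = 0x3;
        return 0;
    }

    static int
    example_get_dev_attr(struct example_dev_attr *cached)
    {
        /* Request all optional sections up front... */
        cached->comp_mask = EXAMPLE_MASK_SWP | EXAMPLE_MASK_TUNNEL |
                            EXAMPLE_MASK_STRIDING_RQ;
        /* ...so one query serves every later consumer. */
        return example_query_device(cached);
    }

    int
    main(void)
    {
        struct example_dev_attr attr = { 0 };

        if (example_get_dev_attr(&attr) == 0 &&
            (attr.comp_mask & EXAMPLE_MASK_SWP))
            printf("swp=0x%x\n", attr.swp_offloads); /* read from the cache */
        return 0;
    }
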
*/ - config->cqe_comp = 1; #ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT - if (dv_attr.comp_mask & MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS) { - config->tunnel_en = dv_attr.tunnel_offloads_caps & + if (sh->device_attr.comp_mask & MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS) { + config->tunnel_en = sh->device_attr.tunnel_offloads_caps & (MLX5DV_RAW_PACKET_CAP_TUNNELED_OFFLOAD_VXLAN | MLX5DV_RAW_PACKET_CAP_TUNNELED_OFFLOAD_GRE | MLX5DV_RAW_PACKET_CAP_TUNNELED_OFFLOAD_GENEVE); @@ -1075,9 +1055,9 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, "tunnel offloading disabled due to old OFED/rdma-core version"); #endif #ifdef HAVE_IBV_DEVICE_MPLS_SUPPORT - mpls_en = ((dv_attr.tunnel_offloads_caps & + mpls_en = ((sh->device_attr.tunnel_offloads_caps & MLX5DV_RAW_PACKET_CAP_TUNNELED_OFFLOAD_CW_MPLS_OVER_GRE) && - (dv_attr.tunnel_offloads_caps & + (sh->device_attr.tunnel_offloads_caps & MLX5DV_RAW_PACKET_CAP_TUNNELED_OFFLOAD_CW_MPLS_OVER_UDP)); DRV_LOG(DEBUG, "MPLS over GRE/UDP tunnel offloading is %ssupported", mpls_en ? "" : "not "); @@ -1239,37 +1219,6 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, DRV_LOG(DEBUG, "dev_port-%u new domain_id=%u\n", priv->dev_port, priv->domain_id); } - /* Override some values set by hardware configuration. */ - mlx5_args(config, dpdk_dev->devargs); - /* Update final values for devargs before check sibling config. */ - if (config->dv_miss_info) { - if (switch_info->master || switch_info->representor) - config->dv_xmeta_en = MLX5_XMETA_MODE_META16; - } -#if !defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_MLX5DV_DR) - if (config->dv_flow_en) { - DRV_LOG(WARNING, "DV flow is not supported."); - config->dv_flow_en = 0; - } -#endif -#ifdef HAVE_MLX5DV_DR_ESWITCH - if (!(sh->cdev->config.hca_attr.eswitch_manager && config->dv_flow_en && - (switch_info->representor || switch_info->master))) - config->dv_esw_en = 0; -#else - config->dv_esw_en = 0; -#endif - if (!priv->config.dv_esw_en && - priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) { - DRV_LOG(WARNING, - "Metadata mode %u is not supported (no E-Switch).", - priv->config.dv_xmeta_en); - priv->config.dv_xmeta_en = MLX5_XMETA_MODE_LEGACY; - } - /* Check sibling device configurations. */ - err = mlx5_dev_check_sibling_config(priv, config, dpdk_dev); - if (err) - goto error; config->hw_csum = !!(sh->device_attr.device_cap_flags_ex & IBV_DEVICE_RAW_IP_CSUM); DRV_LOG(DEBUG, "checksum offloading is %ssupported", @@ -1324,15 +1273,13 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, config->mps == MLX5_MPW_ENHANCED ? "enhanced " : config->mps == MLX5_MPW ? "legacy " : "", config->mps != MLX5_MPW_DISABLED ? "enabled" : "disabled"); - if (sh->devx) { - config->hca_attr = sh->cdev->config.hca_attr; - sh->steering_format_version = - config->hca_attr.steering_format_version; + if (sh->cdev->config.devx) { + sh->steering_format_version = hca_attr->steering_format_version; /* Check for LRO support. */ - if (config->dest_tir && config->hca_attr.lro_cap && + if (config->dest_tir && hca_attr->lro_cap && config->dv_flow_en) { /* TBD check tunnel lro caps. 
*/ - config->lro.supported = config->hca_attr.lro_cap; + config->lro.supported = hca_attr->lro_cap; DRV_LOG(DEBUG, "Device supports LRO"); /* * If LRO timeout is not configured by application, @@ -1340,21 +1287,19 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, */ if (!config->lro.timeout) config->lro.timeout = - config->hca_attr.lro_timer_supported_periods[0]; + hca_attr->lro_timer_supported_periods[0]; DRV_LOG(DEBUG, "LRO session timeout set to %d usec", config->lro.timeout); DRV_LOG(DEBUG, "LRO minimal size of TCP segment " "required for coalescing is %d bytes", - config->hca_attr.lro_min_mss_size); + hca_attr->lro_min_mss_size); } #if defined(HAVE_MLX5DV_DR) && \ (defined(HAVE_MLX5_DR_CREATE_ACTION_FLOW_METER) || \ defined(HAVE_MLX5_DR_CREATE_ACTION_ASO)) - if (config->hca_attr.qos.sup && - config->hca_attr.qos.flow_meter_old && + if (hca_attr->qos.sup && hca_attr->qos.flow_meter_old && config->dv_flow_en) { - uint8_t reg_c_mask = - config->hca_attr.qos.flow_meter_reg_c_ids; + uint8_t reg_c_mask = hca_attr->qos.flow_meter_reg_c_ids; /* * Meter needs two REG_C's for color match and pre-sfx * flow match. Here get the REG_C for color match. @@ -1378,20 +1323,18 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, priv->mtr_color_reg = ffs(reg_c_mask) - 1 + REG_C_0; priv->mtr_en = 1; - priv->mtr_reg_share = - config->hca_attr.qos.flow_meter; + priv->mtr_reg_share = hca_attr->qos.flow_meter; DRV_LOG(DEBUG, "The REG_C meter uses is %d", priv->mtr_color_reg); } } - if (config->hca_attr.qos.sup && - config->hca_attr.qos.flow_meter_aso_sup) { + if (hca_attr->qos.sup && hca_attr->qos.flow_meter_aso_sup) { uint32_t log_obj_size = rte_log2_u32(MLX5_ASO_MTRS_PER_POOL >> 1); if (log_obj_size >= - config->hca_attr.qos.log_meter_aso_granularity && - log_obj_size <= - config->hca_attr.qos.log_meter_aso_max_alloc) + hca_attr->qos.log_meter_aso_granularity && + log_obj_size <= + hca_attr->qos.log_meter_aso_max_alloc) sh->meter_aso_en = 1; } if (priv->mtr_en) { @@ -1401,12 +1344,11 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, goto error; } } - if (config->hca_attr.flow.tunnel_header_0_1) + if (hca_attr->flow.tunnel_header_0_1) sh->tunnel_header_0_1 = 1; #endif #ifdef HAVE_MLX5_DR_CREATE_ACTION_ASO - if (config->hca_attr.flow_hit_aso && - priv->mtr_color_reg == REG_C_3) { + if (hca_attr->flow_hit_aso && priv->mtr_color_reg == REG_C_3) { sh->flow_hit_aso_en = 1; err = mlx5_flow_aso_age_mng_init(sh); if (err) { @@ -1418,8 +1360,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, #endif /* HAVE_MLX5_DR_CREATE_ACTION_ASO */ #if defined(HAVE_MLX5_DR_CREATE_ACTION_ASO) && \ defined(HAVE_MLX5_DR_ACTION_ASO_CT) - if (config->hca_attr.ct_offload && - priv->mtr_color_reg == REG_C_3) { + if (hca_attr->ct_offload && priv->mtr_color_reg == REG_C_3) { err = mlx5_flow_aso_ct_mng_init(sh); if (err) { err = -err; @@ -1430,13 +1371,13 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, } #endif /* HAVE_MLX5_DR_CREATE_ACTION_ASO && HAVE_MLX5_DR_ACTION_ASO_CT */ #if defined(HAVE_MLX5DV_DR) && defined(HAVE_MLX5_DR_CREATE_ACTION_FLOW_SAMPLE) - if (config->hca_attr.log_max_ft_sampler_num > 0 && + if (hca_attr->log_max_ft_sampler_num > 0 && config->dv_flow_en) { priv->sampler_en = 1; DRV_LOG(DEBUG, "Sampler enabled!"); } else { priv->sampler_en = 0; - if (!config->hca_attr.log_max_ft_sampler_num) + if (!hca_attr->log_max_ft_sampler_num) DRV_LOG(WARNING, "No available register for sampler."); else @@ -1445,18 +1386,18 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, #endif } if (config->cqe_comp && RTE_CACHE_LINE_SIZE == 128 && - !(dv_attr.flags 
& MLX5DV_CONTEXT_FLAGS_CQE_128B_COMP)) { + !(sh->device_attr.flags & MLX5DV_CONTEXT_FLAGS_CQE_128B_COMP)) { DRV_LOG(WARNING, "Rx CQE 128B compression is not supported"); config->cqe_comp = 0; } if (config->cqe_comp_fmt == MLX5_CQE_RESP_FORMAT_FTAG_STRIDX && - (!sh->devx || !config->hca_attr.mini_cqe_resp_flow_tag)) { + (!sh->cdev->config.devx || !hca_attr->mini_cqe_resp_flow_tag)) { DRV_LOG(WARNING, "Flow Tag CQE compression" " format isn't supported."); config->cqe_comp = 0; } if (config->cqe_comp_fmt == MLX5_CQE_RESP_FORMAT_L34H_STRIDX && - (!sh->devx || !config->hca_attr.mini_cqe_resp_l3_l4_tag)) { + (!sh->cdev->config.devx || !hca_attr->mini_cqe_resp_l3_l4_tag)) { DRV_LOG(WARNING, "L3/L4 Header CQE compression" " format isn't supported."); config->cqe_comp = 0; @@ -1465,55 +1406,55 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, config->cqe_comp ? "" : "not "); if (config->tx_pp) { DRV_LOG(DEBUG, "Timestamp counter frequency %u kHz", - config->hca_attr.dev_freq_khz); + hca_attr->dev_freq_khz); DRV_LOG(DEBUG, "Packet pacing is %ssupported", - config->hca_attr.qos.packet_pacing ? "" : "not "); + hca_attr->qos.packet_pacing ? "" : "not "); DRV_LOG(DEBUG, "Cross channel ops are %ssupported", - config->hca_attr.cross_channel ? "" : "not "); + hca_attr->cross_channel ? "" : "not "); DRV_LOG(DEBUG, "WQE index ignore is %ssupported", - config->hca_attr.wqe_index_ignore ? "" : "not "); + hca_attr->wqe_index_ignore ? "" : "not "); DRV_LOG(DEBUG, "Non-wire SQ feature is %ssupported", - config->hca_attr.non_wire_sq ? "" : "not "); + hca_attr->non_wire_sq ? "" : "not "); DRV_LOG(DEBUG, "Static WQE SQ feature is %ssupported (%d)", - config->hca_attr.log_max_static_sq_wq ? "" : "not ", - config->hca_attr.log_max_static_sq_wq); + hca_attr->log_max_static_sq_wq ? "" : "not ", + hca_attr->log_max_static_sq_wq); DRV_LOG(DEBUG, "WQE rate PP mode is %ssupported", - config->hca_attr.qos.wqe_rate_pp ? "" : "not "); - if (!sh->devx) { + hca_attr->qos.wqe_rate_pp ? 
"" : "not "); + if (!sh->cdev->config.devx) { DRV_LOG(ERR, "DevX is required for packet pacing"); err = ENODEV; goto error; } - if (!config->hca_attr.qos.packet_pacing) { + if (!hca_attr->qos.packet_pacing) { DRV_LOG(ERR, "Packet pacing is not supported"); err = ENODEV; goto error; } - if (!config->hca_attr.cross_channel) { + if (!hca_attr->cross_channel) { DRV_LOG(ERR, "Cross channel operations are" " required for packet pacing"); err = ENODEV; goto error; } - if (!config->hca_attr.wqe_index_ignore) { + if (!hca_attr->wqe_index_ignore) { DRV_LOG(ERR, "WQE index ignore feature is" " required for packet pacing"); err = ENODEV; goto error; } - if (!config->hca_attr.non_wire_sq) { + if (!hca_attr->non_wire_sq) { DRV_LOG(ERR, "Non-wire SQ feature is" " required for packet pacing"); err = ENODEV; goto error; } - if (!config->hca_attr.log_max_static_sq_wq) { + if (!hca_attr->log_max_static_sq_wq) { DRV_LOG(ERR, "Static WQE SQ feature is" " required for packet pacing"); err = ENODEV; goto error; } - if (!config->hca_attr.qos.wqe_rate_pp) { + if (!hca_attr->qos.wqe_rate_pp) { DRV_LOG(ERR, "WQE rate mode is required" " for packet pacing"); err = ENODEV; @@ -1527,7 +1468,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, #endif } if (config->std_delay_drop || config->hp_delay_drop) { - if (!config->hca_attr.rq_delay_drop) { + if (!hca_attr->rq_delay_drop) { config->std_delay_drop = 0; config->hp_delay_drop = 0; DRV_LOG(WARNING, @@ -1535,34 +1476,14 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, priv->dev_port); } } - if (sh->devx) { - uint32_t reg[MLX5_ST_SZ_DW(register_mtutc)]; - - err = config->hca_attr.access_register_user ? - mlx5_devx_cmd_register_read - (sh->cdev->ctx, MLX5_REGISTER_ID_MTUTC, 0, - reg, MLX5_ST_SZ_DW(register_mtutc)) : ENOTSUP; - if (!err) { - uint32_t ts_mode; - - /* MTUTC register is read successfully. */ - ts_mode = MLX5_GET(register_mtutc, reg, - time_stamp_mode); - if (ts_mode == MLX5_MTUTC_TIMESTAMP_MODE_REAL_TIME) - config->rt_timestamp = 1; - } else { - /* Kernel does not support register reading. */ - if (config->hca_attr.dev_freq_khz == - (NS_PER_S / MS_PER_S)) - config->rt_timestamp = 1; - } - } + if (sh->cdev->config.devx) + mlx5_rt_timestamp_config(sh, config, hca_attr); /* * If HW has bug working with tunnel packet decapsulation and * scatter FCS, and decapsulation is needed, clear the hw_fcs_strip * bit. Then RTE_ETH_RX_OFFLOAD_KEEP_CRC bit will not be set anymore. */ - if (config->hca_attr.scatter_fcs_w_decap_disable && config->decap_en) + if (hca_attr->scatter_fcs_w_decap_disable && config->decap_en) config->hw_fcs_strip = 0; DRV_LOG(DEBUG, "FCS stripping configuration is %ssupported", (config->hw_fcs_strip ? 
"" : "not ")); @@ -1693,7 +1614,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, if (mlx5_flex_item_port_init(eth_dev) < 0) goto error; } - if (sh->devx && config->dv_flow_en && config->dest_tir) { + if (sh->cdev->config.devx && config->dv_flow_en && config->dest_tir) { priv->obj_ops = devx_obj_ops; mlx5_queue_counter_id_prepare(eth_dev); priv->obj_ops.lb_dummy_queue_create = @@ -2049,6 +1970,7 @@ mlx5_os_config_default(struct mlx5_dev_config *config, { memset(config, 0, sizeof(*config)); config->mps = MLX5_ARG_UNSET; + config->cqe_comp = 1; config->rx_vec_en = 1; config->txq_inline_max = MLX5_ARG_UNSET; config->txq_inline_min = MLX5_ARG_UNSET; @@ -2119,7 +2041,6 @@ mlx5_os_pci_probe_pf(struct mlx5_common_device *cdev, struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(cdev->dev); struct mlx5_dev_spawn_data *list = NULL; struct mlx5_dev_config dev_config; - unsigned int dev_config_vf; struct rte_eth_devargs eth_da = *req_eth_da; struct rte_pci_addr owner_pci = pci_dev->addr; /* Owner PF. */ struct mlx5_bond_info bond_info; @@ -2440,21 +2361,6 @@ mlx5_os_pci_probe_pf(struct mlx5_common_device *cdev, * (i.e. master first, then representors from lowest to highest ID). */ qsort(list, ns, sizeof(*list), mlx5_dev_spawn_data_cmp); - /* Device specific configuration. */ - switch (pci_dev->id.device_id) { - case PCI_DEVICE_ID_MELLANOX_CONNECTX4VF: - case PCI_DEVICE_ID_MELLANOX_CONNECTX4LXVF: - case PCI_DEVICE_ID_MELLANOX_CONNECTX5VF: - case PCI_DEVICE_ID_MELLANOX_CONNECTX5EXVF: - case PCI_DEVICE_ID_MELLANOX_CONNECTX5BFVF: - case PCI_DEVICE_ID_MELLANOX_CONNECTX6VF: - case PCI_DEVICE_ID_MELLANOX_CONNECTXVF: - dev_config_vf = 1; - break; - default: - dev_config_vf = 0; - break; - } if (eth_da.type != RTE_ETH_REPRESENTOR_NONE) { /* Set devargs default values. */ if (eth_da.nb_mh_controllers == 0) { @@ -2478,7 +2384,7 @@ mlx5_os_pci_probe_pf(struct mlx5_common_device *cdev, /* Default configuration. */ mlx5_os_config_default(&dev_config, &cdev->config); - dev_config.vf = dev_config_vf; + dev_config.vf = mlx5_dev_is_vf_pci(pci_dev); list[i].eth_dev = mlx5_dev_spawn(cdev->dev, &list[i], &dev_config, ð_da); if (!list[i].eth_dev) { @@ -2751,7 +2657,7 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh) rte_intr_fd_set(sh->intr_handle, -1); } } - if (sh->devx) { + if (sh->cdev->config.devx) { #ifdef HAVE_IBV_DEVX_ASYNC sh->intr_handle_devx = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED); diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c index 2b6eef44a7..722017efa4 100644 --- a/drivers/net/mlx5/linux/mlx5_verbs.c +++ b/drivers/net/mlx5/linux/mlx5_verbs.c @@ -998,7 +998,7 @@ mlx5_txq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx) qp.comp_mask = MLX5DV_QP_MASK_UAR_MMAP_OFFSET; #ifdef HAVE_IBV_FLOW_DV_SUPPORT /* If using DevX, need additional mask to read tisn value. */ - if (priv->sh->devx && !priv->sh->tdn) + if (priv->sh->cdev->config.devx && !priv->sh->tdn) qp.comp_mask |= MLX5DV_QP_MASK_RAW_QP_HANDLES; #endif obj.cq.in = txq_obj->cq; @@ -1042,7 +1042,7 @@ mlx5_txq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx) * This is done once per port. * Will use this value on Rx, when creating matching TIR. 
*/ - if (priv->sh->devx && !priv->sh->tdn) { + if (priv->sh->cdev->config.devx && !priv->sh->tdn) { ret = mlx5_devx_cmd_qp_query_tis_td(txq_obj->qp, qp.tisn, &priv->sh->tdn); if (ret) { diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 81d373bc17..cce4d4448c 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -513,6 +513,46 @@ mlx5_flow_aging_init(struct mlx5_dev_ctx_shared *sh) } } +/** + * DV flow counter mode detect and config. + * + * @param dev + * Pointer to rte_eth_dev structure. + * + */ +void +mlx5_flow_counter_mode_config(struct rte_eth_dev *dev __rte_unused) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_dev_ctx_shared *sh = priv->sh; + struct mlx5_hca_attr *hca_attr = &sh->cdev->config.hca_attr; + bool fallback; + +#ifndef HAVE_IBV_DEVX_ASYNC + fallback = true; +#else + fallback = false; + if (!sh->cdev->config.devx || !priv->config.dv_flow_en || + !hca_attr->flow_counters_dump || + !(hca_attr->flow_counter_bulk_alloc_bitmap & 0x4) || + (mlx5_flow_dv_discover_counter_offset_support(dev) == -ENOTSUP)) + fallback = true; +#endif + if (fallback) + DRV_LOG(INFO, "Use fall-back DV counter management. Flow " + "counter dump:%d, bulk_alloc_bitmap:0x%hhx.", + hca_attr->flow_counters_dump, + hca_attr->flow_counter_bulk_alloc_bitmap); + /* Initialize fallback mode only on the port initializes sh. */ + if (sh->refcnt == 1) + sh->cmng.counter_fallback = fallback; + else if (fallback != sh->cmng.counter_fallback) + DRV_LOG(WARNING, "Port %d in sh has different fallback mode " + "with others:%d.", PORT_ID(priv), fallback); +#endif +} + /** * Initialize the counters management structure. * @@ -889,7 +929,7 @@ mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev) uint32_t ids[8]; int ret; - if (!priv->config.hca_attr.parse_graph_flex_node) { + if (!priv->sh->cdev->config.hca_attr.parse_graph_flex_node) { DRV_LOG(ERR, "Dynamic flex parser is not supported " "for device %s.", priv->dev_data->name); return -ENOTSUP; @@ -1129,6 +1169,43 @@ mlx5_setup_tis(struct mlx5_dev_ctx_shared *sh) return 0; } +/** + * Configure realtime timestamp format. + * + * @param sh + * Pointer to mlx5_dev_ctx_shared object. + * @param config + * Device configuration parameters. + * @param hca_attr + * Pointer to DevX HCA capabilities structure. + */ +void +mlx5_rt_timestamp_config(struct mlx5_dev_ctx_shared *sh, + struct mlx5_dev_config *config, + struct mlx5_hca_attr *hca_attr) +{ + uint32_t dw_cnt = MLX5_ST_SZ_DW(register_mtutc); + uint32_t reg[dw_cnt]; + int ret = ENOTSUP; + + if (hca_attr->access_register_user) + ret = mlx5_devx_cmd_register_read(sh->cdev->ctx, + MLX5_REGISTER_ID_MTUTC, 0, + reg, dw_cnt); + if (!ret) { + uint32_t ts_mode; + + /* MTUTC register is read successfully. */ + ts_mode = MLX5_GET(register_mtutc, reg, time_stamp_mode); + if (ts_mode == MLX5_MTUTC_TIMESTAMP_MODE_REAL_TIME) + config->rt_timestamp = 1; + } else { + /* Kernel does not support register reading. */ + if (hca_attr->dev_freq_khz == (NS_PER_S / MS_PER_S)) + config->rt_timestamp = 1; + } +} + /** * Allocate shared device context. 
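
mlx5_rt_timestamp_config(), factored out above, preserves the original two-step detection: read the MTUTC register when the kernel allows user access, otherwise fall back to the device-frequency heuristic. A self-contained sketch of that control flow, with a stubbed register read and illustrative constants:

    /*
     * Sketch of the detection order in mlx5_rt_timestamp_config(); the
     * stubbed read stands in for mlx5_devx_cmd_register_read().
     */
    #include <errno.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define EXAMPLE_TS_MODE_REAL_TIME 1
    #define EXAMPLE_RT_CLOCK_KHZ 1000000u /* NS_PER_S / MS_PER_S */

    /* Stand-in for the privileged MTUTC register read. */
    static int
    example_read_mtutc(bool user_access, uint32_t *ts_mode)
    {
        if (!user_access)
            return ENOTSUP; /* kernel rejects the register read */
        *ts_mode = EXAMPLE_TS_MODE_REAL_TIME;
        return 0;
    }

    static bool
    example_rt_timestamp(bool user_access, uint32_t dev_freq_khz)
    {
        uint32_t ts_mode;

        if (example_read_mtutc(user_access, &ts_mode) == 0)
            return ts_mode == EXAMPLE_TS_MODE_REAL_TIME;
        /* Register unreadable: infer the mode from the reported clock. */
        return dev_freq_khz == EXAMPLE_RT_CLOCK_KHZ;
    }

    int
    main(void)
    {
        printf("rt=%d\n", example_rt_timestamp(false, EXAMPLE_RT_CLOCK_KHZ));
        return 0;
    }
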
If there is multiport device the * master and representors will share this context, if there is single @@ -1182,7 +1259,6 @@ mlx5_alloc_shared_dev_ctx(const struct mlx5_dev_spawn_data *spawn, pthread_mutex_init(&sh->txpp.mutex, NULL); sh->numa_node = spawn->cdev->dev->numa_node; sh->cdev = spawn->cdev; - sh->devx = sh->cdev->config.devx; if (spawn->bond_info) sh->bond = *spawn->bond_info; err = mlx5_os_get_dev_attr(sh->cdev, &sh->device_attr); @@ -1205,7 +1281,7 @@ mlx5_alloc_shared_dev_ctx(const struct mlx5_dev_spawn_data *spawn, sh->port[i].ih_port_id = RTE_MAX_ETHPORTS; sh->port[i].devx_ih_port_id = RTE_MAX_ETHPORTS; } - if (sh->devx) { + if (sh->cdev->config.devx) { sh->td = mlx5_devx_cmd_create_td(sh->cdev->ctx); if (!sh->td) { DRV_LOG(ERR, "TD allocation failure"); @@ -2035,6 +2111,8 @@ void mlx5_set_min_inline(struct mlx5_dev_spawn_data *spawn, struct mlx5_dev_config *config) { + struct mlx5_hca_attr *hca_attr = &spawn->cdev->config.hca_attr; + if (config->txq_inline_min != MLX5_ARG_UNSET) { /* Application defines size of inlined data explicitly. */ if (spawn->pci_dev != NULL) { @@ -2054,9 +2132,9 @@ mlx5_set_min_inline(struct mlx5_dev_spawn_data *spawn, } goto exit; } - if (config->hca_attr.eth_net_offloads) { + if (hca_attr->eth_net_offloads) { /* We have DevX enabled, inline mode queried successfully. */ - switch (config->hca_attr.wqe_inline_mode) { + switch (hca_attr->wqe_inline_mode) { case MLX5_CAP_INLINE_MODE_L2: /* outer L2 header must be inlined. */ config->txq_inline_min = MLX5_INLINE_HSIZE_L2; @@ -2065,14 +2143,14 @@ mlx5_set_min_inline(struct mlx5_dev_spawn_data *spawn, /* No inline data are required by NIC. */ config->txq_inline_min = MLX5_INLINE_HSIZE_NONE; config->hw_vlan_insert = - config->hca_attr.wqe_vlan_insert; + hca_attr->wqe_vlan_insert; DRV_LOG(DEBUG, "Tx VLAN insertion is supported"); goto exit; case MLX5_CAP_INLINE_MODE_VPORT_CONTEXT: /* inline mode is defined by NIC vport context. */ - if (!config->hca_attr.eth_virt) + if (!hca_attr->eth_virt) break; - switch (config->hca_attr.vport_inline_mode) { + switch (hca_attr->vport_inline_mode) { case MLX5_INLINE_MODE_NONE: config->txq_inline_min = MLX5_INLINE_HSIZE_NONE; @@ -2216,25 +2294,26 @@ rte_pmd_mlx5_get_dyn_flag_names(char *names[], unsigned int n) } /** - * Comparison callback to sort device data. + * Check sibling device configurations. * - * This is meant to be used with qsort(). + * Sibling devices sharing the Infiniband device context should have compatible + * configurations. This regards representors and bonding slaves. * - * @param a[in] - * Pointer to pointer to first data object. - * @param b[in] - * Pointer to pointer to second data object. + * @param sh + * Shared device context. + * @param config + * Configuration of the device is going to be created. + * @param dpdk_dev + * Backing DPDK device. * * @return - * 0 if both objects are equal, less than 0 if the first argument is less - * than the second, greater than 0 otherwise. 
+ * 0 on success, EINVAL otherwise */ int -mlx5_dev_check_sibling_config(struct mlx5_priv *priv, +mlx5_dev_check_sibling_config(struct mlx5_dev_ctx_shared *sh, struct mlx5_dev_config *config, struct rte_device *dpdk_dev) { - struct mlx5_dev_ctx_shared *sh = priv->sh; struct mlx5_dev_config *sh_conf = NULL; uint16_t port_id; @@ -2247,7 +2326,7 @@ mlx5_dev_check_sibling_config(struct mlx5_priv *priv, struct mlx5_priv *opriv = rte_eth_devices[port_id].data->dev_private; - if (opriv && opriv != priv && opriv->sh == sh) { + if (opriv && opriv->sh == sh) { sh_conf = &opriv->config; break; } diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index c01fb9566e..874ac36071 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -299,7 +299,6 @@ struct mlx5_dev_config { int txq_inline_mpw; /* Max packet size for inlining with eMPW. */ int tx_pp; /* Timestamp scheduling granularity in nanoseconds. */ int tx_skew; /* Tx scheduling skew between WQE and data on wire. */ - struct mlx5_hca_attr hca_attr; /* HCA attributes. */ struct mlx5_lro_config lro; /* LRO configuration. */ }; @@ -1147,7 +1146,6 @@ struct mlx5_flex_item { struct mlx5_dev_ctx_shared { LIST_ENTRY(mlx5_dev_ctx_shared) next; uint32_t refcnt; - uint32_t devx:1; /* Opened with DV. */ uint32_t flow_hit_aso_en:1; /* Flow Hit ASO is supported. */ uint32_t steering_format_version:4; /* Indicates the device steering logic format. */ @@ -1518,6 +1516,9 @@ void mlx5_age_event_prepare(struct mlx5_dev_ctx_shared *sh); port_id < RTE_MAX_ETHPORTS; \ port_id = mlx5_eth_find_next(port_id + 1, dev)) int mlx5_args(struct mlx5_dev_config *config, struct rte_devargs *devargs); +void mlx5_rt_timestamp_config(struct mlx5_dev_ctx_shared *sh, + struct mlx5_dev_config *config, + struct mlx5_hca_attr *hca_attr); struct mlx5_dev_ctx_shared * mlx5_alloc_shared_dev_ctx(const struct mlx5_dev_spawn_data *spawn, const struct mlx5_dev_config *config); @@ -1528,7 +1529,7 @@ int mlx5_alloc_table_hash_list(struct mlx5_priv *priv); void mlx5_set_min_inline(struct mlx5_dev_spawn_data *spawn, struct mlx5_dev_config *config); void mlx5_set_metadata_mask(struct rte_eth_dev *dev); -int mlx5_dev_check_sibling_config(struct mlx5_priv *priv, +int mlx5_dev_check_sibling_config(struct mlx5_dev_ctx_shared *sh, struct mlx5_dev_config *config, struct rte_device *dpdk_dev); int mlx5_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info); @@ -1538,6 +1539,7 @@ int mlx5_hairpin_cap_get(struct rte_eth_dev *dev, struct rte_eth_hairpin_cap *cap); bool mlx5_flex_parser_ecpri_exist(struct rte_eth_dev *dev); int mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev); +void mlx5_flow_counter_mode_config(struct rte_eth_dev *dev); int mlx5_flow_aso_age_mng_init(struct mlx5_dev_ctx_shared *sh); int mlx5_aso_flow_mtrs_mng_init(struct mlx5_dev_ctx_shared *sh); int mlx5_flow_aso_ct_mng_init(struct mlx5_dev_ctx_shared *sh); diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c index 91243f684f..97c8925044 100644 --- a/drivers/net/mlx5/mlx5_devx.c +++ b/drivers/net/mlx5/mlx5_devx.c @@ -419,7 +419,8 @@ mlx5_rxq_obj_hairpin_new(struct mlx5_rxq_priv *rxq) MLX5_ASSERT(rxq != NULL && rxq->ctrl != NULL && tmpl != NULL); tmpl->rxq_ctrl = rxq_ctrl; attr.hairpin = 1; - max_wq_data = priv->config.hca_attr.log_max_hairpin_wq_data_sz; + max_wq_data = + priv->sh->cdev->config.hca_attr.log_max_hairpin_wq_data_sz; /* Jumbo frames > 9KB should be supported, and more packets. 
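
The reworked mlx5_dev_check_sibling_config() receives the shared context directly and compares the incoming configuration against any already-probed port on the same context. A minimal standalone sketch of that walk; the structures and fields are illustrative, not the driver's:

    #include <stddef.h>
    #include <stdio.h>

    struct example_config { int dv_flow_en; unsigned int dv_xmeta_en; };
    struct example_port { const void *sh; struct example_config config; };

    /* Ports sharing one device context must agree on these settings. */
    static int
    example_check_sibling_config(const struct example_port ports[], size_t n,
                                 const void *sh,
                                 const struct example_config *cfg)
    {
        size_t i;

        for (i = 0; i < n; i++) {
            if (ports[i].sh != sh)
                continue; /* different device context */
            if (ports[i].config.dv_flow_en != cfg->dv_flow_en ||
                ports[i].config.dv_xmeta_en != cfg->dv_xmeta_en)
                return -1; /* EINVAL in the real driver */
            break; /* one sibling suffices, they are all consistent */
        }
        return 0;
    }

    int
    main(void)
    {
        int ctx; /* stands in for the shared context address */
        struct example_port ports[1] = {
            { .sh = &ctx, .config = { .dv_flow_en = 1, .dv_xmeta_en = 0 } },
        };
        struct example_config newcfg = { .dv_flow_en = 0, .dv_xmeta_en = 0 };

        printf("rc=%d\n",
               example_check_sibling_config(ports, 1, &ctx, &newcfg));
        return 0;
    }
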
*/ if (priv->config.log_hp_size != (uint32_t)MLX5_ARG_UNSET) { if (priv->config.log_hp_size > max_wq_data) { @@ -1117,7 +1118,8 @@ mlx5_txq_obj_hairpin_new(struct rte_eth_dev *dev, uint16_t idx) tmpl->txq_ctrl = txq_ctrl; attr.hairpin = 1; attr.tis_lst_sz = 1; - max_wq_data = priv->config.hca_attr.log_max_hairpin_wq_data_sz; + max_wq_data = + priv->sh->cdev->config.hca_attr.log_max_hairpin_wq_data_sz; /* Jumbo frames > 9KB should be supported, and more packets. */ if (priv->config.log_hp_size != (uint32_t)MLX5_ARG_UNSET) { if (priv->config.log_hp_size > max_wq_data) { @@ -1193,7 +1195,7 @@ mlx5_txq_create_devx_sq_resources(struct rte_eth_dev *dev, uint16_t idx, struct mlx5_devx_create_sq_attr sq_attr = { .flush_in_error_en = 1, .allow_multi_pkt_send_wqe = !!priv->config.mps, - .min_wqe_inline_mode = priv->config.hca_attr.vport_inline_mode, + .min_wqe_inline_mode = cdev->config.hca_attr.vport_inline_mode, .allow_swp = !!priv->config.swp, .cqn = txq_obj->cq_obj.cq->id, .tis_lst_sz = 1, diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c index dc647d5580..801c467bba 100644 --- a/drivers/net/mlx5/mlx5_ethdev.c +++ b/drivers/net/mlx5/mlx5_ethdev.c @@ -337,7 +337,7 @@ mlx5_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info) info->flow_type_rss_offloads = ~MLX5_RSS_HF_MASK; mlx5_set_default_params(dev, info); mlx5_set_txlimit_params(dev, info); - if (priv->config.hca_attr.mem_rq_rmp && + if (priv->sh->cdev->config.hca_attr.mem_rq_rmp && priv->obj_ops.rxq_obj_new == devx_obj_ops.rxq_obj_new) info->dev_capa |= RTE_ETH_DEV_CAPA_RXQ_SHARE; info->switch_info.name = dev->data->name; @@ -723,7 +723,8 @@ mlx5_hairpin_cap_get(struct rte_eth_dev *dev, struct rte_eth_hairpin_cap *cap) struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_dev_config *config = &priv->config; - if (!priv->sh->devx || !config->dest_tir || !config->dv_flow_en) { + if (!priv->sh->cdev->config.devx || !config->dest_tir || + !config->dv_flow_en) { rte_errno = ENOTSUP; return -rte_errno; } diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index f34e4b88aa..d15407e8f6 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -2893,7 +2893,7 @@ mlx5_flow_validate_item_geneve(const struct rte_flow_item *item, const struct rte_flow_item_geneve *mask = item->mask; int ret; uint16_t gbhdr; - uint8_t opt_len = priv->config.hca_attr.geneve_max_opt_len ? + uint8_t opt_len = priv->sh->cdev->config.hca_attr.geneve_max_opt_len ? 
MLX5_GENEVE_OPT_LEN_1 : MLX5_GENEVE_OPT_LEN_0; const struct rte_flow_item_geneve nic_mask = { .ver_opt_len_o_c_rsvd0 = RTE_BE16(0x3f80), @@ -2901,7 +2901,7 @@ mlx5_flow_validate_item_geneve(const struct rte_flow_item *item, .protocol = RTE_BE16(UINT16_MAX), }; - if (!priv->config.hca_attr.tunnel_stateless_geneve_rx) + if (!priv->sh->cdev->config.hca_attr.tunnel_stateless_geneve_rx) return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, item, "L3 Geneve is not enabled by device" @@ -2981,10 +2981,9 @@ mlx5_flow_validate_item_geneve_opt(const struct rte_flow_item *item, struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_dev_ctx_shared *sh = priv->sh; struct mlx5_geneve_tlv_option_resource *geneve_opt_resource; - struct mlx5_hca_attr *hca_attr = &priv->config.hca_attr; + struct mlx5_hca_attr *hca_attr = &sh->cdev->config.hca_attr; uint8_t data_max_supported = hca_attr->max_geneve_tlv_option_data_len * 4; - struct mlx5_dev_config *config = &priv->config; const struct rte_flow_item_geneve *geneve_spec; const struct rte_flow_item_geneve *geneve_mask; const struct rte_flow_item_geneve_opt *spec = item->spec; @@ -3018,11 +3017,11 @@ mlx5_flow_validate_item_geneve_opt(const struct rte_flow_item *item, "Geneve TLV opt class/type/length masks must be full"); /* Check if length is supported */ if ((uint32_t)spec->option_len > - config->hca_attr.max_geneve_tlv_option_data_len) + hca_attr->max_geneve_tlv_option_data_len) return rte_flow_error_set (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, item, "Geneve TLV opt length not supported"); - if (config->hca_attr.max_geneve_tlv_options > 1) + if (hca_attr->max_geneve_tlv_options > 1) DRV_LOG(DEBUG, "max_geneve_tlv_options supports more than 1 option"); /* Check GENEVE item preceding. */ @@ -3077,7 +3076,7 @@ mlx5_flow_validate_item_geneve_opt(const struct rte_flow_item *item, "Data mask is of unsupported size"); } /* Check GENEVE option is supported in NIC. */ - if (!config->hca_attr.geneve_tlv_opt) + if (!hca_attr->geneve_tlv_opt) return rte_flow_error_set (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, item, "Geneve TLV opt not supported"); @@ -6232,7 +6231,8 @@ flow_create_split_sample(struct rte_eth_dev *dev, * When reg_c_preserve is set, metadata registers Cx preserve * their value even through packet duplication. 
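
Most of the flow-validation changes in this region are mechanical: capability reads move from the per-port copy priv->config.hca_attr to the single instance owned by the common device, priv->sh->cdev->config.hca_attr. A toy sketch of that single-source-of-truth layout, with invented types:

    /*
     * Sketch of replacing per-port capability copies with one shared,
     * query-once structure owned by the common device. Types invented.
     */
    #include <stdbool.h>
    #include <stdio.h>

    struct example_hca_attr { bool geneve_tlv_opt; };
    struct example_common_dev { struct example_hca_attr hca_attr; };
    struct example_port {
        const struct example_common_dev *cdev; /* shared, not copied */
    };

    static bool
    example_validate_geneve_opt(const struct example_port *port)
    {
        /* Every port consults the same cached capabilities. */
        return port->cdev->hca_attr.geneve_tlv_opt;
    }

    int
    main(void)
    {
        struct example_common_dev cdev = { .hca_attr = { true } };
        struct example_port p0 = { .cdev = &cdev }, p1 = { .cdev = &cdev };

        printf("%d %d\n", example_validate_geneve_opt(&p0),
               example_validate_geneve_opt(&p1));
        return 0;
    }
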
*/ - add_tag = (!fdb_tx || priv->config.hca_attr.reg_c_preserve); + add_tag = (!fdb_tx || + priv->sh->cdev->config.hca_attr.reg_c_preserve); if (add_tag) sfx_items = (struct rte_flow_item *)((char *)sfx_actions + act_size); @@ -9948,7 +9948,7 @@ mlx5_flow_discover_priorities(struct rte_eth_dev *dev) type = mlx5_flow_os_get_type(); if (type == MLX5_FLOW_TYPE_MAX) { type = MLX5_FLOW_TYPE_VERBS; - if (priv->sh->devx && priv->config.dv_flow_en) + if (priv->sh->cdev->config.devx && priv->config.dv_flow_en) type = MLX5_FLOW_TYPE_DV; } fops = flow_get_drv_ops(type); diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 1c6cae8779..be48eb0b1b 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -2331,7 +2331,7 @@ flow_dv_validate_item_gtp(struct rte_eth_dev *dev, .teid = RTE_BE32(0xffffffff), }; - if (!priv->config.hca_attr.tunnel_stateless_gtp) + if (!priv->sh->cdev->config.hca_attr.tunnel_stateless_gtp) return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, item, "GTP support is not enabled"); @@ -2440,6 +2440,7 @@ flow_dv_validate_item_ipv4(struct rte_eth_dev *dev, { int ret; struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_hca_attr *attr = &priv->sh->cdev->config.hca_attr; const struct rte_flow_item_ipv4 *spec = item->spec; const struct rte_flow_item_ipv4 *last = item->last; const struct rte_flow_item_ipv4 *mask = item->mask; @@ -2458,8 +2459,8 @@ flow_dv_validate_item_ipv4(struct rte_eth_dev *dev, if (mask && (mask->hdr.version_ihl & RTE_IPV4_HDR_IHL_MASK)) { int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL); - bool ihl_cap = !tunnel ? priv->config.hca_attr.outer_ipv4_ihl : - priv->config.hca_attr.inner_ipv4_ihl; + bool ihl_cap = !tunnel ? + attr->outer_ipv4_ihl : attr->inner_ipv4_ihl; if (!ihl_cap) return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, @@ -3304,7 +3305,7 @@ flow_dv_validate_action_count(struct rte_eth_dev *dev, bool shared, { struct mlx5_priv *priv = dev->data->dev_private; - if (!priv->sh->devx) + if (!priv->sh->cdev->config.devx) goto notsup_err; if (action_flags & MLX5_FLOW_ACTION_COUNT) return rte_flow_error_set(error, EINVAL, @@ -3398,7 +3399,7 @@ flow_dv_validate_action_decap(struct rte_eth_dev *dev, { const struct mlx5_priv *priv = dev->data->dev_private; - if (priv->config.hca_attr.scatter_fcs_w_decap_disable && + if (priv->sh->cdev->config.hca_attr.scatter_fcs_w_decap_disable && !priv->config.decap_en) return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, NULL, @@ -5311,8 +5312,8 @@ flow_dv_validate_action_age(uint64_t action_flags, struct mlx5_priv *priv = dev->data->dev_private; const struct rte_flow_action_age *age = action->conf; - if (!priv->sh->devx || (priv->sh->cmng.counter_fallback && - !priv->sh->aso_age_mng)) + if (!priv->sh->cdev->config.devx || + (priv->sh->cmng.counter_fallback && !priv->sh->aso_age_mng)) return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, @@ -5596,7 +5597,8 @@ flow_dv_validate_action_sample(uint64_t *action_flags, return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, action, "ratio value starts from 1"); - if (!priv->sh->devx || (sample->ratio > 0 && !priv->sampler_en)) + if (!priv->sh->cdev->config.devx || + (sample->ratio > 0 && !priv->sampler_en)) return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, @@ -5763,7 +5765,7 @@ flow_dv_validate_action_sample(uint64_t *action_flags, NULL, "E-Switch must has a dest " "port for mirroring"); - if 
(!priv->config.hca_attr.reg_c_preserve && + if (!priv->sh->cdev->config.hca_attr.reg_c_preserve && priv->representor_id != UINT16_MAX) *fdb_mirror_limit = 1; } @@ -6184,7 +6186,7 @@ flow_dv_counter_alloc(struct rte_eth_dev *dev, uint32_t age) age ? MLX5_COUNTER_TYPE_AGE : MLX5_COUNTER_TYPE_ORIGIN; uint32_t cnt_idx; - if (!priv->sh->devx) { + if (!priv->sh->cdev->config.devx) { rte_errno = ENOTSUP; return 0; } @@ -6507,7 +6509,7 @@ flow_dv_mtr_alloc(struct rte_eth_dev *dev) struct mlx5_aso_mtr_pool *pool; uint32_t mtr_idx = 0; - if (!priv->sh->devx) { + if (!priv->sh->cdev->config.devx) { rte_errno = ENOTSUP; return 0; } @@ -6696,7 +6698,7 @@ flow_dv_validate_item_integrity(struct rte_eth_dev *dev, const struct rte_flow_item_integrity *spec = (typeof(spec)) integrity_item->spec; - if (!priv->config.hca_attr.pkt_integrity_match) + if (!priv->sh->cdev->config.hca_attr.pkt_integrity_match) return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, integrity_item, @@ -12524,7 +12526,7 @@ flow_dv_aso_ct_alloc(struct rte_eth_dev *dev, struct rte_flow_error *error) uint32_t ct_idx; MLX5_ASSERT(mng); - if (!priv->sh->devx) { + if (!priv->sh->cdev->config.devx) { rte_errno = ENOTSUP; return 0; } @@ -12962,7 +12964,7 @@ flow_dv_translate(struct rte_eth_dev *dev, } break; case RTE_FLOW_ACTION_TYPE_COUNT: - if (!priv->sh->devx) { + if (!priv->sh->cdev->config.devx) { return rte_flow_error_set (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, @@ -15841,7 +15843,7 @@ flow_dv_query_count(struct rte_eth_dev *dev, uint32_t cnt_idx, void *data, struct mlx5_priv *priv = dev->data->dev_private; struct rte_flow_query_count *qc = data; - if (!priv->sh->devx) + if (!priv->sh->cdev->config.devx) return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, @@ -15894,7 +15896,7 @@ flow_dv_query_count_ptr(struct rte_eth_dev *dev, uint32_t cnt_idx, { struct mlx5_priv *priv = dev->data->dev_private; - if (!priv->sh->devx || !action_ptr) + if (!priv->sh->cdev->config.devx || !action_ptr) return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, @@ -17496,7 +17498,7 @@ flow_dv_counter_query(struct rte_eth_dev *dev, uint32_t counter, bool clear, uint64_t inn_pkts, inn_bytes; int ret; - if (!priv->sh->devx) + if (!priv->sh->cdev->config.devx) return -1; ret = _flow_dv_query_count(dev, counter, &inn_pkts, &inn_bytes); diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c index 64867dc9e2..54bc8aef79 100644 --- a/drivers/net/mlx5/mlx5_flow_flex.c +++ b/drivers/net/mlx5/mlx5_flow_flex.c @@ -910,7 +910,7 @@ mlx5_flex_translate_sample(struct mlx5_hca_flex_attr *attr, * offsets in any order. * * Gather all similar fields together, build array of bit intervals - * in asсending order and try to cover with the smallest set of sample + * in ascending order and try to cover with the smallest set of sample registers.
*/ memset(&cover, 0, sizeof(cover)); @@ -1153,7 +1153,7 @@ mlx5_flex_translate_conf(struct rte_eth_dev *dev, struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_hca_flex_attr *attr = &priv->config.hca_attr.flex; + struct mlx5_hca_flex_attr *attr = &priv->sh->cdev->config.hca_attr.flex; int ret; ret = mlx5_flex_translate_length(attr, conf, devx, error); diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c index f4a7b697e6..2f91c0074e 100644 --- a/drivers/net/mlx5/mlx5_flow_meter.c +++ b/drivers/net/mlx5/mlx5_flow_meter.c @@ -155,7 +155,7 @@ mlx5_flow_meter_profile_validate(struct rte_eth_dev *dev, "Meter profile already exists."); if (!priv->sh->meter_aso_en) { /* Old version is even not supported. */ - if (!priv->config.hca_attr.qos.flow_meter_old) + if (!priv->sh->cdev->config.hca_attr.qos.flow_meter_old) return -rte_mtr_error_set(error, ENOTSUP, RTE_MTR_ERROR_TYPE_METER_PROFILE, NULL, "Metering is not supported."); @@ -426,7 +426,7 @@ mlx5_flow_mtr_cap_get(struct rte_eth_dev *dev, struct rte_mtr_error *error __rte_unused) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_hca_qos_attr *qattr = &priv->config.hca_attr.qos; + struct mlx5_hca_qos_attr *qattr = &priv->sh->cdev->config.hca_attr.qos; if (!priv->mtr_en) return -rte_mtr_error_set(error, ENOTSUP, diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index 38273463b9..62561eb335 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -861,7 +861,7 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, MLX5_ASSERT(n_seg < MLX5_MAX_RXQ_NSEG); } if (conf->share_group > 0) { - if (!priv->config.hca_attr.mem_rq_rmp) { + if (!priv->sh->cdev->config.hca_attr.mem_rq_rmp) { DRV_LOG(ERR, "port %u queue index %u shared Rx queue not supported by fw", dev->data->port_id, idx); rte_errno = EINVAL; @@ -1515,7 +1515,7 @@ mlx5_max_lro_msg_size_adjust(struct rte_eth_dev *dev, uint16_t idx, { struct mlx5_priv *priv = dev->data->dev_private; - if (priv->config.hca_attr.lro_max_msg_sz_mode == + if (priv->sh->cdev->config.hca_attr.lro_max_msg_sz_mode == MLX5_LRO_MAX_MSG_SIZE_START_FROM_L4 && max_lro_size > MLX5_MAX_TCP_HDR_OFFSET) max_lro_size -= MLX5_MAX_TCP_HDR_OFFSET; diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c index 74c9c0a4ff..1dfe7da435 100644 --- a/drivers/net/mlx5/mlx5_trigger.c +++ b/drivers/net/mlx5/mlx5_trigger.c @@ -341,14 +341,16 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev) sq_attr.state = MLX5_SQC_STATE_RDY; sq_attr.sq_state = MLX5_SQC_STATE_RST; sq_attr.hairpin_peer_rq = rq->id; - sq_attr.hairpin_peer_vhca = priv->config.hca_attr.vhca_id; + sq_attr.hairpin_peer_vhca = + priv->sh->cdev->config.hca_attr.vhca_id; ret = mlx5_devx_cmd_modify_sq(sq, &sq_attr); if (ret) goto error; rq_attr.state = MLX5_SQC_STATE_RDY; rq_attr.rq_state = MLX5_SQC_STATE_RST; rq_attr.hairpin_peer_sq = sq->id; - rq_attr.hairpin_peer_vhca = priv->config.hca_attr.vhca_id; + rq_attr.hairpin_peer_vhca = + priv->sh->cdev->config.hca_attr.vhca_id; ret = mlx5_devx_cmd_modify_rq(rq, &rq_attr); if (ret) goto error; @@ -425,7 +427,7 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue, return -rte_errno; } peer_info->qp_id = txq_ctrl->obj->sq->id; - peer_info->vhca_id = priv->config.hca_attr.vhca_id; + peer_info->vhca_id = priv->sh->cdev->config.hca_attr.vhca_id; /* 1-to-1 mapping, only the first one is used. 
*/ peer_info->peer_q = txq_ctrl->hairpin_conf.peers[0].queue; peer_info->tx_explicit = txq_ctrl->hairpin_conf.tx_explicit; @@ -455,7 +457,7 @@ mlx5_hairpin_queue_peer_update(struct rte_eth_dev *dev, uint16_t peer_queue, return -rte_errno; } peer_info->qp_id = rxq_ctrl->obj->rq->id; - peer_info->vhca_id = priv->config.hca_attr.vhca_id; + peer_info->vhca_id = priv->sh->cdev->config.hca_attr.vhca_id; peer_info->peer_q = rxq->hairpin_conf.peers[0].queue; peer_info->tx_explicit = rxq->hairpin_conf.tx_explicit; peer_info->manual_bind = rxq->hairpin_conf.manual_bind; @@ -817,7 +819,7 @@ mlx5_hairpin_bind_single_port(struct rte_eth_dev *dev, uint16_t rx_port) /* Pass TxQ's information to peer RxQ and try binding. */ cur.peer_q = rx_queue; cur.qp_id = txq_ctrl->obj->sq->id; - cur.vhca_id = priv->config.hca_attr.vhca_id; + cur.vhca_id = priv->sh->cdev->config.hca_attr.vhca_id; cur.tx_explicit = txq_ctrl->hairpin_conf.tx_explicit; cur.manual_bind = txq_ctrl->hairpin_conf.manual_bind; /* @@ -1102,7 +1104,7 @@ mlx5_dev_start(struct rte_eth_dev *dev) dev->data->port_id, strerror(rte_errno)); goto error; } - if ((priv->sh->devx && priv->config.dv_flow_en && + if ((priv->sh->cdev->config.devx && priv->config.dv_flow_en && priv->config.dest_tir) && priv->obj_ops.lb_dummy_queue_create) { ret = priv->obj_ops.lb_dummy_queue_create(dev); if (ret) diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c index af77e91e4c..1d16ebcb41 100644 --- a/drivers/net/mlx5/mlx5_txpp.c +++ b/drivers/net/mlx5/mlx5_txpp.c @@ -825,7 +825,7 @@ mlx5_txpp_create(struct mlx5_dev_ctx_shared *sh, struct mlx5_priv *priv) sh->txpp.tick = tx_pp >= 0 ? tx_pp : -tx_pp; sh->txpp.test = !!(tx_pp < 0); sh->txpp.skew = priv->config.tx_skew; - sh->txpp.freq = priv->config.hca_attr.dev_freq_khz; + sh->txpp.freq = sh->cdev->config.hca_attr.dev_freq_khz; ret = mlx5_txpp_create_event_channel(sh); if (ret) goto exit; diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c index 37a592528b..9effbb9201 100644 --- a/drivers/net/mlx5/windows/mlx5_os.c +++ b/drivers/net/mlx5/windows/mlx5_os.c @@ -239,45 +239,6 @@ mlx5_os_set_nonblock_channel_fd(int fd) return -ENOTSUP; } -/** - * DV flow counter mode detect and config. - * - * @param dev - * Pointer to rte_eth_dev structure. - * - */ -static void -mlx5_flow_counter_mode_config(struct rte_eth_dev *dev __rte_unused) -{ -#ifdef HAVE_IBV_FLOW_DV_SUPPORT - struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_dev_ctx_shared *sh = priv->sh; - bool fallback; - -#ifndef HAVE_IBV_DEVX_ASYNC - fallback = true; -#else - fallback = false; - if (!sh->devx || !priv->config.dv_flow_en || - !priv->config.hca_attr.flow_counters_dump || - !(priv->config.hca_attr.flow_counter_bulk_alloc_bitmap & 0x4) || - (mlx5_flow_dv_discover_counter_offset_support(dev) == -ENOTSUP)) - fallback = true; -#endif - if (fallback) - DRV_LOG(INFO, "Use fall-back DV counter management. Flow " - "counter dump:%d, bulk_alloc_bitmap:0x%hhx.", - priv->config.hca_attr.flow_counters_dump, - priv->config.hca_attr.flow_counter_bulk_alloc_bitmap); - /* Initialize fallback mode only on the port initializes sh. */ - if (sh->refcnt == 1) - sh->cmng.counter_fallback = fallback; - else if (fallback != sh->cmng.counter_fallback) - DRV_LOG(WARNING, "Port %d in sh has different fallback mode " - "with others:%d.", PORT_ID(priv), fallback); -#endif -} - /** * Spawn an Ethernet device from DevX information. 
* @@ -301,11 +262,10 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, { const struct mlx5_switch_info *switch_info = &spawn->info; struct mlx5_dev_ctx_shared *sh = NULL; - struct mlx5_dev_attr device_attr; + struct mlx5_hca_attr *hca_attr; struct rte_eth_dev *eth_dev = NULL; struct mlx5_priv *priv = NULL; int err = 0; - unsigned int cqe_comp; struct rte_ether_addr mac; char name[RTE_ETH_NAME_MAX_LEN]; int own_domain_id = 0; @@ -320,11 +280,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, return NULL; } DRV_LOG(DEBUG, "naming Ethernet device \"%s\"", name); - /* - * Some parameters are needed in advance to create device context. We - * process the devargs here to get ones, and later process devargs - * again to override some hardware settings. - */ + /* Process parameters. */ err = mlx5_args(config, dpdk_dev->devargs); if (err) { err = rte_errno; @@ -335,6 +291,24 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, sh = mlx5_alloc_shared_dev_ctx(spawn, config); if (!sh) return NULL; + /* Update final values for devargs before checking sibling config. */ + config->dv_esw_en = 0; + if (!config->dv_flow_en) { + DRV_LOG(ERR, "Windows flow mode requires DV flow to be enabled."); + err = ENOTSUP; + goto error; + } + if (!config->dv_esw_en && + config->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) { + DRV_LOG(WARNING, + "Metadata mode %u is not supported (no E-Switch).", + config->dv_xmeta_en); + config->dv_xmeta_en = MLX5_XMETA_MODE_LEGACY; + } + /* Check sibling device configurations. */ + err = mlx5_dev_check_sibling_config(sh, config, dpdk_dev); + if (err) + goto error; /* Initialize the shutdown event in mlx5_dev_spawn to * support mlx5_is_removed for Windows. */ @@ -345,15 +319,12 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, goto error; } DRV_LOG(DEBUG, "MPW isn't supported"); - mlx5_os_get_dev_attr(sh->cdev, &device_attr); - config->swp = device_attr.sw_parsing_offloads & + config->swp = sh->device_attr.sw_parsing_offloads & (MLX5_SW_PARSING_CAP | MLX5_SW_PARSING_CSUM_CAP | MLX5_SW_PARSING_TSO_CAP); config->ind_table_max_size = sh->device_attr.max_rwq_indirection_table_size; - cqe_comp = 0; - config->cqe_comp = cqe_comp; - config->tunnel_en = device_attr.tunnel_offloads_caps & + config->tunnel_en = sh->device_attr.tunnel_offloads_caps & (MLX5_TUNNELED_OFFLOADS_VXLAN_CAP | MLX5_TUNNELED_OFFLOADS_GRE_CAP | MLX5_TUNNELED_OFFLOADS_GENEVE_CAP); @@ -421,26 +392,6 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, } own_domain_id = 1; } - /* Override some values set by hardware configuration. */ - mlx5_args(config, dpdk_dev->devargs); - /* Update final values for devargs before check sibling config. */ - config->dv_esw_en = 0; - if (!config->dv_flow_en) { - DRV_LOG(ERR, "Windows flow mode must be DV flow enable."); - err = ENOTSUP; - goto error; - } - if (!priv->config.dv_esw_en && - priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) { - DRV_LOG(WARNING, - "Metadata mode %u is not supported (no E-Switch).", - priv->config.dv_xmeta_en); - priv->config.dv_xmeta_en = MLX5_XMETA_MODE_LEGACY; - } - /* Check sibling device configurations. */ - err = mlx5_dev_check_sibling_config(priv, config, dpdk_dev); - if (err) - goto error; DRV_LOG(DEBUG, "counters are not supported"); config->ind_table_max_size = sh->device_attr.max_rwq_indirection_table_size; @@ -463,41 +414,20 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, config->mps == MLX5_MPW_ENHANCED ? "enhanced " : config->mps == MLX5_MPW ? "legacy " : "", config->mps != MLX5_MPW_DISABLED ?
"enabled" : "disabled"); - if (config->cqe_comp && !cqe_comp) { + if (config->cqe_comp) { DRV_LOG(WARNING, "Rx CQE compression isn't supported."); config->cqe_comp = 0; } - if (sh->devx) { - config->hca_attr = sh->cdev->config.hca_attr; - config->hw_csum = config->hca_attr.csum_cap; + if (sh->cdev->config.devx) { + hca_attr = &sh->cdev->config.hca_attr; + config->hw_csum = hca_attr->csum_cap; DRV_LOG(DEBUG, "checksum offloading is %ssupported", - (config->hw_csum ? "" : "not ")); - config->hw_vlan_strip = config->hca_attr.vlan_cap; + (config->hw_csum ? "" : "not ")); + config->hw_vlan_strip = hca_attr->vlan_cap; DRV_LOG(DEBUG, "VLAN stripping is %ssupported", (config->hw_vlan_strip ? "" : "not ")); - config->hw_fcs_strip = config->hca_attr.scatter_fcs; - } - if (sh->devx) { - uint32_t reg[MLX5_ST_SZ_DW(register_mtutc)]; - - err = config->hca_attr.access_register_user ? - mlx5_devx_cmd_register_read - (sh->cdev->ctx, MLX5_REGISTER_ID_MTUTC, 0, - reg, MLX5_ST_SZ_DW(register_mtutc)) : ENOTSUP; - if (!err) { - uint32_t ts_mode; - - /* MTUTC register is read successfully. */ - ts_mode = MLX5_GET(register_mtutc, reg, - time_stamp_mode); - if (ts_mode == MLX5_MTUTC_TIMESTAMP_MODE_REAL_TIME) - config->rt_timestamp = 1; - } else { - /* Kernel does not support register reading. */ - if (config->hca_attr.dev_freq_khz == - (NS_PER_S / MS_PER_S)) - config->rt_timestamp = 1; - } + config->hw_fcs_strip = hca_attr->scatter_fcs; + mlx5_rt_timestamp_config(sh, config, hca_attr); } if (config->mprq.enabled) { DRV_LOG(WARNING, "Multi-Packet RQ isn't supported"); @@ -653,7 +583,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, goto error; } } - if (sh->devx) { + if (sh->cdev->config.devx) { priv->obj_ops = devx_obj_ops; } else { DRV_LOG(ERR, "Windows flow must be DevX."); @@ -917,6 +847,7 @@ mlx5_os_net_probe(struct mlx5_common_device *cdev) }, .dv_flow_en = 1, .log_hp_size = MLX5_ARG_UNSET, + .vf = mlx5_dev_is_vf_pci(pci_dev), }; int ret; uint32_t restore; @@ -931,21 +862,6 @@ mlx5_os_net_probe(struct mlx5_common_device *cdev) strerror(rte_errno)); return -rte_errno; } - /* Device specific configuration. */ - switch (pci_dev->id.device_id) { - case PCI_DEVICE_ID_MELLANOX_CONNECTX4VF: - case PCI_DEVICE_ID_MELLANOX_CONNECTX4LXVF: - case PCI_DEVICE_ID_MELLANOX_CONNECTX5VF: - case PCI_DEVICE_ID_MELLANOX_CONNECTX5EXVF: - case PCI_DEVICE_ID_MELLANOX_CONNECTX5BFVF: - case PCI_DEVICE_ID_MELLANOX_CONNECTX6VF: - case PCI_DEVICE_ID_MELLANOX_CONNECTXVF: - dev_config.vf = 1; - break; - default: - dev_config.vf = 0; - break; - } spawn.eth_dev = mlx5_dev_spawn(cdev->dev, &spawn, &dev_config); if (!spawn.eth_dev) return -rte_errno;