From patchwork Thu Sep 30 17:28:11 2021
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 100176
X-Patchwork-Delegate: thomas@monjalon.net
CC: Matan Azrad, Thomas Monjalon, Michael Baum
Date: Thu, 30 Sep 2021 20:28:11 +0300
Message-ID: <20210930172822.1949969-8-michaelba@nvidia.com>
In-Reply-To: <20210930172822.1949969-1-michaelba@nvidia.com>
References: <20210930172822.1949969-1-michaelba@nvidia.com>
Subject: [dpdk-dev] [PATCH 07/18] net/mlx5: remove redundant flag in device config
List-Id: DPDK patches and discussions

From: Michael Baum

The device configure structure has a "devx" flag duplicating a flag of the
same name and meaning in the SH structure. Remove the flag from the
configuration structure and move all its usages to the SH flag.

Signed-off-by: Michael Baum
Acked-by: Matan Azrad
---
 drivers/net/mlx5/linux/mlx5_os.c   | 15 +++++++--------
 drivers/net/mlx5/mlx5.h            |  1 -
 drivers/net/mlx5/mlx5_flow_dv.c    | 18 +++++++++---------
 drivers/net/mlx5/mlx5_trigger.c    |  2 +-
 drivers/net/mlx5/windows/mlx5_os.c |  9 ++++-----
 5 files changed, 21 insertions(+), 24 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index c1b828a422..07ba0ff43b 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -718,7 +718,7 @@ mlx5_flow_counter_mode_config(struct rte_eth_dev *dev __rte_unused)
 	fallback = true;
 #else
 	fallback = false;
-	if (!priv->config.devx || !priv->config.dv_flow_en ||
+	if (!sh->devx || !priv->config.dv_flow_en ||
 	    !priv->config.hca_attr.flow_counters_dump ||
 	    !(priv->config.hca_attr.flow_counter_bulk_alloc_bitmap & 0x4) ||
 	    (mlx5_flow_dv_discover_counter_offset_support(dev) == -ENOTSUP))
@@ -1025,7 +1025,6 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	sh = mlx5_alloc_shared_dev_ctx(spawn, config);
 	if (!sh)
 		return NULL;
-	config->devx = sh->devx;
 #ifdef HAVE_MLX5DV_DR_ACTION_DEST_DEVX_TIR
 	config->dest_tir = 1;
 #endif
@@ -1325,7 +1324,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 		config->mps == MLX5_MPW_ENHANCED ? "enhanced " :
 		config->mps == MLX5_MPW ? "legacy " : "",
 		config->mps != MLX5_MPW_DISABLED ? "enabled" : "disabled");
-	if (config->devx) {
+	if (sh->devx) {
 		err = mlx5_devx_cmd_query_hca_attr(sh->ctx, &config->hca_attr);
 		if (err) {
 			err = -err;
@@ -1468,13 +1467,13 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 		config->cqe_comp = 0;
 	}
 	if (config->cqe_comp_fmt == MLX5_CQE_RESP_FORMAT_FTAG_STRIDX &&
-	    (!config->devx || !config->hca_attr.mini_cqe_resp_flow_tag)) {
+	    (!sh->devx || !config->hca_attr.mini_cqe_resp_flow_tag)) {
 		DRV_LOG(WARNING, "Flow Tag CQE compression"
 				 " format isn't supported.");
 		config->cqe_comp = 0;
 	}
 	if (config->cqe_comp_fmt == MLX5_CQE_RESP_FORMAT_L34H_STRIDX &&
-	    (!config->devx || !config->hca_attr.mini_cqe_resp_l3_l4_tag)) {
+	    (!sh->devx || !config->hca_attr.mini_cqe_resp_l3_l4_tag)) {
 		DRV_LOG(WARNING, "L3/L4 Header CQE compression"
 				 " format isn't supported.");
 		config->cqe_comp = 0;
@@ -1497,7 +1496,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 			config->hca_attr.log_max_static_sq_wq);
 		DRV_LOG(DEBUG, "WQE rate PP mode is %ssupported",
 			config->hca_attr.qos.wqe_rate_pp ? "" : "not ");
-		if (!config->devx) {
+		if (!sh->devx) {
 			DRV_LOG(ERR, "DevX is required for packet pacing");
 			err = ENODEV;
 			goto error;
@@ -1544,7 +1543,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 		goto error;
 #endif
 	}
-	if (config->devx) {
+	if (sh->devx) {
 		uint32_t reg[MLX5_ST_SZ_DW(register_mtutc)];

 		err = config->hca_attr.access_register_user ?
@@ -1722,7 +1721,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 		if (err)
 			goto error;
 	}
-	if (config->devx && config->dv_flow_en && config->dest_tir) {
+	if (sh->devx && config->dv_flow_en && config->dest_tir) {
 		priv->obj_ops = devx_obj_ops;
 		priv->obj_ops.drop_action_create =
 						ibv_obj_ops.drop_action_create;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index becd8722de..d2eabe04a5 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -265,7 +265,6 @@ struct mlx5_dev_config {
 	unsigned int lacp_by_user:1;
 	/* Enable user to manage LACP traffic. */
 	unsigned int swp:1; /* Tx generic tunnel checksum and TSO offload. */
-	unsigned int devx:1; /* Whether devx interface is available or not. */
 	unsigned int dest_tir:1; /* Whether advanced DR API is available. */
 	unsigned int reclaim_mode:2; /* Memory reclaim mode. */
 	unsigned int rt_timestamp:1; /* realtime timestamp format. */
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index b610ad3ef4..0f3288df96 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -3350,7 +3350,7 @@ flow_dv_validate_action_count(struct rte_eth_dev *dev, bool shared,
 {
 	struct mlx5_priv *priv = dev->data->dev_private;

-	if (!priv->config.devx)
+	if (!priv->sh->devx)
 		goto notsup_err;
 	if (action_flags & MLX5_FLOW_ACTION_COUNT)
 		return rte_flow_error_set(error, EINVAL,
@@ -5297,7 +5297,7 @@ flow_dv_validate_action_age(uint64_t action_flags,
 	struct mlx5_priv *priv = dev->data->dev_private;
 	const struct rte_flow_action_age *age = action->conf;

-	if (!priv->config.devx || (priv->sh->cmng.counter_fallback &&
+	if (!priv->sh->devx || (priv->sh->cmng.counter_fallback &&
 	    !priv->sh->aso_age_mng))
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
@@ -5582,7 +5582,7 @@ flow_dv_validate_action_sample(uint64_t *action_flags,
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION, action,
 					  "ratio value starts from 1");
-	if (!priv->config.devx || (sample->ratio > 0 && !priv->sampler_en))
+	if (!priv->sh->devx || (sample->ratio > 0 && !priv->sampler_en))
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 					  NULL,
@@ -6166,7 +6166,7 @@ flow_dv_counter_alloc(struct rte_eth_dev *dev, uint32_t age)
 			age ? MLX5_COUNTER_TYPE_AGE : MLX5_COUNTER_TYPE_ORIGIN;
 	uint32_t cnt_idx;

-	if (!priv->config.devx) {
+	if (!priv->sh->devx) {
 		rte_errno = ENOTSUP;
 		return 0;
 	}
@@ -6553,7 +6553,7 @@ flow_dv_mtr_alloc(struct rte_eth_dev *dev)
 	struct mlx5_aso_mtr_pool *pool;
 	uint32_t mtr_idx = 0;

-	if (!priv->config.devx) {
+	if (!priv->sh->devx) {
 		rte_errno = ENOTSUP;
 		return 0;
 	}
@@ -12438,7 +12438,7 @@ flow_dv_aso_ct_alloc(struct rte_eth_dev *dev, struct rte_flow_error *error)
 	uint32_t ct_idx;

 	MLX5_ASSERT(mng);
-	if (!priv->config.devx) {
+	if (!priv->sh->devx) {
 		rte_errno = ENOTSUP;
 		return 0;
 	}
@@ -12874,7 +12874,7 @@ flow_dv_translate(struct rte_eth_dev *dev,
 			}
 			break;
 		case RTE_FLOW_ACTION_TYPE_COUNT:
-			if (!dev_conf->devx) {
+			if (!priv->sh->devx) {
 				return rte_flow_error_set
 					      (error, ENOTSUP,
 					       RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
@@ -15718,7 +15718,7 @@ flow_dv_query_count(struct rte_eth_dev *dev, uint32_t cnt_idx, void *data,
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct rte_flow_query_count *qc = data;

-	if (!priv->config.devx)
+	if (!priv->sh->devx)
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 					  NULL,
@@ -17331,7 +17331,7 @@ flow_dv_counter_query(struct rte_eth_dev *dev, uint32_t counter, bool clear,
 	uint64_t inn_pkts, inn_bytes;
 	int ret;

-	if (!priv->config.devx)
+	if (!priv->sh->devx)
 		return -1;
 	ret = _flow_dv_query_count(dev, counter, &inn_pkts, &inn_bytes);
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 3cbf5816a1..e93647aafd 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1112,7 +1112,7 @@ mlx5_dev_start(struct rte_eth_dev *dev)
 			dev->data->port_id, strerror(rte_errno));
 		goto error;
 	}
-	if ((priv->config.devx && priv->config.dv_flow_en &&
+	if ((priv->sh->devx && priv->config.dv_flow_en &&
 	    priv->config.dest_tir) && priv->obj_ops.lb_dummy_queue_create) {
 		ret = priv->obj_ops.lb_dummy_queue_create(dev);
 		if (ret)
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index 1e76f63fc1..a882a18439 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -272,7 +272,7 @@ mlx5_flow_counter_mode_config(struct rte_eth_dev *dev __rte_unused)
 	fallback = true;
 #else
 	fallback = false;
-	if (!priv->config.devx || !priv->config.dv_flow_en ||
+	if (!sh->devx || !priv->config.dv_flow_en ||
 	    !priv->config.hca_attr.flow_counters_dump ||
 	    !(priv->config.hca_attr.flow_counter_bulk_alloc_bitmap & 0x4) ||
 	    (mlx5_flow_dv_discover_counter_offset_support(dev) == -ENOTSUP))
@@ -349,7 +349,6 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	sh = mlx5_alloc_shared_dev_ctx(spawn, config);
 	if (!sh)
 		return NULL;
-	config->devx = sh->devx;
 	/* Initialize the shutdown event in mlx5_dev_spawn to
 	 * support mlx5_is_removed for Windows.
 	 */
@@ -452,7 +451,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 		DRV_LOG(WARNING, "Rx CQE compression isn't supported.");
 		config->cqe_comp = 0;
 	}
-	if (config->devx) {
+	if (sh->devx) {
 		err = mlx5_devx_cmd_query_hca_attr(sh->ctx, &config->hca_attr);
 		if (err) {
 			err = -err;
@@ -471,7 +470,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 		DRV_LOG(DEBUG, "checksum offloading is %ssupported",
 			(config->hw_csum ? "" : "not "));
 	}
-	if (config->devx) {
+	if (sh->devx) {
 		uint32_t reg[MLX5_ST_SZ_DW(register_mtutc)];

 		err = config->hca_attr.access_register_user ?
@@ -642,7 +641,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 			goto error;
 		}
 	}
-	if (config->devx && config->dv_flow_en) {
+	if (sh->devx && config->dv_flow_en) {
 		priv->obj_ops = devx_obj_ops;
 	} else {
 		DRV_LOG(ERR, "Flow mode %u is not supported "
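
As an aside, a minimal C sketch of the pattern the patch applies: the capability
bit lives only on the shared device context and every per-port check reads it
through priv->sh instead of keeping a mirrored copy in the per-port
configuration. The struct and function names below are simplified stand-ins for
illustration, not the real mlx5 definitions.

/* Simplified stand-ins for the real structures in drivers/net/mlx5/mlx5.h. */
struct dev_ctx_shared {
	unsigned int devx:1; /* DevX availability, probed once per device. */
};

struct dev_config {
	unsigned int dv_flow_en:1; /* Per-port setting; no duplicate devx bit. */
};

struct port_priv {
	struct dev_ctx_shared *sh; /* Shared context, common to all ports. */
	struct dev_config config;  /* Per-port configuration. */
};

/* Every check reads the single shared flag instead of a per-port copy. */
static int
port_can_use_devx_flows(const struct port_priv *priv)
{
	return priv->sh->devx && priv->config.dv_flow_en;
}

Keeping one source of truth avoids the flags drifting apart when the shared
context is created before, or reused across, the per-port spawn path.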