From patchwork Sat Jun 19 12:48:30 2021
X-Patchwork-Submitter: Slava Ovsiienko
X-Patchwork-Id: 94554
X-Patchwork-Delegate: rasland@nvidia.com
From: Viacheslav Ovsiienko
To:
CC:
Date: Sat, 19 Jun 2021 15:48:30 +0300
Message-ID: <20210619124830.25297-1-viacheslavo@nvidia.com>
X-Mailer: git-send-email 2.18.1
Subject: [dpdk-dev] [PATCH v2] common/mlx5: add provider query port support to glue library
List-Id: DPDK patches and discussions

The rdma-core mlx5 provider has offered the port attributes query API -
the mlx5dv_query_port routine - since version v35.0. To support this
rdma-core change, this patch introduces the conditional compilation
flag HAVE_MLX5DV_DR_DEVX_PORT_V35.

The new compatible mlx5dv_query_port routine was introduced in the OFED
rdma-core version as well, replacing the existing proprietary
mlx5dv_query_devx_port routine. The proprietary routine was guarded in
the PMD code with the HAVE_MLX5DV_DR_DEVX_PORT conditional flag.
Currently, the OFED rdma-core library contains both versions of the
port query API, but this is a transitional state - there are plans to
remove the proprietary mlx5dv_query_devx_port routine, after which the
HAVE_MLX5DV_DR_DEVX_PORT flag in the PMD will no longer work.

The code had one more dependency on this flag (for the
mlx5dv_dr_action_create_dest_ib_port routine); this patch fixes that
dependency as well by introducing a new dedicated conditional flag -
HAVE_MLX5DV_DR_CREATE_DEST_IB_PORT.

This patch is highly desirable for the DPDK LTS releases, as it covers
a major compatibility issue.

Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko
Acked-by: Matan Azrad
---
v1: http://patches.dpdk.org/project/dpdk/patch/20210607093726.14546-1-viacheslavo@nvidia.com/
v2: commit message was clarified

 drivers/common/mlx5/linux/meson.build |  4 ++
 drivers/common/mlx5/linux/mlx5_glue.c | 57 ++++++++++++++++++++-----
 drivers/common/mlx5/linux/mlx5_glue.h | 16 ++++++-
 drivers/net/mlx5/linux/mlx5_os.c      | 60 ++++++++++++---------------
 drivers/net/mlx5/mlx5_flow_dv.c       |  2 +-
 5 files changed, 93 insertions(+), 46 deletions(-)
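(Editorial aside, not part of the patch: a minimal standalone sketch of
the three-way compile-time dispatch the flags described above enable.
In the real build the HAVE_* macros are emitted by the meson symbol
checks in the first hunk below; one of them is hard-coded here only so
the example compiles on its own.)

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

#define HAVE_MLX5DV_DR_DEVX_PORT_V35 1	/* pretend rdma-core >= v35 */

static int
query_port_sketch(uint32_t port_num)
{
#if defined(HAVE_MLX5DV_DR_DEVX_PORT_V35)
	/* rdma-core v35+: the unified mlx5dv_query_port() is available. */
	printf("port %u: would use mlx5dv_query_port()\n",
	       (unsigned)port_num);
	return 0;
#elif defined(HAVE_MLX5DV_DR_DEVX_PORT)
	/* Older OFED rdma-core: proprietary mlx5dv_query_devx_port(). */
	printf("port %u: would use mlx5dv_query_devx_port()\n",
	       (unsigned)port_num);
	return 0;
#else
	/* No port query support in rdma-core at all. */
	(void)port_num;
	return ENOTSUP;
#endif
}

int
main(void)
{
	return query_port_sketch(1);
}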
diff --git a/drivers/common/mlx5/linux/meson.build b/drivers/common/mlx5/linux/meson.build
index 007834a49b..5cea1b44d7 100644
--- a/drivers/common/mlx5/linux/meson.build
+++ b/drivers/common/mlx5/linux/meson.build
@@ -93,6 +93,10 @@ has_sym_args = [
         'IBV_WQ_FLAG_RX_END_PADDING' ],
         [ 'HAVE_MLX5DV_DR_DEVX_PORT', 'infiniband/mlx5dv.h',
         'mlx5dv_query_devx_port' ],
+        [ 'HAVE_MLX5DV_DR_DEVX_PORT_V35', 'infiniband/mlx5dv.h',
+        'mlx5dv_query_port' ],
+        [ 'HAVE_MLX5DV_DR_CREATE_DEST_IB_PORT', 'infiniband/mlx5dv.h',
+        'mlx5dv_dr_action_create_dest_ib_port' ],
         [ 'HAVE_IBV_DEVX_OBJ', 'infiniband/mlx5dv.h',
         'mlx5dv_devx_obj_create' ],
         [ 'HAVE_IBV_FLOW_DEVX_COUNTERS', 'infiniband/mlx5dv.h',
diff --git a/drivers/common/mlx5/linux/mlx5_glue.c b/drivers/common/mlx5/linux/mlx5_glue.c
index d3bd645a5b..00be8114be 100644
--- a/drivers/common/mlx5/linux/mlx5_glue.c
+++ b/drivers/common/mlx5/linux/mlx5_glue.c
@@ -391,7 +391,7 @@ mlx5_glue_dr_create_flow_action_dest_flow_tbl(void *tbl)
 static void *
 mlx5_glue_dr_create_flow_action_dest_port(void *domain, uint32_t port)
 {
-#ifdef HAVE_MLX5DV_DR_DEVX_PORT
+#ifdef HAVE_MLX5DV_DR_CREATE_DEST_IB_PORT
 	return mlx5dv_dr_action_create_dest_ib_port(domain, port);
 #else
 #ifdef HAVE_MLX5DV_DR_ESWITCH
@@ -1087,17 +1087,54 @@ mlx5_glue_devx_wq_query(struct ibv_wq *wq, const void *in, size_t inlen,
 
 static int
 mlx5_glue_devx_port_query(struct ibv_context *ctx, uint32_t port_num,
-			  struct mlx5dv_devx_port *mlx5_devx_port)
-{
+			  struct mlx5_port_info *info)
+{
+	int err = 0;
+
+	info->query_flags = 0;
+#ifdef HAVE_MLX5DV_DR_DEVX_PORT_V35
+	/* The DevX port query API is implemented (rdma-core v35 and above). */
+	struct mlx5_ib_uapi_query_port devx_port;
+
+	memset(&devx_port, 0, sizeof(devx_port));
+	err = mlx5dv_query_port(ctx, port_num, &devx_port);
+	if (err)
+		return err;
+	if (devx_port.flags & MLX5DV_QUERY_PORT_VPORT_REG_C0) {
+		info->vport_meta_tag = devx_port.reg_c0.value;
+		info->vport_meta_mask = devx_port.reg_c0.mask;
+		info->query_flags |= MLX5_PORT_QUERY_REG_C0;
+	}
+	if (devx_port.flags & MLX5DV_QUERY_PORT_VPORT) {
+		info->vport_id = devx_port.vport;
+		info->query_flags |= MLX5_PORT_QUERY_VPORT;
+	}
+#else
 #ifdef HAVE_MLX5DV_DR_DEVX_PORT
-	return mlx5dv_query_devx_port(ctx, port_num, mlx5_devx_port);
+	/* The legacy DevX port query API is implemented (prior v35). */
+	struct mlx5dv_devx_port devx_port = {
+		.comp_mask = MLX5DV_DEVX_PORT_VPORT |
+			     MLX5DV_DEVX_PORT_MATCH_REG_C_0
+	};
+
+	err = mlx5dv_query_devx_port(ctx, port_num, &devx_port);
+	if (err)
+		return err;
+	if (devx_port.comp_mask & MLX5DV_DEVX_PORT_MATCH_REG_C_0) {
+		info->vport_meta_tag = devx_port.reg_c_0.value;
+		info->vport_meta_mask = devx_port.reg_c_0.mask;
+		info->query_flags |= MLX5_PORT_QUERY_REG_C0;
+	}
+	if (devx_port.comp_mask & MLX5DV_DEVX_PORT_VPORT) {
+		info->vport_id = devx_port.vport_num;
+		info->query_flags |= MLX5_PORT_QUERY_VPORT;
+	}
 #else
-	(void)ctx;
-	(void)port_num;
-	(void)mlx5_devx_port;
-	errno = ENOTSUP;
-	return errno;
-#endif
+	RTE_SET_USED(ctx);
+	RTE_SET_USED(port_num);
+#endif /* HAVE_MLX5DV_DR_DEVX_PORT */
+#endif /* HAVE_MLX5DV_DR_DEVX_PORT_V35 */
+	return err;
 }
 
 static int
diff --git a/drivers/common/mlx5/linux/mlx5_glue.h b/drivers/common/mlx5/linux/mlx5_glue.h
index 97462e9ab8..840d8cf57f 100644
--- a/drivers/common/mlx5/linux/mlx5_glue.h
+++ b/drivers/common/mlx5/linux/mlx5_glue.h
@@ -84,6 +84,20 @@ struct mlx5dv_dr_action;
 struct mlx5dv_devx_port;
 #endif
 
+#ifndef HAVE_MLX5DV_DR_DEVX_PORT_V35
+struct mlx5dv_port;
+#endif
+
+#define MLX5_PORT_QUERY_VPORT (1u << 0)
+#define MLX5_PORT_QUERY_REG_C0 (1u << 1)
+
+struct mlx5_port_info {
+	uint16_t query_flags;
+	uint16_t vport_id; /* Associated VF vport index (if any). */
+	uint32_t vport_meta_tag; /* Used for vport index match over VF LAG. */
+	uint32_t vport_meta_mask; /* Used for vport index field match mask. */
+};
+
 #ifndef HAVE_MLX5_DR_CREATE_ACTION_FLOW_METER
 struct mlx5dv_dr_flow_meter_attr;
 #endif
@@ -311,7 +325,7 @@ struct mlx5_glue {
 			 void *out, size_t outlen);
 	int (*devx_port_query)(struct ibv_context *ctx,
 			       uint32_t port_num,
-			       struct mlx5dv_devx_port *mlx5_devx_port);
+			       struct mlx5_port_info *info);
 	int (*dr_dump_domain)(FILE *file, void *domain);
 	int (*dr_dump_rule)(FILE *file, void *rule);
 	int (*devx_query_eqn)(struct ibv_context *context, uint32_t cpus,
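(Another illustrative aside, not part of the patch: the point of the
new struct mlx5_port_info above is that callers test the
rdma-core-agnostic query_flags bits instead of the version-specific
comp_mask/flags fields of the underlying API. A self-contained sketch
of that caller-side pattern, with made-up values:)

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define MLX5_PORT_QUERY_VPORT (1u << 0)
#define MLX5_PORT_QUERY_REG_C0 (1u << 1)

struct mlx5_port_info {
	uint16_t query_flags;
	uint16_t vport_id;
	uint32_t vport_meta_tag;
	uint32_t vport_meta_mask;
};

static void
print_port_info(const struct mlx5_port_info *info)
{
	/* Only fields whose query_flags bit is set are valid. */
	if (info->query_flags & MLX5_PORT_QUERY_VPORT)
		printf("vport index: %u\n", (unsigned)info->vport_id);
	if (info->query_flags & MLX5_PORT_QUERY_REG_C0)
		printf("reg_c[0] tag/mask: 0x%" PRIx32 "/0x%" PRIx32 "\n",
		       info->vport_meta_tag, info->vport_meta_mask);
}

int
main(void)
{
	/* Fake a query result as if both attributes were reported. */
	struct mlx5_port_info info = {
		.query_flags = MLX5_PORT_QUERY_VPORT | MLX5_PORT_QUERY_REG_C0,
		.vport_id = 3,
		.vport_meta_tag = 0x300,
		.vport_meta_mask = 0xffff00,
	};

	print_port_info(&info);
	return 0;
}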
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 534a56a555..54e4a1fe60 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -822,9 +822,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	char name[RTE_ETH_NAME_MAX_LEN];
 	int own_domain_id = 0;
 	uint16_t port_id;
-#ifdef HAVE_MLX5DV_DR_DEVX_PORT
-	struct mlx5dv_devx_port devx_port = { .comp_mask = 0 };
-#endif
+	struct mlx5_port_info vport_info = { .query_flags = 0 };
 
 	/* Determine if this port representor is supposed to be spawned. */
 	if (switch_info->representor && dpdk_dev->devargs &&
@@ -1055,29 +1053,27 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	priv->vport_meta_tag = 0;
 	priv->vport_meta_mask = 0;
 	priv->pf_bond = spawn->pf_bond;
-#ifdef HAVE_MLX5DV_DR_DEVX_PORT
 	/*
-	 * The DevX port query API is implemented. E-Switch may use
-	 * either vport or reg_c[0] metadata register to match on
-	 * vport index. The engaged part of metadata register is
-	 * defined by mask.
+	 * If we have E-Switch we should determine the vport attributes.
+	 * E-Switch may use either source vport field or reg_c[0] metadata
+	 * register to match on vport index. The engaged part of metadata
+	 * register is defined by mask.
 	 */
 	if (switch_info->representor || switch_info->master) {
-		devx_port.comp_mask = MLX5DV_DEVX_PORT_VPORT |
-				      MLX5DV_DEVX_PORT_MATCH_REG_C_0;
-		err = mlx5_glue->devx_port_query(sh->ctx, spawn->phys_port,
-						 &devx_port);
+		err = mlx5_glue->devx_port_query(sh->ctx,
+						 spawn->phys_port,
+						 &vport_info);
 		if (err) {
 			DRV_LOG(WARNING,
 				"can't query devx port %d on device %s",
 				spawn->phys_port,
 				mlx5_os_get_dev_device_name(spawn->phys_dev));
-			devx_port.comp_mask = 0;
+			vport_info.query_flags = 0;
 		}
 	}
-	if (devx_port.comp_mask & MLX5DV_DEVX_PORT_MATCH_REG_C_0) {
-		priv->vport_meta_tag = devx_port.reg_c_0.value;
-		priv->vport_meta_mask = devx_port.reg_c_0.mask;
+	if (vport_info.query_flags & MLX5_PORT_QUERY_REG_C0) {
+		priv->vport_meta_tag = vport_info.vport_meta_tag;
+		priv->vport_meta_mask = vport_info.vport_meta_mask;
 		if (!priv->vport_meta_mask) {
 			DRV_LOG(ERR, "vport zero mask for port %d"
 				     " on bonding device %s",
@@ -1097,8 +1093,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 			goto error;
 		}
 	}
-	if (devx_port.comp_mask & MLX5DV_DEVX_PORT_VPORT) {
-		priv->vport_id = devx_port.vport_num;
+	if (vport_info.query_flags & MLX5_PORT_QUERY_VPORT) {
+		priv->vport_id = vport_info.vport_id;
 	} else if (spawn->pf_bond >= 0 &&
 		   (switch_info->representor || switch_info->master)) {
 		DRV_LOG(ERR, "can't deduce vport index for port %d"
 			     " on bonding device %s",
 			spawn->phys_port,
 			mlx5_os_get_dev_device_name(spawn->phys_dev));
 		err = ENOTSUP;
 		goto error;
 	} else {
-		/* Suppose vport index in compatible way. */
+		/*
+		 * Suppose vport index in compatible way. Kernel/rdma_core
+		 * support single E-Switch per PF configurations only and
+		 * vport_id field contains the vport index for associated VF,
+		 * which is deduced from representor port name.
+		 * For example, let's have the IB device port 10, it has
+		 * attached network device eth0, which has port name attribute
+		 * pf0vf2, we can deduce the VF number as 2, and set vport index
+		 * as 3 (2+1). This assigning schema should be changed if the
+		 * multiple E-Switch instances per PF configurations or/and PCI
+		 * subfunctions are added.
+		 */
 		priv->vport_id = switch_info->representor ?
 				 switch_info->port_name + 1 : -1;
 	}
-#else
-	/*
-	 * Kernel/rdma_core support single E-Switch per PF configurations
-	 * only and vport_id field contains the vport index for
-	 * associated VF, which is deduced from representor port name.
-	 * For example, let's have the IB device port 10, it has
-	 * attached network device eth0, which has port name attribute
-	 * pf0vf2, we can deduce the VF number as 2, and set vport index
-	 * as 3 (2+1). This assigning schema should be changed if the
-	 * multiple E-Switch instances per PF configurations or/and PCI
-	 * subfunctions are added.
-	 */
-	priv->vport_id = switch_info->representor ?
-			 switch_info->port_name + 1 : -1;
-#endif
 	priv->representor_id = mlx5_representor_id_encode(switch_info,
 							  eth_da->type);
 	/*
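(A last illustrative aside, not part of the patch: the fallback branch
above deduces the vport index from the representor port name, e.g.
port name pf0vf2 gives VF number 2 and vport index 2 + 1 = 3, while a
non-representor port gets -1. A trivial sketch of that arithmetic,
using a hypothetical helper name:)

#include <stdio.h>

/*
 * Hypothetical helper mirroring the fallback in mlx5_dev_spawn() above;
 * port_name carries the VF number already parsed from e.g. "pf0vf2".
 */
static int
deduce_vport_id(int is_representor, int port_name)
{
	return is_representor ? port_name + 1 : -1;
}

int
main(void)
{
	printf("pf0vf2 representor -> vport_id %d\n", deduce_vport_id(1, 2));
	printf("master port        -> vport_id %d\n", deduce_vport_id(0, 0));
	return 0;
}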
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 1b2639d232..dafd37ab93 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -10337,7 +10337,7 @@ flow_dv_translate_action_port_id(struct rte_eth_dev *dev,
 					  RTE_FLOW_ERROR_TYPE_ACTION,
 					  NULL,
 					  "No eswitch info was found for port");
-#ifdef HAVE_MLX5DV_DR_DEVX_PORT
+#ifdef HAVE_MLX5DV_DR_CREATE_DEST_IB_PORT
 	/*
 	 * This parameter is transferred to
 	 * mlx5dv_dr_action_create_dest_ib_port().