
[v2] common/mlx5: add provider query port support to glue library

Message ID 20210619124830.25297-1-viacheslavo@nvidia.com (mailing list archive)
State Superseded, archived
Delegated to: Raslan Darawsheh
Series [v2] common/mlx5: add provider query port support to glue library

Checks

Context Check Description
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-mellanox-Functional fail Functional Testing issues
ci/intel-Testing success Testing PASS
ci/iol-testing success Testing PASS
ci/Intel-compilation success Compilation OK
ci/github-robot success github build: passed
ci/iol-abi-testing success Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/checkpatch warning coding style issues

Commit Message

Viacheslav Ovsiienko June 19, 2021, 12:48 p.m. UTC
The rdma-core mlx5 provider introduced a port attributes query
API in version v35.0 - the mlx5dv_query_port routine. To support
this change in rdma-core, this patch introduces the conditional
compilation flag HAVE_MLX5DV_DR_DEVX_PORT_V35.

The new, compatible mlx5dv_query_port routine was introduced in
the OFED rdma-core version as well, replacing the existing
proprietary mlx5dv_query_devx_port routine. The proprietary
routine was controlled in the PMD code with the
HAVE_MLX5DV_DR_DEVX_PORT conditional flag.
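
For reference, the two routines are declared as follows (a sketch
based on their usage in this patch; see infiniband/mlx5dv.h for the
exact prototypes - the glue code below uses the equivalent kernel
uapi name, struct mlx5_ib_uapi_query_port, for the v35 output type):

/* Proprietary routine, pre-v35 OFED rdma-core. */
int mlx5dv_query_devx_port(struct ibv_context *ctx,
			   uint32_t port_num,
			   struct mlx5dv_devx_port *mlx5_devx_port);

/* Upstream routine, rdma-core v35.0 and above (and newer OFED). */
int mlx5dv_query_port(struct ibv_context *context,
		      uint32_t port_num,
		      struct mlx5dv_port *info);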

Currently, the OFED rdma-core library contains both versions of
the port query API, and this version is a transitional one. There
are plans to remove the proprietary mlx5dv_query_devx_port
routine, after which the HAVE_MLX5DV_DR_DEVX_PORT flag in the PMD
will no longer work.

The code had one more dependency on this flag (for the
mlx5dv_dr_action_create_dest_ib_port routine); the patch fixes
that dependency as well, by introducing a new dedicated
conditional flag - HAVE_MLX5DV_DR_CREATE_DEST_IB_PORT.
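
Schematically, the glue routine now selects between the APIs as
follows (a condensed sketch of the mlx5_glue.c hunk below, not the
verbatim code):

static int
mlx5_glue_devx_port_query(struct ibv_context *ctx, uint32_t port_num,
			  struct mlx5_port_info *info)
{
	info->query_flags = 0;
#ifdef HAVE_MLX5DV_DR_DEVX_PORT_V35
	/* rdma-core v35+ or newer OFED: use mlx5dv_query_port(). */
#else
#ifdef HAVE_MLX5DV_DR_DEVX_PORT
	/* Older OFED rdma-core: use mlx5dv_query_devx_port(). */
#else
	/* Neither API available: no vport information is reported. */
#endif
#endif
	return 0; /* The real routine returns the query status. */
}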

This patch is highly desirable for DPDK LTS releases, as it
covers a major compatibility issue.

Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---

v1: http://patches.dpdk.org/project/dpdk/patch/20210607093726.14546-1-viacheslavo@nvidia.com/
v2: commit message was clarified

 drivers/common/mlx5/linux/meson.build |  4 ++
 drivers/common/mlx5/linux/mlx5_glue.c | 57 ++++++++++++++++++++-----
 drivers/common/mlx5/linux/mlx5_glue.h | 16 ++++++-
 drivers/net/mlx5/linux/mlx5_os.c      | 60 ++++++++++++---------------
 drivers/net/mlx5/mlx5_flow_dv.c       |  2 +-
 5 files changed, 93 insertions(+), 46 deletions(-)

Comments

Raslan Darawsheh June 20, 2021, 8:25 a.m. UTC | #1
Hi,

> -----Original Message-----
> From: Slava Ovsiienko <viacheslavo@nvidia.com>
> Sent: Saturday, June 19, 2021 3:49 PM
> To: dev@dpdk.org
> Cc: Raslan Darawsheh <rasland@nvidia.com>; Matan Azrad
> <matan@nvidia.com>; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; stable@dpdk.org
> Subject: [PATCH v2] common/mlx5: add provider query port support to glue
> library
> 
> The rdma-core mlx5 provider introduced a port attributes query
> API in version v35.0 - the mlx5dv_query_port routine. To support
> this change in rdma-core, this patch introduces the conditional
> compilation flag HAVE_MLX5DV_DR_DEVX_PORT_V35.
> 
> The new, compatible mlx5dv_query_port routine was introduced in
> the OFED rdma-core version as well, replacing the existing
> proprietary mlx5dv_query_devx_port routine. The proprietary
> routine was controlled in the PMD code with the
> HAVE_MLX5DV_DR_DEVX_PORT conditional flag.
> 
> Currently, the OFED rdma-core library contains both versions of
> the port query API, and this version is a transitional one. There
> are plans to remove the proprietary mlx5dv_query_devx_port
> routine, after which the HAVE_MLX5DV_DR_DEVX_PORT flag in the PMD
> will no longer work.
> 
> The code had one more dependency on this flag (for the
> mlx5dv_dr_action_create_dest_ib_port routine); the patch fixes
> that dependency as well, by introducing a new dedicated
> conditional flag - HAVE_MLX5DV_DR_CREATE_DEST_IB_PORT.
> 
> This patch is highly desirable for DPDK LTS releases, as it
> covers a major compatibility issue.
> 
> Cc: stable@dpdk.org
> 
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> Acked-by: Matan Azrad <matan@nvidia.com>

Removed v1,
v2 applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh
David Marchand June 23, 2021, 10:42 a.m. UTC | #2
On Sat, Jun 19, 2021 at 2:49 PM Viacheslav Ovsiienko
<viacheslavo@nvidia.com> wrote:
>
> The rdma-core mlx5 provider introduced a port attributes query
> API in version v35.0 - the mlx5dv_query_port routine. To support
> this change in rdma-core, this patch introduces the conditional
> compilation flag HAVE_MLX5DV_DR_DEVX_PORT_V35.
>
> The new, compatible mlx5dv_query_port routine was introduced in
> the OFED rdma-core version as well, replacing the existing
> proprietary mlx5dv_query_devx_port routine. The proprietary
> routine was controlled in the PMD code with the
> HAVE_MLX5DV_DR_DEVX_PORT conditional flag.
>
> Currently, the OFED rdma-core library contains both versions of
> the port query API, and this version is a transitional one. There
> are plans to remove the proprietary mlx5dv_query_devx_port
> routine, after which the HAVE_MLX5DV_DR_DEVX_PORT flag in the PMD
> will no longer work.
>
> The code had one more dependency on this flag (for the
> mlx5dv_dr_action_create_dest_ib_port routine); the patch fixes
> that dependency as well, by introducing a new dedicated
> conditional flag - HAVE_MLX5DV_DR_CREATE_DEST_IB_PORT.
>
> This patch is highly desirable for DPDK LTS releases, as it
> covers a major compatibility issue.

This patch is a fix, yet nothing tells this story in the title.
And the title does not reflect that it is a fix wrt versions of rdma-core.
Is this a build issue, or a runtime compat issue?

A good title makes life easier for users and people maintaining stable
versions of DPDK.
Viacheslav Ovsiienko June 23, 2021, 11:27 a.m. UTC | #3
Hi David,

> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Wednesday, June 23, 2021 13:43
> To: Slava Ovsiienko <viacheslavo@nvidia.com>
> Cc: dev <dev@dpdk.org>; Raslan Darawsheh <rasland@nvidia.com>; Matan
> Azrad <matan@nvidia.com>; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; dpdk stable <stable@dpdk.org>
> Subject: Re: [dpdk-dev] [PATCH v2] common/mlx5: add provider query port
> support to glue library
> 
> On Sat, Jun 19, 2021 at 2:49 PM Viacheslav Ovsiienko
> <viacheslavo@nvidia.com> wrote:
> >
> > The rdma-core mlx5 provider introduced a port attributes query API
> > in version v35.0 - the mlx5dv_query_port routine. To support this
> > change in rdma-core, this patch introduces the conditional
> > compilation flag HAVE_MLX5DV_DR_DEVX_PORT_V35.
> >
> > The new, compatible mlx5dv_query_port routine was introduced in the
> > OFED rdma-core version as well, replacing the existing proprietary
> > mlx5dv_query_devx_port routine. The proprietary routine was
> > controlled in the PMD code with the HAVE_MLX5DV_DR_DEVX_PORT
> > conditional flag.
> >
> > Currently, the OFED rdma-core library contains both versions of the
> > port query API, and this version is a transitional one. There are
> > plans to remove the proprietary mlx5dv_query_devx_port routine,
> > after which the HAVE_MLX5DV_DR_DEVX_PORT flag in the PMD will no
> > longer work.
> >
> > The code had one more dependency on this flag (for the
> > mlx5dv_dr_action_create_dest_ib_port routine); the patch fixes that
> > dependency as well, by introducing a new dedicated conditional
> > flag - HAVE_MLX5DV_DR_CREATE_DEST_IB_PORT.
> >
> > This patch is highly desirable for DPDK LTS releases, as it covers
> > a major compatibility issue.
> 
> This patch is a fix, yet nothing tells this story in the title.

This patch is not a fix; it covers a compatibility issue, not a bug.
The upstream rdma-core evolved, and its community adopted a
slightly different API version than was present in the vendor version.
Our PMD should conform to both versions, and we provided this patch for DPDK.

> And the title does not reflect that it is a fix wrt versions of rdma-core.
> Is this a build issue, or a runtime compat issue?

It is not a build issue, because we have conditional compilation and
can build the PMD, but functionality is affected in a severe way.
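
Concretely (condensed from the mlx5_os.c hunk in the patch): when no
query API reports a vport index, the spawn path fails for E-Switch
over bonding configurations:

	if (vport_info.query_flags & MLX5_PORT_QUERY_VPORT) {
		priv->vport_id = vport_info.vport_id;
	} else if (spawn->pf_bond >= 0 &&
		   (switch_info->representor || switch_info->master)) {
		/* No vport index available - the device spawn fails. */
		DRV_LOG(ERR, "can't deduce vport index for port %d"
			     " on bonding device %s",
			spawn->phys_port,
			mlx5_os_get_dev_device_name(spawn->phys_dev));
		err = ENOTSUP;
		goto error;
	}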

> 
> A good title makes life easier for users and people maintaining stable versions
> of DPDK.
I'm in contact with the LTS maintainers about the patch;
the title should not be a problem.

Anyway, if you prefer the title be changed to one with the word "fix" -
please let me know and I'll provide the update.

With best regards,
Slava
David Marchand June 23, 2021, 1:51 p.m. UTC | #4
On Wed, Jun 23, 2021 at 1:27 PM Slava Ovsiienko <viacheslavo@nvidia.com> wrote:
> > > This patch is highly desirable for DPDK LTS releases, as it
> > > covers a major compatibility issue.
> >
> > This patch is a fix, yet nothing tells this story in the title.
>
> This patch is not a fix; it covers a compatibility issue, not a bug.

I still think it counts as a fix in the sense that the mlx5 driver
behavior changes to an undesired state if rdma-core gets updated.

It's not about preferring "fix" in the title.
It is more accurate/descriptive to me.
If you feel strongly against "fix", I won't insist.

Yet "add provider quer port support to glue library" is just black
magic to most of us.


> The upstream rdma-core evolved, and its community adopted a
> slightly different API version than was present in the vendor version.
> Our PMD should conform to both versions, and we provided this patch for DPDK.

Let's try differently.
Place yourself as someone who does not know a thing about the mlx5
driver and rdma-core.
How does such a person understand the impact of this patch?

I would state in the title that the mlx5 driver can now correctly
handle rdma-core 35.
Additionally, it could indicate which feature X is now behaving as intended.
But if feature X is something internal to the mlx5 driver, it is worth skipping.
Viacheslav Ovsiienko June 23, 2021, 3:39 p.m. UTC | #5
Hi, David

> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Wednesday, June 23, 2021 16:52
> To: Slava Ovsiienko <viacheslavo@nvidia.com>
> Cc: dev <dev@dpdk.org>; Raslan Darawsheh <rasland@nvidia.com>; Matan
> Azrad <matan@nvidia.com>; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; dpdk stable <stable@dpdk.org>
> Subject: Re: [dpdk-dev] [PATCH v2] common/mlx5: add provider query port
> support to glue library
> 
> On Wed, Jun 23, 2021 at 1:27 PM Slava Ovsiienko <viacheslavo@nvidia.com>
> wrote:
> > > > This patch is highly desirable for DPDK LTS releases, as it
> > > > covers a major compatibility issue.
> > >
> > > This patch is a fix, yet nothing tells this story in the title.
> >
> > This patch is not a fix; it covers a compatibility issue, not a bug.
> 
> I still think it counts as a fix in the sense that the mlx5 driver behavior changes
> to an undesired state if rdma-core gets updated.
> 
> It's not about preferring "fix" in the title.
> It is more accurate/descriptive to me.
> If you feel strongly against "fix", I won't insist.

I have no strong objections against "fix". The patch can definitely
be categorized as a "fix" as well, and it would be easier to push the
patch to LTS 😊
I just tried to be extremely honest - upstream rdma-core did not
provide this API, and now it does. It would be very nice to engage it,
allowing full E-Switch support over upstream rdma-core in some
configurations. On the other side - you are right: w/o the patch
E-Switch might not work in DPDK, and with the patch it should work.
Looks like a true magic fix 😊.

> 
> Yet "add provider quer port support to glue library" is just black magic to
> most of us.
> 
> > The upstream rdma-core evolved, and its community adopted a slightly
> > different API version than was present in the vendor version.
> > Our PMD should conform to both versions, and we provided this patch
> > for DPDK.
> 
> Let's try differently.
> Place yourself as someone who does not know a thing about the mlx5 driver
> and rdma-core.
> How does such a person understand the impact of this patch?
> 
> I would state in the title that the mlx5 driver can now correctly
> handle rdma-core 35.
> Additionally, it could indicate which feature X is now behaving as intended.
> But if feature X is something internal to the mlx5 driver, it is worth skipping.

This rdma-core API mostly reports the vport indices assigned for
E-Switch; the assignment schema depends on many factors -
kernel/firmware/LAG configs/etc. Formerly, the vport indices were
assigned in direct correspondence with the VF index, and for these
cases E-Switch is supported fine even without the API. But newer
kernel drivers with new supported features changed the vport
identification schema, so the former approach might not work - that's
why this API was introduced.
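
A minimal sketch of that former assignment schema, which the patch
keeps as the fallback for the no-API case (the helper name here is
illustrative, not the actual PMD code):

/* Single E-Switch per PF assumed: the representor port name
 * (e.g. "pf0vf2" -> port_name == 2) gives the VF number, and the
 * vport index is the VF number + 1 (so VF 2 -> vport 3). */
static int
vport_id_fallback(bool representor, int port_name)
{
	return representor ? port_name + 1 : -1;
}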

So, if I understand your comment correctly, we should say a few words
that the E-Switch behavior might be affected and that feature
malfunction is possible.

With best regards,
Slava
Viacheslav Ovsiienko June 24, 2021, 10:10 a.m. UTC | #6
Hi, David

Thank you for the review and comments.
What do you think about a commit message like this?

common/mlx5: fix rdma-core v35 query port API support

The rdma-core mlx5 provider introduced a port attributes query
API in version v35.0 - the mlx5dv_query_port routine. To support
this change in rdma-core, this patch introduces the conditional
compilation flag HAVE_MLX5DV_DR_DEVX_PORT_V35.

The new, compatible mlx5dv_query_port routine was introduced in
the OFED rdma-core version as well, replacing the existing
proprietary mlx5dv_query_devx_port routine. The proprietary
routine was controlled in the PMD code with the
HAVE_MLX5DV_DR_DEVX_PORT conditional flag.

Currently, the OFED rdma-core library contains both versions of
the port query API, and this version is a transitional one. There
are plans to remove the proprietary mlx5dv_query_devx_port
routine, after which the HAVE_MLX5DV_DR_DEVX_PORT flag in the PMD
will no longer work.

The code had one more dependency on this flag (for the
mlx5dv_dr_action_create_dest_ib_port routine); the patch fixes
that dependency as well, by introducing a new dedicated
conditional flag - HAVE_MLX5DV_DR_CREATE_DEST_IB_PORT.

The introduced port query API reports the kernel metadata indices
assigned to the E-Switch vports. Without engaging the API, the PMD
makes assumptions about the indices that might be incorrect for
some configurations (for example, LAG ones), and the E-Switch
feature might not operate correctly.

This patch is highly desirable for DPDK LTS releases, as it
covers a major compatibility issue.

(I will add the Cc: and Fixes: tags when sending the update)

With best regards,
Slava


> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Wednesday, June 23, 2021 16:52
> To: Slava Ovsiienko <viacheslavo@nvidia.com>
> Cc: dev <dev@dpdk.org>; Raslan Darawsheh <rasland@nvidia.com>; Matan
> Azrad <matan@nvidia.com>; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; dpdk stable <stable@dpdk.org>
> Subject: Re: [dpdk-dev] [PATCH v2] common/mlx5: add provider query port
> support to glue library
> 
> On Wed, Jun 23, 2021 at 1:27 PM Slava Ovsiienko <viacheslavo@nvidia.com>
> wrote:
> > > > This patch is highly desirable for DPDK LTS releases, as it
> > > > covers a major compatibility issue.
> > >
> > > This patch is a fix, yet nothing tells this story in the title.
> >
> > This patch is not a fix; it covers a compatibility issue, not a bug.
> 
> I still think it counts as a fix in the sense that the mlx5 driver behavior
> changes to an undesired state if rdma-core gets updated.
> 
> It's not about preferring "fix" in the title.
> It is more accurate/descriptive to me.
> If you feel strongly against "fix", I won't insist.
> 
> Yet "add provider quer port support to glue library" is just black magic to
> most of us.
> 
> 
> > The upstream rdma-core evolved, and its community adopted a slightly
> > different API version than was present in the vendor version.
> > Our PMD should conform to both versions, and we provided this patch
> > for DPDK.
> 
> Let's try differently.
> Place yourself as someone who does not know a thing about the mlx5 driver
> and rdma-core.
> How does such a person understand the impact of this patch?
> 
> I would state in the title that the mlx5 driver can now correctly
> handle rdma-core 35.
> Additionally, it could indicate which feature X is now behaving as intended.
> But if feature X is something internal to the mlx5 driver, it is worth skipping.
> 
> 
> 
> --
> David Marchand

Patch

diff --git a/drivers/common/mlx5/linux/meson.build b/drivers/common/mlx5/linux/meson.build
index 007834a49b..5cea1b44d7 100644
--- a/drivers/common/mlx5/linux/meson.build
+++ b/drivers/common/mlx5/linux/meson.build
@@ -93,6 +93,10 @@  has_sym_args = [
             'IBV_WQ_FLAG_RX_END_PADDING' ],
         [ 'HAVE_MLX5DV_DR_DEVX_PORT', 'infiniband/mlx5dv.h',
             'mlx5dv_query_devx_port' ],
+	[ 'HAVE_MLX5DV_DR_DEVX_PORT_V35', 'infiniband/mlx5dv.h',
+	    'mlx5dv_query_port' ],
+	[ 'HAVE_MLX5DV_DR_CREATE_DEST_IB_PORT', 'infiniband/mlx5dv.h',
+	    'mlx5dv_dr_action_create_dest_ib_port' ],
         [ 'HAVE_IBV_DEVX_OBJ', 'infiniband/mlx5dv.h',
             'mlx5dv_devx_obj_create' ],
         [ 'HAVE_IBV_FLOW_DEVX_COUNTERS', 'infiniband/mlx5dv.h',
diff --git a/drivers/common/mlx5/linux/mlx5_glue.c b/drivers/common/mlx5/linux/mlx5_glue.c
index d3bd645a5b..00be8114be 100644
--- a/drivers/common/mlx5/linux/mlx5_glue.c
+++ b/drivers/common/mlx5/linux/mlx5_glue.c
@@ -391,7 +391,7 @@  mlx5_glue_dr_create_flow_action_dest_flow_tbl(void *tbl)
 static void *
 mlx5_glue_dr_create_flow_action_dest_port(void *domain, uint32_t port)
 {
-#ifdef HAVE_MLX5DV_DR_DEVX_PORT
+#ifdef HAVE_MLX5DV_DR_CREATE_DEST_IB_PORT
 	return mlx5dv_dr_action_create_dest_ib_port(domain, port);
 #else
 #ifdef HAVE_MLX5DV_DR_ESWITCH
@@ -1087,17 +1087,54 @@  mlx5_glue_devx_wq_query(struct ibv_wq *wq, const void *in, size_t inlen,
 static int
 mlx5_glue_devx_port_query(struct ibv_context *ctx,
 			  uint32_t port_num,
-			  struct mlx5dv_devx_port *mlx5_devx_port)
-{
+			  struct mlx5_port_info *info)
+{
+	int err = 0;
+
+	info->query_flags = 0;
+#ifdef HAVE_MLX5DV_DR_DEVX_PORT_V35
+	/* The DevX port query API is implemented (rdma-core v35 and above). */
+	struct mlx5_ib_uapi_query_port devx_port;
+
+	memset(&devx_port, 0, sizeof(devx_port));
+	err = mlx5dv_query_port(ctx, port_num, &devx_port);
+	if (err)
+		return err;
+	if (devx_port.flags & MLX5DV_QUERY_PORT_VPORT_REG_C0) {
+		info->vport_meta_tag = devx_port.reg_c0.value;
+		info->vport_meta_mask = devx_port.reg_c0.mask;
+		info->query_flags |= MLX5_PORT_QUERY_REG_C0;
+	}
+	if (devx_port.flags & MLX5DV_QUERY_PORT_VPORT) {
+		info->vport_id = devx_port.vport;
+		info->query_flags |= MLX5_PORT_QUERY_VPORT;
+	}
+#else
 #ifdef HAVE_MLX5DV_DR_DEVX_PORT
-	return mlx5dv_query_devx_port(ctx, port_num, mlx5_devx_port);
+	/* The legacy DevX port query API is implemented (prior v35). */
+	struct mlx5dv_devx_port devx_port = {
+		.comp_mask = MLX5DV_DEVX_PORT_VPORT |
+			     MLX5DV_DEVX_PORT_MATCH_REG_C_0
+	};
+
+	err = mlx5dv_query_devx_port(ctx, port_num, &devx_port);
+	if (err)
+		return err;
+	if (devx_port.comp_mask & MLX5DV_DEVX_PORT_MATCH_REG_C_0) {
+		info->vport_meta_tag = devx_port.reg_c_0.value;
+		info->vport_meta_mask = devx_port.reg_c_0.mask;
+		info->query_flags |= MLX5_PORT_QUERY_REG_C0;
+	}
+	if (devx_port.comp_mask & MLX5DV_DEVX_PORT_VPORT) {
+		info->vport_id = devx_port.vport_num;
+		info->query_flags |= MLX5_PORT_QUERY_VPORT;
+	}
 #else
-	(void)ctx;
-	(void)port_num;
-	(void)mlx5_devx_port;
-	errno = ENOTSUP;
-	return errno;
-#endif
+	RTE_SET_USED(ctx);
+	RTE_SET_USED(port_num);
+#endif /* HAVE_MLX5DV_DR_DEVX_PORT */
+#endif /* HAVE_MLX5DV_DR_DEVX_PORT_V35 */
+	return err;
 }
 
 static int
diff --git a/drivers/common/mlx5/linux/mlx5_glue.h b/drivers/common/mlx5/linux/mlx5_glue.h
index 97462e9ab8..840d8cf57f 100644
--- a/drivers/common/mlx5/linux/mlx5_glue.h
+++ b/drivers/common/mlx5/linux/mlx5_glue.h
@@ -84,6 +84,20 @@  struct mlx5dv_dr_action;
 struct mlx5dv_devx_port;
 #endif
 
+#ifndef HAVE_MLX5DV_DR_DEVX_PORT_V35
+struct mlx5dv_port;
+#endif
+
+#define MLX5_PORT_QUERY_VPORT (1u << 0)
+#define MLX5_PORT_QUERY_REG_C0 (1u << 1)
+
+struct mlx5_port_info {
+	uint16_t query_flags;
+	uint16_t vport_id; /* Associated VF vport index (if any). */
+	uint32_t vport_meta_tag; /* Used for vport index match over VF LAG. */
+	uint32_t vport_meta_mask; /* Used for vport index field match mask. */
+};
+
 #ifndef HAVE_MLX5_DR_CREATE_ACTION_FLOW_METER
 struct mlx5dv_dr_flow_meter_attr;
 #endif
@@ -311,7 +325,7 @@  struct mlx5_glue {
 			     void *out, size_t outlen);
 	int (*devx_port_query)(struct ibv_context *ctx,
 			       uint32_t port_num,
-			       struct mlx5dv_devx_port *mlx5_devx_port);
+			       struct mlx5_port_info *info);
 	int (*dr_dump_domain)(FILE *file, void *domain);
 	int (*dr_dump_rule)(FILE *file, void *rule);
 	int (*devx_query_eqn)(struct ibv_context *context, uint32_t cpus,
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 534a56a555..54e4a1fe60 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -822,9 +822,7 @@  mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	char name[RTE_ETH_NAME_MAX_LEN];
 	int own_domain_id = 0;
 	uint16_t port_id;
-#ifdef HAVE_MLX5DV_DR_DEVX_PORT
-	struct mlx5dv_devx_port devx_port = { .comp_mask = 0 };
-#endif
+	struct mlx5_port_info vport_info = { .query_flags = 0 };
 
 	/* Determine if this port representor is supposed to be spawned. */
 	if (switch_info->representor && dpdk_dev->devargs &&
@@ -1055,29 +1053,27 @@  mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	priv->vport_meta_tag = 0;
 	priv->vport_meta_mask = 0;
 	priv->pf_bond = spawn->pf_bond;
-#ifdef HAVE_MLX5DV_DR_DEVX_PORT
 	/*
-	 * The DevX port query API is implemented. E-Switch may use
-	 * either vport or reg_c[0] metadata register to match on
-	 * vport index. The engaged part of metadata register is
-	 * defined by mask.
+	 * If we have E-Switch we should determine the vport attributes.
+	 * E-Switch may use either source vport field or reg_c[0] metadata
+	 * register to match on vport index. The engaged part of metadata
+	 * register is defined by mask.
 	 */
 	if (switch_info->representor || switch_info->master) {
-		devx_port.comp_mask = MLX5DV_DEVX_PORT_VPORT |
-				      MLX5DV_DEVX_PORT_MATCH_REG_C_0;
-		err = mlx5_glue->devx_port_query(sh->ctx, spawn->phys_port,
-						 &devx_port);
+		err = mlx5_glue->devx_port_query(sh->ctx,
+						 spawn->phys_port,
+						 &vport_info);
 		if (err) {
 			DRV_LOG(WARNING,
 				"can't query devx port %d on device %s",
 				spawn->phys_port,
 				mlx5_os_get_dev_device_name(spawn->phys_dev));
-			devx_port.comp_mask = 0;
+			vport_info.query_flags = 0;
 		}
 	}
-	if (devx_port.comp_mask & MLX5DV_DEVX_PORT_MATCH_REG_C_0) {
-		priv->vport_meta_tag = devx_port.reg_c_0.value;
-		priv->vport_meta_mask = devx_port.reg_c_0.mask;
+	if (vport_info.query_flags & MLX5_PORT_QUERY_REG_C0) {
+		priv->vport_meta_tag = vport_info.vport_meta_tag;
+		priv->vport_meta_mask = vport_info.vport_meta_mask;
 		if (!priv->vport_meta_mask) {
 			DRV_LOG(ERR, "vport zero mask for port %d"
 				     " on bonding device %s",
@@ -1097,8 +1093,8 @@  mlx5_dev_spawn(struct rte_device *dpdk_dev,
 			goto error;
 		}
 	}
-	if (devx_port.comp_mask & MLX5DV_DEVX_PORT_VPORT) {
-		priv->vport_id = devx_port.vport_num;
+	if (vport_info.query_flags & MLX5_PORT_QUERY_VPORT) {
+		priv->vport_id = vport_info.vport_id;
 	} else if (spawn->pf_bond >= 0 &&
 		   (switch_info->representor || switch_info->master)) {
 		DRV_LOG(ERR, "can't deduce vport index for port %d"
@@ -1108,25 +1104,21 @@  mlx5_dev_spawn(struct rte_device *dpdk_dev,
 		err = ENOTSUP;
 		goto error;
 	} else {
-		/* Suppose vport index in compatible way. */
+		/*
+		 * Suppose vport index in compatible way. Kernel/rdma_core
+		 * support single E-Switch per PF configurations only and
+		 * vport_id field contains the vport index for associated VF,
+		 * which is deduced from representor port name.
+		 * For example, let's have the IB device port 10, it has
+		 * attached network device eth0, which has port name attribute
+		 * pf0vf2, we can deduce the VF number as 2, and set vport index
+		 * as 3 (2+1). This assigning schema should be changed if the
+		 * multiple E-Switch instances per PF configurations or/and PCI
+		 * subfunctions are added.
+		 */
 		priv->vport_id = switch_info->representor ?
 				 switch_info->port_name + 1 : -1;
 	}
-#else
-	/*
-	 * Kernel/rdma_core support single E-Switch per PF configurations
-	 * only and vport_id field contains the vport index for
-	 * associated VF, which is deduced from representor port name.
-	 * For example, let's have the IB device port 10, it has
-	 * attached network device eth0, which has port name attribute
-	 * pf0vf2, we can deduce the VF number as 2, and set vport index
-	 * as 3 (2+1). This assigning schema should be changed if the
-	 * multiple E-Switch instances per PF configurations or/and PCI
-	 * subfunctions are added.
-	 */
-	priv->vport_id = switch_info->representor ?
-			 switch_info->port_name + 1 : -1;
-#endif
 	priv->representor_id = mlx5_representor_id_encode(switch_info,
 							  eth_da->type);
 	/*
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 1b2639d232..dafd37ab93 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -10337,7 +10337,7 @@  flow_dv_translate_action_port_id(struct rte_eth_dev *dev,
 					  RTE_FLOW_ERROR_TYPE_ACTION,
 					  NULL,
 					  "No eswitch info was found for port");
-#ifdef HAVE_MLX5DV_DR_DEVX_PORT
+#ifdef HAVE_MLX5DV_DR_CREATE_DEST_IB_PORT
 	/*
 	 * This parameter is transferred to
 	 * mlx5dv_dr_action_create_dest_ib_port().