vdpa/mlx5: fix live migration termination

Message ID 1595592431-164904-1-git-send-email-matan@mellanox.com (mailing list archive)
State Accepted, archived
Delegated to: Maxime Coquelin
Series vdpa/mlx5: fix live migration termination

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/iol-broadcom-Performance success Performance Testing PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/Intel-compilation success Compilation OK
ci/travis-robot success Travis build: passed
ci/iol-testing success Testing PASS
ci/iol-mellanox-Performance success Performance Testing PASS

Commit Message

Matan Azrad July 24, 2020, 12:07 p.m. UTC
There are several per-virtq operations in the live migration (LM)
handling.

Before the driver supported queue update, an invalid virtq terminated
the whole LM handling.

Now that the driver supports queue update, a virtq may legitimately be
invalid at this stage.

Skip invalid virtqs in the LM handling instead of failing.

Fixes: c47d6e83334e ("vdpa/mlx5: support queue update")

Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Xueming Li <xuemingl@mellanox.com>
---
 drivers/vdpa/mlx5/mlx5_vdpa_lm.c | 28 +++++++++++++++++-----------
 1 file changed, 17 insertions(+), 11 deletions(-)
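
For illustration only, the toy program below contrasts the old and new
per-virtq loops described in the commit message. The structures, stubs
and names (toy_virtq, toy_modify_virtq, lm_step_old/lm_step_new) are
hypothetical stand-ins, not the driver's code; only the control-flow
change mirrors the patch: the old loop aborts on the first invalid
virtq, the new one skips it and fails only on a real modify error.

#include <stdio.h>
#include <stddef.h>

/* Illustrative stand-in for the driver's per-queue state; the real
 * structures live in drivers/vdpa/mlx5/mlx5_vdpa.h.
 */
struct toy_virtq {
	void *virtq;	/* NULL while the queue is not (yet) created */
};

/* Stub for the DevX modify command; always succeeds here. */
static int
toy_modify_virtq(void *virtq)
{
	(void)virtq;
	return 0;
}

/* Old behavior: abort the whole LM step on the first invalid queue. */
static int
lm_step_old(struct toy_virtq *vqs, int nr_virtqs)
{
	int i;

	for (i = 0; i < nr_virtqs; ++i) {
		if (vqs[i].virtq == NULL || toy_modify_virtq(vqs[i].virtq)) {
			fprintf(stderr, "Failed to modify virtq %d.\n", i);
			return -1;
		}
	}
	return 0;
}

/* New behavior: an invalid queue is a legal state (queue update in
 * progress), so it is skipped; only a real modify failure is fatal.
 */
static int
lm_step_new(struct toy_virtq *vqs, int nr_virtqs)
{
	int i;

	for (i = 0; i < nr_virtqs; ++i) {
		if (vqs[i].virtq == NULL) {
			printf("virtq %d is invalid, skipping.\n", i);
		} else if (toy_modify_virtq(vqs[i].virtq)) {
			fprintf(stderr, "Failed to modify virtq %d.\n", i);
			return -1;
		}
	}
	return 0;
}

int
main(void)
{
	int dummy;
	struct toy_virtq vqs[3] = { { &dummy }, { NULL }, { &dummy } };

	printf("old: %d\n", lm_step_old(vqs, 3));	/* -1: aborts at queue 1 */
	printf("new: %d\n", lm_step_new(vqs, 3));	/*  0: queue 1 skipped  */
	return 0;
}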
  

Comments

Maxime Coquelin July 28, 2020, 9:29 a.m. UTC | #1
On 7/24/20 2:07 PM, Matan Azrad wrote:
> There are several per-virtq operations in the live migration (LM)
> handling.
> 
> Before the driver supported queue update, an invalid virtq terminated
> the whole LM handling.
> 
> Now that the driver supports queue update, a virtq may legitimately be
> invalid at this stage.
> 
> Skip invalid virtqs in the LM handling instead of failing.
> 
> Fixes: c47d6e83334e ("vdpa/mlx5: support queue update")
> 
> Signed-off-by: Matan Azrad <matan@mellanox.com>
> Acked-by: Xueming Li <xuemingl@mellanox.com>
> ---
>  drivers/vdpa/mlx5/mlx5_vdpa_lm.c | 28 +++++++++++++++++-----------
>  1 file changed, 17 insertions(+), 11 deletions(-)

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks,
Maxime
  
Maxime Coquelin July 28, 2020, 3:27 p.m. UTC | #2
On 7/24/20 2:07 PM, Matan Azrad wrote:
> There are several per-virtq operations in the live migration (LM)
> handling.
> 
> Before the driver supported queue update, an invalid virtq terminated
> the whole LM handling.
> 
> Now that the driver supports queue update, a virtq may legitimately be
> invalid at this stage.
> 
> Skip invalid virtqs in the LM handling instead of failing.
> 
> Fixes: c47d6e83334e ("vdpa/mlx5: support queue update")
> 
> Signed-off-by: Matan Azrad <matan@mellanox.com>
> Acked-by: Xueming Li <xuemingl@mellanox.com>
> ---
>  drivers/vdpa/mlx5/mlx5_vdpa_lm.c | 28 +++++++++++++++++-----------
>  1 file changed, 17 insertions(+), 11 deletions(-)

Applied to dpdk-next-virtio/master

Thanks,
Maxime
  

Patch

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_lm.c b/drivers/vdpa/mlx5/mlx5_vdpa_lm.c
index 460e01d..273c46f 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_lm.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_lm.c
@@ -19,9 +19,13 @@ 
 
 	for (i = 0; i < priv->nr_virtqs; ++i) {
 		attr.queue_index = i;
-		if (!priv->virtqs[i].virtq ||
-		    mlx5_devx_cmd_modify_virtq(priv->virtqs[i].virtq, &attr)) {
-			DRV_LOG(ERR, "Failed to modify virtq %d logging.", i);
+		if (!priv->virtqs[i].virtq) {
+			DRV_LOG(DEBUG, "virtq %d is invalid for dirty bitmap "
+				"enabling.", i);
+		} else if (mlx5_devx_cmd_modify_virtq(priv->virtqs[i].virtq,
+			   &attr)) {
+			DRV_LOG(ERR, "Failed to modify virtq %d for dirty "
+				"bitmap enabling.", i);
 			return -1;
 		}
 	}
@@ -69,9 +73,11 @@ 
 	attr.dirty_bitmap_mkey = mr->mkey->id;
 	for (i = 0; i < priv->nr_virtqs; ++i) {
 		attr.queue_index = i;
-		if (!priv->virtqs[i].virtq ||
-		    mlx5_devx_cmd_modify_virtq(priv->virtqs[i].virtq, &attr)) {
-			DRV_LOG(ERR, "Failed to modify virtq %d for lm.", i);
+		if (!priv->virtqs[i].virtq) {
+			DRV_LOG(DEBUG, "virtq %d is invalid for LM.", i);
+		} else if (mlx5_devx_cmd_modify_virtq(priv->virtqs[i].virtq,
+						      &attr)) {
+			DRV_LOG(ERR, "Failed to modify virtq %d for LM.", i);
 			goto err;
 		}
 	}
@@ -104,15 +110,15 @@ 
 	if (!RTE_VHOST_NEED_LOG(features))
 		return 0;
 	for (i = 0; i < priv->nr_virtqs; ++i) {
-		if (priv->virtqs[i].virtq) {
+		if (!priv->virtqs[i].virtq) {
+			DRV_LOG(DEBUG, "virtq %d is invalid for LM log.", i);
+		} else {
 			ret = mlx5_vdpa_virtq_stop(priv, i);
 			if (ret) {
-				DRV_LOG(ERR, "Failed to stop virtq %d.", i);
+				DRV_LOG(ERR, "Failed to stop virtq %d for LM "
+					"log.", i);
 				return -1;
 			}
-		} else {
-			DRV_LOG(ERR, "virtq %d is not created.", i);
-			return -1;
 		}
 		rte_vhost_log_used_vring(priv->vid, i, 0,
 			      MLX5_VDPA_USED_RING_LEN(priv->virtqs[i].vq_size));
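
Read together with the unchanged context around the last hunk, the
resulting LM log flow can be sketched as below. All driver internals
(priv, the per-virtq state, mlx5_vdpa_virtq_stop() and
rte_vhost_log_used_vring()) are replaced by hypothetical toy_* stubs;
only the control flow after the fix is shown: an invalid virtq is
reported at debug level and skipped, a created virtq is stopped, and
the used ring of every queue index is still logged to vhost.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for the driver internals touched by the last
 * hunk; only the control flow mirrors the patch.
 */
struct toy_virtq {
	void *virtq;		/* NULL while the queue is being updated */
	uint16_t vq_size;
};

struct toy_priv {
	int vid;
	int nr_virtqs;
	struct toy_virtq virtqs[8];
};

/* Stub for mlx5_vdpa_virtq_stop(); always succeeds here. */
static int
toy_virtq_stop(struct toy_priv *priv, int i)
{
	(void)priv;
	(void)i;
	return 0;
}

/* Stub for rte_vhost_log_used_vring(). */
static void
toy_log_used_vring(int vid, uint16_t idx, uint64_t off, uint64_t len)
{
	printf("log used ring: vid=%d idx=%u off=%llu len=%llu\n",
	       vid, idx, (unsigned long long)off, (unsigned long long)len);
}

/* Flow of the LM log step after the fix: skip invalid virtqs with a
 * debug message, stop created ones, and log every used ring.
 */
static int
toy_lm_log(struct toy_priv *priv)
{
	int i;

	for (i = 0; i < priv->nr_virtqs; ++i) {
		if (priv->virtqs[i].virtq == NULL) {
			printf("virtq %d is invalid for LM log, skipping.\n", i);
		} else if (toy_virtq_stop(priv, i)) {
			fprintf(stderr, "Failed to stop virtq %d.\n", i);
			return -1;
		}
		toy_log_used_vring(priv->vid, i, 0, priv->virtqs[i].vq_size);
	}
	return 0;
}

int
main(void)
{
	int dummy;
	struct toy_priv priv = { .vid = 0, .nr_virtqs = 2 };

	priv.virtqs[0].virtq = &dummy;
	priv.virtqs[0].vq_size = 256;
	priv.virtqs[1].virtq = NULL;	/* mid-update: skipped, not fatal */
	return toy_lm_log(&priv);
}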