vhost: restore IOTLB mempool allocation

Message ID 20210517085951.28970-1-david.marchand@redhat.com (mailing list archive)
State Accepted, archived
Delegated to: Maxime Coquelin
Series: vhost: restore IOTLB mempool allocation

Checks

Context                       Check    Description
ci/checkpatch                 success  coding style OK
ci/iol-testing                success  Testing PASS
ci/Intel-compilation          success  Compilation OK
ci/intel-Testing              success  Testing PASS
ci/github-robot               success  github build: passed
ci/iol-abi-testing            success  Testing PASS
ci/iol-intel-Performance      success  Performance Testing PASS
ci/iol-intel-Functional       success  Functional Testing PASS
ci/iol-mellanox-Functional    success  Functional Testing PASS
ci/iol-mellanox-Performance   fail     Performance Testing issues

Commit Message

David Marchand May 17, 2021, 8:59 a.m. UTC
As explained by Chenbo, IOTLB messages may be sent while some queues
are not yet enabled. If we initialize the IOTLB pools in
vhost_user_set_vring_num, an IOTLB update can arrive while the IOTLB
pools of the disabled queues are still uninitialized.
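
For illustration, one plausible message sequence triggering the issue
(a sketch, assuming the frontend behaves like QEMU and that the IOTLB
handler inserts each update into every vring's cache, as the vhost-user
code does today):

  VHOST_USER_SET_FEATURES        VIRTIO_F_IOMMU_PLATFORM negotiated
  VHOST_USER_SET_VRING_NUM (q0)  q0's IOTLB pool gets initialized
  VHOST_USER_IOTLB_MSG           vhost_user_iotlb_cache_insert() runs
                                 for every vring, but q1 never saw
                                 SET_VRING_NUM, so vq->iotlb_pool of
                                 q1 is still uninitialized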

Fixes: 968bbc7e2e50 ("vhost: avoid IOTLB mempool allocation while IOMMU disabled")

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Summary of a discussion with Maxime:

To keep the mempool allocation optimization, we could try to initialize
the per-vring mempools upon reception of the first IOTLB message.
Since those pools are used as caches, it is not an issue if some vrings
receive more IOTLB updates than others.

But investigating and testing this now is too late for 21.05, hence
reverting is the safer option. A sketch of the alternative follows.
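
A minimal sketch of that alternative, for the record. The helper name
is hypothetical, not part of this patch; it reuses the existing
vhost_user_iotlb_init() and the vq->iotlb_pool field, and would be
called from the IOTLB message handler for each vring before inserting
an entry into that vring's cache:

static int
vhost_user_iotlb_lazy_init(struct virtio_net *dev, uint32_t vring_idx)
{
	struct vhost_virtqueue *vq = dev->virtqueue[vring_idx];

	/* Pool already created by an earlier IOTLB message: nothing to do. */
	if (vq->iotlb_pool != NULL)
		return 0;

	/* Fall back to the existing per-vring pool setup. */
	return vhost_user_iotlb_init(dev, vring_idx);
}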

---
 lib/vhost/vhost.c      | 5 +++--
 lib/vhost/vhost_user.c | 6 +-----
 2 files changed, 4 insertions(+), 7 deletions(-)
  

Comments

Chenbo Xia May 17, 2021, 1:06 p.m. UTC | #1
> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Monday, May 17, 2021 5:00 PM
> To: dev@dpdk.org
> Cc: thomas@monjalon.net; Yigit, Ferruh <ferruh.yigit@intel.com>;
> Maxime Coquelin <maxime.coquelin@redhat.com>; Xia, Chenbo
> <chenbo.xia@intel.com>; Zhihong Wang <wangzhihong.wzh@bytedance.com>;
> Junjie Wan <wanjunjie@bytedance.com>
> Subject: [PATCH] vhost: restore IOTLB mempool allocation
>
> [snip: quoted message and diff, identical to the commit message above
> and the patch below]

Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
  
Chenbo Xia May 18, 2021, 8:08 a.m. UTC | #2
> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Monday, May 17, 2021 5:00 PM
> To: dev@dpdk.org
> Cc: thomas@monjalon.net; Yigit, Ferruh <ferruh.yigit@intel.com>;
> Maxime Coquelin <maxime.coquelin@redhat.com>; Xia, Chenbo
> <chenbo.xia@intel.com>; Zhihong Wang <wangzhihong.wzh@bytedance.com>;
> Junjie Wan <wanjunjie@bytedance.com>
> Subject: [PATCH] vhost: restore IOTLB mempool allocation
>
> [snip]

Applied to next-virtio/main. Thanks
  

Patch

diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index 9cbcf650b6..c96f6335c8 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -365,8 +365,7 @@  free_vq(struct virtio_net *dev, struct vhost_virtqueue *vq)
 
 	vhost_free_async_mem(vq);
 	rte_free(vq->batch_copy_elems);
-	if (vq->iotlb_pool)
-		rte_mempool_free(vq->iotlb_pool);
+	rte_mempool_free(vq->iotlb_pool);
 	rte_free(vq->log_cache);
 	rte_free(vq);
 }
@@ -570,6 +569,8 @@  init_vring_queue(struct virtio_net *dev, uint32_t vring_idx)
 	vq->kickfd = VIRTIO_UNINITIALIZED_EVENTFD;
 	vq->callfd = VIRTIO_UNINITIALIZED_EVENTFD;
 	vq->notif_enable = VIRTIO_UNINITIALIZED_NOTIF;
+
+	vhost_user_iotlb_init(dev, vring_idx);
 }
 
 static void
diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
index 611ff209e3..8f0eba6412 100644
--- a/lib/vhost/vhost_user.c
+++ b/lib/vhost/vhost_user.c
@@ -469,10 +469,6 @@  vhost_user_set_vring_num(struct virtio_net **pdev,
 		return RTE_VHOST_MSG_RESULT_ERR;
 	}
 
-	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) {
-		if (vhost_user_iotlb_init(dev, msg->payload.state.index))
-			return RTE_VHOST_MSG_RESULT_ERR;
-	}
 	return RTE_VHOST_MSG_RESULT_OK;
 }
 
@@ -578,7 +574,7 @@  numa_realloc(struct virtio_net *dev, int index)
 	dev->virtqueue[index] = vq;
 	vhost_devices[dev->vid] = dev;
 
-	if (old_vq != vq && (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)))
+	if (old_vq != vq)
 		vhost_user_iotlb_init(dev, index);
 
 	return dev;
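
A note on the free_vq() hunk: rte_mempool_free() is documented to do
nothing when passed a NULL pointer, so the guard it drops was redundant
even before this revert; and since the revert makes init_vring_queue()
create the pool unconditionally, vq->iotlb_pool is expected to be valid
whenever the virtqueue itself is. The equivalence, as a minimal sketch:

	/* Before: guard against a possibly NULL pool. */
	if (vq->iotlb_pool)
		rte_mempool_free(vq->iotlb_pool);

	/* After: rte_mempool_free(NULL) is a documented no-op. */
	rte_mempool_free(vq->iotlb_pool);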