[v8,1/2] vhost: introduce DMA vchannel unconfiguration

Message ID 20221025082540.100618-2-xuan.ding@intel.com (mailing list archive)
State Accepted, archived
Delegated to: Maxime Coquelin
Series vhost: introduce DMA vchannel unconfiguration

Checks

Context        Check     Description
ci/checkpatch  success   coding style OK

Commit Message

Ding, Xuan Oct. 25, 2022, 8:25 a.m. UTC
  From: Xuan Ding <xuan.ding@intel.com>

Add a new API rte_vhost_async_dma_unconfigure() to unconfigure DMA
vChannels in the vhost async data path. Lock protection is also added
to protect DMA vChannel configuration and unconfiguration
from concurrent calls.

Signed-off-by: Xuan Ding <xuan.ding@intel.com>
---
 doc/guides/prog_guide/vhost_lib.rst    |  5 ++
 doc/guides/rel_notes/release_22_11.rst |  5 ++
 lib/vhost/rte_vhost_async.h            | 20 +++++++
 lib/vhost/version.map                  |  3 ++
 lib/vhost/vhost.c                      | 72 ++++++++++++++++++++++++--
 5 files changed, 100 insertions(+), 5 deletions(-)
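
To show how the new API pairs with the existing rte_vhost_async_dma_configure(),
here is a minimal usage sketch (not part of the patch); the wrapper function name
is hypothetical and error handling is reduced to the bare minimum:

  #include <rte_vhost_async.h>

  static int
  async_dma_lifecycle(int16_t dma_id, uint16_t vchan_id)
  {
  	/* Register the DMA vChannel with the vhost library before enabling
  	 * the async data path on any virtqueue. */
  	if (rte_vhost_async_dma_configure(dma_id, vchan_id) < 0)
  		return -1;

  	/* ... async data path runs here, e.g. rte_vhost_submit_enqueue_burst()
  	 * and rte_vhost_poll_enqueue_completed() ... */

  	/* Tear the vChannel down once it is no longer used; the call fails
  	 * if in-flight packets remain on the vChannel. */
  	if (rte_vhost_async_dma_unconfigure(dma_id, vchan_id) < 0)
  		return -1;

  	return 0;
  }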
  

Comments

Maxime Coquelin Oct. 26, 2022, 5:13 a.m. UTC | #1
On 10/25/22 10:25, xuan.ding@intel.com wrote:
> From: Xuan Ding <xuan.ding@intel.com>
> 
> Add a new API rte_vhost_async_dma_unconfigure() to unconfigure DMA
> vChannels in the vhost async data path. Lock protection is also added
> to protect DMA vChannel configuration and unconfiguration
> from concurrent calls.
> 
> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> ---
>   doc/guides/prog_guide/vhost_lib.rst    |  5 ++
>   doc/guides/rel_notes/release_22_11.rst |  5 ++
>   lib/vhost/rte_vhost_async.h            | 20 +++++++
>   lib/vhost/version.map                  |  3 ++
>   lib/vhost/vhost.c                      | 72 ++++++++++++++++++++++++--
>   5 files changed, 100 insertions(+), 5 deletions(-)
> 

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks,
Maxime
  
Chenbo Xia Oct. 26, 2022, 9:02 a.m. UTC | #2
> -----Original Message-----
> From: Ding, Xuan <xuan.ding@intel.com>
> Sent: Tuesday, October 25, 2022 4:26 PM
> To: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; Hu, Jiayu <jiayu.hu@intel.com>; He, Xingguang
> <xingguang.he@intel.com>; Ling, WeiX <weix.ling@intel.com>; Jiang, Cheng1
> <cheng1.jiang@intel.com>; Wang, YuanX <yuanx.wang@intel.com>; Ma, WenwuX
> <wenwux.ma@intel.com>; Ding, Xuan <xuan.ding@intel.com>
> Subject: [PATCH v8 1/2] vhost: introduce DMA vchannel unconfiguration
> 
> From: Xuan Ding <xuan.ding@intel.com>
> 
> Add a new API rte_vhost_async_dma_unconfigure() to unconfigure DMA
> vChannels in the vhost async data path. Lock protection is also added
> to protect DMA vChannel configuration and unconfiguration
> from concurrent calls.
> 
> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> ---
>  doc/guides/prog_guide/vhost_lib.rst    |  5 ++
>  doc/guides/rel_notes/release_22_11.rst |  5 ++
>  lib/vhost/rte_vhost_async.h            | 20 +++++++
>  lib/vhost/version.map                  |  3 ++
>  lib/vhost/vhost.c                      | 72 ++++++++++++++++++++++++--
>  5 files changed, 100 insertions(+), 5 deletions(-)
> 

Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
  

Patch

diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index bad4d819e1..0d9eca1f7d 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -323,6 +323,11 @@  The following is an overview of some key Vhost API functions:
   Get device type of vDPA device, such as VDPA_DEVICE_TYPE_NET,
   VDPA_DEVICE_TYPE_BLK.
 
+* ``rte_vhost_async_dma_unconfigure(dma_id, vchan_id)``
+
+  Clean up a DMA vChannel that is no longer in use. After this function is called,
+  the specified DMA vChannel should no longer be used by the Vhost library.
+
 Vhost-user Implementations
 --------------------------
 
diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index 2da8bc9661..bbd1c5aa9c 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -225,6 +225,11 @@  New Features
   sysfs entries to adjust the minimum and maximum uncore frequency values,
   which works on Linux with Intel hardware only.
 
+* **Added DMA vChannel unconfiguration for async vhost.**
+
+  Added support to unconfigure DMA vChannel that is no longer used
+  by the Vhost library.
+
 * **Rewritten pmdinfo script.**
 
   The ``dpdk-pmdinfo.py`` script was rewritten to produce valid JSON only.
diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
index 1db2a10124..8f190dd44b 100644
--- a/lib/vhost/rte_vhost_async.h
+++ b/lib/vhost/rte_vhost_async.h
@@ -266,6 +266,26 @@  rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
 	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count,
 	int *nr_inflight, int16_t dma_id, uint16_t vchan_id);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice.
+ *
+ * Unconfigure DMA vChannel in Vhost asynchronous data path.
+ * This function should be called when the specified DMA vChannel is no longer
+ * used by the Vhost library. Before this function is called, make sure there
+ * are no in-flight packets in the DMA vChannel.
+ *
+ * @param dma_id
+ *  the identifier of DMA device
+ * @param vchan_id
+ *  the identifier of virtual DMA channel
+ * @return
+ *  0 on success, and -1 on failure
+ */
+__rte_experimental
+int
+rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/vhost/version.map b/lib/vhost/version.map
index 7a00b65740..0b61870870 100644
--- a/lib/vhost/version.map
+++ b/lib/vhost/version.map
@@ -94,6 +94,9 @@  EXPERIMENTAL {
 	rte_vhost_async_try_dequeue_burst;
 	rte_vhost_driver_get_vdpa_dev_type;
 	rte_vhost_clear_queue;
+
+	# added in 22.11
+	rte_vhost_async_dma_unconfigure;
 };
 
 INTERNAL {
diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index 8740aa2788..1bb01c2a2e 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -23,6 +23,7 @@ 
 
 struct virtio_net *vhost_devices[RTE_MAX_VHOST_DEVICE];
 pthread_mutex_t vhost_dev_lock = PTHREAD_MUTEX_INITIALIZER;
+pthread_mutex_t vhost_dma_lock = PTHREAD_MUTEX_INITIALIZER;
 
 struct vhost_vq_stats_name_off {
 	char name[RTE_VHOST_STATS_NAME_SIZE];
@@ -1844,19 +1845,21 @@  rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 	void *pkts_cmpl_flag_addr;
 	uint16_t max_desc;
 
+	pthread_mutex_lock(&vhost_dma_lock);
+
 	if (!rte_dma_is_valid(dma_id)) {
 		VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id);
-		return -1;
+		goto error;
 	}
 
 	if (rte_dma_info_get(dma_id, &info) != 0) {
 		VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d information.\n", dma_id);
-		return -1;
+		goto error;
 	}
 
 	if (vchan_id >= info.max_vchans) {
 		VHOST_LOG_CONFIG("dma", ERR, "Invalid DMA %d vChannel %u.\n", dma_id, vchan_id);
-		return -1;
+		goto error;
 	}
 
 	if (!dma_copy_track[dma_id].vchans) {
@@ -1868,7 +1871,7 @@  rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 			VHOST_LOG_CONFIG("dma", ERR,
 				"Failed to allocate vchans for DMA %d vChannel %u.\n",
 				dma_id, vchan_id);
-			return -1;
+			goto error;
 		}
 
 		dma_copy_track[dma_id].vchans = vchans;
@@ -1877,6 +1880,7 @@  rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 	if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr) {
 		VHOST_LOG_CONFIG("dma", INFO, "DMA %d vChannel %u already registered.\n",
 			dma_id, vchan_id);
+		pthread_mutex_unlock(&vhost_dma_lock);
 		return 0;
 	}
 
@@ -1894,7 +1898,7 @@  rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 			rte_free(dma_copy_track[dma_id].vchans);
 			dma_copy_track[dma_id].vchans = NULL;
 		}
-		return -1;
+		goto error;
 	}
 
 	dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr = pkts_cmpl_flag_addr;
@@ -1902,7 +1906,12 @@  rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 	dma_copy_track[dma_id].vchans[vchan_id].ring_mask = max_desc - 1;
 	dma_copy_track[dma_id].nr_vchans++;
 
+	pthread_mutex_unlock(&vhost_dma_lock);
 	return 0;
+
+error:
+	pthread_mutex_unlock(&vhost_dma_lock);
+	return -1;
 }
 
 int
@@ -2091,5 +2100,58 @@  int rte_vhost_vring_stats_reset(int vid, uint16_t queue_id)
 	return 0;
 }
 
+int
+rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id)
+{
+	struct rte_dma_info info;
+	struct rte_dma_stats stats = { 0 };
+
+	pthread_mutex_lock(&vhost_dma_lock);
+
+	if (!rte_dma_is_valid(dma_id)) {
+		VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id);
+		goto error;
+	}
+
+	if (rte_dma_info_get(dma_id, &info) != 0) {
+		VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d information.\n", dma_id);
+		goto error;
+	}
+
+	if (vchan_id >= info.max_vchans || !dma_copy_track[dma_id].vchans ||
+		!dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr) {
+		VHOST_LOG_CONFIG("dma", ERR, "Invalid channel %d:%u.\n", dma_id, vchan_id);
+		goto error;
+	}
+
+	if (rte_dma_stats_get(dma_id, vchan_id, &stats) != 0) {
+		VHOST_LOG_CONFIG("dma", ERR,
+				 "Failed to get stats for DMA %d vChannel %u.\n", dma_id, vchan_id);
+		goto error;
+	}
+
+	if (stats.submitted - stats.completed != 0) {
+		VHOST_LOG_CONFIG("dma", ERR,
+				 "Do not unconfigure when there are inflight packets.\n");
+		goto error;
+	}
+
+	rte_free(dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr);
+	dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr = NULL;
+	dma_copy_track[dma_id].nr_vchans--;
+
+	if (dma_copy_track[dma_id].nr_vchans == 0) {
+		rte_free(dma_copy_track[dma_id].vchans);
+		dma_copy_track[dma_id].vchans = NULL;
+	}
+
+	pthread_mutex_unlock(&vhost_dma_lock);
+	return 0;
+
+error:
+	pthread_mutex_unlock(&vhost_dma_lock);
+	return -1;
+}
+
 RTE_LOG_REGISTER_SUFFIX(vhost_config_log_level, config, INFO);
 RTE_LOG_REGISTER_SUFFIX(vhost_data_log_level, data, WARNING);
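
Since rte_vhost_async_dma_unconfigure() bails out while stats.submitted != stats.completed,
the application has to drain its in-flight async copies first. A minimal sketch of that
drain step (enqueue direction only, not part of the patch) is below; it assumes a single
vhost port vid with async enqueue enabled on queue_id, and MAX_BURST is an illustrative
placeholder:

  #include <rte_mbuf.h>
  #include <rte_vhost_async.h>

  #define MAX_BURST 32	/* illustrative burst size */

  static int
  drain_and_unconfigure(int vid, uint16_t queue_id, int16_t dma_id, uint16_t vchan_id)
  {
  	struct rte_mbuf *pkts[MAX_BURST];
  	uint16_t n;

  	/* Poll completions until this virtqueue has no in-flight async
  	 * packets left, freeing the completed mbufs as they come back. */
  	while (rte_vhost_async_get_inflight(vid, queue_id) > 0) {
  		n = rte_vhost_poll_enqueue_completed(vid, queue_id, pkts,
  						     MAX_BURST, dma_id, vchan_id);
  		rte_pktmbuf_free_bulk(pkts, n);
  	}

  	/* With nothing in flight, the unconfigure call can succeed. */
  	return rte_vhost_async_dma_unconfigure(dma_id, vchan_id);
  }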