From patchwork Thu Oct 22 08:59:06 2020
X-Patchwork-Submitter: "Jiang, Cheng1"
X-Patchwork-Id: 81753
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Cheng Jiang <Cheng1.jiang@intel.com>
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, patrick.fu@intel.com, YvonneX.Yang@intel.com, Cheng Jiang
Date: Thu, 22 Oct 2020 08:59:06 +0000
Message-Id: <20201022085909.112403-2-Cheng1.jiang@intel.com>
In-Reply-To: <20201022085909.112403-1-Cheng1.jiang@intel.com>
References: <20200910064351.35513-1-Cheng1.jiang@intel.com>
 <20201022085909.112403-1-Cheng1.jiang@intel.com>
Subject: [dpdk-dev] [PATCH v10 1/4] example/vhost: add async vhost args
 parsing function
List-Id: DPDK patches and discussions

This patch adds an argument-parsing function for the async vhost driver's
CBDMA channels, a DMA initialization function, and the corresponding argument
descriptions. The meson build file is updated to fix a dependency problem.
With these arguments, a vhost device can be configured to use either CBDMA or
the CPU for enqueue operations, and can be bound to a specific CBDMA channel
to accelerate data copies.
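To make the --dmas format concrete, here is a minimal standalone sketch (not
part of the patch) of how one "txd<vid>@<PCI BDF>" token breaks down. The
helper name parse_txd_token and the address 80:04.0 are hypothetical; the
real parsing in open_ioat() below uses rte_strsplit() and
rte_pci_addr_parse() instead of plain libc.

/*
 * Illustrative only: split one "txd<vid>@<bdf>" token, e.g. "txd0@80:04.0",
 * into the vhost device id and the PCI address string.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int parse_txd_token(const char *token, long *vid, char *bdf, size_t len)
{
	const char *at = strchr(token, '@');
	char *end;

	if (strncmp(token, "txd", 3) != 0 || at == NULL)
		return -1;

	*vid = strtol(token + 3, &end, 0);	/* vhost device id after "txd" */
	if (end == token + 3 || end != at)
		return -1;

	snprintf(bdf, len, "%s", at + 1);	/* PCI address, e.g. "80:04.0" */
	return 0;
}

int main(void)
{
	long vid;
	char bdf[32];

	if (parse_txd_token("txd0@80:04.0", &vid, bdf, sizeof(bdf)) == 0)
		printf("vhost device %ld uses DMA channel %s\n", vid, bdf);
	return 0;
}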
Signed-off-by: Cheng Jiang <Cheng1.jiang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 examples/vhost/ioat.c      | 101 +++++++++++++++++++++++++++++++++++++
 examples/vhost/ioat.h      |  33 ++++++++++++
 examples/vhost/main.c      |  36 ++++++++++++-
 examples/vhost/meson.build |   5 ++
 4 files changed, 174 insertions(+), 1 deletion(-)
 create mode 100644 examples/vhost/ioat.c
 create mode 100644 examples/vhost/ioat.h

diff --git a/examples/vhost/ioat.c b/examples/vhost/ioat.c
new file mode 100644
index 000000000..8cf44cb54
--- /dev/null
+++ b/examples/vhost/ioat.c
@@ -0,0 +1,101 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2020 Intel Corporation
+ */
+#include <rte_rawdev.h>
+#include <rte_ioat_rawdev.h>
+
+#include "ioat.h"
+#include "main.h"
+
+struct dma_for_vhost dma_bind[MAX_VHOST_DEVICE];
+
+int
+open_ioat(const char *value)
+{
+	struct dma_for_vhost *dma_info = dma_bind;
+	char *input = strndup(value, strlen(value) + 1);
+	char *addrs = input;
+	char *ptrs[2];
+	char *start, *end, *substr;
+	int64_t vid, vring_id;
+	struct rte_ioat_rawdev_config config;
+	struct rte_rawdev_info info = { .dev_private = &config };
+	char name[32];
+	int dev_id;
+	int ret = 0;
+	uint16_t i = 0;
+	char *dma_arg[MAX_VHOST_DEVICE];
+	uint8_t args_nr;
+
+	while (isblank(*addrs))
+		addrs++;
+	if (*addrs == '\0') {
+		ret = -1;
+		goto out;
+	}
+
+	/* process DMA devices within bracket. */
+	addrs++;
+	substr = strtok(addrs, ";]");
+	if (!substr) {
+		ret = -1;
+		goto out;
+	}
+	args_nr = rte_strsplit(substr, strlen(substr),
+			dma_arg, MAX_VHOST_DEVICE, ',');
+	do {
+		char *arg_temp = dma_arg[i];
+		rte_strsplit(arg_temp, strlen(arg_temp), ptrs, 2, '@');
+
+		start = strstr(ptrs[0], "txd");
+		if (start == NULL) {
+			ret = -1;
+			goto out;
+		}
+
+		start += 3;
+		vid = strtol(start, &end, 0);
+		if (end == start) {
+			ret = -1;
+			goto out;
+		}
+
+		vring_id = 0 + VIRTIO_RXQ;
+		if (rte_pci_addr_parse(ptrs[1],
+				&(dma_info + vid)->dmas[vring_id].addr) < 0) {
+			ret = -1;
+			goto out;
+		}
+
+		rte_pci_device_name(&(dma_info + vid)->dmas[vring_id].addr,
+				name, sizeof(name));
+		dev_id = rte_rawdev_get_dev_id(name);
+		if (dev_id == (uint16_t)(-ENODEV) ||
+		    dev_id == (uint16_t)(-EINVAL)) {
+			ret = -1;
+			goto out;
+		}
+
+		if (rte_rawdev_info_get(dev_id, &info, sizeof(config)) < 0 ||
+		    strstr(info.driver_name, "ioat") == NULL) {
+			ret = -1;
+			goto out;
+		}
+
+		(dma_info + vid)->dmas[vring_id].dev_id = dev_id;
+		(dma_info + vid)->dmas[vring_id].is_valid = true;
+		config.ring_size = IOAT_RING_SIZE;
+		config.hdls_disable = true;
+		if (rte_rawdev_configure(dev_id, &info, sizeof(config)) < 0) {
+			ret = -1;
+			goto out;
+		}
+		rte_rawdev_start(dev_id);
+
+		dma_info->nr++;
+		i++;
+	} while (i < args_nr);
+out:
+	free(input);
+	return ret;
+}
diff --git a/examples/vhost/ioat.h b/examples/vhost/ioat.h
new file mode 100644
index 000000000..9641286a9
--- /dev/null
+++ b/examples/vhost/ioat.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2020 Intel Corporation
+ */
+
+#ifndef _IOAT_H_
+#define _IOAT_H_
+
+#include <rte_vhost.h>
+#include <rte_pci.h>
+
+#define MAX_VHOST_DEVICE 1024
+#define IOAT_RING_SIZE 4096
+
+struct dma_info {
+	struct rte_pci_addr addr;
+	uint16_t dev_id;
+	bool is_valid;
+};
+
+struct dma_for_vhost {
+	struct dma_info dmas[RTE_MAX_QUEUES_PER_PORT * 2];
+	uint16_t nr;
+};
+
+#ifdef RTE_ARCH_X86
+int open_ioat(const char *value);
+#else
+static int open_ioat(const char *value __rte_unused)
+{
+	return -1;
+}
+#endif
+#endif /* _IOAT_H_ */
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index faa482245..08182ff01 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -25,6 +25,7 @@
 #include
 #include
+#include "ioat.h"
 #include "main.h"
@@ -95,6 +96,10 @@ static int client_mode;
 static int builtin_net_driver;
+static int async_vhost_driver;
+
+static char dma_type[MAX_LONG_OPT_SZ];
+
 /* Specify timeout (in useconds) between retries on RX. */
 static uint32_t burst_rx_delay_time = BURST_RX_WAIT_US;
 /* Specify the number of retries on RX. */
@@ -181,6 +186,15 @@ struct mbuf_table lcore_tx_queue[RTE_MAX_LCORE];
 				/ US_PER_S * BURST_TX_DRAIN_US)
 #define VLAN_HLEN 4
+static inline int
+open_dma(const char *value)
+{
+	if (strncmp(dma_type, "ioat", 4) == 0)
+		return open_ioat(value);
+
+	return -1;
+}
+
 /*
  * Builds up the correct configuration for VMDQ VLAN pool map
  * according to the pool & queue limits.
@@ -446,7 +460,9 @@ us_vhost_usage(const char *prgname)
 	" --socket-file: The path of the socket file.\n"
 	" --tx-csum [0|1] disable/enable TX checksum offload.\n"
 	" --tso [0|1] disable/enable TCP segment offload.\n"
-	" --client register a vhost-user socket as client mode.\n",
+	" --client register a vhost-user socket as client mode.\n"
+	" --dma-type register dma type for your vhost async driver. For example \"ioat\" for now.\n"
+	" --dmas register dma channel for specific vhost device.\n",
 	prgname);
 }
@@ -472,6 +488,8 @@ us_vhost_parse_args(int argc, char **argv)
 		{"tso", required_argument, NULL, 0},
 		{"client", no_argument, &client_mode, 1},
 		{"builtin-net-driver", no_argument, &builtin_net_driver, 1},
+		{"dma-type", required_argument, NULL, 0},
+		{"dmas", required_argument, NULL, 0},
 		{NULL, 0, 0, 0},
 	};
@@ -614,6 +632,22 @@ us_vhost_parse_args(int argc, char **argv)
 				}
 			}
+			if (!strncmp(long_option[option_index].name,
+						"dma-type", MAX_LONG_OPT_SZ)) {
+				strcpy(dma_type, optarg);
+			}
+
+			if (!strncmp(long_option[option_index].name,
+						"dmas", MAX_LONG_OPT_SZ)) {
+				if (open_dma(optarg) == -1) {
+					RTE_LOG(INFO, VHOST_CONFIG,
+						"Wrong DMA args\n");
+					us_vhost_usage(prgname);
+					return -1;
+				}
+				async_vhost_driver = 1;
+			}
+
 			break;
 		/* Invalid option - print options. */
diff --git a/examples/vhost/meson.build b/examples/vhost/meson.build
index 872d51153..24f1f7131 100644
--- a/examples/vhost/meson.build
+++ b/examples/vhost/meson.build
@@ -14,3 +14,8 @@ allow_experimental_apis = true
 sources = files(
 	'main.c', 'virtio_net.c'
 )
+
+if dpdk_conf.has('RTE_ARCH_X86')
+	deps += 'raw_ioat'
+	sources += files('ioat.c')
+endif

From patchwork Thu Oct 22 08:59:07 2020
X-Patchwork-Submitter: "Jiang, Cheng1"
X-Patchwork-Id: 81754
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Cheng Jiang <Cheng1.jiang@intel.com>
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, patrick.fu@intel.com, YvonneX.Yang@intel.com, Cheng Jiang
Date: Thu, 22 Oct 2020 08:59:07 +0000
Message-Id: <20201022085909.112403-3-Cheng1.jiang@intel.com>
In-Reply-To: <20201022085909.112403-1-Cheng1.jiang@intel.com>
References: <20200910064351.35513-1-Cheng1.jiang@intel.com>
 <20201022085909.112403-1-Cheng1.jiang@intel.com>
Subject: [dpdk-dev] [PATCH v10 2/4] example/vhost: add support for vhost
 async data path
List-Id: DPDK patches and discussions

This patch implements the vhost DMA operation callbacks for the CBDMA PMD and
adds the vhost async data path to the vhost sample. With the callback
implementation provided for CBDMA, the vswitch can leverage IOAT to accelerate
the vhost async data path.
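The completion accounting added in this patch (the packet_tracker ring with
size_track, next_read, next_write and last_remain) can be hard to follow in
diff form. The standalone sketch below illustrates the same idea with
made-up names (struct tracker, track_submit, track_complete, TRACKER_SIZE);
it is not part of the patch and uses only libc so it can be compiled and run
on its own.

/*
 * Each enqueued packet records how many DMA segments it was split into;
 * when the device reports N completed segments, whole packets are retired
 * from the ring and any leftover segment count carries over to the next poll.
 */
#include <stdio.h>

#define TRACKER_SIZE 256			/* must be a power of two */

struct tracker {
	unsigned short size_track[TRACKER_SIZE];
	unsigned short next_read;
	unsigned short next_write;
	unsigned short last_remain;
};

static void track_submit(struct tracker *t, unsigned short nr_segs)
{
	t->size_track[t->next_write & (TRACKER_SIZE - 1)] = nr_segs;
	t->next_write++;
}

/* Convert a number of completed segments into completed packets. */
static unsigned short track_complete(struct tracker *t, unsigned short nr_segs)
{
	unsigned short done = 0;

	nr_segs += t->last_remain;
	while (t->next_read != t->next_write) {
		unsigned short sz =
			t->size_track[t->next_read & (TRACKER_SIZE - 1)];

		if (nr_segs < sz)
			break;
		nr_segs -= sz;
		t->next_read++;
		done++;
	}
	t->last_remain = nr_segs;
	return done;
}

int main(void)
{
	struct tracker t = { .next_read = 0, .next_write = 0, .last_remain = 0 };

	track_submit(&t, 2);	/* packet 0: 2 segments */
	track_submit(&t, 3);	/* packet 1: 3 segments */
	printf("%u packets done\n", track_complete(&t, 4));	/* 1 packet, 2 segs remain */
	printf("%u packets done\n", track_complete(&t, 1));	/* 1 packet */
	return 0;
}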
Signed-off-by: Cheng Jiang <Cheng1.jiang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 examples/vhost/ioat.c | 100 ++++++++++++++++++++++++++++++++++++++++++
 examples/vhost/ioat.h |  12 +++++
 examples/vhost/main.c |  57 +++++++++++++++++++++++-
 examples/vhost/main.h |   1 +
 4 files changed, 169 insertions(+), 1 deletion(-)

diff --git a/examples/vhost/ioat.c b/examples/vhost/ioat.c
index 8cf44cb54..b2c74f653 100644
--- a/examples/vhost/ioat.c
+++ b/examples/vhost/ioat.c
@@ -3,12 +3,23 @@
  */
 #include <rte_rawdev.h>
 #include <rte_ioat_rawdev.h>
+#include <sys/uio.h>
 
 #include "ioat.h"
 #include "main.h"
 
 struct dma_for_vhost dma_bind[MAX_VHOST_DEVICE];
 
+struct packet_tracker {
+	unsigned short size_track[MAX_ENQUEUED_SIZE];
+	unsigned short next_read;
+	unsigned short next_write;
+	unsigned short last_remain;
+};
+
+struct packet_tracker cb_tracker[MAX_VHOST_DEVICE];
+
+
 int
 open_ioat(const char *value)
 {
@@ -99,3 +110,92 @@ open_ioat(const char *value)
 	free(input);
 	return ret;
 }
+
+uint32_t
+ioat_transfer_data_cb(int vid, uint16_t queue_id,
+		struct rte_vhost_async_desc *descs,
+		struct rte_vhost_async_status *opaque_data, uint16_t count)
+{
+	uint32_t i_desc;
+	int dev_id = dma_bind[vid].dmas[queue_id * 2 + VIRTIO_RXQ].dev_id;
+	struct rte_vhost_iov_iter *src = NULL;
+	struct rte_vhost_iov_iter *dst = NULL;
+	unsigned long i_seg;
+	unsigned short mask = MAX_ENQUEUED_SIZE - 1;
+	unsigned short write = cb_tracker[dev_id].next_write;
+
+	if (!opaque_data) {
+		for (i_desc = 0; i_desc < count; i_desc++) {
+			src = descs[i_desc].src;
+			dst = descs[i_desc].dst;
+			i_seg = 0;
+			while (i_seg < src->nr_segs) {
+				/*
+				 * TODO: Assuming that the ring space of the
+				 * IOAT device is large enough, so there is no
+				 * error here, and the actual error handling
+				 * will be added later.
+				 */
+				rte_ioat_enqueue_copy(dev_id,
+					(uintptr_t)(src->iov[i_seg].iov_base)
+						+ src->offset,
+					(uintptr_t)(dst->iov[i_seg].iov_base)
+						+ dst->offset,
+					src->iov[i_seg].iov_len,
+					0,
+					0);
+				i_seg++;
+			}
+			write &= mask;
+			cb_tracker[dev_id].size_track[write] = i_seg;
+			write++;
+		}
+	} else {
+		/* Opaque data is not supported */
+		return -1;
+	}
+	/* ring the doorbell */
+	rte_ioat_perform_ops(dev_id);
+	cb_tracker[dev_id].next_write = write;
+	return i_desc;
+}
+
+uint32_t
+ioat_check_completed_copies_cb(int vid, uint16_t queue_id,
+		struct rte_vhost_async_status *opaque_data,
+		uint16_t max_packets)
+{
+	if (!opaque_data) {
+		uintptr_t dump[255];
+		unsigned short n_seg;
+		unsigned short read, write;
+		unsigned short nb_packet = 0;
+		unsigned short mask = MAX_ENQUEUED_SIZE - 1;
+		unsigned short i;
+		int dev_id = dma_bind[vid].dmas[queue_id * 2
+				+ VIRTIO_RXQ].dev_id;
+		n_seg = rte_ioat_completed_ops(dev_id, 255, dump, dump);
+		n_seg += cb_tracker[dev_id].last_remain;
+		if (!n_seg)
+			return 0;
+		read = cb_tracker[dev_id].next_read;
+		write = cb_tracker[dev_id].next_write;
+		for (i = 0; i < max_packets; i++) {
+			read &= mask;
+			if (read == write)
+				break;
+			if (n_seg >= cb_tracker[dev_id].size_track[read]) {
+				n_seg -= cb_tracker[dev_id].size_track[read];
+				read++;
+				nb_packet++;
+			} else {
+				break;
+			}
+		}
+		cb_tracker[dev_id].next_read = read;
+		cb_tracker[dev_id].last_remain = n_seg;
+		return nb_packet;
+	}
+	/* Opaque data is not supported */
+	return -1;
+}
diff --git a/examples/vhost/ioat.h b/examples/vhost/ioat.h
index 9641286a9..9664fcc3a 100644
--- a/examples/vhost/ioat.h
+++ b/examples/vhost/ioat.h
@@ -7,9 +7,11 @@
 
 #include <rte_vhost.h>
 #include <rte_pci.h>
+#include <rte_vhost_async.h>
 
 #define MAX_VHOST_DEVICE 1024
 #define IOAT_RING_SIZE 4096
+#define MAX_ENQUEUED_SIZE 256
 
 struct dma_info {
 	struct rte_pci_addr addr;
@@ -30,4 +32,14 @@ static int open_ioat(const char *value __rte_unused)
 	return -1;
 }
 #endif
+
+uint32_t
+ioat_transfer_data_cb(int vid, uint16_t queue_id,
+		struct rte_vhost_async_desc *descs,
+		struct rte_vhost_async_status *opaque_data, uint16_t count);
+
+uint32_t
+ioat_check_completed_copies_cb(int vid, uint16_t queue_id,
+		struct rte_vhost_async_status *opaque_data,
+		uint16_t max_packets);
 #endif /* _IOAT_H_ */
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 08182ff01..59a1aff07 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -803,9 +803,22 @@ virtio_xmit(struct vhost_dev *dst_vdev, struct vhost_dev *src_vdev,
 	    struct rte_mbuf *m)
 {
 	uint16_t ret;
+	struct rte_mbuf *m_cpl[1];
 	if (builtin_net_driver) {
 		ret = vs_enqueue_pkts(dst_vdev, VIRTIO_RXQ, &m, 1);
+	} else if (async_vhost_driver) {
+		ret = rte_vhost_submit_enqueue_burst(dst_vdev->vid, VIRTIO_RXQ,
+						&m, 1);
+
+		if (likely(ret))
+			dst_vdev->nr_async_pkts++;
+
+		while (likely(dst_vdev->nr_async_pkts)) {
+			if (rte_vhost_poll_enqueue_completed(dst_vdev->vid,
+					VIRTIO_RXQ, m_cpl, 1))
+				dst_vdev->nr_async_pkts--;
+		}
 	} else {
 		ret = rte_vhost_enqueue_burst(dst_vdev->vid, VIRTIO_RXQ, &m, 1);
 	}
@@ -1054,6 +1067,19 @@ drain_mbuf_table(struct mbuf_table *tx_q)
 	}
 }
+static __rte_always_inline void
+complete_async_pkts(struct vhost_dev *vdev, uint16_t qid)
+{
+	struct rte_mbuf *p_cpl[MAX_PKT_BURST];
+	uint16_t complete_count;
+
+	complete_count = rte_vhost_poll_enqueue_completed(vdev->vid,
+					qid, p_cpl, MAX_PKT_BURST);
+	vdev->nr_async_pkts -= complete_count;
+	if (complete_count)
+		free_pkts(p_cpl, complete_count);
+}
+
 static __rte_always_inline void
 drain_eth_rx(struct vhost_dev *vdev)
 {
@@ -1062,6 +1088,10 @@ drain_eth_rx(struct vhost_dev *vdev)
 	rx_count = rte_eth_rx_burst(ports[0], vdev->vmdq_rx_q,
 				    pkts, MAX_PKT_BURST);
+
+	while (likely(vdev->nr_async_pkts))
+		complete_async_pkts(vdev, VIRTIO_RXQ);
+
 	if (!rx_count)
 		return;
@@ -1086,16 +1116,22 @@ drain_eth_rx(struct vhost_dev *vdev)
 	if (builtin_net_driver) {
 		enqueue_count = vs_enqueue_pkts(vdev, VIRTIO_RXQ,
 						pkts, rx_count);
+	} else if (async_vhost_driver) {
+		enqueue_count = rte_vhost_submit_enqueue_burst(vdev->vid,
+					VIRTIO_RXQ, pkts, rx_count);
+		vdev->nr_async_pkts += enqueue_count;
 	} else {
 		enqueue_count = rte_vhost_enqueue_burst(vdev->vid, VIRTIO_RXQ,
 						pkts, rx_count);
 	}
+
 	if (enable_stats) {
 		rte_atomic64_add(&vdev->stats.rx_total_atomic, rx_count);
 		rte_atomic64_add(&vdev->stats.rx_atomic, enqueue_count);
 	}
-	free_pkts(pkts, rx_count);
+	if (!async_vhost_driver)
+		free_pkts(pkts, rx_count);
 }
@@ -1242,6 +1278,9 @@ destroy_device(int vid)
 		"(%d) device has been removed from data core\n",
 		vdev->vid);
+	if (async_vhost_driver)
+		rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ);
+
 	rte_free(vdev);
 }
@@ -1256,6 +1295,12 @@ new_device(int vid)
 	uint32_t device_num_min = num_devices;
 	struct vhost_dev *vdev;
+	struct rte_vhost_async_channel_ops channel_ops = {
+		.transfer_data = ioat_transfer_data_cb,
+		.check_completed_copies = ioat_check_completed_copies_cb
+	};
+	struct rte_vhost_async_features f;
+
 	vdev = rte_zmalloc("vhost device", sizeof(*vdev), RTE_CACHE_LINE_SIZE);
 	if (vdev == NULL) {
 		RTE_LOG(INFO, VHOST_DATA,
@@ -1296,6 +1341,13 @@ new_device(int vid)
 		"(%d) device has been added to data core %d\n",
 		vid, vdev->coreid);
+	if (async_vhost_driver) {
+		f.async_inorder = 1;
+		f.async_threshold = 256;
+		return rte_vhost_async_channel_register(vid, VIRTIO_RXQ,
+			f.intval, &channel_ops);
+	}
+
 	return 0;
 }
@@ -1534,6 +1586,9 @@ main(int argc, char *argv[])
 	/* Register vhost user driver to handle vhost messages. */
 	for (i = 0; i < nb_sockets; i++) {
 		char *file = socket_files + i * PATH_MAX;
+		if (async_vhost_driver)
+			flags = flags | RTE_VHOST_USER_ASYNC_COPY;
+
 		ret = rte_vhost_driver_register(file, flags);
 		if (ret != 0) {
 			unregister_drivers(i);
diff --git a/examples/vhost/main.h b/examples/vhost/main.h
index 7cba0edbf..4317b6ae8 100644
--- a/examples/vhost/main.h
+++ b/examples/vhost/main.h
@@ -51,6 +51,7 @@ struct vhost_dev {
 	uint64_t features;
 	size_t hdr_len;
 	uint16_t nr_vrings;
+	uint16_t nr_async_pkts;
 	struct rte_vhost_memory *mem;
 	struct device_statistics stats;
 	TAILQ_ENTRY(vhost_dev) global_vdev_entry;

From patchwork Thu Oct 22 08:59:08 2020
X-Patchwork-Submitter: "Jiang, Cheng1"
X-Patchwork-Id: 81755
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Cheng Jiang <Cheng1.jiang@intel.com>
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, patrick.fu@intel.com, YvonneX.Yang@intel.com, Cheng Jiang
Date: Thu, 22 Oct 2020 08:59:08 +0000
Message-Id: <20201022085909.112403-4-Cheng1.jiang@intel.com>
In-Reply-To: <20201022085909.112403-1-Cheng1.jiang@intel.com>
References: <20200910064351.35513-1-Cheng1.jiang@intel.com>
 <20201022085909.112403-1-Cheng1.jiang@intel.com>
Subject: [dpdk-dev] [PATCH v10 3/4] doc: update vhost sample doc for vhost
 async data path
List-Id: DPDK patches and discussions

Add information about the vhost async driver arguments for the vhost async
data path to the vhost sample application documentation.

Signed-off-by: Cheng Jiang <Cheng1.jiang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 doc/guides/sample_app_ug/vhost.rst | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/doc/guides/sample_app_ug/vhost.rst b/doc/guides/sample_app_ug/vhost.rst
index b7ed4f8bd..0f4f70945 100644
--- a/doc/guides/sample_app_ug/vhost.rst
+++ b/doc/guides/sample_app_ug/vhost.rst
@@ -162,6 +162,17 @@ enabled and cannot be disabled.
 A very simple vhost-user net driver which demonstrates how to use the generic
 vhost APIs will be used when this option is given. It is disabled by default.
+**--dma-type**
+This parameter is used to specify the DMA type for the async vhost-user net
+driver, which demonstrates how to use the async vhost APIs. It is used in combination with --dmas.
+
+**--dmas**
+This parameter is used to specify the assigned DMA device of a vhost device.
+The async vhost-user net driver will be used if --dmas is set. For example,
+--dmas [txd0@00:04.0,txd1@00:04.1] means using DMA channel 00:04.0 for vhost
+device 0 enqueue operations and DMA channel 00:04.1 for vhost device 1
+enqueue operations.
+
 Common Issues
 -------------

From patchwork Thu Oct 22 08:59:09 2020
X-Patchwork-Submitter: "Jiang, Cheng1"
X-Patchwork-Id: 81756
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Cheng Jiang <Cheng1.jiang@intel.com>
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, patrick.fu@intel.com, YvonneX.Yang@intel.com, Cheng Jiang
Date: Thu, 22 Oct 2020 08:59:09 +0000
Message-Id: <20201022085909.112403-5-Cheng1.jiang@intel.com>
In-Reply-To: <20201022085909.112403-1-Cheng1.jiang@intel.com>
References: <20200910064351.35513-1-Cheng1.jiang@intel.com>
 <20201022085909.112403-1-Cheng1.jiang@intel.com>
Subject: [dpdk-dev] [PATCH v10 4/4] doc: update release notes for vhost sample
List-Id: DPDK patches and discussions

Add release notes for vhost async data path support in the vhost sample.
Signed-off-by: Cheng Jiang <Cheng1.jiang@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 doc/guides/rel_notes/release_20_11.rst | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 0d45b5003..f2db063e0 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -345,6 +345,12 @@ New Features
   * Replaced ``--scalar`` command-line option with ``--alg=``, to allow the
     user to select the desired classify method.
 
+* **Updated vhost sample application.**
+
+  Added support for the vhost asynchronous APIs, which demonstrates how the
+  application can leverage an IOAT DMA channel with the vhost asynchronous APIs.
+  See the :doc:`../sample_app_ug/vhost` for more details.
+
 Removed Items
 -------------
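As a closing illustration, the sketch below condenses the async vhost flow
that this series adds to the sample. It is not part of the patches:
setup_async_device() and async_enqueue() are hypothetical helper names,
BURST_SZ stands in for the sample's MAX_PKT_BURST, the rte_vhost_async.h
header name is assumed for this DPDK release, and error handling is omitted.
The rte_vhost_* calls and the two ioat_*_cb callbacks are the ones used in
the patches above.

#include <rte_mbuf.h>
#include <rte_vhost_async.h>	/* async vhost API; header name assumed */

#include "ioat.h"		/* ioat_transfer_data_cb, ioat_check_completed_copies_cb */
#include "main.h"		/* sample definitions (VIRTIO_RXQ comes from the sample/vhost headers) */

#define BURST_SZ 32		/* local stand-in for the sample's MAX_PKT_BURST */

static struct rte_vhost_async_channel_ops channel_ops = {
	.transfer_data = ioat_transfer_data_cb,
	.check_completed_copies = ioat_check_completed_copies_cb
};

/* Called from new_device(): bind the device's RX queue to the DMA callbacks. */
static int
setup_async_device(int vid)
{
	struct rte_vhost_async_features f;

	f.async_inorder = 1;
	f.async_threshold = 256;	/* short packets keep using CPU copy */
	return rte_vhost_async_channel_register(vid, VIRTIO_RXQ,
						f.intval, &channel_ops);
}

/* Enqueue a burst through the async path and reclaim completed mbufs. */
static void
async_enqueue(int vid, struct rte_mbuf **pkts, uint16_t count)
{
	struct rte_mbuf *cpl[BURST_SZ];
	uint16_t n_done, i;

	rte_vhost_submit_enqueue_burst(vid, VIRTIO_RXQ, pkts, count);

	n_done = rte_vhost_poll_enqueue_completed(vid, VIRTIO_RXQ,
						  cpl, BURST_SZ);
	for (i = 0; i < n_done; i++)
		rte_pktmbuf_free(cpl[i]);
}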