Patch Detail
GET: show a patch.
PATCH: update a patch.
PUT: update a patch.
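The GET response below is plain JSON, so it can be consumed with any HTTP client. A minimal sketch using only Python's standard library follows; the field names and values are copied verbatim from the patch record below, with a trimmed sample standing in for the full response body.

```python
import json

# Trimmed sample of the JSON body shown below; field names and values
# are taken verbatim from this patch's record (id 81732).
response_text = """
{
  "id": 81732,
  "name": "[v9,2/4] example/vhost: add support for vhost async data path",
  "state": "superseded",
  "archived": true,
  "submitter": {"name": "Jiang, Cheng1", "email": "Cheng1.jiang@intel.com"},
  "series": [{"id": 13190, "name": "add async data path in vhost sample", "version": 9}]
}
"""

patch = json.loads(response_text)

# "superseded" means a newer revision of the series replaced this patch.
summary = "{} [{}] by {}".format(
    patch["name"], patch["state"], patch["submitter"]["name"])
print(summary)
print("series v{}".format(patch["series"][0]["version"]))
```

To fetch the live record instead of the sample, you would issue the GET shown below, e.g. with `urllib.request.urlopen("http://patchwork.dpdk.org/api/patches/81732/?format=api")`, and pass the body to `json.loads` the same way.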
GET /api/patches/81732/?format=api
{ "id": 81732, "url": "
http://patchwork.dpdk.org/api/patches/81732/?format=api", "web_url": "http://patchwork.dpdk.org/project/dpdk/patch/20201022064625.90974-3-Cheng1.jiang@intel.com/", "project": { "id": 1, "url": "http://patchwork.dpdk.org/api/projects/1/?format=api", "name": "DPDK", "link_name": "dpdk", "list_id": "dev.dpdk.org", "list_email": "dev@dpdk.org", "web_url": "http://core.dpdk.org", "scm_url": "git://dpdk.org/dpdk", "webscm_url": "http://git.dpdk.org/dpdk", "list_archive_url": "https://inbox.dpdk.org/dev", "list_archive_url_format": "https://inbox.dpdk.org/dev/{}", "commit_url_format": "" }, "msgid": "<20201022064625.90974-3-Cheng1.jiang@intel.com>", "list_archive_url": "https://inbox.dpdk.org/dev/20201022064625.90974-3-Cheng1.jiang@intel.com", "date": "2020-10-22T06:46:23", "name": "[v9,2/4] example/vhost: add support for vhost async data path", "commit_ref": null, "pull_url": null, "state": "superseded", "archived": true, "hash": "59b0bb70794ec3d2a8575f8f229d276cbb80f0fb", "submitter": { "id": 1530, "url": "http://patchwork.dpdk.org/api/people/1530/?format=api", "name": "Jiang, Cheng1", "email": "Cheng1.jiang@intel.com" }, "delegate": { "id": 2642, "url": "http://patchwork.dpdk.org/api/users/2642/?format=api", "username": "mcoquelin", "first_name": "Maxime", "last_name": "Coquelin", "email": "maxime.coquelin@redhat.com" }, "mbox": "http://patchwork.dpdk.org/project/dpdk/patch/20201022064625.90974-3-Cheng1.jiang@intel.com/mbox/", "series": [ { "id": 13190, "url": "http://patchwork.dpdk.org/api/series/13190/?format=api", "web_url": "http://patchwork.dpdk.org/project/dpdk/list/?series=13190", "date": "2020-10-22T06:46:21", "name": "add async data path in vhost sample", "version": 9, "mbox": "http://patchwork.dpdk.org/series/13190/mbox/" } ], "comments": "http://patchwork.dpdk.org/api/patches/81732/comments/", "check": "success", "checks": "http://patchwork.dpdk.org/api/patches/81732/checks/", "tags": {}, "related": [], "headers": { "Return-Path": "<dev-bounces@dpdk.org>", 
"X-Original-To": "patchwork@inbox.dpdk.org", "Delivered-To": "patchwork@inbox.dpdk.org", "Received": [ "from dpdk.org (dpdk.org [92.243.14.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id 52443A04DD;\n\tThu, 22 Oct 2020 09:00:08 +0200 (CEST)", "from [92.243.14.124] (localhost [127.0.0.1])\n\tby dpdk.org (Postfix) with ESMTP id ED6A37CC8;\n\tThu, 22 Oct 2020 08:59:49 +0200 (CEST)", "from mga12.intel.com (mga12.intel.com [192.55.52.136])\n by dpdk.org (Postfix) with ESMTP id 4733669C8\n for <dev@dpdk.org>; Thu, 22 Oct 2020 08:59:37 +0200 (CEST)", "from orsmga007.jf.intel.com ([10.7.209.58])\n by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;\n 21 Oct 2020 23:59:35 -0700", "from dpdk_jiangcheng.sh.intel.com ([10.67.119.112])\n by orsmga007.jf.intel.com with ESMTP; 21 Oct 2020 23:59:33 -0700" ], "IronPort-SDR": [ "\n M26MpAA4tfM+Cbehj/7rpMDfgU6rDr91p6joldsbmGRnAn/Kp30Vx3fqUC8oYb6yPOofwHZSgb\n sTecsVoQ50VA==", "\n SVJKLcMMvSiLUubcdIPAED0kZL8GekPpcxBGh+mtS9d0NMF/xhuTPrdw9wct8NkEHzOclJuKv7\n PGopBmLUwXtQ==" ], "X-IronPort-AV": [ "E=McAfee;i=\"6000,8403,9781\"; a=\"146774888\"", "E=Sophos;i=\"5.77,403,1596524400\"; d=\"scan'208\";a=\"146774888\"", "E=Sophos;i=\"5.77,403,1596524400\"; d=\"scan'208\";a=\"359772093\"" ], "X-Amp-Result": "SKIPPED(no attachment in message)", "X-Amp-File-Uploaded": "False", "X-ExtLoop1": "1", "From": "Cheng Jiang <Cheng1.jiang@intel.com>", "To": "maxime.coquelin@redhat.com,\n\tchenbo.xia@intel.com", "Cc": "dev@dpdk.org, patrick.fu@intel.com, YvonneX.Yang@intel.com,\n Cheng Jiang <Cheng1.jiang@intel.com>", "Date": "Thu, 22 Oct 2020 06:46:23 +0000", "Message-Id": "<20201022064625.90974-3-Cheng1.jiang@intel.com>", "X-Mailer": "git-send-email 2.27.0", "In-Reply-To": "<20201022064625.90974-1-Cheng1.jiang@intel.com>", "References": "<20201021065044.31839-1-Cheng1.jiang@intel.com>\n <20201022064625.90974-1-Cheng1.jiang@intel.com>", "MIME-Version": "1.0", "Content-Transfer-Encoding": "8bit", "Subject": "[dpdk-dev] [PATCH v9 2/4] 
example/vhost: add support for vhost\n\tasync data path", "X-BeenThere": "dev@dpdk.org", "X-Mailman-Version": "2.1.15", "Precedence": "list", "List-Id": "DPDK patches and discussions <dev.dpdk.org>", "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n <mailto:dev-request@dpdk.org?subject=unsubscribe>", "List-Archive": "<http://mails.dpdk.org/archives/dev/>", "List-Post": "<mailto:dev@dpdk.org>", "List-Help": "<mailto:dev-request@dpdk.org?subject=help>", "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n <mailto:dev-request@dpdk.org?subject=subscribe>", "Errors-To": "dev-bounces@dpdk.org", "Sender": "\"dev\" <dev-bounces@dpdk.org>" }, "content": "This patch is to implement vhost DMA operation callbacks for CBDMA\nPMD and add vhost async data-path in vhost sample. With providing\ncallback implementation for CBDMA, vswitch can leverage IOAT to\naccelerate vhost async data-path.\n\nSigned-off-by: Cheng Jiang <Cheng1.jiang@intel.com>\n---\n examples/vhost/ioat.c | 99 +++++++++++++++++++++++++++++++++++++++++++\n examples/vhost/ioat.h | 12 ++++++\n examples/vhost/main.c | 57 ++++++++++++++++++++++++-\n examples/vhost/main.h | 1 +\n 4 files changed, 168 insertions(+), 1 deletion(-)", "diff": "diff --git a/examples/vhost/ioat.c b/examples/vhost/ioat.c\nindex 52767d599..4c3ceb990 100644\n--- a/examples/vhost/ioat.c\n+++ b/examples/vhost/ioat.c\n@@ -3,10 +3,20 @@\n */\n #include <rte_rawdev.h>\n #include <rte_ioat_rawdev.h>\n+#include <sys/uio.h>\n \n #include \"ioat.h\"\n #include \"main.h\"\n \n+struct packet_tracker {\n+\tunsigned short size_track[MAX_ENQUEUED_SIZE];\n+\tunsigned short next_read;\n+\tunsigned short next_write;\n+\tunsigned short last_remain;\n+};\n+\n+struct packet_tracker cb_tracker[MAX_VHOST_DEVICE];\n+\n int\n open_ioat(const char *value)\n {\n@@ -97,3 +107,92 @@ open_ioat(const char *value)\n \tfree(input);\n \treturn ret;\n }\n+\n+uint32_t\n+ioat_transfer_data_cb(int vid, uint16_t queue_id,\n+\t\tstruct rte_vhost_async_desc 
*descs,\n+\t\tstruct rte_vhost_async_status *opaque_data, uint16_t count)\n+{\n+\tuint32_t i_desc;\n+\tint dev_id = dma_bind[vid].dmas[queue_id * 2 + VIRTIO_RXQ].dev_id;\n+\tstruct rte_vhost_iov_iter *src = NULL;\n+\tstruct rte_vhost_iov_iter *dst = NULL;\n+\tunsigned long i_seg;\n+\tunsigned short mask = MAX_ENQUEUED_SIZE - 1;\n+\tunsigned short write = cb_tracker[dev_id].next_write;\n+\n+\tif (!opaque_data) {\n+\t\tfor (i_desc = 0; i_desc < count; i_desc++) {\n+\t\t\tsrc = descs[i_desc].src;\n+\t\t\tdst = descs[i_desc].dst;\n+\t\t\ti_seg = 0;\n+\t\t\twhile (i_seg < src->nr_segs) {\n+\t\t\t\t/*\n+\t\t\t\t * TODO: Assuming that the ring space of the\n+\t\t\t\t * IOAT device is large enough, so there is no\n+\t\t\t\t * error here, and the actual error handling\n+\t\t\t\t * will be added later.\n+\t\t\t\t */\n+\t\t\t\trte_ioat_enqueue_copy(dev_id,\n+\t\t\t\t\t(uintptr_t)(src->iov[i_seg].iov_base)\n+\t\t\t\t\t\t+ src->offset,\n+\t\t\t\t\t(uintptr_t)(dst->iov[i_seg].iov_base)\n+\t\t\t\t\t\t+ dst->offset,\n+\t\t\t\t\tsrc->iov[i_seg].iov_len,\n+\t\t\t\t\t0,\n+\t\t\t\t\t0);\n+\t\t\t\ti_seg++;\n+\t\t\t}\n+\t\t\twrite &= mask;\n+\t\t\tcb_tracker[dev_id].size_track[write] = i_seg;\n+\t\t\twrite++;\n+\t\t}\n+\t} else {\n+\t\t/* Opaque data is not supported */\n+\t\treturn -1;\n+\t}\n+\t/* ring the doorbell */\n+\trte_ioat_perform_ops(dev_id);\n+\tcb_tracker[dev_id].next_write = write;\n+\treturn i_desc;\n+}\n+\n+uint32_t\n+ioat_check_completed_copies_cb(int vid, uint16_t queue_id,\n+\t\tstruct rte_vhost_async_status *opaque_data,\n+\t\tuint16_t max_packets)\n+{\n+\tif (!opaque_data) {\n+\t\tuintptr_t dump[255];\n+\t\tunsigned short n_seg;\n+\t\tunsigned short read, write;\n+\t\tunsigned short nb_packet = 0;\n+\t\tunsigned short mask = MAX_ENQUEUED_SIZE - 1;\n+\t\tunsigned short i;\n+\t\tint dev_id = dma_bind[vid].dmas[queue_id * 2\n+\t\t\t\t+ VIRTIO_RXQ].dev_id;\n+\t\tn_seg = rte_ioat_completed_ops(dev_id, 255, dump, dump);\n+\t\tn_seg += 
cb_tracker[dev_id].last_remain;\n+\t\tif (!n_seg)\n+\t\t\treturn 0;\n+\t\tread = cb_tracker[dev_id].next_read;\n+\t\twrite = cb_tracker[dev_id].next_write;\n+\t\tfor (i = 0; i < max_packets; i++) {\n+\t\t\tread &= mask;\n+\t\t\tif (read == write)\n+\t\t\t\tbreak;\n+\t\t\tif (n_seg >= cb_tracker[dev_id].size_track[read]) {\n+\t\t\t\tn_seg -= cb_tracker[dev_id].size_track[read];\n+\t\t\t\tread++;\n+\t\t\t\tnb_packet++;\n+\t\t\t} else {\n+\t\t\t\tbreak;\n+\t\t\t}\n+\t\t}\n+\t\tcb_tracker[dev_id].next_read = read;\n+\t\tcb_tracker[dev_id].last_remain = n_seg;\n+\t\treturn nb_packet;\n+\t}\n+\t/* Opaque data is not supported */\n+\treturn -1;\n+}\ndiff --git a/examples/vhost/ioat.h b/examples/vhost/ioat.h\nindex 02c1a8d35..c3bb8145a 100644\n--- a/examples/vhost/ioat.h\n+++ b/examples/vhost/ioat.h\n@@ -7,9 +7,11 @@\n \n #include <rte_vhost.h>\n #include <rte_pci.h>\n+#include <rte_vhost_async.h>\n \n #define MAX_VHOST_DEVICE 1024\n #define IOAT_RING_SIZE 4096\n+#define MAX_ENQUEUED_SIZE 256\n \n struct dma_info {\n \tstruct rte_pci_addr addr;\n@@ -32,4 +34,14 @@ static int open_ioat(const char *value __rte_unused)\n \treturn -1;\n }\n #endif\n+\n+uint32_t\n+ioat_transfer_data_cb(int vid, uint16_t queue_id,\n+\t\tstruct rte_vhost_async_desc *descs,\n+\t\tstruct rte_vhost_async_status *opaque_data, uint16_t count);\n+\n+uint32_t\n+ioat_check_completed_copies_cb(int vid, uint16_t queue_id,\n+\t\tstruct rte_vhost_async_status *opaque_data,\n+\t\tuint16_t max_packets);\n #endif /* _IOAT_H_ */\ndiff --git a/examples/vhost/main.c b/examples/vhost/main.c\nindex 08182ff01..59a1aff07 100644\n--- a/examples/vhost/main.c\n+++ b/examples/vhost/main.c\n@@ -803,9 +803,22 @@ virtio_xmit(struct vhost_dev *dst_vdev, struct vhost_dev *src_vdev,\n \t struct rte_mbuf *m)\n {\n \tuint16_t ret;\n+\tstruct rte_mbuf *m_cpl[1];\n \n \tif (builtin_net_driver) {\n \t\tret = vs_enqueue_pkts(dst_vdev, VIRTIO_RXQ, &m, 1);\n+\t} else if (async_vhost_driver) {\n+\t\tret = 
rte_vhost_submit_enqueue_burst(dst_vdev->vid, VIRTIO_RXQ,\n+\t\t\t\t\t\t&m, 1);\n+\n+\t\tif (likely(ret))\n+\t\t\tdst_vdev->nr_async_pkts++;\n+\n+\t\twhile (likely(dst_vdev->nr_async_pkts)) {\n+\t\t\tif (rte_vhost_poll_enqueue_completed(dst_vdev->vid,\n+\t\t\t\t\tVIRTIO_RXQ, m_cpl, 1))\n+\t\t\t\tdst_vdev->nr_async_pkts--;\n+\t\t}\n \t} else {\n \t\tret = rte_vhost_enqueue_burst(dst_vdev->vid, VIRTIO_RXQ, &m, 1);\n \t}\n@@ -1054,6 +1067,19 @@ drain_mbuf_table(struct mbuf_table *tx_q)\n \t}\n }\n \n+static __rte_always_inline void\n+complete_async_pkts(struct vhost_dev *vdev, uint16_t qid)\n+{\n+\tstruct rte_mbuf *p_cpl[MAX_PKT_BURST];\n+\tuint16_t complete_count;\n+\n+\tcomplete_count = rte_vhost_poll_enqueue_completed(vdev->vid,\n+\t\t\t\t\t\tqid, p_cpl, MAX_PKT_BURST);\n+\tvdev->nr_async_pkts -= complete_count;\n+\tif (complete_count)\n+\t\tfree_pkts(p_cpl, complete_count);\n+}\n+\n static __rte_always_inline void\n drain_eth_rx(struct vhost_dev *vdev)\n {\n@@ -1062,6 +1088,10 @@ drain_eth_rx(struct vhost_dev *vdev)\n \n \trx_count = rte_eth_rx_burst(ports[0], vdev->vmdq_rx_q,\n \t\t\t\t pkts, MAX_PKT_BURST);\n+\n+\twhile (likely(vdev->nr_async_pkts))\n+\t\tcomplete_async_pkts(vdev, VIRTIO_RXQ);\n+\n \tif (!rx_count)\n \t\treturn;\n \n@@ -1086,16 +1116,22 @@ drain_eth_rx(struct vhost_dev *vdev)\n \tif (builtin_net_driver) {\n \t\tenqueue_count = vs_enqueue_pkts(vdev, VIRTIO_RXQ,\n \t\t\t\t\t\tpkts, rx_count);\n+\t} else if (async_vhost_driver) {\n+\t\tenqueue_count = rte_vhost_submit_enqueue_burst(vdev->vid,\n+\t\t\t\t\tVIRTIO_RXQ, pkts, rx_count);\n+\t\tvdev->nr_async_pkts += enqueue_count;\n \t} else {\n \t\tenqueue_count = rte_vhost_enqueue_burst(vdev->vid, VIRTIO_RXQ,\n \t\t\t\t\t\tpkts, rx_count);\n \t}\n+\n \tif (enable_stats) {\n \t\trte_atomic64_add(&vdev->stats.rx_total_atomic, rx_count);\n \t\trte_atomic64_add(&vdev->stats.rx_atomic, enqueue_count);\n \t}\n \n-\tfree_pkts(pkts, rx_count);\n+\tif (!async_vhost_driver)\n+\t\tfree_pkts(pkts, rx_count);\n 
}\n \n static __rte_always_inline void\n@@ -1242,6 +1278,9 @@ destroy_device(int vid)\n \t\t\"(%d) device has been removed from data core\\n\",\n \t\tvdev->vid);\n \n+\tif (async_vhost_driver)\n+\t\trte_vhost_async_channel_unregister(vid, VIRTIO_RXQ);\n+\n \trte_free(vdev);\n }\n \n@@ -1256,6 +1295,12 @@ new_device(int vid)\n \tuint32_t device_num_min = num_devices;\n \tstruct vhost_dev *vdev;\n \n+\tstruct rte_vhost_async_channel_ops channel_ops = {\n+\t\t.transfer_data = ioat_transfer_data_cb,\n+\t\t.check_completed_copies = ioat_check_completed_copies_cb\n+\t};\n+\tstruct rte_vhost_async_features f;\n+\n \tvdev = rte_zmalloc(\"vhost device\", sizeof(*vdev), RTE_CACHE_LINE_SIZE);\n \tif (vdev == NULL) {\n \t\tRTE_LOG(INFO, VHOST_DATA,\n@@ -1296,6 +1341,13 @@ new_device(int vid)\n \t\t\"(%d) device has been added to data core %d\\n\",\n \t\tvid, vdev->coreid);\n \n+\tif (async_vhost_driver) {\n+\t\tf.async_inorder = 1;\n+\t\tf.async_threshold = 256;\n+\t\treturn rte_vhost_async_channel_register(vid, VIRTIO_RXQ,\n+\t\t\tf.intval, &channel_ops);\n+\t}\n+\n \treturn 0;\n }\n \n@@ -1534,6 +1586,9 @@ main(int argc, char *argv[])\n \t/* Register vhost user driver to handle vhost messages. */\n \tfor (i = 0; i < nb_sockets; i++) {\n \t\tchar *file = socket_files + i * PATH_MAX;\n+\t\tif (async_vhost_driver)\n+\t\t\tflags = flags | RTE_VHOST_USER_ASYNC_COPY;\n+\n \t\tret = rte_vhost_driver_register(file, flags);\n \t\tif (ret != 0) {\n \t\t\tunregister_drivers(i);\ndiff --git a/examples/vhost/main.h b/examples/vhost/main.h\nindex 7cba0edbf..4317b6ae8 100644\n--- a/examples/vhost/main.h\n+++ b/examples/vhost/main.h\n@@ -51,6 +51,7 @@ struct vhost_dev {\n \tuint64_t features;\n \tsize_t hdr_len;\n \tuint16_t nr_vrings;\n+\tuint16_t nr_async_pkts;\n \tstruct rte_vhost_memory *mem;\n \tstruct device_statistics stats;\n \tTAILQ_ENTRY(vhost_dev) global_vdev_entry;\n", "prefixes": [ "v9", "2/4" ] }{ "id": 81732, "url": "