From patchwork Mon Sep 21 06:48:33 2020
X-Patchwork-Submitter: Marvin Liu
X-Patchwork-Id: 78145
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Marvin Liu
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com, zhihong.wang@intel.com
Cc: dev@dpdk.org, Marvin Liu
Date: Mon, 21 Sep 2020 14:48:33 +0800
Message-Id: <20200921064837.15957-2-yong.liu@intel.com>
In-Reply-To: <20200921064837.15957-1-yong.liu@intel.com>
References: <20200819032414.51430-2-yong.liu@intel.com>
	<20200921064837.15957-1-yong.liu@intel.com>
Subject: [dpdk-dev]
[PATCH v2 1/5] vhost: add vectorized data path
List-Id: DPDK patches and discussions <dev.dpdk.org>

Packed ring operations are split into batch and single functions for
performance reasons. Ring operations in the batch function can be
accelerated by SIMD instructions such as AVX512, so introduce a
vectorized parameter in vhost. The vectorized data path is selected
when the platform and ring format match its requirements; otherwise
vhost falls back to the original data path.

Signed-off-by: Marvin Liu

diff --git a/doc/guides/nics/vhost.rst b/doc/guides/nics/vhost.rst
index d36f3120b2..efdaf4de09 100644
--- a/doc/guides/nics/vhost.rst
+++ b/doc/guides/nics/vhost.rst
@@ -64,6 +64,11 @@ The user can specify below arguments in `--vdev` option.
     It is used to enable external buffer support in vhost library.
     (Default: 0 (disabled))
 
+#. ``vectorized``:
+
+    It is used to enable vectorized data path support in vhost library.
+    (Default: 0 (disabled))
+
 Vhost PMD event handling
 ------------------------
 
diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index b892eec67a..d5d421441c 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -162,6 +162,18 @@ The following is an overview of some key Vhost API functions:
 
     It is disabled by default.
 
+  - ``RTE_VHOST_USER_VECTORIZED``
+
+    Vectorized data path will be used when this flag is set. When the packed
+    ring is enabled, available descriptors are stored by the frontend driver
+    in sequence, so SIMD instructions like AVX can handle multiple
+    descriptors simultaneously and accelerate the throughput of ring
+    operations.
+
+    * Only the packed ring has a vectorized data path.
+
+    * Falls back to the normal data path if vectorization is not supported.
+
+    It is disabled by default.
+
 * ``rte_vhost_driver_set_features(path, features)``
 
   This function sets the feature bits the vhost-user driver supports. The
 
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index e55278af69..2ba5a2a076 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -35,6 +35,7 @@ enum {VIRTIO_RXQ, VIRTIO_TXQ, VIRTIO_QNUM};
 #define ETH_VHOST_VIRTIO_NET_F_HOST_TSO "tso"
 #define ETH_VHOST_LINEAR_BUF "linear-buffer"
 #define ETH_VHOST_EXT_BUF "ext-buffer"
+#define ETH_VHOST_VECTORIZED "vectorized"
 #define VHOST_MAX_PKT_BURST 32
 
 static const char *valid_arguments[] = {
@@ -47,6 +48,7 @@ static const char *valid_arguments[] = {
 	ETH_VHOST_VIRTIO_NET_F_HOST_TSO,
 	ETH_VHOST_LINEAR_BUF,
 	ETH_VHOST_EXT_BUF,
+	ETH_VHOST_VECTORIZED,
 	NULL
 };
 
@@ -1507,6 +1509,7 @@ rte_pmd_vhost_probe(struct rte_vdev_device *dev)
 	int tso = 0;
 	int linear_buf = 0;
 	int ext_buf = 0;
+	int vectorized = 0;
 	struct rte_eth_dev *eth_dev;
 	const char *name = rte_vdev_device_name(dev);
 
@@ -1626,6 +1629,17 @@ rte_pmd_vhost_probe(struct rte_vdev_device *dev)
 			flags |= RTE_VHOST_USER_EXTBUF_SUPPORT;
 	}
 
+	if (rte_kvargs_count(kvlist, ETH_VHOST_VECTORIZED) == 1) {
+		ret = rte_kvargs_process(kvlist,
+				ETH_VHOST_VECTORIZED,
+				&open_int, &vectorized);
+		if (ret < 0)
+			goto out_free;
+
+		if (vectorized == 1)
+			flags |= RTE_VHOST_USER_VECTORIZED;
+	}
+
 	if (dev->device.numa_node == SOCKET_ID_ANY)
 		dev->device.numa_node = rte_socket_id();
 
@@ -1679,4 +1693,5 @@ RTE_PMD_REGISTER_PARAM_STRING(net_vhost,
 	"postcopy-support=<0|1> "
 	"tso=<0|1> "
 	"linear-buffer=<0|1> "
-	"ext-buffer=<0|1>");
+	"ext-buffer=<0|1> "
+	"vectorized=<0|1>");
diff --git a/lib/librte_vhost/rte_vhost.h b/lib/librte_vhost/rte_vhost.h
index a94c84134d..c7f946c6c1 100644
--- a/lib/librte_vhost/rte_vhost.h
+++ b/lib/librte_vhost/rte_vhost.h
@@ -36,6 +36,7 @@ extern "C" {
 /* support only linear buffers (no chained mbufs) */
 #define RTE_VHOST_USER_LINEARBUF_SUPPORT	(1ULL << 6)
 #define RTE_VHOST_USER_ASYNC_COPY	(1ULL << 7)
+#define RTE_VHOST_USER_VECTORIZED	(1ULL << 8)
 
 /* Features. */
 #ifndef VIRTIO_NET_F_GUEST_ANNOUNCE
diff --git a/lib/librte_vhost/socket.c b/lib/librte_vhost/socket.c
index 73e1dca95e..cc11244693 100644
--- a/lib/librte_vhost/socket.c
+++ b/lib/librte_vhost/socket.c
@@ -43,6 +43,7 @@ struct vhost_user_socket {
 	bool extbuf;
 	bool linearbuf;
 	bool async_copy;
+	bool vectorized;
 
 	/*
 	 * The "supported_features" indicates the feature bits the
@@ -245,6 +246,9 @@ vhost_user_add_connection(int fd, struct vhost_user_socket *vsocket)
 		dev->async_copy = 1;
 	}
 
+	if (vsocket->vectorized)
+		vhost_enable_vectorized(vid);
+
 	VHOST_LOG_CONFIG(INFO, "new device, handle is %d\n", vid);
 
 	if (vsocket->notify_ops->new_connection) {
@@ -881,6 +885,7 @@ rte_vhost_driver_register(const char *path, uint64_t flags)
 	vsocket->dequeue_zero_copy = flags & RTE_VHOST_USER_DEQUEUE_ZERO_COPY;
 	vsocket->extbuf = flags & RTE_VHOST_USER_EXTBUF_SUPPORT;
 	vsocket->linearbuf = flags & RTE_VHOST_USER_LINEARBUF_SUPPORT;
+	vsocket->vectorized = flags & RTE_VHOST_USER_VECTORIZED;
 
 	if (vsocket->dequeue_zero_copy &&
 	    (flags & RTE_VHOST_USER_IOMMU_SUPPORT)) {
diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
index 8f20a0818f..50bf033a9d 100644
--- a/lib/librte_vhost/vhost.c
+++ b/lib/librte_vhost/vhost.c
@@ -752,6 +752,17 @@ vhost_enable_linearbuf(int vid)
 	dev->linearbuf = 1;
 }
 
+void
+vhost_enable_vectorized(int vid)
+{
+	struct virtio_net *dev = get_device(vid);
+
+	if (dev == NULL)
+		return;
+
+	dev->vectorized = 1;
+}
+
 int
 rte_vhost_get_mtu(int vid, uint16_t *mtu)
 {
diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index 632f66d532..b556eb3bf6 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -383,6 +383,7 @@ struct virtio_net {
 	int			async_copy;
 	int			extbuf;
 	int			linearbuf;
+	int			vectorized;
 	struct vhost_virtqueue	*virtqueue[VHOST_MAX_QUEUE_PAIRS * 2];
 	struct inflight_mem_info *inflight_info;
 #define IF_NAME_SZ (PATH_MAX > IFNAMSIZ ? PATH_MAX : IFNAMSIZ)
@@ -721,6 +722,7 @@ void vhost_enable_dequeue_zero_copy(int vid);
 void vhost_set_builtin_virtio_net(int vid, bool enable);
 void vhost_enable_extbuf(int vid);
 void vhost_enable_linearbuf(int vid);
+void vhost_enable_vectorized(int vid);
 
 int vhost_enable_guest_notification(struct virtio_net *dev,
 		struct vhost_virtqueue *vq, int enable);