Message ID: 20230525162551.70359-1-maxime.coquelin@redhat.com (mailing list archive)

Headers:
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: dev@dpdk.org, chenbo.xia@intel.com, david.marchand@redhat.com, mkp@redhat.com, fbl@redhat.com, jasowang@redhat.com, cunming.liang@intel.com, xieyongji@bytedance.com, echaudro@redhat.com, eperezma@redhat.com, amorenoz@redhat.com, lulu@redhat.com
Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
Subject: [PATCH v3 00/28] Add VDUSE support to Vhost library
Date: Thu, 25 May 2023 18:25:23 +0200
Series: Add VDUSE support to Vhost library
Message
Maxime Coquelin
May 25, 2023, 4:25 p.m. UTC
Note: v2 is identical to v3; it is just a resend because of an issue when posting v2 that broke the series in patchwork.

This series introduces a new type of backend, VDUSE, to the Vhost library.

VDUSE stands for vDPA Device in Userspace; it enables implementing a Virtio device in userspace and having it attached to the kernel vDPA bus.

Once attached to the vDPA bus, the device can be used by kernel Virtio drivers, like virtio-net in our case, via the virtio-vdpa driver. Doing that, the device is visible to the kernel networking stack and is exposed to userspace as a regular netdev.

It can also be exposed to userspace thanks to the vhost-vdpa driver, via a vhost-vdpa chardev that can be passed to QEMU or the Virtio-user PMD.

While VDUSE support is already available in the upstream kernel, a couple of patches are required to support the network device type:

https://gitlab.com/mcoquelin/linux/-/tree/vduse_networking_rfc

In order to attach the created VDUSE device to the vDPA bus, a recent iproute2 version containing the vdpa tool is required.

Benchmark results:
==================

On this v2, the PVP reference benchmark has been run and compared with Vhost-user.

When doing macswap forwarding in the workload, no difference is seen.
When doing io forwarding in the workload, we see a 4% performance degradation with VDUSE compared to Vhost-user/Virtio-user. It is explained by the use of the IOTLB layer in the Vhost library when using VDUSE, whereas Vhost-user/Virtio-user does not make use of it.

Usage:
======

1. Probe the required kernel modules
 # modprobe vdpa
 # modprobe vduse
 # modprobe virtio-vdpa

2. Build (requires the VDUSE kernel headers to be available)
 # meson build
 # ninja -C build

3. Create a VDUSE device (vduse0) using the Vhost PMD with testpmd (with 4 queue pairs in this example)
 # ./build/app/dpdk-testpmd --no-pci --vdev=net_vhost0,iface=/dev/vduse/vduse0,queues=4 --log-level=*:9 -- -i --txq=4 --rxq=4

4. Attach the VDUSE device to the vDPA bus
 # vdpa dev add name vduse0 mgmtdev vduse
 => The virtio-net netdev shows up (eth0 here)
 # ip l show eth0
 21: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
     link/ether c2:73:ea:a7:68:6d brd ff:ff:ff:ff:ff:ff

5. Start/stop traffic in testpmd
 testpmd> start
 testpmd> show port stats 0
  ######################## NIC statistics for port 0 ########################
  RX-packets: 11         RX-missed: 0          RX-bytes: 1482
  RX-errors: 0
  RX-nombuf: 0
  TX-packets: 1          TX-errors: 0          TX-bytes: 62

  Throughput (since last show)
  Rx-pps: 0              Rx-bps: 0
  Tx-pps: 0              Tx-bps: 0
  ############################################################################
 testpmd> stop

6. Detach the VDUSE device from the vDPA bus
 # vdpa dev del vduse0

7. Quit testpmd
 testpmd> quit

Known issues & remaining work:
==============================
- Fix issue in the FD manager (still polling while the FD has been removed)
- Add Netlink support in the Vhost library
- Support device reconnection
  -> a temporary patch to support reconnection via a tmpfs file is available;
     the upstream solution would be in-kernel and is being developed.
  -> https://gitlab.com/mcoquelin/dpdk-next-virtio/-/commit/5ad06ce14159a9ce36ee168dd13ef389cec91137
- Support packed ring
- Provide more performance benchmark results

Changes in v2/v3:
=================
- Fixed mem_set_dump() parameter (patch 4)
- Fixed accidental comment change (patch 7, Chenbo)
- Change from __builtin_ctz to __builtin_ctzll (patch 9, Chenbo)
- Move change from patch 12 to 13 (Chenbo)
- Enable locks annotation for control queue (patch 17)
- Send control queue notification when used descriptors enqueued (patch 17)
- Lock control queue IOTLB lock (patch 17)
- Fix error path in virtio_net_ctrl_pop() (patch 17, Chenbo)
- Set VDUSE dev FD as NONBLOCK (patch 18)
- Enable more Virtio features (patch 18)
- Remove calls to pthread_setcancelstate() (patch 22)
- Add calls to fdset_pipe_notify() when adding and deleting FDs from a set (patch 22)
- Use RTE_DIM() to get requests string array size (patch 22)
- Set reply result for IOTLB update message (patch 25, Chenbo)
- Fix queues enablement with multiqueue (patch 26)
- Move kickfd creation for better logging (patch 26)
- Improve logging (patch 26)
- Uninstall cvq kickfd in case of handler installation failure (patch 27)
- Enable CVQ notifications once handler is installed (patch 27)
- Don't advertise multiqueue and control queue if the app only requests a single queue pair (patch 27)
- Add release notes

Maxime Coquelin (28):
  vhost: fix missing guest notif stat increment
  vhost: fix invalid call FD handling
  vhost: fix IOTLB entries overlap check with previous entry
  vhost: add helper of IOTLB entries coredump
  vhost: add helper for IOTLB entries shared page check
  vhost: don't dump unneeded pages with IOTLB
  vhost: change to single IOTLB cache per device
  vhost: add offset field to IOTLB entries
  vhost: add page size info to IOTLB entry
  vhost: retry translating IOVA after IOTLB miss
  vhost: introduce backend ops
  vhost: add IOTLB cache entry removal callback
  vhost: add helper for IOTLB misses
  vhost: add helper for interrupt injection
  vhost: add API to set max queue pairs
  net/vhost: use API to set max queue pairs
  vhost: add control virtqueue support
  vhost: add VDUSE device creation and destruction
  vhost: add VDUSE callback for IOTLB miss
  vhost: add VDUSE callback for IOTLB entry removal
  vhost: add VDUSE callback for IRQ injection
  vhost: add VDUSE events handler
  vhost: add support for virtqueue state get event
  vhost: add support for VDUSE status set event
  vhost: add support for VDUSE IOTLB update event
  vhost: add VDUSE device startup
  vhost: add multiqueue support to VDUSE
  vhost: add VDUSE device stop

 doc/guides/prog_guide/vhost_lib.rst    |   4 +
 doc/guides/rel_notes/release_23_07.rst |  11 +
 drivers/net/vhost/rte_eth_vhost.c      |   3 +
 lib/vhost/iotlb.c                      | 333 +++++++------
 lib/vhost/iotlb.h                      |  45 +-
 lib/vhost/meson.build                  |   5 +
 lib/vhost/rte_vhost.h                  |  17 +
 lib/vhost/socket.c                     |  72 ++-
 lib/vhost/vduse.c                      | 646 +++++++++++++++++++++++++
 lib/vhost/vduse.h                      |  33 ++
 lib/vhost/version.map                  |   3 +
 lib/vhost/vhost.c                      |  51 +-
 lib/vhost/vhost.h                      |  90 ++--
 lib/vhost/vhost_user.c                 |  51 +-
 lib/vhost/vhost_user.h                 |   2 +-
 lib/vhost/virtio_net_ctrl.c            | 286 +++++++++++
 lib/vhost/virtio_net_ctrl.h            |  10 +
 17 files changed, 1424 insertions(+), 238 deletions(-)
 create mode 100644 lib/vhost/vduse.c
 create mode 100644 lib/vhost/vduse.h
 create mode 100644 lib/vhost/virtio_net_ctrl.c
 create mode 100644 lib/vhost/virtio_net_ctrl.h
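The IOTLB overhead called out in the benchmark section comes from translating guest I/O virtual addresses (IOVAs) to process virtual addresses on the datapath, an extra step Vhost-user/Virtio-user skips. A minimal sketch of what such a per-descriptor lookup looks like; the structure and function names here are hypothetical, not the DPDK implementation:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical IOTLB entry: maps a contiguous IOVA range to a
 * user virtual address. A sketch only, not lib/vhost/iotlb.c. */
struct iotlb_entry {
	uint64_t iova;
	uint64_t size;
	uintptr_t uaddr;
};

/* Translate an IOVA by scanning the cache. Returns 0 on a miss;
 * real implementations keep the entries sorted, but even then the
 * lookup runs for every buffer, which is consistent with the ~4%
 * io-forwarding gap measured in the cover letter. */
static uintptr_t
iotlb_translate(const struct iotlb_entry *cache, size_t n, uint64_t iova)
{
	for (size_t i = 0; i < n; i++) {
		if (iova >= cache[i].iova && iova < cache[i].iova + cache[i].size)
			return cache[i].uaddr + (iova - cache[i].iova);
	}
	return 0; /* miss: caller must request the mapping and retry */
}
```

On a miss, the backend asks the driver for the mapping and retries, which is the behavior the "retry translating IOVA after IOTLB miss" patch covers on the real datapath.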
Comments
On Thu, May 25, 2023 at 6:25 PM Maxime Coquelin
<maxime.coquelin@redhat.com> wrote:
> [cover letter quoted in full, snipped]

I did not do an in-depth review, but overall, the series lgtm (and
per-patch compilation looks fine).

A few comments though:
- patch 2 is the same as
https://patchwork.dpdk.org/project/dpdk/patch/168431454344.558450.2397970324914136724.stgit@ebuild.local/
It would be cool to report the review tags in the first series that
gets applied.
- there may be a bug in patch 5, see comment on patch,
- patch 4, 5 and 6 go together, with patch 6 being the fix itself. I
understand it was easier to review as split patches, but maybe it
would be simpler to squash them to make the backport trivial.
- patch 7 (and some other patches in the series) will increase the
virtio_net structure but we are not gaining anything on the
vhost_virtqueue size, so a device + vqs memory footprint will slightly
increase. This is not a problem afaics?
- patch 15 breaks the doc, format is incorrect but the CI reported it
so you will notice it before merging :-),
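The cover letter's remaining-work list mentions an FD manager issue (still polling while an FD has been removed), and the v3 changelog adds fdset_pipe_notify() calls when FDs are added to or deleted from a set. Both revolve around the classic self-pipe pattern for waking a poll() loop when the FD set changes; a self-contained sketch of that pattern, with hypothetical names, not the actual DPDK fdset code:

```c
#include <poll.h>
#include <unistd.h>

/* Self-pipe wakeup: the event loop always watches the read end of an
 * internal pipe; any thread that mutates the FD set writes one byte so
 * that a poll() sleeping on the stale FD set returns immediately. */
struct fdset_wakeup {
	int rfd;
	int wfd;
};

static int fdset_wakeup_init(struct fdset_wakeup *w)
{
	int fds[2];
	if (pipe(fds) < 0)
		return -1;
	w->rfd = fds[0];
	w->wfd = fds[1];
	return 0;
}

/* Called after adding/removing an FD to interrupt a pending poll(). */
static void fdset_notify(struct fdset_wakeup *w)
{
	char b = 0;
	(void)!write(w->wfd, &b, 1);
}

/* Returns 1 if a wakeup arrived within timeout_ms, else 0. In a real
 * event loop the pipe's pollfd sits alongside the managed FDs. */
static int fdset_wait(struct fdset_wakeup *w, int timeout_ms)
{
	struct pollfd pfd = { .fd = w->rfd, .events = POLLIN };
	int r = poll(&pfd, 1, timeout_ms);
	if (r > 0 && (pfd.revents & POLLIN)) {
		char b;
		(void)!read(w->rfd, &b, 1); /* drain the notification */
		return 1;
	}
	return 0;
}
```

Without such a notification, a removed FD keeps being polled until the loop's next timeout, which matches the symptom described in the known issues.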
Hi David,

On 5/26/23 11:14, David Marchand wrote:
> [cover letter and earlier comments quoted, snipped]
>
> A few comments though:
> - patch 2 is the same as
> https://patchwork.dpdk.org/project/dpdk/patch/168431454344.558450.2397970324914136724.stgit@ebuild.local/
> It would be cool to report the review tags in the first series that
> gets applied.

Done

> - there may be a bug in patch 5, see comment on patch,
> - patch 4, 5 and 6 go together, with patch 6 being the fix itself. I
> understand it was easier to review as split patches, but maybe it
> would be simpler to squash them to make the backport trivial.

If possible I would prefer to keep them as separate patches; it will be
much easier to understand the code in the future if a regression
happened.

I'll help the LTS maintainer with backporting it (i.e. request to also
pick patch 5 and 5).

Does that work for you?

> - patch 7 (and some other patches in the series) will increase the
> virtio_net structure but we are not gaining anything on the
> vhost_virtqueue size, so a device + vqs memory footprint will slightly
> increase. This is not a problem afaics?

I have not noticed performance degradation being introduced, but it may
be good to revisit it.

> - patch 15 breaks the doc, format is incorrect but the CI reported it
> so you will notice it before merging :-),

Fixed!

Thanks,
Maxime
On Thu, Jun 1, 2023 at 4:59 PM Maxime Coquelin
<maxime.coquelin@redhat.com> wrote:
> > - patch 4, 5 and 6 go together, with patch 6 being the fix itself. I
> > understand it was easier to review as split patches, but maybe it
> > would be simpler to squash them to make the backport trivial.
>
> If possible I would prefer to keep them as separate patches, it will be
> much easier to understand the code in the future if a regression
> happened.
>
> I'll help the LTS maintainer with backporting it (i.e. request to also
> pick patch 5 and 5).

4*

> Does that work for you?

Ok for me.