
[v3,00/28] Add VDUSE support to Vhost library

Message ID 20230525162551.70359-1-maxime.coquelin@redhat.com (mailing list archive)

Message

Maxime Coquelin May 25, 2023, 4:25 p.m. UTC
  Note: v3 is identical to v2; it is just a resend because
an issue when posting v2 broke the series in patchwork.

This series introduces a new type of backend, VDUSE,
to the Vhost library.

VDUSE stands for vDPA Device in Userspace; it enables
implementing a Virtio device in userspace and attaching
it to the kernel vDPA bus.

Once attached to the vDPA bus, the device can be used
by kernel Virtio drivers, like virtio-net in our case,
via the virtio-vdpa driver. In that case, the device is
visible to the kernel networking stack and is exposed
to userspace as a regular netdev.

It can also be exposed to userspace via the vhost-vdpa
driver, through a vhost-vdpa chardev that can be passed
to QEMU or to the Virtio-user PMD.
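
For illustration, consuming the device through the vhost-vdpa
path could look like the following once the VDUSE device is
bound to the vhost-vdpa driver. This is only a sketch: the
/dev/vhost-vdpa-0 chardev name and the exact command lines are
assumptions, not taken from this series.

With QEMU's vhost-vdpa netdev backend:
# qemu-system-x86_64 [...] -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vdpa0 -device virtio-net-pci,netdev=vdpa0

Or with the Virtio-user PMD on top of the same chardev:
# ./build/app/dpdk-testpmd --no-pci --vdev=net_virtio_user0,path=/dev/vhost-vdpa-0 -- -i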

While VDUSE support is already available in the upstream
kernel, a couple of patches are required to support the
network device type:

https://gitlab.com/mcoquelin/linux/-/tree/vduse_networking_rfc

In order to attach the created VDUSE device to the vDPA
bus, a recent iproute2 version containing the vdpa tool is
required.
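
A quick way to check the tool is available (once the modules
from the Usage section below are loaded, the vduse management
device should be listed):

# vdpa mgmtdev show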

Benchmark results:
==================

For this v2, the PVP reference benchmark was run and
compared with Vhost-user.

When doing macswap forwarding in the workload, no difference
is seen. When doing io forwarding in the workload, we see a 4%
performance degradation with VDUSE compared to
Vhost-user/Virtio-user. This is explained by the use of the
IOTLB layer in the Vhost library when using VDUSE, whereas
Vhost-user/Virtio-user does not make use of it.

Usage:
======

1. Probe required Kernel modules
# modprobe vdpa
# modprobe vduse
# modprobe virtio-vdpa

2. Build (requires the VDUSE kernel headers to be available)
# meson build
# ninja -C build
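
If the build cannot find the VDUSE definitions, a quick sanity
check is to look for the UAPI header (path may vary with the
distribution):
# ls /usr/include/linux/vduse.h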

3. Create a VDUSE device (vduse0) using the Vhost PMD with
testpmd (with 4 queue pairs in this example)
# ./build/app/dpdk-testpmd --no-pci --vdev=net_vhost0,iface=/dev/vduse/vduse0,queues=4 --log-level=*:9  -- -i --txq=4 --rxq=4
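
Once testpmd is running, the VDUSE chardev should show up under
/dev/vduse, matching the iface path passed to the Vhost PMD:
# ls /dev/vduse/
vduse0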
 
4. Attach the VDUSE device to the vDPA bus
# vdpa dev add name vduse0 mgmtdev vduse
=> The virtio-net netdev shows up (eth0 here)
# ip l show eth0
21: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether c2:73:ea:a7:68:6d brd ff:ff:ff:ff:ff:ff
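
From there the netdev can be exercised like any other interface,
for example with a quick smoke test (addresses below are
placeholders):
# ip addr add 192.0.2.1/24 dev eth0
# ping -c 1 -I eth0 192.0.2.2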

5. Start/stop traffic in testpmd
testpmd> start
testpmd> show port stats 0
  ######################## NIC statistics for port 0  ########################
  RX-packets: 11         RX-missed: 0          RX-bytes:  1482
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 1          TX-errors: 0          TX-bytes:  62

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################
testpmd> stop

6. Detach the VDUSE device from the vDPA bus
# vdpa dev del vduse0

7. Quit testpmd
testpmd> quit

Known issues & remaining work:
==============================
- Fix issue in FD manager (still polling while FD has been removed)
- Add Netlink support in Vhost library
- Support device reconnection
 -> a temporary patch to support reconnection via a tmpfs file is available;
    the upstream solution would be in-kernel and is being developed.
 -> https://gitlab.com/mcoquelin/dpdk-next-virtio/-/commit/5ad06ce14159a9ce36ee168dd13ef389cec91137
- Support packed ring
- Provide more performance benchmark results

Changes in v2/v3:
=================
- Fixed mem_set_dump() parameter (patch 4)
- Fixed accidental comment change (patch 7, Chenbo)
- Change from __builtin_ctz to __builtin_ctzll (patch 9, Chenbo)
- Move change from patch 12 to patch 13 (Chenbo)
- Enable locks annotation for control queue (Patch 17)
- Send control queue notification when used descriptors enqueued (Patch 17)
- Lock control queue IOTLB lock (Patch 17)
- Fix error path in virtio_net_ctrl_pop() (Patch 17, Chenbo)
- Set VDUSE dev FD as NONBLOCK (Patch 18)
- Enable more Virtio features (Patch 18)
- Remove calls to pthread_setcancelstate() (Patch 22)
- Add calls to fdset_pipe_notify() when adding and deleting FDs from a set (Patch 22)
- Use RTE_DIM() to get requests string array size (Patch 22)
- Set reply result for IOTLB update message (Patch 25, Chenbo)
- Fix queues enablement with multiqueue (Patch 26)
- Move kickfd creation for better logging (Patch 26)
- Improve logging (Patch 26)
- Uninstall cvq kickfd in case of handler installation failure (Patch 27)
- Enable CVQ notifications once handler is installed (Patch 27)
- Don't advertise multiqueue and control queue if the app only requests a single queue pair (Patch 27)
- Add release notes

Maxime Coquelin (28):
  vhost: fix missing guest notif stat increment
  vhost: fix invalid call FD handling
  vhost: fix IOTLB entries overlap check with previous entry
  vhost: add helper of IOTLB entries coredump
  vhost: add helper for IOTLB entries shared page check
  vhost: don't dump unneeded pages with IOTLB
  vhost: change to single IOTLB cache per device
  vhost: add offset field to IOTLB entries
  vhost: add page size info to IOTLB entry
  vhost: retry translating IOVA after IOTLB miss
  vhost: introduce backend ops
  vhost: add IOTLB cache entry removal callback
  vhost: add helper for IOTLB misses
  vhost: add helper for interrupt injection
  vhost: add API to set max queue pairs
  net/vhost: use API to set max queue pairs
  vhost: add control virtqueue support
  vhost: add VDUSE device creation and destruction
  vhost: add VDUSE callback for IOTLB miss
  vhost: add VDUSE callback for IOTLB entry removal
  vhost: add VDUSE callback for IRQ injection
  vhost: add VDUSE events handler
  vhost: add support for virtqueue state get event
  vhost: add support for VDUSE status set event
  vhost: add support for VDUSE IOTLB update event
  vhost: add VDUSE device startup
  vhost: add multiqueue support to VDUSE
  vhost: add VDUSE device stop
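
Note on usage: beyond the Vhost PMD flow shown above, an
application could consume the new pieces directly through the
Vhost library (patches 15 and 18), since registering a path
under /dev/vduse selects the VDUSE backend instead of a
vhost-user socket. A minimal sketch follows; function names
follow the patch titles and rte_vhost.h, but treat the exact
signatures as assumptions:

#include <rte_vhost.h>

int
setup_vduse(void)
{
	const char *path = "/dev/vduse/vduse0";

	/* A path under /dev/vduse selects the VDUSE backend
	 * instead of a vhost-user Unix socket. */
	if (rte_vhost_driver_register(path, 0) < 0)
		return -1;

	/* Cap the number of queue pairs advertised to the
	 * driver (new API from patch 15). */
	if (rte_vhost_driver_set_max_queue_num(path, 4) < 0)
		return -1;

	/* Create and start the VDUSE device. */
	return rte_vhost_driver_start(path);
}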

 doc/guides/prog_guide/vhost_lib.rst    |   4 +
 doc/guides/rel_notes/release_23_07.rst |  11 +
 drivers/net/vhost/rte_eth_vhost.c      |   3 +
 lib/vhost/iotlb.c                      | 333 +++++++------
 lib/vhost/iotlb.h                      |  45 +-
 lib/vhost/meson.build                  |   5 +
 lib/vhost/rte_vhost.h                  |  17 +
 lib/vhost/socket.c                     |  72 ++-
 lib/vhost/vduse.c                      | 646 +++++++++++++++++++++++++
 lib/vhost/vduse.h                      |  33 ++
 lib/vhost/version.map                  |   3 +
 lib/vhost/vhost.c                      |  51 +-
 lib/vhost/vhost.h                      |  90 ++--
 lib/vhost/vhost_user.c                 |  51 +-
 lib/vhost/vhost_user.h                 |   2 +-
 lib/vhost/virtio_net_ctrl.c            | 286 +++++++++++
 lib/vhost/virtio_net_ctrl.h            |  10 +
 17 files changed, 1424 insertions(+), 238 deletions(-)
 create mode 100644 lib/vhost/vduse.c
 create mode 100644 lib/vhost/vduse.h
 create mode 100644 lib/vhost/virtio_net_ctrl.c
 create mode 100644 lib/vhost/virtio_net_ctrl.h
  

Comments

David Marchand May 26, 2023, 9:14 a.m. UTC | #1
On Thu, May 25, 2023 at 6:25 PM Maxime Coquelin
<maxime.coquelin@redhat.com> wrote:
>
> This series introduces a new type of backend, VDUSE,
> to the Vhost library.
>
> [snip]

I did not do an in-depth review, but overall, the series lgtm (and
per-patch compilation looks fine).

A few comments though:
- patch 2 is the same as
https://patchwork.dpdk.org/project/dpdk/patch/168431454344.558450.2397970324914136724.stgit@ebuild.local/
It would be cool to report the review tags in the first series that
gets applied.
- there may be a bug in patch 5, see comment on patch,
- patch 4, 5 and 6 go together, with patch 6 being the fix itself. I
understand it was easier to review as split patches, but maybe it
would be simpler to squash them to make the backport trivial.
- patch 7 (and some other patches in the series) will increase the
virtio_net structure but we are not gaining anything on the
vhost_virtqueue size, so a device + vqs memory footprint will slightly
increase. This is not a problem afaics?
- patch 15 breaks the doc, format is incorrect but the CI reported it
so you will notice it before merging :-).
  
Maxime Coquelin June 1, 2023, 2:59 p.m. UTC | #2
Hi David,

On 5/26/23 11:14, David Marchand wrote:
> On Thu, May 25, 2023 at 6:25 PM Maxime Coquelin
> <maxime.coquelin@redhat.com> wrote:
>>
>> This series introduces a new type of backend, VDUSE,
>> to the Vhost library.
>>
>> [snip]
> 
> I did not do an in-depth review, but overall, the series lgtm (and
> per-patch compilation looks fine).
> 
> A few comments though:
> - patch 2 is the same as
> https://patchwork.dpdk.org/project/dpdk/patch/168431454344.558450.2397970324914136724.stgit@ebuild.local/
> It would be cool to report the review tags in the first series that
> gets applied.

Done

> - there may be a bug in patch 5, see comment on patch,
> - patch 4, 5 and 6 go together, with patch 6 being the fix itself. I
> understand it was easier to review as split patches, but maybe it
> would be simpler to squash them to make the backport trivial.

If possible I would prefer to keep them as separate patches, it will be
much easier to understand the code in the future if a regression
happened.

I'll help the LTS maintainer with backporting it (i.e. request to also
pick patches 4 and 5).

Does that work for you?

> - patch 7 (and some other patches in the series) will increase the
> virtio_net structure but we are not gaining anything on the
> vhost_virtqueue size, so a device + vqs memory footprint will slightly
> increase. This is not a problem afaics?

I have not noticed any performance degradation being introduced, but it
may be good to revisit it.

> - patch 15 breaks the doc, format is incorrect but the CI reported it
> so you will notice it before merging :-).
> 

Fixed!

Thanks,
Maxime
  
David Marchand June 1, 2023, 3:18 p.m. UTC | #3
On Thu, Jun 1, 2023 at 4:59 PM Maxime Coquelin
<maxime.coquelin@redhat.com> wrote:
> > - patch 4, 5 and 6 go together, with patch 6 being the fix itself. I
> > understand it was easier to review as split patches, but maybe it
> > would be simpler to squash them to make the backport trivial.
>
> If possible I would prefer to keep them as separate patches, it will be
> much easier to understand the code in the future if a regression
> happened.
>
> I'll help the LTS maintainer with backporting it (i.e. request to also
> pick patches 4 and 5).
>
> Does that work for you?

Ok for me.