Message ID | 20190829080000.20806-1-maxime.coquelin@redhat.com (mailing list archive) |
---|---|
Headers | From: Maxime Coquelin <maxime.coquelin@redhat.com> · To: tiwei.bie@intel.com, zhihong.wang@intel.com, amorenoz@redhat.com, xiao.w.wang@intel.com, dev@dpdk.org, jfreimann@redhat.com · Cc: stable@dpdk.org, Maxime Coquelin <maxime.coquelin@redhat.com> · Date: Thu, 29 Aug 2019 09:59:45 +0200 · Message-Id: <20190829080000.20806-1-maxime.coquelin@redhat.com> · Subject: [dpdk-dev] [PATCH 00/15] Introduce Virtio vDPA driver |
Series | Introduce Virtio vDPA driver |
Message
Maxime Coquelin
Aug. 29, 2019, 7:59 a.m. UTC
vDPA allows Virtio datapath processing to be offloaded to supported NICs, such as IFCVF.

The control path has to be handled by a dedicated vDPA driver, so that it can translate Vhost-user protocol requests into proprietary NIC register accesses.

This driver is the vDPA driver for Virtio devices, meaning that Vhost-user protocol requests get translated into Virtio register accesses as defined in the Virtio spec.

Basically, it can be used within a guest with a para-virtualized Virtio-net device, or even with a full Virtio HW offload NIC directly on the host.

Amongst the main features, all currently supported Virtio spec versions are supported (split & packed rings, but only tested with split ring for now), and multiqueue support is added by implementing the control virtqueue in the driver.

The structure of this driver is heavily based on the IFCVF vDPA driver.

Maxime Coquelin (15):
  vhost: remove vhost kernel header inclusion
  vhost: configure vDPA as soon as the device is ready
  net/virtio: move control path fonctions in virtqueue file
  net/virtio: add virtio PCI subsystem device ID declaration
  net/virtio: save notify bar ID in virtio HW struct
  net/virtio: add skeleton for virtio vDPA driver
  net/virtio: add vDPA ops to get number of queue
  net/virtio: add virtio vDPA op to get features
  net/virtio: add virtio vDPA op to get protocol features
  net/virtio: add vDPA op to configure and start the device
  net/virtio: add vDPA op to stop and close the device
  net/virtio: add vDPA op to set features
  net/virtio: add vDPA ops to get VFIO FDs
  net/virtio: add vDPA op to get notification area
  doc: add documentation for Virtio vDPA driver

 config/common_linux                |   1 +
 doc/guides/nics/index.rst          |   1 +
 doc/guides/nics/virtio_vdpa.rst    |  45 ++
 drivers/net/ifc/ifcvf_vdpa.c       |   1 +
 drivers/net/virtio/Makefile        |   4 +
 drivers/net/virtio/meson.build     |   3 +-
 drivers/net/virtio/virtio_ethdev.c | 252 --------
 drivers/net/virtio/virtio_pci.c    |   6 +-
 drivers/net/virtio/virtio_pci.h    |   2 +
 drivers/net/virtio/virtio_vdpa.c   | 918 +++++++++++++++++++++++++++++
 drivers/net/virtio/virtqueue.c     | 255 ++++++++
 drivers/net/virtio/virtqueue.h     |   5 +
 lib/librte_vhost/rte_vdpa.h        |   1 -
 lib/librte_vhost/rte_vhost.h       |   9 +-
 lib/librte_vhost/vhost_user.c      |   3 +-
 15 files changed, 1243 insertions(+), 263 deletions(-)
 create mode 100644 doc/guides/nics/virtio_vdpa.rst
 create mode 100644 drivers/net/virtio/virtio_vdpa.c
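To give a feel for what the series plugs into, the sketch below shows the shape of the vDPA ops table such a driver registers with librte_vhost: roughly one op per "net/virtio: add vDPA op ..." patch above. The callback bodies are stubs invented for illustration, and the structure and function signatures are recalled from the rte_vdpa.h of that DPDK generation rather than copied from the series, so the real code in drivers/net/virtio/virtio_vdpa.c will differ in detail.

```c
#include <stdint.h>
#include <rte_pci.h>
#include <rte_vdpa.h>

/*
 * Illustrative stub callbacks only -- the real implementations live in
 * drivers/net/virtio/virtio_vdpa.c and talk to the Virtio PCI registers.
 */
static int stub_queue_num(int did, uint32_t *n) { (void)did; *n = 1; return 0; }
static int stub_features(int did, uint64_t *f) { (void)did; *f = 0; return 0; }
static int stub_vid_op(int vid) { (void)vid; return 0; }
static int stub_notify_area(int vid, int qid, uint64_t *offset, uint64_t *size)
{ (void)vid; (void)qid; *offset = 0; *size = 0x1000; return 0; }

/* Roughly one entry per "net/virtio: add vDPA op ..." patch in the list above. */
static struct rte_vdpa_dev_ops virtio_vdpa_ops = {
	.get_queue_num         = stub_queue_num,
	.get_features          = stub_features,
	.get_protocol_features = stub_features,
	.dev_conf              = stub_vid_op,     /* configure and start */
	.dev_close             = stub_vid_op,     /* stop and close      */
	.set_features          = stub_vid_op,
	.get_vfio_group_fd     = stub_vid_op,
	.get_vfio_device_fd    = stub_vid_op,
	.get_notify_area       = stub_notify_area,
};

/* Called from the PCI probe path to expose the device to librte_vhost. */
static int
virtio_vdpa_register(const struct rte_pci_addr *pci_addr)
{
	struct rte_vdpa_dev_addr dev_addr = {
		.type = PCI_ADDR,
		.pci_addr = *pci_addr,
	};

	/* Returns the vDPA device id (did) on success, -1 on failure. */
	return rte_vdpa_register_device(&dev_addr, &virtio_vdpa_ops);
}
```

Once a vhost-user socket is attached to the returned device id, the vhost library dispatches the incoming Vhost-user protocol requests to these ops, which is where the translation into Virtio register accesses described above happens.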
Comments
Hi Maxime,

Thursday, August 29, 2019 11:00 AM, Maxime Coquelin:
> Subject: [dpdk-dev] [PATCH 00/15] Introduce Virtio vDPA driver
>
> vDPA allows Virtio datapath processing to be offloaded to supported
> NICs, such as IFCVF.
>
> The control path has to be handled by a dedicated vDPA driver, so that
> it can translate Vhost-user protocol requests into proprietary NIC
> register accesses.
>
> This driver is the vDPA driver for Virtio devices, meaning that
> Vhost-user protocol requests get translated into Virtio register
> accesses as defined in the Virtio spec.
>
> Basically, it can be used within a guest with a para-virtualized
> Virtio-net device, or even with a full Virtio HW offload NIC directly
> on the host.

Can you elaborate more on the use cases for such a driver?

1. If the underlying HW can support a full virtio device, why do we need
to work with it in vDPA mode? Why not provide it to the VM as a
passthrough device?
2. Why is it preferable to work with a virtio device as the backend
device used with vDPA vs. working with the underlying HW VF?

Is nested virtualization what you have in mind?

> [...]
Hi Shahaf,

On 9/9/19 1:55 PM, Shahaf Shuler wrote:
> Hi Maxime,
>
> Thursday, August 29, 2019 11:00 AM, Maxime Coquelin:
>> Subject: [dpdk-dev] [PATCH 00/15] Introduce Virtio vDPA driver
>>
>> [...]
>>
>> Basically, it can be used within a guest with a para-virtualized
>> Virtio-net device, or even with a full Virtio HW offload NIC directly
>> on the host.
>
> Can you elaborate more on the use cases for such a driver?
>
> 1. If the underlying HW can support a full virtio device, why do we need
> to work with it in vDPA mode? Why not provide it to the VM as a
> passthrough device?
> 2. Why is it preferable to work with a virtio device as the backend
> device used with vDPA vs. working with the underlying HW VF?

IMHO, I see two use cases where it can make sense to use vDPA with a full
offload HW device:

1. Live-migration support: it makes it possible to switch to processing
   the rings in SW during the migration, as the Virtio HW does not
   support dirty pages logging.

2. It can be used to provide a single standard interface (the vhost-user
   socket) to containers in the scope of CNFs. Doing so, the container
   does not need to be modified, whatever the HW NIC: Virtio datapath
   offload only, full Virtio offload, or no offload at all. In the latter
   case, it would not be optimal, as it implies forwarding between the
   Vhost PMD and the HW NIC PMD, but it would work.

> Is nested virtualization what you have in mind?

For the para-virtualized virtio device, either nested virtualization, or a
container running within the guest.

Maxime
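On point 2, the piece that makes the vhost-user socket a single standard interface is the host application that binds the socket to whichever vDPA device backs it (DPDK's vdpa example application does something along these lines). Below is a minimal sketch with a made-up socket path and PCI address, using the rte_vhost/rte_vdpa calls as recalled from that era's headers rather than code from this series.

```c
#include <rte_pci.h>
#include <rte_vdpa.h>
#include <rte_vhost.h>

/* Hypothetical values, for the sketch only. */
#define VDPA_SOCKET_PATH "/tmp/vdpa-0.sock"
#define VDPA_PCI_ADDR    "0000:05:00.0"

static int
expose_vdpa_device_over_vhost_user(void)
{
	struct rte_vdpa_dev_addr addr = { .type = PCI_ADDR };
	int did;

	/* Find the vDPA device id registered by the driver (virtio, ifcvf, ...). */
	if (rte_pci_addr_parse(VDPA_PCI_ADDR, &addr.pci_addr) != 0)
		return -1;
	did = rte_vdpa_find_device_id(&addr);
	if (did < 0)
		return -1;

	/* The container/VM always sees the same vhost-user socket... */
	if (rte_vhost_driver_register(VDPA_SOCKET_PATH, 0) != 0)
		return -1;

	/* ...while this call decides whether a vDPA device handles the rings. */
	if (rte_vhost_driver_attach_vdpa_device(VDPA_SOCKET_PATH, did) != 0)
		return -1;

	return rte_vhost_driver_start(VDPA_SOCKET_PATH);
}
```

From the container's point of view nothing changes when the backend switches between this virtio vDPA driver, ifcvf, or a pure SW vhost datapath; only the attach step on the host differs.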
Tuesday, September 10, 2019 10:46 AM, Maxime Coquelin:
> Subject: Re: [dpdk-dev] [PATCH 00/15] Introduce Virtio vDPA driver
>
> Hi Shahaf,
>
> On 9/9/19 1:55 PM, Shahaf Shuler wrote:
>
> [...]
>
> IMHO, I see two use cases where it can make sense to use vDPA with a
> full offload HW device:
>
> 1. Live-migration support: it makes it possible to switch to processing
>    the rings in SW during the migration, as the Virtio HW does not
>    support dirty pages logging.

Can you elaborate on why specifically using the virtio_vdpa PMD enables
this SW relay during migration?
E.g. the vDPA PMD of Intel that runs on top of a VF does that today as
well.

> 2. It can be used to provide a single standard interface (the vhost-user
>    socket) to containers in the scope of CNFs. Doing so, the container
>    does not need to be modified, whatever the HW NIC: Virtio datapath
>    offload only, full Virtio offload, or no offload at all. In the latter
>    case, it would not be optimal, as it implies forwarding between the
>    Vhost PMD and the HW NIC PMD, but it would work.

The interface mapping in such a system is not clear to me.
From what I understand, the container will have a virtio-user interface
and the host will have a virtio interface; the virtio interface can then
be programmed to work with vDPA or not.
For full emulation, I guess you will need to expose the netdev of the
fully emulated virtio device to the container?

I am trying to map when it is beneficial to use this virtio_vdpa PMD and
when it is better to use the vendor-specific vDPA PMD on top of a VF.

>> Is nested virtualization what you have in mind?
>
> For the para-virtualized virtio device, either nested virtualization,
> or a container running within the guest.
>
> Maxime
On 9/10/19 3:44 PM, Shahaf Shuler wrote:
> Tuesday, September 10, 2019 10:46 AM, Maxime Coquelin:
>
> [...]
>
>> IMHO, I see two use cases where it can make sense to use vDPA with a
>> full offload HW device:
>>
>> 1. Live-migration support: it makes it possible to switch to processing
>>    the rings in SW during the migration, as the Virtio HW does not
>>    support dirty pages logging.
>
> Can you elaborate on why specifically using the virtio_vdpa PMD enables
> this SW relay during migration?
> E.g. the vDPA PMD of Intel that runs on top of a VF does that today as
> well.

I think there was a misunderstanding. When I said:
"I see two use cases where it can make sense to use vDPA with a full
offload HW device"

I meant: I see two use cases where it can make sense to use vDPA with a
full offload HW device, instead of having the full offload HW device use
the Virtio PMD.

In other words, I think it is preferable to only offload the datapath, so
that it is possible to support SW live-migration.

>> 2. It can be used to provide a single standard interface (the vhost-user
>>    socket) to containers in the scope of CNFs. Doing so, the container
>>    does not need to be modified, whatever the HW NIC: Virtio datapath
>>    offload only, full Virtio offload, or no offload at all. In the latter
>>    case, it would not be optimal, as it implies forwarding between the
>>    Vhost PMD and the HW NIC PMD, but it would work.
>
> The interface mapping in such a system is not clear to me.
> From what I understand, the container will have a virtio-user interface
> and the host will have a virtio interface; the virtio interface can then
> be programmed to work with vDPA or not.
> For full emulation, I guess you will need to expose the netdev of the
> fully emulated virtio device to the container?
>
> I am trying to map when it is beneficial to use this virtio_vdpa PMD and
> when it is better to use the vendor-specific vDPA PMD on top of a VF.

I think that with the above clarification, I made it clear that the goal
of this driver is not to replace vendors' vDPA drivers (their control path
may not even be compatible), but instead to provide a generic driver that
can be used either within a guest with a para-virtualized Virtio-net
device, or with a HW NIC that fully offloads Virtio (both data and control
paths).
Tuesday, September 10, 2019 4:56 PM, Maxime Coquelin:
> Subject: Re: [dpdk-dev] [PATCH 00/15] Introduce Virtio vDPA driver
> On 9/10/19 3:44 PM, Shahaf Shuler wrote:
>
> [...]
>
> I think that with the above clarification, I made it clear that the goal
> of this driver is not to replace vendors' vDPA drivers (their control
> path may not even be compatible), but instead to provide a generic
> driver that can be used either within a guest with a para-virtualized
> Virtio-net device, or with a HW NIC that fully offloads Virtio (both
> data and control paths).

Thanks Maxime, it is clearer now.
From what I understand, this driver is to be used with vDPA when the
underlying device is virtio.

I can perfectly understand the para-virt (+ nested virtualization /
container inside a VM) use case.

Regarding the fully emulated virtio device on the host (instead of a plain
VF), the benefit is still not clear to me: if you have HW that can expose
a VF, why not use the VF plus the vendor-specific vDPA driver?

Anyway, for the series:
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
On 9/11/19 7:15 AM, Shahaf Shuler wrote:
> Tuesday, September 10, 2019 4:56 PM, Maxime Coquelin:
>
> [...]
>
> Thanks Maxime, it is clearer now.
> From what I understand, this driver is to be used with vDPA when the
> underlying device is virtio.
>
> I can perfectly understand the para-virt (+ nested virtualization /
> container inside a VM) use case.
>
> Regarding the fully emulated virtio device on the host (instead of a
> plain VF), the benefit is still not clear to me: if you have HW that can
> expose a VF, why not use the VF plus the vendor-specific vDPA driver?

If you need a vendor-specific vDPA driver for the VF, then you definitely
want to use the vendor-specific driver.

However, if there is a HW device or VF that implements the Virtio spec
even for the control path (i.e. the PCI registers layout), one may be
tempted to do device assignment directly to the guest and use the Virtio
PMD. The downside of doing that is that it won't support live-migration.

The benefit of using vDPA with the virtio vDPA driver in this case is to
provide a way to support live-migration (by switching to SW ring
processing and performing dirty pages logging).

> Anyway, for the series:
> Acked-by: Shahaf Shuler <shahafs@mellanox.com>

Thanks!
Maxime
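To make the last point a bit more concrete: "SW ring processing with dirty pages logging" means that, while the migration is in progress, the relay path completes ring entries in software and marks every guest page it touches as dirty through the vhost library, so QEMU can copy those pages to the destination. Below is a rough sketch of that logging step; the helper name and parameters are invented for illustration, and only the two rte_vhost_log_* calls are meant to reflect the public API of that period.

```c
#include <stdint.h>
#include <rte_vhost.h>

/*
 * Invented helper: called by the SW relay for each used-ring entry it
 * completes on behalf of the HW while live-migration is in progress.
 */
static void
relay_log_dirty(int vid, uint16_t vring_idx,
		uint64_t buf_guest_addr, uint32_t buf_len,
		uint64_t used_elem_offset, uint64_t used_elem_len)
{
	/* Guest buffer that was just filled: mark its pages dirty. */
	rte_vhost_log_write(vid, buf_guest_addr, buf_len);

	/* Used-ring slot updated for that buffer: mark it dirty as well. */
	rte_vhost_log_used_vring(vid, vring_idx,
				 used_elem_offset, used_elem_len);
}
```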
On 8/29/19 9:59 AM, Maxime Coquelin wrote:
> vDPA allows Virtio datapath processing to be offloaded to supported
> NICs, such as IFCVF.
>
> The control path has to be handled by a dedicated vDPA driver, so that
> it can translate Vhost-user protocol requests into proprietary NIC
> register accesses.
>
> This driver is the vDPA driver for Virtio devices, meaning that
> Vhost-user protocol requests get translated into Virtio register
> accesses as defined in the Virtio spec.
>
> Basically, it can be used within a guest with a para-virtualized
> Virtio-net device, or even with a full Virtio HW offload NIC directly
> on the host.
>
> Amongst the main features, all currently supported Virtio spec versions
> are supported (split & packed rings, but only tested with split ring
> for now), and multiqueue support is added by implementing the control
> virtqueue in the driver.
>
> The structure of this driver is heavily based on the IFCVF vDPA driver.
>
> [...]

Deferring the series to v20.02.