[00/15] Introduce Virtio vDPA driver

Message ID 20190829080000.20806-1-maxime.coquelin@redhat.com (mailing list archive)
Series: Introduce Virtio vDPA driver

Message

Maxime Coquelin Aug. 29, 2019, 7:59 a.m. UTC
  vDPA allows offloading Virtio datapath processing to supported
NICs, such as IFCVF.

The control path has to be handled by a dedicated vDPA driver,
so that it can translate Vhost-user protocol requests into
proprietary NIC register accesses.

This driver is the vDPA driver for Virtio devices, meaning
that Vhost-user protocol requests get translated into Virtio
register accesses as defined in the Virtio spec.
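To illustrate that translation, here is a minimal, hypothetical C sketch of
how a Vhost-user VHOST_USER_SET_FEATURES request could end up as writes to
the Virtio 1.0 common configuration registers. The register layout follows
the Virtio spec; the function name and the cfg mapping are illustrative and
not the driver's actual code.

#include <stdint.h>

/* First fields of the Virtio 1.0 common configuration structure
 * (Virtio spec, "Common configuration structure layout");
 * the remaining fields are omitted here for brevity. */
struct virtio_pci_common_cfg {
	uint32_t device_feature_select; /* read-write */
	uint32_t device_feature;        /* read-only for the driver */
	uint32_t driver_feature_select; /* read-write */
	uint32_t driver_feature;        /* read-write */
};

/* Hypothetical vDPA callback body: push the 64-bit feature set that
 * was negotiated over Vhost-user into the device, 32 bits at a time. */
static void
vdpa_virtio_set_features(volatile struct virtio_pci_common_cfg *cfg,
			 uint64_t features)
{
	cfg->driver_feature_select = 0;
	cfg->driver_feature = (uint32_t)features;
	cfg->driver_feature_select = 1;
	cfg->driver_feature = (uint32_t)(features >> 32);
}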

Basically, it can be used within a guest with a para-virtualized
Virtio-net device, or even with a full Virtio HW offload NIC
directly on the host.

Among the main features, all currently supported Virtio spec
versions are handled (split & packed rings, though only split
ring has been tested so far), and multiqueue support is added
by implementing the control virtqueue in the driver.
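As a hedged illustration of that control virtqueue path (not the driver's
actual code), the multiqueue enable command follows the virtio-net spec:
class VIRTIO_NET_CTRL_MQ, command VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET, with the
number of queue pairs as payload and an ack byte written back by the
device. The send_ctrl_cmd() helper below is hypothetical.

#include <stdint.h>

#define VIRTIO_NET_CTRL_MQ               4
#define VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET  0
#define VIRTIO_NET_OK                    0

struct virtio_net_ctrl_hdr {
	uint8_t class;   /* VIRTIO_NET_CTRL_MQ */
	uint8_t cmd;     /* VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET */
} __attribute__((packed));

/* Hypothetical helper: enqueue hdr + payload on the control virtqueue,
 * notify the device and return the ack byte it writes back. */
int send_ctrl_cmd(const struct virtio_net_ctrl_hdr *hdr,
		  const void *payload, uint32_t len);

static int
virtio_vdpa_set_mq(uint16_t nb_queue_pairs)
{
	struct virtio_net_ctrl_hdr hdr = {
		.class = VIRTIO_NET_CTRL_MQ,
		.cmd = VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET,
	};
	uint16_t pairs = nb_queue_pairs; /* little-endian per the spec */

	return send_ctrl_cmd(&hdr, &pairs, sizeof(pairs)) == VIRTIO_NET_OK ? 0 : -1;
}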

The structure of this driver is heavily based on IFCVF vDPA.

Maxime Coquelin (15):
  vhost: remove vhost kernel header inclusion
  vhost: configure vDPA as soon as the device is ready
  net/virtio: move control path functions in virtqueue file
  net/virtio: add virtio PCI subsystem device ID declaration
  net/virtio: save notify bar ID in virtio HW struct
  net/virtio: add skeleton for virtio vDPA driver
  net/virtio: add vDPA ops to get number of queue
  net/virtio: add virtio vDPA op to get features
  net/virtio: add virtio vDPA op to get protocol features
  net/virtio: add vDPA op to configure and start the device
  net/virtio: add vDPA op to stop and close the device
  net/virtio: add vDPA op to set features
  net/virtio: add vDPA ops to get VFIO FDs
  net/virtio: add vDPA op to get notification area
  doc: add documentation for Virtio vDPA driver

 config/common_linux                |   1 +
 doc/guides/nics/index.rst          |   1 +
 doc/guides/nics/virtio_vdpa.rst    |  45 ++
 drivers/net/ifc/ifcvf_vdpa.c       |   1 +
 drivers/net/virtio/Makefile        |   4 +
 drivers/net/virtio/meson.build     |   3 +-
 drivers/net/virtio/virtio_ethdev.c | 252 --------
 drivers/net/virtio/virtio_pci.c    |   6 +-
 drivers/net/virtio/virtio_pci.h    |   2 +
 drivers/net/virtio/virtio_vdpa.c   | 918 +++++++++++++++++++++++++++++
 drivers/net/virtio/virtqueue.c     | 255 ++++++++
 drivers/net/virtio/virtqueue.h     |   5 +
 lib/librte_vhost/rte_vdpa.h        |   1 -
 lib/librte_vhost/rte_vhost.h       |   9 +-
 lib/librte_vhost/vhost_user.c      |   3 +-
 15 files changed, 1243 insertions(+), 263 deletions(-)
 create mode 100644 doc/guides/nics/virtio_vdpa.rst
 create mode 100644 drivers/net/virtio/virtio_vdpa.c
  

Comments

Shahaf Shuler Sept. 9, 2019, 11:55 a.m. UTC | #1
Hi Maxime, 

Thursday, August 29, 2019 11:00 AM, Maxime Coquelin:
> Subject: [dpdk-dev] [PATCH 00/15] Introduce Virtio vDPA driver
> 
> vDPA allows to offload Virtio Datapath processing by supported NICs, like
> IFCVF for example.
> 
> The control path has to be handled by a dedicated vDPA driver, so that it can
> translate Vhost-user protocol requests to proprietary NICs registers
> accesses.
> 
> This driver is the vDPA driver for Virtio devices, meaning that Vhost-user
> protocol requests get translated to Virtio registers accesses as per defined in
> the Virtio spec.
> 
> Basically, it can be used within a guest with a para-virtualized Virtio-net
> device, or even with a full Virtio HW offload NIC directly on host.

Can you elaborate more on the use cases for such a driver?

1. If the underlying HW can support a full virtio device, why do we need to work w/ it in vDPA mode? Why not provide it to the VM as a passthrough device?
2. Why is it preferable to work w/ a virtio device as the backend device for vDPA v.s. working w/ the underlying HW VF?

Is nested virtualization what you have in mind?

> 
> Amongst the main features, all currently supported Virtio spec versions are
> supported (split & packed rings, but only tested with split ring for now) and
> also multiqueue support is added by implementing the cotnrol virtqueue in
> the driver.
> 
> The structure of this driver is heavily based on IFCVF vDPA.
> 
> Maxime Coquelin (15):
>   vhost: remove vhost kernel header inclusion
>   vhost: configure vDPA as soon as the device is ready
>   net/virtio: move control path fonctions in virtqueue file
>   net/virtio: add virtio PCI subsystem device ID declaration
>   net/virtio: save notify bar ID in virtio HW struct
>   net/virtio: add skeleton for virtio vDPA driver
>   net/virtio: add vDPA ops to get number of queue
>   net/virtio: add virtio vDPA op to get features
>   net/virtio: add virtio vDPA op to get protocol features
>   net/virtio: add vDPA op to configure and start the device
>   net/virtio: add vDPA op to stop and close the device
>   net/virtio: add vDPA op to set features
>   net/virtio: add vDPA ops to get VFIO FDs
>   net/virtio: add vDPA op to get notification area
>   doc: add documentation for Virtio vDPA driver
> 
>  config/common_linux                |   1 +
>  doc/guides/nics/index.rst          |   1 +
>  doc/guides/nics/virtio_vdpa.rst    |  45 ++
>  drivers/net/ifc/ifcvf_vdpa.c       |   1 +
>  drivers/net/virtio/Makefile        |   4 +
>  drivers/net/virtio/meson.build     |   3 +-
>  drivers/net/virtio/virtio_ethdev.c | 252 --------
>  drivers/net/virtio/virtio_pci.c    |   6 +-
>  drivers/net/virtio/virtio_pci.h    |   2 +
>  drivers/net/virtio/virtio_vdpa.c   | 918 +++++++++++++++++++++++++++++
>  drivers/net/virtio/virtqueue.c     | 255 ++++++++
>  drivers/net/virtio/virtqueue.h     |   5 +
>  lib/librte_vhost/rte_vdpa.h        |   1 -
>  lib/librte_vhost/rte_vhost.h       |   9 +-
>  lib/librte_vhost/vhost_user.c      |   3 +-
>  15 files changed, 1243 insertions(+), 263 deletions(-)
>  create mode 100644 doc/guides/nics/virtio_vdpa.rst
>  create mode 100644 drivers/net/virtio/virtio_vdpa.c
> 
> --
> 2.21.0
  
Maxime Coquelin Sept. 10, 2019, 7:46 a.m. UTC | #2
Hi Shahaf,

On 9/9/19 1:55 PM, Shahaf Shuler wrote:
> Hi Maxime, 
> 
> Thursday, August 29, 2019 11:00 AM, Maxime Coquelin:
>> Subject: [dpdk-dev] [PATCH 00/15] Introduce Virtio vDPA driver
>>
>> vDPA allows to offload Virtio Datapath processing by supported NICs, like
>> IFCVF for example.
>>
>> The control path has to be handled by a dedicated vDPA driver, so that it can
>> translate Vhost-user protocol requests to proprietary NICs registers
>> accesses.
>>
>> This driver is the vDPA driver for Virtio devices, meaning that Vhost-user
>> protocol requests get translated to Virtio registers accesses as per defined in
>> the Virtio spec.
>>
>> Basically, it can be used within a guest with a para-virtualized Virtio-net
>> device, or even with a full Virtio HW offload NIC directly on host.
> 
> Can you elaborate more on the use cases to use such driver? 
> 
> 1. If the underlying HW can support full virtio device, why we need to work w/ it w/ vDPA mode? Why not providing it to the VM as passthrough one?
> 2. why it is preferable to work w/ virtio device as the backend device to be used w/ vDPA v.s. working w/ the underlying HW VF?


IMHO, I see two use cases where it can make sense to use vDPA with a
full offload HW device:
1. Live-migration support: it makes it possible to switch to ring
   processing in SW during the migration, as Virtio HW does not support
   dirty pages logging (a minimal sketch of the idea follows this list).

2. It can be used to provide a single standard interface (the vhost-user
   socket) to containers in the scope of CNFs. Doing so, the container
   does not need to be modified, whatever the HW NIC: Virtio datapath
   offload only, full Virtio offload, or no offload at all. In the
   latter case, it would not be optimal as it implies forwarding between
   the Vhost PMD and the HW NIC PMD, but it would work.
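A minimal sketch of the SW relay idea from point 1, assuming the
librte_vhost dirty-logging helpers rte_vhost_log_write() and
rte_vhost_log_used_vring(); the relay_one_packet() wrapper and its
parameters are purely illustrative:

#include <stdint.h>
#include <rte_vhost.h>

static void
relay_one_packet(int vid, uint16_t vring_idx,
		 uint64_t guest_phys_addr, uint32_t len,
		 uint64_t used_ring_offset)
{
	/* ... copy/forward the packet into the guest buffer here ... */

	/* Mark the written guest-physical range dirty so it gets migrated. */
	rte_vhost_log_write(vid, guest_phys_addr, len);

	/* The used ring entry updated by the relay must be logged too
	 * (8 bytes, the size of a used element). */
	rte_vhost_log_used_vring(vid, vring_idx, used_ring_offset, 8);
}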

> Is nested virtualization is what you have in mind?

For the para-virtualized virtio device, either nested virtualization,
or a container running within the guest.

Maxime
  
Shahaf Shuler Sept. 10, 2019, 1:44 p.m. UTC | #3
Tuesday, September 10, 2019 10:46 AM, Maxime Coquelin:
> Subject: Re: [dpdk-dev] [PATCH 00/15] Introduce Virtio vDPA driver
> 
> Hi Shahaf,
> 
> On 9/9/19 1:55 PM, Shahaf Shuler wrote:
> > Hi Maxime,
> >
> > Thursday, August 29, 2019 11:00 AM, Maxime Coquelin:
> >> Subject: [dpdk-dev] [PATCH 00/15] Introduce Virtio vDPA driver
> >>
> >> vDPA allows to offload Virtio Datapath processing by supported NICs,
> >> like IFCVF for example.
> >>
> >> The control path has to be handled by a dedicated vDPA driver, so
> >> that it can translate Vhost-user protocol requests to proprietary
> >> NICs registers accesses.
> >>
> >> This driver is the vDPA driver for Virtio devices, meaning that
> >> Vhost-user protocol requests get translated to Virtio registers
> >> accesses as per defined in the Virtio spec.
> >>
> >> Basically, it can be used within a guest with a para-virtualized
> >> Virtio-net device, or even with a full Virtio HW offload NIC directly on
> host.
> >
> > Can you elaborate more on the use cases to use such driver?
> >
> > 1. If the underlying HW can support full virtio device, why we need to work
> w/ it w/ vDPA mode? Why not providing it to the VM as passthrough one?
> > 2. why it is preferable to work w/ virtio device as the backend device to be
> used w/ vDPA v.s. working w/ the underlying HW VF?
> 
> 
> IMHO, I see two uses cases where it can make sense to use vDPA with a full
> offload HW device:
> 1. Live-migration support:  It makes it possible to switch to rings
>    processing in SW during the migration as Virtio HH does not support
>    dirty pages logging.

Can you elaborate on why specifically using the virtio_vdpa PMD enables this SW relay during migration?
e.g. the vDPA PMD from Intel that runs on top of a VF does that today as well.

> 
> 2. Can be used to provide a single standard interface (the vhost-user
>    socket) to containers in the scope of CNFs. Doing so, the container
>    does not need to be modified, whatever the HW NIC: Virtio datapath
>    offload only, full Virtio offload, or no offload at all. In the
>    latter case, it would not be optimal as it implies forwarding between
>    the Vhost PMD and the HW NIC PMD but it would work.

The interface mapping in such a system is not clear to me.
From what I understand, the container will have a virtio-user i/f and the host will have a virtio i/f. Then the virtio i/f can be programmed to work w/ vDPA or not.
For full emulation, I guess you will need to expose the netdev of the fully emulated virtio device to the container?

I am trying to map when it is beneficial to use this virtio_vdpa PMD and when it is better to use the vendor-specific vDPA PMD on top of a VF.

> 
> > Is nested virtualization is what you have in mind?
> 
> For the para-virtualized virtio device, either nested virtualization, or
> container running within the guest.
> 
> Maxime
  
Maxime Coquelin Sept. 10, 2019, 1:56 p.m. UTC | #4
On 9/10/19 3:44 PM, Shahaf Shuler wrote:
> Tuesday, September 10, 2019 10:46 AM, Maxime Coquelin:
>> Subject: Re: [dpdk-dev] [PATCH 00/15] Introduce Virtio vDPA driver
>>
>> Hi Shahaf,
>>
>> On 9/9/19 1:55 PM, Shahaf Shuler wrote:
>>> Hi Maxime,
>>>
>>> Thursday, August 29, 2019 11:00 AM, Maxime Coquelin:
>>>> Subject: [dpdk-dev] [PATCH 00/15] Introduce Virtio vDPA driver
>>>>
>>>> vDPA allows to offload Virtio Datapath processing by supported NICs,
>>>> like IFCVF for example.
>>>>
>>>> The control path has to be handled by a dedicated vDPA driver, so
>>>> that it can translate Vhost-user protocol requests to proprietary
>>>> NICs registers accesses.
>>>>
>>>> This driver is the vDPA driver for Virtio devices, meaning that
>>>> Vhost-user protocol requests get translated to Virtio registers
>>>> accesses as per defined in the Virtio spec.
>>>>
>>>> Basically, it can be used within a guest with a para-virtualized
>>>> Virtio-net device, or even with a full Virtio HW offload NIC directly on
>> host.
>>>
>>> Can you elaborate more on the use cases to use such driver?
>>>
>>> 1. If the underlying HW can support full virtio device, why we need to work
>> w/ it w/ vDPA mode? Why not providing it to the VM as passthrough one?
>>> 2. why it is preferable to work w/ virtio device as the backend device to be
>> used w/ vDPA v.s. working w/ the underlying HW VF?
>>
>>
>> IMHO, I see two uses cases where it can make sense to use vDPA with a full
>> offload HW device:
>> 1. Live-migration support:  It makes it possible to switch to rings
>>    processing in SW during the migration as Virtio HH does not support
>>    dirty pages logging.
> 
> Can you elaborate why specifically using virtio_vdpa PMD enables this SW relay during migration?
> e.g. the vdpa PMD of intel that runs on top of VF do that today as well. 

I think there was a misunderstanding. When I said:
"
I see two use cases where it can make sense to use vDPA with a full
offload HW device
"

I meant: I see two use cases where it can make sense to use vDPA with a
full offload HW device, instead of using the Virtio PMD with that full
offload HW device.

In other words, I think it is preferable to only offload the datapath,
so that it is possible to support SW live-migration.

>>
>> 2. Can be used to provide a single standard interface (the vhost-user
>>    socket) to containers in the scope of CNFs. Doing so, the container
>>    does not need to be modified, whatever the HW NIC: Virtio datapath
>>    offload only, full Virtio offload, or no offload at all. In the
>>    latter case, it would not be optimal as it implies forwarding between
>>    the Vhost PMD and the HW NIC PMD but it would work.
> 
> It is not clear to me the interface map in such system.
> From what I understand the container will have virtio-user i/f and the host will have virtio i/f. then the virtio i/f can be programmed to work w/ vDPA or not. 
> For full emulation I guess you will need to expose the netdev of the fully emulated virtio device to the container?
> 
> Am trying to map when it is beneficial to use this virtio_vdpa PMD and when it is better to use the vendor specific vDPA PMD on top of VF.

I think that with the above clarification, I made it clear that the goal
of this driver is not to replace vendors' vDPA drivers (their control
path may not even be compatible), but instead to provide a generic
driver that can be used either within a guest with a para-virtualized
Virtio-net device, or with a HW NIC that fully offloads Virtio (both
data and control paths).
  
Shahaf Shuler Sept. 11, 2019, 5:15 a.m. UTC | #5
Tuesday, September 10, 2019 4:56 PM, Maxime Coquelin:
> Subject: Re: [dpdk-dev] [PATCH 00/15] Introduce Virtio vDPA driver
> On 9/10/19 3:44 PM, Shahaf Shuler wrote:
> > Tuesday, September 10, 2019 10:46 AM, Maxime Coquelin:
> >> Subject: Re: [dpdk-dev] [PATCH 00/15] Introduce Virtio vDPA driver

[...]

> >>
> >> Hi Shahaf,
> >>
> >>
> >> IMHO, I see two uses cases where it can make sense to use vDPA with a
> >> full offload HW device:
> >> 1. Live-migration support:  It makes it possible to switch to rings
> >>    processing in SW during the migration as Virtio HH does not support
> >>    dirty pages logging.
> >
> > Can you elaborate why specifically using virtio_vdpa PMD enables this SW
> relay during migration?
> > e.g. the vdpa PMD of intel that runs on top of VF do that today as well.
> 
> I think there were a misunderstanding. When I said:
> "
> I see two uses cases where it can make sense to use vDPA with a full offload
> HW device "
> 
> I meant, I see two uses cases where it can make sense to use vDPA with a full
> offload HW device, instead of the full offload HW device to use Virtio PMD.
> 
> In other words, I think it is preferable to only offload the datapath, so that it
> is possible to support SW live-migration.
> 
> >>
> >> 2. Can be used to provide a single standard interface (the vhost-user
> >>    socket) to containers in the scope of CNFs. Doing so, the container
> >>    does not need to be modified, whatever the HW NIC: Virtio datapath
> >>    offload only, full Virtio offload, or no offload at all. In the
> >>    latter case, it would not be optimal as it implies forwarding between
> >>    the Vhost PMD and the HW NIC PMD but it would work.
> >
> > It is not clear to me the interface map in such system.
> > From what I understand the container will have virtio-user i/f and the host
> will have virtio i/f. then the virtio i/f can be programmed to work w/ vDPA or
> not.
> > For full emulation I guess you will need to expose the netdev of the fully
> emulated virtio device to the container?
> >
> > Am trying to map when it is beneficial to use this virtio_vdpa PMD and
> when it is better to use the vendor specific vDPA PMD on top of VF.
> 
> I think that with above clarification, I made it clear that the goal of this driver
> is not to replace vendors vDPA drivers (their control path maybe not even be
> compatible), but instead to provide a generic driver that can be used either
> within a guest with a para-virtualized Virtio- net device or with HW NIC that
> fully offloads Virtio (both data and control paths).

Thanks Maxime, it is clearer now.
From what I understand, this driver is to be used w/ vDPA when the underlying device is virtio.

I can perfectly understand the para-virt (+ nested virtualization / container inside VM) use case.

Regarding the fully emulated virtio device on the host (instead of a plain VF) - the benefit is still not clear to me - if you have HW that can expose a VF, why not use the VF + the vendor-specific vDPA driver?

Anyway - for the series,
Acked-by: Shahaf Shuler <shahafs@mellanox.com>
  
Maxime Coquelin Sept. 11, 2019, 7:15 a.m. UTC | #6
On 9/11/19 7:15 AM, Shahaf Shuler wrote:
> Tuesday, September 10, 2019 4:56 PM, Maxime Coquelin:
>> Subject: Re: [dpdk-dev] [PATCH 00/15] Introduce Virtio vDPA driver
>> On 9/10/19 3:44 PM, Shahaf Shuler wrote:
>>> Tuesday, September 10, 2019 10:46 AM, Maxime Coquelin:
>>>> Subject: Re: [dpdk-dev] [PATCH 00/15] Introduce Virtio vDPA driver
> 
> [...]
> 
>>>>
>>>> Hi Shahaf,
>>>>
>>>>
>>>> IMHO, I see two uses cases where it can make sense to use vDPA with a
>>>> full offload HW device:
>>>> 1. Live-migration support:  It makes it possible to switch to rings
>>>>    processing in SW during the migration as Virtio HH does not support
>>>>    dirty pages logging.
>>>
>>> Can you elaborate why specifically using virtio_vdpa PMD enables this SW
>> relay during migration?
>>> e.g. the vdpa PMD of intel that runs on top of VF do that today as well.
>>
>> I think there were a misunderstanding. When I said:
>> "
>> I see two uses cases where it can make sense to use vDPA with a full offload
>> HW device "
>>
>> I meant, I see two uses cases where it can make sense to use vDPA with a full
>> offload HW device, instead of the full offload HW device to use Virtio PMD.
>>
>> In other words, I think it is preferable to only offload the datapath, so that it
>> is possible to support SW live-migration.
>>
>>>>
>>>> 2. Can be used to provide a single standard interface (the vhost-user
>>>>    socket) to containers in the scope of CNFs. Doing so, the container
>>>>    does not need to be modified, whatever the HW NIC: Virtio datapath
>>>>    offload only, full Virtio offload, or no offload at all. In the
>>>>    latter case, it would not be optimal as it implies forwarding between
>>>>    the Vhost PMD and the HW NIC PMD but it would work.
>>>
>>> It is not clear to me the interface map in such system.
>>> From what I understand the container will have virtio-user i/f and the host
>> will have virtio i/f. then the virtio i/f can be programmed to work w/ vDPA or
>> not.
>>> For full emulation I guess you will need to expose the netdev of the fully
>> emulated virtio device to the container?
>>>
>>> Am trying to map when it is beneficial to use this virtio_vdpa PMD and
>> when it is better to use the vendor specific vDPA PMD on top of VF.
>>
>> I think that with above clarification, I made it clear that the goal of this driver
>> is not to replace vendors vDPA drivers (their control path maybe not even be
>> compatible), but instead to provide a generic driver that can be used either
>> within a guest with a para-virtualized Virtio- net device or with HW NIC that
>> fully offloads Virtio (both data and control paths).
> 
> Thanks Maxim, It is clearer now. 
> From what I understand this driver is to be used w/ vDPA when the underlying device is virtio. 
> 
> I can perfectly understand the para-virt ( + nested virtualization / container inside VM) use case. 
> 
> Regarding the fully emulated virtio device on the host (instead of a plain VF) - for me the benefit still not clear - if you have HW that can expose VF why not use VF + vendor specific vDPA driver.

If you need a vendor specific vDPA driver for the VF, then you
definitely want to use the vendor specific driver.

However, if there is a HW NIC or VF that implements the Virtio spec even
for the control path (i.e. the PCI register layout), one may be tempted
to do device assignment directly to the guest and use the Virtio PMD.
The downside of doing that is that it won't support live-migration.

The benefit of using vDPA with the virtio vDPA driver in this case is
that it provides a way to support live-migration (by switching to SW
ring processing and performing dirty pages logging).
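For reference, a hedged sketch of how an application of that era could
bind a vhost-user socket to whichever vDPA device was probed (this virtio
vDPA driver or a vendor one), so the guest-facing interface stays the same;
error handling is simplified and the calls are the rte_vdpa/rte_vhost API
as it existed around the time of this series:

#include <rte_vhost.h>
#include <rte_vdpa.h>

static int
attach_vdpa_backend(const char *socket_path, struct rte_vdpa_dev_addr *addr)
{
	/* Look up the vDPA device that was probed for this address. */
	int did = rte_vdpa_find_device_id(addr);

	if (did < 0)
		return -1;

	/* Register the vhost-user socket and bind it to the vDPA device. */
	if (rte_vhost_driver_register(socket_path, 0) < 0)
		return -1;
	if (rte_vhost_driver_attach_vdpa_device(socket_path, did) < 0)
		return -1;

	return rte_vhost_driver_start(socket_path);
}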

> 
> Anyway - for the series,
> Acked-by: Shahaf Shuler <shahafs@mellanox.com>
> 

Thanks!
Maxime
  
Maxime Coquelin Oct. 24, 2019, 6:32 a.m. UTC | #7
On 8/29/19 9:59 AM, Maxime Coquelin wrote:
> vDPA allows to offload Virtio Datapath processing by supported
> NICs, like IFCVF for example.
> 
> The control path has to be handled by a dedicated vDPA driver,
> so that it can translate Vhost-user protocol requests to
> proprietary NICs registers accesses.
> 
> This driver is the vDPA driver for Virtio devices, meaning
> that Vhost-user protocol requests get translated to Virtio
> registers accesses as per defined in the Virtio spec.
> 
> Basically, it can be used within a guest with a para-virtualized
> Virtio-net device, or even with a full Virtio HW offload NIC
> directly on host.
> 
> Amongst the main features, all currently supported Virtio spec
> versions are supported (split & packed rings, but only tested
> with split ring for now) and also multiqueue support is added
> by implementing the cotnrol virtqueue in the driver.
> 
> The structure of this driver is heavily based on IFCVF vDPA.
> 
> Maxime Coquelin (15):
>   vhost: remove vhost kernel header inclusion
>   vhost: configure vDPA as soon as the device is ready
>   net/virtio: move control path fonctions in virtqueue file
>   net/virtio: add virtio PCI subsystem device ID declaration
>   net/virtio: save notify bar ID in virtio HW struct
>   net/virtio: add skeleton for virtio vDPA driver
>   net/virtio: add vDPA ops to get number of queue
>   net/virtio: add virtio vDPA op to get features
>   net/virtio: add virtio vDPA op to get protocol features
>   net/virtio: add vDPA op to configure and start the device
>   net/virtio: add vDPA op to stop and close the device
>   net/virtio: add vDPA op to set features
>   net/virtio: add vDPA ops to get VFIO FDs
>   net/virtio: add vDPA op to get notification area
>   doc: add documentation for Virtio vDPA driver
> 
>  config/common_linux                |   1 +
>  doc/guides/nics/index.rst          |   1 +
>  doc/guides/nics/virtio_vdpa.rst    |  45 ++
>  drivers/net/ifc/ifcvf_vdpa.c       |   1 +
>  drivers/net/virtio/Makefile        |   4 +
>  drivers/net/virtio/meson.build     |   3 +-
>  drivers/net/virtio/virtio_ethdev.c | 252 --------
>  drivers/net/virtio/virtio_pci.c    |   6 +-
>  drivers/net/virtio/virtio_pci.h    |   2 +
>  drivers/net/virtio/virtio_vdpa.c   | 918 +++++++++++++++++++++++++++++
>  drivers/net/virtio/virtqueue.c     | 255 ++++++++
>  drivers/net/virtio/virtqueue.h     |   5 +
>  lib/librte_vhost/rte_vdpa.h        |   1 -
>  lib/librte_vhost/rte_vhost.h       |   9 +-
>  lib/librte_vhost/vhost_user.c      |   3 +-
>  15 files changed, 1243 insertions(+), 263 deletions(-)
>  create mode 100644 doc/guides/nics/virtio_vdpa.rst
>  create mode 100644 drivers/net/virtio/virtio_vdpa.c
> 

Deferring the series to v20.02.