vhost: fix wrong IOTLB initialization
Commit Message
This patch fixes an application crash caused by the vhost IOTLB not being
initialized when virtio has multiqueue enabled.
IOTLB messages can be sent while some queues are not yet enabled. If we
initialize the IOTLB in vhost_user_set_vring_num, an IOTLB update may
arrive while the IOTLB pools of the disabled queues are still
uninitialized.
Fixes: 968bbc7e2e50 ("vhost: avoid IOTLB mempool allocation while IOMMU disabled")
Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
---
lib/vhost/vhost_user.c | 13 +++++++++----
1 file changed, 9 insertions(+), 4 deletions(-)
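For readers of the commit message, the heart of the change (shown in full in the diff further down the thread) is to allocate every vring's IOTLB pool in the SET_FEATURES handler, once VIRTIO_F_IOMMU_PLATFORM has been negotiated and all vrings are known, instead of doing it per queue in vhost_user_set_vring_num:

	/* New logic in vhost_user_set_features(), excerpted from the diff
	 * in this thread: initialize the IOTLB pool of every vring as soon
	 * as IOMMU support is negotiated, so IOTLB updates for queues that
	 * are still disabled do not hit an uninitialized pool. */
	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) {
		uint32_t i;

		for (i = 0; i < dev->nr_vring; i++) {
			if (vhost_user_iotlb_init(dev, i))
				return RTE_VHOST_MSG_RESULT_ERR;
		}
	}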
Comments
On 5/13/2021 1:28 PM, Chenbo Xia wrote:
> This patch fixes an issue of application crash because of vhost iotlb
> not initialized when virtio has multiqueue enabled.
>
> iotlb messages will be sent when some queues are not enabled. If we
> initialize iotlb in vhost_user_set_vring_num, it could happen that
> iotlb update comes when iotlb pool of disabled queues are not
> initialized.
>
> Fixes: 968bbc7e2e50 ("vhost: avoid IOTLB mempool allocation while IOMMU disabled")
>
> Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
Hi Maxime, David,
Can Red Hat QA verify this patch please?
Btw, the issue is tracked as defect:
https://bugs.dpdk.org/show_bug.cgi?id=703
On 13/05/2021 14:10, Ferruh Yigit wrote:
> On 5/13/2021 1:28 PM, Chenbo Xia wrote:
>> This patch fixes an issue of application crash because of vhost iotlb
>> not initialized when virtio has multiqueue enabled.
>>
>> iotlb messages will be sent when some queues are not enabled. If we
>> initialize iotlb in vhost_user_set_vring_num, it could happen that
>> iotlb update comes when iotlb pool of disabled queues are not
>> initialized.
>>
>> Fixes: 968bbc7e2e50 ("vhost: avoid IOTLB mempool allocation while IOMMU disabled")
>>
>> Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
>
>
> Hi Maxime, David,
>
> Can Red Hat QA verify this patch please?
>
I've requested that QA test RC3 + this patch.
> Btw, the issue is for defect:
> https://bugs.dpdk.org/show_bug.cgi?id=703
>
On Thu, May 13, 2021 at 2:38 PM Chenbo Xia <chenbo.xia@intel.com> wrote:
>
> This patch fixes an issue of application crash because of vhost iotlb
> not initialized when virtio has multiqueue enabled.
>
> iotlb messages will be sent when some queues are not enabled. If we
> initialize iotlb in vhost_user_set_vring_num, it could happen that
> iotlb update comes when iotlb pool of disabled queues are not
> initialized.
This makes the problem I reproduced disappear at init, but I noticed
the segfault after restarting testpmd once.
And a little bit after this, my vm crashed.
This is not systematic, so I guess there is some condition with how
the virtio device is initialised in the vm.
One question below.
Bugzilla ID: 703
> Fixes: 968bbc7e2e50 ("vhost: avoid IOTLB mempool allocation while IOMMU disabled")
>
Reported-by: Pei Zhang <pezhang@redhat.com>
> Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
> ---
> lib/vhost/vhost_user.c | 13 +++++++++----
> 1 file changed, 9 insertions(+), 4 deletions(-)
>
> diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
> index 611ff209e3..ae4df8eb69 100644
> --- a/lib/vhost/vhost_user.c
> +++ b/lib/vhost/vhost_user.c
> @@ -311,6 +311,7 @@ vhost_user_set_features(struct virtio_net **pdev, struct VhostUserMsg *msg,
> uint64_t features = msg->payload.u64;
> uint64_t vhost_features = 0;
> struct rte_vdpa_device *vdpa_dev;
> + uint32_t i;
>
> if (validate_msg_fds(msg, 0) != 0)
> return RTE_VHOST_MSG_RESULT_ERR;
> @@ -389,6 +390,14 @@ vhost_user_set_features(struct virtio_net **pdev, struct VhostUserMsg *msg,
> vdpa_dev->ops->set_features(dev->vid);
>
> dev->flags &= ~VIRTIO_DEV_FEATURES_FAILED;
> +
> + if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) {
> + for (i = 0; i < dev->nr_vring; i++) {
I don't know the vhost-user protocol.
At this point of the device init/life, are we sure nr_vring is set to
the max number of vring?
The logs I have tend to say it is the case, but is there a guarantee
in the protocol?
Another way to fix would be to allocate on the first
VHOST_USER_IOTLB_MSG message received for a vring.
> + if (vhost_user_iotlb_init(dev, i))
> + return RTE_VHOST_MSG_RESULT_ERR;
> + }
> + }
> +
> return RTE_VHOST_MSG_RESULT_OK;
> }
>
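A minimal sketch of the lazy-allocation alternative suggested above, reusing vhost_user_iotlb_init() from the patch; the vq->iotlb_pool field and dev->virtqueue[] accessor are assumptions inferred from the backtrace later in the thread, and this is untested:

	/* At the top of the per-vring handling in vhost_user_iotlb_msg():
	 * allocate the IOTLB pool the first time an IOTLB message touches
	 * this vring, instead of relying on an earlier handler having run. */
	struct vhost_virtqueue *vq = dev->virtqueue[i];

	if (vq->iotlb_pool == NULL) {
		if (vhost_user_iotlb_init(dev, i))
			return RTE_VHOST_MSG_RESULT_ERR;
	}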
On 13/05/2021 15:11, David Marchand wrote:
> On Thu, May 13, 2021 at 2:38 PM Chenbo Xia <chenbo.xia@intel.com> wrote:
>>
>> This patch fixes an issue of application crash because of vhost iotlb
>> not initialized when virtio has multiqueue enabled.
>>
>> iotlb messages will be sent when some queues are not enabled. If we
>> initialize iotlb in vhost_user_set_vring_num, it could happen that
>> iotlb update comes when iotlb pool of disabled queues are not
>> initialized.
>
> This makes the problem I reproduced disappear at init, but I noticed
> the segfault after restarting testpmd once.
> And a little bit after this, my vm crashed.
>
> This is not systematic, so I guess there is some condition with how
> the virtio device is initialised in the vm.
>
Ok, no point in Red Hat QA testing RC3 yet, if it is still faulty.
fyi - if you want to fix with a new patch it will likely delay Red Hat
QA testing RC3 (maybe others?) and probably they will only have cycles
for one RC3 test run.
If you choose to revert, we can ask Red Hat QA to test RC3 without
further delay. Please let us know when you consider the options.
>
> One question below.
>
>
> Bugzilla ID: 703
>
>> Fixes: 968bbc7e2e50 ("vhost: avoid IOTLB mempool allocation while IOMMU disabled")
>>
>
> Reported-by: Pei Zhang <pezhang@redhat.com>
>
>> Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
>> ---
>> lib/vhost/vhost_user.c | 13 +++++++++----
>> 1 file changed, 9 insertions(+), 4 deletions(-)
>>
>> diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
>> index 611ff209e3..ae4df8eb69 100644
>> --- a/lib/vhost/vhost_user.c
>> +++ b/lib/vhost/vhost_user.c
>> @@ -311,6 +311,7 @@ vhost_user_set_features(struct virtio_net **pdev, struct VhostUserMsg *msg,
>> uint64_t features = msg->payload.u64;
>> uint64_t vhost_features = 0;
>> struct rte_vdpa_device *vdpa_dev;
>> + uint32_t i;
>>
>> if (validate_msg_fds(msg, 0) != 0)
>> return RTE_VHOST_MSG_RESULT_ERR;
>> @@ -389,6 +390,14 @@ vhost_user_set_features(struct virtio_net **pdev, struct VhostUserMsg *msg,
>> vdpa_dev->ops->set_features(dev->vid);
>>
>> dev->flags &= ~VIRTIO_DEV_FEATURES_FAILED;
>> +
>> + if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) {
>> + for (i = 0; i < dev->nr_vring; i++) {
>
> I don't know the vhost-user protocol.
> At this point of the device init/life, are we sure nr_vring is set to
> the max number of vring?
> The logs I have tend to say it is the case, but is there a guarantee
> in the protocol?
>
>
> Another way to fix would be to allocate on the first
> VHOST_USER_IOTLB_MSG message received for a vring.
>
>
>> + if (vhost_user_iotlb_init(dev, i))
>> + return RTE_VHOST_MSG_RESULT_ERR;
>> + }
>> + }
>> +
>> return RTE_VHOST_MSG_RESULT_OK;
>> }
>>
>
>
On 5/13/2021 3:38 PM, Kevin Traynor wrote:
> On 13/05/2021 15:11, David Marchand wrote:
>> On Thu, May 13, 2021 at 2:38 PM Chenbo Xia <chenbo.xia@intel.com> wrote:
>>>
>>> This patch fixes an issue of application crash because of vhost iotlb
>>> not initialized when virtio has multiqueue enabled.
>>>
>>> iotlb messages will be sent when some queues are not enabled. If we
>>> initialize iotlb in vhost_user_set_vring_num, it could happen that
>>> iotlb update comes when iotlb pool of disabled queues are not
>>> initialized.
>>
>> This makes the problem I reproduced disappear at init, but I noticed
>> the segfault after restarting testpmd once.
>> And a little bit after this, my vm crashed.
>>
>> This is not systematic, so I guess there is some condition with how
>> the virtio device is initialised in the vm.
>>
>
> Ok, no point in Red Hat QA testing RC3 yet, if it is still faulty.
>
> fyi - if you want to fix with a new patch it will likely delay Red Hat
> QA testing RC3 (maybe others?) and probably they will only have cycles
> for one RC3 test run.
>
> If you choose to revert, we can ask Red Hat QA to test RC3 without
> further delay. Please let us know when you consider the options.
>
If the patch is not good to go as it is, I suggest reverting it. As far as I know
Chenbo will be off for Friday & Monday, so it doesn't leave much time to
update/test a new version.
>>
>> One question below.
>>
>>
>> Bugzilla ID: 703
>>
>>> Fixes: 968bbc7e2e50 ("vhost: avoid IOTLB mempool allocation while IOMMU disabled")
>>>
>>
>> Reported-by: Pei Zhang <pezhang@redhat.com>
>>
>>> Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
>>> ---
>>> lib/vhost/vhost_user.c | 13 +++++++++----
>>> 1 file changed, 9 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
>>> index 611ff209e3..ae4df8eb69 100644
>>> --- a/lib/vhost/vhost_user.c
>>> +++ b/lib/vhost/vhost_user.c
>>> @@ -311,6 +311,7 @@ vhost_user_set_features(struct virtio_net **pdev, struct VhostUserMsg *msg,
>>> uint64_t features = msg->payload.u64;
>>> uint64_t vhost_features = 0;
>>> struct rte_vdpa_device *vdpa_dev;
>>> + uint32_t i;
>>>
>>> if (validate_msg_fds(msg, 0) != 0)
>>> return RTE_VHOST_MSG_RESULT_ERR;
>>> @@ -389,6 +390,14 @@ vhost_user_set_features(struct virtio_net **pdev, struct VhostUserMsg *msg,
>>> vdpa_dev->ops->set_features(dev->vid);
>>>
>>> dev->flags &= ~VIRTIO_DEV_FEATURES_FAILED;
>>> +
>>> + if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) {
>>> + for (i = 0; i < dev->nr_vring; i++) {
>>
>> I don't know the vhost-user protocol.
>> At this point of the device init/life, are we sure nr_vring is set to
>> the max number of vring?
>> The logs I have tend to say it is the case, but is there a guarantee
>> in the protocol?
>>
>>
>> Another way to fix would be to allocate on the first
>> VHOST_USER_IOTLB_MSG message received for a vring.
>>
>>
>>> + if (vhost_user_iotlb_init(dev, i))
>>> + return RTE_VHOST_MSG_RESULT_ERR;
>>> + }
>>> + }
>>> +
>>> return RTE_VHOST_MSG_RESULT_OK;
>>> }
>>>
>>
>>
>
On Thu, May 13, 2021 at 4:11 PM David Marchand
<david.marchand@redhat.com> wrote:
> On Thu, May 13, 2021 at 2:38 PM Chenbo Xia <chenbo.xia@intel.com> wrote:
> >
> > This patch fixes an issue of application crash because of vhost iotlb
> > not initialized when virtio has multiqueue enabled.
> >
> > iotlb messages will be sent when some queues are not enabled. If we
> > initialize iotlb in vhost_user_set_vring_num, it could happen that
> > iotlb update comes when iotlb pool of disabled queues are not
> > initialized.
>
> This makes the problem I reproduced disappear at init, but I noticed
> the segfault after restarting testpmd once.
> And a little bit after this, my vm crashed.
>
> This is not systematic, so I guess there is some condition with how
> the virtio device is initialised in the vm.
The crash is systematic (not sure what I missed yesterday, but I
always get it with the simple steps below).
Full logs:
# dpdk-testpmd --vdev
net_vhost0,iface=/var/lib/vhost_sockets/vhost0,client=1,iommu-support=1,queues=2
-w 0:0:0.0 --log-level=lib.vhost.config:debug -- -ia --rxq=2
VHOST_CONFIG: vhost-user client: socket created, fd: 31
VHOST_CONFIG: failed to connect to /var/lib/vhost_sockets/vhost0: No
such file or directory
VHOST_CONFIG: /var/lib/vhost_sockets/vhost0: reconnecting...
# start vm (with virtio device in the guest OS bound to kernel kmod,
i.e. no special configuration)
# testpmd logs:
testpmd> VHOST_CONFIG: /var/lib/vhost_sockets/vhost0: connected
VHOST_CONFIG: new device, handle is 0
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: Processing VHOST_USER_GET_FEATURES succeeded and needs reply.
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: Processing VHOST_USER_GET_PROTOCOL_FEATURES succeeded
and needs reply.
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: negotiated Vhost-user protocol features: 0xcbf
VHOST_CONFIG: Processing VHOST_USER_SET_PROTOCOL_FEATURES succeeded.
VHOST_CONFIG: read message VHOST_USER_GET_QUEUE_NUM
VHOST_CONFIG: Processing VHOST_USER_GET_QUEUE_NUM succeeded and needs reply.
VHOST_CONFIG: read message VHOST_USER_SET_SLAVE_REQ_FD
VHOST_CONFIG: Processing VHOST_USER_SET_SLAVE_REQ_FD succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: Processing VHOST_USER_SET_OWNER succeeded.
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: Processing VHOST_USER_GET_FEATURES succeeded and needs reply.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:36
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_CALL succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:37
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_CALL succeeded.
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: Processing VHOST_USER_GET_FEATURES succeeded and needs reply.
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: Processing VHOST_USER_GET_PROTOCOL_FEATURES succeeded
and needs reply.
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: negotiated Vhost-user protocol features: 0xcbf
VHOST_CONFIG: Processing VHOST_USER_SET_PROTOCOL_FEATURES succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_SLAVE_REQ_FD
VHOST_CONFIG: Processing VHOST_USER_SET_SLAVE_REQ_FD succeeded.
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: Processing VHOST_USER_GET_FEATURES succeeded and needs reply.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:2 file:35
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_CALL succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:3 file:39
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_CALL succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_ENABLE succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_ENABLE succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 2
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_ENABLE succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 3
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_ENABLE succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_ENABLE succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_ENABLE succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 2
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_ENABLE succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 0 to qp idx: 3
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_ENABLE succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: negotiated Virtio features: 0x370607f83
VHOST_CONFIG: (0) mergeable RX buffers off, virtio 1 on
VHOST_CONFIG: IOTLB cache name: iotlb_100873_0_0
VHOST_CONFIG: IOTLB cache name: iotlb_100873_0_1
VHOST_CONFIG: IOTLB cache name: iotlb_100873_0_2
VHOST_CONFIG: IOTLB cache name: iotlb_100873_0_3
VHOST_CONFIG: Processing VHOST_USER_SET_FEATURES succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: guest memory region size: 0x80000000
guest physical addr: 0x0
guest virtual addr: 0x7f2400000000
host virtual addr: 0x7fff40000000
mmap addr : 0x7fff40000000
mmap size : 0x80000000
mmap align: 0x40000000
mmap off : 0x0
VHOST_CONFIG: guest memory region size: 0x180000000
guest physical addr: 0x100000000
guest virtual addr: 0x7f2480000000
host virtual addr: 0x7ffdc0000000
mmap addr : 0x7ffd40000000
mmap size : 0x200000000
mmap align: 0x40000000
mmap off : 0x80000000
VHOST_CONFIG: Processing VHOST_USER_SET_MEM_TABLE succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_NUM succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_BASE succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: (0) failed to map desc ring.
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_ADDR succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:0 file:42
VHOST_CONFIG: (0) failed to map desc ring.
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_KICK succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:43
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_CALL succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_NUM succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_BASE succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: (0) failed to map desc ring.
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_ADDR succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:1 file:36
VHOST_CONFIG: (0) failed to map desc ring.
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_KICK succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:44
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_CALL succeeded.
VHOST_CONFIG: read message VHOST_USER_IOTLB_MSG
VHOST_CONFIG: (0) failed to map desc ring.
VHOST_CONFIG: Processing VHOST_USER_IOTLB_MSG succeeded.
VHOST_CONFIG: read message VHOST_USER_IOTLB_MSG
VHOST_CONFIG: (0) failed to map desc ring.
VHOST_CONFIG: Processing VHOST_USER_IOTLB_MSG succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_ENABLE succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_ENABLE succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: negotiated Virtio features: 0x370607f83
VHOST_CONFIG: (0) mergeable RX buffers off, virtio 1 on
VHOST_CONFIG: IOTLB cache name: iotlb_100873_0_0
VHOST_CONFIG: IOTLB cache name: iotlb_100873_0_1
VHOST_CONFIG: IOTLB cache name: iotlb_100873_0_2
VHOST_CONFIG: IOTLB cache name: iotlb_100873_0_3
VHOST_CONFIG: Processing VHOST_USER_SET_FEATURES succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_NUM succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_BASE succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_ADDR succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:2 file:37
VHOST_CONFIG: (0) failed to map desc ring.
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_KICK succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:2 file:45
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_CALL succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_NUM succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_BASE succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_ADDR succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:3 file:35
VHOST_CONFIG: (0) failed to map desc ring.
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_KICK succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:3 file:46
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_CALL succeeded.
VHOST_CONFIG: read message VHOST_USER_IOTLB_MSG
VHOST_CONFIG: (0) failed to map desc ring.
VHOST_CONFIG: Processing VHOST_USER_IOTLB_MSG succeeded.
VHOST_CONFIG: read message VHOST_USER_IOTLB_MSG
VHOST_CONFIG: (0) failed to map desc ring.
VHOST_CONFIG: Processing VHOST_USER_IOTLB_MSG succeeded.
VHOST_CONFIG: read message VHOST_USER_IOTLB_MSG
VHOST_CONFIG: (0) failed to map avail ring.
VHOST_CONFIG: Processing VHOST_USER_IOTLB_MSG succeeded.
VHOST_CONFIG: read message VHOST_USER_IOTLB_MSG
VHOST_CONFIG: (0) failed to map avail ring.
VHOST_CONFIG: Processing VHOST_USER_IOTLB_MSG succeeded.
VHOST_CONFIG: read message VHOST_USER_IOTLB_MSG
VHOST_CONFIG: (0) mapped address desc: 0x7fff1d886000
VHOST_CONFIG: (0) mapped address avail: 0x7fff1d887000
VHOST_CONFIG: (0) mapped address used: 0x7fff1d887240
VHOST_CONFIG: (0) log_guest_addr: 0
VHOST_CONFIG: Processing VHOST_USER_IOTLB_MSG succeeded.
VHOST_CONFIG: read message VHOST_USER_IOTLB_MSG
VHOST_CONFIG: (0) mapped address desc: 0x7fff1d89e000
VHOST_CONFIG: (0) mapped address avail: 0x7fff1d89f000
VHOST_CONFIG: (0) mapped address used: 0x7fff1d89f240
VHOST_CONFIG: (0) log_guest_addr: 0
VHOST_CONFIG: Processing VHOST_USER_IOTLB_MSG succeeded.
VHOST_CONFIG: read message VHOST_USER_IOTLB_MSG
VHOST_CONFIG: (0) mapped address desc: 0x7fff1d8da000
VHOST_CONFIG: (0) mapped address avail: 0x7fff1d8db000
VHOST_CONFIG: (0) mapped address used: 0x7fff1d8db240
VHOST_CONFIG: (0) log_guest_addr: 0
VHOST_CONFIG: Processing VHOST_USER_IOTLB_MSG succeeded.
Port 0: queue state event
VHOST_CONFIG: read message VHOST_USER_IOTLB_MSG
VHOST_CONFIG: (0) mapped address desc: 0x7fff1e198000
VHOST_CONFIG: (0) mapped address avail: 0x7fff1e199000
VHOST_CONFIG: (0) mapped address used: 0x7fff1e199240
VHOST_CONFIG: (0) log_guest_addr: 0
VHOST_CONFIG: Processing VHOST_USER_IOTLB_MSG succeeded.
Port 0: queue state event
VHOST_CONFIG: virtio is now ready for processing.
Port 0: link state change event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_ENABLE succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_ENABLE succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 2
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_ENABLE succeeded.
Port 0: queue state event
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 3
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_ENABLE succeeded.
Port 0: queue state event
VHOST_CONFIG: read message VHOST_USER_IOTLB_MSG
VHOST_CONFIG: Processing VHOST_USER_IOTLB_MSG succeeded.
VHOST_CONFIG: read message VHOST_USER_IOTLB_MSG
VHOST_CONFIG: Processing VHOST_USER_IOTLB_MSG succeeded.
VHOST_CONFIG: read message VHOST_USER_IOTLB_MSG
VHOST_CONFIG: Processing VHOST_USER_IOTLB_MSG succeeded.
So far, everything looks good.
Now I quit testpmd.
VHOST_CONFIG: free connfd = 31 for device '/var/lib/vhost_sockets/vhost0'
And I restart testpmd with the same command as above:
VHOST_CONFIG: vhost-user client: socket created, fd: 31
VHOST_CONFIG: new device, handle is 0
Port 0: 56:48:4F:53:54:00
Checking link statuses...
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: Processing VHOST_USER_GET_FEATURES succeeded and needs reply.
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: Processing VHOST_USER_GET_PROTOCOL_FEATURES succeeded
and needs reply.
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: negotiated Vhost-user protocol features: 0xcbf
VHOST_CONFIG: Processing VHOST_USER_SET_PROTOCOL_FEATURES succeeded.
VHOST_CONFIG: read message VHOST_USER_GET_QUEUE_NUM
VHOST_CONFIG: Processing VHOST_USER_GET_QUEUE_NUM succeeded and needs reply.
VHOST_CONFIG: read message VHOST_USER_SET_SLAVE_REQ_FD
VHOST_CONFIG: Processing VHOST_USER_SET_SLAVE_REQ_FD succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: Processing VHOST_USER_SET_OWNER succeeded.
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: Processing VHOST_USER_GET_FEATURES succeeded and needs reply.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:36
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_CALL succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:37
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_CALL succeeded.
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: Processing VHOST_USER_GET_FEATURES succeeded and needs reply.
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: Processing VHOST_USER_GET_PROTOCOL_FEATURES succeeded
and needs reply.
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: negotiated Vhost-user protocol features: 0xcbf
VHOST_CONFIG: Processing VHOST_USER_SET_PROTOCOL_FEATURES succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_SLAVE_REQ_FD
VHOST_CONFIG: Processing VHOST_USER_SET_SLAVE_REQ_FD succeeded.
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: Processing VHOST_USER_GET_FEATURES succeeded and needs reply.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:2 file:35
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_CALL succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:3 file:39
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_CALL succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: negotiated Virtio features: 0x370607f83
VHOST_CONFIG: (0) mergeable RX buffers off, virtio 1 on
VHOST_CONFIG: IOTLB cache name: iotlb_101328_0_0
VHOST_CONFIG: IOTLB cache name: iotlb_101328_0_1
VHOST_CONFIG: IOTLB cache name: iotlb_101328_0_2
VHOST_CONFIG: IOTLB cache name: iotlb_101328_0_3
VHOST_CONFIG: Processing VHOST_USER_SET_FEATURES succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: guest memory region size: 0x80000000
guest physical addr: 0x0
guest virtual addr: 0x7f2400000000
host virtual addr: 0x7fff40000000
mmap addr : 0x7fff40000000
mmap size : 0x80000000
mmap align: 0x40000000
mmap off : 0x0
VHOST_CONFIG: guest memory region size: 0x180000000
guest physical addr: 0x100000000
guest virtual addr: 0x7f2480000000
host virtual addr: 0x7ffdc0000000
mmap addr : 0x7ffd40000000
mmap size : 0x200000000
mmap align: 0x40000000
mmap off : 0x80000000
VHOST_CONFIG: Processing VHOST_USER_SET_MEM_TABLE succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_NUM succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_BASE succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_ADDR succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:0 file:42
VHOST_CONFIG: (0) failed to map desc ring.
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_KICK succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:43
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_CALL succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_NUM succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_BASE succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_ADDR succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:1 file:36
VHOST_CONFIG: (0) failed to map desc ring.
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_KICK succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:44
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_CALL succeeded.
VHOST_CONFIG: read message VHOST_USER_IOTLB_MSG
VHOST_CONFIG: (0) failed to map desc ring.
VHOST_CONFIG: Processing VHOST_USER_IOTLB_MSG succeeded.
VHOST_CONFIG: read message VHOST_USER_IOTLB_MSG
VHOST_CONFIG: (0) failed to map desc ring.
VHOST_CONFIG: Processing VHOST_USER_IOTLB_MSG succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_ENABLE succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_ENABLE succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: negotiated Virtio features: 0x40000000
VHOST_CONFIG: (0) mergeable RX buffers off, virtio 1 off
VHOST_CONFIG: Processing VHOST_USER_SET_FEATURES succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_NUM succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_BASE succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_ADDR succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:2 file:35
VHOST_CONFIG: (0) failed to map desc ring.
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_KICK succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:2 file:37
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_CALL succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_NUM succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_BASE succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_ADDR succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:3 file:39
VHOST_CONFIG: (0) failed to map desc ring.
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_KICK succeeded.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:3 file:45
VHOST_CONFIG: Processing VHOST_USER_SET_VRING_CALL succeeded.
VHOST_CONFIG: read message VHOST_USER_IOTLB_MSG
Thread 9 "vhost-events" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffee7fc400 (LWP 101339)]
0x0000000000e30803 in rte_mempool_default_cache (lcore_id=4294967295,
mp=0x0) at ../lib/mempool/rte_mempool.h:1244
1244 if (mp->cache_size == 0)
Missing separate debuginfos, use: yum debuginfo-install
glibc-2.28-127.el8_3.2.x86_64 libibverbs-29.0-3.el8.x86_64
libnl3-3.5.0-1.el8.x86_64 numactl-libs-2.0.12-11.el8.x86_64
zlib-1.2.11-16.el8_2.x86_64
(gdb) bt full
#0 0x0000000000e30803 in rte_mempool_default_cache
(lcore_id=4294967295, mp=0x0) at ../lib/mempool/rte_mempool.h:1244
No locals.
#1 rte_mempool_get_bulk (n=1, obj_table=0x7fffee7f5e98, mp=0x0) at
../lib/mempool/rte_mempool.h:1533
cache = 0x0
cache = <optimized out>
#2 rte_mempool_get (obj_p=0x7fffee7f5e98, mp=0x0) at
../lib/mempool/rte_mempool.h:1561
No locals.
#3 vhost_user_iotlb_cache_insert (vq=0x1673bff00, iova=10159157248,
uaddr=140733688868864, size=4096, perm=3 '\003') at
../lib/vhost/iotlb.c:164
node = 0x1673c08c0
new_node = 0x1673671f0
ret = 0
#4 0x000000000100d51d in vhost_user_iotlb_msg (pdev=0x7fffee7f92b8,
msg=0x7fffee7f9010, main_fd=31) at ../lib/vhost/vhost_user.c:2409
vq = 0x1673bff00
dev = 0x1673c08c0
imsg = 0x7fffee7f901c
i = 2
vva = 140733688868864
len = 4096
#5 0x000000000100e31e in vhost_user_msg_handler (vid=0, fd=31) at
../lib/vhost/vhost_user.c:2882
dev = 0x1673c08c0
msg = {request = {master = 22, slave = 22}, flags = 9, size =
32, payload = {u64 = 10159157248, state = {index = 1569222656, num =
2}, addr = {index = 1569222656, flags = 2, desc_user_addr = 4096,
used_user_addr = 139800607223808, avail_user_addr =
7907112783476097539, log_guest_addr = 10159256128}, memory = {nregions
= 1569222656, padding = 2, regions = {{guest_phys_addr = 4096,
memory_size = 139800607223808, userspace_addr =
7907112783476097539, mmap_offset = 10159256128}, {guest_phys_addr =
4294967296, memory_size = 6442450944, userspace_addr =
139794743033856,
mmap_offset = 2147483648}, {guest_phys_addr = 0,
memory_size = 0, userspace_addr = 0, mmap_offset = 0},
{guest_phys_addr = 0, memory_size = 0, userspace_addr = 0, mmap_offset
= 4294967296}, {
guest_phys_addr = 0, memory_size = 0, userspace_addr
= 0, mmap_offset = 0}, {guest_phys_addr = 0, memory_size = 0,
userspace_addr = 0, mmap_offset = 0}, {guest_phys_addr = 0,
memory_size = 0,
userspace_addr = 0, mmap_offset = 0},
{guest_phys_addr = 0, memory_size = 4294967296, userspace_addr = 0,
mmap_offset = 3419188017980506112}}}, log = {mmap_size = 10159157248,
mmap_offset = 4096}, iotlb = {iova = 10159157248, size =
4096, uaddr = 139800607223808, perm = 3 '\003', type = 2 '\002'},
crypto_session = {session_id = 10159157248, op_code = 4096,
cipher_algo = 0, cipher_key_len = 3716706304, hash_algo
= 32549, digest_len = 3523936771, auth_key_len = 1841018158, aad_len =
1569321536, op_type = 2 '\002', dir = 0 '\000',
hash_mode = 0 '\000', chaining_dir = 0 '\000', ciphe_key
= 0x100000000 "\356o.\001\003", auth_key = 0x180000000 "",
cipher_key_buf =
"\000\000\000\200$\177\000\000\000\000\000\200", '\000' <repeats 51
times>,
auth_key_buf = '\000' <repeats 12 times>, "\001", '\000'
<repeats 111 times>, "\001", '\000' <repeats 15 times>,
"ces/system/node", '\000' <repeats 209 times>...}, area = {u64 =
10159157248,
size = 4096, offset = 139800607223808}, inflight =
{mmap_size = 10159157248, mmap_offset = 4096, num_queues = 28672,
queue_size = 56712}}, fds = {-1, -1, -1, -1, -1, -1, -1, -1}, fd_num =
0}
vdpa_dev = 0x0
ret = 0
unlock_required = 0
handled = false
request = 22
i = 4
#6 0x0000000000e41f75 in vhost_user_read_cb (connfd=31,
dat=0x6e28510, remove=0x7fffee7f9394) at ../lib/vhost/socket.c:309
conn = 0x6e28510
vsocket = 0x6e1f140
ret = 1
#7 0x0000000000e0f9c4 in fdset_event_dispatch (arg=0x66f9a60
<vhost_user+8192>) at ../lib/vhost/fd_man.c:286
i = 1
pfd = 0x66f9a68 <vhost_user+8200>
pfdentry = 0x66fba88 <vhost_user+16424>
rcb = 0xe41f3c <vhost_user_read_cb>
wcb = 0x0
dat = 0x6e28510
fd = 31
numfds = 2
...
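Frames #0-#3 show rte_mempool_get() being called with mp=0x0 from vhost_user_iotlb_cache_insert(), i.e. the vring's IOTLB mempool was never allocated. Simplified, and assuming the pool is stored in vq->iotlb_pool, the failing call in lib/vhost/iotlb.c is roughly:

	/* vhost_user_iotlb_cache_insert(), simplified: vq->iotlb_pool is
	 * still NULL for this vring, so rte_mempool_get() dereferences a
	 * NULL mempool pointer -- matching mp=0x0 in frames #0-#2 above. */
	ret = rte_mempool_get(vq->iotlb_pool, (void **)&new_node);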
And qemu crashes right after this:
(gdb) bt full
#0 0x000056269a497160 in vhost_device_iotlb_miss
(dev=dev@entry=0x56269d6ed800, iova=10159497216, write=<optimized
out>)
at /usr/src/debug/qemu-kvm-4.2.0-34.module+el8.3.0+10437+1ca0c2ba.5.x86_64/hw/virtio/vhost.c:944
iotlb = <optimized out>
uaddr = <optimized out>
len = <optimized out>
ret = -14
_rcu_read_auto = 0x1
#1 0x000056269a499361 in vhost_backend_handle_iotlb_msg
(imsg=0x7ffd5eddeea0, dev=0x56269d6ed800) at
/usr/src/debug/qemu-kvm-4.2.0-34.module+el8.3.0+10437+1ca0c2ba.5.x86_64/hw/virtio/vhost-backend.c:351
ret = <optimized out>
ret = <optimized out>
#2 vhost_backend_handle_iotlb_msg (dev=dev@entry=0x56269d6ed800,
imsg=imsg@entry=0x7ffd5eddeea0) at
/usr/src/debug/qemu-kvm-4.2.0-34.module+el8.3.0+10437+1ca0c2ba.5.x86_64/hw/virtio/vhost-backend.c:344
ret = 0
#3 0x000056269a499f0b in slave_read (opaque=0x56269d6ed800) at
/usr/src/debug/qemu-kvm-4.2.0-34.module+el8.3.0+10437+1ca0c2ba.5.x86_64/hw/virtio/vhost-user.c:1048
dev = 0x56269d6ed800
u = 0x56269cd3f7b0
hdr = {request = VHOST_USER_GET_FEATURES, flags = 1, size = 32}
payload = {u64 = 10159497216, state = {index = 1569562624, num
= 2}, addr = {index = 1569562624, flags = 2, desc_user_addr = 0,
used_user_addr = 0, avail_user_addr = 259, log_guest_addr = 0}, memory
= {
nregions = 1569562624, padding = 2, regions =
{{guest_phys_addr = 0, memory_size = 0, userspace_addr = 259,
mmap_offset = 0}, {guest_phys_addr = 0, memory_size = 0,
userspace_addr = 0,
mmap_offset = 0}, {guest_phys_addr = 0, memory_size =
0, userspace_addr = 0, mmap_offset = 0}, {guest_phys_addr = 0,
memory_size = 0, userspace_addr = 0, mmap_offset = 0},
{guest_phys_addr = 0,
memory_size = 0, userspace_addr = 0, mmap_offset = 0},
{guest_phys_addr = 0, memory_size = 0, userspace_addr = 0, mmap_offset
= 0}, {guest_phys_addr = 0, memory_size = 0, userspace_addr = 0,
mmap_offset = 0}, {guest_phys_addr = 0, memory_size =
0, userspace_addr = 0, mmap_offset = 0}}}, log = {mmap_size =
10159497216, mmap_offset = 0}, iotlb = {iova = 10159497216, size = 0,
uaddr = 0, perm = 3 '\003', type = 1 '\001'}, config =
{offset = 1569562624, size = 2, flags = 0, region = '\000' <repeats 12
times>, "\003\001", '\000' <repeats 241 times>}, session = {
session_id = 10159497216, session_setup_data = {op_code =
0, cipher_alg = 0, key_len = 0, hash_alg = 0, hash_result_len = 259,
auth_key_len = 0, add_len = 0, op_type = 0 '\000',
direction = 0 '\000', hash_mode = 0 '\000',
alg_chain_order = 0 '\000', cipher_key = 0x0, auth_key = 0x0}, key =
'\000' <repeats 63 times>, auth_key = '\000' <repeats 511 times>},
area = {
u64 = 10159497216, size = 0, offset = 0}, inflight =
{mmap_size = 10159497216, mmap_offset = 0, num_queues = 0, queue_size
= 0}}
size = <optimized out>
ret = 0
iov = {iov_base = 0x7ffd5eddee04, iov_len = 12}
msgh = {msg_name = 0x0, msg_namelen = 0, msg_iov =
0x7ffd5eddee10, msg_iovlen = 1, msg_control = 0x7ffd5eddf120,
msg_controllen = 0, msg_flags = 0}
fd = {-1, -1, -1, -1, -1, -1, -1, -1}
control =
"^\a\022\000\000\000\000\000\226\070\270\t\000\000\000\000\334\345\061\234&V\000\000\000x\v\322.\265\273mP\347\061\234&V\000\000\000\000\000\000\000\000\000"
cmsg = <optimized out>
i = <optimized out>
fdsize = 0
...
On Thu, May 13, 2021 at 5:04 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
> > If you choose to revert, we can ask Red Hat QA to test RC3 without
> > further delay. Please let us know when you consider the options.
> >
>
> If the patch is not good to go as it is I suggest reverting it, as far as I know
> Chenbo will be off for Friday & Monday, so it doesn't leave much time to
> update/test a new version.
Yes, reverting is safer, and the author proposed the same.
On 5/14/2021 9:18 AM, David Marchand wrote:
> On Thu, May 13, 2021 at 5:04 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>>> If you choose to revert, we can ask Red Hat QA to test RC3 without
>>> further delay. Please let us know when you consider the options.
>>>
>>
>> If the patch is not good to go as it is I suggest reverting it, as far as I know
>> Chenbo will be off for Friday & Monday, so it doesn't leave much time to
>> update/test a new version.
>
> Yes, reverting is the safer, and the author proposed the same.
>
ack. Chenbo mentioned this is an optimization, so it should be OK to revert.
Thomas,
Can you revert the patch [1] in the main repo, or do you prefer a patch for it?
[1]
968bbc7e2e50 ("vhost: avoid IOTLB mempool allocation while IOMMU disabled")
14/05/2021 11:09, Ferruh Yigit:
> On 5/14/2021 9:18 AM, David Marchand wrote:
> > On Thu, May 13, 2021 at 5:04 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
> >>> If you choose to revert, we can ask Red Hat QA to test RC3 without
> >>> further delay. Please let us know when you consider the options.
> >>>
> >>
> >> If the patch is not good to go as it is I suggest reverting it, as far as I know
> >> Chenbo will be off for Friday & Monday, so it doesn't leave much time to
> >> update/test a new version.
> >
> > Yes, reverting is the safer, and the author proposed the same.
> >
>
> ack. Chenbo mentioned this is an optimization, so it should be OK to revert.
>
>
> Thomas,
>
> Can you revert the patch [1] in the main repo, or do you prefer a patch for it?
>
> [1]
> 968bbc7e2e50 ("vhost: avoid IOTLB mempool allocation while IOMMU disabled")
It would be better to have a patch with the correct explanation of the issue
and a few acks if possible.
Hi David,
> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Thursday, May 13, 2021 10:12 PM
> To: Xia, Chenbo <chenbo.xia@intel.com>; Maxime Coquelin
> <maxime.coquelin@redhat.com>
> Cc: dev <dev@dpdk.org>; Kevin Traynor <ktraynor@redhat.com>; Pei Zhang
> <pezhang@redhat.com>; Yigit, Ferruh <ferruh.yigit@intel.com>; Thomas
> Monjalon <thomas@monjalon.net>
> Subject: Re: [PATCH] vhost: fix wrong IOTLB initialization
>
> On Thu, May 13, 2021 at 2:38 PM Chenbo Xia <chenbo.xia@intel.com> wrote:
> >
> > This patch fixes an issue of application crash because of vhost iotlb
> > not initialized when virtio has multiqueue enabled.
> >
> > iotlb messages will be sent when some queues are not enabled. If we
> > initialize iotlb in vhost_user_set_vring_num, it could happen that
> > iotlb update comes when iotlb pool of disabled queues are not
> > initialized.
>
> This makes the problem I reproduced disappear at init, but I noticed
> the segfault after restarting testpmd once.
> And a little bit after this, my vm crashed.
Oops.. Maybe there's some env difference. My env works well with the 'restart' test.
After checking the logs you provided: is the segfault still caused by the IOTLB cache
not being initialized? IMHO, based on the message sequence, the cache should already be initialized.
>
> This is not systematic, so I guess there is some condition with how
> the virtio device is initialised in the vm.
>
>
> One question below.
>
>
> Bugzilla ID: 703
>
> > Fixes: 968bbc7e2e50 ("vhost: avoid IOTLB mempool allocation while IOMMU
> disabled")
> >
>
> Reported-by: Pei Zhang <pezhang@redhat.com>
>
> > Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
> > ---
> > lib/vhost/vhost_user.c | 13 +++++++++----
> > 1 file changed, 9 insertions(+), 4 deletions(-)
> >
> > diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
> > index 611ff209e3..ae4df8eb69 100644
> > --- a/lib/vhost/vhost_user.c
> > +++ b/lib/vhost/vhost_user.c
> > @@ -311,6 +311,7 @@ vhost_user_set_features(struct virtio_net **pdev,
> struct VhostUserMsg *msg,
> > uint64_t features = msg->payload.u64;
> > uint64_t vhost_features = 0;
> > struct rte_vdpa_device *vdpa_dev;
> > + uint32_t i;
> >
> > if (validate_msg_fds(msg, 0) != 0)
> > return RTE_VHOST_MSG_RESULT_ERR;
> > @@ -389,6 +390,14 @@ vhost_user_set_features(struct virtio_net **pdev,
> struct VhostUserMsg *msg,
> > vdpa_dev->ops->set_features(dev->vid);
> >
> > dev->flags &= ~VIRTIO_DEV_FEATURES_FAILED;
> > +
> > + if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) {
> > + for (i = 0; i < dev->nr_vring; i++) {
>
> I don't know the vhost-user protocol.
> At this point of the device init/life, are we sure nr_vring is set to
> the max number of vring?
> The logs I have tend to say it is the case, but is there a guarantee
> in the protocol?
I think you are correct. Based on the current QEMU implementation, nr_vring should be
the correct value (correct me if there are corner cases). But I don't think there
is a guarantee, as the vhost-user protocol does not say that 'SET_FEATURES' comes
after the per-vring messages. @Maxime Coquelin Am I missing anything?
>
>
> Another way to fix would be to allocate on the first
> VHOST_USER_IOTLB_MSG message received for a vring.
Emmm.. Could there be a case where some hypervisor initializes a queue only after the first
IOTLB msg? If there is, we may also need to check whether nr_vring has changed / a new
queue has been initialized.
And David, thanks for testing and writing the revert patch for me during my leave.
That's much appreciated!
Thanks,
Chenbo
>
>
> > + if (vhost_user_iotlb_init(dev, i))
> > + return RTE_VHOST_MSG_RESULT_ERR;
> > + }
> > + }
> > +
> > return RTE_VHOST_MSG_RESULT_OK;
> > }
> >
>
>
> --
> David Marchand
@@ -311,6 +311,7 @@ vhost_user_set_features(struct virtio_net **pdev, struct VhostUserMsg *msg,
uint64_t features = msg->payload.u64;
uint64_t vhost_features = 0;
struct rte_vdpa_device *vdpa_dev;
+ uint32_t i;
if (validate_msg_fds(msg, 0) != 0)
return RTE_VHOST_MSG_RESULT_ERR;
@@ -389,6 +390,14 @@ vhost_user_set_features(struct virtio_net **pdev, struct VhostUserMsg *msg,
vdpa_dev->ops->set_features(dev->vid);
dev->flags &= ~VIRTIO_DEV_FEATURES_FAILED;
+
+ if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) {
+ for (i = 0; i < dev->nr_vring; i++) {
+ if (vhost_user_iotlb_init(dev, i))
+ return RTE_VHOST_MSG_RESULT_ERR;
+ }
+ }
+
return RTE_VHOST_MSG_RESULT_OK;
}
@@ -469,10 +478,6 @@ vhost_user_set_vring_num(struct virtio_net **pdev,
return RTE_VHOST_MSG_RESULT_ERR;
}
- if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) {
- if (vhost_user_iotlb_init(dev, msg->payload.state.index))
- return RTE_VHOST_MSG_RESULT_ERR;
- }
return RTE_VHOST_MSG_RESULT_OK;
}