vhost: fix potential buffer overflow
Commit Message
In the vhost datapath, a descriptor's length is typically used in two
coherent operations: first for address translation, then for the memory
transaction from guest to host. The interval between the two steps gives
a malicious guest a window in which it can change the descriptor length
after vhost has calculated the buffer size, which may lead to a buffer
overflow on the vhost side. This potential risk can be eliminated by
reading the descriptor length only once.
Fixes: 1be4ebb1c464 ("vhost: support indirect descriptor in mergeable Rx")
Fixes: 2f3225a7d69b ("vhost: add vector filling support for packed ring")
Fixes: 75ed51697820 ("vhost: add packed ring batch dequeue")
Signed-off-by: Marvin Liu <yong.liu@intel.com>
Cc: stable@dpdk.org
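The race described above can be reduced to a small sketch: snapshot the shared length field into a local variable so the bounds check and the copy agree. This is an illustrative reduction, not the actual DPDK code; `struct desc`, `copy_desc_safe`, and the flat memory layout are hypothetical stand-ins for the vring descriptor and mapping helpers.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Minimal stand-in for a vring descriptor living in guest-shared memory.
 * Field names mirror the concept, not the exact DPDK definitions. */
struct desc {
	uint64_t addr;
	uint32_t len;
};

/*
 * Safe pattern: read d->len exactly once into dlen, then use that
 * snapshot for both the size check and the copy. Reading d->len a
 * second time would let a guest shrink-then-grow it between the
 * check and the memcpy, overflowing dst.
 */
static int copy_desc_safe(const struct desc *d, const uint8_t *guest_mem,
			  uint8_t *dst, size_t dst_cap)
{
	uint32_t dlen = d->len;	/* single read: check and copy agree */

	if (dlen > dst_cap)
		return -1;	/* reject before touching the buffer */
	memcpy(dst, guest_mem + d->addr, dlen);
	return (int)dlen;
}
```

In the real code the snapshot additionally has to be the value passed to the address-translation step, which is exactly what the `dlen` local in the patch below does.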
Comments
Hi Marvin,
On 2/26/21 8:33 AM, Marvin Liu wrote:
> In the vhost datapath, a descriptor's length is typically used in two
> coherent operations: first for address translation, then for the memory
> transaction from guest to host. The interval between the two steps gives
> a malicious guest a window in which it can change the descriptor length
> after vhost has calculated the buffer size, which may lead to a buffer
> overflow on the vhost side. This potential risk can be eliminated by
> reading the descriptor length only once.
>
> Fixes: 1be4ebb1c464 ("vhost: support indirect descriptor in mergeable Rx")
> Fixes: 2f3225a7d69b ("vhost: add vector filling support for packed ring")
> Fixes: 75ed51697820 ("vhost: add packed ring batch dequeue")
As the offending commits have been introduced in different LTS, I would
prefer the patch to be split. It will make it easier to backport later.
> Signed-off-by: Marvin Liu <yong.liu@intel.com>
> Cc: stable@dpdk.org
>
> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> index 583bf379c6..0a7d008a91 100644
> --- a/lib/librte_vhost/virtio_net.c
> +++ b/lib/librte_vhost/virtio_net.c
> @@ -548,10 +548,11 @@ fill_vec_buf_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
> return -1;
> }
>
> - len += descs[idx].len;
> + dlen = descs[idx].len;
> + len += dlen;
>
> if (unlikely(map_one_desc(dev, vq, buf_vec, &vec_id,
> - descs[idx].addr, descs[idx].len,
> + descs[idx].addr, dlen,
> perm))) {
> free_ind_table(idesc);
> return -1;
> @@ -668,9 +669,10 @@ fill_vec_buf_packed_indirect(struct virtio_net *dev,
> return -1;
> }
>
> - *len += descs[i].len;
> + dlen = descs[i].len;
> + *len += dlen;
> if (unlikely(map_one_desc(dev, vq, buf_vec, &vec_id,
> - descs[i].addr, descs[i].len,
> + descs[i].addr, dlen,
> perm)))
> return -1;
> }
> @@ -691,6 +693,7 @@ fill_vec_buf_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
> bool wrap_counter = vq->avail_wrap_counter;
> struct vring_packed_desc *descs = vq->desc_packed;
> uint16_t vec_id = *vec_idx;
> + uint64_t dlen;
>
> if (avail_idx < vq->last_avail_idx)
> wrap_counter ^= 1;
> @@ -723,11 +726,12 @@ fill_vec_buf_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
> len, perm) < 0))
> return -1;
> } else {
> - *len += descs[avail_idx].len;
> + dlen = descs[avail_idx].len;
> + *len += dlen;
>
> if (unlikely(map_one_desc(dev, vq, buf_vec, &vec_id,
> descs[avail_idx].addr,
> - descs[avail_idx].len,
> + dlen,
> perm)))
> return -1;
> }
> @@ -2314,7 +2318,7 @@ vhost_reserve_avail_batch_packed(struct virtio_net *dev,
> }
>
> vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
> - pkts[i]->pkt_len = descs[avail_idx + i].len - buf_offset;
> + pkts[i]->pkt_len = lens[i] - buf_offset;
> pkts[i]->data_len = pkts[i]->pkt_len;
> ids[i] = descs[avail_idx + i].id;
> }
>
Other than that, the patch looks valid to me.
With the split done:
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Thanks,
Maxime
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, March 24, 2021 4:56 PM
> To: Liu, Yong <yong.liu@intel.com>; Xia, Chenbo <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; stable@dpdk.org
> Subject: Re: [PATCH] vhost: fix potential buffer overflow
>
> Hi Marvin,
>
> On 2/26/21 8:33 AM, Marvin Liu wrote:
> > In the vhost datapath, a descriptor's length is typically used in two
> > coherent operations: first for address translation, then for the memory
> > transaction from guest to host. The interval between the two steps gives
> > a malicious guest a window in which it can change the descriptor length
> > after vhost has calculated the buffer size, which may lead to a buffer
> > overflow on the vhost side. This potential risk can be eliminated by
> > reading the descriptor length only once.
> >
> > Fixes: 1be4ebb1c464 ("vhost: support indirect descriptor in mergeable Rx")
> > Fixes: 2f3225a7d69b ("vhost: add vector filling support for packed ring")
> > Fixes: 75ed51697820 ("vhost: add packed ring batch dequeue")
>
> As the offending commits have been introduced in different LTS, I would
> prefer the patch to be split. It will make it easier to backport later.
>
Maxime,
Thanks for your suggestion. I will split this patch into three parts, as the offending commits were spread over three different LTS releases.
Regards,
Marvin
> > Signed-off-by: Marvin Liu <yong.liu@intel.com>
> > Cc: stable@dpdk.org
> >
> > diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> > index 583bf379c6..0a7d008a91 100644
> > --- a/lib/librte_vhost/virtio_net.c
> > +++ b/lib/librte_vhost/virtio_net.c
> > @@ -548,10 +548,11 @@ fill_vec_buf_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
> > return -1;
> > }
> >
> > - len += descs[idx].len;
> > + dlen = descs[idx].len;
> > + len += dlen;
> >
> > if (unlikely(map_one_desc(dev, vq, buf_vec, &vec_id,
> > -				descs[idx].addr, descs[idx].len,
> > + descs[idx].addr, dlen,
> > perm))) {
> > free_ind_table(idesc);
> > return -1;
> > @@ -668,9 +669,10 @@ fill_vec_buf_packed_indirect(struct virtio_net *dev,
> > return -1;
> > }
> >
> > - *len += descs[i].len;
> > + dlen = descs[i].len;
> > + *len += dlen;
> > if (unlikely(map_one_desc(dev, vq, buf_vec, &vec_id,
> > - descs[i].addr, descs[i].len,
> > + descs[i].addr, dlen,
> > perm)))
> > return -1;
> > }
> > @@ -691,6 +693,7 @@ fill_vec_buf_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
> > bool wrap_counter = vq->avail_wrap_counter;
> > struct vring_packed_desc *descs = vq->desc_packed;
> > uint16_t vec_id = *vec_idx;
> > + uint64_t dlen;
> >
> > if (avail_idx < vq->last_avail_idx)
> > wrap_counter ^= 1;
> > @@ -723,11 +726,12 @@ fill_vec_buf_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
> > len, perm) < 0))
> > return -1;
> > } else {
> > - *len += descs[avail_idx].len;
> > + dlen = descs[avail_idx].len;
> > + *len += dlen;
> >
> > if (unlikely(map_one_desc(dev, vq, buf_vec, &vec_id,
> > descs[avail_idx].addr,
> > - descs[avail_idx].len,
> > + dlen,
> > perm)))
> > return -1;
> > }
> > @@ -2314,7 +2318,7 @@ vhost_reserve_avail_batch_packed(struct virtio_net *dev,
> > }
> >
> > vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
> > - pkts[i]->pkt_len = descs[avail_idx + i].len - buf_offset;
> > + pkts[i]->pkt_len = lens[i] - buf_offset;
> > pkts[i]->data_len = pkts[i]->pkt_len;
> > ids[i] = descs[avail_idx + i].id;
> > }
> >
>
> Other than that, the patch looks valid to me.
> With the split done:
>
> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>
> Thanks,
> Maxime
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -548,10 +548,11 @@ fill_vec_buf_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			return -1;
 		}
 
-		len += descs[idx].len;
+		dlen = descs[idx].len;
+		len += dlen;
 
 		if (unlikely(map_one_desc(dev, vq, buf_vec, &vec_id,
-						descs[idx].addr, descs[idx].len,
+						descs[idx].addr, dlen,
 						perm))) {
 			free_ind_table(idesc);
 			return -1;
@@ -668,9 +669,10 @@ fill_vec_buf_packed_indirect(struct virtio_net *dev,
 			return -1;
 		}
 
-		*len += descs[i].len;
+		dlen = descs[i].len;
+		*len += dlen;
 		if (unlikely(map_one_desc(dev, vq, buf_vec, &vec_id,
-						descs[i].addr, descs[i].len,
+						descs[i].addr, dlen,
 						perm)))
 			return -1;
 	}
@@ -691,6 +693,7 @@ fill_vec_buf_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	bool wrap_counter = vq->avail_wrap_counter;
 	struct vring_packed_desc *descs = vq->desc_packed;
 	uint16_t vec_id = *vec_idx;
+	uint64_t dlen;
 
 	if (avail_idx < vq->last_avail_idx)
 		wrap_counter ^= 1;
@@ -723,11 +726,12 @@ fill_vec_buf_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
 						len, perm) < 0))
 				return -1;
 		} else {
-			*len += descs[avail_idx].len;
+			dlen = descs[avail_idx].len;
+			*len += dlen;
 
 			if (unlikely(map_one_desc(dev, vq, buf_vec, &vec_id,
 							descs[avail_idx].addr,
-							descs[avail_idx].len,
+							dlen,
 							perm)))
 				return -1;
 		}
@@ -2314,7 +2318,7 @@ vhost_reserve_avail_batch_packed(struct virtio_net *dev,
 	}
 
 	vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
-		pkts[i]->pkt_len = descs[avail_idx + i].len - buf_offset;
+		pkts[i]->pkt_len = lens[i] - buf_offset;
 		pkts[i]->data_len = pkts[i]->pkt_len;
 		ids[i] = descs[avail_idx + i].id;
 	}