[1/2] vhost: discard too small descriptor chains

Message ID 20220829150936.1455069-1-david.marchand@redhat.com (mailing list archive)
State Accepted, archived
Delegated to: Maxime Coquelin
Series [1/2] vhost: discard too small descriptor chains

Checks

Context          Check     Description
ci/checkpatch    success   coding style OK
ci/iol-testing   warning   apply patch failure

Commit Message

David Marchand Aug. 29, 2022, 3:09 p.m. UTC
  From: Maxime Coquelin <maxime.coquelin@redhat.com>

This patch discards descriptor chains which are smaller
than or equal to the Virtio-net header size.

Indeed, such descriptor chain sizes mean there is no
packet data.

This patch also has the advantage of requesting the exact
packet sizes for the mbufs.
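
As an illustration of the pattern the patch applies in each dequeue path, here is
a minimal sketch (simplified, hypothetical types; not the actual vhost code) of the
check running before the mbuf is prepared, with the header length stripped from the
length that ends up being requested:

#include <stdint.h>

/* Hypothetical, simplified stand-in for the vhost device structure. */
struct example_dev {
	uint32_t vhost_hlen;	/* negotiated Virtio-net header size */
};

/*
 * Return the payload length to request for the mbuf, or 0 if the chain
 * must be discarded because it carries no packet data.
 */
static inline uint32_t
chain_payload_len(const struct example_dev *dev, uint32_t buf_len)
{
	/* A chain no larger than the header holds no packet data: drop it. */
	if (buf_len <= dev->vhost_hlen)
		return 0;

	/* Request an mbuf sized for the packet data only, header excluded. */
	return buf_len - dev->vhost_hlen;
}
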

CVE-2022-2132
Fixes: 62250c1d0978 ("vhost: extract split ring handling from Rx and Tx functions")
Fixes: c3ff0ac70acb ("vhost: improve performance by supporting large buffer")
Fixes: 84d5204310d7 ("vhost: support async dequeue for split ring")
Cc: stable@dpdk.org

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
---
 lib/vhost/virtio_net.c | 26 ++++++++++++++++++++++++--
 1 file changed, 24 insertions(+), 2 deletions(-)
  

Patch

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 35fa4670fd..757d8dee17 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -2677,8 +2677,10 @@  desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	buf_iova = buf_vec[vec_idx].buf_iova;
 	buf_len = buf_vec[vec_idx].buf_len;
 
-	if (unlikely(buf_len < dev->vhost_hlen && nr_vec <= 1))
-		return -1;
+	/*
+	 * The caller has checked the descriptors chain is larger than the
+	 * header size.
+	 */
 
 	if (virtio_net_with_host_offload(dev)) {
 		if (unlikely(buf_len < sizeof(struct virtio_net_hdr))) {
@@ -2922,6 +2924,14 @@  virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 
 		update_shadow_used_ring_split(vq, head_idx, 0);
 
+		if (unlikely(buf_len <= dev->vhost_hlen)) {
+			dropped += 1;
+			i++;
+			break;
+		}
+
+		buf_len -= dev->vhost_hlen;
+
 		err = virtio_dev_pktmbuf_prep(dev, pkts[i], buf_len);
 		if (unlikely(err)) {
 			/*
@@ -3124,6 +3134,11 @@  vhost_dequeue_single_packed(struct virtio_net *dev,
 					 VHOST_ACCESS_RO) < 0))
 		return -1;
 
+	if (unlikely(buf_len <= dev->vhost_hlen))
+		return -1;
+
+	buf_len -= dev->vhost_hlen;
+
 	if (unlikely(virtio_dev_pktmbuf_prep(dev, pkts, buf_len))) {
 		if (!allocerr_warned) {
 			VHOST_LOG_DATA(dev->ifname, ERR,
@@ -3448,6 +3463,13 @@  virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			break;
 		}
 
+		if (unlikely(buf_len <= dev->vhost_hlen)) {
+			dropped = true;
+			break;
+		}
+
+		buf_len -= dev->vhost_hlen;
+
 		err = virtio_dev_pktmbuf_prep(dev, pkt, buf_len);
 		if (unlikely(err)) {
 			/**
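
For a concrete feel of the boundary condition, here is a standalone sketch (an
illustration only, assuming the 12-byte Virtio-net header used when mergeable RX
buffers are negotiated) of how chain lengths map to discarded chains versus the
payload sizes requested for the mbufs:

#include <stdio.h>

/* Assumed header size: 12 bytes, i.e. struct virtio_net_hdr_mrg_rxbuf. */
#define HDR_LEN 12u

int main(void)
{
	unsigned int chain_lens[] = { 8, 12, 13, 1526 };
	size_t i;

	for (i = 0; i < sizeof(chain_lens) / sizeof(chain_lens[0]); i++) {
		unsigned int len = chain_lens[i];

		if (len <= HDR_LEN)
			printf("chain of %u bytes: discarded, no packet data\n", len);
		else
			printf("chain of %u bytes: mbuf prepared for %u payload bytes\n",
			       len, len - HDR_LEN);
	}
	return 0;
}
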