From patchwork Wed Jan 31 19:53:08 2024
X-Patchwork-Submitter: Maxime Coquelin
X-Patchwork-Id: 136239
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: dev@dpdk.org, chenbox@nvidia.com, david.marchand@redhat.com,
	bnemeth@redhat.com, echaudro@redhat.com
Cc: Maxime Coquelin <maxime.coquelin@redhat.com>, stable@dpdk.org
Subject: [PATCH v2 1/2] vhost: fix memory leak in Virtio Tx split path
Date: Wed, 31 Jan 2024 20:53:08 +0100
Message-ID: <20240131195309.2808015-1-maxime.coquelin@redhat.com>

When vIOMMU is enabled and the Virtio device is bound to a kernel
driver in the guest, rte_vhost_dequeue_burst() will often return early
because of IOTLB misses.

This patch fixes an mbuf leak occurring in this case.
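
For context, a minimal caller-side sketch (illustrative only, not part of
this patch; the helper name drain_vhost_txq and the burst size are made up).
It shows that the leaked mbufs are allocated inside the vhost library during
rte_vhost_dequeue_burst(), so the application only ever sees, and can only
ever free, the packets that are actually returned:

#include <stdint.h>

#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_vhost.h>

#define PKT_BURST 32

static void
drain_vhost_txq(int vid, uint16_t queue_id, struct rte_mempool *mbuf_pool)
{
	struct rte_mbuf *pkts[PKT_BURST];
	uint16_t nb_rx, i;

	/* May return fewer than PKT_BURST packets, e.g. on an IOTLB miss. */
	nb_rx = rte_vhost_dequeue_burst(vid, queue_id, mbuf_pool,
			pkts, PKT_BURST);

	for (i = 0; i < nb_rx; i++) {
		/* ... process pkts[i] ... */
		rte_pktmbuf_free(pkts[i]);
	}
}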
Fixes: 242695f6122a ("vhost: allocate and free packets in bulk in Tx split")
Cc: stable@dpdk.org

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
---
Changes in v2:
==============
- Fix descriptors leak (David)
- Rebased on top of next-virtio
---
 lib/vhost/virtio_net.c | 24 ++++++------------------
 1 file changed, 6 insertions(+), 18 deletions(-)

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index c738b7edc9..9951842b9f 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -3104,7 +3104,6 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 {
 	uint16_t i;
 	uint16_t avail_entries;
-	uint16_t dropped = 0;
 	static bool allocerr_warned;
 
 	/*
@@ -3143,11 +3142,8 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 
 		update_shadow_used_ring_split(vq, head_idx, 0);
 
-		if (unlikely(buf_len <= dev->vhost_hlen)) {
-			dropped += 1;
-			i++;
+		if (unlikely(buf_len <= dev->vhost_hlen))
 			break;
-		}
 
 		buf_len -= dev->vhost_hlen;
 
@@ -3164,8 +3160,6 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 					buf_len, mbuf_pool->name);
 				allocerr_warned = true;
 			}
-			dropped += 1;
-			i++;
 			break;
 		}
 
@@ -3176,27 +3170,21 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 				VHOST_DATA_LOG(dev->ifname, ERR, "failed to copy desc to mbuf.");
 				allocerr_warned = true;
 			}
-			dropped += 1;
-			i++;
 			break;
 		}
-
 	}
 
-	if (dropped)
-		rte_pktmbuf_free_bulk(&pkts[i - 1], count - i + 1);
+	if (unlikely(count != i))
+		rte_pktmbuf_free_bulk(&pkts[i], count - i);
 
-	vq->last_avail_idx += i;
-
-	do_data_copy_dequeue(vq);
-	if (unlikely(i < count))
-		vq->shadow_used_idx = i;
 	if (likely(vq->shadow_used_idx)) {
+		vq->last_avail_idx += vq->shadow_used_idx;
+		do_data_copy_dequeue(vq);
 		flush_shadow_used_ring_split(dev, vq);
 		vhost_vring_call_split(dev, vq);
 	}
 
-	return (i - dropped);
+	return i;
 }
 
 __rte_noinline
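
As a side note, the fixed function follows the usual bulk-allocate /
partial-fill / free-the-rest pattern. The sketch below only illustrates that
pattern (fill_burst() and produce_packet() are hypothetical helpers, not
DPDK APIs); the authoritative logic is the diff above:

#include <stdbool.h>
#include <stdint.h>

#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Hypothetical stand-in for the per-packet work that may stop early
 * (in virtio_dev_tx_split(): descriptor parsing hitting an IOTLB miss). */
static bool
produce_packet(struct rte_mbuf *m)
{
	return rte_pktmbuf_append(m, 64) != NULL;
}

static uint16_t
fill_burst(struct rte_mempool *mp, struct rte_mbuf **pkts, uint16_t count)
{
	uint16_t i;

	/* Allocate the whole burst up front, as virtio_dev_tx_split() does. */
	if (rte_pktmbuf_alloc_bulk(mp, pkts, count) < 0)
		return 0;

	for (i = 0; i < count; i++)
		if (!produce_packet(pkts[i]))
			break;

	/* Packets not handed back to the caller must be released here,
	 * otherwise the mbufs allocated above would leak. */
	if (i != count)
		rte_pktmbuf_free_bulk(&pkts[i], count - i);

	return i;
}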