From patchwork Tue Dec  6 15:05:09 2022
X-Patchwork-Submitter: Mike Pattrick
X-Patchwork-Id: 120497
From: Mike Pattrick
To: Maxime Coquelin, Chenbo Xia
Cc: dev@dpdk.org, Mike Pattrick
Subject: [PATCH v2] vhost: exclude VM hugepages from coredumps
Date: Tue, 6 Dec 2022 10:05:09 -0500
Message-Id: <20221206150509.772408-1-mkp@redhat.com>
List-Id: DPDK patches and discussions

Currently, if an application that uses the vhost library includes
shared hugepages in its coredumps, the coredump will be larger than
expected and will contain unneeded virtual machine memory.

This patch marks all vhost hugepages as DONTDUMP, except for select
pages used by DPDK.
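[Editor's note, not part of the patch: the mechanism used below is the
Linux madvise(2) call, where MADV_DONTDUMP excludes a mapping from
coredumps and MADV_DODUMP opts it back in. A minimal standalone sketch
of that toggle, assuming a Linux host; the anonymous mapping and its
2 MB size merely stand in for a guest hugepage:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 2 * 1024 * 1024; /* stand-in for one 2 MB hugepage */
        void *addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (addr == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* Exclude the region from any future coredump... */
        if (madvise(addr, len, MADV_DONTDUMP) == -1)
            fprintf(stderr, "madvise(MADV_DONTDUMP): %s\n", strerror(errno));

        /* ...and later opt selected pages back in. */
        if (madvise(addr, len, MADV_DODUMP) == -1)
            fprintf(stderr, "madvise(MADV_DODUMP): %s\n", strerror(errno));

        munmap(addr, len);
        return 0;
    }

The patch wraps exactly this toggle in a mem_set_dump() helper and
compiles it out on platforms without MADV_DONTDUMP.]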
Signed-off-by: Mike Pattrick
---
v2:
 * Removed warning on unsupported platforms
---
 lib/vhost/iotlb.c      |  5 +++++
 lib/vhost/vhost.h      | 12 ++++++++++++
 lib/vhost/vhost_user.c | 10 ++++++++++
 3 files changed, 27 insertions(+)

diff --git a/lib/vhost/iotlb.c b/lib/vhost/iotlb.c
index 6a729e8804..2f89f88817 100644
--- a/lib/vhost/iotlb.c
+++ b/lib/vhost/iotlb.c
@@ -149,6 +149,7 @@ vhost_user_iotlb_cache_remove_all(struct vhost_virtqueue *vq)
 	rte_rwlock_write_lock(&vq->iotlb_lock);
 
 	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
+		mem_set_dump((void *)node->uaddr, node->size, true);
 		TAILQ_REMOVE(&vq->iotlb_list, node, next);
 		vhost_user_iotlb_pool_put(vq, node);
 	}
@@ -170,6 +171,7 @@ vhost_user_iotlb_cache_random_evict(struct vhost_virtqueue *vq)
 
 	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
 		if (!entry_idx) {
+			mem_set_dump((void *)node->uaddr, node->size, true);
 			TAILQ_REMOVE(&vq->iotlb_list, node, next);
 			vhost_user_iotlb_pool_put(vq, node);
 			vq->iotlb_cache_nr--;
@@ -222,12 +224,14 @@ vhost_user_iotlb_cache_insert(struct virtio_net *dev, struct vhost_virtqueue *vq
 			vhost_user_iotlb_pool_put(vq, new_node);
 			goto unlock;
 		} else if (node->iova > new_node->iova) {
+			mem_set_dump((void *)node->uaddr, node->size, true);
 			TAILQ_INSERT_BEFORE(node, new_node, next);
 			vq->iotlb_cache_nr++;
 			goto unlock;
 		}
 	}
 
+	mem_set_dump((void *)node->uaddr, node->size, true);
 	TAILQ_INSERT_TAIL(&vq->iotlb_list, new_node, next);
 	vq->iotlb_cache_nr++;
 
@@ -255,6 +259,7 @@ vhost_user_iotlb_cache_remove(struct vhost_virtqueue *vq,
 			break;
 
 		if (iova < node->iova + node->size) {
+			mem_set_dump((void *)node->uaddr, node->size, true);
 			TAILQ_REMOVE(&vq->iotlb_list, node, next);
 			vhost_user_iotlb_pool_put(vq, node);
 			vq->iotlb_cache_nr--;
diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index ef211ed519..1f913803f6 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -13,6 +13,7 @@
 #include <linux/virtio_net.h>
 #include <sys/socket.h>
 #include <linux/if.h>
+#include <sys/mman.h>
 
 #include <rte_log.h>
 #include <rte_ether.h>
@@ -987,4 +988,15 @@ mbuf_is_consumed(struct rte_mbuf *m)
 
 	return true;
 }
+
+static __rte_always_inline void
+mem_set_dump(__rte_unused void *ptr, __rte_unused size_t size, __rte_unused bool enable)
+{
+#ifdef MADV_DONTDUMP
+	if (madvise(ptr, size, enable ? MADV_DODUMP : MADV_DONTDUMP) == -1) {
+		rte_log(RTE_LOG_INFO, vhost_config_log_level,
+			"VHOST_CONFIG: could not set coredump preference (%s).\n", strerror(errno));
+	}
+#endif
+}
 #endif /* _VHOST_NET_CDEV_H_ */
diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
index 9902ae9944..8f33d5f4d9 100644
--- a/lib/vhost/vhost_user.c
+++ b/lib/vhost/vhost_user.c
@@ -793,6 +793,9 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
 			return;
 		}
 
+		mem_set_dump(vq->desc_packed, len, true);
+		mem_set_dump(vq->driver_event, len, true);
+		mem_set_dump(vq->device_event, len, true);
 		vq->access_ok = true;
 		return;
 	}
@@ -846,6 +849,9 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
 			"some packets maybe resent for Tx and dropped for Rx\n");
 	}
 
+	mem_set_dump(vq->desc, len, true);
+	mem_set_dump(vq->avail, len, true);
+	mem_set_dump(vq->used, len, true);
 	vq->access_ok = true;
 
 	VHOST_LOG_CONFIG(dev->ifname, DEBUG, "mapped address desc: %p\n", vq->desc);
@@ -1224,6 +1230,7 @@ vhost_user_mmap_region(struct virtio_net *dev,
 	region->mmap_addr = mmap_addr;
 	region->mmap_size = mmap_size;
 	region->host_user_addr = (uint64_t)(uintptr_t)mmap_addr + mmap_offset;
+	mem_set_dump(mmap_addr, mmap_size, false);
 
 	if (dev->async_copy) {
 		if (add_guest_pages(dev, region, alignment) < 0) {
@@ -1528,6 +1535,7 @@ inflight_mem_alloc(struct virtio_net *dev, const char *name, size_t size, int *f
 		return NULL;
 	}
 
+	mem_set_dump(ptr, size, false);
 	*fd = mfd;
 	return ptr;
 }
@@ -1736,6 +1744,7 @@ vhost_user_set_inflight_fd(struct virtio_net **pdev,
 		dev->inflight_info->fd = -1;
 	}
 
+	mem_set_dump(addr, mmap_size, false);
 	dev->inflight_info->fd = fd;
 	dev->inflight_info->addr = addr;
 	dev->inflight_info->size = mmap_size;
@@ -2283,6 +2292,7 @@ vhost_user_set_log_base(struct virtio_net **pdev,
 	dev->log_addr = (uint64_t)(uintptr_t)addr;
 	dev->log_base = dev->log_addr + off;
 	dev->log_size = size;
+	mem_set_dump(addr, size, false);
 
 	for (i = 0; i < dev->nr_vring; i++) {
 		struct vhost_virtqueue *vq = dev->virtqueue[i];
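
[Editor's note, not part of the patch: one way to check the effect is
to inspect the VmFlags field in /proc/<pid>/smaps, where the kernel
reports a "dd" flag for ranges marked MADV_DONTDUMP (see proc(5)). A
rough self-test sketch along those lines, assuming a Linux host:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Count mappings of the current process whose VmFlags line
         * carries the "dd" bit, i.e. excluded from coredumps. */
        FILE *f = fopen("/proc/self/smaps", "r");
        char line[512];
        int dontdump = 0;

        if (f == NULL) {
            perror("fopen");
            return 1;
        }
        while (fgets(line, sizeof(line), f) != NULL) {
            if (strncmp(line, "VmFlags:", 8) == 0 && strstr(line, " dd") != NULL)
                dontdump++;
        }
        fclose(f);
        printf("%d mappings excluded from coredumps\n", dontdump);
        return 0;
    }

Run against a vhost-user application after guest memory is mapped, the
regions passed to mem_set_dump(..., false) should show up as "dd".]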