From patchwork Wed Dec 20 15:36:03 2023
X-Patchwork-Submitter: David Marchand
X-Patchwork-Id: 135410
X-Patchwork-Delegate: david.marchand@redhat.com
From: David Marchand
To: dev@dpdk.org
Cc: thomas@monjalon.net, ferruh.yigit@amd.com, bruce.richardson@intel.com,
 stephen@networkplumber.org, mb@smartsharesystems.com, Maxime Coquelin,
 Chenbo Xia
Subject: [PATCH v5 10/13] vhost: improve log for memory dumping configuration
Date: Wed, 20 Dec 2023 16:36:03 +0100
Message-ID: <20231220153607.718606-11-david.marchand@redhat.com>
In-Reply-To: <20231220153607.718606-1-david.marchand@redhat.com>
References: <20231117131824.1977792-1-david.marchand@redhat.com>
 <20231220153607.718606-1-david.marchand@redhat.com>
List-Id: DPDK patches and discussions

Add the device name as a prefix of the logs associated with madvise() calls.
Signed-off-by: David Marchand
Acked-by: Stephen Hemminger
---
 lib/vhost/iotlb.c      | 18 +++++++++---------
 lib/vhost/vhost.h      |  2 +-
 lib/vhost/vhost_user.c | 26 +++++++++++++-------------
 3 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/lib/vhost/iotlb.c b/lib/vhost/iotlb.c
index 87ac0e5126..10ab77262e 100644
--- a/lib/vhost/iotlb.c
+++ b/lib/vhost/iotlb.c
@@ -54,16 +54,16 @@ vhost_user_iotlb_share_page(struct vhost_iotlb_entry *a, struct vhost_iotlb_entr
 }
 
 static void
-vhost_user_iotlb_set_dump(struct vhost_iotlb_entry *node)
+vhost_user_iotlb_set_dump(struct virtio_net *dev, struct vhost_iotlb_entry *node)
 {
 	uint64_t start;
 
 	start = node->uaddr + node->uoffset;
-	mem_set_dump((void *)(uintptr_t)start, node->size, true, RTE_BIT64(node->page_shift));
+	mem_set_dump(dev, (void *)(uintptr_t)start, node->size, true, RTE_BIT64(node->page_shift));
 }
 
 static void
-vhost_user_iotlb_clear_dump(struct vhost_iotlb_entry *node,
+vhost_user_iotlb_clear_dump(struct virtio_net *dev, struct vhost_iotlb_entry *node,
 		struct vhost_iotlb_entry *prev, struct vhost_iotlb_entry *next)
 {
 	uint64_t start, end;
@@ -80,7 +80,7 @@ vhost_user_iotlb_clear_dump(struct vhost_iotlb_entry *node,
 		end = RTE_ALIGN_FLOOR(end, RTE_BIT64(node->page_shift));
 
 	if (end > start)
-		mem_set_dump((void *)(uintptr_t)start, end - start, false,
+		mem_set_dump(dev, (void *)(uintptr_t)start, end - start, false,
 			RTE_BIT64(node->page_shift));
 }
 
@@ -204,7 +204,7 @@ vhost_user_iotlb_cache_remove_all(struct virtio_net *dev)
 	vhost_user_iotlb_wr_lock_all(dev);
 
 	RTE_TAILQ_FOREACH_SAFE(node, &dev->iotlb_list, next, temp_node) {
-		vhost_user_iotlb_clear_dump(node, NULL, NULL);
+		vhost_user_iotlb_clear_dump(dev, node, NULL, NULL);
 
 		TAILQ_REMOVE(&dev->iotlb_list, node, next);
 		vhost_user_iotlb_remove_notify(dev, node);
@@ -230,7 +230,7 @@ vhost_user_iotlb_cache_random_evict(struct virtio_net *dev)
 		if (!entry_idx) {
 			struct vhost_iotlb_entry *next_node = RTE_TAILQ_NEXT(node, next);
 
-			vhost_user_iotlb_clear_dump(node, prev_node, next_node);
+			vhost_user_iotlb_clear_dump(dev, node, prev_node, next_node);
 
 			TAILQ_REMOVE(&dev->iotlb_list, node, next);
 			vhost_user_iotlb_remove_notify(dev, node);
@@ -285,7 +285,7 @@ vhost_user_iotlb_cache_insert(struct virtio_net *dev, uint64_t iova, uint64_t ua
 			vhost_user_iotlb_pool_put(dev, new_node);
 			goto unlock;
 		} else if (node->iova > new_node->iova) {
-			vhost_user_iotlb_set_dump(new_node);
+			vhost_user_iotlb_set_dump(dev, new_node);
 
 			TAILQ_INSERT_BEFORE(node, new_node, next);
 			dev->iotlb_cache_nr++;
@@ -293,7 +293,7 @@
 		}
 	}
 
-	vhost_user_iotlb_set_dump(new_node);
+	vhost_user_iotlb_set_dump(dev, new_node);
 
 	TAILQ_INSERT_TAIL(&dev->iotlb_list, new_node, next);
 	dev->iotlb_cache_nr++;
@@ -322,7 +322,7 @@ vhost_user_iotlb_cache_remove(struct virtio_net *dev, uint64_t iova, uint64_t si
 		if (iova < node->iova + node->size) {
 			struct vhost_iotlb_entry *next_node = RTE_TAILQ_NEXT(node, next);
 
-			vhost_user_iotlb_clear_dump(node, prev_node, next_node);
+			vhost_user_iotlb_clear_dump(dev, node, prev_node, next_node);
 
 			TAILQ_REMOVE(&dev->iotlb_list, node, next);
 			vhost_user_iotlb_remove_notify(dev, node);
diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index f8624fba3d..5f24911190 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -1062,6 +1062,6 @@ mbuf_is_consumed(struct rte_mbuf *m)
 	return true;
 }
 
-void mem_set_dump(void *ptr, size_t size, bool enable, uint64_t alignment);
+void mem_set_dump(struct virtio_net *dev, void *ptr, size_t size, bool enable, uint64_t alignment);
 
 #endif /* _VHOST_NET_CDEV_H_ */
diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
index e36312181a..413f068bcd 100644
--- a/lib/vhost/vhost_user.c
+++ b/lib/vhost/vhost_user.c
@@ -763,7 +763,7 @@ hua_to_alignment(struct rte_vhost_memory *mem, void *ptr)
 }
 
 void
-mem_set_dump(void *ptr, size_t size, bool enable, uint64_t pagesz)
+mem_set_dump(struct virtio_net *dev, void *ptr, size_t size, bool enable, uint64_t pagesz)
 {
 #ifdef MADV_DONTDUMP
 	void *start = RTE_PTR_ALIGN_FLOOR(ptr, pagesz);
@@ -771,8 +771,8 @@ mem_set_dump(void *ptr, size_t size, bool enable, uint64_t pagesz)
 	size_t len = end - (uintptr_t)start;
 
 	if (madvise(start, len, enable ? MADV_DODUMP : MADV_DONTDUMP) == -1) {
-		rte_log(RTE_LOG_INFO, vhost_config_log_level,
-			"VHOST_CONFIG: could not set coredump preference (%s).\n", strerror(errno));
+		VHOST_LOG_CONFIG(dev->ifname, INFO,
+			"could not set coredump preference (%s).\n", strerror(errno));
 	}
 #endif
 }
@@ -807,7 +807,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
 			return;
 		}
 
-		mem_set_dump(vq->desc_packed, len, true,
+		mem_set_dump(dev, vq->desc_packed, len, true,
 			hua_to_alignment(dev->mem, vq->desc_packed));
 		numa_realloc(&dev, &vq);
 		*pdev = dev;
@@ -824,7 +824,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
 			return;
 		}
 
-		mem_set_dump(vq->driver_event, len, true,
+		mem_set_dump(dev, vq->driver_event, len, true,
 			hua_to_alignment(dev->mem, vq->driver_event));
 		len = sizeof(struct vring_packed_desc_event);
 		vq->device_event = (struct vring_packed_desc_event *)
@@ -837,7 +837,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
 			return;
 		}
 
-		mem_set_dump(vq->device_event, len, true,
+		mem_set_dump(dev, vq->device_event, len, true,
 			hua_to_alignment(dev->mem, vq->device_event));
 		vq->access_ok = true;
 		return;
@@ -855,7 +855,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
 		return;
 	}
 
-	mem_set_dump(vq->desc, len, true, hua_to_alignment(dev->mem, vq->desc));
+	mem_set_dump(dev, vq->desc, len, true, hua_to_alignment(dev->mem, vq->desc));
 	numa_realloc(&dev, &vq);
 	*pdev = dev;
 	*pvq = vq;
@@ -871,7 +871,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
 		return;
 	}
 
-	mem_set_dump(vq->avail, len, true, hua_to_alignment(dev->mem, vq->avail));
+	mem_set_dump(dev, vq->avail, len, true, hua_to_alignment(dev->mem, vq->avail));
 	len = sizeof(struct vring_used) +
 		sizeof(struct vring_used_elem) * vq->size;
 	if (dev->features & (1ULL << VIRTIO_RING_F_EVENT_IDX))
@@ -884,7 +884,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
 		return;
 	}
 
-	mem_set_dump(vq->used, len, true, hua_to_alignment(dev->mem, vq->used));
+	mem_set_dump(dev, vq->used, len, true, hua_to_alignment(dev->mem, vq->used));
 
 	if (vq->last_used_idx != vq->used->idx) {
 		VHOST_LOG_CONFIG(dev->ifname, WARNING,
@@ -1274,7 +1274,7 @@ vhost_user_mmap_region(struct virtio_net *dev,
 	region->mmap_addr = mmap_addr;
 	region->mmap_size = mmap_size;
 	region->host_user_addr = (uint64_t)(uintptr_t)mmap_addr + mmap_offset;
-	mem_set_dump(mmap_addr, mmap_size, false, alignment);
+	mem_set_dump(dev, mmap_addr, mmap_size, false, alignment);
 
 	if (dev->async_copy) {
 		if (add_guest_pages(dev, region, alignment) < 0) {
@@ -1580,7 +1580,7 @@ inflight_mem_alloc(struct virtio_net *dev, const char *name, size_t size, int *f
 	}
 
 	alignment = get_blk_size(mfd);
-	mem_set_dump(ptr, size, false, alignment);
+	mem_set_dump(dev, ptr, size, false, alignment);
 	*fd = mfd;
 	return ptr;
 }
@@ -1789,7 +1789,7 @@ vhost_user_set_inflight_fd(struct virtio_net **pdev,
 		dev->inflight_info->fd = -1;
 	}
 
-	mem_set_dump(addr, mmap_size, false, get_blk_size(fd));
+	mem_set_dump(dev, addr, mmap_size, false, get_blk_size(fd));
 	dev->inflight_info->fd = fd;
 	dev->inflight_info->addr = addr;
 	dev->inflight_info->size = mmap_size;
@@ -2343,7 +2343,7 @@ vhost_user_set_log_base(struct virtio_net **pdev,
 	dev->log_addr = (uint64_t)(uintptr_t)addr;
 	dev->log_base = dev->log_addr + off;
 	dev->log_size = size;
-	mem_set_dump(addr, size + off, false, alignment);
+	mem_set_dump(dev, addr, size + off, false, alignment);
 
 	for (i = 0; i < dev->nr_vring; i++) {
 		struct vhost_virtqueue *vq = dev->virtqueue[i];