From patchwork Fri Apr 15 17:31:26 2022
X-Patchwork-Submitter: David Marchand
X-Patchwork-Id: 109750
X-Patchwork-Delegate: david.marchand@redhat.com
From: David Marchand <david.marchand@redhat.com>
To: dev@dpdk.org
Cc: john.mcnamara@intel.com, dmitry.kozliuk@gmail.com, stable@dpdk.org,
 Anatoly Burakov, Xueqin Lin, Zhihong Peng
Subject: [PATCH 2/3] mem: fix ASan shadow for remapped memory segments
Date: Fri, 15 Apr 2022 19:31:26 +0200
Message-Id: <20220415173127.3838-3-david.marchand@redhat.com>
In-Reply-To: <20220415173127.3838-1-david.marchand@redhat.com>
References: <20220415173127.3838-1-david.marchand@redhat.com>

When releasing some memory, the allocator can choose to return some
pages to the OS. At the same time, this memory was poisoned in the ASan
shadow. Poisoning it made it impossible to remap the same page later.
On the other hand, this poison brings no protection: the OS would page
fault on any access to an unmapped page anyway.

Remove the poisoning for unmapped pages.
Bugzilla ID: 994
Fixes: 6cc51b1293ce ("mem: instrument allocator for ASan")
Cc: stable@dpdk.org

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
 lib/eal/common/malloc_elem.h |  4 ++++
 lib/eal/common/malloc_heap.c | 12 +++++++++++-
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/lib/eal/common/malloc_elem.h b/lib/eal/common/malloc_elem.h
index 228f178418..b859003722 100644
--- a/lib/eal/common/malloc_elem.h
+++ b/lib/eal/common/malloc_elem.h
@@ -272,6 +272,10 @@ old_malloc_size(struct malloc_elem *elem)

 #else /* !RTE_MALLOC_ASAN */

+static inline void
+asan_set_zone(void *ptr __rte_unused, size_t len __rte_unused,
+		uint32_t val __rte_unused) { }
+
 static inline void
 asan_set_freezone(void *ptr __rte_unused, size_t size __rte_unused) { }

diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c
index 6c572b6f2c..5913d9f862 100644
--- a/lib/eal/common/malloc_heap.c
+++ b/lib/eal/common/malloc_heap.c
@@ -860,6 +860,7 @@ malloc_heap_free(struct malloc_elem *elem)
 	size_t len, aligned_len, page_sz;
 	struct rte_memseg_list *msl;
 	unsigned int i, n_segs, before_space, after_space;
+	bool unmapped_pages = false;
 	int ret;
 	const struct internal_config *internal_conf =
 		eal_get_internal_configuration();
@@ -999,6 +1000,13 @@ malloc_heap_free(struct malloc_elem *elem)

 		/* don't care if any of this fails */
 		malloc_heap_free_pages(aligned_start, aligned_len);
+		/*
+		 * Clear any poisoning in ASan for the associated pages so that
+		 * next time EAL maps those pages, the allocator can access
+		 * them.
+		 */
+		asan_set_zone(aligned_start, aligned_len, 0x00);
+		unmapped_pages = true;

 		request_sync();
 	} else {
@@ -1032,7 +1040,9 @@ malloc_heap_free(struct malloc_elem *elem)
 	rte_mcfg_mem_write_unlock();

 free_unlock:
-	asan_set_freezone(asan_ptr, asan_data_len);
+	/* Poison memory range if belonging to some still mapped pages. */
+	if (!unmapped_pages)
+		asan_set_freezone(asan_ptr, asan_data_len);

 	rte_spinlock_unlock(&(heap->lock));
 	return ret;
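
For reference, below is a minimal standalone sketch of the behaviour this patch
addresses. It is not DPDK code: the file name, the Linux mmap() usage and the use
of the public <sanitizer/asan_interface.h> API instead of the EAL asan_set_zone()
helper are all assumptions made for illustration. It poisons a page in the ASan
shadow, unmaps it, remaps the same virtual address, and then writes to it.

/* Build with: cc -fsanitize=address -g poison_remap.c -o poison_remap
 * (hypothetical reproducer, not part of the patch)
 */
#include <sanitizer/asan_interface.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t page_sz = (size_t)sysconf(_SC_PAGESIZE);

	/* Map one page, as the allocator would for a heap segment. */
	char *p = mmap(NULL, page_sz, PROT_READ | PROT_WRITE,
			MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return 1;
	memset(p, 0, page_sz);

	/* Mark the range as freed in the ASan shadow, as the old code did. */
	__asan_poison_memory_region(p, page_sz);

	/* Return the page to the OS while its shadow is still poisoned. */
	munmap(p, page_sz);

	/* Later, remap the same virtual address range. */
	char *q = mmap(p, page_sz, PROT_READ | PROT_WRITE,
			MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
	if (q == MAP_FAILED)
		return 1;

	/*
	 * The shadow still says "poisoned", so ASan reports this instrumented
	 * store even though the page is freshly mapped.
	 */
	q[0] = 42;
	printf("wrote %d\n", q[0]);

	munmap(q, page_sz);
	return 0;
}

Run under ASan, the final write is reported as a use-after-poison. Clearing the
shadow for the range when the pages are given back, e.g. with
__asan_unpoison_memory_region(p, page_sz) around the first munmap() in this
sketch, lets the write succeed; that is what the patch does for the EAL heap via
asan_set_zone(aligned_start, aligned_len, 0x00).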