From patchwork Mon Oct 9 11:03:56 2023
X-Patchwork-Submitter: Artur Paszkiewicz
X-Patchwork-Id: 132421
X-Patchwork-Delegate: david.marchand@redhat.com
From: Artur Paszkiewicz
To: anatoly.burakov@intel.com
Cc: dev@dpdk.org, Artur Paszkiewicz
Subject: [PATCH v2] mem: allow using ASan in multi-process mode
Date: Mon, 9 Oct 2023 13:03:56 +0200
Message-Id: <20231009110356.15382-1-artur.paszkiewicz@intel.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20231004142308.15395-1-artur.paszkiewicz@intel.com>
References: <20231004142308.15395-1-artur.paszkiewicz@intel.com>
List-Id: DPDK patches and discussions

Multi-process applications operate on shared hugepage memory, but each
process has its own ASan shadow region, which is not synchronized with
the other processes. This causes issues when different processes try to
use the same memory, because each has its own view of which addresses
are valid.

Fix it by mapping the shadow regions for memseg lists as shared memory.
The primary process is responsible for creating and removing the shared
memory objects.

Disable ASan instrumentation for triggering the page fault in
alloc_seg(): if the segment is already allocated by another process but
marked as free in the shadow, accessing its address would raise an ASan
error.

Signed-off-by: Artur Paszkiewicz
---
v2:
- Added checks for config options disabling multi-process support.
- Fixed missing unmap in legacy mode.
 lib/eal/common/eal_common_memory.c |  9 +++
 lib/eal/common/eal_private.h       | 22 +++++++
 lib/eal/linux/eal_memalloc.c       |  9 ++-
 lib/eal/linux/eal_memory.c         | 97 ++++++++++++++++++++++++++++++
 lib/eal/linux/meson.build          |  4 ++
 5 files changed, 140 insertions(+), 1 deletion(-)

diff --git a/lib/eal/common/eal_common_memory.c b/lib/eal/common/eal_common_memory.c
index d9433db623..15f950810b 100644
--- a/lib/eal/common/eal_common_memory.c
+++ b/lib/eal/common/eal_common_memory.c
@@ -263,6 +263,12 @@ eal_memseg_list_alloc(struct rte_memseg_list *msl, int reserve_flags)
 	RTE_LOG(DEBUG, EAL, "VA reserved for memseg list at %p, size %zx\n",
 			addr, mem_sz);
 
+#ifdef RTE_MALLOC_ASAN
+	if (eal_memseg_list_map_asan_shadow(msl) != 0) {
+		RTE_LOG(ERR, EAL, "Failed to map ASan shadow region for memseg list\n");
+		return -1;
+	}
+#endif
 	return 0;
 }
 
@@ -1050,6 +1056,9 @@ rte_eal_memory_detach(void)
 			RTE_LOG(ERR, EAL, "Could not unmap memory: %s\n",
 					rte_strerror(rte_errno));
 
+#ifdef RTE_MALLOC_ASAN
+		eal_memseg_list_unmap_asan_shadow(msl);
+#endif
 		/*
 		 * we are detaching the fbarray rather than destroying because
 		 * other processes might still reference this fbarray, and we
diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h
index 5eadba4902..48df338cf9 100644
--- a/lib/eal/common/eal_private.h
+++ b/lib/eal/common/eal_private.h
@@ -300,6 +300,28 @@ eal_memseg_list_alloc(struct rte_memseg_list *msl, int reserve_flags);
 void
 eal_memseg_list_populate(struct rte_memseg_list *msl, void *addr, int n_segs);
 
+#ifdef RTE_MALLOC_ASAN
+/**
+ * Map shared memory for MSL ASan shadow region.
+ *
+ * @param msl
+ *   Memory segment list.
+ * @return
+ *   0 on success, (-1) on failure.
+ */
+int
+eal_memseg_list_map_asan_shadow(struct rte_memseg_list *msl);
+
+/**
+ * Unmap the MSL ASan shadow region.
+ *
+ * @param msl
+ *   Memory segment list.
+ */
+void
+eal_memseg_list_unmap_asan_shadow(struct rte_memseg_list *msl);
+#endif
+
 /**
  * Distribute available memory between MSLs.
 *
diff --git a/lib/eal/linux/eal_memalloc.c b/lib/eal/linux/eal_memalloc.c
index f8b1588cae..5212ae6b56 100644
--- a/lib/eal/linux/eal_memalloc.c
+++ b/lib/eal/linux/eal_memalloc.c
@@ -511,6 +511,13 @@ resize_hugefile(int fd, uint64_t fa_offset, uint64_t page_sz, bool grow,
 			grow, dirty);
 }
 
+__rte_no_asan
+static inline void
+page_fault(void *addr)
+{
+	*(volatile int *)addr = *(volatile int *)addr;
+}
+
 static int
 alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
 		struct hugepage_info *hi, unsigned int list_idx,
@@ -641,7 +648,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
 	 * that is already there, so read the old value, and write itback.
 	 * kernel populates the page with zeroes initially.
 	 */
-	*(volatile int *)addr = *(volatile int *)addr;
+	page_fault(addr);
 
 	iova = rte_mem_virt2iova(addr);
 	if (iova == RTE_BAD_PHYS_ADDR) {
diff --git a/lib/eal/linux/eal_memory.c b/lib/eal/linux/eal_memory.c
index 9b6f08fba8..3dca532874 100644
--- a/lib/eal/linux/eal_memory.c
+++ b/lib/eal/linux/eal_memory.c
@@ -41,6 +41,7 @@
 #include "eal_filesystem.h"
 #include "eal_hugepages.h"
 #include "eal_options.h"
+#include "malloc_elem.h"
 
 #define PFN_MASK_SIZE	8
@@ -1469,6 +1470,9 @@ eal_legacy_hugepage_init(void)
 		if (msl->memseg_arr.count > 0)
 			continue;
 		/* this is an unused list, deallocate it */
+#ifdef RTE_MALLOC_ASAN
+		eal_memseg_list_unmap_asan_shadow(msl);
+#endif
 		mem_sz = msl->len;
 		munmap(msl->base_va, mem_sz);
 		msl->base_va = NULL;
@@ -1956,3 +1960,96 @@ rte_eal_memseg_init(void)
 #endif
 	memseg_secondary_init();
 }
+
+#ifdef RTE_MALLOC_ASAN
+int
+eal_memseg_list_map_asan_shadow(struct rte_memseg_list *msl)
+{
+	const struct internal_config *internal_conf =
+		eal_get_internal_configuration();
+	void *addr;
+	void *shadow_addr;
+	size_t shadow_sz;
+	int shm_oflag;
+	char shm_path[PATH_MAX];
+	int shm_fd;
+	int ret = 0;
+
+	if (!msl->heap || internal_conf->hugepage_file.unlink_before_mapping ||
+	    internal_conf->no_shconf || internal_conf->no_hugetlbfs)
+		return 0;
+
+	shadow_addr = ASAN_MEM_TO_SHADOW(msl->base_va);
+	shadow_sz = msl->len >> ASAN_SHADOW_SCALE;
+
+	snprintf(shm_path, sizeof(shm_path), "/%s_%s_shadow",
+			eal_get_hugefile_prefix(), msl->memseg_arr.name);
+
+	shm_oflag = O_RDWR;
+	if (internal_conf->process_type == RTE_PROC_PRIMARY)
+		shm_oflag |= O_CREAT | O_TRUNC;
+
+	shm_fd = shm_open(shm_path, shm_oflag, 0600);
+	if (shm_fd == -1) {
+		RTE_LOG(DEBUG, EAL, "shadow shm_open() failed: %s\n",
+				strerror(errno));
+		return -1;
+	}
+
+	if (internal_conf->process_type == RTE_PROC_PRIMARY) {
+		ret = ftruncate(shm_fd, shadow_sz);
+		if (ret == -1) {
+			RTE_LOG(DEBUG, EAL, "shadow ftruncate() failed: %s\n",
+					strerror(errno));
+			goto out;
+		}
+	}
+
+	addr = mmap(shadow_addr, shadow_sz, PROT_READ | PROT_WRITE,
+			MAP_SHARED | MAP_FIXED, shm_fd, 0);
+	if (addr == MAP_FAILED) {
+		RTE_LOG(DEBUG, EAL, "shadow mmap() failed: %s\n",
+				strerror(errno));
+		ret = -1;
+		goto out;
+	}
+
+	if (addr != shadow_addr) {
+		RTE_LOG(DEBUG, EAL, "wrong shadow mmap() address\n");
+		munmap(addr, shadow_sz);
+		ret = -1;
+	}
+out:
+	close(shm_fd);
+	if (ret != 0) {
+		if (internal_conf->process_type == RTE_PROC_PRIMARY)
+			shm_unlink(shm_path);
+	}
+
+	return ret;
+}
+
+void
+eal_memseg_list_unmap_asan_shadow(struct rte_memseg_list *msl)
+{
+	const struct internal_config *internal_conf =
+		eal_get_internal_configuration();
+
+	if (!msl->heap || internal_conf->hugepage_file.unlink_before_mapping ||
+	    internal_conf->no_shconf || internal_conf->no_hugetlbfs)
+		return;
+
+	if (munmap(ASAN_MEM_TO_SHADOW(msl->base_va),
+			msl->len >> ASAN_SHADOW_SCALE) != 0)
+		RTE_LOG(ERR, EAL, "Could not unmap asan shadow memory: %s\n",
+				strerror(errno));
+	if (internal_conf->process_type == RTE_PROC_PRIMARY) {
+		char shm_path[PATH_MAX];
+
+		snprintf(shm_path, sizeof(shm_path), "/%s_%s_shadow",
+				eal_get_hugefile_prefix(),
+				msl->memseg_arr.name);
+		shm_unlink(shm_path);
+	}
+}
+#endif
diff --git a/lib/eal/linux/meson.build b/lib/eal/linux/meson.build
index e99ebed256..1e8a48c8d3 100644
--- a/lib/eal/linux/meson.build
+++ b/lib/eal/linux/meson.build
@@ -23,3 +23,7 @@ deps += ['kvargs', 'telemetry']
 if has_libnuma
     dpdk_conf.set10('RTE_EAL_NUMA_AWARE_HUGEPAGES', true)
 endif
+
+if dpdk_conf.has('RTE_MALLOC_ASAN')
+    ext_deps += cc.find_library('rt')
+endif