From patchwork Wed Jun 21 08:04:09 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ilya Maximets <i.maximets@samsung.com>
X-Patchwork-Id: 25542
From: Ilya Maximets <i.maximets@samsung.com>
To: dev@dpdk.org, David Marchand, Sergio Gonzalez Monroy, Thomas Monjalon
Cc: Heetae Ahn, Yuanhan Liu, Jianfeng Tan, Neil Horman, Yulong Pei,
 Bruce Richardson, Jerin Jacob, Ilya Maximets
Date: Wed, 21 Jun 2017 11:04:09 +0300
Message-id: <1498032250-24924-2-git-send-email-i.maximets@samsung.com>
X-Mailer: git-send-email 2.7.4
In-reply-to: <1498032250-24924-1-git-send-email-i.maximets@samsung.com>
References: <1496756020-4579-1-git-send-email-i.maximets@samsung.com>
 <1498032250-24924-1-git-send-email-i.maximets@samsung.com>
Subject: [dpdk-dev] [PATCH v6 1/2] mem: balanced allocation of hugepages

Currently EAL allocates hugepages one by one, without paying attention
to the NUMA node the allocation comes from. This behaviour leads to
allocation failures when the number of hugepages available to the
application is limited by cgroups or hugetlbfs and memory is requested
from more than just the first socket.

Example:
	# 90 x 1GB hugepages available in a system

	cgcreate -g hugetlb:/test
	# Limit to 32GB of hugepages
	cgset -r hugetlb.1GB.limit_in_bytes=34359738368 test
	# Request 4GB from each of 2 sockets
	cgexec -g hugetlb:test testpmd --socket-mem=4096,4096 ...

	EAL: SIGBUS: Cannot mmap more hugepages of size 1024 MB
	EAL: 32 not 90 hugepages of size 1024 MB allocated
	EAL: Not enough memory available on socket 1! Requested: 4096MB, available: 0MB
	PANIC in rte_eal_init():
	Cannot init memory

This happens because all the allocated pages end up on socket 0.

Fix this issue by setting the mempolicy MPOL_PREFERRED for each hugepage
to one of the requested nodes, using the following scheme:

 1) Allocate essential hugepages:
    1.1) Allocate as many hugepages from NUMA node N as are needed to
         cover the memory requested for that node.
    1.2) Repeat 1.1 for all NUMA nodes.
 2) Try to map all remaining free hugepages in a round-robin fashion.
 3) Sort the pages and choose the most suitable ones.

This way all essential memory is allocated and the remaining pages are
distributed fairly between all requested nodes.

The new config option RTE_LIBRTE_EAL_NUMA_AWARE_HUGEPAGES is introduced
and enabled by default for linuxapp on x86, ppc and thunderx. Enabling
this option adds libnuma as a dependency of the EAL.
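To illustrate the approach outside of EAL, a minimal standalone sketch of the
same mempolicy trick follows (not part of the patch): prefer a NUMA node
before faulting a hugepage so the kernel takes the page from that node, then
restore the default policy. The hugetlbfs path, the 2 MB page size and target
node 0 are assumptions of the example; build with -lnuma.

	#include <errno.h>
	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <unistd.h>
	#include <numaif.h>

	#define SKETCH_FILE  "/dev/hugepages/sketch"  /* assumed hugetlbfs mount */
	#define HUGEPAGE_SZ  (2UL * 1024 * 1024)      /* assumed 2 MB hugepages */

	int main(void)
	{
		unsigned long nodemask = 1UL << 0; /* prefer NUMA node 0 */
		unsigned long maxnode = 1;         /* highest used node index + 1 */
		void *va;
		int fd;

		/* maxnode + 1 works around the kernel quirk described in the patch. */
		if (set_mempolicy(MPOL_PREFERRED, &nodemask, maxnode + 1) < 0) {
			fprintf(stderr, "set_mempolicy: %s\n", strerror(errno));
			return 1;
		}

		fd = open(SKETCH_FILE, O_CREAT | O_RDWR, 0600);
		if (fd < 0) {
			fprintf(stderr, "open: %s\n", strerror(errno));
			return 1;
		}

		va = mmap(NULL, HUGEPAGE_SZ, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
		if (va != MAP_FAILED)
			memset(va, 0, HUGEPAGE_SZ); /* fault the page on the preferred node */

		/* Restore the default policy so later allocations are unaffected. */
		set_mempolicy(MPOL_DEFAULT, NULL, 0);

		if (va != MAP_FAILED)
			munmap(va, HUGEPAGE_SZ);
		close(fd);
		unlink(SKETCH_FILE);
		return 0;
	}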
Fixes: 77988fc08dc5 ("mem: fix allocating all free hugepages")

Signed-off-by: Ilya Maximets <i.maximets@samsung.com>
---
 config/common_base                           |   1 +
 config/common_linuxapp                       |   2 +
 config/defconfig_arm-armv7a-linuxapp-gcc     |   3 +
 config/defconfig_arm64-armv8a-linuxapp-gcc   |   3 +
 config/defconfig_arm64-thunderx-linuxapp-gcc |   3 +
 lib/librte_eal/linuxapp/eal/Makefile         |   3 +
 lib/librte_eal/linuxapp/eal/eal_memory.c     | 105 ++++++++++++++++++++++++++-
 mk/rte.app.mk                                |   3 +
 8 files changed, 119 insertions(+), 4 deletions(-)

diff --git a/config/common_base b/config/common_base
index f6aafd1..b9efdf2 100644
--- a/config/common_base
+++ b/config/common_base
@@ -103,6 +103,7 @@ CONFIG_RTE_EAL_ALWAYS_PANIC_ON_ERROR=n
 CONFIG_RTE_EAL_IGB_UIO=n
 CONFIG_RTE_EAL_VFIO=n
 CONFIG_RTE_MALLOC_DEBUG=n
+CONFIG_RTE_LIBRTE_EAL_NUMA_AWARE_HUGEPAGES=n
 
 #
 # Recognize/ignore the AVX/AVX512 CPU flags for performance/power testing.
diff --git a/config/common_linuxapp b/config/common_linuxapp
index b3cf41b..5eb568b 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -35,6 +35,8 @@
 CONFIG_RTE_EXEC_ENV="linuxapp"
 CONFIG_RTE_EXEC_ENV_LINUXAPP=y
 
+CONFIG_RTE_LIBRTE_EAL_NUMA_AWARE_HUGEPAGES=y
+
 CONFIG_RTE_EAL_IGB_UIO=y
 CONFIG_RTE_EAL_VFIO=y
 CONFIG_RTE_KNI_KMOD=y
diff --git a/config/defconfig_arm-armv7a-linuxapp-gcc b/config/defconfig_arm-armv7a-linuxapp-gcc
index 19607eb..5c5226a 100644
--- a/config/defconfig_arm-armv7a-linuxapp-gcc
+++ b/config/defconfig_arm-armv7a-linuxapp-gcc
@@ -47,6 +47,9 @@ CONFIG_RTE_ARCH_STRICT_ALIGN=y
 CONFIG_RTE_TOOLCHAIN="gcc"
 CONFIG_RTE_TOOLCHAIN_GCC=y
 
+# NUMA is not supported on ARM
+CONFIG_RTE_LIBRTE_EAL_NUMA_AWARE_HUGEPAGES=n
+
 # ARM doesn't have support for vmware TSC map
 CONFIG_RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT=n
 
diff --git a/config/defconfig_arm64-armv8a-linuxapp-gcc b/config/defconfig_arm64-armv8a-linuxapp-gcc
index 9f32766..d9667d3 100644
--- a/config/defconfig_arm64-armv8a-linuxapp-gcc
+++ b/config/defconfig_arm64-armv8a-linuxapp-gcc
@@ -47,6 +47,9 @@ CONFIG_RTE_TOOLCHAIN_GCC=y
 # to address minimum DMA alignment across all arm64 implementations.
 CONFIG_RTE_CACHE_LINE_SIZE=128
 
+# Most ARMv8 systems don't support NUMA
+CONFIG_RTE_LIBRTE_EAL_NUMA_AWARE_HUGEPAGES=n
+
 CONFIG_RTE_EAL_IGB_UIO=n
 
 CONFIG_RTE_LIBRTE_FM10K_PMD=n
diff --git a/config/defconfig_arm64-thunderx-linuxapp-gcc b/config/defconfig_arm64-thunderx-linuxapp-gcc
index f64da4c..e486c1d 100644
--- a/config/defconfig_arm64-thunderx-linuxapp-gcc
+++ b/config/defconfig_arm64-thunderx-linuxapp-gcc
@@ -37,6 +37,9 @@ CONFIG_RTE_CACHE_LINE_SIZE=128
 CONFIG_RTE_MAX_NUMA_NODES=2
 CONFIG_RTE_MAX_LCORE=96
 
+# ThunderX supports NUMA
+CONFIG_RTE_LIBRTE_EAL_NUMA_AWARE_HUGEPAGES=y
+
 #
 # Compile PMD for octeontx sso event device
 #
diff --git a/lib/librte_eal/linuxapp/eal/Makefile b/lib/librte_eal/linuxapp/eal/Makefile
index 640afd0..bd10489 100644
--- a/lib/librte_eal/linuxapp/eal/Makefile
+++ b/lib/librte_eal/linuxapp/eal/Makefile
@@ -50,6 +50,9 @@ LDLIBS += -ldl
 LDLIBS += -lpthread
 LDLIBS += -lgcc_s
 LDLIBS += -lrt
+ifeq ($(CONFIG_RTE_LIBRTE_EAL_NUMA_AWARE_HUGEPAGES),y)
+LDLIBS += -lnuma
+endif
 
 # specific to linuxapp exec-env
 SRCS-$(CONFIG_RTE_EXEC_ENV_LINUXAPP) := eal.c
diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
index e17c9cb..9a0087c 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memory.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
@@ -54,6 +54,9 @@
 #include
 #include
 #include
+#ifdef RTE_LIBRTE_EAL_NUMA_AWARE_HUGEPAGES
+#include
+#endif
 
 #include
 #include
@@ -348,6 +351,21 @@ static int huge_wrap_sigsetjmp(void)
 	return sigsetjmp(huge_jmpenv, 1);
 }
 
+#ifdef RTE_LIBRTE_EAL_NUMA_AWARE_HUGEPAGES
+#ifndef ULONG_SIZE
+#define ULONG_SIZE sizeof(unsigned long)
+#endif
+#ifndef ULONG_BITS
+#define ULONG_BITS (ULONG_SIZE * CHAR_BIT)
+#endif
+#ifndef DIV_ROUND_UP
+#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
+#endif
+#ifndef BITS_TO_LONGS
+#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, ULONG_SIZE)
+#endif
+#endif
+
 /*
  * Mmap all hugepages of hugepage table: it first open a file in
  * hugetlbfs, then mmap() hugepage_sz data in it. If orig is set, the
@@ -356,18 +374,82 @@ static int huge_wrap_sigsetjmp(void)
  * map continguous physical blocks in contiguous virtual blocks.
  */
 static unsigned
-map_all_hugepages(struct hugepage_file *hugepg_tbl,
-		struct hugepage_info *hpi, int orig)
+map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi,
+		  uint64_t *essential_memory __rte_unused, int orig)
 {
 	int fd;
 	unsigned i;
 	void *virtaddr;
 	void *vma_addr = NULL;
 	size_t vma_len = 0;
+#ifdef RTE_LIBRTE_EAL_NUMA_AWARE_HUGEPAGES
+	unsigned long nodemask[BITS_TO_LONGS(RTE_MAX_NUMA_NODES)] = {0UL};
+	unsigned long maxnode = 0;
+	int node_id = -1;
+	bool numa_available = true;
+
+	/* Check if kernel supports NUMA. */
+	if (get_mempolicy(NULL, NULL, 0, 0, 0) < 0 && errno == ENOSYS) {
+		RTE_LOG(DEBUG, EAL, "NUMA is not supported.\n");
+		numa_available = false;
+	}
+
+	if (orig && numa_available) {
+		for (i = 0; i < RTE_MAX_NUMA_NODES; i++)
+			if (internal_config.socket_mem[i])
+				maxnode = i + 1;
+	}
+#endif
 
 	for (i = 0; i < hpi->num_pages[0]; i++) {
 		uint64_t hugepage_sz = hpi->hugepage_sz;
 
+#ifdef RTE_LIBRTE_EAL_NUMA_AWARE_HUGEPAGES
+		if (maxnode) {
+			unsigned int j;
+
+			for (j = 0; j < RTE_MAX_NUMA_NODES; j++)
+				if (essential_memory[j])
+					break;
+
+			if (j == RTE_MAX_NUMA_NODES) {
+				node_id = (node_id + 1) % RTE_MAX_NUMA_NODES;
+				while (!internal_config.socket_mem[node_id]) {
+					node_id++;
+					node_id %= RTE_MAX_NUMA_NODES;
+				}
+			} else {
+				node_id = j;
+				if (essential_memory[j] < hugepage_sz)
+					essential_memory[j] = 0;
+				else
+					essential_memory[j] -= hugepage_sz;
+			}
+
+			nodemask[node_id / ULONG_BITS] =
+						1UL << (node_id % ULONG_BITS);
+
+			RTE_LOG(DEBUG, EAL,
+				"Setting policy MPOL_PREFERRED for socket %d\n",
+				node_id);
+			/*
+			 * Due to old linux kernel bug (feature?) we have to
+			 * increase maxnode by 1. It will be unconditionally
+			 * decreased back to normal value inside the syscall
+			 * handler.
+			 */
+			if (set_mempolicy(MPOL_PREFERRED,
+					  nodemask, maxnode + 1) < 0) {
+				RTE_LOG(ERR, EAL,
+					"Failed to set policy MPOL_PREFERRED: "
+					"%s\n", strerror(errno));
+				return i;
+			}
+
+			nodemask[node_id / ULONG_BITS] = 0UL;
+		}
+#endif
+
 		if (orig) {
 			hugepg_tbl[i].file_id = i;
 			hugepg_tbl[i].size = hugepage_sz;
@@ -478,6 +560,10 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl,
 		vma_len -= hugepage_sz;
 	}
 
+#ifdef RTE_LIBRTE_EAL_NUMA_AWARE_HUGEPAGES
+	if (maxnode && set_mempolicy(MPOL_DEFAULT, NULL, 0) < 0)
+		RTE_LOG(ERR, EAL, "Failed to set mempolicy MPOL_DEFAULT\n");
+#endif
 	return i;
 }
 
@@ -562,6 +648,11 @@ find_numasocket(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi)
 			if (hugepg_tbl[i].orig_va == va) {
 				hugepg_tbl[i].socket_id = socket_id;
 				hp_count++;
+#ifdef RTE_LIBRTE_EAL_NUMA_AWARE_HUGEPAGES
+				RTE_LOG(DEBUG, EAL,
+					"Hugepage %s is on socket %d\n",
+					hugepg_tbl[i].filepath, socket_id);
+#endif
 			}
 		}
 	}
@@ -1000,6 +1091,11 @@ rte_eal_hugepage_init(void)
 
 	huge_register_sigbus();
 
+	/* make a copy of socket_mem, needed for balanced allocation. */
+	for (i = 0; i < RTE_MAX_NUMA_NODES; i++)
+		memory[i] = internal_config.socket_mem[i];
+
+
 	/* map all hugepages and sort them */
 	for (i = 0; i < (int)internal_config.num_hugepage_sizes; i ++){
 		unsigned pages_old, pages_new;
@@ -1017,7 +1113,8 @@ rte_eal_hugepage_init(void)
 
 		/* map all hugepages available */
 		pages_old = hpi->num_pages[0];
-		pages_new = map_all_hugepages(&tmp_hp[hp_offset], hpi, 1);
+		pages_new = map_all_hugepages(&tmp_hp[hp_offset], hpi,
+					      memory, 1);
 		if (pages_new < pages_old) {
 			RTE_LOG(DEBUG, EAL,
 				"%d not %d hugepages of size %u MB allocated\n",
@@ -1060,7 +1157,7 @@ rte_eal_hugepage_init(void)
 		      sizeof(struct hugepage_file), cmp_physaddr);
 
 		/* remap all hugepages */
-		if (map_all_hugepages(&tmp_hp[hp_offset], hpi, 0) !=
+		if (map_all_hugepages(&tmp_hp[hp_offset], hpi, NULL, 0) !=
 		    hpi->num_pages[0]) {
 			RTE_LOG(ERR, EAL, "Failed to remap %u MB pages\n",
 				(unsigned)(hpi->hugepage_sz / 0x100000));
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index bcaf1b3..cfc743a 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -186,6 +186,9 @@ ifeq ($(CONFIG_RTE_BUILD_SHARED_LIB),n)
 # The static libraries do not know their dependencies.
 # So linking with static library requires explicit dependencies.
 _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrt
+ifeq ($(CONFIG_RTE_EXEC_ENV_LINUXAPP)$(CONFIG_RTE_LIBRTE_EAL_NUMA_AWARE_HUGEPAGES),yy)
+_LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lnuma
+endif
 _LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lm
 _LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrt
 _LDLIBS-$(CONFIG_RTE_LIBRTE_METER) += -lm
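
As a side note, the kernel-support probe used in map_all_hugepages() above can
be exercised on its own; a minimal sketch (an illustration, not part of the
patch), assuming only libnuma's <numaif.h> and linking with -lnuma:

	#include <errno.h>
	#include <stdbool.h>
	#include <stdio.h>
	#include <numaif.h>

	/* Same check as in map_all_hugepages(): a kernel built without NUMA
	 * support rejects the mempolicy syscalls with ENOSYS. */
	static bool kernel_supports_numa(void)
	{
		return !(get_mempolicy(NULL, NULL, 0, NULL, 0) < 0 && errno == ENOSYS);
	}

	int main(void)
	{
		printf("NUMA mempolicy syscalls %savailable\n",
		       kernel_supports_numa() ? "" : "not ");
		return 0;
	}

When the probe fails, numa_available is cleared, maxnode stays 0 and the
per-page mempolicy code is skipped, so hugepage mapping falls back to the old
unbalanced behaviour.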