From patchwork Sat Mar 3 13:46:06 2018
X-Patchwork-Submitter: "Burakov, Anatoly"
X-Patchwork-Id: 35617
From: Anatoly Burakov
To: dev@dpdk.org
Cc: keith.wiles@intel.com, jianfeng.tan@intel.com, andras.kovacs@ericsson.com,
 laszlo.vadkeri@ericsson.com, benjamin.walker@intel.com,
 bruce.richardson@intel.com, thomas@monjalon.net, konstantin.ananyev@intel.com,
 kuralamudhan.ramakrishnan@intel.com, louise.m.daly@intel.com,
 nelio.laranjeiro@6wind.com, yskoh@mellanox.com, pepperjo@japf.ch,
 jerin.jacob@caviumnetworks.com, hemant.agrawal@nxp.com, olivier.matz@6wind.com
Date: Sat, 3 Mar 2018 13:46:06 +0000
X-Mailer: git-send-email 1.7.0.7
Subject: [dpdk-dev] [PATCH 18/41] test: fix malloc autotest to support memory hotplug

The test expected memory to have already been allocated on all sockets,
and so it failed: calling rte_malloc can now trigger a memory hotplug
event and allocate memory on a socket where there was none before.

Fix the test to instead report memory availability on a specific socket
by attempting to allocate a page there and checking whether that
succeeds.

Technically, a failure is still possible: memory might be unavailable at
the time of the check but become available by the time the test runs.
This is a corner case not worth considering.

Signed-off-by: Anatoly Burakov
---
 test/test/test_malloc.c | 52 +++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 44 insertions(+), 8 deletions(-)

diff --git a/test/test/test_malloc.c b/test/test/test_malloc.c
index 8484fb6..2aaf1b8 100644
--- a/test/test/test_malloc.c
+++ b/test/test/test_malloc.c
@@ -22,6 +22,8 @@
 #include
 #include
 
+#include "../../lib/librte_eal/common/eal_memalloc.h"
+
 #include "test.h"
 
 #define N 10000
@@ -708,22 +710,56 @@ test_malloc_bad_params(void)
 
 /* Check if memory is avilable on a specific socket */
 static int
-is_mem_on_socket(int32_t socket)
+is_mem_on_socket(unsigned int socket)
 {
+	struct rte_malloc_socket_stats stats;
 	const struct rte_mem_config *mcfg =
 			rte_eal_get_configuration()->mem_config;
-	unsigned i;
+	uint64_t prev_pgsz;
+	unsigned int i;
+
+	/* we cannot know if there's memory on a specific socket, since it might
+	 * be available, but not yet allocated. so, in addition to checking
+	 * already mapped memory, we will attempt to allocate a page from that
+	 * socket and see if it works.
+	 */
+	if (socket >= rte_num_sockets())
+		return 0;
+
+	rte_malloc_get_socket_stats(socket, &stats);
+
+	/* if heap has memory allocated, stop */
+	if (stats.heap_totalsz_bytes > 0)
+		return 1;
+
+	/* to allocate a page, we will have to know its size, so go through all
+	 * supported page sizes and try with each one.
+	 */
+	prev_pgsz = 0;
 	for (i = 0; i < RTE_MAX_MEMSEG_LISTS; i++) {
-		const struct rte_memseg_list *msl =
-				&mcfg->memsegs[i];
-		const struct rte_fbarray *arr = &msl->memseg_arr;
+		const struct rte_memseg_list *msl = &mcfg->memsegs[i];
+		uint64_t page_sz;
 
-		if (msl->socket_id != socket)
+		/* skip unused memseg lists */
+		if (msl->memseg_arr.len == 0)
 			continue;
+		page_sz = msl->hugepage_sz;
 
-		if (arr->count)
-			return 1;
+		/* skip page sizes we've tried already */
+		if (prev_pgsz == page_sz)
+			continue;
+
+		prev_pgsz = page_sz;
+
+		struct rte_memseg *ms = eal_memalloc_alloc_page(page_sz,
+				socket);
+
+		if (ms == NULL)
+			continue;
+
+		eal_memalloc_free_page(ms);
+
+		return 1;
 	}
 	return 0;
 }