Patch Detail
Patch 42508: [RFC,11/11] memzone: enable reserving memory from named heaps

Project:      DPDK (dev@dpdk.org)
Submitter:    Anatoly Burakov <anatoly.burakov@intel.com>
Date:         Fri, 6 Jul 2018 14:17:32 +0100
Message ID:   <3a31e2adf03569582e4ecd1acdec80c599ee884e.1530881548.git.anatoly.burakov@intel.com>
In-Reply-To:  <cover.1530881548.git.anatoly.burakov@intel.com>
To:           dev@dpdk.org
Cc:           srinath.mannam@broadcom.com, scott.branden@broadcom.com, ajit.khaparde@broadcom.com
State:        Superseded, archived
Delegated to: Thomas Monjalon <thomas@monjalon.net>
Series:       Support externally allocated memory in DPDK (v1)
Checks:       success
Hash:         9f9a96c935044a449b4d205f9c3dc93f9cedc86c
Web URL:      http://patchwork.dpdk.org/project/dpdk/patch/3a31e2adf03569582e4ecd1acdec80c599ee884e.1530881548.git.anatoly.burakov@intel.com/
Archive:      https://inbox.dpdk.org/dev/3a31e2adf03569582e4ecd1acdec80c599ee884e.1530881548.git.anatoly.burakov@intel.com

Commit Message

Add ability to allocate memory for memzones from named heaps. The
semantics are kept similar to regular allocations, and as much of
the code as possible is shared.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 lib/librte_eal/common/eal_common_memzone.c  | 237 +++++++++++++++-----
 lib/librte_eal/common/include/rte_memzone.h | 183 +++++++++++++++
 lib/librte_eal/rte_eal_version.map          |   3 +
 3 files changed, 373 insertions(+), 50 deletions(-)

diff --git a/lib/librte_eal/common/eal_common_memzone.c b/lib/librte_eal/common/eal_common_memzone.c
index 25c56052c..d37e7ae1d 100644
--- a/lib/librte_eal/common/eal_common_memzone.c
+++ b/lib/librte_eal/common/eal_common_memzone.c
@@ -98,17 +98,14 @@ find_heap_max_free_elem(int *s, unsigned align)
 	return len;
 }
 
-static const struct rte_memzone *
-memzone_reserve_aligned_thread_unsafe(const char *name, size_t len,
-		int socket_id, unsigned int flags, unsigned int align,
+static int
+common_checks(const char *name, size_t len, unsigned int align,
 		unsigned int bound)
 {
 	struct rte_memzone *mz;
 	struct rte_mem_config *mcfg;
 	struct rte_fbarray *arr;
 	size_t requested_len;
-	int mz_idx;
-	bool contig;
 
 	/* get pointer to global configuration */
 	mcfg = rte_eal_get_configuration()->mem_config;
@@ -118,14 +115,14 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len,
 	if (arr->count >= arr->len) {
 		RTE_LOG(ERR, EAL, "%s(): No more room in config\n", __func__);
 		rte_errno = ENOSPC;
-		return NULL;
+		return -1;
 	}
 
 	if (strlen(name) > sizeof(mz->name) - 1) {
 		RTE_LOG(DEBUG, EAL, "%s(): memzone <%s>: name too long\n",
 			__func__, name);
 		rte_errno = ENAMETOOLONG;
-		return NULL;
+		return -1;
 	}
 
 	/* zone already exist */
@@ -133,7 +130,7 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len,
 		RTE_LOG(DEBUG, EAL, "%s(): memzone <%s> already exists\n",
 			__func__, name);
 		rte_errno = EEXIST;
-		return NULL;
+		return -1;
 	}
 
 	/* if alignment is not a power of two */
@@ -141,7 +138,7 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len,
 		RTE_LOG(ERR, EAL, "%s(): Invalid alignment: %u\n", __func__,
 				align);
 		rte_errno = EINVAL;
-		return NULL;
+		return -1;
 	}
 
 	/* alignment less than cache size is not allowed */
@@ -151,7 +148,7 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len,
 	/* align length on cache boundary. Check for overflow before doing so */
 	if (len > SIZE_MAX - RTE_CACHE_LINE_MASK) {
 		rte_errno = EINVAL; /* requested size too big */
-		return NULL;
+		return -1;
 	}
 
 	len += RTE_CACHE_LINE_MASK;
@@ -163,49 +160,23 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len,
 	/* check that boundary condition is valid */
 	if (bound != 0 && (requested_len > bound || !rte_is_power_of_2(bound))) {
 		rte_errno = EINVAL;
-		return NULL;
-	}
-
-	if ((socket_id != SOCKET_ID_ANY) &&
-	    (socket_id >= RTE_MAX_NUMA_NODES || socket_id < 0)) {
-		rte_errno = EINVAL;
-		return NULL;
-	}
-
-	if (!rte_eal_has_hugepages())
-		socket_id = SOCKET_ID_ANY;
-
-	contig = (flags & RTE_MEMZONE_IOVA_CONTIG) != 0;
-	/* malloc only cares about size flags, remove contig flag from flags */
-	flags &= ~RTE_MEMZONE_IOVA_CONTIG;
-
-	if (len == 0) {
-		/* len == 0 is only allowed for non-contiguous zones */
-		if (contig) {
-			RTE_LOG(DEBUG, EAL, "Reserving zero-length contiguous memzones is not supported\n");
-			rte_errno = EINVAL;
-			return NULL;
-		}
-		if (bound != 0)
-			requested_len = bound;
-		else {
-			requested_len = find_heap_max_free_elem(&socket_id, align);
-			if (requested_len == 0) {
-				rte_errno = ENOMEM;
-				return NULL;
-			}
-		}
-	}
-
-	/* allocate memory on heap */
-	void *mz_addr = malloc_heap_alloc(NULL, requested_len, socket_id, flags,
-			align, bound, contig);
-	if (mz_addr == NULL) {
-		rte_errno = ENOMEM;
-		return NULL;
+		return -1;
 	}
+	return 0;
+}
 
+static const struct rte_memzone *
+create_memzone(const char *name, void *mz_addr, size_t requested_len)
+{
+	struct rte_mem_config *mcfg;
+	struct rte_fbarray *arr;
 	struct malloc_elem *elem = malloc_elem_from_data(mz_addr);
+	struct rte_memzone *mz;
+	int mz_idx;
+
+	/* get pointer to global configuration */
+	mcfg = rte_eal_get_configuration()->mem_config;
+	arr = &mcfg->memzones;
 
 	/* fill the zone in config */
 	mz_idx = rte_fbarray_find_next_free(arr, 0);
@@ -236,6 +207,134 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len,
 	return mz;
 }
 
+static const struct rte_memzone *
+memzone_reserve_from_heap_aligned_thread_unsafe(const char *name, size_t len,
+		const char *heap_name, unsigned int flags, unsigned int align,
+		unsigned int bound)
+{
+	size_t requested_len = len;
+	void *mz_addr;
+	int heap_idx;
+	bool contig;
+
+	/* this function sets rte_errno */
+	if (common_checks(name, len, align, bound) < 0)
+		return NULL;
+
+	heap_idx = malloc_heap_find_named_heap_idx(heap_name);
+	if (heap_idx < 0) {
+		rte_errno = ENOENT;
+		return NULL;
+	}
+
+	contig = (flags & RTE_MEMZONE_IOVA_CONTIG) != 0;
+	/* malloc only cares about size flags, remove contig flag from flags */
+	flags &= ~RTE_MEMZONE_IOVA_CONTIG;
+
+	if (len == 0) {
+		/* len == 0 is only allowed for non-contiguous zones */
+		if (contig) {
+			RTE_LOG(DEBUG, EAL, "Reserving zero-length contiguous memzones is not supported\n");
+			rte_errno = EINVAL;
+			return NULL;
+		}
+		if (bound != 0)
+			requested_len = bound;
+		else {
+			requested_len = heap_max_free_elem(heap_idx, align);
+			if (requested_len == 0) {
+				rte_errno = ENOMEM;
+				return NULL;
+			}
+		}
+	}
+
+	/* allocate memory on heap */
+	mz_addr = malloc_heap_alloc_on_heap_id(NULL, requested_len, heap_idx,
+			flags, align, bound, contig);
+	if (mz_addr == NULL) {
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+	return create_memzone(name, mz_addr, requested_len);
+}
+
+static const struct rte_memzone *
+memzone_reserve_aligned_thread_unsafe(const char *name, size_t len,
+		int socket_id, unsigned int flags, unsigned int align,
+		unsigned int bound)
+{
+	size_t requested_len = len;
+	bool contig;
+	void *mz_addr;
+
+	/* this function sets rte_errno */
+	if (common_checks(name, len, align, bound) < 0)
+		return NULL;
+
+	if ((socket_id != SOCKET_ID_ANY) &&
+			(socket_id >= RTE_MAX_NUMA_NODES || socket_id < 0)) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+
+	if (!rte_eal_has_hugepages())
+		socket_id = SOCKET_ID_ANY;
+
+	contig = (flags & RTE_MEMZONE_IOVA_CONTIG) != 0;
+	/* malloc only cares about size flags, remove contig flag from flags */
+	flags &= ~RTE_MEMZONE_IOVA_CONTIG;
+
+	if (len == 0) {
+		/* len == 0 is only allowed for non-contiguous zones */
+		if (contig) {
+			RTE_LOG(DEBUG, EAL, "Reserving zero-length contiguous memzones is not supported\n");
+			rte_errno = EINVAL;
+			return NULL;
+		}
+		if (bound != 0)
+			requested_len = bound;
+		else {
+			requested_len = find_heap_max_free_elem(&socket_id,
+					align);
+			if (requested_len == 0) {
+				rte_errno = ENOMEM;
+				return NULL;
+			}
+		}
+	}
+
+	/* allocate memory on heap */
+	mz_addr = malloc_heap_alloc(NULL, requested_len, socket_id, flags,
+			align, bound, contig);
+	if (mz_addr == NULL) {
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+	return create_memzone(name, mz_addr, requested_len);
+}
+
+static const struct rte_memzone *
+rte_memzone_reserve_from_heap_thread_safe(const char *name, size_t len,
+		const char *heap_name, unsigned int flags, unsigned int align,
+		unsigned int bound)
+{
+	struct rte_mem_config *mcfg;
+	const struct rte_memzone *mz = NULL;
+
+	/* get pointer to global configuration */
+	mcfg = rte_eal_get_configuration()->mem_config;
+
+	rte_rwlock_write_lock(&mcfg->mlock);
+
+	mz = memzone_reserve_from_heap_aligned_thread_unsafe(name, len,
+			heap_name, flags, align, bound);
+
+	rte_rwlock_write_unlock(&mcfg->mlock);
+
+	return mz;
+}
+
 static const struct rte_memzone *
 rte_memzone_reserve_thread_safe(const char *name, size_t len, int socket_id,
 		unsigned int flags, unsigned int align, unsigned int bound)
@@ -293,6 +392,44 @@ rte_memzone_reserve(const char *name, size_t len, int socket_id,
 					 flags, RTE_CACHE_LINE_SIZE, 0);
 }
 
+/*
+ * Return a pointer to a correctly filled memzone descriptor (with a
+ * specified alignment and boundary). If the allocation cannot be done,
+ * return NULL.
+ */
+const struct rte_memzone *
+rte_memzone_reserve_from_heap_bounded(const char *name, size_t len,
+		const char *heap_name, unsigned int flags, unsigned int align,
+		unsigned int bound)
+{
+	return rte_memzone_reserve_from_heap_thread_safe(name, len, heap_name,
+			flags, align, bound);
+}
+
+/*
+ * Return a pointer to a correctly filled memzone descriptor (with a
+ * specified alignment). If the allocation cannot be done, return NULL.
+ */
+const struct rte_memzone *
+rte_memzone_reserve_from_heap_aligned(const char *name, size_t len,
+		const char *heap_name, unsigned int flags, unsigned int align)
+{
+	return rte_memzone_reserve_from_heap_thread_safe(name, len, heap_name,
+			flags, align, 0);
+}
+
+/*
+ * Return a pointer to a correctly filled memzone descriptor. If the
+ * allocation cannot be done, return NULL.
+ */
+const struct rte_memzone *
+rte_memzone_reserve_from_heap(const char *name, size_t len,
+		const char *heap_name, unsigned int flags)
+{
+	return rte_memzone_reserve_from_heap_thread_safe(name, len, heap_name,
+			flags, RTE_CACHE_LINE_SIZE, 0);
+}
+
 int
 rte_memzone_free(const struct rte_memzone *mz)
 {
diff --git a/lib/librte_eal/common/include/rte_memzone.h b/lib/librte_eal/common/include/rte_memzone.h
index ef370fa6f..b27e5c421 100644
--- a/lib/librte_eal/common/include/rte_memzone.h
+++ b/lib/librte_eal/common/include/rte_memzone.h
@@ -258,6 +258,189 @@ const struct rte_memzone *rte_memzone_reserve_bounded(const char *name,
 			size_t len, int socket_id,
 			unsigned flags, unsigned align, unsigned bound);
 
+/**
+ * Reserve a portion of physical memory from a specified named heap.
+ *
+ * This function reserves some memory and returns a pointer to a
+ * correctly filled memzone descriptor. If the allocation cannot be
+ * done, return NULL.
+ *
+ * @note Reserving memzones with len set to 0 will only attempt to allocate
+ *   memzones from memory that is already available. It will not trigger any
+ *   new allocations.
+ *
+ * @note Reserving IOVA-contiguous memzones with len set to 0 is not currently
+ *   supported.
+ *
+ * @param name
+ *   The name of the memzone. If it already exists, the function will
+ *   fail and return NULL.
+ * @param len
+ *   The size of the memory to be reserved. If it
+ *   is 0, the biggest contiguous zone will be reserved.
+ * @param heap_name
+ *   The name of the heap to reserve memory from.
+ * @param flags
+ *   The flags parameter is used to request memzones to be
+ *   taken from specifically sized hugepages.
+ *   - RTE_MEMZONE_2MB - Reserved from 2MB pages
+ *   - RTE_MEMZONE_1GB - Reserved from 1GB pages
+ *   - RTE_MEMZONE_16MB - Reserved from 16MB pages
+ *   - RTE_MEMZONE_16GB - Reserved from 16GB pages
+ *   - RTE_MEMZONE_256KB - Reserved from 256KB pages
+ *   - RTE_MEMZONE_256MB - Reserved from 256MB pages
+ *   - RTE_MEMZONE_512MB - Reserved from 512MB pages
+ *   - RTE_MEMZONE_4GB - Reserved from 4GB pages
+ *   - RTE_MEMZONE_SIZE_HINT_ONLY - Allow alternative page size to be used if
+ *                                  the requested page size is unavailable.
+ *                                  If this flag is not set, the function
+ *                                  will return error on an unavailable size
+ *                                  request.
+ *   - RTE_MEMZONE_IOVA_CONTIG - Ensure reserved memzone is IOVA-contiguous.
+ *                               This option should be used when allocating
+ *                               memory intended for hardware rings etc.
+ * @return
+ *   A pointer to a correctly-filled read-only memzone descriptor, or NULL
+ *   on error.
+ *   On error case, rte_errno will be set appropriately:
+ *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
+ *    - E_RTE_SECONDARY - function was called from a secondary process instance
+ *    - ENOSPC - the maximum number of memzones has already been allocated
+ *    - EEXIST - a memzone with the same name already exists
+ *    - ENOMEM - no appropriate memory area found in which to create memzone
+ *    - EINVAL - invalid parameters
+ */
+__rte_experimental const struct rte_memzone *
+rte_memzone_reserve_from_heap(const char *name, size_t len,
+		const char *heap_name, unsigned int flags);
+
+/**
+ * Reserve a portion of physical memory from a specified named heap with
+ * alignment on a specified boundary.
+ *
+ * This function reserves some memory with alignment on a specified
+ * boundary, and returns a pointer to a correctly filled memzone
+ * descriptor. If the allocation cannot be done or if the alignment
+ * is not a power of 2, returns NULL.
+ *
+ * @note Reserving memzones with len set to 0 will only attempt to allocate
+ *   memzones from memory that is already available. It will not trigger any
+ *   new allocations.
+ *
+ * @note Reserving IOVA-contiguous memzones with len set to 0 is not currently
+ *   supported.
+ *
+ * @param name
+ *   The name of the memzone. If it already exists, the function will
+ *   fail and return NULL.
+ * @param len
+ *   The size of the memory to be reserved. If it
+ *   is 0, the biggest contiguous zone will be reserved.
+ * @param heap_name
+ *   The name of the heap to reserve memory from.
+ * @param flags
+ *   The flags parameter is used to request memzones to be
+ *   taken from specifically sized hugepages.
+ *   - RTE_MEMZONE_2MB - Reserved from 2MB pages
+ *   - RTE_MEMZONE_1GB - Reserved from 1GB pages
+ *   - RTE_MEMZONE_16MB - Reserved from 16MB pages
+ *   - RTE_MEMZONE_16GB - Reserved from 16GB pages
+ *   - RTE_MEMZONE_256KB - Reserved from 256KB pages
+ *   - RTE_MEMZONE_256MB - Reserved from 256MB pages
+ *   - RTE_MEMZONE_512MB - Reserved from 512MB pages
+ *   - RTE_MEMZONE_4GB - Reserved from 4GB pages
+ *   - RTE_MEMZONE_SIZE_HINT_ONLY - Allow alternative page size to be used if
+ *                                  the requested page size is unavailable.
+ *                                  If this flag is not set, the function
+ *                                  will return error on an unavailable size
+ *                                  request.
+ *   - RTE_MEMZONE_IOVA_CONTIG - Ensure reserved memzone is IOVA-contiguous.
+ *                               This option should be used when allocating
+ *                               memory intended for hardware rings etc.
+ * @param align
+ *   Alignment for resulting memzone. Must be a power of 2.
+ * @return
+ *   A pointer to a correctly-filled read-only memzone descriptor, or NULL
+ *   on error.
+ *   On error case, rte_errno will be set appropriately:
+ *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
+ *    - E_RTE_SECONDARY - function was called from a secondary process instance
+ *    - ENOSPC - the maximum number of memzones has already been allocated
+ *    - EEXIST - a memzone with the same name already exists
+ *    - ENOMEM - no appropriate memory area found in which to create memzone
+ *    - EINVAL - invalid parameters
+ */
+__rte_experimental const struct rte_memzone *
+rte_memzone_reserve_from_heap_aligned(const char *name, size_t len,
+		const char *heap_name, unsigned int flags, unsigned int align);
+
+/**
+ * Reserve a portion of physical memory from a specified named heap with
+ * specified alignment and boundary.
+ *
+ * This function reserves some memory with specified alignment and
+ * boundary, and returns a pointer to a correctly filled memzone
+ * descriptor. If the allocation cannot be done or if the alignment
+ * or boundary are not a power of 2, returns NULL.
+ * Memory buffer is reserved in a way, that it wouldn't cross specified
+ * boundary. That implies that requested length should be less or equal
+ * then boundary.
+ *
+ * @note Reserving memzones with len set to 0 will only attempt to allocate
+ *   memzones from memory that is already available. It will not trigger any
+ *   new allocations.
+ *
+ * @note Reserving IOVA-contiguous memzones with len set to 0 is not currently
+ *   supported.
+ *
+ * @param name
+ *   The name of the memzone. If it already exists, the function will
+ *   fail and return NULL.
+ * @param len
+ *   The size of the memory to be reserved. If it
+ *   is 0, the biggest contiguous zone will be reserved.
+ * @param heap_name
+ *   The name of the heap to reserve memory from.
+ * @param flags
+ *   The flags parameter is used to request memzones to be
+ *   taken from specifically sized hugepages.
+ *   - RTE_MEMZONE_2MB - Reserved from 2MB pages
+ *   - RTE_MEMZONE_1GB - Reserved from 1GB pages
+ *   - RTE_MEMZONE_16MB - Reserved from 16MB pages
+ *   - RTE_MEMZONE_16GB - Reserved from 16GB pages
+ *   - RTE_MEMZONE_256KB - Reserved from 256KB pages
+ *   - RTE_MEMZONE_256MB - Reserved from 256MB pages
+ *   - RTE_MEMZONE_512MB - Reserved from 512MB pages
+ *   - RTE_MEMZONE_4GB - Reserved from 4GB pages
+ *   - RTE_MEMZONE_SIZE_HINT_ONLY - Allow alternative page size to be used if
+ *                                  the requested page size is unavailable.
+ *                                  If this flag is not set, the function
+ *                                  will return error on an unavailable size
+ *                                  request.
+ *   - RTE_MEMZONE_IOVA_CONTIG - Ensure reserved memzone is IOVA-contiguous.
+ *                               This option should be used when allocating
+ *                               memory intended for hardware rings etc.
+ * @param align
+ *   Alignment for resulting memzone. Must be a power of 2.
+ * @param bound
+ *   Boundary for resulting memzone. Must be a power of 2 or zero.
+ *   Zero value implies no boundary condition.
+ * @return
+ *   A pointer to a correctly-filled read-only memzone descriptor, or NULL
+ *   on error.
+ *   On error case, rte_errno will be set appropriately:
+ *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
+ *    - E_RTE_SECONDARY - function was called from a secondary process instance
+ *    - ENOSPC - the maximum number of memzones has already been allocated
+ *    - EEXIST - a memzone with the same name already exists
+ *    - ENOMEM - no appropriate memory area found in which to create memzone
+ *    - EINVAL - invalid parameters
+ */
+__rte_experimental const struct rte_memzone *
+rte_memzone_reserve_from_heap_bounded(const char *name, size_t len,
+		const char *heap_name, unsigned int flags, unsigned int align,
+		unsigned int bound);
+
 /**
  * Free a memzone.
  *
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index cdde7eb3b..db1cfae6a 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -294,6 +294,9 @@ EXPERIMENTAL {
 	rte_memseg_contig_walk;
 	rte_memseg_list_walk;
 	rte_memseg_walk;
+	rte_memzone_reserve_from_heap;
+	rte_memzone_reserve_from_heap_aligned;
+	rte_memzone_reserve_from_heap_bounded;
 	rte_mp_action_register;
 	rte_mp_action_unregister;
 	rte_mp_reply;