get:
Show a patch.

patch:
Partially update a patch; only the fields supplied in the request are changed.

put:
Update a patch.
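
A minimal libcurl sketch of calling this endpoint: fetch the patch with GET, then apply a partial update with PATCH (PUT works the same way but typically expects the full set of writable fields). The authentication header and the "state" value used here are assumptions; write access normally requires an API token and maintainer rights on the project, so adapt both to your Patchwork instance.

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    CURL *curl;
    CURLcode ret;
    struct curl_slist *hdrs = NULL;

    curl_global_init(CURL_GLOBAL_DEFAULT);
    curl = curl_easy_init();
    if (!curl)
        return 1;

    /* GET: show the patch; no authentication is needed for reads. */
    curl_easy_setopt(curl, CURLOPT_URL,
                     "http://patchwork.dpdk.org/api/patches/68227/");
    ret = curl_easy_perform(curl);
    if (ret != CURLE_OK)
        fprintf(stderr, "GET failed: %s\n", curl_easy_strerror(ret));

    /* PATCH: partial update, here changing only the patch state.
     * "REPLACE_WITH_TOKEN" is a placeholder, not a real credential. */
    hdrs = curl_slist_append(hdrs, "Content-Type: application/json");
    hdrs = curl_slist_append(hdrs, "Authorization: Token REPLACE_WITH_TOKEN");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PATCH");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "{\"state\": \"accepted\"}");
    ret = curl_easy_perform(curl);
    if (ret != CURLE_OK)
        fprintf(stderr, "PATCH failed: %s\n", curl_easy_strerror(ret));

    curl_slist_free_all(hdrs);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}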

GET /api/patches/68227/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 68227,
    "url": "http://patchwork.dpdk.org/api/patches/68227/?format=api",
    "web_url": "http://patchwork.dpdk.org/project/dpdk/patch/1586740309-449310-2-git-send-email-suanmingm@mellanox.com/",
    "project": {
        "id": 1,
        "url": "http://patchwork.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<1586740309-449310-2-git-send-email-suanmingm@mellanox.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/1586740309-449310-2-git-send-email-suanmingm@mellanox.com",
    "date": "2020-04-13T01:11:40",
    "name": "[01/10] net/mlx5: add indexed memory pool",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": true,
    "hash": "8a0afa6caab5d22ca42937d08746fc3650bc9cae",
    "submitter": {
        "id": 1358,
        "url": "http://patchwork.dpdk.org/api/people/1358/?format=api",
        "name": "Suanming Mou",
        "email": "suanmingm@mellanox.com"
    },
    "delegate": {
        "id": 3268,
        "url": "http://patchwork.dpdk.org/api/users/3268/?format=api",
        "username": "rasland",
        "first_name": "Raslan",
        "last_name": "Darawsheh",
        "email": "rasland@nvidia.com"
    },
    "mbox": "http://patchwork.dpdk.org/project/dpdk/patch/1586740309-449310-2-git-send-email-suanmingm@mellanox.com/mbox/",
    "series": [
        {
            "id": 9321,
            "url": "http://patchwork.dpdk.org/api/series/9321/?format=api",
            "web_url": "http://patchwork.dpdk.org/project/dpdk/list/?series=9321",
            "date": "2020-04-13T01:11:39",
            "name": "net/mlx5: optimize flow resource allocation",
            "version": 1,
            "mbox": "http://patchwork.dpdk.org/series/9321/mbox/"
        }
    ],
    "comments": "http://patchwork.dpdk.org/api/patches/68227/comments/",
    "check": "fail",
    "checks": "http://patchwork.dpdk.org/api/patches/68227/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@inbox.dpdk.org",
        "Delivered-To": "patchwork@inbox.dpdk.org",
        "Received": [
            "from dpdk.org (dpdk.org [92.243.14.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id 53C8EA0577;\n\tMon, 13 Apr 2020 03:12:05 +0200 (CEST)",
            "from [92.243.14.124] (localhost [127.0.0.1])\n\tby dpdk.org (Postfix) with ESMTP id 0DEF82B86;\n\tMon, 13 Apr 2020 03:11:59 +0200 (CEST)",
            "from git-send-mailer.rdmz.labs.mlnx (unknown [37.142.13.130])\n by dpdk.org (Postfix) with ESMTP id B28DF1B53\n for <dev@dpdk.org>; Mon, 13 Apr 2020 03:11:57 +0200 (CEST)"
        ],
        "From": "Suanming Mou <suanmingm@mellanox.com>",
        "To": "Matan Azrad <matan@mellanox.com>, Shahaf Shuler <shahafs@mellanox.com>,\n Viacheslav Ovsiienko <viacheslavo@mellanox.com>",
        "Cc": "rasland@mellanox.com,\n\tdev@dpdk.org",
        "Date": "Mon, 13 Apr 2020 09:11:40 +0800",
        "Message-Id": "<1586740309-449310-2-git-send-email-suanmingm@mellanox.com>",
        "X-Mailer": "git-send-email 1.8.3.1",
        "In-Reply-To": "<1586740309-449310-1-git-send-email-suanmingm@mellanox.com>",
        "References": "<1586740309-449310-1-git-send-email-suanmingm@mellanox.com>",
        "Subject": "[dpdk-dev] [PATCH 01/10] net/mlx5: add indexed memory pool",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.15",
        "Precedence": "list",
        "List-Id": "DPDK patches and discussions <dev.dpdk.org>",
        "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n <mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://mails.dpdk.org/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n <mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org",
        "Sender": "\"dev\" <dev-bounces@dpdk.org>"
    },
    "content": "Currently, the memory allocated by rte_malloc() also introduced more\nthan 64 bytes overhead. It means when allocate 64 bytes memory, the\nreal cost in memory maybe double. And the libc malloc() overhead is 16\nbytes, If users try allocating millions of small memory blocks, the\noverhead costing maybe huge. And save the memory pointer will also be\nquite expensive.\n\nIndexed memory pool is introduced to save the memory for allocating\nhuge amount of small memory blocks. The indexed memory uses trunk and\nbitmap to manage the memory entries. While the pool is empty, the trunk\nslot contains memory entry array will be allocated firstly. The bitmap\nin the trunk records the entry allocation. The offset of trunk slot in\nthe pool and the offset of memory entry in the trunk slot compose the\nindex for the memory entry. So, by the index, it will be very easy to\naddress the memory of the entry. User saves the 32 bits index for the\nmemory resource instead of the 64 bits pointer.\nUser should create different pools for allocating different size of\nsmall memory block. It means one pool provides one fixed size of small\nmemory blocked allocating.\n\nSigned-off-by: Suanming Mou <suanmingm@mellanox.com>\nAcked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>\n---\n drivers/net/mlx5/mlx5_utils.c | 261 ++++++++++++++++++++++++++++++++++++++++++\n drivers/net/mlx5/mlx5_utils.h | 229 ++++++++++++++++++++++++++++++++++++\n 2 files changed, 490 insertions(+)",
    "diff": "diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c\nindex 4b4fc3c..4cab7f0 100644\n--- a/drivers/net/mlx5/mlx5_utils.c\n+++ b/drivers/net/mlx5/mlx5_utils.c\n@@ -117,3 +117,264 @@ struct mlx5_hlist_entry *\n \t}\n \trte_free(h);\n }\n+\n+static inline void\n+mlx5_ipool_lock(struct mlx5_indexed_pool *pool)\n+{\n+\tif (pool->cfg.need_lock)\n+\t\trte_spinlock_lock(&pool->lock);\n+}\n+\n+static inline void\n+mlx5_ipool_unlock(struct mlx5_indexed_pool *pool)\n+{\n+\tif (pool->cfg.need_lock)\n+\t\trte_spinlock_unlock(&pool->lock);\n+}\n+\n+struct mlx5_indexed_pool *\n+mlx5_ipool_create(struct mlx5_indexed_pool_config *cfg)\n+{\n+\tstruct mlx5_indexed_pool *pool;\n+\n+\tif (!cfg || !cfg->size || (!cfg->malloc ^ !cfg->free) ||\n+\t    (cfg->trunk_size && ((cfg->trunk_size & (cfg->trunk_size - 1)) ||\n+\t    ((__builtin_ffs(cfg->trunk_size) + TRUNK_IDX_BITS) > 32))))\n+\t\treturn NULL;\n+\tpool = rte_zmalloc(\"mlx5_ipool\", sizeof(*pool), RTE_CACHE_LINE_SIZE);\n+\tif (!pool)\n+\t\treturn NULL;\n+\tpool->cfg = *cfg;\n+\tif (!pool->cfg.trunk_size)\n+\t\tpool->cfg.trunk_size = MLX5_IPOOL_DEFAULT_TRUNK_SIZE;\n+\tif (!cfg->malloc && !cfg->free) {\n+\t\tpool->cfg.malloc = rte_malloc_socket;\n+\t\tpool->cfg.free = rte_free;\n+\t}\n+\tpool->free_list = TRUNK_INVALID;\n+\tif (pool->cfg.need_lock)\n+\t\trte_spinlock_init(&pool->lock);\n+\treturn pool;\n+}\n+\n+static int\n+mlx5_ipool_grow(struct mlx5_indexed_pool *pool)\n+{\n+\tstruct mlx5_indexed_trunk *trunk;\n+\tstruct mlx5_indexed_trunk **trunk_tmp;\n+\tstruct mlx5_indexed_trunk **p;\n+\tsize_t trunk_size = 0;\n+\tsize_t bmp_size;\n+\tuint32_t idx;\n+\n+\tif (pool->n_trunk_valid == TRUNK_MAX_IDX)\n+\t\treturn -ENOMEM;\n+\tif (pool->n_trunk_valid == pool->n_trunk) {\n+\t\t/* No free trunk flags, expand trunk list. */\n+\t\tint n_grow = pool->n_trunk_valid ? pool->n_trunk :\n+\t\t\t     RTE_CACHE_LINE_SIZE / sizeof(void *);\n+\n+\t\tp = pool->cfg.malloc(pool->cfg.type,\n+\t\t\t\t (pool->n_trunk_valid + n_grow) *\n+\t\t\t\t sizeof(struct mlx5_indexed_trunk *),\n+\t\t\t\t RTE_CACHE_LINE_SIZE, rte_socket_id());\n+\t\tif (!p)\n+\t\t\treturn -ENOMEM;\n+\t\tif (pool->trunks)\n+\t\t\tmemcpy(p, pool->trunks, pool->n_trunk_valid *\n+\t\t\t       sizeof(struct mlx5_indexed_trunk *));\n+\t\tmemset(RTE_PTR_ADD(p, pool->n_trunk_valid * sizeof(void *)), 0,\n+\t\t       n_grow * sizeof(void *));\n+\t\ttrunk_tmp = pool->trunks;\n+\t\tpool->trunks = p;\n+\t\tif (trunk_tmp)\n+\t\t\tpool->cfg.free(pool->trunks);\n+\t\tpool->n_trunk += n_grow;\n+\t}\n+\tidx = pool->n_trunk_valid;\n+\ttrunk_size += sizeof(*trunk);\n+\tbmp_size = rte_bitmap_get_memory_footprint(pool->cfg.trunk_size);\n+\ttrunk_size += pool->cfg.trunk_size * pool->cfg.size + bmp_size;\n+\ttrunk = pool->cfg.malloc(pool->cfg.type, trunk_size,\n+\t\t\t\t RTE_CACHE_LINE_SIZE, rte_socket_id());\n+\tif (!trunk)\n+\t\treturn -ENOMEM;\n+\tpool->trunks[idx] = trunk;\n+\ttrunk->idx = idx;\n+\ttrunk->free = pool->cfg.trunk_size;\n+\ttrunk->prev = TRUNK_INVALID;\n+\ttrunk->next = TRUNK_INVALID;\n+\tMLX5_ASSERT(pool->free_list == TRUNK_INVALID);\n+\tpool->free_list = idx;\n+\t/* Mark all entries as available. 
*/\n+\ttrunk->bmp = rte_bitmap_init_with_all_set(pool->cfg.trunk_size,\n+\t\t     &trunk->data[pool->cfg.trunk_size  * pool->cfg.size],\n+\t\t     bmp_size);\n+\tpool->n_trunk_valid++;\n+#ifdef POOL_DEBUG\n+\tpool->trunk_new++;\n+\tpool->trunk_avail++;\n+#endif\n+\treturn 0;\n+}\n+\n+void *\n+mlx5_ipool_malloc(struct mlx5_indexed_pool *pool, uint32_t *idx)\n+{\n+\tstruct mlx5_indexed_trunk *trunk;\n+\tuint64_t slab = 0;\n+\tuint32_t iidx = 0;\n+\tvoid *p;\n+\n+\tmlx5_ipool_lock(pool);\n+\tif (pool->free_list == TRUNK_INVALID) {\n+\t\t/* If no available trunks, grow new. */\n+\t\tif (mlx5_ipool_grow(pool)) {\n+\t\t\tmlx5_ipool_unlock(pool);\n+\t\t\treturn NULL;\n+\t\t}\n+\t}\n+\tMLX5_ASSERT(pool->free_list != TRUNK_INVALID);\n+\ttrunk = pool->trunks[pool->free_list];\n+\tMLX5_ASSERT(trunk->free);\n+\tif (!rte_bitmap_scan(trunk->bmp, &iidx, &slab)) {\n+\t\tmlx5_ipool_unlock(pool);\n+\t\treturn NULL;\n+\t}\n+\tMLX5_ASSERT(slab);\n+\tiidx += __builtin_ctzll(slab);\n+\tMLX5_ASSERT(iidx != UINT32_MAX);\n+\tMLX5_ASSERT(iidx < pool->cfg.trunk_size);\n+\trte_bitmap_clear(trunk->bmp, iidx);\n+\tp = &trunk->data[iidx * pool->cfg.size];\n+\tiidx += trunk->idx * pool->cfg.trunk_size;\n+\tiidx += 1; /* non-zero index. */\n+\ttrunk->free--;\n+#ifdef POOL_DEBUG\n+\tpool->n_entry++;\n+#endif\n+\tif (!trunk->free) {\n+\t\t/* Full trunk will be removed from free list in imalloc. */\n+\t\tMLX5_ASSERT(pool->free_list == trunk->idx);\n+\t\tpool->free_list = trunk->next;\n+\t\tif (trunk->next != TRUNK_INVALID)\n+\t\t\tpool->trunks[trunk->next]->prev = TRUNK_INVALID;\n+\t\ttrunk->prev = TRUNK_INVALID;\n+\t\ttrunk->next = TRUNK_INVALID;\n+#ifdef POOL_DEBUG\n+\t\tpool->trunk_empty++;\n+\t\tpool->trunk_avail--;\n+#endif\n+\t}\n+\t*idx = iidx;\n+\tmlx5_ipool_unlock(pool);\n+\treturn p;\n+}\n+\n+void *\n+mlx5_ipool_zmalloc(struct mlx5_indexed_pool *pool, uint32_t *idx)\n+{\n+\tvoid *entry = mlx5_ipool_malloc(pool, idx);\n+\n+\tif (entry)\n+\t\tmemset(entry, 0, pool->cfg.size);\n+\treturn entry;\n+}\n+\n+void\n+mlx5_ipool_free(struct mlx5_indexed_pool *pool, uint32_t idx)\n+{\n+\tstruct mlx5_indexed_trunk *trunk;\n+\tuint32_t trunk_idx;\n+\n+\tif (!idx)\n+\t\treturn;\n+\tidx -= 1;\n+\tmlx5_ipool_lock(pool);\n+\ttrunk_idx = idx / pool->cfg.trunk_size;\n+\tif (trunk_idx >= pool->n_trunk_valid)\n+\t\tgoto out;\n+\ttrunk = pool->trunks[trunk_idx];\n+\tif (!trunk || trunk_idx != trunk->idx ||\n+\t    rte_bitmap_get(trunk->bmp, idx % pool->cfg.trunk_size))\n+\t\tgoto out;\n+\trte_bitmap_set(trunk->bmp, idx % pool->cfg.trunk_size);\n+\ttrunk->free++;\n+\tif (trunk->free == 1) {\n+\t\t/* Put into free trunk list head. 
*/\n+\t\tMLX5_ASSERT(pool->free_list != trunk->idx);\n+\t\ttrunk->next = pool->free_list;\n+\t\ttrunk->prev = TRUNK_INVALID;\n+\t\tif (pool->free_list != TRUNK_INVALID)\n+\t\t\tpool->trunks[pool->free_list]->prev = trunk->idx;\n+\t\tpool->free_list = trunk->idx;\n+#ifdef POOL_DEBUG\n+\t\tpool->trunk_empty--;\n+\t\tpool->trunk_avail++;\n+#endif\n+\t}\n+#ifdef POOL_DEBUG\n+\tpool->n_entry--;\n+#endif\n+out:\n+\tmlx5_ipool_unlock(pool);\n+}\n+\n+void *\n+mlx5_ipool_get(struct mlx5_indexed_pool *pool, uint32_t idx)\n+{\n+\tstruct mlx5_indexed_trunk *trunk;\n+\tvoid *p = NULL;\n+\tuint32_t trunk_idx;\n+\n+\tif (!idx)\n+\t\treturn NULL;\n+\tidx -= 1;\n+\tmlx5_ipool_lock(pool);\n+\ttrunk_idx = idx / pool->cfg.trunk_size;\n+\tif (trunk_idx >= pool->n_trunk_valid)\n+\t\tgoto out;\n+\ttrunk = pool->trunks[trunk_idx];\n+\tif (!trunk || trunk_idx != trunk->idx ||\n+\t    rte_bitmap_get(trunk->bmp, idx % pool->cfg.trunk_size))\n+\t\tgoto out;\n+\tp = &trunk->data[(idx % pool->cfg.trunk_size) * pool->cfg.size];\n+out:\n+\tmlx5_ipool_unlock(pool);\n+\treturn p;\n+}\n+\n+int\n+mlx5_ipool_destroy(struct mlx5_indexed_pool *pool)\n+{\n+\tstruct mlx5_indexed_trunk **trunks;\n+\tuint32_t i;\n+\n+\tMLX5_ASSERT(pool);\n+\tmlx5_ipool_lock(pool);\n+\ttrunks = pool->trunks;\n+\tfor (i = 0; i < pool->n_trunk; i++) {\n+\t\tif (trunks[i])\n+\t\t\tpool->cfg.free(trunks[i]);\n+\t}\n+\tif (!pool->trunks)\n+\t\tpool->cfg.free(pool->trunks);\n+\tmlx5_ipool_unlock(pool);\n+\trte_free(pool);\n+\treturn 0;\n+}\n+\n+void\n+mlx5_ipool_dump(struct mlx5_indexed_pool *pool)\n+{\n+\tprintf(\"Pool %s entry size %u, trunks %u, %d entry per trunk, \"\n+\t       \"total: %d\\n\",\n+\t       pool->cfg.type, pool->cfg.size, pool->n_trunk_valid,\n+\t       pool->cfg.trunk_size, pool->n_trunk_valid);\n+#ifdef POOL_DEBUG\n+\tprintf(\"Pool %s entry %ld, trunk alloc %ld, empty: %ld, \"\n+\t       \"available %ld free %ld\\n\",\n+\t       pool->cfg.type, pool->n_entry, pool->trunk_new,\n+\t       pool->trunk_empty, pool->trunk_avail, pool->trunk_free);\n+#endif\n+}\ndiff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h\nindex 8f305c3..e404a5c 100644\n--- a/drivers/net/mlx5/mlx5_utils.h\n+++ b/drivers/net/mlx5/mlx5_utils.h\n@@ -12,6 +12,10 @@\n #include <limits.h>\n #include <errno.h>\n \n+#include <rte_spinlock.h>\n+#include <rte_memory.h>\n+#include <rte_bitmap.h>\n+\n #include <mlx5_common.h>\n \n #include \"mlx5_defs.h\"\n@@ -60,6 +64,60 @@\n \t (((val) & (from)) / ((from) / (to))) : \\\n \t (((val) & (from)) * ((to) / (from))))\n \n+/*\n+ * The indexed memory entry index is made up of trunk index and offset of\n+ * the entry in the trunk. Since the entry index is 32 bits, in case user\n+ * prefers to have small trunks, user can change the macro below to a big\n+ * number which helps the pool contains more trunks with lots of entries\n+ * allocated.\n+ */\n+#define TRUNK_IDX_BITS 16\n+#define TRUNK_MAX_IDX ((1 << TRUNK_IDX_BITS) - 1)\n+#define TRUNK_INVALID TRUNK_MAX_IDX\n+#define MLX5_IPOOL_DEFAULT_TRUNK_SIZE (1 << (28 - TRUNK_IDX_BITS))\n+#ifdef RTE_LIBRTE_MLX5_DEBUG\n+#define POOL_DEBUG 1\n+#endif\n+\n+struct mlx5_indexed_pool_config {\n+\tuint32_t size; /* Pool entry size. */\n+\tuint32_t trunk_size;\n+\t/* Trunk entry number. Must be power of 2. */\n+\tuint32_t need_lock;\n+\t/* Lock is needed for multiple thread usage. */\n+\tconst char *type; /* Memory allocate type name. */\n+\tvoid *(*malloc)(const char *type, size_t size, unsigned int align,\n+\t\t\tint socket);\n+\t/* User defined memory allocator. 
*/\n+\tvoid (*free)(void *addr); /* User defined memory release. */\n+};\n+\n+struct mlx5_indexed_trunk {\n+\tuint32_t idx; /* Trunk id. */\n+\tuint32_t prev; /* Previous free trunk in free list. */\n+\tuint32_t next; /* Next free trunk in free list. */\n+\tuint32_t free; /* Free entries available */\n+\tstruct rte_bitmap *bmp;\n+\tuint8_t data[] __rte_cache_min_aligned; /* Entry data start. */\n+};\n+\n+struct mlx5_indexed_pool {\n+\tstruct mlx5_indexed_pool_config cfg; /* Indexed pool configuration. */\n+\trte_spinlock_t lock; /* Pool lock for multiple thread usage. */\n+\tuint32_t n_trunk_valid; /* Trunks allocated. */\n+\tuint32_t n_trunk; /* Trunk pointer array size. */\n+\t/* Dim of trunk pointer array. */\n+\tstruct mlx5_indexed_trunk **trunks;\n+\tuint32_t free_list; /* Index to first free trunk. */\n+#ifdef POOL_DEBUG\n+\tint64_t n_entry;\n+\tint64_t trunk_new;\n+\tint64_t trunk_avail;\n+\tint64_t trunk_empty;\n+\tint64_t trunk_free;\n+#endif\n+};\n+\n /**\n  * Return logarithm of the nearest power of two above input value.\n  *\n@@ -183,4 +241,175 @@ void mlx5_hlist_remove(struct mlx5_hlist *h __rte_unused,\n void mlx5_hlist_destroy(struct mlx5_hlist *h,\n \t\t\tmlx5_hlist_destroy_callback_fn cb, void *ctx);\n \n+/**\n+ * This function allocates non-initialized memory entry from pool.\n+ * In NUMA systems, the memory entry allocated resides on the same\n+ * NUMA socket as the core that calls this function.\n+ *\n+ * Memory entry is allocated from memory trunk, no alignment.\n+ *\n+ * @param pool\n+ *   Pointer to indexed memory entry pool.\n+ *   No initialization required.\n+ * @param[out] idx\n+ *   Pointer to memory to save allocated index.\n+ *   Memory index always positive value.\n+ * @return\n+ *   - Pointer to the allocated memory entry.\n+ *   - NULL on error. Not enough memory, or invalid arguments.\n+ */\n+void *mlx5_ipool_malloc(struct mlx5_indexed_pool *pool, uint32_t *idx);\n+\n+/**\n+ * This function allocates zero initialized memory entry from pool.\n+ * In NUMA systems, the memory entry allocated resides on the same\n+ * NUMA socket as the core that calls this function.\n+ *\n+ * Memory entry is allocated from memory trunk, no alignment.\n+ *\n+ * @param pool\n+ *   Pointer to indexed memory pool.\n+ *   No initialization required.\n+ * @param[out] idx\n+ *   Pointer to memory to save allocated index.\n+ *   Memory index always positive value.\n+ * @return\n+ *   - Pointer to the allocated memory entry .\n+ *   - NULL on error. 
Not enough memory, or invalid arguments.\n+ */\n+void *mlx5_ipool_zmalloc(struct mlx5_indexed_pool *pool, uint32_t *idx);\n+\n+/**\n+ * This function frees indexed memory entry to pool.\n+ * Caller has to make sure that the index is allocated from same pool.\n+ *\n+ * @param pool\n+ *   Pointer to indexed memory pool.\n+ * @param idx\n+ *   Allocated memory entry index.\n+ */\n+void mlx5_ipool_free(struct mlx5_indexed_pool *pool, uint32_t idx);\n+\n+/**\n+ * This function returns pointer of indexed memory entry from index.\n+ * Caller has to make sure that the index is valid, and allocated\n+ * from same pool.\n+ *\n+ * @param pool\n+ *   Pointer to indexed memory pool.\n+ * @param idx\n+ *   Allocated memory index.\n+ * @return\n+ *   - Pointer to indexed memory entry.\n+ */\n+void *mlx5_ipool_get(struct mlx5_indexed_pool *pool, uint32_t idx);\n+\n+/**\n+ * This function creates indexed memory pool.\n+ * Caller has to configure the configuration accordingly.\n+ *\n+ * @param pool\n+ *   Pointer to indexed memory pool.\n+ * @param cfg\n+ *   Allocated memory index.\n+ */\n+struct mlx5_indexed_pool *\n+mlx5_ipool_create(struct mlx5_indexed_pool_config *cfg);\n+\n+/**\n+ * This function releases all resources of pool.\n+ * Caller has to make sure that all indexes and memories allocated\n+ * from this pool not referenced anymore.\n+ *\n+ * @param pool\n+ *   Pointer to indexed memory pool.\n+ * @return\n+ *   - non-zero value on error.\n+ *   - 0 on success.\n+ */\n+int mlx5_ipool_destroy(struct mlx5_indexed_pool *pool);\n+\n+/**\n+ * This function dumps debug info of pool.\n+ *\n+ * @param pool\n+ *   Pointer to indexed memory pool.\n+ */\n+void mlx5_ipool_dump(struct mlx5_indexed_pool *pool);\n+\n+/*\n+ * Macros for linked list based on indexed memory.\n+ * Example data structure:\n+ * struct Foo {\n+ *\tILIST_ENTRY(uint16_t) next;\n+ *\t...\n+ * }\n+ *\n+ */\n+#define ILIST_ENTRY(type)\t\t\t\t\t\t\\\n+struct {\t\t\t\t\t\t\t\t\\\n+\ttype prev; /* Index of previous element. */\t\t\t\\\n+\ttype next; /* Index of next element. */\t\t\t\t\\\n+}\n+\n+#define ILIST_INSERT(pool, head, idx, elem, field)\t\t\t\\\n+\tdo {\t\t\t\t\t\t\t\t\\\n+\t\ttypeof(elem) peer;\t\t\t\t\t\\\n+\t\tMLX5_ASSERT((elem) && (idx));\t\t\t\t\\\n+\t\t(elem)->field.next = *(head);\t\t\t\t\\\n+\t\t(elem)->field.prev = 0;\t\t\t\t\t\\\n+\t\tif (*(head)) {\t\t\t\t\t\t\\\n+\t\t\t(peer) = mlx5_ipool_get(pool, *(head));\t\t\\\n+\t\t\tif (peer)\t\t\t\t\t\\\n+\t\t\t\t(peer)->field.prev = (idx);\t\t\\\n+\t\t}\t\t\t\t\t\t\t\\\n+\t\t*(head) = (idx);\t\t\t\t\t\\\n+\t} while (0)\n+\n+#define ILIST_REMOVE(pool, head, idx, elem, field)\t\t\t\\\n+\tdo {\t\t\t\t\t\t\t\t\\\n+\t\ttypeof(elem) peer;\t\t\t\t\t\\\n+\t\tMLX5_ASSERT(elem);\t\t\t\t\t\\\n+\t\tMLX5_ASSERT(head);\t\t\t\t\t\\\n+\t\tif ((elem)->field.prev) {\t\t\t\t\\\n+\t\t\t(peer) = mlx5_ipool_get\t\t\t\t\\\n+\t\t\t\t (pool, (elem)->field.prev);\t\t\\\n+\t\t\tif (peer)\t\t\t\t\t\\\n+\t\t\t\t(peer)->field.next = (elem)->field.next;\\\n+\t\t}\t\t\t\t\t\t\t\\\n+\t\tif ((elem)->field.next) {\t\t\t\t\\\n+\t\t\t(peer) = mlx5_ipool_get\t\t\t\t\\\n+\t\t\t\t (pool, (elem)->field.next);\t\t\\\n+\t\t\tif (peer)\t\t\t\t\t\\\n+\t\t\t\t(peer)->field.prev = (elem)->field.prev;\\\n+\t\t}\t\t\t\t\t\t\t\\\n+\t\tif (*(head) == (idx))\t\t\t\t\t\\\n+\t\t\t*(head) = (elem)->field.next;\t\t\t\\\n+\t} while (0)\n+\n+#define ILIST_FOREACH(pool, head, idx, elem, field)\t\t\t\\\n+\tfor ((idx) = (head), (elem) =\t\t\t\t\t\\\n+\t     (idx) ? 
mlx5_ipool_get(pool, (idx)) : NULL; (elem);\t\\\n+\t     idx = (elem)->field.next, (elem) =\t\t\t\t\\\n+\t     (idx) ? mlx5_ipool_get(pool, idx) : NULL)\n+\n+/* Single index list. */\n+#define SILIST_ENTRY(type)\t\t\t\t\t\t\\\n+struct {\t\t\t\t\t\t\t\t\\\n+\ttype next; /* Index of next element. */\t\t\t\t\\\n+}\n+\n+#define SILIST_INSERT(head, idx, elem, field)\t\t\t\t\\\n+\tdo {\t\t\t\t\t\t\t\t\\\n+\t\tMLX5_ASSERT((elem) && (idx));\t\t\t\t\\\n+\t\t(elem)->field.next = *(head);\t\t\t\t\\\n+\t\t*(head) = (idx);\t\t\t\t\t\\\n+\t} while (0)\n+\n+#define SILIST_FOREACH(pool, head, idx, elem, field)\t\t\t\\\n+\tfor ((idx) = (head), (elem) =\t\t\t\t\t\\\n+\t     (idx) ? mlx5_ipool_get(pool, (idx)) : NULL; (elem);\t\\\n+\t     idx = (elem)->field.next, (elem) =\t\t\t\t\\\n+\t     (idx) ? mlx5_ipool_get(pool, idx) : NULL)\n+\n #endif /* RTE_PMD_MLX5_UTILS_H_ */\n",
    "prefixes": [
        "01/10"
    ]
}
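
For readers of the "content" and "diff" fields above: the patch adds an indexed memory pool whose entries are referenced by a 32-bit index (trunk index plus offset within the trunk, with 0 reserved as invalid) instead of a 64-bit pointer. Below is a usage sketch based on the declarations added to mlx5_utils.h in the diff; the "struct foo" type, the field values and the pool name are illustrative assumptions, not part of the patch.

#include <stdint.h>
#include "mlx5_utils.h"

struct foo {
    uint32_t a;
    uint32_t b;
};

/* One pool serves one fixed entry size; trunk_size must be a power of 2
 * (or 0 to fall back to MLX5_IPOOL_DEFAULT_TRUNK_SIZE). */
static struct mlx5_indexed_pool_config foo_cfg = {
    .size = sizeof(struct foo),
    .trunk_size = 64,
    .need_lock = 1,          /* serialize access from multiple threads */
    .type = "foo_ipool",
    /* .malloc/.free left NULL: rte_malloc_socket()/rte_free() are used. */
};

static void foo_ipool_example(void)
{
    struct mlx5_indexed_pool *pool = mlx5_ipool_create(&foo_cfg);
    struct foo *f;
    uint32_t idx;

    if (!pool)
        return;
    /* Allocate one zeroed entry; only the 32-bit index has to be saved. */
    f = mlx5_ipool_zmalloc(pool, &idx);
    if (f) {
        f->a = 1;
        /* Translate the saved index back into a pointer when needed. */
        f = mlx5_ipool_get(pool, idx);
        f->b = 2;
        mlx5_ipool_free(pool, idx);
    }
    mlx5_ipool_destroy(pool);
}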