From patchwork Tue Oct 19 10:08:40 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Rybchenko X-Patchwork-Id: 102163 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 38674A0C43; Tue, 19 Oct 2021 12:08:59 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 20C3E41103; Tue, 19 Oct 2021 12:08:57 +0200 (CEST) Received: from shelob.oktetlabs.ru (shelob.oktetlabs.ru [91.220.146.113]) by mails.dpdk.org (Postfix) with ESMTP id 6490C41103 for ; Tue, 19 Oct 2021 12:08:56 +0200 (CEST) Received: by shelob.oktetlabs.ru (Postfix, from userid 122) id 2D0987F6FD; Tue, 19 Oct 2021 13:08:56 +0300 (MSK) X-Spam-Checker-Version: SpamAssassin 3.4.2 (2018-09-13) on shelob.oktetlabs.ru X-Spam-Level: X-Spam-Status: No, score=0.8 required=5.0 tests=ALL_TRUSTED, DKIM_ADSP_DISCARD autolearn=no autolearn_force=no version=3.4.2 Received: from aros.oktetlabs.ru (aros.oktetlabs.ru [192.168.38.17]) by shelob.oktetlabs.ru (Postfix) with ESMTP id 222727F6B8; Tue, 19 Oct 2021 13:08:49 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 shelob.oktetlabs.ru 222727F6B8 Authentication-Results: shelob.oktetlabs.ru/222727F6B8; dkim=none; dkim-atps=neutral From: Andrew Rybchenko To: Olivier Matz , David Marchand Cc: dev@dpdk.org Date: Tue, 19 Oct 2021 13:08:40 +0300 Message-Id: <20211019100845.1632332-2-andrew.rybchenko@oktetlabs.ru> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211019100845.1632332-1-andrew.rybchenko@oktetlabs.ru> References: <20211018144907.1145028-1-andrew.rybchenko@oktetlabs.ru> <20211019100845.1632332-1-andrew.rybchenko@oktetlabs.ru> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v2 1/6] mempool: avoid flags documentation in the next line X-BeenThere: 
dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Move documentation into a separate line just before define. Prepare to have a bit longer flag name because of namespace prefix. Signed-off-by: Andrew Rybchenko Acked-by: Olivier Matz --- lib/mempool/rte_mempool.h | 17 +++++++++++------ 1 file changed, 11 insertions(+), 6 deletions(-) diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h index 88bcbc51ef..8ef4c8ed1e 100644 --- a/lib/mempool/rte_mempool.h +++ b/lib/mempool/rte_mempool.h @@ -250,13 +250,18 @@ struct rte_mempool { #endif } __rte_cache_aligned; +/** Spreading among memory channels not required. */ #define MEMPOOL_F_NO_SPREAD 0x0001 - /**< Spreading among memory channels not required. */ -#define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/ -#define MEMPOOL_F_SP_PUT 0x0004 /**< Default put is "single-producer".*/ -#define MEMPOOL_F_SC_GET 0x0008 /**< Default get is "single-consumer".*/ -#define MEMPOOL_F_POOL_CREATED 0x0010 /**< Internal: pool is created. */ -#define MEMPOOL_F_NO_IOVA_CONTIG 0x0020 /**< Don't need IOVA contiguous objs. */ +/** Do not align objects on cache lines. */ +#define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 +/** Default put is "single-producer". */ +#define MEMPOOL_F_SP_PUT 0x0004 +/** Default get is "single-consumer". */ +#define MEMPOOL_F_SC_GET 0x0008 +/** Internal: pool is created. */ +#define MEMPOOL_F_POOL_CREATED 0x0010 +/** Don't need IOVA contiguous objects. */ +#define MEMPOOL_F_NO_IOVA_CONTIG 0x0020 /** * @internal When debug is enabled, store some statistics. 
From patchwork Tue Oct 19 10:08:41 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Rybchenko X-Patchwork-Id: 102164 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D16B2A0C43; Tue, 19 Oct 2021 12:09:06 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 8C9064111E; Tue, 19 Oct 2021 12:09:02 +0200 (CEST) Received: from shelob.oktetlabs.ru (shelob.oktetlabs.ru [91.220.146.113]) by mails.dpdk.org (Postfix) with ESMTP id 0BA674003E for ; Tue, 19 Oct 2021 12:09:01 +0200 (CEST) Received: by shelob.oktetlabs.ru (Postfix, from userid 122) id D155E7F6B8; Tue, 19 Oct 2021 13:09:00 +0300 (MSK) X-Spam-Checker-Version: SpamAssassin 3.4.2 (2018-09-13) on shelob.oktetlabs.ru X-Spam-Level: X-Spam-Status: No, score=0.8 required=5.0 tests=ALL_TRUSTED, DKIM_ADSP_DISCARD autolearn=no autolearn_force=no version=3.4.2 Received: from aros.oktetlabs.ru (aros.oktetlabs.ru [192.168.38.17]) by shelob.oktetlabs.ru (Postfix) with ESMTP id 644307F6C2; Tue, 19 Oct 2021 13:08:49 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 shelob.oktetlabs.ru 644307F6C2 Authentication-Results: shelob.oktetlabs.ru/644307F6C2; dkim=none; dkim-atps=neutral From: Andrew Rybchenko To: Olivier Matz , David Marchand , Maryam Tahhan , Reshma Pattan , Xiaoyun Li , Pavan Nikhilesh , Shijith Thotton , Jerin Jacob , "Artem V. 
Andreev" , Nithin Dabilpuram , Kiran Kumar K , Maciej Czekaj , Maxime Coquelin , Chenbo Xia Cc: dev@dpdk.org Date: Tue, 19 Oct 2021 13:08:41 +0300 Message-Id: <20211019100845.1632332-3-andrew.rybchenko@oktetlabs.ru> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211019100845.1632332-1-andrew.rybchenko@oktetlabs.ru> References: <20211018144907.1145028-1-andrew.rybchenko@oktetlabs.ru> <20211019100845.1632332-1-andrew.rybchenko@oktetlabs.ru> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v2 2/6] mempool: add namespace prefix to flags X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Fix the mempool flags namespace by adding an RTE_ prefix to the name. The old flags remain usable, to be deprecated in the future. Signed-off-by: Andrew Rybchenko Acked-by: Olivier Matz --- app/proc-info/main.c | 15 +++--- app/test-pmd/parameters.c | 4 +- app/test/test_mempool.c | 6 +-- doc/guides/rel_notes/release_21_11.rst | 3 ++ drivers/event/cnxk/cnxk_tim_evdev.c | 2 +- drivers/event/octeontx/timvf_evdev.c | 2 +- drivers/event/octeontx2/otx2_tim_evdev.c | 2 +- drivers/mempool/bucket/rte_mempool_bucket.c | 8 +-- drivers/mempool/ring/rte_mempool_ring.c | 4 +- drivers/net/octeontx2/otx2_ethdev.c | 4 +- drivers/net/thunderx/nicvf_ethdev.c | 2 +- lib/mempool/rte_mempool.c | 40 +++++++-------- lib/mempool/rte_mempool.h | 55 +++++++++++++++------ lib/mempool/rte_mempool_ops.c | 2 +- lib/pdump/rte_pdump.c | 3 +- lib/vhost/iotlb.c | 4 +- 16 files changed, 94 insertions(+), 62 deletions(-) diff --git a/app/proc-info/main.c b/app/proc-info/main.c index a8e928fa9f..74d8fdc1db 100644 --- a/app/proc-info/main.c +++ b/app/proc-info/main.c @@ -1298,12 +1298,15 @@ show_mempool(char *name) "\t -- No IOVA config (%c)\n", ptr->name, ptr->socket_id, - (flags & MEMPOOL_F_NO_SPREAD) ? 'y' : 'n', - (flags & MEMPOOL_F_NO_CACHE_ALIGN) ?
'y' : 'n', - (flags & MEMPOOL_F_SP_PUT) ? 'y' : 'n', - (flags & MEMPOOL_F_SC_GET) ? 'y' : 'n', - (flags & MEMPOOL_F_POOL_CREATED) ? 'y' : 'n', - (flags & MEMPOOL_F_NO_IOVA_CONTIG) ? 'y' : 'n'); + (flags & RTE_MEMPOOL_F_NO_SPREAD) ? 'y' : 'n', + (flags & RTE_MEMPOOL_F_NO_CACHE_ALIGN) ? + 'y' : 'n', + (flags & RTE_MEMPOOL_F_SP_PUT) ? 'y' : 'n', + (flags & RTE_MEMPOOL_F_SC_GET) ? 'y' : 'n', + (flags & RTE_MEMPOOL_F_POOL_CREATED) ? + 'y' : 'n', + (flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG) ? + 'y' : 'n'); printf(" - Size %u Cache %u element %u\n" " - header %u trailer %u\n" " - private data size %u\n", diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c index 3f94a82e32..b69897ef00 100644 --- a/app/test-pmd/parameters.c +++ b/app/test-pmd/parameters.c @@ -1396,7 +1396,7 @@ launch_args_parse(int argc, char** argv) "noisy-lkup-num-reads-writes must be >= 0\n"); } if (!strcmp(lgopts[opt_idx].name, "no-iova-contig")) - mempool_flags = MEMPOOL_F_NO_IOVA_CONTIG; + mempool_flags = RTE_MEMPOOL_F_NO_IOVA_CONTIG; if (!strcmp(lgopts[opt_idx].name, "rx-mq-mode")) { char *end = NULL; @@ -1440,7 +1440,7 @@ launch_args_parse(int argc, char** argv) rx_mode.offloads = rx_offloads; tx_mode.offloads = tx_offloads; - if (mempool_flags & MEMPOOL_F_NO_IOVA_CONTIG && + if (mempool_flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG && mp_alloc_type != MP_ALLOC_ANON) { TESTPMD_LOG(WARNING, "cannot use no-iova-contig without " "mp-alloc=anon. 
mempool no-iova-contig is " diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c index 66bc8d86b7..ffe69e2d03 100644 --- a/app/test/test_mempool.c +++ b/app/test/test_mempool.c @@ -213,7 +213,7 @@ static int test_mempool_creation_with_unknown_flag(void) MEMPOOL_ELT_SIZE, 0, 0, NULL, NULL, NULL, NULL, - SOCKET_ID_ANY, MEMPOOL_F_NO_IOVA_CONTIG << 1); + SOCKET_ID_ANY, RTE_MEMPOOL_F_NO_IOVA_CONTIG << 1); if (mp_cov != NULL) { rte_mempool_free(mp_cov); @@ -336,8 +336,8 @@ test_mempool_sp_sc(void) my_mp_init, NULL, my_obj_init, NULL, SOCKET_ID_ANY, - MEMPOOL_F_NO_CACHE_ALIGN | MEMPOOL_F_SP_PUT | - MEMPOOL_F_SC_GET); + RTE_MEMPOOL_F_NO_CACHE_ALIGN | RTE_MEMPOOL_F_SP_PUT | + RTE_MEMPOOL_F_SC_GET); if (mp_spsc == NULL) RET_ERR(); } diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst index d5435a64aa..9a0e3832a3 100644 --- a/doc/guides/rel_notes/release_21_11.rst +++ b/doc/guides/rel_notes/release_21_11.rst @@ -221,6 +221,9 @@ API Changes removed. Its usages have been replaced by a new function ``rte_kvargs_get_with_value()``. +* mempool: The mempool flags ``MEMPOOL_F_*`` will be deprecated in the future. + Newly added flags with ``RTE_MEMPOOL_F_`` prefix should be used instead. + * net: Renamed ``s_addr`` and ``d_addr`` fields of ``rte_ether_hdr`` structure to ``src_addr`` and ``dst_addr``, respectively. diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c index 9d40e336d7..d325daed95 100644 --- a/drivers/event/cnxk/cnxk_tim_evdev.c +++ b/drivers/event/cnxk/cnxk_tim_evdev.c @@ -19,7 +19,7 @@ cnxk_tim_chnk_pool_create(struct cnxk_tim_ring *tim_ring, cache_sz /= rte_lcore_count(); /* Create chunk pool. 
*/ if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_SP_PUT) { - mp_flags = MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET; + mp_flags = RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET; plt_tim_dbg("Using single producer mode"); tim_ring->prod_type_sp = true; } diff --git a/drivers/event/octeontx/timvf_evdev.c b/drivers/event/octeontx/timvf_evdev.c index 688e9daa66..06fc53cc5b 100644 --- a/drivers/event/octeontx/timvf_evdev.c +++ b/drivers/event/octeontx/timvf_evdev.c @@ -310,7 +310,7 @@ timvf_ring_create(struct rte_event_timer_adapter *adptr) } if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_SP_PUT) { - mp_flags = MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET; + mp_flags = RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET; timvf_log_info("Using single producer mode"); } diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c index de50c4c76e..3cdc468140 100644 --- a/drivers/event/octeontx2/otx2_tim_evdev.c +++ b/drivers/event/octeontx2/otx2_tim_evdev.c @@ -81,7 +81,7 @@ tim_chnk_pool_create(struct otx2_tim_ring *tim_ring, cache_sz /= rte_lcore_count(); /* Create chunk pool. 
*/ if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_SP_PUT) { - mp_flags = MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET; + mp_flags = RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET; otx2_tim_dbg("Using single producer mode"); tim_ring->prod_type_sp = true; } diff --git a/drivers/mempool/bucket/rte_mempool_bucket.c b/drivers/mempool/bucket/rte_mempool_bucket.c index 8b9daa9782..8ff9e53007 100644 --- a/drivers/mempool/bucket/rte_mempool_bucket.c +++ b/drivers/mempool/bucket/rte_mempool_bucket.c @@ -426,7 +426,7 @@ bucket_init_per_lcore(unsigned int lcore_id, void *arg) goto error; rg_flags = RING_F_SC_DEQ; - if (mp->flags & MEMPOOL_F_SP_PUT) + if (mp->flags & RTE_MEMPOOL_F_SP_PUT) rg_flags |= RING_F_SP_ENQ; bd->adoption_buffer_rings[lcore_id] = rte_ring_create(rg_name, rte_align32pow2(mp->size + 1), mp->socket_id, rg_flags); @@ -472,7 +472,7 @@ bucket_alloc(struct rte_mempool *mp) goto no_mem_for_data; } bd->pool = mp; - if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN) + if (mp->flags & RTE_MEMPOOL_F_NO_CACHE_ALIGN) bucket_header_size = sizeof(struct bucket_header); else bucket_header_size = RTE_CACHE_LINE_SIZE; @@ -494,9 +494,9 @@ bucket_alloc(struct rte_mempool *mp) goto no_mem_for_stacks; } - if (mp->flags & MEMPOOL_F_SP_PUT) + if (mp->flags & RTE_MEMPOOL_F_SP_PUT) rg_flags |= RING_F_SP_ENQ; - if (mp->flags & MEMPOOL_F_SC_GET) + if (mp->flags & RTE_MEMPOOL_F_SC_GET) rg_flags |= RING_F_SC_DEQ; rc = snprintf(rg_name, sizeof(rg_name), RTE_MEMPOOL_MZ_FORMAT ".0", mp->name); diff --git a/drivers/mempool/ring/rte_mempool_ring.c b/drivers/mempool/ring/rte_mempool_ring.c index b1f09ff28f..4b785971c4 100644 --- a/drivers/mempool/ring/rte_mempool_ring.c +++ b/drivers/mempool/ring/rte_mempool_ring.c @@ -110,9 +110,9 @@ common_ring_alloc(struct rte_mempool *mp) { uint32_t rg_flags = 0; - if (mp->flags & MEMPOOL_F_SP_PUT) + if (mp->flags & RTE_MEMPOOL_F_SP_PUT) rg_flags |= RING_F_SP_ENQ; - if (mp->flags & MEMPOOL_F_SC_GET) + if (mp->flags & RTE_MEMPOOL_F_SC_GET) rg_flags |= RING_F_SC_DEQ; return 
ring_alloc(mp, rg_flags); diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index d576bc6989..9db62acbd0 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -1124,7 +1124,7 @@ nix_alloc_sqb_pool(int port, struct otx2_eth_txq *txq, uint16_t nb_desc) txq->sqb_pool = rte_mempool_create_empty(name, NIX_MAX_SQB, blk_sz, 0, 0, dev->node, - MEMPOOL_F_NO_SPREAD); + RTE_MEMPOOL_F_NO_SPREAD); txq->nb_sqb_bufs = nb_sqb_bufs; txq->sqes_per_sqb_log2 = (uint16_t)rte_log2_u32(sqes_per_sqb); txq->nb_sqb_bufs_adj = nb_sqb_bufs - @@ -1150,7 +1150,7 @@ nix_alloc_sqb_pool(int port, struct otx2_eth_txq *txq, uint16_t nb_desc) goto fail; } - tmp = rte_mempool_calc_obj_size(blk_sz, MEMPOOL_F_NO_SPREAD, &sz); + tmp = rte_mempool_calc_obj_size(blk_sz, RTE_MEMPOOL_F_NO_SPREAD, &sz); if (dev->sqb_size != sz.elt_size) { otx2_err("sqe pool block size is not expected %d != %d", dev->sqb_size, tmp); diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c index 5502f1ee69..7e07d381dd 100644 --- a/drivers/net/thunderx/nicvf_ethdev.c +++ b/drivers/net/thunderx/nicvf_ethdev.c @@ -1302,7 +1302,7 @@ nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx, } /* Mempool memory must be physically contiguous */ - if (mp->flags & MEMPOOL_F_NO_IOVA_CONTIG) { + if (mp->flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG) { PMD_INIT_LOG(ERR, "Mempool memory must be physically contiguous"); return -EINVAL; } diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c index 607419ccaf..19210c702c 100644 --- a/lib/mempool/rte_mempool.c +++ b/lib/mempool/rte_mempool.c @@ -216,7 +216,7 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags, sz = (sz != NULL) ? 
sz : &lsz; sz->header_size = sizeof(struct rte_mempool_objhdr); - if ((flags & MEMPOOL_F_NO_CACHE_ALIGN) == 0) + if ((flags & RTE_MEMPOOL_F_NO_CACHE_ALIGN) == 0) sz->header_size = RTE_ALIGN_CEIL(sz->header_size, RTE_MEMPOOL_ALIGN); @@ -230,7 +230,7 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags, sz->elt_size = RTE_ALIGN_CEIL(elt_size, sizeof(uint64_t)); /* expand trailer to next cache line */ - if ((flags & MEMPOOL_F_NO_CACHE_ALIGN) == 0) { + if ((flags & RTE_MEMPOOL_F_NO_CACHE_ALIGN) == 0) { sz->total_size = sz->header_size + sz->elt_size + sz->trailer_size; sz->trailer_size += ((RTE_MEMPOOL_ALIGN - @@ -242,7 +242,7 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags, * increase trailer to add padding between objects in order to * spread them across memory channels/ranks */ - if ((flags & MEMPOOL_F_NO_SPREAD) == 0) { + if ((flags & RTE_MEMPOOL_F_NO_SPREAD) == 0) { unsigned new_size; new_size = arch_mem_object_align (sz->header_size + sz->elt_size + sz->trailer_size); @@ -294,11 +294,11 @@ mempool_ops_alloc_once(struct rte_mempool *mp) int ret; /* create the internal ring if not already done */ - if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) { + if ((mp->flags & RTE_MEMPOOL_F_POOL_CREATED) == 0) { ret = rte_mempool_ops_alloc(mp); if (ret != 0) return ret; - mp->flags |= MEMPOOL_F_POOL_CREATED; + mp->flags |= RTE_MEMPOOL_F_POOL_CREATED; } return 0; } @@ -336,7 +336,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr, memhdr->free_cb = free_cb; memhdr->opaque = opaque; - if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN) + if (mp->flags & RTE_MEMPOOL_F_NO_CACHE_ALIGN) off = RTE_PTR_ALIGN_CEIL(vaddr, 8) - vaddr; else off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_MEMPOOL_ALIGN) - vaddr; @@ -393,7 +393,7 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr, size_t off, phys_len; int ret, cnt = 0; - if (mp->flags & MEMPOOL_F_NO_IOVA_CONTIG) + if (mp->flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG) return rte_mempool_populate_iova(mp, addr, 
RTE_BAD_IOVA, len, free_cb, opaque); @@ -450,7 +450,7 @@ rte_mempool_get_page_size(struct rte_mempool *mp, size_t *pg_sz) if (ret < 0) return -EINVAL; alloc_in_ext_mem = (ret == 1); - need_iova_contig_obj = !(mp->flags & MEMPOOL_F_NO_IOVA_CONTIG); + need_iova_contig_obj = !(mp->flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG); if (!need_iova_contig_obj) *pg_sz = 0; @@ -527,7 +527,7 @@ rte_mempool_populate_default(struct rte_mempool *mp) * reserve space in smaller chunks. */ - need_iova_contig_obj = !(mp->flags & MEMPOOL_F_NO_IOVA_CONTIG); + need_iova_contig_obj = !(mp->flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG); ret = rte_mempool_get_page_size(mp, &pg_sz); if (ret < 0) return ret; @@ -777,12 +777,12 @@ rte_mempool_cache_free(struct rte_mempool_cache *cache) rte_free(cache); } -#define MEMPOOL_KNOWN_FLAGS (MEMPOOL_F_NO_SPREAD \ - | MEMPOOL_F_NO_CACHE_ALIGN \ - | MEMPOOL_F_SP_PUT \ - | MEMPOOL_F_SC_GET \ - | MEMPOOL_F_POOL_CREATED \ - | MEMPOOL_F_NO_IOVA_CONTIG \ +#define MEMPOOL_KNOWN_FLAGS (RTE_MEMPOOL_F_NO_SPREAD \ + | RTE_MEMPOOL_F_NO_CACHE_ALIGN \ + | RTE_MEMPOOL_F_SP_PUT \ + | RTE_MEMPOOL_F_SC_GET \ + | RTE_MEMPOOL_F_POOL_CREATED \ + | RTE_MEMPOOL_F_NO_IOVA_CONTIG \ ) /* create an empty mempool */ struct rte_mempool * @@ -835,8 +835,8 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size, } /* "no cache align" imply "no spread" */ - if (flags & MEMPOOL_F_NO_CACHE_ALIGN) - flags |= MEMPOOL_F_NO_SPREAD; + if (flags & RTE_MEMPOOL_F_NO_CACHE_ALIGN) + flags |= RTE_MEMPOOL_F_NO_SPREAD; /* calculate mempool object sizes. */ if (!rte_mempool_calc_obj_size(elt_size, flags, &objsz)) { @@ -948,11 +948,11 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size, * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to * set the correct index into the table of ops structs. 
*/ - if ((flags & MEMPOOL_F_SP_PUT) && (flags & MEMPOOL_F_SC_GET)) + if ((flags & RTE_MEMPOOL_F_SP_PUT) && (flags & RTE_MEMPOOL_F_SC_GET)) ret = rte_mempool_set_ops_byname(mp, "ring_sp_sc", NULL); - else if (flags & MEMPOOL_F_SP_PUT) + else if (flags & RTE_MEMPOOL_F_SP_PUT) ret = rte_mempool_set_ops_byname(mp, "ring_sp_mc", NULL); - else if (flags & MEMPOOL_F_SC_GET) + else if (flags & RTE_MEMPOOL_F_SC_GET) ret = rte_mempool_set_ops_byname(mp, "ring_mp_sc", NULL); else ret = rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL); diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h index 8ef4c8ed1e..d4bcb009fa 100644 --- a/lib/mempool/rte_mempool.h +++ b/lib/mempool/rte_mempool.h @@ -251,17 +251,42 @@ struct rte_mempool { } __rte_cache_aligned; /** Spreading among memory channels not required. */ -#define MEMPOOL_F_NO_SPREAD 0x0001 +#define RTE_MEMPOOL_F_NO_SPREAD 0x0001 +/** + * Backward compatibility synonym for RTE_MEMPOOL_F_NO_SPREAD. + * To be deprecated. + */ +#define MEMPOOL_F_NO_SPREAD RTE_MEMPOOL_F_NO_SPREAD /** Do not align objects on cache lines. */ -#define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 +#define RTE_MEMPOOL_F_NO_CACHE_ALIGN 0x0002 +/** + * Backward compatibility synonym for RTE_MEMPOOL_F_NO_CACHE_ALIGN. + * To be deprecated. + */ +#define MEMPOOL_F_NO_CACHE_ALIGN RTE_MEMPOOL_F_NO_CACHE_ALIGN /** Default put is "single-producer". */ -#define MEMPOOL_F_SP_PUT 0x0004 +#define RTE_MEMPOOL_F_SP_PUT 0x0004 +/** + * Backward compatibility synonym for RTE_MEMPOOL_F_SP_PUT. + * To be deprecated. + */ +#define MEMPOOL_F_SP_PUT RTE_MEMPOOL_F_SP_PUT /** Default get is "single-consumer". */ -#define MEMPOOL_F_SC_GET 0x0008 +#define RTE_MEMPOOL_F_SC_GET 0x0008 +/** + * Backward compatibility synonym for RTE_MEMPOOL_F_SC_GET. + * To be deprecated. + */ +#define MEMPOOL_F_SC_GET RTE_MEMPOOL_F_SC_GET /** Internal: pool is created. */ -#define MEMPOOL_F_POOL_CREATED 0x0010 +#define RTE_MEMPOOL_F_POOL_CREATED 0x0010 /** Don't need IOVA contiguous objects. 
*/ -#define MEMPOOL_F_NO_IOVA_CONTIG 0x0020 +#define RTE_MEMPOOL_F_NO_IOVA_CONTIG 0x0020 +/** + * Backward compatibility synonym for RTE_MEMPOOL_F_NO_IOVA_CONTIG. + * To be deprecated. + */ +#define MEMPOOL_F_NO_IOVA_CONTIG RTE_MEMPOOL_F_NO_IOVA_CONTIG /** * @internal When debug is enabled, store some statistics. @@ -424,9 +449,9 @@ typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp); * Calculate memory size required to store given number of objects. * * If mempool objects are not required to be IOVA-contiguous - * (the flag MEMPOOL_F_NO_IOVA_CONTIG is set), min_chunk_size defines + * (the flag RTE_MEMPOOL_F_NO_IOVA_CONTIG is set), min_chunk_size defines * virtually contiguous chunk size. Otherwise, if mempool objects must - * be IOVA-contiguous (the flag MEMPOOL_F_NO_IOVA_CONTIG is clear), + * be IOVA-contiguous (the flag RTE_MEMPOOL_F_NO_IOVA_CONTIG is clear), * min_chunk_size defines IOVA-contiguous chunk size. * * @param[in] mp @@ -974,22 +999,22 @@ typedef void (rte_mempool_ctor_t)(struct rte_mempool *, void *); * constraint for the reserved zone. * @param flags * The *flags* arguments is an OR of following flags: - * - MEMPOOL_F_NO_SPREAD: By default, objects addresses are spread + * - RTE_MEMPOOL_F_NO_SPREAD: By default, objects addresses are spread * between channels in RAM: the pool allocator will add padding * between objects depending on the hardware configuration. See * Memory alignment constraints for details. If this flag is set, * the allocator will just align them to a cache line. - * - MEMPOOL_F_NO_CACHE_ALIGN: By default, the returned objects are + * - RTE_MEMPOOL_F_NO_CACHE_ALIGN: By default, the returned objects are * cache-aligned. This flag removes this constraint, and no * padding will be present between objects. This flag implies - * MEMPOOL_F_NO_SPREAD. - * - MEMPOOL_F_SP_PUT: If this flag is set, the default behavior + * RTE_MEMPOOL_F_NO_SPREAD. 
+ * - RTE_MEMPOOL_F_SP_PUT: If this flag is set, the default behavior * when using rte_mempool_put() or rte_mempool_put_bulk() is * "single-producer". Otherwise, it is "multi-producers". - * - MEMPOOL_F_SC_GET: If this flag is set, the default behavior + * - RTE_MEMPOOL_F_SC_GET: If this flag is set, the default behavior * when using rte_mempool_get() or rte_mempool_get_bulk() is * "single-consumer". Otherwise, it is "multi-consumers". - * - MEMPOOL_F_NO_IOVA_CONTIG: If set, allocated objects won't + * - RTE_MEMPOOL_F_NO_IOVA_CONTIG: If set, allocated objects won't * necessarily be contiguous in IO memory. * @return * The pointer to the new allocated mempool, on success. NULL on error @@ -1676,7 +1701,7 @@ rte_mempool_empty(const struct rte_mempool *mp) * A pointer (virtual address) to the element of the pool. * @return * The IO address of the elt element. - * If the mempool was created with MEMPOOL_F_NO_IOVA_CONTIG, the + * If the mempool was created with RTE_MEMPOOL_F_NO_IOVA_CONTIG, the * returned value is RTE_BAD_IOVA. */ static inline rte_iova_t diff --git a/lib/mempool/rte_mempool_ops.c b/lib/mempool/rte_mempool_ops.c index 5e22667787..2d36dee8f0 100644 --- a/lib/mempool/rte_mempool_ops.c +++ b/lib/mempool/rte_mempool_ops.c @@ -168,7 +168,7 @@ rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name, unsigned i; /* too late, the mempool is already populated. 
*/ - if (mp->flags & MEMPOOL_F_POOL_CREATED) + if (mp->flags & RTE_MEMPOOL_F_POOL_CREATED) return -EEXIST; for (i = 0; i < rte_mempool_ops_table.num_ops; i++) { diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c index 382217bc15..46a87e2339 100644 --- a/lib/pdump/rte_pdump.c +++ b/lib/pdump/rte_pdump.c @@ -371,7 +371,8 @@ pdump_validate_ring_mp(struct rte_ring *ring, struct rte_mempool *mp) rte_errno = EINVAL; return -1; } - if (mp->flags & MEMPOOL_F_SP_PUT || mp->flags & MEMPOOL_F_SC_GET) { + if (mp->flags & RTE_MEMPOOL_F_SP_PUT || + mp->flags & RTE_MEMPOOL_F_SC_GET) { PDUMP_LOG(ERR, "mempool with SP or SC set not valid for pdump," "must have MP and MC set\n"); diff --git a/lib/vhost/iotlb.c b/lib/vhost/iotlb.c index e4a445e709..82bdb84526 100644 --- a/lib/vhost/iotlb.c +++ b/lib/vhost/iotlb.c @@ -321,8 +321,8 @@ vhost_user_iotlb_init(struct virtio_net *dev, int vq_index) vq->iotlb_pool = rte_mempool_create(pool_name, IOTLB_CACHE_SIZE, sizeof(struct vhost_iotlb_entry), 0, 0, 0, NULL, NULL, NULL, socket, - MEMPOOL_F_NO_CACHE_ALIGN | - MEMPOOL_F_SP_PUT); + RTE_MEMPOOL_F_NO_CACHE_ALIGN | + RTE_MEMPOOL_F_SP_PUT); if (!vq->iotlb_pool) { VHOST_LOG_CONFIG(ERR, "Failed to create IOTLB cache pool (%s)\n", From patchwork Tue Oct 19 10:08:42 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrew Rybchenko X-Patchwork-Id: 102165 X-Patchwork-Delegate: david.marchand@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 764BAA0C43; Tue, 19 Oct 2021 12:09:13 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 9A44540E2D; Tue, 19 Oct 2021 12:09:07 +0200 (CEST) Received: from shelob.oktetlabs.ru (shelob.oktetlabs.ru [91.220.146.113]) by mails.dpdk.org (Postfix) with ESMTP id 4A0E141104 for ; 
Tue, 19 Oct 2021 12:09:06 +0200 (CEST) Received: by shelob.oktetlabs.ru (Postfix, from userid 122) id 114DB7F6FE; Tue, 19 Oct 2021 13:09:06 +0300 (MSK) X-Spam-Checker-Version: SpamAssassin 3.4.2 (2018-09-13) on shelob.oktetlabs.ru X-Spam-Level: X-Spam-Status: No, score=0.8 required=5.0 tests=ALL_TRUSTED, DKIM_ADSP_DISCARD autolearn=no autolearn_force=no version=3.4.2 Received: from aros.oktetlabs.ru (aros.oktetlabs.ru [192.168.38.17]) by shelob.oktetlabs.ru (Postfix) with ESMTP id A33987F6F8; Tue, 19 Oct 2021 13:08:49 +0300 (MSK) DKIM-Filter: OpenDKIM Filter v2.11.0 shelob.oktetlabs.ru A33987F6F8 Authentication-Results: shelob.oktetlabs.ru/A33987F6F8; dkim=none; dkim-atps=neutral From: Andrew Rybchenko To: Olivier Matz , David Marchand , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K , Sunil Kumar Kori , Satha Rao , Harman Kalra , Anoob Joseph Cc: dev@dpdk.org Date: Tue, 19 Oct 2021 13:08:42 +0300 Message-Id: <20211019100845.1632332-4-andrew.rybchenko@oktetlabs.ru> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20211019100845.1632332-1-andrew.rybchenko@oktetlabs.ru> References: <20211018144907.1145028-1-andrew.rybchenko@oktetlabs.ru> <20211019100845.1632332-1-andrew.rybchenko@oktetlabs.ru> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v2 3/6] mempool: add namespace to internal but still visible API X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add RTE_ prefix to internal API defined in public header. Use the prefix instead of double underscore. Use uppercase for macros in the case of name conflict. 
Signed-off-by: Andrew Rybchenko Acked-by: Olivier Matz --- drivers/event/octeontx/ssovf_worker.h | 2 +- drivers/net/cnxk/cn10k_rx.h | 12 ++-- drivers/net/cnxk/cn10k_tx.h | 30 ++++----- drivers/net/cnxk/cn9k_rx.h | 12 ++-- drivers/net/cnxk/cn9k_tx.h | 26 ++++---- drivers/net/octeontx/octeontx_rxtx.h | 4 +- drivers/net/octeontx2/otx2_ethdev_sec_tx.h | 2 +- drivers/net/octeontx2/otx2_rx.c | 8 +-- drivers/net/octeontx2/otx2_rx.h | 4 +- drivers/net/octeontx2/otx2_tx.c | 16 ++--- drivers/net/octeontx2/otx2_tx.h | 4 +- lib/mempool/rte_mempool.c | 8 +-- lib/mempool/rte_mempool.h | 77 +++++++++++----------- 13 files changed, 103 insertions(+), 102 deletions(-) diff --git a/drivers/event/octeontx/ssovf_worker.h b/drivers/event/octeontx/ssovf_worker.h index f609b296ed..ba9e1cd0fa 100644 --- a/drivers/event/octeontx/ssovf_worker.h +++ b/drivers/event/octeontx/ssovf_worker.h @@ -83,7 +83,7 @@ ssovf_octeontx_wqe_xtract_mseg(octtx_wqe_t *wqe, mbuf->data_off = sizeof(octtx_pki_buflink_t); - __mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1); + RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1); if (nb_segs == 1) mbuf->data_len = bytes_left; else diff --git a/drivers/net/cnxk/cn10k_rx.h b/drivers/net/cnxk/cn10k_rx.h index fcc451aa36..6b40a9d0b5 100644 --- a/drivers/net/cnxk/cn10k_rx.h +++ b/drivers/net/cnxk/cn10k_rx.h @@ -276,7 +276,7 @@ nix_cqe_xtract_mseg(const union nix_rx_parse_u *rx, struct rte_mbuf *mbuf, mbuf->next = ((struct rte_mbuf *)*iova_list) - 1; mbuf = mbuf->next; - __mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1); + RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1); mbuf->data_len = sg & 0xFFFF; sg = sg >> 16; @@ -306,7 +306,7 @@ cn10k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag, uint64_t ol_flags = 0; /* Mark mempool obj as "get" as it is alloc'ed by NIX */ - __mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1); + RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1); if (flag & 
NIX_RX_OFFLOAD_PTYPE_F) mbuf->packet_type = nix_ptype_get(lookup_mem, w1); @@ -905,10 +905,10 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts, roc_prefetch_store_keep(mbuf3); /* Mark mempool obj as "get" as it is alloc'ed by NIX */ - __mempool_check_cookies(mbuf0->pool, (void **)&mbuf0, 1, 1); - __mempool_check_cookies(mbuf1->pool, (void **)&mbuf1, 1, 1); - __mempool_check_cookies(mbuf2->pool, (void **)&mbuf2, 1, 1); - __mempool_check_cookies(mbuf3->pool, (void **)&mbuf3, 1, 1); + RTE_MEMPOOL_CHECK_COOKIES(mbuf0->pool, (void **)&mbuf0, 1, 1); + RTE_MEMPOOL_CHECK_COOKIES(mbuf1->pool, (void **)&mbuf1, 1, 1); + RTE_MEMPOOL_CHECK_COOKIES(mbuf2->pool, (void **)&mbuf2, 1, 1); + RTE_MEMPOOL_CHECK_COOKIES(mbuf3->pool, (void **)&mbuf3, 1, 1); packets += NIX_DESCS_PER_LOOP; diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h index c6f349b352..0fd877f4ec 100644 --- a/drivers/net/cnxk/cn10k_tx.h +++ b/drivers/net/cnxk/cn10k_tx.h @@ -677,7 +677,7 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags, } /* Mark mempool object as "put" since it is freed by NIX */ if (!send_hdr->w0.df) - __mempool_check_cookies(m->pool, (void **)&m, 1, 0); + RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0); } else { sg->seg1_size = m->data_len; *(rte_iova_t *)(sg + 1) = rte_mbuf_data_iova(m); @@ -789,7 +789,7 @@ cn10k_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags) /* Mark mempool object as "put" since it is freed by NIX */ #ifdef RTE_LIBRTE_MEMPOOL_DEBUG if (!(sg_u & (1ULL << 55))) - __mempool_check_cookies(m->pool, (void **)&m, 1, 0); + RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0); rte_io_wmb(); #endif m = m_next; @@ -808,7 +808,7 @@ cn10k_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags) */ #ifdef RTE_LIBRTE_MEMPOOL_DEBUG if (!(sg_u & (1ULL << (i + 55)))) - __mempool_check_cookies(m->pool, (void **)&m, 1, 0); + RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void 
**)&m, 1, 0); #endif slist++; i++; @@ -1177,7 +1177,7 @@ cn10k_nix_prepare_mseg_vec_list(struct rte_mbuf *m, uint64_t *cmd, /* Mark mempool object as "put" since it is freed by NIX */ #ifdef RTE_LIBRTE_MEMPOOL_DEBUG if (!(sg_u & (1ULL << 55))) - __mempool_check_cookies(m->pool, (void **)&m, 1, 0); + RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0); rte_io_wmb(); #endif @@ -1194,7 +1194,7 @@ cn10k_nix_prepare_mseg_vec_list(struct rte_mbuf *m, uint64_t *cmd, */ #ifdef RTE_LIBRTE_MEMPOOL_DEBUG if (!(sg_u & (1ULL << (i + 55)))) - __mempool_check_cookies(m->pool, (void **)&m, 1, 0); + RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0); rte_io_wmb(); #endif slist++; @@ -1235,7 +1235,7 @@ cn10k_nix_prepare_mseg_vec(struct rte_mbuf *m, uint64_t *cmd, uint64x2_t *cmd0, #ifdef RTE_LIBRTE_MEMPOOL_DEBUG sg.u = vgetq_lane_u64(cmd1[0], 0); if (!(sg.u & (1ULL << 55))) - __mempool_check_cookies(m->pool, (void **)&m, 1, 0); + RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0); rte_io_wmb(); #endif return; @@ -1425,7 +1425,7 @@ cn10k_nix_xmit_store(struct rte_mbuf *mbuf, uint8_t segdw, uintptr_t laddr, #ifdef RTE_LIBRTE_MEMPOOL_DEBUG sg.u = vgetq_lane_u64(cmd1, 0); if (!(sg.u & (1ULL << 55))) - __mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, + RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 0); rte_io_wmb(); #endif @@ -2352,28 +2352,28 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts, if (cnxk_nix_prefree_seg((struct rte_mbuf *)mbuf0)) vsetq_lane_u64(0x80000, xmask01, 0); else - __mempool_check_cookies( + RTE_MEMPOOL_CHECK_COOKIES( ((struct rte_mbuf *)mbuf0)->pool, (void **)&mbuf0, 1, 0); if (cnxk_nix_prefree_seg((struct rte_mbuf *)mbuf1)) vsetq_lane_u64(0x80000, xmask01, 1); else - __mempool_check_cookies( + RTE_MEMPOOL_CHECK_COOKIES( ((struct rte_mbuf *)mbuf1)->pool, (void **)&mbuf1, 1, 0); if (cnxk_nix_prefree_seg((struct rte_mbuf *)mbuf2)) vsetq_lane_u64(0x80000, xmask23, 0); else - __mempool_check_cookies( + 
RTE_MEMPOOL_CHECK_COOKIES( ((struct rte_mbuf *)mbuf2)->pool, (void **)&mbuf2, 1, 0); if (cnxk_nix_prefree_seg((struct rte_mbuf *)mbuf3)) vsetq_lane_u64(0x80000, xmask23, 1); else - __mempool_check_cookies( + RTE_MEMPOOL_CHECK_COOKIES( ((struct rte_mbuf *)mbuf3)->pool, (void **)&mbuf3, 1, 0); senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01); @@ -2389,19 +2389,19 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts, /* Mark mempool object as "put" since * it is freed by NIX */ - __mempool_check_cookies( + RTE_MEMPOOL_CHECK_COOKIES( ((struct rte_mbuf *)mbuf0)->pool, (void **)&mbuf0, 1, 0); - __mempool_check_cookies( + RTE_MEMPOOL_CHECK_COOKIES( ((struct rte_mbuf *)mbuf1)->pool, (void **)&mbuf1, 1, 0); - __mempool_check_cookies( + RTE_MEMPOOL_CHECK_COOKIES( ((struct rte_mbuf *)mbuf2)->pool, (void **)&mbuf2, 1, 0); - __mempool_check_cookies( + RTE_MEMPOOL_CHECK_COOKIES( ((struct rte_mbuf *)mbuf3)->pool, (void **)&mbuf3, 1, 0); } diff --git a/drivers/net/cnxk/cn9k_rx.h b/drivers/net/cnxk/cn9k_rx.h index 7ab415a194..ba3c3668f7 100644 --- a/drivers/net/cnxk/cn9k_rx.h +++ b/drivers/net/cnxk/cn9k_rx.h @@ -151,7 +151,7 @@ nix_cqe_xtract_mseg(const union nix_rx_parse_u *rx, struct rte_mbuf *mbuf, mbuf->next = ((struct rte_mbuf *)*iova_list) - 1; mbuf = mbuf->next; - __mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1); + RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1); mbuf->data_len = sg & 0xFFFF; sg = sg >> 16; @@ -288,7 +288,7 @@ cn9k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag, uint64_t ol_flags = 0; /* Mark mempool obj as "get" as it is alloc'ed by NIX */ - __mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1); + RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1); if (flag & NIX_RX_OFFLOAD_PTYPE_F) packet_type = nix_ptype_get(lookup_mem, w1); @@ -757,10 +757,10 @@ cn9k_nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts, roc_prefetch_store_keep(mbuf3); /* Mark mempool obj as "get" as it is 
alloc'ed by NIX */ - __mempool_check_cookies(mbuf0->pool, (void **)&mbuf0, 1, 1); - __mempool_check_cookies(mbuf1->pool, (void **)&mbuf1, 1, 1); - __mempool_check_cookies(mbuf2->pool, (void **)&mbuf2, 1, 1); - __mempool_check_cookies(mbuf3->pool, (void **)&mbuf3, 1, 1); + RTE_MEMPOOL_CHECK_COOKIES(mbuf0->pool, (void **)&mbuf0, 1, 1); + RTE_MEMPOOL_CHECK_COOKIES(mbuf1->pool, (void **)&mbuf1, 1, 1); + RTE_MEMPOOL_CHECK_COOKIES(mbuf2->pool, (void **)&mbuf2, 1, 1); + RTE_MEMPOOL_CHECK_COOKIES(mbuf3->pool, (void **)&mbuf3, 1, 1); /* Advance head pointer and packets */ head += NIX_DESCS_PER_LOOP; diff --git a/drivers/net/cnxk/cn9k_tx.h b/drivers/net/cnxk/cn9k_tx.h index 44273eca90..83f4be84f1 100644 --- a/drivers/net/cnxk/cn9k_tx.h +++ b/drivers/net/cnxk/cn9k_tx.h @@ -285,7 +285,7 @@ cn9k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags, } /* Mark mempool object as "put" since it is freed by NIX */ if (!send_hdr->w0.df) - __mempool_check_cookies(m->pool, (void **)&m, 1, 0); + RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0); } } @@ -397,7 +397,7 @@ cn9k_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags) /* Mark mempool object as "put" since it is freed by NIX */ #ifdef RTE_LIBRTE_MEMPOOL_DEBUG if (!(sg_u & (1ULL << (i + 55)))) - __mempool_check_cookies(m->pool, (void **)&m, 1, 0); + RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0); rte_io_wmb(); #endif slist++; @@ -611,7 +611,7 @@ cn9k_nix_prepare_mseg_vec_list(struct rte_mbuf *m, uint64_t *cmd, /* Mark mempool object as "put" since it is freed by NIX */ #ifdef RTE_LIBRTE_MEMPOOL_DEBUG if (!(sg_u & (1ULL << 55))) - __mempool_check_cookies(m->pool, (void **)&m, 1, 0); + RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0); rte_io_wmb(); #endif @@ -628,7 +628,7 @@ cn9k_nix_prepare_mseg_vec_list(struct rte_mbuf *m, uint64_t *cmd, */ #ifdef RTE_LIBRTE_MEMPOOL_DEBUG if (!(sg_u & (1ULL << (i + 55)))) - __mempool_check_cookies(m->pool, (void **)&m, 1, 0); + 
RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0); rte_io_wmb(); #endif slist++; @@ -680,7 +680,7 @@ cn9k_nix_prepare_mseg_vec(struct rte_mbuf *m, uint64_t *cmd, uint64x2_t *cmd0, #ifdef RTE_LIBRTE_MEMPOOL_DEBUG sg.u = vgetq_lane_u64(cmd1[0], 0); if (!(sg.u & (1ULL << 55))) - __mempool_check_cookies(m->pool, (void **)&m, 1, 0); + RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0); rte_io_wmb(); #endif return 2 + !!(flags & NIX_TX_NEED_EXT_HDR) + @@ -1627,28 +1627,28 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts, if (cnxk_nix_prefree_seg((struct rte_mbuf *)mbuf0)) vsetq_lane_u64(0x80000, xmask01, 0); else - __mempool_check_cookies( + RTE_MEMPOOL_CHECK_COOKIES( ((struct rte_mbuf *)mbuf0)->pool, (void **)&mbuf0, 1, 0); if (cnxk_nix_prefree_seg((struct rte_mbuf *)mbuf1)) vsetq_lane_u64(0x80000, xmask01, 1); else - __mempool_check_cookies( + RTE_MEMPOOL_CHECK_COOKIES( ((struct rte_mbuf *)mbuf1)->pool, (void **)&mbuf1, 1, 0); if (cnxk_nix_prefree_seg((struct rte_mbuf *)mbuf2)) vsetq_lane_u64(0x80000, xmask23, 0); else - __mempool_check_cookies( + RTE_MEMPOOL_CHECK_COOKIES( ((struct rte_mbuf *)mbuf2)->pool, (void **)&mbuf2, 1, 0); if (cnxk_nix_prefree_seg((struct rte_mbuf *)mbuf3)) vsetq_lane_u64(0x80000, xmask23, 1); else - __mempool_check_cookies( + RTE_MEMPOOL_CHECK_COOKIES( ((struct rte_mbuf *)mbuf3)->pool, (void **)&mbuf3, 1, 0); senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01); @@ -1667,19 +1667,19 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts, /* Mark mempool object as "put" since * it is freed by NIX */ - __mempool_check_cookies( + RTE_MEMPOOL_CHECK_COOKIES( ((struct rte_mbuf *)mbuf0)->pool, (void **)&mbuf0, 1, 0); - __mempool_check_cookies( + RTE_MEMPOOL_CHECK_COOKIES( ((struct rte_mbuf *)mbuf1)->pool, (void **)&mbuf1, 1, 0); - __mempool_check_cookies( + RTE_MEMPOOL_CHECK_COOKIES( ((struct rte_mbuf *)mbuf2)->pool, (void **)&mbuf2, 1, 0); - __mempool_check_cookies( + RTE_MEMPOOL_CHECK_COOKIES( ((struct 
rte_mbuf *)mbuf3)->pool, (void **)&mbuf3, 1, 0); #ifdef RTE_LIBRTE_MEMPOOL_DEBUG diff --git a/drivers/net/octeontx/octeontx_rxtx.h b/drivers/net/octeontx/octeontx_rxtx.h index e0723ac26a..9af797c36c 100644 --- a/drivers/net/octeontx/octeontx_rxtx.h +++ b/drivers/net/octeontx/octeontx_rxtx.h @@ -344,7 +344,7 @@ __octeontx_xmit_prepare(struct rte_mbuf *tx_pkt, uint64_t *cmd_buf, /* Mark mempool object as "put" since it is freed by PKO */ if (!(cmd_buf[0] & (1ULL << 58))) - __mempool_check_cookies(m_tofree->pool, (void **)&m_tofree, + RTE_MEMPOOL_CHECK_COOKIES(m_tofree->pool, (void **)&m_tofree, 1, 0); /* Get the gaura Id */ gaura_id = @@ -417,7 +417,7 @@ __octeontx_xmit_mseg_prepare(struct rte_mbuf *tx_pkt, uint64_t *cmd_buf, */ if (!(cmd_buf[nb_desc] & (1ULL << 57))) { tx_pkt->next = NULL; - __mempool_check_cookies(m_tofree->pool, + RTE_MEMPOOL_CHECK_COOKIES(m_tofree->pool, (void **)&m_tofree, 1, 0); } nb_desc++; diff --git a/drivers/net/octeontx2/otx2_ethdev_sec_tx.h b/drivers/net/octeontx2/otx2_ethdev_sec_tx.h index 623a2a841e..65140b759c 100644 --- a/drivers/net/octeontx2/otx2_ethdev_sec_tx.h +++ b/drivers/net/octeontx2/otx2_ethdev_sec_tx.h @@ -146,7 +146,7 @@ otx2_sec_event_tx(uint64_t base, struct rte_event *ev, struct rte_mbuf *m, sd->nix_iova.addr = rte_mbuf_data_iova(m); /* Mark mempool object as "put" since it is freed by NIX */ - __mempool_check_cookies(m->pool, (void **)&m, 1, 0); + RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0); if (!ev->sched_type) otx2_ssogws_head_wait(base + SSOW_LF_GWS_TAG); diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c index ffeade5952..0d85c898bf 100644 --- a/drivers/net/octeontx2/otx2_rx.c +++ b/drivers/net/octeontx2/otx2_rx.c @@ -296,10 +296,10 @@ nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts, otx2_prefetch_store_keep(mbuf3); /* Mark mempool obj as "get" as it is alloc'ed by NIX */ - __mempool_check_cookies(mbuf0->pool, (void **)&mbuf0, 1, 1); - 
__mempool_check_cookies(mbuf1->pool, (void **)&mbuf1, 1, 1); - __mempool_check_cookies(mbuf2->pool, (void **)&mbuf2, 1, 1); - __mempool_check_cookies(mbuf3->pool, (void **)&mbuf3, 1, 1); + RTE_MEMPOOL_CHECK_COOKIES(mbuf0->pool, (void **)&mbuf0, 1, 1); + RTE_MEMPOOL_CHECK_COOKIES(mbuf1->pool, (void **)&mbuf1, 1, 1); + RTE_MEMPOOL_CHECK_COOKIES(mbuf2->pool, (void **)&mbuf2, 1, 1); + RTE_MEMPOOL_CHECK_COOKIES(mbuf3->pool, (void **)&mbuf3, 1, 1); /* Advance head pointer and packets */ head += NIX_DESCS_PER_LOOP; head &= qmask; diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h index ea29aec62f..3dcc563be1 100644 --- a/drivers/net/octeontx2/otx2_rx.h +++ b/drivers/net/octeontx2/otx2_rx.h @@ -199,7 +199,7 @@ nix_cqe_xtract_mseg(const struct nix_rx_parse_s *rx, mbuf->next = ((struct rte_mbuf *)*iova_list) - 1; mbuf = mbuf->next; - __mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1); + RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1); mbuf->data_len = sg & 0xFFFF; sg = sg >> 16; @@ -309,7 +309,7 @@ otx2_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag, uint64_t ol_flags = 0; /* Mark mempool obj as "get" as it is alloc'ed by NIX */ - __mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1); + RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1); if (flag & NIX_RX_OFFLOAD_PTYPE_F) mbuf->packet_type = nix_ptype_get(lookup_mem, w1); diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c index ff299f00b9..ad704d745b 100644 --- a/drivers/net/octeontx2/otx2_tx.c +++ b/drivers/net/octeontx2/otx2_tx.c @@ -202,7 +202,7 @@ nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts, if (otx2_nix_prefree_seg(mbuf)) vsetq_lane_u64(0x80000, xmask01, 0); else - __mempool_check_cookies(mbuf->pool, + RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 0); @@ -211,7 +211,7 @@ nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts, if (otx2_nix_prefree_seg(mbuf)) 
vsetq_lane_u64(0x80000, xmask01, 1); else - __mempool_check_cookies(mbuf->pool, + RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 0); @@ -220,7 +220,7 @@ nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts, if (otx2_nix_prefree_seg(mbuf)) vsetq_lane_u64(0x80000, xmask23, 0); else - __mempool_check_cookies(mbuf->pool, + RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 0); @@ -229,7 +229,7 @@ nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts, if (otx2_nix_prefree_seg(mbuf)) vsetq_lane_u64(0x80000, xmask23, 1); else - __mempool_check_cookies(mbuf->pool, + RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 0); senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01); @@ -245,22 +245,22 @@ nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts, */ mbuf = (struct rte_mbuf *)((uintptr_t)mbuf0 - offsetof(struct rte_mbuf, buf_iova)); - __mempool_check_cookies(mbuf->pool, (void **)&mbuf, + RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 0); mbuf = (struct rte_mbuf *)((uintptr_t)mbuf1 - offsetof(struct rte_mbuf, buf_iova)); - __mempool_check_cookies(mbuf->pool, (void **)&mbuf, + RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 0); mbuf = (struct rte_mbuf *)((uintptr_t)mbuf2 - offsetof(struct rte_mbuf, buf_iova)); - __mempool_check_cookies(mbuf->pool, (void **)&mbuf, + RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 0); mbuf = (struct rte_mbuf *)((uintptr_t)mbuf3 - offsetof(struct rte_mbuf, buf_iova)); - __mempool_check_cookies(mbuf->pool, (void **)&mbuf, + RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 0); RTE_SET_USED(mbuf); } diff --git a/drivers/net/octeontx2/otx2_tx.h b/drivers/net/octeontx2/otx2_tx.h index 486248dff7..de1be0093c 100644 --- a/drivers/net/octeontx2/otx2_tx.h +++ b/drivers/net/octeontx2/otx2_tx.h @@ -372,7 +372,7 @@ otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags, } /* Mark mempool object as "put" since it is freed by NIX */ if 
(!send_hdr->w0.df) - __mempool_check_cookies(m->pool, (void **)&m, 1, 0); + RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0); } } @@ -450,7 +450,7 @@ otx2_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags) /* Mark mempool object as "put" since it is freed by NIX */ #ifdef RTE_LIBRTE_MEMPOOL_DEBUG if (!(sg_u & (1ULL << (i + 55)))) - __mempool_check_cookies(m->pool, (void **)&m, 1, 0); + RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0); rte_io_wmb(); #endif slist++; diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c index 19210c702c..638eaa5fa2 100644 --- a/lib/mempool/rte_mempool.c +++ b/lib/mempool/rte_mempool.c @@ -167,7 +167,7 @@ mempool_add_elem(struct rte_mempool *mp, __rte_unused void *opaque, #ifdef RTE_LIBRTE_MEMPOOL_DEBUG hdr->cookie = RTE_MEMPOOL_HEADER_COOKIE2; - tlr = __mempool_get_trailer(obj); + tlr = rte_mempool_get_trailer(obj); tlr->cookie = RTE_MEMPOOL_TRAILER_COOKIE; #endif } @@ -1064,7 +1064,7 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp, rte_panic("MEMPOOL: object is owned by another " "mempool\n"); - hdr = __mempool_get_header(obj); + hdr = rte_mempool_get_header(obj); cookie = hdr->cookie; if (free == 0) { @@ -1092,7 +1092,7 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp, rte_panic("MEMPOOL: bad header cookie (audit)\n"); } } - tlr = __mempool_get_trailer(obj); + tlr = rte_mempool_get_trailer(obj); cookie = tlr->cookie; if (cookie != RTE_MEMPOOL_TRAILER_COOKIE) { RTE_LOG(CRIT, MEMPOOL, @@ -1144,7 +1144,7 @@ static void mempool_obj_audit(struct rte_mempool *mp, __rte_unused void *opaque, void *obj, __rte_unused unsigned idx) { - __mempool_check_cookies(mp, &obj, 1, 2); + RTE_MEMPOOL_CHECK_COOKIES(mp, &obj, 1, 2); } static void diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h index d4bcb009fa..979ab071cb 100644 --- a/lib/mempool/rte_mempool.h +++ b/lib/mempool/rte_mempool.h @@ -299,14 +299,14 @@ struct rte_mempool { * Number to add to the 
object-oriented statistics. */ #ifdef RTE_LIBRTE_MEMPOOL_DEBUG -#define __MEMPOOL_STAT_ADD(mp, name, n) do { \ +#define RTE_MEMPOOL_STAT_ADD(mp, name, n) do { \ unsigned __lcore_id = rte_lcore_id(); \ if (__lcore_id < RTE_MAX_LCORE) { \ mp->stats[__lcore_id].name += n; \ } \ - } while(0) + } while (0) #else -#define __MEMPOOL_STAT_ADD(mp, name, n) do {} while(0) +#define RTE_MEMPOOL_STAT_ADD(mp, name, n) do {} while (0) #endif /** @@ -322,7 +322,8 @@ struct rte_mempool { (sizeof(struct rte_mempool_cache) * RTE_MAX_LCORE))) /* return the header of a mempool object (internal) */ -static inline struct rte_mempool_objhdr *__mempool_get_header(void *obj) +static inline struct rte_mempool_objhdr * +rte_mempool_get_header(void *obj) { return (struct rte_mempool_objhdr *)RTE_PTR_SUB(obj, sizeof(struct rte_mempool_objhdr)); @@ -339,12 +340,12 @@ static inline struct rte_mempool_objhdr *__mempool_get_header(void *obj) */ static inline struct rte_mempool *rte_mempool_from_obj(void *obj) { - struct rte_mempool_objhdr *hdr = __mempool_get_header(obj); + struct rte_mempool_objhdr *hdr = rte_mempool_get_header(obj); return hdr->mp; } /* return the trailer of a mempool object (internal) */ -static inline struct rte_mempool_objtlr *__mempool_get_trailer(void *obj) +static inline struct rte_mempool_objtlr *rte_mempool_get_trailer(void *obj) { struct rte_mempool *mp = rte_mempool_from_obj(obj); return (struct rte_mempool_objtlr *)RTE_PTR_ADD(obj, mp->elt_size); @@ -368,10 +369,10 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp, void * const *obj_table_const, unsigned n, int free); #ifdef RTE_LIBRTE_MEMPOOL_DEBUG -#define __mempool_check_cookies(mp, obj_table_const, n, free) \ +#define RTE_MEMPOOL_CHECK_COOKIES(mp, obj_table_const, n, free) \ rte_mempool_check_cookies(mp, obj_table_const, n, free) #else -#define __mempool_check_cookies(mp, obj_table_const, n, free) do {} while(0) +#define RTE_MEMPOOL_CHECK_COOKIES(mp, obj_table_const, n, free) do {} while (0) #endif /* 
RTE_LIBRTE_MEMPOOL_DEBUG */ /** @@ -393,13 +394,13 @@ void rte_mempool_contig_blocks_check_cookies(const struct rte_mempool *mp, void * const *first_obj_table_const, unsigned int n, int free); #ifdef RTE_LIBRTE_MEMPOOL_DEBUG -#define __mempool_contig_blocks_check_cookies(mp, first_obj_table_const, n, \ - free) \ +#define RTE_MEMPOOL_CONTIG_BLOCKS_CHECK_COOKIES(mp, first_obj_table_const, n, \ + free) \ rte_mempool_contig_blocks_check_cookies(mp, first_obj_table_const, n, \ free) #else -#define __mempool_contig_blocks_check_cookies(mp, first_obj_table_const, n, \ - free) \ +#define RTE_MEMPOOL_CONTIG_BLOCKS_CHECK_COOKIES(mp, first_obj_table_const, n, \ + free) \ do {} while (0) #endif /* RTE_LIBRTE_MEMPOOL_DEBUG */ @@ -734,8 +735,8 @@ rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp, ops = rte_mempool_get_ops(mp->ops_index); ret = ops->dequeue(mp, obj_table, n); if (ret == 0) { - __MEMPOOL_STAT_ADD(mp, get_common_pool_bulk, 1); - __MEMPOOL_STAT_ADD(mp, get_common_pool_objs, n); + RTE_MEMPOOL_STAT_ADD(mp, get_common_pool_bulk, 1); + RTE_MEMPOOL_STAT_ADD(mp, get_common_pool_objs, n); } return ret; } @@ -784,8 +785,8 @@ rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table, { struct rte_mempool_ops *ops; - __MEMPOOL_STAT_ADD(mp, put_common_pool_bulk, 1); - __MEMPOOL_STAT_ADD(mp, put_common_pool_objs, n); + RTE_MEMPOOL_STAT_ADD(mp, put_common_pool_bulk, 1); + RTE_MEMPOOL_STAT_ADD(mp, put_common_pool_objs, n); rte_mempool_trace_ops_enqueue_bulk(mp, obj_table, n); ops = rte_mempool_get_ops(mp->ops_index); return ops->enqueue(mp, obj_table, n); @@ -1310,14 +1311,14 @@ rte_mempool_cache_flush(struct rte_mempool_cache *cache, * A pointer to a mempool cache structure. May be NULL if not needed. 
*/ static __rte_always_inline void -__mempool_generic_put(struct rte_mempool *mp, void * const *obj_table, - unsigned int n, struct rte_mempool_cache *cache) +rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table, + unsigned int n, struct rte_mempool_cache *cache) { void **cache_objs; /* increment stat now, adding in mempool always success */ - __MEMPOOL_STAT_ADD(mp, put_bulk, 1); - __MEMPOOL_STAT_ADD(mp, put_objs, n); + RTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1); + RTE_MEMPOOL_STAT_ADD(mp, put_objs, n); /* No cache provided or if put would overflow mem allocated for cache */ if (unlikely(cache == NULL || n > RTE_MEMPOOL_CACHE_MAX_SIZE)) @@ -1374,8 +1375,8 @@ rte_mempool_generic_put(struct rte_mempool *mp, void * const *obj_table, unsigned int n, struct rte_mempool_cache *cache) { rte_mempool_trace_generic_put(mp, obj_table, n, cache); - __mempool_check_cookies(mp, obj_table, n, 0); - __mempool_generic_put(mp, obj_table, n, cache); + RTE_MEMPOOL_CHECK_COOKIES(mp, obj_table, n, 0); + rte_mempool_do_generic_put(mp, obj_table, n, cache); } /** @@ -1435,8 +1436,8 @@ rte_mempool_put(struct rte_mempool *mp, void *obj) * - <0: Error; code of ring dequeue function. 
*/ static __rte_always_inline int -__mempool_generic_get(struct rte_mempool *mp, void **obj_table, - unsigned int n, struct rte_mempool_cache *cache) +rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table, + unsigned int n, struct rte_mempool_cache *cache) { int ret; uint32_t index, len; @@ -1475,8 +1476,8 @@ __mempool_generic_get(struct rte_mempool *mp, void **obj_table, cache->len -= n; - __MEMPOOL_STAT_ADD(mp, get_success_bulk, 1); - __MEMPOOL_STAT_ADD(mp, get_success_objs, n); + RTE_MEMPOOL_STAT_ADD(mp, get_success_bulk, 1); + RTE_MEMPOOL_STAT_ADD(mp, get_success_objs, n); return 0; @@ -1486,11 +1487,11 @@ __mempool_generic_get(struct rte_mempool *mp, void **obj_table, ret = rte_mempool_ops_dequeue_bulk(mp, obj_table, n); if (ret < 0) { - __MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1); - __MEMPOOL_STAT_ADD(mp, get_fail_objs, n); + RTE_MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1); + RTE_MEMPOOL_STAT_ADD(mp, get_fail_objs, n); } else { - __MEMPOOL_STAT_ADD(mp, get_success_bulk, 1); - __MEMPOOL_STAT_ADD(mp, get_success_objs, n); + RTE_MEMPOOL_STAT_ADD(mp, get_success_bulk, 1); + RTE_MEMPOOL_STAT_ADD(mp, get_success_objs, n); } return ret; @@ -1521,9 +1522,9 @@ rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table, unsigned int n, struct rte_mempool_cache *cache) { int ret; - ret = __mempool_generic_get(mp, obj_table, n, cache); + ret = rte_mempool_do_generic_get(mp, obj_table, n, cache); if (ret == 0) - __mempool_check_cookies(mp, obj_table, n, 1); + RTE_MEMPOOL_CHECK_COOKIES(mp, obj_table, n, 1); rte_mempool_trace_generic_get(mp, obj_table, n, cache); return ret; } @@ -1614,13 +1615,13 @@ rte_mempool_get_contig_blocks(struct rte_mempool *mp, ret = rte_mempool_ops_dequeue_contig_blocks(mp, first_obj_table, n); if (ret == 0) { - __MEMPOOL_STAT_ADD(mp, get_success_bulk, 1); - __MEMPOOL_STAT_ADD(mp, get_success_blks, n); - __mempool_contig_blocks_check_cookies(mp, first_obj_table, n, - 1); + RTE_MEMPOOL_STAT_ADD(mp, get_success_bulk, 1); + 
RTE_MEMPOOL_STAT_ADD(mp, get_success_blks, n); + RTE_MEMPOOL_CONTIG_BLOCKS_CHECK_COOKIES(mp, first_obj_table, n, + 1); } else { - __MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1); - __MEMPOOL_STAT_ADD(mp, get_fail_blks, n); + RTE_MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1); + RTE_MEMPOOL_STAT_ADD(mp, get_fail_blks, n); } rte_mempool_trace_get_contig_blocks(mp, first_obj_table, n);
From patchwork Tue Oct 19 10:08:43 2021
X-Patchwork-Submitter: Andrew Rybchenko
X-Patchwork-Id: 102166
X-Patchwork-Delegate: david.marchand@redhat.com
From: Andrew Rybchenko
To: Olivier Matz, David Marchand, Ray Kinsella
Cc: dev@dpdk.org
Date: Tue, 19 Oct 2021 13:08:43 +0300
Message-Id:
<20211019100845.1632332-5-andrew.rybchenko@oktetlabs.ru>
In-Reply-To: <20211019100845.1632332-1-andrew.rybchenko@oktetlabs.ru>
References: <20211018144907.1145028-1-andrew.rybchenko@oktetlabs.ru> <20211019100845.1632332-1-andrew.rybchenko@oktetlabs.ru>
Subject: [dpdk-dev] [PATCH v2 4/6] mempool: make header size calculation internal

Add RTE_ prefix to helper macro to calculate mempool header size and make it internal. Old macro is still available, but deprecated.

Signed-off-by: Andrew Rybchenko
---
app/test/test_mempool.c | 2 +- doc/guides/rel_notes/deprecation.rst | 4 ++++ doc/guides/rel_notes/release_21_11.rst | 3 +++ lib/mempool/rte_mempool.c | 6 +++--- lib/mempool/rte_mempool.h | 10 +++++++--- 5 files changed, 18 insertions(+), 7 deletions(-) diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c index ffe69e2d03..8ecd0f10b8 100644 --- a/app/test/test_mempool.c +++ b/app/test/test_mempool.c @@ -111,7 +111,7 @@ test_mempool_basic(struct rte_mempool *mp, int use_external_cache) printf("get private data\n"); if (rte_mempool_get_priv(mp) != (char *)mp + - MEMPOOL_HEADER_SIZE(mp, mp->cache_size)) + RTE_MEMPOOL_HEADER_SIZE(mp, mp->cache_size)) GOTO_ERR(ret, out); #ifndef RTE_EXEC_ENV_FREEBSD /* rte_mem_virt2iova() not supported on bsd */ diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst index 45239ca56e..bc3aca8ef1 100644 --- a/doc/guides/rel_notes/deprecation.rst +++ b/doc/guides/rel_notes/deprecation.rst @@ -39,6 +39,10 @@ Deprecation Notices ``__atomic_thread_fence`` must be used for patches that need to be merged in 20.08 onwards. This change will not introduce any performance degradation.
+* mempool: Helper macro ``MEMPOOL_HEADER_SIZE()`` is deprecated and will + be removed in DPDK 22.11. The replacement macro + ``RTE_MEMPOOL_HEADER_SIZE()`` is internal only. + * mbuf: The mbuf offload flags ``PKT_*`` will be renamed as ``RTE_MBUF_F_*``. A compatibility layer will be kept until DPDK 22.11, except for the flags that are already deprecated (``PKT_RX_L4_CKSUM_BAD``, ``PKT_RX_IP_CKSUM_BAD``, diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst index 9a0e3832a3..e95ddb93a6 100644 --- a/doc/guides/rel_notes/release_21_11.rst +++ b/doc/guides/rel_notes/release_21_11.rst @@ -224,6 +224,9 @@ API Changes * mempool: The mempool flags ``MEMPOOL_F_*`` will be deprecated in the future. Newly added flags with ``RTE_MEMPOOL_F_`` prefix should be used instead. +* mempool: Helper macro ``MEMPOOL_HEADER_SIZE()`` is deprecated. + The replacement macro ``RTE_MEMPOOL_HEADER_SIZE()`` is internal only. + * net: Renamed ``s_addr`` and ``d_addr`` fields of ``rte_ether_hdr`` structure to ``src_addr`` and ``dst_addr``, respectively. 
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c index 638eaa5fa2..4e3a15e49c 100644 --- a/lib/mempool/rte_mempool.c +++ b/lib/mempool/rte_mempool.c @@ -861,7 +861,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size, goto exit_unlock; } - mempool_size = MEMPOOL_HEADER_SIZE(mp, cache_size); + mempool_size = RTE_MEMPOOL_HEADER_SIZE(mp, cache_size); mempool_size += private_data_size; mempool_size = RTE_ALIGN_CEIL(mempool_size, RTE_MEMPOOL_ALIGN); @@ -877,7 +877,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size, /* init the mempool structure */ mp = mz->addr; - memset(mp, 0, MEMPOOL_HEADER_SIZE(mp, cache_size)); + memset(mp, 0, RTE_MEMPOOL_HEADER_SIZE(mp, cache_size)); ret = strlcpy(mp->name, name, sizeof(mp->name)); if (ret < 0 || ret >= (int)sizeof(mp->name)) { rte_errno = ENAMETOOLONG; @@ -901,7 +901,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size, * The local_cache points to just past the elt_pa[] array. */ mp->local_cache = (struct rte_mempool_cache *) - RTE_PTR_ADD(mp, MEMPOOL_HEADER_SIZE(mp, 0)); + RTE_PTR_ADD(mp, RTE_MEMPOOL_HEADER_SIZE(mp, 0)); /* Init all default caches. */ if (cache_size != 0) { diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h index 979ab071cb..11ef60247e 100644 --- a/lib/mempool/rte_mempool.h +++ b/lib/mempool/rte_mempool.h @@ -310,17 +310,21 @@ struct rte_mempool { #endif /** - * Calculate the size of the mempool header. + * @internal Calculate the size of the mempool header. * * @param mp * Pointer to the memory pool. * @param cs * Size of the per-lcore cache. */ -#define MEMPOOL_HEADER_SIZE(mp, cs) \ +#define RTE_MEMPOOL_HEADER_SIZE(mp, cs) \ (sizeof(*(mp)) + (((cs) == 0) ? 0 : \ (sizeof(struct rte_mempool_cache) * RTE_MAX_LCORE))) +/** Deprecated. Use RTE_MEMPOOL_HEADER_SIZE() for internal purposes only. 
*/ +#define MEMPOOL_HEADER_SIZE(mp, cs) \ + RTE_DEPRECATED(RTE_MEMPOOL_HEADER_SIZE(mp, cs)) + /* return the header of a mempool object (internal) */ static inline struct rte_mempool_objhdr * rte_mempool_get_header(void *obj) @@ -1737,7 +1741,7 @@ void rte_mempool_audit(struct rte_mempool *mp); static inline void *rte_mempool_get_priv(struct rte_mempool *mp) { return (char *)mp + - MEMPOOL_HEADER_SIZE(mp, mp->cache_size); + RTE_MEMPOOL_HEADER_SIZE(mp, mp->cache_size); } /**
From patchwork Tue Oct 19 10:08:44 2021
X-Patchwork-Submitter: Andrew Rybchenko
X-Patchwork-Id: 102167
X-Patchwork-Delegate: david.marchand@redhat.com
From: Andrew Rybchenko
To: Olivier Matz, David Marchand,
Ray Kinsella, "Artem V. Andreev", Ashwin Sekhar T K, Pavan Nikhilesh, Hemant Agrawal, Sachin Saxena, Harman Kalra, Jerin Jacob, Nithin Dabilpuram
Cc: dev@dpdk.org
Date: Tue, 19 Oct 2021 13:08:44 +0300
Message-Id: <20211019100845.1632332-6-andrew.rybchenko@oktetlabs.ru>
In-Reply-To: <20211019100845.1632332-1-andrew.rybchenko@oktetlabs.ru>
References: <20211018144907.1145028-1-andrew.rybchenko@oktetlabs.ru> <20211019100845.1632332-1-andrew.rybchenko@oktetlabs.ru>
Subject: [dpdk-dev] [PATCH v2 5/6] mempool: add namespace to driver register macro

Add RTE_ prefix to macro used to register mempool driver. The old one is still available but deprecated.

Signed-off-by: Andrew Rybchenko
---
doc/guides/prog_guide/mempool_lib.rst | 2 +- doc/guides/rel_notes/deprecation.rst | 4 ++++ doc/guides/rel_notes/release_21_11.rst | 3 +++ drivers/mempool/bucket/rte_mempool_bucket.c | 2 +- drivers/mempool/cnxk/cn10k_mempool_ops.c | 2 +- drivers/mempool/cnxk/cn9k_mempool_ops.c | 2 +- drivers/mempool/dpaa/dpaa_mempool.c | 2 +- drivers/mempool/dpaa2/dpaa2_hw_mempool.c | 2 +- drivers/mempool/octeontx/rte_mempool_octeontx.c | 2 +- drivers/mempool/octeontx2/otx2_mempool_ops.c | 2 +- drivers/mempool/ring/rte_mempool_ring.c | 12 ++++++------ drivers/mempool/stack/rte_mempool_stack.c | 4 ++-- lib/mempool/rte_mempool.h | 6 +++++- 13 files changed, 28 insertions(+), 17 deletions(-) diff --git a/doc/guides/prog_guide/mempool_lib.rst b/doc/guides/prog_guide/mempool_lib.rst index 890535eb23..55838317b9 100644 --- a/doc/guides/prog_guide/mempool_lib.rst +++ b/doc/guides/prog_guide/mempool_lib.rst @@ -115,7 +115,7 @@ management systems and software based memory allocators, to be used with DPDK.
There are two aspects to a mempool handler. * Adding the code for your new mempool operations (ops). This is achieved by - adding a new mempool ops code, and using the ``MEMPOOL_REGISTER_OPS`` macro. + adding a new mempool ops code, and using the ``RTE_MEMPOOL_REGISTER_OPS`` macro. * Using the new API to call ``rte_mempool_create_empty()`` and ``rte_mempool_set_ops_byname()`` to create a new mempool and specifying which diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst index bc3aca8ef1..0095d48084 100644 --- a/doc/guides/rel_notes/deprecation.rst +++ b/doc/guides/rel_notes/deprecation.rst @@ -43,6 +43,10 @@ Deprecation Notices be removed in DPDK 22.11. The replacement macro ``RTE_MEMPOOL_HEADER_SIZE()`` is internal only. +* mempool: Macro to register mempool driver ``MEMPOOL_REGISTER_OPS()`` is + deprecated and will be removed in DPDK 22.11. Use replacement macro + ``RTE_MEMPOOL_REGISTER_OPS()``. + * mbuf: The mbuf offload flags ``PKT_*`` will be renamed as ``RTE_MBUF_F_*``. A compatibility layer will be kept until DPDK 22.11, except for the flags that are already deprecated (``PKT_RX_L4_CKSUM_BAD``, ``PKT_RX_IP_CKSUM_BAD``, diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst index e95ddb93a6..9804c033c0 100644 --- a/doc/guides/rel_notes/release_21_11.rst +++ b/doc/guides/rel_notes/release_21_11.rst @@ -227,6 +227,9 @@ API Changes * mempool: Helper macro ``MEMPOOL_HEADER_SIZE()`` is deprecated. The replacement macro ``RTE_MEMPOOL_HEADER_SIZE()`` is internal only. +* mempool: Macro to register mempool driver ``MEMPOOL_REGISTER_OPS()`` is + deprecated. Use replacement ``RTE_MEMPOOL_REGISTER_OPS()``. + * net: Renamed ``s_addr`` and ``d_addr`` fields of ``rte_ether_hdr`` structure to ``src_addr`` and ``dst_addr``, respectively. 
diff --git a/drivers/mempool/bucket/rte_mempool_bucket.c b/drivers/mempool/bucket/rte_mempool_bucket.c
index 8ff9e53007..c0b480bfc7 100644
--- a/drivers/mempool/bucket/rte_mempool_bucket.c
+++ b/drivers/mempool/bucket/rte_mempool_bucket.c
@@ -663,4 +663,4 @@ static const struct rte_mempool_ops ops_bucket = {
 };

-MEMPOOL_REGISTER_OPS(ops_bucket);
+RTE_MEMPOOL_REGISTER_OPS(ops_bucket);
diff --git a/drivers/mempool/cnxk/cn10k_mempool_ops.c b/drivers/mempool/cnxk/cn10k_mempool_ops.c
index 95458b34b7..4c669b878f 100644
--- a/drivers/mempool/cnxk/cn10k_mempool_ops.c
+++ b/drivers/mempool/cnxk/cn10k_mempool_ops.c
@@ -316,4 +316,4 @@ static struct rte_mempool_ops cn10k_mempool_ops = {
 	.populate = cnxk_mempool_populate,
 };

-MEMPOOL_REGISTER_OPS(cn10k_mempool_ops);
+RTE_MEMPOOL_REGISTER_OPS(cn10k_mempool_ops);
diff --git a/drivers/mempool/cnxk/cn9k_mempool_ops.c b/drivers/mempool/cnxk/cn9k_mempool_ops.c
index c0cdba640b..b7967f8085 100644
--- a/drivers/mempool/cnxk/cn9k_mempool_ops.c
+++ b/drivers/mempool/cnxk/cn9k_mempool_ops.c
@@ -86,4 +86,4 @@ static struct rte_mempool_ops cn9k_mempool_ops = {
 	.populate = cnxk_mempool_populate,
 };

-MEMPOOL_REGISTER_OPS(cn9k_mempool_ops);
+RTE_MEMPOOL_REGISTER_OPS(cn9k_mempool_ops);
diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c
index f02056982c..f17aff9655 100644
--- a/drivers/mempool/dpaa/dpaa_mempool.c
+++ b/drivers/mempool/dpaa/dpaa_mempool.c
@@ -358,4 +358,4 @@ static const struct rte_mempool_ops dpaa_mpool_ops = {
 	.populate = dpaa_populate,
 };

-MEMPOOL_REGISTER_OPS(dpaa_mpool_ops);
+RTE_MEMPOOL_REGISTER_OPS(dpaa_mpool_ops);
diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
index 771e0a0e28..39c6252a63 100644
--- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
@@ -455,6 +455,6 @@ static const struct rte_mempool_ops dpaa2_mpool_ops = {
 	.populate = dpaa2_populate,
 };

-MEMPOOL_REGISTER_OPS(dpaa2_mpool_ops);
+RTE_MEMPOOL_REGISTER_OPS(dpaa2_mpool_ops);

 RTE_LOG_REGISTER_DEFAULT(dpaa2_logtype_mempool, NOTICE);
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
index bd00700202..f4de1c8412 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -202,4 +202,4 @@ static struct rte_mempool_ops octeontx_fpavf_ops = {
 	.populate = octeontx_fpavf_populate,
 };

-MEMPOOL_REGISTER_OPS(octeontx_fpavf_ops);
+RTE_MEMPOOL_REGISTER_OPS(octeontx_fpavf_ops);
diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c
index d827fd8c7b..332e4f1cb2 100644
--- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
+++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c
@@ -898,4 +898,4 @@ static struct rte_mempool_ops otx2_npa_ops = {
 #endif
 };

-MEMPOOL_REGISTER_OPS(otx2_npa_ops);
+RTE_MEMPOOL_REGISTER_OPS(otx2_npa_ops);
diff --git a/drivers/mempool/ring/rte_mempool_ring.c b/drivers/mempool/ring/rte_mempool_ring.c
index 4b785971c4..c6aa935eea 100644
--- a/drivers/mempool/ring/rte_mempool_ring.c
+++ b/drivers/mempool/ring/rte_mempool_ring.c
@@ -198,9 +198,9 @@ static const struct rte_mempool_ops ops_mt_hts = {
 	.get_count = common_ring_get_count,
 };

-MEMPOOL_REGISTER_OPS(ops_mp_mc);
-MEMPOOL_REGISTER_OPS(ops_sp_sc);
-MEMPOOL_REGISTER_OPS(ops_mp_sc);
-MEMPOOL_REGISTER_OPS(ops_sp_mc);
-MEMPOOL_REGISTER_OPS(ops_mt_rts);
-MEMPOOL_REGISTER_OPS(ops_mt_hts);
+RTE_MEMPOOL_REGISTER_OPS(ops_mp_mc);
+RTE_MEMPOOL_REGISTER_OPS(ops_sp_sc);
+RTE_MEMPOOL_REGISTER_OPS(ops_mp_sc);
+RTE_MEMPOOL_REGISTER_OPS(ops_sp_mc);
+RTE_MEMPOOL_REGISTER_OPS(ops_mt_rts);
+RTE_MEMPOOL_REGISTER_OPS(ops_mt_hts);
diff --git a/drivers/mempool/stack/rte_mempool_stack.c b/drivers/mempool/stack/rte_mempool_stack.c
index 7e85c8d6b6..1476905227 100644
--- a/drivers/mempool/stack/rte_mempool_stack.c
+++ b/drivers/mempool/stack/rte_mempool_stack.c
@@ -93,5 +93,5 @@ static struct
rte_mempool_ops ops_lf_stack = {
 	.get_count = stack_get_count
 };

-MEMPOOL_REGISTER_OPS(ops_stack);
-MEMPOOL_REGISTER_OPS(ops_lf_stack);
+RTE_MEMPOOL_REGISTER_OPS(ops_stack);
+RTE_MEMPOOL_REGISTER_OPS(ops_lf_stack);
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 11ef60247e..409836d4d1 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -920,12 +920,16 @@ int rte_mempool_register_ops(const struct rte_mempool_ops *ops);
  * Note that the rte_mempool_register_ops fails silently here when
  * more than RTE_MEMPOOL_MAX_OPS_IDX is registered.
  */
-#define MEMPOOL_REGISTER_OPS(ops) \
+#define RTE_MEMPOOL_REGISTER_OPS(ops) \
 	RTE_INIT(mp_hdlr_init_##ops) \
 	{ \
 		rte_mempool_register_ops(&ops); \
 	}

+/** Deprecated. Use RTE_MEMPOOL_REGISTER_OPS() instead. */
+#define MEMPOOL_REGISTER_OPS(ops) \
+	RTE_DEPRECATED(RTE_MEMPOOL_REGISTER_OPS(ops))
+
 /**
  * An object callback function for mempool.
  *

From patchwork Tue Oct 19 10:08:45 2021
X-Patchwork-Submitter: Andrew Rybchenko
X-Patchwork-Id: 102168
X-Patchwork-Delegate: david.marchand@redhat.com
From: Andrew Rybchenko
To: Olivier Matz, David Marchand, Ray Kinsella
Cc: dev@dpdk.org
Date: Tue, 19 Oct 2021 13:08:45 +0300
Message-Id: <20211019100845.1632332-7-andrew.rybchenko@oktetlabs.ru>
In-Reply-To: <20211019100845.1632332-1-andrew.rybchenko@oktetlabs.ru>
References: <20211018144907.1145028-1-andrew.rybchenko@oktetlabs.ru>
 <20211019100845.1632332-1-andrew.rybchenko@oktetlabs.ru>
Subject: [dpdk-dev] [PATCH v2 6/6] mempool: deprecate unused defines

MEMPOOL_PG_NUM_DEFAULT and MEMPOOL_PG_SHIFT_MAX are not used.

Signed-off-by: Andrew Rybchenko
---
 doc/guides/contributing/documentation.rst | 4 ++--
 doc/guides/rel_notes/deprecation.rst      | 3 +++
 doc/guides/rel_notes/release_21_11.rst    | 3 +++
 lib/mempool/rte_mempool.h                 | 7 ++++---
 4 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/doc/guides/contributing/documentation.rst b/doc/guides/contributing/documentation.rst
index 8cbd4a0f6f..7fcbb7fc43 100644
--- a/doc/guides/contributing/documentation.rst
+++ b/doc/guides/contributing/documentation.rst
@@ -705,7 +705,7 @@ The following are some guidelines for use of Doxygen in the DPDK API documentation
      /**< Virtual address of the first mempool object. */
      uintptr_t elt_va_end;
      /**< Virtual address of the mempool object. */
-     phys_addr_t elt_pa[MEMPOOL_PG_NUM_DEFAULT];
+     phys_addr_t elt_pa[1];
      /**< Array of physical page addresses for the mempool buffer. */

 This doesn't have an effect on the rendered documentation but it is confusing
 for the developer reading the code.
@@ -724,7 +724,7 @@ The following are some guidelines for use of Doxygen in the DPDK API documentation
      /** Virtual address of the mempool object. */
      uintptr_t elt_va_end;
      /** Array of physical page addresses for the mempool buffer. */
-     phys_addr_t elt_pa[MEMPOOL_PG_NUM_DEFAULT];
+     phys_addr_t elt_pa[1];

 * Read the rendered section of the documentation that you have added for
   correctness, clarity and consistency with the surrounding text.
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 0095d48084..c59dd5ca98 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -47,6 +47,9 @@ Deprecation Notices
   deprecated and will be removed in DPDK 22.11. Use replacement macro
   ``RTE_MEMPOOL_REGISTER_OPS()``.

+* mempool: The mempool API macros ``MEMPOOL_PG_*`` are deprecated and
+  will be removed in DPDK 22.11.
+
 * mbuf: The mbuf offload flags ``PKT_*`` will be renamed as ``RTE_MBUF_F_*``.
   A compatibility layer will be kept until DPDK 22.11, except for the flags
   that are already deprecated (``PKT_RX_L4_CKSUM_BAD``, ``PKT_RX_IP_CKSUM_BAD``,
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 9804c033c0..eea9c13151 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -230,6 +230,9 @@ API Changes
 * mempool: Macro to register mempool driver ``MEMPOOL_REGISTER_OPS()`` is
   deprecated. Use replacement ``RTE_MEMPOOL_REGISTER_OPS()``.

+* mempool: The mempool API macros ``MEMPOOL_PG_*`` are deprecated and
+  will be removed in DPDK 22.11.
+
 * net: Renamed ``s_addr`` and ``d_addr`` fields of ``rte_ether_hdr`` structure
   to ``src_addr`` and ``dst_addr``, respectively.
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 409836d4d1..8ef067fb12 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -116,10 +116,11 @@ struct rte_mempool_objsz {
 /* "MP_" */
 #define RTE_MEMPOOL_MZ_FORMAT	RTE_MEMPOOL_MZ_PREFIX "%s"

-#define MEMPOOL_PG_SHIFT_MAX	(sizeof(uintptr_t) * CHAR_BIT - 1)
+#define MEMPOOL_PG_SHIFT_MAX \
+	RTE_DEPRECATED(sizeof(uintptr_t) * CHAR_BIT - 1)

-/** Mempool over one chunk of physically continuous memory */
-#define MEMPOOL_PG_NUM_DEFAULT	1
+/** Deprecated. Mempool over one chunk of physically continuous memory */
+#define MEMPOOL_PG_NUM_DEFAULT	RTE_DEPRECATED(1)

 #ifndef RTE_MEMPOOL_ALIGN
 /**