[v2,2/6] mempool: add namespace prefix to flags

Message ID 20211019100845.1632332-3-andrew.rybchenko@oktetlabs.ru (mailing list archive)
State Superseded, archived
Delegated to: David Marchand
Series: mempool: cleanup namespace

Checks

Context         Check     Description
ci/checkpatch   success   coding style OK

Commit Message

Andrew Rybchenko Oct. 19, 2021, 10:08 a.m. UTC
  Fix the mempool flgas namespace by adding an RTE_ prefix to the name.
The old flags remain usable, to be deprecated in the future.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 app/proc-info/main.c                        | 15 +++---
 app/test-pmd/parameters.c                   |  4 +-
 app/test/test_mempool.c                     |  6 +--
 doc/guides/rel_notes/release_21_11.rst      |  3 ++
 drivers/event/cnxk/cnxk_tim_evdev.c         |  2 +-
 drivers/event/octeontx/timvf_evdev.c        |  2 +-
 drivers/event/octeontx2/otx2_tim_evdev.c    |  2 +-
 drivers/mempool/bucket/rte_mempool_bucket.c |  8 +--
 drivers/mempool/ring/rte_mempool_ring.c     |  4 +-
 drivers/net/octeontx2/otx2_ethdev.c         |  4 +-
 drivers/net/thunderx/nicvf_ethdev.c         |  2 +-
 lib/mempool/rte_mempool.c                   | 40 +++++++--------
 lib/mempool/rte_mempool.h                   | 55 +++++++++++++++------
 lib/mempool/rte_mempool_ops.c               |  2 +-
 lib/pdump/rte_pdump.c                       |  3 +-
 lib/vhost/iotlb.c                           |  4 +-
 16 files changed, 94 insertions(+), 62 deletions(-)
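
As a quick illustration (not part of the patch; the pool name and sizes below are made up), the new RTE_-prefixed flags are drop-in replacements in the existing creation API, while the old spellings keep compiling through the compatibility synonyms added in rte_mempool.h:

#include <rte_mempool.h>

static struct rte_mempool *
create_spsc_pool(int socket_id)
{
	/* Preferred spelling after this patch: RTE_MEMPOOL_F_* */
	return rte_mempool_create("example_pool", 1024, 128,
				  32, 0,
				  NULL, NULL, NULL, NULL,
				  socket_id,
				  RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET);
	/*
	 * The pre-patch spelling, MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET,
	 * still compiles, since the old macros remain as synonyms until
	 * they are deprecated.
	 */
}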
  

Comments

Olivier Matz Oct. 19, 2021, 4:13 p.m. UTC | #1
On Tue, Oct 19, 2021 at 01:08:41PM +0300, Andrew Rybchenko wrote:
> Fix the mempool flgas namespace by adding an RTE_ prefix to the name.

nit: flgas -> flags

> The old flags remain usable, to be deprecated in the future.
> 
> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

(...)

> @@ -777,12 +777,12 @@ rte_mempool_cache_free(struct rte_mempool_cache *cache)
>  	rte_free(cache);
>  }
>  
> -#define MEMPOOL_KNOWN_FLAGS (MEMPOOL_F_NO_SPREAD \
> -	| MEMPOOL_F_NO_CACHE_ALIGN \
> -	| MEMPOOL_F_SP_PUT \
> -	| MEMPOOL_F_SC_GET \
> -	| MEMPOOL_F_POOL_CREATED \
> -	| MEMPOOL_F_NO_IOVA_CONTIG \
> +#define MEMPOOL_KNOWN_FLAGS (RTE_MEMPOOL_F_NO_SPREAD \
> +	| RTE_MEMPOOL_F_NO_CACHE_ALIGN \
> +	| RTE_MEMPOOL_F_SP_PUT \
> +	| RTE_MEMPOOL_F_SC_GET \
> +	| RTE_MEMPOOL_F_POOL_CREATED \
> +	| RTE_MEMPOOL_F_NO_IOVA_CONTIG \
>  	)

I guess MEMPOOL_KNOWN_FLAGS was kept as is on purpose.
  
Olivier Matz Oct. 19, 2021, 4:15 p.m. UTC | #2
On Tue, Oct 19, 2021 at 06:13:54PM +0200, Olivier Matz wrote:
> On Tue, Oct 19, 2021 at 01:08:41PM +0300, Andrew Rybchenko wrote:
> > Fix the mempool flgas namespace by adding an RTE_ prefix to the name.
> 
> nit: flgas -> flags
> 
> > The old flags remain usable, to be deprecated in the future.
> > 
> > Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> 
> (...)
> 
> > @@ -777,12 +777,12 @@ rte_mempool_cache_free(struct rte_mempool_cache *cache)
> >  	rte_free(cache);
> >  }
> >  
> > -#define MEMPOOL_KNOWN_FLAGS (MEMPOOL_F_NO_SPREAD \
> > -	| MEMPOOL_F_NO_CACHE_ALIGN \
> > -	| MEMPOOL_F_SP_PUT \
> > -	| MEMPOOL_F_SC_GET \
> > -	| MEMPOOL_F_POOL_CREATED \
> > -	| MEMPOOL_F_NO_IOVA_CONTIG \
> > +#define MEMPOOL_KNOWN_FLAGS (RTE_MEMPOOL_F_NO_SPREAD \
> > +	| RTE_MEMPOOL_F_NO_CACHE_ALIGN \
> > +	| RTE_MEMPOOL_F_SP_PUT \
> > +	| RTE_MEMPOOL_F_SC_GET \
> > +	| RTE_MEMPOOL_F_POOL_CREATED \
> > +	| RTE_MEMPOOL_F_NO_IOVA_CONTIG \
> >  	)
> 
> I guess MEMPOOL_KNOWN_FLAGS was kept as is on purpose.
> 

I forgot to add the ack

Acked-by: Olivier Matz <olivier.matz@6wind.com>
  
Andrew Rybchenko Oct. 19, 2021, 5:45 p.m. UTC | #3
On 10/19/21 7:13 PM, Olivier Matz wrote:
> On Tue, Oct 19, 2021 at 01:08:41PM +0300, Andrew Rybchenko wrote:
>> Fix the mempool flgas namespace by adding an RTE_ prefix to the name.
> 
> nit: flgas -> flags

Thanks, fixed.

> 
>> The old flags remain usable, to be deprecated in the future.
>>
>> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> 
> (...)
> 
>> @@ -777,12 +777,12 @@ rte_mempool_cache_free(struct rte_mempool_cache *cache)
>>   	rte_free(cache);
>>   }
>>   
>> -#define MEMPOOL_KNOWN_FLAGS (MEMPOOL_F_NO_SPREAD \
>> -	| MEMPOOL_F_NO_CACHE_ALIGN \
>> -	| MEMPOOL_F_SP_PUT \
>> -	| MEMPOOL_F_SC_GET \
>> -	| MEMPOOL_F_POOL_CREATED \
>> -	| MEMPOOL_F_NO_IOVA_CONTIG \
>> +#define MEMPOOL_KNOWN_FLAGS (RTE_MEMPOOL_F_NO_SPREAD \
>> +	| RTE_MEMPOOL_F_NO_CACHE_ALIGN \
>> +	| RTE_MEMPOOL_F_SP_PUT \
>> +	| RTE_MEMPOOL_F_SC_GET \
>> +	| RTE_MEMPOOL_F_POOL_CREATED \
>> +	| RTE_MEMPOOL_F_NO_IOVA_CONTIG \
>>   	)
> 
> I guess MEMPOOL_KNOWN_FLAGS was kept as is on purpose.
> 

Yes, since it is internal and located in the .c file.
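
For context, a minimal sketch of the validation pattern the internal mask supports; the exact check in lib/mempool/rte_mempool.c is not shown in this patch, so treat the helper name and placement as assumptions. It is the behaviour that app/test/test_mempool.c probes by passing RTE_MEMPOOL_F_NO_IOVA_CONTIG << 1 as an unknown flag:

#include <errno.h>
#include <rte_errno.h>
#include <rte_mempool.h>

/*
 * Hypothetical helper, meant to sit next to the MEMPOOL_KNOWN_FLAGS
 * definition in rte_mempool.c: reject any flag bit outside the known
 * set so pool creation fails with EINVAL instead of silently ignoring
 * unrecognized bits.
 */
static int
mempool_check_flags(unsigned int flags)
{
	if (flags & ~MEMPOOL_KNOWN_FLAGS) {
		rte_errno = EINVAL;
		return -1;
	}
	return 0;
}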
  

Patch

diff --git a/app/proc-info/main.c b/app/proc-info/main.c
index a8e928fa9f..74d8fdc1db 100644
--- a/app/proc-info/main.c
+++ b/app/proc-info/main.c
@@ -1298,12 +1298,15 @@  show_mempool(char *name)
 				"\t  -- No IOVA config (%c)\n",
 				ptr->name,
 				ptr->socket_id,
-				(flags & MEMPOOL_F_NO_SPREAD) ? 'y' : 'n',
-				(flags & MEMPOOL_F_NO_CACHE_ALIGN) ? 'y' : 'n',
-				(flags & MEMPOOL_F_SP_PUT) ? 'y' : 'n',
-				(flags & MEMPOOL_F_SC_GET) ? 'y' : 'n',
-				(flags & MEMPOOL_F_POOL_CREATED) ? 'y' : 'n',
-				(flags & MEMPOOL_F_NO_IOVA_CONTIG) ? 'y' : 'n');
+				(flags & RTE_MEMPOOL_F_NO_SPREAD) ? 'y' : 'n',
+				(flags & RTE_MEMPOOL_F_NO_CACHE_ALIGN) ?
+					'y' : 'n',
+				(flags & RTE_MEMPOOL_F_SP_PUT) ? 'y' : 'n',
+				(flags & RTE_MEMPOOL_F_SC_GET) ? 'y' : 'n',
+				(flags & RTE_MEMPOOL_F_POOL_CREATED) ?
+					'y' : 'n',
+				(flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG) ?
+					'y' : 'n');
 			printf("  - Size %u Cache %u element %u\n"
 				"  - header %u trailer %u\n"
 				"  - private data size %u\n",
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 3f94a82e32..b69897ef00 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -1396,7 +1396,7 @@  launch_args_parse(int argc, char** argv)
 						 "noisy-lkup-num-reads-writes must be >= 0\n");
 			}
 			if (!strcmp(lgopts[opt_idx].name, "no-iova-contig"))
-				mempool_flags = MEMPOOL_F_NO_IOVA_CONTIG;
+				mempool_flags = RTE_MEMPOOL_F_NO_IOVA_CONTIG;
 
 			if (!strcmp(lgopts[opt_idx].name, "rx-mq-mode")) {
 				char *end = NULL;
@@ -1440,7 +1440,7 @@  launch_args_parse(int argc, char** argv)
 	rx_mode.offloads = rx_offloads;
 	tx_mode.offloads = tx_offloads;
 
-	if (mempool_flags & MEMPOOL_F_NO_IOVA_CONTIG &&
+	if (mempool_flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG &&
 	    mp_alloc_type != MP_ALLOC_ANON) {
 		TESTPMD_LOG(WARNING, "cannot use no-iova-contig without "
 				  "mp-alloc=anon. mempool no-iova-contig is "
diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index 66bc8d86b7..ffe69e2d03 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -213,7 +213,7 @@  static int test_mempool_creation_with_unknown_flag(void)
 		MEMPOOL_ELT_SIZE, 0, 0,
 		NULL, NULL,
 		NULL, NULL,
-		SOCKET_ID_ANY, MEMPOOL_F_NO_IOVA_CONTIG << 1);
+		SOCKET_ID_ANY, RTE_MEMPOOL_F_NO_IOVA_CONTIG << 1);
 
 	if (mp_cov != NULL) {
 		rte_mempool_free(mp_cov);
@@ -336,8 +336,8 @@  test_mempool_sp_sc(void)
 			my_mp_init, NULL,
 			my_obj_init, NULL,
 			SOCKET_ID_ANY,
-			MEMPOOL_F_NO_CACHE_ALIGN | MEMPOOL_F_SP_PUT |
-			MEMPOOL_F_SC_GET);
+			RTE_MEMPOOL_F_NO_CACHE_ALIGN | RTE_MEMPOOL_F_SP_PUT |
+			RTE_MEMPOOL_F_SC_GET);
 		if (mp_spsc == NULL)
 			RET_ERR();
 	}
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index d5435a64aa..9a0e3832a3 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -221,6 +221,9 @@  API Changes
   removed. Its usages have been replaced by a new function
   ``rte_kvargs_get_with_value()``.
 
+* mempool: The mempool flags ``MEMPOOL_F_*`` will be deprecated in the future.
+  Newly added flags with ``RTE_MEMPOOL_F_`` prefix should be used instead.
+
 * net: Renamed ``s_addr`` and ``d_addr`` fields of ``rte_ether_hdr`` structure
   to ``src_addr`` and ``dst_addr``, respectively.
 
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 9d40e336d7..d325daed95 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -19,7 +19,7 @@  cnxk_tim_chnk_pool_create(struct cnxk_tim_ring *tim_ring,
 	cache_sz /= rte_lcore_count();
 	/* Create chunk pool. */
 	if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_SP_PUT) {
-		mp_flags = MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET;
+		mp_flags = RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET;
 		plt_tim_dbg("Using single producer mode");
 		tim_ring->prod_type_sp = true;
 	}
diff --git a/drivers/event/octeontx/timvf_evdev.c b/drivers/event/octeontx/timvf_evdev.c
index 688e9daa66..06fc53cc5b 100644
--- a/drivers/event/octeontx/timvf_evdev.c
+++ b/drivers/event/octeontx/timvf_evdev.c
@@ -310,7 +310,7 @@  timvf_ring_create(struct rte_event_timer_adapter *adptr)
 	}
 
 	if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_SP_PUT) {
-		mp_flags = MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET;
+		mp_flags = RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET;
 		timvf_log_info("Using single producer mode");
 	}
 
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c
index de50c4c76e..3cdc468140 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.c
+++ b/drivers/event/octeontx2/otx2_tim_evdev.c
@@ -81,7 +81,7 @@  tim_chnk_pool_create(struct otx2_tim_ring *tim_ring,
 	cache_sz /= rte_lcore_count();
 	/* Create chunk pool. */
 	if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_SP_PUT) {
-		mp_flags = MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET;
+		mp_flags = RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET;
 		otx2_tim_dbg("Using single producer mode");
 		tim_ring->prod_type_sp = true;
 	}
diff --git a/drivers/mempool/bucket/rte_mempool_bucket.c b/drivers/mempool/bucket/rte_mempool_bucket.c
index 8b9daa9782..8ff9e53007 100644
--- a/drivers/mempool/bucket/rte_mempool_bucket.c
+++ b/drivers/mempool/bucket/rte_mempool_bucket.c
@@ -426,7 +426,7 @@  bucket_init_per_lcore(unsigned int lcore_id, void *arg)
 		goto error;
 
 	rg_flags = RING_F_SC_DEQ;
-	if (mp->flags & MEMPOOL_F_SP_PUT)
+	if (mp->flags & RTE_MEMPOOL_F_SP_PUT)
 		rg_flags |= RING_F_SP_ENQ;
 	bd->adoption_buffer_rings[lcore_id] = rte_ring_create(rg_name,
 		rte_align32pow2(mp->size + 1), mp->socket_id, rg_flags);
@@ -472,7 +472,7 @@  bucket_alloc(struct rte_mempool *mp)
 		goto no_mem_for_data;
 	}
 	bd->pool = mp;
-	if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
+	if (mp->flags & RTE_MEMPOOL_F_NO_CACHE_ALIGN)
 		bucket_header_size = sizeof(struct bucket_header);
 	else
 		bucket_header_size = RTE_CACHE_LINE_SIZE;
@@ -494,9 +494,9 @@  bucket_alloc(struct rte_mempool *mp)
 		goto no_mem_for_stacks;
 	}
 
-	if (mp->flags & MEMPOOL_F_SP_PUT)
+	if (mp->flags & RTE_MEMPOOL_F_SP_PUT)
 		rg_flags |= RING_F_SP_ENQ;
-	if (mp->flags & MEMPOOL_F_SC_GET)
+	if (mp->flags & RTE_MEMPOOL_F_SC_GET)
 		rg_flags |= RING_F_SC_DEQ;
 	rc = snprintf(rg_name, sizeof(rg_name),
 		      RTE_MEMPOOL_MZ_FORMAT ".0", mp->name);
diff --git a/drivers/mempool/ring/rte_mempool_ring.c b/drivers/mempool/ring/rte_mempool_ring.c
index b1f09ff28f..4b785971c4 100644
--- a/drivers/mempool/ring/rte_mempool_ring.c
+++ b/drivers/mempool/ring/rte_mempool_ring.c
@@ -110,9 +110,9 @@  common_ring_alloc(struct rte_mempool *mp)
 {
 	uint32_t rg_flags = 0;
 
-	if (mp->flags & MEMPOOL_F_SP_PUT)
+	if (mp->flags & RTE_MEMPOOL_F_SP_PUT)
 		rg_flags |= RING_F_SP_ENQ;
-	if (mp->flags & MEMPOOL_F_SC_GET)
+	if (mp->flags & RTE_MEMPOOL_F_SC_GET)
 		rg_flags |= RING_F_SC_DEQ;
 
 	return ring_alloc(mp, rg_flags);
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index d576bc6989..9db62acbd0 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1124,7 +1124,7 @@  nix_alloc_sqb_pool(int port, struct otx2_eth_txq *txq, uint16_t nb_desc)
 
 	txq->sqb_pool = rte_mempool_create_empty(name, NIX_MAX_SQB, blk_sz,
 						 0, 0, dev->node,
-						 MEMPOOL_F_NO_SPREAD);
+						 RTE_MEMPOOL_F_NO_SPREAD);
 	txq->nb_sqb_bufs = nb_sqb_bufs;
 	txq->sqes_per_sqb_log2 = (uint16_t)rte_log2_u32(sqes_per_sqb);
 	txq->nb_sqb_bufs_adj = nb_sqb_bufs -
@@ -1150,7 +1150,7 @@  nix_alloc_sqb_pool(int port, struct otx2_eth_txq *txq, uint16_t nb_desc)
 		goto fail;
 	}
 
-	tmp = rte_mempool_calc_obj_size(blk_sz, MEMPOOL_F_NO_SPREAD, &sz);
+	tmp = rte_mempool_calc_obj_size(blk_sz, RTE_MEMPOOL_F_NO_SPREAD, &sz);
 	if (dev->sqb_size != sz.elt_size) {
 		otx2_err("sqe pool block size is not expected %d != %d",
 			 dev->sqb_size, tmp);
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 5502f1ee69..7e07d381dd 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -1302,7 +1302,7 @@  nicvf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 	}
 
 	/* Mempool memory must be physically contiguous */
-	if (mp->flags & MEMPOOL_F_NO_IOVA_CONTIG) {
+	if (mp->flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG) {
 		PMD_INIT_LOG(ERR, "Mempool memory must be physically contiguous");
 		return -EINVAL;
 	}
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 607419ccaf..19210c702c 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -216,7 +216,7 @@  rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 	sz = (sz != NULL) ? sz : &lsz;
 
 	sz->header_size = sizeof(struct rte_mempool_objhdr);
-	if ((flags & MEMPOOL_F_NO_CACHE_ALIGN) == 0)
+	if ((flags & RTE_MEMPOOL_F_NO_CACHE_ALIGN) == 0)
 		sz->header_size = RTE_ALIGN_CEIL(sz->header_size,
 			RTE_MEMPOOL_ALIGN);
 
@@ -230,7 +230,7 @@  rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 	sz->elt_size = RTE_ALIGN_CEIL(elt_size, sizeof(uint64_t));
 
 	/* expand trailer to next cache line */
-	if ((flags & MEMPOOL_F_NO_CACHE_ALIGN) == 0) {
+	if ((flags & RTE_MEMPOOL_F_NO_CACHE_ALIGN) == 0) {
 		sz->total_size = sz->header_size + sz->elt_size +
 			sz->trailer_size;
 		sz->trailer_size += ((RTE_MEMPOOL_ALIGN -
@@ -242,7 +242,7 @@  rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 	 * increase trailer to add padding between objects in order to
 	 * spread them across memory channels/ranks
 	 */
-	if ((flags & MEMPOOL_F_NO_SPREAD) == 0) {
+	if ((flags & RTE_MEMPOOL_F_NO_SPREAD) == 0) {
 		unsigned new_size;
 		new_size = arch_mem_object_align
 			    (sz->header_size + sz->elt_size + sz->trailer_size);
@@ -294,11 +294,11 @@  mempool_ops_alloc_once(struct rte_mempool *mp)
 	int ret;
 
 	/* create the internal ring if not already done */
-	if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0) {
+	if ((mp->flags & RTE_MEMPOOL_F_POOL_CREATED) == 0) {
 		ret = rte_mempool_ops_alloc(mp);
 		if (ret != 0)
 			return ret;
-		mp->flags |= MEMPOOL_F_POOL_CREATED;
+		mp->flags |= RTE_MEMPOOL_F_POOL_CREATED;
 	}
 	return 0;
 }
@@ -336,7 +336,7 @@  rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	memhdr->free_cb = free_cb;
 	memhdr->opaque = opaque;
 
-	if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
+	if (mp->flags & RTE_MEMPOOL_F_NO_CACHE_ALIGN)
 		off = RTE_PTR_ALIGN_CEIL(vaddr, 8) - vaddr;
 	else
 		off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_MEMPOOL_ALIGN) - vaddr;
@@ -393,7 +393,7 @@  rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 	size_t off, phys_len;
 	int ret, cnt = 0;
 
-	if (mp->flags & MEMPOOL_F_NO_IOVA_CONTIG)
+	if (mp->flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG)
 		return rte_mempool_populate_iova(mp, addr, RTE_BAD_IOVA,
 			len, free_cb, opaque);
 
@@ -450,7 +450,7 @@  rte_mempool_get_page_size(struct rte_mempool *mp, size_t *pg_sz)
 	if (ret < 0)
 		return -EINVAL;
 	alloc_in_ext_mem = (ret == 1);
-	need_iova_contig_obj = !(mp->flags & MEMPOOL_F_NO_IOVA_CONTIG);
+	need_iova_contig_obj = !(mp->flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG);
 
 	if (!need_iova_contig_obj)
 		*pg_sz = 0;
@@ -527,7 +527,7 @@  rte_mempool_populate_default(struct rte_mempool *mp)
 	 * reserve space in smaller chunks.
 	 */
 
-	need_iova_contig_obj = !(mp->flags & MEMPOOL_F_NO_IOVA_CONTIG);
+	need_iova_contig_obj = !(mp->flags & RTE_MEMPOOL_F_NO_IOVA_CONTIG);
 	ret = rte_mempool_get_page_size(mp, &pg_sz);
 	if (ret < 0)
 		return ret;
@@ -777,12 +777,12 @@  rte_mempool_cache_free(struct rte_mempool_cache *cache)
 	rte_free(cache);
 }
 
-#define MEMPOOL_KNOWN_FLAGS (MEMPOOL_F_NO_SPREAD \
-	| MEMPOOL_F_NO_CACHE_ALIGN \
-	| MEMPOOL_F_SP_PUT \
-	| MEMPOOL_F_SC_GET \
-	| MEMPOOL_F_POOL_CREATED \
-	| MEMPOOL_F_NO_IOVA_CONTIG \
+#define MEMPOOL_KNOWN_FLAGS (RTE_MEMPOOL_F_NO_SPREAD \
+	| RTE_MEMPOOL_F_NO_CACHE_ALIGN \
+	| RTE_MEMPOOL_F_SP_PUT \
+	| RTE_MEMPOOL_F_SC_GET \
+	| RTE_MEMPOOL_F_POOL_CREATED \
+	| RTE_MEMPOOL_F_NO_IOVA_CONTIG \
 	)
 /* create an empty mempool */
 struct rte_mempool *
@@ -835,8 +835,8 @@  rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
 	}
 
 	/* "no cache align" imply "no spread" */
-	if (flags & MEMPOOL_F_NO_CACHE_ALIGN)
-		flags |= MEMPOOL_F_NO_SPREAD;
+	if (flags & RTE_MEMPOOL_F_NO_CACHE_ALIGN)
+		flags |= RTE_MEMPOOL_F_NO_SPREAD;
 
 	/* calculate mempool object sizes. */
 	if (!rte_mempool_calc_obj_size(elt_size, flags, &objsz)) {
@@ -948,11 +948,11 @@  rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
 	 * Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
 	 * set the correct index into the table of ops structs.
 	 */
-	if ((flags & MEMPOOL_F_SP_PUT) && (flags & MEMPOOL_F_SC_GET))
+	if ((flags & RTE_MEMPOOL_F_SP_PUT) && (flags & RTE_MEMPOOL_F_SC_GET))
 		ret = rte_mempool_set_ops_byname(mp, "ring_sp_sc", NULL);
-	else if (flags & MEMPOOL_F_SP_PUT)
+	else if (flags & RTE_MEMPOOL_F_SP_PUT)
 		ret = rte_mempool_set_ops_byname(mp, "ring_sp_mc", NULL);
-	else if (flags & MEMPOOL_F_SC_GET)
+	else if (flags & RTE_MEMPOOL_F_SC_GET)
 		ret = rte_mempool_set_ops_byname(mp, "ring_mp_sc", NULL);
 	else
 		ret = rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL);
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 8ef4c8ed1e..d4bcb009fa 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -251,17 +251,42 @@  struct rte_mempool {
 }  __rte_cache_aligned;
 
 /** Spreading among memory channels not required. */
-#define MEMPOOL_F_NO_SPREAD      0x0001
+#define RTE_MEMPOOL_F_NO_SPREAD		0x0001
+/**
+ * Backward compatibility synonym for RTE_MEMPOOL_F_NO_SPREAD.
+ * To be deprecated.
+ */
+#define MEMPOOL_F_NO_SPREAD		RTE_MEMPOOL_F_NO_SPREAD
 /** Do not align objects on cache lines. */
-#define MEMPOOL_F_NO_CACHE_ALIGN 0x0002
+#define RTE_MEMPOOL_F_NO_CACHE_ALIGN	0x0002
+/**
+ * Backward compatibility synonym for RTE_MEMPOOL_F_NO_CACHE_ALIGN.
+ * To be deprecated.
+ */
+#define MEMPOOL_F_NO_CACHE_ALIGN	RTE_MEMPOOL_F_NO_CACHE_ALIGN
 /** Default put is "single-producer". */
-#define MEMPOOL_F_SP_PUT         0x0004
+#define RTE_MEMPOOL_F_SP_PUT		0x0004
+/**
+ * Backward compatibility synonym for RTE_MEMPOOL_F_SP_PUT.
+ * To be deprecated.
+ */
+#define MEMPOOL_F_SP_PUT		RTE_MEMPOOL_F_SP_PUT
 /** Default get is "single-consumer". */
-#define MEMPOOL_F_SC_GET         0x0008
+#define RTE_MEMPOOL_F_SC_GET		0x0008
+/**
+ * Backward compatibility synonym for RTE_MEMPOOL_F_SC_GET.
+ * To be deprecated.
+ */
+#define MEMPOOL_F_SC_GET		RTE_MEMPOOL_F_SC_GET
 /** Internal: pool is created. */
-#define MEMPOOL_F_POOL_CREATED   0x0010
+#define RTE_MEMPOOL_F_POOL_CREATED	0x0010
 /** Don't need IOVA contiguous objects. */
-#define MEMPOOL_F_NO_IOVA_CONTIG 0x0020
+#define RTE_MEMPOOL_F_NO_IOVA_CONTIG	0x0020
+/**
+ * Backward compatibility synonym for RTE_MEMPOOL_F_NO_IOVA_CONTIG.
+ * To be deprecated.
+ */
+#define MEMPOOL_F_NO_IOVA_CONTIG	RTE_MEMPOOL_F_NO_IOVA_CONTIG
 
 /**
  * @internal When debug is enabled, store some statistics.
@@ -424,9 +449,9 @@  typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
  * Calculate memory size required to store given number of objects.
  *
  * If mempool objects are not required to be IOVA-contiguous
- * (the flag MEMPOOL_F_NO_IOVA_CONTIG is set), min_chunk_size defines
+ * (the flag RTE_MEMPOOL_F_NO_IOVA_CONTIG is set), min_chunk_size defines
  * virtually contiguous chunk size. Otherwise, if mempool objects must
- * be IOVA-contiguous (the flag MEMPOOL_F_NO_IOVA_CONTIG is clear),
+ * be IOVA-contiguous (the flag RTE_MEMPOOL_F_NO_IOVA_CONTIG is clear),
  * min_chunk_size defines IOVA-contiguous chunk size.
  *
  * @param[in] mp
@@ -974,22 +999,22 @@  typedef void (rte_mempool_ctor_t)(struct rte_mempool *, void *);
  *   constraint for the reserved zone.
  * @param flags
  *   The *flags* arguments is an OR of following flags:
- *   - MEMPOOL_F_NO_SPREAD: By default, objects addresses are spread
+ *   - RTE_MEMPOOL_F_NO_SPREAD: By default, objects addresses are spread
  *     between channels in RAM: the pool allocator will add padding
  *     between objects depending on the hardware configuration. See
  *     Memory alignment constraints for details. If this flag is set,
  *     the allocator will just align them to a cache line.
- *   - MEMPOOL_F_NO_CACHE_ALIGN: By default, the returned objects are
+ *   - RTE_MEMPOOL_F_NO_CACHE_ALIGN: By default, the returned objects are
  *     cache-aligned. This flag removes this constraint, and no
  *     padding will be present between objects. This flag implies
- *     MEMPOOL_F_NO_SPREAD.
- *   - MEMPOOL_F_SP_PUT: If this flag is set, the default behavior
+ *     RTE_MEMPOOL_F_NO_SPREAD.
+ *   - RTE_MEMPOOL_F_SP_PUT: If this flag is set, the default behavior
  *     when using rte_mempool_put() or rte_mempool_put_bulk() is
  *     "single-producer". Otherwise, it is "multi-producers".
- *   - MEMPOOL_F_SC_GET: If this flag is set, the default behavior
+ *   - RTE_MEMPOOL_F_SC_GET: If this flag is set, the default behavior
  *     when using rte_mempool_get() or rte_mempool_get_bulk() is
  *     "single-consumer". Otherwise, it is "multi-consumers".
- *   - MEMPOOL_F_NO_IOVA_CONTIG: If set, allocated objects won't
+ *   - RTE_MEMPOOL_F_NO_IOVA_CONTIG: If set, allocated objects won't
  *     necessarily be contiguous in IO memory.
  * @return
  *   The pointer to the new allocated mempool, on success. NULL on error
@@ -1676,7 +1701,7 @@  rte_mempool_empty(const struct rte_mempool *mp)
  *   A pointer (virtual address) to the element of the pool.
  * @return
  *   The IO address of the elt element.
- *   If the mempool was created with MEMPOOL_F_NO_IOVA_CONTIG, the
+ *   If the mempool was created with RTE_MEMPOOL_F_NO_IOVA_CONTIG, the
  *   returned value is RTE_BAD_IOVA.
  */
 static inline rte_iova_t
diff --git a/lib/mempool/rte_mempool_ops.c b/lib/mempool/rte_mempool_ops.c
index 5e22667787..2d36dee8f0 100644
--- a/lib/mempool/rte_mempool_ops.c
+++ b/lib/mempool/rte_mempool_ops.c
@@ -168,7 +168,7 @@  rte_mempool_set_ops_byname(struct rte_mempool *mp, const char *name,
 	unsigned i;
 
 	/* too late, the mempool is already populated. */
-	if (mp->flags & MEMPOOL_F_POOL_CREATED)
+	if (mp->flags & RTE_MEMPOOL_F_POOL_CREATED)
 		return -EEXIST;
 
 	for (i = 0; i < rte_mempool_ops_table.num_ops; i++) {
diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c
index 382217bc15..46a87e2339 100644
--- a/lib/pdump/rte_pdump.c
+++ b/lib/pdump/rte_pdump.c
@@ -371,7 +371,8 @@  pdump_validate_ring_mp(struct rte_ring *ring, struct rte_mempool *mp)
 		rte_errno = EINVAL;
 		return -1;
 	}
-	if (mp->flags & MEMPOOL_F_SP_PUT || mp->flags & MEMPOOL_F_SC_GET) {
+	if (mp->flags & RTE_MEMPOOL_F_SP_PUT ||
+	    mp->flags & RTE_MEMPOOL_F_SC_GET) {
 		PDUMP_LOG(ERR,
 			  "mempool with SP or SC set not valid for pdump,"
 			  "must have MP and MC set\n");
diff --git a/lib/vhost/iotlb.c b/lib/vhost/iotlb.c
index e4a445e709..82bdb84526 100644
--- a/lib/vhost/iotlb.c
+++ b/lib/vhost/iotlb.c
@@ -321,8 +321,8 @@  vhost_user_iotlb_init(struct virtio_net *dev, int vq_index)
 	vq->iotlb_pool = rte_mempool_create(pool_name,
 			IOTLB_CACHE_SIZE, sizeof(struct vhost_iotlb_entry), 0,
 			0, 0, NULL, NULL, NULL, socket,
-			MEMPOOL_F_NO_CACHE_ALIGN |
-			MEMPOOL_F_SP_PUT);
+			RTE_MEMPOOL_F_NO_CACHE_ALIGN |
+			RTE_MEMPOOL_F_SP_PUT);
 	if (!vq->iotlb_pool) {
 		VHOST_LOG_CONFIG(ERR,
 				"Failed to create IOTLB cache pool (%s)\n",