[1/3] mbuf: remove deprecated offload flags

Message ID 20210929214817.18082-2-olivier.matz@6wind.com (mailing list archive)
State Superseded, archived
Delegated to: Thomas Monjalon
Series mbuf: offload flags namespace

Checks

Context          Check      Description
ci/checkpatch    warning    coding style issues

Commit Message

Olivier Matz Sept. 29, 2021, 9:48 p.m. UTC
  The flags PKT_TX_VLAN_PKT, PKT_TX_QINQ_PKT, PKT_RX_EIP_CKSUM_BAD are
marked as deprecated since commit 380a7aab1ae2 ("mbuf: rename deprecated
VLAN flags") (2017). Remove their definitions from rte_mbuf_core.h,
and replace their usages.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 app/test-pmd/flowgen.c                    |  4 +--
 app/test-pmd/macfwd.c                     |  4 +--
 app/test-pmd/txonly.c                     |  4 +--
 doc/guides/rel_notes/deprecation.rst      |  5 ----
 drivers/net/af_packet/rte_eth_af_packet.c |  2 +-
 drivers/net/avp/avp_ethdev.c              |  4 +--
 drivers/net/axgbe/axgbe_rxtx.c            |  2 +-
 drivers/net/bnx2x/bnx2x.c                 |  2 +-
 drivers/net/bnxt/bnxt_txr.c               |  8 +++---
 drivers/net/cxgbe/sge.c                   |  4 +--
 drivers/net/dpaa2/dpaa2_rxtx.c            |  6 ++---
 drivers/net/e1000/em_rxtx.c               |  4 +--
 drivers/net/e1000/igb_rxtx.c              |  6 ++---
 drivers/net/fm10k/fm10k_rxtx.c            |  4 +--
 drivers/net/hinic/hinic_pmd_tx.c          |  2 +-
 drivers/net/hns3/hns3_rxtx.c              | 14 +++++-----
 drivers/net/i40e/i40e_rxtx.c              | 10 +++----
 drivers/net/iavf/iavf_rxtx.c              |  6 ++---
 drivers/net/iavf/iavf_rxtx.h              |  2 +-
 drivers/net/igc/igc_txrx.c                |  6 ++---
 drivers/net/ionic/ionic_rxtx.c            |  4 +--
 drivers/net/ixgbe/ixgbe_rxtx.c            |  6 ++---
 drivers/net/mlx5/mlx5_tx.h                | 24 ++++++++---------
 drivers/net/netvsc/hn_rxtx.c              |  2 +-
 drivers/net/nfp/nfp_rxtx.c                |  2 +-
 drivers/net/qede/qede_rxtx.c              |  2 +-
 drivers/net/qede/qede_rxtx.h              |  2 +-
 drivers/net/sfc/sfc_ef100_tx.c            |  4 +--
 drivers/net/sfc/sfc_ef10_tx.c             |  2 +-
 drivers/net/sfc/sfc_tx.c                  |  2 +-
 drivers/net/txgbe/txgbe_rxtx.c            |  8 +++---
 drivers/net/vhost/rte_eth_vhost.c         |  2 +-
 drivers/net/virtio/virtio_rxtx.c          |  2 +-
 drivers/net/vmxnet3/vmxnet3_rxtx.c        |  4 +--
 examples/vhost/main.c                     |  2 +-
 lib/mbuf/rte_mbuf_core.h                  | 33 ++---------------------
 36 files changed, 83 insertions(+), 117 deletions(-)
  

Comments

David Marchand Oct. 4, 2021, 8:29 a.m. UTC | #1
On Wed, Sep 29, 2021 at 11:50 PM Olivier Matz <olivier.matz@6wind.com> wrote:
>
> The flags PKT_TX_VLAN_PKT, PKT_TX_QINQ_PKT, PKT_RX_EIP_CKSUM_BAD are
> marked as deprecated since commit 380a7aab1ae2 ("mbuf: rename deprecated
> VLAN flags") (2017). Remove their definitions from rte_mbuf_core.h,
> and replace their usages.

The patch lgtm except the removal of some "bad checksum" flags, see below.

[snip]


> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 05fc2fdee7..549e9416c4 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -159,11 +159,6 @@ Deprecation Notices
>    will be limited to maximum 256 queues.
>    Also compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` will be removed.
>
> -* ethdev: The offload flag ``PKT_RX_EIP_CKSUM_BAD`` will be removed and
> -  replaced by the new flag ``PKT_RX_OUTER_IP_CKSUM_BAD``. The new name is more
> -  consistent with existing outer header checksum status flag naming, which
> -  should help in reducing confusion about its usage.
> -
>  * i40e: As there are both i40evf and iavf pmd, the functions of them are
>    duplicated. And now more and more advanced features are developed on iavf.
>    To keep consistent with kernel driver's name

Those 3 flags are easy to replace, but some projects are still using them.

$ git grep-all -El
'\<(PKT_TX_VLAN_PKT|PKT_TX_QINQ_PKT|PKT_RX_EIP_CKSUM_BAD)\>' |grep -v
\\.patch$
DPVS/src/netif.c
DPVS/src/vlan.c
FD.io-VPP/src/plugins/dpdk/device/format.c
gatekeeper/bpf/bpf_mbuf.h
lagopus/src/dataplane/dpdk/worker.c
packet-journey/app/main.c
Trex/src/pal/common/common_mbuf.h
Trex/src/pal/linux/mbuf.h

Please update the release notes to announce this API update.
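
A downstream project listed above that needs to keep building against both older and newer DPDK can carry a small compatibility shim. The following is a hypothetical sketch, not part of this patch or of the DPDK API; the old-to-new name mapping simply follows the replacements made in the patch:

/* Hypothetical compatibility shim for an out-of-tree application;
 * not part of this patch. When built against a DPDK release that has
 * dropped the old names, map them to their replacements. */
#include <rte_mbuf.h>

#ifndef PKT_TX_VLAN_PKT
#define PKT_TX_VLAN_PKT      PKT_TX_VLAN
#endif
#ifndef PKT_TX_QINQ_PKT
#define PKT_TX_QINQ_PKT      PKT_TX_QINQ
#endif
#ifndef PKT_RX_EIP_CKSUM_BAD
#define PKT_RX_EIP_CKSUM_BAD PKT_RX_OUTER_IP_CKSUM_BAD
#endif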


[snip]

> diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
> index 9d8e3ddc86..93db9292c0 100644
> --- a/lib/mbuf/rte_mbuf_core.h
> +++ b/lib/mbuf/rte_mbuf_core.h
> @@ -55,37 +55,12 @@ extern "C" {
>   /** RX packet with FDIR match indicate. */
>  #define PKT_RX_FDIR          (1ULL << 2)
>
> -/**
> - * Deprecated.
> - * Checking this flag alone is deprecated: check the 2 bits of
> - * PKT_RX_L4_CKSUM_MASK.
> - * This flag was set when the L4 checksum of a packet was detected as
> - * wrong by the hardware.
> - */
> -#define PKT_RX_L4_CKSUM_BAD  (1ULL << 3)
> -
> -/**
> - * Deprecated.
> - * Checking this flag alone is deprecated: check the 2 bits of
> - * PKT_RX_IP_CKSUM_MASK.
> - * This flag was set when the IP checksum of a packet was detected as
> - * wrong by the hardware.
> - */
> -#define PKT_RX_IP_CKSUM_BAD  (1ULL << 4)
> -

You did not mention PKT_RX_IP_CKSUM_BAD and PKT_RX_L4_CKSUM_BAD in the
commitlog.
There was no deprecation notice, and those flags were not marked
RTE_DEPRECATED (there are still many projects referencing them).

Is this removal intended?
  
Olivier Matz Oct. 4, 2021, 9:46 a.m. UTC | #2
Hi David,

Thank you for the review, my comments below.

On Mon, Oct 04, 2021 at 10:29:36AM +0200, David Marchand wrote:
> On Wed, Sep 29, 2021 at 11:50 PM Olivier Matz <olivier.matz@6wind.com> wrote:
> >
> > The flags PKT_TX_VLAN_PKT, PKT_TX_QINQ_PKT, PKT_RX_EIP_CKSUM_BAD are
> > marked as deprecated since commit 380a7aab1ae2 ("mbuf: rename deprecated
> > VLAN flags") (2017). Remove their definitions from rte_mbuf_core.h,
> > and replace their usages.
> 
> The patch lgtm except the removal of some "bad checksum" flags, see below.
>
> [snip]
> 
> 
> > diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> > index 05fc2fdee7..549e9416c4 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -159,11 +159,6 @@ Deprecation Notices
> >    will be limited to maximum 256 queues.
> >    Also compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` will be removed.
> >
> > -* ethdev: The offload flag ``PKT_RX_EIP_CKSUM_BAD`` will be removed and
> > -  replaced by the new flag ``PKT_RX_OUTER_IP_CKSUM_BAD``. The new name is more
> > -  consistent with existing outer header checksum status flag naming, which
> > -  should help in reducing confusion about its usage.
> > -
> >  * i40e: As there are both i40evf and iavf pmd, the functions of them are
> >    duplicated. And now more and more advanced features are developed on iavf.
> >    To keep consistent with kernel driver's name
> 
> Those 3 flags are easy to replace, but some projects are still using them.
> 
> $ git grep-all -El
> '\<(PKT_TX_VLAN_PKT|PKT_TX_QINQ_PKT|PKT_RX_EIP_CKSUM_BAD)\>' |grep -v
> \\.patch$
> DPVS/src/netif.c
> DPVS/src/vlan.c
> FD.io-VPP/src/plugins/dpdk/device/format.c
> gatekeeper/bpf/bpf_mbuf.h
> lagopus/src/dataplane/dpdk/worker.c
> packet-journey/app/main.c
> Trex/src/pal/common/common_mbuf.h
> Trex/src/pal/linux/mbuf.h
> 
> Please update the release notes to announce this API update.

I will add an additional note in the release note.

FYI, the flags PKT_TX_VLAN_PKT and PKT_TX_QINQ_PKT were not marked
RTE_DEPRECATED because their deprecation is older than this macro. If needed, I
can keep them for one more version in the header file.
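
As a sketch only (not what the posted patch does), keeping the old names for one more release could reuse the RTE_DEPRECATED pattern that this header already applies to PKT_RX_EIP_CKSUM_BAD, so users would get a build-time warning instead of an immediate break:

/* Sketch under the above assumption; the posted patch removes the
 * old names outright rather than keeping deprecated aliases. */
#define PKT_TX_VLAN_PKT   RTE_DEPRECATED(PKT_TX_VLAN_PKT) PKT_TX_VLAN
#define PKT_TX_QINQ_PKT   RTE_DEPRECATED(PKT_TX_QINQ_PKT) PKT_TX_QINQ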

> [snip]
> 
> > diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
> > index 9d8e3ddc86..93db9292c0 100644
> > --- a/lib/mbuf/rte_mbuf_core.h
> > +++ b/lib/mbuf/rte_mbuf_core.h
> > @@ -55,37 +55,12 @@ extern "C" {
> >   /** RX packet with FDIR match indicate. */
> >  #define PKT_RX_FDIR          (1ULL << 2)
> >
> > -/**
> > - * Deprecated.
> > - * Checking this flag alone is deprecated: check the 2 bits of
> > - * PKT_RX_L4_CKSUM_MASK.
> > - * This flag was set when the L4 checksum of a packet was detected as
> > - * wrong by the hardware.
> > - */
> > -#define PKT_RX_L4_CKSUM_BAD  (1ULL << 3)
> > -
> > -/**
> > - * Deprecated.
> > - * Checking this flag alone is deprecated: check the 2 bits of
> > - * PKT_RX_IP_CKSUM_MASK.
> > - * This flag was set when the IP checksum of a packet was detected as
> > - * wrong by the hardware.
> > - */
> > -#define PKT_RX_IP_CKSUM_BAD  (1ULL << 4)
> > -
> 
> You did not mention PKT_RX_IP_CKSUM_BAD and PKT_RX_L4_CKSUM_BAD in the
> commitlog.
> There was no deprecation notice, and those flags were not marked
> RTE_DEPRECATED (there are still many projects referencing them).
> 
> Is this removal intended?

Yes, actually these flags were defined twice at different places. I'm just
removing one occurrence, and the other remains.
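
For context, the surviving definitions are the ones belonging to the 2-bit checksum status fields, which is also the check the removed comments pointed to. A rough illustration, with names as they appear in rte_mbuf_core.h of this period (shown only to clarify the point, verify against the actual header):

/* The standalone PKT_RX_IP_CKSUM_BAD bit test is deprecated; the
 * recommended check extracts the 2-bit PKT_RX_IP_CKSUM_MASK field,
 * whose states are _UNKNOWN, _BAD, _GOOD and _NONE. */
#include <rte_mbuf.h>

static inline int
rx_ip_cksum_is_bad(const struct rte_mbuf *m)
{
	return (m->ol_flags & PKT_RX_IP_CKSUM_MASK) == PKT_RX_IP_CKSUM_BAD;
}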

Thanks,
Olivier
  

Patch

diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index 0d3664a64d..d0360b4363 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -100,9 +100,9 @@  pkt_burst_flow_gen(struct fwd_stream *fs)
 
 	tx_offloads = ports[fs->tx_port].dev_conf.txmode.offloads;
 	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
-		ol_flags |= PKT_TX_VLAN_PKT;
+		ol_flags |= PKT_TX_VLAN;
 	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
-		ol_flags |= PKT_TX_QINQ_PKT;
+		ol_flags |= PKT_TX_QINQ;
 	if (tx_offloads	& DEV_TX_OFFLOAD_MACSEC_INSERT)
 		ol_flags |= PKT_TX_MACSEC;
 
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index 0568ea794d..21be8bb470 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -73,9 +73,9 @@  pkt_burst_mac_forward(struct fwd_stream *fs)
 	txp = &ports[fs->tx_port];
 	tx_offloads = txp->dev_conf.txmode.offloads;
 	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
-		ol_flags = PKT_TX_VLAN_PKT;
+		ol_flags = PKT_TX_VLAN;
 	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
-		ol_flags |= PKT_TX_QINQ_PKT;
+		ol_flags |= PKT_TX_QINQ;
 	if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
 		ol_flags |= PKT_TX_MACSEC;
 	for (i = 0; i < nb_rx; i++) {
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index aed820f5d3..ab7cd622c7 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -353,9 +353,9 @@  pkt_burst_transmit(struct fwd_stream *fs)
 	vlan_tci = txp->tx_vlan_id;
 	vlan_tci_outer = txp->tx_vlan_id_outer;
 	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
-		ol_flags = PKT_TX_VLAN_PKT;
+		ol_flags = PKT_TX_VLAN;
 	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
-		ol_flags |= PKT_TX_QINQ_PKT;
+		ol_flags |= PKT_TX_QINQ;
 	if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
 		ol_flags |= PKT_TX_MACSEC;
 
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 05fc2fdee7..549e9416c4 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -159,11 +159,6 @@  Deprecation Notices
   will be limited to maximum 256 queues.
   Also compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` will be removed.
 
-* ethdev: The offload flag ``PKT_RX_EIP_CKSUM_BAD`` will be removed and
-  replaced by the new flag ``PKT_RX_OUTER_IP_CKSUM_BAD``. The new name is more
-  consistent with existing outer header checksum status flag naming, which
-  should help in reducing confusion about its usage.
-
 * i40e: As there are both i40evf and iavf pmd, the functions of them are
   duplicated. And now more and more advanced features are developed on iavf.
   To keep consistent with kernel driver's name
diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index fcd8090399..294132b759 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -224,7 +224,7 @@  eth_af_packet_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		}
 
 		/* insert vlan info if necessary */
-		if (mbuf->ol_flags & PKT_TX_VLAN_PKT) {
+		if (mbuf->ol_flags & PKT_TX_VLAN) {
 			if (rte_vlan_insert(&mbuf)) {
 				rte_pktmbuf_free(mbuf);
 				continue;
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 623fa5e5ff..01553958be 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -1674,7 +1674,7 @@  avp_dev_copy_to_buffers(struct avp_dev *avp,
 	first_buf->nb_segs = count;
 	first_buf->pkt_len = total_length;
 
-	if (mbuf->ol_flags & PKT_TX_VLAN_PKT) {
+	if (mbuf->ol_flags & PKT_TX_VLAN) {
 		first_buf->ol_flags |= RTE_AVP_TX_VLAN_PKT;
 		first_buf->vlan_tci = mbuf->vlan_tci;
 	}
@@ -1905,7 +1905,7 @@  avp_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		pkt_buf->nb_segs = 1;
 		pkt_buf->next = NULL;
 
-		if (m->ol_flags & PKT_TX_VLAN_PKT) {
+		if (m->ol_flags & PKT_TX_VLAN) {
 			pkt_buf->ol_flags |= RTE_AVP_TX_VLAN_PKT;
 			pkt_buf->vlan_tci = m->vlan_tci;
 		}
diff --git a/drivers/net/axgbe/axgbe_rxtx.c b/drivers/net/axgbe/axgbe_rxtx.c
index 33f709a6bb..45b9bd3e39 100644
--- a/drivers/net/axgbe/axgbe_rxtx.c
+++ b/drivers/net/axgbe/axgbe_rxtx.c
@@ -811,7 +811,7 @@  static int axgbe_xmit_hw(struct axgbe_tx_queue *txq,
 		AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, CIC, 0x1);
 	rte_wmb();
 
-	if (mbuf->ol_flags & (PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT)) {
+	if (mbuf->ol_flags & (PKT_TX_VLAN | PKT_TX_QINQ)) {
 		/* Mark it as a CONTEXT descriptor */
 		AXGMAC_SET_BITS_LE(desc->desc3, TX_CONTEXT_DESC3,
 				  CTXT, 1);
diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c
index 9163b8b1fd..235c374180 100644
--- a/drivers/net/bnx2x/bnx2x.c
+++ b/drivers/net/bnx2x/bnx2x.c
@@ -2189,7 +2189,7 @@  int bnx2x_tx_encap(struct bnx2x_tx_queue *txq, struct rte_mbuf *m0)
 
 	tx_start_bd->nbd = rte_cpu_to_le_16(2);
 
-	if (m0->ol_flags & PKT_TX_VLAN_PKT) {
+	if (m0->ol_flags & PKT_TX_VLAN) {
 		tx_start_bd->vlan_or_ethertype =
 		    rte_cpu_to_le_16(m0->vlan_tci);
 		tx_start_bd->bd_flags.as_bitfield |=
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 47824334ae..5d3cdfa8f2 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -110,10 +110,10 @@  bnxt_xmit_need_long_bd(struct rte_mbuf *tx_pkt, struct bnxt_tx_queue *txq)
 {
 	if (tx_pkt->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_TCP_CKSUM |
 				PKT_TX_UDP_CKSUM | PKT_TX_IP_CKSUM |
-				PKT_TX_VLAN_PKT | PKT_TX_OUTER_IP_CKSUM |
+				PKT_TX_VLAN | PKT_TX_OUTER_IP_CKSUM |
 				PKT_TX_TUNNEL_GRE | PKT_TX_TUNNEL_VXLAN |
 				PKT_TX_TUNNEL_GENEVE | PKT_TX_IEEE1588_TMST |
-				PKT_TX_QINQ_PKT) ||
+				PKT_TX_QINQ) ||
 	     (BNXT_TRUFLOW_EN(txq->bp) &&
 	      (txq->bp->tx_cfa_action || txq->vfr_tx_cfa_action)))
 		return true;
@@ -200,13 +200,13 @@  static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 		vlan_tag_flags = 0;
 
 		/* HW can accelerate only outer vlan in QinQ mode */
-		if (tx_pkt->ol_flags & PKT_TX_QINQ_PKT) {
+		if (tx_pkt->ol_flags & PKT_TX_QINQ) {
 			vlan_tag_flags = TX_BD_LONG_CFA_META_KEY_VLAN_TAG |
 				tx_pkt->vlan_tci_outer;
 			outer_tpid_bd = txq->bp->outer_tpid_bd &
 				BNXT_OUTER_TPID_BD_MASK;
 			vlan_tag_flags |= outer_tpid_bd;
-		} else if (tx_pkt->ol_flags & PKT_TX_VLAN_PKT) {
+		} else if (tx_pkt->ol_flags & PKT_TX_VLAN) {
 			/* shurd: Should this mask at
 			 * TX_BD_LONG_CFA_META_VLAN_VID_MASK?
 			 */
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index e5f7721dc4..3299d6252e 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -1037,7 +1037,7 @@  static inline int tx_do_packet_coalesce(struct sge_eth_txq *txq,
 		cntrl = F_TXPKT_L4CSUM_DIS | F_TXPKT_IPCSUM_DIS;
 	}
 
-	if (mbuf->ol_flags & PKT_TX_VLAN_PKT) {
+	if (mbuf->ol_flags & PKT_TX_VLAN) {
 		txq->stats.vlan_ins++;
 		cntrl |= F_TXPKT_VLAN_VLD | V_TXPKT_VLAN(mbuf->vlan_tci);
 	}
@@ -1258,7 +1258,7 @@  int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
 		txq->stats.tx_cso += m->tso_segsz;
 	}
 
-	if (m->ol_flags & PKT_TX_VLAN_PKT) {
+	if (m->ol_flags & PKT_TX_VLAN) {
 		txq->stats.vlan_ins++;
 		cntrl |= F_TXPKT_VLAN_VLD | V_TXPKT_VLAN(m->vlan_tci);
 	}
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index f40369e2c3..f491f4d10a 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -1228,7 +1228,7 @@  dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 				    (*bufs)->nb_segs == 1 &&
 				    rte_mbuf_refcnt_read((*bufs)) == 1)) {
 					if (unlikely(((*bufs)->ol_flags
-						& PKT_TX_VLAN_PKT) ||
+						& PKT_TX_VLAN) ||
 						(eth_data->dev_conf.txmode.offloads
 						& DEV_TX_OFFLOAD_VLAN_INSERT))) {
 						ret = rte_vlan_insert(bufs);
@@ -1271,7 +1271,7 @@  dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 				goto send_n_return;
 			}
 
-			if (unlikely(((*bufs)->ol_flags & PKT_TX_VLAN_PKT) ||
+			if (unlikely(((*bufs)->ol_flags & PKT_TX_VLAN) ||
 				(eth_data->dev_conf.txmode.offloads
 				& DEV_TX_OFFLOAD_VLAN_INSERT))) {
 				int ret = rte_vlan_insert(bufs);
@@ -1532,7 +1532,7 @@  dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 				    (*bufs)->nb_segs == 1 &&
 				    rte_mbuf_refcnt_read((*bufs)) == 1)) {
 					if (unlikely((*bufs)->ol_flags
-						& PKT_TX_VLAN_PKT)) {
+						& PKT_TX_VLAN)) {
 					  ret = rte_vlan_insert(bufs);
 					  if (ret)
 						goto send_n_return;
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index dfd8f2fd00..0105e2d384 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -55,7 +55,7 @@ 
 		PKT_TX_IPV4 |           \
 		PKT_TX_IP_CKSUM |       \
 		PKT_TX_L4_MASK |        \
-		PKT_TX_VLAN_PKT)
+		PKT_TX_VLAN)
 
 #define E1000_TX_OFFLOAD_NOTSUP_MASK \
 		(PKT_TX_OFFLOAD_MASK ^ E1000_TX_OFFLOAD_MASK)
@@ -506,7 +506,7 @@  eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		popts_spec = 0;
 
 		/* Set VLAN Tag offload fields. */
-		if (ol_flags & PKT_TX_VLAN_PKT) {
+		if (ol_flags & PKT_TX_VLAN) {
 			cmd_type_len |= E1000_TXD_CMD_VLE;
 			popts_spec = tx_pkt->vlan_tci << E1000_TXD_VLAN_SHIFT;
 		}
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 278d5d2712..c630894052 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -54,7 +54,7 @@ 
 		PKT_TX_OUTER_IPV4 |	 \
 		PKT_TX_IPV6 |		 \
 		PKT_TX_IPV4 |		 \
-		PKT_TX_VLAN_PKT |		 \
+		PKT_TX_VLAN |		 \
 		PKT_TX_IP_CKSUM |		 \
 		PKT_TX_L4_MASK |		 \
 		PKT_TX_TCP_SEG |		 \
@@ -260,7 +260,7 @@  igbe_set_xmit_ctx(struct igb_tx_queue* txq,
 	/* Specify which HW CTX to upload. */
 	mss_l4len_idx = (ctx_idx << E1000_ADVTXD_IDX_SHIFT);
 
-	if (ol_flags & PKT_TX_VLAN_PKT)
+	if (ol_flags & PKT_TX_VLAN)
 		tx_offload_mask.data |= TX_VLAN_CMP_MASK;
 
 	/* check if TCP segmentation required for this packet */
@@ -369,7 +369,7 @@  tx_desc_vlan_flags_to_cmdtype(uint64_t ol_flags)
 	uint32_t cmdtype;
 	static uint32_t vlan_cmd[2] = {0, E1000_ADVTXD_DCMD_VLE};
 	static uint32_t tso_cmd[2] = {0, E1000_ADVTXD_DCMD_TSE};
-	cmdtype = vlan_cmd[(ol_flags & PKT_TX_VLAN_PKT) != 0];
+	cmdtype = vlan_cmd[(ol_flags & PKT_TX_VLAN) != 0];
 	cmdtype |= tso_cmd[(ol_flags & PKT_TX_TCP_SEG) != 0];
 	return cmdtype;
 }
diff --git a/drivers/net/fm10k/fm10k_rxtx.c b/drivers/net/fm10k/fm10k_rxtx.c
index 0a9a27aa5a..496e72a003 100644
--- a/drivers/net/fm10k/fm10k_rxtx.c
+++ b/drivers/net/fm10k/fm10k_rxtx.c
@@ -38,7 +38,7 @@  static inline void dump_rxd(union fm10k_rx_desc *rxd)
 #endif
 
 #define FM10K_TX_OFFLOAD_MASK (  \
-		PKT_TX_VLAN_PKT |        \
+		PKT_TX_VLAN |        \
 		PKT_TX_IPV6 |            \
 		PKT_TX_IPV4 |            \
 		PKT_TX_IP_CKSUM |        \
@@ -609,7 +609,7 @@  static inline void tx_xmit_pkt(struct fm10k_tx_queue *q, struct rte_mbuf *mb)
 		q->hw_ring[q->next_free].flags |= FM10K_TXD_FLAG_CSUM;
 
 	/* set vlan if requested */
-	if (mb->ol_flags & PKT_TX_VLAN_PKT)
+	if (mb->ol_flags & PKT_TX_VLAN)
 		q->hw_ring[q->next_free].vlan = mb->vlan_tci;
 	else
 		q->hw_ring[q->next_free].vlan = 0;
diff --git a/drivers/net/hinic/hinic_pmd_tx.c b/drivers/net/hinic/hinic_pmd_tx.c
index 669f82389c..e14937139d 100644
--- a/drivers/net/hinic/hinic_pmd_tx.c
+++ b/drivers/net/hinic/hinic_pmd_tx.c
@@ -592,7 +592,7 @@  hinic_fill_tx_offload_info(struct rte_mbuf *mbuf,
 	task->pkt_info2 = 0;
 
 	/* Base VLAN */
-	if (unlikely(ol_flags & PKT_TX_VLAN_PKT)) {
+	if (unlikely(ol_flags & PKT_TX_VLAN)) {
 		vlan_tag = mbuf->vlan_tci;
 		hinic_set_vlan_tx_offload(task, queue_info, vlan_tag,
 					  vlan_tag >> VLAN_PRIO_SHIFT);
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 481872e395..59ba9e7454 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -3190,11 +3190,11 @@  hns3_fill_first_desc(struct hns3_tx_queue *txq, struct hns3_desc *desc,
 	 * To avoid the VLAN of Tx descriptor is overwritten by PVID, it should
 	 * be added to the position close to the IP header when PVID is enabled.
 	 */
-	if (!txq->pvid_sw_shift_en && ol_flags & (PKT_TX_VLAN_PKT |
-				PKT_TX_QINQ_PKT)) {
+	if (!txq->pvid_sw_shift_en && ol_flags & (PKT_TX_VLAN |
+				PKT_TX_QINQ)) {
 		desc->tx.ol_type_vlan_len_msec |=
 				rte_cpu_to_le_32(BIT(HNS3_TXD_OVLAN_B));
-		if (ol_flags & PKT_TX_QINQ_PKT)
+		if (ol_flags & PKT_TX_QINQ)
 			desc->tx.outer_vlan_tag =
 					rte_cpu_to_le_16(rxm->vlan_tci_outer);
 		else
@@ -3202,8 +3202,8 @@  hns3_fill_first_desc(struct hns3_tx_queue *txq, struct hns3_desc *desc,
 					rte_cpu_to_le_16(rxm->vlan_tci);
 	}
 
-	if (ol_flags & PKT_TX_QINQ_PKT ||
-	    ((ol_flags & PKT_TX_VLAN_PKT) && txq->pvid_sw_shift_en)) {
+	if (ol_flags & PKT_TX_QINQ ||
+	    ((ol_flags & PKT_TX_VLAN) && txq->pvid_sw_shift_en)) {
 		desc->tx.type_cs_vlan_tso_len |=
 					rte_cpu_to_le_32(BIT(HNS3_TXD_VLAN_B));
 		desc->tx.vlan_tag = rte_cpu_to_le_16(rxm->vlan_tci);
@@ -3742,12 +3742,12 @@  hns3_vld_vlan_chk(struct hns3_tx_queue *txq, struct rte_mbuf *m)
 	 * implementation function named hns3_prep_pkts to inform users that
 	 * these packets will be discarded.
 	 */
-	if (m->ol_flags & PKT_TX_QINQ_PKT)
+	if (m->ol_flags & PKT_TX_QINQ)
 		return -EINVAL;
 
 	eh = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
 	if (eh->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN)) {
-		if (m->ol_flags & PKT_TX_VLAN_PKT)
+		if (m->ol_flags & PKT_TX_VLAN)
 			return -EINVAL;
 
 		/* Ensure the incoming packet is not a QinQ packet */
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 3eb82578b0..33a6a9e840 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -64,8 +64,8 @@ 
 		PKT_TX_L4_MASK |        \
 		PKT_TX_OUTER_IP_CKSUM | \
 		PKT_TX_TCP_SEG |        \
-		PKT_TX_QINQ_PKT |       \
-		PKT_TX_VLAN_PKT |	\
+		PKT_TX_QINQ |       \
+		PKT_TX_VLAN |	\
 		PKT_TX_TUNNEL_MASK |	\
 		I40E_TX_IEEE1588_TMST)
 
@@ -1006,7 +1006,7 @@  i40e_calc_context_desc(uint64_t flags)
 {
 	static uint64_t mask = PKT_TX_OUTER_IP_CKSUM |
 		PKT_TX_TCP_SEG |
-		PKT_TX_QINQ_PKT |
+		PKT_TX_QINQ |
 		PKT_TX_TUNNEL_MASK;
 
 #ifdef RTE_LIBRTE_IEEE1588
@@ -1151,7 +1151,7 @@  i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Descriptor based VLAN insertion */
-		if (ol_flags & (PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT)) {
+		if (ol_flags & (PKT_TX_VLAN | PKT_TX_QINQ)) {
 			td_cmd |= I40E_TX_DESC_CMD_IL2TAG1;
 			td_tag = tx_pkt->vlan_tci;
 		}
@@ -1200,7 +1200,7 @@  i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 			ctx_txd->tunneling_params =
 				rte_cpu_to_le_32(cd_tunneling_params);
-			if (ol_flags & PKT_TX_QINQ_PKT) {
+			if (ol_flags & PKT_TX_QINQ) {
 				cd_l2tag2 = tx_pkt->vlan_tci_outer;
 				cd_type_cmd_tso_mss |=
 					((uint64_t)I40E_TX_CTX_DESC_IL2TAG2 <<
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 87afc0b4cb..cba1ba8052 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -2074,7 +2074,7 @@  iavf_calc_context_desc(uint64_t flags, uint8_t vlan_flag)
 {
 	if (flags & PKT_TX_TCP_SEG)
 		return 1;
-	if (flags & PKT_TX_VLAN_PKT &&
+	if (flags & PKT_TX_VLAN &&
 	    vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2)
 		return 1;
 	return 0;
@@ -2260,7 +2260,7 @@  iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Descriptor based VLAN insertion */
-		if (ol_flags & PKT_TX_VLAN_PKT &&
+		if (ol_flags & PKT_TX_VLAN &&
 		    txq->vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG1) {
 			td_cmd |= IAVF_TX_DESC_CMD_IL2TAG1;
 			td_tag = tx_pkt->vlan_tci;
@@ -2301,7 +2301,7 @@  iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 				cd_type_cmd_tso_mss |=
 					iavf_set_tso_ctx(tx_pkt, tx_offload);
 
-			if (ol_flags & PKT_TX_VLAN_PKT &&
+			if (ol_flags & PKT_TX_VLAN &&
 			   txq->vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2) {
 				cd_type_cmd_tso_mss |= IAVF_TX_CTX_DESC_IL2TAG2
 					<< IAVF_TXD_CTX_QW1_CMD_SHIFT;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index e210b913d6..6c3fdbb3b2 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -62,7 +62,7 @@ 
 		PKT_TX_OUTER_IPV4 |		 \
 		PKT_TX_IPV6 |			 \
 		PKT_TX_IPV4 |			 \
-		PKT_TX_VLAN_PKT |		 \
+		PKT_TX_VLAN |		 \
 		PKT_TX_IP_CKSUM |		 \
 		PKT_TX_L4_MASK |		 \
 		PKT_TX_TCP_SEG)
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index b5489eedd2..9848afd9ca 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -78,7 +78,7 @@ 
 		PKT_TX_OUTER_IPV4 |	\
 		PKT_TX_IPV6 |		\
 		PKT_TX_IPV4 |		\
-		PKT_TX_VLAN_PKT |	\
+		PKT_TX_VLAN |	\
 		PKT_TX_IP_CKSUM |	\
 		PKT_TX_L4_MASK |	\
 		PKT_TX_TCP_SEG |	\
@@ -1530,7 +1530,7 @@  igc_set_xmit_ctx(struct igc_tx_queue *txq,
 	/* Specify which HW CTX to upload. */
 	mss_l4len_idx = (ctx_curr << IGC_ADVTXD_IDX_SHIFT);
 
-	if (ol_flags & PKT_TX_VLAN_PKT)
+	if (ol_flags & PKT_TX_VLAN)
 		tx_offload_mask.vlan_tci = 0xffff;
 
 	/* check if TCP segmentation required for this packet */
@@ -1604,7 +1604,7 @@  tx_desc_vlan_flags_to_cmdtype(uint64_t ol_flags)
 	uint32_t cmdtype;
 	static uint32_t vlan_cmd[2] = {0, IGC_ADVTXD_DCMD_VLE};
 	static uint32_t tso_cmd[2] = {0, IGC_ADVTXD_DCMD_TSE};
-	cmdtype = vlan_cmd[(ol_flags & PKT_TX_VLAN_PKT) != 0];
+	cmdtype = vlan_cmd[(ol_flags & PKT_TX_VLAN) != 0];
 	cmdtype |= tso_cmd[(ol_flags & IGC_TX_OFFLOAD_SEG) != 0];
 	return cmdtype;
 }
diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
index b83ea1bcaa..431435eea0 100644
--- a/drivers/net/ionic/ionic_rxtx.c
+++ b/drivers/net/ionic/ionic_rxtx.c
@@ -356,7 +356,7 @@  ionic_tx_tso(struct ionic_tx_qcq *txq, struct rte_mbuf *txm)
 	uint32_t offset = 0;
 	bool start, done;
 	bool encap;
-	bool has_vlan = !!(txm->ol_flags & PKT_TX_VLAN_PKT);
+	bool has_vlan = !!(txm->ol_flags & PKT_TX_VLAN);
 	uint16_t vlan_tci = txm->vlan_tci;
 	uint64_t ol_flags = txm->ol_flags;
 
@@ -495,7 +495,7 @@  ionic_tx(struct ionic_tx_qcq *txq, struct rte_mbuf *txm)
 	if (opcode == IONIC_TXQ_DESC_OPCODE_CSUM_NONE)
 		stats->no_csum++;
 
-	has_vlan = (ol_flags & PKT_TX_VLAN_PKT);
+	has_vlan = (ol_flags & PKT_TX_VLAN);
 	encap = ((ol_flags & PKT_TX_OUTER_IP_CKSUM) ||
 			(ol_flags & PKT_TX_OUTER_UDP_CKSUM)) &&
 			((ol_flags & PKT_TX_OUTER_IPV4) ||
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index bfdfd5e755..717ae8f775 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -64,7 +64,7 @@ 
 		PKT_TX_OUTER_IPV4 |		 \
 		PKT_TX_IPV6 |			 \
 		PKT_TX_IPV4 |			 \
-		PKT_TX_VLAN_PKT |		 \
+		PKT_TX_VLAN |		 \
 		PKT_TX_IP_CKSUM |		 \
 		PKT_TX_L4_MASK |		 \
 		PKT_TX_TCP_SEG |		 \
@@ -384,7 +384,7 @@  ixgbe_set_xmit_ctx(struct ixgbe_tx_queue *txq,
 	/* Specify which HW CTX to upload. */
 	mss_l4len_idx |= (ctx_idx << IXGBE_ADVTXD_IDX_SHIFT);
 
-	if (ol_flags & PKT_TX_VLAN_PKT) {
+	if (ol_flags & PKT_TX_VLAN) {
 		tx_offload_mask.vlan_tci |= ~0;
 	}
 
@@ -543,7 +543,7 @@  tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
 {
 	uint32_t cmdtype = 0;
 
-	if (ol_flags & PKT_TX_VLAN_PKT)
+	if (ol_flags & PKT_TX_VLAN)
 		cmdtype |= IXGBE_ADVTXD_DCMD_VLE;
 	if (ol_flags & PKT_TX_TCP_SEG)
 		cmdtype |= IXGBE_ADVTXD_DCMD_TSE;
diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index 1a35919371..1efe912a06 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -538,7 +538,7 @@  txq_mbuf_to_swp(struct mlx5_txq_local *__rte_restrict loc,
 	 * should be set regardless of HW offload.
 	 */
 	off = loc->mbuf->outer_l2_len;
-	if (MLX5_TXOFF_CONFIG(VLAN) && ol & PKT_TX_VLAN_PKT)
+	if (MLX5_TXOFF_CONFIG(VLAN) && ol & PKT_TX_VLAN)
 		off += sizeof(struct rte_vlan_hdr);
 	set = (off >> 1) << 8; /* Outer L3 offset. */
 	off += loc->mbuf->outer_l3_len;
@@ -956,7 +956,7 @@  mlx5_tx_eseg_none(struct mlx5_txq_data *__rte_restrict txq __rte_unused,
 		       *RTE_FLOW_DYNF_METADATA(loc->mbuf) : 0 : 0;
 	/* Engage VLAN tag insertion feature if requested. */
 	if (MLX5_TXOFF_CONFIG(VLAN) &&
-	    loc->mbuf->ol_flags & PKT_TX_VLAN_PKT) {
+	    loc->mbuf->ol_flags & PKT_TX_VLAN) {
 		/*
 		 * We should get here only if device support
 		 * this feature correctly.
@@ -1814,7 +1814,7 @@  mlx5_tx_packet_multi_tso(struct mlx5_txq_data *__rte_restrict txq,
 	 * the required space in WQE ring buffer.
 	 */
 	dlen = rte_pktmbuf_pkt_len(loc->mbuf);
-	if (MLX5_TXOFF_CONFIG(VLAN) && loc->mbuf->ol_flags & PKT_TX_VLAN_PKT)
+	if (MLX5_TXOFF_CONFIG(VLAN) && loc->mbuf->ol_flags & PKT_TX_VLAN)
 		vlan = sizeof(struct rte_vlan_hdr);
 	inlen = loc->mbuf->l2_len + vlan +
 		loc->mbuf->l3_len + loc->mbuf->l4_len;
@@ -1929,7 +1929,7 @@  mlx5_tx_packet_multi_send(struct mlx5_txq_data *__rte_restrict txq,
 	/* Update sent data bytes counter. */
 	txq->stats.obytes += rte_pktmbuf_pkt_len(loc->mbuf);
 	if (MLX5_TXOFF_CONFIG(VLAN) &&
-	    loc->mbuf->ol_flags & PKT_TX_VLAN_PKT)
+	    loc->mbuf->ol_flags & PKT_TX_VLAN)
 		txq->stats.obytes += sizeof(struct rte_vlan_hdr);
 #endif
 	/*
@@ -2028,7 +2028,7 @@  mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 	 * to estimate the required space for WQE.
 	 */
 	dlen = rte_pktmbuf_pkt_len(loc->mbuf);
-	if (MLX5_TXOFF_CONFIG(VLAN) && loc->mbuf->ol_flags & PKT_TX_VLAN_PKT)
+	if (MLX5_TXOFF_CONFIG(VLAN) && loc->mbuf->ol_flags & PKT_TX_VLAN)
 		vlan = sizeof(struct rte_vlan_hdr);
 	inlen = dlen + vlan;
 	/* Check against minimal length. */
@@ -2291,7 +2291,7 @@  mlx5_tx_burst_tso(struct mlx5_txq_data *__rte_restrict txq,
 		}
 		dlen = rte_pktmbuf_data_len(loc->mbuf);
 		if (MLX5_TXOFF_CONFIG(VLAN) &&
-		    loc->mbuf->ol_flags & PKT_TX_VLAN_PKT) {
+		    loc->mbuf->ol_flags & PKT_TX_VLAN) {
 			vlan = sizeof(struct rte_vlan_hdr);
 		}
 		/*
@@ -2416,7 +2416,7 @@  mlx5_tx_able_to_empw(struct mlx5_txq_data *__rte_restrict txq,
 		return MLX5_TXCMP_CODE_SINGLE;
 	/* Check if eMPW can be engaged. */
 	if (MLX5_TXOFF_CONFIG(VLAN) &&
-	    unlikely(loc->mbuf->ol_flags & PKT_TX_VLAN_PKT) &&
+	    unlikely(loc->mbuf->ol_flags & PKT_TX_VLAN) &&
 		(!MLX5_TXOFF_CONFIG(INLINE) ||
 		 unlikely((rte_pktmbuf_data_len(loc->mbuf) +
 			   sizeof(struct rte_vlan_hdr)) > txq->inlen_empw))) {
@@ -2478,7 +2478,7 @@  mlx5_tx_match_empw(struct mlx5_txq_data *__rte_restrict txq,
 		return false;
 	/* There must be no VLAN packets in eMPW loop. */
 	if (MLX5_TXOFF_CONFIG(VLAN))
-		MLX5_ASSERT(!(loc->mbuf->ol_flags & PKT_TX_VLAN_PKT));
+		MLX5_ASSERT(!(loc->mbuf->ol_flags & PKT_TX_VLAN));
 	/* Check if the scheduling is requested. */
 	if (MLX5_TXOFF_CONFIG(TXPP) &&
 	    loc->mbuf->ol_flags & txq->ts_mask)
@@ -2939,7 +2939,7 @@  mlx5_tx_burst_empw_inline(struct mlx5_txq_data *__rte_restrict txq,
 			}
 			/* Inline entire packet, optional VLAN insertion. */
 			if (MLX5_TXOFF_CONFIG(VLAN) &&
-			    loc->mbuf->ol_flags & PKT_TX_VLAN_PKT) {
+			    loc->mbuf->ol_flags & PKT_TX_VLAN) {
 				/*
 				 * The packet length must be checked in
 				 * mlx5_tx_able_to_empw() and packet
@@ -3004,7 +3004,7 @@  mlx5_tx_burst_empw_inline(struct mlx5_txq_data *__rte_restrict txq,
 			MLX5_ASSERT(room >= MLX5_WQE_DSEG_SIZE);
 			if (MLX5_TXOFF_CONFIG(VLAN))
 				MLX5_ASSERT(!(loc->mbuf->ol_flags &
-					    PKT_TX_VLAN_PKT));
+					    PKT_TX_VLAN));
 			mlx5_tx_dseg_ptr(txq, loc, dseg, dptr, dlen, olx);
 			/* We have to store mbuf in elts.*/
 			txq->elts[txq->elts_head++ & txq->elts_m] = loc->mbuf;
@@ -3149,7 +3149,7 @@  mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
 
 			inlen = rte_pktmbuf_data_len(loc->mbuf);
 			if (MLX5_TXOFF_CONFIG(VLAN) &&
-			    loc->mbuf->ol_flags & PKT_TX_VLAN_PKT) {
+			    loc->mbuf->ol_flags & PKT_TX_VLAN) {
 				vlan = sizeof(struct rte_vlan_hdr);
 				inlen += vlan;
 			}
@@ -3380,7 +3380,7 @@  mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
 			/* Update sent data bytes counter. */
 			txq->stats.obytes += rte_pktmbuf_data_len(loc->mbuf);
 			if (MLX5_TXOFF_CONFIG(VLAN) &&
-			    loc->mbuf->ol_flags & PKT_TX_VLAN_PKT)
+			    loc->mbuf->ol_flags & PKT_TX_VLAN)
 				txq->stats.obytes +=
 					sizeof(struct rte_vlan_hdr);
 #endif
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index c6bf7cc132..afef7a96a3 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -1331,7 +1331,7 @@  static void hn_encap(struct rndis_packet_msg *pkt,
 					  NDIS_PKTINFO_TYPE_HASHVAL);
 	*pi_data = queue_id;
 
-	if (m->ol_flags & PKT_TX_VLAN_PKT) {
+	if (m->ol_flags & PKT_TX_VLAN) {
 		pi_data = hn_rndis_pktinfo_append(pkt, NDIS_VLAN_INFO_SIZE,
 						  NDIS_PKTINFO_TYPE_VLAN);
 		*pi_data = m->vlan_tci;
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 1402c5f84a..0dcaf525f6 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -929,7 +929,7 @@  nfp_net_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		nfp_net_tx_tso(txq, &txd, pkt);
 		nfp_net_tx_cksum(txq, &txd, pkt);
 
-		if ((pkt->ol_flags & PKT_TX_VLAN_PKT) &&
+		if ((pkt->ol_flags & PKT_TX_VLAN) &&
 		    (hw->cap & NFP_NET_CFG_CTRL_TXVLAN)) {
 			txd.flags |= PCIE_DESC_TX_VLAN;
 			txd.vlan = pkt->vlan_tci;
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 35cde561ba..050c6f5c32 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -2587,7 +2587,7 @@  qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Descriptor based VLAN insertion */
-		if (tx_ol_flags & PKT_TX_VLAN_PKT) {
+		if (tx_ol_flags & PKT_TX_VLAN) {
 			vlan = rte_cpu_to_le_16(mbuf->vlan_tci);
 			bd1_bd_flags_bf |=
 			    1 << ETH_TX_1ST_BD_FLAGS_VLAN_INSERTION_SHIFT;
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index c9334448c8..025ed6fff2 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -153,7 +153,7 @@ 
 				   PKT_TX_IPV6)
 
 #define QEDE_TX_OFFLOAD_MASK (QEDE_TX_CSUM_OFFLOAD_MASK | \
-			      PKT_TX_VLAN_PKT		| \
+			      PKT_TX_VLAN		| \
 			      PKT_TX_TUNNEL_MASK)
 
 #define QEDE_TX_OFFLOAD_NOTSUP_MASK \
diff --git a/drivers/net/sfc/sfc_ef100_tx.c b/drivers/net/sfc/sfc_ef100_tx.c
index 522e9a0d34..53d01612d1 100644
--- a/drivers/net/sfc/sfc_ef100_tx.c
+++ b/drivers/net/sfc/sfc_ef100_tx.c
@@ -382,7 +382,7 @@  sfc_ef100_tx_qdesc_send_create(const struct rte_mbuf *m, efx_oword_t *tx_desc)
 			ESF_GZ_TX_SEND_CSO_OUTER_L4, outer_l4,
 			ESF_GZ_TX_DESC_TYPE, ESE_GZ_TX_DESC_TYPE_SEND);
 
-	if (m->ol_flags & PKT_TX_VLAN_PKT) {
+	if (m->ol_flags & PKT_TX_VLAN) {
 		efx_oword_t tx_desc_extra_fields;
 
 		EFX_POPULATE_OWORD_2(tx_desc_extra_fields,
@@ -464,7 +464,7 @@  sfc_ef100_tx_qdesc_tso_create(const struct rte_mbuf *m,
 
 	EFX_OR_OWORD(*tx_desc, tx_desc_extra_fields);
 
-	if (m->ol_flags & PKT_TX_VLAN_PKT) {
+	if (m->ol_flags & PKT_TX_VLAN) {
 		EFX_POPULATE_OWORD_2(tx_desc_extra_fields,
 				ESF_GZ_TX_TSO_VLAN_INSERT_EN, 1,
 				ESF_GZ_TX_TSO_VLAN_INSERT_TCI, m->vlan_tci);
diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index ed43adb4ca..277fe6c6ca 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -805,7 +805,7 @@  sfc_ef10_simple_prepare_pkts(__rte_unused void *tx_queue,
 
 		/* ef10_simple does not support TSO and VLAN insertion */
 		if (unlikely(m->ol_flags &
-			     (PKT_TX_TCP_SEG | PKT_TX_VLAN_PKT))) {
+			     (PKT_TX_TCP_SEG | PKT_TX_VLAN))) {
 			rte_errno = ENOTSUP;
 			break;
 		}
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 49b239f4d2..936ae815ea 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -766,7 +766,7 @@  static unsigned int
 sfc_efx_tx_maybe_insert_tag(struct sfc_efx_txq *txq, struct rte_mbuf *m,
 			    efx_desc_t **pend)
 {
-	uint16_t this_tag = ((m->ol_flags & PKT_TX_VLAN_PKT) ?
+	uint16_t this_tag = ((m->ol_flags & PKT_TX_VLAN) ?
 			     m->vlan_tci : 0);
 
 	if (this_tag == txq->hw_vlan_tci)
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 1a261287d1..53da1b8450 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -54,7 +54,7 @@  static const u64 TXGBE_TX_OFFLOAD_MASK = (PKT_TX_IP_CKSUM |
 		PKT_TX_OUTER_IPV4 |
 		PKT_TX_IPV6 |
 		PKT_TX_IPV4 |
-		PKT_TX_VLAN_PKT |
+		PKT_TX_VLAN |
 		PKT_TX_L4_MASK |
 		PKT_TX_TCP_SEG |
 		PKT_TX_TUNNEL_MASK |
@@ -408,7 +408,7 @@  txgbe_set_xmit_ctx(struct txgbe_tx_queue *txq,
 		vlan_macip_lens |= TXGBE_TXD_MACLEN(tx_offload.l2_len);
 	}
 
-	if (ol_flags & PKT_TX_VLAN_PKT) {
+	if (ol_flags & PKT_TX_VLAN) {
 		tx_offload_mask.vlan_tci |= ~0;
 		vlan_macip_lens |= TXGBE_TXD_VLAN(tx_offload.vlan_tci);
 	}
@@ -496,7 +496,7 @@  tx_desc_cksum_flags_to_olinfo(uint64_t ol_flags)
 			tmp |= TXGBE_TXD_IPCS;
 		tmp |= TXGBE_TXD_L4CS;
 	}
-	if (ol_flags & PKT_TX_VLAN_PKT)
+	if (ol_flags & PKT_TX_VLAN)
 		tmp |= TXGBE_TXD_CC;
 
 	return tmp;
@@ -507,7 +507,7 @@  tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
 {
 	uint32_t cmdtype = 0;
 
-	if (ol_flags & PKT_TX_VLAN_PKT)
+	if (ol_flags & PKT_TX_VLAN)
 		cmdtype |= TXGBE_TXD_VLE;
 	if (ol_flags & PKT_TX_TCP_SEG)
 		cmdtype |= TXGBE_TXD_TSE;
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index a202931e9a..214a6ee4c8 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -444,7 +444,7 @@  eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 		struct rte_mbuf *m = bufs[i];
 
 		/* Do VLAN tag insertion */
-		if (m->ol_flags & PKT_TX_VLAN_PKT) {
+		if (m->ol_flags & PKT_TX_VLAN) {
 			int error = rte_vlan_insert(&m);
 			if (unlikely(error)) {
 				rte_pktmbuf_free(m);
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 97ed69596a..65f08b775a 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -1747,7 +1747,7 @@  virtio_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts,
 #endif
 
 		/* Do VLAN tag insertion */
-		if (unlikely(m->ol_flags & PKT_TX_VLAN_PKT)) {
+		if (unlikely(m->ol_flags & PKT_TX_VLAN)) {
 			error = rte_vlan_insert(&m);
 			/* rte_vlan_insert() may change pointer
 			 * even in the case of failure
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index 5cf53d4de8..69e877f816 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -49,7 +49,7 @@ 
 #include "vmxnet3_ethdev.h"
 
 #define	VMXNET3_TX_OFFLOAD_MASK	( \
-		PKT_TX_VLAN_PKT | \
+		PKT_TX_VLAN | \
 		PKT_TX_IPV6 |     \
 		PKT_TX_IPV4 |     \
 		PKT_TX_L4_MASK |  \
@@ -520,7 +520,7 @@  vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		/* Add VLAN tag if present */
 		gdesc = txq->cmd_ring.base + first2fill;
-		if (txm->ol_flags & PKT_TX_VLAN_PKT) {
+		if (txm->ol_flags & PKT_TX_VLAN) {
 			gdesc->txd.ti = 1;
 			gdesc->txd.tci = txm->vlan_tci;
 		}
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index d0bf1f31e3..1603c29fb5 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -1115,7 +1115,7 @@  virtio_tx_route(struct vhost_dev *vdev, struct rte_mbuf *m, uint16_t vlan_tag)
 			(vh->vlan_tci != vlan_tag_be))
 			vh->vlan_tci = vlan_tag_be;
 	} else {
-		m->ol_flags |= PKT_TX_VLAN_PKT;
+		m->ol_flags |= PKT_TX_VLAN;
 
 		/*
 		 * Find the right seg to adjust the data len when offset is
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index 9d8e3ddc86..93db9292c0 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -55,37 +55,12 @@  extern "C" {
  /** RX packet with FDIR match indicate. */
 #define PKT_RX_FDIR          (1ULL << 2)
 
-/**
- * Deprecated.
- * Checking this flag alone is deprecated: check the 2 bits of
- * PKT_RX_L4_CKSUM_MASK.
- * This flag was set when the L4 checksum of a packet was detected as
- * wrong by the hardware.
- */
-#define PKT_RX_L4_CKSUM_BAD  (1ULL << 3)
-
-/**
- * Deprecated.
- * Checking this flag alone is deprecated: check the 2 bits of
- * PKT_RX_IP_CKSUM_MASK.
- * This flag was set when the IP checksum of a packet was detected as
- * wrong by the hardware.
- */
-#define PKT_RX_IP_CKSUM_BAD  (1ULL << 4)
-
 /**
  * This flag is set when the outermost IP header checksum is detected as
  * wrong by the hardware.
  */
 #define PKT_RX_OUTER_IP_CKSUM_BAD (1ULL << 5)
 
-/**
- * Deprecated.
- * This flag has been renamed, use PKT_RX_OUTER_IP_CKSUM_BAD instead.
- */
-#define PKT_RX_EIP_CKSUM_BAD \
-	RTE_DEPRECATED(PKT_RX_EIP_CKSUM_BAD) PKT_RX_OUTER_IP_CKSUM_BAD
-
 /**
  * A vlan has been stripped by the hardware and its tci is saved in
  * mbuf->vlan_tci. This can only happen if vlan stripping is enabled
@@ -289,8 +264,6 @@  extern "C" {
  * mbuf 'vlan_tci' & 'vlan_tci_outer' must be valid when this flag is set.
  */
 #define PKT_TX_QINQ        (1ULL << 49)
-/** This old name is deprecated. */
-#define PKT_TX_QINQ_PKT    PKT_TX_QINQ
 
 /**
  * TCP segmentation offload. To enable this offload feature for a
@@ -358,8 +331,6 @@  extern "C" {
  * mbuf 'vlan_tci' field must be valid when this flag is set.
  */
 #define PKT_TX_VLAN          (1ULL << 57)
-/* this old name is deprecated */
-#define PKT_TX_VLAN_PKT      PKT_TX_VLAN
 
 /**
  * Offload the IP checksum of an external header in the hardware. The
@@ -391,14 +362,14 @@  extern "C" {
 		PKT_TX_OUTER_IPV6 |	 \
 		PKT_TX_OUTER_IPV4 |	 \
 		PKT_TX_OUTER_IP_CKSUM |  \
-		PKT_TX_VLAN_PKT |        \
+		PKT_TX_VLAN |        \
 		PKT_TX_IPV6 |		 \
 		PKT_TX_IPV4 |		 \
 		PKT_TX_IP_CKSUM |        \
 		PKT_TX_L4_MASK |         \
 		PKT_TX_IEEE1588_TMST |	 \
 		PKT_TX_TCP_SEG |         \
-		PKT_TX_QINQ_PKT |        \
+		PKT_TX_QINQ |        \
 		PKT_TX_TUNNEL_MASK |	 \
 		PKT_TX_MACSEC |		 \
 		PKT_TX_SEC_OFFLOAD |	 \