From patchwork Thu Feb  4 03:11:18 2021
X-Patchwork-Submitter: "Li, Xiaoyun"
X-Patchwork-Id: 87713
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Xiaoyun Li <xiaoyun.li@intel.com>
To: jingjing.wu@intel.com, beilei.xing@intel.com, dev@dpdk.org
Cc: Xiaoyun Li
Date: Thu, 4 Feb 2021 11:11:18 +0800
Message-Id: <20210204031118.603270-1-xiaoyun.li@intel.com>
Subject: [dpdk-dev] [PATCH] net/iavf: fix VLAN insert issue

The new VIRTCHNL_VF_OFFLOAD_VLAN_V2 capability allows the PF to set the
location of TX VLAN insertion, so the VF needs to insert the VLAN tag
according to the location flags.
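As background (not part of this patch), below is a minimal, hypothetical
application-side sketch of how per-packet VLAN insertion is requested; the
driver then places the tag in the L2TAG1 or L2TAG2 location negotiated via
VIRTCHNL_VF_OFFLOAD_VLAN_V2. The helper name is illustrative only:

  /* Hypothetical sketch: ask the PMD/HW to insert a VLAN tag into one mbuf.
   * Where the tag ends up (data descriptor L2TAG1 or context descriptor
   * L2TAG2) is decided by the driver, not by the application.
   */
  #include <rte_ethdev.h>
  #include <rte_mbuf.h>

  static inline uint16_t
  send_vlan_tagged(uint16_t port_id, uint16_t queue_id,
		   struct rte_mbuf *m, uint16_t vlan_id)
  {
	m->ol_flags |= PKT_TX_VLAN_PKT;	/* request HW VLAN insertion */
	m->vlan_tci = vlan_id;		/* tag to be inserted */

	return rte_eth_tx_burst(port_id, queue_id, &m, 1);
  }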
Fixes: 1c301e8c3cff ("net/iavf: support new VLAN capabilities")

Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/net/iavf/iavf_rxtx.c | 45 +++++++++++++++++++++++++++++++-----
 drivers/net/iavf/iavf_rxtx.h |  3 +++
 2 files changed, 42 insertions(+), 6 deletions(-)

diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 3d471d9acc..af5a28d84d 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -629,6 +629,8 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
 		       const struct rte_eth_txconf *tx_conf)
 {
 	struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct iavf_info *vf =
+		IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
 	struct iavf_tx_queue *txq;
 	const struct rte_memzone *mz;
 	uint32_t ring_size;
@@ -670,6 +672,24 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
 		return -ENOMEM;
 	}
 
+	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2) {
+		struct virtchnl_vlan_supported_caps *insertion_support =
+			&vf->vlan_v2_caps.offloads.insertion_support;
+		uint32_t insertion_cap;
+
+		if (insertion_support->outer)
+			insertion_cap = insertion_support->outer;
+		else
+			insertion_cap = insertion_support->inner;
+
+		if (insertion_cap & VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1)
+			txq->vlan_flag = IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG1;
+		else if (insertion_cap & VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2)
+			txq->vlan_flag = IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2;
+	} else {
+		txq->vlan_flag = IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG1;
+	}
+
 	txq->nb_tx_desc = nb_desc;
 	txq->rs_thresh = tx_rs_thresh;
 	txq->free_thresh = tx_free_thresh;
@@ -1968,11 +1988,14 @@ iavf_xmit_cleanup(struct iavf_tx_queue *txq)
 
 /* Check if the context descriptor is needed for TX offloading */
 static inline uint16_t
-iavf_calc_context_desc(uint64_t flags)
+iavf_calc_context_desc(uint64_t flags, uint8_t vlan_flag)
 {
-	static uint64_t mask = PKT_TX_TCP_SEG;
-
-	return (flags & mask) ? 1 : 0;
+	if (flags & PKT_TX_TCP_SEG)
+		return 1;
+	if (flags & PKT_TX_VLAN_PKT &&
+	    vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2)
+		return 1;
+	return 0;
 }
 
 static inline void
@@ -2093,6 +2116,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	uint16_t tx_last;
 	uint16_t slen;
 	uint64_t buf_dma_addr;
+	uint16_t cd_l2tag2 = 0;
 	union iavf_tx_offload tx_offload = {0};
 
 	txq = tx_queue;
@@ -2119,7 +2143,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		tx_offload.l4_len = tx_pkt->l4_len;
 		tx_offload.tso_segsz = tx_pkt->tso_segsz;
 		/* Calculate the number of context descriptors needed. */
-		nb_ctx = iavf_calc_context_desc(ol_flags);
+		nb_ctx = iavf_calc_context_desc(ol_flags, txq->vlan_flag);
 
 		/* The number of descriptors that must be allocated for
 		 * a packet equals to the number of the segments of that
@@ -2154,7 +2178,8 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Descriptor based VLAN insertion */
-		if (ol_flags & PKT_TX_VLAN_PKT) {
+		if (ol_flags & PKT_TX_VLAN_PKT &&
+		    txq->vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG1) {
 			td_cmd |= IAVF_TX_DESC_CMD_IL2TAG1;
 			td_tag = tx_pkt->vlan_tci;
 		}
@@ -2189,8 +2214,16 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 				cd_type_cmd_tso_mss |=
 					iavf_set_tso_ctx(tx_pkt, tx_offload);
 
+			if (ol_flags & PKT_TX_VLAN_PKT &&
+			    txq->vlan_flag & IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2) {
+				cd_type_cmd_tso_mss |= IAVF_TX_CTX_DESC_IL2TAG2
+					<< IAVF_TXD_CTX_QW1_CMD_SHIFT;
+				cd_l2tag2 = tx_pkt->vlan_tci;
+			}
+
 			ctx_txd->type_cmd_tso_mss =
 				rte_cpu_to_le_64(cd_type_cmd_tso_mss);
+			ctx_txd->l2tag2 = rte_cpu_to_le_16(cd_l2tag2);
 
 			IAVF_DUMP_TX_DESC(txq, &txr[tx_id], tx_id);
 			txe->last_id = tx_last;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index d4b4935be6..d583badd98 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -232,6 +232,9 @@ struct iavf_tx_queue {
 	bool q_set;			/* if rx queue has been configured */
 	bool tx_deferred_start;		/* don't start this queue in dev start */
 	const struct iavf_txq_ops *ops;
+#define IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG1	BIT(0)
+#define IAVF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2	BIT(1)
+	uint8_t vlan_flag;
 };
 
 /* Offload features */
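Usage note, kept outside the diff: a hedged sketch of the port-level
configuration an application would typically do before relying on per-packet
VLAN insertion. The helper name and the capability check below are
illustrative assumptions, not part of this patch:

  /* Hypothetical sketch: enable TX VLAN insertion at the port level, only
   * if the port reports DEV_TX_OFFLOAD_VLAN_INSERT in its capabilities.
   */
  #include <string.h>
  #include <rte_ethdev.h>

  static int
  configure_tx_vlan_insert(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
  {
	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf port_conf;

	memset(&port_conf, 0, sizeof(port_conf));
	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return -1;

	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_VLAN_INSERT)
		port_conf.txmode.offloads |= DEV_TX_OFFLOAD_VLAN_INSERT;

	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
  }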