From patchwork Mon Feb 6 05:46:17 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Xing, Beilei"
X-Patchwork-Id: 123116
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: 
beilei.xing@intel.com
To: jingjing.wu@intel.com
Cc: dev@dpdk.org, qi.z.zhang@intel.com, Beilei Xing
Subject: [PATCH v7 18/19] common/idpf: refine API name for data path module
Date: Mon, 6 Feb 2023 05:46:17 +0000
Message-Id: <20230206054618.40975-19-beilei.xing@intel.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20230206054618.40975-1-beilei.xing@intel.com>
References: <20230203094340.8103-1-beilei.xing@intel.com>
 <20230206054618.40975-1-beilei.xing@intel.com>
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org

From: Beilei Xing

Refine API name for all data path functions.

Signed-off-by: Beilei Xing
---
 drivers/common/idpf/idpf_common_rxtx.c        | 20 ++++++------
 drivers/common/idpf/idpf_common_rxtx.h        | 32 +++++++++----------
 drivers/common/idpf/idpf_common_rxtx_avx512.c |  8 ++---
 drivers/common/idpf/version.map               | 15 +++++----
 drivers/net/idpf/idpf_rxtx.c                  | 22 ++++++-------
 5 files changed, 49 insertions(+), 48 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index 86dadf9cd2..b1585208ec 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -618,8 +618,8 @@ idpf_split_rx_bufq_refill(struct idpf_rx_queue *rx_bufq)
 }
 
 uint16_t
-idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-		      uint16_t nb_pkts)
+idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			 uint16_t nb_pkts)
 {
 	volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc_ring;
 	volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc;
@@ -850,8 +850,8 @@ idpf_set_splitq_tso_ctx(struct rte_mbuf *mbuf,
 }
 
 uint16_t
-idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-		      uint16_t nb_pkts)
+idpf_dp_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+			 uint16_t nb_pkts)
 {
 	struct idpf_tx_queue *txq = (struct idpf_tx_queue *)tx_queue;
 	volatile struct idpf_flex_tx_sched_desc *txr;
@@ -1024,8 +1024,8 @@ idpf_update_rx_tail(struct idpf_rx_queue *rxq, uint16_t nb_hold,
 }
 
 uint16_t
-idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-		       uint16_t nb_pkts)
+idpf_dp_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			  uint16_t nb_pkts)
 {
 	volatile union virtchnl2_rx_desc *rx_ring;
 	volatile union virtchnl2_rx_desc *rxdp;
@@ -1186,8 +1186,8 @@ idpf_xmit_cleanup(struct idpf_tx_queue *txq)
 
 /* TX function */
 uint16_t
-idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-		       uint16_t nb_pkts)
+idpf_dp_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+			  uint16_t nb_pkts)
 {
 	volatile struct idpf_flex_tx_desc *txd;
 	volatile struct idpf_flex_tx_desc *txr;
@@ -1350,8 +1350,8 @@ idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 /* TX prep functions */
 uint16_t
-idpf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
-	       uint16_t nb_pkts)
+idpf_dp_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+		  uint16_t nb_pkts)
 {
 #ifdef RTE_LIBRTE_ETHDEV_DEBUG
 	int ret;
diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index 08081ad30a..7fd3e5259d 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -248,31 +248,31 @@ int idpf_qc_single_rxq_mbufs_alloc(struct idpf_rx_queue *rxq);
 __rte_internal
 int idpf_qc_split_rxq_mbufs_alloc(struct idpf_rx_queue *rxq);
 __rte_internal
-uint16_t idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-			       uint16_t nb_pkts);
+uint16_t idpf_dp_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+				  uint16_t nb_pkts);
 __rte_internal
-uint16_t idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-			       uint16_t nb_pkts);
+uint16_t idpf_dp_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+				  uint16_t nb_pkts);
 __rte_internal
-uint16_t idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-				uint16_t nb_pkts);
+uint16_t idpf_dp_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+				   uint16_t nb_pkts);
 __rte_internal
-uint16_t idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-				uint16_t nb_pkts);
+uint16_t idpf_dp_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+				   uint16_t nb_pkts);
 __rte_internal
-uint16_t idpf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-			uint16_t nb_pkts);
+uint16_t idpf_dp_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+			   uint16_t nb_pkts);
 __rte_internal
 int idpf_qc_singleq_rx_vec_setup(struct idpf_rx_queue *rxq);
 __rte_internal
 int idpf_qc_singleq_tx_vec_avx512_setup(struct idpf_tx_queue *txq);
 __rte_internal
-uint16_t idpf_singleq_recv_pkts_avx512(void *rx_queue,
-				       struct rte_mbuf **rx_pkts,
-				       uint16_t nb_pkts);
+uint16_t idpf_dp_singleq_recv_pkts_avx512(void *rx_queue,
+					  struct rte_mbuf **rx_pkts,
+					  uint16_t nb_pkts);
 __rte_internal
-uint16_t idpf_singleq_xmit_pkts_avx512(void *tx_queue,
-				       struct rte_mbuf **tx_pkts,
-				       uint16_t nb_pkts);
+uint16_t idpf_dp_singleq_xmit_pkts_avx512(void *tx_queue,
+					  struct rte_mbuf **tx_pkts,
+					  uint16_t nb_pkts);
 
 #endif /* _IDPF_COMMON_RXTX_H_ */
diff --git a/drivers/common/idpf/idpf_common_rxtx_avx512.c b/drivers/common/idpf/idpf_common_rxtx_avx512.c
index 9dd63fefab..f41c577dcf 100644
--- a/drivers/common/idpf/idpf_common_rxtx_avx512.c
+++ b/drivers/common/idpf/idpf_common_rxtx_avx512.c
@@ -533,8 +533,8 @@ _idpf_singleq_recv_raw_pkts_avx512(struct idpf_rx_queue *rxq,
  * - nb_pkts < IDPF_DESCS_PER_LOOP, just return no packet
  */
 uint16_t
-idpf_singleq_recv_pkts_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
-			      uint16_t nb_pkts)
+idpf_dp_singleq_recv_pkts_avx512(void *rx_queue, struct rte_mbuf **rx_pkts,
+				 uint16_t nb_pkts)
 {
 	return _idpf_singleq_recv_raw_pkts_avx512(rx_queue, rx_pkts, nb_pkts);
 }
@@ -819,8 +819,8 @@ idpf_xmit_pkts_vec_avx512_cmn(void *tx_queue, struct rte_mbuf **tx_pkts,
 }
 
 uint16_t
-idpf_singleq_xmit_pkts_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
-			      uint16_t nb_pkts)
+idpf_dp_singleq_xmit_pkts_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
+				 uint16_t nb_pkts)
 {
 	return idpf_xmit_pkts_vec_avx512_cmn(tx_queue, tx_pkts, nb_pkts);
 }
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 2ff152a353..e37a40771b 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -4,6 +4,14 @@ INTERNAL {
 	idpf_adapter_deinit;
 	idpf_adapter_init;
 
+	idpf_dp_prep_pkts;
+	idpf_dp_singleq_recv_pkts;
+	idpf_dp_singleq_recv_pkts_avx512;
+	idpf_dp_singleq_xmit_pkts;
+	idpf_dp_singleq_xmit_pkts_avx512;
+	idpf_dp_splitq_recv_pkts;
+	idpf_dp_splitq_xmit_pkts;
+
 	idpf_qc_rx_thresh_check;
 	idpf_qc_rx_queue_release;
 	idpf_qc_rxq_mbufs_release;
@@ -31,13 +39,6 @@ INTERNAL {
 	idpf_vport_rss_config;
 
 	idpf_execute_vc_cmd;
-	idpf_prep_pkts;
-	idpf_singleq_recv_pkts;
-	idpf_singleq_recv_pkts_avx512;
-	idpf_singleq_xmit_pkts;
-	idpf_singleq_xmit_pkts_avx512;
-	idpf_splitq_recv_pkts;
-	idpf_splitq_xmit_pkts;
 	idpf_vc_alloc_vectors;
 	idpf_vc_check_api_version;
 	idpf_vc_config_irq_map_unmap;
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index ec75d6f69e..41e91b16b6 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -771,7 +771,7 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
 
 #ifdef RTE_ARCH_X86
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
-		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
+		dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts;
 	} else {
 		if (vport->rx_vec_allowed) {
 			for (i = 0; i < dev->data->nb_tx_queues; i++) {
@@ -780,19 +780,19 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
 			}
 #ifdef CC_AVX512_SUPPORT
 			if (vport->rx_use_avx512) {
-				dev->rx_pkt_burst = idpf_singleq_recv_pkts_avx512;
+				dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts_avx512;
 				return;
 			}
 #endif /* CC_AVX512_SUPPORT */
 		}
 
-		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
+		dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts;
 	}
 #else
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
-		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
+		dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts;
 	else
-		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
+		dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts;
 #endif /* RTE_ARCH_X86 */
 }
 
@@ -824,8 +824,8 @@ idpf_set_tx_function(struct rte_eth_dev *dev)
 #endif /* RTE_ARCH_X86 */
 
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
-		dev->tx_pkt_burst = idpf_splitq_xmit_pkts;
-		dev->tx_pkt_prepare = idpf_prep_pkts;
+		dev->tx_pkt_burst = idpf_dp_splitq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
 	} else {
 #ifdef RTE_ARCH_X86
 		if (vport->tx_vec_allowed) {
@@ -837,14 +837,14 @@ idpf_set_tx_function(struct rte_eth_dev *dev)
 					continue;
 				idpf_qc_singleq_tx_vec_avx512_setup(txq);
 			}
-			dev->tx_pkt_burst = idpf_singleq_xmit_pkts_avx512;
-			dev->tx_pkt_prepare = idpf_prep_pkts;
+			dev->tx_pkt_burst = idpf_dp_singleq_xmit_pkts_avx512;
+			dev->tx_pkt_prepare = idpf_dp_prep_pkts;
 			return;
 		}
 #endif /* CC_AVX512_SUPPORT */
 	}
 #endif /* RTE_ARCH_X86 */
 
-	dev->tx_pkt_burst = idpf_singleq_xmit_pkts;
-	dev->tx_pkt_prepare = idpf_prep_pkts;
+	dev->tx_pkt_burst = idpf_dp_singleq_xmit_pkts;
+	dev->tx_pkt_prepare = idpf_dp_prep_pkts;
 }