From patchwork Tue Sep 10 16:33:36 2019
X-Patchwork-Submitter: "Wang, Haiyue" <haiyue.wang@intel.com>
X-Patchwork-Id: 59105
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Haiyue Wang <haiyue.wang@intel.com>
To: dev@dpdk.org, ferruh.yigit@intel.com, mdr@ashroe.eu, chenmin.sun@intel.com
Cc: Haiyue Wang <haiyue.wang@intel.com>
Date: Wed, 11 Sep 2019 00:33:36 +0800
Message-Id: <20190910163337.4843-4-haiyue.wang@intel.com>
In-Reply-To: <20190910163337.4843-1-haiyue.wang@intel.com>
References: <20190910163337.4843-1-haiyue.wang@intel.com>
Subject: [dpdk-dev] [RFC v3 3/4] net/ice: support to get the Rx/Tx burst mode

Retrieve the related burst mode options according to the selected
Rx/Tx burst function.
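For reference, a minimal application-side sketch of how the reported
mode could be consumed (illustrative only, not part of the patch; the
rte_eth_rx_burst_mode_get() wrapper and the RTE_ETH_BURST_* option
flags are proposed earlier in this series and are assumed here). The
same pattern applies to the Tx side via rte_eth_tx_burst_mode_get().

/* Illustrative sketch only -- not part of this patch. Assumes the
 * rte_eth_rx_burst_mode_get() wrapper and the RTE_ETH_BURST_* option
 * flags proposed earlier in this series.
 */
#include <stdio.h>
#include <rte_ethdev.h>

static void
show_rx_burst_mode(uint16_t port_id, uint16_t queue_id)
{
	struct rte_eth_burst_mode mode;

	if (rte_eth_rx_burst_mode_get(port_id, queue_id, &mode) != 0) {
		printf("port %u rxq %u: burst mode not reported\n",
		       port_id, queue_id);
		return;
	}

	/* Decode the option bits filled in by ice_rx_burst_mode_get(). */
	printf("port %u rxq %u:%s%s%s%s%s%s\n", port_id, queue_id,
	       (mode.options & RTE_ETH_BURST_SCALAR) ? " scalar" : "",
	       (mode.options & RTE_ETH_BURST_VECTOR) ? " vector" : "",
	       (mode.options & RTE_ETH_BURST_SSE) ? " sse" : "",
	       (mode.options & RTE_ETH_BURST_AVX2) ? " avx2" : "",
	       (mode.options & RTE_ETH_BURST_SCATTERED) ? " scattered" : "",
	       (mode.options & RTE_ETH_BURST_BULK_ALLOC) ? " bulk alloc" : "");
}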
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
---
 drivers/net/ice/ice_ethdev.c |  2 ++
 drivers/net/ice/ice_rxtx.c   | 54 ++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h   |  4 +++
 3 files changed, 60 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 63997fdfb..784e07744 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -147,6 +147,8 @@ static const struct eth_dev_ops ice_eth_dev_ops = {
 	.vlan_pvid_set = ice_vlan_pvid_set,
 	.rxq_info_get = ice_rxq_info_get,
 	.txq_info_get = ice_txq_info_get,
+	.rx_burst_mode_get = ice_rx_burst_mode_get,
+	.tx_burst_mode_get = ice_tx_burst_mode_get,
 	.get_eeprom_length = ice_get_eeprom_length,
 	.get_eeprom = ice_get_eeprom,
 	.rx_queue_count = ice_rx_queue_count,
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 81af81441..9e4c08d0a 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -2396,6 +2396,37 @@ ice_set_rx_function(struct rte_eth_dev *dev)
 	}
 }
 
+void
+ice_rx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id,
+		      struct rte_eth_burst_mode *mode)
+{
+	eth_rx_burst_t pkt_burst = dev->rx_pkt_burst;
+	uint64_t options;
+
+	if (pkt_burst == ice_recv_scattered_pkts)
+		options = RTE_ETH_BURST_SCALAR | RTE_ETH_BURST_SCATTERED;
+	else if (pkt_burst == ice_recv_pkts_bulk_alloc)
+		options = RTE_ETH_BURST_SCALAR | RTE_ETH_BURST_BULK_ALLOC;
+	else if (pkt_burst == ice_recv_pkts)
+		options = RTE_ETH_BURST_SCALAR;
+#ifdef RTE_ARCH_X86
+	else if (pkt_burst == ice_recv_scattered_pkts_vec_avx2)
+		options = RTE_ETH_BURST_VECTOR | RTE_ETH_BURST_AVX2 |
+			  RTE_ETH_BURST_SCATTERED;
+	else if (pkt_burst == ice_recv_pkts_vec_avx2)
+		options = RTE_ETH_BURST_VECTOR | RTE_ETH_BURST_AVX2;
+	else if (pkt_burst == ice_recv_scattered_pkts_vec)
+		options = RTE_ETH_BURST_VECTOR | RTE_ETH_BURST_SSE |
+			  RTE_ETH_BURST_SCATTERED;
+	else if (pkt_burst == ice_recv_pkts_vec)
+		options = RTE_ETH_BURST_VECTOR | RTE_ETH_BURST_SSE;
+#endif
+	else
+		options = 0;
+
+	mode->options = options;
+}
+
 void __attribute__((cold))
 ice_set_tx_function_flag(struct rte_eth_dev *dev, struct ice_tx_queue *txq)
 {
@@ -2519,6 +2550,29 @@ ice_set_tx_function(struct rte_eth_dev *dev)
 	}
 }
 
+void
+ice_tx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id,
+		      struct rte_eth_burst_mode *mode)
+{
+	eth_tx_burst_t pkt_burst = dev->tx_pkt_burst;
+	uint64_t options;
+
+	if (pkt_burst == ice_xmit_pkts_simple)
+		options = RTE_ETH_BURST_SCALAR | RTE_ETH_BURST_SIMPLE;
+	else if (pkt_burst == ice_xmit_pkts)
+		options = RTE_ETH_BURST_SCALAR;
+#ifdef RTE_ARCH_X86
+	else if (pkt_burst == ice_xmit_pkts_vec_avx2)
+		options = RTE_ETH_BURST_VECTOR | RTE_ETH_BURST_AVX2;
+	else if (pkt_burst == ice_xmit_pkts_vec)
+		options = RTE_ETH_BURST_VECTOR | RTE_ETH_BURST_SSE;
+#endif
+	else
+		options = 0;
+
+	mode->options = options;
+}
+
 /* For each value it means, datasheet of hardware can tell more details
  *
  * @note: fix ice_dev_supported_ptypes_get() if any change here.
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index e9214110c..8bdd4369c 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -166,6 +166,10 @@ void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 		      struct rte_eth_rxq_info *qinfo);
 void ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 		      struct rte_eth_txq_info *qinfo);
+void ice_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
+			   struct rte_eth_burst_mode *mode);
+void ice_tx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
+			   struct rte_eth_burst_mode *mode);
 int ice_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int ice_tx_descriptor_status(void *tx_queue, uint16_t offset);
 void ice_set_default_ptype_table(struct rte_eth_dev *dev);