From: Haiyue Wang <haiyue.wang@intel.com>
To: dev@dpdk.org, ferruh.yigit@intel.com, xiaolong.ye@intel.com
Cc: ray.kinsella@intel.com, bernard.iremonger@intel.com, chenmin.sun@intel.com,
    Haiyue Wang <haiyue.wang@intel.com>
Date: Thu, 26 Sep 2019 19:48:17 +0800
Message-Id: <20190926114818.91063-4-haiyue.wang@intel.com>
In-Reply-To: <20190926114818.91063-1-haiyue.wang@intel.com>
References: <20190926114818.91063-1-haiyue.wang@intel.com>
Subject: [dpdk-dev] [PATCH v1 3/4] net/ice: support to get the Rx/Tx burst mode

Retrieve the related burst mode options according to the currently
selected Rx/Tx burst function.
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Bernard Iremonger <bernard.iremonger@intel.com>
---
 drivers/net/ice/ice_ethdev.c |  2 ++
 drivers/net/ice/ice_rxtx.c   | 54 ++++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx.h   |  4 +++
 3 files changed, 60 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index c126d962c..38b141eea 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -158,6 +158,8 @@ static const struct eth_dev_ops ice_eth_dev_ops = {
 	.vlan_pvid_set                = ice_vlan_pvid_set,
 	.rxq_info_get                 = ice_rxq_info_get,
 	.txq_info_get                 = ice_txq_info_get,
+	.rx_burst_mode_get            = ice_rx_burst_mode_get,
+	.tx_burst_mode_get            = ice_tx_burst_mode_get,
 	.get_eeprom_length            = ice_get_eeprom_length,
 	.get_eeprom                   = ice_get_eeprom,
 	.rx_queue_count               = ice_rx_queue_count,
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index af96c0f41..2ad683ea9 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -2441,6 +2441,37 @@ ice_set_rx_function(struct rte_eth_dev *dev)
 	}
 }
 
+void
+ice_rx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id,
+		      struct rte_eth_burst_mode *mode)
+{
+	eth_rx_burst_t pkt_burst = dev->rx_pkt_burst;
+	uint64_t options;
+
+	if (pkt_burst == ice_recv_scattered_pkts)
+		options = RTE_ETH_BURST_SCALAR | RTE_ETH_BURST_SCATTERED;
+	else if (pkt_burst == ice_recv_pkts_bulk_alloc)
+		options = RTE_ETH_BURST_SCALAR | RTE_ETH_BURST_BULK_ALLOC;
+	else if (pkt_burst == ice_recv_pkts)
+		options = RTE_ETH_BURST_SCALAR;
+#ifdef RTE_ARCH_X86
+	else if (pkt_burst == ice_recv_scattered_pkts_vec_avx2)
+		options = RTE_ETH_BURST_VECTOR | RTE_ETH_BURST_AVX2 |
+			  RTE_ETH_BURST_SCATTERED;
+	else if (pkt_burst == ice_recv_pkts_vec_avx2)
+		options = RTE_ETH_BURST_VECTOR | RTE_ETH_BURST_AVX2;
+	else if (pkt_burst == ice_recv_scattered_pkts_vec)
+		options = RTE_ETH_BURST_VECTOR | RTE_ETH_BURST_SSE |
+			  RTE_ETH_BURST_SCATTERED;
+	else if (pkt_burst == ice_recv_pkts_vec)
+		options = RTE_ETH_BURST_VECTOR | RTE_ETH_BURST_SSE;
+#endif
+	else
+		options = 0;
+
+	mode->options = options;
+}
+
 void __attribute__((cold))
 ice_set_tx_function_flag(struct rte_eth_dev *dev, struct ice_tx_queue *txq)
 {
@@ -2564,6 +2595,29 @@ ice_set_tx_function(struct rte_eth_dev *dev)
 	}
 }
 
+void
+ice_tx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id,
+		      struct rte_eth_burst_mode *mode)
+{
+	eth_tx_burst_t pkt_burst = dev->tx_pkt_burst;
+	uint64_t options;
+
+	if (pkt_burst == ice_xmit_pkts_simple)
+		options = RTE_ETH_BURST_SCALAR | RTE_ETH_BURST_SIMPLE;
+	else if (pkt_burst == ice_xmit_pkts)
+		options = RTE_ETH_BURST_SCALAR;
+#ifdef RTE_ARCH_X86
+	else if (pkt_burst == ice_xmit_pkts_vec_avx2)
+		options = RTE_ETH_BURST_VECTOR | RTE_ETH_BURST_AVX2;
+	else if (pkt_burst == ice_xmit_pkts_vec)
+		options = RTE_ETH_BURST_VECTOR | RTE_ETH_BURST_SSE;
+#endif
+	else
+		options = 0;
+
+	mode->options = options;
+}
+
 /* For each value it means, datasheet of hardware can tell more details
  *
  * @note: fix ice_dev_supported_ptypes_get() if any change here.
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 25b3822df..eccfbe93f 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -168,6 +168,10 @@ void ice_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 		      struct rte_eth_rxq_info *qinfo);
 void ice_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 		      struct rte_eth_txq_info *qinfo);
+void ice_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
+			   struct rte_eth_burst_mode *mode);
+void ice_tx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
+			   struct rte_eth_burst_mode *mode);
 int ice_rx_descriptor_status(void *rx_queue, uint16_t offset);
 int ice_tx_descriptor_status(void *tx_queue, uint16_t offset);
 void ice_set_default_ptype_table(struct rte_eth_dev *dev);
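
The ice callbacks report the same options for every queue of the port, which
is why queue_id is marked __rte_unused. For illustration, below is a minimal
application-side sketch of printing the reported modes. It assumes the
ethdev-level API added in patch 1/4 of this series: rte_eth_rx_burst_mode_get()
/ rte_eth_tx_burst_mode_get() returning 0 on success and filling
struct rte_eth_burst_mode::options with RTE_ETH_BURST_* bits, plus an
rte_eth_burst_mode_option_name() helper that turns a single option bit into a
string; the exact names may differ from what finally lands in ethdev.

#include <stdio.h>
#include <rte_ethdev.h>

/* Hypothetical helper: print the burst mode bits reported for one queue. */
static void
print_burst_modes(uint16_t port_id, uint16_t queue_id)
{
	struct rte_eth_burst_mode mode;
	uint64_t bit;

	/* Rx side; assumed to return 0 on success, negative errno otherwise. */
	if (rte_eth_rx_burst_mode_get(port_id, queue_id, &mode) == 0) {
		printf("port %u rxq %u burst mode:", port_id, queue_id);
		for (bit = 1; bit != 0; bit <<= 1)
			if (mode.options & bit)
				printf(" %s", rte_eth_burst_mode_option_name(bit));
		printf("\n");
	}

	/* Tx side, same decoding of the options bitmask. */
	if (rte_eth_tx_burst_mode_get(port_id, queue_id, &mode) == 0) {
		printf("port %u txq %u burst mode:", port_id, queue_id);
		for (bit = 1; bit != 0; bit <<= 1)
			if (mode.options & bit)
				printf(" %s", rte_eth_burst_mode_option_name(bit));
		printf("\n");
	}
}

Looping over the individual bits matters because a single burst function can
set several options at once, e.g. ice_recv_scattered_pkts_vec_avx2 reports
RTE_ETH_BURST_VECTOR | RTE_ETH_BURST_AVX2 | RTE_ETH_BURST_SCATTERED.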