From patchwork Fri Jun 19 08:50:34 2020
X-Patchwork-Submitter: "Xu, Ting"
X-Patchwork-Id: 71780
From: Ting Xu
To: dev@dpdk.org
Cc: qi.z.zhang@intel.com, qiming.yang@intel.com, jingjing.wu@intel.com,
 beilei.xing@intel.com, marko.kovacevic@intel.com, john.mcnamara@intel.com
Date: Fri, 19 Jun 2020 16:50:34 +0800
Message-Id: <20200619085045.22875-2-ting.xu@intel.com>
In-Reply-To: <20200619085045.22875-1-ting.xu@intel.com>
Subject: [dpdk-dev] [PATCH v4 01/12] net/ice: init RSS and supported RXDID in DCF

From: Qi Zhang

Enable RSS parameter initialization and fetch the bitmap of supported
flexible descriptor RXDIDs from the PF during DCF init.
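For context: the bitmap fetched by the new ice_dcf_get_supported_rxdid() is
consumed later in the series (patch 08) when queues are configured. A minimal
sketch of how such a bitmap is tested -- the helper name is illustrative, not
part of the patch:

#include <stdbool.h>
#include <stdint.h>

#define BIT(n)                   (1ULL << (n))
#define IAVF_RXDID_COMMS_GENERIC 16 /* flexible descriptor profile used later */

/* Bit N set in the VIRTCHNL_OP_GET_SUPPORTED_RXDIDS reply means RXDID N
 * may be requested when configuring Rx queues. */
static bool dcf_rxdid_supported(uint64_t supported_rxdid, uint8_t rxdid)
{
	return (supported_rxdid & BIT(rxdid)) != 0;
}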
Signed-off-by: Qi Zhang
Signed-off-by: Ting Xu
---
 drivers/net/ice/ice_dcf.c | 54 ++++++++++++++++++++++++++++++++++++++-
 drivers/net/ice/ice_dcf.h |  3 +++
 2 files changed, 56 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 0cd5d1bf6..93fabd5f7 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -233,7 +233,7 @@ ice_dcf_get_vf_resource(struct ice_dcf_hw *hw)
 	caps = VIRTCHNL_VF_OFFLOAD_WB_ON_ITR | VIRTCHNL_VF_OFFLOAD_RX_POLLING |
 	       VIRTCHNL_VF_CAP_ADV_LINK_SPEED | VIRTCHNL_VF_CAP_DCF |
-	       VF_BASE_MODE_OFFLOADS;
+	       VF_BASE_MODE_OFFLOADS | VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC;
 
 	err = ice_dcf_send_cmd_req_no_irq(hw, VIRTCHNL_OP_GET_VF_RESOURCES,
 					  (uint8_t *)&caps, sizeof(caps));
@@ -547,6 +547,30 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
 	return err;
 }
 
+static int
+ice_dcf_get_supported_rxdid(struct ice_dcf_hw *hw)
+{
+	int err;
+
+	err = ice_dcf_send_cmd_req_no_irq(hw,
+					  VIRTCHNL_OP_GET_SUPPORTED_RXDIDS,
+					  NULL, 0);
+	if (err) {
+		PMD_INIT_LOG(ERR, "Failed to send OP_GET_SUPPORTED_RXDIDS");
+		return -1;
+	}
+
+	err = ice_dcf_recv_cmd_rsp_no_irq(hw, VIRTCHNL_OP_GET_SUPPORTED_RXDIDS,
+					  (uint8_t *)&hw->supported_rxdid,
+					  sizeof(uint64_t), NULL);
+	if (err) {
+		PMD_INIT_LOG(ERR, "Failed to get response of OP_GET_SUPPORTED_RXDIDS");
+		return -1;
+	}
+
+	return 0;
+}
+
 int
 ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 {
@@ -620,6 +644,29 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 		goto err_alloc;
 	}
 
+	/* Allocate memory for RSS info */
+	if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		hw->rss_key = rte_zmalloc(NULL, hw->vf_res->rss_key_size, 0);
+		if (!hw->rss_key) {
+			PMD_INIT_LOG(ERR, "unable to allocate rss_key memory");
+			goto err_alloc;
+		}
+		hw->rss_lut = rte_zmalloc("rss_lut", hw->vf_res->rss_lut_size, 0);
+		if (!hw->rss_lut) {
+			PMD_INIT_LOG(ERR, "unable to allocate rss_lut memory");
+			goto err_rss;
+		}
+	}
+
+	if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC) {
+		if (ice_dcf_get_supported_rxdid(hw) != 0) {
+			PMD_INIT_LOG(ERR, "failed to get supported rxdid");
+			goto err_rss;
+		}
+	}
+
 	hw->eth_dev = eth_dev;
 	rte_intr_callback_register(&pci_dev->intr_handle,
 				   ice_dcf_dev_interrupt_handler, hw);
@@ -628,6 +675,9 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 
 	return 0;
 
+err_rss:
+	rte_free(hw->rss_key);
+	rte_free(hw->rss_lut);
 err_alloc:
 	rte_free(hw->vf_res);
 err_api:
@@ -655,4 +705,6 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 	rte_free(hw->arq_buf);
 	rte_free(hw->vf_vsi_map);
 	rte_free(hw->vf_res);
+	rte_free(hw->rss_lut);
+	rte_free(hw->rss_key);
 }

diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index d2e447b48..152266e3c 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -50,6 +50,9 @@ struct ice_dcf_hw {
 	uint16_t vsi_id;
 
 	struct rte_eth_dev *eth_dev;
+	uint8_t *rss_lut;
+	uint8_t *rss_key;
+	uint64_t supported_rxdid;
 };
 
 int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,
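A side note on the *_no_irq helpers used above: they run during probe, before
the AdminQ interrupt callback is registered, so the response has to be polled.
An illustrative polling shape, not the driver's actual implementation (the
retry bound and delay are assumed values):

#include <errno.h>
#include <rte_cycles.h>

#define DCF_RSP_MAX_TRIES 200 /* assumed bound, for illustration */
#define DCF_RSP_DELAY_MS  10  /* assumed per-try delay */

/* Poll for an admin-queue response with a bounded retry count instead of
 * waiting on an interrupt. */
static int poll_for_response(int (*try_recv)(void *), void *ctx)
{
	int i;

	for (i = 0; i < DCF_RSP_MAX_TRIES; i++) {
		if (try_recv(ctx) == 0)
			return 0;
		rte_delay_ms(DCF_RSP_DELAY_MS);
	}
	return -ETIMEDOUT;
}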
From patchwork Fri Jun 19 08:50:35 2020
X-Patchwork-Submitter: "Xu, Ting"
X-Patchwork-Id: 71781
From: Ting Xu
To: dev@dpdk.org
Cc: qi.z.zhang@intel.com, qiming.yang@intel.com, jingjing.wu@intel.com,
 beilei.xing@intel.com, marko.kovacevic@intel.com, john.mcnamara@intel.com
Date: Fri, 19 Jun 2020 16:50:35 +0800
Message-Id: <20200619085045.22875-3-ting.xu@intel.com>
In-Reply-To: <20200619085045.22875-1-ting.xu@intel.com>
Subject: [dpdk-dev] [PATCH v4 02/12] net/ice: complete device info get in DCF

From: Qi Zhang

Add support for getting complete device information in DCF, including
Rx/Tx offload capabilities and the default configuration.
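For reference, a caller-side sketch of how the information filled in by this
patch is typically consumed; the port_id and the offload being checked are
arbitrary examples (flag names per the pre-20.11 DEV_RX_OFFLOAD_* convention
used throughout this series):

#include <errno.h>
#include <rte_ethdev.h>

/* Query the DCF port and test one of the newly advertised Rx offloads. */
static int check_rss_hash_offload(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;
	int ret = rte_eth_dev_info_get(port_id, &dev_info);

	if (ret != 0)
		return ret;

	return (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_RSS_HASH) ?
	       0 : -ENOTSUP;
}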
Signed-off-by: Qi Zhang
Signed-off-by: Ting Xu
---
 drivers/net/ice/ice_dcf_ethdev.c | 70 ++++++++++++++++++++++++++++++--
 1 file changed, 67 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index e5ba1a61f..eb3708191 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -24,6 +24,7 @@
 #include "ice_generic_flow.h"
 #include "ice_dcf_ethdev.h"
+#include "ice_rxtx.h"
 
 static uint16_t
 ice_dcf_recv_pkts(__rte_unused void *rx_queue,
@@ -66,11 +67,74 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
 		     struct rte_eth_dev_info *dev_info)
 {
 	struct ice_dcf_adapter *adapter = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &adapter->real_hw;
 
 	dev_info->max_mac_addrs = 1;
-	dev_info->max_rx_pktlen = (uint32_t)-1;
-	dev_info->max_rx_queues = RTE_DIM(adapter->rxqs);
-	dev_info->max_tx_queues = RTE_DIM(adapter->txqs);
+	dev_info->max_rx_queues = hw->vsi_res->num_queue_pairs;
+	dev_info->max_tx_queues = hw->vsi_res->num_queue_pairs;
+	dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN;
+	dev_info->max_rx_pktlen = ICE_FRAME_SIZE_MAX;
+	dev_info->hash_key_size = hw->vf_res->rss_key_size;
+	dev_info->reta_size = hw->vf_res->rss_lut_size;
+	dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
+
+	dev_info->rx_offload_capa =
+		DEV_RX_OFFLOAD_VLAN_STRIP |
+		DEV_RX_OFFLOAD_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_UDP_CKSUM |
+		DEV_RX_OFFLOAD_TCP_CKSUM |
+		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_SCATTER |
+		DEV_RX_OFFLOAD_JUMBO_FRAME |
+		DEV_RX_OFFLOAD_VLAN_FILTER |
+		DEV_RX_OFFLOAD_RSS_HASH;
+	dev_info->tx_offload_capa =
+		DEV_TX_OFFLOAD_VLAN_INSERT |
+		DEV_TX_OFFLOAD_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_UDP_CKSUM |
+		DEV_TX_OFFLOAD_TCP_CKSUM |
+		DEV_TX_OFFLOAD_SCTP_CKSUM |
+		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_TCP_TSO |
+		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
+		DEV_TX_OFFLOAD_GRE_TNL_TSO |
+		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
+		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
+		DEV_TX_OFFLOAD_MULTI_SEGS;
+
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_thresh = {
+			.pthresh = ICE_DEFAULT_RX_PTHRESH,
+			.hthresh = ICE_DEFAULT_RX_HTHRESH,
+			.wthresh = ICE_DEFAULT_RX_WTHRESH,
+		},
+		.rx_free_thresh = ICE_DEFAULT_RX_FREE_THRESH,
+		.rx_drop_en = 0,
+		.offloads = 0,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_thresh = {
+			.pthresh = ICE_DEFAULT_TX_PTHRESH,
+			.hthresh = ICE_DEFAULT_TX_HTHRESH,
+			.wthresh = ICE_DEFAULT_TX_WTHRESH,
+		},
+		.tx_free_thresh = ICE_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = ICE_DEFAULT_TX_RSBIT_THRESH,
+		.offloads = 0,
+	};
+
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = ICE_MAX_RING_DESC,
+		.nb_min = ICE_MIN_RING_DESC,
+		.nb_align = ICE_ALIGN_RING_DESC,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = ICE_MAX_RING_DESC,
+		.nb_min = ICE_MIN_RING_DESC,
+		.nb_align = ICE_ALIGN_RING_DESC,
+	};
 
 	return 0;
 }
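The descriptor limits published above are what ethdev uses to validate ring
sizes. A worked example, assuming the ice ring macros are 4096/64/32 as in
ice_rxtx.h (treated as assumptions here):

/* nb_rx_desc must satisfy nb_min <= n <= nb_max and n % nb_align == 0.
 *
 *   nb_rx_desc = 1024 -> accepted (64 <= 1024 <= 4096, 1024 % 32 == 0)
 *   nb_rx_desc = 1000 -> rejected by rte_eth_rx_queue_setup()
 *                        (not a multiple of nb_align)
 */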
From patchwork Fri Jun 19 08:50:36 2020
X-Patchwork-Submitter: "Xu, Ting"
X-Patchwork-Id: 71782
From: Ting Xu
To: dev@dpdk.org
Cc: qi.z.zhang@intel.com, qiming.yang@intel.com, jingjing.wu@intel.com,
 beilei.xing@intel.com, marko.kovacevic@intel.com, john.mcnamara@intel.com
Date: Fri, 19 Jun 2020 16:50:36 +0800
Message-Id: <20200619085045.22875-4-ting.xu@intel.com>
Subject: [dpdk-dev] [PATCH v4 03/12] net/ice: complete dev configure in DCF

From: Qi Zhang

Enable device configuration function in DCF.

Signed-off-by: Qi Zhang
Signed-off-by: Ting Xu
---
 drivers/net/ice/ice_dcf_ethdev.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index eb3708191..01412ced0 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -57,8 +57,17 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
 }
 
 static int
-ice_dcf_dev_configure(__rte_unused struct rte_eth_dev *dev)
+ice_dcf_dev_configure(struct rte_eth_dev *dev)
 {
+	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct ice_adapter *ad = &dcf_ad->parent;
+
+	ad->rx_bulk_alloc_allowed = true;
+	ad->tx_simple_allowed = true;
+
+	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+
 	return 0;
 }
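The mq_mode check above pairs with the application side; a hedged sketch of
that caller (port and queue counts are arbitrary examples):

#include <rte_ethdev.h>

/* ice_dcf_dev_configure() will OR in DEV_RX_OFFLOAD_RSS_HASH because
 * ETH_MQ_RX_RSS carries ETH_MQ_RX_RSS_FLAG. */
static int configure_port_rss(uint16_t port_id, uint16_t nb_rxq,
			      uint16_t nb_txq)
{
	struct rte_eth_conf port_conf = {
		.rxmode = { .mq_mode = ETH_MQ_RX_RSS },
	};

	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
}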
d="scan'208";a="204384418" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Jun 2020 01:47:31 -0700 IronPort-SDR: OsKaDznx8lhHAgSh+EUVAzjGFgNndZjcqJXG3B5bW/Dq0ZqGjE7O+eqB6yfzgjV/bSq0LHfdoX jW6CKViJQZmg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.75,254,1589266800"; d="scan'208";a="477547930" Received: from dpdk-xuting-second.sh.intel.com ([10.67.116.154]) by fmsmga006.fm.intel.com with ESMTP; 19 Jun 2020 01:47:29 -0700 From: Ting Xu To: dev@dpdk.org Cc: qi.z.zhang@intel.com, qiming.yang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com, marko.kovacevic@intel.com, john.mcnamara@intel.com, Ting Xu Date: Fri, 19 Jun 2020 16:50:37 +0800 Message-Id: <20200619085045.22875-5-ting.xu@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200619085045.22875-1-ting.xu@intel.com> References: <20200605201737.33766-1-ting.xu@intel.com> <20200619085045.22875-1-ting.xu@intel.com> Subject: [dpdk-dev] [PATCH v4 04/12] net/ice: complete queue setup in DCF X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Qi Zhang Delete original DCF queue setup functions and use ice queue setup and release functions instead. Signed-off-by: Qi Zhang Signed-off-by: Ting Xu --- drivers/net/ice/ice_dcf_ethdev.c | 42 +++----------------------------- drivers/net/ice/ice_dcf_ethdev.h | 3 --- drivers/net/ice/ice_dcf_parent.c | 7 ++++++ 3 files changed, 11 insertions(+), 41 deletions(-) diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c index 01412ced0..b07850ece 100644 --- a/drivers/net/ice/ice_dcf_ethdev.c +++ b/drivers/net/ice/ice_dcf_ethdev.c @@ -229,11 +229,6 @@ ice_dcf_dev_close(struct rte_eth_dev *dev) ice_dcf_uninit_hw(dev, &adapter->real_hw); } -static void -ice_dcf_queue_release(__rte_unused void *q) -{ -} - static int ice_dcf_link_update(__rte_unused struct rte_eth_dev *dev, __rte_unused int wait_to_complete) @@ -241,45 +236,16 @@ ice_dcf_link_update(__rte_unused struct rte_eth_dev *dev, return 0; } -static int -ice_dcf_rx_queue_setup(struct rte_eth_dev *dev, - uint16_t rx_queue_id, - __rte_unused uint16_t nb_rx_desc, - __rte_unused unsigned int socket_id, - __rte_unused const struct rte_eth_rxconf *rx_conf, - __rte_unused struct rte_mempool *mb_pool) -{ - struct ice_dcf_adapter *adapter = dev->data->dev_private; - - dev->data->rx_queues[rx_queue_id] = &adapter->rxqs[rx_queue_id]; - - return 0; -} - -static int -ice_dcf_tx_queue_setup(struct rte_eth_dev *dev, - uint16_t tx_queue_id, - __rte_unused uint16_t nb_tx_desc, - __rte_unused unsigned int socket_id, - __rte_unused const struct rte_eth_txconf *tx_conf) -{ - struct ice_dcf_adapter *adapter = dev->data->dev_private; - - dev->data->tx_queues[tx_queue_id] = &adapter->txqs[tx_queue_id]; - - return 0; -} - static const struct eth_dev_ops ice_dcf_eth_dev_ops = { .dev_start = ice_dcf_dev_start, .dev_stop = ice_dcf_dev_stop, .dev_close = ice_dcf_dev_close, .dev_configure = ice_dcf_dev_configure, .dev_infos_get = ice_dcf_dev_info_get, - .rx_queue_setup = ice_dcf_rx_queue_setup, - .tx_queue_setup = ice_dcf_tx_queue_setup, - .rx_queue_release = ice_dcf_queue_release, - .tx_queue_release = ice_dcf_queue_release, + .rx_queue_setup = ice_rx_queue_setup, + .tx_queue_setup = ice_tx_queue_setup, + 
+	.rx_queue_release = ice_rx_queue_release,
+	.tx_queue_release = ice_tx_queue_release,
 	.link_update = ice_dcf_link_update,
 	.stats_get = ice_dcf_stats_get,
 	.stats_reset = ice_dcf_stats_reset,

diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index e60e808d8..b54528bea 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -19,10 +19,7 @@ struct ice_dcf_queue {
 
 struct ice_dcf_adapter {
 	struct ice_adapter parent; /* Must be first */
-
 	struct ice_dcf_hw real_hw;
-	struct ice_dcf_queue rxqs[ICE_DCF_MAX_RINGS];
-	struct ice_dcf_queue txqs[ICE_DCF_MAX_RINGS];
 };
 
 void ice_dcf_handle_pf_event_msg(struct ice_dcf_hw *dcf_hw,

diff --git a/drivers/net/ice/ice_dcf_parent.c b/drivers/net/ice/ice_dcf_parent.c
index d13e19d78..322a5273f 100644
--- a/drivers/net/ice/ice_dcf_parent.c
+++ b/drivers/net/ice/ice_dcf_parent.c
@@ -335,6 +335,13 @@ ice_dcf_init_parent_adapter(struct rte_eth_dev *eth_dev)
 	parent_adapter->eth_dev = eth_dev;
 	parent_adapter->pf.adapter = parent_adapter;
 	parent_adapter->pf.dev_data = eth_dev->data;
+	/* create a dummy main_vsi */
+	parent_adapter->pf.main_vsi =
+		rte_zmalloc(NULL, sizeof(struct ice_vsi), 0);
+	if (!parent_adapter->pf.main_vsi)
+		return -ENOMEM;
+	parent_adapter->pf.main_vsi->adapter = parent_adapter;
+
 	parent_hw->back = parent_adapter;
 	parent_hw->mac_type = ICE_MAC_GENERIC;
 	parent_hw->vendor_id = ICE_INTEL_VENDOR_ID;
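A note on the dummy main_vsi: the shared ice queue-setup code reaches the
adapter through pf->main_vsi, so DCF must provide one even though it has no
real "main" VSI. A simplified, illustrative view (struct names and fields are
reduced for the sketch, not the real ice definitions):

/* Reduced, illustrative types. */
struct adapter_view;
struct vsi_view { struct adapter_view *adapter; };
struct pf_view  { struct vsi_view *main_vsi; };

/* The shared setup path does the equivalent of this dereference; without
 * the rte_zmalloc'd dummy VSI it would fault under DCF. */
static struct adapter_view *pf_to_adapter(struct pf_view *pf)
{
	return pf->main_vsi->adapter;
}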
From patchwork Fri Jun 19 08:50:38 2020
X-Patchwork-Submitter: "Xu, Ting"
X-Patchwork-Id: 71784
From: Ting Xu
To: dev@dpdk.org
Cc: qi.z.zhang@intel.com, qiming.yang@intel.com, jingjing.wu@intel.com,
 beilei.xing@intel.com, marko.kovacevic@intel.com, john.mcnamara@intel.com
Date: Fri, 19 Jun 2020 16:50:38 +0800
Message-Id: <20200619085045.22875-6-ting.xu@intel.com>
Subject: [dpdk-dev] [PATCH v4 05/12] net/ice: add stop flag for device start / stop

From: Qi Zhang

Add stop flag for DCF device start and stop.

Signed-off-by: Qi Zhang
Signed-off-by: Ting Xu
---
 drivers/net/ice/ice_dcf_ethdev.c | 14 ++++++++++++++
 drivers/net/ice/ice_dcf_parent.c |  1 +
 2 files changed, 15 insertions(+)

diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index b07850ece..676a504fd 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -45,6 +45,11 @@ ice_dcf_xmit_pkts(__rte_unused void *tx_queue,
 static int
 ice_dcf_dev_start(struct rte_eth_dev *dev)
 {
+	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct ice_adapter *ad = &dcf_ad->parent;
+
+	ad->pf.adapter_stopped = 0;
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;
 
 	return 0;
@@ -53,7 +58,16 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 static void
 ice_dcf_dev_stop(struct rte_eth_dev *dev)
 {
+	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct ice_adapter *ad = &dcf_ad->parent;
+
+	if (ad->pf.adapter_stopped == 1) {
+		PMD_DRV_LOG(DEBUG, "Port is already stopped");
+		return;
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	ad->pf.adapter_stopped = 1;
 }
 
 static int

diff --git a/drivers/net/ice/ice_dcf_parent.c b/drivers/net/ice/ice_dcf_parent.c
index 322a5273f..c5dfdd36e 100644
--- a/drivers/net/ice/ice_dcf_parent.c
+++ b/drivers/net/ice/ice_dcf_parent.c
@@ -341,6 +341,7 @@ ice_dcf_init_parent_adapter(struct rte_eth_dev *eth_dev)
 	if (!parent_adapter->pf.main_vsi)
 		return -ENOMEM;
 	parent_adapter->pf.main_vsi->adapter = parent_adapter;
+	parent_adapter->pf.adapter_stopped = 1;
 
 	parent_hw->back = parent_adapter;
 	parent_hw->mac_type = ICE_MAC_GENERIC;
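The flag makes the stop path idempotent. A simplified shape of the guard, with
the driver structure flattened away for illustration (in the driver the state
lives in ad->pf.adapter_stopped and starts at 1, as set in parent init):

static int adapter_stopped = 1;

static void dcf_stop(void)
{
	if (adapter_stopped == 1)
		return;              /* already stopped: nothing to undo */
	/* ... link down, and in later patches: stop queues ... */
	adapter_stopped = 1;
}

static void dcf_start(void)
{
	adapter_stopped = 0;
	/* ... queue init, link up ... */
}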
From patchwork Fri Jun 19 08:50:39 2020
X-Patchwork-Submitter: "Xu, Ting"
X-Patchwork-Id: 71785
From: Ting Xu
To: dev@dpdk.org
Cc: qi.z.zhang@intel.com, qiming.yang@intel.com, jingjing.wu@intel.com,
 beilei.xing@intel.com, marko.kovacevic@intel.com, john.mcnamara@intel.com
Date: Fri, 19 Jun 2020 16:50:39 +0800
Message-Id: <20200619085045.22875-7-ting.xu@intel.com>
Subject: [dpdk-dev] [PATCH v4 06/12] net/ice: add Rx queue init in DCF

From: Qi Zhang

Enable Rx queue initialization during device start in DCF.

Signed-off-by: Qi Zhang
Signed-off-by: Ting Xu
---
 drivers/net/ice/ice_dcf.h        |  1 +
 drivers/net/ice/ice_dcf_ethdev.c | 83 ++++++++++++++++++++++++++++++++
 2 files changed, 84 insertions(+)

diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 152266e3c..dcb2a0283 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -53,6 +53,7 @@ struct ice_dcf_hw {
 	uint8_t *rss_lut;
 	uint8_t *rss_key;
 	uint64_t supported_rxdid;
+	uint16_t num_queue_pairs;
 };
 
 int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,

diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 676a504fd..5afd07f96 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -42,14 +42,97 @@ ice_dcf_xmit_pkts(__rte_unused void *tx_queue,
 	return 0;
 }
 
+static int
+ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
+{
+	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct rte_eth_dev_data *dev_data = dev->data;
+	struct iavf_hw *hw = &dcf_ad->real_hw.avf;
+	uint16_t buf_size, max_pkt_len, len;
+
+	buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+	rxq->rx_hdr_len = 0;
+	rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
+	len = ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len;
+	max_pkt_len = RTE_MIN(len, dev->data->dev_conf.rxmode.max_rx_pkt_len);
+
+	/* Check if the jumbo frame and maximum packet length are set
+	 * correctly.
+	 */
+	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+		if (max_pkt_len <= RTE_ETHER_MAX_LEN ||
+		    max_pkt_len > ICE_FRAME_SIZE_MAX) {
+			PMD_DRV_LOG(ERR, "maximum packet length must be "
+				    "larger than %u and smaller than %u, "
+				    "as jumbo frame is enabled",
+				    (uint32_t)RTE_ETHER_MAX_LEN,
+				    (uint32_t)ICE_FRAME_SIZE_MAX);
+			return -EINVAL;
+		}
+	} else {
+		if (max_pkt_len < RTE_ETHER_MIN_LEN ||
+		    max_pkt_len > RTE_ETHER_MAX_LEN) {
+			PMD_DRV_LOG(ERR, "maximum packet length must be "
+				    "larger than %u and smaller than %u, "
+				    "as jumbo frame is disabled",
+				    (uint32_t)RTE_ETHER_MIN_LEN,
+				    (uint32_t)RTE_ETHER_MAX_LEN);
+			return -EINVAL;
+		}
+	}
+
+	rxq->max_pkt_len = max_pkt_len;
+	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	    (rxq->max_pkt_len + 2 * ICE_VLAN_TAG_SIZE) > buf_size) {
+		dev_data->scattered_rx = 1;
+	}
+	rxq->qrx_tail = hw->hw_addr + IAVF_QRX_TAIL1(rxq->queue_id);
+	IAVF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	IAVF_WRITE_FLUSH(hw);
+
+	return 0;
+}
+
+static int
+ice_dcf_init_rx_queues(struct rte_eth_dev *dev)
+{
+	struct ice_rx_queue **rxq =
+		(struct ice_rx_queue **)dev->data->rx_queues;
+	int i, ret;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		if (!rxq[i] || !rxq[i]->q_set)
+			continue;
+		ret = ice_dcf_init_rxq(dev, rxq[i]);
+		if (ret)
+			return ret;
+	}
+
+	ice_set_rx_function(dev);
+	ice_set_tx_function(dev);
+
+	return 0;
+}
+
 static int
 ice_dcf_dev_start(struct rte_eth_dev *dev)
 {
 	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
 	struct ice_adapter *ad = &dcf_ad->parent;
+	struct ice_dcf_hw *hw = &dcf_ad->real_hw;
+	int ret;
 
 	ad->pf.adapter_stopped = 0;
 
+	hw->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
+				      dev->data->nb_tx_queues);
+
+	ret = ice_dcf_init_rx_queues(dev);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Fail to init queues");
+		return ret;
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;
 
 	return 0;
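To make the sizing logic in ice_dcf_init_rxq() concrete, a worked example
under assumed defaults (2176-byte data room, 128-byte headroom,
ICE_RLAN_CTX_DBUF_S = 7 and ICE_SUPPORT_CHAIN_NUM = 5 are all assumptions
here, not values stated by the patch):

/*   buf_size    = 2176 - 128              = 2048
 *   rx_buf_len  = RTE_ALIGN(2048, 1 << 7) = 2048   (128 B granularity)
 *   len         = 5 * 2048                = 10240
 *   max_pkt_len = RTE_MIN(10240, rxmode.max_rx_pkt_len)
 *
 * Scattered Rx is then forced when max_pkt_len plus two VLAN tags no
 * longer fits in a single buffer, i.e. a frame must span several mbufs.
 */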
From patchwork Fri Jun 19 08:50:40 2020
X-Patchwork-Submitter: "Xu, Ting"
X-Patchwork-Id: 71786
From: Ting Xu
To: dev@dpdk.org
Cc: qi.z.zhang@intel.com, qiming.yang@intel.com, jingjing.wu@intel.com,
 beilei.xing@intel.com, marko.kovacevic@intel.com, john.mcnamara@intel.com
Date: Fri, 19 Jun 2020 16:50:40 +0800
Message-Id: <20200619085045.22875-8-ting.xu@intel.com>
Subject: [dpdk-dev] [PATCH v4 07/12] net/ice: init RSS during DCF start

From: Qi Zhang

Enable RSS initialization during DCF start. Add RSS LUT and RSS key
configuration functions.

Signed-off-by: Qi Zhang
Signed-off-by: Ting Xu
---
 drivers/net/ice/ice_dcf.c        | 117 +++++++++++++++++++++++++++++++
 drivers/net/ice/ice_dcf.h        |   1 +
 drivers/net/ice/ice_dcf_ethdev.c |   8 +++
 3 files changed, 126 insertions(+)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 93fabd5f7..f285323dd 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -708,3 +708,120 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 	rte_free(hw->rss_lut);
 	rte_free(hw->rss_key);
 }
+
+static int
+ice_dcf_configure_rss_key(struct ice_dcf_hw *hw)
+{
+	struct virtchnl_rss_key *rss_key;
+	struct dcf_virtchnl_cmd args;
+	int len, err;
+
+	len = sizeof(*rss_key) + hw->vf_res->rss_key_size - 1;
+	rss_key = rte_zmalloc("rss_key", len, 0);
+	if (!rss_key)
+		return -ENOMEM;
+
+	rss_key->vsi_id = hw->vsi_res->vsi_id;
+	rss_key->key_len = hw->vf_res->rss_key_size;
+	rte_memcpy(rss_key->key, hw->rss_key, hw->vf_res->rss_key_size);
+
+	args.v_op = VIRTCHNL_OP_CONFIG_RSS_KEY;
+	args.req_msglen = len;
+	args.req_msg = (uint8_t *)rss_key;
+	args.rsp_msglen = 0;
+	args.rsp_buflen = 0;
+	args.rsp_msgbuf = NULL;
+	args.pending = 0;
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_INIT_LOG(ERR, "Failed to execute OP_CONFIG_RSS_KEY");
+
+	rte_free(rss_key);
+	return err;
+}
+
+static int
+ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw)
+{
+	struct virtchnl_rss_lut *rss_lut;
+	struct dcf_virtchnl_cmd args;
+	int len, err;
+
+	len = sizeof(*rss_lut) + hw->vf_res->rss_lut_size - 1;
+	rss_lut = rte_zmalloc("rss_lut", len, 0);
+	if (!rss_lut)
+		return -ENOMEM;
+
+	rss_lut->vsi_id = hw->vsi_res->vsi_id;
+	rss_lut->lut_entries = hw->vf_res->rss_lut_size;
+	rte_memcpy(rss_lut->lut, hw->rss_lut, hw->vf_res->rss_lut_size);
+
+	args.v_op = VIRTCHNL_OP_CONFIG_RSS_LUT;
+	args.req_msglen = len;
+	args.req_msg = (uint8_t *)rss_lut;
+	args.rsp_msglen = 0;
+	args.rsp_buflen = 0;
+	args.rsp_msgbuf = NULL;
+	args.pending = 0;
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_INIT_LOG(ERR, "Failed to execute OP_CONFIG_RSS_LUT");
+
+	rte_free(rss_lut);
+	return err;
+}
+
+int
+ice_dcf_init_rss(struct ice_dcf_hw *hw)
+{
+	struct rte_eth_dev *dev = hw->eth_dev;
+	struct rte_eth_rss_conf *rss_conf;
+	uint8_t i, j, nb_q;
+	int ret;
+
+	rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = dev->data->nb_rx_queues;
+
+	if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF)) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+		PMD_DRV_LOG(WARNING, "RSS is enabled by PF by default");
+		/* set all lut items to default queue */
+		memset(hw->rss_lut, 0, hw->vf_res->rss_lut_size);
+		return ice_dcf_configure_rss_lut(hw);
+	}
+
+	/* In IAVF, RSS enablement is set by the PF driver. It cannot be
+	 * enabled or disabled based on rss_conf->rss_hf.
+	 */
+
+	/* configure RSS key */
+	if (!rss_conf->rss_key)
+		/* Calculate the default hash key */
+		for (i = 0; i < hw->vf_res->rss_key_size; i++)
+			hw->rss_key[i] = (uint8_t)rte_rand();
+	else
+		rte_memcpy(hw->rss_key, rss_conf->rss_key,
+			   RTE_MIN(rss_conf->rss_key_len,
+				   hw->vf_res->rss_key_size));
+
+	/* init RSS LUT table */
+	for (i = 0, j = 0; i < hw->vf_res->rss_lut_size; i++, j++) {
+		if (j >= nb_q)
+			j = 0;
+		hw->rss_lut[i] = j;
+	}
+	/* send virtchnl ops to configure RSS */
+	ret = ice_dcf_configure_rss_lut(hw);
+	if (ret)
+		return ret;
+	ret = ice_dcf_configure_rss_key(hw);
+	if (ret)
+		return ret;
+
+	return 0;
+}

diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index dcb2a0283..eea4b286b 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -63,5 +63,6 @@ int ice_dcf_send_aq_cmd(void *dcf_hw, struct ice_aq_desc *desc,
 int ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw);
 int ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
 void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
+int ice_dcf_init_rss(struct ice_dcf_hw *hw);
 
 #endif /* _ICE_DCF_H_ */

diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 5afd07f96..e2ab7e637 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -133,6 +133,14 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
+	if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		ret = ice_dcf_init_rss(hw);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "Failed to configure RSS");
+			return ret;
+		}
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;
 
 	return 0;
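The rss_conf consumed above comes from the application's device
configuration. A hedged caller-side sketch (the 52-byte key length is only an
assumption; the authoritative length is the PF-reported rss_key_size exposed
as dev_info.hash_key_size):

#include <rte_ethdev.h>

static uint8_t rss_key[52]; /* length assumed; use dev_info.hash_key_size */

static void set_rss_conf(struct rte_eth_conf *port_conf)
{
	port_conf->rxmode.mq_mode = ETH_MQ_RX_RSS;
	port_conf->rx_adv_conf.rss_conf.rss_key = rss_key;
	port_conf->rx_adv_conf.rss_conf.rss_key_len = sizeof(rss_key);
	/* rss_hf is not consulted by ice_dcf_init_rss(): per the comment
	 * above, RSS enablement is controlled by the PF. */
}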
From patchwork Fri Jun 19 08:50:41 2020
X-Patchwork-Submitter: "Xu, Ting"
X-Patchwork-Id: 71787
From: Ting Xu
To: dev@dpdk.org
Cc: qi.z.zhang@intel.com, qiming.yang@intel.com, jingjing.wu@intel.com,
 beilei.xing@intel.com, marko.kovacevic@intel.com, john.mcnamara@intel.com
Date: Fri, 19 Jun 2020 16:50:41 +0800
Message-Id: <20200619085045.22875-9-ting.xu@intel.com>
Subject: [dpdk-dev] [PATCH v4 08/12] net/ice: add queue config in DCF

From: Qi Zhang

Add queue and Rx queue IRQ configuration during device start in DCF.
The setup is sent to the PF via virtchnl.

Signed-off-by: Qi Zhang
Signed-off-by: Ting Xu
---
 drivers/net/ice/ice_dcf.c        | 111 +++++++++++++++++++++++++++
 drivers/net/ice/ice_dcf.h        |   6 ++
 drivers/net/ice/ice_dcf_ethdev.c | 126 +++++++++++++++++++++++++++++++
 3 files changed, 243 insertions(+)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index f285323dd..8869e0d1c 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -24,6 +24,7 @@
 
 #include "ice_dcf.h"
+#include "ice_rxtx.h"
 
 #define ICE_DCF_AQ_LEN     32
 #define ICE_DCF_AQ_BUF_SZ  4096
@@ -825,3 +826,113 @@ ice_dcf_init_rss(struct ice_dcf_hw *hw)
 
 	return 0;
 }
+
+#define IAVF_RXDID_LEGACY_1 1
+#define IAVF_RXDID_COMMS_GENERIC 16
+
+int
+ice_dcf_configure_queues(struct ice_dcf_hw *hw)
+{
+	struct ice_rx_queue **rxq =
+		(struct ice_rx_queue **)hw->eth_dev->data->rx_queues;
+	struct ice_tx_queue **txq =
+		(struct ice_tx_queue **)hw->eth_dev->data->tx_queues;
+	struct virtchnl_vsi_queue_config_info *vc_config;
+	struct virtchnl_queue_pair_info *vc_qp;
+	struct dcf_virtchnl_cmd args;
+	uint16_t i, size;
+	int err;
+
+	size = sizeof(*vc_config) +
+	       sizeof(vc_config->qpair[0]) * hw->num_queue_pairs;
+	vc_config = rte_zmalloc("cfg_queue", size, 0);
+	if (!vc_config)
+		return -ENOMEM;
+
+	vc_config->vsi_id = hw->vsi_res->vsi_id;
+	vc_config->num_queue_pairs = hw->num_queue_pairs;
+
+	for (i = 0, vc_qp = vc_config->qpair;
+	     i < hw->num_queue_pairs;
+	     i++, vc_qp++) {
+		vc_qp->txq.vsi_id = hw->vsi_res->vsi_id;
+		vc_qp->txq.queue_id = i;
+		if (i < hw->eth_dev->data->nb_tx_queues) {
+			vc_qp->txq.ring_len = txq[i]->nb_tx_desc;
+			vc_qp->txq.dma_ring_addr = txq[i]->tx_ring_dma;
+		}
+		vc_qp->rxq.vsi_id = hw->vsi_res->vsi_id;
+		vc_qp->rxq.queue_id = i;
+		vc_qp->rxq.max_pkt_size = rxq[i]->max_pkt_len;
+
+		if (i >= hw->eth_dev->data->nb_rx_queues)
+			continue;
+
+		vc_qp->rxq.ring_len = rxq[i]->nb_rx_desc;
+		vc_qp->rxq.dma_ring_addr = rxq[i]->rx_ring_dma;
+		vc_qp->rxq.databuffer_size = rxq[i]->rx_buf_len;
+
+		if (hw->vf_res->vf_cap_flags &
+		    VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
+		    hw->supported_rxdid &
+		    BIT(IAVF_RXDID_COMMS_GENERIC)) {
+			vc_qp->rxq.rxdid = IAVF_RXDID_COMMS_GENERIC;
+			PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
+				    "Queue[%d]", vc_qp->rxq.rxdid, i);
+		} else {
+			PMD_DRV_LOG(ERR, "RXDID 16 is not supported");
+			rte_free(vc_config);
+			return -EINVAL;
+		}
+	}
+
+	memset(&args, 0, sizeof(args));
+	args.v_op = VIRTCHNL_OP_CONFIG_VSI_QUEUES;
+	args.req_msg = (uint8_t *)vc_config;
+	args.req_msglen = size;
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to execute command of"
+			    " VIRTCHNL_OP_CONFIG_VSI_QUEUES");
+
+	rte_free(vc_config);
+	return err;
+}
+
+int
+ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
+{
+	struct virtchnl_irq_map_info *map_info;
+	struct virtchnl_vector_map *vecmap;
+	struct dcf_virtchnl_cmd args;
+	int len, i, err;
+
+	len = sizeof(struct virtchnl_irq_map_info) +
+	      sizeof(struct virtchnl_vector_map) * hw->nb_msix;
+	map_info = rte_zmalloc("map_info", len, 0);
+	if (!map_info)
+		return -ENOMEM;
+
+	map_info->num_vectors = hw->nb_msix;
+	for (i = 0; i < hw->nb_msix; i++) {
+		vecmap = &map_info->vecmap[i];
+		vecmap->vsi_id = hw->vsi_res->vsi_id;
+		vecmap->rxitr_idx = 0;
+		vecmap->vector_id = hw->msix_base + i;
+		vecmap->txq_map = 0;
+		vecmap->rxq_map = hw->rxq_map[hw->msix_base + i];
+	}
+
+	memset(&args, 0, sizeof(args));
+	args.v_op = VIRTCHNL_OP_CONFIG_IRQ_MAP;
+	args.req_msg = (u8 *)map_info;
+	args.req_msglen = len;
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command OP_CONFIG_IRQ_MAP");
+
+	rte_free(map_info);
+	return err;
+}

diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index eea4b286b..9470d1df7 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -54,6 +54,10 @@ struct ice_dcf_hw {
 	uint8_t *rss_key;
 	uint64_t supported_rxdid;
 	uint16_t num_queue_pairs;
+
+	uint16_t msix_base;
+	uint16_t nb_msix;
+	uint16_t rxq_map[16];
 };
 
 int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,
@@ -64,5 +68,7 @@ int ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw);
 int ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
 void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
 int ice_dcf_init_rss(struct ice_dcf_hw *hw);
+int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
+int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
 
 #endif /* _ICE_DCF_H_ */

diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index e2ab7e637..a190ab7c1 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -114,10 +114,124 @@ ice_dcf_init_rx_queues(struct rte_eth_dev *dev)
 	return 0;
 }
 
+#define IAVF_MISC_VEC_ID                RTE_INTR_VEC_ZERO_OFFSET
+#define IAVF_RX_VEC_START               RTE_INTR_VEC_RXTX_OFFSET
+
+#define IAVF_ITR_INDEX_DEFAULT          0
+#define IAVF_QUEUE_ITR_INTERVAL_DEFAULT 32   /* 32 us */
+#define IAVF_QUEUE_ITR_INTERVAL_MAX     8160 /* 8160 us */
+
+static inline uint16_t
+iavf_calc_itr_interval(int16_t interval)
+{
+	if (interval < 0 || interval > IAVF_QUEUE_ITR_INTERVAL_MAX)
+		interval = IAVF_QUEUE_ITR_INTERVAL_DEFAULT;
+
+	/* Convert to hardware count, as writing each 1 represents 2 us */
+	return interval / 2;
+}
+
+static int
+ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
+			      struct rte_intr_handle *intr_handle)
+{
+	struct ice_dcf_adapter *adapter = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &adapter->real_hw;
+	uint16_t interval, i;
+	int vec;
+
+	if (rte_intr_cap_multiple(intr_handle) &&
+	    dev->data->dev_conf.intr_conf.rxq) {
+		if (rte_intr_efd_enable(intr_handle, dev->data->nb_rx_queues))
+			return -1;
+	}
+
+	if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
+		intr_handle->intr_vec =
+			rte_zmalloc("intr_vec",
+				    dev->data->nb_rx_queues * sizeof(int), 0);
+		if (!intr_handle->intr_vec) {
+			PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec",
+				    dev->data->nb_rx_queues);
+			return -1;
+		}
+	}
+
+	if (!dev->data->dev_conf.intr_conf.rxq ||
+	    !rte_intr_dp_is_en(intr_handle)) {
+		/* Rx interrupt disabled, map interrupt only for writeback */
+		hw->nb_msix = 1;
+		if (hw->vf_res->vf_cap_flags &
+		    VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
+			/* If WB_ON_ITR is supported, enable it */
+			hw->msix_base = IAVF_RX_VEC_START;
+			IAVF_WRITE_REG(&hw->avf,
+				       IAVF_VFINT_DYN_CTLN1(hw->msix_base - 1),
+				       IAVF_VFINT_DYN_CTLN1_ITR_INDX_MASK |
+				       IAVF_VFINT_DYN_CTLN1_WB_ON_ITR_MASK);
+		} else {
+			/* If no WB_ON_ITR offload flags, need to set
+			 * interrupt for descriptor write back.
+			 */
+			hw->msix_base = IAVF_MISC_VEC_ID;
+
+			/* set ITR to max */
+			interval =
+			iavf_calc_itr_interval(IAVF_QUEUE_ITR_INTERVAL_MAX);
+			IAVF_WRITE_REG(&hw->avf, IAVF_VFINT_DYN_CTL01,
+				       IAVF_VFINT_DYN_CTL01_INTENA_MASK |
+				       (IAVF_ITR_INDEX_DEFAULT <<
+					IAVF_VFINT_DYN_CTL01_ITR_INDX_SHIFT) |
+				       (interval <<
+					IAVF_VFINT_DYN_CTL01_INTERVAL_SHIFT));
+		}
+		IAVF_WRITE_FLUSH(&hw->avf);
+		/* map all queues to the same interrupt */
+		for (i = 0; i < dev->data->nb_rx_queues; i++)
+			hw->rxq_map[hw->msix_base] |= 1 << i;
+	} else {
+		if (!rte_intr_allow_others(intr_handle)) {
+			hw->nb_msix = 1;
+			hw->msix_base = IAVF_MISC_VEC_ID;
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				hw->rxq_map[hw->msix_base] |= 1 << i;
+				intr_handle->intr_vec[i] = IAVF_MISC_VEC_ID;
+			}
+			PMD_DRV_LOG(DEBUG,
+				    "vector %u is mapping to all Rx queues",
+				    hw->msix_base);
+		} else {
+			/* If Rx interrupt is required, and we can use
+			 * multi interrupts, then the vec is from 1
+			 */
+			hw->nb_msix = RTE_MIN(hw->vf_res->max_vectors,
+					      intr_handle->nb_efd);
+			hw->msix_base = IAVF_MISC_VEC_ID;
+			vec = IAVF_MISC_VEC_ID;
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				hw->rxq_map[vec] |= 1 << i;
+				intr_handle->intr_vec[i] = vec++;
+				if (vec >= hw->nb_msix)
+					vec = IAVF_RX_VEC_START;
+			}
+			PMD_DRV_LOG(DEBUG,
+				    "%u vectors are mapping to %u Rx queues",
+				    hw->nb_msix, dev->data->nb_rx_queues);
+		}
+	}
+
+	if (ice_dcf_config_irq_map(hw)) {
+		PMD_DRV_LOG(ERR, "config interrupt mapping failed");
+		return -1;
+	}
+	return 0;
+}
+
 static int
 ice_dcf_dev_start(struct rte_eth_dev *dev)
 {
 	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct rte_intr_handle *intr_handle = dev->intr_handle;
 	struct ice_adapter *ad = &dcf_ad->parent;
 	struct ice_dcf_hw *hw = &dcf_ad->real_hw;
 	int ret;
@@ -141,6 +255,18 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 		}
 	}
 
+	ret = ice_dcf_configure_queues(hw);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Fail to config queues");
+		return ret;
+	}
+
+	ret = ice_dcf_config_rx_queues_irqs(dev, intr_handle);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Fail to config rx queues' irqs");
+		return ret;
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;
 
 	return 0;
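The ITR conversion used above is easy to misread, so a worked example with
the macro values from this patch:

/* iavf_calc_itr_interval() halves the microsecond value because the
 * hardware ITR field counts in 2 us units:
 *
 *   iavf_calc_itr_interval(8160) -> 4080  (IAVF_QUEUE_ITR_INTERVAL_MAX)
 *   iavf_calc_itr_interval(32)   -> 16    (IAVF_QUEUE_ITR_INTERVAL_DEFAULT)
 *   iavf_calc_itr_interval(-1)   -> out of range, clamped to default -> 16
 */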
From patchwork Fri Jun 19 08:50:42 2020
X-Patchwork-Submitter: "Xu, Ting"
X-Patchwork-Id: 71788
From: Ting Xu
To: dev@dpdk.org
Cc: qi.z.zhang@intel.com, qiming.yang@intel.com, jingjing.wu@intel.com,
 beilei.xing@intel.com, marko.kovacevic@intel.com, john.mcnamara@intel.com
Date: Fri, 19 Jun 2020 16:50:42 +0800
Message-Id: <20200619085045.22875-10-ting.xu@intel.com>
Subject: [dpdk-dev] [PATCH v4 09/12] net/ice: add queue start and stop for DCF

From: Qi Zhang

Add queue start and stop in DCF. Support queue enable and disable
through virtchnl. Add support for Rx queue mbuf allocation and queue
reset.

Signed-off-by: Qi Zhang
Signed-off-by: Ting Xu
---
 drivers/net/ice/ice_dcf.c        |  57 ++++++
 drivers/net/ice/ice_dcf.h        |   3 +-
 drivers/net/ice/ice_dcf_ethdev.c | 322 +++++++++++++++++++++++++++++++
 3 files changed, 381 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 8869e0d1c..f18c0f16a 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -936,3 +936,60 @@ ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
 	rte_free(map_info);
 	return err;
 }
+
+int
+ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on)
+{
+	struct virtchnl_queue_select queue_select;
+	struct dcf_virtchnl_cmd args;
+	int err;
+
+	memset(&queue_select, 0, sizeof(queue_select));
+	queue_select.vsi_id = hw->vsi_res->vsi_id;
+	if (rx)
+		queue_select.rx_queues |= 1 << qid;
+	else
+		queue_select.tx_queues |= 1 << qid;
+
+	memset(&args, 0, sizeof(args));
+	if (on)
+		args.v_op = VIRTCHNL_OP_ENABLE_QUEUES;
+	else
+		args.v_op = VIRTCHNL_OP_DISABLE_QUEUES;
+
+	args.req_msg = (u8 *)&queue_select;
+	args.req_msglen = sizeof(queue_select);
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to execute command of %s",
"OP_ENABLE_QUEUES" : "OP_DISABLE_QUEUES"); + + return err; +} + +int +ice_dcf_disable_queues(struct ice_dcf_hw *hw) +{ + struct virtchnl_queue_select queue_select; + struct dcf_virtchnl_cmd args; + int err; + + memset(&queue_select, 0, sizeof(queue_select)); + queue_select.vsi_id = hw->vsi_res->vsi_id; + + queue_select.rx_queues = BIT(hw->eth_dev->data->nb_rx_queues) - 1; + queue_select.tx_queues = BIT(hw->eth_dev->data->nb_tx_queues) - 1; + + memset(&args, 0, sizeof(args)); + args.v_op = VIRTCHNL_OP_DISABLE_QUEUES; + args.req_msg = (u8 *)&queue_select; + args.req_msglen = sizeof(queue_select); + + err = ice_dcf_execute_virtchnl_cmd(hw, &args); + if (err) + PMD_DRV_LOG(ERR, + "Failed to execute command of OP_DISABLE_QUEUES"); + + return err; +} diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h index 9470d1df7..68e1661c0 100644 --- a/drivers/net/ice/ice_dcf.h +++ b/drivers/net/ice/ice_dcf.h @@ -70,5 +70,6 @@ void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw); int ice_dcf_init_rss(struct ice_dcf_hw *hw); int ice_dcf_configure_queues(struct ice_dcf_hw *hw); int ice_dcf_config_irq_map(struct ice_dcf_hw *hw); - +int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on); +int ice_dcf_disable_queues(struct ice_dcf_hw *hw); #endif /* _ICE_DCF_H_ */ diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c index a190ab7c1..d0219a728 100644 --- a/drivers/net/ice/ice_dcf_ethdev.c +++ b/drivers/net/ice/ice_dcf_ethdev.c @@ -227,6 +227,272 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev, return 0; } +static int +alloc_rxq_mbufs(struct ice_rx_queue *rxq) +{ + volatile union ice_32b_rx_flex_desc *rxd; + struct rte_mbuf *mbuf = NULL; + uint64_t dma_addr; + uint16_t i; + + for (i = 0; i < rxq->nb_rx_desc; i++) { + mbuf = rte_mbuf_raw_alloc(rxq->mp); + if (unlikely(!mbuf)) { + PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX"); + return -ENOMEM; + } + + rte_mbuf_refcnt_set(mbuf, 1); + mbuf->next = NULL; + mbuf->data_off = RTE_PKTMBUF_HEADROOM; + mbuf->nb_segs = 1; + mbuf->port = rxq->port_id; + + dma_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf)); + + rxd = &rxq->rx_ring[i]; + rxd->read.pkt_addr = dma_addr; + rxd->read.hdr_addr = 0; + rxd->read.rsvd1 = 0; + rxd->read.rsvd2 = 0; + + rxq->sw_ring[i].mbuf = (void *)mbuf; + } + + return 0; +} + +static int +ice_dcf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) +{ + struct ice_dcf_adapter *ad = dev->data->dev_private; + struct iavf_hw *hw = &ad->real_hw.avf; + struct ice_rx_queue *rxq; + int err = 0; + + if (rx_queue_id >= dev->data->nb_rx_queues) + return -EINVAL; + + rxq = dev->data->rx_queues[rx_queue_id]; + + err = alloc_rxq_mbufs(rxq); + if (err) { + PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf"); + return err; + } + + rte_wmb(); + + /* Init the RX tail register. 
+	IAVF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	IAVF_WRITE_FLUSH(hw);
+
+	/* Ready to switch the queue on */
+	err = ice_dcf_switch_queue(&ad->real_hw, rx_queue_id, true, true);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+			    rx_queue_id);
+		return err;
+	}
+
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
+
+	return 0;
+}
+
+static inline void
+reset_rx_queue(struct ice_rx_queue *rxq)
+{
+	uint16_t len;
+	uint32_t i;
+
+	if (!rxq)
+		return;
+
+	len = rxq->nb_rx_desc + ICE_RX_MAX_BURST;
+
+	for (i = 0; i < len * sizeof(union ice_rx_flex_desc); i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+
+	for (i = 0; i < ICE_RX_MAX_BURST; i++)
+		rxq->sw_ring[rxq->nb_rx_desc + i].mbuf = &rxq->fake_mbuf;
+
+	/* for rx bulk */
+	rxq->rx_nb_avail = 0;
+	rxq->rx_next_avail = 0;
+	rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+
+	rxq->rx_tail = 0;
+	rxq->nb_rx_hold = 0;
+	rxq->pkt_first_seg = NULL;
+	rxq->pkt_last_seg = NULL;
+}
+
+static inline void
+reset_tx_queue(struct ice_tx_queue *txq)
+{
+	struct ice_tx_entry *txe;
+	uint32_t i, size;
+	uint16_t prev;
+
+	if (!txq) {
+		PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
+		return;
+	}
+
+	txe = txq->sw_ring;
+	size = sizeof(struct ice_tx_desc) * txq->nb_tx_desc;
+	for (i = 0; i < size; i++)
+		((volatile char *)txq->tx_ring)[i] = 0;
+
+	prev = (uint16_t)(txq->nb_tx_desc - 1);
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		txq->tx_ring[i].cmd_type_offset_bsz =
+			rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE);
+		txe[i].mbuf = NULL;
+		txe[i].last_id = i;
+		txe[prev].next_id = i;
+		prev = i;
+	}
+
+	txq->tx_tail = 0;
+	txq->nb_tx_used = 0;
+
+	txq->last_desc_cleaned = txq->nb_tx_desc - 1;
+	txq->nb_tx_free = txq->nb_tx_desc - 1;
+
+	txq->tx_next_dd = txq->tx_rs_thresh - 1;
+	txq->tx_next_rs = txq->tx_rs_thresh - 1;
+}
+
+static int
+ice_dcf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &ad->real_hw;
+	struct ice_rx_queue *rxq;
+	int err;
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	err = ice_dcf_switch_queue(hw, rx_queue_id, true, false);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+			    rx_queue_id);
+		return err;
+	}
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq->rx_rel_mbufs(rxq);
+	reset_rx_queue(rxq);
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+static int
+ice_dcf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct iavf_hw *hw = &ad->real_hw.avf;
+	struct ice_tx_queue *txq;
+	int err = 0;
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	txq = dev->data->tx_queues[tx_queue_id];
+
+	/* Init the TX tail register. */
*/ + txq->qtx_tail = hw->hw_addr + IAVF_QTX_TAIL1(tx_queue_id); + IAVF_PCI_REG_WRITE(txq->qtx_tail, 0); + IAVF_WRITE_FLUSH(hw); + + /* Ready to switch the queue on */ + err = ice_dcf_switch_queue(&ad->real_hw, tx_queue_id, false, true); + + if (err) { + PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on", + tx_queue_id); + return err; + } + + dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED; + + return 0; +} + +static int +ice_dcf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) +{ + struct ice_dcf_adapter *ad = dev->data->dev_private; + struct ice_dcf_hw *hw = &ad->real_hw; + struct ice_tx_queue *txq; + int err; + + if (tx_queue_id >= dev->data->nb_tx_queues) + return -EINVAL; + + err = ice_dcf_switch_queue(hw, tx_queue_id, false, false); + if (err) { + PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off", + tx_queue_id); + return err; + } + + txq = dev->data->tx_queues[tx_queue_id]; + txq->tx_rel_mbufs(txq); + reset_tx_queue(txq); + dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; + + return 0; +} + +static int +ice_dcf_start_queues(struct rte_eth_dev *dev) +{ + struct ice_rx_queue *rxq; + struct ice_tx_queue *txq; + int nb_rxq = 0; + int nb_txq, i; + + for (nb_txq = 0; nb_txq < dev->data->nb_tx_queues; nb_txq++) { + txq = dev->data->tx_queues[nb_txq]; + if (txq->tx_deferred_start) + continue; + if (ice_dcf_tx_queue_start(dev, nb_txq) != 0) { + PMD_DRV_LOG(ERR, "Failed to start queue %u", nb_txq); + goto tx_err; + } + } + + for (nb_rxq = 0; nb_rxq < dev->data->nb_rx_queues; nb_rxq++) { + rxq = dev->data->rx_queues[nb_rxq]; + if (rxq->rx_deferred_start) + continue; + if (ice_dcf_rx_queue_start(dev, nb_rxq) != 0) { + PMD_DRV_LOG(ERR, "Failed to start queue %u", nb_rxq); + goto rx_err; + } + } + + return 0; + + /* stop the already-started queues if starting them all failed */ +rx_err: + for (i = 0; i < nb_rxq; i++) + ice_dcf_rx_queue_stop(dev, i); +tx_err: + for (i = 0; i < nb_txq; i++) + ice_dcf_tx_queue_stop(dev, i); + + return -1; +} + static int ice_dcf_dev_start(struct rte_eth_dev *dev) { @@ -267,15 +533,59 @@ ice_dcf_dev_start(struct rte_eth_dev *dev) return ret; } + if (dev->data->dev_conf.intr_conf.rxq != 0) { + rte_intr_disable(intr_handle); + rte_intr_enable(intr_handle); + } + + ret = ice_dcf_start_queues(dev); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to enable queues"); + return ret; + } + dev->data->dev_link.link_status = ETH_LINK_UP; return 0; } +static void +ice_dcf_stop_queues(struct rte_eth_dev *dev) +{ + struct ice_dcf_adapter *ad = dev->data->dev_private; + struct ice_dcf_hw *hw = &ad->real_hw; + struct ice_rx_queue *rxq; + struct ice_tx_queue *txq; + int ret, i; + + /* Stop all queues */ + ret = ice_dcf_disable_queues(hw); + if (ret) + PMD_DRV_LOG(WARNING, "Failed to stop queues"); + + for (i = 0; i < dev->data->nb_tx_queues; i++) { + txq = dev->data->tx_queues[i]; + if (!txq) + continue; + txq->tx_rel_mbufs(txq); + reset_tx_queue(txq); + dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED; + } + for (i = 0; i < dev->data->nb_rx_queues; i++) { + rxq = dev->data->rx_queues[i]; + if (!rxq) + continue; + rxq->rx_rel_mbufs(rxq); + reset_rx_queue(rxq); + dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED; + } +} + static void ice_dcf_dev_stop(struct rte_eth_dev *dev) { struct ice_dcf_adapter *dcf_ad = dev->data->dev_private; + struct rte_intr_handle *intr_handle = dev->intr_handle; struct ice_adapter *ad = &dcf_ad->parent; if (ad->pf.adapter_stopped == 1) { @@ -283,6 +593,14 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev) 
return; } + ice_dcf_stop_queues(dev); + + rte_intr_efd_disable(intr_handle); + if (intr_handle->intr_vec) { + rte_free(intr_handle->intr_vec); + intr_handle->intr_vec = NULL; + } + dev->data->dev_link.link_status = ETH_LINK_DOWN; ad->pf.adapter_stopped = 1; } @@ -477,6 +795,10 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = { .tx_queue_setup = ice_tx_queue_setup, .rx_queue_release = ice_rx_queue_release, .tx_queue_release = ice_tx_queue_release, + .rx_queue_start = ice_dcf_rx_queue_start, + .tx_queue_start = ice_dcf_tx_queue_start, + .rx_queue_stop = ice_dcf_rx_queue_stop, + .tx_queue_stop = ice_dcf_tx_queue_stop, .link_update = ice_dcf_link_update, .stats_get = ice_dcf_stats_get, .stats_reset = ice_dcf_stats_reset, 
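The four per-queue ops registered above are reached through the generic ethdev API rather than called directly. As a rough application-side illustration only (the port and queue ids are hypothetical, and the port is assumed to be configured and started with deferred-start queues):

#include <rte_ethdev.h>

/* Restart one Rx/Tx queue pair on a (hypothetical) DCF port; each of
 * these generic calls dispatches to the matching
 * ice_dcf_{rx,tx}_queue_{start,stop} callback registered above. */
static int
restart_queue_pair(uint16_t port_id, uint16_t qid)
{
	int ret;

	ret = rte_eth_dev_rx_queue_stop(port_id, qid);
	if (ret != 0)
		return ret;
	ret = rte_eth_dev_tx_queue_stop(port_id, qid);
	if (ret != 0)
		return ret;

	/* Bring Tx back first so completed descriptors can be posted,
	 * then re-enable Rx. */
	ret = rte_eth_dev_tx_queue_start(port_id, qid);
	if (ret != 0)
		return ret;
	return rte_eth_dev_rx_queue_start(port_id, qid);
}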
From patchwork Fri Jun 19 08:50:43 2020
X-Patchwork-Submitter: "Xu, Ting"
X-Patchwork-Id: 71789
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Ting Xu
To: dev@dpdk.org
Cc: qi.z.zhang@intel.com, qiming.yang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com, marko.kovacevic@intel.com, john.mcnamara@intel.com, Ting Xu
Date: Fri, 19 Jun 2020 16:50:43 +0800
Message-Id: <20200619085045.22875-11-ting.xu@intel.com>
In-Reply-To: <20200619085045.22875-1-ting.xu@intel.com>
References: <20200605201737.33766-1-ting.xu@intel.com> <20200619085045.22875-1-ting.xu@intel.com>
Subject: [dpdk-dev] [PATCH v4 10/12] net/ice: enable stats for DCF

From: Qi Zhang

Add support to get and reset Rx/Tx statistics in DCF. Statistics are queried from the PF; reset is implemented in software by latching the current counter values as an offset.

Signed-off-by: Qi Zhang Signed-off-by: Ting Xu --- drivers/net/ice/ice_dcf.c | 27 ++++++++ drivers/net/ice/ice_dcf.h | 4 ++ drivers/net/ice/ice_dcf_ethdev.c | 102 +++++++++++++++++++++++++++---- 3 files changed, 120 insertions(+), 13 deletions(-) diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c index f18c0f16a..fbeb58ee1 100644 --- a/drivers/net/ice/ice_dcf.c +++ b/drivers/net/ice/ice_dcf.c @@ -993,3 +993,30 @@ ice_dcf_disable_queues(struct ice_dcf_hw *hw) return err; } + +int +ice_dcf_query_stats(struct ice_dcf_hw *hw, + struct virtchnl_eth_stats *pstats) +{ + struct virtchnl_queue_select q_stats; + struct dcf_virtchnl_cmd args; + int err; + + memset(&q_stats, 0, sizeof(q_stats)); + q_stats.vsi_id = hw->vsi_res->vsi_id; + + args.v_op = VIRTCHNL_OP_GET_STATS; + args.req_msg = (uint8_t *)&q_stats; + args.req_msglen = sizeof(q_stats); + args.rsp_msglen = sizeof(*pstats); + args.rsp_msgbuf = (uint8_t *)pstats; + args.rsp_buflen = sizeof(*pstats); + + err = ice_dcf_execute_virtchnl_cmd(hw, &args); + if (err) { + PMD_DRV_LOG(ERR, "fail to execute command OP_GET_STATS"); + return err; + } + + return 0; +} diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h index 68e1661c0..e82bc7748 100644 --- a/drivers/net/ice/ice_dcf.h +++ b/drivers/net/ice/ice_dcf.h @@ -58,6 +58,7 @@ struct ice_dcf_hw { uint16_t msix_base; uint16_t nb_msix; uint16_t rxq_map[16]; + struct virtchnl_eth_stats eth_stats_offset; }; int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw, @@ -72,4 +73,7 @@ int ice_dcf_configure_queues(struct ice_dcf_hw *hw); int ice_dcf_config_irq_map(struct ice_dcf_hw *hw); int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on); int ice_dcf_disable_queues(struct ice_dcf_hw *hw); +int ice_dcf_query_stats(struct ice_dcf_hw *hw, + struct virtchnl_eth_stats *pstats); + #endif /* _ICE_DCF_H_ */ diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c index d0219a728..38e321f4b 100644 --- a/drivers/net/ice/ice_dcf_ethdev.c +++ b/drivers/net/ice/ice_dcf_ethdev.c @@ -697,19 +697,6 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev, return 0; } -static int -ice_dcf_stats_get(__rte_unused struct rte_eth_dev *dev, - __rte_unused struct rte_eth_stats *igb_stats) -{ - return 0; -} - -static int -ice_dcf_stats_reset(__rte_unused struct rte_eth_dev *dev) -{ - return 0; -} - static int ice_dcf_dev_promiscuous_enable(__rte_unused struct rte_eth_dev *dev) { @@ -762,6 +749,95 @@ ice_dcf_dev_filter_ctrl(struct rte_eth_dev *dev, return ret; } +#define ICE_DCF_32_BIT_WIDTH (CHAR_BIT * 4) +#define ICE_DCF_48_BIT_WIDTH (CHAR_BIT * 6) +#define ICE_DCF_48_BIT_MASK RTE_LEN2MASK(ICE_DCF_48_BIT_WIDTH, uint64_t) + +static void +ice_dcf_stat_update_48(uint64_t *offset, uint64_t *stat) +{ + if (*stat >= *offset) + *stat = *stat - *offset; + else + *stat = (uint64_t)((*stat + + ((uint64_t)1 << ICE_DCF_48_BIT_WIDTH)) - *offset); + + *stat &= ICE_DCF_48_BIT_MASK; +} + +static void +ice_dcf_stat_update_32(uint64_t *offset, uint64_t *stat) +{ + if (*stat >= *offset) + *stat = (uint64_t)(*stat - *offset); + else + *stat = (uint64_t)((*stat + + ((uint64_t)1 << ICE_DCF_32_BIT_WIDTH)) - *offset); +} + +static void +ice_dcf_update_stats(struct virtchnl_eth_stats *oes, + struct virtchnl_eth_stats *nes) +{ + ice_dcf_stat_update_48(&oes->rx_bytes, &nes->rx_bytes); + ice_dcf_stat_update_48(&oes->rx_unicast, &nes->rx_unicast); + ice_dcf_stat_update_48(&oes->rx_multicast, &nes->rx_multicast); + ice_dcf_stat_update_48(&oes->rx_broadcast, &nes->rx_broadcast); + 
ice_dcf_stat_update_32(&oes->rx_discards, &nes->rx_discards); + ice_dcf_stat_update_48(&oes->tx_bytes, &nes->tx_bytes); + ice_dcf_stat_update_48(&oes->tx_unicast, &nes->tx_unicast); + ice_dcf_stat_update_48(&oes->tx_multicast, &nes->tx_multicast); + ice_dcf_stat_update_48(&oes->tx_broadcast, &nes->tx_broadcast); + ice_dcf_stat_update_32(&oes->tx_errors, &nes->tx_errors); + ice_dcf_stat_update_32(&oes->tx_discards, &nes->tx_discards); +} + + +static int +ice_dcf_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) +{ + struct ice_dcf_adapter *ad = dev->data->dev_private; + struct ice_dcf_hw *hw = &ad->real_hw; + struct virtchnl_eth_stats pstats; + int ret; + + ret = ice_dcf_query_stats(hw, &pstats); + if (ret == 0) { + ice_dcf_update_stats(&hw->eth_stats_offset, &pstats); + stats->ipackets = pstats.rx_unicast + pstats.rx_multicast + + pstats.rx_broadcast - pstats.rx_discards; + stats->opackets = pstats.tx_broadcast + pstats.tx_multicast + + pstats.tx_unicast; + stats->imissed = pstats.rx_discards; + stats->oerrors = pstats.tx_errors + pstats.tx_discards; + stats->ibytes = pstats.rx_bytes; + stats->ibytes -= stats->ipackets * RTE_ETHER_CRC_LEN; + stats->obytes = pstats.tx_bytes; + } else { + PMD_DRV_LOG(ERR, "Failed to get statistics"); + } + return ret; +} + +static int +ice_dcf_stats_reset(struct rte_eth_dev *dev) +{ + struct ice_dcf_adapter *ad = dev->data->dev_private; + struct ice_dcf_hw *hw = &ad->real_hw; + struct virtchnl_eth_stats pstats; + int ret; + + /* read stat values to clear hardware registers */ + ret = ice_dcf_query_stats(hw, &pstats); + if (ret != 0) + return ret; + + /* set stats offset based on current values */ + hw->eth_stats_offset = pstats; + + return 0; +} + static void ice_dcf_dev_close(struct rte_eth_dev *dev) { 
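A note on the offset arithmetic above: the VSI counters reported by the PF are 48 (or 32) bits wide, so a new reading below the stored offset means the hardware counter wrapped, and one full counter period must be added back before subtracting. The same computation in a standalone form, with made-up values in the example:

#include <stdint.h>

/* Standalone illustration of the ice_dcf_stat_update_48() logic:
 * return the number of events since `offset` for a 48-bit counter. */
static uint64_t
delta48(uint64_t offset, uint64_t stat)
{
	if (stat < offset)	/* counter wrapped past 2^48 */
		stat += (uint64_t)1 << 48;

	return (stat - offset) & (((uint64_t)1 << 48) - 1);
}

/* Example: offset = 2^48 - 16 and a new reading of 16 gives
 * delta48() = 32, i.e. 16 events before the wrap plus 16 after it. */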
From patchwork Fri Jun 19 08:50:44 2020
X-Patchwork-Submitter: "Xu, Ting"
X-Patchwork-Id: 71790
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Ting Xu
To: dev@dpdk.org
Cc: qi.z.zhang@intel.com, qiming.yang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com, marko.kovacevic@intel.com, john.mcnamara@intel.com, Ting Xu
Date: Fri, 19 Jun 2020 16:50:44 +0800
Message-Id: <20200619085045.22875-12-ting.xu@intel.com>
In-Reply-To: <20200619085045.22875-1-ting.xu@intel.com>
References: <20200605201737.33766-1-ting.xu@intel.com> <20200619085045.22875-1-ting.xu@intel.com>
Subject: [dpdk-dev] [PATCH v4 11/12] net/ice: set MAC filter during dev start for DCF

From: Qi Zhang

Add support to add and delete MAC address filters in DCF. The port's default MAC address is programmed at device start and removed at device stop.

Signed-off-by: Qi Zhang Signed-off-by: Ting Xu --- drivers/net/ice/ice_dcf.c | 42 ++++++++++++++++++++++++++++++++ drivers/net/ice/ice_dcf.h | 1 + drivers/net/ice/ice_dcf_ethdev.c | 7 ++++++ 3 files changed, 50 insertions(+) diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c index fbeb58ee1..712f43825 100644 --- a/drivers/net/ice/ice_dcf.c +++ b/drivers/net/ice/ice_dcf.c @@ -1020,3 +1020,45 @@ ice_dcf_query_stats(struct ice_dcf_hw *hw, return 0; } + +int +ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add) +{ + struct virtchnl_ether_addr_list *list; + struct rte_ether_addr *addr; + struct dcf_virtchnl_cmd args; + int len, err = 0; + + len = sizeof(struct virtchnl_ether_addr_list); + addr = hw->eth_dev->data->mac_addrs; + len += sizeof(struct virtchnl_ether_addr); + + list = rte_zmalloc(NULL, len, 0); + if (!list) { + PMD_DRV_LOG(ERR, "fail to allocate memory"); + return -ENOMEM; + } + + rte_memcpy(list->list[0].addr, addr->addr_bytes, + sizeof(addr->addr_bytes)); + PMD_DRV_LOG(DEBUG, "add/rm mac:%x:%x:%x:%x:%x:%x", + addr->addr_bytes[0], addr->addr_bytes[1], + addr->addr_bytes[2], addr->addr_bytes[3], + addr->addr_bytes[4], addr->addr_bytes[5]); + + list->vsi_id = hw->vsi_res->vsi_id; + list->num_elements = 1; + + memset(&args, 0, sizeof(args)); + args.v_op = add ? VIRTCHNL_OP_ADD_ETH_ADDR : + VIRTCHNL_OP_DEL_ETH_ADDR; + args.req_msg = (uint8_t *)list; + args.req_msglen = len; + err = ice_dcf_execute_virtchnl_cmd(hw, &args); + if (err) + PMD_DRV_LOG(ERR, "fail to execute command %s", + add ? 
"OP_ADD_ETHER_ADDRESS" : + "OP_DEL_ETHER_ADDRESS"); + rte_free(list); + return err; +} diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h index e82bc7748..a44a01e90 100644 --- a/drivers/net/ice/ice_dcf.h +++ b/drivers/net/ice/ice_dcf.h @@ -75,5 +75,6 @@ int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on); int ice_dcf_disable_queues(struct ice_dcf_hw *hw); int ice_dcf_query_stats(struct ice_dcf_hw *hw, struct virtchnl_eth_stats *pstats); +int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add); #endif /* _ICE_DCF_H_ */ diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c index 38e321f4b..c39dfc1cc 100644 --- a/drivers/net/ice/ice_dcf_ethdev.c +++ b/drivers/net/ice/ice_dcf_ethdev.c @@ -544,6 +544,12 @@ ice_dcf_dev_start(struct rte_eth_dev *dev) return ret; } + ret = ice_dcf_add_del_all_mac_addr(hw, true); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to add mac addr"); + return ret; + } + dev->data->dev_link.link_status = ETH_LINK_UP; return 0; @@ -601,6 +607,7 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev) intr_handle->intr_vec = NULL; } + ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false); dev->data->dev_link.link_status = ETH_LINK_DOWN; ad->pf.adapter_stopped = 1; } From patchwork Fri Jun 19 08:50:45 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xu, Ting" X-Patchwork-Id: 71791 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 10B57A04F5; Fri, 19 Jun 2020 10:49:18 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 7BEFE1BF3C; Fri, 19 Jun 2020 10:47:54 +0200 (CEST) Received: from mga06.intel.com (mga06.intel.com [134.134.136.31]) by dpdk.org (Postfix) with ESMTP id D69461BED7 for ; Fri, 19 Jun 2020 10:47:49 +0200 (CEST) IronPort-SDR: 2HeKAkTYODoSPupi1hcxLOVzi7jYDD3bBG5gcOOKj1GLQ6jHwnH1Lhay8WOsf58RueP9jvSN1V S66N4cACWKEQ== X-IronPort-AV: E=McAfee;i="6000,8403,9656"; a="204384447" X-IronPort-AV: E=Sophos;i="5.75,254,1589266800"; d="scan'208";a="204384447" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Jun 2020 01:47:49 -0700 IronPort-SDR: he8qMZaESj3/YQ6whdGAovU4wTh2Mgv+6pNhRlNgHTEyNlsRVtxHao5UtYFEgaIUcE+I1veQpQ Vfa5kBVKtgVg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.75,254,1589266800"; d="scan'208";a="477548016" Received: from dpdk-xuting-second.sh.intel.com ([10.67.116.154]) by fmsmga006.fm.intel.com with ESMTP; 19 Jun 2020 01:47:47 -0700 From: Ting Xu To: dev@dpdk.org Cc: qi.z.zhang@intel.com, qiming.yang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com, marko.kovacevic@intel.com, john.mcnamara@intel.com, Ting Xu Date: Fri, 19 Jun 2020 16:50:45 +0800 Message-Id: <20200619085045.22875-13-ting.xu@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200619085045.22875-1-ting.xu@intel.com> References: <20200605201737.33766-1-ting.xu@intel.com> <20200619085045.22875-1-ting.xu@intel.com> Subject: [dpdk-dev] [PATCH v4 12/12] doc: enable DCF datapath configuration X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: 
From patchwork Fri Jun 19 08:50:45 2020
X-Patchwork-Submitter: "Xu, Ting"
X-Patchwork-Id: 71791
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Ting Xu
To: dev@dpdk.org
Cc: qi.z.zhang@intel.com, qiming.yang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com, marko.kovacevic@intel.com, john.mcnamara@intel.com, Ting Xu
Date: Fri, 19 Jun 2020 16:50:45 +0800
Message-Id: <20200619085045.22875-13-ting.xu@intel.com>
In-Reply-To: <20200619085045.22875-1-ting.xu@intel.com>
References: <20200605201737.33766-1-ting.xu@intel.com> <20200619085045.22875-1-ting.xu@intel.com>
Subject: [dpdk-dev] [PATCH v4 12/12] doc: enable DCF datapath configuration

Add documentation for DCF datapath configuration to the DPDK 20.08 release notes.

Signed-off-by: Ting Xu --- doc/guides/rel_notes/release_20_08.rst | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst index dee4ccbb5..1a3a4cdb2 100644 --- a/doc/guides/rel_notes/release_20_08.rst +++ b/doc/guides/rel_notes/release_20_08.rst @@ -56,6 +56,12 @@ New Features Also, make sure to start the actual text at the margin. ========================================================= +* **Updated the Intel ice driver.** + + Updated the Intel ice driver with new features and improvements, including: + + * Added support for DCF datapath configuration. + * **Updated Mellanox mlx5 driver.** Updated Mellanox mlx5 driver with new features and improvements, including:
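With the series applied, the DCF datapath added by these patches can be exercised end to end roughly as follows (a minimal sketch; the port id and mempool are hypothetical, and error handling is abbreviated):

#include <stdio.h>
#include <inttypes.h>
#include <string.h>
#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Hypothetical smoke test for an already-probed DCF port: start it
 * (the PMD switches the queues on and installs the MAC filter), read
 * the PF-backed statistics, then stop it again. */
static int
dcf_port_smoke_test(uint16_t port_id, struct rte_mempool *mb_pool)
{
	struct rte_eth_conf conf;
	struct rte_eth_stats stats;
	int ret;

	memset(&conf, 0, sizeof(conf));
	ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
	if (ret != 0)
		return ret;

	ret = rte_eth_rx_queue_setup(port_id, 0, 512,
				     rte_eth_dev_socket_id(port_id),
				     NULL, mb_pool);
	if (ret != 0)
		return ret;

	ret = rte_eth_tx_queue_setup(port_id, 0, 512,
				     rte_eth_dev_socket_id(port_id), NULL);
	if (ret != 0)
		return ret;

	ret = rte_eth_dev_start(port_id);	/* ice_dcf_dev_start() path */
	if (ret != 0)
		return ret;

	ret = rte_eth_stats_get(port_id, &stats);	/* VIRTCHNL_OP_GET_STATS */
	if (ret == 0)
		printf("ipackets=%" PRIu64 " ibytes=%" PRIu64 "\n",
		       stats.ipackets, stats.ibytes);

	rte_eth_dev_stop(port_id);	/* queues disabled, MAC filter removed */
	return ret;
}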