From patchwork Wed May  2 02:43:45 2018
X-Patchwork-Submitter: Qi Zhang <qi.z.zhang@intel.com>
X-Patchwork-Id: 39224
From: Qi Zhang <qi.z.zhang@intel.com>
To: ferruh.yigit@intel.com, konstantin.ananyev@intel.com
Cc: dev@dpdk.org, beilei.xing@intel.com, Qi Zhang <qi.z.zhang@intel.com>
Date: Wed, 2 May 2018 10:43:45 +0800
Message-Id: <20180502024346.275514-1-qi.z.zhang@intel.com>
Subject: [dpdk-dev] [PATCH 1/2] net/i40e: fix queue offload initialize

Add missing queue offload initialization.

Fixes: 7497d3e2f777 ("net/i40e: convert to new Tx offloads API")
Fixes: c3ac7c5b0b8a ("net/i40e: convert to new Rx offloads API")

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/i40e/i40e_ethdev.c         | 1 +
 drivers/net/i40e/i40e_ethdev_vf.c      | 1 +
 drivers/net/i40e/i40e_rxtx.c           | 2 ++
 drivers/net/i40e/i40e_vf_representor.c | 1 +
 4 files changed, 5 insertions(+)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 284e9cb64..a001d5b99 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3345,6 +3345,7 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.tx_rs_thresh = I40E_DEFAULT_TX_RSBIT_THRESH,
 		.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
 				ETH_TXQ_FLAGS_NOOFFLOADS,
+		.offloads = 0,
 	};
 
 	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 48e7ac21e..de5f460e9 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -2238,6 +2238,7 @@ i40evf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.tx_rs_thresh = I40E_DEFAULT_TX_RSBIT_THRESH,
 		.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
 				ETH_TXQ_FLAGS_NOOFFLOADS,
+		.offloads = 0,
 	};
 
 	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 006f5b846..755109ee5 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1857,6 +1857,7 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->drop_en = rx_conf->rx_drop_en;
 	rxq->vsi = vsi;
 	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+	rxq->offloads = rx_conf->offloads;
 
 	/* Allocate the maximun number of RX ring hardware descriptor. */
 	len = I40E_MAX_RING_DESC;
@@ -2297,6 +2298,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->reg_idx = reg_idx;
 	txq->port_id = dev->data->port_id;
 	txq->txq_flags = tx_conf->txq_flags;
+	txq->offloads = tx_conf->offloads;
 	txq->vsi = vsi;
 	txq->tx_deferred_start = tx_conf->tx_deferred_start;
 
diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
index a8aa0115d..7b67e23ae 100644
--- a/drivers/net/i40e/i40e_vf_representor.c
+++ b/drivers/net/i40e/i40e_vf_representor.c
@@ -81,6 +81,7 @@ i40e_vf_representor_dev_infos_get(struct rte_eth_dev *ethdev,
 		.tx_rs_thresh = I40E_DEFAULT_TX_RSBIT_THRESH,
 		.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
 				ETH_TXQ_FLAGS_NOOFFLOADS,
+		.offloads = 0,
 	};
 
 	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
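
For context, the fields this patch touches are the ones an application reads back through rte_eth_dev_info_get() and then passes into queue setup under the new per-queue offloads API. The sketch below is not part of the patch; it is a minimal illustration of that flow, and the port/queue ids, descriptor count, and the DEV_RX_OFFLOAD_CHECKSUM request are placeholder choices.

	#include <rte_ethdev.h>
	#include <rte_mempool.h>

	/* Illustrative only: start from the PMD's reported default Rx queue
	 * config (whose .offloads field this patch initializes explicitly)
	 * and pass it to queue setup under the new offloads API.
	 * port_id, queue_id and the descriptor count are placeholders. */
	static int
	setup_rx_queue_with_defaults(uint16_t port_id, uint16_t queue_id,
				     struct rte_mempool *mb_pool)
	{
		struct rte_eth_dev_info dev_info;
		struct rte_eth_rxconf rx_conf;

		rte_eth_dev_info_get(port_id, &dev_info);

		/* Begin from the driver-provided default queue config. */
		rx_conf = dev_info.default_rxconf;

		/* Request an extra per-queue offload only if the port
		 * actually reports support for it. */
		rx_conf.offloads |= DEV_RX_OFFLOAD_CHECKSUM &
				    dev_info.rx_offload_capa;

		return rte_eth_rx_queue_setup(port_id, queue_id, 1024,
					      rte_eth_dev_socket_id(port_id),
					      &rx_conf, mb_pool);
	}

The Tx path is symmetric: dev_info.default_txconf.offloads feeds rte_eth_tx_queue_setup() the same way, which is why the default_txconf initializers and the per-queue rxq->offloads/txq->offloads assignments are all covered by this fix.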