From patchwork Tue Jun 28 06:20:51 2022
X-Patchwork-Submitter: Zhichao Zeng
X-Patchwork-Id: 113495
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: zhichaox.zeng@intel.com
To: dev@dpdk.org
Cc: stable@dpdk.org, qiming.yang@intel.com, qi.z.zhang@intel.com,
 Zhichao Zeng, alvinx.zhang@intel.com, Junfeng Guo, Simei Su, Ferruh Yigit
Subject: [PATCH v3] net/igc: move the initialization of data path into dev_init
Date: Tue, 28 Jun 2022 14:20:51 +0800
Message-Id: <20220628062052.5397-1-zhichaox.zeng@intel.com>
In-Reply-To: <20220622104907.862666-1-zhichaox.zeng@intel.com>
References: <20220622104907.862666-1-zhichaox.zeng@intel.com>

From: Zhichao Zeng <zhichaox.zeng@intel.com>

The upper-layer application usually relies on the common "dev_init"
interface to set up the data path. In the igc driver, however, the data
path function pointers are assigned in "igc_rx_init" and
"eth_igc_tx_queue_setup", while other drivers assign them in "dev_init".
As a result, when the upper-layer application invokes these function
pointers before they have been assigned, a segmentation fault occurs.

This patch moves the assignment of the data path function pointers into
"eth_igc_dev_init", consistent with other drivers, to avoid the
segmentation fault.
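To illustrate the failure mode, below is a minimal sketch (not part of the
patch; the port, queue, and burst size are arbitrary, and the secondary
process scenario is just one example of an upper-layer caller).
rte_eth_rx_burst() ultimately dispatches through the driver's rx_pkt_burst
callback, and a secondary process runs only "dev_init" and skips queue
setup, so with the previous placement the callback was never assigned in
that process:

/* Illustrative only: hypothetical secondary process polling a port that
 * the primary process has already configured, set up queues for, and
 * started.  Launch with e.g. --proc-type=secondary.
 */
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

int
main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		return -1;

	struct rte_mbuf *pkts[32];

	/* dispatches to the driver's rx_pkt_burst (igc_recv_pkts for igc) */
	uint16_t nb = rte_eth_rx_burst(0 /* port */, 0 /* queue */, pkts, 32);

	for (uint16_t i = 0; i < nb; i++)
		rte_pktmbuf_free(pkts[i]);

	rte_eal_cleanup();
	return 0;
}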
Fixes: a5aeb2b9e225 ("net/igc: support Rx and Tx")
Cc: alvinx.zhang@intel.com
Cc: stable@dpdk.org

Signed-off-by: Zhichao Zeng <zhichaox.zeng@intel.com>
---
v2: remove unnecessary parameters, move declaration to relevant header file
---
v3: remove redundant code, optimize commit log
---
 drivers/net/igc/igc_ethdev.c |  3 +++
 drivers/net/igc/igc_txrx.c   | 10 +++-------
 drivers/net/igc/igc_txrx.h   |  4 ++++
 3 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index b9933b395d..25fb91bfec 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -1234,6 +1234,9 @@ eth_igc_dev_init(struct rte_eth_dev *dev)
 	dev->rx_queue_count = eth_igc_rx_queue_count;
 	dev->rx_descriptor_status = eth_igc_rx_descriptor_status;
 	dev->tx_descriptor_status = eth_igc_tx_descriptor_status;
+	dev->rx_pkt_burst = igc_recv_pkts;
+	dev->tx_pkt_burst = igc_xmit_pkts;
+	dev->tx_pkt_prepare = eth_igc_prep_pkts;
 
 	/*
 	 * for secondary processes, we don't initialize any further as primary
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index e48d5df11a..f38c5e7863 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -345,7 +345,7 @@ rx_desc_get_pkt_info(struct igc_rx_queue *rxq, struct rte_mbuf *rxm,
 	rxm->packet_type = rx_desc_pkt_info_to_pkt_type(pkt_info);
 }
 
-static uint16_t
+uint16_t
 igc_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 {
 	struct igc_rx_queue * const rxq = rx_queue;
@@ -1071,8 +1071,6 @@ igc_rx_init(struct rte_eth_dev *dev)
 	uint16_t i;
 	int ret;
 
-	dev->rx_pkt_burst = igc_recv_pkts;
-
 	/*
 	 * Make sure receives are disabled while setting
 	 * up the descriptor ring.
@@ -1397,7 +1395,7 @@ eth_igc_rx_queue_setup(struct rte_eth_dev *dev,
 }
 
 /* prepare packets for transmit */
-static uint16_t
+uint16_t
 eth_igc_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint16_t nb_pkts)
 {
@@ -1604,7 +1602,7 @@ tx_desc_cksum_flags_to_olinfo(uint64_t ol_flags)
 	return tmp;
 }
 
-static uint16_t
+uint16_t
 igc_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
 	struct igc_tx_queue * const txq = tx_queue;
@@ -2030,8 +2028,6 @@ int eth_igc_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			txq->sw_ring, txq->tx_ring, txq->tx_ring_phys_addr);
 	igc_reset_tx_queue(txq);
 
-	dev->tx_pkt_burst = igc_xmit_pkts;
-	dev->tx_pkt_prepare = &eth_igc_prep_pkts;
 	dev->data->tx_queues[queue_idx] = txq;
 	txq->offloads = tx_conf->offloads;
 
diff --git a/drivers/net/igc/igc_txrx.h b/drivers/net/igc/igc_txrx.h
index 535108a868..a5240df9d7 100644
--- a/drivers/net/igc/igc_txrx.h
+++ b/drivers/net/igc/igc_txrx.h
@@ -49,6 +49,10 @@ void eth_igc_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 	struct rte_eth_txq_info *qinfo);
 void eth_igc_vlan_strip_queue_set(struct rte_eth_dev *dev,
 	uint16_t rx_queue_id, int on);
+uint16_t igc_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+uint16_t igc_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+uint16_t eth_igc_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+	uint16_t nb_pkts);
 #ifdef __cplusplus
 }
 #endif