From patchwork Tue Nov 28 21:26:02 2023
X-Patchwork-Submitter: Qi Zhang
X-Patchwork-Id: 134673
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Qi Zhang
To: qiming.yang@intel.com
Cc: timothy.miskell@intel.com, dev@dpdk.org, Qi Zhang, stable@dpdk.org
Subject: [PATCH] net/ice: fix link update
Date: Tue, 28 Nov 2023 16:26:02 -0500
Message-Id: <20231128212602.2084420-1-qi.z.zhang@intel.com>
X-Mailer: git-send-email 2.31.1
List-Id: DPDK patches and discussions

The ice_aq_get_link_info function is not thread-safe. However, it can be
invoked simultaneously from both dev_start and the LSC interrupt handler,
potentially leading to unexpected adminq errors. This patch addresses the
issue by introducing a thread-safe wrapper that uses a spinlock.

Fixes: cf911d90e366 ("net/ice: support link update")
Cc: stable@dpdk.org

Signed-off-by: Qi Zhang
---
 drivers/net/ice/ice_ethdev.c | 26 ++++++++++++++++++++------
 drivers/net/ice/ice_ethdev.h |  3 +++
 2 files changed, 23 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 3ccba4db80..1f8ab5158a 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1804,6 +1804,7 @@ ice_pf_setup(struct ice_pf *pf)
 	}
 
 	pf->main_vsi = vsi;
+	rte_spinlock_init(&pf->link_lock);
 
 	return 0;
 }
@@ -3621,17 +3622,31 @@ ice_rxq_intr_setup(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static enum ice_status
+ice_get_link_info_safe(struct ice_pf *pf, bool ena_lse,
+		       struct ice_link_status *link)
+{
+	struct ice_hw *hw = ICE_PF_TO_HW(pf);
+	int ret;
+
+	rte_spinlock_lock(&pf->link_lock);
+
+	ret = ice_aq_get_link_info(hw->port_info, ena_lse, link, NULL);
+
+	rte_spinlock_unlock(&pf->link_lock);
+
+	return ret;
+}
+
 static void
 ice_get_init_link_status(struct rte_eth_dev *dev)
 {
-	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	bool enable_lse = dev->data->dev_conf.intr_conf.lsc ? true : false;
 	struct ice_link_status link_status;
 	int ret;
 
-	ret = ice_aq_get_link_info(hw->port_info, enable_lse,
-				   &link_status, NULL);
+	ret = ice_get_link_info_safe(pf, enable_lse, &link_status);
 	if (ret != ICE_SUCCESS) {
 		PMD_DRV_LOG(ERR, "Failed to get link info");
 		pf->init_link_up = false;
@@ -3996,7 +4011,7 @@ ice_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 {
 #define CHECK_INTERVAL 50  /* 50ms */
 #define MAX_REPEAT_TIME 40  /* 2s (40 * 50ms) in total */
-	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	struct ice_link_status link_status;
 	struct rte_eth_link link, old;
 	int status;
@@ -4010,8 +4025,7 @@ ice_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 
 	do {
 		/* Get link status information from hardware */
-		status = ice_aq_get_link_info(hw->port_info, enable_lse,
-					      &link_status, NULL);
+		status = ice_get_link_info_safe(pf, enable_lse, &link_status);
 		if (status != ICE_SUCCESS) {
 			link.link_speed = RTE_ETH_SPEED_NUM_100M;
 			link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index abe6dcdc23..691893be13 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -548,6 +548,9 @@ struct ice_pf {
 	uint64_t rss_hf;
 	struct ice_tm_conf tm_conf;
 	uint16_t outer_ethertype;
+	/* lock to prevent race condition between lsc interrupt handler
+	 * and link status update during dev_start */
+	rte_spinlock_t link_lock;
 };
 
 #define ICE_MAX_QUEUE_NUM  2048