From patchwork Thu Dec 14 08:40:54 2023
X-Patchwork-Submitter: Qi Zhang
X-Patchwork-Id: 135173
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Qi Zhang
To: qiming.yang@intel.com
Cc: dev@dpdk.org, Qi Zhang, stable@dpdk.org
Subject: [PATCH v2] net/ice: fix link update
Date: Thu, 14 Dec 2023 03:40:54 -0500
Message-Id: <20231214084054.2593194-1-qi.z.zhang@intel.com>
X-Mailer: git-send-email 2.31.1

The ice_aq_get_link_info function is not thread-safe. However, it may
be invoked simultaneously from both dev_start and the LSC interrupt
handler, potentially leading to unexpected adminq errors.

This patch addresses the issue by introducing a thread-safe wrapper
that uses a spinlock.

Fixes: cf911d90e366 ("net/ice: support link update")
Cc: stable@dpdk.org

Signed-off-by: Qi Zhang
Acked-by: Qiming Yang
---
v2:
- fix coding style warning.
 drivers/net/ice/ice_ethdev.c | 26 ++++++++++++++++++++------
 drivers/net/ice/ice_ethdev.h |  4 ++++
 2 files changed, 24 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 3ccba4db80..1f8ab5158a 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1804,6 +1804,7 @@ ice_pf_setup(struct ice_pf *pf)
 	}
 
 	pf->main_vsi = vsi;
+	rte_spinlock_init(&pf->link_lock);
 
 	return 0;
 }
@@ -3621,17 +3622,31 @@ ice_rxq_intr_setup(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static enum ice_status
+ice_get_link_info_safe(struct ice_pf *pf, bool ena_lse,
+		       struct ice_link_status *link)
+{
+	struct ice_hw *hw = ICE_PF_TO_HW(pf);
+	int ret;
+
+	rte_spinlock_lock(&pf->link_lock);
+
+	ret = ice_aq_get_link_info(hw->port_info, ena_lse, link, NULL);
+
+	rte_spinlock_unlock(&pf->link_lock);
+
+	return ret;
+}
+
 static void
 ice_get_init_link_status(struct rte_eth_dev *dev)
 {
-	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	bool enable_lse = dev->data->dev_conf.intr_conf.lsc ? true : false;
 	struct ice_link_status link_status;
 	int ret;
 
-	ret = ice_aq_get_link_info(hw->port_info, enable_lse,
-				   &link_status, NULL);
+	ret = ice_get_link_info_safe(pf, enable_lse, &link_status);
 	if (ret != ICE_SUCCESS) {
 		PMD_DRV_LOG(ERR, "Failed to get link info");
 		pf->init_link_up = false;
@@ -3996,7 +4011,7 @@ ice_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 {
 #define CHECK_INTERVAL 50  /* 50ms */
 #define MAX_REPEAT_TIME 40  /* 2s (40 * 50ms) in total */
-	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	struct ice_link_status link_status;
 	struct rte_eth_link link, old;
 	int status;
@@ -4010,8 +4025,7 @@ ice_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 
 	do {
 		/* Get link status information from hardware */
-		status = ice_aq_get_link_info(hw->port_info, enable_lse,
-					      &link_status, NULL);
+		status = ice_get_link_info_safe(pf, enable_lse, &link_status);
 		if (status != ICE_SUCCESS) {
 			link.link_speed = RTE_ETH_SPEED_NUM_100M;
 			link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index abe6dcdc23..d607f028e0 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -548,6 +548,10 @@ struct ice_pf {
 	uint64_t rss_hf;
 	struct ice_tm_conf tm_conf;
 	uint16_t outer_ethertype;
+	/* lock prevent race condition between lsc interrupt handler
+	 * and link status update during dev_start.
+	 */
+	rte_spinlock_t link_lock;
 };
 
 #define ICE_MAX_QUEUE_NUM  2048
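For readers less familiar with the locking pattern used above, the following
standalone sketch (not part of the patch) shows how an rte_spinlock serializes
a non-thread-safe call between two contexts such as dev_start and the LSC
interrupt handler. Only the rte_spinlock_* calls are real DPDK API; the
example_pf structure and query_link_* names are illustrative placeholders.

/*
 * Minimal sketch of the serialization pattern applied by the patch.
 * Names other than the rte_spinlock_* API are placeholders.
 */
#include <rte_spinlock.h>

struct example_pf {
	rte_spinlock_t link_lock;	/* protects the non-thread-safe call */
};

/* Stand-in for the non-thread-safe adminq query (ice_aq_get_link_info). */
static int
query_link_unsafe(void)
{
	return 0;
}

/* Called once at setup time, mirroring rte_spinlock_init() in ice_pf_setup(). */
static void
example_pf_init(struct example_pf *pf)
{
	rte_spinlock_init(&pf->link_lock);
}

/* Thread-safe wrapper: only one caller may issue the query at a time. */
static int
query_link_safe(struct example_pf *pf)
{
	int ret;

	rte_spinlock_lock(&pf->link_lock);
	ret = query_link_unsafe();
	rte_spinlock_unlock(&pf->link_lock);

	return ret;
}

A spinlock keeps the wrapper small; since the protected adminq query is
short, the brief busy-wait of a contending caller is an acceptable cost.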