From patchwork Wed Aug 10 07:09:58 2022
X-Patchwork-Submitter: "Naga Harish K, S V" <s.v.naga.harish.k@intel.com>
X-Patchwork-Id: 114800
X-Patchwork-Delegate: jerinj@marvell.com
From: Naga Harish K S V <s.v.naga.harish.k@intel.com>
To: erik.g.carrillo@intel.com
Cc: dev@dpdk.org, stable@dpdk.org
Subject: [PATCH v2 3/4] timer: fix function to stop all timers
Date: Wed, 10 Aug 2022 02:09:58 -0500
Message-Id: <20220810070958.3111119-1-s.v.naga.harish.k@intel.com>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20220803162651.3145945-1-s.v.naga.harish.k@intel.com>
References: <20220803162651.3145945-1-s.v.naga.harish.k@intel.com>
List-Id: DPDK patches and discussions

There is a possibility of deadlock in the rte_timer_stop_all() API, as the
same spinlock can end up being acquired twice in a nested manner. In the
timer_del() function, if the previous owner lcore and the current owner
lcore are different, the list lock is acquired again even though the caller
of timer_del() already holds that same lock. This patch removes the nested
lock acquisition.
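For context, the pre-fix call chain reduces to the sketch below. This is a
simplified illustration, not the actual rte_timer.c bodies: the function
names and the single file-scope lock are stand-ins for the per-lcore
priv_timer list_lock. Since rte_spinlock_t is not recursive, the second
acquisition on the same lcore spins forever.

#include <rte_spinlock.h>

/* Simplified sketch of the pre-fix locking behaviour in rte_timer.c. */
static rte_spinlock_t list_lock = RTE_SPINLOCK_INITIALIZER;

static void
timer_del_sketch(void)
{
	/*
	 * Pre-fix behaviour: when the previous owner lcore differs from the
	 * current lcore, timer_del() locked the owner's list_lock even
	 * though rte_timer_stop_all() already held it.
	 */
	rte_spinlock_lock(&list_lock);
	/* ... unlink the timer from the pending skip list ... */
	rte_spinlock_unlock(&list_lock);
}

static void
stop_all_sketch(void)
{
	rte_spinlock_lock(&list_lock);	/* outer acquisition (pre-fix) */
	timer_del_sketch();		/* nested acquisition: never returns */
	rte_spinlock_unlock(&list_lock);
}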
Fixes: 821c51267bcd63a ("timer: add function to stop all timers in a list")
Cc: stable@dpdk.org

Signed-off-by: Naga Harish K S V <s.v.naga.harish.k@intel.com>
---
 lib/timer/rte_timer.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/lib/timer/rte_timer.c b/lib/timer/rte_timer.c
index 9994813d0d..85d67573eb 100644
--- a/lib/timer/rte_timer.c
+++ b/lib/timer/rte_timer.c
@@ -580,7 +580,7 @@ rte_timer_reset_sync(struct rte_timer *tim, uint64_t ticks,
 }
 
 static int
-__rte_timer_stop(struct rte_timer *tim, int local_is_locked,
+__rte_timer_stop(struct rte_timer *tim,
 		 struct rte_timer_data *timer_data)
 {
 	union rte_timer_status prev_status, status;
@@ -602,7 +602,7 @@ __rte_timer_stop(struct rte_timer *tim, int local_is_locked,
 
 	/* remove it from list */
 	if (prev_status.state == RTE_TIMER_PENDING) {
-		timer_del(tim, prev_status, local_is_locked, priv_timer);
+		timer_del(tim, prev_status, 0, priv_timer);
 		__TIMER_STAT_ADD(priv_timer, pending, -1);
 	}
 
@@ -631,7 +631,7 @@ rte_timer_alt_stop(uint32_t timer_data_id, struct rte_timer *tim)
 
 	TIMER_DATA_VALID_GET_OR_ERR_RET(timer_data_id, timer_data, -EINVAL);
 
-	return __rte_timer_stop(tim, 0, timer_data);
+	return __rte_timer_stop(tim, timer_data);
 }
 
 /* loop until rte_timer_stop() succeed */
@@ -987,21 +987,16 @@ rte_timer_stop_all(uint32_t timer_data_id, unsigned int *walk_lcores,
 		walk_lcore = walk_lcores[i];
 		priv_timer = &timer_data->priv_timer[walk_lcore];
 
-		rte_spinlock_lock(&priv_timer->list_lock);
-
 		for (tim = priv_timer->pending_head.sl_next[0];
 		     tim != NULL;
 		     tim = next_tim) {
 			next_tim = tim->sl_next[0];
 
-			/* Call timer_stop with lock held */
-			__rte_timer_stop(tim, 1, timer_data);
+			__rte_timer_stop(tim, timer_data);
 
 			if (f)
 				f(tim, f_arg);
 		}
-
-		rte_spinlock_unlock(&priv_timer->list_lock);
 	}
 
 	return 0;
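
After the change, the per-lcore list lock is taken only inside timer_del()
(reached via __rte_timer_stop() with the nested-locking path removed), so a
caller of rte_timer_stop_all() no longer self-deadlocks. Below is a minimal
usage sketch of the API touched by this patch; it assumes the
rte_timer_stop_all() signature visible in the last hunk, the experimental
rte_timer_data_alloc()/rte_timer_data_dealloc() helpers and the
rte_timer_stop_all_cb_t callback type from rte_timer.h, a build with the
experimental API enabled, and a hypothetical setup where timers were armed
on lcore 0:

#include <stdio.h>
#include <rte_eal.h>
#include <rte_timer.h>

/* Callback invoked by rte_timer_stop_all() for every timer it stops;
 * signature follows rte_timer_stop_all_cb_t. */
static void
stopped_cb(struct rte_timer *tim, void *arg)
{
	unsigned int *count = arg;

	(void)tim;	/* e.g. free the timer here if it was heap-allocated */
	(*count)++;
}

int
main(int argc, char **argv)
{
	uint32_t timer_data_id;
	unsigned int lcore = 0;		/* assumption: timers armed on lcore 0 */
	unsigned int stopped = 0;
	int ret;

	if (rte_eal_init(argc, argv) < 0)
		return -1;
	if (rte_timer_subsystem_init() < 0)
		return -1;

	/* Allocate a private timer data instance (experimental API). */
	if (rte_timer_data_alloc(&timer_data_id) < 0)
		return -1;

	/* ... rte_timer_alt_reset() calls targeting 'lcore' would go here ... */

	/* Stop every pending timer in lcore 0's list; with this patch the
	 * list lock is acquired only inside timer_del(). */
	ret = rte_timer_stop_all(timer_data_id, &lcore, 1, stopped_cb, &stopped);
	printf("stopped %u timers (ret=%d)\n", stopped, ret);

	rte_timer_data_dealloc(timer_data_id);
	rte_eal_cleanup();
	return 0;
}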