From patchwork Tue Sep 6 08:05:11 2022
X-Patchwork-Submitter: Yiding Zhou <yidingx.zhou@intel.com>
X-Patchwork-Id: 115955
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Yiding Zhou <yidingx.zhou@intel.com>
To: dev@dpdk.org
Cc: qi.z.zhang@intel.com, anatoly.burakov@intel.com, xingguang.he@intel.com,
 Yiding Zhou <yidingx.zhou@intel.com>, stable@dpdk.org
Subject: [PATCH v2] net/pcap: fix timeout of stopping device
Date: Tue, 6 Sep 2022 16:05:11 +0800
Message-Id: <20220906080511.46088-1-yidingx.zhou@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220825072041.10768-1-yidingx.zhou@intel.com>
References: <20220825072041.10768-1-yidingx.zhou@intel.com>

The pcap file is synchronized to disk when the device is stopped. If the
file is large, the sync takes a long time, which can cause the 'detach
sync request' to time out when the device is closed in a multi-process
scenario.

This commit fixes the issue by using an alarm handler to release the
dumper.

Fixes: 0ecfb6c04d54 ("net/pcap: move handler to process private")
Cc: stable@dpdk.org

Signed-off-by: Yiding Zhou <yidingx.zhou@intel.com>
---
v2: use alarm handler to release dumper
---
 drivers/net/pcap/pcap_ethdev.c | 22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c
index ec29fd6bc5..5c643a0277 100644
--- a/drivers/net/pcap/pcap_ethdev.c
+++ b/drivers/net/pcap/pcap_ethdev.c
@@ -17,6 +17,7 @@
 #include <rte_mbuf_dyn.h>
 #include <rte_bus_vdev.h>
 #include <rte_os_shim.h>
+#include <rte_alarm.h>
 
 #include "pcap_osdep.h"
 
@@ -664,6 +665,25 @@ eth_dev_start(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static void eth_pcap_dumper_release(void *arg)
+{
+	pcap_dump_close((pcap_dumper_t *)arg);
+}
+
+static void
+eth_pcap_dumper_close(pcap_dumper_t *dumper)
+{
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		/*
+		 * Delay 30 seconds before releasing dumper to wait for file sync
+		 * to complete to avoid blocking alarm thread in PRIMARY process
+		 */
+		rte_eal_alarm_set(30000000, eth_pcap_dumper_release, dumper);
+	} else {
+		rte_eal_alarm_set(1, eth_pcap_dumper_release, dumper);
+	}
+}
+
 /*
  * This function gets called when the current port gets stopped.
  * Is the only place for us to close all the tx streams dumpers.
@@ -689,7 +709,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
 
 	for (i = 0; i < dev->data->nb_tx_queues; i++) {
 		if (pp->tx_dumper[i] != NULL) {
-			pcap_dump_close(pp->tx_dumper[i]);
+			eth_pcap_dumper_close(pp->tx_dumper[i]);
 			pp->tx_dumper[i] = NULL;
 		}
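
For reference, below is a minimal, self-contained sketch of the deferral
pattern the fix relies on; it is a sketch, not the patch itself.
rte_eal_alarm_set() schedules a callback on the EAL alarm thread, so the
potentially slow pcap_dump_close() is moved off the control path that
answers the multi-process detach request. The helper names
(dumper_release_cb, defer_dumper_close) and the hard-coded 30-second delay
are illustrative assumptions mirroring the patch, not requirements of the
API.

#include <stdint.h>
#include <pcap.h>
#include <rte_eal.h>
#include <rte_alarm.h>

/* Runs later on the EAL alarm thread, outside the stop/close path. */
static void
dumper_release_cb(void *arg)
{
	pcap_dump_close((pcap_dumper_t *)arg);
}

/*
 * Schedule the close instead of doing it inline. A primary process waits
 * 30 seconds so the file sync has time to finish; a secondary process only
 * needs to get off the calling thread, so 1 microsecond is enough.
 */
static int
defer_dumper_close(pcap_dumper_t *dumper)
{
	uint64_t delay_us =
		(rte_eal_process_type() == RTE_PROC_PRIMARY) ? 30000000 : 1;

	return rte_eal_alarm_set(delay_us, dumper_release_cb, dumper);
}

rte_eal_alarm_set() returns 0 on success; a caller could fall back to
closing the dumper inline if scheduling the alarm fails.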