From patchwork Thu Jun 8 06:23:05 2023
X-Patchwork-Submitter: Mingjin Ye
X-Patchwork-Id: 128370
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Mingjin Ye
To: dev@dpdk.org
Cc: qiming.yang@intel.com, yidingx.zhou@intel.com, Mingjin Ye,
 stable@dpdk.org, Jingjing Wu, Beilei Xing
Subject: [PATCH] net/iavf: fix abnormal disable HW interrupt
Date: Thu, 8 Jun 2023 06:23:05 +0000
Message-Id: <20230608062305.99819-1-mingjinx.ye@intel.com>
X-Mailer: git-send-email 2.25.1

For the VIRTCHNL_OP_REQUEST_QUEUES command, polling the admin queue
after the interrupt has been disabled can lead to access overruns,
which causes the firmware to disable the HW interrupt for protection.

This patch makes the following changes:
1. Remove the dedicated admin queue polling for this command and rely
   on the generic interrupt processing instead.
2. Release the redundant queue resources before interrupt event
   processing is stopped.
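As an illustration of change 1 (not part of the patch itself), the
standalone sketch below models the resulting flow of
iavf_request_queues(): the command is sent without polling the admin
queue for a reply, and the caller only waits, with a bounded delay, for
the vf_reset flag that the interrupt path sets when the PF announces
VIRTCHNL_EVENT_RESET_IMPENDING. The pthread standing in for the
interrupt handler and the MAX_TRY_TIMES/ASQ_DELAY_MS values are
assumptions made for illustration only.

/*
 * Minimal standalone sketch (NOT driver code) of the pattern this patch
 * switches to: the caller no longer polls the admin queue for the
 * VIRTCHNL_OP_REQUEST_QUEUES reply; it only waits, with a bounded delay,
 * for a reset flag that the interrupt path sets.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

#define MAX_TRY_TIMES 200   /* assumed bound, mirrors the driver macro name */
#define ASQ_DELAY_MS  10    /* assumed per-iteration delay */

static atomic_bool vf_reset;	/* set by the "interrupt" path */

/* Stands in for the interrupt/alarm handler that parses the
 * VIRTCHNL_EVENT_RESET_IMPENDING system message from the PF. */
static void *intr_handler(void *arg)
{
	(void)arg;
	usleep(50 * 1000);		/* simulate PF processing latency */
	atomic_store(&vf_reset, true);	/* reset-impending notification */
	return NULL;
}

/* Stands in for iavf_request_queues(): the real driver sends
 * VIRTCHNL_OP_REQUEST_QUEUES and returns without polling the admin
 * queue for the reply, then waits (bounded) for the reset flag. */
static int request_queues(void)
{
	int i = 0;

	while (i++ < MAX_TRY_TIMES) {
		if (atomic_load(&vf_reset))
			return 0;	/* PF accepted, VF is resetting */
		usleep(ASQ_DELAY_MS * 1000);
	}
	return -1;			/* no notification: request failed */
}

int main(void)
{
	pthread_t intr;

	pthread_create(&intr, NULL, intr_handler, NULL);
	printf("request_queues: %s\n",
	       request_queues() == 0 ? "vf is resetting" : "failed");
	pthread_join(intr, NULL);
	return 0;
}

In the driver itself the notification arrives through the device
interrupt handler or the alarm callback (iavf_dev_alarm_handler in the
diff below); the sketch only captures the wait-for-flag shape of the
change.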
Fixes: 22b123a36d07 ("net/avf: initialize PMD")
Fixes: ef807926e148 ("net/iavf: support requesting additional queues from PF")
Fixes: 84108425054a ("net/iavf: support asynchronous virtual channel message")
Cc: stable@dpdk.org

Signed-off-by: Mingjin Ye
---
 drivers/net/iavf/iavf_ethdev.c | 25 +++++++++---------
 drivers/net/iavf/iavf_vchnl.c  | 48 +++++++---------------------------
 2 files changed, 23 insertions(+), 50 deletions(-)

diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index e6cf897293..ba5c88a1ec 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -2756,6 +2756,19 @@ iavf_dev_close(struct rte_eth_dev *dev)
 	}
 
 	ret = iavf_dev_stop(dev);
+
+	/*
+	 * Release redundant queue resource when close the dev
+	 * so that other vfs can re-use the queues.
+	 */
+	if (vf->lv_enabled) {
+		ret = iavf_request_queues(dev, IAVF_MAX_NUM_QUEUES_DFLT);
+		if (ret)
+			PMD_DRV_LOG(ERR, "Reset the num of queues failed");
+
+		vf->max_rss_qregion = IAVF_MAX_NUM_QUEUES_DFLT;
+	}
+
 	adapter->closed = true;
 
 	/* free iAVF security device context all related resources */
@@ -2772,18 +2785,6 @@ iavf_dev_close(struct rte_eth_dev *dev)
 	if (vf->promisc_unicast_enabled || vf->promisc_multicast_enabled)
 		iavf_config_promisc(adapter, false, false);
 
-	/*
-	 * Release redundant queue resource when close the dev
-	 * so that other vfs can re-use the queues.
-	 */
-	if (vf->lv_enabled) {
-		ret = iavf_request_queues(dev, IAVF_MAX_NUM_QUEUES_DFLT);
-		if (ret)
-			PMD_DRV_LOG(ERR, "Reset the num of queues failed");
-
-		vf->max_rss_qregion = IAVF_MAX_NUM_QUEUES_DFLT;
-	}
-
 	iavf_shutdown_adminq(hw);
 	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
 		/* disable uio intr before callback unregister */
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 8cc5377bcf..579c0d0d70 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -323,6 +323,7 @@ iavf_execute_vf_cmd(struct iavf_adapter *adapter, struct iavf_cmd_info *args,
 
 	switch (args->ops) {
 	case VIRTCHNL_OP_RESET_VF:
+	case VIRTCHNL_OP_REQUEST_QUEUES:
 		/*no need to wait for response */
 		_clear_cmd(vf);
 		break;
@@ -346,33 +347,6 @@ iavf_execute_vf_cmd(struct iavf_adapter *adapter, struct iavf_cmd_info *args,
 		}
 		_clear_cmd(vf);
 		break;
-	case VIRTCHNL_OP_REQUEST_QUEUES:
-		/*
-		 * ignore async reply, only wait for system message,
-		 * vf_reset = true if get VIRTCHNL_EVENT_RESET_IMPENDING,
-		 * if not, means request queues failed.
-		 */
-		do {
-			result = iavf_read_msg_from_pf(adapter, args->out_size,
-						args->out_buffer);
-			if (result == IAVF_MSG_SYS && vf->vf_reset) {
-				break;
-			} else if (result == IAVF_MSG_CMD ||
-					result == IAVF_MSG_ERR) {
-				err = -1;
-				break;
-			}
-			iavf_msec_delay(ASQ_DELAY_MS);
-			/* If don't read msg or read sys event, continue */
-		} while (i++ < MAX_TRY_TIMES);
-		if (i >= MAX_TRY_TIMES ||
-				vf->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
-			err = -1;
-			PMD_DRV_LOG(ERR, "No response or return failure (%d)"
-				    " for cmd %d", vf->cmd_retval, args->ops);
-		}
-		_clear_cmd(vf);
-		break;
 	default:
 		/* For other virtchnl ops in running time,
 		 * wait for the cmd done flag.
@@ -2055,11 +2029,11 @@ iavf_request_queues(struct rte_eth_dev *dev, uint16_t num)
 	struct iavf_adapter *adapter =
 		IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
-	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	struct virtchnl_vf_res_request vfres;
 	struct iavf_cmd_info args;
 	uint16_t num_queue_pairs;
 	int err;
+	int i = 0;
 
 	if (!(vf->vf_res->vf_cap_flags &
 	    VIRTCHNL_VF_OFFLOAD_REQ_QUEUES)) {
@@ -2080,16 +2054,7 @@ iavf_request_queues(struct rte_eth_dev *dev, uint16_t num)
 	args.out_size = IAVF_AQ_BUF_SZ;
 
 	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
-		/* disable interrupt to avoid the admin queue message to be read
-		 * before iavf_read_msg_from_pf.
-		 *
-		 * don't disable interrupt handler until ready to execute vf cmd.
-		 */
-		rte_spinlock_lock(&vf->aq_lock);
-		rte_intr_disable(pci_dev->intr_handle);
-		err = iavf_execute_vf_cmd(adapter, &args, 0);
-		rte_intr_enable(pci_dev->intr_handle);
-		rte_spinlock_unlock(&vf->aq_lock);
+		err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
 	} else {
 		rte_eal_alarm_cancel(iavf_dev_alarm_handler, dev);
 		err = iavf_execute_vf_cmd_safe(adapter, &args, 0);
@@ -2102,6 +2067,13 @@ iavf_request_queues(struct rte_eth_dev *dev, uint16_t num)
 		return err;
 	}
 
+	/* wait for interrupt notification vf is resetting */
+	while (i++ < MAX_TRY_TIMES) {
+		if (vf->vf_reset)
+			break;
+		iavf_msec_delay(ASQ_DELAY_MS);
+	}
+
 	/* request queues succeeded, vf is resetting */
 	if (vf->vf_reset) {
 		PMD_DRV_LOG(INFO, "vf is resetting");