From patchwork Wed Aug 25 08:34:35 2021
X-Patchwork-Submitter: Robin Zhang
X-Patchwork-Id: 97323
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Robin Zhang
To: dev@dpdk.org
Cc: jingjing.wu@intel.com, beilei.xing@intel.com, qi.z.zhang@intel.com,
 junfeng.guo@intel.com, stevex.yang@intel.com, Robin Zhang
Date: Wed, 25 Aug 2021 08:34:35 +0000
Message-Id: <20210825083435.207234-1-robinx.zhang@intel.com>
In-Reply-To: <20210723074630.193200-1-robinx.zhang@intel.com>
References: <20210723074630.193200-1-robinx.zhang@intel.com>
Subject: [dpdk-dev] [PATCH v2] net/iavf: enable interrupt polling
For a VF hosted by Intel 700 series NICs, the internal Rx interrupt and the
adminq interrupt share the same source, which causes many CPU cycles to be
wasted in the interrupt handler on the Rx path.

This patch disables the PCI interrupt and removes the interrupt handler,
replacing it with a low-frequency (50 ms) interrupt polling daemon,
implemented by periodically registering an alarm callback.

The virtual channel capability bit VIRTCHNL_VF_OFFLOAD_WB_ON_ITR is used to
negotiate whether the iavf PMD needs to enable the background alarm, so
ideally this change will not impact VFs hosted by Intel 800 series NICs.

This patch implements the same logic as an earlier i40e commit:
commit 864a800d706d ("net/i40e: remove VF interrupt handler")

Signed-off-by: Robin Zhang

v2:
- only enable interrupt polling for VFs of i40e devices.

Acked-by: Pallavi Kadam
Acked-by: Qi Zhang
---
 drivers/net/iavf/iavf.h        |  3 ++
 drivers/net/iavf/iavf_ethdev.c | 71 +++++++++++++++++++++++++++-------
 drivers/net/iavf/iavf_vchnl.c  | 22 +++++++----
 3 files changed, 74 insertions(+), 22 deletions(-)

diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index b3bd078111..771f3b79d7 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -69,6 +69,8 @@
 #define IAVF_QUEUE_ITR_INTERVAL_DEFAULT 32 /* 32 us */
 #define IAVF_QUEUE_ITR_INTERVAL_MAX 8160 /* 8160 us */

+#define IAVF_ALARM_INTERVAL 50000 /* us */
+
 /* The overhead from MTU to max frame size.
  * Considering QinQ packet, the VLAN tag needs to be counted twice.
  */
@@ -372,6 +374,7 @@ int iavf_config_irq_map_lv(struct iavf_adapter *adapter, uint16_t num,
 void iavf_add_del_all_mac_addr(struct iavf_adapter *adapter, bool add);
 int iavf_dev_link_update(struct rte_eth_dev *dev,
			__rte_unused int wait_to_complete);
+void iavf_dev_alarm_handler(void *param);
 int iavf_query_stats(struct iavf_adapter *adapter,
		     struct virtchnl_eth_stats **pstats);
 int iavf_config_promisc(struct iavf_adapter *adapter, bool enable_unicast,
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 574cfe055e..29d2aaa10e 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include <rte_alarm.h>
 #include
 #include
 #include
@@ -704,9 +705,9 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
	 */
	vf->msix_base = IAVF_MISC_VEC_ID;
-	/* set ITR to max */
+	/* set ITR to default */
	interval = iavf_calc_itr_interval(
-			IAVF_QUEUE_ITR_INTERVAL_MAX);
+			IAVF_QUEUE_ITR_INTERVAL_DEFAULT);
	IAVF_WRITE_REG(hw, IAVF_VFINT_DYN_CTL01,
		       IAVF_VFINT_DYN_CTL01_INTENA_MASK |
		       (IAVF_ITR_INDEX_DEFAULT <<
@@ -867,7 +868,8 @@ iavf_dev_start(struct rte_eth_dev *dev)
	}
	/* re-enable intr again, because efd assign may change */
	if (dev->data->dev_conf.intr_conf.rxq != 0) {
-		rte_intr_disable(intr_handle);
+		if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR)
+			rte_intr_disable(intr_handle);
		rte_intr_enable(intr_handle);
	}
@@ -901,6 +903,10 @@ iavf_dev_stop(struct rte_eth_dev *dev)

	PMD_INIT_FUNC_TRACE();

+	if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) &&
+	    dev->data->dev_conf.intr_conf.rxq != 0)
+		rte_intr_disable(intr_handle);
+
	if (adapter->stopped == 1)
		return 0;
@@ -1659,6 +1665,7 @@ iavf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
		IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
	struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(adapter);
+	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
	uint16_t msix_intr;

	msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
@@ -1679,7 +1686,8 @@ iavf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)

	IAVF_WRITE_FLUSH(hw);

-	rte_intr_ack(&pci_dev->intr_handle);
+	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR)
+		rte_intr_ack(&pci_dev->intr_handle);

	return 0;
 }
@@ -2224,6 +2232,29 @@ iavf_dev_interrupt_handler(void *param)
	iavf_enable_irq0(hw);
 }

+void
+iavf_dev_alarm_handler(void *param)
+{
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
+	struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint32_t icr0;
+
+	iavf_disable_irq0(hw);
+
+	/* read out interrupt causes */
+	icr0 = IAVF_READ_REG(hw, IAVF_VFINT_ICR01);
+
+	if (icr0 & IAVF_VFINT_ICR01_ADMINQ_MASK) {
+		PMD_DRV_LOG(DEBUG, "ICR01_ADMINQ is reported");
+		iavf_handle_virtchnl_msg(dev);
+	}
+
+	iavf_enable_irq0(hw);
+
+	rte_eal_alarm_set(IAVF_ALARM_INTERVAL,
+			  iavf_dev_alarm_handler, dev);
+}
+
 static int
 iavf_dev_flow_ops_get(struct rte_eth_dev *dev,
		      const struct rte_flow_ops **ops)
@@ -2260,6 +2291,7 @@ iavf_dev_init(struct rte_eth_dev *eth_dev)
	struct iavf_adapter *adapter =
		IAVF_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private);
	struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(adapter);
+	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
	int ret = 0;
@@ -2324,13 +2356,18 @@ iavf_dev_init(struct rte_eth_dev *eth_dev)
	rte_ether_addr_copy((struct rte_ether_addr *)hw->mac.addr,
			&eth_dev->data->mac_addrs[0]);

-	/* register callback func to eal lib */
-	rte_intr_callback_register(&pci_dev->intr_handle,
-				   iavf_dev_interrupt_handler,
-				   (void *)eth_dev);
+	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
+		/* register callback func to eal lib */
+		rte_intr_callback_register(&pci_dev->intr_handle,
+					   iavf_dev_interrupt_handler,
+					   (void *)eth_dev);

-	/* enable uio intr after callback register */
-	rte_intr_enable(&pci_dev->intr_handle);
+		/* enable uio intr after callback register */
+		rte_intr_enable(&pci_dev->intr_handle);
+	} else {
+		rte_eal_alarm_set(IAVF_ALARM_INTERVAL,
+				  iavf_dev_alarm_handler, eth_dev);
+	}

	/* configure and enable device interrupt */
	iavf_enable_irq0(hw);
@@ -2374,12 +2411,16 @@ iavf_dev_close(struct rte_eth_dev *dev)
		iavf_config_promisc(adapter, false, false);

	iavf_shutdown_adminq(hw);
-	/* disable uio intr before callback unregister */
-	rte_intr_disable(intr_handle);
+	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
+		/* disable uio intr before callback unregister */
+		rte_intr_disable(intr_handle);

-	/* unregister callback func from eal lib */
-	rte_intr_callback_unregister(intr_handle,
-				     iavf_dev_interrupt_handler, dev);
+		/* unregister callback func from eal lib */
+		rte_intr_callback_unregister(intr_handle,
+					     iavf_dev_interrupt_handler, dev);
+	} else {
+		rte_eal_alarm_cancel(iavf_dev_alarm_handler, dev);
+	}
	iavf_disable_irq0(hw);

	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_QOS)
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 06dc663947..71ecf7f202 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -13,6 +13,7 @@
 #include
 #include
+#include <rte_alarm.h>
 #include
 #include
 #include
@@ -1687,13 +1688,20 @@ iavf_request_queues(struct iavf_adapter *adapter, uint16_t num)
	args.out_buffer = vf->aq_resp;
	args.out_size = IAVF_AQ_BUF_SZ;

-	/*
-	 * disable interrupt to avoid the admin queue message to be read
-	 * before iavf_read_msg_from_pf.
-	 */
-	rte_intr_disable(&pci_dev->intr_handle);
-	err = iavf_execute_vf_cmd(adapter, &args);
-	rte_intr_enable(&pci_dev->intr_handle);
+	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
+		/* disable interrupt to avoid the admin queue message to be read
+		 * before iavf_read_msg_from_pf.
+		 */
+		rte_intr_disable(&pci_dev->intr_handle);
+		err = iavf_execute_vf_cmd(adapter, &args);
+		rte_intr_enable(&pci_dev->intr_handle);
+	} else {
+		rte_eal_alarm_cancel(iavf_dev_alarm_handler, dev);
+		err = iavf_execute_vf_cmd(adapter, &args);
+		rte_eal_alarm_set(IAVF_ALARM_INTERVAL,
+				  iavf_dev_alarm_handler, dev);
+	}
+
	if (err) {
		PMD_DRV_LOG(ERR,
			    "fail to execute command OP_REQUEST_QUEUES");
		return err;