From patchwork Thu Jan 20 09:12:25 2022
X-Patchwork-Submitter: Kalesh A P
X-Patchwork-Id: 106114
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Kalesh A P
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, ajit.khaparde@broadcom.com
Subject: [dpdk-dev] [PATCH 1/4] net/bnxt: fix check for autoneg enablement
Date: Thu, 20 Jan 2022 14:42:25 +0530
Message-Id: <20220120091228.7076-2-kalesh-anakkur.purayil@broadcom.com>
In-Reply-To: <20220120091228.7076-1-kalesh-anakkur.purayil@broadcom.com>
References: <20220120091228.7076-1-kalesh-anakkur.purayil@broadcom.com>

From: Kalesh AP

The HWRM_PORT_PHY_QCFG_OUTPUT response indicates the autoneg speed mask
supported by the FW. While enabling autoneg, the driver should also
check the PAM4 speeds the FW advertises as supported in auto mode,
which are reported in the same HWRM_PORT_PHY_QCFG_OUTPUT response.

Fixes: c23f9ded0391 ("net/bnxt: support 200G PAM4 link")
Cc: stable@dpdk.org

Signed-off-by: Kalesh AP
Reviewed-by: Ajit Khaparde
Reviewed-by: Somnath Kotur
---
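
Note: a minimal sketch of the eligibility test this patch arrives at (not part
of the patch; the struct and helper names here are hypothetical, only the two
bitmask fields mirror the driver's link_info). Autoneg may be restarted only
when the FW reports a non-empty auto-mode speed mask in either the legacy (NRZ)
or the PAM4 field.

#include <stdint.h>

/* Hypothetical stand-in for the relevant bnxt link_info fields. */
struct link_caps {
        uint16_t support_auto_speeds;      /* NRZ speeds the FW allows in auto mode */
        uint16_t support_pam4_auto_speeds; /* PAM4 speeds the FW allows in auto mode */
};

/* Autoneg can be requested only if the user enabled it and the FW
 * advertises at least one speed usable in auto mode (NRZ or PAM4). */
static int autoneg_allowed(const struct link_caps *caps, int autoneg)
{
        return autoneg == 1 &&
               (caps->support_auto_speeds || caps->support_pam4_auto_speeds);
}
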

 drivers/net/bnxt/bnxt_hwrm.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 5850e7e..5418fa1 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -3253,7 +3253,8 @@ int bnxt_set_hwrm_link_config(struct bnxt *bp, bool link_up)
                            bp->link_info->link_signal_mode);
                 link_req.phy_flags = HWRM_PORT_PHY_CFG_INPUT_FLAGS_RESET_PHY;
                 /* Autoneg can be done only when the FW allows. */
-                if (autoneg == 1 && bp->link_info->support_auto_speeds) {
+                if (autoneg == 1 &&
+                    (bp->link_info->support_auto_speeds || bp->link_info->support_pam4_auto_speeds)) {
                         link_req.phy_flags |=
                                 HWRM_PORT_PHY_CFG_INPUT_FLAGS_RESTART_AUTONEG;
                         link_req.auto_link_speed_mask =

From patchwork Thu Jan 20 09:12:26 2022
X-Patchwork-Submitter: Kalesh A P
X-Patchwork-Id: 106115
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Kalesh A P
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, ajit.khaparde@broadcom.com
Subject: [dpdk-dev] [PATCH 2/4] net/bnxt: handle ring cleanup in case of error
Date: Thu, 20 Jan 2022 14:42:26 +0530
Message-Id: <20220120091228.7076-3-kalesh-anakkur.purayil@broadcom.com>
In-Reply-To: <20220120091228.7076-1-kalesh-anakkur.purayil@broadcom.com>
References: <20220120091228.7076-1-kalesh-anakkur.purayil@broadcom.com>

From: Kalesh AP

In bnxt_alloc_mem(), if any function called after
bnxt_alloc_async_ring_struct() fails, the cleanup path logs an error:

  bnxt_hwrm_ring_free(): hwrm_ring_free nq failed. rc:1

Fix this by initializing ring->fw_ring_id to INVALID_HW_RING_ID in
bnxt_alloc_async_ring_struct().

Fixes: bd0a14c99f65 ("net/bnxt: use dedicated CPR for async events")
Cc: stable@dpdk.org

Signed-off-by: Kalesh AP
Reviewed-by: Ajit Khaparde
Reviewed-by: Somnath Kotur
---
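
Note: a sketch of why the initialization matters (illustrative only, under
assumed definitions; INVALID_HW_RING_ID below is a stand-in for the driver's
sentinel). The cleanup path can only skip rings that were never created in the
FW if the id starts out at the sentinel value.

#include <stdint.h>

#define INVALID_HW_RING_ID ((uint16_t)-1) /* assumed sentinel value */

struct ring_stub {
        uint16_t fw_ring_id; /* id returned by the FW once the ring is created */
};

/* Safe to call even when initialization failed before the ring was
 * created: a ring still holding the sentinel is simply skipped, so no
 * bogus "ring free" command reaches the FW. */
static void ring_free_stub(struct ring_stub *ring)
{
        if (ring->fw_ring_id == INVALID_HW_RING_ID)
                return;
        /* ... send the HWRM ring-free command for fw_ring_id here ... */
        ring->fw_ring_id = INVALID_HW_RING_ID;
}
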

 drivers/net/bnxt/bnxt_ring.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index dc437f3..5c6c27f 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -851,6 +851,7 @@ int bnxt_alloc_async_ring_struct(struct bnxt *bp)
         ring->ring_mask = ring->ring_size - 1;
         ring->vmem_size = 0;
         ring->vmem = NULL;
+        ring->fw_ring_id = INVALID_HW_RING_ID;
 
         bp->async_cp_ring = cpr;
         cpr->cp_ring_struct = ring;

From patchwork Thu Jan 20 09:12:27 2022
X-Patchwork-Submitter: Kalesh A P
X-Patchwork-Id: 106116
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Kalesh A P
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, ajit.khaparde@broadcom.com
Subject: [dpdk-dev] [PATCH 3/4] net/bnxt: fix to alloc the memzone per VNIC
Date: Thu, 20 Jan 2022 14:42:27 +0530
Message-Id: <20220120091228.7076-4-kalesh-anakkur.purayil@broadcom.com>
In-Reply-To: <20220120091228.7076-1-kalesh-anakkur.purayil@broadcom.com>
References: <20220120091228.7076-1-kalesh-anakkur.purayil@broadcom.com>

From: Kalesh AP

On Thor, the RSS table size is very large. This could result in memory
allocation failures when the supported VNIC count is high. Instead of
allocating the memzone for all VNICs in one shot, allocate it for each
VNIC individually. Also, free the memzone in the uninit path.

Fixes: 9738793f28ec ("net/bnxt: add VNIC functions and structs")
Cc: stable@dpdk.org

Signed-off-by: Kalesh AP
Reviewed-by: Somnath Kotur
Reviewed-by: Ajit Khaparde
---
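
Note: a hedged sketch of the per-VNIC allocation pattern the patch switches to
(only the "_vnicattr_%d" naming and the memzone flags mirror the diff below;
the helper itself is illustrative). Each VNIC reserves its own small,
IOVA-contiguous memzone, which the uninit path can later release with
rte_memzone_free().

#include <stdio.h>
#include <rte_memzone.h>

/* Reserve (or look up) one attribute zone per VNIC index instead of one
 * region sized for all VNICs; smaller reservations are far less likely
 * to fail when the VNIC count is high. */
static const struct rte_memzone *
vnic_attr_zone_get(const char *prefix, int idx, size_t entry_length, int socket)
{
        char name[RTE_MEMZONE_NAMESIZE];
        const struct rte_memzone *mz;

        snprintf(name, sizeof(name), "%s_vnicattr_%d", prefix, idx);
        mz = rte_memzone_lookup(name);
        if (mz == NULL)
                mz = rte_memzone_reserve(name, entry_length, socket,
                                         RTE_MEMZONE_2MB |
                                         RTE_MEMZONE_SIZE_HINT_ONLY |
                                         RTE_MEMZONE_IOVA_CONTIG);
        return mz; /* caller stores it (e.g. in vnic->rss_mz) and frees it on uninit */
}
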

 drivers/net/bnxt/bnxt_vnic.c | 68 +++++++++++++++++++-------------------------
 drivers/net/bnxt/bnxt_vnic.h |  1 +
 2 files changed, 30 insertions(+), 39 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_vnic.c b/drivers/net/bnxt/bnxt_vnic.c
index 09d67ef..b3c03a2 100644
--- a/drivers/net/bnxt/bnxt_vnic.c
+++ b/drivers/net/bnxt/bnxt_vnic.c
@@ -98,18 +98,11 @@ void bnxt_free_vnic_attributes(struct bnxt *bp)
 
         for (i = 0; i < bp->max_vnics; i++) {
                 vnic = &bp->vnic_info[i];
-                if (vnic->rss_table) {
-                        /* 'Unreserve' the rss_table */
-                        /* N/A */
-
-                        vnic->rss_table = NULL;
-                }
-
-                if (vnic->rss_hash_key) {
-                        /* 'Unreserve' the rss_hash_key */
-                        /* N/A */
-
+                if (vnic->rss_mz != NULL) {
+                        rte_memzone_free(vnic->rss_mz);
+                        vnic->rss_mz = NULL;
                         vnic->rss_hash_key = NULL;
+                        vnic->rss_table = NULL;
                 }
         }
 }
@@ -122,7 +115,6 @@ int bnxt_alloc_vnic_attributes(struct bnxt *bp, bool reconfig)
         char mz_name[RTE_MEMZONE_NAMESIZE];
         uint32_t entry_length;
         size_t rss_table_size;
-        uint16_t max_vnics;
         int i;
         rte_iova_t mz_phys_addr;
 
@@ -136,38 +128,36 @@ int bnxt_alloc_vnic_attributes(struct bnxt *bp, bool reconfig)
 
         entry_length = RTE_CACHE_LINE_ROUNDUP(entry_length + rss_table_size);
 
-        max_vnics = bp->max_vnics;
-        snprintf(mz_name, RTE_MEMZONE_NAMESIZE,
-                 "bnxt_" PCI_PRI_FMT "_vnicattr", pdev->addr.domain,
-                 pdev->addr.bus, pdev->addr.devid, pdev->addr.function);
-        mz_name[RTE_MEMZONE_NAMESIZE - 1] = 0;
-        mz = rte_memzone_lookup(mz_name);
-        if (!mz) {
-                mz = rte_memzone_reserve(mz_name,
-                                entry_length * max_vnics,
-                                bp->eth_dev->device->numa_node,
-                                RTE_MEMZONE_2MB |
-                                RTE_MEMZONE_SIZE_HINT_ONLY |
-                                RTE_MEMZONE_IOVA_CONTIG);
-                if (!mz)
-                        return -ENOMEM;
-        }
-        mz_phys_addr = mz->iova;
-
-        for (i = 0; i < max_vnics; i++) {
+        for (i = 0; i < bp->max_vnics; i++) {
                 vnic = &bp->vnic_info[i];
 
+                snprintf(mz_name, RTE_MEMZONE_NAMESIZE,
+                         "bnxt_" PCI_PRI_FMT "_vnicattr_%d", pdev->addr.domain,
+                         pdev->addr.bus, pdev->addr.devid, pdev->addr.function, i);
+                mz_name[RTE_MEMZONE_NAMESIZE - 1] = 0;
+                mz = rte_memzone_lookup(mz_name);
+                if (mz == NULL) {
+                        mz = rte_memzone_reserve(mz_name,
+                                                 entry_length,
+                                                 bp->eth_dev->device->numa_node,
+                                                 RTE_MEMZONE_2MB |
+                                                 RTE_MEMZONE_SIZE_HINT_ONLY |
+                                                 RTE_MEMZONE_IOVA_CONTIG);
+                        if (mz == NULL) {
+                                PMD_DRV_LOG(ERR, "Cannot allocate bnxt vnic_attributes memory\n");
+                                return -ENOMEM;
+                        }
+                }
+                vnic->rss_mz = mz;
+                mz_phys_addr = mz->iova;
+
                 /* Allocate rss table and hash key */
-                vnic->rss_table =
-                        (void *)((char *)mz->addr + (entry_length * i));
+                vnic->rss_table = (void *)((char *)mz->addr);
+                vnic->rss_table_dma_addr = mz_phys_addr;
                 memset(vnic->rss_table, -1, entry_length);
-                vnic->rss_table_dma_addr = mz_phys_addr + (entry_length * i);
-                vnic->rss_hash_key = (void *)((char *)vnic->rss_table +
-                                              rss_table_size);
-
-                vnic->rss_hash_key_dma_addr = vnic->rss_table_dma_addr +
-                                              rss_table_size;
+                vnic->rss_hash_key = (void *)((char *)vnic->rss_table + rss_table_size);
+                vnic->rss_hash_key_dma_addr = vnic->rss_table_dma_addr + rss_table_size;
                 if (!reconfig) {
                         bnxt_prandom_bytes(vnic->rss_hash_key, HW_HASH_KEY_SIZE);
                         memcpy(bp->rss_conf.rss_key, vnic->rss_hash_key, HW_HASH_KEY_SIZE);

diff --git a/drivers/net/bnxt/bnxt_vnic.h b/drivers/net/bnxt/bnxt_vnic.h
index 25481fc..9055b93 100644
--- a/drivers/net/bnxt/bnxt_vnic.h
+++ b/drivers/net/bnxt/bnxt_vnic.h
@@ -28,6 +28,7 @@ struct bnxt_vnic_info {
         uint16_t mru;
         uint16_t hash_type;
         uint8_t hash_mode;
+        const struct rte_memzone *rss_mz;
         rte_iova_t rss_table_dma_addr;
         uint16_t *rss_table;
         rte_iova_t rss_hash_key_dma_addr;

From patchwork Thu Jan 20 09:12:28 2022
X-Patchwork-Submitter: Kalesh A P
X-Patchwork-Id: 106117
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Kalesh A P
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, ajit.khaparde@broadcom.com
Subject: [dpdk-dev] [PATCH 4/4] net/bnxt: fix VF resource allocation strategy
Date: Thu, 20 Jan 2022 14:42:28 +0530
Message-Id: <20220120091228.7076-5-kalesh-anakkur.purayil@broadcom.com>
In-Reply-To: <20220120091228.7076-1-kalesh-anakkur.purayil@broadcom.com>
References: <20220120091228.7076-1-kalesh-anakkur.purayil@broadcom.com>

From: Ajit Khaparde

1. VFs need a notification queue to handle async messages. But the
   current logic does not reserve a notification queue, leading to
   initialization failure in some cases.
2. With the current logic, the DPDK PF driver reserves only one VNIC
   for the VFs, leading to initialization failure with more than one
   Rx queue.

Added logic to distribute the number of NQs and VNICs from the pool
across the VFs and the PF.

While reserving resources for the VFs, the strategy is to keep both the
min and max values the same. This could result in a failure when there
are not enough resources to satisfy the request. Hence, instruct the FW
not to reserve all the minimum resources requested for the VF. The VF
driver can request the allocated resources from the FW during probe.

Fixes: b7778e8a1c00 ("net/bnxt: refactor to properly allocate resources for PF/VF")
Cc: stable@dpdk.org

Signed-off-by: Ajit Khaparde
Signed-off-by: Kalesh AP
Reviewed-by: Somnath Kotur
---
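
Note: the distribution rule described above, as a small worked sketch
(illustrative helper, not driver code): each pool is divided by (num_vfs + 1),
every VF receives one share, and the PF receives a share plus the remainder.
For example, 17 NQs split across 4 VFs gives each VF 3 and the PF 5.

#include <stdint.h>
#include <stdio.h>

/* Mirrors the pf_resc->num_* arithmetic in the diff below:
 * per-VF share = pool / (num_vfs + 1); PF share = same share + remainder. */
static void split_pool(uint16_t pool, uint16_t num_vfs,
                       uint16_t *per_vf, uint16_t *pf_share)
{
        *per_vf = pool / (num_vfs + 1);
        *pf_share = pool / (num_vfs + 1) + pool % (num_vfs + 1);
}

int main(void)
{
        uint16_t per_vf, pf_share;

        split_pool(17, 4, &per_vf, &pf_share); /* e.g. 17 NQs across 4 VFs */
        printf("per-VF: %u, PF: %u\n", per_vf, pf_share); /* prints 3 and 5 */
        return 0;
}
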

 drivers/net/bnxt/bnxt_hwrm.c | 32 +++++++++++++++++---------------
 drivers/net/bnxt/bnxt_hwrm.h |  2 ++
 2 files changed, 19 insertions(+), 15 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 5418fa1..b4aeec5 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -902,15 +902,7 @@ static int __bnxt_hwrm_func_qcaps(struct bnxt *bp)
         bp->max_l2_ctx = rte_le_to_cpu_16(resp->max_l2_ctxs);
         if (!BNXT_CHIP_P5(bp) && !bp->pdev->max_vfs)
                 bp->max_l2_ctx += bp->max_rx_em_flows;
-        /* TODO: For now, do not support VMDq/RFS on VFs. */
-        if (BNXT_PF(bp)) {
-                if (bp->pf->max_vfs)
-                        bp->max_vnics = 1;
-                else
-                        bp->max_vnics = rte_le_to_cpu_16(resp->max_vnics);
-        } else {
-                bp->max_vnics = 1;
-        }
+        bp->max_vnics = rte_le_to_cpu_16(resp->max_vnics);
         PMD_DRV_LOG(DEBUG, "Max l2_cntxts is %d vnics is %d\n",
                     bp->max_l2_ctx, bp->max_vnics);
         bp->max_stat_ctx = rte_le_to_cpu_16(resp->max_stat_ctx);
@@ -3495,7 +3487,7 @@ static int bnxt_hwrm_pf_func_cfg(struct bnxt *bp,
                         rte_cpu_to_le_16(pf_resc->num_hw_ring_grps);
         } else if (BNXT_HAS_NQ(bp)) {
                 enables |= HWRM_FUNC_CFG_INPUT_ENABLES_NUM_MSIX;
-                req.num_msix = rte_cpu_to_le_16(bp->max_nq_rings);
+                req.num_msix = rte_cpu_to_le_16(pf_resc->num_nq_rings);
         }
 
         req.flags = rte_cpu_to_le_32(bp->pf->func_cfg_flags);
@@ -3508,7 +3500,7 @@ static int bnxt_hwrm_pf_func_cfg(struct bnxt *bp,
         req.num_tx_rings = rte_cpu_to_le_16(pf_resc->num_tx_rings);
         req.num_rx_rings = rte_cpu_to_le_16(pf_resc->num_rx_rings);
         req.num_l2_ctxs = rte_cpu_to_le_16(pf_resc->num_l2_ctxs);
-        req.num_vnics = rte_cpu_to_le_16(bp->max_vnics);
+        req.num_vnics = rte_cpu_to_le_16(pf_resc->num_vnics);
         req.fid = rte_cpu_to_le_16(0xffff);
         req.enables = rte_cpu_to_le_32(enables);
 
@@ -3545,14 +3537,12 @@ bnxt_fill_vf_func_cfg_req_new(struct bnxt *bp,
         req->min_rx_rings = req->max_rx_rings;
         req->max_l2_ctxs = rte_cpu_to_le_16(bp->max_l2_ctx / (num_vfs + 1));
         req->min_l2_ctxs = req->max_l2_ctxs;
-        /* TODO: For now, do not support VMDq/RFS on VFs. */
-        req->max_vnics = rte_cpu_to_le_16(1);
+        req->max_vnics = rte_cpu_to_le_16(bp->max_vnics / (num_vfs + 1));
         req->min_vnics = req->max_vnics;
         req->max_hw_ring_grps = rte_cpu_to_le_16(bp->max_ring_grps /
                                                  (num_vfs + 1));
         req->min_hw_ring_grps = req->max_hw_ring_grps;
-        req->flags =
-                rte_cpu_to_le_16(HWRM_FUNC_VF_RESOURCE_CFG_INPUT_FLAGS_MIN_GUARANTEED);
+        req->max_msix = rte_cpu_to_le_16(bp->max_nq_rings / (num_vfs + 1));
 }
 
 static void
@@ -3612,6 +3602,8 @@ static int bnxt_update_max_resources(struct bnxt *bp,
         bp->max_rx_rings -= rte_le_to_cpu_16(resp->alloc_rx_rings);
         bp->max_l2_ctx -= rte_le_to_cpu_16(resp->alloc_l2_ctx);
         bp->max_ring_grps -= rte_le_to_cpu_16(resp->alloc_hw_ring_grps);
+        bp->max_nq_rings -= rte_le_to_cpu_16(resp->alloc_msix);
+        bp->max_vnics -= rte_le_to_cpu_16(resp->alloc_vnics);
 
         HWRM_UNLOCK();
 
@@ -3685,6 +3677,8 @@ static int bnxt_query_pf_resources(struct bnxt *bp,
         pf_resc->num_rx_rings = rte_le_to_cpu_16(resp->alloc_rx_rings);
         pf_resc->num_l2_ctxs = rte_le_to_cpu_16(resp->alloc_l2_ctx);
         pf_resc->num_hw_ring_grps = rte_le_to_cpu_32(resp->alloc_hw_ring_grps);
+        pf_resc->num_nq_rings = rte_le_to_cpu_32(resp->alloc_msix);
+        pf_resc->num_vnics = rte_le_to_cpu_16(resp->alloc_vnics);
         bp->pf->evb_mode = resp->evb_mode;
 
         HWRM_UNLOCK();
@@ -3705,6 +3699,8 @@ bnxt_calculate_pf_resources(struct bnxt *bp,
                 pf_resc->num_rx_rings = bp->max_rx_rings;
                 pf_resc->num_l2_ctxs = bp->max_l2_ctx;
                 pf_resc->num_hw_ring_grps = bp->max_ring_grps;
+                pf_resc->num_nq_rings = bp->max_nq_rings;
+                pf_resc->num_vnics = bp->max_vnics;
 
                 return;
         }
@@ -3723,6 +3719,10 @@ bnxt_calculate_pf_resources(struct bnxt *bp,
                                bp->max_l2_ctx % (num_vfs + 1);
         pf_resc->num_hw_ring_grps = bp->max_ring_grps / (num_vfs + 1) +
                                     bp->max_ring_grps % (num_vfs + 1);
+        pf_resc->num_nq_rings = bp->max_nq_rings / (num_vfs + 1) +
+                                bp->max_nq_rings % (num_vfs + 1);
+        pf_resc->num_vnics = bp->max_vnics / (num_vfs + 1) +
+                             bp->max_vnics % (num_vfs + 1);
 }
 
 int bnxt_hwrm_allocate_pf_only(struct bnxt *bp)
@@ -3898,6 +3898,8 @@ bnxt_update_pf_resources(struct bnxt *bp,
         bp->max_tx_rings = pf_resc->num_tx_rings;
         bp->max_rx_rings = pf_resc->num_rx_rings;
         bp->max_ring_grps = pf_resc->num_hw_ring_grps;
+        bp->max_nq_rings = pf_resc->num_nq_rings;
+        bp->max_vnics = pf_resc->num_vnics;
 }
 
 static int32_t

diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 21e1b7a..63f8d8c 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -114,6 +114,8 @@ struct bnxt_pf_resource_info {
         uint16_t num_rx_rings;
         uint16_t num_cp_rings;
         uint16_t num_l2_ctxs;
+        uint16_t num_nq_rings;
+        uint16_t num_vnics;
         uint32_t num_hw_ring_grps;
 };