From patchwork Tue Oct 12 21:18:43 2021
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 101277
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Ajit Khaparde
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Lance Richardson, Somnath Kotur
Date: Tue, 12 Oct 2021 14:18:43 -0700
Message-Id: <20211012211845.71121-2-ajit.khaparde@broadcom.com>
In-Reply-To: <20211012211845.71121-1-ajit.khaparde@broadcom.com>
References: <20211012211845.71121-1-ajit.khaparde@broadcom.com>
Subject: [dpdk-dev] [PATCH v4 1/3] net/bnxt: create aggregation rings when needed

Aggregation rings are needed when the PMD has to support jumbo frames
or LRO. Currently we create the aggregation rings regardless of whether
jumbo frames or LRO are enabled. This causes unnecessary allocation of
mbufs, requiring a larger mbuf pool which is not used at all. This patch
modifies the code to create the aggregation rings only when needed.

Signed-off-by: Ajit Khaparde
Reviewed-by: Lance Richardson
Reviewed-by: Somnath Kotur
---
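Notes (below the fold): the PMD sets data->scattered_rx when Rx scatter
or LRO is enabled, or when the maximum packet length exceeds the mbuf
data size, and bnxt_need_agg_ring() keys off that flag. A minimal sketch
of an application configuration that ends up needing the aggregation
rings, using the offload flag names of this release; the helper name and
the jumbo length below are illustrative, not part of this patch:

#include <rte_ethdev.h>

/* Illustrative only: any of the settings below makes the PMD set
 * data->scattered_rx, which is what bnxt_need_agg_ring() checks.
 */
static int port_config_needing_agg_rings(uint16_t port_id,
                                         uint16_t nb_rxq, uint16_t nb_txq)
{
    struct rte_eth_conf conf = { 0 };

    conf.rxmode.offloads = DEV_RX_OFFLOAD_SCATTER |
                           DEV_RX_OFFLOAD_TCP_LRO |
                           DEV_RX_OFFLOAD_JUMBO_FRAME;
    conf.rxmode.max_rx_pkt_len = 9000; /* jumbo frame, example value */

    return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}

With none of these set, the driver can now skip the aggregation rings
and avoid the larger mbuf pool entirely.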
 drivers/net/bnxt/bnxt_hwrm.c |   9 +++
 drivers/net/bnxt/bnxt_ring.c | 148 ++++++++++++++++++++++-------------
 drivers/net/bnxt/bnxt_rxq.c  |  72 +++++++++++------
 drivers/net/bnxt/bnxt_rxq.h  |   2 +
 drivers/net/bnxt/bnxt_rxr.c  | 111 +++++++++++++++-----------
 5 files changed, 216 insertions(+), 126 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 503add42fd..181e607d7b 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -2741,6 +2741,14 @@ void bnxt_free_hwrm_rx_ring(struct bnxt *bp, int queue_index)
 	if (BNXT_HAS_RING_GRPS(bp))
 		bp->grp_info[queue_index].rx_fw_ring_id = INVALID_HW_RING_ID;
 
+	/* Check agg ring struct explicitly.
+	 * bnxt_need_agg_ring() returns the current state of offload flags,
+	 * but we may have to deal with agg ring struct before the offload
+	 * flags are updated.
+	 */
+	if (!bnxt_need_agg_ring(bp->eth_dev) || rxr->ag_ring_struct == NULL)
+		goto no_agg;
+
 	ring = rxr->ag_ring_struct;
 	bnxt_hwrm_ring_free(bp, ring,
 			    BNXT_CHIP_P5(bp) ?
@@ -2750,6 +2758,7 @@ void bnxt_free_hwrm_rx_ring(struct bnxt *bp, int queue_index)
 	if (BNXT_HAS_RING_GRPS(bp))
 		bp->grp_info[queue_index].ag_fw_ring_id = INVALID_HW_RING_ID;
 
+no_agg:
 	bnxt_hwrm_stat_ctx_free(bp, cpr);
 
 	bnxt_free_cp_ring(bp, cpr);
diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index aaad08e5e5..08cefa1baa 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -104,13 +104,19 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
 	struct bnxt_ring *cp_ring = cp_ring_info->cp_ring_struct;
 	struct bnxt_rx_ring_info *rx_ring_info = rxq ? rxq->rx_ring : NULL;
 	struct bnxt_tx_ring_info *tx_ring_info = txq ? txq->tx_ring : NULL;
-	struct bnxt_ring *tx_ring;
-	struct bnxt_ring *rx_ring;
-	struct rte_pci_device *pdev = bp->pdev;
 	uint64_t rx_offloads = bp->eth_dev->data->dev_conf.rxmode.offloads;
+	int ag_ring_start, ag_bitmap_start, tpa_info_start;
+	int ag_vmem_start, cp_ring_start, nq_ring_start;
+	int total_alloc_len, rx_ring_start, rx_ring_len;
+	struct rte_pci_device *pdev = bp->pdev;
+	struct bnxt_ring *tx_ring, *rx_ring;
 	const struct rte_memzone *mz = NULL;
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	rte_iova_t mz_phys_addr;
+	int ag_bitmap_len = 0;
+	int tpa_info_len = 0;
+	int ag_vmem_len = 0;
+	int ag_ring_len = 0;
 
 	int stats_len = (tx_ring_info || rx_ring_info) ?
 		RTE_CACHE_LINE_ROUNDUP(sizeof(struct hwrm_stat_ctx_query_output) -
@@ -138,14 +144,12 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
 			RTE_CACHE_LINE_ROUNDUP(rx_ring_info->
 					       rx_ring_struct->vmem_size) : 0;
 	rx_vmem_len = RTE_ALIGN(rx_vmem_len, 128);
-	int ag_vmem_start = 0;
-	int ag_vmem_len = 0;
-	int cp_ring_start = 0;
-	int nq_ring_start = 0;
 
 	ag_vmem_start = rx_vmem_start + rx_vmem_len;
-	ag_vmem_len = rx_ring_info ? RTE_CACHE_LINE_ROUNDUP(
-				rx_ring_info->ag_ring_struct->vmem_size) : 0;
+	if (bnxt_need_agg_ring(bp->eth_dev))
+		ag_vmem_len = rx_ring_info && rx_ring_info->ag_ring_struct ?
+		RTE_CACHE_LINE_ROUNDUP(rx_ring_info->ag_ring_struct->vmem_size) : 0;
+
 	cp_ring_start = ag_vmem_start + ag_vmem_len;
 	cp_ring_start = RTE_ALIGN(cp_ring_start, 4096);
@@ -164,36 +168,36 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
 			  sizeof(struct tx_bd_long)) : 0;
 	tx_ring_len = RTE_ALIGN(tx_ring_len, 4096);
 
-	int rx_ring_start = tx_ring_start + tx_ring_len;
+	rx_ring_start = tx_ring_start + tx_ring_len;
 	rx_ring_start = RTE_ALIGN(rx_ring_start, 4096);
-	int rx_ring_len = rx_ring_info ?
+	rx_ring_len = rx_ring_info ?
 		RTE_CACHE_LINE_ROUNDUP(rx_ring_info->rx_ring_struct->ring_size *
 		sizeof(struct rx_prod_pkt_bd)) : 0;
 	rx_ring_len = RTE_ALIGN(rx_ring_len, 4096);
 
-	int ag_ring_start = rx_ring_start + rx_ring_len;
+	ag_ring_start = rx_ring_start + rx_ring_len;
 	ag_ring_start = RTE_ALIGN(ag_ring_start, 4096);
-	int ag_ring_len = rx_ring_len * AGG_RING_SIZE_FACTOR;
-	ag_ring_len = RTE_ALIGN(ag_ring_len, 4096);
 
-	int ag_bitmap_start = ag_ring_start + ag_ring_len;
-	int ag_bitmap_len = rx_ring_info ?
+	if (bnxt_need_agg_ring(bp->eth_dev)) {
+		ag_ring_len = rx_ring_len * AGG_RING_SIZE_FACTOR;
+		ag_ring_len = RTE_ALIGN(ag_ring_len, 4096);
+
+		ag_bitmap_len = rx_ring_info ?
 		RTE_CACHE_LINE_ROUNDUP(rte_bitmap_get_memory_footprint(
 			rx_ring_info->rx_ring_struct->ring_size *
 			AGG_RING_SIZE_FACTOR)) : 0;
 
-	int tpa_info_start = ag_bitmap_start + ag_bitmap_len;
-	int tpa_info_len = 0;
-
-	if (rx_ring_info && (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
-		int tpa_max = BNXT_TPA_MAX_AGGS(bp);
+		if (rx_ring_info && (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+			int tpa_max = BNXT_TPA_MAX_AGGS(bp);
 
-		tpa_info_len = tpa_max * sizeof(struct bnxt_tpa_info);
-		tpa_info_len = RTE_CACHE_LINE_ROUNDUP(tpa_info_len);
+			tpa_info_len = tpa_max * sizeof(struct bnxt_tpa_info);
+			tpa_info_len = RTE_CACHE_LINE_ROUNDUP(tpa_info_len);
+		}
 	}
 
-	int total_alloc_len = tpa_info_start;
-	total_alloc_len += tpa_info_len;
+	ag_bitmap_start = ag_ring_start + ag_ring_len;
+	tpa_info_start = ag_bitmap_start + ag_bitmap_len;
+	total_alloc_len = tpa_info_start + tpa_info_len;
 
 	snprintf(mz_name, RTE_MEMZONE_NAMESIZE,
 		 "bnxt_" PCI_PRI_FMT "-%04x_%s", pdev->addr.domain,
@@ -254,34 +258,36 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
 				(struct rte_mbuf **)rx_ring->vmem;
 		}
 
-		rx_ring = rx_ring_info->ag_ring_struct;
-
-		rx_ring->bd = ((char *)mz->addr + ag_ring_start);
-		rx_ring_info->ag_desc_ring =
-			(struct rx_prod_pkt_bd *)rx_ring->bd;
-		rx_ring->bd_dma = mz->iova + ag_ring_start;
-		rx_ring_info->ag_desc_mapping = rx_ring->bd_dma;
-		rx_ring->mem_zone = (const void *)mz;
-
-		if (!rx_ring->bd)
-			return -ENOMEM;
-		if (rx_ring->vmem_size) {
-			rx_ring->vmem =
-				(void **)((char *)mz->addr + ag_vmem_start);
-			rx_ring_info->ag_buf_ring =
-				(struct rte_mbuf **)rx_ring->vmem;
+		if (bnxt_need_agg_ring(bp->eth_dev)) {
+			rx_ring = rx_ring_info->ag_ring_struct;
+
+			rx_ring->bd = ((char *)mz->addr + ag_ring_start);
+			rx_ring_info->ag_desc_ring =
+				(struct rx_prod_pkt_bd *)rx_ring->bd;
+			rx_ring->bd_dma = mz->iova + ag_ring_start;
+			rx_ring_info->ag_desc_mapping = rx_ring->bd_dma;
+			rx_ring->mem_zone = (const void *)mz;
+
+			if (!rx_ring->bd)
+				return -ENOMEM;
+			if (rx_ring->vmem_size) {
+				rx_ring->vmem =
+					(void **)((char *)mz->addr + ag_vmem_start);
+				rx_ring_info->ag_buf_ring =
+					(struct rte_mbuf **)rx_ring->vmem;
+			}
+
+			rx_ring_info->ag_bitmap =
+				rte_bitmap_init(rx_ring_info->rx_ring_struct->ring_size *
+						AGG_RING_SIZE_FACTOR, (uint8_t *)mz->addr +
+						ag_bitmap_start, ag_bitmap_len);
+
+			/* TPA info */
+			if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+				rx_ring_info->tpa_info =
+					((struct bnxt_tpa_info *)
+					 ((char *)mz->addr + tpa_info_start));
 		}
-
-		rx_ring_info->ag_bitmap =
-			rte_bitmap_init(rx_ring_info->rx_ring_struct->ring_size *
-					AGG_RING_SIZE_FACTOR, (uint8_t *)mz->addr +
-					ag_bitmap_start, ag_bitmap_len);
-
-		/* TPA info */
-		if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
-			rx_ring_info->tpa_info =
-				((struct bnxt_tpa_info *)((char *)mz->addr +
-							  tpa_info_start));
 	}
 
 	cp_ring->bd = ((char *)mz->addr + cp_ring_start);
@@ -550,6 +556,9 @@ static int bnxt_alloc_rx_agg_ring(struct bnxt *bp, int queue_index)
 	uint8_t ring_type;
 	int rc = 0;
 
+	if (!bnxt_need_agg_ring(bp->eth_dev))
+		return 0;
+
 	ring->fw_rx_ring_id = rxr->rx_ring_struct->fw_ring_id;
 
 	if (BNXT_CHIP_P5(bp)) {
@@ -590,7 +599,7 @@ int bnxt_alloc_hwrm_rx_ring(struct bnxt *bp, int queue_index)
 	 */
 	cp_ring->ring_size = rxr->rx_ring_struct->ring_size * 2;
 
-	if (bp->eth_dev->data->scattered_rx)
+	if (bnxt_need_agg_ring(bp->eth_dev))
 		cp_ring->ring_size *= AGG_RING_SIZE_FACTOR;
 
 	cp_ring->ring_mask = cp_ring->ring_size - 1;
@@ -645,7 +654,8 @@ int bnxt_alloc_hwrm_rx_ring(struct bnxt *bp, int queue_index)
 			goto err_out;
 		}
 		bnxt_db_write(&rxr->rx_db, rxr->rx_raw_prod);
-		bnxt_db_write(&rxr->ag_db, rxr->ag_raw_prod);
+		if (bnxt_need_agg_ring(bp->eth_dev))
+			bnxt_db_write(&rxr->ag_db, rxr->ag_raw_prod);
 	}
 	rxq->index = queue_index;
 #if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
@@ -683,8 +693,11 @@ static void bnxt_init_all_rings(struct bnxt *bp)
 		ring = rxr->rx_ring_struct;
 		ring->fw_ring_id = INVALID_HW_RING_ID;
 		/* Rx-AGG */
-		ring = rxr->ag_ring_struct;
-		ring->fw_ring_id = INVALID_HW_RING_ID;
+		if (bnxt_need_agg_ring(bp->eth_dev)) {
+			ring = rxr->ag_ring_struct;
+			if (ring != NULL)
+				ring->fw_ring_id = INVALID_HW_RING_ID;
+		}
 	}
 	for (i = 0; i < bp->tx_cp_nr_rings; i++) {
 		txq = bp->tx_queues[i];
@@ -712,6 +725,29 @@ int bnxt_alloc_hwrm_rings(struct bnxt *bp)
 	bnxt_init_all_rings(bp);
 
 	for (i = 0; i < bp->rx_cp_nr_rings; i++) {
+		unsigned int soc_id = bp->eth_dev->device->numa_node;
+		struct bnxt_rx_queue *rxq = bp->rx_queues[i];
+		struct bnxt_rx_ring_info *rxr = rxq->rx_ring;
+		struct bnxt_ring *ring;
+
+		if (bnxt_need_agg_ring(bp->eth_dev)) {
+			ring = rxr->ag_ring_struct;
+			if (ring == NULL) {
+				bnxt_free_rxq_mem(rxq);
+
+				rc = bnxt_init_rx_ring_struct(rxq, soc_id);
+				if (rc)
+					goto err_out;
+
+				rc = bnxt_alloc_rings(bp, soc_id,
+						      i, NULL, rxq,
+						      rxq->cp_ring, NULL,
+						      "rxr");
+				if (rc)
+					goto err_out;
+			}
+		}
+
 		rc = bnxt_alloc_hwrm_rx_ring(bp, i);
 		if (rc)
 			goto err_out;
diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
index 2eb7a3cb29..38ec4aa14b 100644
--- a/drivers/net/bnxt/bnxt_rxq.c
+++ b/drivers/net/bnxt/bnxt_rxq.c
@@ -20,6 +20,17 @@
  * RX Queues
  */
 
+/* Determine whether the current configuration needs aggregation ring in HW. */
+int bnxt_need_agg_ring(struct rte_eth_dev *eth_dev)
+{
+	/* scattered_rx will be true if OFFLOAD_SCATTER is enabled,
+	 * if LRO is enabled, or if the max packet len is greater than the
+	 * mbuf data size. So AGG ring will be needed whenever scattered_rx
+	 * is set.
+	 */
+	return eth_dev->data->scattered_rx ? 1 : 0;
+}
+
 void bnxt_free_rxq_stats(struct bnxt_rx_queue *rxq)
 {
 	if (rxq && rxq->cp_ring && rxq->cp_ring->hw_stats)
@@ -203,6 +214,9 @@ void bnxt_rx_queue_release_mbufs(struct bnxt_rx_queue *rxq)
 		}
 	}
 	/* Free up mbufs in Agg ring */
+	if (!bnxt_need_agg_ring(rxq->bp->eth_dev))
+		return;
+
 	sw_ring = rxq->rx_ring->ag_buf_ring;
 	if (sw_ring) {
 		for (i = 0;
@@ -240,40 +254,48 @@ void bnxt_free_rx_mbufs(struct bnxt *bp)
 	}
 }
 
+void bnxt_free_rxq_mem(struct bnxt_rx_queue *rxq)
+{
+	bnxt_rx_queue_release_mbufs(rxq);
+
+	/* Free RX, AGG ring hardware descriptors */
+	if (rxq->rx_ring) {
+		bnxt_free_ring(rxq->rx_ring->rx_ring_struct);
+		rte_free(rxq->rx_ring->rx_ring_struct);
+		rxq->rx_ring->rx_ring_struct = NULL;
+		/* Free RX Agg ring hardware descriptors */
+		bnxt_free_ring(rxq->rx_ring->ag_ring_struct);
+		rte_free(rxq->rx_ring->ag_ring_struct);
+		rxq->rx_ring->ag_ring_struct = NULL;
+
+		rte_free(rxq->rx_ring);
+		rxq->rx_ring = NULL;
+	}
+	/* Free RX completion ring hardware descriptors */
+	if (rxq->cp_ring) {
+		bnxt_free_ring(rxq->cp_ring->cp_ring_struct);
+		rte_free(rxq->cp_ring->cp_ring_struct);
+		rxq->cp_ring->cp_ring_struct = NULL;
+		rte_free(rxq->cp_ring);
+		rxq->cp_ring = NULL;
+	}
+
+	bnxt_free_rxq_stats(rxq);
+	rte_memzone_free(rxq->mz);
+	rxq->mz = NULL;
+}
+
 void bnxt_rx_queue_release_op(struct rte_eth_dev *dev, uint16_t queue_idx)
 {
 	struct bnxt_rx_queue *rxq = dev->data->rx_queues[queue_idx];
 
-	if (rxq) {
+	if (rxq != NULL) {
 		if (is_bnxt_in_error(rxq->bp))
 			return;
 
 		bnxt_free_hwrm_rx_ring(rxq->bp, rxq->queue_id);
-		bnxt_rx_queue_release_mbufs(rxq);
-
-		/* Free RX ring hardware descriptors */
-		if (rxq->rx_ring) {
-			bnxt_free_ring(rxq->rx_ring->rx_ring_struct);
-			rte_free(rxq->rx_ring->rx_ring_struct);
-			/* Free RX Agg ring hardware descriptors */
-			bnxt_free_ring(rxq->rx_ring->ag_ring_struct);
-			rte_free(rxq->rx_ring->ag_ring_struct);
-
-			rte_free(rxq->rx_ring);
-		}
-		/* Free RX completion ring hardware descriptors */
-		if (rxq->cp_ring) {
-			bnxt_free_ring(rxq->cp_ring->cp_ring_struct);
-			rte_free(rxq->cp_ring->cp_ring_struct);
-			rte_free(rxq->cp_ring);
-		}
-
-		bnxt_free_rxq_stats(rxq);
-		rte_memzone_free(rxq->mz);
-		rxq->mz = NULL;
-
+		bnxt_free_rxq_mem(rxq);
 		rte_free(rxq);
-
 		dev->data->rx_queues[queue_idx] = NULL;
 	}
 }
diff --git a/drivers/net/bnxt/bnxt_rxq.h b/drivers/net/bnxt/bnxt_rxq.h
index 9bb9352feb..0331c23810 100644
--- a/drivers/net/bnxt/bnxt_rxq.h
+++ b/drivers/net/bnxt/bnxt_rxq.h
@@ -63,4 +63,6 @@ int bnxt_rx_queue_start(struct rte_eth_dev *dev,
 int bnxt_rx_queue_stop(struct rte_eth_dev *dev,
 		       uint16_t rx_queue_id);
 void bnxt_rx_queue_release_mbufs(struct bnxt_rx_queue *rxq);
+int bnxt_need_agg_ring(struct rte_eth_dev *eth_dev);
+void bnxt_free_rxq_mem(struct bnxt_rx_queue *rxq);
 #endif
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 4c1ee4294e..aeacc60a01 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -1223,57 +1223,75 @@ int bnxt_init_rx_ring_struct(struct bnxt_rx_queue *rxq, unsigned int socket_id)
 
 	rxq->rx_buf_size = BNXT_MAX_PKT_LEN + sizeof(struct rte_mbuf);
 
-	rxr = rte_zmalloc_socket("bnxt_rx_ring",
-				 sizeof(struct bnxt_rx_ring_info),
-				 RTE_CACHE_LINE_SIZE, socket_id);
-	if (rxr == NULL)
-		return -ENOMEM;
-	rxq->rx_ring = rxr;
-
-	ring = rte_zmalloc_socket("bnxt_rx_ring_struct",
-				  sizeof(struct bnxt_ring),
-				  RTE_CACHE_LINE_SIZE, socket_id);
-	if (ring == NULL)
-		return -ENOMEM;
-	rxr->rx_ring_struct = ring;
-	ring->ring_size = rte_align32pow2(rxq->nb_rx_desc);
-	ring->ring_mask = ring->ring_size - 1;
-	ring->bd = (void *)rxr->rx_desc_ring;
-	ring->bd_dma = rxr->rx_desc_mapping;
-
-	/* Allocate extra rx ring entries for vector rx. */
-	ring->vmem_size = sizeof(struct rte_mbuf *) *
-			  (ring->ring_size + BNXT_RX_EXTRA_MBUF_ENTRIES);
+	if (rxq->rx_ring != NULL) {
+		rxr = rxq->rx_ring;
+	} else {
 
-	ring->vmem = (void **)&rxr->rx_buf_ring;
-	ring->fw_ring_id = INVALID_HW_RING_ID;
+		rxr = rte_zmalloc_socket("bnxt_rx_ring",
+					 sizeof(struct bnxt_rx_ring_info),
+					 RTE_CACHE_LINE_SIZE, socket_id);
+		if (rxr == NULL)
+			return -ENOMEM;
+		rxq->rx_ring = rxr;
+	}
 
-	cpr = rte_zmalloc_socket("bnxt_rx_ring",
-				 sizeof(struct bnxt_cp_ring_info),
-				 RTE_CACHE_LINE_SIZE, socket_id);
-	if (cpr == NULL)
-		return -ENOMEM;
-	rxq->cp_ring = cpr;
+	if (rxr->rx_ring_struct == NULL) {
+		ring = rte_zmalloc_socket("bnxt_rx_ring_struct",
+					  sizeof(struct bnxt_ring),
+					  RTE_CACHE_LINE_SIZE, socket_id);
+		if (ring == NULL)
+			return -ENOMEM;
+		rxr->rx_ring_struct = ring;
+		ring->ring_size = rte_align32pow2(rxq->nb_rx_desc);
+		ring->ring_mask = ring->ring_size - 1;
+		ring->bd = (void *)rxr->rx_desc_ring;
+		ring->bd_dma = rxr->rx_desc_mapping;
+
+		/* Allocate extra rx ring entries for vector rx. */
+		ring->vmem_size = sizeof(struct rte_mbuf *) *
+				  (ring->ring_size + BNXT_RX_EXTRA_MBUF_ENTRIES);
+
+		ring->vmem = (void **)&rxr->rx_buf_ring;
+		ring->fw_ring_id = INVALID_HW_RING_ID;
+	}
 
-	ring = rte_zmalloc_socket("bnxt_rx_ring_struct",
-				  sizeof(struct bnxt_ring),
-				  RTE_CACHE_LINE_SIZE, socket_id);
-	if (ring == NULL)
-		return -ENOMEM;
-	cpr->cp_ring_struct = ring;
+	if (rxq->cp_ring != NULL) {
+		cpr = rxq->cp_ring;
+	} else {
+		cpr = rte_zmalloc_socket("bnxt_rx_ring",
+					 sizeof(struct bnxt_cp_ring_info),
+					 RTE_CACHE_LINE_SIZE, socket_id);
+		if (cpr == NULL)
+			return -ENOMEM;
+		rxq->cp_ring = cpr;
+	}
 
-	/* Allocate two completion slots per entry in desc ring. */
-	ring->ring_size = rxr->rx_ring_struct->ring_size * 2;
-	ring->ring_size *= AGG_RING_SIZE_FACTOR;
+	if (cpr->cp_ring_struct == NULL) {
+		ring = rte_zmalloc_socket("bnxt_rx_ring_struct",
+					  sizeof(struct bnxt_ring),
+					  RTE_CACHE_LINE_SIZE, socket_id);
+		if (ring == NULL)
+			return -ENOMEM;
+		cpr->cp_ring_struct = ring;
+
+		/* Allocate two completion slots per entry in desc ring. */
+		ring->ring_size = rxr->rx_ring_struct->ring_size * 2;
+		if (bnxt_need_agg_ring(rxq->bp->eth_dev))
+			ring->ring_size *= AGG_RING_SIZE_FACTOR;
+
+		ring->ring_size = rte_align32pow2(ring->ring_size);
+		ring->ring_mask = ring->ring_size - 1;
+		ring->bd = (void *)cpr->cp_desc_ring;
+		ring->bd_dma = cpr->cp_desc_mapping;
+		ring->vmem_size = 0;
+		ring->vmem = NULL;
+		ring->fw_ring_id = INVALID_HW_RING_ID;
+	}
 
-	ring->ring_size = rte_align32pow2(ring->ring_size);
-	ring->ring_mask = ring->ring_size - 1;
-	ring->bd = (void *)cpr->cp_desc_ring;
-	ring->bd_dma = cpr->cp_desc_mapping;
-	ring->vmem_size = 0;
-	ring->vmem = NULL;
-	ring->fw_ring_id = INVALID_HW_RING_ID;
+	if (!bnxt_need_agg_ring(rxq->bp->eth_dev))
+		return 0;
+
+	rxr = rxq->rx_ring;
 
 	/* Allocate Aggregator rings */
 	ring = rte_zmalloc_socket("bnxt_rx_ring_struct",
				  sizeof(struct bnxt_ring),
@@ -1351,6 +1369,9 @@ int bnxt_init_one_rx_ring(struct bnxt_rx_queue *rxq)
 		rxr->rx_buf_ring[i] = &rxq->fake_mbuf;
 	}
 
+	if (!bnxt_need_agg_ring(rxq->bp->eth_dev))
+		return 0;
+
 	ring = rxr->ag_ring_struct;
 	type = RX_PROD_AGG_BD_TYPE_RX_PROD_AGG;
 	bnxt_init_rxbds(ring, type, size);

From patchwork Tue Oct 12 21:18:44 2021
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 101278
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Ajit Khaparde
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, stable@dpdk.org, Lance Richardson
Date: Tue, 12 Oct 2021 14:18:44 -0700
Message-Id: <20211012211845.71121-3-ajit.khaparde@broadcom.com>
In-Reply-To: <20211012211845.71121-1-ajit.khaparde@broadcom.com>
References: <20211012211845.71121-1-ajit.khaparde@broadcom.com>
Subject: [dpdk-dev] [PATCH v4 2/3] net/bnxt: fix Rx queue state on start

Fix Rx queue state on device start. The state of the Rx queues could
be incorrect in some cases because instead of updating the state for
all the Rx queues, we were updating it only for the queues belonging
to a VNIC.

Fixes: 0105ea1296c9 ("net/bnxt: support runtime queue setup")
Cc: stable@dpdk.org

Signed-off-by: Ajit Khaparde
Reviewed-by: Lance Richardson
---
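Notes (below the fold): the state being fixed here is what the
deferred-start flow observes. A sketch of that usage; the helper and
its parameters are illustrative, not part of this patch:

#include <rte_ethdev.h>

/* Illustrative only: a queue set up with rx_deferred_start must stay
 * STOPPED across rte_eth_dev_start() and move to STARTED only via the
 * explicit per-queue call below.
 */
static int setup_deferred_rxq(uint16_t port_id, uint16_t queue_id,
                              uint16_t nb_desc, struct rte_mempool *mp)
{
    struct rte_eth_dev_info dev_info;
    struct rte_eth_rxconf rxconf;
    int rc;

    rc = rte_eth_dev_info_get(port_id, &dev_info);
    if (rc != 0)
        return rc;

    rxconf = dev_info.default_rxconf;
    rxconf.rx_deferred_start = 1; /* skip this queue in dev_start() */

    rc = rte_eth_rx_queue_setup(port_id, queue_id, nb_desc,
                                rte_eth_dev_socket_id(port_id),
                                &rxconf, mp);
    if (rc != 0)
        return rc;

    /* ... rte_eth_dev_start(port_id) happens elsewhere ... */

    /* With this fix, rx_queue_state is tracked for every Rx queue,
     * not only for the queues that happen to belong to a VNIC.
     */
    return rte_eth_dev_rx_queue_start(port_id, queue_id);
}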
 drivers/net/bnxt/bnxt_ethdev.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index aa7e7fdc85..a98f93ab29 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -482,12 +482,6 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
 			rxq->vnic->fw_grp_ids[j] = INVALID_HW_RING_ID;
 		else
 			vnic->rx_queue_cnt++;
-
-		if (!rxq->rx_deferred_start) {
-			bp->eth_dev->data->rx_queue_state[j] =
-				RTE_ETH_QUEUE_STATE_STARTED;
-			rxq->rx_started = true;
-		}
 	}
 
 	PMD_DRV_LOG(DEBUG, "vnic->rx_queue_cnt = %d\n", vnic->rx_queue_cnt);
@@ -824,6 +818,16 @@ static int bnxt_start_nic(struct bnxt *bp)
 		}
 	}
 
+	for (j = 0; j < bp->rx_nr_rings; j++) {
+		struct bnxt_rx_queue *rxq = bp->rx_queues[j];
+
+		if (!rxq->rx_deferred_start) {
+			bp->eth_dev->data->rx_queue_state[j] =
+				RTE_ETH_QUEUE_STATE_STARTED;
+			rxq->rx_started = true;
+		}
+	}
+
 	rc = bnxt_hwrm_cfa_l2_set_rx_mask(bp, &bp->vnic_info[0], 0, NULL);
 	if (rc) {
 		PMD_DRV_LOG(ERR,

From patchwork Tue Oct 12 21:18:45 2021
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 101279
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Ajit Khaparde
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Lance Richardson, Kalesh AP
Date: Tue, 12 Oct 2021 14:18:45 -0700
Message-Id: <20211012211845.71121-4-ajit.khaparde@broadcom.com>
In-Reply-To: <20211012211845.71121-1-ajit.khaparde@broadcom.com>
References: <20211012211845.71121-1-ajit.khaparde@broadcom.com>
Subject: [dpdk-dev] [PATCH v4 3/3] net/bnxt: enhance support for RSS action

Enhance support for the RSS action in the non-TruFlow path. This
allows the user or application to update the RSS settings using the
rte_flow API.

Signed-off-by: Ajit Khaparde
Reviewed-by: Lance Richardson
Reviewed-by: Kalesh AP
---
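Notes (below the fold): a sketch of a flow that exercises this
RSS-update path; the queue list, RSS types, and helper name are
illustrative, not part of this patch:

#include <rte_flow.h>

/* Illustrative only: an eth-only pattern parses to an empty filter
 * (HWRM_CFA_CONFIG), so the RSS action updates the VNIC RSS settings
 * instead of creating a filter in HW.
 */
static struct rte_flow *update_rss_via_flow(uint16_t port_id,
                                            struct rte_flow_error *error)
{
    static const uint16_t queues[] = { 0, 1, 2, 3 };
    struct rte_flow_action_rss rss = {
        .func = RTE_ETH_HASH_FUNCTION_TOEPLITZ,
        .level = 0, /* 0 => default, per the hash level convention */
        .types = ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP,
        .key_len = 0, /* keep the current hash key */
        .key = NULL,
        .queue_num = RTE_DIM(queues),
        .queue = queues,
    };
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    return rte_flow_create(port_id, &attr, pattern, actions, error);
}

In testpmd the equivalent would be something like:
  flow create 0 ingress pattern eth / end actions rss types ipv4 ipv4-tcp end queues 0 1 2 3 end / end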
 drivers/net/bnxt/bnxt_filter.h |   1 +
 drivers/net/bnxt/bnxt_flow.c   | 196 ++++++++++++++++++++++++++++++++-
 2 files changed, 196 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_filter.h b/drivers/net/bnxt/bnxt_filter.h
index 8bae0c4c72..587932c96f 100644
--- a/drivers/net/bnxt/bnxt_filter.h
+++ b/drivers/net/bnxt/bnxt_filter.h
@@ -43,6 +43,7 @@ struct bnxt_filter_info {
 #define HWRM_CFA_EM_FILTER		1
 #define HWRM_CFA_NTUPLE_FILTER		2
 #define HWRM_CFA_TUNNEL_REDIRECT_FILTER	3
+#define HWRM_CFA_CONFIG			4
 	uint8_t		filter_type;
 	uint32_t	dst_id;
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index 59489b591a..b2ebb5634e 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -738,6 +738,10 @@ bnxt_validate_and_parse_flow_type(struct bnxt *bp,
 	filter->enables = en;
 	filter->valid_flags = valid_flags;
 
+	/* Items parsed but no filter to create in HW. */
+	if (filter->enables == 0 && filter->valid_flags == 0)
+		filter->filter_type = HWRM_CFA_CONFIG;
+
 	return 0;
 }
 
@@ -1070,6 +1074,167 @@ bnxt_update_filter_flags_en(struct bnxt_filter_info *filter,
 		    filter1, filter->fw_l2_filter_id, filter->l2_ref_cnt);
 }
 
+/* Valid actions supported along with RSS are count and mark. */
+static int
+bnxt_validate_rss_action(const struct rte_flow_action actions[])
+{
+	for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
+		switch (actions->type) {
+		case RTE_FLOW_ACTION_TYPE_VOID:
+			break;
+		case RTE_FLOW_ACTION_TYPE_RSS:
+			break;
+		case RTE_FLOW_ACTION_TYPE_MARK:
+			break;
+		case RTE_FLOW_ACTION_TYPE_COUNT:
+			break;
+		default:
+			return -ENOTSUP;
+		}
+	}
+
+	return 0;
+}
+
+static int
+bnxt_get_vnic(struct bnxt *bp, uint32_t group)
+{
+	int vnic_id = 0;
+
+	/* For legacy NS3 based implementations,
+	 * group_id will be mapped to a VNIC ID.
+	 */
+	if (BNXT_STINGRAY(bp))
+		vnic_id = group;
+
+	/* Non NS3 cases, group_id will be ignored.
+	 * Setting will be configured on default VNIC.
+	 */
+	return vnic_id;
+}
+
+static int
+bnxt_vnic_rss_cfg_update(struct bnxt *bp,
+			 struct bnxt_vnic_info *vnic,
+			 const struct rte_flow_action *act,
+			 struct rte_flow_error *error)
+{
+	const struct rte_flow_action_rss *rss;
+	unsigned int rss_idx, i;
+	uint16_t hash_type;
+	uint64_t types;
+	int rc;
+
+	rss = (const struct rte_flow_action_rss *)act->conf;
+
+	/* Currently only Toeplitz hash is supported. */
+	if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT &&
+	    rss->func != RTE_ETH_HASH_FUNCTION_TOEPLITZ) {
+		rte_flow_error_set(error,
+				   ENOTSUP,
+				   RTE_FLOW_ERROR_TYPE_ACTION,
+				   act,
+				   "Unsupported RSS hash function");
+		rc = -rte_errno;
+		goto ret;
+	}
+
+	/* key_len should match the hash key supported by hardware */
+	if (rss->key_len != 0 && rss->key_len != HW_HASH_KEY_SIZE) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION,
+				   act,
+				   "Incorrect hash key parameters");
+		rc = -rte_errno;
+		goto ret;
+	}
+
+	/* Currently RSS hash on inner and outer headers are supported.
+	 * 0 => Default setting
+	 * 1 => Inner
+	 * 2 => Outer
+	 */
+	if (rss->level > 2) {
+		rte_flow_error_set(error,
+				   ENOTSUP,
+				   RTE_FLOW_ERROR_TYPE_ACTION,
+				   act,
+				   "Unsupported hash level");
+		rc = -rte_errno;
+		goto ret;
+	}
+
+	if ((rss->queue_num == 0 && rss->queue != NULL) ||
+	    (rss->queue_num != 0 && rss->queue == NULL)) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION,
+				   act,
+				   "Invalid queue config specified");
+		rc = -rte_errno;
+		goto ret;
+	}
+
+	/* If RSS types is 0, use a best effort configuration */
+	types = rss->types ? rss->types : ETH_RSS_IPV4;
+
+	hash_type = bnxt_rte_to_hwrm_hash_types(types);
+
+	/* If requested types can't be supported, leave existing settings */
+	if (hash_type)
+		vnic->hash_type = hash_type;
+
+	vnic->hash_mode =
+		bnxt_rte_to_hwrm_hash_level(bp, rss->types, rss->level);
+
+	/* Update RSS key only if key_len != 0 */
+	if (rss->key_len != 0)
+		memcpy(vnic->rss_hash_key, rss->key, rss->key_len);
+
+	if (rss->queue_num == 0)
+		goto skip_rss_table;
+
+	/* Validate Rx queues */
+	for (i = 0; i < rss->queue_num; i++) {
+		PMD_DRV_LOG(DEBUG, "RSS action Queue %d\n", rss->queue[i]);
+
+		if (rss->queue[i] >= bp->rx_nr_rings ||
+		    !bp->rx_queues[rss->queue[i]]) {
+			rte_flow_error_set(error,
+					   EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ACTION,
+					   act,
+					   "Invalid queue ID for RSS");
+			rc = -rte_errno;
+			goto ret;
+		}
+	}
+
+	/* Prepare the indirection table */
+	for (rss_idx = 0; rss_idx < HW_HASH_INDEX_SIZE; rss_idx++) {
+		struct bnxt_rx_queue *rxq;
+		uint32_t idx;
+
+		idx = rss->queue[rss_idx % rss->queue_num];
+
+		if (BNXT_CHIP_P5(bp)) {
+			rxq = bp->rx_queues[idx];
+			vnic->rss_table[rss_idx * 2] =
+				rxq->rx_ring->rx_ring_struct->fw_ring_id;
+			vnic->rss_table[rss_idx * 2 + 1] =
+				rxq->cp_ring->cp_ring_struct->fw_ring_id;
+		} else {
+			vnic->rss_table[rss_idx] = vnic->fw_grp_ids[idx];
+		}
+	}
+
+skip_rss_table:
+	rc = bnxt_hwrm_vnic_rss_cfg(bp, vnic);
+ret:
+	return rc;
+}
+
 static int
 bnxt_validate_and_parse_flow(struct rte_eth_dev *dev,
 			     const struct rte_flow_item pattern[],
@@ -1329,13 +1494,38 @@ bnxt_validate_and_parse_flow(struct rte_eth_dev *dev,
 		filter->flow_id = filter1->flow_id;
 		break;
 	case RTE_FLOW_ACTION_TYPE_RSS:
+		rc = bnxt_validate_rss_action(actions);
+		if (rc != 0) {
+			rte_flow_error_set(error,
+					   EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ACTION,
+					   act,
+					   "Invalid actions specified with RSS");
+			rc = -rte_errno;
+			goto ret;
+		}
+
 		rss = (const struct rte_flow_action_rss *)act->conf;
 
-		vnic_id = attr->group;
+		vnic_id = bnxt_get_vnic(bp, attr->group);
 
 		BNXT_VALID_VNIC_OR_RET(bp, vnic_id);
 		vnic = &bp->vnic_info[vnic_id];
 
+		/*
+		 * For non NS3 cases, rte_flow_items will not be considered
+		 * for RSS updates.
+		 */
+		if (filter->filter_type == HWRM_CFA_CONFIG) {
+			/* RSS config update requested */
+			rc = bnxt_vnic_rss_cfg_update(bp, vnic, act, error);
+			if (rc != 0)
+				return -rte_errno;
+
+			filter->dst_id = vnic->fw_vnic_id;
+			break;
+		}
+
 		/* Check if requested RSS config matches RSS config of VNIC
 		 * only if it is not a fresh VNIC configuration.
 		 * Otherwise the existing VNIC configuration can be used.
@@ -2006,6 +2196,10 @@ _bnxt_flow_destroy(struct bnxt *bp,
 			return ret;
 	}
 
+	/* For config type, there is no filter in HW. Finish cleanup here */
+	if (filter->filter_type == HWRM_CFA_CONFIG)
+		goto done;
+
 	ret = bnxt_match_filter(bp, filter);
 	if (ret == 0)
 		PMD_DRV_LOG(ERR, "Could not find matching flow\n");