From patchwork Thu Oct 7 03:23:51 2021
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 100676
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Ajit Khaparde
To: dev@dpdk.org
Cc: Lance Richardson, Somnath Kotur
Date: Wed, 6 Oct 2021 20:23:51 -0700
Message-Id: <20211007032353.93579-2-ajit.khaparde@broadcom.com>
In-Reply-To: <20211007032353.93579-1-ajit.khaparde@broadcom.com>
References: <20211007032353.93579-1-ajit.khaparde@broadcom.com>
Subject: [dpdk-dev] [PATCH 1/3] net/bnxt: create aggregation rings when needed

Aggregation rings are needed when the PMD has to support jumbo frames or
LRO. Currently the aggregation rings are created regardless of whether
jumbo frames or LRO are enabled. This causes unnecessary allocation of
mbufs, requiring a larger mbuf pool that may never be used at all. This
patch modifies the code to create the aggregation rings only when needed.

Signed-off-by: Ajit Khaparde
Reviewed-by: Lance Richardson
Reviewed-by: Somnath Kotur
---
 drivers/net/bnxt/bnxt_hwrm.c |   9 +++
 drivers/net/bnxt/bnxt_ring.c | 148 ++++++++++++++++++++++-------------
 drivers/net/bnxt/bnxt_rxq.c  |  71 +++++++++++------
 drivers/net/bnxt/bnxt_rxq.h  |   2 +
 drivers/net/bnxt/bnxt_rxr.c  | 111 +++++++++++++++-----------
 5 files changed, 216 insertions(+), 125 deletions(-)
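For context, the gating condition the whole series relies on can be sketched as
follows. This is illustrative only and not part of the patch; the parameter
names are hypothetical stand-ins for values the driver reads from the device
configuration. Per the comment added to bnxt_need_agg_ring() below, scattered_rx
ends up set when the SCATTER offload is enabled, LRO is enabled, or the maximum
packet length exceeds the mbuf data size, so aggregation rings are required
exactly when scattered Rx is in effect:

	#include <stdint.h>
	#include <rte_ethdev.h>

	/* Sketch of when an aggregation ring is needed; mirrors the three
	 * conditions that cause ethdev to set data->scattered_rx. */
	static int need_agg_ring_sketch(uint64_t rx_offloads,
					uint32_t max_pkt_len,
					uint32_t mbuf_data_size)
	{
		if (rx_offloads & DEV_RX_OFFLOAD_SCATTER)
			return 1;
		if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
			return 1;
		return max_pkt_len > mbuf_data_size;
	}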
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 503add42fd..181e607d7b 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -2741,6 +2741,14 @@ void bnxt_free_hwrm_rx_ring(struct bnxt *bp, int queue_index)
 	if (BNXT_HAS_RING_GRPS(bp))
 		bp->grp_info[queue_index].rx_fw_ring_id = INVALID_HW_RING_ID;
 
+	/* Check agg ring struct explicitly.
+	 * bnxt_need_agg_ring() returns the current state of offload flags,
+	 * but we may have to deal with agg ring struct before the offload
+	 * flags are updated.
+	 */
+	if (!bnxt_need_agg_ring(bp->eth_dev) || rxr->ag_ring_struct == NULL)
+		goto no_agg;
+
 	ring = rxr->ag_ring_struct;
 	bnxt_hwrm_ring_free(bp, ring,
 			    BNXT_CHIP_P5(bp) ?
@@ -2750,6 +2758,7 @@ void bnxt_free_hwrm_rx_ring(struct bnxt *bp, int queue_index)
 	if (BNXT_HAS_RING_GRPS(bp))
 		bp->grp_info[queue_index].ag_fw_ring_id = INVALID_HW_RING_ID;
 
+no_agg:
 	bnxt_hwrm_stat_ctx_free(bp, cpr);
 
 	bnxt_free_cp_ring(bp, cpr);
diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index 957b175f1b..0d6a56a39a 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -104,13 +104,19 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
 	struct bnxt_ring *cp_ring = cp_ring_info->cp_ring_struct;
 	struct bnxt_rx_ring_info *rx_ring_info = rxq ? rxq->rx_ring : NULL;
 	struct bnxt_tx_ring_info *tx_ring_info = txq ? txq->tx_ring : NULL;
-	struct bnxt_ring *tx_ring;
-	struct bnxt_ring *rx_ring;
-	struct rte_pci_device *pdev = bp->pdev;
 	uint64_t rx_offloads = bp->eth_dev->data->dev_conf.rxmode.offloads;
+	int ag_ring_start, ag_bitmap_start, tpa_info_start;
+	int ag_vmem_start, cp_ring_start, nq_ring_start;
+	int total_alloc_len, rx_ring_start, rx_ring_len;
+	struct rte_pci_device *pdev = bp->pdev;
+	struct bnxt_ring *tx_ring, *rx_ring;
 	const struct rte_memzone *mz = NULL;
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	rte_iova_t mz_phys_addr;
+	int ag_bitmap_len = 0;
+	int tpa_info_len = 0;
+	int ag_vmem_len = 0;
+	int ag_ring_len = 0;
 
 	int stats_len = (tx_ring_info || rx_ring_info) ?
 	    RTE_CACHE_LINE_ROUNDUP(sizeof(struct hwrm_stat_ctx_query_output) -
@@ -138,14 +144,12 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
 		RTE_CACHE_LINE_ROUNDUP(rx_ring_info->
 				       rx_ring_struct->vmem_size) : 0;
 	rx_vmem_len = RTE_ALIGN(rx_vmem_len, 128);
-	int ag_vmem_start = 0;
-	int ag_vmem_len = 0;
-	int cp_ring_start = 0;
-	int nq_ring_start = 0;
 
 	ag_vmem_start = rx_vmem_start + rx_vmem_len;
-	ag_vmem_len = rx_ring_info ? RTE_CACHE_LINE_ROUNDUP(
-				rx_ring_info->ag_ring_struct->vmem_size) : 0;
+	if (bnxt_need_agg_ring(bp->eth_dev))
+		ag_vmem_len = rx_ring_info && rx_ring_info->ag_ring_struct ?
+			RTE_CACHE_LINE_ROUNDUP(rx_ring_info->ag_ring_struct->vmem_size) : 0;
+
 	cp_ring_start = ag_vmem_start + ag_vmem_len;
 	cp_ring_start = RTE_ALIGN(cp_ring_start, 4096);
@@ -164,36 +168,36 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
 			sizeof(struct tx_bd_long)) : 0;
 	tx_ring_len = RTE_ALIGN(tx_ring_len, 4096);
 
-	int rx_ring_start = tx_ring_start + tx_ring_len;
+	rx_ring_start = tx_ring_start + tx_ring_len;
 	rx_ring_start = RTE_ALIGN(rx_ring_start, 4096);
 
-	int rx_ring_len = rx_ring_info ?
+	rx_ring_len = rx_ring_info ?
 		RTE_CACHE_LINE_ROUNDUP(rx_ring_info->rx_ring_struct->ring_size *
 		sizeof(struct rx_prod_pkt_bd)) : 0;
 	rx_ring_len = RTE_ALIGN(rx_ring_len, 4096);
 
-	int ag_ring_start = rx_ring_start + rx_ring_len;
+	ag_ring_start = rx_ring_start + rx_ring_len;
 	ag_ring_start = RTE_ALIGN(ag_ring_start, 4096);
 
-	int ag_ring_len = rx_ring_len * AGG_RING_SIZE_FACTOR;
-	ag_ring_len = RTE_ALIGN(ag_ring_len, 4096);
 
-	int ag_bitmap_start = ag_ring_start + ag_ring_len;
-	int ag_bitmap_len = rx_ring_info ?
+	if (bnxt_need_agg_ring(bp->eth_dev)) {
+		ag_ring_len = rx_ring_len * AGG_RING_SIZE_FACTOR;
+		ag_ring_len = RTE_ALIGN(ag_ring_len, 4096);
+
+		ag_bitmap_len = rx_ring_info ?
 		RTE_CACHE_LINE_ROUNDUP(rte_bitmap_get_memory_footprint(
 			rx_ring_info->rx_ring_struct->ring_size *
 			AGG_RING_SIZE_FACTOR)) : 0;
 
-	int tpa_info_start = ag_bitmap_start + ag_bitmap_len;
-	int tpa_info_len = 0;
-
-	if (rx_ring_info && (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
-		int tpa_max = BNXT_TPA_MAX_AGGS(bp);
+		if (rx_ring_info && (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+			int tpa_max = BNXT_TPA_MAX_AGGS(bp);
 
-		tpa_info_len = tpa_max * sizeof(struct bnxt_tpa_info);
-		tpa_info_len = RTE_CACHE_LINE_ROUNDUP(tpa_info_len);
+			tpa_info_len = tpa_max * sizeof(struct bnxt_tpa_info);
+			tpa_info_len = RTE_CACHE_LINE_ROUNDUP(tpa_info_len);
+		}
 	}
 
-	int total_alloc_len = tpa_info_start;
-	total_alloc_len += tpa_info_len;
+	ag_bitmap_start = ag_ring_start + ag_ring_len;
+	tpa_info_start = ag_bitmap_start + ag_bitmap_len;
+	total_alloc_len = tpa_info_start + tpa_info_len;
 
 	snprintf(mz_name, RTE_MEMZONE_NAMESIZE,
 		 "bnxt_" PCI_PRI_FMT "-%04x_%s", pdev->addr.domain,
@@ -254,34 +258,36 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
 				(struct rte_mbuf **)rx_ring->vmem;
 		}
 
-		rx_ring = rx_ring_info->ag_ring_struct;
-
-		rx_ring->bd = ((char *)mz->addr + ag_ring_start);
-		rx_ring_info->ag_desc_ring =
-			(struct rx_prod_pkt_bd *)rx_ring->bd;
-		rx_ring->bd_dma = mz->iova + ag_ring_start;
-		rx_ring_info->ag_desc_mapping = rx_ring->bd_dma;
-		rx_ring->mem_zone = (const void *)mz;
-
-		if (!rx_ring->bd)
-			return -ENOMEM;
-		if (rx_ring->vmem_size) {
-			rx_ring->vmem =
-				(void **)((char *)mz->addr + ag_vmem_start);
-			rx_ring_info->ag_buf_ring =
-				(struct rte_mbuf **)rx_ring->vmem;
+		if (bnxt_need_agg_ring(bp->eth_dev)) {
+			rx_ring = rx_ring_info->ag_ring_struct;
+
+			rx_ring->bd = ((char *)mz->addr + ag_ring_start);
+			rx_ring_info->ag_desc_ring =
+				(struct rx_prod_pkt_bd *)rx_ring->bd;
+			rx_ring->bd_dma = mz->iova + ag_ring_start;
+			rx_ring_info->ag_desc_mapping = rx_ring->bd_dma;
+			rx_ring->mem_zone = (const void *)mz;
+
+			if (!rx_ring->bd)
+				return -ENOMEM;
+			if (rx_ring->vmem_size) {
+				rx_ring->vmem =
+					(void **)((char *)mz->addr + ag_vmem_start);
+				rx_ring_info->ag_buf_ring =
+					(struct rte_mbuf **)rx_ring->vmem;
+			}
+
+			rx_ring_info->ag_bitmap =
+				rte_bitmap_init(rx_ring_info->rx_ring_struct->ring_size *
+						AGG_RING_SIZE_FACTOR, (uint8_t *)mz->addr +
+						ag_bitmap_start, ag_bitmap_len);
+
+			/* TPA info */
+			if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+				rx_ring_info->tpa_info =
+					((struct bnxt_tpa_info *)
+					 ((char *)mz->addr + tpa_info_start));
 		}
-
-		rx_ring_info->ag_bitmap =
-			rte_bitmap_init(rx_ring_info->rx_ring_struct->ring_size *
-					AGG_RING_SIZE_FACTOR, (uint8_t *)mz->addr +
-					ag_bitmap_start, ag_bitmap_len);
-
-		/* TPA info */
-		if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
-			rx_ring_info->tpa_info =
-				((struct bnxt_tpa_info *)((char *)mz->addr +
-							  tpa_info_start));
 	}
 
 	cp_ring->bd = ((char *)mz->addr + cp_ring_start);
@@ -550,6 +556,9 @@ static int bnxt_alloc_rx_agg_ring(struct bnxt *bp, int queue_index)
 	uint8_t ring_type;
 	int rc = 0;
 
+	if (!bnxt_need_agg_ring(bp->eth_dev))
+		return 0;
+
 	ring->fw_rx_ring_id = rxr->rx_ring_struct->fw_ring_id;
 
 	if (BNXT_CHIP_P5(bp)) {
@@ -590,7 +599,7 @@ int bnxt_alloc_hwrm_rx_ring(struct bnxt *bp, int queue_index)
 	 */
 	cp_ring->ring_size = rxr->rx_ring_struct->ring_size * 2;
 
-	if (bp->eth_dev->data->scattered_rx)
+	if (bnxt_need_agg_ring(bp->eth_dev))
 		cp_ring->ring_size *= AGG_RING_SIZE_FACTOR;
 
 	cp_ring->ring_mask = cp_ring->ring_size - 1;
@@ -645,7 +654,8 @@ int bnxt_alloc_hwrm_rx_ring(struct bnxt *bp, int queue_index)
 			goto err_out;
 		}
 		bnxt_db_write(&rxr->rx_db, rxr->rx_raw_prod);
-		bnxt_db_write(&rxr->ag_db, rxr->ag_raw_prod);
+		if (bnxt_need_agg_ring(bp->eth_dev))
+			bnxt_db_write(&rxr->ag_db, rxr->ag_raw_prod);
 	}
 	rxq->index = queue_index;
 #if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
@@ -683,8 +693,11 @@ static void bnxt_init_all_rings(struct bnxt *bp)
 		ring = rxr->rx_ring_struct;
 		ring->fw_ring_id = INVALID_HW_RING_ID;
 		/* Rx-AGG */
-		ring = rxr->ag_ring_struct;
-		ring->fw_ring_id = INVALID_HW_RING_ID;
+		if (bnxt_need_agg_ring(bp->eth_dev)) {
+			ring = rxr->ag_ring_struct;
+			if (ring != NULL)
+				ring->fw_ring_id = INVALID_HW_RING_ID;
+		}
 	}
 	for (i = 0; i < bp->tx_cp_nr_rings; i++) {
 		txq = bp->tx_queues[i];
@@ -712,6 +725,29 @@ int bnxt_alloc_hwrm_rings(struct bnxt *bp)
 	bnxt_init_all_rings(bp);
 
 	for (i = 0; i < bp->rx_cp_nr_rings; i++) {
+		unsigned int soc_id = bp->eth_dev->device->numa_node;
+		struct bnxt_rx_queue *rxq = bp->rx_queues[i];
+		struct bnxt_rx_ring_info *rxr = rxq->rx_ring;
+		struct bnxt_ring *ring;
+
+		if (bnxt_need_agg_ring(bp->eth_dev)) {
+			ring = rxr->ag_ring_struct;
+			if (ring == NULL) {
+				bnxt_free_rxq_mem(rxq);
+
+				rc = bnxt_init_rx_ring_struct(rxq, soc_id);
+				if (rc)
+					goto err_out;
+
+				rc = bnxt_alloc_rings(bp, soc_id,
+						      i, NULL, rxq,
+						      rxq->cp_ring, NULL,
+						      "rxr");
+				if (rc)
+					goto err_out;
+			}
+		}
+
 		rc = bnxt_alloc_hwrm_rx_ring(bp, i);
 		if (rc)
 			goto err_out;
diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
index bbcb3b06e7..2ba74e464a 100644
--- a/drivers/net/bnxt/bnxt_rxq.c
+++ b/drivers/net/bnxt/bnxt_rxq.c
@@ -20,6 +20,17 @@
  * RX Queues
  */
 
+/* Determine whether the current configuration needs aggregation ring in HW. */
+int bnxt_need_agg_ring(struct rte_eth_dev *eth_dev)
+{
+	/* scattered_rx will be true if OFFLOAD_SCATTER is enabled,
+	 * if LRO is enabled, or if the max packet len is greater than the
+	 * mbuf data size. So AGG ring will be needed whenever scattered_rx
+	 * is set.
+	 */
+	return eth_dev->data->scattered_rx ? 1 : 0;
+}
+
 void bnxt_free_rxq_stats(struct bnxt_rx_queue *rxq)
 {
 	if (rxq && rxq->cp_ring && rxq->cp_ring->hw_stats)
@@ -203,6 +214,9 @@ void bnxt_rx_queue_release_mbufs(struct bnxt_rx_queue *rxq)
 		}
 	}
 	/* Free up mbufs in Agg ring */
+	if (!bnxt_need_agg_ring(rxq->bp->eth_dev))
+		return;
+
 	sw_ring = rxq->rx_ring->ag_buf_ring;
 	if (sw_ring) {
 		for (i = 0;
@@ -240,38 +254,47 @@ void bnxt_free_rx_mbufs(struct bnxt *bp)
 	}
 }
 
+void bnxt_free_rxq_mem(struct bnxt_rx_queue *rxq)
+{
+	bnxt_rx_queue_release_mbufs(rxq);
+
+	/* Free RX, AGG ring hardware descriptors */
+	if (rxq->rx_ring) {
+		bnxt_free_ring(rxq->rx_ring->rx_ring_struct);
+		rte_free(rxq->rx_ring->rx_ring_struct);
+		rxq->rx_ring->rx_ring_struct = NULL;
+		/* Free RX Agg ring hardware descriptors */
+		bnxt_free_ring(rxq->rx_ring->ag_ring_struct);
+		rte_free(rxq->rx_ring->ag_ring_struct);
+		rxq->rx_ring->ag_ring_struct = NULL;
+
+		rte_free(rxq->rx_ring);
+		rxq->rx_ring = NULL;
+	}
+	/* Free RX completion ring hardware descriptors */
+	if (rxq->cp_ring) {
+		bnxt_free_ring(rxq->cp_ring->cp_ring_struct);
+		rte_free(rxq->cp_ring->cp_ring_struct);
+		rxq->cp_ring->cp_ring_struct = NULL;
+		rte_free(rxq->cp_ring);
+		rxq->cp_ring = NULL;
+	}
+
+	bnxt_free_rxq_stats(rxq);
+	rte_memzone_free(rxq->mz);
+	rxq->mz = NULL;
+}
+
 void bnxt_rx_queue_release_op(void *rx_queue)
 {
 	struct bnxt_rx_queue *rxq = (struct bnxt_rx_queue *)rx_queue;
 
-	if (rxq) {
+	if (rxq != NULL) {
 		if (is_bnxt_in_error(rxq->bp))
 			return;
 
 		bnxt_free_hwrm_rx_ring(rxq->bp, rxq->queue_id);
-		bnxt_rx_queue_release_mbufs(rxq);
-
-		/* Free RX ring hardware descriptors */
-		if (rxq->rx_ring) {
-			bnxt_free_ring(rxq->rx_ring->rx_ring_struct);
-			rte_free(rxq->rx_ring->rx_ring_struct);
-			/* Free RX Agg ring hardware descriptors */
-			bnxt_free_ring(rxq->rx_ring->ag_ring_struct);
-			rte_free(rxq->rx_ring->ag_ring_struct);
-
-			rte_free(rxq->rx_ring);
-		}
-		/* Free RX completion ring hardware descriptors */
-		if (rxq->cp_ring) {
-			bnxt_free_ring(rxq->cp_ring->cp_ring_struct);
-			rte_free(rxq->cp_ring->cp_ring_struct);
-			rte_free(rxq->cp_ring);
-		}
-
-		bnxt_free_rxq_stats(rxq);
-		rte_memzone_free(rxq->mz);
-		rxq->mz = NULL;
-
+		bnxt_free_rxq_mem(rxq);
 		rte_free(rxq);
 	}
 }
diff --git a/drivers/net/bnxt/bnxt_rxq.h b/drivers/net/bnxt/bnxt_rxq.h
index 42bd8e7ab7..2bd0c64345 100644
--- a/drivers/net/bnxt/bnxt_rxq.h
+++ b/drivers/net/bnxt/bnxt_rxq.h
@@ -63,4 +63,6 @@ int bnxt_rx_queue_start(struct rte_eth_dev *dev,
 int bnxt_rx_queue_stop(struct rte_eth_dev *dev,
 		       uint16_t rx_queue_id);
 void bnxt_rx_queue_release_mbufs(struct bnxt_rx_queue *rxq);
+int bnxt_need_agg_ring(struct rte_eth_dev *eth_dev);
+void bnxt_free_rxq_mem(struct bnxt_rx_queue *rxq);
 #endif
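One design note on the new bnxt_free_rxq_mem() above: it deliberately sets each
pointer to NULL after freeing it, which is what later lets bnxt_alloc_hwrm_rings()
tear a queue down and rebuild it with aggregation rings. A minimal sketch of this
free-then-NULL / lazy re-init pattern, with hypothetical names rather than driver
code:

	#include <errno.h>
	#include <rte_malloc.h>

	struct ring_holder {
		void *ring;
	};

	/* Free and clear, so a later init can tell the ring is gone.
	 * rte_free(NULL) is a no-op, so repeated frees stay safe. */
	static void holder_free(struct ring_holder *h)
	{
		rte_free(h->ring);
		h->ring = NULL;
	}

	/* Re-entrant init: keeps an existing allocation, rebuilds a freed one. */
	static int holder_init(struct ring_holder *h, int socket_id)
	{
		if (h->ring != NULL)
			return 0;
		h->ring = rte_zmalloc_socket("ring", 4096,
					     RTE_CACHE_LINE_SIZE, socket_id);
		return h->ring == NULL ? -ENOMEM : 0;
	}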
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 4c1ee4294e..aeacc60a01 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -1223,57 +1223,75 @@ int bnxt_init_rx_ring_struct(struct bnxt_rx_queue *rxq, unsigned int socket_id)
 
 	rxq->rx_buf_size = BNXT_MAX_PKT_LEN + sizeof(struct rte_mbuf);
 
-	rxr = rte_zmalloc_socket("bnxt_rx_ring",
-				 sizeof(struct bnxt_rx_ring_info),
-				 RTE_CACHE_LINE_SIZE, socket_id);
-	if (rxr == NULL)
-		return -ENOMEM;
-	rxq->rx_ring = rxr;
-
-	ring = rte_zmalloc_socket("bnxt_rx_ring_struct",
-				  sizeof(struct bnxt_ring),
-				  RTE_CACHE_LINE_SIZE, socket_id);
-	if (ring == NULL)
-		return -ENOMEM;
-	rxr->rx_ring_struct = ring;
-	ring->ring_size = rte_align32pow2(rxq->nb_rx_desc);
-	ring->ring_mask = ring->ring_size - 1;
-	ring->bd = (void *)rxr->rx_desc_ring;
-	ring->bd_dma = rxr->rx_desc_mapping;
-
-	/* Allocate extra rx ring entries for vector rx. */
-	ring->vmem_size = sizeof(struct rte_mbuf *) *
-			  (ring->ring_size + BNXT_RX_EXTRA_MBUF_ENTRIES);
+	if (rxq->rx_ring != NULL) {
+		rxr = rxq->rx_ring;
+	} else {
 
-	ring->vmem = (void **)&rxr->rx_buf_ring;
-	ring->fw_ring_id = INVALID_HW_RING_ID;
+		rxr = rte_zmalloc_socket("bnxt_rx_ring",
+					 sizeof(struct bnxt_rx_ring_info),
+					 RTE_CACHE_LINE_SIZE, socket_id);
+		if (rxr == NULL)
+			return -ENOMEM;
+		rxq->rx_ring = rxr;
+	}
 
-	cpr = rte_zmalloc_socket("bnxt_rx_ring",
-				 sizeof(struct bnxt_cp_ring_info),
-				 RTE_CACHE_LINE_SIZE, socket_id);
-	if (cpr == NULL)
-		return -ENOMEM;
-	rxq->cp_ring = cpr;
+	if (rxr->rx_ring_struct == NULL) {
+		ring = rte_zmalloc_socket("bnxt_rx_ring_struct",
+					  sizeof(struct bnxt_ring),
+					  RTE_CACHE_LINE_SIZE, socket_id);
+		if (ring == NULL)
+			return -ENOMEM;
+		rxr->rx_ring_struct = ring;
+		ring->ring_size = rte_align32pow2(rxq->nb_rx_desc);
+		ring->ring_mask = ring->ring_size - 1;
+		ring->bd = (void *)rxr->rx_desc_ring;
+		ring->bd_dma = rxr->rx_desc_mapping;
+
+		/* Allocate extra rx ring entries for vector rx. */
+		ring->vmem_size = sizeof(struct rte_mbuf *) *
+				  (ring->ring_size + BNXT_RX_EXTRA_MBUF_ENTRIES);
+
+		ring->vmem = (void **)&rxr->rx_buf_ring;
+		ring->fw_ring_id = INVALID_HW_RING_ID;
+	}
 
-	ring = rte_zmalloc_socket("bnxt_rx_ring_struct",
-				  sizeof(struct bnxt_ring),
-				  RTE_CACHE_LINE_SIZE, socket_id);
-	if (ring == NULL)
-		return -ENOMEM;
-	cpr->cp_ring_struct = ring;
+	if (rxq->cp_ring != NULL) {
+		cpr = rxq->cp_ring;
+	} else {
+		cpr = rte_zmalloc_socket("bnxt_rx_ring",
+					 sizeof(struct bnxt_cp_ring_info),
+					 RTE_CACHE_LINE_SIZE, socket_id);
+		if (cpr == NULL)
+			return -ENOMEM;
+		rxq->cp_ring = cpr;
+	}
 
-	/* Allocate two completion slots per entry in desc ring. */
-	ring->ring_size = rxr->rx_ring_struct->ring_size * 2;
-	ring->ring_size *= AGG_RING_SIZE_FACTOR;
+	if (cpr->cp_ring_struct == NULL) {
+		ring = rte_zmalloc_socket("bnxt_rx_ring_struct",
+					  sizeof(struct bnxt_ring),
+					  RTE_CACHE_LINE_SIZE, socket_id);
+		if (ring == NULL)
+			return -ENOMEM;
+		cpr->cp_ring_struct = ring;
+
+		/* Allocate two completion slots per entry in desc ring. */
+		ring->ring_size = rxr->rx_ring_struct->ring_size * 2;
+		if (bnxt_need_agg_ring(rxq->bp->eth_dev))
+			ring->ring_size *= AGG_RING_SIZE_FACTOR;
+
+		ring->ring_size = rte_align32pow2(ring->ring_size);
+		ring->ring_mask = ring->ring_size - 1;
+		ring->bd = (void *)cpr->cp_desc_ring;
+		ring->bd_dma = cpr->cp_desc_mapping;
+		ring->vmem_size = 0;
+		ring->vmem = NULL;
+		ring->fw_ring_id = INVALID_HW_RING_ID;
+	}
 
-	ring->ring_size = rte_align32pow2(ring->ring_size);
-	ring->ring_mask = ring->ring_size - 1;
-	ring->bd = (void *)cpr->cp_desc_ring;
-	ring->bd_dma = cpr->cp_desc_mapping;
-	ring->vmem_size = 0;
-	ring->vmem = NULL;
-	ring->fw_ring_id = INVALID_HW_RING_ID;
+	if (!bnxt_need_agg_ring(rxq->bp->eth_dev))
+		return 0;
+	rxr = rxq->rx_ring;
 
 	/* Allocate Aggregator rings */
 	ring = rte_zmalloc_socket("bnxt_rx_ring_struct",
 				  sizeof(struct bnxt_ring),
@@ -1351,6 +1369,9 @@ int bnxt_init_one_rx_ring(struct bnxt_rx_queue *rxq)
 		rxr->rx_buf_ring[i] = &rxq->fake_mbuf;
 	}
 
+	if (!bnxt_need_agg_ring(rxq->bp->eth_dev))
+		return 0;
+
 	ring = rxr->ag_ring_struct;
 	type = RX_PROD_AGG_BD_TYPE_RX_PROD_AGG;
 	bnxt_init_rxbds(ring, type, size);
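A second design note on the allocation change in this patch: bnxt_alloc_rings()
carves all per-queue structures out of one memzone, each region starting at the
previous region's aligned end. Because the aggregation-related lengths are now
initialized to zero and only computed when bnxt_need_agg_ring() is true, the
aggregation regions simply collapse to nothing without disturbing the offset
arithmetic. A hypothetical, heavily simplified sketch of that layout trick:

	#include <rte_common.h>	/* RTE_ALIGN */

	/* Regions packed into one allocation; a zero length collapses a
	 * region while keeping every later offset valid. Sizes are made up. */
	static int layout_sketch(int rx_ring_len, int need_agg, int *total)
	{
		int ag_ring_len = 0, ag_bitmap_len = 0;
		int rx_ring_start = 0, ag_ring_start, ag_bitmap_start;

		if (need_agg) {
			ag_ring_len = RTE_ALIGN(rx_ring_len * 4, 4096);
			ag_bitmap_len = 128;	/* placeholder size */
		}

		ag_ring_start = RTE_ALIGN(rx_ring_start + rx_ring_len, 4096);
		ag_bitmap_start = ag_ring_start + ag_ring_len;
		*total = ag_bitmap_start + ag_bitmap_len;
		return 0;
	}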
From patchwork Thu Oct 7 03:23:52 2021
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 100677
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Ajit Khaparde
To: dev@dpdk.org
Cc: stable@dpdk.org, Lance Richardson
Date: Wed, 6 Oct 2021 20:23:52 -0700
Message-Id: <20211007032353.93579-3-ajit.khaparde@broadcom.com>
In-Reply-To: <20211007032353.93579-1-ajit.khaparde@broadcom.com>
References: <20211007032353.93579-1-ajit.khaparde@broadcom.com>
Subject: [dpdk-dev] [PATCH 2/3] net/bnxt: fix Rx queue state on start

Fix the Rx queue state on device start. The state of the Rx queues could
be incorrect in some cases because the state was being updated only for
the queues of a single VNIC instead of for all the Rx queues of the port.

Fixes: 0105ea1296c9 ("net/bnxt: support runtime queue setup")
Cc: stable@dpdk.org

Signed-off-by: Ajit Khaparde
Reviewed-by: Lance Richardson
---
 drivers/net/bnxt/bnxt_ethdev.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index aa7e7fdc85..a98f93ab29 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -482,12 +482,6 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
 			rxq->vnic->fw_grp_ids[j] = INVALID_HW_RING_ID;
 		else
 			vnic->rx_queue_cnt++;
-
-		if (!rxq->rx_deferred_start) {
-			bp->eth_dev->data->rx_queue_state[j] =
-				RTE_ETH_QUEUE_STATE_STARTED;
-			rxq->rx_started = true;
-		}
 	}
 
 	PMD_DRV_LOG(DEBUG, "vnic->rx_queue_cnt = %d\n", vnic->rx_queue_cnt);
@@ -824,6 +818,16 @@ static int bnxt_start_nic(struct bnxt *bp)
 		}
 	}
 
+	for (j = 0; j < bp->rx_nr_rings; j++) {
+		struct bnxt_rx_queue *rxq = bp->rx_queues[j];
+
+		if (!rxq->rx_deferred_start) {
+			bp->eth_dev->data->rx_queue_state[j] =
+				RTE_ETH_QUEUE_STATE_STARTED;
+			rxq->rx_started = true;
+		}
+	}
+
 	rc = bnxt_hwrm_cfa_l2_set_rx_mask(bp, &bp->vnic_info[0], 0, NULL);
 	if (rc) {
 		PMD_DRV_LOG(ERR,
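The shape of this fix, in isolation: when the queue-state update lived inside the
per-VNIC setup loop, a port whose queues were spread across several VNICs could
leave some rx_queue_state entries stale. Iterating over bp->rx_nr_rings in
bnxt_start_nic(), as the diff above does, covers every queue of the port exactly
once. A minimal self-contained sketch of the corrected update (hypothetical
helper, using the same field semantics as the diff):

	#include <stdbool.h>
	#include <stdint.h>
	#include <rte_ethdev.h>	/* RTE_ETH_QUEUE_STATE_STARTED */

	/* Mark every non-deferred Rx queue of the port as started. */
	static void mark_started_all(uint8_t *rx_queue_state,
				     const bool *deferred_start,
				     unsigned int nr_rings)
	{
		unsigned int q;

		for (q = 0; q < nr_rings; q++) {
			if (!deferred_start[q])
				rx_queue_state[q] = RTE_ETH_QUEUE_STATE_STARTED;
		}
	}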
From patchwork Thu Oct 7 03:23:53 2021
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 100678
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Ajit Khaparde
To: dev@dpdk.org
Cc: Lance Richardson, Kalesh AP
Date: Wed, 6 Oct 2021 20:23:53 -0700
Message-Id: <20211007032353.93579-4-ajit.khaparde@broadcom.com>
In-Reply-To: <20211007032353.93579-1-ajit.khaparde@broadcom.com>
References: <20211007032353.93579-1-ajit.khaparde@broadcom.com>
Subject: [dpdk-dev] [PATCH 3/3] net/bnxt: enhance support for RSS action

Enhance support for the RSS action in the non-TruFlow path. This allows
the user or application to update the RSS settings at runtime using the
rte_flow API.

Signed-off-by: Ajit Khaparde
Reviewed-by: Lance Richardson
Reviewed-by: Kalesh AP
---
 drivers/net/bnxt/bnxt_filter.h |   1 +
 drivers/net/bnxt/bnxt_flow.c   | 190 ++++++++++++++++++++++++++++++++-
 2 files changed, 190 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_filter.h b/drivers/net/bnxt/bnxt_filter.h
index 8bae0c4c72..07938534a9 100644
--- a/drivers/net/bnxt/bnxt_filter.h
+++ b/drivers/net/bnxt/bnxt_filter.h
@@ -43,6 +43,7 @@ struct bnxt_filter_info {
 #define HWRM_CFA_EM_FILTER		1
 #define HWRM_CFA_NTUPLE_FILTER		2
 #define HWRM_CFA_TUNNEL_REDIRECT_FILTER	3
+#define HWRM_CFA_CONFIG_VNIC		4
 	uint8_t		filter_type;
 	uint32_t	dst_id;
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index 59489b591a..7043d44b4d 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -1070,6 +1070,170 @@ bnxt_update_filter_flags_en(struct bnxt_filter_info *filter,
 		    filter1, filter->fw_l2_filter_id, filter->l2_ref_cnt);
 }
 
+/* Valid actions supported along with RSS are count and mark. */
+static int
+bnxt_validate_rss_action(const struct rte_flow_action actions[])
+{
+	for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
+		switch (actions->type) {
+		case RTE_FLOW_ACTION_TYPE_VOID:
+			break;
+		case RTE_FLOW_ACTION_TYPE_RSS:
+			break;
+		case RTE_FLOW_ACTION_TYPE_MARK:
+			break;
+		case RTE_FLOW_ACTION_TYPE_COUNT:
+			break;
+		default:
+			return -ENOTSUP;
+		}
+	}
+
+	return 0;
+}
+
+static int
+bnxt_get_vnic(struct bnxt *bp, uint32_t group)
+{
+	int vnic_id = 0;
+
+	/* For legacy NS3 based implementations,
+	 * group_id will be mapped to a VNIC ID.
+	 */
+	if (BNXT_STINGRAY(bp))
+		vnic_id = group;
+
+	/* Non NS3 cases, group_id will be ignored.
+	 * Setting will be configured on default VNIC.
+	 */
+	return vnic_id;
+}
+
+static int
+bnxt_vnic_rss_cfg_update(struct bnxt *bp,
+			 struct bnxt_vnic_info *vnic,
+			 const struct rte_flow_action *act,
+			 struct rte_flow_error *error)
+{
+	const struct rte_flow_action_rss *rss;
+	unsigned int rss_idx, i;
+	uint16_t hash_type;
+	uint64_t types;
+	int rc;
+
+	rss = (const struct rte_flow_action_rss *)act->conf;
+
+	/* Currently only Toeplitz hash is supported. */
+	if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT &&
+	    rss->func != RTE_ETH_HASH_FUNCTION_TOEPLITZ) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION,
+				   act,
+				   "Unsupported RSS hash function");
+		rc = -rte_errno;
+		goto ret;
+	}
+
+	/* key_len should match the hash key supported by hardware */
+	if (rss->queue_num == 0 &&
+	    ((rss->key_len == 0 && rss->key != NULL) ||
+	     (rss->key_len != 0 && rss->key == NULL) ||
+	     (rss->key_len != 0 && rss->key_len != HW_HASH_KEY_SIZE))) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION,
+				   act,
+				   "Incorrect hash key parameters");
+		rc = -rte_errno;
+		goto ret;
+	}
+
+	/* Currently RSS hash on inner and outer headers are supported.
+	 * 0 => Default setting
+	 * 1 => Inner
+	 * 2 => Outer
+	 */
+	if (rss->level > 2) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION,
+				   act,
+				   "Unsupported hash level");
+		rc = -rte_errno;
+		goto ret;
+	}
+
+	if ((rss->queue_num == 0 && rss->queue != NULL) ||
+	    (rss->queue_num != 0 && rss->queue == NULL)) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION,
+				   act,
+				   "Invalid queue config specified");
+		rc = -rte_errno;
+		goto ret;
+	}
+
+	/* If RSS types is 0, use a best effort configuration */
+	types = rss->types ? rss->types : ETH_RSS_IPV4;
+
+	hash_type = bnxt_rte_to_hwrm_hash_types(types);
+
+	/* If requested types can't be supported, leave existing settings */
+	if (hash_type)
+		vnic->hash_type = hash_type;
+
+	vnic->hash_mode =
+		bnxt_rte_to_hwrm_hash_level(bp, rss->types, rss->level);
+
+	/* Update the RSS hash key only if key_len != 0 */
+	if (rss->key_len != 0)
+		memcpy(vnic->rss_hash_key, rss->key, rss->key_len);
+
+	if (rss->queue_num == 0)
+		goto skip_rss_table;
+
+	/* Validate Rx queues */
+	for (i = 0; i < rss->queue_num; i++) {
+		PMD_DRV_LOG(DEBUG, "RSS action Queue %d\n", rss->queue[i]);
+
+		if (rss->queue[i] >= bp->rx_nr_rings ||
+		    !bp->rx_queues[rss->queue[i]]) {
+			rte_flow_error_set(error,
+					   EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ACTION,
+					   act,
+					   "Invalid queue ID for RSS");
+			rc = -rte_errno;
+			goto ret;
+		}
+	}
+
+	/* Prepare the indirection table */
+	for (rss_idx = 0; rss_idx < HW_HASH_INDEX_SIZE; rss_idx++) {
+		struct bnxt_rx_queue *rxq;
+		uint32_t idx;
+
+		idx = rss->queue[rss_idx % rss->queue_num];
+
+		if (BNXT_CHIP_P5(bp)) {
+			rxq = bp->rx_queues[idx];
+			vnic->rss_table[rss_idx * 2] =
+				rxq->rx_ring->rx_ring_struct->fw_ring_id;
+			vnic->rss_table[rss_idx * 2 + 1] =
+				rxq->cp_ring->cp_ring_struct->fw_ring_id;
+		} else {
+			vnic->rss_table[rss_idx] = vnic->fw_grp_ids[idx];
+		}
+	}
+
+skip_rss_table:
+	rc = bnxt_hwrm_vnic_rss_cfg(bp, vnic);
+ret:
+	return rc;
+}
+
 static int
 bnxt_validate_and_parse_flow(struct rte_eth_dev *dev,
			     const struct rte_flow_item pattern[],
@@ -1108,6 +1272,17 @@ bnxt_validate_and_parse_flow(struct rte_eth_dev *dev,
 
 	use_ntuple = bnxt_filter_type_check(pattern, error);
 
+	rc = bnxt_validate_rss_action(actions);
+	if (rc != 0) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION,
+				   act,
+				   "Invalid actions specified with RSS");
+		rc = -rte_errno;
+		goto ret;
+	}
+
 start:
 	switch (act->type) {
 	case RTE_FLOW_ACTION_TYPE_QUEUE:
@@ -1331,11 +1506,24 @@ bnxt_validate_and_parse_flow(struct rte_eth_dev *dev,
 	case RTE_FLOW_ACTION_TYPE_RSS:
 		rss = (const struct rte_flow_action_rss *)act->conf;
 
-		vnic_id = attr->group;
+		vnic_id = bnxt_get_vnic(bp, attr->group);
 		BNXT_VALID_VNIC_OR_RET(bp, vnic_id);
 		vnic = &bp->vnic_info[vnic_id];
 
+		if (filter->enables == 0 && filter->valid_flags == 0) {
+			/* RSS config update requested */
+			rc = bnxt_vnic_rss_cfg_update(bp, vnic, act, error);
+			if (rc != 0) {
+				rc = -rte_errno;
+				goto ret;
+			} else {
+				filter->dst_id = vnic->fw_vnic_id;
+				filter->filter_type = HWRM_CFA_CONFIG_VNIC;
+				break;
+			}
+		}
+
 		/* Check if requested RSS config matches RSS config of VNIC
 		 * only if it is not a fresh VNIC configuration.
 		 * Otherwise the existing VNIC configuration can be used.
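For reference, an application-side sketch of the RSS update this patch enables.
This is hypothetical usage, not code from the patch, and the exact pattern
requirements depend on the driver's flow parsing: a flow whose only meaningful
action is RSS, carrying no filter match criteria, lands in the new
HWRM_CFA_CONFIG_VNIC path above and reconfigures the default VNIC (group 0).

	#include <rte_ethdev.h>
	#include <rte_flow.h>

	/* Hypothetical helper: re-hash IPv4 traffic across the given queues,
	 * keeping the existing 40-byte hash key (key_len == 0). */
	static struct rte_flow *
	update_rss_sketch(uint16_t port_id, const uint16_t *queues,
			  uint32_t nq, struct rte_flow_error *err)
	{
		const struct rte_flow_attr attr = { .group = 0, .ingress = 1 };
		const struct rte_flow_item pattern[] = {
			{ .type = RTE_FLOW_ITEM_TYPE_ETH },
			{ .type = RTE_FLOW_ITEM_TYPE_END },
		};
		const struct rte_flow_action_rss rss = {
			.func = RTE_ETH_HASH_FUNCTION_TOEPLITZ,
			.level = 0,		/* default inner/outer selection */
			.types = ETH_RSS_IPV4,	/* hash on the IPv4 header */
			.key_len = 0,		/* keep the current hash key */
			.key = NULL,
			.queue_num = nq,
			.queue = queues,
		};
		const struct rte_flow_action actions[] = {
			{ .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
			{ .type = RTE_FLOW_ACTION_TYPE_END },
		};

		return rte_flow_create(port_id, &attr, pattern, actions, err);
	}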