From patchwork Tue Jun 19 21:30:28 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 41273 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 67A6B1B061; Tue, 19 Jun 2018 23:31:16 +0200 (CEST) Received: from rnd-relay.smtp.broadcom.com (rnd-relay.smtp.broadcom.com [192.19.229.170]) by dpdk.org (Postfix) with ESMTP id BB56E1B058; Tue, 19 Jun 2018 23:31:07 +0200 (CEST) Received: from nis-sj1-27.broadcom.com (nis-sj1-27.lvn.broadcom.net [10.75.144.136]) by rnd-relay.smtp.broadcom.com (Postfix) with ESMTP id AE1D030C080; Tue, 19 Jun 2018 14:31:01 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.10.3 rnd-relay.smtp.broadcom.com AE1D030C080 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=broadcom.com; s=dkimrelay; t=1529443861; bh=cNOtISiYrlJXDVEWmpparf8ygDsWEJqpxgpZ6CDuAr0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=XUoEX9c1gHj/XY3nh0bh5g2tTvdEOwrLd1lsnfzs+v6vAtkt4cVSkRXyHbGBdOW2u rdU8kTZd4PJm7hB7GD+YilgXUpJhaZErnT0Q0CYXsHeCmCXy5wdkEEbs7Lv9UR7OHA AchOas7p5G0NbPAWr8FrWnn8qPcMIthcxPsJKsg8= Received: from C02VPB22HTD6.dhcp.broadcom.net (c02vpb22htd6.dhcp.broadcom.net [10.136.50.120]) by nis-sj1-27.broadcom.com (Postfix) with ESMTP id D69AAAC06AD; Tue, 19 Jun 2018 14:31:01 -0700 (PDT) From: Ajit Khaparde To: dev@dpdk.org Cc: ferruh.yigit@intel.com, stable@dpdk.org Date: Tue, 19 Jun 2018 14:30:28 -0700 Message-Id: <20180619213058.12273-2-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.15.1 (Apple Git-101) In-Reply-To: <20180619213058.12273-1-ajit.khaparde@broadcom.com> References: <20180619213058.12273-1-ajit.khaparde@broadcom.com> Subject: [dpdk-dev] [PATCH 01/31] net/bnxt: fix clear port stats X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" PORT_CLR_STATS is not allowed for VFs, NPAR, MultiHost functions or when SR-IOV is enabled. Don't send the HWRM command in such cases. Fixes: bfb9c2260be2 ("net/bnxt: support xstats get/reset") Cc: stable@dpdk.org Signed-off-by: Ajit Khaparde --- drivers/net/bnxt/bnxt.h | 4 ++++ drivers/net/bnxt/bnxt_hwrm.c | 5 ++++- 2 files changed, 8 insertions(+), 1 deletion(-) diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h index afaaf8c41..35c3073dd 100644 --- a/drivers/net/bnxt/bnxt.h +++ b/drivers/net/bnxt/bnxt.h @@ -98,6 +98,7 @@ struct bnxt_child_vf_info { struct bnxt_pf_info { #define BNXT_FIRST_PF_FID 1 #define BNXT_MAX_VFS(bp) (bp->pf.max_vfs) +#define BNXT_TOTAL_VFS(bp) (bp->pf.total_vfs) #define BNXT_FIRST_VF_FID 128 #define BNXT_PF_RINGS_USED(bp) bnxt_get_num_queues(bp) #define BNXT_PF_RINGS_AVAIL(bp) (bp->pf.max_cp_rings - BNXT_PF_RINGS_USED(bp)) @@ -105,6 +106,9 @@ struct bnxt_pf_info { uint16_t first_vf_id; uint16_t active_vfs; uint16_t max_vfs; + uint16_t total_vfs; /* Total VFs possible. + * Not necessarily enabled. 
+ */ uint32_t func_cfg_flags; void *vf_req_buf; rte_iova_t vf_req_buf_dma_addr; diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c index d6fdc1b88..f441d4610 100644 --- a/drivers/net/bnxt/bnxt_hwrm.c +++ b/drivers/net/bnxt/bnxt_hwrm.c @@ -506,6 +506,7 @@ static int __bnxt_hwrm_func_qcaps(struct bnxt *bp) if (BNXT_PF(bp)) { bp->pf.port_id = resp->port_id; bp->pf.first_vf_id = rte_le_to_cpu_16(resp->first_vf_id); + bp->pf.total_vfs = rte_le_to_cpu_16(resp->max_vfs); new_max_vfs = bp->pdev->max_vfs; if (new_max_vfs != bp->pf.max_vfs) { if (bp->pf.vf_info) @@ -3151,7 +3152,9 @@ int bnxt_hwrm_port_clr_stats(struct bnxt *bp) struct bnxt_pf_info *pf = &bp->pf; int rc; - if (!(bp->flags & BNXT_FLAG_PORT_STATS)) + /* Not allowed on NS2 device, NPAR, MultiHost, VF */ + if (!(bp->flags & BNXT_FLAG_PORT_STATS) || BNXT_VF(bp) || + BNXT_NPAR(bp) || BNXT_MH(bp) || BNXT_TOTAL_VFS(bp)) return 0; HWRM_PREP(req, PORT_CLR_STATS); From patchwork Tue Jun 19 21:30:29 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 41275 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 48FCC1B1C3; Tue, 19 Jun 2018 23:31:20 +0200 (CEST) Received: from rnd-relay.smtp.broadcom.com (rnd-relay.smtp.broadcom.com [192.19.229.170]) by dpdk.org (Postfix) with ESMTP id B346F1B056 for ; Tue, 19 Jun 2018 23:31:07 +0200 (CEST) Received: from nis-sj1-27.broadcom.com (nis-sj1-27.lvn.broadcom.net [10.75.144.136]) by rnd-relay.smtp.broadcom.com (Postfix) with ESMTP id 55A0830C083; Tue, 19 Jun 2018 14:31:02 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.10.3 rnd-relay.smtp.broadcom.com 55A0830C083 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=broadcom.com; s=dkimrelay; t=1529443862; bh=qdYJyYG3K2YqtvJW9CTePHqW3+0GaQhb8l5U7+dgPbM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=EKT9PkqWxdv44R2oiIf+HGPo9dhr0YrPPP4qLG+GRCv7utPjUGchBwl5SJSGdNAxu wrVYoEc90t32/UJxZqinbLrRQbLU39zUMwewzL/w9RWWwBX9sOO0Y3CODluvgpK/6P t7E8lyTjGwEwxj8cUfgC/4fVM2NHp02IP1QrsKIU= Received: from C02VPB22HTD6.dhcp.broadcom.net (c02vpb22htd6.dhcp.broadcom.net [10.136.50.120]) by nis-sj1-27.broadcom.com (Postfix) with ESMTP id 35F68AC0768; Tue, 19 Jun 2018 14:31:02 -0700 (PDT) From: Ajit Khaparde To: dev@dpdk.org Cc: ferruh.yigit@intel.com Date: Tue, 19 Jun 2018 14:30:29 -0700 Message-Id: <20180619213058.12273-3-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.15.1 (Apple Git-101) In-Reply-To: <20180619213058.12273-1-ajit.khaparde@broadcom.com> References: <20180619213058.12273-1-ajit.khaparde@broadcom.com> Subject: [dpdk-dev] [PATCH 02/31] net/bnxt: add Tx batching support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Batch more than one Tx requests such that only one completion is generarted by the HW. We request a Tx completion for first and last Tx request in the batch. 
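As an illustration of the coalescing scheme, a minimal sketch in C follows. The descriptor layout, the NO_CMPL flag and the ring constants are invented for this example and are not the bnxt BD format used in the diff below: every descriptor except the last one of the burst suppresses its hardware completion, and the last descriptor carries the batch size in its opaque field so the completion handler can credit the whole burst from a single completion record.

#include <stdint.h>

#define RING_SIZE 256u
#define RING_MASK (RING_SIZE - 1)
#define NO_CMPL   (1u << 0)            /* hypothetical "no completion" flag */

struct tx_desc {
	uint64_t addr;                 /* DMA address of the packet buffer */
	uint32_t len;                  /* packet length */
	uint32_t opaque;               /* packets coalesced up to this BD */
	uint32_t flags;
};

/*
 * Queue n buffers; only the last descriptor of the burst requests a
 * hardware completion and carries the batch size, so one completion
 * record is enough to free the whole burst.
 */
static void
fill_tx_burst(struct tx_desc *ring, const uint64_t *buf_addr,
	      const uint32_t *buf_len, unsigned int n, unsigned int *prod)
{
	unsigned int i, coal = 0;

	for (i = 0; i < n; i++) {
		struct tx_desc *d = &ring[(*prod)++ & RING_MASK];

		coal++;
		d->addr = buf_addr[i];
		d->len = buf_len[i];
		d->opaque = coal;
		d->flags = (i + 1 < n) ? NO_CMPL : 0;
	}
}

In the actual patch the same bookkeeping is carried by the coal_pkts and cmpl_next counters threaded through bnxt_start_xmit().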
Signed-off-by: Ajit Khaparde --- drivers/net/bnxt/bnxt_cpr.h | 12 ++++++ drivers/net/bnxt/bnxt_txq.h | 1 + drivers/net/bnxt/bnxt_txr.c | 97 +++++++++++++++++++++++++++++---------------- 3 files changed, 75 insertions(+), 35 deletions(-) diff --git a/drivers/net/bnxt/bnxt_cpr.h b/drivers/net/bnxt/bnxt_cpr.h index 6c1e6d2b0..5b36bf7d7 100644 --- a/drivers/net/bnxt/bnxt_cpr.h +++ b/drivers/net/bnxt/bnxt_cpr.h @@ -22,12 +22,20 @@ #define ADV_RAW_CMP(idx, n) ((idx) + (n)) #define NEXT_RAW_CMP(idx) ADV_RAW_CMP(idx, 1) #define RING_CMP(ring, idx) ((idx) & (ring)->ring_mask) +#define RING_CMPL(ring_mask, idx) ((idx) & (ring_mask)) #define NEXT_CMP(idx) RING_CMP(ADV_RAW_CMP(idx, 1)) #define FLIP_VALID(cons, mask, val) ((cons) >= (mask) ? !(val) : (val)) #define DB_CP_REARM_FLAGS (DB_KEY_CP | DB_IDX_VALID) #define DB_CP_FLAGS (DB_KEY_CP | DB_IDX_VALID | DB_IRQ_DIS) +#define NEXT_CMPL(cpr, idx, v, inc) do { \ + (idx) += (inc); \ + if (unlikely((idx) == (cpr)->cp_ring_struct->ring_size)) { \ + (v) = !(v); \ + idx = 0; \ + } \ +} while (0) #define B_CP_DB_REARM(cpr, raw_cons) \ rte_write32((DB_CP_REARM_FLAGS | \ RING_CMP(((cpr)->cp_ring_struct), raw_cons)), \ @@ -50,6 +58,10 @@ rte_write32((DB_CP_FLAGS | \ RING_CMP(((cpr)->cp_ring_struct), raw_cons)), \ ((cpr)->cp_doorbell)) +#define B_CP_DB(cpr, raw_cons, ring_mask) \ + rte_write32((DB_CP_FLAGS | \ + RING_CMPL((ring_mask), raw_cons)), \ + ((cpr)->cp_doorbell)) struct bnxt_ring; struct bnxt_cp_ring_info { diff --git a/drivers/net/bnxt/bnxt_txq.h b/drivers/net/bnxt/bnxt_txq.h index 720ca90cf..f2c712a75 100644 --- a/drivers/net/bnxt/bnxt_txq.h +++ b/drivers/net/bnxt/bnxt_txq.h @@ -24,6 +24,7 @@ struct bnxt_tx_queue { uint8_t wthresh; /* Write-back threshold reg */ uint32_t ctx_curr; /* Hardware context states */ uint8_t tx_deferred_start; /* not in global dev start */ + uint8_t cmpl_next; /* Next BD to trigger a compl */ struct bnxt *bp; int index; diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c index 470fddd56..0fdf0fd08 100644 --- a/drivers/net/bnxt/bnxt_txr.c +++ b/drivers/net/bnxt/bnxt_txr.c @@ -114,7 +114,9 @@ static inline uint32_t bnxt_tx_avail(struct bnxt_tx_ring_info *txr) } static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt, - struct bnxt_tx_queue *txq) + struct bnxt_tx_queue *txq, + uint16_t *coal_pkts, + uint16_t *cmpl_next) { struct bnxt_tx_ring_info *txr = txq->tx_ring; struct tx_bd_long *txbd; @@ -146,8 +148,15 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt, return -ENOMEM; txbd = &txr->tx_desc_ring[txr->tx_prod]; - txbd->opaque = txr->tx_prod; + txbd->opaque = *coal_pkts; txbd->flags_type = tx_buf->nr_bds << TX_BD_LONG_FLAGS_BD_CNT_SFT; + txbd->flags_type |= TX_BD_SHORT_FLAGS_COAL_NOW; + if (!*cmpl_next) { + txbd->flags_type |= TX_BD_LONG_FLAGS_NO_CMPL; + } else { + *coal_pkts = 0; + *cmpl_next = false; + } txbd->len = tx_pkt->data_len; if (txbd->len >= 2014) txbd->flags_type |= TX_BD_LONG_FLAGS_LHINT_GTE2K; @@ -235,7 +244,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt, txbd = &txr->tx_desc_ring[txr->tx_prod]; txbd->address = rte_cpu_to_le_32(rte_mbuf_data_iova(m_seg)); - txbd->flags_type = TX_BD_SHORT_TYPE_TX_BD_SHORT; + txbd->flags_type |= TX_BD_SHORT_TYPE_TX_BD_SHORT; txbd->len = m_seg->data_len; m_seg = m_seg->next; @@ -278,35 +287,44 @@ static int bnxt_handle_tx_cp(struct bnxt_tx_queue *txq) struct bnxt_cp_ring_info *cpr = txq->cp_ring; uint32_t raw_cons = cpr->cp_raw_cons; uint32_t cons; - int nb_tx_pkts = 0; + uint32_t nb_tx_pkts = 0; struct tx_cmpl *txcmp; + struct cmpl_base 
*cp_desc_ring = cpr->cp_desc_ring; + struct bnxt_ring *cp_ring_struct = cpr->cp_ring_struct; + uint32_t ring_mask = cp_ring_struct->ring_mask; + uint32_t opaque = 0; - if ((txq->tx_ring->tx_ring_struct->ring_size - - (bnxt_tx_avail(txq->tx_ring))) > - txq->tx_free_thresh) { - while (1) { - cons = RING_CMP(cpr->cp_ring_struct, raw_cons); - txcmp = (struct tx_cmpl *)&cpr->cp_desc_ring[cons]; - - if (!CMP_VALID(txcmp, raw_cons, cpr->cp_ring_struct)) - break; - cpr->valid = FLIP_VALID(cons, - cpr->cp_ring_struct->ring_mask, - cpr->valid); - - if (CMP_TYPE(txcmp) == TX_CMPL_TYPE_TX_L2) - nb_tx_pkts++; - else - RTE_LOG_DP(DEBUG, PMD, - "Unhandled CMP type %02x\n", - CMP_TYPE(txcmp)); - raw_cons = NEXT_RAW_CMP(raw_cons); - } - if (nb_tx_pkts) - bnxt_tx_cmp(txq, nb_tx_pkts); + if (((txq->tx_ring->tx_prod - txq->tx_ring->tx_cons) & + txq->tx_ring->tx_ring_struct->ring_mask) < txq->tx_free_thresh) + return 0; + + do { + cons = RING_CMPL(ring_mask, raw_cons); + txcmp = (struct tx_cmpl *)&cpr->cp_desc_ring[cons]; + rte_prefetch_non_temporal(&cp_desc_ring[(cons + 2) & + ring_mask]); + + if (!CMPL_VALID(txcmp, cpr->valid)) + break; + opaque = rte_cpu_to_le_32(txcmp->opaque); + NEXT_CMPL(cpr, cons, cpr->valid, 1); + rte_prefetch0(&cp_desc_ring[cons]); + + if (CMP_TYPE(txcmp) == TX_CMPL_TYPE_TX_L2) + nb_tx_pkts += opaque; + else + RTE_LOG_DP(ERR, PMD, + "Unhandled CMP type %02x\n", + CMP_TYPE(txcmp)); + raw_cons = cons; + } while (nb_tx_pkts < ring_mask); + + if (nb_tx_pkts) { + bnxt_tx_cmp(txq, nb_tx_pkts); cpr->cp_raw_cons = raw_cons; - B_CP_DIS_DB(cpr, cpr->cp_raw_cons); + B_CP_DB(cpr, cpr->cp_raw_cons, ring_mask); } + return nb_tx_pkts; } @@ -315,8 +333,8 @@ uint16_t bnxt_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, { struct bnxt_tx_queue *txq = tx_queue; uint16_t nb_tx_pkts = 0; - uint16_t db_mask = txq->tx_ring->tx_ring_struct->ring_size >> 2; - uint16_t last_db_mask = 0; + uint16_t coal_pkts = 0; + uint16_t cmpl_next = txq->cmpl_next; /* Handle TX completions */ bnxt_handle_tx_cp(txq); @@ -326,16 +344,25 @@ uint16_t bnxt_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, PMD_DRV_LOG(DEBUG, "Tx q stopped;return\n"); return 0; } + + txq->cmpl_next = 0; /* Handle TX burst request */ for (nb_tx_pkts = 0; nb_tx_pkts < nb_pkts; nb_tx_pkts++) { - if (bnxt_start_xmit(tx_pkts[nb_tx_pkts], txq)) { + int rc; + + /* Request a completion on first and last packet */ + cmpl_next |= (nb_pkts == nb_tx_pkts + 1); + coal_pkts++; + rc = bnxt_start_xmit(tx_pkts[nb_tx_pkts], txq, + &coal_pkts, &cmpl_next); + + if (unlikely(rc)) { + /* Request a completion in next cycle */ + txq->cmpl_next = 1; break; - } else if ((nb_tx_pkts & db_mask) != last_db_mask) { - B_TX_DB(txq->tx_ring->tx_doorbell, - txq->tx_ring->tx_prod); - last_db_mask = nb_tx_pkts & db_mask; } } + if (nb_tx_pkts) B_TX_DB(txq->tx_ring->tx_doorbell, txq->tx_ring->tx_prod); From patchwork Tue Jun 19 21:30:30 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 41272 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 0BA5F1B059; Tue, 19 Jun 2018 23:31:14 +0200 (CEST) Received: from rnd-relay.smtp.broadcom.com (rnd-relay.smtp.broadcom.com [192.19.229.170]) by dpdk.org (Postfix) with ESMTP id B77121B057 for ; Tue, 19 Jun 2018 23:31:07 +0200 (CEST) Received: from nis-sj1-27.broadcom.com 
(nis-sj1-27.lvn.broadcom.net [10.75.144.136]) by rnd-relay.smtp.broadcom.com (Postfix) with ESMTP id 9B36530C08C; Tue, 19 Jun 2018 14:31:02 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.10.3 rnd-relay.smtp.broadcom.com 9B36530C08C DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=broadcom.com; s=dkimrelay; t=1529443862; bh=xEmSipGGTkLtumQ141pU1gNSJTPQlK/L1QRmqFMKWUc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=gbJvaQD8YZHYKSmYr0hfcedFvm0sfO+yHRiTdRShLUTX9hfBxch33hSQrEtAu0oC6 IobEz9OagOWLKgjE8si45/up7TDm127vK+FqslQWsbTqsU50HYKQkau8AJUqQUgjkN LFQ1bpZSL++69o9wMkWmzODAEys5dKVYQU0sao9o= Received: from C02VPB22HTD6.dhcp.broadcom.net (c02vpb22htd6.dhcp.broadcom.net [10.136.50.120]) by nis-sj1-27.broadcom.com (Postfix) with ESMTP id 7F442AC06AD; Tue, 19 Jun 2018 14:31:02 -0700 (PDT) From: Ajit Khaparde To: dev@dpdk.org Cc: ferruh.yigit@intel.com Date: Tue, 19 Jun 2018 14:30:30 -0700 Message-Id: <20180619213058.12273-4-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.15.1 (Apple Git-101) In-Reply-To: <20180619213058.12273-1-ajit.khaparde@broadcom.com> References: <20180619213058.12273-1-ajit.khaparde@broadcom.com> Subject: [dpdk-dev] [PATCH 03/31] net/bnxt: Rx processing optimization X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" 1) Use nb_rx_pkts instead of checking producer indices of Rx and aggregator rings to decide if any Rx completions were processed. 2) Post Rx buffers early in Rx processing instead of waiting for the budgeted burst quota. 3) Ring Rx CQ DB after Rx buffers are posted. Signed-off-by: Ajit Khaparde --- drivers/net/bnxt/bnxt_rxr.c | 12 ++++++++---- drivers/net/bnxt/bnxt_rxr.h | 2 ++ 2 files changed, 10 insertions(+), 4 deletions(-) diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c index 9d8842926..b6b72c553 100644 --- a/drivers/net/bnxt/bnxt_rxr.c +++ b/drivers/net/bnxt/bnxt_rxr.c @@ -540,8 +540,8 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, int rc = 0; bool evt = false; - /* If Rx Q was stopped return */ - if (rxq->rx_deferred_start) + /* If Rx Q was stopped return. RxQ0 cannot be stopped. */ + if (rxq->rx_deferred_start && rxq->queue_id) return 0; /* Handle RX burst request */ @@ -572,10 +572,13 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, raw_cons = NEXT_RAW_CMP(raw_cons); if (nb_rx_pkts == nb_pkts || evt) break; + /* Post some Rx buf early in case of larger burst processing */ + if (nb_rx_pkts == BNXT_RX_POST_THRESH) + B_RX_DB(rxr->rx_doorbell, rxr->rx_prod); } cpr->cp_raw_cons = raw_cons; - if ((prod == rxr->rx_prod && ag_prod == rxr->ag_prod) && !evt) { + if (!nb_rx_pkts && !evt) { /* * For PMD, there is no need to keep on pushing to REARM * the doorbell if there are no new completions @@ -583,7 +586,6 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, return nb_rx_pkts; } - B_CP_DIS_DB(cpr, cpr->cp_raw_cons); if (prod != rxr->rx_prod) B_RX_DB(rxr->rx_doorbell, rxr->rx_prod); @@ -591,6 +593,8 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, if (ag_prod != rxr->ag_prod) B_RX_DB(rxr->ag_doorbell, rxr->ag_prod); + B_CP_DIS_DB(cpr, cpr->cp_raw_cons); + /* Attempt to alloc Rx buf in case of a previous allocation failure. 
*/ if (rc == -ENOMEM) { int i; diff --git a/drivers/net/bnxt/bnxt_rxr.h b/drivers/net/bnxt/bnxt_rxr.h index 5b28f0321..3815a2199 100644 --- a/drivers/net/bnxt/bnxt_rxr.h +++ b/drivers/net/bnxt/bnxt_rxr.h @@ -54,6 +54,8 @@ #define RX_CMP_IP_CS_UNKNOWN(rxcmp1) \ !((rxcmp1)->flags2 & RX_CMP_IP_CS_BITS) +#define BNXT_RX_POST_THRESH 32 + enum pkt_hash_types { PKT_HASH_TYPE_NONE, /* Undefined type */ PKT_HASH_TYPE_L2, /* Input: src_MAC, dest_MAC */ From patchwork Tue Jun 19 21:30:31 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 41271 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 05B931B058; Tue, 19 Jun 2018 23:31:10 +0200 (CEST) Received: from rnd-relay.smtp.broadcom.com (rnd-relay.smtp.broadcom.com [192.19.229.170]) by dpdk.org (Postfix) with ESMTP id AF5281B055 for ; Tue, 19 Jun 2018 23:31:07 +0200 (CEST) Received: from nis-sj1-27.broadcom.com (nis-sj1-27.lvn.broadcom.net [10.75.144.136]) by rnd-relay.smtp.broadcom.com (Postfix) with ESMTP id 95FC130C08B; Tue, 19 Jun 2018 14:31:02 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.10.3 rnd-relay.smtp.broadcom.com 95FC130C08B DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=broadcom.com; s=dkimrelay; t=1529443862; bh=8ZPc0YfNlkBVHp/kP0XqfSFv/1YqVhEA/Z+5/H6jl4c=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=t7Y4VGWbs4I6Ws/PXyibGIQ94vx4blEUqJ229aNO/+IMsHHynlBSjLcaItlJvdOq2 43zuo7iFqrJI5xLFzNZ1uMCpONLZ3J/VascVnn6IPP0pBboQZNBR2Dcr22SYei9dvs jkATizbPMhhiCMCW/3KbZ64/xGBULt04fDk5NbOU= Received: from C02VPB22HTD6.dhcp.broadcom.net (c02vpb22htd6.dhcp.broadcom.net [10.136.50.120]) by nis-sj1-27.broadcom.com (Postfix) with ESMTP id CB6F4AC0799; Tue, 19 Jun 2018 14:31:02 -0700 (PDT) From: Ajit Khaparde To: dev@dpdk.org Cc: ferruh.yigit@intel.com Date: Tue, 19 Jun 2018 14:30:31 -0700 Message-Id: <20180619213058.12273-5-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.15.1 (Apple Git-101) In-Reply-To: <20180619213058.12273-1-ajit.khaparde@broadcom.com> References: <20180619213058.12273-1-ajit.khaparde@broadcom.com> Subject: [dpdk-dev] [PATCH 04/31] net/bnxt: set min and max descriptor count for Tx and Rx rings X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Set min and max descriptor count for Tx and Rx rings. 
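As a usage note, an application typically reads these limits through rte_eth_dev_info_get() and clamps its requested ring depth before queue setup; rte_eth_dev_adjust_nb_rx_tx_desc() performs a similar adjustment. The snippet below is a generic sketch (the port id and requested depth are arbitrary), not part of this patch.

#include <rte_ethdev.h>

/* Clamp a requested Rx ring depth to the range advertised by the PMD. */
static uint16_t
clamp_rx_ring_size(uint16_t port_id, uint16_t requested)
{
	struct rte_eth_dev_info dev_info;

	rte_eth_dev_info_get(port_id, &dev_info);

	if (requested < dev_info.rx_desc_lim.nb_min)
		requested = dev_info.rx_desc_lim.nb_min;
	if (requested > dev_info.rx_desc_lim.nb_max)
		requested = dev_info.rx_desc_lim.nb_max;

	return requested;
}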
Signed-off-by: Ajit Khaparde --- drivers/net/bnxt/bnxt.h | 3 +++ drivers/net/bnxt/bnxt_ethdev.c | 4 ++++ 2 files changed, 7 insertions(+) diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h index 35c3073dd..d25bf78af 100644 --- a/drivers/net/bnxt/bnxt.h +++ b/drivers/net/bnxt/bnxt.h @@ -24,6 +24,9 @@ #define VLAN_TAG_SIZE 4 #define BNXT_MAX_LED 4 #define BNXT_NUM_VLANS 2 +#define BNXT_MIN_RING_DESC 16 +#define BNXT_MAX_TX_RING_DESC 4096 +#define BNXT_MAX_RX_RING_DESC 8192 struct bnxt_led_info { uint8_t led_id; diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index 6e56bfd36..33560db0d 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -449,6 +449,10 @@ static void bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev, eth_dev->data->dev_conf.intr_conf.lsc = 1; eth_dev->data->dev_conf.intr_conf.rxq = 1; + dev_info->rx_desc_lim.nb_min = BNXT_MIN_RING_DESC; + dev_info->rx_desc_lim.nb_max = BNXT_MAX_RX_RING_DESC; + dev_info->tx_desc_lim.nb_min = BNXT_MIN_RING_DESC; + dev_info->tx_desc_lim.nb_max = BNXT_MAX_TX_RING_DESC; /* *INDENT-ON* */ From patchwork Tue Jun 19 21:30:32 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 41281 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id D85771B3B5; Tue, 19 Jun 2018 23:31:33 +0200 (CEST) Received: from rnd-relay.smtp.broadcom.com (rnd-relay.smtp.broadcom.com [192.19.229.170]) by dpdk.org (Postfix) with ESMTP id F27F01B059; Tue, 19 Jun 2018 23:31:08 +0200 (CEST) Received: from nis-sj1-27.broadcom.com (nis-sj1-27.lvn.broadcom.net [10.75.144.136]) by rnd-relay.smtp.broadcom.com (Postfix) with ESMTP id 43FFC30C03F; Tue, 19 Jun 2018 14:31:03 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.10.3 rnd-relay.smtp.broadcom.com 43FFC30C03F DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=broadcom.com; s=dkimrelay; t=1529443863; bh=8yB38YHNjLEUyybbpMLMxsueiotAwoCVgiVo3Qc3svo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=vYmbdPkChCxiEFerwVMc28M1uc8lj8eBR/lUxhBJF5Lqy+a2tcnMSA/gocU65D9l2 WpbQofdL01yBjXlYIIBqq/lm5r7ZZTZ8W2ulbylVc/O9YEiK18JDGzCkBadHg4Lr7S sFshYRd58V+ihGruI1i5VO2UfEY2WWnuSVHiUTGI= Received: from C02VPB22HTD6.dhcp.broadcom.net (c02vpb22htd6.dhcp.broadcom.net [10.136.50.120]) by nis-sj1-27.broadcom.com (Postfix) with ESMTP id 1DF3CAC0768; Tue, 19 Jun 2018 14:31:03 -0700 (PDT) From: Ajit Khaparde To: dev@dpdk.org Cc: ferruh.yigit@intel.com, stable@dpdk.org Date: Tue, 19 Jun 2018 14:30:32 -0700 Message-Id: <20180619213058.12273-6-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.15.1 (Apple Git-101) In-Reply-To: <20180619213058.12273-1-ajit.khaparde@broadcom.com> References: <20180619213058.12273-1-ajit.khaparde@broadcom.com> Subject: [dpdk-dev] [PATCH 05/31] net/bnxt: fix dev close operation X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" We are not cleaning up all the memory and also not unregistering the driver during device close operation. This patch fixes the issue. 
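The cleanup follows the usual free-then-NULL pattern, so a repeated close, or a close followed by uninit, does not double-free anything. A generic, hypothetical sketch of that pattern is shown below (plain malloc/free resources, not the rte_memzone objects used in the diff that follows):

#include <stdlib.h>

struct dev_priv {
	void *tx_mem;
	void *rx_mem;
	void *grp_info;
};

/*
 * Idempotent teardown: each resource is freed at most once and its
 * pointer cleared, so calling close twice (or close followed by
 * uninit) is harmless.
 */
static void
dev_close_resources(struct dev_priv *priv)
{
	free(priv->tx_mem);
	priv->tx_mem = NULL;

	free(priv->rx_mem);
	priv->rx_mem = NULL;

	free(priv->grp_info);
	priv->grp_info = NULL;
}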
Fixes: 893074951314 (net/bnxt: free memory in close operation) Cc: stable@dpdk.org Signed-off-by: Ajit Khaparde --- drivers/net/bnxt/bnxt_ethdev.c | 23 ++++++++++++++++++----- 1 file changed, 18 insertions(+), 5 deletions(-) diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index 33560db0d..b3826360c 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -152,6 +152,7 @@ static const struct rte_pci_id bnxt_pci_id_map[] = { static int bnxt_vlan_offload_set_op(struct rte_eth_dev *dev, int mask); static void bnxt_print_link_info(struct rte_eth_dev *eth_dev); static int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu); +static int bnxt_dev_uninit(struct rte_eth_dev *eth_dev); /***********************/ @@ -668,6 +669,8 @@ static void bnxt_dev_close_op(struct rte_eth_dev *eth_dev) rte_free(bp->grp_info); bp->grp_info = NULL; } + + bnxt_dev_uninit(eth_dev); } static void bnxt_mac_addr_remove_op(struct rte_eth_dev *eth_dev, @@ -3116,7 +3119,6 @@ static int bnxt_init_board(struct rte_eth_dev *eth_dev) return rc; } -static int bnxt_dev_uninit(struct rte_eth_dev *eth_dev); #define ALLOW_FUNC(x) \ { \ @@ -3408,13 +3410,15 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev) } static int -bnxt_dev_uninit(struct rte_eth_dev *eth_dev) { +bnxt_dev_uninit(struct rte_eth_dev *eth_dev) +{ struct bnxt *bp = eth_dev->data->dev_private; int rc; if (rte_eal_process_type() != RTE_PROC_PRIMARY) return -EPERM; + PMD_DRV_LOG(INFO, "Calling Device uninit\n"); bnxt_disable_int(bp); bnxt_free_int(bp); bnxt_free_mem(bp); @@ -3428,8 +3432,17 @@ bnxt_dev_uninit(struct rte_eth_dev *eth_dev) { } rc = bnxt_hwrm_func_driver_unregister(bp, 0); bnxt_free_hwrm_resources(bp); - rte_memzone_free((const struct rte_memzone *)bp->tx_mem_zone); - rte_memzone_free((const struct rte_memzone *)bp->rx_mem_zone); + + if (bp->tx_mem_zone) { + rte_memzone_free((const struct rte_memzone *)bp->tx_mem_zone); + bp->tx_mem_zone = NULL; + } + + if (bp->rx_mem_zone) { + rte_memzone_free((const struct rte_memzone *)bp->rx_mem_zone); + bp->rx_mem_zone = NULL; + } + if (bp->dev_stopped == 0) bnxt_dev_close_op(eth_dev); if (bp->pf.vf_info) @@ -3456,7 +3469,7 @@ static int bnxt_pci_remove(struct rte_pci_device *pci_dev) static struct rte_pci_driver bnxt_rte_pmd = { .id_table = bnxt_pci_id_map, .drv_flags = RTE_PCI_DRV_NEED_MAPPING | - RTE_PCI_DRV_INTR_LSC, + RTE_PCI_DRV_INTR_LSC | RTE_PCI_DRV_INTR_RMV, .probe = bnxt_pci_probe, .remove = bnxt_pci_remove, }; From patchwork Tue Jun 19 21:30:33 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 41280 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id F1CF51B3A9; Tue, 19 Jun 2018 23:31:31 +0200 (CEST) Received: from rnd-relay.smtp.broadcom.com (rnd-relay.smtp.broadcom.com [192.19.229.170]) by dpdk.org (Postfix) with ESMTP id EF42B1B058 for ; Tue, 19 Jun 2018 23:31:08 +0200 (CEST) Received: from nis-sj1-27.broadcom.com (nis-sj1-27.lvn.broadcom.net [10.75.144.136]) by rnd-relay.smtp.broadcom.com (Postfix) with ESMTP id 8285E30C08F; Tue, 19 Jun 2018 14:31:03 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.10.3 rnd-relay.smtp.broadcom.com 8285E30C08F DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=broadcom.com; s=dkimrelay; t=1529443863; bh=1Pg3xqPoM5qxYdMy5YTCcasUVaSVGXPHwqPzztgBNV4=; 
h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Z2/Ic9TALtrpv1kupqQACoq+Ssq+f5+apuBLeUZ2Nb+fC2td2bkAlA8sfpEKuxoYH CtE2yVZ2eXEVPXzDYVZEATT8lFwqCCa1E8cJ5chu2ecqCZhISX7QPNHro+8fk1apnM o5yq5OFJvBrIfQlNAHFDtm9I+bL9uwVwJDGAf/f8= Received: from C02VPB22HTD6.dhcp.broadcom.net (c02vpb22htd6.dhcp.broadcom.net [10.136.50.120]) by nis-sj1-27.broadcom.com (Postfix) with ESMTP id 6F5EBAC06AD; Tue, 19 Jun 2018 14:31:03 -0700 (PDT) From: Ajit Khaparde To: dev@dpdk.org Cc: ferruh.yigit@intel.com Date: Tue, 19 Jun 2018 14:30:33 -0700 Message-Id: <20180619213058.12273-7-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.15.1 (Apple Git-101) In-Reply-To: <20180619213058.12273-1-ajit.khaparde@broadcom.com> References: <20180619213058.12273-1-ajit.khaparde@broadcom.com> Subject: [dpdk-dev] [PATCH 06/31] net/bnxt: set ring coalesce parameters for Stratus NIC X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Set ring coalesce parameters for Stratus NIC. Other skews don't necessarily need this. Signed-off-by: Ajit Khaparde --- drivers/net/bnxt/bnxt.h | 19 ++++++++++++++++ drivers/net/bnxt/bnxt_ethdev.c | 11 +++++++++ drivers/net/bnxt/bnxt_hwrm.c | 51 ++++++++++++++++++++++++++++++++++++++++++ drivers/net/bnxt/bnxt_hwrm.h | 2 ++ drivers/net/bnxt/bnxt_ring.c | 23 +++++++++++++++++++ 5 files changed, 106 insertions(+) diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h index d25bf78af..bd8d031de 100644 --- a/drivers/net/bnxt/bnxt.h +++ b/drivers/net/bnxt/bnxt.h @@ -28,6 +28,14 @@ #define BNXT_MAX_TX_RING_DESC 4096 #define BNXT_MAX_RX_RING_DESC 8192 +#define BNXT_INT_LAT_TMR_MIN 75 +#define BNXT_INT_LAT_TMR_MAX 150 +#define BNXT_NUM_CMPL_AGGR_INT 36 +#define BNXT_CMPL_AGGR_DMA_TMR 37 +#define BNXT_NUM_CMPL_DMA_AGGR 36 +#define BNXT_CMPL_AGGR_DMA_TMR_DURING_INT 50 +#define BNXT_NUM_CMPL_DMA_AGGR_DURING_INT 12 + struct bnxt_led_info { uint8_t led_id; uint8_t led_type; @@ -209,6 +217,16 @@ struct bnxt_ptp_cfg { uint32_t tx_mapped_regs[BNXT_PTP_TX_REGS]; }; +struct bnxt_coal { + uint16_t num_cmpl_aggr_int; + uint16_t num_cmpl_dma_aggr; + uint16_t num_cmpl_dma_aggr_during_int; + uint16_t int_lat_tmr_max; + uint16_t int_lat_tmr_min; + uint16_t cmpl_aggr_dma_tmr; + uint16_t cmpl_aggr_dma_tmr_during_int; +}; + #define BNXT_HWRM_SHORT_REQ_LEN sizeof(struct hwrm_short_input) struct bnxt { void *bar0; @@ -315,6 +333,7 @@ int bnxt_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_complete); int bnxt_rcv_msg_from_vf(struct bnxt *bp, uint16_t vf_id, void *msg); bool is_bnxt_supported(struct rte_eth_dev *dev); +bool bnxt_stratus_device(struct bnxt *bp); extern const struct rte_flow_ops bnxt_flow_ops; extern int bnxt_logtype_driver; diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index b3826360c..1b52425e6 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -3073,6 +3073,17 @@ static bool bnxt_vf_pciid(uint16_t id) return false; } +bool bnxt_stratus_device(struct bnxt *bp) +{ + uint16_t id = bp->pdev->id.device_id; + + if (id == BROADCOM_DEV_ID_STRATUS_NIC || + id == BROADCOM_DEV_ID_STRATUS_NIC_VF1 || + id == BROADCOM_DEV_ID_STRATUS_NIC_VF2) + return true; + return false; +} + static int bnxt_init_board(struct rte_eth_dev *eth_dev) { struct bnxt *bp = eth_dev->data->dev_private; diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c index 
f441d4610..707ee62e0 100644 --- a/drivers/net/bnxt/bnxt_hwrm.c +++ b/drivers/net/bnxt/bnxt_hwrm.c @@ -3835,3 +3835,54 @@ int bnxt_vnic_rss_configure(struct bnxt *bp, struct bnxt_vnic_info *vnic) } return 0; } + +static void bnxt_hwrm_set_coal_params(struct bnxt_coal *hw_coal, + struct hwrm_ring_cmpl_ring_cfg_aggint_params_input *req) +{ + uint16_t flags; + + req->num_cmpl_aggr_int = rte_cpu_to_le_16(hw_coal->num_cmpl_aggr_int); + + /* This is a 6-bit value and must not be 0, or we'll get non stop IRQ */ + req->num_cmpl_dma_aggr = rte_cpu_to_le_16(hw_coal->num_cmpl_dma_aggr); + + /* This is a 6-bit value and must not be 0, or we'll get non stop IRQ */ + req->num_cmpl_dma_aggr_during_int = + rte_cpu_to_le_16(hw_coal->num_cmpl_dma_aggr_during_int); + + req->int_lat_tmr_max = rte_cpu_to_le_16(hw_coal->int_lat_tmr_max); + + /* min timer set to 1/2 of interrupt timer */ + req->int_lat_tmr_min = rte_cpu_to_le_16(hw_coal->int_lat_tmr_min); + + /* buf timer set to 1/4 of interrupt timer */ + req->cmpl_aggr_dma_tmr = rte_cpu_to_le_16(hw_coal->cmpl_aggr_dma_tmr); + + req->cmpl_aggr_dma_tmr_during_int = + rte_cpu_to_le_16(hw_coal->cmpl_aggr_dma_tmr_during_int); + + flags = HWRM_RING_CMPL_RING_CFG_AGGINT_PARAMS_INPUT_FLAGS_TIMER_RESET | + HWRM_RING_CMPL_RING_CFG_AGGINT_PARAMS_INPUT_FLAGS_RING_IDLE; + req->flags = rte_cpu_to_le_16(flags); +} + +int bnxt_hwrm_set_ring_coal(struct bnxt *bp, + struct bnxt_coal *coal, uint16_t ring_id) +{ + struct hwrm_ring_cmpl_ring_cfg_aggint_params_input req = {0}; + struct hwrm_ring_cmpl_ring_cfg_aggint_params_output *resp = + bp->hwrm_cmd_resp_addr; + int rc; + + /* Set ring coalesce parameters only for Stratus 100G NIC */ + if (!bnxt_stratus_device(bp)) + return 0; + + HWRM_PREP(req, RING_CMPL_RING_CFG_AGGINT_PARAMS); + bnxt_hwrm_set_coal_params(coal, &req); + req.ring_id = rte_cpu_to_le_16(ring_id); + rc = bnxt_hwrm_send_message(bp, &req, sizeof(req)); + HWRM_CHECK_RESULT(); + HWRM_UNLOCK(); + return 0; +} diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h index 60a4ab16a..b83aab306 100644 --- a/drivers/net/bnxt/bnxt_hwrm.h +++ b/drivers/net/bnxt/bnxt_hwrm.h @@ -167,4 +167,6 @@ int bnxt_hwrm_flash_nvram(struct bnxt *bp, uint16_t dir_type, int bnxt_hwrm_ptp_cfg(struct bnxt *bp); int bnxt_vnic_rss_configure(struct bnxt *bp, struct bnxt_vnic_info *vnic); +int bnxt_hwrm_set_ring_coal(struct bnxt *bp, + struct bnxt_coal *coal, uint16_t ring_id); #endif diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c index bb9f6d1c0..81eb89d74 100644 --- a/drivers/net/bnxt/bnxt_ring.c +++ b/drivers/net/bnxt/bnxt_ring.c @@ -258,6 +258,24 @@ int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx, return 0; } +static void bnxt_init_dflt_coal(struct bnxt_coal *coal) +{ + /* Tick values in micro seconds. + * 1 coal_buf x bufs_per_record = 1 completion record. 
+ */ + coal->num_cmpl_aggr_int = BNXT_NUM_CMPL_AGGR_INT; + /* This is a 6-bit value and must not be 0, or we'll get non stop IRQ */ + coal->num_cmpl_dma_aggr = BNXT_NUM_CMPL_DMA_AGGR; + /* This is a 6-bit value and must not be 0, or we'll get non stop IRQ */ + coal->num_cmpl_dma_aggr_during_int = BNXT_NUM_CMPL_DMA_AGGR_DURING_INT; + coal->int_lat_tmr_max = BNXT_INT_LAT_TMR_MAX; + /* min timer set to 1/2 of interrupt timer */ + coal->int_lat_tmr_min = BNXT_INT_LAT_TMR_MIN; + /* buf timer set to 1/4 of interrupt timer */ + coal->cmpl_aggr_dma_tmr = BNXT_CMPL_AGGR_DMA_TMR; + coal->cmpl_aggr_dma_tmr_during_int = BNXT_CMPL_AGGR_DMA_TMR_DURING_INT; +} + /* ring_grp usage: * [0] = default completion ring * [1 -> +rx_cp_nr_rings] = rx_cp, rx rings @@ -265,9 +283,12 @@ int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx, */ int bnxt_alloc_hwrm_rings(struct bnxt *bp) { + struct bnxt_coal coal; unsigned int i; int rc = 0; + bnxt_init_dflt_coal(&coal); + for (i = 0; i < bp->rx_cp_nr_rings; i++) { struct bnxt_rx_queue *rxq = bp->rx_queues[i]; struct bnxt_cp_ring_info *cpr = rxq->cp_ring; @@ -291,6 +312,7 @@ int bnxt_alloc_hwrm_rings(struct bnxt *bp) cpr->cp_doorbell = (char *)bp->doorbell_base + i * 0x80; bp->grp_info[i].cp_fw_ring_id = cp_ring->fw_ring_id; B_CP_DIS_DB(cpr, cpr->cp_raw_cons); + bnxt_hwrm_set_ring_coal(bp, &coal, cp_ring->fw_ring_id); if (!i) { /* @@ -379,6 +401,7 @@ int bnxt_alloc_hwrm_rings(struct bnxt *bp) txr->tx_doorbell = (char *)bp->doorbell_base + idx * 0x80; txq->index = idx; + bnxt_hwrm_set_ring_coal(bp, &coal, cp_ring->fw_ring_id); } err_out: From patchwork Tue Jun 19 21:30:34 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 41278 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id AAF4F1B39E; Tue, 19 Jun 2018 23:31:27 +0200 (CEST) Received: from rnd-relay.smtp.broadcom.com (rnd-relay.smtp.broadcom.com [192.19.229.170]) by dpdk.org (Postfix) with ESMTP id EEF5A1B057; Tue, 19 Jun 2018 23:31:08 +0200 (CEST) Received: from nis-sj1-27.broadcom.com (nis-sj1-27.lvn.broadcom.net [10.75.144.136]) by rnd-relay.smtp.broadcom.com (Postfix) with ESMTP id 577CC30C0E0; Tue, 19 Jun 2018 14:31:04 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.10.3 rnd-relay.smtp.broadcom.com 577CC30C0E0 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=broadcom.com; s=dkimrelay; t=1529443864; bh=YJO6ifVzsoOfHT+KxnCvgGg+nCYc+SK6uSKQiPEq8QI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=dhjop76DlEIBc6Vlbkptq3cmnYEHvQ3JnKJp8UzAmYbCzMHRVW1Vt/M55FxRfEEje de6Dad7qNsSVi+QAZjAzxBU8Qox+Z17uxIEm85CgBffQzyvoNWQFgNpQdPWykbyZRY n/KRVOeSmjYYouWPKeKwiFyjv1dej/AdytB70368= Received: from C02VPB22HTD6.dhcp.broadcom.net (c02vpb22htd6.dhcp.broadcom.net [10.136.50.120]) by nis-sj1-27.broadcom.com (Postfix) with ESMTP id AD7B1AC07CF; Tue, 19 Jun 2018 14:31:03 -0700 (PDT) From: Ajit Khaparde To: dev@dpdk.org Cc: ferruh.yigit@intel.com, stable@dpdk.org, Xiaoxin Peng Date: Tue, 19 Jun 2018 14:30:34 -0700 Message-Id: <20180619213058.12273-8-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.15.1 (Apple Git-101) In-Reply-To: <20180619213058.12273-1-ajit.khaparde@broadcom.com> References: <20180619213058.12273-1-ajit.khaparde@broadcom.com> Subject: [dpdk-dev] [PATCH 07/31] net/bnxt: fix HW Tx checksum offload check X-BeenThere: dev@dpdk.org X-Mailman-Version: 
2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add more checks for checksum calculation offload. Also check for tunnel frames and select the proper buffer descriptor size. Fixes: 6eb3cc2294fd ("net/bnxt: add initial Tx code") Cc: stable@dpdk.org Signed-off-by: Xiaoxin Peng Signed-off-by: Ajit Khaparde Reviewed-by: Jason He Reviewed-by: Qingmin Liu --- drivers/net/bnxt/bnxt_txr.c | 51 ++++++++++++++++++++++++++++++++++++++++++--- drivers/net/bnxt/bnxt_txr.h | 10 +++++++++ 2 files changed, 58 insertions(+), 3 deletions(-) diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c index 0fdf0fd08..68645b2f7 100644 --- a/drivers/net/bnxt/bnxt_txr.c +++ b/drivers/net/bnxt/bnxt_txr.c @@ -135,7 +135,9 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt, if (tx_pkt->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM | PKT_TX_IP_CKSUM | - PKT_TX_VLAN_PKT | PKT_TX_OUTER_IP_CKSUM)) + PKT_TX_VLAN_PKT | PKT_TX_OUTER_IP_CKSUM | + PKT_TX_TUNNEL_GRE | PKT_TX_TUNNEL_VXLAN | + PKT_TX_TUNNEL_GENEVE)) long_bd = true; tx_buf = &txr->tx_buf_ring[txr->tx_prod]; @@ -203,16 +205,46 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt, /* Outer IP, Inner IP, Inner TCP/UDP CSO */ txbd1->lflags |= TX_BD_FLG_TIP_IP_TCP_UDP_CHKSUM; txbd1->mss = 0; + } else if ((tx_pkt->ol_flags & PKT_TX_OIP_IIP_TCP_CKSUM) == + PKT_TX_OIP_IIP_TCP_CKSUM) { + /* Outer IP, Inner IP, Inner TCP/UDP CSO */ + txbd1->lflags |= TX_BD_FLG_TIP_IP_TCP_UDP_CHKSUM; + txbd1->mss = 0; + } else if ((tx_pkt->ol_flags & PKT_TX_OIP_IIP_UDP_CKSUM) == + PKT_TX_OIP_IIP_UDP_CKSUM) { + /* Outer IP, Inner IP, Inner TCP/UDP CSO */ + txbd1->lflags |= TX_BD_FLG_TIP_IP_TCP_UDP_CHKSUM; + txbd1->mss = 0; } else if ((tx_pkt->ol_flags & PKT_TX_IIP_TCP_UDP_CKSUM) == PKT_TX_IIP_TCP_UDP_CKSUM) { /* (Inner) IP, (Inner) TCP/UDP CSO */ txbd1->lflags |= TX_BD_FLG_IP_TCP_UDP_CHKSUM; txbd1->mss = 0; + } else if ((tx_pkt->ol_flags & PKT_TX_IIP_UDP_CKSUM) == + PKT_TX_IIP_UDP_CKSUM) { + /* (Inner) IP, (Inner) TCP/UDP CSO */ + txbd1->lflags |= TX_BD_FLG_IP_TCP_UDP_CHKSUM; + txbd1->mss = 0; + } else if ((tx_pkt->ol_flags & PKT_TX_IIP_TCP_CKSUM) == + PKT_TX_IIP_TCP_CKSUM) { + /* (Inner) IP, (Inner) TCP/UDP CSO */ + txbd1->lflags |= TX_BD_FLG_IP_TCP_UDP_CHKSUM; + txbd1->mss = 0; } else if ((tx_pkt->ol_flags & PKT_TX_OIP_TCP_UDP_CKSUM) == PKT_TX_OIP_TCP_UDP_CKSUM) { /* Outer IP, (Inner) TCP/UDP CSO */ txbd1->lflags |= TX_BD_FLG_TIP_TCP_UDP_CHKSUM; txbd1->mss = 0; + } else if ((tx_pkt->ol_flags & PKT_TX_OIP_UDP_CKSUM) == + PKT_TX_OIP_UDP_CKSUM) { + /* Outer IP, (Inner) TCP/UDP CSO */ + txbd1->lflags |= TX_BD_FLG_TIP_TCP_UDP_CHKSUM; + txbd1->mss = 0; + } else if ((tx_pkt->ol_flags & PKT_TX_OIP_TCP_CKSUM) == + PKT_TX_OIP_TCP_CKSUM) { + /* Outer IP, (Inner) TCP/UDP CSO */ + txbd1->lflags |= TX_BD_FLG_TIP_TCP_UDP_CHKSUM; + txbd1->mss = 0; } else if ((tx_pkt->ol_flags & PKT_TX_OIP_IIP_CKSUM) == PKT_TX_OIP_IIP_CKSUM) { /* Outer IP, Inner IP CSO */ @@ -223,11 +255,23 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt, /* TCP/UDP CSO */ txbd1->lflags |= TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM; txbd1->mss = 0; - } else if (tx_pkt->ol_flags & PKT_TX_IP_CKSUM) { + } else if ((tx_pkt->ol_flags & PKT_TX_TCP_CKSUM) == + PKT_TX_TCP_CKSUM) { + /* TCP/UDP CSO */ + txbd1->lflags |= TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM; + txbd1->mss = 0; + } else if ((tx_pkt->ol_flags & PKT_TX_UDP_CKSUM) == + PKT_TX_UDP_CKSUM) { + /* TCP/UDP CSO */ + 
txbd1->lflags |= TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM; + txbd1->mss = 0; + } else if ((tx_pkt->ol_flags & PKT_TX_IP_CKSUM) == + PKT_TX_IP_CKSUM) { /* IP CSO */ txbd1->lflags |= TX_BD_LONG_LFLAGS_IP_CHKSUM; txbd1->mss = 0; - } else if (tx_pkt->ol_flags & PKT_TX_OUTER_IP_CKSUM) { + } else if ((tx_pkt->ol_flags & PKT_TX_OUTER_IP_CKSUM) == + PKT_TX_OUTER_IP_CKSUM) { /* IP CSO */ txbd1->lflags |= TX_BD_LONG_LFLAGS_T_IP_CHKSUM; txbd1->mss = 0; @@ -251,6 +295,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt, } txbd->flags_type |= TX_BD_LONG_FLAGS_PACKET_END; + txbd1->lflags = rte_cpu_to_le_32(txbd1->lflags); txr->tx_prod = RING_NEXT(txr->tx_ring_struct, txr->tx_prod); diff --git a/drivers/net/bnxt/bnxt_txr.h b/drivers/net/bnxt/bnxt_txr.h index 15c7e5a09..7f3c7cdb0 100644 --- a/drivers/net/bnxt/bnxt_txr.h +++ b/drivers/net/bnxt/bnxt_txr.h @@ -45,10 +45,20 @@ int bnxt_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id); #define PKT_TX_OIP_IIP_TCP_UDP_CKSUM (PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM | \ PKT_TX_IP_CKSUM | PKT_TX_OUTER_IP_CKSUM) +#define PKT_TX_OIP_IIP_UDP_CKSUM (PKT_TX_UDP_CKSUM | \ + PKT_TX_IP_CKSUM | PKT_TX_OUTER_IP_CKSUM) +#define PKT_TX_OIP_IIP_TCP_CKSUM (PKT_TX_TCP_CKSUM | \ + PKT_TX_IP_CKSUM | PKT_TX_OUTER_IP_CKSUM) #define PKT_TX_IIP_TCP_UDP_CKSUM (PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM | \ PKT_TX_IP_CKSUM) +#define PKT_TX_IIP_TCP_CKSUM (PKT_TX_TCP_CKSUM | PKT_TX_IP_CKSUM) +#define PKT_TX_IIP_UDP_CKSUM (PKT_TX_UDP_CKSUM | PKT_TX_IP_CKSUM) #define PKT_TX_OIP_TCP_UDP_CKSUM (PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM | \ PKT_TX_OUTER_IP_CKSUM) +#define PKT_TX_OIP_UDP_CKSUM (PKT_TX_UDP_CKSUM | \ + PKT_TX_OUTER_IP_CKSUM) +#define PKT_TX_OIP_TCP_CKSUM (PKT_TX_TCP_CKSUM | \ + PKT_TX_OUTER_IP_CKSUM) #define PKT_TX_OIP_IIP_CKSUM (PKT_TX_IP_CKSUM | \ PKT_TX_OUTER_IP_CKSUM) #define PKT_TX_TCP_UDP_CKSUM (PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM) From patchwork Tue Jun 19 21:30:35 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 41276 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 0F4BD1B1EF; Tue, 19 Jun 2018 23:31:23 +0200 (CEST) Received: from rnd-relay.smtp.broadcom.com (rnd-relay.smtp.broadcom.com [192.19.229.170]) by dpdk.org (Postfix) with ESMTP id EEB6F1B056 for ; Tue, 19 Jun 2018 23:31:08 +0200 (CEST) Received: from nis-sj1-27.broadcom.com (nis-sj1-27.lvn.broadcom.net [10.75.144.136]) by rnd-relay.smtp.broadcom.com (Postfix) with ESMTP id 961A630C10B; Tue, 19 Jun 2018 14:31:04 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.10.3 rnd-relay.smtp.broadcom.com 961A630C10B DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=broadcom.com; s=dkimrelay; t=1529443864; bh=6+IQf8uredhMhxJchDu9AhTxM3ZitEPs680ybDlRK7A=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Hs/gSjsrQS0RvuRcgwrJE/nYTdjtyMrQ0JQVJuYnIKxwX21OI/hIYNY0/hZtBrpNg BrcdmQ2nxD+C7MbX7M1YwqwJN+FydVyXwp0eOBxySGfIIsEt6jMabeHEcXrC4C8o2/ uLNVfEf4AeOXZyow2GY4tOdS34nxfm0KvX8roNnU= Received: from C02VPB22HTD6.dhcp.broadcom.net (c02vpb22htd6.dhcp.broadcom.net [10.136.50.120]) by nis-sj1-27.broadcom.com (Postfix) with ESMTP id 0BB39AC0834; Tue, 19 Jun 2018 14:31:04 -0700 (PDT) From: Ajit Khaparde To: dev@dpdk.org Cc: ferruh.yigit@intel.com Date: Tue, 19 Jun 2018 14:30:35 -0700 Message-Id: <20180619213058.12273-9-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.15.1 
(Apple Git-101) In-Reply-To: <20180619213058.12273-1-ajit.khaparde@broadcom.com> References: <20180619213058.12273-1-ajit.khaparde@broadcom.com> Subject: [dpdk-dev] [PATCH 08/31] net/bnxt: add support for VF id 0xd800 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add support for StingRay VF device 0xd800 Signed-off-by: Ajit Khaparde --- drivers/net/bnxt/bnxt_ethdev.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index 1b52425e6..5d7f29cf4 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -73,6 +73,7 @@ int bnxt_logtype_driver; #define BROADCOM_DEV_ID_58802 0xd802 #define BROADCOM_DEV_ID_58804 0xd804 #define BROADCOM_DEV_ID_58808 0x16f0 +#define BROADCOM_DEV_ID_58802_VF 0xd800 static const struct rte_pci_id bnxt_pci_id_map[] = { { RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, @@ -116,6 +117,7 @@ static const struct rte_pci_id bnxt_pci_id_map[] = { { RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_58802) }, { RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_58804) }, { RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_58808) }, + { RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_58802_VF) }, { .vendor_id = 0, /* sentinel */ }, }; @@ -3068,7 +3070,8 @@ static bool bnxt_vf_pciid(uint16_t id) id == BROADCOM_DEV_ID_5741X_VF || id == BROADCOM_DEV_ID_57414_VF || id == BROADCOM_DEV_ID_STRATUS_NIC_VF1 || - id == BROADCOM_DEV_ID_STRATUS_NIC_VF2) + id == BROADCOM_DEV_ID_STRATUS_NIC_VF2 || + id == BROADCOM_DEV_ID_58802_VF) return true; return false; } From patchwork Tue Jun 19 21:30:36 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 41284 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 605621B437; Tue, 19 Jun 2018 23:31:40 +0200 (CEST) Received: from rnd-relay.smtp.broadcom.com (rnd-relay.smtp.broadcom.com [192.19.229.170]) by dpdk.org (Postfix) with ESMTP id EDDDD1B055 for ; Tue, 19 Jun 2018 23:31:08 +0200 (CEST) Received: from nis-sj1-27.broadcom.com (nis-sj1-27.lvn.broadcom.net [10.75.144.136]) by rnd-relay.smtp.broadcom.com (Postfix) with ESMTP id 98AF630C10C; Tue, 19 Jun 2018 14:31:04 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.10.3 rnd-relay.smtp.broadcom.com 98AF630C10C DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=broadcom.com; s=dkimrelay; t=1529443864; bh=Xt3lamBQ7KVDyhiWVosu6fb0RrAT8ZtpSKnw0esIE0I=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=l6ZXhqarMRdT8fTasvOQyKckN7A6qLKmLPsv30gcKMXx1NKH1Qj5m0AqxFTqNMjYp Lz8mg/L4b26FoMVJDTmG4C3Apdg7H/a0ZCxChW4fvF+lTGGwTACQg9mDo0lO6eRYeI IvWTS9l/AjAAjkKVye5bnLOrGnOiT9+a/E86lyic= Received: from C02VPB22HTD6.dhcp.broadcom.net (c02vpb22htd6.dhcp.broadcom.net [10.136.50.120]) by nis-sj1-27.broadcom.com (Postfix) with ESMTP id 48CB6AC07A6; Tue, 19 Jun 2018 14:31:04 -0700 (PDT) From: Ajit Khaparde To: dev@dpdk.org Cc: ferruh.yigit@intel.com, Somnath Kotur Date: Tue, 19 Jun 2018 14:30:36 -0700 Message-Id: <20180619213058.12273-10-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.15.1 (Apple Git-101) In-Reply-To: 
<20180619213058.12273-1-ajit.khaparde@broadcom.com> References: <20180619213058.12273-1-ajit.khaparde@broadcom.com> Subject: [dpdk-dev] [PATCH 09/31] net/bnxt: fix rx/tx queue start/stop operations X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Packets destined to the to-be-stopped queue should not be dropped (neither in HW nor in the driver), so re-program the RSS Table without this queue on stop and add it back to the table on start unless it is a Representor VF. Since 0th entry is used for default ring, use fw_grp_id + 1 to change the RSS table population logic by programming valid IDs instead of the default zeroth entry in case of an invalid fw_grp_id. Destroy and recreate the trio of Rx rings(compl, Rx, AG) every time in start so that HW is in sync with software. Fixes: 9b63c6fd70e3 ("net/bnxt: support Rx/Tx queue start/stop") Signed-off-by: Somnath Kotur Reviewed-by: Ray Jui Reviewed-by: Scott Branden Reviewed-by: Randy Schacher Signed-off-by: Ajit Khaparde --- drivers/net/bnxt/bnxt.h | 1 + drivers/net/bnxt/bnxt_ethdev.c | 10 ++++- drivers/net/bnxt/bnxt_hwrm.c | 94 +++++++++++++++++++----------------------- drivers/net/bnxt/bnxt_hwrm.h | 1 + drivers/net/bnxt/bnxt_ring.c | 92 +++++++++++++++++++++++++++++++++++++++++ drivers/net/bnxt/bnxt_ring.h | 1 + drivers/net/bnxt/bnxt_rxq.c | 54 +++++++++++++++++++----- drivers/net/bnxt/bnxt_rxq.h | 4 ++ drivers/net/bnxt/bnxt_rxr.c | 14 +++++-- 9 files changed, 204 insertions(+), 67 deletions(-) diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h index bd8d031de..f92e98d83 100644 --- a/drivers/net/bnxt/bnxt.h +++ b/drivers/net/bnxt/bnxt.h @@ -27,6 +27,7 @@ #define BNXT_MIN_RING_DESC 16 #define BNXT_MAX_TX_RING_DESC 4096 #define BNXT_MAX_RX_RING_DESC 8192 +#define BNXT_DB_SIZE 0x80 #define BNXT_INT_LAT_TMR_MIN 75 #define BNXT_INT_LAT_TMR_MAX 150 diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index 5d7f29cf4..d66a29758 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -198,13 +198,14 @@ static int bnxt_alloc_mem(struct bnxt *bp) static int bnxt_init_chip(struct bnxt *bp) { - unsigned int i; + struct bnxt_rx_queue *rxq; struct rte_eth_link new; struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(bp->eth_dev); struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; uint32_t intr_vector = 0; uint32_t queue_id, base = BNXT_MISC_VEC_ID; uint32_t vec = BNXT_MISC_VEC_ID; + unsigned int i, j; int rc; /* disable uio/vfio intr/eventfd mapping */ @@ -278,6 +279,13 @@ static int bnxt_init_chip(struct bnxt *bp) goto err_out; } + for (j = 0; j < bp->rx_nr_rings; j++) { + rxq = bp->eth_dev->data->rx_queues[j]; + + if (rxq->rx_deferred_start) + rxq->vnic->fw_grp_ids[j] = INVALID_HW_RING_ID; + } + rc = bnxt_vnic_rss_configure(bp, vnic); if (rc) { PMD_DRV_LOG(ERR, diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c index 707ee62e0..64687a69b 100644 --- a/drivers/net/bnxt/bnxt_hwrm.c +++ b/drivers/net/bnxt/bnxt_hwrm.c @@ -1817,8 +1817,7 @@ int bnxt_free_all_hwrm_ring_grps(struct bnxt *bp) return rc; } -static void bnxt_free_cp_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr, - unsigned int idx __rte_unused) +static void bnxt_free_cp_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr) { struct bnxt_ring *cp_ring = cpr->cp_ring_struct; @@ -1830,17 +1829,52 @@ static void 
bnxt_free_cp_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr, cpr->cp_raw_cons = 0; } +void bnxt_free_hwrm_rx_ring(struct bnxt *bp, int queue_index) +{ + struct bnxt_rx_queue *rxq = bp->rx_queues[queue_index]; + struct bnxt_rx_ring_info *rxr = rxq->rx_ring; + struct bnxt_ring *ring = rxr->rx_ring_struct; + struct bnxt_cp_ring_info *cpr = rxq->cp_ring; + + if (ring->fw_ring_id != INVALID_HW_RING_ID) { + bnxt_hwrm_ring_free(bp, ring, + HWRM_RING_FREE_INPUT_RING_TYPE_RX); + ring->fw_ring_id = INVALID_HW_RING_ID; + bp->grp_info[queue_index].rx_fw_ring_id = INVALID_HW_RING_ID; + memset(rxr->rx_desc_ring, 0, + rxr->rx_ring_struct->ring_size * + sizeof(*rxr->rx_desc_ring)); + memset(rxr->rx_buf_ring, 0, + rxr->rx_ring_struct->ring_size * + sizeof(*rxr->rx_buf_ring)); + rxr->rx_prod = 0; + } + ring = rxr->ag_ring_struct; + if (ring->fw_ring_id != INVALID_HW_RING_ID) { + bnxt_hwrm_ring_free(bp, ring, + HWRM_RING_FREE_INPUT_RING_TYPE_RX); + ring->fw_ring_id = INVALID_HW_RING_ID; + memset(rxr->ag_buf_ring, 0, + rxr->ag_ring_struct->ring_size * + sizeof(*rxr->ag_buf_ring)); + rxr->ag_prod = 0; + bp->grp_info[queue_index].ag_fw_ring_id = INVALID_HW_RING_ID; + } + if (cpr->cp_ring_struct->fw_ring_id != INVALID_HW_RING_ID) + bnxt_free_cp_ring(bp, cpr); + + bp->grp_info[queue_index].cp_fw_ring_id = INVALID_HW_RING_ID; +} + int bnxt_free_all_hwrm_rings(struct bnxt *bp) { unsigned int i; - int rc = 0; for (i = 0; i < bp->tx_cp_nr_rings; i++) { struct bnxt_tx_queue *txq = bp->tx_queues[i]; struct bnxt_tx_ring_info *txr = txq->tx_ring; struct bnxt_ring *ring = txr->tx_ring_struct; struct bnxt_cp_ring_info *cpr = txq->cp_ring; - unsigned int idx = bp->rx_cp_nr_rings + i; if (ring->fw_ring_id != INVALID_HW_RING_ID) { bnxt_hwrm_ring_free(bp, ring, @@ -1856,59 +1890,15 @@ int bnxt_free_all_hwrm_rings(struct bnxt *bp) txr->tx_cons = 0; } if (cpr->cp_ring_struct->fw_ring_id != INVALID_HW_RING_ID) { - bnxt_free_cp_ring(bp, cpr, idx); + bnxt_free_cp_ring(bp, cpr); cpr->cp_ring_struct->fw_ring_id = INVALID_HW_RING_ID; } } - for (i = 0; i < bp->rx_cp_nr_rings; i++) { - struct bnxt_rx_queue *rxq = bp->rx_queues[i]; - struct bnxt_rx_ring_info *rxr = rxq->rx_ring; - struct bnxt_ring *ring = rxr->rx_ring_struct; - struct bnxt_cp_ring_info *cpr = rxq->cp_ring; + for (i = 0; i < bp->rx_cp_nr_rings; i++) + bnxt_free_hwrm_rx_ring(bp, i); - if (ring->fw_ring_id != INVALID_HW_RING_ID) { - bnxt_hwrm_ring_free(bp, ring, - HWRM_RING_FREE_INPUT_RING_TYPE_RX); - ring->fw_ring_id = INVALID_HW_RING_ID; - bp->grp_info[i].rx_fw_ring_id = INVALID_HW_RING_ID; - memset(rxr->rx_desc_ring, 0, - rxr->rx_ring_struct->ring_size * - sizeof(*rxr->rx_desc_ring)); - memset(rxr->rx_buf_ring, 0, - rxr->rx_ring_struct->ring_size * - sizeof(*rxr->rx_buf_ring)); - rxr->rx_prod = 0; - } - ring = rxr->ag_ring_struct; - if (ring->fw_ring_id != INVALID_HW_RING_ID) { - bnxt_hwrm_ring_free(bp, ring, - HWRM_RING_FREE_INPUT_RING_TYPE_RX); - ring->fw_ring_id = INVALID_HW_RING_ID; - memset(rxr->ag_buf_ring, 0, - rxr->ag_ring_struct->ring_size * - sizeof(*rxr->ag_buf_ring)); - rxr->ag_prod = 0; - bp->grp_info[i].ag_fw_ring_id = INVALID_HW_RING_ID; - } - if (cpr->cp_ring_struct->fw_ring_id != INVALID_HW_RING_ID) { - bnxt_free_cp_ring(bp, cpr, i); - bp->grp_info[i].cp_fw_ring_id = INVALID_HW_RING_ID; - cpr->cp_ring_struct->fw_ring_id = INVALID_HW_RING_ID; - } - } - - /* Default completion ring */ - { - struct bnxt_cp_ring_info *cpr = bp->def_cp_ring; - - if (cpr->cp_ring_struct->fw_ring_id != INVALID_HW_RING_ID) { - bnxt_free_cp_ring(bp, cpr, 0); - 
cpr->cp_ring_struct->fw_ring_id = INVALID_HW_RING_ID; - } - } - - return rc; + return 0; } int bnxt_alloc_all_hwrm_ring_grps(struct bnxt *bp) diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h index b83aab306..4a237c4b4 100644 --- a/drivers/net/bnxt/bnxt_hwrm.h +++ b/drivers/net/bnxt/bnxt_hwrm.h @@ -107,6 +107,7 @@ int bnxt_set_hwrm_vnic_filters(struct bnxt *bp, struct bnxt_vnic_info *vnic); int bnxt_clear_hwrm_vnic_filters(struct bnxt *bp, struct bnxt_vnic_info *vnic); void bnxt_free_all_hwrm_resources(struct bnxt *bp); void bnxt_free_hwrm_resources(struct bnxt *bp); +void bnxt_free_hwrm_rx_ring(struct bnxt *bp, int queue_index); int bnxt_alloc_hwrm_resources(struct bnxt *bp); int bnxt_get_hwrm_link_config(struct bnxt *bp, struct rte_eth_link *link); int bnxt_set_hwrm_link_config(struct bnxt *bp, bool link_up); diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c index 81eb89d74..fcbd6bc6e 100644 --- a/drivers/net/bnxt/bnxt_ring.c +++ b/drivers/net/bnxt/bnxt_ring.c @@ -276,6 +276,98 @@ static void bnxt_init_dflt_coal(struct bnxt_coal *coal) coal->cmpl_aggr_dma_tmr_during_int = BNXT_CMPL_AGGR_DMA_TMR_DURING_INT; } +int bnxt_alloc_hwrm_rx_ring(struct bnxt *bp, int queue_index) +{ + struct rte_pci_device *pci_dev = bp->pdev; + struct bnxt_rx_queue *rxq = bp->rx_queues[queue_index]; + struct bnxt_cp_ring_info *cpr = rxq->cp_ring; + struct bnxt_ring *cp_ring = cpr->cp_ring_struct; + struct bnxt_rx_ring_info *rxr = rxq->rx_ring; + struct bnxt_ring *ring = rxr->rx_ring_struct; + unsigned int map_idx = queue_index + bp->rx_cp_nr_rings; + int rc = 0; + + bp->grp_info[queue_index].fw_stats_ctx = cpr->hw_stats_ctx_id; + + /* Rx cmpl */ + rc = bnxt_hwrm_ring_alloc(bp, cp_ring, + HWRM_RING_ALLOC_INPUT_RING_TYPE_L2_CMPL, + queue_index, HWRM_NA_SIGNATURE, + HWRM_NA_SIGNATURE); + if (rc) + goto err_out; + + cpr->cp_doorbell = (char *)pci_dev->mem_resource[2].addr + + queue_index * BNXT_DB_SIZE; + bp->grp_info[queue_index].cp_fw_ring_id = cp_ring->fw_ring_id; + B_CP_DIS_DB(cpr, cpr->cp_raw_cons); + + if (!queue_index) { + /* + * In order to save completion resources, use the first + * completion ring from PF or VF as the default completion ring + * for async event and HWRM forward response handling. 
+ */ + bp->def_cp_ring = cpr; + rc = bnxt_hwrm_set_async_event_cr(bp); + if (rc) + goto err_out; + } + /* Rx ring */ + rc = bnxt_hwrm_ring_alloc(bp, ring, HWRM_RING_ALLOC_INPUT_RING_TYPE_RX, + queue_index, cpr->hw_stats_ctx_id, + cp_ring->fw_ring_id); + if (rc) + goto err_out; + + rxr->rx_prod = 0; + rxr->rx_doorbell = (char *)pci_dev->mem_resource[2].addr + + queue_index * BNXT_DB_SIZE; + bp->grp_info[queue_index].rx_fw_ring_id = ring->fw_ring_id; + B_RX_DB(rxr->rx_doorbell, rxr->rx_prod); + + ring = rxr->ag_ring_struct; + /* Agg ring */ + if (!ring) + PMD_DRV_LOG(ERR, "Alloc AGG Ring is NULL!\n"); + + rc = bnxt_hwrm_ring_alloc(bp, ring, HWRM_RING_ALLOC_INPUT_RING_TYPE_RX, + map_idx, HWRM_NA_SIGNATURE, + cp_ring->fw_ring_id); + if (rc) + goto err_out; + + PMD_DRV_LOG(DEBUG, "Alloc AGG Done!\n"); + rxr->ag_prod = 0; + rxr->ag_doorbell = (char *)pci_dev->mem_resource[2].addr + + map_idx * BNXT_DB_SIZE; + bp->grp_info[queue_index].ag_fw_ring_id = ring->fw_ring_id; + B_RX_DB(rxr->ag_doorbell, rxr->ag_prod); + + rxq->rx_buf_use_size = BNXT_MAX_MTU + ETHER_HDR_LEN + + ETHER_CRC_LEN + (2 * VLAN_TAG_SIZE); + + if (bp->eth_dev->data->rx_queue_state[queue_index] == + RTE_ETH_QUEUE_STATE_STARTED) { + if (bnxt_init_one_rx_ring(rxq)) { + RTE_LOG(ERR, PMD, + "bnxt_init_one_rx_ring failed!\n"); + bnxt_rx_queue_release_op(rxq); + rc = -ENOMEM; + goto err_out; + } + B_RX_DB(rxr->rx_doorbell, rxr->rx_prod); + B_RX_DB(rxr->ag_doorbell, rxr->ag_prod); + } + rxq->index = queue_index; + PMD_DRV_LOG(INFO, + "queue %d, rx_deferred_start %d, state %d!\n", + queue_index, rxq->rx_deferred_start, + bp->eth_dev->data->rx_queue_state[queue_index]); + +err_out: + return rc; +} /* ring_grp usage: * [0] = default completion ring * [1 -> +rx_cp_nr_rings] = rx_cp, rx rings diff --git a/drivers/net/bnxt/bnxt_ring.h b/drivers/net/bnxt/bnxt_ring.h index 65bf3e2f5..1446d784f 100644 --- a/drivers/net/bnxt/bnxt_ring.h +++ b/drivers/net/bnxt/bnxt_ring.h @@ -70,6 +70,7 @@ int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx, struct bnxt_rx_queue *rxq, struct bnxt_cp_ring_info *cp_ring_info, const char *suffix); +int bnxt_alloc_hwrm_rx_ring(struct bnxt *bp, int queue_index); int bnxt_alloc_hwrm_rings(struct bnxt *bp); #endif diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c index c55ddec4b..f405e2575 100644 --- a/drivers/net/bnxt/bnxt_rxq.c +++ b/drivers/net/bnxt/bnxt_rxq.c @@ -199,12 +199,14 @@ int bnxt_mq_rx_configure(struct bnxt *bp) return rc; } -static void bnxt_rx_queue_release_mbufs(struct bnxt_rx_queue *rxq) +void bnxt_rx_queue_release_mbufs(struct bnxt_rx_queue *rxq) { struct bnxt_sw_rx_bd *sw_ring; struct bnxt_tpa_info *tpa_info; uint16_t i; + rte_spinlock_lock(&rxq->lock); + if (rxq) { sw_ring = rxq->rx_ring->rx_buf_ring; if (sw_ring) { @@ -239,6 +241,8 @@ static void bnxt_rx_queue_release_mbufs(struct bnxt_rx_queue *rxq) } } } + + rte_spinlock_unlock(&rxq->lock); } void bnxt_free_rx_mbufs(struct bnxt *bp) @@ -286,6 +290,7 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev, uint64_t rx_offloads = eth_dev->data->dev_conf.rxmode.offloads; struct bnxt_rx_queue *rxq; int rc = 0; + uint8_t queue_state; if (queue_idx >= bp->max_rx_rings) { PMD_DRV_LOG(ERR, @@ -341,6 +346,11 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev, } rte_atomic64_init(&rxq->rx_mbuf_alloc_fail); + rxq->rx_deferred_start = rx_conf->rx_deferred_start; + queue_state = rxq->rx_deferred_start ? 
RTE_ETH_QUEUE_STATE_STOPPED : + RTE_ETH_QUEUE_STATE_STARTED; + eth_dev->data->rx_queue_state[queue_idx] = queue_state; + rte_spinlock_init(&rxq->lock); out: return rc; } @@ -389,6 +399,7 @@ int bnxt_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) struct rte_eth_conf *dev_conf = &bp->eth_dev->data->dev_conf; struct bnxt_rx_queue *rxq = bp->rx_queues[rx_queue_id]; struct bnxt_vnic_info *vnic = NULL; + int rc = 0; if (rxq == NULL) { PMD_DRV_LOG(ERR, "Invalid Rx queue %d\n", rx_queue_id); @@ -396,28 +407,47 @@ int bnxt_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) } dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED; - rxq->rx_deferred_start = false; + + bnxt_free_hwrm_rx_ring(bp, rx_queue_id); + bnxt_alloc_hwrm_rx_ring(bp, rx_queue_id); PMD_DRV_LOG(INFO, "Rx queue started %d\n", rx_queue_id); + if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) { vnic = rxq->vnic; + if (vnic->fw_grp_ids[rx_queue_id] != INVALID_HW_RING_ID) return 0; - PMD_DRV_LOG(DEBUG, "vnic = %p fw_grp_id = %d\n", - vnic, bp->grp_info[rx_queue_id + 1].fw_grp_id); + + PMD_DRV_LOG(DEBUG, + "vnic = %p fw_grp_id = %d\n", + vnic, bp->grp_info[rx_queue_id].fw_grp_id); + vnic->fw_grp_ids[rx_queue_id] = - bp->grp_info[rx_queue_id + 1].fw_grp_id; - return bnxt_vnic_rss_configure(bp, vnic); + bp->grp_info[rx_queue_id].fw_grp_id; + rc = bnxt_vnic_rss_configure(bp, vnic); } - return 0; + if (rc == 0) + rxq->rx_deferred_start = false; + + return rc; } int bnxt_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) { struct bnxt *bp = (struct bnxt *)dev->data->dev_private; struct rte_eth_conf *dev_conf = &bp->eth_dev->data->dev_conf; - struct bnxt_rx_queue *rxq = bp->rx_queues[rx_queue_id]; struct bnxt_vnic_info *vnic = NULL; + struct bnxt_rx_queue *rxq = NULL; + int rc = 0; + + /* Rx CQ 0 also works as Default CQ for async notifications */ + if (!rx_queue_id) { + PMD_DRV_LOG(ERR, "Cannot stop Rx queue id %d\n", rx_queue_id); + return -EINVAL; + } + + rxq = bp->rx_queues[rx_queue_id]; if (rxq == NULL) { PMD_DRV_LOG(ERR, "Invalid Rx queue %d\n", rx_queue_id); @@ -431,7 +461,11 @@ int bnxt_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) { vnic = rxq->vnic; vnic->fw_grp_ids[rx_queue_id] = INVALID_HW_RING_ID; - return bnxt_vnic_rss_configure(bp, vnic); + rc = bnxt_vnic_rss_configure(bp, vnic); } - return 0; + + if (rc == 0) + bnxt_rx_queue_release_mbufs(rxq); + + return rc; } diff --git a/drivers/net/bnxt/bnxt_rxq.h b/drivers/net/bnxt/bnxt_rxq.h index 8307f603c..e5d6001d3 100644 --- a/drivers/net/bnxt/bnxt_rxq.h +++ b/drivers/net/bnxt/bnxt_rxq.h @@ -10,6 +10,9 @@ struct bnxt; struct bnxt_rx_ring_info; struct bnxt_cp_ring_info; struct bnxt_rx_queue { + rte_spinlock_t lock; /* Synchronize between rx_queue_stop + * and fast path + */ struct rte_mempool *mb_pool; /* mbuf pool for RX ring */ struct rte_mbuf *pkt_first_seg; /* 1st seg of pkt */ struct rte_mbuf *pkt_last_seg; /* Last seg of pkt */ @@ -54,4 +57,5 @@ int bnxt_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id); int bnxt_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id); +void bnxt_rx_queue_release_mbufs(struct bnxt_rx_queue *rxq); #endif diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c index b6b72c553..e4d473f4b 100644 --- a/drivers/net/bnxt/bnxt_rxr.c +++ b/drivers/net/bnxt/bnxt_rxr.c @@ -541,7 +541,7 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, bool evt = false; /* If Rx Q was stopped return. 
RxQ0 cannot be stopped. */ - if (rxq->rx_deferred_start && rxq->queue_id) + if (unlikely(((rxq->rx_deferred_start || !rte_spinlock_trylock(&rxq->lock)) && rxq->queue_id))) return 0; /* Handle RX burst request */ @@ -583,7 +583,7 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, * For PMD, there is no need to keep on pushing to REARM * the doorbell if there are no new completions */ - return nb_rx_pkts; + goto done; } if (prod != rxr->rx_prod) @@ -618,16 +618,22 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, } } +done: + rte_spinlock_unlock(&rxq->lock); + return nb_rx_pkts; } void bnxt_free_rx_rings(struct bnxt *bp) { int i; + struct bnxt_rx_queue *rxq; - for (i = 0; i < (int)bp->rx_nr_rings; i++) { - struct bnxt_rx_queue *rxq = bp->rx_queues[i]; + if (!bp->rx_queues) + return; + for (i = 0; i < (int)bp->rx_nr_rings; i++) { + rxq = bp->rx_queues[i]; if (!rxq) continue; From patchwork Tue Jun 19 21:30:37 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 41277 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 4A8D91B1FC; Tue, 19 Jun 2018 23:31:25 +0200 (CEST) Received: from rnd-relay.smtp.broadcom.com (rnd-relay.smtp.broadcom.com [192.19.229.170]) by dpdk.org (Postfix) with ESMTP id E34481B053 for ; Tue, 19 Jun 2018 23:31:08 +0200 (CEST) Received: from nis-sj1-27.broadcom.com (nis-sj1-27.lvn.broadcom.net [10.75.144.136]) by rnd-relay.smtp.broadcom.com (Postfix) with ESMTP id 9924130C10E; Tue, 19 Jun 2018 14:31:04 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.10.3 rnd-relay.smtp.broadcom.com 9924130C10E DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=broadcom.com; s=dkimrelay; t=1529443864; bh=4iZbli53Rg0B4H0NwTNpNcJ/8mHTwbLBeM98EEllsLE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Stibj0XOhnwRi3kx1bTCin8jhfiojZfo1HQ/V+tV8Xf4GuLyNO367QZlR+vbomZ5Y OdtR7qd1F2bK4Udu1bq5xvztAN4veL7B/PXwxzRGab6QDz0sdzh15Z7vn4vtYcEct8 s/BybmtjIt9HbBMuX5THaCSImkghToQtdFqFYDxI= Received: from C02VPB22HTD6.dhcp.broadcom.net (c02vpb22htd6.dhcp.broadcom.net [10.136.50.120]) by nis-sj1-27.broadcom.com (Postfix) with ESMTP id 934E9AC07B1; Tue, 19 Jun 2018 14:31:04 -0700 (PDT) From: Ajit Khaparde To: dev@dpdk.org Cc: ferruh.yigit@intel.com, Scott Branden Date: Tue, 19 Jun 2018 14:30:37 -0700 Message-Id: <20180619213058.12273-11-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.15.1 (Apple Git-101) In-Reply-To: <20180619213058.12273-1-ajit.khaparde@broadcom.com> References: <20180619213058.12273-1-ajit.khaparde@broadcom.com> Subject: [dpdk-dev] [PATCH 10/31] net/bnxt: code cleanup style of bnxt cpr X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Scott Branden Cleanup alignment, brackets, debug string style of bnxt_cpr Signed-off-by: Scott Branden Reviewed-by: Ajit Khaparde Reviewed-by: Qingmin Liu Reviewed-by: Ray Jui --- drivers/net/bnxt/bnxt_cpr.c | 22 ++++++++++------------ 1 file changed, 10 insertions(+), 12 deletions(-) diff --git a/drivers/net/bnxt/bnxt_cpr.c b/drivers/net/bnxt/bnxt_cpr.c index ff20b6fdf..7257bbedc 100644 --- a/drivers/net/bnxt/bnxt_cpr.c +++ b/drivers/net/bnxt/bnxt_cpr.c @@ -74,12 +74,12 @@ void 
bnxt_handle_fwd_req(struct bnxt *bp, struct cmpl_base *cmpl) fwd_cmd = (struct input *)bp->pf.vf_info[vf_id].req_buf; if (fw_vf_id < bp->pf.first_vf_id || - fw_vf_id >= (bp->pf.first_vf_id) + bp->pf.active_vfs) { + fw_vf_id >= bp->pf.first_vf_id + bp->pf.active_vfs) { PMD_DRV_LOG(ERR, - "FWD req's source_id 0x%x out of range 0x%x - 0x%x (%d %d)\n", - fw_vf_id, bp->pf.first_vf_id, - (bp->pf.first_vf_id) + bp->pf.active_vfs - 1, - bp->pf.first_vf_id, bp->pf.active_vfs); + "FWD req 0x%x out of range 0x%x - 0x%x (%d %d)\n", + fw_vf_id, bp->pf.first_vf_id, + bp->pf.first_vf_id + bp->pf.active_vfs - 1, + bp->pf.first_vf_id, bp->pf.active_vfs); goto reject; } @@ -95,7 +95,7 @@ void bnxt_handle_fwd_req(struct bnxt *bp, struct cmpl_base *cmpl) if (vfc->enables & HWRM_FUNC_VF_CFG_INPUT_ENABLES_DFLT_MAC_ADDR) { bnxt_hwrm_func_vf_mac(bp, vf_id, - (const uint8_t *)"\x00\x00\x00\x00\x00"); + (const uint8_t *)"\x00\x00\x00\x00\x00"); } } if (fwd_cmd->req_type == HWRM_CFA_L2_SET_RX_MASK) { @@ -104,10 +104,10 @@ void bnxt_handle_fwd_req(struct bnxt *bp, struct cmpl_base *cmpl) srm->vlan_tag_tbl_addr = rte_cpu_to_le_64(0); srm->num_vlan_tags = rte_cpu_to_le_32(0); - srm->mask &= ~rte_cpu_to_le_32( - HWRM_CFA_L2_SET_RX_MASK_INPUT_MASK_VLANONLY | - HWRM_CFA_L2_SET_RX_MASK_INPUT_MASK_VLAN_NONVLAN | - HWRM_CFA_L2_SET_RX_MASK_INPUT_MASK_ANYVLAN_NONVLAN); + srm->mask &= ~rte_cpu_to_le_32 + (HWRM_CFA_L2_SET_RX_MASK_INPUT_MASK_VLANONLY | + HWRM_CFA_L2_SET_RX_MASK_INPUT_MASK_VLAN_NONVLAN | + HWRM_CFA_L2_SET_RX_MASK_INPUT_MASK_ANYVLAN_NONVLAN); } /* Forward */ rc = bnxt_hwrm_exec_fwd_resp(bp, fw_vf_id, fwd_cmd, req_len); @@ -128,8 +128,6 @@ void bnxt_handle_fwd_req(struct bnxt *bp, struct cmpl_base *cmpl) fw_vf_id - bp->pf.first_vf_id, rte_le_to_cpu_16(fwd_cmd->req_type)); } - - return; } int bnxt_event_hwrm_resp_handler(struct bnxt *bp, struct cmpl_base *cmp) From patchwork Tue Jun 19 21:30:38 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 41279 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id D52581B3A4; Tue, 19 Jun 2018 23:31:29 +0200 (CEST) Received: from rnd-relay.smtp.broadcom.com (rnd-relay.smtp.broadcom.com [192.19.229.170]) by dpdk.org (Postfix) with ESMTP id 011121B05A for ; Tue, 19 Jun 2018 23:31:08 +0200 (CEST) Received: from nis-sj1-27.broadcom.com (nis-sj1-27.lvn.broadcom.net [10.75.144.136]) by rnd-relay.smtp.broadcom.com (Postfix) with ESMTP id A75E830C110; Tue, 19 Jun 2018 14:31:04 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.10.3 rnd-relay.smtp.broadcom.com A75E830C110 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=broadcom.com; s=dkimrelay; t=1529443864; bh=0IxLGtELlBOa3GB8j2vwWDZcFQg6EkRMNUonZlg1ZrI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=uqePfDoZBzOSi/c7U/aop+kLaIAabwmiuL/8iC5XC4NmDx4ijy1eSqQrxZH+7ynP6 fFqWi/uivajcZCt0vYYXCX+ULF/n/vDnmI64Mvt1Odcj4rKK807pnLvbru8XUMahBu abPljWG8DXSgleeowAqVZ73IhGOA1PJrd/9DS19U= Received: from C02VPB22HTD6.dhcp.broadcom.net (c02vpb22htd6.dhcp.broadcom.net [10.136.50.120]) by nis-sj1-27.broadcom.com (Postfix) with ESMTP id DABEAAC0768; Tue, 19 Jun 2018 14:31:04 -0700 (PDT) From: Ajit Khaparde To: dev@dpdk.org Cc: ferruh.yigit@intel.com, Scott Branden Date: Tue, 19 Jun 2018 14:30:38 -0700 Message-Id: <20180619213058.12273-12-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 
2.15.1 (Apple Git-101) In-Reply-To: <20180619213058.12273-1-ajit.khaparde@broadcom.com> References: <20180619213058.12273-1-ajit.khaparde@broadcom.com> Subject: [dpdk-dev] [PATCH 11/31] net/bnxt: code cleanup style of bnxt rxr X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Scott Branden Cleanup alignment, brackets, debug string style of bnxt_rxr Signed-off-by: Scott Branden Reviewed-by: Randy Schacher Reviewed-by: Ajit Khaparde --- drivers/net/bnxt/bnxt_rxr.c | 58 ++++++++++++++++++++++++--------------------- drivers/net/bnxt/bnxt_rxr.h | 6 +++-- 2 files changed, 35 insertions(+), 29 deletions(-) diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c index e4d473f4b..13928c388 100644 --- a/drivers/net/bnxt/bnxt_rxr.c +++ b/drivers/net/bnxt/bnxt_rxr.c @@ -72,7 +72,6 @@ static inline int bnxt_alloc_ag_data(struct bnxt_rx_queue *rxq, if (rx_buf == NULL) PMD_DRV_LOG(ERR, "Jumbo Frame. rx_buf is NULL\n"); - rx_buf->mbuf = mbuf; mbuf->data_off = RTE_PKTMBUF_HEADROOM; @@ -82,7 +81,7 @@ static inline int bnxt_alloc_ag_data(struct bnxt_rx_queue *rxq, } static inline void bnxt_reuse_rx_mbuf(struct bnxt_rx_ring_info *rxr, - struct rte_mbuf *mbuf) + struct rte_mbuf *mbuf) { uint16_t prod = RING_NEXT(rxr->rx_ring_struct, rxr->rx_prod); struct bnxt_sw_rx_bd *prod_rx_buf; @@ -185,7 +184,8 @@ static void bnxt_tpa_start(struct bnxt_rx_queue *rxq, } static int bnxt_agg_bufs_valid(struct bnxt_cp_ring_info *cpr, - uint8_t agg_bufs, uint32_t raw_cp_cons) + uint8_t agg_bufs, + uint32_t raw_cp_cons) { uint16_t last_cp_cons; struct rx_pkt_cmpl *agg_cmpl; @@ -236,8 +236,7 @@ static int bnxt_rx_pages(struct bnxt_rx_queue *rxq, struct rte_mbuf *ag_mbuf; *tmp_raw_cons = NEXT_RAW_CMP(*tmp_raw_cons); cp_cons = RING_CMP(cpr->cp_ring_struct, *tmp_raw_cons); - rxcmp = (struct rx_pkt_cmpl *) - &cpr->cp_desc_ring[cp_cons]; + rxcmp = (struct rx_pkt_cmpl *)&cpr->cp_desc_ring[cp_cons]; #ifdef BNXT_DEBUG bnxt_dump_cmpl(cp_cons, rxcmp); @@ -270,11 +269,11 @@ static int bnxt_rx_pages(struct bnxt_rx_queue *rxq, return 0; } -static inline struct rte_mbuf *bnxt_tpa_end( - struct bnxt_rx_queue *rxq, - uint32_t *raw_cp_cons, - struct rx_tpa_end_cmpl *tpa_end, - struct rx_tpa_end_cmpl_hi *tpa_end1 __rte_unused) +static inline +struct rte_mbuf *bnxt_tpa_end(struct bnxt_rx_queue *rxq, + uint32_t *raw_cp_cons, + struct rx_tpa_end_cmpl *tpa_end, + struct rx_tpa_end_cmpl_hi *tpa_end1 __rte_unused) { struct bnxt_cp_ring_info *cpr = rxq->cp_ring; struct bnxt_rx_ring_info *rxr = rxq->rx_ring; @@ -299,6 +298,7 @@ static inline struct rte_mbuf *bnxt_tpa_end( mbuf->l4_len = tpa_end->payload_offset; struct rte_mbuf *new_data = __bnxt_alloc_rx_data(rxq->mb_pool); + RTE_ASSERT(new_data != NULL); if (!new_data) { rte_atomic64_inc(&rxq->rx_mbuf_alloc_fail); @@ -368,7 +368,8 @@ bnxt_parse_pkt_type(struct rx_pkt_cmpl *rxcmp, struct rx_pkt_cmpl_hi *rxcmp1) } static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt, - struct bnxt_rx_queue *rxq, uint32_t *raw_cons) + struct bnxt_rx_queue *rxq, + uint32_t *raw_cons) { struct bnxt_cp_ring_info *cpr = rxq->cp_ring; struct bnxt_rx_ring_info *rxr = rxq->rx_ring; @@ -401,14 +402,16 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt, cmp_type = CMP_TYPE(rxcmp); if (cmp_type == RX_TPA_START_CMPL_TYPE_RX_TPA_START) { - bnxt_tpa_start(rxq, (struct rx_tpa_start_cmpl *)rxcmp, + bnxt_tpa_start(rxq, + (struct rx_tpa_start_cmpl 
*)rxcmp, (struct rx_tpa_start_cmpl_hi *)rxcmp1); rc = -EINVAL; /* Continue w/o new mbuf */ goto next_rx; } else if (cmp_type == RX_TPA_END_CMPL_TYPE_RX_TPA_END) { - mbuf = bnxt_tpa_end(rxq, &tmp_raw_cons, - (struct rx_tpa_end_cmpl *)rxcmp, - (struct rx_tpa_end_cmpl_hi *)rxcmp1); + mbuf = bnxt_tpa_end(rxq, + &tmp_raw_cons, + (struct rx_tpa_end_cmpl *)rxcmp, + (struct rx_tpa_end_cmpl_hi *)rxcmp1); if (unlikely(!mbuf)) return -EBUSY; *rx_pkt = mbuf; @@ -525,8 +528,9 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt, return rc; } -uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, - uint16_t nb_pkts) +uint16_t bnxt_recv_pkts(void *rx_queue, + struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) { struct bnxt_rx_queue *rxq = rx_queue; struct bnxt_cp_ring_info *cpr = rxq->cp_ring; @@ -674,8 +678,8 @@ int bnxt_init_rx_ring_struct(struct bnxt_rx_queue *rxq, unsigned int socket_id) rxq->rx_ring = rxr; ring = rte_zmalloc_socket("bnxt_rx_ring_struct", - sizeof(struct bnxt_ring), - RTE_CACHE_LINE_SIZE, socket_id); + sizeof(struct bnxt_ring), + RTE_CACHE_LINE_SIZE, socket_id); if (ring == NULL) return -ENOMEM; rxr->rx_ring_struct = ring; @@ -694,8 +698,8 @@ int bnxt_init_rx_ring_struct(struct bnxt_rx_queue *rxq, unsigned int socket_id) rxq->cp_ring = cpr; ring = rte_zmalloc_socket("bnxt_rx_ring_struct", - sizeof(struct bnxt_ring), - RTE_CACHE_LINE_SIZE, socket_id); + sizeof(struct bnxt_ring), + RTE_CACHE_LINE_SIZE, socket_id); if (ring == NULL) return -ENOMEM; cpr->cp_ring_struct = ring; @@ -709,8 +713,8 @@ int bnxt_init_rx_ring_struct(struct bnxt_rx_queue *rxq, unsigned int socket_id) /* Allocate Aggregator rings */ ring = rte_zmalloc_socket("bnxt_rx_ring_struct", - sizeof(struct bnxt_ring), - RTE_CACHE_LINE_SIZE, socket_id); + sizeof(struct bnxt_ring), + RTE_CACHE_LINE_SIZE, socket_id); if (ring == NULL) return -ENOMEM; rxr->ag_ring_struct = ring; @@ -762,8 +766,8 @@ int bnxt_init_one_rx_ring(struct bnxt_rx_queue *rxq) for (i = 0; i < ring->ring_size; i++) { if (bnxt_alloc_rx_data(rxq, rxr, prod) != 0) { PMD_DRV_LOG(WARNING, - "init'ed rx ring %d with %d/%d mbufs only\n", - rxq->queue_id, i, ring->ring_size); + "rx ring %d only has %d/%d mbufs\n", + rxq->queue_id, i, ring->ring_size); break; } rxr->rx_prod = prod; @@ -778,8 +782,8 @@ int bnxt_init_one_rx_ring(struct bnxt_rx_queue *rxq) for (i = 0; i < ring->ring_size; i++) { if (bnxt_alloc_ag_data(rxq, rxr, prod) != 0) { PMD_DRV_LOG(WARNING, - "init'ed AG ring %d with %d/%d mbufs only\n", - rxq->queue_id, i, ring->ring_size); + "AG ring %d only has %d/%d mbufs\n", + rxq->queue_id, i, ring->ring_size); break; } rxr->ag_prod = prod; diff --git a/drivers/net/bnxt/bnxt_rxr.h b/drivers/net/bnxt/bnxt_rxr.h index 3815a2199..c8ba22ee1 100644 --- a/drivers/net/bnxt/bnxt_rxr.h +++ b/drivers/net/bnxt/bnxt_rxr.h @@ -103,8 +103,10 @@ struct bnxt_rx_ring_info { struct bnxt_tpa_info *tpa_info; }; -uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, - uint16_t nb_pkts); +uint16_t bnxt_recv_pkts(void *rx_queue, + struct rte_mbuf **rx_pkts, + uint16_t nb_pkts); + void bnxt_free_rx_rings(struct bnxt *bp); int bnxt_init_rx_ring_struct(struct bnxt_rx_queue *rxq, unsigned int socket_id); int bnxt_init_one_rx_ring(struct bnxt_rx_queue *rxq); From patchwork Tue Jun 19 21:30:39 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 41285 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: 
patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 23BFD1B43D; Tue, 19 Jun 2018 23:31:42 +0200 (CEST) Received: from rnd-relay.smtp.broadcom.com (rnd-relay.smtp.broadcom.com [192.19.229.170]) by dpdk.org (Postfix) with ESMTP id 031521B05B for ; Tue, 19 Jun 2018 23:31:08 +0200 (CEST) Received: from nis-sj1-27.broadcom.com (nis-sj1-27.lvn.broadcom.net [10.75.144.136]) by rnd-relay.smtp.broadcom.com (Postfix) with ESMTP id 4D15330C067; Tue, 19 Jun 2018 14:31:05 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.10.3 rnd-relay.smtp.broadcom.com 4D15330C067 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=broadcom.com; s=dkimrelay; t=1529443865; bh=aU1sitMD2uDHNusuJ2YwPsTdrGTZQm7JsRvBQDUdldc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ChUppl79JKjs2l08/CjNopPVvur2/3lRUX82YorGxuexqyCDmmLnKOYqOxTvIdNb7 agCcneGFQyv3OXhjrBJgYh9+1VQS+gF6y9l0Chk85uL2ugfRZNKPRIXgE9gSKIiv+x 4+HTiDYofnZYFZ5L7oaycxtKVGL1DHJqjTxZzyX8= Received: from C02VPB22HTD6.dhcp.broadcom.net (c02vpb22htd6.dhcp.broadcom.net [10.136.50.120]) by nis-sj1-27.broadcom.com (Postfix) with ESMTP id 2EE11AC06AD; Tue, 19 Jun 2018 14:31:05 -0700 (PDT) From: Ajit Khaparde To: dev@dpdk.org Cc: ferruh.yigit@intel.com, Scott Branden Date: Tue, 19 Jun 2018 14:30:39 -0700 Message-Id: <20180619213058.12273-13-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.15.1 (Apple Git-101) In-Reply-To: <20180619213058.12273-1-ajit.khaparde@broadcom.com> References: <20180619213058.12273-1-ajit.khaparde@broadcom.com> Subject: [dpdk-dev] [PATCH 12/31] net/bnxt: code cleanup style of rte pmd bnxt file X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Scott Branden Cleanup alignment, brackets, debug string style of rte_pmd_bnxt Signed-off-by: Scott Branden Signed-off-by: Ajit Khaparde --- drivers/net/bnxt/rte_pmd_bnxt.c | 97 +++++++++++++++++++++++++---------------- drivers/net/bnxt/rte_pmd_bnxt.h | 69 +++++++++++++++++++---------- 2 files changed, 105 insertions(+), 61 deletions(-) diff --git a/drivers/net/bnxt/rte_pmd_bnxt.c b/drivers/net/bnxt/rte_pmd_bnxt.c index c298de83c..e49dba465 100644 --- a/drivers/net/bnxt/rte_pmd_bnxt.c +++ b/drivers/net/bnxt/rte_pmd_bnxt.c @@ -77,6 +77,7 @@ static void rte_pmd_bnxt_set_all_queues_drop_en_cb(struct bnxt_vnic_info *vnic, void *onptr) { uint8_t *on = onptr; + vnic->bd_stall = !(*on); } @@ -119,9 +120,12 @@ int rte_pmd_bnxt_set_all_queues_drop_en(uint16_t port, uint8_t on) /* Stall all active VFs */ for (i = 0; i < bp->pf.active_vfs; i++) { - rc = bnxt_hwrm_func_vf_vnic_query_and_config(bp, i, - rte_pmd_bnxt_set_all_queues_drop_en_cb, &on, - bnxt_hwrm_vnic_cfg); + rc = bnxt_hwrm_func_vf_vnic_query_and_config + (bp, + i, + rte_pmd_bnxt_set_all_queues_drop_en_cb, + &on, + bnxt_hwrm_vnic_cfg); if (rc) { PMD_DRV_LOG(ERR, "Failed to update VF VNIC %d.\n", i); break; @@ -131,8 +135,9 @@ int rte_pmd_bnxt_set_all_queues_drop_en(uint16_t port, uint8_t on) return rc; } -int rte_pmd_bnxt_set_vf_mac_addr(uint16_t port, uint16_t vf, - struct ether_addr *mac_addr) +int rte_pmd_bnxt_set_vf_mac_addr(uint16_t port, + uint16_t vf, + struct ether_addr *mac_addr) { struct rte_eth_dev *dev; struct rte_eth_dev_info dev_info; @@ -163,8 +168,10 @@ int rte_pmd_bnxt_set_vf_mac_addr(uint16_t port, uint16_t vf, return rc; } -int rte_pmd_bnxt_set_vf_rate_limit(uint16_t port, uint16_t vf, - uint16_t 
tx_rate, uint64_t q_msk) +int rte_pmd_bnxt_set_vf_rate_limit(uint16_t port, + uint16_t vf, + uint16_t tx_rate, + uint64_t q_msk) { struct rte_eth_dev *eth_dev; struct rte_eth_dev_info dev_info; @@ -205,7 +212,7 @@ int rte_pmd_bnxt_set_vf_rate_limit(uint16_t port, uint16_t vf, return 0; rc = bnxt_hwrm_func_bw_cfg(bp, vf, tot_rate, - HWRM_FUNC_CFG_INPUT_ENABLES_MAX_BW); + HWRM_FUNC_CFG_INPUT_ENABLES_MAX_BW); if (!rc) bp->pf.vf_info[vf].max_tx_rate = tot_rate; @@ -247,8 +254,9 @@ int rte_pmd_bnxt_set_vf_mac_anti_spoof(uint16_t port, uint16_t vf, uint8_t on) return 0; func_flags = bp->pf.vf_info[vf].func_cfg_flags; - func_flags &= ~(HWRM_FUNC_CFG_INPUT_FLAGS_SRC_MAC_ADDR_CHECK_ENABLE | - HWRM_FUNC_CFG_INPUT_FLAGS_SRC_MAC_ADDR_CHECK_DISABLE); + func_flags &= + ~(HWRM_FUNC_CFG_INPUT_FLAGS_SRC_MAC_ADDR_CHECK_ENABLE | + HWRM_FUNC_CFG_INPUT_FLAGS_SRC_MAC_ADDR_CHECK_DISABLE); if (on) func_flags |= @@ -298,10 +306,11 @@ int rte_pmd_bnxt_set_vf_vlan_anti_spoof(uint16_t port, uint16_t vf, uint8_t on) if (!rc) { bp->pf.vf_info[vf].vlan_spoof_en = on; if (on) { - if (bnxt_hwrm_cfa_vlan_antispoof_cfg(bp, - bp->pf.first_vf_id + vf, - bp->pf.vf_info[vf].vlan_count, - bp->pf.vf_info[vf].vlan_as_table)) + if (bnxt_hwrm_cfa_vlan_antispoof_cfg + (bp, + bp->pf.first_vf_id + vf, + bp->pf.vf_info[vf].vlan_count, + bp->pf.vf_info[vf].vlan_as_table)) rc = -1; } } else { @@ -315,6 +324,7 @@ static void rte_pmd_bnxt_set_vf_vlan_stripq_cb(struct bnxt_vnic_info *vnic, void *onptr) { uint8_t *on = onptr; + vnic->vlan_strip = *on; } @@ -345,17 +355,22 @@ rte_pmd_bnxt_set_vf_vlan_stripq(uint16_t port, uint16_t vf, uint8_t on) return -ENOTSUP; } - rc = bnxt_hwrm_func_vf_vnic_query_and_config(bp, vf, - rte_pmd_bnxt_set_vf_vlan_stripq_cb, &on, - bnxt_hwrm_vnic_cfg); + rc = bnxt_hwrm_func_vf_vnic_query_and_config + (bp, + vf, + rte_pmd_bnxt_set_vf_vlan_stripq_cb, + &on, + bnxt_hwrm_vnic_cfg); if (rc) PMD_DRV_LOG(ERR, "Failed to update VF VNIC %d.\n", vf); return rc; } -int rte_pmd_bnxt_set_vf_rxmode(uint16_t port, uint16_t vf, - uint16_t rx_mask, uint8_t on) +int rte_pmd_bnxt_set_vf_rxmode(uint16_t port, + uint16_t vf, + uint16_t rx_mask, + uint8_t on) { struct rte_eth_dev *dev; struct rte_eth_dev_info dev_info; @@ -397,10 +412,12 @@ int rte_pmd_bnxt_set_vf_rxmode(uint16_t port, uint16_t vf, else bp->pf.vf_info[vf].l2_rx_mask &= ~flag; - rc = bnxt_hwrm_func_vf_vnic_query_and_config(bp, vf, - vf_vnic_set_rxmask_cb, - &bp->pf.vf_info[vf].l2_rx_mask, - bnxt_set_rx_mask_no_vlan); + rc = bnxt_hwrm_func_vf_vnic_query_and_config + (bp, + vf, + vf_vnic_set_rxmask_cb, + &bp->pf.vf_info[vf].l2_rx_mask, + bnxt_set_rx_mask_no_vlan); if (rc) PMD_DRV_LOG(ERR, "bnxt_hwrm_func_vf_vnic_set_rxmask failed\n"); @@ -433,9 +450,11 @@ static int bnxt_set_vf_table(struct bnxt *bp, uint16_t vf) vnic.fw_vnic_id = dflt_vnic; if (bnxt_hwrm_vnic_qcfg(bp, &vnic, bp->pf.first_vf_id + vf) == 0) { - if (bnxt_hwrm_cfa_l2_set_rx_mask(bp, &vnic, - bp->pf.vf_info[vf].vlan_count, - bp->pf.vf_info[vf].vlan_table)) + if (bnxt_hwrm_cfa_l2_set_rx_mask + (bp, + &vnic, + bp->pf.vf_info[vf].vlan_count, + bp->pf.vf_info[vf].vlan_table)) rc = -1; } else { rc = -1; @@ -445,8 +464,10 @@ static int bnxt_set_vf_table(struct bnxt *bp, uint16_t vf) return rc; } -int rte_pmd_bnxt_set_vf_vlan_filter(uint16_t port, uint16_t vlan, - uint64_t vf_mask, uint8_t vlan_on) +int rte_pmd_bnxt_set_vf_vlan_filter(uint16_t port, + uint16_t vlan, + uint64_t vf_mask, + uint8_t vlan_on) { struct bnxt_vlan_table_entry *ve; struct bnxt_vlan_antispoof_table_entry *vase; @@ -482,8 +503,7 @@ int 
rte_pmd_bnxt_set_vf_vlan_filter(uint16_t port, uint16_t vlan, if (vlan_on) { /* First, search for a duplicate... */ for (j = 0; j < cnt; j++) { - if (rte_be_to_cpu_16( - bp->pf.vf_info[i].vlan_table[j].vid) == vlan) + if (rte_be_to_cpu_16(bp->pf.vf_info[i].vlan_table[j].vid) == vlan) break; } if (j == cnt) { @@ -491,7 +511,7 @@ int rte_pmd_bnxt_set_vf_vlan_filter(uint16_t port, uint16_t vlan, if (cnt == getpagesize() / sizeof(struct bnxt_vlan_antispoof_table_entry)) { PMD_DRV_LOG(ERR, - "VLAN anti-spoof table is full\n"); + "VLAN anti-spoof table is full\n"); PMD_DRV_LOG(ERR, "VF %d cannot add VLAN %u\n", i, vlan); @@ -517,13 +537,14 @@ int rte_pmd_bnxt_set_vf_vlan_filter(uint16_t port, uint16_t vlan, } } else { for (j = 0; j < cnt; j++) { - if (rte_be_to_cpu_16( - bp->pf.vf_info[i].vlan_table[j].vid) != vlan) + if (rte_be_to_cpu_16(bp->pf.vf_info[i].vlan_table[j].vid) != vlan) continue; + memmove(&bp->pf.vf_info[i].vlan_table[j], &bp->pf.vf_info[i].vlan_table[j + 1], getpagesize() - ((j + 1) * sizeof(struct bnxt_vlan_table_entry))); + memmove(&bp->pf.vf_info[i].vlan_as_table[j], &bp->pf.vf_info[i].vlan_as_table[j + 1], getpagesize() - ((j + 1) * sizeof(struct @@ -647,8 +668,9 @@ int rte_pmd_bnxt_get_vf_tx_drop_count(uint16_t port, uint16_t vf_id, count); } -int rte_pmd_bnxt_mac_addr_add(uint16_t port, struct ether_addr *addr, - uint32_t vf_id) +int rte_pmd_bnxt_mac_addr_add(uint16_t port, + struct ether_addr *addr, + uint32_t vf_id) { struct rte_eth_dev *dev; struct rte_eth_dev_info dev_info; @@ -724,8 +746,9 @@ int rte_pmd_bnxt_mac_addr_add(uint16_t port, struct ether_addr *addr, } int -rte_pmd_bnxt_set_vf_vlan_insert(uint16_t port, uint16_t vf, - uint16_t vlan_id) +rte_pmd_bnxt_set_vf_vlan_insert(uint16_t port, + uint16_t vf, + uint16_t vlan_id) { struct rte_eth_dev *dev; struct rte_eth_dev_info dev_info; diff --git a/drivers/net/bnxt/rte_pmd_bnxt.h b/drivers/net/bnxt/rte_pmd_bnxt.h index 68fbe34d6..f66c44c19 100644 --- a/drivers/net/bnxt/rte_pmd_bnxt.h +++ b/drivers/net/bnxt/rte_pmd_bnxt.h @@ -19,11 +19,11 @@ enum rte_pmd_bnxt_mb_event_rsp { }; /* mailbox message types */ -#define BNXT_VF_RESET 0x01 /* VF requests reset */ +#define BNXT_VF_RESET 0x01 /* VF requests reset */ #define BNXT_VF_SET_MAC_ADDR 0x02 /* VF requests PF to set MAC addr */ -#define BNXT_VF_SET_VLAN 0x03 /* VF requests PF to set VLAN */ -#define BNXT_VF_SET_MTU 0x04 /* VF requests PF to set MTU */ -#define BNXT_VF_SET_MRU 0x05 /* VF requests PF to set MRU */ +#define BNXT_VF_SET_VLAN 0x03 /* VF requests PF to set VLAN */ +#define BNXT_VF_SET_MTU 0x04 /* VF requests PF to set MTU */ +#define BNXT_VF_SET_MRU 0x05 /* VF requests PF to set MRU */ /* * Data sent to the caller when the callback is executed. @@ -50,7 +50,9 @@ struct rte_pmd_bnxt_mb_event_param { * - (-ENODEV) if *port* invalid. * - (-EINVAL) if bad parameter. */ -int rte_pmd_bnxt_set_vf_mac_anti_spoof(uint16_t port, uint16_t vf, uint8_t on); +int rte_pmd_bnxt_set_vf_mac_anti_spoof(uint16_t port, + uint16_t vf, + uint8_t on); /** * Set the VF MAC address. @@ -66,8 +68,9 @@ int rte_pmd_bnxt_set_vf_mac_anti_spoof(uint16_t port, uint16_t vf, uint8_t on); * - (-ENODEV) if *port* invalid. * - (-EINVAL) if *vf* or *mac_addr* is invalid. 
*/ -int rte_pmd_bnxt_set_vf_mac_addr(uint16_t port, uint16_t vf, - struct ether_addr *mac_addr); +int rte_pmd_bnxt_set_vf_mac_addr(uint16_t port, + uint16_t vf, + struct ether_addr *mac_addr); /** * Enable/Disable vf vlan strip for all queues in a pool @@ -87,7 +90,9 @@ int rte_pmd_bnxt_set_vf_mac_addr(uint16_t port, uint16_t vf, * - (-EINVAL) if bad parameter. */ int -rte_pmd_bnxt_set_vf_vlan_stripq(uint16_t port, uint16_t vf, uint8_t on); +rte_pmd_bnxt_set_vf_vlan_stripq(uint16_t port, + uint16_t vf, + uint8_t on); /** * Enable/Disable vf vlan insert @@ -106,8 +111,9 @@ rte_pmd_bnxt_set_vf_vlan_stripq(uint16_t port, uint16_t vf, uint8_t on); * - (-EINVAL) if bad parameter. */ int -rte_pmd_bnxt_set_vf_vlan_insert(uint16_t port, uint16_t vf, - uint16_t vlan_id); +rte_pmd_bnxt_set_vf_vlan_insert(uint16_t port, + uint16_t vf, + uint16_t vlan_id); /** * Enable/Disable hardware VF VLAN filtering by an Ethernet device of @@ -128,8 +134,10 @@ rte_pmd_bnxt_set_vf_vlan_insert(uint16_t port, uint16_t vf, * - (-ENODEV) if *port_id* invalid. * - (-EINVAL) if bad parameter. */ -int rte_pmd_bnxt_set_vf_vlan_filter(uint16_t port, uint16_t vlan, - uint64_t vf_mask, uint8_t vlan_on); +int rte_pmd_bnxt_set_vf_vlan_filter(uint16_t port, + uint16_t vlan, + uint64_t vf_mask, + uint8_t vlan_on); /** * Enable/Disable tx loopback @@ -145,7 +153,8 @@ int rte_pmd_bnxt_set_vf_vlan_filter(uint16_t port, uint16_t vlan, * - (-ENODEV) if *port* invalid. * - (-EINVAL) if bad parameter. */ -int rte_pmd_bnxt_set_tx_loopback(uint16_t port, uint8_t on); +int rte_pmd_bnxt_set_tx_loopback(uint16_t port, + uint8_t on); /** * set all queues drop enable bit @@ -161,7 +170,8 @@ int rte_pmd_bnxt_set_tx_loopback(uint16_t port, uint8_t on); * - (-ENODEV) if *port* invalid. * - (-EINVAL) if bad parameter. */ -int rte_pmd_bnxt_set_all_queues_drop_en(uint16_t port, uint8_t on); +int rte_pmd_bnxt_set_all_queues_drop_en(uint16_t port, + uint8_t on); /** * Set the VF rate limit. @@ -179,8 +189,10 @@ int rte_pmd_bnxt_set_all_queues_drop_en(uint16_t port, uint8_t on); * - (-ENODEV) if *port* invalid. * - (-EINVAL) if *vf* or *mac_addr* is invalid. */ -int rte_pmd_bnxt_set_vf_rate_limit(uint16_t port, uint16_t vf, - uint16_t tx_rate, uint64_t q_msk); +int rte_pmd_bnxt_set_vf_rate_limit(uint16_t port, + uint16_t vf, + uint16_t tx_rate, + uint64_t q_msk); /** * Get VF's statistics @@ -233,7 +245,9 @@ int rte_pmd_bnxt_reset_vf_stats(uint16_t port, * - (-ENODEV) if *port* invalid. * - (-EINVAL) if bad parameter. */ -int rte_pmd_bnxt_set_vf_vlan_anti_spoof(uint16_t port, uint16_t vf, uint8_t on); +int rte_pmd_bnxt_set_vf_vlan_anti_spoof(uint16_t port, + uint16_t vf, + uint8_t on); /** * Set RX L2 Filtering mode of a VF of an Ethernet device. @@ -252,8 +266,10 @@ int rte_pmd_bnxt_set_vf_vlan_anti_spoof(uint16_t port, uint16_t vf, uint8_t on); * - (-ENODEV) if *port_id* invalid. * - (-EINVAL) if bad parameter. 
*/ -int rte_pmd_bnxt_set_vf_rxmode(uint16_t port, uint16_t vf, - uint16_t rx_mask, uint8_t on); +int rte_pmd_bnxt_set_vf_rxmode(uint16_t port, + uint16_t vf, + uint16_t rx_mask, + uint8_t on); /** * Returns the number of default RX queues on a VF @@ -269,7 +285,8 @@ int rte_pmd_bnxt_set_vf_rxmode(uint16_t port, uint16_t vf, * - (-ENOMEM) on an allocation failure * - (-1) firmware interface error */ -int rte_pmd_bnxt_get_vf_rx_status(uint16_t port, uint16_t vf_id); +int rte_pmd_bnxt_get_vf_rx_status(uint16_t port, + uint16_t vf_id); /** * Queries the TX drop counter for the function @@ -285,7 +302,8 @@ int rte_pmd_bnxt_get_vf_rx_status(uint16_t port, uint16_t vf_id); * - (-EINVAL) invalid vf_id specified. * - (-ENOTSUP) Ethernet device is not a PF */ -int rte_pmd_bnxt_get_vf_tx_drop_count(uint16_t port, uint16_t vf_id, +int rte_pmd_bnxt_get_vf_tx_drop_count(uint16_t port, + uint16_t vf_id, uint64_t *count); /** @@ -303,8 +321,9 @@ int rte_pmd_bnxt_get_vf_tx_drop_count(uint16_t port, uint16_t vf_id, * - (-ENOTSUP) Ethernet device is not a PF * - (-ENOMEM) on an allocation failure */ -int rte_pmd_bnxt_mac_addr_add(uint16_t port, struct ether_addr *mac_addr, - uint32_t vf_id); +int rte_pmd_bnxt_mac_addr_add(uint16_t port, + struct ether_addr *mac_addr, + uint32_t vf_id); /** * Enable/Disable VF statistics retention @@ -322,5 +341,7 @@ int rte_pmd_bnxt_mac_addr_add(uint16_t port, struct ether_addr *mac_addr, * - (-ENODEV) if *port* invalid. * - (-EINVAL) if bad parameter. */ -int rte_pmd_bnxt_set_vf_persist_stats(uint16_t port, uint16_t vf, uint8_t on); +int rte_pmd_bnxt_set_vf_persist_stats(uint16_t port, + uint16_t vf, + uint8_t on); #endif /* _PMD_BNXT_H_ */ From patchwork Tue Jun 19 21:30:40 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 41283 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 41CBF1B432; Tue, 19 Jun 2018 23:31:38 +0200 (CEST) Received: from rnd-relay.smtp.broadcom.com (rnd-relay.smtp.broadcom.com [192.19.229.170]) by dpdk.org (Postfix) with ESMTP id 0EE351B05D for ; Tue, 19 Jun 2018 23:31:09 +0200 (CEST) Received: from nis-sj1-27.broadcom.com (nis-sj1-27.lvn.broadcom.net [10.75.144.136]) by rnd-relay.smtp.broadcom.com (Postfix) with ESMTP id 95AA630C06B; Tue, 19 Jun 2018 14:31:05 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.10.3 rnd-relay.smtp.broadcom.com 95AA630C06B DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=broadcom.com; s=dkimrelay; t=1529443865; bh=pz/89ztQ7TY/N7augos+j4Ef1sJsaJWwUTx9PVXcUEE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=FMOu27/8uBOD9rCbRhzTpujATwcFj5+qHEA9xazk3Cnu20yBgFmJ7w42tmBXQ5JO8 hIz65hTMj84NAUrLBbVU4i/WD5QoRzxmNb1vTrYJtbNVNzgRzhy12zFYqbofUUAEX5 AuBJ07BqzcDYWGGiwjTsC0zOct4ze/6A5HaqmGpQ= Received: from C02VPB22HTD6.dhcp.broadcom.net (c02vpb22htd6.dhcp.broadcom.net [10.136.50.120]) by nis-sj1-27.broadcom.com (Postfix) with ESMTP id 7861AAC0768; Tue, 19 Jun 2018 14:31:05 -0700 (PDT) From: Ajit Khaparde To: dev@dpdk.org Cc: ferruh.yigit@intel.com, Scott Branden Date: Tue, 19 Jun 2018 14:30:40 -0700 Message-Id: <20180619213058.12273-14-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.15.1 (Apple Git-101) In-Reply-To: <20180619213058.12273-1-ajit.khaparde@broadcom.com> References: <20180619213058.12273-1-ajit.khaparde@broadcom.com> Subject: [dpdk-dev] [PATCH 
13/31] net/bnxt: code cleanup style of bnxt stats X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Scott Branden Cleanup alignment, brackets, debug string style of bnxt_stats Signed-off-by: Scott Branden Signed-off-by: Ajit Khaparde --- drivers/net/bnxt/bnxt_stats.c | 84 ++++++++++++++++++++++++++----------------- drivers/net/bnxt/bnxt_stats.h | 27 +++++++++----- 2 files changed, 70 insertions(+), 41 deletions(-) diff --git a/drivers/net/bnxt/bnxt_stats.c b/drivers/net/bnxt/bnxt_stats.c index a5d3c8660..d930aa00e 100644 --- a/drivers/net/bnxt/bnxt_stats.c +++ b/drivers/net/bnxt/bnxt_stats.c @@ -201,7 +201,7 @@ void bnxt_free_stats(struct bnxt *bp) } int bnxt_stats_get_op(struct rte_eth_dev *eth_dev, - struct rte_eth_stats *bnxt_stats) + struct rte_eth_stats *bnxt_stats) { int rc = 0; unsigned int i; @@ -217,8 +217,11 @@ int bnxt_stats_get_op(struct rte_eth_dev *eth_dev, struct bnxt_rx_queue *rxq = bp->rx_queues[i]; struct bnxt_cp_ring_info *cpr = rxq->cp_ring; - rc = bnxt_hwrm_ctx_qstats(bp, cpr->hw_stats_ctx_id, i, - bnxt_stats, 1); + rc = bnxt_hwrm_ctx_qstats(bp, + cpr->hw_stats_ctx_id, + i, + bnxt_stats, + 1); if (unlikely(rc)) return rc; bnxt_stats->rx_nombuf += @@ -229,8 +232,12 @@ int bnxt_stats_get_op(struct rte_eth_dev *eth_dev, struct bnxt_tx_queue *txq = bp->tx_queues[i]; struct bnxt_cp_ring_info *cpr = txq->cp_ring; - rc = bnxt_hwrm_ctx_qstats(bp, cpr->hw_stats_ctx_id, i, - bnxt_stats, 0); + rc = bnxt_hwrm_ctx_qstats(bp, + cpr->hw_stats_ctx_id, + i, + bnxt_stats, + 0); + if (unlikely(rc)) return rc; } @@ -259,7 +266,8 @@ void bnxt_stats_reset_op(struct rte_eth_dev *eth_dev) } int bnxt_dev_xstats_get_op(struct rte_eth_dev *eth_dev, - struct rte_eth_xstat *xstats, unsigned int n) + struct rte_eth_xstat *xstats, + unsigned int n) { struct bnxt *bp = (struct bnxt *)eth_dev->data->dev_private; @@ -279,18 +287,20 @@ int bnxt_dev_xstats_get_op(struct rte_eth_dev *eth_dev, for (i = 0; i < RTE_DIM(bnxt_rx_stats_strings); i++) { uint64_t *rx_stats = (uint64_t *)bp->hw_rx_port_stats; xstats[count].id = count; - xstats[count].value = rte_le_to_cpu_64( - *(uint64_t *)((char *)rx_stats + - bnxt_rx_stats_strings[i].offset)); + xstats[count].value = rte_le_to_cpu_64 + (*(uint64_t *)((char *)rx_stats + + bnxt_rx_stats_strings[i].offset)); + count++; } for (i = 0; i < RTE_DIM(bnxt_tx_stats_strings); i++) { uint64_t *tx_stats = (uint64_t *)bp->hw_tx_port_stats; xstats[count].id = count; - xstats[count].value = rte_le_to_cpu_64( - *(uint64_t *)((char *)tx_stats + - bnxt_tx_stats_strings[i].offset)); + xstats[count].value = rte_le_to_cpu_64 + (*(uint64_t *)((char *)tx_stats + + bnxt_tx_stats_strings[i].offset)); + count++; } @@ -303,8 +313,8 @@ int bnxt_dev_xstats_get_op(struct rte_eth_dev *eth_dev, } int bnxt_dev_xstats_get_names_op(__rte_unused struct rte_eth_dev *eth_dev, - struct rte_eth_xstat_name *xstats_names, - __rte_unused unsigned int limit) + struct rte_eth_xstat_name *xstats_names, + __rte_unused unsigned int limit) { /* Account for the Tx drop pkts aka the Anti spoof counter */ const unsigned int stat_cnt = RTE_DIM(bnxt_rx_stats_strings) + @@ -316,24 +326,27 @@ int bnxt_dev_xstats_get_names_op(__rte_unused struct rte_eth_dev *eth_dev, for (i = 0; i < RTE_DIM(bnxt_rx_stats_strings); i++) { snprintf(xstats_names[count].name, - sizeof(xstats_names[count].name), - "%s", - bnxt_rx_stats_strings[i].name); + 
sizeof(xstats_names[count].name), + "%s", + bnxt_rx_stats_strings[i].name); + count++; } for (i = 0; i < RTE_DIM(bnxt_tx_stats_strings); i++) { snprintf(xstats_names[count].name, - sizeof(xstats_names[count].name), - "%s", - bnxt_tx_stats_strings[i].name); + sizeof(xstats_names[count].name), + "%s", + bnxt_tx_stats_strings[i].name); + count++; } snprintf(xstats_names[count].name, - sizeof(xstats_names[count].name), - "%s", - bnxt_func_stats_strings[4].name); + sizeof(xstats_names[count].name), + "%s", + bnxt_func_stats_strings[4].name); + count++; } return stat_cnt; @@ -354,8 +367,10 @@ void bnxt_dev_xstats_reset_op(struct rte_eth_dev *eth_dev) PMD_DRV_LOG(ERR, "Operation not supported\n"); } -int bnxt_dev_xstats_get_by_id_op(struct rte_eth_dev *dev, const uint64_t *ids, - uint64_t *values, unsigned int limit) +int bnxt_dev_xstats_get_by_id_op(struct rte_eth_dev *dev, + const uint64_t *ids, + uint64_t *values, + unsigned int limit) { /* Account for the Tx drop pkts aka the Anti spoof counter */ const unsigned int stat_cnt = RTE_DIM(bnxt_rx_stats_strings) + @@ -370,7 +385,7 @@ int bnxt_dev_xstats_get_by_id_op(struct rte_eth_dev *dev, const uint64_t *ids, bnxt_dev_xstats_get_by_id_op(dev, NULL, values_copy, stat_cnt); for (i = 0; i < limit; i++) { if (ids[i] >= stat_cnt) { - PMD_DRV_LOG(ERR, "id value isn't valid"); + PMD_DRV_LOG(ERR, "id value isn't valid\n"); return -1; } values[i] = values_copy[ids[i]]; @@ -379,8 +394,9 @@ int bnxt_dev_xstats_get_by_id_op(struct rte_eth_dev *dev, const uint64_t *ids, } int bnxt_dev_xstats_get_names_by_id_op(struct rte_eth_dev *dev, - struct rte_eth_xstat_name *xstats_names, - const uint64_t *ids, unsigned int limit) + struct rte_eth_xstat_name *xstats_names, + const uint64_t *ids, + unsigned int limit) { /* Account for the Tx drop pkts aka the Anti spoof counter */ const unsigned int stat_cnt = RTE_DIM(bnxt_rx_stats_strings) + @@ -391,16 +407,18 @@ int bnxt_dev_xstats_get_names_by_id_op(struct rte_eth_dev *dev, if (!ids) return bnxt_dev_xstats_get_names_op(dev, xstats_names, stat_cnt); - bnxt_dev_xstats_get_names_by_id_op(dev, xstats_names_copy, NULL, - stat_cnt); + + bnxt_dev_xstats_get_names_by_id_op(dev, + xstats_names_copy, + NULL, + stat_cnt); for (i = 0; i < limit; i++) { if (ids[i] >= stat_cnt) { - PMD_DRV_LOG(ERR, "id value isn't valid"); + PMD_DRV_LOG(ERR, "id value isn't valid\n"); return -1; } - strcpy(xstats_names[i].name, - xstats_names_copy[ids[i]].name); + strcpy(xstats_names[i].name, xstats_names_copy[ids[i]].name); } return stat_cnt; } diff --git a/drivers/net/bnxt/bnxt_stats.h b/drivers/net/bnxt/bnxt_stats.h index b0f135a5a..08570238d 100644 --- a/drivers/net/bnxt/bnxt_stats.h +++ b/drivers/net/bnxt/bnxt_stats.h @@ -9,20 +9,31 @@ #include void bnxt_free_stats(struct bnxt *bp); + int bnxt_stats_get_op(struct rte_eth_dev *eth_dev, - struct rte_eth_stats *bnxt_stats); + struct rte_eth_stats *bnxt_stats); + void bnxt_stats_reset_op(struct rte_eth_dev *eth_dev); + int bnxt_dev_xstats_get_names_op(__rte_unused struct rte_eth_dev *eth_dev, - struct rte_eth_xstat_name *xstats_names, - __rte_unused unsigned int limit); + struct rte_eth_xstat_name *xstats_names, + __rte_unused unsigned int limit); + int bnxt_dev_xstats_get_op(struct rte_eth_dev *eth_dev, - struct rte_eth_xstat *xstats, unsigned int n); + struct rte_eth_xstat *xstats, + unsigned int n); + void bnxt_dev_xstats_reset_op(struct rte_eth_dev *eth_dev); -int bnxt_dev_xstats_get_by_id_op(struct rte_eth_dev *dev, const uint64_t *ids, - uint64_t *values, unsigned int limit); + +int 
bnxt_dev_xstats_get_by_id_op(struct rte_eth_dev *dev, + const uint64_t *ids, + uint64_t *values, + unsigned int limit); + int bnxt_dev_xstats_get_names_by_id_op(struct rte_eth_dev *dev, - struct rte_eth_xstat_name *xstats_names, - const uint64_t *ids, unsigned int limit); + struct rte_eth_xstat_name *xstats_names, + const uint64_t *ids, + unsigned int limit); struct bnxt_xstats_name_off { char name[RTE_ETH_XSTATS_NAME_SIZE]; From patchwork Tue Jun 19 21:30:41 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 41282 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id D157E1B42B; Tue, 19 Jun 2018 23:31:35 +0200 (CEST) Received: from rnd-relay.smtp.broadcom.com (rnd-relay.smtp.broadcom.com [192.19.229.170]) by dpdk.org (Postfix) with ESMTP id 0CE2C1B05C for ; Tue, 19 Jun 2018 23:31:09 +0200 (CEST) Received: from nis-sj1-27.broadcom.com (nis-sj1-27.lvn.broadcom.net [10.75.144.136]) by rnd-relay.smtp.broadcom.com (Postfix) with ESMTP id 8D3A530C06A; Tue, 19 Jun 2018 14:31:05 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.10.3 rnd-relay.smtp.broadcom.com 8D3A530C06A DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=broadcom.com; s=dkimrelay; t=1529443865; bh=/aDSiIGrNVnDiQJzYcFL7TQLzOFACF9AzID83WnVEpQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=jfihG/YM4+gIaH8m6jiRQSD3eA2kC4JvvYLCT47Q3mK1j7WJPml4Eyd4bH8Oyraz3 D5Z1DL+84lGFSztfuaFh8088zCckKoYZGCgwRScBds9y/Bk4JRaVjIiKYKRVXhR2HR 7PHpFDVcMxo/NFe206f7l92YTSejmdJuOK+VyxSU= Received: from C02VPB22HTD6.dhcp.broadcom.net (c02vpb22htd6.dhcp.broadcom.net [10.136.50.120]) by nis-sj1-27.broadcom.com (Postfix) with ESMTP id C0B00AC0799; Tue, 19 Jun 2018 14:31:05 -0700 (PDT) From: Ajit Khaparde To: dev@dpdk.org Cc: ferruh.yigit@intel.com, Scott Branden Date: Tue, 19 Jun 2018 14:30:41 -0700 Message-Id: <20180619213058.12273-15-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.15.1 (Apple Git-101) In-Reply-To: <20180619213058.12273-1-ajit.khaparde@broadcom.com> References: <20180619213058.12273-1-ajit.khaparde@broadcom.com> Subject: [dpdk-dev] [PATCH 14/31] net/bnxt: code cleanup style of bnxt vnic X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Scott Branden Cleanup alignment, brackets, debug string style of bnxt_vnic Signed-off-by: Scott Branden Signed-off-by: Ajit Khaparde --- drivers/net/bnxt/bnxt_vnic.c | 26 +++++++++++++------------- drivers/net/bnxt/bnxt_vnic.h | 8 ++++++-- 2 files changed, 19 insertions(+), 15 deletions(-) diff --git a/drivers/net/bnxt/bnxt_vnic.c b/drivers/net/bnxt/bnxt_vnic.c index 19d06af55..5d9d369a3 100644 --- a/drivers/net/bnxt/bnxt_vnic.c +++ b/drivers/net/bnxt/bnxt_vnic.c @@ -64,8 +64,9 @@ void bnxt_init_vnics(struct bnxt *bp) STAILQ_INIT(&bp->ff_pool[i]); } -int bnxt_free_vnic(struct bnxt *bp, struct bnxt_vnic_info *vnic, - int pool) +int bnxt_free_vnic(struct bnxt *bp, + struct bnxt_vnic_info *vnic, + int pool) { struct bnxt_vnic_info *temp; @@ -143,14 +144,16 @@ int bnxt_alloc_vnic_attributes(struct bnxt *bp) struct rte_pci_device *pdev = bp->pdev; const struct rte_memzone *mz; char mz_name[RTE_MEMZONE_NAMESIZE]; - uint32_t entry_length = RTE_CACHE_LINE_ROUNDUP( - 
HW_HASH_INDEX_SIZE * sizeof(*vnic->rss_table) + - HW_HASH_KEY_SIZE + - BNXT_MAX_MC_ADDRS * ETHER_ADDR_LEN); + uint32_t entry_length; uint16_t max_vnics; int i; rte_iova_t mz_phys_addr; + entry_length = RTE_CACHE_LINE_ROUNDUP + (HW_HASH_INDEX_SIZE * sizeof(*vnic->rss_table) + + HW_HASH_KEY_SIZE + + BNXT_MAX_MC_ADDRS * ETHER_ADDR_LEN); + max_vnics = bp->max_vnics; snprintf(mz_name, RTE_MEMZONE_NAMESIZE, "bnxt_%04x:%02x:%02x:%02x_vnicattr", pdev->addr.domain, @@ -168,14 +171,11 @@ int bnxt_alloc_vnic_attributes(struct bnxt *bp) } mz_phys_addr = mz->iova; if ((unsigned long)mz->addr == mz_phys_addr) { - PMD_DRV_LOG(WARNING, - "Memzone physical address same as virtual.\n"); - PMD_DRV_LOG(WARNING, - "Using rte_mem_virt2iova()\n"); + PMD_DRV_LOG(WARNING, "Memzone phys addr == virtual\n"); + PMD_DRV_LOG(WARNING, "Using rte_mem_virt2iova()\n"); mz_phys_addr = rte_mem_virt2iova(mz->addr); if (mz_phys_addr == 0) { - PMD_DRV_LOG(ERR, - "unable to map vnic address to physical memory\n"); + PMD_DRV_LOG(ERR, "unable to map vnic addr\n"); return -ENOMEM; } } @@ -234,7 +234,7 @@ int bnxt_alloc_vnic_mem(struct bnxt *bp) vnic_mem = rte_zmalloc("bnxt_vnic_info", max_vnics * sizeof(struct bnxt_vnic_info), 0); if (vnic_mem == NULL) { - PMD_DRV_LOG(ERR, "Failed to alloc memory for %d VNICs", + PMD_DRV_LOG(ERR, "Failed to alloc memory for %d VNICs\n", max_vnics); return -ENOMEM; } diff --git a/drivers/net/bnxt/bnxt_vnic.h b/drivers/net/bnxt/bnxt_vnic.h index c521d7e5a..3401ae098 100644 --- a/drivers/net/bnxt/bnxt_vnic.h +++ b/drivers/net/bnxt/bnxt_vnic.h @@ -58,12 +58,16 @@ struct bnxt_vnic_info { struct bnxt; void bnxt_init_vnics(struct bnxt *bp); -int bnxt_free_vnic(struct bnxt *bp, struct bnxt_vnic_info *vnic, - int pool); + +int bnxt_free_vnic(struct bnxt *bp, + struct bnxt_vnic_info *vnic, + int pool); + struct bnxt_vnic_info *bnxt_alloc_vnic(struct bnxt *bp); void bnxt_free_all_vnics(struct bnxt *bp); void bnxt_free_vnic_attributes(struct bnxt *bp); int bnxt_alloc_vnic_attributes(struct bnxt *bp); void bnxt_free_vnic_mem(struct bnxt *bp); int bnxt_alloc_vnic_mem(struct bnxt *bp); + #endif From patchwork Tue Jun 19 21:30:42 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 41287 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 0374E1B44A; Tue, 19 Jun 2018 23:31:46 +0200 (CEST) Received: from rnd-relay.smtp.broadcom.com (rnd-relay.smtp.broadcom.com [192.19.229.170]) by dpdk.org (Postfix) with ESMTP id BC1571B05E for ; Tue, 19 Jun 2018 23:31:09 +0200 (CEST) Received: from nis-sj1-27.broadcom.com (nis-sj1-27.lvn.broadcom.net [10.75.144.136]) by rnd-relay.smtp.broadcom.com (Postfix) with ESMTP id 3129A30C064; Tue, 19 Jun 2018 14:31:06 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.10.3 rnd-relay.smtp.broadcom.com 3129A30C064 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=broadcom.com; s=dkimrelay; t=1529443866; bh=F3ra/gFjRStqTHaJOSyvtanMgVlacVKLeeqazN0b5q0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=pu9JLguzV70lGpIp8nEDY3NAt8L03nv3WH1QM9SguoSu0KETEIR1iCOwu6KutTg1Y cz+JhtEiLplvf7/QADvJDRgIYmS6g7VjFIzKUGq2Xs+bhv3zOiBAJePn3ODx7tC7kg NQ5YBcvMW2u7uWGoxuJzL7NpJJlrj96RKZEig4jM= Received: from C02VPB22HTD6.dhcp.broadcom.net (c02vpb22htd6.dhcp.broadcom.net [10.136.50.120]) by nis-sj1-27.broadcom.com (Postfix) with ESMTP id 14FFFAC06AD; Tue, 
19 Jun 2018 14:31:06 -0700 (PDT) From: Ajit Khaparde To: dev@dpdk.org Cc: ferruh.yigit@intel.com, Scott Branden Date: Tue, 19 Jun 2018 14:30:42 -0700 Message-Id: <20180619213058.12273-16-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.15.1 (Apple Git-101) In-Reply-To: <20180619213058.12273-1-ajit.khaparde@broadcom.com> References: <20180619213058.12273-1-ajit.khaparde@broadcom.com> Subject: [dpdk-dev] [PATCH 15/31] net/bnxt: code cleanup style of bnxt txq X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Scott Branden Cleanup alignment, brackets, debug string style of bnxt_txq Signed-off-by: Scott Branden Reviewed-by: Ajit Khaparde --- drivers/net/bnxt/bnxt_txq.c | 24 ++++++++++++++---------- drivers/net/bnxt/bnxt_txq.h | 9 +++++---- 2 files changed, 19 insertions(+), 14 deletions(-) diff --git a/drivers/net/bnxt/bnxt_txq.c b/drivers/net/bnxt/bnxt_txq.c index b9b975e4c..677bb9692 100644 --- a/drivers/net/bnxt/bnxt_txq.c +++ b/drivers/net/bnxt/bnxt_txq.c @@ -74,10 +74,10 @@ void bnxt_tx_queue_release_op(void *tx_queue) } int bnxt_tx_queue_setup_op(struct rte_eth_dev *eth_dev, - uint16_t queue_idx, - uint16_t nb_desc, - unsigned int socket_id, - const struct rte_eth_txconf *tx_conf) + uint16_t queue_idx, + uint16_t nb_desc, + unsigned int socket_id, + const struct rte_eth_txconf *tx_conf) { struct bnxt *bp = (struct bnxt *)eth_dev->data->dev_private; struct bnxt_tx_queue *txq; @@ -91,7 +91,7 @@ int bnxt_tx_queue_setup_op(struct rte_eth_dev *eth_dev, } if (!nb_desc || nb_desc > MAX_TX_DESC_CNT) { - PMD_DRV_LOG(ERR, "nb_desc %d is invalid", nb_desc); + PMD_DRV_LOG(ERR, "nb_desc %d is invalid\n", nb_desc); rc = -EINVAL; goto out; } @@ -106,7 +106,7 @@ int bnxt_tx_queue_setup_op(struct rte_eth_dev *eth_dev, txq = rte_zmalloc_socket("bnxt_tx_queue", sizeof(struct bnxt_tx_queue), RTE_CACHE_LINE_SIZE, socket_id); if (!txq) { - PMD_DRV_LOG(ERR, "bnxt_tx_queue allocation failed!"); + PMD_DRV_LOG(ERR, "bnxt_tx_queue allocation failed!\n"); rc = -ENOMEM; goto out; } @@ -122,16 +122,20 @@ int bnxt_tx_queue_setup_op(struct rte_eth_dev *eth_dev, txq->port_id = eth_dev->data->port_id; /* Allocate TX ring hardware descriptors */ - if (bnxt_alloc_rings(bp, queue_idx, txq, NULL, txq->cp_ring, - "txr")) { - PMD_DRV_LOG(ERR, "ring_dma_zone_reserve for tx_ring failed!"); + if (bnxt_alloc_rings(bp, + queue_idx, + txq, + NULL, + txq->cp_ring, + "txr")) { + PMD_DRV_LOG(ERR, "ring_dma_zone_reserve for tx_ring failed!\n"); bnxt_tx_queue_release_op(txq); rc = -ENOMEM; goto out; } if (bnxt_init_one_tx_ring(txq)) { - PMD_DRV_LOG(ERR, "bnxt_init_one_tx_ring failed!"); + PMD_DRV_LOG(ERR, "bnxt_init_one_tx_ring failed!\n"); bnxt_tx_queue_release_op(txq); rc = -ENOMEM; goto out; diff --git a/drivers/net/bnxt/bnxt_txq.h b/drivers/net/bnxt/bnxt_txq.h index f2c712a75..9da8f39d8 100644 --- a/drivers/net/bnxt/bnxt_txq.h +++ b/drivers/net/bnxt/bnxt_txq.h @@ -40,8 +40,9 @@ void bnxt_free_txq_stats(struct bnxt_tx_queue *txq); void bnxt_free_tx_mbufs(struct bnxt *bp); void bnxt_tx_queue_release_op(void *tx_queue); int bnxt_tx_queue_setup_op(struct rte_eth_dev *eth_dev, - uint16_t queue_idx, - uint16_t nb_desc, - unsigned int socket_id, - const struct rte_eth_txconf *tx_conf); + uint16_t queue_idx, + uint16_t nb_desc, + unsigned int socket_id, + const struct rte_eth_txconf *tx_conf); + #endif From patchwork Tue Jun 19 21:30:43 2018 Content-Type: 
text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 41290 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id EE2891B45E; Tue, 19 Jun 2018 23:31:51 +0200 (CEST) Received: from rnd-relay.smtp.broadcom.com (rnd-relay.smtp.broadcom.com [192.19.229.170]) by dpdk.org (Postfix) with ESMTP id D8DE31B05F for ; Tue, 19 Jun 2018 23:31:09 +0200 (CEST) Received: from nis-sj1-27.broadcom.com (nis-sj1-27.lvn.broadcom.net [10.75.144.136]) by rnd-relay.smtp.broadcom.com (Postfix) with ESMTP id 7934930C070; Tue, 19 Jun 2018 14:31:06 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.10.3 rnd-relay.smtp.broadcom.com 7934930C070 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=broadcom.com; s=dkimrelay; t=1529443866; bh=Q5Zeb1XQ9Qn2LkcvcbhmqC9EICOvVODVns53G0eo/1g=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=XiRK6ZynUR7NFrp528Du/Q0tmY4rG0QQUO3vyE5MxpGwgtu99Q1mjLvR0t+RrC61l gbQ0b32t0Hmmi5d90iLKX7sWDirk8ztI1dQc4by3WtHO7ExSvqlbNzPBb0dQY7TUBQ O6bWl78KMPowerGN71AF4EGPKOt5nMnmX0XoVyh4= Received: from C02VPB22HTD6.dhcp.broadcom.net (c02vpb22htd6.dhcp.broadcom.net [10.136.50.120]) by nis-sj1-27.broadcom.com (Postfix) with ESMTP id 5C75BAC0768; Tue, 19 Jun 2018 14:31:06 -0700 (PDT) From: Ajit Khaparde To: dev@dpdk.org Cc: ferruh.yigit@intel.com, Scott Branden Date: Tue, 19 Jun 2018 14:30:43 -0700 Message-Id: <20180619213058.12273-17-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.15.1 (Apple Git-101) In-Reply-To: <20180619213058.12273-1-ajit.khaparde@broadcom.com> References: <20180619213058.12273-1-ajit.khaparde@broadcom.com> Subject: [dpdk-dev] [PATCH 16/31] net/bnxt: code cleanup style of bnxt rxq X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Scott Branden Cleanup alignment, brackets, debug string style of bnxt_rxq Signed-off-by: Scott Branden Reviewed-by: Randy Schacher Signed-off-by: Ajit Khaparde --- drivers/net/bnxt/bnxt_rxq.c | 22 +++++++++++++--------- drivers/net/bnxt/bnxt_rxq.h | 12 +++++++----- 2 files changed, 20 insertions(+), 14 deletions(-) diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c index f405e2575..d622ad4ef 100644 --- a/drivers/net/bnxt/bnxt_rxq.c +++ b/drivers/net/bnxt/bnxt_rxq.c @@ -83,8 +83,8 @@ int bnxt_mq_rx_configure(struct bnxt *bp) /* For each pool, allocate MACVLAN CFA rule & VNIC */ max_pools = RTE_MIN(bp->max_vnics, RTE_MIN(bp->max_l2_ctx, - RTE_MIN(bp->max_rsscos_ctx, - ETH_64_POOLS))); + RTE_MIN(bp->max_rsscos_ctx, + ETH_64_POOLS))); if (pools > max_pools) pools = max_pools; break; @@ -280,11 +280,11 @@ void bnxt_rx_queue_release_op(void *rx_queue) } int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev, - uint16_t queue_idx, - uint16_t nb_desc, - unsigned int socket_id, - const struct rte_eth_rxconf *rx_conf, - struct rte_mempool *mp) + uint16_t queue_idx, + uint16_t nb_desc, + unsigned int socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp) { struct bnxt *bp = (struct bnxt *)eth_dev->data->dev_private; uint64_t rx_offloads = eth_dev->data->dev_conf.rxmode.offloads; @@ -336,8 +336,12 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev, eth_dev->data->rx_queues[queue_idx] = rxq; /* 
Allocate RX ring hardware descriptors */ - if (bnxt_alloc_rings(bp, queue_idx, NULL, rxq, rxq->cp_ring, - "rxr")) { + if (bnxt_alloc_rings(bp, + queue_idx, + NULL, + rxq, + rxq->cp_ring, + "rxr")) { PMD_DRV_LOG(ERR, "ring_dma_zone_reserve for rx_ring failed!\n"); bnxt_rx_queue_release_op(rxq); diff --git a/drivers/net/bnxt/bnxt_rxq.h b/drivers/net/bnxt/bnxt_rxq.h index e5d6001d3..6e6c04010 100644 --- a/drivers/net/bnxt/bnxt_rxq.h +++ b/drivers/net/bnxt/bnxt_rxq.h @@ -42,12 +42,14 @@ struct bnxt_rx_queue { void bnxt_free_rxq_stats(struct bnxt_rx_queue *rxq); int bnxt_mq_rx_configure(struct bnxt *bp); void bnxt_rx_queue_release_op(void *rx_queue); + int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev, - uint16_t queue_idx, - uint16_t nb_desc, - unsigned int socket_id, - const struct rte_eth_rxconf *rx_conf, - struct rte_mempool *mp); + uint16_t queue_idx, + uint16_t nb_desc, + unsigned int socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp); + void bnxt_free_rx_mbufs(struct bnxt *bp); int bnxt_rx_queue_intr_enable_op(struct rte_eth_dev *eth_dev, uint16_t queue_id); From patchwork Tue Jun 19 21:30:44 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 41286 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 15AC11B444; Tue, 19 Jun 2018 23:31:44 +0200 (CEST) Received: from rnd-relay.smtp.broadcom.com (rnd-relay.smtp.broadcom.com [192.19.229.170]) by dpdk.org (Postfix) with ESMTP id BBCF11B056 for ; Tue, 19 Jun 2018 23:31:09 +0200 (CEST) Received: from nis-sj1-27.broadcom.com (nis-sj1-27.lvn.broadcom.net [10.75.144.136]) by rnd-relay.smtp.broadcom.com (Postfix) with ESMTP id 6E75930C06E; Tue, 19 Jun 2018 14:31:06 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.10.3 rnd-relay.smtp.broadcom.com 6E75930C06E DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=broadcom.com; s=dkimrelay; t=1529443866; bh=3IR7uj7MDWddRloMoSvvwiAp4ZaZ5OgeapOxslTb170=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=QFvu41UGGEn2nEeuiyIhjHu/1NhljaLiKk1Oz7bwYUk347ok/AI0vM/yCixNmatZ/ VWAg00c5U6BiwbPXObgKcLT7corMwspH9beedMOEaw36ZYhR+KjiM0r4Bnouz3dEk3 ExoAbL3f2oy0JBXc+EYQJSi7VWJIS/G1Mpy3X2L0= Received: from C02VPB22HTD6.dhcp.broadcom.net (c02vpb22htd6.dhcp.broadcom.net [10.136.50.120]) by nis-sj1-27.broadcom.com (Postfix) with ESMTP id A3F8EAC0799; Tue, 19 Jun 2018 14:31:06 -0700 (PDT) From: Ajit Khaparde To: dev@dpdk.org Cc: ferruh.yigit@intel.com Date: Tue, 19 Jun 2018 14:30:44 -0700 Message-Id: <20180619213058.12273-18-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.15.1 (Apple Git-101) In-Reply-To: <20180619213058.12273-1-ajit.khaparde@broadcom.com> References: <20180619213058.12273-1-ajit.khaparde@broadcom.com> Subject: [dpdk-dev] [PATCH 17/31] net/bnxt: code cleanup style of bnxt vnic X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" code cleanup style of bnxt_vnic. 
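The cleanups in patches 10-19 converge on one consistent style, visible in the diffs above: long calls are wrapped with one argument per line, function prototypes list one parameter per line, and PMD_DRV_LOG() format strings are terminated with "\n". A minimal standalone sketch of that style (illustrative only, not code from the driver; PMD_DRV_LOG is stubbed out here so the snippet compiles on its own):

    #include <stdio.h>

    /* Stand-in for the driver's logging macro, just for this sketch. */
    #define PMD_DRV_LOG(level, fmt, ...) \
            fprintf(stderr, #level ": " fmt, ##__VA_ARGS__)

    /* Prototype style: one parameter per line, aligned continuations. */
    static int example_queue_setup(unsigned int queue_idx,
                                   unsigned int nb_desc,
                                   unsigned int socket_id)
    {
            if (nb_desc == 0) {
                    /* Log strings carry an explicit trailing newline. */
                    PMD_DRV_LOG(ERR, "nb_desc %u is invalid\n", nb_desc);
                    return -1;
            }

            /* Wrapped call: continuation arguments on their own line. */
            PMD_DRV_LOG(INFO, "queue %u ready on socket %u\n",
                        queue_idx, socket_id);
            return 0;
    }

    int main(void)
    {
            return example_queue_setup(0, 256, 0);
    }
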
Signed-off-by: Ajit Khaparde --- drivers/net/bnxt/bnxt_vnic.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/bnxt/bnxt_vnic.c b/drivers/net/bnxt/bnxt_vnic.c index 5d9d369a3..d5d81fd36 100644 --- a/drivers/net/bnxt/bnxt_vnic.c +++ b/drivers/net/bnxt/bnxt_vnic.c @@ -235,7 +235,7 @@ int bnxt_alloc_vnic_mem(struct bnxt *bp) max_vnics * sizeof(struct bnxt_vnic_info), 0); if (vnic_mem == NULL) { PMD_DRV_LOG(ERR, "Failed to alloc memory for %d VNICs\n", - max_vnics); + max_vnics); return -ENOMEM; } bp->vnic_info = vnic_mem; From patchwork Tue Jun 19 21:30:45 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 41288 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 28A3A1B44E; Tue, 19 Jun 2018 23:31:47 +0200 (CEST) Received: from rnd-relay.smtp.broadcom.com (rnd-relay.smtp.broadcom.com [192.19.229.170]) by dpdk.org (Postfix) with ESMTP id D8F4D1B060 for ; Tue, 19 Jun 2018 23:31:09 +0200 (CEST) Received: from nis-sj1-27.broadcom.com (nis-sj1-27.lvn.broadcom.net [10.75.144.136]) by rnd-relay.smtp.broadcom.com (Postfix) with ESMTP id 14BA730C077; Tue, 19 Jun 2018 14:31:07 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.10.3 rnd-relay.smtp.broadcom.com 14BA730C077 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=broadcom.com; s=dkimrelay; t=1529443867; bh=VuDHAQc5ebZPenV+tMDzyVI26pn82GHb5BuGbYuJ1a4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ZyH7qo42uvO/74USS0u/ZuzT55Kqg7OBawCfEp+OtLVsoE46nhyUGuBh5VHPPWnGQ sv2UFESxjNKLDB0UMBEnVzqpo0+eX3OS872WEETF/VomH3IejXhXzAauDEzHq273Kt iaUvtUmf4Z4Ei0unAAOiL+7ZoDXYYkF8q+1m16kQ= Received: from C02VPB22HTD6.dhcp.broadcom.net (c02vpb22htd6.dhcp.broadcom.net [10.136.50.120]) by nis-sj1-27.broadcom.com (Postfix) with ESMTP id EA8AFAC06AD; Tue, 19 Jun 2018 14:31:06 -0700 (PDT) From: Ajit Khaparde To: dev@dpdk.org Cc: ferruh.yigit@intel.com, Scott Branden Date: Tue, 19 Jun 2018 14:30:45 -0700 Message-Id: <20180619213058.12273-19-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.15.1 (Apple Git-101) In-Reply-To: <20180619213058.12273-1-ajit.khaparde@broadcom.com> References: <20180619213058.12273-1-ajit.khaparde@broadcom.com> Subject: [dpdk-dev] [PATCH 18/31] net/bnxt: code cleanup style of bnxt txr X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Scott Branden Cleanup alignment, brackets, debug string style of bnxt_txr Signed-off-by: Scott Branden Signed-off-by: Ajit Khaparde --- drivers/net/bnxt/bnxt_txr.c | 5 +++-- drivers/net/bnxt/bnxt_txr.h | 9 +++++---- 2 files changed, 8 insertions(+), 6 deletions(-) diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c index 68645b2f7..f8fd22156 100644 --- a/drivers/net/bnxt/bnxt_txr.c +++ b/drivers/net/bnxt/bnxt_txr.c @@ -373,8 +373,9 @@ static int bnxt_handle_tx_cp(struct bnxt_tx_queue *txq) return nb_tx_pkts; } -uint16_t bnxt_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, - uint16_t nb_pkts) +uint16_t bnxt_xmit_pkts(void *tx_queue, + struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) { struct bnxt_tx_queue *txq = tx_queue; uint16_t nb_tx_pkts = 0; diff --git a/drivers/net/bnxt/bnxt_txr.h b/drivers/net/bnxt/bnxt_txr.h 
index 7f3c7cdb0..33cdea5f6 100644 --- a/drivers/net/bnxt/bnxt_txr.h +++ b/drivers/net/bnxt/bnxt_txr.h @@ -30,7 +30,7 @@ struct bnxt_tx_ring_info { }; struct bnxt_sw_tx_bd { - struct rte_mbuf *mbuf; /* mbuf associated with TX descriptor */ + struct rte_mbuf *mbuf; uint8_t is_gso; unsigned short nr_bds; }; @@ -38,8 +38,10 @@ struct bnxt_sw_tx_bd { void bnxt_free_tx_rings(struct bnxt *bp); int bnxt_init_one_tx_ring(struct bnxt_tx_queue *txq); int bnxt_init_tx_ring_struct(struct bnxt_tx_queue *txq, unsigned int socket_id); -uint16_t bnxt_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, - uint16_t nb_pkts); + +uint16_t bnxt_xmit_pkts(void *tx_queue, + struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); int bnxt_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id); int bnxt_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id); @@ -63,7 +65,6 @@ int bnxt_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id); PKT_TX_OUTER_IP_CKSUM) #define PKT_TX_TCP_UDP_CKSUM (PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM) - #define TX_BD_FLG_TIP_IP_TCP_UDP_CHKSUM (TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM | \ TX_BD_LONG_LFLAGS_T_IP_CHKSUM | \ TX_BD_LONG_LFLAGS_IP_CHKSUM) From patchwork Tue Jun 19 21:30:46 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 41291 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 08C2B1B3A3; Tue, 19 Jun 2018 23:31:55 +0200 (CEST) Received: from rnd-relay.smtp.broadcom.com (rnd-relay.smtp.broadcom.com [192.19.229.170]) by dpdk.org (Postfix) with ESMTP id E47AB1B05A for ; Tue, 19 Jun 2018 23:31:09 +0200 (CEST) Received: from nis-sj1-27.broadcom.com (nis-sj1-27.lvn.broadcom.net [10.75.144.136]) by rnd-relay.smtp.broadcom.com (Postfix) with ESMTP id 60E3730C079; Tue, 19 Jun 2018 14:31:07 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.10.3 rnd-relay.smtp.broadcom.com 60E3730C079 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=broadcom.com; s=dkimrelay; t=1529443867; bh=7tJ+Icai6gOmDyX6aiEMueobmNKwOpO53AvmyT8xqeM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=gfAueAqN9nm09t47iAZA+/5kKsSgHp2rCtzzSQNNIc4SGo9cw4ann4av38tqzsOID QO+POmU1rUXZ9mTlOGH+Tw81m4Hyh6mPCeqVu46YG4z5/0iBmJWQAsZBs2Oaixo3Ti AsYNyvLhgCz4CDDS92ZF2A0VxQr6ICCJC0wJnvvg= Received: from C02VPB22HTD6.dhcp.broadcom.net (c02vpb22htd6.dhcp.broadcom.net [10.136.50.120]) by nis-sj1-27.broadcom.com (Postfix) with ESMTP id 3ED53AC0768; Tue, 19 Jun 2018 14:31:07 -0700 (PDT) From: Ajit Khaparde To: dev@dpdk.org Cc: ferruh.yigit@intel.com, Scott Branden Date: Tue, 19 Jun 2018 14:30:46 -0700 Message-Id: <20180619213058.12273-20-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.15.1 (Apple Git-101) In-Reply-To: <20180619213058.12273-1-ajit.khaparde@broadcom.com> References: <20180619213058.12273-1-ajit.khaparde@broadcom.com> Subject: [dpdk-dev] [PATCH 19/31] net/bnxt: code cleanup style of bnxt ring X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Scott Branden Cleanup alignment, brackets, debug string style of bnxt_ring Signed-off-by: Scott Branden Signed-off-by: Ajit Khaparde --- drivers/net/bnxt/bnxt_ring.c | 79 ++++++++++++++++++++++++++------------------ 
drivers/net/bnxt/bnxt_ring.h | 40 +++++++++++----------- 2 files changed, 68 insertions(+), 51 deletions(-) diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c index fcbd6bc6e..03a5381a3 100644 --- a/drivers/net/bnxt/bnxt_ring.c +++ b/drivers/net/bnxt/bnxt_ring.c @@ -64,10 +64,10 @@ int bnxt_init_ring_grps(struct bnxt *bp) * rx bd ring - Only non-zero length if rx_ring_info is not NULL */ int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx, - struct bnxt_tx_queue *txq, - struct bnxt_rx_queue *rxq, - struct bnxt_cp_ring_info *cp_ring_info, - const char *suffix) + struct bnxt_tx_queue *txq, + struct bnxt_rx_queue *rxq, + struct bnxt_cp_ring_info *cp_ring_info, + const char *suffix) { struct bnxt_ring *cp_ring = cp_ring_info->cp_ring_struct; struct bnxt_rx_ring_info *rx_ring_info = rxq ? rxq->rx_ring : NULL; @@ -90,20 +90,24 @@ int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx, int tx_vmem_start = cp_vmem_start + cp_vmem_len; int tx_vmem_len = - tx_ring_info ? RTE_CACHE_LINE_ROUNDUP(tx_ring_info-> - tx_ring_struct->vmem_size) : 0; + tx_ring_info ? + RTE_CACHE_LINE_ROUNDUP(tx_ring_info->tx_ring_struct->vmem_size) + : 0; int rx_vmem_start = tx_vmem_start + tx_vmem_len; int rx_vmem_len = rx_ring_info ? - RTE_CACHE_LINE_ROUNDUP(rx_ring_info-> - rx_ring_struct->vmem_size) : 0; + RTE_CACHE_LINE_ROUNDUP(rx_ring_info->rx_ring_struct->vmem_size) + : 0; + int ag_vmem_start = 0; int ag_vmem_len = 0; int cp_ring_start = 0; ag_vmem_start = rx_vmem_start + rx_vmem_len; - ag_vmem_len = rx_ring_info ? RTE_CACHE_LINE_ROUNDUP( - rx_ring_info->ag_ring_struct->vmem_size) : 0; + ag_vmem_len = rx_ring_info ? + RTE_CACHE_LINE_ROUNDUP(rx_ring_info->ag_ring_struct->vmem_size) + : 0; + cp_ring_start = ag_vmem_start + ag_vmem_len; int cp_ring_len = RTE_CACHE_LINE_ROUNDUP(cp_ring->ring_size * @@ -124,9 +128,11 @@ int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx, int ag_bitmap_start = ag_ring_start + ag_ring_len; int ag_bitmap_len = rx_ring_info ? - RTE_CACHE_LINE_ROUNDUP(rte_bitmap_get_memory_footprint( - rx_ring_info->rx_ring_struct->ring_size * - AGG_RING_SIZE_FACTOR)) : 0; + RTE_CACHE_LINE_ROUNDUP + (rte_bitmap_get_memory_footprint + (rx_ring_info->rx_ring_struct->ring_size * + AGG_RING_SIZE_FACTOR)) + : 0; int tpa_info_start = ag_bitmap_start + ag_bitmap_len; int tpa_info_len = rx_ring_info ? 
@@ -134,6 +140,7 @@ int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx, sizeof(struct bnxt_tpa_info)) : 0; int total_alloc_len = tpa_info_start; + if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) total_alloc_len += tpa_info_len; @@ -144,12 +151,13 @@ int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx, mz_name[RTE_MEMZONE_NAMESIZE - 1] = 0; mz = rte_memzone_lookup(mz_name); if (!mz) { - mz = rte_memzone_reserve_aligned(mz_name, total_alloc_len, - SOCKET_ID_ANY, - RTE_MEMZONE_2MB | - RTE_MEMZONE_SIZE_HINT_ONLY | - RTE_MEMZONE_IOVA_CONTIG, - getpagesize()); + mz = rte_memzone_reserve_aligned(mz_name, + total_alloc_len, + SOCKET_ID_ANY, + RTE_MEMZONE_2MB | + RTE_MEMZONE_SIZE_HINT_ONLY | + RTE_MEMZONE_IOVA_CONTIG, + getpagesize()); if (mz == NULL) return -ENOMEM; } @@ -165,7 +173,7 @@ int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx, mz_phys_addr = rte_mem_virt2iova(mz->addr); if (mz_phys_addr == 0) { PMD_DRV_LOG(ERR, - "unable to map ring address to physical memory\n"); + "unable to map ring addr to phys memory\n"); return -ENOMEM; } } @@ -440,10 +448,12 @@ int bnxt_alloc_hwrm_rings(struct bnxt *bp) goto err_out; } - rc = bnxt_hwrm_ring_alloc(bp, ring, - HWRM_RING_ALLOC_INPUT_RING_TYPE_RX, - map_idx, HWRM_NA_SIGNATURE, - cp_ring->fw_ring_id); + rc = bnxt_hwrm_ring_alloc(bp, + ring, + HWRM_RING_ALLOC_INPUT_RING_TYPE_RX, + map_idx, + HWRM_NA_SIGNATURE, + cp_ring->fw_ring_id); if (rc) goto err_out; PMD_DRV_LOG(DEBUG, "Alloc AGG Done!\n"); @@ -473,10 +483,13 @@ int bnxt_alloc_hwrm_rings(struct bnxt *bp) unsigned int idx = i + bp->rx_cp_nr_rings; /* Tx cmpl */ - rc = bnxt_hwrm_ring_alloc(bp, cp_ring, - HWRM_RING_ALLOC_INPUT_RING_TYPE_L2_CMPL, - idx, HWRM_NA_SIGNATURE, - HWRM_NA_SIGNATURE); + rc = bnxt_hwrm_ring_alloc + (bp, + cp_ring, + HWRM_RING_ALLOC_INPUT_RING_TYPE_L2_CMPL, + idx, + HWRM_NA_SIGNATURE, + HWRM_NA_SIGNATURE); if (rc) goto err_out; @@ -484,10 +497,12 @@ int bnxt_alloc_hwrm_rings(struct bnxt *bp) B_CP_DIS_DB(cpr, cpr->cp_raw_cons); /* Tx ring */ - rc = bnxt_hwrm_ring_alloc(bp, ring, - HWRM_RING_ALLOC_INPUT_RING_TYPE_TX, - idx, cpr->hw_stats_ctx_id, - cp_ring->fw_ring_id); + rc = bnxt_hwrm_ring_alloc(bp, + ring, + HWRM_RING_ALLOC_INPUT_RING_TYPE_TX, + idx, + cpr->hw_stats_ctx_id, + cp_ring->fw_ring_id); if (rc) goto err_out; diff --git a/drivers/net/bnxt/bnxt_ring.h b/drivers/net/bnxt/bnxt_ring.h index 1446d784f..9348bf2b2 100644 --- a/drivers/net/bnxt/bnxt_ring.h +++ b/drivers/net/bnxt/bnxt_ring.h @@ -10,17 +10,17 @@ #include -#define RING_NEXT(ring, idx) (((idx) + 1) & (ring)->ring_mask) - -#define DB_IDX_MASK 0xffffff -#define DB_IDX_VALID (0x1 << 26) -#define DB_IRQ_DIS (0x1 << 27) -#define DB_KEY_TX (0x0 << 28) -#define DB_KEY_RX (0x1 << 28) -#define DB_KEY_CP (0x2 << 28) -#define DB_KEY_ST (0x3 << 28) -#define DB_KEY_TX_PUSH (0x4 << 28) -#define DB_LONG_TX_PUSH (0x2 << 24) +#define RING_NEXT(ring, idx) (((idx) + 1) & (ring)->ring_mask) + +#define DB_IDX_MASK 0xffffff +#define DB_IDX_VALID (0x1 << 26) +#define DB_IRQ_DIS (0x1 << 27) +#define DB_KEY_TX (0x0 << 28) +#define DB_KEY_RX (0x1 << 28) +#define DB_KEY_CP (0x2 << 28) +#define DB_KEY_ST (0x3 << 28) +#define DB_KEY_TX_PUSH (0x4 << 28) +#define DB_LONG_TX_PUSH (0x2 << 24) #define DEFAULT_CP_RING_SIZE 256 #define DEFAULT_RX_RING_SIZE 256 @@ -31,12 +31,13 @@ #define AGG_RING_MULTIPLIER 2 /* These assume 4k pages */ -#define MAX_RX_DESC_CNT (8 * 1024) -#define MAX_TX_DESC_CNT (4 * 1024) -#define MAX_CP_DESC_CNT (16 * 1024) +#define MAX_RX_DESC_CNT (8 * 1024) +#define MAX_TX_DESC_CNT (4 * 1024) +#define MAX_CP_DESC_CNT (16 * 1024) 
#define INVALID_HW_RING_ID ((uint16_t)-1) -#define INVALID_STATS_CTX_ID ((uint16_t)-1) +#define INVALID_STATS_CTX_ID ((uint16_t)-1) +#define INVALID_RING_GRP_ID ((uint16_t)-1) struct bnxt_ring { void *bd; @@ -65,11 +66,12 @@ struct bnxt_rx_ring_info; struct bnxt_cp_ring_info; void bnxt_free_ring(struct bnxt_ring *ring); int bnxt_init_ring_grps(struct bnxt *bp); + int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx, - struct bnxt_tx_queue *txq, - struct bnxt_rx_queue *rxq, - struct bnxt_cp_ring_info *cp_ring_info, - const char *suffix); + struct bnxt_tx_queue *txq, + struct bnxt_rx_queue *rxq, + struct bnxt_cp_ring_info *cp_ring_info, + const char *suffix); int bnxt_alloc_hwrm_rx_ring(struct bnxt *bp, int queue_index); int bnxt_alloc_hwrm_rings(struct bnxt *bp); From patchwork Tue Jun 19 21:30:47 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 41293 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id C2B861B470; Tue, 19 Jun 2018 23:32:00 +0200 (CEST) Received: from rnd-relay.smtp.broadcom.com (rnd-relay.smtp.broadcom.com [192.19.229.170]) by dpdk.org (Postfix) with ESMTP id E27D41B057 for ; Tue, 19 Jun 2018 23:31:09 +0200 (CEST) Received: from nis-sj1-27.broadcom.com (nis-sj1-27.lvn.broadcom.net [10.75.144.136]) by rnd-relay.smtp.broadcom.com (Postfix) with ESMTP id 5AA4130C078; Tue, 19 Jun 2018 14:31:07 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.10.3 rnd-relay.smtp.broadcom.com 5AA4130C078 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=broadcom.com; s=dkimrelay; t=1529443867; bh=nEy+cIHiSi2v++Xzs6K6Pk1cJUxBuVzEIej0Yuqk/eE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=L1cz9c1cFewQfmL/TqEjoxN82XXah+YtPkxvHRwl16E6DhQIuq5h0N6CfQAqKvg32 5VZDMGQBz9hZviA2PSDX+z8I0KZpBuHDc7GeciFDMIvpkhVZglxGRPZG8z5mEnqO0k IRSSQe9ga0s6JZ5zVGCdDxCCQkEzUoRB9nRjXhIY= Received: from C02VPB22HTD6.dhcp.broadcom.net (c02vpb22htd6.dhcp.broadcom.net [10.136.50.120]) by nis-sj1-27.broadcom.com (Postfix) with ESMTP id 8BD7EAC0799; Tue, 19 Jun 2018 14:31:07 -0700 (PDT) From: Ajit Khaparde To: dev@dpdk.org Cc: ferruh.yigit@intel.com, Scott Branden Date: Tue, 19 Jun 2018 14:30:47 -0700 Message-Id: <20180619213058.12273-21-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.15.1 (Apple Git-101) In-Reply-To: <20180619213058.12273-1-ajit.khaparde@broadcom.com> References: <20180619213058.12273-1-ajit.khaparde@broadcom.com> Subject: [dpdk-dev] [PATCH 20/31] net/bnxt: code cleanup style of bnxt ethdev X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Scott Branden Cleanup alignment, brackets, debug string style of bnxt_ethdev Signed-off-by: Scott Branden Reviewed-by: Randy Schacher Reviewed-by: Ajit Khaparde --- drivers/net/bnxt/bnxt_ethdev.c | 204 ++++++++++++++++++++++------------------- 1 file changed, 112 insertions(+), 92 deletions(-) diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index d66a29758..6516aeedd 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -315,8 +315,9 @@ static int bnxt_init_chip(struct bnxt *bp) intr_vector = bp->eth_dev->data->nb_rx_queues; PMD_DRV_LOG(DEBUG, "intr_vector = %d\n", 
intr_vector); if (intr_vector > bp->rx_cp_nr_rings) { - PMD_DRV_LOG(ERR, "At most %d intr queues supported", - bp->rx_cp_nr_rings); + PMD_DRV_LOG(ERR, + "At most %d intr queues supported\n", + bp->rx_cp_nr_rings); return -ENOTSUP; } if (rte_intr_efd_enable(intr_handle, intr_vector)) @@ -329,14 +330,15 @@ static int bnxt_init_chip(struct bnxt *bp) bp->eth_dev->data->nb_rx_queues * sizeof(int), 0); if (intr_handle->intr_vec == NULL) { - PMD_DRV_LOG(ERR, "Failed to allocate %d rx_queues" - " intr_vec", bp->eth_dev->data->nb_rx_queues); + PMD_DRV_LOG(ERR, + "Failed to allocate %d rx_queues intr_vec\n", + bp->eth_dev->data->nb_rx_queues); return -ENOMEM; } - PMD_DRV_LOG(DEBUG, "intr_handle->intr_vec = %p " - "intr_handle->nb_efd = %d intr_handle->max_intr = %d\n", - intr_handle->intr_vec, intr_handle->nb_efd, - intr_handle->max_intr); + PMD_DRV_LOG(DEBUG, + "intr_handle->intr_vec = %p intr_handle->nb_efd = %d intr_handle->max_intr = %d\n", + intr_handle->intr_vec, intr_handle->nb_efd, + intr_handle->max_intr); } for (queue_id = 0; queue_id < bp->eth_dev->data->nb_rx_queues; @@ -404,7 +406,7 @@ static int bnxt_init_nic(struct bnxt *bp) */ static void bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev, - struct rte_eth_dev_info *dev_info) + struct rte_eth_dev_info *dev_info) { struct bnxt *bp = (struct bnxt *)eth_dev->data->dev_private; uint16_t max_vnics, i, j, vpool, vrxq; @@ -706,15 +708,22 @@ static void bnxt_mac_addr_remove_op(struct rte_eth_dev *eth_dev, while (filter) { temp_filter = STAILQ_NEXT(filter, next); if (filter->mac_index == index) { - STAILQ_REMOVE(&vnic->filter, filter, - bnxt_filter_info, next); + STAILQ_REMOVE(&vnic->filter, + filter, + bnxt_filter_info, + next); + bnxt_hwrm_clear_l2_filter(bp, filter); filter->mac_index = INVALID_MAC_INDEX; - memset(&filter->l2_addr, 0, + + memset(&filter->l2_addr, + 0, ETHER_ADDR_LEN); - STAILQ_INSERT_TAIL( - &bp->free_filter_list, - filter, next); + + STAILQ_INSERT_TAIL + (&bp->free_filter_list, + filter, + next); } filter = temp_filter; } @@ -785,9 +794,10 @@ int bnxt_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_complete) out: /* Timed out or success */ if (new.link_status != eth_dev->data->dev_link.link_status || - new.link_speed != eth_dev->data->dev_link.link_speed) { - memcpy(ð_dev->data->dev_link, &new, - sizeof(struct rte_eth_link)); + new.link_speed != eth_dev->data->dev_link.link_speed) { + memcpy(ð_dev->data->dev_link, + &new, + sizeof(struct rte_eth_link)); _rte_eth_dev_callback_process(eth_dev, RTE_ETH_EVENT_INTR_LSC, @@ -856,8 +866,8 @@ static void bnxt_allmulticast_disable_op(struct rte_eth_dev *eth_dev) } static int bnxt_reta_update_op(struct rte_eth_dev *eth_dev, - struct rte_eth_rss_reta_entry64 *reta_conf, - uint16_t reta_size) + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) { struct bnxt *bp = (struct bnxt *)eth_dev->data->dev_private; struct rte_eth_conf *dev_conf = &bp->eth_dev->data->dev_conf; @@ -868,9 +878,9 @@ static int bnxt_reta_update_op(struct rte_eth_dev *eth_dev, return -EINVAL; if (reta_size != HW_HASH_INDEX_SIZE) { - PMD_DRV_LOG(ERR, "The configured hash table lookup size " - "(%d) must equal the size supported by the hardware " - "(%d)\n", reta_size, HW_HASH_INDEX_SIZE); + PMD_DRV_LOG(ERR, + "Configured hash table lookup size (%d) != (%d)\n", + reta_size, HW_HASH_INDEX_SIZE); return -EINVAL; } /* Update the RSS VNIC(s) */ @@ -900,9 +910,9 @@ static int bnxt_reta_query_op(struct rte_eth_dev *eth_dev, return -EINVAL; if (reta_size != HW_HASH_INDEX_SIZE) { - PMD_DRV_LOG(ERR, "The 
configured hash table lookup size " - "(%d) must equal the size supported by the hardware " - "(%d)\n", reta_size, HW_HASH_INDEX_SIZE); + PMD_DRV_LOG(ERR, + "Configured hash table lookup size (%d) != (%d)\n", + reta_size, HW_HASH_INDEX_SIZE); return -EINVAL; } /* EW - need to revisit here copying from uint64_t to uint16_t */ @@ -1021,8 +1031,8 @@ static int bnxt_rss_hash_conf_get_op(struct rte_eth_dev *eth_dev, } if (hash_types) { PMD_DRV_LOG(ERR, - "Unknwon RSS config from firmware (%08x), RSS disabled", - vnic->hash_type); + "Unknown RSS config (%08x), RSS disabled\n", + vnic->hash_type); return -ENOTSUP; } } else { @@ -1032,7 +1042,7 @@ static int bnxt_rss_hash_conf_get_op(struct rte_eth_dev *eth_dev, } static int bnxt_flow_ctrl_get_op(struct rte_eth_dev *dev, - struct rte_eth_fc_conf *fc_conf) + struct rte_eth_fc_conf *fc_conf) { struct bnxt *bp = (struct bnxt *)dev->data->dev_private; struct rte_eth_link link_info; @@ -1064,7 +1074,7 @@ static int bnxt_flow_ctrl_get_op(struct rte_eth_dev *dev, } static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev, - struct rte_eth_fc_conf *fc_conf) + struct rte_eth_fc_conf *fc_conf) { struct bnxt *bp = (struct bnxt *)dev->data->dev_private; @@ -1120,7 +1130,7 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev, /* Add UDP tunneling port */ static int bnxt_udp_tunnel_port_add_op(struct rte_eth_dev *eth_dev, - struct rte_eth_udp_tunnel *udp_tunnel) + struct rte_eth_udp_tunnel *udp_tunnel) { struct bnxt *bp = (struct bnxt *)eth_dev->data->dev_private; uint16_t tunnel_type = 0; @@ -1168,7 +1178,7 @@ bnxt_udp_tunnel_port_add_op(struct rte_eth_dev *eth_dev, static int bnxt_udp_tunnel_port_del_op(struct rte_eth_dev *eth_dev, - struct rte_eth_udp_tunnel *udp_tunnel) + struct rte_eth_udp_tunnel *udp_tunnel) { struct bnxt *bp = (struct bnxt *)eth_dev->data->dev_private; uint16_t tunnel_type = 0; @@ -1256,9 +1266,10 @@ static int bnxt_del_vlan_filter(struct bnxt *bp, uint16_t vlan_id) STAILQ_REMOVE(&vnic->filter, filter, bnxt_filter_info, next); bnxt_hwrm_clear_l2_filter(bp, filter); - STAILQ_INSERT_TAIL( - &bp->free_filter_list, - filter, next); + STAILQ_INSERT_TAIL + (&bp->free_filter_list, + filter, + next); /* * Need to examine to see if the MAC @@ -1281,9 +1292,10 @@ static int bnxt_del_vlan_filter(struct bnxt *bp, uint16_t vlan_id) memcpy(new_filter->l2_addr, filter->l2_addr, ETHER_ADDR_LEN); /* MAC only filter */ - rc = bnxt_hwrm_set_l2_filter(bp, - vnic->fw_vnic_id, - new_filter); + rc = bnxt_hwrm_set_l2_filter + (bp, + vnic->fw_vnic_id, + new_filter); if (rc) goto exit; PMD_DRV_LOG(INFO, @@ -1335,9 +1347,10 @@ static int bnxt_add_vlan_filter(struct bnxt *bp, uint16_t vlan_id) bnxt_filter_info, next); bnxt_hwrm_clear_l2_filter(bp, filter); filter->l2_ovlan = 0; - STAILQ_INSERT_TAIL( - &bp->free_filter_list, - filter, next); + STAILQ_INSERT_TAIL + (&bp->free_filter_list, + filter, + next); } new_filter = bnxt_alloc_filter(bp); if (!new_filter) { @@ -1405,6 +1418,7 @@ bnxt_vlan_offload_set_op(struct rte_eth_dev *dev, int mask) /* Enable or disable VLAN stripping */ for (i = 0; i < bp->nr_vnics; i++) { struct bnxt_vnic_info *vnic = &bp->vnic_info[i]; + if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP) vnic->vlan_strip = true; else @@ -1460,8 +1474,8 @@ bnxt_set_default_mac_addr_op(struct rte_eth_dev *dev, struct ether_addr *addr) static int bnxt_dev_set_mc_addr_list_op(struct rte_eth_dev *eth_dev, - struct ether_addr *mc_addr_set, - uint32_t nb_mc_addr) + struct ether_addr *mc_addr_set, + uint32_t nb_mc_addr) { struct bnxt *bp = (struct bnxt 
*)eth_dev->data->dev_private; char *mc_addr_list = (char *)mc_addr_set; @@ -1497,8 +1511,9 @@ bnxt_fw_version_get(struct rte_eth_dev *dev, char *fw_version, size_t fw_size) uint8_t fw_updt = (bp->fw_ver >> 8) & 0xff; int ret; - ret = snprintf(fw_version, fw_size, "%d.%d.%d", - fw_major, fw_minor, fw_updt); + ret = snprintf(fw_version, fw_size, + "%d.%d.%d", + fw_major, fw_minor, fw_updt); ret += 1; /* add the size of '\0' */ if (fw_size < (uint32_t)ret) @@ -1508,8 +1523,9 @@ bnxt_fw_version_get(struct rte_eth_dev *dev, char *fw_version, size_t fw_size) } static void -bnxt_rxq_info_get_op(struct rte_eth_dev *dev, uint16_t queue_id, - struct rte_eth_rxq_info *qinfo) +bnxt_rxq_info_get_op(struct rte_eth_dev *dev, + uint16_t queue_id, + struct rte_eth_rxq_info *qinfo) { struct bnxt_rx_queue *rxq; @@ -1525,8 +1541,9 @@ bnxt_rxq_info_get_op(struct rte_eth_dev *dev, uint16_t queue_id, } static void -bnxt_txq_info_get_op(struct rte_eth_dev *dev, uint16_t queue_id, - struct rte_eth_txq_info *qinfo) +bnxt_txq_info_get_op(struct rte_eth_dev *dev, + uint16_t queue_id, + struct rte_eth_txq_info *qinfo) { struct bnxt_tx_queue *txq; @@ -1561,7 +1578,6 @@ static int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu) return -EINVAL; } - if (new_mtu > ETHER_MTU) { bp->flags |= BNXT_FLAG_JUMBO; bp->eth_dev->data->dev_conf.rxmode.offloads |= @@ -1655,17 +1671,16 @@ bnxt_rx_queue_count_op(struct rte_eth_dev *dev, uint16_t rx_queue_id) valid = FLIP_VALID(cons, cpr->cp_ring_struct->ring_mask, valid); cmp_type = CMP_TYPE(rxcmp); if (cmp_type == RX_TPA_END_CMPL_TYPE_RX_TPA_END) { - cmp = (rte_le_to_cpu_32( - ((struct rx_tpa_end_cmpl *) - (rxcmp))->agg_bufs_v1) & - RX_TPA_END_CMPL_AGG_BUFS_MASK) >> - RX_TPA_END_CMPL_AGG_BUFS_SFT; + cmp = (rte_le_to_cpu_32 + (((struct rx_tpa_end_cmpl *) + (rxcmp))->agg_bufs_v1) + & RX_TPA_END_CMPL_AGG_BUFS_MASK) + >> RX_TPA_END_CMPL_AGG_BUFS_SFT; desc++; } else if (cmp_type == 0x11) { desc++; - cmp = (rxcmp->agg_bufs_v1 & - RX_PKT_CMPL_AGG_BUFS_MASK) >> - RX_PKT_CMPL_AGG_BUFS_SFT; + cmp = (rxcmp->agg_bufs_v1 & RX_PKT_CMPL_AGG_BUFS_MASK) + >> RX_PKT_CMPL_AGG_BUFS_SFT; } else { cmp = 1; } @@ -1710,7 +1725,6 @@ bnxt_rx_descriptor_status_op(void *rx_queue, uint16_t offset) if (rx_buf->mbuf == NULL) return RTE_ETH_RX_DESC_UNAVAIL; - return RTE_ETH_RX_DESC_AVAIL; } @@ -2882,16 +2896,20 @@ bnxt_get_eeprom_length_op(struct rte_eth_dev *dev) static int bnxt_get_eeprom_op(struct rte_eth_dev *dev, - struct rte_dev_eeprom_info *in_eeprom) + struct rte_dev_eeprom_info *in_eeprom) { struct bnxt *bp = (struct bnxt *)dev->data->dev_private; uint32_t index; uint32_t offset; - PMD_DRV_LOG(INFO, "%04x:%02x:%02x:%02x in_eeprom->offset = %d " - "len = %d\n", bp->pdev->addr.domain, - bp->pdev->addr.bus, bp->pdev->addr.devid, - bp->pdev->addr.function, in_eeprom->offset, in_eeprom->length); + PMD_DRV_LOG(INFO, + "%04x:%02x:%02x:%02x in_eeprom->offset = %d len = %d\n", + bp->pdev->addr.domain, + bp->pdev->addr.bus, + bp->pdev->addr.devid, + bp->pdev->addr.function, + in_eeprom->offset, + in_eeprom->length); if (in_eeprom->offset == 0) /* special offset value to get directory */ return bnxt_get_nvram_directory(bp, in_eeprom->length, @@ -2953,16 +2971,17 @@ static bool bnxt_dir_type_is_executable(uint16_t dir_type) static int bnxt_set_eeprom_op(struct rte_eth_dev *dev, - struct rte_dev_eeprom_info *in_eeprom) + struct rte_dev_eeprom_info *in_eeprom) { struct bnxt *bp = (struct bnxt *)dev->data->dev_private; uint8_t index, dir_op; uint16_t type, ext, ordinal, attr; - PMD_DRV_LOG(INFO, 
"%04x:%02x:%02x:%02x in_eeprom->offset = %d " - "len = %d\n", bp->pdev->addr.domain, - bp->pdev->addr.bus, bp->pdev->addr.devid, - bp->pdev->addr.function, in_eeprom->offset, in_eeprom->length); + PMD_DRV_LOG(INFO, + "%04x:%02x:%02x:%02x in_eeprom->offset = %d len = %d\n", + bp->pdev->addr.domain, bp->pdev->addr.bus, + bp->pdev->addr.devid, bp->pdev->addr.function, + in_eeprom->offset, in_eeprom->length); if (!BNXT_PF(bp)) { PMD_DRV_LOG(ERR, "NVM write not supported from a VF\n"); @@ -3195,14 +3214,14 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev) pci_dev->addr.function, "rx_port_stats"); mz_name[RTE_MEMZONE_NAMESIZE - 1] = 0; mz = rte_memzone_lookup(mz_name); - total_alloc_len = RTE_CACHE_LINE_ROUNDUP( - sizeof(struct rx_port_stats) + 512); + total_alloc_len = RTE_CACHE_LINE_ROUNDUP + (sizeof(struct rx_port_stats) + 512); if (!mz) { mz = rte_memzone_reserve(mz_name, total_alloc_len, - SOCKET_ID_ANY, - RTE_MEMZONE_2MB | - RTE_MEMZONE_SIZE_HINT_ONLY | - RTE_MEMZONE_IOVA_CONTIG); + SOCKET_ID_ANY, + RTE_MEMZONE_2MB | + RTE_MEMZONE_SIZE_HINT_ONLY | + RTE_MEMZONE_IOVA_CONTIG); if (mz == NULL) return -ENOMEM; } @@ -3216,7 +3235,7 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev) mz_phys_addr = rte_mem_virt2iova(mz->addr); if (mz_phys_addr == 0) { PMD_DRV_LOG(ERR, - "unable to map address to physical memory\n"); + "unable to map addr to phys memory\n"); return -ENOMEM; } } @@ -3231,15 +3250,15 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev) pci_dev->addr.function, "tx_port_stats"); mz_name[RTE_MEMZONE_NAMESIZE - 1] = 0; mz = rte_memzone_lookup(mz_name); - total_alloc_len = RTE_CACHE_LINE_ROUNDUP( - sizeof(struct tx_port_stats) + 512); + total_alloc_len = RTE_CACHE_LINE_ROUNDUP + (sizeof(struct tx_port_stats) + 512); if (!mz) { mz = rte_memzone_reserve(mz_name, - total_alloc_len, - SOCKET_ID_ANY, - RTE_MEMZONE_2MB | - RTE_MEMZONE_SIZE_HINT_ONLY | - RTE_MEMZONE_IOVA_CONTIG); + total_alloc_len, + SOCKET_ID_ANY, + RTE_MEMZONE_2MB | + RTE_MEMZONE_SIZE_HINT_ONLY | + RTE_MEMZONE_IOVA_CONTIG); if (mz == NULL) return -ENOMEM; } @@ -3253,7 +3272,7 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev) mz_phys_addr = rte_mem_virt2iova(mz->addr); if (mz_phys_addr == 0) { PMD_DRV_LOG(ERR, - "unable to map address to physical memory\n"); + "unable to map to phys memory\n"); return -ENOMEM; } } @@ -3298,10 +3317,11 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev) goto error_free; } eth_dev->data->mac_addrs = rte_zmalloc("bnxt_mac_addr_tbl", - ETHER_ADDR_LEN * bp->max_l2_ctx, 0); + ETHER_ADDR_LEN * bp->max_l2_ctx, + 0); if (eth_dev->data->mac_addrs == NULL) { PMD_DRV_LOG(ERR, - "Failed to alloc %u bytes needed to store MAC addr tbl", + "Failed to alloc %u bytes to store MAC addr tbl\n", ETHER_ADDR_LEN * bp->max_l2_ctx); rc = -ENOMEM; goto error_free; @@ -3328,7 +3348,8 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev) } bp->grp_info = rte_zmalloc("bnxt_grp_info", - sizeof(*bp->grp_info) * bp->max_ring_grps, 0); + sizeof(*bp->grp_info) * bp->max_ring_grps, + 0); if (!bp->grp_info) { PMD_DRV_LOG(ERR, "Failed to alloc %zu bytes to store group info table\n", @@ -3339,8 +3360,8 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev) /* Forward all requests if firmware is new enough */ if (((bp->fw_ver >= ((20 << 24) | (6 << 16) | (100 << 8))) && - (bp->fw_ver < ((20 << 24) | (7 << 16)))) || - ((bp->fw_ver >= ((20 << 24) | (8 << 16))))) { + (bp->fw_ver < ((20 << 24) | (7 << 16)))) || + (bp->fw_ver >= ((20 << 24) | (8 << 16)))) { memset(bp->pf.vf_req_fwd, 0xff, sizeof(bp->pf.vf_req_fwd)); } else { PMD_DRV_LOG(WARNING, @@ -3363,8 +3384,7 @@ 
bnxt_dev_init(struct rte_eth_dev *eth_dev) ALLOW_FUNC(HWRM_VNIC_TPA_CFG); rc = bnxt_hwrm_func_driver_register(bp); if (rc) { - PMD_DRV_LOG(ERR, - "Failed to register driver"); + PMD_DRV_LOG(ERR, "Failed to register driver\n"); rc = -EBUSY; goto error_free; } @@ -3477,7 +3497,7 @@ bnxt_dev_uninit(struct rte_eth_dev *eth_dev) } static int bnxt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, - struct rte_pci_device *pci_dev) + struct rte_pci_device *pci_dev) { return rte_eth_dev_pci_generic_probe(pci_dev, sizeof(struct bnxt), bnxt_dev_init); From patchwork Tue Jun 19 21:30:48 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 41292 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 99EE91B469; Tue, 19 Jun 2018 23:31:58 +0200 (CEST) Received: from rnd-relay.smtp.broadcom.com (rnd-relay.smtp.broadcom.com [192.19.229.170]) by dpdk.org (Postfix) with ESMTP id EA61A1B061 for ; Tue, 19 Jun 2018 23:31:09 +0200 (CEST) Received: from nis-sj1-27.broadcom.com (nis-sj1-27.lvn.broadcom.net [10.75.144.136]) by rnd-relay.smtp.broadcom.com (Postfix) with ESMTP id 01FCC30C042; Tue, 19 Jun 2018 14:31:08 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.10.3 rnd-relay.smtp.broadcom.com 01FCC30C042 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=broadcom.com; s=dkimrelay; t=1529443868; bh=wwPrL5O53aZxeLA6C8sk/3nDx2o5FGsRHNm76wxiwew=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Q9jvqT9txFtkEwD2R/wKJCdPTc1kDeoPQXQwxJBZM1cENFyGpp5jjP1+mSkRMz3G5 Nf+sQicWTuEbQkc6d1r1/Xdk8N37txA4PfO590mcBYw4dSK8CaWVQ4m4n5RBPwIboX Bmt+DIL6lTQg7WHiaXWtDxwbMA12q10fa5zyutg8= Received: from C02VPB22HTD6.dhcp.broadcom.net (c02vpb22htd6.dhcp.broadcom.net [10.136.50.120]) by nis-sj1-27.broadcom.com (Postfix) with ESMTP id D9BD1AC06AD; Tue, 19 Jun 2018 14:31:07 -0700 (PDT) From: Ajit Khaparde To: dev@dpdk.org Cc: ferruh.yigit@intel.com, Scott Branden Date: Tue, 19 Jun 2018 14:30:48 -0700 Message-Id: <20180619213058.12273-22-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.15.1 (Apple Git-101) In-Reply-To: <20180619213058.12273-1-ajit.khaparde@broadcom.com> References: <20180619213058.12273-1-ajit.khaparde@broadcom.com> Subject: [dpdk-dev] [PATCH 21/31] net/bnxt: move function check zero bytes to bnxt util.h X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Scott Branden Move check_zero_bytes into new bnxt_util.h file. 
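For illustration, a minimal standalone sketch of the helper's contract as it appears in the diff below: it returns 1 when every byte in the buffer is 0x00 and 0 otherwise. The mask_is_empty() wrapper and the main() driver here are hypothetical and not part of the driver.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#include "bnxt_util.h"	/* declares bnxt_check_zero_bytes() */

/* Hypothetical wrapper: treat an all-zero 16-byte IPv6 address mask as
 * "empty", mirroring how the flow-parsing code calls the helper on
 * ipv6_mask->hdr.src_addr before copying a mask into a filter.
 */
static int mask_is_empty(const uint8_t mask[16])
{
	return bnxt_check_zero_bytes(mask, 16);
}

int main(void)
{
	uint8_t zero_mask[16] = { 0 };
	uint8_t full_mask[16];

	memset(full_mask, 0xff, sizeof(full_mask));
	/* Prints "1 0": the zero mask is empty, the all-ones mask is not. */
	printf("%d %d\n", mask_is_empty(zero_mask), mask_is_empty(full_mask));
	return 0;
}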
Signed-off-by: Scott Branden Reviewed-by: Ajit Khaparde --- drivers/net/bnxt/Makefile | 1 + drivers/net/bnxt/bnxt_ethdev.c | 1 + drivers/net/bnxt/bnxt_filter.c | 9 --------- drivers/net/bnxt/bnxt_filter.h | 1 - drivers/net/bnxt/bnxt_util.c | 18 ++++++++++++++++++ drivers/net/bnxt/bnxt_util.h | 11 +++++++++++ 6 files changed, 31 insertions(+), 10 deletions(-) create mode 100644 drivers/net/bnxt/bnxt_util.c create mode 100644 drivers/net/bnxt/bnxt_util.h diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile index fd0cb5235..80db03ea8 100644 --- a/drivers/net/bnxt/Makefile +++ b/drivers/net/bnxt/Makefile @@ -38,6 +38,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_txq.c SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_txr.c SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_vnic.c SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_irq.c +SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_util.c SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += rte_pmd_bnxt.c # diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index 6516aeedd..9cfa43778 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -26,6 +26,7 @@ #include "bnxt_vnic.h" #include "hsi_struct_def_dpdk.h" #include "bnxt_nvm_defs.h" +#include "bnxt_util.h" #define DRV_MODULE_NAME "bnxt" static const char bnxt_version[] = diff --git a/drivers/net/bnxt/bnxt_filter.c b/drivers/net/bnxt/bnxt_filter.c index e36da9977..72989ab67 100644 --- a/drivers/net/bnxt/bnxt_filter.c +++ b/drivers/net/bnxt/bnxt_filter.c @@ -231,15 +231,6 @@ nxt_non_void_action(const struct rte_flow_action *cur) } } -int bnxt_check_zero_bytes(const uint8_t *bytes, int len) -{ - int i; - for (i = 0; i < len; i++) - if (bytes[i] != 0x00) - return 0; - return 1; -} - static int bnxt_filter_type_check(const struct rte_flow_item pattern[], struct rte_flow_error *error __rte_unused) diff --git a/drivers/net/bnxt/bnxt_filter.h b/drivers/net/bnxt/bnxt_filter.h index d27be7032..a1ecfb19d 100644 --- a/drivers/net/bnxt/bnxt_filter.h +++ b/drivers/net/bnxt/bnxt_filter.h @@ -69,7 +69,6 @@ struct bnxt_filter_info *bnxt_get_unused_filter(struct bnxt *bp); void bnxt_free_filter(struct bnxt *bp, struct bnxt_filter_info *filter); struct bnxt_filter_info *bnxt_get_l2_filter(struct bnxt *bp, struct bnxt_filter_info *nf, struct bnxt_vnic_info *vnic); -int bnxt_check_zero_bytes(const uint8_t *bytes, int len); #define NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_MACADDR \ HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_SRC_MACADDR diff --git a/drivers/net/bnxt/bnxt_util.c b/drivers/net/bnxt/bnxt_util.c new file mode 100644 index 000000000..7d3342719 --- /dev/null +++ b/drivers/net/bnxt/bnxt_util.c @@ -0,0 +1,18 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2014-2018 Broadcom + * All rights reserved. + */ + +#include + +#include "bnxt_util.h" + +int bnxt_check_zero_bytes(const uint8_t *bytes, int len) +{ + int i; + + for (i = 0; i < len; i++) + if (bytes[i] != 0x00) + return 0; + return 1; +} diff --git a/drivers/net/bnxt/bnxt_util.h b/drivers/net/bnxt/bnxt_util.h new file mode 100644 index 000000000..2378833cc --- /dev/null +++ b/drivers/net/bnxt/bnxt_util.h @@ -0,0 +1,11 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2014-2018 Broadcom + * All rights reserved. 
+ */ + +#ifndef _BNXT_UTIL_H_ +#define _BNXT_UTIL_H_ + +int bnxt_check_zero_bytes(const uint8_t *bytes, int len); + +#endif /* _BNXT_UTIL_H_ */ From patchwork Tue Jun 19 21:30:49 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 41294 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 666CE1B48B; Tue, 19 Jun 2018 23:32:06 +0200 (CEST) Received: from rnd-relay.smtp.broadcom.com (rnd-relay.smtp.broadcom.com [192.19.229.170]) by dpdk.org (Postfix) with ESMTP id EEC191B063 for ; Tue, 19 Jun 2018 23:31:09 +0200 (CEST) Received: from nis-sj1-27.broadcom.com (nis-sj1-27.lvn.broadcom.net [10.75.144.136]) by rnd-relay.smtp.broadcom.com (Postfix) with ESMTP id 66A2630C076; Tue, 19 Jun 2018 14:31:08 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.10.3 rnd-relay.smtp.broadcom.com 66A2630C076 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=broadcom.com; s=dkimrelay; t=1529443868; bh=8ZZj3fGVnW1squg5M/8imxpBpX00ws4XudpTftZBqqc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=rNxHQ0KDTEhcoWZGorNhyq5uo7O0GdOh1Cxrncp1JZoe7gizpVJ3KM8oAnWDOr6dF QQpFCRY0WHFnmgwU7Yb86bvhyjzUH9OvuYIdy5Ws6husCfkPOmeFFwQNa1FBVnNowE 1kG8C9iiFcp/E/B6n7yHTaF1fi2OpX+FnUAREcmM= Received: from C02VPB22HTD6.dhcp.broadcom.net (c02vpb22htd6.dhcp.broadcom.net [10.136.50.120]) by nis-sj1-27.broadcom.com (Postfix) with ESMTP id 2E106AC0768; Tue, 19 Jun 2018 14:31:08 -0700 (PDT) From: Ajit Khaparde To: dev@dpdk.org Cc: ferruh.yigit@intel.com, Michael Wildt , Scott Branden Date: Tue, 19 Jun 2018 14:30:49 -0700 Message-Id: <20180619213058.12273-23-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.15.1 (Apple Git-101) In-Reply-To: <20180619213058.12273-1-ajit.khaparde@broadcom.com> References: <20180619213058.12273-1-ajit.khaparde@broadcom.com> Subject: [dpdk-dev] [PATCH 22/31] net/bnxt: filter/flow refactoring X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" In preparation of more rte_flow support it has been decided to separate out filter and flow into their own files. Functionally the same. 
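For context, a minimal sketch of the kind of rte_flow rule this validation/parsing code handles: an ingress-only rule with spec and mask supplied for every pattern item, redirecting matches to an Rx queue. The helper name, port id, queue index and address below are illustrative assumptions, not taken from the patch, and the sketch assumes the port is not configured for RSS (such flows are rejected by this code).

#include <stdint.h>
#include <rte_byteorder.h>
#include <rte_flow.h>

/* Illustrative only: validate and create an ingress IPv4 flow that
 * redirects packets for one destination address to Rx queue 1. The
 * parser in this patch turns such a pattern into an NTUPLE/EM filter.
 */
static struct rte_flow *
create_ipv4_to_queue_flow(uint16_t port_id, struct rte_flow_error *error)
{
	struct rte_flow_attr attr = { .ingress = 1 };	/* only ingress is accepted */

	struct rte_flow_item_ipv4 ip_spec = {
		.hdr.dst_addr = rte_cpu_to_be_32(0xc0a80001),	/* 192.168.0.1 */
	};
	struct rte_flow_item_ipv4 ip_mask = {
		.hdr.dst_addr = rte_cpu_to_be_32(0xffffffff),	/* exact match on DST IP */
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
		  .spec = &ip_spec, .mask = &ip_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};

	struct rte_flow_action_queue queue = { .index = 1 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	if (rte_flow_validate(port_id, &attr, pattern, actions, error))
		return NULL;
	return rte_flow_create(port_id, &attr, pattern, actions, error);
}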
Signed-off-by: Michael Wildt Signed-off-by: Scott Branden Signed-off-by: Ajit Khaparde --- drivers/net/bnxt/Makefile | 1 + drivers/net/bnxt/bnxt_filter.c | 1060 ------------------------------------ drivers/net/bnxt/bnxt_flow.c | 1167 ++++++++++++++++++++++++++++++++++++++++ 3 files changed, 1168 insertions(+), 1060 deletions(-) create mode 100644 drivers/net/bnxt/bnxt_flow.c diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile index 80db03ea8..8be3cb0e4 100644 --- a/drivers/net/bnxt/Makefile +++ b/drivers/net/bnxt/Makefile @@ -29,6 +29,7 @@ EXPORT_MAP := rte_pmd_bnxt_version.map SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_cpr.c SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_ethdev.c SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_filter.c +SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_flow.c SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_hwrm.c SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_ring.c SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_rxq.c diff --git a/drivers/net/bnxt/bnxt_filter.c b/drivers/net/bnxt/bnxt_filter.c index 72989ab67..31757d32c 100644 --- a/drivers/net/bnxt/bnxt_filter.c +++ b/drivers/net/bnxt/bnxt_filter.c @@ -180,1063 +180,3 @@ void bnxt_free_filter(struct bnxt *bp, struct bnxt_filter_info *filter) { STAILQ_INSERT_TAIL(&bp->free_filter_list, filter, next); } - -static int -bnxt_flow_agrs_validate(const struct rte_flow_attr *attr, - const struct rte_flow_item pattern[], - const struct rte_flow_action actions[], - struct rte_flow_error *error) -{ - if (!pattern) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM_NUM, - NULL, "NULL pattern."); - return -rte_errno; - } - - if (!actions) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_NUM, - NULL, "NULL action."); - return -rte_errno; - } - - if (!attr) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR, - NULL, "NULL attribute."); - return -rte_errno; - } - - return 0; -} - -static const struct rte_flow_item * -nxt_non_void_pattern(const struct rte_flow_item *cur) -{ - while (1) { - if (cur->type != RTE_FLOW_ITEM_TYPE_VOID) - return cur; - cur++; - } -} - -static const struct rte_flow_action * -nxt_non_void_action(const struct rte_flow_action *cur) -{ - while (1) { - if (cur->type != RTE_FLOW_ACTION_TYPE_VOID) - return cur; - cur++; - } -} - -static int -bnxt_filter_type_check(const struct rte_flow_item pattern[], - struct rte_flow_error *error __rte_unused) -{ - const struct rte_flow_item *item = nxt_non_void_pattern(pattern); - int use_ntuple = 1; - - while (item->type != RTE_FLOW_ITEM_TYPE_END) { - switch (item->type) { - case RTE_FLOW_ITEM_TYPE_ETH: - use_ntuple = 1; - break; - case RTE_FLOW_ITEM_TYPE_VLAN: - use_ntuple = 0; - break; - case RTE_FLOW_ITEM_TYPE_IPV4: - case RTE_FLOW_ITEM_TYPE_IPV6: - case RTE_FLOW_ITEM_TYPE_TCP: - case RTE_FLOW_ITEM_TYPE_UDP: - /* FALLTHROUGH */ - /* need ntuple match, reset exact match */ - if (!use_ntuple) { - PMD_DRV_LOG(ERR, - "VLAN flow cannot use NTUPLE filter\n"); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, - "Cannot use VLAN with NTUPLE"); - return -rte_errno; - } - use_ntuple |= 1; - break; - default: - PMD_DRV_LOG(ERR, "Unknown Flow type"); - use_ntuple |= 1; - } - item++; - } - return use_ntuple; -} - -static int -bnxt_validate_and_parse_flow_type(struct bnxt *bp, - const struct rte_flow_attr *attr, - const struct rte_flow_item pattern[], - struct rte_flow_error *error, - struct bnxt_filter_info *filter) -{ - const struct rte_flow_item *item = nxt_non_void_pattern(pattern); - const struct rte_flow_item_vlan 
*vlan_spec, *vlan_mask; - const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_mask; - const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask; - const struct rte_flow_item_tcp *tcp_spec, *tcp_mask; - const struct rte_flow_item_udp *udp_spec, *udp_mask; - const struct rte_flow_item_eth *eth_spec, *eth_mask; - const struct rte_flow_item_nvgre *nvgre_spec; - const struct rte_flow_item_nvgre *nvgre_mask; - const struct rte_flow_item_vxlan *vxlan_spec; - const struct rte_flow_item_vxlan *vxlan_mask; - uint8_t vni_mask[] = {0xFF, 0xFF, 0xFF}; - uint8_t tni_mask[] = {0xFF, 0xFF, 0xFF}; - const struct rte_flow_item_vf *vf_spec; - uint32_t tenant_id_be = 0; - bool vni_masked = 0; - bool tni_masked = 0; - uint32_t vf = 0; - int use_ntuple; - uint32_t en = 0; - uint32_t en_ethertype; - int dflt_vnic; - - use_ntuple = bnxt_filter_type_check(pattern, error); - PMD_DRV_LOG(DEBUG, "Use NTUPLE %d\n", use_ntuple); - if (use_ntuple < 0) - return use_ntuple; - - filter->filter_type = use_ntuple ? - HWRM_CFA_NTUPLE_FILTER : HWRM_CFA_EM_FILTER; - en_ethertype = use_ntuple ? - NTUPLE_FLTR_ALLOC_INPUT_EN_ETHERTYPE : - EM_FLOW_ALLOC_INPUT_EN_ETHERTYPE; - - while (item->type != RTE_FLOW_ITEM_TYPE_END) { - if (item->last) { - /* last or range is NOT supported as match criteria */ - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, - "No support for range"); - return -rte_errno; - } - if (!item->spec || !item->mask) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, - "spec/mask is NULL"); - return -rte_errno; - } - switch (item->type) { - case RTE_FLOW_ITEM_TYPE_ETH: - eth_spec = item->spec; - eth_mask = item->mask; - - /* Source MAC address mask cannot be partially set. - * Should be All 0's or all 1's. - * Destination MAC address mask must not be partially - * set. Should be all 1's or all 0's. - */ - if ((!is_zero_ether_addr(ð_mask->src) && - !is_broadcast_ether_addr(ð_mask->src)) || - (!is_zero_ether_addr(ð_mask->dst) && - !is_broadcast_ether_addr(ð_mask->dst))) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, - "MAC_addr mask not valid"); - return -rte_errno; - } - - /* Mask is not allowed. Only exact matches are */ - if (eth_mask->type && - eth_mask->type != RTE_BE16(0xffff)) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, - "ethertype mask not valid"); - return -rte_errno; - } - - if (is_broadcast_ether_addr(ð_mask->dst)) { - rte_memcpy(filter->dst_macaddr, - ð_spec->dst, 6); - en |= use_ntuple ? - NTUPLE_FLTR_ALLOC_INPUT_EN_DST_MACADDR : - EM_FLOW_ALLOC_INPUT_EN_DST_MACADDR; - } - if (is_broadcast_ether_addr(ð_mask->src)) { - rte_memcpy(filter->src_macaddr, - ð_spec->src, 6); - en |= use_ntuple ? - NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_MACADDR : - EM_FLOW_ALLOC_INPUT_EN_SRC_MACADDR; - } /* - * else { - * RTE_LOG(ERR, PMD, "Handle this condition\n"); - * } - */ - if (eth_mask->type) { - filter->ethertype = - rte_be_to_cpu_16(eth_spec->type); - en |= en_ethertype; - } - - break; - case RTE_FLOW_ITEM_TYPE_VLAN: - vlan_spec = item->spec; - vlan_mask = item->mask; - if (en & en_ethertype) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, - "VLAN TPID matching is not" - " supported"); - return -rte_errno; - } - if (vlan_mask->tci && - vlan_mask->tci == RTE_BE16(0x0fff)) { - /* Only the VLAN ID can be matched. 
*/ - filter->l2_ovlan = - rte_be_to_cpu_16(vlan_spec->tci & - RTE_BE16(0x0fff)); - en |= EM_FLOW_ALLOC_INPUT_EN_OVLAN_VID; - } else if (vlan_mask->tci) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, - "VLAN mask is invalid"); - return -rte_errno; - } - if (vlan_mask->inner_type && - vlan_mask->inner_type != RTE_BE16(0xffff)) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, - "inner ethertype mask not" - " valid"); - return -rte_errno; - } - if (vlan_mask->inner_type) { - filter->ethertype = - rte_be_to_cpu_16(vlan_spec->inner_type); - en |= en_ethertype; - } - - break; - case RTE_FLOW_ITEM_TYPE_IPV4: - /* If mask is not involved, we could use EM filters. */ - ipv4_spec = item->spec; - ipv4_mask = item->mask; - /* Only IP DST and SRC fields are maskable. */ - if (ipv4_mask->hdr.version_ihl || - ipv4_mask->hdr.type_of_service || - ipv4_mask->hdr.total_length || - ipv4_mask->hdr.packet_id || - ipv4_mask->hdr.fragment_offset || - ipv4_mask->hdr.time_to_live || - ipv4_mask->hdr.next_proto_id || - ipv4_mask->hdr.hdr_checksum) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, - "Invalid IPv4 mask."); - return -rte_errno; - } - filter->dst_ipaddr[0] = ipv4_spec->hdr.dst_addr; - filter->src_ipaddr[0] = ipv4_spec->hdr.src_addr; - if (use_ntuple) - en |= NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_IPADDR | - NTUPLE_FLTR_ALLOC_INPUT_EN_DST_IPADDR; - else - en |= EM_FLOW_ALLOC_INPUT_EN_SRC_IPADDR | - EM_FLOW_ALLOC_INPUT_EN_DST_IPADDR; - if (ipv4_mask->hdr.src_addr) { - filter->src_ipaddr_mask[0] = - ipv4_mask->hdr.src_addr; - en |= !use_ntuple ? 0 : - NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_IPADDR_MASK; - } - if (ipv4_mask->hdr.dst_addr) { - filter->dst_ipaddr_mask[0] = - ipv4_mask->hdr.dst_addr; - en |= !use_ntuple ? 0 : - NTUPLE_FLTR_ALLOC_INPUT_EN_DST_IPADDR_MASK; - } - filter->ip_addr_type = use_ntuple ? - HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_IP_ADDR_TYPE_IPV4 : - HWRM_CFA_EM_FLOW_ALLOC_INPUT_IP_ADDR_TYPE_IPV4; - if (ipv4_spec->hdr.next_proto_id) { - filter->ip_protocol = - ipv4_spec->hdr.next_proto_id; - if (use_ntuple) - en |= NTUPLE_FLTR_ALLOC_IN_EN_IP_PROTO; - else - en |= EM_FLOW_ALLOC_INPUT_EN_IP_PROTO; - } - break; - case RTE_FLOW_ITEM_TYPE_IPV6: - ipv6_spec = item->spec; - ipv6_mask = item->mask; - - /* Only IP DST and SRC fields are maskable. */ - if (ipv6_mask->hdr.vtc_flow || - ipv6_mask->hdr.payload_len || - ipv6_mask->hdr.proto || - ipv6_mask->hdr.hop_limits) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, - "Invalid IPv6 mask."); - return -rte_errno; - } - - if (use_ntuple) - en |= NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_IPADDR | - NTUPLE_FLTR_ALLOC_INPUT_EN_DST_IPADDR; - else - en |= EM_FLOW_ALLOC_INPUT_EN_SRC_IPADDR | - EM_FLOW_ALLOC_INPUT_EN_DST_IPADDR; - rte_memcpy(filter->src_ipaddr, - ipv6_spec->hdr.src_addr, 16); - rte_memcpy(filter->dst_ipaddr, - ipv6_spec->hdr.dst_addr, 16); - if (!bnxt_check_zero_bytes(ipv6_mask->hdr.src_addr, - 16)) { - rte_memcpy(filter->src_ipaddr_mask, - ipv6_mask->hdr.src_addr, 16); - en |= !use_ntuple ? 0 : - NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_IPADDR_MASK; - } - if (!bnxt_check_zero_bytes(ipv6_mask->hdr.dst_addr, - 16)) { - rte_memcpy(filter->dst_ipaddr_mask, - ipv6_mask->hdr.dst_addr, 16); - en |= !use_ntuple ? 0 : - NTUPLE_FLTR_ALLOC_INPUT_EN_DST_IPADDR_MASK; - } - filter->ip_addr_type = use_ntuple ? 
- NTUPLE_FLTR_ALLOC_INPUT_IP_ADDR_TYPE_IPV6 : - EM_FLOW_ALLOC_INPUT_IP_ADDR_TYPE_IPV6; - break; - case RTE_FLOW_ITEM_TYPE_TCP: - tcp_spec = item->spec; - tcp_mask = item->mask; - - /* Check TCP mask. Only DST & SRC ports are maskable */ - if (tcp_mask->hdr.sent_seq || - tcp_mask->hdr.recv_ack || - tcp_mask->hdr.data_off || - tcp_mask->hdr.tcp_flags || - tcp_mask->hdr.rx_win || - tcp_mask->hdr.cksum || - tcp_mask->hdr.tcp_urp) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, - "Invalid TCP mask"); - return -rte_errno; - } - filter->src_port = tcp_spec->hdr.src_port; - filter->dst_port = tcp_spec->hdr.dst_port; - if (use_ntuple) - en |= NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_PORT | - NTUPLE_FLTR_ALLOC_INPUT_EN_DST_PORT; - else - en |= EM_FLOW_ALLOC_INPUT_EN_SRC_PORT | - EM_FLOW_ALLOC_INPUT_EN_DST_PORT; - if (tcp_mask->hdr.dst_port) { - filter->dst_port_mask = tcp_mask->hdr.dst_port; - en |= !use_ntuple ? 0 : - NTUPLE_FLTR_ALLOC_INPUT_EN_DST_PORT_MASK; - } - if (tcp_mask->hdr.src_port) { - filter->src_port_mask = tcp_mask->hdr.src_port; - en |= !use_ntuple ? 0 : - NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_PORT_MASK; - } - break; - case RTE_FLOW_ITEM_TYPE_UDP: - udp_spec = item->spec; - udp_mask = item->mask; - - if (udp_mask->hdr.dgram_len || - udp_mask->hdr.dgram_cksum) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, - "Invalid UDP mask"); - return -rte_errno; - } - - filter->src_port = udp_spec->hdr.src_port; - filter->dst_port = udp_spec->hdr.dst_port; - if (use_ntuple) - en |= NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_PORT | - NTUPLE_FLTR_ALLOC_INPUT_EN_DST_PORT; - else - en |= EM_FLOW_ALLOC_INPUT_EN_SRC_PORT | - EM_FLOW_ALLOC_INPUT_EN_DST_PORT; - - if (udp_mask->hdr.dst_port) { - filter->dst_port_mask = udp_mask->hdr.dst_port; - en |= !use_ntuple ? 0 : - NTUPLE_FLTR_ALLOC_INPUT_EN_DST_PORT_MASK; - } - if (udp_mask->hdr.src_port) { - filter->src_port_mask = udp_mask->hdr.src_port; - en |= !use_ntuple ? 0 : - NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_PORT_MASK; - } - break; - case RTE_FLOW_ITEM_TYPE_VXLAN: - vxlan_spec = item->spec; - vxlan_mask = item->mask; - /* Check if VXLAN item is used to describe protocol. - * If yes, both spec and mask should be NULL. - * If no, both spec and mask shouldn't be NULL. - */ - if ((!vxlan_spec && vxlan_mask) || - (vxlan_spec && !vxlan_mask)) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, - "Invalid VXLAN item"); - return -rte_errno; - } - - if (vxlan_spec->rsvd1 || vxlan_spec->rsvd0[0] || - vxlan_spec->rsvd0[1] || vxlan_spec->rsvd0[2] || - vxlan_spec->flags != 0x8) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, - "Invalid VXLAN item"); - return -rte_errno; - } - - /* Check if VNI is masked. */ - if (vxlan_spec && vxlan_mask) { - vni_masked = - !!memcmp(vxlan_mask->vni, vni_mask, - RTE_DIM(vni_mask)); - if (vni_masked) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, - "Invalid VNI mask"); - return -rte_errno; - } - - rte_memcpy(((uint8_t *)&tenant_id_be + 1), - vxlan_spec->vni, 3); - filter->vni = - rte_be_to_cpu_32(tenant_id_be); - filter->tunnel_type = - CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN; - } - break; - case RTE_FLOW_ITEM_TYPE_NVGRE: - nvgre_spec = item->spec; - nvgre_mask = item->mask; - /* Check if NVGRE item is used to describe protocol. - * If yes, both spec and mask should be NULL. - * If no, both spec and mask shouldn't be NULL. 
- */ - if ((!nvgre_spec && nvgre_mask) || - (nvgre_spec && !nvgre_mask)) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, - "Invalid NVGRE item"); - return -rte_errno; - } - - if (nvgre_spec->c_k_s_rsvd0_ver != 0x2000 || - nvgre_spec->protocol != 0x6558) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, - "Invalid NVGRE item"); - return -rte_errno; - } - - if (nvgre_spec && nvgre_mask) { - tni_masked = - !!memcmp(nvgre_mask->tni, tni_mask, - RTE_DIM(tni_mask)); - if (tni_masked) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, - "Invalid TNI mask"); - return -rte_errno; - } - rte_memcpy(((uint8_t *)&tenant_id_be + 1), - nvgre_spec->tni, 3); - filter->vni = - rte_be_to_cpu_32(tenant_id_be); - filter->tunnel_type = - CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_NVGRE; - } - break; - case RTE_FLOW_ITEM_TYPE_VF: - vf_spec = item->spec; - vf = vf_spec->id; - if (!BNXT_PF(bp)) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, - "Configuring on a VF!"); - return -rte_errno; - } - - if (vf >= bp->pdev->max_vfs) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, - "Incorrect VF id!"); - return -rte_errno; - } - - if (!attr->transfer) { - rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ITEM, - item, - "Matching VF traffic without" - " affecting it (transfer attribute)" - " is unsupported"); - return -rte_errno; - } - - filter->mirror_vnic_id = - dflt_vnic = bnxt_hwrm_func_qcfg_vf_dflt_vnic_id(bp, vf); - if (dflt_vnic < 0) { - /* This simply indicates there's no driver - * loaded. This is not an error. - */ - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, - "Unable to get default VNIC for VF"); - return -rte_errno; - } - filter->mirror_vnic_id = dflt_vnic; - en |= NTUPLE_FLTR_ALLOC_INPUT_EN_MIRROR_VNIC_ID; - break; - default: - break; - } - item++; - } - filter->enables = en; - - return 0; -} - -/* Parse attributes */ -static int -bnxt_flow_parse_attr(const struct rte_flow_attr *attr, - struct rte_flow_error *error) -{ - /* Must be input direction */ - if (!attr->ingress) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, - attr, "Only support ingress."); - return -rte_errno; - } - - /* Not supported */ - if (attr->egress) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, - attr, "No support for egress."); - return -rte_errno; - } - - /* Not supported */ - if (attr->priority) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, - attr, "No support for priority."); - return -rte_errno; - } - - /* Not supported */ - if (attr->group) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ATTR_GROUP, - attr, "No support for group."); - return -rte_errno; - } - - return 0; -} - -struct bnxt_filter_info * -bnxt_get_l2_filter(struct bnxt *bp, struct bnxt_filter_info *nf, - struct bnxt_vnic_info *vnic) -{ - struct bnxt_filter_info *filter1, *f0; - struct bnxt_vnic_info *vnic0; - int rc; - - vnic0 = STAILQ_FIRST(&bp->ff_pool[0]); - f0 = STAILQ_FIRST(&vnic0->filter); - - //This flow has same DST MAC as the port/l2 filter. 
- if (memcmp(f0->l2_addr, nf->dst_macaddr, ETHER_ADDR_LEN) == 0) - return f0; - - //This flow needs DST MAC which is not same as port/l2 - PMD_DRV_LOG(DEBUG, "Create L2 filter for DST MAC\n"); - filter1 = bnxt_get_unused_filter(bp); - if (filter1 == NULL) - return NULL; - filter1->flags = HWRM_CFA_L2_FILTER_ALLOC_INPUT_FLAGS_PATH_RX; - filter1->enables = HWRM_CFA_L2_FILTER_ALLOC_INPUT_ENABLES_L2_ADDR | - L2_FILTER_ALLOC_INPUT_EN_L2_ADDR_MASK; - memcpy(filter1->l2_addr, nf->dst_macaddr, ETHER_ADDR_LEN); - memset(filter1->l2_addr_mask, 0xff, ETHER_ADDR_LEN); - rc = bnxt_hwrm_set_l2_filter(bp, vnic->fw_vnic_id, - filter1); - if (rc) { - bnxt_free_filter(bp, filter1); - return NULL; - } - return filter1; -} - -static int -bnxt_validate_and_parse_flow(struct rte_eth_dev *dev, - const struct rte_flow_item pattern[], - const struct rte_flow_action actions[], - const struct rte_flow_attr *attr, - struct rte_flow_error *error, - struct bnxt_filter_info *filter) -{ - const struct rte_flow_action *act = nxt_non_void_action(actions); - struct bnxt *bp = (struct bnxt *)dev->data->dev_private; - const struct rte_flow_action_queue *act_q; - const struct rte_flow_action_vf *act_vf; - struct bnxt_vnic_info *vnic, *vnic0; - struct bnxt_filter_info *filter1; - uint32_t vf = 0; - int dflt_vnic; - int rc; - - if (bp->eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) { - PMD_DRV_LOG(ERR, "Cannot create flow on RSS queues\n"); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, - "Cannot create flow on RSS queues"); - rc = -rte_errno; - goto ret; - } - - rc = bnxt_validate_and_parse_flow_type(bp, attr, pattern, error, - filter); - if (rc != 0) - goto ret; - - rc = bnxt_flow_parse_attr(attr, error); - if (rc != 0) - goto ret; - //Since we support ingress attribute only - right now. - if (filter->filter_type == HWRM_CFA_EM_FILTER) - filter->flags = HWRM_CFA_EM_FLOW_ALLOC_INPUT_FLAGS_PATH_RX; - - switch (act->type) { - case RTE_FLOW_ACTION_TYPE_QUEUE: - /* Allow this flow. Redirect to a VNIC. 
*/ - act_q = (const struct rte_flow_action_queue *)act->conf; - if (act_q->index >= bp->rx_nr_rings) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Invalid queue ID."); - rc = -rte_errno; - goto ret; - } - PMD_DRV_LOG(DEBUG, "Queue index %d\n", act_q->index); - - vnic0 = STAILQ_FIRST(&bp->ff_pool[0]); - vnic = STAILQ_FIRST(&bp->ff_pool[act_q->index]); - if (vnic == NULL) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "No matching VNIC for queue ID."); - rc = -rte_errno; - goto ret; - } - filter->dst_id = vnic->fw_vnic_id; - filter1 = bnxt_get_l2_filter(bp, filter, vnic); - if (filter1 == NULL) { - rc = -ENOSPC; - goto ret; - } - filter->fw_l2_filter_id = filter1->fw_l2_filter_id; - PMD_DRV_LOG(DEBUG, "VNIC found\n"); - break; - case RTE_FLOW_ACTION_TYPE_DROP: - vnic0 = STAILQ_FIRST(&bp->ff_pool[0]); - filter1 = bnxt_get_l2_filter(bp, filter, vnic0); - if (filter1 == NULL) { - rc = -ENOSPC; - goto ret; - } - filter->fw_l2_filter_id = filter1->fw_l2_filter_id; - if (filter->filter_type == HWRM_CFA_EM_FILTER) - filter->flags = - HWRM_CFA_EM_FLOW_ALLOC_INPUT_FLAGS_DROP; - else - filter->flags = - HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_FLAGS_DROP; - break; - case RTE_FLOW_ACTION_TYPE_COUNT: - vnic0 = STAILQ_FIRST(&bp->ff_pool[0]); - filter1 = bnxt_get_l2_filter(bp, filter, vnic0); - if (filter1 == NULL) { - rc = -ENOSPC; - goto ret; - } - filter->fw_l2_filter_id = filter1->fw_l2_filter_id; - filter->flags = HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_FLAGS_METER; - break; - case RTE_FLOW_ACTION_TYPE_VF: - act_vf = (const struct rte_flow_action_vf *)act->conf; - vf = act_vf->id; - if (!BNXT_PF(bp)) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, - "Configuring on a VF!"); - rc = -rte_errno; - goto ret; - } - - if (vf >= bp->pdev->max_vfs) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, - "Incorrect VF id!"); - rc = -rte_errno; - goto ret; - } - - filter->mirror_vnic_id = - dflt_vnic = bnxt_hwrm_func_qcfg_vf_dflt_vnic_id(bp, vf); - if (dflt_vnic < 0) { - /* This simply indicates there's no driver loaded. - * This is not an error. 
- */ - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, - "Unable to get default VNIC for VF"); - rc = -rte_errno; - goto ret; - } - filter->mirror_vnic_id = dflt_vnic; - filter->enables |= NTUPLE_FLTR_ALLOC_INPUT_EN_MIRROR_VNIC_ID; - - vnic0 = STAILQ_FIRST(&bp->ff_pool[0]); - filter1 = bnxt_get_l2_filter(bp, filter, vnic0); - if (filter1 == NULL) { - rc = -ENOSPC; - goto ret; - } - filter->fw_l2_filter_id = filter1->fw_l2_filter_id; - break; - - default: - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, act, - "Invalid action."); - rc = -rte_errno; - goto ret; - } - - act = nxt_non_void_action(++act); - if (act->type != RTE_FLOW_ACTION_TYPE_END) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - act, "Invalid action."); - rc = -rte_errno; - goto ret; - } -ret: - return rc; -} - -static int -bnxt_flow_validate(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, - const struct rte_flow_item pattern[], - const struct rte_flow_action actions[], - struct rte_flow_error *error) -{ - struct bnxt *bp = (struct bnxt *)dev->data->dev_private; - struct bnxt_filter_info *filter; - int ret = 0; - - ret = bnxt_flow_agrs_validate(attr, pattern, actions, error); - if (ret != 0) - return ret; - - filter = bnxt_get_unused_filter(bp); - if (filter == NULL) { - PMD_DRV_LOG(ERR, "Not enough resources for a new flow.\n"); - return -ENOMEM; - } - - ret = bnxt_validate_and_parse_flow(dev, pattern, actions, attr, - error, filter); - /* No need to hold on to this filter if we are just validating flow */ - filter->fw_l2_filter_id = UINT64_MAX; - bnxt_free_filter(bp, filter); - - return ret; -} - -static int -bnxt_match_filter(struct bnxt *bp, struct bnxt_filter_info *nf) -{ - struct bnxt_filter_info *mf; - struct rte_flow *flow; - int i; - - for (i = bp->nr_vnics - 1; i >= 0; i--) { - struct bnxt_vnic_info *vnic = &bp->vnic_info[i]; - - STAILQ_FOREACH(flow, &vnic->flow_list, next) { - mf = flow->filter; - - if (mf->filter_type == nf->filter_type && - mf->flags == nf->flags && - mf->src_port == nf->src_port && - mf->src_port_mask == nf->src_port_mask && - mf->dst_port == nf->dst_port && - mf->dst_port_mask == nf->dst_port_mask && - mf->ip_protocol == nf->ip_protocol && - mf->ip_addr_type == nf->ip_addr_type && - mf->ethertype == nf->ethertype && - mf->vni == nf->vni && - mf->tunnel_type == nf->tunnel_type && - mf->l2_ovlan == nf->l2_ovlan && - mf->l2_ovlan_mask == nf->l2_ovlan_mask && - mf->l2_ivlan == nf->l2_ivlan && - mf->l2_ivlan_mask == nf->l2_ivlan_mask && - !memcmp(mf->l2_addr, nf->l2_addr, ETHER_ADDR_LEN) && - !memcmp(mf->l2_addr_mask, nf->l2_addr_mask, - ETHER_ADDR_LEN) && - !memcmp(mf->src_macaddr, nf->src_macaddr, - ETHER_ADDR_LEN) && - !memcmp(mf->dst_macaddr, nf->dst_macaddr, - ETHER_ADDR_LEN) && - !memcmp(mf->src_ipaddr, nf->src_ipaddr, - sizeof(nf->src_ipaddr)) && - !memcmp(mf->src_ipaddr_mask, nf->src_ipaddr_mask, - sizeof(nf->src_ipaddr_mask)) && - !memcmp(mf->dst_ipaddr, nf->dst_ipaddr, - sizeof(nf->dst_ipaddr)) && - !memcmp(mf->dst_ipaddr_mask, nf->dst_ipaddr_mask, - sizeof(nf->dst_ipaddr_mask))) { - if (mf->dst_id == nf->dst_id) - return -EEXIST; - /* Same Flow, Different queue - * Clear the old ntuple filter - */ - if (nf->filter_type == HWRM_CFA_EM_FILTER) - bnxt_hwrm_clear_em_filter(bp, mf); - if (nf->filter_type == HWRM_CFA_NTUPLE_FILTER) - bnxt_hwrm_clear_ntuple_filter(bp, mf); - /* Free the old filter, update flow - * with new filter - */ - bnxt_free_filter(bp, mf); - flow->filter = nf; - return -EXDEV; - } - } - } - return 0; 
-} - -static struct rte_flow * -bnxt_flow_create(struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, - const struct rte_flow_item pattern[], - const struct rte_flow_action actions[], - struct rte_flow_error *error) -{ - struct bnxt *bp = (struct bnxt *)dev->data->dev_private; - struct bnxt_filter_info *filter; - struct bnxt_vnic_info *vnic = NULL; - bool update_flow = false; - struct rte_flow *flow; - unsigned int i; - int ret = 0; - - flow = rte_zmalloc("bnxt_flow", sizeof(struct rte_flow), 0); - if (!flow) { - rte_flow_error_set(error, ENOMEM, - RTE_FLOW_ERROR_TYPE_HANDLE, NULL, - "Failed to allocate memory"); - return flow; - } - - ret = bnxt_flow_agrs_validate(attr, pattern, actions, error); - if (ret != 0) { - PMD_DRV_LOG(ERR, "Not a validate flow.\n"); - goto free_flow; - } - - filter = bnxt_get_unused_filter(bp); - if (filter == NULL) { - PMD_DRV_LOG(ERR, "Not enough resources for a new flow.\n"); - goto free_flow; - } - - ret = bnxt_validate_and_parse_flow(dev, pattern, actions, attr, - error, filter); - if (ret != 0) - goto free_filter; - - ret = bnxt_match_filter(bp, filter); - if (ret == -EEXIST) { - PMD_DRV_LOG(DEBUG, "Flow already exists.\n"); - /* Clear the filter that was created as part of - * validate_and_parse_flow() above - */ - bnxt_hwrm_clear_l2_filter(bp, filter); - goto free_filter; - } else if (ret == -EXDEV) { - PMD_DRV_LOG(DEBUG, "Flow with same pattern exists"); - PMD_DRV_LOG(DEBUG, "Updating with different destination\n"); - update_flow = true; - } - - if (filter->filter_type == HWRM_CFA_EM_FILTER) { - filter->enables |= - HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_L2_FILTER_ID; - ret = bnxt_hwrm_set_em_filter(bp, filter->dst_id, filter); - } - if (filter->filter_type == HWRM_CFA_NTUPLE_FILTER) { - filter->enables |= - HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_L2_FILTER_ID; - ret = bnxt_hwrm_set_ntuple_filter(bp, filter->dst_id, filter); - } - - for (i = 0; i < bp->nr_vnics; i++) { - vnic = &bp->vnic_info[i]; - if (filter->dst_id == vnic->fw_vnic_id) - break; - } - - if (!ret) { - flow->filter = filter; - flow->vnic = vnic; - if (update_flow) { - ret = -EXDEV; - goto free_flow; - } - PMD_DRV_LOG(ERR, "Successfully created flow.\n"); - STAILQ_INSERT_TAIL(&vnic->flow_list, flow, next); - return flow; - } -free_filter: - bnxt_free_filter(bp, filter); -free_flow: - if (ret == -EEXIST) - rte_flow_error_set(error, ret, - RTE_FLOW_ERROR_TYPE_HANDLE, NULL, - "Matching Flow exists."); - else if (ret == -EXDEV) - rte_flow_error_set(error, ret, - RTE_FLOW_ERROR_TYPE_HANDLE, NULL, - "Flow with pattern exists, updating destination queue"); - else - rte_flow_error_set(error, -ret, - RTE_FLOW_ERROR_TYPE_HANDLE, NULL, - "Failed to create flow."); - rte_free(flow); - flow = NULL; - return flow; -} - -static int -bnxt_flow_destroy(struct rte_eth_dev *dev, - struct rte_flow *flow, - struct rte_flow_error *error) -{ - struct bnxt *bp = (struct bnxt *)dev->data->dev_private; - struct bnxt_filter_info *filter = flow->filter; - struct bnxt_vnic_info *vnic = flow->vnic; - int ret = 0; - - ret = bnxt_match_filter(bp, filter); - if (ret == 0) - PMD_DRV_LOG(ERR, "Could not find matching flow\n"); - if (filter->filter_type == HWRM_CFA_EM_FILTER) - ret = bnxt_hwrm_clear_em_filter(bp, filter); - if (filter->filter_type == HWRM_CFA_NTUPLE_FILTER) - ret = bnxt_hwrm_clear_ntuple_filter(bp, filter); - else - ret = bnxt_hwrm_clear_l2_filter(bp, filter); - if (!ret) { - STAILQ_REMOVE(&vnic->flow_list, flow, rte_flow, next); - rte_free(flow); - } else { - rte_flow_error_set(error, -ret, - 
RTE_FLOW_ERROR_TYPE_HANDLE, NULL, - "Failed to destroy flow."); - } - - return ret; -} - -static int -bnxt_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error) -{ - struct bnxt *bp = (struct bnxt *)dev->data->dev_private; - struct bnxt_vnic_info *vnic; - struct rte_flow *flow; - unsigned int i; - int ret = 0; - - for (i = 0; i < bp->nr_vnics; i++) { - vnic = &bp->vnic_info[i]; - STAILQ_FOREACH(flow, &vnic->flow_list, next) { - struct bnxt_filter_info *filter = flow->filter; - - if (filter->filter_type == HWRM_CFA_EM_FILTER) - ret = bnxt_hwrm_clear_em_filter(bp, filter); - if (filter->filter_type == HWRM_CFA_NTUPLE_FILTER) - ret = bnxt_hwrm_clear_ntuple_filter(bp, filter); - - if (ret) { - rte_flow_error_set(error, -ret, - RTE_FLOW_ERROR_TYPE_HANDLE, - NULL, - "Failed to flush flow in HW."); - return -rte_errno; - } - - STAILQ_REMOVE(&vnic->flow_list, flow, - rte_flow, next); - rte_free(flow); - } - } - - return ret; -} - -const struct rte_flow_ops bnxt_flow_ops = { - .validate = bnxt_flow_validate, - .create = bnxt_flow_create, - .destroy = bnxt_flow_destroy, - .flush = bnxt_flow_flush, -}; diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c new file mode 100644 index 000000000..a491e9dbf --- /dev/null +++ b/drivers/net/bnxt/bnxt_flow.c @@ -0,0 +1,1167 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2014-2018 Broadcom + * All rights reserved. + */ + +#include + +#include +#include +#include +#include +#include + +#include "bnxt.h" +#include "bnxt_filter.h" +#include "bnxt_hwrm.h" +#include "bnxt_vnic.h" +#include "bnxt_util.h" +#include "hsi_struct_def_dpdk.h" + +static int +bnxt_flow_args_validate(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + if (!pattern) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, + "NULL pattern."); + return -rte_errno; + } + + if (!actions) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, + NULL, + "NULL action."); + return -rte_errno; + } + + if (!attr) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, + NULL, + "NULL attribute."); + return -rte_errno; + } + + return 0; +} + +static const struct rte_flow_item * +bnxt_flow_non_void_item(const struct rte_flow_item *cur) +{ + while (1) { + if (cur->type != RTE_FLOW_ITEM_TYPE_VOID) + return cur; + cur++; + } +} + +static const struct rte_flow_action * +bnxt_flow_non_void_action(const struct rte_flow_action *cur) +{ + while (1) { + if (cur->type != RTE_FLOW_ACTION_TYPE_VOID) + return cur; + cur++; + } +} + +static int +bnxt_filter_type_check(const struct rte_flow_item pattern[], + struct rte_flow_error *error __rte_unused) +{ + const struct rte_flow_item *item = + bnxt_flow_non_void_item(pattern); + int use_ntuple = 1; + + while (item->type != RTE_FLOW_ITEM_TYPE_END) { + switch (item->type) { + case RTE_FLOW_ITEM_TYPE_ETH: + use_ntuple = 1; + break; + case RTE_FLOW_ITEM_TYPE_VLAN: + use_ntuple = 0; + break; + case RTE_FLOW_ITEM_TYPE_IPV4: + case RTE_FLOW_ITEM_TYPE_IPV6: + case RTE_FLOW_ITEM_TYPE_TCP: + case RTE_FLOW_ITEM_TYPE_UDP: + /* FALLTHROUGH */ + /* need ntuple match, reset exact match */ + if (!use_ntuple) { + PMD_DRV_LOG(ERR, + "VLAN flow cannot use NTUPLE filter\n"); + rte_flow_error_set + (error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Cannot use VLAN with NTUPLE"); + return -rte_errno; + } + use_ntuple |= 1; + break; + default: + PMD_DRV_LOG(ERR, "Unknown Flow type\n"); + 
use_ntuple |= 1; + } + item++; + } + return use_ntuple; +} + +static int +bnxt_validate_and_parse_flow_type(struct bnxt *bp, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + struct rte_flow_error *error, + struct bnxt_filter_info *filter) +{ + const struct rte_flow_item *item = bnxt_flow_non_void_item(pattern); + const struct rte_flow_item_vlan *vlan_spec, *vlan_mask; + const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_mask; + const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask; + const struct rte_flow_item_tcp *tcp_spec, *tcp_mask; + const struct rte_flow_item_udp *udp_spec, *udp_mask; + const struct rte_flow_item_eth *eth_spec, *eth_mask; + const struct rte_flow_item_nvgre *nvgre_spec; + const struct rte_flow_item_nvgre *nvgre_mask; + const struct rte_flow_item_vxlan *vxlan_spec; + const struct rte_flow_item_vxlan *vxlan_mask; + uint8_t vni_mask[] = {0xFF, 0xFF, 0xFF}; + uint8_t tni_mask[] = {0xFF, 0xFF, 0xFF}; + const struct rte_flow_item_vf *vf_spec; + uint32_t tenant_id_be = 0; + bool vni_masked = 0; + bool tni_masked = 0; + uint32_t vf = 0; + int use_ntuple; + uint32_t en = 0; + uint32_t en_ethertype; + int dflt_vnic; + + use_ntuple = bnxt_filter_type_check(pattern, error); + PMD_DRV_LOG(DEBUG, "Use NTUPLE %d\n", use_ntuple); + if (use_ntuple < 0) + return use_ntuple; + + filter->filter_type = use_ntuple ? + HWRM_CFA_NTUPLE_FILTER : HWRM_CFA_EM_FILTER; + en_ethertype = use_ntuple ? + NTUPLE_FLTR_ALLOC_INPUT_EN_ETHERTYPE : + EM_FLOW_ALLOC_INPUT_EN_ETHERTYPE; + + while (item->type != RTE_FLOW_ITEM_TYPE_END) { + if (item->last) { + /* last or range is NOT supported as match criteria */ + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "No support for range"); + return -rte_errno; + } + + if (!item->spec || !item->mask) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "spec/mask is NULL"); + return -rte_errno; + } + + switch (item->type) { + case RTE_FLOW_ITEM_TYPE_ETH: + eth_spec = item->spec; + eth_mask = item->mask; + + /* Source MAC address mask cannot be partially set. + * Should be All 0's or all 1's. + * Destination MAC address mask must not be partially + * set. Should be all 1's or all 0's. + */ + if ((!is_zero_ether_addr(ð_mask->src) && + !is_broadcast_ether_addr(ð_mask->src)) || + (!is_zero_ether_addr(ð_mask->dst) && + !is_broadcast_ether_addr(ð_mask->dst))) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "MAC_addr mask not valid"); + return -rte_errno; + } + + /* Mask is not allowed. Only exact matches are */ + if (eth_mask->type && + eth_mask->type != RTE_BE16(0xffff)) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "ethertype mask not valid"); + return -rte_errno; + } + + if (is_broadcast_ether_addr(ð_mask->dst)) { + rte_memcpy(filter->dst_macaddr, + ð_spec->dst, 6); + en |= use_ntuple ? + NTUPLE_FLTR_ALLOC_INPUT_EN_DST_MACADDR : + EM_FLOW_ALLOC_INPUT_EN_DST_MACADDR; + } + + if (is_broadcast_ether_addr(ð_mask->src)) { + rte_memcpy(filter->src_macaddr, + ð_spec->src, 6); + en |= use_ntuple ? 
+ NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_MACADDR : + EM_FLOW_ALLOC_INPUT_EN_SRC_MACADDR; + } /* + * else { + * PMD_DRV_LOG(ERR, "Handle this condition\n"); + * } + */ + if (eth_mask->type) { + filter->ethertype = + rte_be_to_cpu_16(eth_spec->type); + en |= en_ethertype; + } + + break; + case RTE_FLOW_ITEM_TYPE_VLAN: + vlan_spec = item->spec; + vlan_mask = item->mask; + if (en & en_ethertype) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "VLAN TPID matching is not" + " supported"); + return -rte_errno; + } + if (vlan_mask->tci && + vlan_mask->tci == RTE_BE16(0x0fff)) { + /* Only the VLAN ID can be matched. */ + filter->l2_ovlan = + rte_be_to_cpu_16(vlan_spec->tci & + RTE_BE16(0x0fff)); + en |= EM_FLOW_ALLOC_INPUT_EN_OVLAN_VID; + } else { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "VLAN mask is invalid"); + return -rte_errno; + } + if (vlan_mask->inner_type && + vlan_mask->inner_type != RTE_BE16(0xffff)) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "inner ethertype mask not" + " valid"); + return -rte_errno; + } + if (vlan_mask->inner_type) { + filter->ethertype = + rte_be_to_cpu_16(vlan_spec->inner_type); + en |= en_ethertype; + } + + break; + case RTE_FLOW_ITEM_TYPE_IPV4: + /* If mask is not involved, we could use EM filters. */ + ipv4_spec = item->spec; + ipv4_mask = item->mask; + /* Only IP DST and SRC fields are maskable. */ + if (ipv4_mask->hdr.version_ihl || + ipv4_mask->hdr.type_of_service || + ipv4_mask->hdr.total_length || + ipv4_mask->hdr.packet_id || + ipv4_mask->hdr.fragment_offset || + ipv4_mask->hdr.time_to_live || + ipv4_mask->hdr.next_proto_id || + ipv4_mask->hdr.hdr_checksum) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Invalid IPv4 mask."); + return -rte_errno; + } + + filter->dst_ipaddr[0] = ipv4_spec->hdr.dst_addr; + filter->src_ipaddr[0] = ipv4_spec->hdr.src_addr; + + if (use_ntuple) + en |= NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_IPADDR | + NTUPLE_FLTR_ALLOC_INPUT_EN_DST_IPADDR; + else + en |= EM_FLOW_ALLOC_INPUT_EN_SRC_IPADDR | + EM_FLOW_ALLOC_INPUT_EN_DST_IPADDR; + + if (ipv4_mask->hdr.src_addr) { + filter->src_ipaddr_mask[0] = + ipv4_mask->hdr.src_addr; + en |= !use_ntuple ? 0 : + NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_IPADDR_MASK; + } + + if (ipv4_mask->hdr.dst_addr) { + filter->dst_ipaddr_mask[0] = + ipv4_mask->hdr.dst_addr; + en |= !use_ntuple ? 0 : + NTUPLE_FLTR_ALLOC_INPUT_EN_DST_IPADDR_MASK; + } + + filter->ip_addr_type = use_ntuple ? + HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_IP_ADDR_TYPE_IPV4 : + HWRM_CFA_EM_FLOW_ALLOC_INPUT_IP_ADDR_TYPE_IPV4; + + if (ipv4_spec->hdr.next_proto_id) { + filter->ip_protocol = + ipv4_spec->hdr.next_proto_id; + if (use_ntuple) + en |= NTUPLE_FLTR_ALLOC_IN_EN_IP_PROTO; + else + en |= EM_FLOW_ALLOC_INPUT_EN_IP_PROTO; + } + break; + case RTE_FLOW_ITEM_TYPE_IPV6: + ipv6_spec = item->spec; + ipv6_mask = item->mask; + + /* Only IP DST and SRC fields are maskable. 
*/ + if (ipv6_mask->hdr.vtc_flow || + ipv6_mask->hdr.payload_len || + ipv6_mask->hdr.proto || + ipv6_mask->hdr.hop_limits) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Invalid IPv6 mask."); + return -rte_errno; + } + + if (use_ntuple) + en |= NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_IPADDR | + NTUPLE_FLTR_ALLOC_INPUT_EN_DST_IPADDR; + else + en |= EM_FLOW_ALLOC_INPUT_EN_SRC_IPADDR | + EM_FLOW_ALLOC_INPUT_EN_DST_IPADDR; + + rte_memcpy(filter->src_ipaddr, + ipv6_spec->hdr.src_addr, 16); + rte_memcpy(filter->dst_ipaddr, + ipv6_spec->hdr.dst_addr, 16); + + if (!bnxt_check_zero_bytes(ipv6_mask->hdr.src_addr, + 16)) { + rte_memcpy(filter->src_ipaddr_mask, + ipv6_mask->hdr.src_addr, 16); + en |= !use_ntuple ? 0 : + NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_IPADDR_MASK; + } + + if (!bnxt_check_zero_bytes(ipv6_mask->hdr.dst_addr, + 16)) { + rte_memcpy(filter->dst_ipaddr_mask, + ipv6_mask->hdr.dst_addr, 16); + en |= !use_ntuple ? 0 : + NTUPLE_FLTR_ALLOC_INPUT_EN_DST_IPADDR_MASK; + } + + filter->ip_addr_type = use_ntuple ? + NTUPLE_FLTR_ALLOC_INPUT_IP_ADDR_TYPE_IPV6 : + EM_FLOW_ALLOC_INPUT_IP_ADDR_TYPE_IPV6; + break; + case RTE_FLOW_ITEM_TYPE_TCP: + tcp_spec = item->spec; + tcp_mask = item->mask; + + /* Check TCP mask. Only DST & SRC ports are maskable */ + if (tcp_mask->hdr.sent_seq || + tcp_mask->hdr.recv_ack || + tcp_mask->hdr.data_off || + tcp_mask->hdr.tcp_flags || + tcp_mask->hdr.rx_win || + tcp_mask->hdr.cksum || + tcp_mask->hdr.tcp_urp) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Invalid TCP mask"); + return -rte_errno; + } + + filter->src_port = tcp_spec->hdr.src_port; + filter->dst_port = tcp_spec->hdr.dst_port; + + if (use_ntuple) + en |= NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_PORT | + NTUPLE_FLTR_ALLOC_INPUT_EN_DST_PORT; + else + en |= EM_FLOW_ALLOC_INPUT_EN_SRC_PORT | + EM_FLOW_ALLOC_INPUT_EN_DST_PORT; + + if (tcp_mask->hdr.dst_port) { + filter->dst_port_mask = tcp_mask->hdr.dst_port; + en |= !use_ntuple ? 0 : + NTUPLE_FLTR_ALLOC_INPUT_EN_DST_PORT_MASK; + } + + if (tcp_mask->hdr.src_port) { + filter->src_port_mask = tcp_mask->hdr.src_port; + en |= !use_ntuple ? 0 : + NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_PORT_MASK; + } + break; + case RTE_FLOW_ITEM_TYPE_UDP: + udp_spec = item->spec; + udp_mask = item->mask; + + if (udp_mask->hdr.dgram_len || + udp_mask->hdr.dgram_cksum) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Invalid UDP mask"); + return -rte_errno; + } + + filter->src_port = udp_spec->hdr.src_port; + filter->dst_port = udp_spec->hdr.dst_port; + + if (use_ntuple) + en |= NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_PORT | + NTUPLE_FLTR_ALLOC_INPUT_EN_DST_PORT; + else + en |= EM_FLOW_ALLOC_INPUT_EN_SRC_PORT | + EM_FLOW_ALLOC_INPUT_EN_DST_PORT; + + if (udp_mask->hdr.dst_port) { + filter->dst_port_mask = udp_mask->hdr.dst_port; + en |= !use_ntuple ? 0 : + NTUPLE_FLTR_ALLOC_INPUT_EN_DST_PORT_MASK; + } + + if (udp_mask->hdr.src_port) { + filter->src_port_mask = udp_mask->hdr.src_port; + en |= !use_ntuple ? 0 : + NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_PORT_MASK; + } + break; + case RTE_FLOW_ITEM_TYPE_VXLAN: + vxlan_spec = item->spec; + vxlan_mask = item->mask; + /* Check if VXLAN item is used to describe protocol. + * If yes, both spec and mask should be NULL. + * If no, both spec and mask shouldn't be NULL. 
+ */ + if ((!vxlan_spec && vxlan_mask) || + (vxlan_spec && !vxlan_mask)) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Invalid VXLAN item"); + return -rte_errno; + } + + if (vxlan_spec->rsvd1 || vxlan_spec->rsvd0[0] || + vxlan_spec->rsvd0[1] || vxlan_spec->rsvd0[2] || + vxlan_spec->flags != 0x8) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Invalid VXLAN item"); + return -rte_errno; + } + + /* Check if VNI is masked. */ + if (vxlan_spec && vxlan_mask) { + vni_masked = + !!memcmp(vxlan_mask->vni, vni_mask, + RTE_DIM(vni_mask)); + if (vni_masked) { + rte_flow_error_set + (error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Invalid VNI mask"); + return -rte_errno; + } + + rte_memcpy(((uint8_t *)&tenant_id_be + 1), + vxlan_spec->vni, 3); + filter->vni = + rte_be_to_cpu_32(tenant_id_be); + filter->tunnel_type = + CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN; + } + break; + case RTE_FLOW_ITEM_TYPE_NVGRE: + nvgre_spec = item->spec; + nvgre_mask = item->mask; + /* Check if NVGRE item is used to describe protocol. + * If yes, both spec and mask should be NULL. + * If no, both spec and mask shouldn't be NULL. + */ + if ((!nvgre_spec && nvgre_mask) || + (nvgre_spec && !nvgre_mask)) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Invalid NVGRE item"); + return -rte_errno; + } + + if (nvgre_spec->c_k_s_rsvd0_ver != 0x2000 || + nvgre_spec->protocol != 0x6558) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Invalid NVGRE item"); + return -rte_errno; + } + + if (nvgre_spec && nvgre_mask) { + tni_masked = + !!memcmp(nvgre_mask->tni, tni_mask, + RTE_DIM(tni_mask)); + if (tni_masked) { + rte_flow_error_set + (error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Invalid TNI mask"); + return -rte_errno; + } + rte_memcpy(((uint8_t *)&tenant_id_be + 1), + nvgre_spec->tni, 3); + filter->vni = + rte_be_to_cpu_32(tenant_id_be); + filter->tunnel_type = + CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_NVGRE; + } + break; + case RTE_FLOW_ITEM_TYPE_VF: + vf_spec = item->spec; + vf = vf_spec->id; + + if (!BNXT_PF(bp)) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Configuring on a VF!"); + return -rte_errno; + } + + if (vf >= bp->pdev->max_vfs) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Incorrect VF id!"); + return -rte_errno; + } + + if (!attr->transfer) { + rte_flow_error_set(error, + ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Matching VF traffic without" + " affecting it (transfer attribute)" + " is unsupported"); + return -rte_errno; + } + + filter->mirror_vnic_id = + dflt_vnic = bnxt_hwrm_func_qcfg_vf_dflt_vnic_id(bp, vf); + if (dflt_vnic < 0) { + /* This simply indicates there's no driver + * loaded. This is not an error. 
+ */ + rte_flow_error_set + (error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Unable to get default VNIC for VF"); + return -rte_errno; + } + + filter->mirror_vnic_id = dflt_vnic; + en |= NTUPLE_FLTR_ALLOC_INPUT_EN_MIRROR_VNIC_ID; + break; + default: + break; + } + item++; + } + filter->enables = en; + + return 0; +} + +/* Parse attributes */ +static int +bnxt_flow_parse_attr(const struct rte_flow_attr *attr, + struct rte_flow_error *error) +{ + /* Must be input direction */ + if (!attr->ingress) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, + attr, + "Only support ingress."); + return -rte_errno; + } + + /* Not supported */ + if (attr->egress) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, + attr, + "No support for egress."); + return -rte_errno; + } + + /* Not supported */ + if (attr->priority) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, + attr, + "No support for priority."); + return -rte_errno; + } + + /* Not supported */ + if (attr->group) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_GROUP, + attr, + "No support for group."); + return -rte_errno; + } + + return 0; +} + +struct bnxt_filter_info * +bnxt_get_l2_filter(struct bnxt *bp, struct bnxt_filter_info *nf, + struct bnxt_vnic_info *vnic) +{ + struct bnxt_filter_info *filter1, *f0; + struct bnxt_vnic_info *vnic0; + int rc; + + vnic0 = STAILQ_FIRST(&bp->ff_pool[0]); + f0 = STAILQ_FIRST(&vnic0->filter); + + /* This flow has same DST MAC as the port/l2 filter. */ + if (memcmp(f0->l2_addr, nf->dst_macaddr, ETHER_ADDR_LEN) == 0) + return f0; + + /* This flow needs DST MAC which is not same as port/l2 */ + PMD_DRV_LOG(DEBUG, "Create L2 filter for DST MAC\n"); + filter1 = bnxt_get_unused_filter(bp); + if (filter1 == NULL) + return NULL; + + filter1->flags = HWRM_CFA_L2_FILTER_ALLOC_INPUT_FLAGS_PATH_RX; + filter1->enables = HWRM_CFA_L2_FILTER_ALLOC_INPUT_ENABLES_L2_ADDR | + L2_FILTER_ALLOC_INPUT_EN_L2_ADDR_MASK; + memcpy(filter1->l2_addr, nf->dst_macaddr, ETHER_ADDR_LEN); + memset(filter1->l2_addr_mask, 0xff, ETHER_ADDR_LEN); + rc = bnxt_hwrm_set_l2_filter(bp, vnic->fw_vnic_id, + filter1); + if (rc) { + bnxt_free_filter(bp, filter1); + return NULL; + } + return filter1; +} + +static int +bnxt_validate_and_parse_flow(struct rte_eth_dev *dev, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + const struct rte_flow_attr *attr, + struct rte_flow_error *error, + struct bnxt_filter_info *filter) +{ + const struct rte_flow_action *act = + bnxt_flow_non_void_action(actions); + struct bnxt *bp = (struct bnxt *)dev->data->dev_private; + const struct rte_flow_action_queue *act_q; + const struct rte_flow_action_vf *act_vf; + struct bnxt_vnic_info *vnic, *vnic0; + struct bnxt_filter_info *filter1; + uint32_t vf = 0; + int dflt_vnic; + int rc; + + if (bp->eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) { + PMD_DRV_LOG(ERR, "Cannot create flow on RSS queues\n"); + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "Cannot create flow on RSS queues"); + rc = -rte_errno; + goto ret; + } + + rc = + bnxt_validate_and_parse_flow_type(bp, attr, pattern, error, filter); + if (rc != 0) + goto ret; + + rc = bnxt_flow_parse_attr(attr, error); + if (rc != 0) + goto ret; + + /* Since we support ingress attribute only - right now. 
*/ + if (filter->filter_type == HWRM_CFA_EM_FILTER) + filter->flags = HWRM_CFA_EM_FLOW_ALLOC_INPUT_FLAGS_PATH_RX; + + switch (act->type) { + case RTE_FLOW_ACTION_TYPE_QUEUE: + /* Allow this flow. Redirect to a VNIC. */ + act_q = (const struct rte_flow_action_queue *)act->conf; + if (act_q->index >= bp->rx_nr_rings) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + act, + "Invalid queue ID."); + rc = -rte_errno; + goto ret; + } + PMD_DRV_LOG(DEBUG, "Queue index %d\n", act_q->index); + + vnic0 = STAILQ_FIRST(&bp->ff_pool[0]); + vnic = STAILQ_FIRST(&bp->ff_pool[act_q->index]); + if (vnic == NULL) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + act, + "No matching VNIC for queue ID."); + rc = -rte_errno; + goto ret; + } + + filter->dst_id = vnic->fw_vnic_id; + filter1 = bnxt_get_l2_filter(bp, filter, vnic); + if (filter1 == NULL) { + rc = -ENOSPC; + goto ret; + } + + filter->fw_l2_filter_id = filter1->fw_l2_filter_id; + PMD_DRV_LOG(DEBUG, "VNIC found\n"); + break; + case RTE_FLOW_ACTION_TYPE_DROP: + vnic0 = STAILQ_FIRST(&bp->ff_pool[0]); + filter1 = bnxt_get_l2_filter(bp, filter, vnic0); + if (filter1 == NULL) { + rc = -ENOSPC; + goto ret; + } + + filter->fw_l2_filter_id = filter1->fw_l2_filter_id; + if (filter->filter_type == HWRM_CFA_EM_FILTER) + filter->flags = + HWRM_CFA_EM_FLOW_ALLOC_INPUT_FLAGS_DROP; + else + filter->flags = + HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_FLAGS_DROP; + break; + case RTE_FLOW_ACTION_TYPE_COUNT: + vnic0 = STAILQ_FIRST(&bp->ff_pool[0]); + filter1 = bnxt_get_l2_filter(bp, filter, vnic0); + if (filter1 == NULL) { + rc = -ENOSPC; + goto ret; + } + + filter->fw_l2_filter_id = filter1->fw_l2_filter_id; + filter->flags = HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_FLAGS_METER; + break; + case RTE_FLOW_ACTION_TYPE_VF: + act_vf = (const struct rte_flow_action_vf *)act->conf; + vf = act_vf->id; + + if (!BNXT_PF(bp)) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + act, + "Configuring on a VF!"); + rc = -rte_errno; + goto ret; + } + + if (vf >= bp->pdev->max_vfs) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + act, + "Incorrect VF id!"); + rc = -rte_errno; + goto ret; + } + + filter->mirror_vnic_id = + dflt_vnic = bnxt_hwrm_func_qcfg_vf_dflt_vnic_id(bp, vf); + if (dflt_vnic < 0) { + /* This simply indicates there's no driver loaded. + * This is not an error. 
+ */ + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + act, + "Unable to get default VNIC for VF"); + rc = -rte_errno; + goto ret; + } + + filter->mirror_vnic_id = dflt_vnic; + filter->enables |= NTUPLE_FLTR_ALLOC_INPUT_EN_MIRROR_VNIC_ID; + + vnic0 = STAILQ_FIRST(&bp->ff_pool[0]); + filter1 = bnxt_get_l2_filter(bp, filter, vnic0); + if (filter1 == NULL) { + rc = -ENOSPC; + goto ret; + } + + filter->fw_l2_filter_id = filter1->fw_l2_filter_id; + break; + + default: + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + act, + "Invalid action."); + rc = -rte_errno; + goto ret; + } + + if (filter1) { + bnxt_free_filter(bp, filter1); + filter1->fw_l2_filter_id = -1; + } + + act = bnxt_flow_non_void_action(++act); + if (act->type != RTE_FLOW_ACTION_TYPE_END) { + rte_flow_error_set(error, + EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + act, + "Invalid action."); + rc = -rte_errno; + goto ret; + } +ret: + return rc; +} + +static int +bnxt_flow_validate(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct bnxt *bp = (struct bnxt *)dev->data->dev_private; + struct bnxt_filter_info *filter; + int ret = 0; + + ret = bnxt_flow_args_validate(attr, pattern, actions, error); + if (ret != 0) + return ret; + + filter = bnxt_get_unused_filter(bp); + if (filter == NULL) { + PMD_DRV_LOG(ERR, "Not enough resources for a new flow.\n"); + return -ENOMEM; + } + + ret = bnxt_validate_and_parse_flow(dev, pattern, actions, attr, + error, filter); + /* No need to hold on to this filter if we are just validating flow */ + filter->fw_l2_filter_id = UINT64_MAX; + bnxt_free_filter(bp, filter); + + return ret; +} + +static int +bnxt_match_filter(struct bnxt *bp, struct bnxt_filter_info *nf) +{ + struct bnxt_filter_info *mf; + struct rte_flow *flow; + int i; + + for (i = bp->nr_vnics - 1; i >= 0; i--) { + struct bnxt_vnic_info *vnic = &bp->vnic_info[i]; + + STAILQ_FOREACH(flow, &vnic->flow_list, next) { + mf = flow->filter; + + if (mf->filter_type == nf->filter_type && + mf->flags == nf->flags && + mf->src_port == nf->src_port && + mf->src_port_mask == nf->src_port_mask && + mf->dst_port == nf->dst_port && + mf->dst_port_mask == nf->dst_port_mask && + mf->ip_protocol == nf->ip_protocol && + mf->ip_addr_type == nf->ip_addr_type && + mf->ethertype == nf->ethertype && + mf->vni == nf->vni && + mf->tunnel_type == nf->tunnel_type && + mf->l2_ovlan == nf->l2_ovlan && + mf->l2_ovlan_mask == nf->l2_ovlan_mask && + mf->l2_ivlan == nf->l2_ivlan && + mf->l2_ivlan_mask == nf->l2_ivlan_mask && + !memcmp(mf->l2_addr, nf->l2_addr, ETHER_ADDR_LEN) && + !memcmp(mf->l2_addr_mask, nf->l2_addr_mask, + ETHER_ADDR_LEN) && + !memcmp(mf->src_macaddr, nf->src_macaddr, + ETHER_ADDR_LEN) && + !memcmp(mf->dst_macaddr, nf->dst_macaddr, + ETHER_ADDR_LEN) && + !memcmp(mf->src_ipaddr, nf->src_ipaddr, + sizeof(nf->src_ipaddr)) && + !memcmp(mf->src_ipaddr_mask, nf->src_ipaddr_mask, + sizeof(nf->src_ipaddr_mask)) && + !memcmp(mf->dst_ipaddr, nf->dst_ipaddr, + sizeof(nf->dst_ipaddr)) && + !memcmp(mf->dst_ipaddr_mask, nf->dst_ipaddr_mask, + sizeof(nf->dst_ipaddr_mask))) { + if (mf->dst_id == nf->dst_id) + return -EEXIST; + /* Same Flow, Different queue + * Clear the old ntuple filter + */ + if (nf->filter_type == HWRM_CFA_EM_FILTER) + bnxt_hwrm_clear_em_filter(bp, mf); + if (nf->filter_type == HWRM_CFA_NTUPLE_FILTER) + bnxt_hwrm_clear_ntuple_filter(bp, mf); + /* Free the old filter, update flow + * 
with new filter + */ + bnxt_free_filter(bp, mf); + flow->filter = nf; + return -EXDEV; + } + } + } + return 0; +} + +static struct rte_flow * +bnxt_flow_create(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct bnxt *bp = (struct bnxt *)dev->data->dev_private; + struct bnxt_filter_info *filter; + struct bnxt_vnic_info *vnic = NULL; + bool update_flow = false; + struct rte_flow *flow; + unsigned int i; + int ret = 0; + + flow = rte_zmalloc("bnxt_flow", sizeof(struct rte_flow), 0); + if (!flow) { + rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_HANDLE, NULL, + "Failed to allocate memory"); + return flow; + } + + ret = bnxt_flow_args_validate(attr, pattern, actions, error); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Not a validate flow.\n"); + goto free_flow; + } + + filter = bnxt_get_unused_filter(bp); + if (filter == NULL) { + PMD_DRV_LOG(ERR, "Not enough resources for a new flow.\n"); + goto free_flow; + } + + ret = bnxt_validate_and_parse_flow(dev, pattern, actions, attr, + error, filter); + if (ret != 0) + goto free_filter; + + ret = bnxt_match_filter(bp, filter); + if (ret == -EEXIST) { + PMD_DRV_LOG(DEBUG, "Flow already exists.\n"); + /* Clear the filter that was created as part of + * validate_and_parse_flow() above + */ + bnxt_hwrm_clear_l2_filter(bp, filter); + goto free_filter; + } else if (ret == -EXDEV) { + PMD_DRV_LOG(DEBUG, "Flow with same pattern exists\n"); + PMD_DRV_LOG(DEBUG, "Updating with different destination\n"); + update_flow = true; + } + + if (filter->filter_type == HWRM_CFA_EM_FILTER) { + filter->enables |= + HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_L2_FILTER_ID; + ret = bnxt_hwrm_set_em_filter(bp, filter->dst_id, filter); + } + + if (filter->filter_type == HWRM_CFA_NTUPLE_FILTER) { + filter->enables |= + HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_L2_FILTER_ID; + ret = bnxt_hwrm_set_ntuple_filter(bp, filter->dst_id, filter); + } + + for (i = 0; i < bp->nr_vnics; i++) { + vnic = &bp->vnic_info[i]; + if (filter->dst_id == vnic->fw_vnic_id) + break; + } + + if (!ret) { + flow->filter = filter; + flow->vnic = vnic; + if (update_flow) { + ret = -EXDEV; + goto free_flow; + } + PMD_DRV_LOG(ERR, "Successfully created flow.\n"); + STAILQ_INSERT_TAIL(&vnic->flow_list, flow, next); + return flow; + } +free_filter: + bnxt_free_filter(bp, filter); +free_flow: + if (ret == -EEXIST) + rte_flow_error_set(error, ret, + RTE_FLOW_ERROR_TYPE_HANDLE, NULL, + "Matching Flow exists."); + else if (ret == -EXDEV) + rte_flow_error_set(error, ret, + RTE_FLOW_ERROR_TYPE_HANDLE, NULL, + "Flow with pattern exists, updating destination queue"); + else + rte_flow_error_set(error, -ret, + RTE_FLOW_ERROR_TYPE_HANDLE, NULL, + "Failed to create flow."); + rte_free(flow); + flow = NULL; + return flow; +} + +static int +bnxt_flow_destroy(struct rte_eth_dev *dev, + struct rte_flow *flow, + struct rte_flow_error *error) +{ + struct bnxt *bp = (struct bnxt *)dev->data->dev_private; + struct bnxt_filter_info *filter = flow->filter; + struct bnxt_vnic_info *vnic = flow->vnic; + int ret = 0; + + ret = bnxt_match_filter(bp, filter); + if (ret == 0) + PMD_DRV_LOG(ERR, "Could not find matching flow\n"); + if (filter->filter_type == HWRM_CFA_EM_FILTER) + ret = bnxt_hwrm_clear_em_filter(bp, filter); + if (filter->filter_type == HWRM_CFA_NTUPLE_FILTER) + ret = bnxt_hwrm_clear_ntuple_filter(bp, filter); + else + ret = bnxt_hwrm_clear_l2_filter(bp, filter); + if (!ret) { + 
STAILQ_REMOVE(&vnic->flow_list, flow, rte_flow, next); + rte_free(flow); + } else { + rte_flow_error_set(error, -ret, + RTE_FLOW_ERROR_TYPE_HANDLE, NULL, + "Failed to destroy flow."); + } + + return ret; +} + +static int +bnxt_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error) +{ + struct bnxt *bp = (struct bnxt *)dev->data->dev_private; + struct bnxt_vnic_info *vnic; + struct rte_flow *flow; + unsigned int i; + int ret = 0; + + for (i = 0; i < bp->nr_vnics; i++) { + vnic = &bp->vnic_info[i]; + STAILQ_FOREACH(flow, &vnic->flow_list, next) { + struct bnxt_filter_info *filter = flow->filter; + + if (filter->filter_type == HWRM_CFA_EM_FILTER) + ret = bnxt_hwrm_clear_em_filter(bp, filter); + if (filter->filter_type == HWRM_CFA_NTUPLE_FILTER) + ret = bnxt_hwrm_clear_ntuple_filter(bp, filter); + + if (ret) { + rte_flow_error_set + (error, + -ret, + RTE_FLOW_ERROR_TYPE_HANDLE, + NULL, + "Failed to flush flow in HW."); + return -rte_errno; + } + + STAILQ_REMOVE(&vnic->flow_list, flow, + rte_flow, next); + rte_free(flow); + } + } + + return ret; +} + +const struct rte_flow_ops bnxt_flow_ops = { + .validate = bnxt_flow_validate, + .create = bnxt_flow_create, + .destroy = bnxt_flow_destroy, + .flush = bnxt_flow_flush, +}; From patchwork Tue Jun 19 21:30:50 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 41289 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id E40091B456; Tue, 19 Jun 2018 23:31:48 +0200 (CEST) Received: from rnd-relay.smtp.broadcom.com (rnd-relay.smtp.broadcom.com [192.19.229.170]) by dpdk.org (Postfix) with ESMTP id EEAF71B062 for ; Tue, 19 Jun 2018 23:31:09 +0200 (CEST) Received: from nis-sj1-27.broadcom.com (nis-sj1-27.lvn.broadcom.net [10.75.144.136]) by rnd-relay.smtp.broadcom.com (Postfix) with ESMTP id 64CAE30C051; Tue, 19 Jun 2018 14:31:08 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.10.3 rnd-relay.smtp.broadcom.com 64CAE30C051 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=broadcom.com; s=dkimrelay; t=1529443868; bh=C9hzDFOZgkcxOBWqLVCjEaLfXwLClki8bY98ZIXRco0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=uUX+A9LXMqHbqryAy9EgdQg3AnQ7zHAfmK/dg0nJq2F0AXO8m2Ax87ashQXwH/RRi upaLMZrgB+SNooTI4USdCXnMS1nwmp0brgS1/z3UZ7aRgtPngs78Jqr8mNp5xzlb3J YFdDQRkemxiYfDd7Ouz76fxmsEsG8qqXCmN//18o= Received: from C02VPB22HTD6.dhcp.broadcom.net (c02vpb22htd6.dhcp.broadcom.net [10.136.50.120]) by nis-sj1-27.broadcom.com (Postfix) with ESMTP id 9030FAC06AD; Tue, 19 Jun 2018 14:31:08 -0700 (PDT) From: Ajit Khaparde To: dev@dpdk.org Cc: ferruh.yigit@intel.com, Jay Ding Date: Tue, 19 Jun 2018 14:30:50 -0700 Message-Id: <20180619213058.12273-24-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.15.1 (Apple Git-101) In-Reply-To: <20180619213058.12273-1-ajit.khaparde@broadcom.com> References: <20180619213058.12273-1-ajit.khaparde@broadcom.com> Subject: [dpdk-dev] [PATCH 23/31] net/bnxt: check for invalid vnic id X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jay Ding Add checking for VNIC id before sending message to firmware in bnxt_hwrm_vnic_plcmode_cfg(). 
Signed-off-by: Jay Ding Reviewed-by: Randy Schacher Reviewed-by: Ajit Khaparde --- drivers/net/bnxt/bnxt_hwrm.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c index 64687a69b..910129f12 100644 --- a/drivers/net/bnxt/bnxt_hwrm.c +++ b/drivers/net/bnxt/bnxt_hwrm.c @@ -1560,6 +1560,11 @@ int bnxt_hwrm_vnic_plcmode_cfg(struct bnxt *bp, struct hwrm_vnic_plcmodes_cfg_output *resp = bp->hwrm_cmd_resp_addr; uint16_t size; + if (vnic->fw_vnic_id == INVALID_HW_RING_ID) { + PMD_DRV_LOG(DEBUG, "VNIC ID %x\n", vnic->fw_vnic_id); + return rc; + } + HWRM_PREP(req, VNIC_PLCMODES_CFG); req.flags = rte_cpu_to_le_32( From patchwork Tue Jun 19 21:30:51 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 41295 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 46FEA1B4B0; Tue, 19 Jun 2018 23:32:09 +0200 (CEST) Received: from rnd-relay.smtp.broadcom.com (rnd-relay.smtp.broadcom.com [192.19.229.170]) by dpdk.org (Postfix) with ESMTP id 39A201B059 for ; Tue, 19 Jun 2018 23:31:10 +0200 (CEST) Received: from nis-sj1-27.broadcom.com (nis-sj1-27.lvn.broadcom.net [10.75.144.136]) by rnd-relay.smtp.broadcom.com (Postfix) with ESMTP id 1CE5330C07A; Tue, 19 Jun 2018 14:31:09 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.10.3 rnd-relay.smtp.broadcom.com 1CE5330C07A DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=broadcom.com; s=dkimrelay; t=1529443869; bh=E1lM5NKlJ4jie1HmLC7fchZJuvh6UzwGzLHaszBg+18=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=h8R4ZFbEOvel+ALszaHax5InkPa8W+nc5vAwCv/6s5bUpeZa8o/dDkNfwXpp7rfqc LOuiZM3DbePOuyUG581i81BoM1Eg89I5t8FhWnEialnEWfiQ7tir248ChU5boWn/RR 6GwZBWOSqBCo2fbrxRThhd94Ot/NW9RVEEQdVk30= Received: from C02VPB22HTD6.dhcp.broadcom.net (c02vpb22htd6.dhcp.broadcom.net [10.136.50.120]) by nis-sj1-27.broadcom.com (Postfix) with ESMTP id E0443AC0799; Tue, 19 Jun 2018 14:31:08 -0700 (PDT) From: Ajit Khaparde To: dev@dpdk.org Cc: ferruh.yigit@intel.com, Rob Miller , Rob Miller Date: Tue, 19 Jun 2018 14:30:51 -0700 Message-Id: <20180619213058.12273-25-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.15.1 (Apple Git-101) In-Reply-To: <20180619213058.12273-1-ajit.khaparde@broadcom.com> References: <20180619213058.12273-1-ajit.khaparde@broadcom.com> Subject: [dpdk-dev] [PATCH 24/31] net/bnxt: update HWRM API to v1.9.2.9 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Rob Miller update HWRM API to v1.9.2.9 Signed-off-by: Rob Miller Reviewed-by: Scott Branden Reviewed-by: Ajit Kumar Khaparde Reviewed-by: Randy Schacher Signed-off-by: Rob Miller --- drivers/net/bnxt/hsi_struct_def_dpdk.h | 113 ++++++++++++++++++++++++++++++++- 1 file changed, 111 insertions(+), 2 deletions(-) diff --git a/drivers/net/bnxt/hsi_struct_def_dpdk.h b/drivers/net/bnxt/hsi_struct_def_dpdk.h index fd6d8807e..f5c7b4228 100644 --- a/drivers/net/bnxt/hsi_struct_def_dpdk.h +++ b/drivers/net/bnxt/hsi_struct_def_dpdk.h @@ -686,8 +686,8 @@ struct hwrm_err_output { #define HWRM_VERSION_MINOR 9 #define HWRM_VERSION_UPDATE 2 /* non-zero means beta version */ -#define HWRM_VERSION_RSVD 6 -#define HWRM_VERSION_STR 
"1.9.2.6" +#define HWRM_VERSION_RSVD 9 +#define HWRM_VERSION_STR "1.9.2.9" /**************** * hwrm_ver_get * @@ -3183,6 +3183,9 @@ struct hwrm_async_event_cmpl { /* LLFC/PFC Configuration Change */ #define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_LLFC_PFC_CHANGE \ UINT32_C(0x34) + /* Default VNIC Configuration Change */ + #define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_DEFAULT_VNIC_CHANGE \ + UINT32_C(0x35) /* HWRM Error */ #define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_HWRM_ERROR \ UINT32_C(0xff) @@ -3280,6 +3283,11 @@ struct hwrm_async_event_cmpl_link_status_change { UINT32_C(0xffff0) #define HWRM_ASYNC_EVENT_CMPL_LINK_STATUS_CHANGE_EVENT_DATA1_PORT_ID_SFT \ 4 + /* Indicates the physical function this event occured on. */ + #define HWRM_ASYNC_EVENT_CMPL_LINK_STATUS_CHANGE_EVENT_DATA1_PF_ID_MASK \ + UINT32_C(0xff00000) + #define HWRM_ASYNC_EVENT_CMPL_LINK_STATUS_CHANGE_EVENT_DATA1_PF_ID_SFT \ + 20 } __attribute__((packed)); /* hwrm_async_event_cmpl_link_mtu_change (size:128b/16B) */ @@ -4087,6 +4095,10 @@ struct hwrm_async_event_cmpl_vf_flr { #define HWRM_ASYNC_EVENT_CMPL_VF_FLR_EVENT_DATA1_VF_ID_MASK \ UINT32_C(0xffff) #define HWRM_ASYNC_EVENT_CMPL_VF_FLR_EVENT_DATA1_VF_ID_SFT 0 + /* Indicates the physical function this event occured on. */ + #define HWRM_ASYNC_EVENT_CMPL_VF_FLR_EVENT_DATA1_PF_ID_MASK \ + UINT32_C(0xff0000) + #define HWRM_ASYNC_EVENT_CMPL_VF_FLR_EVENT_DATA1_PF_ID_SFT 16 } __attribute__((packed)); /* hwrm_async_event_cmpl_vf_mac_addr_change (size:128b/16B) */ @@ -4354,6 +4366,88 @@ struct hwrm_async_event_cmpl_llfc_pfc_change { 5 } __attribute__((packed)); +/* hwrm_async_event_cmpl_default_vnic_change (size:128b/16B) */ +struct hwrm_async_event_cmpl_default_vnic_change { + uint16_t type; + /* + * This field indicates the exact type of the completion. + * By convention, the LSB identifies the length of the + * record in 16B units. Even values indicate 16B + * records. Odd values indicate 32B + * records. + */ + #define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_TYPE_MASK \ + UINT32_C(0x3f) + #define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_TYPE_SFT \ + 0 + /* HWRM Asynchronous Event Information */ + #define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_TYPE_HWRM_ASYNC_EVENT \ + UINT32_C(0x2e) + #define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_TYPE_LAST \ + HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_TYPE_HWRM_ASYNC_EVENT + /* unused1 is 10 b */ + #define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_UNUSED1_MASK \ + UINT32_C(0xffc0) + #define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_UNUSED1_SFT \ + 6 + /* Identifiers of events. */ + uint16_t event_id; + /* Notification of a default vnic allocaiton or free */ + #define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_ID_ALLOC_FREE_NOTIFICATION \ + UINT32_C(0x35) + #define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_ID_LAST \ + HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_ID_ALLOC_FREE_NOTIFICATION + /* Event specific data */ + uint32_t event_data2; + uint8_t opaque_v; + /* + * This value is written by the NIC such that it will be different + * for each pass through the completion queue. The even passes + * will write 1. The odd passes will write 0. 
+ */ + #define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_V \ + UINT32_C(0x1) + /* opaque is 7 b */ + #define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_OPAQUE_MASK \ + UINT32_C(0xfe) + #define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_OPAQUE_SFT 1 + /* 8-lsb timestamp from POR (100-msec resolution) */ + uint8_t timestamp_lo; + /* 16-lsb timestamp from POR (100-msec resolution) */ + uint16_t timestamp_hi; + /* Event specific data */ + uint32_t event_data1; + /* Indicates default vnic configuration change */ + #define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_DEF_VNIC_STATE_MASK \ + UINT32_C(0x3) + #define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_DEF_VNIC_STATE_SFT \ + 0 + /* + * If this field is set to 1, then it indicates that + * a default VNIC has been allocate. + */ + #define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_DEF_VNIC_STATE_DEF_VNIC_ALLOC \ + UINT32_C(0x1) + /* + * If this field is set to 2, then it indicates that + * a default VNIC has been freed. + */ + #define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_DEF_VNIC_STATE_DEF_VNIC_FREE \ + UINT32_C(0x2) + #define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_DEF_VNIC_STATE_LAST \ + HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_DEF_VNIC_STATE_DEF_VNIC_FREE + /* Indicates the physical function this event occured on. */ + #define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_PF_ID_MASK \ + UINT32_C(0x3fc) + #define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_PF_ID_SFT \ + 2 + /* Indicates the virtual function this event occured on */ + #define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_VF_ID_MASK \ + UINT32_C(0x3fffc00) + #define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_VF_ID_SFT \ + 10 +} __attribute__((packed)); + /* hwrm_async_event_cmpl_hwrm_error (size:128b/16B) */ struct hwrm_async_event_cmpl_hwrm_error { uint16_t type; @@ -5196,6 +5290,21 @@ struct hwrm_func_qcaps_output { */ #define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_PCIE_STATS_SUPPORTED \ UINT32_C(0x10000) + /* + * If the query is for a VF, then this flag shall be ignored, + * If this query is for a PF and this flag is set to 1, + * then the PF has the capability to adopt the VF's belonging + * to another PF. + */ + #define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_ADOPTED_PF_SUPPORTED \ + UINT32_C(0x20000) + /* + * If the query is for a VF, then this flag shall be ignored, + * If this query is for a PF and this flag is set to 1, + * then the PF has the capability to administer another PF. + */ + #define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_ADMIN_PF_SUPPORTED \ + UINT32_C(0x40000) /* * This value is current MAC address configured for this * function. 
A value of 00-00-00-00-00-00 indicates no From patchwork Tue Jun 19 21:30:52 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 41297 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 3948E1B4BE; Tue, 19 Jun 2018 23:32:13 +0200 (CEST) Received: from rnd-relay.smtp.broadcom.com (rnd-relay.smtp.broadcom.com [192.19.229.170]) by dpdk.org (Postfix) with ESMTP id 994D41B05C; Tue, 19 Jun 2018 23:31:10 +0200 (CEST) Received: from nis-sj1-27.broadcom.com (nis-sj1-27.lvn.broadcom.net [10.75.144.136]) by rnd-relay.smtp.broadcom.com (Postfix) with ESMTP id 785FA30C07B; Tue, 19 Jun 2018 14:31:09 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.10.3 rnd-relay.smtp.broadcom.com 785FA30C07B DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=broadcom.com; s=dkimrelay; t=1529443869; bh=utLYzFdEtNWK1V7azs/PVg0TXdKjXEaNrxq+uTGTXPA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=pWb36w5nr/yZlcJazrtI78xUU7Qil0O4mXQolCRypa7a84Gr8bU3+KC1YXF5eYhJe 5ofKuysHZtgt8+vdiuOBr5Bq9dQSuU9h+UxZe3OiZ2J6Pg6Rhl2G6GKUxMpxoXQJtY FaIKR7qghRxNkhAvZk6/EoOPVY1EOD6Wfx6JgrbI= Received: from C02VPB22HTD6.dhcp.broadcom.net (c02vpb22htd6.dhcp.broadcom.net [10.136.50.120]) by nis-sj1-27.broadcom.com (Postfix) with ESMTP id 48AF5AC06AD; Tue, 19 Jun 2018 14:31:09 -0700 (PDT) From: Ajit Khaparde To: dev@dpdk.org Cc: ferruh.yigit@intel.com, Xiaoxin Peng , stable@dpdk.org Date: Tue, 19 Jun 2018 14:30:52 -0700 Message-Id: <20180619213058.12273-26-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.15.1 (Apple Git-101) In-Reply-To: <20180619213058.12273-1-ajit.khaparde@broadcom.com> References: <20180619213058.12273-1-ajit.khaparde@broadcom.com> Subject: [dpdk-dev] [PATCH 25/31] net/bnxt: fix Tx with multiple mbuf X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Xiaoxin Peng When using multi-mbuf to xmit large packets, we need to use total packet lengths (sum of all segments) to set txbd->flags_type. Packets will not be sent when using tx_pkt->data_len(The first segment of packets). 
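As a rough illustration (a hypothetical helper written for this description, not code from the patch; lhint_arr[] and TX_BD_LONG_FLAGS_LHINT_GTE2K are the driver symbols visible in the diff below), the length hint has to be derived from the total length of the mbuf chain rather than from the first segment:

    static uint32_t bnxt_pick_lhint(const struct rte_mbuf *tx_pkt,
                                    const uint32_t lhint_arr[4])
    {
        /* e.g. three 1000-byte segments: data_len == 1000, pkt_len == 3000 */
        if (tx_pkt->pkt_len >= 2014)
            return TX_BD_LONG_FLAGS_LHINT_GTE2K;
        /* pkt_len < 2014, so pkt_len >> 9 stays within lhint_arr[0..3] */
        return lhint_arr[tx_pkt->pkt_len >> 9];
    }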
Fixes: 6eb3cc2294fd ("net/bnxt: add initial Tx code") Cc: stable@dpdk.org Signed-off-by: Xiaoxin Peng Reviewed-by: Herry Chen Reviewed-by: Jason He Reviewed-by: Scott Branden Reviewed-by: Ajit Kumar Khaparde --- drivers/net/bnxt/bnxt_txr.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c index f8fd22156..23c8e6660 100644 --- a/drivers/net/bnxt/bnxt_txr.c +++ b/drivers/net/bnxt/bnxt_txr.c @@ -160,10 +160,10 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt, *cmpl_next = false; } txbd->len = tx_pkt->data_len; - if (txbd->len >= 2014) + if (tx_pkt->pkt_len >= 2014) txbd->flags_type |= TX_BD_LONG_FLAGS_LHINT_GTE2K; else - txbd->flags_type |= lhint_arr[txbd->len >> 9]; + txbd->flags_type |= lhint_arr[tx_pkt->pkt_len >> 9]; txbd->address = rte_cpu_to_le_32(rte_mbuf_data_iova(tx_buf->mbuf)); if (long_bd) { From patchwork Tue Jun 19 21:30:53 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 41296 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id AED321B4B9; Tue, 19 Jun 2018 23:32:11 +0200 (CEST) Received: from rnd-relay.smtp.broadcom.com (rnd-relay.smtp.broadcom.com [192.19.229.170]) by dpdk.org (Postfix) with ESMTP id 99FD91B05D for ; Tue, 19 Jun 2018 23:31:10 +0200 (CEST) Received: from nis-sj1-27.broadcom.com (nis-sj1-27.lvn.broadcom.net [10.75.144.136]) by rnd-relay.smtp.broadcom.com (Postfix) with ESMTP id 8348B30C07C; Tue, 19 Jun 2018 14:31:09 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.10.3 rnd-relay.smtp.broadcom.com 8348B30C07C DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=broadcom.com; s=dkimrelay; t=1529443869; bh=maLnpqXKVb8bDzJkuyfX47Bz+QBbwOiy6TozKkqWceA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=iZcPEhUsKt7MfjkWGBwYOdNFLWY6JHUyGItLFvAFWPq/aryTK8aOG36piBt31adOP BXAUEJweE0Bl4sRi9HCNVnlT2CDVsLDWSRAy7Qj5XoWS1IK/UwiJ9baC40+Pn7mbye igQOBWKg3Ch0gAcOSf7O13KcTVTRYYZB1duv+Z+4= Received: from C02VPB22HTD6.dhcp.broadcom.net (c02vpb22htd6.dhcp.broadcom.net [10.136.50.120]) by nis-sj1-27.broadcom.com (Postfix) with ESMTP id A39E3AC0768; Tue, 19 Jun 2018 14:31:09 -0700 (PDT) From: Ajit Khaparde To: dev@dpdk.org Cc: ferruh.yigit@intel.com, Somnath Kotur , ajit.khaparde@broadcom.com Date: Tue, 19 Jun 2018 14:30:53 -0700 Message-Id: <20180619213058.12273-27-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.15.1 (Apple Git-101) In-Reply-To: <20180619213058.12273-1-ajit.khaparde@broadcom.com> References: <20180619213058.12273-1-ajit.khaparde@broadcom.com> Subject: [dpdk-dev] [PATCH 26/31] net/bnxt: Revert reset of L2 filter id in clear_ntuple_filter X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Somnath Kotur The L2 filter id is needed in many scenarios particularly when we are repurposing the same ntuple filter with different destination queues. 
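For example (a hypothetical sequence sketched from the -EXDEV path in bnxt_flow_create()/bnxt_match_filter() above, not a literal excerpt), redirecting an existing flow to another queue clears and re-programs only the ntuple rule while the already programmed L2 filter keeps being referenced:

    bnxt_hwrm_clear_ntuple_filter(bp, filter);  /* resets fw_ntuple_filter_id,
                                                 * must not wipe fw_l2_filter_id
                                                 */
    filter->dst_id = new_vnic->fw_vnic_id;      /* new_vnic: VNIC of the new
                                                 * destination queue (assumed)
                                                 */
    bnxt_hwrm_set_ntuple_filter(bp, filter->dst_id, filter);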
Fixes: 1383434c9089("net/bnxt: reset L2 filter id once filter is freed") Cc: ajit.khaparde@broadcom.com Signed-off-by: Somnath Kotur --- drivers/net/bnxt/bnxt_hwrm.c | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c index 910129f12..ba8e44a9b 100644 --- a/drivers/net/bnxt/bnxt_hwrm.c +++ b/drivers/net/bnxt/bnxt_hwrm.c @@ -3798,7 +3798,6 @@ int bnxt_hwrm_clear_ntuple_filter(struct bnxt *bp, HWRM_UNLOCK(); filter->fw_ntuple_filter_id = UINT64_MAX; - filter->fw_l2_filter_id = UINT64_MAX; return 0; } From patchwork Tue Jun 19 21:30:54 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 41298 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id AA4E51B4C8; Tue, 19 Jun 2018 23:32:14 +0200 (CEST) Received: from rnd-relay.smtp.broadcom.com (rnd-relay.smtp.broadcom.com [192.19.229.170]) by dpdk.org (Postfix) with ESMTP id 3D7BB1B056 for ; Tue, 19 Jun 2018 23:31:11 +0200 (CEST) Received: from nis-sj1-27.broadcom.com (nis-sj1-27.lvn.broadcom.net [10.75.144.136]) by rnd-relay.smtp.broadcom.com (Postfix) with ESMTP id 26FC530C03F; Tue, 19 Jun 2018 14:31:10 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.10.3 rnd-relay.smtp.broadcom.com 26FC530C03F DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=broadcom.com; s=dkimrelay; t=1529443870; bh=oIe5dsW43Cc/uaEICkcCzPS9WFEZPihAnrV/kpAXD9A=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=CK4n5xemk2ADFqmB4+HHT5yGtA+00772I4iNvvXoxP2FZ7Xt7/gB+1W9u3QR1o8lc lr7x4LXROCyhwkFUcspZgeVl0YK7gr4g/eR5OCBW2QvamjYvSImVmqPvd+nD/cqC8g d54WvOtCfBcneMQXylg9hVqrLunz7R6+4fwEpuzM= Received: from C02VPB22HTD6.dhcp.broadcom.net (c02vpb22htd6.dhcp.broadcom.net [10.136.50.120]) by nis-sj1-27.broadcom.com (Postfix) with ESMTP id 0A7D3AC06AD; Tue, 19 Jun 2018 14:31:10 -0700 (PDT) From: Ajit Khaparde To: dev@dpdk.org Cc: ferruh.yigit@intel.com Date: Tue, 19 Jun 2018 14:30:54 -0700 Message-Id: <20180619213058.12273-28-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.15.1 (Apple Git-101) In-Reply-To: <20180619213058.12273-1-ajit.khaparde@broadcom.com> References: <20180619213058.12273-1-ajit.khaparde@broadcom.com> Subject: [dpdk-dev] [PATCH 27/31] net/bnxt: check filter type before clearing it X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" In bnxt_free_filter_mem(), check the filter type and call the appropriate HWRM command to clear the filter from HW. 
Signed-off-by: Ajit Khaparde --- drivers/net/bnxt/bnxt_filter.c | 21 +++++++++++++++++---- 1 file changed, 17 insertions(+), 4 deletions(-) diff --git a/drivers/net/bnxt/bnxt_filter.c b/drivers/net/bnxt/bnxt_filter.c index 31757d32c..1038941e8 100644 --- a/drivers/net/bnxt/bnxt_filter.c +++ b/drivers/net/bnxt/bnxt_filter.c @@ -117,16 +117,29 @@ void bnxt_free_filter_mem(struct bnxt *bp) max_filters = bp->max_l2_ctx; for (i = 0; i < max_filters; i++) { filter = &bp->filter_info[i]; - if (filter->fw_l2_filter_id != ((uint64_t)-1)) { - PMD_DRV_LOG(ERR, "HWRM filter is not freed??\n"); + if (filter->fw_l2_filter_id != ((uint64_t)-1) && + filter->filter_type == HWRM_CFA_L2_FILTER) { + PMD_DRV_LOG(ERR, "L2 filter is not free\n"); /* Call HWRM to try to free filter again */ rc = bnxt_hwrm_clear_l2_filter(bp, filter); if (rc) PMD_DRV_LOG(ERR, - "HWRM filter cannot be freed rc = %d\n", - rc); + "Cannot free L2 filter: %d\n", + rc); } filter->fw_l2_filter_id = UINT64_MAX; + + if (filter->fw_ntuple_filter_id != ((uint64_t)-1) && + filter->filter_type == HWRM_CFA_NTUPLE_FILTER) { + PMD_DRV_LOG(ERR, "NTUPLE filter is not free\n"); + /* Call HWRM to try to free filter again */ + rc = bnxt_hwrm_clear_ntuple_filter(bp, filter); + if (rc) + PMD_DRV_LOG(ERR, + "Cannot free NTUPLE filter: %d\n", + rc); + } + filter->fw_ntuple_filter_id = UINT64_MAX; } STAILQ_INIT(&bp->free_filter_list); From patchwork Tue Jun 19 21:30:55 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 41300 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 9D9B91B4D1; Tue, 19 Jun 2018 23:32:18 +0200 (CEST) Received: from rnd-relay.smtp.broadcom.com (rnd-relay.smtp.broadcom.com [192.19.229.170]) by dpdk.org (Postfix) with ESMTP id 91B011B059; Tue, 19 Jun 2018 23:31:11 +0200 (CEST) Received: from nis-sj1-27.broadcom.com (nis-sj1-27.lvn.broadcom.net [10.75.144.136]) by rnd-relay.smtp.broadcom.com (Postfix) with ESMTP id 7A39130C07D; Tue, 19 Jun 2018 14:31:10 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.10.3 rnd-relay.smtp.broadcom.com 7A39130C07D DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=broadcom.com; s=dkimrelay; t=1529443870; bh=eLL8HHYQ0hYo6m6jeC0VVjDmsY7rvkQCH2WIdr0k/Ns=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=GqBfY8qsXeojhPwudGlIMmRFRY3waTDQJbyUyHFgQ8hMlka+ku8Q3KQO1ySaq03tG wMWIc14hYx30HiXfFapCXFJq5O7EcV3Lyy0IMlqZb8hIDjTBKVcTgOe/p387nT4Ctf Oq8JmNFYFVUtcFGogPevrLhsY8hBfEq2gQAdknhI= Received: from C02VPB22HTD6.dhcp.broadcom.net (c02vpb22htd6.dhcp.broadcom.net [10.136.50.120]) by nis-sj1-27.broadcom.com (Postfix) with ESMTP id 52562AC0768; Tue, 19 Jun 2018 14:31:10 -0700 (PDT) From: Ajit Khaparde To: dev@dpdk.org Cc: ferruh.yigit@intel.com, stable@dpdk.org Date: Tue, 19 Jun 2018 14:30:55 -0700 Message-Id: <20180619213058.12273-29-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.15.1 (Apple Git-101) In-Reply-To: <20180619213058.12273-1-ajit.khaparde@broadcom.com> References: <20180619213058.12273-1-ajit.khaparde@broadcom.com> Subject: [dpdk-dev] [PATCH 28/31] net/bnxt: fix set MTU X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" There is no need to update hardware 
configuration if new MTU is not greater than the max data the mbuf can accommodate. Fixes: daef48efe5e5 ("net/bnxt: support set MTU") Cc: stable@dpdk.org Signed-off-by: Ajit Khaparde --- drivers/net/bnxt/bnxt_ethdev.c | 12 +++++++++--- 1 file changed, 9 insertions(+), 3 deletions(-) diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index 9cfa43778..1145bc195 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -1597,6 +1597,7 @@ static int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu) for (i = 0; i < bp->nr_vnics; i++) { struct bnxt_vnic_info *vnic = &bp->vnic_info[i]; + uint16_t size = 0; vnic->mru = bp->eth_dev->data->mtu + ETHER_HDR_LEN + ETHER_CRC_LEN + VLAN_TAG_SIZE * 2; @@ -1604,9 +1605,14 @@ static int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu) if (rc) break; - rc = bnxt_hwrm_vnic_plcmode_cfg(bp, vnic); - if (rc) - return rc; + size = rte_pktmbuf_data_room_size(bp->rx_queues[0]->mb_pool); + size -= RTE_PKTMBUF_HEADROOM; + + if (size < new_mtu) { + rc = bnxt_hwrm_vnic_plcmode_cfg(bp, vnic); + if (rc) + return rc; + } } return rc; From patchwork Tue Jun 19 21:30:56 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 41299 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id A60431B4CF; Tue, 19 Jun 2018 23:32:16 +0200 (CEST) Received: from rnd-relay.smtp.broadcom.com (rnd-relay.smtp.broadcom.com [192.19.229.170]) by dpdk.org (Postfix) with ESMTP id 917691B057; Tue, 19 Jun 2018 23:31:11 +0200 (CEST) Received: from nis-sj1-27.broadcom.com (nis-sj1-27.lvn.broadcom.net [10.75.144.136]) by rnd-relay.smtp.broadcom.com (Postfix) with ESMTP id 7B78C30C07E; Tue, 19 Jun 2018 14:31:10 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.10.3 rnd-relay.smtp.broadcom.com 7B78C30C07E DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=broadcom.com; s=dkimrelay; t=1529443870; bh=PekTvKybUCrJKzY84kWwZkURrM3ifdOn0tlWT6btxKU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=EsJFtvH1TeOHJiFrir23xuikWUq8Ni+ekxEQu66j/czw1V3EnmWG1bJmL7H+aDZfq uHclew/kQKR0+JWEEKT6L4eRFySwXHSSFjnd+NPxK86MmiEZEBytErKx/+33yCIxZ0 WY6Cme0c5Xy2SC+hL/JB7ThLhtHWzkSzaS+gsiCE= Received: from C02VPB22HTD6.dhcp.broadcom.net (c02vpb22htd6.dhcp.broadcom.net [10.136.50.120]) by nis-sj1-27.broadcom.com (Postfix) with ESMTP id A633AAC0799; Tue, 19 Jun 2018 14:31:10 -0700 (PDT) From: Ajit Khaparde To: dev@dpdk.org Cc: ferruh.yigit@intel.com, stable@dpdk.org Date: Tue, 19 Jun 2018 14:30:56 -0700 Message-Id: <20180619213058.12273-30-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.15.1 (Apple Git-101) In-Reply-To: <20180619213058.12273-1-ajit.khaparde@broadcom.com> References: <20180619213058.12273-1-ajit.khaparde@broadcom.com> Subject: [dpdk-dev] [PATCH 29/31] net/bnxt: fix incorrect IO address handling in Tx X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" rte_mbuf_data_iova returns a 64-bit address. But we are incorrectly using only 32-bits of that. 
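To see why this matters, a small standalone sketch (illustrative only, not the driver code; the IOVA value is an arbitrary example): passing a 64-bit IOVA through the 32-bit byte-order helper silently drops the upper 32 bits, so any mbuf whose data happens to sit above 4 GB would hand the NIC a wrong DMA address.

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>
#include <rte_byteorder.h>

int main(void)
{
	uint64_t iova = 0x000000123456789aULL;	/* example IOVA above 4 GB */

	/* The implicit conversion to uint32_t truncates the address. */
	uint32_t bad  = rte_cpu_to_le_32(iova);
	/* The 64-bit variant preserves the whole address. */
	uint64_t good = rte_cpu_to_le_64(iova);

	printf("truncated: 0x%08" PRIx32 "  full: 0x%016" PRIx64 "\n",
	       bad, good);
	return 0;
}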
Use rte_cpu_to_le_64 instead of rte_cpu_to_le_32 Fixes: 6eb3cc2294fd ("net/bnxt: add initial Tx code") Cc: stable@dpdk.org Signed-off-by: Ajit Khaparde --- drivers/net/bnxt/bnxt_txr.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c index 23c8e6660..4e684f208 100644 --- a/drivers/net/bnxt/bnxt_txr.c +++ b/drivers/net/bnxt/bnxt_txr.c @@ -164,7 +164,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt, txbd->flags_type |= TX_BD_LONG_FLAGS_LHINT_GTE2K; else txbd->flags_type |= lhint_arr[tx_pkt->pkt_len >> 9]; - txbd->address = rte_cpu_to_le_32(rte_mbuf_data_iova(tx_buf->mbuf)); + txbd->address = rte_cpu_to_le_64(rte_mbuf_data_iova(tx_buf->mbuf)); if (long_bd) { txbd->flags_type |= TX_BD_LONG_TYPE_TX_BD_LONG; @@ -287,7 +287,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt, tx_buf = &txr->tx_buf_ring[txr->tx_prod]; txbd = &txr->tx_desc_ring[txr->tx_prod]; - txbd->address = rte_cpu_to_le_32(rte_mbuf_data_iova(m_seg)); + txbd->address = rte_cpu_to_le_64(rte_mbuf_data_iova(m_seg)); txbd->flags_type |= TX_BD_SHORT_TYPE_TX_BD_SHORT; txbd->len = m_seg->data_len; From patchwork Tue Jun 19 21:30:57 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 41301 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 480701B4DC; Tue, 19 Jun 2018 23:32:20 +0200 (CEST) Received: from rnd-relay.smtp.broadcom.com (rnd-relay.smtp.broadcom.com [192.19.229.170]) by dpdk.org (Postfix) with ESMTP id 34B321B056 for ; Tue, 19 Jun 2018 23:31:12 +0200 (CEST) Received: from nis-sj1-27.broadcom.com (nis-sj1-27.lvn.broadcom.net [10.75.144.136]) by rnd-relay.smtp.broadcom.com (Postfix) with ESMTP id 1E8ED30C07F; Tue, 19 Jun 2018 14:31:11 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.10.3 rnd-relay.smtp.broadcom.com 1E8ED30C07F DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=broadcom.com; s=dkimrelay; t=1529443871; bh=AKPlHYB4nDzf44sWe4Iw48BZzRXGzomcn5ZC9ycRIz8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=H91NFUHx/WBT5H91o0svg1n7xrzbqZWrs878AAsKSkU0G+rRLRu/rrn+g2E6FSj7+ 8KOt0fYH+2PKxySmAJ2rxRbkN5DU6zuk7SgDjp3Y82VfugsH39IqpODcnLIqnBP4pb /3mn+d2l4rj0UAzQju8z7IYrCY9143VApsWAUm9A= Received: from C02VPB22HTD6.dhcp.broadcom.net (c02vpb22htd6.dhcp.broadcom.net [10.136.50.120]) by nis-sj1-27.broadcom.com (Postfix) with ESMTP id 03479AC06AD; Tue, 19 Jun 2018 14:31:10 -0700 (PDT) From: Ajit Khaparde To: dev@dpdk.org Cc: ferruh.yigit@intel.com Date: Tue, 19 Jun 2018 14:30:57 -0700 Message-Id: <20180619213058.12273-31-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.15.1 (Apple Git-101) In-Reply-To: <20180619213058.12273-1-ajit.khaparde@broadcom.com> References: <20180619213058.12273-1-ajit.khaparde@broadcom.com> Subject: [dpdk-dev] [PATCH 30/31] net/bnxt: allocate RSS context only if RSS mode is enabled. X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" allocate RSS context only if RSS mode is enabled. 
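The gating condition shown in isolation: maybe_alloc_rss_ctx() and its alloc_ctx callback are stand-ins invented for this sketch, not driver functions; only the rxmode.mq_mode check against ETH_MQ_RX_RSS mirrors what the patch does during chip init.

#include <rte_ethdev.h>

/* Allocate a VNIC's RSS context only when the port was configured with
 * an RSS multi-queue Rx mode; otherwise skip the firmware call entirely.
 */
static int maybe_alloc_rss_ctx(const struct rte_eth_conf *dev_conf,
			       int (*alloc_ctx)(void *vnic), void *vnic)
{
	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS)
		return alloc_ctx(vnic);

	return 0;	/* nothing to allocate when RSS is not enabled */
}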
Signed-off-by: Ajit Khaparde --- drivers/net/bnxt/bnxt_ethdev.c | 16 ++++++++++------ 1 file changed, 10 insertions(+), 6 deletions(-) diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index 1145bc195..dfae6e2d2 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -248,6 +248,7 @@ static int bnxt_init_chip(struct bnxt *bp) /* VNIC configuration */ for (i = 0; i < bp->nr_vnics; i++) { + struct rte_eth_conf *dev_conf = &bp->eth_dev->data->dev_conf; struct bnxt_vnic_info *vnic = &bp->vnic_info[i]; rc = bnxt_hwrm_vnic_alloc(bp, vnic); @@ -257,12 +258,15 @@ static int bnxt_init_chip(struct bnxt *bp) goto err_out; } - rc = bnxt_hwrm_vnic_ctx_alloc(bp, vnic); - if (rc) { - PMD_DRV_LOG(ERR, - "HWRM vnic %d ctx alloc failure rc: %x\n", - i, rc); - goto err_out; + /* Alloc RSS context only if RSS mode is enabled */ + if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS) { + rc = bnxt_hwrm_vnic_ctx_alloc(bp, vnic); + if (rc) { + PMD_DRV_LOG(ERR, + "HWRM vnic %d ctx alloc failure rc: %x\n", + i, rc); + goto err_out; + } } rc = bnxt_hwrm_vnic_cfg(bp, vnic); From patchwork Tue Jun 19 21:30:58 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 41302 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 08AD81B4E4; Tue, 19 Jun 2018 23:32:22 +0200 (CEST) Received: from rnd-relay.smtp.broadcom.com (rnd-relay.smtp.broadcom.com [192.19.229.170]) by dpdk.org (Postfix) with ESMTP id 9073C1B057; Tue, 19 Jun 2018 23:31:12 +0200 (CEST) Received: from nis-sj1-27.broadcom.com (nis-sj1-27.lvn.broadcom.net [10.75.144.136]) by rnd-relay.smtp.broadcom.com (Postfix) with ESMTP id 7954B30C044; Tue, 19 Jun 2018 14:31:11 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.10.3 rnd-relay.smtp.broadcom.com 7954B30C044 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=broadcom.com; s=dkimrelay; t=1529443871; bh=lu9sspGwTPjftqEYzeLvFxv0aXt+YHt2hN/cXvEyesY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=wOZzYLgA0th4jRp2Dj8L9Ee8ETBCB0BxtijP+FAFH96C7dUDqIvO2pBRg8t7VZBpi OX6JRaxanfRWrbRdoQWfPSE2Bykczc/sDcUJTVjK4dZQrwcr/OmAEn4ddH5IL9Cpcu 2cvXYRDidYQu9DKJ6xsjcFOpgdHUidtEr8zseNVo= Received: from C02VPB22HTD6.dhcp.broadcom.net (c02vpb22htd6.dhcp.broadcom.net [10.136.50.120]) by nis-sj1-27.broadcom.com (Postfix) with ESMTP id 4A437AC0768; Tue, 19 Jun 2018 14:31:11 -0700 (PDT) From: Ajit Khaparde To: dev@dpdk.org Cc: ferruh.yigit@intel.com, Somnath Kotur , stable@dpdk.org Date: Tue, 19 Jun 2018 14:30:58 -0700 Message-Id: <20180619213058.12273-32-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.15.1 (Apple Git-101) In-Reply-To: <20180619213058.12273-1-ajit.khaparde@broadcom.com> References: <20180619213058.12273-1-ajit.khaparde@broadcom.com> Subject: [dpdk-dev] [PATCH 31/31] net/bnxt: fix to move a flow to a different queue X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Somnath Kotur While moving a flow to a different destination queue, the l2_filter_id being passed to the FW command was incorrect. Fix it by re-using the matching filter's l2_filter_id since that is supposed to be the same in this case. 
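A simplified sketch of the intended sequence (struct flow_filter and redirect_flow() are placeholder names for illustration, not the driver's bnxt_filter_info or flow code): the new filter inherits the matching filter's L2 filter handle before the old ntuple filter is torn down, so the re-created flow keeps referencing the same L2 filter.

#include <stdint.h>

struct flow_filter {
	uint64_t fw_l2_filter_id;
	uint64_t fw_ntuple_filter_id;
	uint16_t dst_queue;
};

/* Same flow steered to a different queue: reuse the old filter's L2
 * handle for the new filter, then clear only the old ntuple filter.
 */
static void redirect_flow(struct flow_filter *new_f,
			  struct flow_filter *old_f,
			  void (*clear_ntuple)(struct flow_filter *))
{
	new_f->fw_l2_filter_id = old_f->fw_l2_filter_id;
	clear_ntuple(old_f);
}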
Fixes: 5ef3b79fdfe6 ("net/bnxt: support flow filter ops") Cc: stable@dpdk.org Signed-off-by: Somnath Kotur Signed-off-by: Ajit Khaparde --- drivers/net/bnxt/bnxt_flow.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c index a491e9dbf..ac7656741 100644 --- a/drivers/net/bnxt/bnxt_flow.c +++ b/drivers/net/bnxt/bnxt_flow.c @@ -968,9 +968,13 @@ bnxt_match_filter(struct bnxt *bp, struct bnxt_filter_info *nf) sizeof(nf->dst_ipaddr_mask))) { if (mf->dst_id == nf->dst_id) return -EEXIST; - /* Same Flow, Different queue + /* + * Same Flow, Different queue * Clear the old ntuple filter + * Reuse the matching L2 filter + * ID for the new filter */ + nf->fw_l2_filter_id = mf->fw_l2_filter_id; if (nf->filter_type == HWRM_CFA_EM_FILTER) bnxt_hwrm_clear_em_filter(bp, mf); if (nf->filter_type == HWRM_CFA_NTUPLE_FILTER)