From patchwork Fri Oct 4 03:48:55 2019
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 60500
From: Ajit Khaparde
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Lance Richardson, stable@dpdk.org,
 Kalesh Anakkur Purayil
Date: Thu, 3 Oct 2019 20:48:55 -0700
Message-Id: <20191004034903.85233-2-ajit.khaparde@broadcom.com>
In-Reply-To: <20191004034903.85233-1-ajit.khaparde@broadcom.com>
Subject: [dpdk-dev] [PATCH v2 1/9] net/bnxt: increase tqm entry allocation

From: Lance Richardson

The current TQM backing store size isn't sufficient to allow 512
transmit rings. Fix by correcting the TQM SP queue size calculation.
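For context, the corrected sizing sums the L2 QP, VNIC, and minimum
per-ring TQM entry counts, rounds the result up to the controller's
entry multiple, and clamps it to the per-ring limits. A minimal sketch
of that arithmetic with made-up limit values (roundup_mult() and
clamp_u32() below only mirror the math of the driver's bnxt_roundup()
and clamp_t() helpers; the numbers are hypothetical):

#include <stdint.h>

/* Round x up to the next multiple of mul. */
static uint32_t roundup_mult(uint32_t x, uint32_t mul)
{
	return ((x + mul - 1) / mul) * mul;
}

/* Clamp x into [lo, hi]. */
static uint32_t clamp_u32(uint32_t x, uint32_t lo, uint32_t hi)
{
	return x < lo ? lo : (x > hi ? hi : x);
}

int main(void)
{
	/* Hypothetical context limits, for illustration only. */
	uint32_t qp_l2 = 512, vnic = 64, tqm_min = 256;
	uint32_t mult = 16, tqm_max = 16384;

	uint32_t entries = qp_l2 + vnic + tqm_min;     /* sum, as in the fix */
	entries = roundup_mult(entries, mult);         /* 832 */
	entries = clamp_u32(entries, tqm_min, tqm_max);/* still 832 */
	return (int)entries;
}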
Fixes: f8168ca0e690 ("net/bnxt: support thor controller")
Cc: stable@dpdk.org

Signed-off-by: Lance Richardson
Reviewed-by: Ajit Khaparde
Reviewed-by: Kalesh Anakkur Purayil
---
 drivers/net/bnxt/bnxt_ethdev.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 02eacf7965..0e893cc956 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -4292,7 +4292,9 @@ int bnxt_alloc_ctx_mem(struct bnxt *bp)
 	if (rc)
 		return rc;
 
-	entries = ctx->qp_max_l2_entries;
+	entries = ctx->qp_max_l2_entries +
+		  ctx->vnic_max_vnic_entries +
+		  ctx->tqm_min_entries_per_ring;
 	entries = bnxt_roundup(entries, ctx->tqm_entries_multiple);
 	entries = clamp_t(uint32_t, entries, ctx->tqm_min_entries_per_ring,
 			  ctx->tqm_max_entries_per_ring);

From patchwork Fri Oct 4 03:48:56 2019
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 60501
From: Ajit Khaparde
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Lance Richardson, Ajit Kumar Khaparde
Date: Thu, 3 Oct 2019 20:48:56 -0700
Message-Id: <20191004034903.85233-3-ajit.khaparde@broadcom.com>
In-Reply-To: <20191004034903.85233-1-ajit.khaparde@broadcom.com>
Subject: [dpdk-dev] [PATCH v2 2/9] net/bnxt: fix ring alignment for thor-based adapters

From: Lance Richardson

When using transmit/receive queue sizes smaller than 256, alignment
requirements are not being met for Thor-based adapters. Fix by forcing
the memory addresses used for transmit/receive/aggregation ring
allocations to be on 4K boundaries.
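The alignment the patch forces is the usual power-of-two round-up. A
self-contained illustration of the arithmetic (align_up() here is a
stand-in that computes the same thing as DPDK's RTE_ALIGN for a
power-of-two alignment; it is not the driver's macro):

#include <stdint.h>
#include <assert.h>

/* Round v up to the next multiple of align (align must be a power of two). */
static uint32_t align_up(uint32_t v, uint32_t align)
{
	return (v + align - 1) & ~(align - 1);
}

int main(void)
{
	/* A 128-entry ring of 16-byte descriptors is only 2KB... */
	uint32_t ring_len = 128 * 16;

	/* ...so without the fix the next ring would start mid-page. */
	assert(align_up(ring_len, 4096) == 4096);
	assert(align_up(8192, 4096) == 8192); /* already aligned: unchanged */
	return 0;
}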
Fixes: f8168ca0e690 ("net/bnxt: support thor controller")

Signed-off-by: Lance Richardson
Reviewed-by: Ajit Kumar Khaparde
---
 drivers/net/bnxt/bnxt_ring.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index a3e5a68714..4662e94132 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -162,18 +162,21 @@ int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx,
 	int nq_ring_len = BNXT_CHIP_THOR(bp) ? cp_ring_len : 0;
 
 	int tx_ring_start = nq_ring_start + nq_ring_len;
+	tx_ring_start = RTE_ALIGN(tx_ring_start, 4096);
 	int tx_ring_len = tx_ring_info ?
 	    RTE_CACHE_LINE_ROUNDUP(tx_ring_info->tx_ring_struct->ring_size *
 				   sizeof(struct tx_bd_long)) : 0;
 	tx_ring_len = RTE_ALIGN(tx_ring_len, 4096);
 
 	int rx_ring_start = tx_ring_start + tx_ring_len;
+	rx_ring_start = RTE_ALIGN(rx_ring_start, 4096);
 	int rx_ring_len = rx_ring_info ?
 		RTE_CACHE_LINE_ROUNDUP(rx_ring_info->rx_ring_struct->ring_size *
 				       sizeof(struct rx_prod_pkt_bd)) : 0;
 	rx_ring_len = RTE_ALIGN(rx_ring_len, 4096);
 
 	int ag_ring_start = rx_ring_start + rx_ring_len;
+	ag_ring_start = RTE_ALIGN(ag_ring_start, 4096);
 	int ag_ring_len = rx_ring_len * AGG_RING_SIZE_FACTOR;
 	ag_ring_len = RTE_ALIGN(ag_ring_len, 4096);

From patchwork Fri Oct 4 03:48:57 2019
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 60504
From: Ajit Khaparde
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Lance Richardson, Kalesh Anakkur Purayil
Date: Thu, 3 Oct 2019 20:48:57 -0700
Message-Id: <20191004034903.85233-4-ajit.khaparde@broadcom.com>
In-Reply-To: <20191004034903.85233-1-ajit.khaparde@broadcom.com>
Subject: [dpdk-dev] [PATCH v2 3/9] net/bnxt: add support for LRO on thor adapters

From: Lance Richardson

Add support for LRO on adapters based on the Thor controller (BCM57508).
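A note on the TPA limits introduced below: pre-Thor firmware takes the
maximum aggregation segment count in log2 units, while Thor's TPA v2
interface takes it in 4-segment units, so the two TPA_MAX_SEGS values
in the diff describe the same 32-segment cap. A two-line sanity check
of that arithmetic:

#include <assert.h>

int main(void)
{
	/* Pre-Thor HWRM encodes max TPA segments in log2 units... */
	assert((1 << 5) == 32); /* TPA_MAX_SEGS = 5    -> 32 segments */
	/* ...while Thor (TPA v2) encodes them in 4-segment units. */
	assert(8 * 4 == 32);    /* TPA_MAX_SEGS_TH = 8 -> 32 segments */
	return 0;
}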
Reviewed-by: Kalesh Anakkur Purayil
Signed-off-by: Lance Richardson
Signed-off-by: Ajit Khaparde
---
 drivers/net/bnxt/bnxt.h        | 16 ++++++++
 drivers/net/bnxt/bnxt_ethdev.c |  4 ++
 drivers/net/bnxt/bnxt_hwrm.c   | 33 +++++++++++++---
 drivers/net/bnxt/bnxt_hwrm.h   |  1 +
 drivers/net/bnxt/bnxt_ring.c   | 14 ++++---
 drivers/net/bnxt/bnxt_ring.h   |  1 -
 drivers/net/bnxt/bnxt_rxq.c    |  4 +-
 drivers/net/bnxt/bnxt_rxr.c    | 72 +++++++++++++++++++++++++---------
 drivers/net/bnxt/bnxt_rxr.h    | 41 +++++++++++++++----
 9 files changed, 149 insertions(+), 37 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index c34582c0ad..ad97e0e593 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -37,6 +37,21 @@
 #define BNXT_MAX_RX_RING_DESC	8192
 #define BNXT_DB_SIZE		0x80
 
+#define TPA_MAX_AGGS		64
+#define TPA_MAX_AGGS_TH		1024
+
+#define TPA_MAX_NUM_SEGS	32
+#define TPA_MAX_SEGS_TH		8 /* 32 segments in 4-segment units */
+#define TPA_MAX_SEGS		5 /* 32 segments in log2 units */
+
+#define BNXT_TPA_MAX_AGGS(bp) \
+	(BNXT_CHIP_THOR(bp) ? TPA_MAX_AGGS_TH : \
+	 TPA_MAX_AGGS)
+
+#define BNXT_TPA_MAX_SEGS(bp) \
+	(BNXT_CHIP_THOR(bp) ? TPA_MAX_SEGS_TH : \
+	 TPA_MAX_SEGS)
+
 #ifdef RTE_ARCH_ARM64
 #define BNXT_NUM_ASYNC_CPR(bp) (BNXT_STINGRAY(bp) ? 0 : 1)
 #else
@@ -525,6 +540,7 @@ struct bnxt {
 	uint16_t		max_rx_em_flows;
 	uint16_t		max_vnics;
 	uint16_t		max_stat_ctx;
+	uint16_t		max_tpa_v2;
 	uint16_t		first_vf_id;
 	uint16_t		vlan;
 #define BNXT_OUTER_TPID_MASK	0x0000ffff

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 0e893cc956..4fc182b8cc 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -4553,6 +4553,10 @@ static int bnxt_init_fw(struct bnxt *bp)
 	if (rc)
 		return rc;
 
+	rc = bnxt_hwrm_vnic_qcaps(bp);
+	if (rc)
+		return rc;
+
 	rc = bnxt_hwrm_func_qcfg(bp, &mtu);
 	if (rc)
 		return rc;

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 011cd05ae0..9f30214c99 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -700,6 +700,27 @@ int bnxt_hwrm_func_qcaps(struct bnxt *bp)
 	return rc;
 }
 
+int bnxt_hwrm_vnic_qcaps(struct bnxt *bp)
+{
+	int rc = 0;
+	struct hwrm_vnic_qcaps_input req = {.req_type = 0 };
+	struct hwrm_vnic_qcaps_output *resp = bp->hwrm_cmd_resp_addr;
+
+	HWRM_PREP(req, VNIC_QCAPS, BNXT_USE_CHIMP_MB);
+
+	req.target_id = rte_cpu_to_le_16(0xffff);
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
+
+	HWRM_CHECK_RESULT();
+
+	bp->max_tpa_v2 = rte_le_to_cpu_16(resp->max_aggs_supported);
+
+	HWRM_UNLOCK();
+
+	return rc;
+}
+
 int bnxt_hwrm_func_reset(struct bnxt *bp)
 {
 	int rc = 0;
@@ -1959,8 +1980,11 @@ int bnxt_hwrm_vnic_tpa_cfg(struct bnxt *bp,
 	struct hwrm_vnic_tpa_cfg_input req = {.req_type = 0 };
 	struct hwrm_vnic_tpa_cfg_output *resp = bp->hwrm_cmd_resp_addr;
 
-	if (BNXT_CHIP_THOR(bp))
-		return 0;
+	if (BNXT_CHIP_THOR(bp) && !bp->max_tpa_v2) {
+		if (enable)
+			PMD_DRV_LOG(ERR, "No HW support for LRO\n");
+		return -ENOTSUP;
+	}
 
 	if (vnic->fw_vnic_id == INVALID_HW_RING_ID) {
 		PMD_DRV_LOG(DEBUG, "Invalid vNIC ID\n");
@@ -1981,9 +2005,8 @@ int bnxt_hwrm_vnic_tpa_cfg(struct bnxt *bp,
 			HWRM_VNIC_TPA_CFG_INPUT_FLAGS_GRO |
 			HWRM_VNIC_TPA_CFG_INPUT_FLAGS_AGG_WITH_ECN |
 			HWRM_VNIC_TPA_CFG_INPUT_FLAGS_AGG_WITH_SAME_GRE_SEQ);
-		req.max_agg_segs = rte_cpu_to_le_16(5);
-		req.max_aggs =
-			rte_cpu_to_le_16(HWRM_VNIC_TPA_CFG_INPUT_MAX_AGGS_MAX);
+		req.max_agg_segs = rte_cpu_to_le_16(BNXT_TPA_MAX_AGGS(bp));
+		req.max_aggs = rte_cpu_to_le_16(BNXT_TPA_MAX_SEGS(bp));
 		req.min_agg_len = rte_cpu_to_le_32(512);
 	}
 	req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);

diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 07181d4020..8912a4ed3e 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -110,6 +110,7 @@ int bnxt_hwrm_vnic_alloc(struct bnxt *bp, struct bnxt_vnic_info *vnic);
 int bnxt_hwrm_vnic_cfg(struct bnxt *bp, struct bnxt_vnic_info *vnic);
 int bnxt_hwrm_vnic_qcfg(struct bnxt *bp, struct bnxt_vnic_info *vnic,
 			int16_t fw_vf_id);
+int bnxt_hwrm_vnic_qcaps(struct bnxt *bp);
 int bnxt_hwrm_vnic_ctx_alloc(struct bnxt *bp, struct bnxt_vnic_info *vnic,
 			     uint16_t ctx_idx);
 int bnxt_hwrm_vnic_ctx_free(struct bnxt *bp, struct bnxt_vnic_info *vnic);

diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index 4662e94132..14cfb8c155 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -187,13 +187,17 @@ int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx,
 					AGG_RING_SIZE_FACTOR)) : 0;
 
 	int tpa_info_start = ag_bitmap_start + ag_bitmap_len;
-	int tpa_info_len = rx_ring_info ?
-		RTE_CACHE_LINE_ROUNDUP(BNXT_TPA_MAX *
-				       sizeof(struct bnxt_tpa_info)) : 0;
+	int tpa_info_len = 0;
+
+	if (rx_ring_info && (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+		int tpa_max = BNXT_TPA_MAX_AGGS(bp);
+
+		tpa_info_len = tpa_max * sizeof(struct bnxt_tpa_info);
+		tpa_info_len = RTE_CACHE_LINE_ROUNDUP(tpa_info_len);
+	}
 
 	int total_alloc_len = tpa_info_start;
-	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
-		total_alloc_len += tpa_info_len;
+	total_alloc_len += tpa_info_len;
 
 	snprintf(mz_name, RTE_MEMZONE_NAMESIZE,
 		 "bnxt_%04x:%02x:%02x:%02x-%04x_%s", pdev->addr.domain,

diff --git a/drivers/net/bnxt/bnxt_ring.h b/drivers/net/bnxt/bnxt_ring.h
index a31d59ea39..a5d5106986 100644
--- a/drivers/net/bnxt/bnxt_ring.h
+++ b/drivers/net/bnxt/bnxt_ring.h
@@ -27,7 +27,6 @@
 #define DEFAULT_RX_RING_SIZE	256
 #define DEFAULT_TX_RING_SIZE	256
 
-#define BNXT_TPA_MAX		64
 #define AGG_RING_SIZE_FACTOR	2
 #define AGG_RING_MULTIPLIER	2

diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
index 371534db6b..03b115dbaf 100644
--- a/drivers/net/bnxt/bnxt_rxq.c
+++ b/drivers/net/bnxt/bnxt_rxq.c
@@ -227,7 +227,9 @@ void bnxt_rx_queue_release_mbufs(struct bnxt_rx_queue *rxq)
 	/* Free up mbufs in TPA */
 	tpa_info = rxq->rx_ring->tpa_info;
 	if (tpa_info) {
-		for (i = 0; i < BNXT_TPA_MAX; i++) {
+		int max_aggs = BNXT_TPA_MAX_AGGS(rxq->bp);
+
+		for (i = 0; i < max_aggs; i++) {
 			if (tpa_info[i].mbuf) {
 				rte_pktmbuf_free_seg(tpa_info[i].mbuf);
 				tpa_info[i].mbuf = NULL;

diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index b3cc0d8a04..1a6fb7944b 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -124,12 +124,13 @@ static void bnxt_tpa_start(struct bnxt_rx_queue *rxq,
 			   struct rx_tpa_start_cmpl_hi *tpa_start1)
 {
 	struct bnxt_rx_ring_info *rxr = rxq->rx_ring;
-	uint8_t agg_id = rte_le_to_cpu_32(tpa_start->agg_id &
-		RX_TPA_START_CMPL_AGG_ID_MASK) >> RX_TPA_START_CMPL_AGG_ID_SFT;
+	uint16_t agg_id;
 	uint16_t data_cons;
 	struct bnxt_tpa_info *tpa_info;
 	struct rte_mbuf *mbuf;
 
+	agg_id = bnxt_tpa_start_agg_id(rxq->bp, tpa_start);
+
 	data_cons = tpa_start->opaque;
 	tpa_info = &rxr->tpa_info[agg_id];
 
@@ -137,6 +138,7 @@ static void bnxt_tpa_start(struct bnxt_rx_queue *rxq,
 
 	bnxt_reuse_rx_mbuf(rxr, tpa_info->mbuf);
 
+	tpa_info->agg_count = 0;
 	tpa_info->mbuf = mbuf;
 	tpa_info->len = rte_le_to_cpu_32(tpa_start->len);
 
@@ -206,7 +208,7 @@ static int bnxt_prod_ag_mbuf(struct bnxt_rx_queue *rxq)
 
 static int bnxt_rx_pages(struct bnxt_rx_queue *rxq,
 			 struct rte_mbuf *mbuf, uint32_t *tmp_raw_cons,
-			 uint8_t agg_buf)
+			 uint8_t agg_buf, struct bnxt_tpa_info *tpa_info)
 {
 	struct bnxt_cp_ring_info *cpr = rxq->cp_ring;
 	struct bnxt_rx_ring_info *rxr = rxq->rx_ring;
@@ -214,14 +216,20 @@ static int bnxt_rx_pages(struct bnxt_rx_queue *rxq,
 	uint16_t cp_cons, ag_cons;
 	struct rx_pkt_cmpl *rxcmp;
 	struct rte_mbuf *last = mbuf;
+	bool is_thor_tpa = tpa_info && BNXT_CHIP_THOR(rxq->bp);
 
 	for (i = 0; i < agg_buf; i++) {
 		struct bnxt_sw_rx_bd *ag_buf;
 		struct rte_mbuf *ag_mbuf;
-		*tmp_raw_cons = NEXT_RAW_CMP(*tmp_raw_cons);
-		cp_cons = RING_CMP(cpr->cp_ring_struct, *tmp_raw_cons);
-		rxcmp = (struct rx_pkt_cmpl *)
+
+		if (is_thor_tpa) {
+			rxcmp = (void *)&tpa_info->agg_arr[i];
+		} else {
+			*tmp_raw_cons = NEXT_RAW_CMP(*tmp_raw_cons);
+			cp_cons = RING_CMP(cpr->cp_ring_struct, *tmp_raw_cons);
+			rxcmp = (struct rx_pkt_cmpl *)
 				&cpr->cp_desc_ring[cp_cons];
+		}
 
 #ifdef BNXT_DEBUG
 		bnxt_dump_cmpl(cp_cons, rxcmp);
@@ -258,29 +266,42 @@ static inline struct rte_mbuf *bnxt_tpa_end(
 		struct bnxt_rx_queue *rxq,
 		uint32_t *raw_cp_cons,
 		struct rx_tpa_end_cmpl *tpa_end,
-		struct rx_tpa_end_cmpl_hi *tpa_end1 __rte_unused)
+		struct rx_tpa_end_cmpl_hi *tpa_end1)
 {
 	struct bnxt_cp_ring_info *cpr = rxq->cp_ring;
 	struct bnxt_rx_ring_info *rxr = rxq->rx_ring;
-	uint8_t agg_id = (tpa_end->agg_id & RX_TPA_END_CMPL_AGG_ID_MASK)
-			>> RX_TPA_END_CMPL_AGG_ID_SFT;
+	uint16_t agg_id;
 	struct rte_mbuf *mbuf;
 	uint8_t agg_bufs;
+	uint8_t payload_offset;
 	struct bnxt_tpa_info *tpa_info;
 
+	if (BNXT_CHIP_THOR(rxq->bp)) {
+		struct rx_tpa_v2_end_cmpl *th_tpa_end;
+		struct rx_tpa_v2_end_cmpl_hi *th_tpa_end1;
+
+		th_tpa_end = (void *)tpa_end;
+		th_tpa_end1 = (void *)tpa_end1;
+		agg_id = BNXT_TPA_END_AGG_ID_TH(th_tpa_end);
+		agg_bufs = BNXT_TPA_END_AGG_BUFS_TH(th_tpa_end1);
+		payload_offset = th_tpa_end1->payload_offset;
+	} else {
+		agg_id = BNXT_TPA_END_AGG_ID(tpa_end);
+		agg_bufs = BNXT_TPA_END_AGG_BUFS(tpa_end);
+		if (!bnxt_agg_bufs_valid(cpr, agg_bufs, *raw_cp_cons))
+			return NULL;
+		payload_offset = tpa_end->payload_offset;
+	}
+
 	tpa_info = &rxr->tpa_info[agg_id];
 	mbuf = tpa_info->mbuf;
 	RTE_ASSERT(mbuf != NULL);
 
 	rte_prefetch0(mbuf);
-	agg_bufs = (rte_le_to_cpu_32(tpa_end->agg_bufs_v1) &
-		RX_TPA_END_CMPL_AGG_BUFS_MASK) >> RX_TPA_END_CMPL_AGG_BUFS_SFT;
 	if (agg_bufs) {
-		if (!bnxt_agg_bufs_valid(cpr, agg_bufs, *raw_cp_cons))
-			return NULL;
-		bnxt_rx_pages(rxq, mbuf, raw_cp_cons, agg_bufs);
+		bnxt_rx_pages(rxq, mbuf, raw_cp_cons, agg_bufs, tpa_info);
 	}
-	mbuf->l4_len = tpa_end->payload_offset;
+	mbuf->l4_len = payload_offset;
 
 	struct rte_mbuf *new_data = __bnxt_alloc_rx_data(rxq->mb_pool);
 	RTE_ASSERT(new_data != NULL);
@@ -395,6 +416,20 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 	rxcmp = (struct rx_pkt_cmpl *)
 	    &cpr->cp_desc_ring[cp_cons];
 
+	cmp_type = CMP_TYPE(rxcmp);
+
+	if (cmp_type == RX_TPA_V2_ABUF_CMPL_TYPE_RX_TPA_AGG) {
+		struct rx_tpa_v2_abuf_cmpl *rx_agg = (void *)rxcmp;
+		uint16_t agg_id = rte_cpu_to_le_16(rx_agg->agg_id);
+		struct bnxt_tpa_info *tpa_info;
+
+		tpa_info = &rxr->tpa_info[agg_id];
+		RTE_ASSERT(tpa_info->agg_count < 16);
+		tpa_info->agg_arr[tpa_info->agg_count++] = *rx_agg;
+		rc = -EINVAL; /* Continue w/o new mbuf */
+		goto next_rx;
+	}
+
 	tmp_raw_cons = NEXT_RAW_CMP(tmp_raw_cons);
 	cp_cons = RING_CMP(cpr->cp_ring_struct, tmp_raw_cons);
 	rxcmp1 = (struct rx_pkt_cmpl_hi *)&cpr->cp_desc_ring[cp_cons];
@@ -406,7 +441,6 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 		cpr->cp_ring_struct->ring_mask,
 		cpr->valid);
 
-	cmp_type = CMP_TYPE(rxcmp);
 	if (cmp_type == RX_TPA_START_CMPL_TYPE_RX_TPA_START) {
 		bnxt_tpa_start(rxq, (struct rx_tpa_start_cmpl *)rxcmp,
 			       (struct rx_tpa_start_cmpl_hi *)rxcmp1);
@@ -463,7 +497,7 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 	}
 #endif
 	if (agg_buf)
-		bnxt_rx_pages(rxq, mbuf, &tmp_raw_cons, agg_buf);
+		bnxt_rx_pages(rxq, mbuf, &tmp_raw_cons, agg_buf, NULL);
 
 	if (rxcmp1->flags2 & RX_PKT_CMPL_FLAGS2_META_FORMAT_VLAN) {
 		mbuf->vlan_tci = rxcmp1->metadata &
@@ -861,7 +895,9 @@ int bnxt_init_one_rx_ring(struct bnxt_rx_queue *rxq)
 	PMD_DRV_LOG(DEBUG, "AGG Done!\n");
 
 	if (rxr->tpa_info) {
-		for (i = 0; i < BNXT_TPA_MAX; i++) {
+		unsigned int max_aggs = BNXT_TPA_MAX_AGGS(rxq->bp);
+
+		for (i = 0; i < max_aggs; i++) {
 			rxr->tpa_info[i].mbuf =
 				__bnxt_alloc_rx_data(rxq->mb_pool);
 			if (!rxr->tpa_info[i].mbuf) {

diff --git a/drivers/net/bnxt/bnxt_rxr.h b/drivers/net/bnxt/bnxt_rxr.h
index 493b754066..76bf88d707 100644
--- a/drivers/net/bnxt/bnxt_rxr.h
+++ b/drivers/net/bnxt/bnxt_rxr.h
@@ -5,6 +5,7 @@
 
 #ifndef _BNXT_RXR_H_
 #define _BNXT_RXR_H_
+#include "hsi_struct_def_dpdk.h"
 
 #define B_RX_DB(db, prod)						\
 		(*(uint32_t *)db = (DB_KEY_RX | (prod)))
@@ -110,6 +111,36 @@
 	 IS_L4_TUNNEL_PKT_ONLY_INNER_L4_CS(flags2_f)			\
 	)
 
+#define BNXT_TPA_START_AGG_ID_PRE_TH(cmp) \
+	((rte_le_to_cpu_16((cmp)->agg_id) & RX_TPA_START_CMPL_AGG_ID_MASK) >> \
+	 RX_TPA_START_CMPL_AGG_ID_SFT)
+
+#define BNXT_TPA_START_AGG_ID_TH(cmp) \
+	rte_le_to_cpu_16((cmp)->agg_id)
+
+static inline uint16_t bnxt_tpa_start_agg_id(struct bnxt *bp,
+					     struct rx_tpa_start_cmpl *cmp)
+{
+	if (BNXT_CHIP_THOR(bp))
+		return BNXT_TPA_START_AGG_ID_TH(cmp);
+	else
+		return BNXT_TPA_START_AGG_ID_PRE_TH(cmp);
+}
+
+#define BNXT_TPA_END_AGG_BUFS(cmp) \
+	(((cmp)->agg_bufs_v1 & RX_TPA_END_CMPL_AGG_BUFS_MASK) \
+	 >> RX_TPA_END_CMPL_AGG_BUFS_SFT)
+
+#define BNXT_TPA_END_AGG_BUFS_TH(cmp) \
+	((cmp)->tpa_agg_bufs)
+
+#define BNXT_TPA_END_AGG_ID(cmp) \
+	(((cmp)->agg_id & RX_TPA_END_CMPL_AGG_ID_MASK) >> \
+	 RX_TPA_END_CMPL_AGG_ID_SFT)
+
+#define BNXT_TPA_END_AGG_ID_TH(cmp) \
+	rte_le_to_cpu_16((cmp)->agg_id)
+
 #define RX_CMP_L4_CS_BITS	\
 	rte_cpu_to_le_32(RX_PKT_CMPL_FLAGS2_L4_CS_CALC)
 
@@ -144,14 +175,10 @@ enum pkt_hash_types {
 };
 
 struct bnxt_tpa_info {
-	struct rte_mbuf		*mbuf;
+	struct rte_mbuf			*mbuf;
 	uint16_t			len;
-	unsigned short		gso_type;
-	uint32_t			flags2;
-	uint32_t			metadata;
-	enum pkt_hash_types	hash_type;
-	uint32_t			rss_hash;
-	uint32_t			hdr_info;
+	uint32_t			agg_count;
+	struct rx_tpa_v2_abuf_cmpl	agg_arr[TPA_MAX_NUM_SEGS];
 };
 
 struct bnxt_sw_rx_bd {
From patchwork Fri Oct 4 03:48:58 2019
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 60503
From: Ajit Khaparde
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Venkat Duvvuru, Santoshkumar Karanappa Rastapur,
 Kalesh Anakkur Purayil, Somnath Kotur
Date: Thu, 3 Oct 2019 20:48:58 -0700
Message-Id: <20191004034903.85233-5-ajit.khaparde@broadcom.com>
In-Reply-To: <20191004034903.85233-1-ajit.khaparde@broadcom.com>
Subject: [dpdk-dev] [PATCH v2 4/9] net/bnxt: add support for CoS classification

From: Venkat Duvvuru

Class of Service (CoS) is a way to manage multiple types of traffic
over a network in order to offer different classes of service to
applications. The CoS classification (priority to CoS queue mapping)
is determined by the user and configured through the PF driver. The
DPDK driver queries this configuration and maps the CoS queue IDs to
different VNICs. This patch adds that support.

Signed-off-by: Venkat Duvvuru
Reviewed-by: Santoshkumar Karanappa Rastapur
Reviewed-by: Kalesh Anakkur Purayil
Reviewed-by: Somnath Kotur
Reviewed-by: Ajit Khaparde
---
 drivers/net/bnxt/bnxt.h                | 10 ++-
 drivers/net/bnxt/bnxt_ethdev.c         | 30 +++++++--
 drivers/net/bnxt/bnxt_hwrm.c           | 84 +++++++++++++++++++-------
 drivers/net/bnxt/bnxt_hwrm.h           | 13 +++-
 drivers/net/bnxt/bnxt_ring.c           | 18 ++++--
 drivers/net/bnxt/bnxt_rxq.c            |  3 +-
 drivers/net/bnxt/bnxt_vnic.h           |  1 +
 drivers/net/bnxt/hsi_struct_def_dpdk.h | 28 ++++++++-
 8 files changed, 149 insertions(+), 38 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index ad97e0e593..5cfe5ee2c7 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -470,8 +470,10 @@ struct bnxt {
 	uint32_t		flow_flags;
 #define BNXT_FLOW_FLAG_L2_HDR_SRC_FILTER_EN	BIT(0)
-
 	pthread_mutex_t		flow_lock;
+
+	uint32_t		vnic_cap_flags;
+#define BNXT_VNIC_CAP_COS_CLASSIFY	BIT(0)
 	unsigned int		rx_nr_rings;
 	unsigned int		rx_cp_nr_rings;
 	unsigned int		rx_num_qs_per_vnic;
@@ -525,8 +525,10 @@ struct bnxt {
 	uint16_t		hwrm_max_ext_req_len;
 
 	struct bnxt_link_info	link_info;
-	struct bnxt_cos_queue_info	cos_queue[BNXT_COS_QUEUE_COUNT];
-	uint8_t			tx_cosq_id;
+	struct bnxt_cos_queue_info	rx_cos_queue[BNXT_COS_QUEUE_COUNT];
+	struct bnxt_cos_queue_info	tx_cos_queue[BNXT_COS_QUEUE_COUNT];
+	uint8_t			tx_cosq_id[BNXT_COS_QUEUE_COUNT];
+	uint8_t			rx_cosq_cnt;
 	uint8_t			max_tc;
 	uint8_t			max_lltc;
 	uint8_t			max_q;

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 4fc182b8cc..9adcd94ff8 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -308,6 +308,25 @@ static int bnxt_init_chip(struct bnxt *bp)
 		goto err_out;
 	}
 
+	if (!(bp->vnic_cap_flags & BNXT_VNIC_CAP_COS_CLASSIFY))
+		goto skip_cosq_cfg;
+
+	for (j = 0, i = 0; i < BNXT_COS_QUEUE_COUNT; i++) {
+		if (bp->rx_cos_queue[i].id != 0xff) {
+			struct bnxt_vnic_info *vnic = &bp->vnic_info[j++];
+
+			if (!vnic) {
+				PMD_DRV_LOG(ERR,
+					    "Num pools more than FW profile\n");
+				rc = -EINVAL;
+				goto err_out;
+			}
+			vnic->cos_queue_id = bp->rx_cos_queue[i].id;
+			bp->rx_cosq_cnt++;
+		}
+	}
+
+skip_cosq_cfg:
 	rc = bnxt_mq_rx_configure(bp);
 	if (rc) {
 		PMD_DRV_LOG(ERR, "MQ mode configure failure rc: %x\n", rc);
@@ -4540,7 +4559,7 @@ static int bnxt_init_fw(struct bnxt *bp)
 	if (rc)
 		return -EIO;
 
-	rc = bnxt_hwrm_cfa_adv_flow_mgmt_qcaps(bp);
+	rc = bnxt_hwrm_vnic_qcaps(bp);
 	if (rc)
 		return rc;
 
@@ -4548,16 +4567,19 @@ static int bnxt_init_fw(struct bnxt *bp)
 	if (rc)
 		return rc;
 
-	/* Get the MAX capabilities for this function */
+	/* Get the MAX capabilities for this function.
+	 * This function also allocates context memory for TQM rings and
+	 * informs the firmware about this allocated backing store memory.
+	 */
 	rc = bnxt_hwrm_func_qcaps(bp);
 	if (rc)
 		return rc;
 
-	rc = bnxt_hwrm_vnic_qcaps(bp);
+	rc = bnxt_hwrm_func_qcfg(bp, &mtu);
 	if (rc)
 		return rc;
 
-	rc = bnxt_hwrm_func_qcfg(bp, &mtu);
+	rc = bnxt_hwrm_cfa_adv_flow_mgmt_qcaps(bp);
 	if (rc)
 		return rc;

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 9f30214c99..76ef004237 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -700,6 +700,7 @@ int bnxt_hwrm_func_qcaps(struct bnxt *bp)
 	return rc;
 }
 
+/* VNIC qcaps covers the capabilities of all VNICs, so no need to pass a vnic_id */
 int bnxt_hwrm_vnic_qcaps(struct bnxt *bp)
 {
 	int rc = 0;
@@ -714,6 +715,12 @@ int bnxt_hwrm_vnic_qcaps(struct bnxt *bp)
 
 	HWRM_CHECK_RESULT();
 
+	if (rte_le_to_cpu_32(resp->flags) &
+	    HWRM_VNIC_QCAPS_OUTPUT_FLAGS_COS_ASSIGNMENT_CAP) {
+		bp->vnic_cap_flags |= BNXT_VNIC_CAP_COS_CLASSIFY;
+		PMD_DRV_LOG(INFO, "CoS assignment capability enabled\n");
+	}
+
 	bp->max_tpa_v2 = rte_le_to_cpu_16(resp->max_aggs_supported);
 
 	HWRM_UNLOCK();
@@ -1199,11 +1206,13 @@ int bnxt_hwrm_queue_qportcfg(struct bnxt *bp)
 	int rc = 0;
 	struct hwrm_queue_qportcfg_input req = {.req_type = 0 };
 	struct hwrm_queue_qportcfg_output *resp = bp->hwrm_cmd_resp_addr;
+	uint32_t dir = HWRM_QUEUE_QPORTCFG_INPUT_FLAGS_PATH_TX;
 	int i;
 
+get_rx_info:
 	HWRM_PREP(req, QUEUE_QPORTCFG, BNXT_USE_CHIMP_MB);
 
-	req.flags = HWRM_QUEUE_QPORTCFG_INPUT_FLAGS_PATH_TX;
+	req.flags = rte_cpu_to_le_32(dir);
 	/* HWRM Version >= 1.9.1 */
 	if (bp->hwrm_spec_code >= HWRM_VERSION_1_9_1)
 		req.drv_qmap_cap =
@@ -1212,30 +1221,51 @@ int bnxt_hwrm_queue_qportcfg(struct bnxt *bp)
 
 	HWRM_CHECK_RESULT();
 
-#define GET_QUEUE_INFO(x) \
-	bp->cos_queue[x].id = resp->queue_id##x; \
-	bp->cos_queue[x].profile = resp->queue_id##x##_service_profile
-
-	GET_QUEUE_INFO(0);
-	GET_QUEUE_INFO(1);
-	GET_QUEUE_INFO(2);
-	GET_QUEUE_INFO(3);
-	GET_QUEUE_INFO(4);
-	GET_QUEUE_INFO(5);
-	GET_QUEUE_INFO(6);
-	GET_QUEUE_INFO(7);
+	if (dir == HWRM_QUEUE_QPORTCFG_INPUT_FLAGS_PATH_TX) {
+		GET_TX_QUEUE_INFO(0);
+		GET_TX_QUEUE_INFO(1);
+		GET_TX_QUEUE_INFO(2);
+		GET_TX_QUEUE_INFO(3);
+		GET_TX_QUEUE_INFO(4);
+		GET_TX_QUEUE_INFO(5);
+		GET_TX_QUEUE_INFO(6);
+		GET_TX_QUEUE_INFO(7);
+	} else {
+		GET_RX_QUEUE_INFO(0);
+		GET_RX_QUEUE_INFO(1);
+		GET_RX_QUEUE_INFO(2);
+		GET_RX_QUEUE_INFO(3);
+		GET_RX_QUEUE_INFO(4);
+		GET_RX_QUEUE_INFO(5);
+		GET_RX_QUEUE_INFO(6);
+		GET_RX_QUEUE_INFO(7);
+	}
 
 	HWRM_UNLOCK();
 
+	if (dir == HWRM_QUEUE_QPORTCFG_INPUT_FLAGS_PATH_RX)
+		goto done;
+
 	if (bp->hwrm_spec_code < HWRM_VERSION_1_9_1) {
-		bp->tx_cosq_id = bp->cos_queue[0].id;
+		bp->tx_cosq_id[0] = bp->tx_cos_queue[0].id;
 	} else {
+		int j;
+
 		/* iterate and find the COSq profile to use for Tx */
-		for (i = 0; i < BNXT_COS_QUEUE_COUNT; i++) {
-			if (bp->cos_queue[i].profile ==
-				HWRM_QUEUE_SERVICE_PROFILE_LOSSY) {
-				bp->tx_cosq_id = bp->cos_queue[i].id;
-				break;
+		if (bp->vnic_cap_flags & BNXT_VNIC_CAP_COS_CLASSIFY) {
+			for (j = 0, i = 0; i < BNXT_COS_QUEUE_COUNT; i++) {
+				if (bp->tx_cos_queue[i].id != 0xff)
+					bp->tx_cosq_id[j++] =
+						bp->tx_cos_queue[i].id;
+			}
+		} else {
+			for (i = BNXT_COS_QUEUE_COUNT - 1; i >= 0; i--) {
+				if (bp->tx_cos_queue[i].profile ==
+				    HWRM_QUEUE_SERVICE_PROFILE_LOSSY) {
+					bp->tx_cosq_id[0] =
+						bp->tx_cos_queue[i].id;
+					break;
+				}
 			}
 		}
 	}
@@ -1246,15 +1276,20 @@ int bnxt_hwrm_queue_qportcfg(struct bnxt *bp)
 		bp->max_tc = BNXT_MAX_QUEUE;
 	bp->max_q = bp->max_tc;
 
-	PMD_DRV_LOG(DEBUG, "Tx Cos Queue to use: %d\n", bp->tx_cosq_id);
+	if (dir == HWRM_QUEUE_QPORTCFG_INPUT_FLAGS_PATH_TX) {
+		dir = HWRM_QUEUE_QPORTCFG_INPUT_FLAGS_PATH_RX;
+		goto get_rx_info;
+	}
 
+done:
 	return rc;
 }
 
 int bnxt_hwrm_ring_alloc(struct bnxt *bp,
 			 struct bnxt_ring *ring,
 			 uint32_t ring_type, uint32_t map_index,
-			 uint32_t stats_ctx_id, uint32_t cmpl_ring_id)
+			 uint32_t stats_ctx_id, uint32_t cmpl_ring_id,
+			 uint16_t tx_cosq_id)
 {
 	int rc = 0;
 	uint32_t enables = 0;
@@ -1276,7 +1311,7 @@ int bnxt_hwrm_ring_alloc(struct bnxt *bp,
 	req.ring_type = ring_type;
 	req.cmpl_ring_id = rte_cpu_to_le_16(cmpl_ring_id);
 	req.stat_ctx_id = rte_cpu_to_le_32(stats_ctx_id);
-	req.queue_id = rte_cpu_to_le_16(bp->tx_cosq_id);
+	req.queue_id = rte_cpu_to_le_16(tx_cosq_id);
 	if (stats_ctx_id != INVALID_STATS_CTX_ID)
 		enables |=
 			HWRM_RING_ALLOC_INPUT_ENABLES_STAT_CTX_ID_VALID;
@@ -1682,6 +1717,11 @@ int bnxt_hwrm_vnic_cfg(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 		ctx_enable_flag |= HWRM_VNIC_CFG_INPUT_ENABLES_MRU;
 		ctx_enable_flag |= HWRM_VNIC_CFG_INPUT_ENABLES_RSS_RULE;
 	}
+	if (bp->vnic_cap_flags & BNXT_VNIC_CAP_COS_CLASSIFY) {
+		ctx_enable_flag |= HWRM_VNIC_CFG_INPUT_ENABLES_QUEUE_ID;
+		req.queue_id = rte_cpu_to_le_16(vnic->cos_queue_id);
+	}
+
 	enables |= ctx_enable_flag;
 	req.dflt_ring_grp = rte_cpu_to_le_16(vnic->dflt_ring_grp);
 	req.rss_rule = rte_cpu_to_le_16(vnic->rss_rule);

diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 8912a4ed3e..fcbce60589 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -52,6 +52,16 @@ HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_L2_HEADER_SOURCE_FIELDS_SUPPORTED
 	 HWRM_FUNC_BACKING_STORE_CFG_INPUT_ENABLES_VNIC | \
 	 HWRM_FUNC_BACKING_STORE_CFG_INPUT_ENABLES_STAT)
 
+#define GET_TX_QUEUE_INFO(x) \
+	bp->tx_cos_queue[x].id = resp->queue_id##x; \
+	bp->tx_cos_queue[x].profile = \
+		resp->queue_id##x##_service_profile
+
+#define GET_RX_QUEUE_INFO(x) \
+	bp->rx_cos_queue[x].id = resp->queue_id##x; \
+	bp->rx_cos_queue[x].profile = \
+		resp->queue_id##x##_service_profile
+
 int bnxt_hwrm_cfa_l2_clear_rx_mask(struct bnxt *bp,
 				   struct bnxt_vnic_info *vnic);
 int bnxt_hwrm_cfa_l2_set_rx_mask(struct bnxt *bp, struct bnxt_vnic_info *vnic,
@@ -90,7 +100,8 @@ int bnxt_hwrm_set_async_event_cr(struct bnxt *bp);
 int bnxt_hwrm_ring_alloc(struct bnxt *bp,
 			 struct bnxt_ring *ring,
 			 uint32_t ring_type, uint32_t map_index,
-			 uint32_t stats_ctx_id, uint32_t cmpl_ring_id);
+			 uint32_t stats_ctx_id, uint32_t cmpl_ring_id,
+			 uint16_t tx_cosq_id);
 int bnxt_hwrm_ring_free(struct bnxt *bp,
 			struct bnxt_ring *ring, uint32_t ring_type);
 int bnxt_hwrm_ring_grp_alloc(struct bnxt *bp, unsigned int idx);

diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index 14cfb8c155..cf0c24c9dc 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -424,7 +424,7 @@ static int bnxt_alloc_cmpl_ring(struct bnxt *bp, int queue_index,
 	}
 
 	rc = bnxt_hwrm_ring_alloc(bp, cp_ring, ring_type, cp_ring_index,
-				  HWRM_NA_SIGNATURE, nq_ring_id);
+				  HWRM_NA_SIGNATURE, nq_ring_id, 0);
 	if (rc)
 		return rc;
 
@@ -450,7 +450,7 @@ static int bnxt_alloc_nq_ring(struct bnxt *bp, int queue_index,
 	ring_type = HWRM_RING_ALLOC_INPUT_RING_TYPE_NQ;
 
 	rc = bnxt_hwrm_ring_alloc(bp, nq_ring, ring_type, nq_ring_index,
-				  HWRM_NA_SIGNATURE, HWRM_NA_SIGNATURE);
+				  HWRM_NA_SIGNATURE, HWRM_NA_SIGNATURE, 0);
 	if (rc)
 		return rc;
 
@@ -475,7 +475,7 @@ static int bnxt_alloc_rx_ring(struct bnxt *bp, int queue_index)
 
 	rc = bnxt_hwrm_ring_alloc(bp, ring, ring_type,
 				  queue_index, cpr->hw_stats_ctx_id,
-				  cp_ring->fw_ring_id);
+				  cp_ring->fw_ring_id, 0);
 	if (rc)
 		return rc;
 
@@ -510,7 +510,7 @@ static int bnxt_alloc_rx_agg_ring(struct bnxt *bp, int queue_index)
 	}
 
 	rc = bnxt_hwrm_ring_alloc(bp, ring, ring_type, map_idx,
-				  hw_stats_ctx_id, cp_ring->fw_ring_id);
+				  hw_stats_ctx_id, cp_ring->fw_ring_id, 0);
 	if (rc)
 		return rc;
 
@@ -701,6 +701,7 @@ int bnxt_alloc_hwrm_rings(struct bnxt *bp)
 		struct bnxt_tx_ring_info *txr = txq->tx_ring;
 		struct bnxt_ring *ring = txr->tx_ring_struct;
 		unsigned int idx = i + bp->rx_cp_nr_rings;
+		uint16_t tx_cosq_id = 0;
 
 		if (BNXT_HAS_NQ(bp)) {
 			if (bnxt_alloc_nq_ring(bp, idx, nqr))
@@ -710,12 +711,17 @@ int bnxt_alloc_hwrm_rings(struct bnxt *bp)
 		if (bnxt_alloc_cmpl_ring(bp, idx, cpr, nqr))
 			goto err_out;
 
+		if (bp->vnic_cap_flags & BNXT_VNIC_CAP_COS_CLASSIFY)
+			tx_cosq_id = bp->tx_cosq_id[i < bp->max_lltc ? i : 0];
+		else
+			tx_cosq_id = bp->tx_cosq_id[0];
 		/* Tx ring */
 		ring_type = HWRM_RING_ALLOC_INPUT_RING_TYPE_TX;
 		rc = bnxt_hwrm_ring_alloc(bp, ring, ring_type,
 					  i, cpr->hw_stats_ctx_id,
-					  cp_ring->fw_ring_id);
+					  cp_ring->fw_ring_id,
+					  tx_cosq_id);
 		if (rc)
 			goto err_out;
 
@@ -747,7 +753,7 @@ int bnxt_alloc_async_cp_ring(struct bnxt *bp)
 	ring_type = HWRM_RING_ALLOC_INPUT_RING_TYPE_L2_CMPL;
 
 	rc = bnxt_hwrm_ring_alloc(bp, cp_ring, ring_type, 0,
-				  HWRM_NA_SIGNATURE, HWRM_NA_SIGNATURE);
+				  HWRM_NA_SIGNATURE, HWRM_NA_SIGNATURE, 0);
 	if (rc)
 		return rc;

diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
index 03b115dbaf..5d291cbafd 100644
--- a/drivers/net/bnxt/bnxt_rxq.c
+++ b/drivers/net/bnxt/bnxt_rxq.c
@@ -76,6 +76,7 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 		switch (dev_conf->rxmode.mq_mode) {
 		case ETH_MQ_RX_VMDQ_RSS:
 		case ETH_MQ_RX_VMDQ_ONLY:
+		case ETH_MQ_RX_VMDQ_DCB_RSS:
 			/* FALLTHROUGH */
 			/* ETH_8/64_POOLs */
 			pools = conf->nb_queue_pools;
@@ -91,7 +92,7 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 			pools = max_pools;
 			break;
 		case ETH_MQ_RX_RSS:
-			pools = 1;
+			pools = bp->rx_cosq_cnt ? bp->rx_cosq_cnt : 1;
 			break;
 		default:
 			PMD_DRV_LOG(ERR, "Unsupported mq_mod %d\n",

diff --git a/drivers/net/bnxt/bnxt_vnic.h b/drivers/net/bnxt/bnxt_vnic.h
index de34b21eb8..4f760e0b08 100644
--- a/drivers/net/bnxt/bnxt_vnic.h
+++ b/drivers/net/bnxt/bnxt_vnic.h
@@ -45,6 +45,7 @@ struct bnxt_vnic_info {
 	uint16_t	cos_rule;
 	uint16_t	lb_rule;
 	uint16_t	rx_queue_cnt;
+	uint16_t	cos_queue_id;
 	bool		vlan_strip;
 	bool		func_default;
 	bool		bd_stall;

diff --git a/drivers/net/bnxt/hsi_struct_def_dpdk.h b/drivers/net/bnxt/hsi_struct_def_dpdk.h
index 26d12cf20a..c45d0883ac 100644
--- a/drivers/net/bnxt/hsi_struct_def_dpdk.h
+++ b/drivers/net/bnxt/hsi_struct_def_dpdk.h
@@ -21157,7 +21157,7 @@ struct hwrm_vnic_free_output {
  *****************/
 
-/* hwrm_vnic_cfg_input (size:320b/40B) */
+/* hwrm_vnic_cfg_input (size:384b/48B) */
 struct hwrm_vnic_cfg_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
@@ -21300,6 +21300,9 @@ struct hwrm_vnic_cfg_input {
 	 */
 	#define HWRM_VNIC_CFG_INPUT_ENABLES_DEFAULT_CMPL_RING_ID \
 		UINT32_C(0x40)
+	/* This bit must be '1' for the queue_id field to be configured. */
+	#define HWRM_VNIC_CFG_INPUT_ENABLES_QUEUE_ID \
+		UINT32_C(0x80)
 	/* Logical vnic ID */
 	uint16_t	vnic_id;
 	/*
@@ -21345,6 +21348,19 @@ struct hwrm_vnic_cfg_input {
 	 * be chosen if packet does not match any RSS rules.
 	 */
 	uint16_t	default_cmpl_ring_id;
+	/*
+	 * When specified, only incoming packets classified to the specified
+	 * CoS queue ID will be arriving on this VNIC. Packet priority to CoS
+	 * mapping rules can be specified using HWRM_QUEUE_PRI2COS_CFG. In
+	 * this mode, ntuple filters with a VNIC destination specified are
+	 * invalid since they conflict with the CoS to VNIC steering rules in
+	 * this mode.
+	 *
+	 * If this field is not specified, packet to VNIC steering will be
+	 * subject to the standard L2 filter rules and any additional ntuple
+	 * filter rules with a destination VNIC specified.
+	 */
+	uint16_t	queue_id;
+	uint8_t	unused0[6];
 } __attribute__((packed));
 
 /* hwrm_vnic_cfg_output (size:128b/16B) */
@@ -21640,6 +21656,16 @@ struct hwrm_vnic_qcaps_output {
 	 */
 	#define HWRM_VNIC_QCAPS_OUTPUT_FLAGS_OUTERMOST_RSS_CAP \
 		UINT32_C(0x80)
+	/*
+	 * When this bit is '1', it indicates that firmware supports the
+	 * ability to steer incoming packets from one CoS queue to one
+	 * VNIC. This optional feature can then be enabled using
+	 * HWRM_VNIC_CFG on any VNIC. This feature is only available when
+	 * the NVM option "enable_cos_classfication" is set to 1. If set
+	 * to '0', firmware does not support this feature.
+	 */
+	#define HWRM_VNIC_QCAPS_OUTPUT_FLAGS_COS_ASSIGNMENT_CAP \
+		UINT32_C(0x100)
 	/*
 	 * This field advertises the maximum concurrent TPA aggregations
 	 * supported by the VNIC on new devices that support TPA v2.
From patchwork Fri Oct 4 03:48:59 2019
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 60509
From: Ajit Khaparde
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Lance Richardson, stable@dpdk.org,
 Somnath Kotur, Kalesh Anakkur Purayil
Date: Thu, 3 Oct 2019 20:48:59 -0700
Message-Id: <20191004034903.85233-6-ajit.khaparde@broadcom.com>
In-Reply-To: <20191004034903.85233-1-ajit.khaparde@broadcom.com>
Subject: [dpdk-dev] [PATCH v2 5/9] net/bnxt: use common receive transmit nq ring

From: Lance Richardson

Thor queue scaling is currently limited by the number of NQs that can
be allocated. Fix by using a common NQ for all receive/transmit rings
instead of allocating a separate NQ for each ring.
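To see why this helps, compare NQ consumption under the two schemes;
the NQ limit below is hypothetical, but the scaling argument is the
patch's own:

#include <stdio.h>

int main(void)
{
	/* Hypothetical per-function NQ limit, for illustration only. */
	const int max_nqs = 128;
	int nr_rx = 256, nr_tx = 256;

	/* Before: one NQ per RX ring and one per TX ring. */
	int per_ring_nqs = nr_rx + nr_tx; /* 512 > max_nqs: allocation fails */

	/* After: a single shared NQ services every RX/TX ring. */
	int shared_nqs = 1;               /* always within the limit */

	printf("per-ring=%d shared=%d limit=%d\n",
	       per_ring_nqs, shared_nqs, max_nqs);
	return 0;
}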
Fixes: f8168ca0e690 ("net/bnxt: support thor controller")
Cc: stable@dpdk.org

Signed-off-by: Lance Richardson
Reviewed-by: Somnath Kotur
Reviewed-by: Ajit Khaparde
Reviewed-by: Kalesh Anakkur Purayil
---
 drivers/net/bnxt/bnxt.h        |   1 +
 drivers/net/bnxt/bnxt_ethdev.c |   5 ++
 drivers/net/bnxt/bnxt_hwrm.c   |   7 +--
 drivers/net/bnxt/bnxt_ring.c   | 107 ++++++++++++++++++++++-----------
 drivers/net/bnxt/bnxt_ring.h   |   2 +
 drivers/net/bnxt/bnxt_rxq.c    |   4 +-
 drivers/net/bnxt/bnxt_rxq.h    |   1 -
 drivers/net/bnxt/bnxt_rxr.c    |  27 ---------
 drivers/net/bnxt/bnxt_txq.c    |   4 +-
 drivers/net/bnxt/bnxt_txq.h    |   1 -
 drivers/net/bnxt/bnxt_txr.c    |  25 --------
 11 files changed, 84 insertions(+), 100 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 5cfe5ee2c7..ad0b18dddd 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -497,6 +497,7 @@ struct bnxt {
 
 	/* Default completion ring */
 	struct bnxt_cp_ring_info	*async_cp_ring;
+	struct bnxt_cp_ring_info	*rxtx_nq_ring;
 	uint32_t		max_ring_grps;
 	struct bnxt_ring_grp_info	*grp_info;

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 9adcd94ff8..2845e9185a 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -223,6 +223,7 @@ static void bnxt_free_mem(struct bnxt *bp, bool reconfig)
 		bnxt_free_rx_rings(bp);
 	}
 	bnxt_free_async_cp_ring(bp);
+	bnxt_free_rxtx_nq_ring(bp);
 }
 
 static int bnxt_alloc_mem(struct bnxt *bp, bool reconfig)
@@ -253,6 +254,10 @@ static int bnxt_alloc_mem(struct bnxt *bp, bool reconfig)
 	if (rc)
 		goto alloc_mem_err;
 
+	rc = bnxt_alloc_rxtx_nq_ring(bp);
+	if (rc)
+		goto alloc_mem_err;
+
 	return 0;
 
 alloc_mem_err:

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 76ef004237..b5211aea75 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -2325,11 +2325,8 @@ void bnxt_free_hwrm_rx_ring(struct bnxt *bp, int queue_index)
 		bp->grp_info[queue_index].ag_fw_ring_id = INVALID_HW_RING_ID;
 	}
 
-	if (cpr->cp_ring_struct->fw_ring_id != INVALID_HW_RING_ID) {
+	if (cpr->cp_ring_struct->fw_ring_id != INVALID_HW_RING_ID)
 		bnxt_free_cp_ring(bp, cpr);
-		if (rxq->nq_ring)
-			bnxt_free_nq_ring(bp, rxq->nq_ring);
-	}
 
 	if (BNXT_HAS_RING_GRPS(bp))
 		bp->grp_info[queue_index].cp_fw_ring_id = INVALID_HW_RING_ID;
@@ -2361,8 +2358,6 @@ int bnxt_free_all_hwrm_rings(struct bnxt *bp)
 		if (cpr->cp_ring_struct->fw_ring_id != INVALID_HW_RING_ID) {
 			bnxt_free_cp_ring(bp, cpr);
 			cpr->cp_ring_struct->fw_ring_id = INVALID_HW_RING_ID;
-			if (txq->nq_ring)
-				bnxt_free_nq_ring(bp, txq->nq_ring);
 		}
 	}

diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index cf0c24c9dc..19fc45395d 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -125,7 +125,7 @@ int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx,
 	int cp_vmem_len = RTE_CACHE_LINE_ROUNDUP(cp_ring->vmem_size);
 	cp_vmem_len = RTE_ALIGN(cp_vmem_len, 128);
 
-	int nq_vmem_len = BNXT_CHIP_THOR(bp) ?
+	int nq_vmem_len = nq_ring_info ?
 		RTE_CACHE_LINE_ROUNDUP(cp_ring->vmem_size) : 0;
 	nq_vmem_len = RTE_ALIGN(nq_vmem_len, 128);
 
@@ -159,7 +159,7 @@ int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx,
 	nq_ring_start = cp_ring_start + cp_ring_len;
 	nq_ring_start = RTE_ALIGN(nq_ring_start, 4096);
-	int nq_ring_len = BNXT_CHIP_THOR(bp) ? cp_ring_len : 0;
+	int nq_ring_len = nq_ring_info ? cp_ring_len : 0;
 
 	int tx_ring_start = nq_ring_start + nq_ring_len;
 	tx_ring_start = RTE_ALIGN(tx_ring_start, 4096);
@@ -403,12 +403,12 @@ static void bnxt_set_db(struct bnxt *bp,
 }
 
 static int bnxt_alloc_cmpl_ring(struct bnxt *bp, int queue_index,
-				struct bnxt_cp_ring_info *cpr,
-				struct bnxt_cp_ring_info *nqr)
+				struct bnxt_cp_ring_info *cpr)
 {
 	struct bnxt_ring *cp_ring = cpr->cp_ring_struct;
 	uint32_t nq_ring_id = HWRM_NA_SIGNATURE;
 	int cp_ring_index = queue_index + BNXT_NUM_ASYNC_CPR(bp);
+	struct bnxt_cp_ring_info *nqr = bp->rxtx_nq_ring;
 	uint8_t ring_type;
 	int rc = 0;
 
@@ -436,31 +436,85 @@ static int bnxt_alloc_cmpl_ring(struct bnxt *bp, int queue_index,
 	return 0;
 }
 
-static int bnxt_alloc_nq_ring(struct bnxt *bp, int queue_index,
-			      struct bnxt_cp_ring_info *nqr)
+int bnxt_alloc_rxtx_nq_ring(struct bnxt *bp)
 {
-	struct bnxt_ring *nq_ring = nqr->cp_ring_struct;
-	int nq_ring_index = queue_index + BNXT_NUM_ASYNC_CPR(bp);
+	struct bnxt_cp_ring_info *nqr;
+	struct bnxt_ring *ring;
+	int ring_index = BNXT_NUM_ASYNC_CPR(bp);
+	unsigned int socket_id;
 	uint8_t ring_type;
 	int rc = 0;
 
-	if (!BNXT_HAS_NQ(bp))
-		return -EINVAL;
+	if (!BNXT_HAS_NQ(bp) || bp->rxtx_nq_ring)
+		return 0;
+
+	socket_id = rte_lcore_to_socket_id(rte_get_master_lcore());
+
+	nqr = rte_zmalloc_socket("nqr",
+				 sizeof(struct bnxt_cp_ring_info),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (nqr == NULL)
+		return -ENOMEM;
+
+	ring = rte_zmalloc_socket("bnxt_cp_ring_struct",
+				  sizeof(struct bnxt_ring),
+				  RTE_CACHE_LINE_SIZE, socket_id);
+	if (ring == NULL) {
+		rte_free(nqr);
+		return -ENOMEM;
+	}
+
+	ring->bd = (void *)nqr->cp_desc_ring;
+	ring->bd_dma = nqr->cp_desc_mapping;
+	ring->ring_size = rte_align32pow2(DEFAULT_CP_RING_SIZE);
+	ring->ring_mask = ring->ring_size - 1;
+	ring->vmem_size = 0;
+	ring->vmem = NULL;
+
+	nqr->cp_ring_struct = ring;
+	rc = bnxt_alloc_rings(bp, 0, NULL, NULL, nqr, NULL, "l2_nqr");
+	if (rc) {
+		rte_free(ring);
+		rte_free(nqr);
+		return -ENOMEM;
+	}
 
 	ring_type = HWRM_RING_ALLOC_INPUT_RING_TYPE_NQ;
 
-	rc = bnxt_hwrm_ring_alloc(bp, nq_ring, ring_type, nq_ring_index,
+	rc = bnxt_hwrm_ring_alloc(bp, ring, ring_type, ring_index,
 				  HWRM_NA_SIGNATURE, HWRM_NA_SIGNATURE, 0);
-	if (rc)
+	if (rc) {
+		rte_free(ring);
+		rte_free(nqr);
 		return rc;
+	}
 
-	bnxt_set_db(bp, &nqr->cp_db, ring_type, nq_ring_index,
-		    nq_ring->fw_ring_id);
+	bnxt_set_db(bp, &nqr->cp_db, ring_type, ring_index,
+		    ring->fw_ring_id);
 	bnxt_db_nq(nqr);
 
+	bp->rxtx_nq_ring = nqr;
+
 	return 0;
 }
 
+/* Free RX/TX NQ ring. */
+void bnxt_free_rxtx_nq_ring(struct bnxt *bp)
+{
+	struct bnxt_cp_ring_info *nqr = bp->rxtx_nq_ring;
+
+	if (!nqr)
+		return;
+
+	bnxt_free_nq_ring(bp, nqr);
+
+	bnxt_free_ring(nqr->cp_ring_struct);
+	rte_free(nqr->cp_ring_struct);
+	nqr->cp_ring_struct = NULL;
+	rte_free(nqr);
+	bp->rxtx_nq_ring = NULL;
+}
+
 static int bnxt_alloc_rx_ring(struct bnxt *bp, int queue_index)
 {
 	struct bnxt_rx_queue *rxq = bp->rx_queues[queue_index];
@@ -529,17 +583,10 @@ int bnxt_alloc_hwrm_rx_ring(struct bnxt *bp, int queue_index)
 	struct bnxt_rx_queue *rxq = bp->rx_queues[queue_index];
 	struct bnxt_cp_ring_info *cpr = rxq->cp_ring;
 	struct bnxt_ring *cp_ring = cpr->cp_ring_struct;
-	struct bnxt_cp_ring_info *nqr = rxq->nq_ring;
 	struct bnxt_rx_ring_info *rxr = rxq->rx_ring;
 	int rc;
 
-	if (BNXT_HAS_NQ(bp)) {
-		rc = bnxt_alloc_nq_ring(bp, queue_index, nqr);
-		if (rc)
-			goto err_out;
-	}
-
-	rc = bnxt_alloc_cmpl_ring(bp, queue_index, cpr, nqr);
+	rc = bnxt_alloc_cmpl_ring(bp, queue_index, cpr);
 	if (rc)
 		goto err_out;
 
@@ -644,16 +691,10 @@ int bnxt_alloc_hwrm_rings(struct bnxt *bp)
 	for (i = 0; i < bp->rx_cp_nr_rings; i++) {
 		struct bnxt_rx_queue *rxq = bp->rx_queues[i];
 		struct bnxt_cp_ring_info *cpr = rxq->cp_ring;
-		struct bnxt_cp_ring_info *nqr = rxq->nq_ring;
 		struct bnxt_ring *cp_ring = cpr->cp_ring_struct;
 		struct bnxt_rx_ring_info *rxr = rxq->rx_ring;
 
-		if (BNXT_HAS_NQ(bp)) {
-			if (bnxt_alloc_nq_ring(bp, i, nqr))
-				goto err_out;
-		}
-
-		if (bnxt_alloc_cmpl_ring(bp, i, cpr, nqr))
+		if (bnxt_alloc_cmpl_ring(bp, i, cpr))
 			goto err_out;
 
 		if (BNXT_HAS_RING_GRPS(bp)) {
@@ -697,18 +738,12 @@ int bnxt_alloc_hwrm_rings(struct bnxt *bp)
 		struct bnxt_tx_queue *txq = bp->tx_queues[i];
 		struct bnxt_cp_ring_info *cpr = txq->cp_ring;
 		struct bnxt_ring *cp_ring = cpr->cp_ring_struct;
-		struct bnxt_cp_ring_info *nqr = txq->nq_ring;
 		struct bnxt_tx_ring_info *txr = txq->tx_ring;
 		struct bnxt_ring *ring = txr->tx_ring_struct;
 		unsigned int idx = i + bp->rx_cp_nr_rings;
 		uint16_t tx_cosq_id = 0;
 
-		if (BNXT_HAS_NQ(bp)) {
-			if (bnxt_alloc_nq_ring(bp, idx, nqr))
-				goto err_out;
-		}
-
-		if (bnxt_alloc_cmpl_ring(bp, idx, cpr, nqr))
+		if (bnxt_alloc_cmpl_ring(bp, idx, cpr))
 			goto err_out;
 
 		if (bp->vnic_cap_flags & BNXT_VNIC_CAP_COS_CLASSIFY)

diff --git a/drivers/net/bnxt/bnxt_ring.h b/drivers/net/bnxt/bnxt_ring.h
index a5d5106986..833118391b 100644
--- a/drivers/net/bnxt/bnxt_ring.h
+++ b/drivers/net/bnxt/bnxt_ring.h
@@ -78,6 +78,8 @@ int bnxt_alloc_hwrm_rings(struct bnxt *bp);
 int bnxt_alloc_async_cp_ring(struct bnxt *bp);
 void bnxt_free_async_cp_ring(struct bnxt *bp);
 int bnxt_alloc_async_ring_struct(struct bnxt *bp);
+int bnxt_alloc_rxtx_nq_ring(struct bnxt *bp);
+void bnxt_free_rxtx_nq_ring(struct bnxt *bp);
 
 static inline void bnxt_db_write(struct bnxt_db_info *db, uint32_t idx)
 {

diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
index 5d291cbafd..9439fcd1fb 100644
--- a/drivers/net/bnxt/bnxt_rxq.c
+++ b/drivers/net/bnxt/bnxt_rxq.c
@@ -341,8 +341,8 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
 	eth_dev->data->rx_queues[queue_idx] = rxq;
 	/* Allocate RX ring hardware descriptors */
-	if (bnxt_alloc_rings(bp, queue_idx, NULL, rxq, rxq->cp_ring,
-			rxq->nq_ring, "rxr")) {
+	if (bnxt_alloc_rings(bp, queue_idx, NULL, rxq, rxq->cp_ring, NULL,
+			     "rxr")) {
 		PMD_DRV_LOG(ERR,
 			    "ring_dma_zone_reserve for rx_ring failed!\n");
 		bnxt_rx_queue_release_op(rxq);

diff --git a/drivers/net/bnxt/bnxt_rxq.h b/drivers/net/bnxt/bnxt_rxq.h
--- a/drivers/net/bnxt/bnxt_rxq.h
+++ b/drivers/net/bnxt/bnxt_rxq.h
@@ -39,7 +39,6 @@ struct bnxt_rx_queue {
 	uint32_t		rx_buf_size;
 	struct bnxt_rx_ring_info	*rx_ring;
 	struct bnxt_cp_ring_info	*cp_ring;
-	struct bnxt_cp_ring_info	*nq_ring;
 	rte_atomic64_t		rx_mbuf_alloc_fail;
 	const struct rte_memzone *mz;
 };

diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 1a6fb7944b..bda4f4c1b9 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -742,7 +742,6 @@ void bnxt_free_rx_rings(struct bnxt *bp)
 int bnxt_init_rx_ring_struct(struct bnxt_rx_queue *rxq, unsigned int socket_id)
 {
 	struct bnxt_cp_ring_info *cpr;
-	struct bnxt_cp_ring_info *nqr;
 	struct bnxt_rx_ring_info *rxr;
 	struct bnxt_ring *ring;
 
@@ -789,32 +788,6 @@ int bnxt_init_rx_ring_struct(struct bnxt_rx_queue *rxq, unsigned int socket_id)
 	ring->vmem_size = 0;
 	ring->vmem = NULL;
 
-	if (BNXT_HAS_NQ(rxq->bp)) {
-		nqr = rte_zmalloc_socket("bnxt_rx_ring_cq",
-					 sizeof(struct bnxt_cp_ring_info),
-					 RTE_CACHE_LINE_SIZE, socket_id);
-		if (nqr == NULL)
-			return -ENOMEM;
-
-		rxq->nq_ring = nqr;
-
-		ring = rte_zmalloc_socket("bnxt_rx_ring_struct",
-					  sizeof(struct bnxt_ring),
-					  RTE_CACHE_LINE_SIZE, socket_id);
-		if (ring == NULL)
-			return -ENOMEM;
-
-		nqr->cp_ring_struct = ring;
-		ring->ring_size =
-			rte_align32pow2(rxr->rx_ring_struct->ring_size *
-					(2 + AGG_RING_SIZE_FACTOR));
-		ring->ring_mask = ring->ring_size - 1;
-		ring->bd = (void *)nqr->cp_desc_ring;
-		ring->bd_dma = nqr->cp_desc_mapping;
-		ring->vmem_size = 0;
-		ring->vmem = NULL;
-	}
-
 	/* Allocate Aggregator rings */
 	ring = rte_zmalloc_socket("bnxt_rx_ring_struct",
 				  sizeof(struct bnxt_ring),

diff --git a/drivers/net/bnxt/bnxt_txq.c b/drivers/net/bnxt/bnxt_txq.c
index ea20d737fe..5ad4ee155e 100644
--- a/drivers/net/bnxt/bnxt_txq.c
+++ b/drivers/net/bnxt/bnxt_txq.c
@@ -141,8 +141,8 @@ int bnxt_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
 	txq->port_id = eth_dev->data->port_id;
 
 	/* Allocate TX ring hardware descriptors */
-	if (bnxt_alloc_rings(bp, queue_idx, txq, NULL, txq->cp_ring,
-			txq->nq_ring, "txr")) {
+	if (bnxt_alloc_rings(bp, queue_idx, txq, NULL, txq->cp_ring, NULL,
+			     "txr")) {
 		PMD_DRV_LOG(ERR, "ring_dma_zone_reserve for tx_ring failed!");
 		bnxt_tx_queue_release_op(txq);
 		rc = -ENOMEM;

diff --git a/drivers/net/bnxt/bnxt_txq.h b/drivers/net/bnxt/bnxt_txq.h
index 7a442516d2..37a3f9539f 100644
--- a/drivers/net/bnxt/bnxt_txq.h
+++ b/drivers/net/bnxt/bnxt_txq.h
@@ -33,7 +33,6 @@ struct bnxt_tx_queue {
 	unsigned int		cp_nr_rings;
 	struct bnxt_cp_ring_info	*cp_ring;
-	struct bnxt_cp_ring_info	*nq_ring;
 	const struct rte_memzone *mz;
 	struct rte_mbuf **free;
 };

diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 0ed6581bed..6e2ee86c05 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -57,7 +57,6 @@ int bnxt_init_one_tx_ring(struct bnxt_tx_queue *txq)
 int bnxt_init_tx_ring_struct(struct bnxt_tx_queue *txq, unsigned int socket_id)
 {
 	struct bnxt_cp_ring_info *cpr;
-	struct bnxt_cp_ring_info *nqr;
 	struct bnxt_tx_ring_info *txr;
 	struct bnxt_ring *ring;
 
@@ -101,30 +100,6 @@ int bnxt_init_tx_ring_struct(struct bnxt_tx_queue *txq, unsigned int socket_id)
 	ring->vmem_size = 0;
 	ring->vmem = NULL;
 
-	if (BNXT_HAS_NQ(txq->bp)) {
-		nqr = rte_zmalloc_socket("bnxt_tx_ring_nq",
-					 sizeof(struct bnxt_cp_ring_info),
-					 RTE_CACHE_LINE_SIZE, socket_id);
-		if (nqr == NULL)
-			return -ENOMEM;
-
-		txq->nq_ring = nqr;
-
-		ring = rte_zmalloc_socket("bnxt_tx_ring_struct",
-					  sizeof(struct bnxt_ring),
-					  RTE_CACHE_LINE_SIZE, socket_id);
-		if (ring == NULL)
-			return -ENOMEM;
-
-		nqr->cp_ring_struct = ring;
-		ring->ring_size = txr->tx_ring_struct->ring_size;
-		ring->ring_mask = ring->ring_size - 1;
-		ring->bd = (void *)nqr->cp_desc_ring;
-		ring->bd_dma = nqr->cp_desc_mapping;
-		ring->vmem_size = 0;
-		ring->vmem = NULL;
-	}
-
 	return 0;
 }

From patchwork Fri Oct 4 03:49:00 2019
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 60507
From: Ajit Khaparde
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Lance Richardson, stable@dpdk.org, Somnath Kotur
Date: Thu, 3 Oct 2019 20:49:00 -0700
Message-Id: <20191004034903.85233-7-ajit.khaparde@broadcom.com>
In-Reply-To: <20191004034903.85233-1-ajit.khaparde@broadcom.com>
Subject: [dpdk-dev] [PATCH v2 6/9] net/bnxt: fix stats context calculation

From: Lance Richardson

The required number of statistics contexts is computed as the sum of
the number of receive and transmit rings, plus one for the async
completion ring. A statistics context is not actually required for
the async completion ring, so remove it from the calculation.
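A worked example of the before/after resource request, using a
hypothetical 4+4 ring configuration (the ring counts are placeholders,
not values from the patch):

#include <assert.h>

int main(void)
{
	int rx_nr_rings = 4, tx_nr_rings = 4; /* hypothetical configuration */
	int num_async_cpr = 1;                /* one async completion ring */

	/* Stats contexts: the async CPR no longer counts toward the total. */
	int stat_ctxs_before = rx_nr_rings + tx_nr_rings + num_async_cpr;
	int stat_ctxs_after  = rx_nr_rings + tx_nr_rings;

	/* Completion rings still include the async CPR in both cases. */
	int cmpl_rings = rx_nr_rings + tx_nr_rings + num_async_cpr;

	assert(stat_ctxs_before == 9);
	assert(stat_ctxs_after == 8);
	assert(cmpl_rings == 9);
	return 0;
}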
Fixes: bd0a14c99f65 ("net/bnxt: use dedicated CPR for async events")
Cc: stable@dpdk.org

Signed-off-by: Lance Richardson
Reviewed-by: Somnath Kotur
Reviewed-by: Ajit Khaparde
---
 drivers/net/bnxt/bnxt_hwrm.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index b5211aea75..1e65c3b80b 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -851,9 +851,7 @@ int bnxt_hwrm_func_reserve_vf_resc(struct bnxt *bp, bool test)
 	req.num_tx_rings = rte_cpu_to_le_16(bp->tx_nr_rings);
 	req.num_rx_rings = rte_cpu_to_le_16(bp->rx_nr_rings *
 					    AGG_RING_MULTIPLIER);
-	req.num_stat_ctxs = rte_cpu_to_le_16(bp->rx_nr_rings +
-					     bp->tx_nr_rings +
-					     BNXT_NUM_ASYNC_CPR(bp));
+	req.num_stat_ctxs = rte_cpu_to_le_16(bp->rx_nr_rings + bp->tx_nr_rings);
 	req.num_cmpl_rings = rte_cpu_to_le_16(bp->rx_nr_rings +
 					      bp->tx_nr_rings +
 					      BNXT_NUM_ASYNC_CPR(bp));

From patchwork Fri Oct 4 03:49:01 2019
X-Patchwork-Id: 60505
From: Ajit Khaparde
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Lance Richardson, Ajit Kumar Khaparde
Date: Thu, 3 Oct 2019 20:49:01 -0700
Message-Id: <20191004034903.85233-8-ajit.khaparde@broadcom.com>
In-Reply-To: <20191004034903.85233-1-ajit.khaparde@broadcom.com>
Subject: [dpdk-dev] [PATCH v2 7/9] net/bnxt: use correct default Rx queue for thor

From: Lance Richardson

Use the first receive queue assigned to a VNIC as the default receive
queue when configuring Thor VNICs. This is necessary, for example, for
flow redirection to a specific receive queue to work correctly.
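As a minimal sketch of the selection logic, with simplified stand-in
types (rx_queue, vnic_info, vnic_default_rxq are illustrative; the
real code reads bp->eth_dev->data->rx_queues and programs the selected
ring IDs into the VNIC configuration request):

    #include <stdint.h>

    /* Simplified stand-ins for the driver's queue and VNIC structures. */
    struct rx_queue {
            uint16_t rx_ring_id;    /* receive ring to program */
            uint16_t cp_ring_id;    /* completion ring to program */
    };

    struct vnic_info {
            uint16_t start_grp_id;  /* first queue owned by this VNIC */
    };

    /*
     * Select the default Rx queue for a VNIC: the first queue assigned
     * to that VNIC, rather than unconditionally queue 0.
     */
    static const struct rx_queue *
    vnic_default_rxq(struct rx_queue *const *rx_queues,
                     const struct vnic_info *vnic)
    {
            return rx_queues[vnic->start_grp_id];
    }

With a single VNIC, start_grp_id is 0 and the behavior is unchanged;
the difference matters when flows are redirected to a VNIC whose first
queue is not queue 0.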
Fixes: f8168ca0e690 ("net/bnxt: support thor controller")

Signed-off-by: Lance Richardson
Reviewed-by: Ajit Kumar Khaparde
---
 drivers/net/bnxt/bnxt_hwrm.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 1e65c3b80b..0d5362581e 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -1692,7 +1692,8 @@ int bnxt_hwrm_vnic_cfg(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 	HWRM_PREP(req, VNIC_CFG, BNXT_USE_CHIMP_MB);
 
 	if (BNXT_CHIP_THOR(bp)) {
-		struct bnxt_rx_queue *rxq = bp->eth_dev->data->rx_queues[0];
+		struct bnxt_rx_queue *rxq =
+			bp->eth_dev->data->rx_queues[vnic->start_grp_id];
 		struct bnxt_rx_ring_info *rxr = rxq->rx_ring;
 		struct bnxt_cp_ring_info *cpr = rxq->cp_ring;

From patchwork Fri Oct 4 03:49:02 2019
X-Patchwork-Id: 60506
From: Ajit Khaparde
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Lance Richardson, stable@dpdk.org, Somnath Kotur
Date: Thu, 3 Oct 2019 20:49:02 -0700
Message-Id: <20191004034903.85233-9-ajit.khaparde@broadcom.com>
In-Reply-To: <20191004034903.85233-1-ajit.khaparde@broadcom.com>
Subject: [dpdk-dev] [PATCH v2 8/9] net/bnxt: advertise scatter receive offload capability

From: Lance Richardson

Scattered receive is supported but was not included in the receive
offload capabilities. Fix by adding it and by taking the requested
offload into account in the scattered receive calculation.
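To make the decision rule explicit, a small sketch (simplified; the
function name use_scattered_rx and the flat buf_size array are
illustrative, where the driver compares each queue's mbuf data room
against the maximum frame length):

    #include <stdint.h>

    /*
     * Scattered Rx is needed when the application requests the offload
     * explicitly, or when any queue's buffer cannot hold a full frame.
     */
    static int
    use_scattered_rx(uint64_t rx_offloads, uint64_t scatter_flag,
                     uint16_t max_frame_len, const uint16_t *buf_size,
                     uint16_t nb_rx_queues)
    {
            uint16_t i;

            if (rx_offloads & scatter_flag)
                    return 1;

            for (i = 0; i < nb_rx_queues; i++)
                    if (buf_size[i] < max_frame_len)
                            return 1;

            return 0;
    }

The added early return corresponds to the first check: once the
application asks for DEV_RX_OFFLOAD_SCATTER, the per-queue buffer-size
comparison no longer decides the outcome.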
Fixes: 9c1507d96ab8 ("net/bnxt: switch to the new offload API")
Cc: stable@dpdk.org

Signed-off-by: Lance Richardson
Reviewed-by: Somnath Kotur
Reviewed-by: Ajit Khaparde
---
 drivers/net/bnxt/bnxt_ethdev.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 2845e9185a..5160ac002b 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -163,7 +163,8 @@ static const struct rte_pci_id bnxt_pci_id_map[] = {
 				     DEV_RX_OFFLOAD_JUMBO_FRAME | \
 				     DEV_RX_OFFLOAD_KEEP_CRC | \
 				     DEV_RX_OFFLOAD_VLAN_EXTEND | \
-				     DEV_RX_OFFLOAD_TCP_LRO)
+				     DEV_RX_OFFLOAD_TCP_LRO | \
+				     DEV_RX_OFFLOAD_SCATTER)
 
 static int bnxt_vlan_offload_set_op(struct rte_eth_dev *dev, int mask);
 static void bnxt_print_link_info(struct rte_eth_dev *eth_dev);
@@ -749,6 +750,9 @@ static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
 	uint16_t buf_size;
 	int i;
 
+	if (eth_dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER)
+		return 1;
+
 	for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
 		struct bnxt_rx_queue *rxq = eth_dev->data->rx_queues[i];
 

From patchwork Fri Oct 4 03:49:03 2019
X-Patchwork-Id: 60508
From: Ajit Khaparde
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Lance Richardson, Ajit Kumar Khaparde
Date: Thu, 3 Oct 2019 20:49:03 -0700
Message-Id: <20191004034903.85233-10-ajit.khaparde@broadcom.com>
In-Reply-To: <20191004034903.85233-1-ajit.khaparde@broadcom.com>
Subject: [dpdk-dev] [PATCH v2 9/9] net/bnxt: improve CPR handling in vector PMD

From: Lance Richardson

Reduce the overhead of CPR descriptor validity checking in the vector
receive and transmit functions. Preserve the raw CPR consumer index in
the vector transmit completion function. Remove a prefetch from the
vector transmit completion function that benchmarking showed to be
unneeded.
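The validity check used here compares a descriptor's valid bit against
a phase derived from the raw consumer index. A minimal sketch follows
(simplified: a one-bit valid field and the name cmp_valid stand in for
the driver's descriptor layout and macros):

    #include <stdint.h>

    /* Illustrative one-bit valid field; real completion descriptors
     * keep the valid bit inside a larger flags word. */
    struct cmpl {
            uint32_t info;  /* bit 0: valid/phase bit written by HW */
    };

    /*
     * With a power-of-two ring, bit log2(ring_size) of a monotonically
     * increasing raw consumer index flips on every wraparound, so a
     * descriptor is valid when its valid bit matches that phase. No
     * per-descriptor "valid" state needs to be toggled while scanning.
     */
    static int
    cmp_valid(const struct cmpl *c, uint32_t raw_cons, uint32_t ring_size)
    {
            uint32_t expected = !(raw_cons & ring_size);

            return (c->info & 1u) == expected;
    }

At loop exit the cached phase can then be recomputed once, as
!!(raw_cons & ring_size), rather than flipped descriptor by
descriptor.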
Fixes: bc4a000f2f53 ("net/bnxt: implement SSE vector mode")

Signed-off-by: Lance Richardson
Reviewed-by: Ajit Kumar Khaparde
---
 drivers/net/bnxt/bnxt_rxtx_vec_sse.c | 26 ++++----------------------
 1 file changed, 4 insertions(+), 22 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
index 029053e305..22d9f9e84a 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
@@ -245,10 +245,6 @@ bnxt_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 		if (!CMP_VALID(rxcmp, raw_cons, cpr->cp_ring_struct))
 			break;
 
-		cpr->valid = FLIP_VALID(cons,
-					cpr->cp_ring_struct->ring_mask,
-					cpr->valid);
-
 		if (likely(CMP_TYPE(rxcmp) == RX_PKT_CMPL_TYPE_RX_L2)) {
 			struct rx_pkt_cmpl_hi *rxcmp1;
 			uint32_t tmp_raw_cons;
@@ -272,10 +268,6 @@ bnxt_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 			rte_prefetch0(mbuf);
 			rxr->rx_buf_ring[cons].mbuf = NULL;
 
-			cpr->valid = FLIP_VALID(cp_cons,
-						cpr->cp_ring_struct->ring_mask,
-						cpr->valid);
-
 			/* Set constant fields from mbuf initializer. */
 			_mm_store_si128((__m128i *)&mbuf->rearm_data,
 					mbuf_init);
@@ -318,22 +310,13 @@ bnxt_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 	rxq->rxrearm_nb += nb_rx_pkts;
 	cpr->cp_raw_cons = raw_cons;
+	cpr->valid = !!(cpr->cp_raw_cons & cpr->cp_ring_struct->ring_size);
 	if (nb_rx_pkts || evt)
 		bnxt_db_cq(cpr);
 
 	return nb_rx_pkts;
 }
 
-static inline void bnxt_next_cmpl(struct bnxt_cp_ring_info *cpr, uint32_t *idx,
-				  bool *v, uint32_t inc)
-{
-	*idx += inc;
-	if (unlikely(*idx == cpr->cp_ring_struct->ring_size)) {
-		*v = !*v;
-		*idx = 0;
-	}
-}
-
 static void
 bnxt_tx_cmp_vec(struct bnxt_tx_queue *txq, int nr_pkts)
 {
@@ -379,10 +362,8 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
 		cons = RING_CMPL(ring_mask, raw_cons);
 		txcmp = (struct tx_cmpl *)&cp_desc_ring[cons];
 
-		if (!CMPL_VALID(txcmp, cpr->valid))
+		if (!CMP_VALID(txcmp, raw_cons, cp_ring_struct))
 			break;
-		bnxt_next_cmpl(cpr, &cons, &cpr->valid, 1);
-		rte_prefetch0(&cp_desc_ring[cons]);
 
 		if (likely(CMP_TYPE(txcmp) == TX_CMPL_TYPE_TX_L2))
 			nb_tx_pkts += txcmp->opaque;
@@ -390,9 +371,10 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
 			RTE_LOG_DP(ERR, PMD, "Unhandled CMP type %02x\n",
 				   CMP_TYPE(txcmp));
 
-		raw_cons = cons;
+		raw_cons = NEXT_RAW_CMP(raw_cons);
 	} while (nb_tx_pkts < ring_mask);
 
+	cpr->valid = !!(raw_cons & cp_ring_struct->ring_size);
 	if (nb_tx_pkts) {
 		bnxt_tx_cmp_vec(txq, nb_tx_pkts);
 		cpr->cp_raw_cons = raw_cons;
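For context, the loop shape these changes converge on keeps only a
monotonically increasing raw consumer index and derives both the ring
slot and the expected phase from it. A simplified sketch (RING_SIZE,
drain_completions, and the one-bit valid field are illustrative, not
the driver's exact macros):

    #include <stdint.h>

    #define RING_SIZE 256u                  /* must be a power of two */
    #define RING_MASK (RING_SIZE - 1u)

    struct cmpl {
            uint32_t info;                  /* bit 0: valid/phase bit */
            uint32_t opaque;
    };

    /* Drain valid completions; return the updated raw consumer index. */
    static uint32_t
    drain_completions(const struct cmpl *ring, uint32_t raw_cons)
    {
            for (;;) {
                    const struct cmpl *c = &ring[raw_cons & RING_MASK];
                    uint32_t phase = !(raw_cons & RING_SIZE);

                    if ((c->info & 1u) != phase)
                            break;          /* not yet written by HW */
                    /* ... process c->opaque here ... */
                    raw_cons++;             /* analogous to NEXT_RAW_CMP() */
            }
            return raw_cons;
    }

Because the slot index and the phase both fall out of raw_cons, no
separate wrap handling or cached valid flag is needed inside the loop.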