From patchwork Wed Dec 27 04:21:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 135589 X-Patchwork-Delegate: ajit.khaparde@broadcom.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 3F21F437A1; Wed, 27 Dec 2023 05:21:33 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 67806402E2; Wed, 27 Dec 2023 05:21:29 +0100 (CET) Received: from mail-qt1-f178.google.com (mail-qt1-f178.google.com [209.85.160.178]) by mails.dpdk.org (Postfix) with ESMTP id 0572E402D4 for ; Wed, 27 Dec 2023 05:21:27 +0100 (CET) Received: by mail-qt1-f178.google.com with SMTP id d75a77b69052e-427b1bf1896so35393711cf.3 for ; Tue, 26 Dec 2023 20:21:27 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; t=1703650886; x=1704255686; darn=dpdk.org; h=mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:from:to:cc:subject:date:message-id:reply-to; bh=6dKa0pLpkzftUTo0mqNwA03p0/8U/DeVQDJjuuT6np8=; b=Nn8jMZIsRLyxk9SNd3GtDtBb/oXyEw7DvUT/U6plZPhikm1YhrvAHQf1/Y9whAgWNG 1nOx4ybKf5ZHiXZMyjpCSkQ1o772FSaEFkqbeI39Sfb6/ui0WjEIqg8C3B90OGwLZTFv tDLmHLcd/uzHrYvBgof62LodEiCBQHMebInfs= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1703650886; x=1704255686; h=mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=6dKa0pLpkzftUTo0mqNwA03p0/8U/DeVQDJjuuT6np8=; b=xFO2JJnMtb1PlMze/jUk5SDrYCpA2fEn52kDgzJp2uaHea2tyQjvOCN9tQI31b3Wyy ZpEJawXp//W/JN8zzwTsEqOPHLT929LqMSeHPIHDXjS24SBMfOcHaH1b8ON4TLoXlv+C sFsfFh4ZxZYspH0d050TmcnzJFLhVkVhyTCkMw8PHK/EAthSz1ZpiORxM1K8zinvJ8R1 EdRNpypL3IFVvwQQeDPVeDGr0qhjnqji4OuIPfJqU+Te4J2hi3WiM53wOc0sccj892Xb 2b9c3EpGVTEMnlUnLhSba7UCbKuIKk9APaSksWhSU/fByMGHjKWybqtUKlZ/NeMRdlSq Nmhg== X-Gm-Message-State: AOJu0YxxWNQ+NOLdsJabUmfeLUWGQXFK8MZ9Tz3IEN2ETmsBV3dl8QDy r4JHJgdS0c/Nfa7idzlVFaSu6Pak3lvrgdBciit8Ru0xEIENL9zF0f0PDmsbppNFz3TtcZuHr1i 6C/T9UuSsDnLems5IIVAp0SiridR4jPDOt+h1PcKjVU2k23E1pohY08y7QxFSE57qyxR8NAeGWn 4= X-Google-Smtp-Source: AGHT+IHLVYTFVZuhe0xtaAtgZ07dIESdgkwzyznXnxBiT4Pq2mzqj1redBnWHLZ3FYHTfdT+M49SKQ== X-Received: by 2002:a05:622a:3cc:b0:427:7a3e:57b3 with SMTP id k12-20020a05622a03cc00b004277a3e57b3mr11255390qtx.93.1703650886005; Tue, 26 Dec 2023 20:21:26 -0800 (PST) Received: from localhost.localdomain ([2605:a601:a780:1400:c066:75e3:74c8:50e6]) by smtp.gmail.com with ESMTPSA id bt7-20020ac86907000000b00427e120889bsm1415488qtb.91.2023.12.26.20.21.24 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 26 Dec 2023 20:21:25 -0800 (PST) From: Ajit Khaparde To: dev@dpdk.org Cc: Damodharam Ammepalli Subject: [PATCH v3 01/18] net/bnxt: add support for UDP GSO Date: Tue, 26 Dec 2023 20:21:02 -0800 Message-Id: <20231227042119.72469-2-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.39.2 (Apple Git-143) In-Reply-To: <20231227042119.72469-1-ajit.khaparde@broadcom.com> References: <20231227042119.72469-1-ajit.khaparde@broadcom.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org UDP GSO aka UDP Fragmentation Offload allows an 
application or stack to provide a data payload larger than the MTU.
The application then updates the mbuf ol_flags and sets the
RTE_MBUF_F_TX_UDP_SEG flag. Based on the tso_segsz field in the mbuf,
the PMD can then indicate the UDP GSO transmit request to the hardware.

This feature is supported on Thor2 and will be enabled when the
firmware indicates UDP GSO support via HWRM_FUNC_QCAPS.

Signed-off-by: Ajit Khaparde
Reviewed-by: Damodharam Ammepalli
---
 drivers/net/bnxt/bnxt.h      | 1 +
 drivers/net/bnxt/bnxt_hwrm.c | 2 ++
 drivers/net/bnxt/bnxt_txq.c  | 2 ++
 drivers/net/bnxt/bnxt_txr.c  | 7 ++++++-
 4 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 7aed4c3da3..4b5c2c4b8f 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -801,6 +801,7 @@ struct bnxt {
 	(BNXT_CHIP_P5_P7((bp)) && \
 	 (bp)->hwrm_spec_code >= HWRM_VERSION_1_9_2 && \
 	 !BNXT_VF((bp)))
+#define BNXT_FW_CAP_UDP_GSO	BIT(13)
 #define BNXT_TRUFLOW_EN(bp)	((bp)->fw_cap & BNXT_FW_CAP_TRUFLOW_EN &&\
 				 (bp)->app_id != 0xFF)

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index e56f7693af..37cf179938 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -950,6 +950,8 @@ static int __bnxt_hwrm_func_qcaps(struct bnxt *bp)
 	flags_ext2 = rte_le_to_cpu_32(resp->flags_ext2);
 	if (flags_ext2 & HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_RX_ALL_PKTS_TIMESTAMPS_SUPPORTED)
 		bp->fw_cap |= BNXT_FW_CAP_RX_ALL_PKT_TS;
+	if (flags_ext2 & HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_UDP_GSO_SUPPORTED)
+		bp->fw_cap |= BNXT_FW_CAP_UDP_GSO;

 unlock:
 	HWRM_UNLOCK();

diff --git a/drivers/net/bnxt/bnxt_txq.c b/drivers/net/bnxt/bnxt_txq.c
index 4df4604975..f99ad211db 100644
--- a/drivers/net/bnxt/bnxt_txq.c
+++ b/drivers/net/bnxt/bnxt_txq.c
@@ -42,6 +42,8 @@ uint64_t bnxt_get_tx_port_offloads(struct bnxt *bp)
 		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
 	if (BNXT_TUNNELED_OFFLOADS_CAP_IPINIP_EN(bp))
 		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO;
+	if (bp->fw_cap & BNXT_FW_CAP_UDP_GSO)
+		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_UDP_TSO;

 	return tx_offload_capa;
 }

diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 899986764f..38da2d2829 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -123,6 +123,10 @@ bnxt_xmit_need_long_bd(struct rte_mbuf *tx_pkt, struct bnxt_tx_queue *txq)
 	return false;
 }

+/* Used for verifying TSO segments during TCP Segmentation Offload or
+ * UDP Fragmentation Offload. tx_pkt->tso_segsz stores the size of the
+ * segments or fragments in those cases.
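+ *
+ * Illustration (not part of this patch; the names below are the
+ * standard DPDK mbuf API): an application would request UDP GSO on a
+ * prepared packet roughly as follows, assuming the port was configured
+ * with the RTE_ETH_TX_OFFLOAD_UDP_TSO offload and the length fields
+ * match the actual packet headers:
+ *
+ *   m->ol_flags |= RTE_MBUF_F_TX_UDP_SEG | RTE_MBUF_F_TX_IPV4 |
+ *                  RTE_MBUF_F_TX_IP_CKSUM;
+ *   m->l2_len = sizeof(struct rte_ether_hdr);
+ *   m->l3_len = sizeof(struct rte_ipv4_hdr);
+ *   m->l4_len = sizeof(struct rte_udp_hdr);
+ *   m->tso_segsz = 1432; /* example: payload bytes per fragment */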
+ */ static bool bnxt_zero_data_len_tso_segsz(struct rte_mbuf *tx_pkt, uint8_t data_len_chk) { @@ -308,7 +312,8 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt, else txbd1->cfa_action = txq->bp->tx_cfa_action; - if (tx_pkt->ol_flags & RTE_MBUF_F_TX_TCP_SEG) { + if (tx_pkt->ol_flags & RTE_MBUF_F_TX_TCP_SEG || + tx_pkt->ol_flags & RTE_MBUF_F_TX_UDP_SEG) { uint16_t hdr_size; /* TSO */ From patchwork Wed Dec 27 04:21:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 135590 X-Patchwork-Delegate: ajit.khaparde@broadcom.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id C735C437A1; Wed, 27 Dec 2023 05:21:43 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id F342D402EE; Wed, 27 Dec 2023 05:21:31 +0100 (CET) Received: from mail-qt1-f172.google.com (mail-qt1-f172.google.com [209.85.160.172]) by mails.dpdk.org (Postfix) with ESMTP id EFB63402D4 for ; Wed, 27 Dec 2023 05:21:28 +0100 (CET) Received: by mail-qt1-f172.google.com with SMTP id d75a77b69052e-427e1a9cc12so6115281cf.2 for ; Tue, 26 Dec 2023 20:21:28 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; t=1703650888; x=1704255688; darn=dpdk.org; h=mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:from:to:cc:subject:date:message-id:reply-to; bh=fDu0jUb2ILbhMJ54f+rUt6EXDk//lWoR6MKUS7kWc5g=; b=VKPzemtGtGslt4YuKxOo6sHTDeFjtktYop1IVxu2yLOjXh6wTVlNDJUC2A1tiO3CRh ALWsB8dliJWfdeS8IpBx9BpeO4JjrlqSLOdY44qsqx6XBMTmUROEL9Kbxtk1FFKAXufq ySLh/BQv4+5gEnleQbVpkKLXCDKxJhihgkqLI= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1703650888; x=1704255688; h=mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=fDu0jUb2ILbhMJ54f+rUt6EXDk//lWoR6MKUS7kWc5g=; b=XUf+eZrakKgyjxsybtVd/uXCcizp4mjzI7iP5oQ1bEoVxUShh0jwLCz5nT9Y7FEs0U a5AJOFklzjt4ig7QMEVx1mWGN+feoMX0+o0CKd7em0VpuJMJ8FnLp9SH7M1deH/dmN4w ZJDtup7O68QzN6aUjmn771beOiV7zRd5mGwW2ZGh3DhNP2IJXT3QXMY3XjCIZL3ha2yV XH+1M5QNvzkOYfdFFxjTiyXl6PqfY0H4TDA4FTarY2KF8bhFdn3vlEcD0oCnkF3OlB6c pO7I2AABx1VYuQLCWXS5tlMD8CRI6q5UbHkoWRPQCuhfkkCuPMyPtLM2N4tHrthvHk+8 TPWA== X-Gm-Message-State: AOJu0Yx0VGOppAvtco9bJ2V3mx2xNM+XQIqRIGIuYnzlgMCgH0S0V9X3 PzyLlRl0ApeeGX5hwBffIcbssNmIaiEoJoh0DwbQyYPA5S97zXCfWMtgoZQ/Ss2YOyhun8v2f7Z ma2x5Ze9sdoHh0M2d4xqKXuLmcnuXyCuFO6CpUC6J9gyvkbpzsBrxivuwLD/XI4yDD00jckb8oR 4= X-Google-Smtp-Source: AGHT+IHAscCR5/H6zfBRyQmSsYhQj1cA1JSd3u5PI03eaHfuIhZ3TprLt8v2jPbgYGhhzCgU90Qg8Q== X-Received: by 2002:a05:622a:1746:b0:427:eae2:a054 with SMTP id l6-20020a05622a174600b00427eae2a054mr924138qtk.102.1703650887579; Tue, 26 Dec 2023 20:21:27 -0800 (PST) Received: from localhost.localdomain ([2605:a601:a780:1400:c066:75e3:74c8:50e6]) by smtp.gmail.com with ESMTPSA id bt7-20020ac86907000000b00427e120889bsm1415488qtb.91.2023.12.26.20.21.26 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 26 Dec 2023 20:21:26 -0800 (PST) From: Ajit Khaparde To: dev@dpdk.org Cc: Somnath Kotur , Damodharam Ammepalli Subject: [PATCH v3 02/18] net/bnxt: add support for compressed Rx CQE Date: Tue, 26 Dec 2023 20:21:03 -0800 Message-Id: <20231227042119.72469-3-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.39.2 (Apple 
Git-143) In-Reply-To: <20231227042119.72469-1-ajit.khaparde@broadcom.com> References: <20231227042119.72469-1-ajit.khaparde@broadcom.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Thor2 supports compressed Rx completions instead of the full featured 32-byte Rx completions. Add support for these compressed CQEs in scalar mode. Unlike in the typical Rx completions, the hardware does not provide the opaque field to index into the aggregator descriptor ring. So maintain the consumer index for the aggregation ring in the driver. Signed-off-by: Ajit Khaparde Reviewed-by: Somnath Kotur Reviewed-by: Damodharam Ammepalli --- drivers/net/bnxt/bnxt.h | 17 +++ drivers/net/bnxt/bnxt_ethdev.c | 51 +++++++++ drivers/net/bnxt/bnxt_hwrm.c | 16 +++ drivers/net/bnxt/bnxt_ring.c | 13 ++- drivers/net/bnxt/bnxt_rxr.c | 201 +++++++++++++++++++++++++++++++++ drivers/net/bnxt/bnxt_rxr.h | 55 +++++++++ 6 files changed, 352 insertions(+), 1 deletion(-) diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h index 4b5c2c4b8f..cfdbfd3f54 100644 --- a/drivers/net/bnxt/bnxt.h +++ b/drivers/net/bnxt/bnxt.h @@ -782,6 +782,7 @@ struct bnxt { #define BNXT_MULTIROOT_EN(bp) \ ((bp)->flags2 & BNXT_FLAGS2_MULTIROOT_EN) +#define BNXT_FLAGS2_COMPRESSED_RX_CQE BIT(5) uint32_t fw_cap; #define BNXT_FW_CAP_HOT_RESET BIT(0) #define BNXT_FW_CAP_IF_CHANGE BIT(1) @@ -814,6 +815,7 @@ struct bnxt { #define BNXT_VNIC_CAP_VLAN_RX_STRIP BIT(3) #define BNXT_RX_VLAN_STRIP_EN(bp) ((bp)->vnic_cap_flags & BNXT_VNIC_CAP_VLAN_RX_STRIP) #define BNXT_VNIC_CAP_OUTER_RSS_TRUSTED_VF BIT(4) +#define BNXT_VNIC_CAP_L2_CQE_MODE BIT(8) unsigned int rx_nr_rings; unsigned int rx_cp_nr_rings; unsigned int rx_num_qs_per_vnic; @@ -1013,6 +1015,21 @@ inline uint16_t bnxt_max_rings(struct bnxt *bp) return max_rings; } +static inline bool +bnxt_compressed_rx_cqe_mode_enabled(struct bnxt *bp) +{ + uint64_t rx_offloads = bp->eth_dev->data->dev_conf.rxmode.offloads; + + if (bp->vnic_cap_flags & BNXT_VNIC_CAP_L2_CQE_MODE && + bp->flags2 & BNXT_FLAGS2_COMPRESSED_RX_CQE && + !(rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) && + !(rx_offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) && + !bp->num_reps && !bp->ieee_1588) + return true; + + return false; +} + #define BNXT_FC_TIMER 1 /* Timer freq in Sec Flow Counters */ /** diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index 75e968394f..0f1c4326c4 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -103,6 +103,7 @@ static const struct rte_pci_id bnxt_pci_id_map[] = { #define BNXT_DEVARG_REP_FC_F2R "rep-fc-f2r" #define BNXT_DEVARG_APP_ID "app-id" #define BNXT_DEVARG_IEEE_1588 "ieee-1588" +#define BNXT_DEVARG_CQE_MODE "cqe-mode" static const char *const bnxt_dev_args[] = { BNXT_DEVARG_REPRESENTOR, @@ -116,9 +117,15 @@ static const char *const bnxt_dev_args[] = { BNXT_DEVARG_REP_FC_F2R, BNXT_DEVARG_APP_ID, BNXT_DEVARG_IEEE_1588, + BNXT_DEVARG_CQE_MODE, NULL }; +/* + * cqe-mode = an non-negative 8-bit number + */ +#define BNXT_DEVARG_CQE_MODE_INVALID(val) ((val) > 1) + /* * app-id = an non-negative 8-bit number */ @@ -5706,6 +5713,43 @@ bnxt_parse_devarg_max_num_kflows(__rte_unused const char *key, return 0; } +static int +bnxt_parse_devarg_cqe_mode(__rte_unused const char *key, + const char *value, void *opaque_arg) +{ + struct bnxt *bp = opaque_arg; + unsigned long cqe_mode; + char 
*end = NULL; + + if (!value || !opaque_arg) { + PMD_DRV_LOG(ERR, + "Invalid parameter passed to cqe-mode " + "devargs.\n"); + return -EINVAL; + } + + cqe_mode = strtoul(value, &end, 10); + if (end == NULL || *end != '\0' || + (cqe_mode == ULONG_MAX && errno == ERANGE)) { + PMD_DRV_LOG(ERR, + "Invalid parameter passed to cqe-mode " + "devargs.\n"); + return -EINVAL; + } + + if (BNXT_DEVARG_CQE_MODE_INVALID(cqe_mode)) { + PMD_DRV_LOG(ERR, "Invalid cqe-mode(%d) devargs.\n", + (uint16_t)cqe_mode); + return -EINVAL; + } + + if (cqe_mode == 1) + bp->flags2 |= BNXT_FLAGS2_COMPRESSED_RX_CQE; + PMD_DRV_LOG(INFO, "cqe-mode=%d feature enabled.\n", (uint8_t)cqe_mode); + + return 0; +} + static int bnxt_parse_devarg_app_id(__rte_unused const char *key, const char *value, void *opaque_arg) @@ -6047,6 +6091,13 @@ bnxt_parse_dev_args(struct bnxt *bp, struct rte_devargs *devargs) rte_kvargs_process(kvlist, BNXT_DEVARG_IEEE_1588, bnxt_parse_devarg_ieee_1588, bp); + /* + * Handler for "cqe-mode" devarg. + * Invoked as for ex: "-a 000:00:0d.0,cqe-mode=1" + */ + rte_kvargs_process(kvlist, BNXT_DEVARG_CQE_MODE, + bnxt_parse_devarg_cqe_mode, bp); + rte_kvargs_free(kvlist); return ret; } diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c index 37cf179938..378be997d3 100644 --- a/drivers/net/bnxt/bnxt_hwrm.c +++ b/drivers/net/bnxt/bnxt_hwrm.c @@ -2228,6 +2228,12 @@ int bnxt_hwrm_vnic_cfg(struct bnxt *bp, struct bnxt_vnic_info *vnic) req.lb_rule = rte_cpu_to_le_16(vnic->lb_rule); config_mru: + if (bnxt_compressed_rx_cqe_mode_enabled(bp)) { + req.l2_cqe_mode = HWRM_VNIC_CFG_INPUT_L2_CQE_MODE_COMPRESSED; + enables |= HWRM_VNIC_CFG_INPUT_ENABLES_L2_CQE_MODE; + PMD_DRV_LOG(DEBUG, "Enabling compressed Rx CQE\n"); + } + req.enables = rte_cpu_to_le_32(enables); req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id); req.mru = rte_cpu_to_le_16(vnic->mru); @@ -2604,6 +2610,16 @@ int bnxt_hwrm_vnic_tpa_cfg(struct bnxt *bp, struct hwrm_vnic_tpa_cfg_input req = {.req_type = 0 }; struct hwrm_vnic_tpa_cfg_output *resp = bp->hwrm_cmd_resp_addr; + if (bnxt_compressed_rx_cqe_mode_enabled(bp)) { + /* Don't worry if disabling TPA */ + if (!enable) + return 0; + + /* Return an error if enabling TPA w/ compressed Rx CQE. */ + PMD_DRV_LOG(ERR, "No HW support for LRO with compressed Rx\n"); + return -ENOTSUP; + } + if ((BNXT_CHIP_P5(bp) || BNXT_CHIP_P7(bp)) && !bp->max_tpa_v2) { if (enable) PMD_DRV_LOG(ERR, "No HW support for LRO\n"); diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c index 90cad6c9c6..4bf0b9c6ed 100644 --- a/drivers/net/bnxt/bnxt_ring.c +++ b/drivers/net/bnxt/bnxt_ring.c @@ -573,6 +573,7 @@ static int bnxt_alloc_rx_agg_ring(struct bnxt *bp, int queue_index) return rc; rxr->ag_raw_prod = 0; + rxr->ag_cons = 0; if (BNXT_HAS_RING_GRPS(bp)) bp->grp_info[queue_index].ag_fw_ring_id = ring->fw_ring_id; bnxt_set_db(bp, &rxr->ag_db, ring_type, map_idx, ring->fw_ring_id, @@ -595,7 +596,17 @@ int bnxt_alloc_hwrm_rx_ring(struct bnxt *bp, int queue_index) * Storage for the cp ring is allocated based on worst-case * usage, the actual size to be used by hw is computed here. */ - cp_ring->ring_size = rxr->rx_ring_struct->ring_size * 2; + if (bnxt_compressed_rx_cqe_mode_enabled(bp)) { + if (bnxt_need_agg_ring(bp->eth_dev)) + /* Worst case scenario, needed to accommodate Rx flush + * completion during RING_FREE. 
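+	 *
+	 * Editorial note, inferred from the commit message and not part
+	 * of the original patch: a full-featured Rx completion is 32
+	 * bytes (two CQ entries) while a compressed completion occupies
+	 * a single entry. For example, with an Rx ring of 1024 entries
+	 * the CP ring needs 2048 entries in full CQE mode, but only
+	 * 1024 in compressed mode when no aggregation ring can post
+	 * extra completions.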
+ */ + cp_ring->ring_size = rxr->rx_ring_struct->ring_size * 2; + else + cp_ring->ring_size = rxr->rx_ring_struct->ring_size; + } else { + cp_ring->ring_size = rxr->rx_ring_struct->ring_size * 2; + } if (bnxt_need_agg_ring(bp->eth_dev)) cp_ring->ring_size *= AGG_RING_SIZE_FACTOR; diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c index 59ea0121de..b919922a64 100644 --- a/drivers/net/bnxt/bnxt_rxr.c +++ b/drivers/net/bnxt/bnxt_rxr.c @@ -907,6 +907,203 @@ void bnxt_set_mark_in_mbuf(struct bnxt *bp, mbuf->ol_flags |= RTE_MBUF_F_RX_FDIR | RTE_MBUF_F_RX_FDIR_ID; } +static void +bnxt_set_ol_flags_crx(struct bnxt_rx_ring_info *rxr, + struct rx_pkt_compress_cmpl *rxcmp, + struct rte_mbuf *mbuf) +{ + uint16_t flags_type, errors, flags; + uint16_t cserr, tmp; + uint64_t ol_flags; + + flags_type = rte_le_to_cpu_16(rxcmp->flags_type); + + cserr = rte_le_to_cpu_16(rxcmp->metadata1_cs_error_calc_v1) & + (RX_PKT_COMPRESS_CMPL_CS_ERROR_CALC_MASK | + BNXT_RXC_METADATA1_VLAN_VALID); + + flags = cserr & BNXT_CRX_CQE_CSUM_CALC_MASK; + tmp = flags; + + /* Set tunnel frame indicator. + * This is to correctly index into the flags_err table. + */ + flags |= (flags & BNXT_CRX_TUN_CS_CALC) ? BNXT_PKT_CMPL_T_IP_CS_CALC << 3 : 0; + + flags = flags >> BNXT_CRX_CQE_CSUM_CALC_SFT; + + errors = cserr & BNXT_CRX_CQE_CSUM_ERROR_MASK; + errors = (errors >> RX_PKT_COMPRESS_CMPL_CS_ERROR_CALC_SFT) & flags; + + ol_flags = rxr->ol_flags_table[flags & ~errors]; + + if (unlikely(errors)) { + /* Set tunnel frame indicator. + * This is to correctly index into the flags_err table. + */ + errors |= (tmp & BNXT_CRX_TUN_CS_CALC) ? BNXT_PKT_CMPL_T_IP_CS_CALC << 2 : 0; + ol_flags |= rxr->ol_flags_err_table[errors]; + } + + if (flags_type & RX_PKT_COMPRESS_CMPL_FLAGS_RSS_VALID) { + mbuf->hash.rss = rte_le_to_cpu_32(rxcmp->rss_hash); + ol_flags |= RTE_MBUF_F_RX_RSS_HASH; + } + +#ifdef RTE_LIBRTE_IEEE1588 + /* TODO: TIMESTAMP flags need to be parsed and set. */ +#endif + + mbuf->ol_flags = ol_flags; +} + +static uint32_t +bnxt_parse_pkt_type_crx(struct rx_pkt_compress_cmpl *rxcmp) +{ + uint16_t flags_type, meta_cs; + uint8_t index; + + flags_type = rte_le_to_cpu_16(rxcmp->flags_type); + meta_cs = rte_le_to_cpu_16(rxcmp->metadata1_cs_error_calc_v1); + + /* Validate ptype table indexing at build time. */ + /* TODO */ + /* bnxt_check_ptype_constants(); */ + + /* + * Index format: + * bit 0: Set if IP tunnel encapsulated packet. + * bit 1: Set if IPv6 packet, clear if IPv4. + * bit 2: Set if VLAN tag present. + * bits 3-6: Four-bit hardware packet type field. + */ + index = BNXT_CMPL_ITYPE_TO_IDX(flags_type) | + BNXT_CMPL_VLAN_TUN_TO_IDX_CRX(meta_cs) | + BNXT_CMPL_IP_VER_TO_IDX(flags_type); + + return bnxt_ptype_table[index]; +} + +static int bnxt_rx_pages_crx(struct bnxt_rx_queue *rxq, struct rte_mbuf *mbuf, + uint32_t *tmp_raw_cons, uint8_t agg_buf) +{ + struct bnxt_cp_ring_info *cpr = rxq->cp_ring; + struct bnxt_rx_ring_info *rxr = rxq->rx_ring; + int i; + uint16_t cp_cons, ag_cons; + struct rx_pkt_compress_cmpl *rxcmp; + struct rte_mbuf *last = mbuf; + + for (i = 0; i < agg_buf; i++) { + struct rte_mbuf **ag_buf; + struct rte_mbuf *ag_mbuf; + + *tmp_raw_cons = NEXT_RAW_CMP(*tmp_raw_cons); + cp_cons = RING_CMP(cpr->cp_ring_struct, *tmp_raw_cons); + rxcmp = (struct rx_pkt_compress_cmpl *)&cpr->cp_desc_ring[cp_cons]; + +#ifdef BNXT_DEBUG + bnxt_dump_cmpl(cp_cons, rxcmp); +#endif + + /* + * The consumer index aka the opaque field for the agg buffers + * is not * available in errors_agg_bufs_opaque. 
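+	 * (The compressed completion packs the error bits, the agg
+	 * buffer count and the Rx ring opaque index into the single
+	 * errors_agg_bufs_opaque word, so no per-aggregation-buffer
+	 * opaque index is echoed back.)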
So maintain it + * in driver itself. + */ + ag_cons = rxr->ag_cons; + ag_buf = &rxr->ag_buf_ring[ag_cons]; + ag_mbuf = *ag_buf; + + ag_mbuf->data_len = rte_le_to_cpu_16(rxcmp->len); + + mbuf->nb_segs++; + mbuf->pkt_len += ag_mbuf->data_len; + + last->next = ag_mbuf; + last = ag_mbuf; + + *ag_buf = NULL; + /* + * As aggregation buffer consumed out of order in TPA module, + * use bitmap to track freed slots to be allocated and notified + * to NIC. TODO: Is this needed. Most likely not. + */ + rte_bitmap_set(rxr->ag_bitmap, ag_cons); + rxr->ag_cons = RING_IDX(rxr->ag_ring_struct, RING_NEXT(ag_cons)); + } + last->next = NULL; + bnxt_prod_ag_mbuf(rxq); + return 0; +} + +static int bnxt_crx_pkt(struct rte_mbuf **rx_pkt, + struct bnxt_rx_queue *rxq, + struct rx_pkt_compress_cmpl *rxcmp, + uint32_t *raw_cons) +{ + struct bnxt_cp_ring_info *cpr = rxq->cp_ring; + struct bnxt_rx_ring_info *rxr = rxq->rx_ring; + uint32_t tmp_raw_cons = *raw_cons; + uint16_t cons, raw_prod; + struct rte_mbuf *mbuf; + int rc = 0; + uint8_t agg_buf = 0; + + agg_buf = BNXT_CRX_CQE_AGG_BUFS(rxcmp); + /* + * Since size of rx_pkt_cmpl is same as rx_pkt_compress_cmpl, + * we should be able to use bnxt_agg_bufs_valid to check if AGG + * bufs are valid when using compressed CQEs. + * All we want to check here is if the CQE is valid and the + * location of valid bit is same irrespective of the CQE type. + */ + if (agg_buf && !bnxt_agg_bufs_valid(cpr, agg_buf, tmp_raw_cons)) + return -EBUSY; + + raw_prod = rxr->rx_raw_prod; + + cons = rxcmp->errors_agg_bufs_opaque & BNXT_CRX_CQE_OPAQUE_MASK; + mbuf = bnxt_consume_rx_buf(rxr, cons); + if (mbuf == NULL) + return -EBUSY; + + mbuf->data_off = RTE_PKTMBUF_HEADROOM; + mbuf->nb_segs = 1; + mbuf->next = NULL; + mbuf->pkt_len = rxcmp->len; + mbuf->data_len = mbuf->pkt_len; + mbuf->port = rxq->port_id; + +#ifdef RTE_LIBRTE_IEEE1588 + /* TODO: Add timestamp support. 
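+	 * One possible shape, stated as an assumption rather than the
+	 * actual plan: register the dynamic Rx timestamp field once via
+	 * rte_mbuf_dyn_rx_timestamp_register(&ts_off, &ts_flag), then
+	 * per packet:
+	 *   *RTE_MBUF_DYNFIELD(mbuf, ts_off, rte_mbuf_timestamp_t *) = ts;
+	 *   mbuf->ol_flags |= ts_flag;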
*/ +#endif + + bnxt_set_ol_flags_crx(rxr, rxcmp, mbuf); + mbuf->packet_type = bnxt_parse_pkt_type_crx(rxcmp); + bnxt_set_vlan_crx(rxcmp, mbuf); + + if (bnxt_alloc_rx_data(rxq, rxr, raw_prod)) { + PMD_DRV_LOG(ERR, "mbuf alloc failed with prod=0x%x\n", + raw_prod); + rc = -ENOMEM; + goto rx; + } + raw_prod = RING_NEXT(raw_prod); + rxr->rx_raw_prod = raw_prod; + + if (agg_buf) + bnxt_rx_pages_crx(rxq, mbuf, &tmp_raw_cons, agg_buf); + +rx: + rxr->rx_next_cons = RING_IDX(rxr->rx_ring_struct, RING_NEXT(cons)); + *rx_pkt = mbuf; + + *raw_cons = tmp_raw_cons; + + return rc; +} + static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt, struct bnxt_rx_queue *rxq, uint32_t *raw_cons) { @@ -1148,6 +1345,10 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, break; if (CMP_TYPE(rxcmp) == CMPL_BASE_TYPE_HWRM_DONE) { PMD_DRV_LOG(ERR, "Rx flush done\n"); + } else if (CMP_TYPE(rxcmp) == CMPL_BASE_TYPE_RX_L2_COMPRESS) { + rc = bnxt_crx_pkt(&rx_pkts[nb_rx_pkts], rxq, + (struct rx_pkt_compress_cmpl *)rxcmp, + &raw_cons); } else if ((CMP_TYPE(rxcmp) >= CMPL_BASE_TYPE_RX_TPA_START_V2) && (CMP_TYPE(rxcmp) <= CMPL_BASE_TYPE_RX_TPA_START_V3)) { rc = bnxt_rx_pkt(&rx_pkts[nb_rx_pkts], rxq, &raw_cons); diff --git a/drivers/net/bnxt/bnxt_rxr.h b/drivers/net/bnxt/bnxt_rxr.h index 439d29a07f..c51bb2d62c 100644 --- a/drivers/net/bnxt/bnxt_rxr.h +++ b/drivers/net/bnxt/bnxt_rxr.h @@ -52,6 +52,52 @@ static inline uint16_t bnxt_tpa_start_agg_id(struct bnxt *bp, #define BNXT_OL_FLAGS_TBL_DIM 64 #define BNXT_OL_FLAGS_ERR_TBL_DIM 32 +#define BNXT_CRX_CQE_OPAQUE_MASK \ + RX_PKT_COMPRESS_CMPL_ERRORS_AGG_BUFS_OPAQUE_OPAQUE_MASK +#define BNXT_CRX_CQE_AGG_BUF_MASK \ + RX_PKT_COMPRESS_CMPL_ERRORS_AGG_BUFS_OPAQUE_AGG_BUFS_MASK +#define BNXT_CRX_CQE_AGG_BUF_SFT \ + RX_PKT_COMPRESS_CMPL_ERRORS_AGG_BUFS_OPAQUE_AGG_BUFS_SFT +#define BNXT_CRX_CQE_AGG_BUFS(cmp) \ + (((cmp)->errors_agg_bufs_opaque & BNXT_CRX_CQE_AGG_BUF_MASK) >> \ + BNXT_CRX_CQE_AGG_BUF_SFT) +#define BNXT_CRX_CQE_CSUM_CALC_MASK \ + (RX_PKT_COMPRESS_CMPL_CS_ERROR_CALC_IP_CS_CALC | \ + RX_PKT_COMPRESS_CMPL_CS_ERROR_CALC_L4_CS_CALC | \ + RX_PKT_COMPRESS_CMPL_CS_ERROR_CALC_T_IP_CS_CALC | \ + RX_PKT_COMPRESS_CMPL_CS_ERROR_CALC_T_L4_CS_CALC) +#define BNXT_CRX_CQE_CSUM_CALC_SFT 8 +#define BNXT_PKT_CMPL_T_IP_CS_CALC 0x4 + +#define BNXT_CRX_TUN_CS_CALC \ + (!!(RX_PKT_COMPRESS_CMPL_CS_ERROR_CALC_T_IP_CS_CALC | \ + RX_PKT_COMPRESS_CMPL_CS_ERROR_CALC_T_L4_CS_CALC)) + +# define BNXT_CRX_CQE_CSUM_ERROR_MASK \ + (RX_PKT_COMPRESS_CMPL_CS_ERROR_CALC_IP_CS_ERROR | \ + RX_PKT_COMPRESS_CMPL_CS_ERROR_CALC_L4_CS_ERROR | \ + RX_PKT_COMPRESS_CMPL_CS_ERROR_CALC_T_IP_CS_ERROR | \ + RX_PKT_COMPRESS_CMPL_CS_ERROR_CALC_T_L4_CS_ERROR) + +/* meta_format != 0 and bit3 is valid, the value in meta is VLAN. 
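+ * (For reference: bnxt_set_vlan_crx() below copies the PRI/DE/VID
+ * bits of vlanc_tcid into mbuf->vlan_tci when that bit is set.)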
+ * Use the bit as VLAN valid bit
+ */
+#define BNXT_RXC_METADATA1_VLAN_VALID	\
+	RX_PKT_COMPRESS_CMPL_METADATA1_VALID
+
+static inline void bnxt_set_vlan_crx(struct rx_pkt_compress_cmpl *rxcmp,
+				     struct rte_mbuf *mbuf)
+{
+	uint16_t metadata = rte_le_to_cpu_16(rxcmp->metadata1_cs_error_calc_v1);
+	uint16_t vlan_tci = rte_le_to_cpu_16(rxcmp->vlanc_tcid);
+
+	if (metadata & RX_PKT_COMPRESS_CMPL_METADATA1_VALID)
+		mbuf->vlan_tci =
+			vlan_tci & (RX_PKT_COMPRESS_CMPL_VLANC_TCID_VID_MASK |
+				    RX_PKT_COMPRESS_CMPL_VLANC_TCID_DE |
+				    RX_PKT_COMPRESS_CMPL_VLANC_TCID_PRI_MASK);
+}
+
 struct bnxt_tpa_info {
 	struct rte_mbuf *mbuf;
 	uint16_t len;
@@ -70,6 +116,7 @@ struct bnxt_tpa_info {
 struct bnxt_rx_ring_info {
 	uint16_t rx_raw_prod;
 	uint16_t ag_raw_prod;
+	uint16_t ag_cons; /* Needed with compressed CQE */
 	uint16_t rx_cons; /* Needed for representor */
 	uint16_t rx_next_cons;
 	struct bnxt_db_info rx_db;
@@ -160,6 +207,10 @@ bnxt_cfa_code_dynfield(struct rte_mbuf *mbuf)
 #define CMPL_FLAGS2_VLAN_TUN_MSK	\
 	(RX_PKT_CMPL_FLAGS2_META_FORMAT_VLAN | RX_PKT_CMPL_FLAGS2_T_IP_CS_CALC)

+#define CMPL_FLAGS2_VLAN_TUN_MSK_CRX	\
+	(RX_PKT_COMPRESS_CMPL_METADATA1_VALID |	\
+	 RX_PKT_COMPRESS_CMPL_CS_ERROR_CALC_T_IP_CS_CALC)
+
 #define BNXT_CMPL_ITYPE_TO_IDX(ft)	\
 	(((ft) & RX_PKT_CMPL_FLAGS_ITYPE_MASK) >>	\
 	 (RX_PKT_CMPL_FLAGS_ITYPE_SFT - BNXT_PTYPE_TBL_TYPE_SFT))

+#define BNXT_CMPL_VLAN_TUN_TO_IDX_CRX(md)	\
+	(((md) & CMPL_FLAGS2_VLAN_TUN_MSK_CRX) >>	\
+	 (RX_PKT_COMPRESS_CMPL_METADATA1_SFT - BNXT_PTYPE_TBL_VLAN_SFT))
+
 #define BNXT_CMPL_IP_VER_TO_IDX(f2)	\
 	(((f2) & RX_PKT_CMPL_FLAGS2_IP_TYPE) >>	\
 	 (RX_PKT_CMPL_FLAGS2_IP_TYPE_SFT - BNXT_PTYPE_TBL_IP_VER_SFT))

From patchwork Wed Dec 27 04:21:04 2023
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 135591
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Ajit Khaparde
To: dev@dpdk.org
Cc: Kalesh AP
Subject: [PATCH v3 03/18] net/bnxt: fix a typo while parsing link speed
Date: Tue, 26 Dec 2023 20:21:04 -0800
Message-Id: <20231227042119.72469-4-ajit.khaparde@broadcom.com>
X-Mailer: git-send-email 2.39.2 (Apple Git-143)
In-Reply-To: <20231227042119.72469-1-ajit.khaparde@broadcom.com>
References: <20231227042119.72469-1-ajit.khaparde@broadcom.com>
List-Id: DPDK patches and discussions

From: Kalesh AP

While setting a forced speed, the speed should have been mapped to the
"HWRM_PORT_PHY_CFG_INPUT_FORCE_xxx" macros instead of
"HWRM_PORT_PHY_CFG_INPUT_AUTO_xxx". We do not see any functional issue
as both sets of macros are defined to the same values. Fix it to better
convey the intent.
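[Illustration, not part of the patch: the forced-speed path fixed here
is exercised when an application configures a fixed link speed through
rte_eth_conf, which the PMD translates in bnxt_parse_eth_link_speed().
A minimal sketch follows; port_id, nb_rxq and nb_txq are assumed to be
defined by the application.]

	struct rte_eth_conf port_conf = { 0 };

	/* Request a fixed 25G link instead of autonegotiation. */
	port_conf.link_speeds = RTE_ETH_LINK_SPEED_FIXED |
				RTE_ETH_LINK_SPEED_25G;
	if (rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf) != 0)
		rte_exit(EXIT_FAILURE, "failed to configure port %u\n",
			 port_id);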
Signed-off-by: Kalesh AP Reviewed-by: Ajit Khaparde --- drivers/net/bnxt/bnxt_hwrm.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c index 378be997d3..8f99582819 100644 --- a/drivers/net/bnxt/bnxt_hwrm.c +++ b/drivers/net/bnxt/bnxt_hwrm.c @@ -3168,15 +3168,15 @@ static uint16_t bnxt_parse_eth_link_speed(uint32_t conf_link_speed, case RTE_ETH_LINK_SPEED_100M_HD: /* FALLTHROUGH */ eth_link_speed = - HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_100MB; + HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_100MB; break; case RTE_ETH_LINK_SPEED_1G: eth_link_speed = - HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_1GB; + HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_1GB; break; case RTE_ETH_LINK_SPEED_2_5G: eth_link_speed = - HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_2_5GB; + HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_2_5GB; break; case RTE_ETH_LINK_SPEED_10G: eth_link_speed = @@ -3184,11 +3184,11 @@ static uint16_t bnxt_parse_eth_link_speed(uint32_t conf_link_speed, break; case RTE_ETH_LINK_SPEED_20G: eth_link_speed = - HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_20GB; + HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_20GB; break; case RTE_ETH_LINK_SPEED_25G: eth_link_speed = - HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_25GB; + HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_25GB; break; case RTE_ETH_LINK_SPEED_40G: eth_link_speed = From patchwork Wed Dec 27 04:21:05 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 135592 X-Patchwork-Delegate: ajit.khaparde@broadcom.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id AB349437A1; Wed, 27 Dec 2023 05:22:04 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 55E7140685; Wed, 27 Dec 2023 05:21:36 +0100 (CET) Received: from mail-qt1-f179.google.com (mail-qt1-f179.google.com [209.85.160.179]) by mails.dpdk.org (Postfix) with ESMTP id 6CD10402F1 for ; Wed, 27 Dec 2023 05:21:31 +0100 (CET) Received: by mail-qt1-f179.google.com with SMTP id d75a77b69052e-4279f38b5a3so36140531cf.2 for ; Tue, 26 Dec 2023 20:21:31 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; t=1703650890; x=1704255690; darn=dpdk.org; h=mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:from:to:cc:subject:date:message-id:reply-to; bh=UJIugBuBqtxbuUvkeVJI+0oJ96+JHmPs5Ns2P14pQIU=; b=fuD8Iy/6qvCEiNrFyE42PWy8i/NXRpdqX6qTkI3+IkCakYK64rJTv7V3XgxaxxAYmM 0l1EroLDJJSRKHyXE90BdUJ2b2Eh36fyClvS/sw4PKF8k+LzhcSu18f3ZF+T0fceE/H7 r3MYHaR/Pvu/zLcm/6Fe8UOJ4iZB2rf6/ogrs= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1703650890; x=1704255690; h=mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=UJIugBuBqtxbuUvkeVJI+0oJ96+JHmPs5Ns2P14pQIU=; b=SaSWYqZlYJZt0bSZ9IFQzx2NP5OO+gNqtMmSUJnQbuvsS1bDjv40N/Hi4aeMb+UkSc R4sAnxwfrf9BDlkw2bnl3lXTm1KvdB0hQQ4xTl/qzCucHvJkPPvEuHDGjyV4bHjQbWqD HGbHmI0XO8wmvYnjvn0VjkZCnqiUrgz5N5OZ6MaRQ1jyQQBdiKlCIhKthmTOtM0/C5M9 DsBMsuBtpNkA0ywK4PhPz4jEIR6yJFo0mZmXaDieqLpnfSK9qSbv20Esd2kphwhG2H0c v9fjHdDOJoJU2tx++rO+8DpZmX7eEvsj9mu6ROnN/vm2wpkTga0HzrEQsMrXXSeKSO/v jdjg== X-Gm-Message-State: AOJu0YzWWEtLxu4WL9IH9DXkapbvCxUsDRMYRxiDEJdxTK6GnNH1hLky 
b5YWDe7H6VJ/+z/l1g1eWJ4dWtNMQvqzK5/JuMtx0KwYi40AVB8ed3UoGZuX7ykrL3zaKsjtMb+ eNHoQlL0ntbHYOUkFphmdkC5AVA+3UyI/Iz31EiphjsCSZeUac0Amrd8XV76ZWnHaFq7jwNDoq9 w= X-Google-Smtp-Source: AGHT+IHDZKGVRN/XZUV1s8jzMBxbZixQc77ZSgwTQ1FLVajweCf5GalMS+M2yS1Bx0XQEkswq1eKLQ== X-Received: by 2002:a05:622a:607:b0:425:f204:906d with SMTP id z7-20020a05622a060700b00425f204906dmr11678455qta.118.1703650890580; Tue, 26 Dec 2023 20:21:30 -0800 (PST) Received: from localhost.localdomain ([2605:a601:a780:1400:c066:75e3:74c8:50e6]) by smtp.gmail.com with ESMTPSA id bt7-20020ac86907000000b00427e120889bsm1415488qtb.91.2023.12.26.20.21.29 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 26 Dec 2023 20:21:29 -0800 (PST) From: Ajit Khaparde To: dev@dpdk.org Cc: Kalesh AP , stable@dpdk.org, Somnath Kotur Subject: [PATCH v3 04/18] net/bnxt: fix setting 50G and 100G forced speed Date: Tue, 26 Dec 2023 20:21:05 -0800 Message-Id: <20231227042119.72469-5-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.39.2 (Apple Git-143) In-Reply-To: <20231227042119.72469-1-ajit.khaparde@broadcom.com> References: <20231227042119.72469-1-ajit.khaparde@broadcom.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Kalesh AP Thor based NICs can support PAM4 as well as NRZ link negotiation. While PAM4 can negotiate speeds at 50G, 100G and 200G, the PMD will use NRZ signaling for 50G and 100G speeds. PAM4 signaling will be used only for 200G speed negotiations. Driver has to check for NRZ speed support first while forcing speed. Fixes: c23f9ded0391 ("net/bnxt: support 200G PAM4 link") Cc: stable@dpdk.org Signed-off-by: Kalesh AP Reviewed-by: Somnath Kotur Reviewed-by: Ajit Khaparde --- drivers/net/bnxt/bnxt_hwrm.c | 22 ++++++++++++---------- 1 file changed, 12 insertions(+), 10 deletions(-) diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c index 8f99582819..c31a5d4226 100644 --- a/drivers/net/bnxt/bnxt_hwrm.c +++ b/drivers/net/bnxt/bnxt_hwrm.c @@ -3158,6 +3158,8 @@ static uint16_t bnxt_check_eth_link_autoneg(uint32_t conf_link) static uint16_t bnxt_parse_eth_link_speed(uint32_t conf_link_speed, struct bnxt_link_info *link_info) { + uint16_t support_pam4_speeds = link_info->support_pam4_speeds; + uint16_t support_speeds = link_info->support_speeds; uint16_t eth_link_speed = 0; if (conf_link_speed == RTE_ETH_LINK_SPEED_AUTONEG) @@ -3195,23 +3197,23 @@ static uint16_t bnxt_parse_eth_link_speed(uint32_t conf_link_speed, HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_40GB; break; case RTE_ETH_LINK_SPEED_50G: - if (link_info->support_pam4_speeds & - HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_50G) { - eth_link_speed = HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_50GB; - link_info->link_signal_mode = BNXT_SIG_MODE_PAM4; - } else { + if (support_speeds & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_50GB) { eth_link_speed = HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_50GB; link_info->link_signal_mode = BNXT_SIG_MODE_NRZ; + } else if (support_pam4_speeds & + HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_50G) { + eth_link_speed = HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_50GB; + link_info->link_signal_mode = BNXT_SIG_MODE_PAM4; } break; case RTE_ETH_LINK_SPEED_100G: - if (link_info->support_pam4_speeds & - HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_100G) { - eth_link_speed = HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_100GB; - 
link_info->link_signal_mode = BNXT_SIG_MODE_PAM4; - } else { + if (support_speeds & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_100GB) { eth_link_speed = HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_100GB; link_info->link_signal_mode = BNXT_SIG_MODE_NRZ; + } else if (support_pam4_speeds & + HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_100G) { + eth_link_speed = HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_100GB; + link_info->link_signal_mode = BNXT_SIG_MODE_PAM4; } break; case RTE_ETH_LINK_SPEED_200G: From patchwork Wed Dec 27 04:21:06 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 135593 X-Patchwork-Delegate: ajit.khaparde@broadcom.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 1B3ED437A1; Wed, 27 Dec 2023 05:22:13 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id BD45640696; Wed, 27 Dec 2023 05:21:37 +0100 (CET) Received: from mail-qt1-f182.google.com (mail-qt1-f182.google.com [209.85.160.182]) by mails.dpdk.org (Postfix) with ESMTP id 37D4140608 for ; Wed, 27 Dec 2023 05:21:33 +0100 (CET) Received: by mail-qt1-f182.google.com with SMTP id d75a77b69052e-427e83601c4so3804871cf.0 for ; Tue, 26 Dec 2023 20:21:33 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; t=1703650892; x=1704255692; darn=dpdk.org; h=mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:from:to:cc:subject:date:message-id:reply-to; bh=XqE8+Ljkq5xRrH7h2pHSqnIkS2/lKd7gJcV1eKXVNLE=; b=IYrbCTxH0vXanXtXhS1lrFfrC1/QljmehnUMYUyomFeZ/7iJWWzaXKAQOwKqwpjypo KMbvT+L0MQ2e+rhz7Gr1t1D70LdfKktwFkPbh2NuNnDJ5qSBrYAWFII+9my5w7Tnk5Jg VBKHACyu4oYfLvPzmJKcO2qIyt6KyXA2/4jLc= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1703650892; x=1704255692; h=mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=XqE8+Ljkq5xRrH7h2pHSqnIkS2/lKd7gJcV1eKXVNLE=; b=h7UBwcUQ6orJ364aFMh7yimDoT8SoytfgNKx6Oqd40Gj5ccuudFPfx9Uw7TfZYKCsg t5bv6+SjH7AKFgBoUr5z2f6AqBzCPCKtnJ+5K0XFEeruaWUQCx7PICkfhTdyaufHqNEJ twM+BOFi1ZoVZmKU0SQ7DEKybmA1tYDR0rPuOOJuOslyHEo4VoMWX6cEXk96OykvULuR nrGlqVrKRmpyeht1lsIGsxlNj0CdKuCdak5XP/316ZRx2l4dwBZrWLktuUrtEE6f/D+p 1VIy9kssk5E33xQw9G/x6XPTPeJ4CJrJhhEvoDE+zwkBt1qmESS5aexczBrMq1Pl9XXp LrUg== X-Gm-Message-State: AOJu0YzWxB/2Hw33fvJ45OA/GBkP14EX/tHD/RiRugbVEvYeAQ48zkX6 xdci9jzX41JX4qepTchPBehwakTBg1ReGwNVJ8YQ2ZjVPZy3MNZe5wyTnwUP3aKE3i+gfu+oL4w qe3lXFCVc0ZJFJ/brmGVFRZcUaRcZyUZEmN076bElTPPO5G+0m16wRzuXa8LSPByJo28EPGiQZf Q= X-Google-Smtp-Source: AGHT+IGkUlPozeW/g0oU1kTUL5jJ631/pMSKhRgoqxxbwoZYnvuvJgYxWgxFjVAQzB8x+/Ehl0nTIg== X-Received: by 2002:ac8:5fd2:0:b0:427:ee30:a0a5 with SMTP id k18-20020ac85fd2000000b00427ee30a0a5mr410938qta.123.1703650892118; Tue, 26 Dec 2023 20:21:32 -0800 (PST) Received: from localhost.localdomain ([2605:a601:a780:1400:c066:75e3:74c8:50e6]) by smtp.gmail.com with ESMTPSA id bt7-20020ac86907000000b00427e120889bsm1415488qtb.91.2023.12.26.20.21.30 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 26 Dec 2023 20:21:31 -0800 (PST) From: Ajit Khaparde To: dev@dpdk.org Cc: Kalesh AP , stable@dpdk.org, Somnath Kotur Subject: [PATCH v3 05/18] net/bnxt: fix speed change from 200G to 25G on Thor Date: Tue, 26 Dec 2023 
20:21:06 -0800 Message-Id: <20231227042119.72469-6-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.39.2 (Apple Git-143) In-Reply-To: <20231227042119.72469-1-ajit.khaparde@broadcom.com> References: <20231227042119.72469-1-ajit.khaparde@broadcom.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Kalesh AP While forcing speed to 200G, driver sets the structure variable "bp->link_info->link_signal_mode" value to BNXT_SIG_MODE_PAM4. After that when the user forces the speed back to 25G, this cached value is not set back to BNXT_SIG_MODE_NRZ which results in issuing the HWRM_PORT_PHY_CFG command with wrong inputs. Fixes: c23f9ded0391 ("net/bnxt: support 200G PAM4 link") Cc: stable@dpdk.org Reviewed-by: Somnath Kotur Signed-off-by: Kalesh AP Signed-off-by: Ajit Khaparde --- drivers/net/bnxt/bnxt_hwrm.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c index c31a5d4226..a1f3a8251f 100644 --- a/drivers/net/bnxt/bnxt_hwrm.c +++ b/drivers/net/bnxt/bnxt_hwrm.c @@ -3191,6 +3191,7 @@ static uint16_t bnxt_parse_eth_link_speed(uint32_t conf_link_speed, case RTE_ETH_LINK_SPEED_25G: eth_link_speed = HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_25GB; + link_info->link_signal_mode = BNXT_SIG_MODE_NRZ; break; case RTE_ETH_LINK_SPEED_40G: eth_link_speed = From patchwork Wed Dec 27 04:21:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 135594 X-Patchwork-Delegate: ajit.khaparde@broadcom.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id B514D437A1; Wed, 27 Dec 2023 05:22:20 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 47F14406B4; Wed, 27 Dec 2023 05:21:39 +0100 (CET) Received: from mail-qt1-f175.google.com (mail-qt1-f175.google.com [209.85.160.175]) by mails.dpdk.org (Postfix) with ESMTP id 6A60E402E5 for ; Wed, 27 Dec 2023 05:21:34 +0100 (CET) Received: by mail-qt1-f175.google.com with SMTP id d75a77b69052e-427eae54dd3so3132351cf.1 for ; Tue, 26 Dec 2023 20:21:34 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; t=1703650893; x=1704255693; darn=dpdk.org; h=mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:from:to:cc:subject:date:message-id:reply-to; bh=XlFgpTM/a1tDaGPVG6jGSu5fNth8/CiT4Ewyb9iY2fU=; b=KWMBlcnyrafjVwCqrr3GevdjGVtY/VXXYv/3zxBze25BflPIIxNhwAv5YoR2/kkXBP GYJYU9BRfZDxlUB0imZ1Z5BQ7BVwmAOqz+IQ9LGME1fwlsi8YktK4TbMYwueZTbISPOT nAjXyJdyRUhZmnmirE0qpuVTKc/jCWPEZHEXE= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1703650893; x=1704255693; h=mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=XlFgpTM/a1tDaGPVG6jGSu5fNth8/CiT4Ewyb9iY2fU=; b=KrnDcMbCGLie4qBIpbbzlUASYwRn5+hN3MKh1bcZKwF8E/f94oFBKE9iWZaOwKt2fb Dsucs6EdGMGhR3E7mNXWJnhD49d6kRiWV/E4CxsJkuJ7DLnXrkjytovWZPFZQsYLYJBP UpoTAaM1WZ0s7dxSIEnUHNjzG2/QdBghl9aKM11c7c8X/Bd5rtNp7dkVyVtBNkHIrhts jlbGqoqAs0cJyZTDke4K6x/vkAJjfkTPf5d5wp2bCVe+7fHUKINRTkVES2HE5u23EQ9V 
hlsoIhZju4whJrefVLNI01OqrbvfkiFdTNGvYtDv3275NxrR6f5RbaimpxXr0bRQ91zo vjfA== X-Gm-Message-State: AOJu0YzfcF9eUityzL9/n+FxpzffU3GRkqhvFBh/1r4RdQe64xmubpGn NuV7UDRxrz2anmZN130DYx7N7nzildNtLMN9vNzBlzijmyESrF+Lbvg1NEew8xfsxwpev7jYYWJ CGQpzKHbpvzcktC/PPG0bhOu4mWfRg9HXFDS1L8OtQXy3mU+2PcTnUbCP+xsrGQS4rnY9v/e+sL o= X-Google-Smtp-Source: AGHT+IGjltdzpC4LR6xDpD+q7TfecSL1ASs0/dXmRImwkAmzyKTxaqMmMo5Tl4lOupXF/1vV8yF8GQ== X-Received: by 2002:a05:622a:55:b0:427:8319:16fa with SMTP id y21-20020a05622a005500b00427831916famr11746137qtw.15.1703650893387; Tue, 26 Dec 2023 20:21:33 -0800 (PST) Received: from localhost.localdomain ([2605:a601:a780:1400:c066:75e3:74c8:50e6]) by smtp.gmail.com with ESMTPSA id bt7-20020ac86907000000b00427e120889bsm1415488qtb.91.2023.12.26.20.21.32 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 26 Dec 2023 20:21:32 -0800 (PST) From: Ajit Khaparde To: dev@dpdk.org Cc: Kalesh AP , stable@dpdk.org Subject: [PATCH v3 06/18] net/bnxt: support backward compatibility Date: Tue, 26 Dec 2023 20:21:07 -0800 Message-Id: <20231227042119.72469-7-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.39.2 (Apple Git-143) In-Reply-To: <20231227042119.72469-1-ajit.khaparde@broadcom.com> References: <20231227042119.72469-1-ajit.khaparde@broadcom.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Kalesh AP On older firmware versions, HWRM_FUNC_QCAPS response is not returning the maximum number of multicast filters that can be supported by the function. As a result, memory allocation with size 0 fails. Bugzilla ID: 1309 Cc: stable@dpdk.org Signed-off-by: Kalesh AP Reviewed-by: Ajit Khaparde --- drivers/net/bnxt/bnxt.h | 1 + drivers/net/bnxt/bnxt_hwrm.c | 2 ++ 2 files changed, 3 insertions(+) diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h index cfdbfd3f54..cd85a944e8 100644 --- a/drivers/net/bnxt/bnxt.h +++ b/drivers/net/bnxt/bnxt.h @@ -974,6 +974,7 @@ struct bnxt { struct rte_ether_addr *mcast_addr_list; rte_iova_t mc_list_dma_addr; uint32_t nb_mc_addr; +#define BNXT_DFLT_MAX_MC_ADDR 16 /* for compatibility with older firmware */ uint32_t max_mcast_addr; /* maximum number of mcast filters supported */ struct rte_eth_rss_conf rss_conf; /* RSS configuration. 
*/ diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c index a1f3a8251f..d649f217ec 100644 --- a/drivers/net/bnxt/bnxt_hwrm.c +++ b/drivers/net/bnxt/bnxt_hwrm.c @@ -901,6 +901,8 @@ static int __bnxt_hwrm_func_qcaps(struct bnxt *bp) bp->max_l2_ctx, bp->max_vnics); bp->max_stat_ctx = rte_le_to_cpu_16(resp->max_stat_ctx); bp->max_mcast_addr = rte_le_to_cpu_32(resp->max_mcast_filters); + if (!bp->max_mcast_addr) + bp->max_mcast_addr = BNXT_DFLT_MAX_MC_ADDR; memcpy(bp->dsn, resp->device_serial_number, sizeof(bp->dsn)); if (BNXT_PF(bp)) From patchwork Wed Dec 27 04:21:08 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 135595 X-Patchwork-Delegate: ajit.khaparde@broadcom.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D8F08437A1; Wed, 27 Dec 2023 05:22:28 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id E984240A6C; Wed, 27 Dec 2023 05:21:40 +0100 (CET) Received: from mail-qk1-f181.google.com (mail-qk1-f181.google.com [209.85.222.181]) by mails.dpdk.org (Postfix) with ESMTP id 3D438402E8 for ; Wed, 27 Dec 2023 05:21:36 +0100 (CET) Received: by mail-qk1-f181.google.com with SMTP id af79cd13be357-7811c02cfecso325486585a.2 for ; Tue, 26 Dec 2023 20:21:36 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; t=1703650895; x=1704255695; darn=dpdk.org; h=mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:from:to:cc:subject:date:message-id:reply-to; bh=sZrm4gtpYL36epb/Z19gxFHMKSJ0fWTcwZe9DrfI/HY=; b=dzmgPmBdWkv9D7GHskrwU13w73OW2SmaOPArWcgECY0YFlW6dYxJ5AJFVkcBzd/eHD uUF6eUfhAyMwC+hGMjnXOjcUSTwzDg3P5+aqNqJhgIFIBGygloexfqZpJdDrElZ9ZX/+ JUy7bOp8Kuk6aHgsjkkob50Mu97wJpWnU2AsM= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1703650895; x=1704255695; h=mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=sZrm4gtpYL36epb/Z19gxFHMKSJ0fWTcwZe9DrfI/HY=; b=gmqK4km/ImQG0whM/Pu5IScVEsF30QFvOR4Qx4KxZ5yLN2ru2raiL8pnWCwGQaLDqh ltlAAuRFGU5ZL2CuFo+pyqzqxHpi/EBi93dK/ad1qev0hJz/z+dPebimT5JpnXRQPGiL NhpMGWx/iQs4Axm+RF43OQlmaFKSSPm5jwynagH0R2fi2yu4aWO+GBnhat2GqW7duXFP pt8KPL2cUlGUMBsEqFUS0raxwGi+hrpJhMom7bmPGYrghf5CCJo4/rGGhT/tSTzdb+iP pqNT/JIq7zV+GyukDNEJSWxDnbBaSlILma/mk7mRZOdtQrcTShpf1ALfipemCEwh0Flm K3+A== X-Gm-Message-State: AOJu0YyWi5/9WNaKKGLUulg/QlW2hSf0U5+4tZnFKMF6RKQb68+iVN9f 3aHZMYUhBPhhvegNXZwNjhnpgi/nelDY/nkiJFp5jvzMzabsggb1C/FmJczfcyr9akMDMV6uH8O LfTdcWS6vvyjxmjH/9hRZkmXW8bhvqm7exWpZeQe5a0veMTJyzRtvSGu6Jkz9+yVJAr6viTfnD3 g= X-Google-Smtp-Source: AGHT+IH6745m7vTPigauECh7jX8KXpA5lr+zkt+7GLfoemrxOJPZkF1CG1Orlgwku24UeKZnVGRlKg== X-Received: by 2002:a05:622a:1306:b0:427:8fa3:c5bd with SMTP id v6-20020a05622a130600b004278fa3c5bdmr10856841qtk.114.1703650894740; Tue, 26 Dec 2023 20:21:34 -0800 (PST) Received: from localhost.localdomain ([2605:a601:a780:1400:c066:75e3:74c8:50e6]) by smtp.gmail.com with ESMTPSA id bt7-20020ac86907000000b00427e120889bsm1415488qtb.91.2023.12.26.20.21.33 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 26 Dec 2023 20:21:33 -0800 (PST) From: Ajit Khaparde To: dev@dpdk.org Cc: Somnath Kotur Subject: [PATCH v3 07/18] net/bnxt: reattempt mbuf allocation for Rx and AGG 
rings
Date: Tue, 26 Dec 2023 20:21:08 -0800
Message-Id: <20231227042119.72469-9-ajit.khaparde@broadcom.com>
X-Mailer: git-send-email 2.39.2 (Apple Git-143)
In-Reply-To: <20231227042119.72469-1-ajit.khaparde@broadcom.com>
References: <20231227042119.72469-1-ajit.khaparde@broadcom.com>
List-Id: DPDK patches and discussions

Normally the PMD allocates a new mbuf for every mbuf consumed. In case
of an mbuf allocation failure, that slot in the Rx or AGG ring remains
empty until a new mbuf is allocated for it. If this happens too
frequently, the Rx ring or the aggregation ring could be completely
drained of mbufs and cause unexpected behavior. To prevent this, in
case of an mbuf allocation failure, set a bit and reattempt mbuf
allocation to fill the empty slots. Since this should not happen under
normal circumstances, it should not impact regular Rx performance.

The need_realloc bit is set in the RxQ if mbuf allocation fails for the
Rx ring or the AGG ring. As long as the application calls the Rx burst
function, even in cases where the Rx rings have become completely
empty, the logic is able to reattempt buffer allocation for the
associated Rx and aggregation rings.

Signed-off-by: Ajit Khaparde
Reviewed-by: Somnath Kotur
---
 drivers/net/bnxt/bnxt_rxq.h |   1 +
 drivers/net/bnxt/bnxt_rxr.c | 101 ++++++++++++++++++++++--------------
 2 files changed, 64 insertions(+), 38 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_rxq.h b/drivers/net/bnxt/bnxt_rxq.h
index b9908be5f4..77bc382a1d 100644
--- a/drivers/net/bnxt/bnxt_rxq.h
+++ b/drivers/net/bnxt/bnxt_rxq.h
@@ -41,6 +41,7 @@ struct bnxt_rx_queue {
 	struct bnxt_cp_ring_info *cp_ring;
 	struct rte_mbuf fake_mbuf;
 	uint64_t rx_mbuf_alloc_fail;
+	uint8_t need_realloc;
 	const struct rte_memzone *mz;
 };

diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index b919922a64..c5c9f9e6e6 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -50,6 +50,8 @@ static inline int bnxt_alloc_rx_data(struct bnxt_rx_queue *rxq,
 	mbuf = __bnxt_alloc_rx_data(rxq->mb_pool);
 	if (!mbuf) {
 		__atomic_fetch_add(&rxq->rx_mbuf_alloc_fail, 1, __ATOMIC_RELAXED);
+		/* If buff has failed already, setting this again won't hurt */
+		rxq->need_realloc = 1;
 		return -ENOMEM;
 	}
@@ -85,6 +87,8 @@ static inline int bnxt_alloc_ag_data(struct bnxt_rx_queue *rxq,
 	mbuf = __bnxt_alloc_rx_data(rxq->mb_pool);
 	if (!mbuf) {
 		__atomic_fetch_add(&rxq->rx_mbuf_alloc_fail, 1, __ATOMIC_RELAXED);
+		/* If buff has failed already, setting this again won't hurt */
+		rxq->need_realloc = 1;
 		return -ENOMEM;
 	}
@@ -139,7 +143,6 @@ static void bnxt_rx_ring_reset(void *arg)
 	int i, rc = 0;
 	struct bnxt_rx_queue *rxq;
-
 	for (i = 0; i < (int)bp->rx_nr_rings; i++) {
 		struct bnxt_rx_ring_info *rxr;
@@ -357,7 +360,8 @@ static int bnxt_rx_pages(struct bnxt_rx_queue *rxq,
 		RTE_ASSERT(ag_cons <= rxr->ag_ring_struct->ring_mask);
 		ag_buf = &rxr->ag_buf_ring[ag_cons];
 		ag_mbuf = *ag_buf;
-		RTE_ASSERT(ag_mbuf != NULL);
+		if (ag_mbuf == NULL)
+			return -EBUSY;
 		ag_mbuf->data_len = rte_le_to_cpu_16(rxcmp->len);
@@ -452,7 +456,7 @@ static inline struct rte_mbuf *bnxt_tpa_end(
 	RTE_ASSERT(mbuf != NULL);
 	if (agg_bufs) {
-		bnxt_rx_pages(rxq, mbuf, raw_cp_cons, agg_bufs, tpa_info);
+		(void)bnxt_rx_pages(rxq, mbuf, raw_cp_cons, agg_bufs, tpa_info);
 	}
 	mbuf->l4_len = payload_offset;
@@ -1230,8 +1234,11 @@ static int
bnxt_rx_pkt(struct rte_mbuf **rx_pkt, bnxt_set_mark_in_mbuf(rxq->bp, rxcmp1, mbuf); reuse_rx_mbuf: - if (agg_buf) - bnxt_rx_pages(rxq, mbuf, &tmp_raw_cons, agg_buf, NULL); + if (agg_buf) { + rc = bnxt_rx_pages(rxq, mbuf, &tmp_raw_cons, agg_buf, NULL); + if (rc != 0) + return -EBUSY; + } #ifdef BNXT_DEBUG if (rxcmp1->errors_v2 & RX_CMP_L2_ERRORS) { @@ -1293,6 +1300,48 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt, return rc; } +static void bnxt_reattempt_buffer_alloc(struct bnxt_rx_queue *rxq) +{ + struct bnxt_rx_ring_info *rxr = rxq->rx_ring; + struct bnxt_ring *ring; + uint16_t raw_prod; + uint32_t cnt; + + /* Assume alloc passes. On failure, + * need_realloc will be set inside bnxt_alloc_XY_data. + */ + rxq->need_realloc = 0; + if (!bnxt_need_agg_ring(rxq->bp->eth_dev)) + goto alloc_rx; + + raw_prod = rxr->ag_raw_prod; + bnxt_prod_ag_mbuf(rxq); + if (raw_prod != rxr->ag_raw_prod) + bnxt_db_write(&rxr->ag_db, rxr->ag_raw_prod); + +alloc_rx: + raw_prod = rxr->rx_raw_prod; + ring = rxr->rx_ring_struct; + for (cnt = 0; cnt < ring->ring_size; cnt++) { + struct rte_mbuf **rx_buf; + uint16_t ndx; + + ndx = RING_IDX(ring, raw_prod + cnt); + rx_buf = &rxr->rx_buf_ring[ndx]; + + /* Buffer already allocated for this index. */ + if (*rx_buf != NULL && *rx_buf != &rxq->fake_mbuf) + continue; + + /* This slot is empty. Alloc buffer for Rx */ + if (bnxt_alloc_rx_data(rxq, rxr, raw_prod + cnt)) + break; + + rxr->rx_raw_prod = raw_prod + cnt; + bnxt_db_write(&rxr->rx_db, rxr->rx_raw_prod); + } +} + uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) { @@ -1302,7 +1351,6 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t rx_raw_prod = rxr->rx_raw_prod; uint16_t ag_raw_prod = rxr->ag_raw_prod; uint32_t raw_cons = cpr->cp_raw_cons; - bool alloc_failed = false; uint32_t cons; int nb_rx_pkts = 0; int nb_rep_rx_pkts = 0; @@ -1358,10 +1406,8 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, break; else if (rc == -ENODEV) /* completion for representor */ nb_rep_rx_pkts++; - else if (rc == -ENOMEM) { + else if (rc == -ENOMEM) nb_rx_pkts++; - alloc_failed = true; - } } else if (!BNXT_NUM_ASYNC_CPR(rxq->bp)) { evt = bnxt_event_hwrm_resp_handler(rxq->bp, @@ -1372,7 +1418,12 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, } raw_cons = NEXT_RAW_CMP(raw_cons); - if (nb_rx_pkts == nb_pkts || nb_rep_rx_pkts == nb_pkts || evt) + /* + * The HW reposting may fall behind if mbuf allocation has + * failed. Break and reattempt allocation to prevent that. + */ + if (nb_rx_pkts == nb_pkts || nb_rep_rx_pkts == nb_pkts || evt || + rxq->need_realloc != 0) break; } @@ -1395,35 +1446,9 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, /* Ring the AGG ring DB */ if (ag_raw_prod != rxr->ag_raw_prod) bnxt_db_write(&rxr->ag_db, rxr->ag_raw_prod); - - /* Attempt to alloc Rx buf in case of a previous allocation failure. */ - if (alloc_failed) { - int cnt; - - rx_raw_prod = RING_NEXT(rx_raw_prod); - for (cnt = 0; cnt < nb_rx_pkts + nb_rep_rx_pkts; cnt++) { - struct rte_mbuf **rx_buf; - uint16_t ndx; - - ndx = RING_IDX(rxr->rx_ring_struct, rx_raw_prod + cnt); - rx_buf = &rxr->rx_buf_ring[ndx]; - - /* Buffer already allocated for this index. */ - if (*rx_buf != NULL && *rx_buf != &rxq->fake_mbuf) - continue; - - /* This slot is empty. 
Alloc buffer for Rx */ - if (!bnxt_alloc_rx_data(rxq, rxr, rx_raw_prod + cnt)) { - rxr->rx_raw_prod = rx_raw_prod + cnt; - bnxt_db_write(&rxr->rx_db, rxr->rx_raw_prod); - } else { - PMD_DRV_LOG(ERR, "Alloc mbuf failed\n"); - break; - } - } - } - done: + if (unlikely(rxq->need_realloc)) + bnxt_reattempt_buffer_alloc(rxq); return nb_rx_pkts; }
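Aside (not part of the patch): a minimal, hypothetical polling loop illustrating the usage expectation stated in the commit message above -- the application keeps calling the Rx burst function even when it returns zero packets, which gives the PMD a chance to reattempt buffer allocation for empty Rx/AGG slots. Port, queue and mempool setup are assumed to exist elsewhere.

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    static void poll_rx_queue(uint16_t port_id, uint16_t queue_id)
    {
        struct rte_mbuf *pkts[32];
        uint16_t i, n;

        for (;;) {
            /* An empty burst still lets the PMD refill empty ring slots. */
            n = rte_eth_rx_burst(port_id, queue_id, pkts, 32);
            for (i = 0; i < n; i++)
                rte_pktmbuf_free(pkts[i]); /* stand-in for real processing */
        }
    }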
From patchwork Wed Dec 27 04:21:09 2023 X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 135596 X-Patchwork-Delegate: ajit.khaparde@broadcom.com From: Ajit Khaparde To: dev@dpdk.org Cc: Damodharam Ammepalli Subject: [PATCH v3 08/18] net/bnxt: refactor Rx doorbell during Rx flush Date: Tue, 26 Dec 2023 20:21:09 -0800 Message-Id: <20231227042119.72469-9-ajit.khaparde@broadcom.com> In-Reply-To: <20231227042119.72469-1-ajit.khaparde@broadcom.com> References: <20231227042119.72469-1-ajit.khaparde@broadcom.com> MIME-Version: 1.0 Ring the Rx doorbell during the Rx ring flush processing only if there is a valid completion. Signed-off-by: Ajit Khaparde Reviewed-by: Damodharam Ammepalli --- drivers/net/bnxt/bnxt_rxr.c | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c index c5c9f9e6e6..d0706874a6 100644 --- a/drivers/net/bnxt/bnxt_rxr.c +++ b/drivers/net/bnxt/bnxt_rxr.c @@ -1713,10 +1713,11 @@ int bnxt_flush_rx_cmp(struct bnxt_cp_ring_info *cpr) nb_rx++; } while (nb_rx < ring_mask); - cpr->cp_raw_cons = raw_cons; - - /* Ring the completion queue doorbell. */ - bnxt_db_cq(cpr); + if (nb_rx) { + cpr->cp_raw_cons = raw_cons; + /* Ring the completion queue doorbell. */ + bnxt_db_cq(cpr); + } return 0; }
From patchwork Wed Dec 27 04:21:10 2023 X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 135597 X-Patchwork-Delegate: ajit.khaparde@broadcom.com From: Ajit Khaparde To: dev@dpdk.org Cc: Damodharam Ammepalli Subject: [PATCH v3 09/18] net/bnxt: extend RSS hash support for P7 devices Date: Tue, 26 Dec 2023 20:21:10 -0800 Message-Id: <20231227042119.72469-10-ajit.khaparde@broadcom.com> In-Reply-To: <20231227042119.72469-1-ajit.khaparde@broadcom.com> References: <20231227042119.72469-1-ajit.khaparde@broadcom.com> MIME-Version: 1.0 P7 adapters support XOR based and checksum based RSS hashing. Add support for checksum and XOR based RSS hash for these adapters. Signed-off-by: Ajit Khaparde Reviewed-by: Damodharam Ammepalli --- drivers/net/bnxt/bnxt.h | 15 +-- drivers/net/bnxt/bnxt_ethdev.c | 72 ++++++--------- drivers/net/bnxt/bnxt_flow.c | 37 +++++++- drivers/net/bnxt/bnxt_hwrm.c | 6 ++ drivers/net/bnxt/bnxt_reps.c | 2 +- drivers/net/bnxt/bnxt_vnic.c | 161 +++++++++++++++++++++++++++++++-- drivers/net/bnxt/bnxt_vnic.h | 18 +++- 7 files changed, 242 insertions(+), 69 deletions(-) diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h index cd85a944e8..e7b288c849 100644 --- a/drivers/net/bnxt/bnxt.h +++ b/drivers/net/bnxt/bnxt.h @@ -638,15 +638,6 @@ struct bnxt_rep_info { #define BNXT_FW_STATUS_HEALTHY 0x8000 #define BNXT_FW_STATUS_SHUTDOWN 0x100000 -#define BNXT_ETH_RSS_SUPPORT ( \ - RTE_ETH_RSS_IPV4 | \ - RTE_ETH_RSS_NONFRAG_IPV4_TCP | \ - RTE_ETH_RSS_NONFRAG_IPV4_UDP | \ - RTE_ETH_RSS_IPV6 | \ - RTE_ETH_RSS_NONFRAG_IPV6_TCP | \ - RTE_ETH_RSS_NONFRAG_IPV6_UDP | \ - RTE_ETH_RSS_LEVEL_MASK) - #define BNXT_HWRM_SHORT_REQ_LEN sizeof(struct hwrm_short_input) struct bnxt_flow_stat_info { @@ -815,7 +806,10 @@ struct bnxt { #define BNXT_VNIC_CAP_VLAN_RX_STRIP BIT(3) #define BNXT_RX_VLAN_STRIP_EN(bp) ((bp)->vnic_cap_flags & BNXT_VNIC_CAP_VLAN_RX_STRIP) #define BNXT_VNIC_CAP_OUTER_RSS_TRUSTED_VF BIT(4) -#define BNXT_VNIC_CAP_L2_CQE_MODE BIT(8) +#define BNXT_VNIC_CAP_XOR_MODE BIT(5) +#define BNXT_VNIC_CAP_CHKSM_MODE BIT(6) +#define BNXT_VNIC_CAP_L2_CQE_MODE BIT(8) + unsigned int rx_nr_rings; unsigned int rx_cp_nr_rings; unsigned int rx_num_qs_per_vnic; @@ -1176,4 +1170,5 @@ void bnxt_handle_vf_cfg_change(void *arg); int bnxt_flow_meter_ops_get(struct rte_eth_dev *eth_dev, void *arg); struct bnxt_vnic_info *bnxt_get_default_vnic(struct bnxt *bp); struct tf *bnxt_get_tfp_session(struct bnxt *bp, enum bnxt_session_type type); +uint64_t bnxt_eth_rss_support(struct bnxt *bp); #endif diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index 0f1c4326c4..ef5e65ff16 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -982,6 +982,25 @@ uint32_t bnxt_get_speed_capabilities(struct bnxt *bp) return speed_capa; } +uint64_t bnxt_eth_rss_support(struct bnxt *bp) +{ + uint64_t support; + + support = RTE_ETH_RSS_IPV4 | + RTE_ETH_RSS_NONFRAG_IPV4_TCP | + RTE_ETH_RSS_NONFRAG_IPV4_UDP | + RTE_ETH_RSS_IPV6 | + RTE_ETH_RSS_NONFRAG_IPV6_TCP | + RTE_ETH_RSS_NONFRAG_IPV6_UDP
| + RTE_ETH_RSS_LEVEL_MASK; + + if (bp->vnic_cap_flags & BNXT_VNIC_CAP_CHKSM_MODE) + support |= (RTE_ETH_RSS_IPV4_CHKSUM | + RTE_ETH_RSS_L4_CHKSUM); + + return support; +} + static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info) { @@ -1023,7 +1042,7 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev, dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE; dev_info->tx_offload_capa = bnxt_get_tx_port_offloads(bp) | dev_info->tx_queue_offload_capa; - dev_info->flow_type_rss_offloads = BNXT_ETH_RSS_SUPPORT; + dev_info->flow_type_rss_offloads = bnxt_eth_rss_support(bp); dev_info->speed_capa = bnxt_get_speed_capabilities(bp); dev_info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP | @@ -2175,7 +2194,7 @@ static int bnxt_rss_hash_update_op(struct rte_eth_dev *eth_dev, if (!rss_conf->rss_hf) PMD_DRV_LOG(ERR, "Hash type NONE\n"); } else { - if (rss_conf->rss_hf & BNXT_ETH_RSS_SUPPORT) + if (rss_conf->rss_hf & bnxt_eth_rss_support(bp)) return -EINVAL; } @@ -2185,6 +2204,12 @@ static int bnxt_rss_hash_update_op(struct rte_eth_dev *eth_dev, vnic->hash_mode = bnxt_rte_to_hwrm_hash_level(bp, rss_conf->rss_hf, RTE_ETH_RSS_LEVEL(rss_conf->rss_hf)); + rc = bnxt_rte_eth_to_hwrm_ring_select_mode(bp, rss_conf->rss_hf, vnic); + if (rc != 0) + return rc; + + /* Cache the hash function */ + bp->rss_conf.rss_hf = rss_conf->rss_hf; /* Cache the hash function */ bp->rss_conf.rss_hf = rss_conf->rss_hf; @@ -2218,60 +2243,21 @@ static int bnxt_rss_hash_conf_get_op(struct rte_eth_dev *eth_dev, struct bnxt *bp = eth_dev->data->dev_private; struct bnxt_vnic_info *vnic = bnxt_get_default_vnic(bp); int len, rc; - uint32_t hash_types; rc = is_bnxt_in_error(bp); if (rc) return rc; - /* RSS configuration is the same for all VNICs */ + /* Return the RSS configuration of the default VNIC. */ if (vnic && vnic->rss_hash_key) { if (rss_conf->rss_key) { len = rss_conf->rss_key_len <= HW_HASH_KEY_SIZE ? 
rss_conf->rss_key_len : HW_HASH_KEY_SIZE; memcpy(rss_conf->rss_key, vnic->rss_hash_key, len); } - - hash_types = vnic->hash_type; - rss_conf->rss_hf = 0; - if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4) { - rss_conf->rss_hf |= RTE_ETH_RSS_IPV4; - hash_types &= ~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4; - } - if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4) { - rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP; - hash_types &= - ~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4; - } - if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4) { - rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP; - hash_types &= - ~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4; - } - if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6) { - rss_conf->rss_hf |= RTE_ETH_RSS_IPV6; - hash_types &= ~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6; - } - if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6) { - rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP; - hash_types &= - ~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6; - } - if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6) { - rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP; - hash_types &= - ~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6; - } - + bnxt_hwrm_rss_to_rte_hash_conf(vnic, &rss_conf->rss_hf); rss_conf->rss_hf |= bnxt_hwrm_to_rte_rss_level(bp, vnic->hash_mode); - - if (hash_types) { - PMD_DRV_LOG(ERR, - "Unknown RSS config from firmware (%08x), RSS disabled", - vnic->hash_type); - return -ENOTSUP; - } } else { rss_conf->rss_hf = 0; } diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c index 15f0e1b308..2d707b48d2 100644 --- a/drivers/net/bnxt/bnxt_flow.c +++ b/drivers/net/bnxt/bnxt_flow.c @@ -881,6 +881,7 @@ static void bnxt_vnic_cleanup(struct bnxt *bp, struct bnxt_vnic_info *vnic) vnic->fw_grp_ids = NULL; vnic->rx_queue_cnt = 0; + vnic->hash_type = 0; } static int bnxt_vnic_prep(struct bnxt *bp, struct bnxt_vnic_info *vnic, @@ -1067,7 +1068,7 @@ bnxt_vnic_rss_cfg_update(struct bnxt *bp, { const struct rte_flow_action_rss *rss; unsigned int rss_idx, i, j, fw_idx; - uint16_t hash_type; + uint32_t hash_type; uint64_t types; int rc; @@ -1115,9 +1116,9 @@ bnxt_vnic_rss_cfg_update(struct bnxt *bp, } } - /* Currently only Toeplitz hash is supported. */ - if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT && - rss->func != RTE_ETH_HASH_FUNCTION_TOEPLITZ) { + if (BNXT_IS_HASH_FUNC_DEFAULT(rss->func) && + BNXT_IS_HASH_FUNC_TOEPLITZ(rss->func) && + BNXT_IS_HASH_FUNC_SIMPLE_XOR(bp, rss->func)) { rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, @@ -1175,6 +1176,34 @@ bnxt_vnic_rss_cfg_update(struct bnxt *bp, vnic->hash_mode = bnxt_rte_to_hwrm_hash_level(bp, rss->types, rss->level); + /* For P7 chips update the hash_type if hash_type not explicitly passed. + * TODO: For P5 chips. + */ + if (BNXT_CHIP_P7(bp) && + vnic->hash_mode == BNXT_HASH_MODE_DEFAULT && !hash_type) + vnic->hash_type = HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4 | + HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6; + + /* TODO: + * hash will be performed on the L3 and L4 packet headers. + * specific RSS hash types like IPv4-TCP etc... or L4-chksum or IPV4-chksum + * will NOT have any bearing and will not be honored. + * Check and reject flow create accordingly. TODO. 
+ */ + + rc = bnxt_rte_flow_to_hwrm_ring_select_mode(rss->func, + rss->types, + bp, vnic); + if (rc) { + rte_flow_error_set(error, + ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, + act, + "Unsupported RSS hash parameters"); + rc = -rte_errno; + goto ret; + } + /* Update RSS key only if key_len != 0 */ if (rss->key_len != 0) memcpy(vnic->rss_hash_key, rss->key, rss->key_len); diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c index d649f217ec..587433a878 100644 --- a/drivers/net/bnxt/bnxt_hwrm.c +++ b/drivers/net/bnxt/bnxt_hwrm.c @@ -1025,6 +1025,12 @@ int bnxt_hwrm_vnic_qcaps(struct bnxt *bp) PMD_DRV_LOG(DEBUG, "Rx VLAN strip capability enabled\n"); } + if (flags & HWRM_VNIC_QCAPS_OUTPUT_FLAGS_RING_SELECT_MODE_XOR_CAP) + bp->vnic_cap_flags |= BNXT_VNIC_CAP_XOR_MODE; + + if (flags & HWRM_VNIC_QCAPS_OUTPUT_FLAGS_RING_SELECT_MODE_TOEPLITZ_CHKSM_CAP) + bp->vnic_cap_flags |= BNXT_VNIC_CAP_CHKSM_MODE; + bp->max_tpa_v2 = rte_le_to_cpu_16(resp->max_aggs_supported); HWRM_UNLOCK(); diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c index 78337431af..d96d972904 100644 --- a/drivers/net/bnxt/bnxt_reps.c +++ b/drivers/net/bnxt/bnxt_reps.c @@ -569,7 +569,7 @@ int bnxt_rep_dev_info_get_op(struct rte_eth_dev *eth_dev, dev_info->rx_offload_capa = bnxt_get_rx_port_offloads(parent_bp); dev_info->tx_offload_capa = bnxt_get_tx_port_offloads(parent_bp); - dev_info->flow_type_rss_offloads = BNXT_ETH_RSS_SUPPORT; + dev_info->flow_type_rss_offloads = bnxt_eth_rss_support(parent_bp); dev_info->switch_info.name = eth_dev->device->name; dev_info->switch_info.domain_id = rep_bp->switch_domain_id; diff --git a/drivers/net/bnxt/bnxt_vnic.c b/drivers/net/bnxt/bnxt_vnic.c index bf93120d28..6a57f85ea7 100644 --- a/drivers/net/bnxt/bnxt_vnic.c +++ b/drivers/net/bnxt/bnxt_vnic.c @@ -256,10 +256,15 @@ int bnxt_vnic_grp_alloc(struct bnxt *bp, struct bnxt_vnic_info *vnic) return 0; } -uint16_t bnxt_rte_to_hwrm_hash_types(uint64_t rte_type) +uint32_t bnxt_rte_to_hwrm_hash_types(uint64_t rte_type) { - uint16_t hwrm_type = 0; + uint32_t hwrm_type = 0; + if (rte_type & RTE_ETH_RSS_IPV4_CHKSUM) + hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4; + if (rte_type & RTE_ETH_RSS_L4_CHKSUM) + hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4 | + HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6; if ((rte_type & RTE_ETH_RSS_IPV4) || (rte_type & RTE_ETH_RSS_ECPRI)) hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4; @@ -273,6 +278,9 @@ uint16_t bnxt_rte_to_hwrm_hash_types(uint64_t rte_type) hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6; if (rte_type & RTE_ETH_RSS_NONFRAG_IPV6_UDP) hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6; + if (rte_type & RTE_ETH_RSS_IPV4_CHKSUM) + hwrm_type |= + HWRM_VNIC_RSS_CFG_INPUT_RING_SELECT_MODE_TOEPLITZ_CHECKSUM; return hwrm_type; } @@ -287,6 +295,8 @@ int bnxt_rte_to_hwrm_hash_level(struct bnxt *bp, uint64_t hash_f, uint32_t lvl) RTE_ETH_RSS_NONFRAG_IPV6_TCP)); bool l3_only = l3 && !l4; bool l3_and_l4 = l3 && l4; + bool cksum = !!(hash_f & + (RTE_ETH_RSS_IPV4_CHKSUM | RTE_ETH_RSS_L4_CHKSUM)); /* If FW has not advertised capability to configure outer/inner * RSS hashing , just log a message. HW will work in default RSS mode. 
@@ -302,12 +312,12 @@ int bnxt_rte_to_hwrm_hash_level(struct bnxt *bp, uint64_t hash_f, uint32_t lvl) switch (lvl) { case BNXT_RSS_LEVEL_INNERMOST: /* Irrespective of what RTE says, FW always does 4 tuple */ - if (l3_and_l4 || l4 || l3_only) + if (l3_and_l4 || l4 || l3_only || cksum) mode = BNXT_HASH_MODE_INNERMOST; break; case BNXT_RSS_LEVEL_OUTERMOST: /* Irrespective of what RTE says, FW always does 4 tuple */ - if (l3_and_l4 || l4 || l3_only) + if (l3_and_l4 || l4 || l3_only || cksum) mode = BNXT_HASH_MODE_OUTERMOST; break; default: @@ -733,6 +743,16 @@ bnxt_vnic_rss_create(struct bnxt *bp, goto fail_cleanup; } + /* Remove unsupported types */ + rss_info->rss_types &= bnxt_eth_rss_support(bp); + + /* If only unsupported type(s) are specified then quit */ + if (rss_info->rss_types == 0) { + PMD_DRV_LOG(ERR, + "Unsupported RSS hash type(s)\n"); + goto fail_cleanup; + } + /* hwrm_type conversion */ vnic->hash_type = bnxt_rte_to_hwrm_hash_types(rss_info->rss_types); vnic->hash_mode = bnxt_rte_to_hwrm_hash_level(bp, rss_info->rss_types, @@ -803,9 +823,11 @@ bnxt_vnic_rss_hash_algo_update(struct bnxt *bp, struct bnxt_vnic_rss_info *rss_info) { uint8_t old_rss_hash_key[HW_HASH_KEY_SIZE] = { 0 }; - uint16_t hash_type; - uint8_t hash_mode; + uint32_t hash_type; + uint8_t hash_mode; + uint8_t ring_mode; uint32_t apply = 0; + int rc; /* validate key length */ if (rss_info->key_len != 0 && rss_info->key_len != HW_HASH_KEY_SIZE) { @@ -815,12 +837,40 @@ bnxt_vnic_rss_hash_algo_update(struct bnxt *bp, return -EINVAL; } + /* Remove unsupported types */ + rss_info->rss_types &= bnxt_eth_rss_support(bp); + + /* If only unsupported type(s) are specified then quit */ + if (!rss_info->rss_types) { + PMD_DRV_LOG(ERR, + "Unsupported RSS hash type\n"); + return -EINVAL; + } + /* hwrm_type conversion */ hash_type = bnxt_rte_to_hwrm_hash_types(rss_info->rss_types); hash_mode = bnxt_rte_to_hwrm_hash_level(bp, rss_info->rss_types, rss_info->rss_level); + ring_mode = vnic->ring_select_mode; + + /* For P7 chips update the hash_type if hash_type not explicitly passed. + * TODO: For P5 chips. 
+ */ + if (BNXT_CHIP_P7(bp) && + hash_mode == BNXT_HASH_MODE_DEFAULT && !hash_type) + vnic->hash_type = HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4 | + HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6; + + rc = bnxt_rte_flow_to_hwrm_ring_select_mode(rss_info->rss_func, + rss_info->rss_types, + bp, + vnic); + if (rc) + return -EINVAL; + if (vnic->hash_mode != hash_mode || - vnic->hash_type != hash_type) { + vnic->hash_type != hash_type || + vnic->ring_select_mode != ring_mode) { apply = 1; vnic->hash_mode = hash_mode; vnic->hash_type = hash_type; @@ -839,10 +889,10 @@ bnxt_vnic_rss_hash_algo_update(struct bnxt *bp, if (apply) { if (bnxt_hwrm_vnic_rss_cfg(bp, vnic)) { memcpy(vnic->rss_hash_key, old_rss_hash_key, HW_HASH_KEY_SIZE); - BNXT_TF_DBG(ERR, "Error configuring vnic RSS config\n"); + PMD_DRV_LOG(ERR, "Error configuring vnic RSS config\n"); return -EINVAL; } - BNXT_TF_DBG(INFO, "Rss config successfully applied\n"); + PMD_DRV_LOG(INFO, "Rss config successfully applied\n"); } return 0; } @@ -1245,3 +1295,96 @@ bnxt_get_default_vnic(struct bnxt *bp) { return &bp->vnic_info[bp->vnic_queue_db.dflt_vnic_id]; } + +uint8_t _bnxt_rte_to_hwrm_ring_select_mode(enum rte_eth_hash_function hash_f) +{ + /* If RTE_ETH_HASH_FUNCTION_DEFAULT || RTE_ETH_HASH_FUNCTION_TOEPLITZ */ + uint8_t mode = HWRM_VNIC_RSS_CFG_INPUT_RING_SELECT_MODE_TOEPLITZ; + + if (hash_f == RTE_ETH_HASH_FUNCTION_SIMPLE_XOR) + mode = HWRM_VNIC_RSS_CFG_INPUT_RING_SELECT_MODE_XOR; + + return mode; +} + +int bnxt_rte_flow_to_hwrm_ring_select_mode(enum rte_eth_hash_function hash_f, + uint64_t types, struct bnxt *bp, + struct bnxt_vnic_info *vnic) +{ + if (hash_f != RTE_ETH_HASH_FUNCTION_TOEPLITZ && + hash_f != RTE_ETH_HASH_FUNCTION_DEFAULT) { + if (hash_f == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ || + (!BNXT_CHIP_P7(bp) && hash_f == RTE_ETH_HASH_FUNCTION_SIMPLE_XOR)) { + PMD_DRV_LOG(ERR, "Unsupported hash function\n"); + return -ENOTSUP; + } + } + + if (types & RTE_ETH_RSS_IPV4_CHKSUM || types & RTE_ETH_RSS_L4_CHKSUM) { + if ((bp->vnic_cap_flags & BNXT_VNIC_CAP_CHKSM_MODE) && + (hash_f == RTE_ETH_HASH_FUNCTION_DEFAULT || + hash_f == RTE_ETH_HASH_FUNCTION_TOEPLITZ)) { + /* Requesting checksum mode alongside an explicit hash function makes no sense */ + vnic->ring_select_mode = + HWRM_VNIC_RSS_CFG_INPUT_RING_SELECT_MODE_TOEPLITZ_CHECKSUM; + /* shadow copy types as !hash_f is always true with default func */ + return 0; + } + PMD_DRV_LOG(ERR, "Hash function not supported with checksum type\n"); + return -ENOTSUP; + } + + vnic->ring_select_mode = _bnxt_rte_to_hwrm_ring_select_mode(hash_f); + return 0; +} + +int bnxt_rte_eth_to_hwrm_ring_select_mode(struct bnxt *bp, uint64_t types, + struct bnxt_vnic_info *vnic) +{ + /* If the config update comes via ethdev, there is no way to + * specify anything for hash function. + * So it's either TOEPLITZ or the Checksum mode. + * Note that checksum mode is not supported on older devices. + */ + if (types == RTE_ETH_RSS_IPV4_CHKSUM) { + if (bp->vnic_cap_flags & BNXT_VNIC_CAP_CHKSM_MODE) + vnic->ring_select_mode = + HWRM_VNIC_RSS_CFG_INPUT_RING_SELECT_MODE_TOEPLITZ_CHECKSUM; + else + return -ENOTSUP; + } + + /* Older devices can support TOEPLITZ only. + * Thor2 supports other hash functions, but can't change using this path.
+ */ + vnic->ring_select_mode = + HWRM_VNIC_RSS_CFG_INPUT_RING_SELECT_MODE_TOEPLITZ; + return 0; +} + +void bnxt_hwrm_rss_to_rte_hash_conf(struct bnxt_vnic_info *vnic, + uint64_t *rss_conf) +{ + uint32_t hash_types; + + hash_types = vnic->hash_type; + *rss_conf = 0; + if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4) + *rss_conf |= RTE_ETH_RSS_IPV4; + if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4) + *rss_conf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP; + if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4) + *rss_conf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP; + if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6) + *rss_conf |= RTE_ETH_RSS_IPV6; + if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6) + *rss_conf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP; + if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6) + *rss_conf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP; + if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_AH_SPI_IPV6 || + hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_AH_SPI_IPV4) + *rss_conf |= RTE_ETH_RSS_AH; + if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_ESP_SPI_IPV6 || + hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_ESP_SPI_IPV4) + *rss_conf |= RTE_ETH_RSS_ESP; +} diff --git a/drivers/net/bnxt/bnxt_vnic.h b/drivers/net/bnxt/bnxt_vnic.h index 7a6a0aa739..d01c9ebdb4 100644 --- a/drivers/net/bnxt/bnxt_vnic.h +++ b/drivers/net/bnxt/bnxt_vnic.h @@ -31,6 +31,11 @@ (BNXT_VF(bp) && BNXT_VF_IS_TRUSTED(bp) && \ !((bp)->vnic_cap_flags & BNXT_VNIC_CAP_OUTER_RSS_TRUSTED_VF)) || \ (BNXT_VF(bp) && !BNXT_VF_IS_TRUSTED(bp))) +#define BNXT_IS_HASH_FUNC_DEFAULT(f) ((f) != RTE_ETH_HASH_FUNCTION_DEFAULT) +#define BNXT_IS_HASH_FUNC_TOEPLITZ(f) ((f) != RTE_ETH_HASH_FUNCTION_TOEPLITZ) +#define BNXT_IS_HASH_FUNC_SIMPLE_XOR(b, f) \ + ((b)->vnic_cap_flags & BNXT_VNIC_CAP_XOR_MODE && \ + ((f) != RTE_ETH_HASH_FUNCTION_SIMPLE_XOR)) struct bnxt_vnic_info { STAILQ_ENTRY(bnxt_vnic_info) next; @@ -73,6 +78,7 @@ struct bnxt_vnic_info { STAILQ_HEAD(, bnxt_filter_info) filter; STAILQ_HEAD(, rte_flow) flow_list; + uint8_t ring_select_mode; }; struct bnxt_vnic_queue_db { @@ -83,6 +89,7 @@ struct bnxt_vnic_queue_db { /* RSS structure to pass values as an structure argument*/ struct bnxt_vnic_rss_info { + uint32_t rss_func; uint32_t rss_level; uint64_t rss_types; uint32_t key_len; /**< Hash key length in bytes. 
*/ @@ -102,7 +109,7 @@ void bnxt_free_vnic_mem(struct bnxt *bp); int bnxt_alloc_vnic_mem(struct bnxt *bp); int bnxt_vnic_grp_alloc(struct bnxt *bp, struct bnxt_vnic_info *vnic); void bnxt_prandom_bytes(void *dest_ptr, size_t len); -uint16_t bnxt_rte_to_hwrm_hash_types(uint64_t rte_type); +uint32_t bnxt_rte_to_hwrm_hash_types(uint64_t rte_type); int bnxt_rte_to_hwrm_hash_level(struct bnxt *bp, uint64_t hash_f, uint32_t lvl); uint64_t bnxt_hwrm_to_rte_rss_level(struct bnxt *bp, uint32_t mode); @@ -139,5 +146,12 @@ struct bnxt_vnic_info * bnxt_vnic_queue_id_get_next(struct bnxt *bp, uint16_t queue_id, uint16_t *vnic_idx); void bnxt_vnic_tpa_cfg(struct bnxt *bp, uint16_t queue_id, bool flag); - +uint8_t _bnxt_rte_to_hwrm_ring_select_mode(enum rte_eth_hash_function hash_f); +int bnxt_rte_flow_to_hwrm_ring_select_mode(enum rte_eth_hash_function hash_f, + uint64_t types, struct bnxt *bp, + struct bnxt_vnic_info *vnic); +int bnxt_rte_eth_to_hwrm_ring_select_mode(struct bnxt *bp, uint64_t types, + struct bnxt_vnic_info *vnic); +void bnxt_hwrm_rss_to_rte_hash_conf(struct bnxt_vnic_info *vnic, + uint64_t *rss_conf); #endif
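Aside (not part of the patch): a sketch of how an application might select the new checksum-based RSS type through the ethdev API once the firmware advertises the capability. The helper name is hypothetical and error handling is minimal.

    #include <rte_ethdev.h>

    static int enable_checksum_rss(uint16_t port_id)
    {
        struct rte_eth_rss_conf conf = {
            .rss_key = NULL, /* keep the currently programmed hash key */
            .rss_hf = RTE_ETH_RSS_IPV4_CHKSUM,
        };

        /* The PMD maps this type to the TOEPLITZ_CHECKSUM ring select
         * mode via bnxt_rte_eth_to_hwrm_ring_select_mode() above. */
        return rte_eth_dev_rss_hash_update(port_id, &conf);
    }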
From patchwork Wed Dec 27 04:21:11 2023 X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 135598 X-Patchwork-Delegate: ajit.khaparde@broadcom.com From: Ajit Khaparde To: dev@dpdk.org Cc: Damodharam Ammepalli Subject: [PATCH v3 10/18] net/bnxt: add flow query callback Date: Tue, 26 Dec 2023 20:21:11 -0800 Message-Id: <20231227042119.72469-11-ajit.khaparde@broadcom.com> In-Reply-To: <20231227042119.72469-1-ajit.khaparde@broadcom.com> References: <20231227042119.72469-1-ajit.khaparde@broadcom.com> MIME-Version: 1.0 From: Damodharam Ammepalli This patch adds a bnxt query callback to rte_flow_ops in non-TruFlow mode. At this point only the RSS hash function type is returned. Signed-off-by: Damodharam Ammepalli Reviewed-by: Ajit Khaparde --- drivers/net/bnxt/bnxt_flow.c | 61 ++++++++++++++++++++++++++++++++++++ drivers/net/bnxt/bnxt_vnic.c | 11 +++++++ drivers/net/bnxt/bnxt_vnic.h | 2 ++ 3 files changed, 74 insertions(+) diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c index 2d707b48d2..f25bc6ff78 100644 --- a/drivers/net/bnxt/bnxt_flow.c +++ b/drivers/net/bnxt/bnxt_flow.c @@ -1917,6 +1917,66 @@ void bnxt_flow_cnt_alarm_cb(void *arg) (void *)bp); } +/* Query a requested flow rule. */ +static int +bnxt_flow_query_all(struct rte_flow *flow, + const struct rte_flow_action *actions, void *data, + struct rte_flow_error *error) +{ + struct rte_flow_action_rss *rss_conf; + struct bnxt_vnic_info *vnic; + + vnic = flow->vnic; + if (vnic == NULL) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_HANDLE, flow, + "Invalid flow: failed to query flow."); + + for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) { + switch (actions->type) { + case RTE_FLOW_ACTION_TYPE_VOID: + break; + case RTE_FLOW_ACTION_TYPE_COUNT: + break; + case RTE_FLOW_ACTION_TYPE_RSS: + /* Full details of rte_flow_action_rss not available yet TBD*/ + rss_conf = (struct rte_flow_action_rss *)data; + + /* toeplitz is default */ + if (vnic->ring_select_mode == + HWRM_VNIC_RSS_CFG_INPUT_RING_SELECT_MODE_TOEPLITZ) + rss_conf->func = vnic->hash_f_local; + else + rss_conf->func = RTE_ETH_HASH_FUNCTION_SIMPLE_XOR; + + break; + default: + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, actions, + "action is not supported"); + } + } + + return 0; +} + +static int +bnxt_flow_query(struct rte_eth_dev *dev, struct rte_flow *flow, + const struct rte_flow_action *actions, void *data, + struct rte_flow_error *error) +{ + struct bnxt *bp = dev->data->dev_private; + int ret = 0; + + if (bp == NULL) + return -ENODEV; + + bnxt_acquire_flow_lock(bp); + ret = bnxt_flow_query_all(flow, actions, data, error); + bnxt_release_flow_lock(bp); + + return ret; +} static struct rte_flow * bnxt_flow_create(struct rte_eth_dev *dev, @@ -2374,4 +2434,5 @@ const struct rte_flow_ops bnxt_flow_ops = { .create = bnxt_flow_create, .destroy = bnxt_flow_destroy, .flush = bnxt_flow_flush, + .query = bnxt_flow_query, }; diff --git a/drivers/net/bnxt/bnxt_vnic.c
b/drivers/net/bnxt/bnxt_vnic.c index 6a57f85ea7..bf1f0ea09f 100644 --- a/drivers/net/bnxt/bnxt_vnic.c +++ b/drivers/net/bnxt/bnxt_vnic.c @@ -1335,6 +1335,9 @@ int bnxt_rte_flow_to_hwrm_ring_select_mode(enum rte_eth_hash_function hash_f, } vnic->ring_select_mode = _bnxt_rte_to_hwrm_ring_select_mode(hash_f); + vnic->hash_f_local = hash_f; + /* shadow copy types as !hash_f is always true with default func */ + vnic->rss_types_local = types; return 0; } @@ -1359,6 +1362,8 @@ int bnxt_rte_eth_to_hwrm_ring_select_mode(struct bnxt *bp, uint64_t types, */ vnic->ring_select_mode = HWRM_VNIC_RSS_CFG_INPUT_RING_SELECT_MODE_TOEPLITZ; + vnic->hash_f_local = + HWRM_VNIC_RSS_CFG_INPUT_RING_SELECT_MODE_TOEPLITZ; return 0; } @@ -1367,6 +1372,12 @@ void bnxt_hwrm_rss_to_rte_hash_conf(struct bnxt_vnic_info *vnic, { uint32_t hash_types; + /* check for local shadow rte types */ + if (vnic->rss_types_local != 0) { + *rss_conf = vnic->rss_types_local; + return; + } + hash_types = vnic->hash_type; *rss_conf = 0; if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4) diff --git a/drivers/net/bnxt/bnxt_vnic.h b/drivers/net/bnxt/bnxt_vnic.h index d01c9ebdb4..93155648e2 100644 --- a/drivers/net/bnxt/bnxt_vnic.h +++ b/drivers/net/bnxt/bnxt_vnic.h @@ -79,6 +79,8 @@ struct bnxt_vnic_info { STAILQ_HEAD(, bnxt_filter_info) filter; STAILQ_HEAD(, rte_flow) flow_list; uint8_t ring_select_mode; + enum rte_eth_hash_function hash_f_local; + uint64_t rss_types_local; }; struct bnxt_vnic_queue_db {
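Aside (not part of the patch): a hypothetical snippet showing how an application could retrieve the RSS hash function through the new query callback. The flow handle is assumed to come from an earlier rte_flow_create() with an RSS action; the bnxt implementation walks the action list until RTE_FLOW_ACTION_TYPE_END.

    #include <stdio.h>
    #include <rte_flow.h>

    static void show_rss_func(uint16_t port_id, struct rte_flow *flow)
    {
        struct rte_flow_action_rss rss_data = { 0 };
        struct rte_flow_error err;
        const struct rte_flow_action query[] = {
            { .type = RTE_FLOW_ACTION_TYPE_RSS },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        if (rte_flow_query(port_id, flow, query, &rss_data, &err) == 0)
            printf("RSS hash function: %d\n", (int)rss_data.func);
    }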
From patchwork Wed Dec 27 04:21:12 2023 X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 135599 X-Patchwork-Delegate: ajit.khaparde@broadcom.com From: Ajit Khaparde To: dev@dpdk.org Cc: Damodharam Ammepalli Subject: [PATCH v3 11/18] net/bnxt: add ESP and AH header based RSS support Date: Tue, 26 Dec 2023 20:21:12 -0800 Message-Id: <20231227042119.72469-12-ajit.khaparde@broadcom.com> In-Reply-To: <20231227042119.72469-1-ajit.khaparde@broadcom.com> References: <20231227042119.72469-1-ajit.khaparde@broadcom.com> MIME-Version: 1.0 Check whether the firmware supports RSS based on ESP and AH headers, and program the hardware accordingly when the application requests these types and the firmware indicates that the underlying hardware supports the functionality. Signed-off-by: Ajit Khaparde Reviewed-by: Damodharam Ammepalli --- drivers/net/bnxt/bnxt.h | 6 ++ drivers/net/bnxt/bnxt_ethdev.c | 8 ++- drivers/net/bnxt/bnxt_hwrm.c | 104 +++++++++++++++++++++++++-------- drivers/net/bnxt/bnxt_hwrm.h | 1 + drivers/net/bnxt/bnxt_vnic.c | 13 ++++- 5 files changed, 102 insertions(+), 30 deletions(-) diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h index e7b288c849..576688bbff 100644 --- a/drivers/net/bnxt/bnxt.h +++ b/drivers/net/bnxt/bnxt.h @@ -809,6 +809,12 @@ struct bnxt { #define BNXT_VNIC_CAP_XOR_MODE BIT(5) #define BNXT_VNIC_CAP_CHKSM_MODE BIT(6) #define BNXT_VNIC_CAP_L2_CQE_MODE BIT(8) +#define BNXT_VNIC_CAP_AH_SPI4_CAP BIT(9) +#define BNXT_VNIC_CAP_AH_SPI6_CAP BIT(10) +#define BNXT_VNIC_CAP_ESP_SPI4_CAP BIT(11) +#define BNXT_VNIC_CAP_ESP_SPI6_CAP BIT(12) +#define BNXT_VNIC_CAP_AH_SPI_CAP (BNXT_VNIC_CAP_AH_SPI4_CAP | BNXT_VNIC_CAP_AH_SPI6_CAP) +#define BNXT_VNIC_CAP_ESP_SPI_CAP (BNXT_VNIC_CAP_ESP_SPI4_CAP | BNXT_VNIC_CAP_ESP_SPI6_CAP) unsigned int rx_nr_rings; unsigned int rx_cp_nr_rings; diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index ef5e65ff16..5b775e7716 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -995,8 +995,12 @@ uint64_t bnxt_eth_rss_support(struct bnxt *bp) RTE_ETH_RSS_LEVEL_MASK; if (bp->vnic_cap_flags & BNXT_VNIC_CAP_CHKSM_MODE) - support |= (RTE_ETH_RSS_IPV4_CHKSUM | - RTE_ETH_RSS_L4_CHKSUM); + support |= RTE_ETH_RSS_IPV4_CHKSUM | + RTE_ETH_RSS_L4_CHKSUM; + if (bp->vnic_cap_flags & BNXT_VNIC_CAP_AH_SPI_CAP) + support |= RTE_ETH_RSS_AH; + if (bp->vnic_cap_flags & BNXT_VNIC_CAP_ESP_SPI_CAP) + support |= RTE_ETH_RSS_ESP; return support; } diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c index 587433a878..1ac3f30074 100644 --- a/drivers/net/bnxt/bnxt_hwrm.c +++ b/drivers/net/bnxt/bnxt_hwrm.c @@ -1031,6 +1031,21 @@ int
bnxt_hwrm_vnic_qcaps(struct bnxt *bp) if (flags & HWRM_VNIC_QCAPS_OUTPUT_FLAGS_RING_SELECT_MODE_TOEPLITZ_CHKSM_CAP) bp->vnic_cap_flags |= BNXT_VNIC_CAP_CHKSM_MODE; + if (flags & HWRM_VNIC_QCAPS_OUTPUT_FLAGS_L2_CQE_MODE_CAP) + bp->vnic_cap_flags |= BNXT_VNIC_CAP_L2_CQE_MODE; + + if (flags & HWRM_VNIC_QCAPS_OUTPUT_FLAGS_RSS_IPSEC_AH_SPI_IPV4_CAP) + bp->vnic_cap_flags |= BNXT_VNIC_CAP_AH_SPI4_CAP; + + if (flags & HWRM_VNIC_QCAPS_OUTPUT_FLAGS_RSS_IPSEC_AH_SPI_IPV6_CAP) + bp->vnic_cap_flags |= BNXT_VNIC_CAP_AH_SPI6_CAP; + + if (flags & HWRM_VNIC_QCAPS_OUTPUT_FLAGS_RSS_IPSEC_ESP_SPI_IPV4_CAP) + bp->vnic_cap_flags |= BNXT_VNIC_CAP_ESP_SPI4_CAP; + + if (flags & HWRM_VNIC_QCAPS_OUTPUT_FLAGS_RSS_IPSEC_ESP_SPI_IPV6_CAP) + bp->vnic_cap_flags |= BNXT_VNIC_CAP_ESP_SPI6_CAP; + bp->max_tpa_v2 = rte_le_to_cpu_16(resp->max_aggs_supported); HWRM_UNLOCK(); @@ -2412,6 +2427,52 @@ int bnxt_hwrm_vnic_free(struct bnxt *bp, struct bnxt_vnic_info *vnic) return rc; } +static uint32_t bnxt_sanitize_rss_type(struct bnxt *bp, uint32_t types) +{ + uint32_t hwrm_type = types; + + if (types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_ESP_SPI_IPV4 && + !(bp->vnic_cap_flags & BNXT_VNIC_CAP_ESP_SPI4_CAP)) + hwrm_type &= ~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_ESP_SPI_IPV4; + if (types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_ESP_SPI_IPV6 && + !(bp->vnic_cap_flags & BNXT_VNIC_CAP_ESP_SPI6_CAP)) + hwrm_type &= ~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_ESP_SPI_IPV6; + + if (types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_AH_SPI_IPV4 && + !(bp->vnic_cap_flags & BNXT_VNIC_CAP_AH_SPI4_CAP)) + hwrm_type &= ~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_AH_SPI_IPV4; + + if (types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_AH_SPI_IPV6 && + !(bp->vnic_cap_flags & BNXT_VNIC_CAP_AH_SPI6_CAP)) + hwrm_type &= ~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_AH_SPI_IPV6; + + return hwrm_type; +} + +#ifdef RTE_LIBRTE_BNXT_TRUFLOW_DEBUG +static int +bnxt_hwrm_vnic_rss_qcfg_p5(struct bnxt *bp) +{ + struct hwrm_vnic_rss_qcfg_output *resp = bp->hwrm_cmd_resp_addr; + struct hwrm_vnic_rss_qcfg_input req = {0}; + int rc; + + HWRM_PREP(&req, HWRM_VNIC_RSS_QCFG, BNXT_USE_CHIMP_MB); + /* vnic_id and rss_ctx_idx must be set to INVALID to read the + * global hash mode. + */ + req.vnic_id = rte_cpu_to_le_16(BNXT_DFLT_VNIC_ID_INVALID); + req.rss_ctx_idx = rte_cpu_to_le_16(BNXT_RSS_CTX_IDX_INVALID); + rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), + BNXT_USE_CHIMP_MB); + HWRM_CHECK_RESULT(); + HWRM_UNLOCK(); + PMD_DRV_LOG(DEBUG, "RSS QCFG: Hash level %d\n", resp->hash_mode_flags); + + return rc; +} +#endif + static int bnxt_hwrm_vnic_rss_cfg_p5(struct bnxt *bp, struct bnxt_vnic_info *vnic) { @@ -2425,7 +2486,10 @@ bnxt_hwrm_vnic_rss_cfg_p5(struct bnxt *bp, struct bnxt_vnic_info *vnic) HWRM_PREP(&req, HWRM_VNIC_RSS_CFG, BNXT_USE_CHIMP_MB); req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id); - req.hash_type = rte_cpu_to_le_32(vnic->hash_type); + req.hash_type = rte_cpu_to_le_32(bnxt_sanitize_rss_type(bp, vnic->hash_type)); + /* Update req with vnic ring_select_mode for P7 */ + if (BNXT_CHIP_P7(bp)) + req.ring_select_mode = vnic->ring_select_mode; /* When the vnic_id in the request field is a valid * one, the hash_mode_flags in the request field must * be set to DEFAULT. 
And any request to change the @@ -2524,7 +2588,7 @@ bnxt_hwrm_vnic_rss_cfg_non_p5(struct bnxt *bp, struct bnxt_vnic_info *vnic) HWRM_PREP(&req, HWRM_VNIC_RSS_CFG, BNXT_USE_CHIMP_MB); - req.hash_type = rte_cpu_to_le_32(vnic->hash_type); + req.hash_type = rte_cpu_to_le_32(bnxt_sanitize_rss_type(bp, vnic->hash_type)); req.hash_mode_flags = vnic->hash_mode; req.ring_grp_tbl_addr = @@ -2550,29 +2614,18 @@ int bnxt_hwrm_vnic_rss_cfg(struct bnxt *bp, if (!vnic->rss_table) return 0; - if (BNXT_CHIP_P5(bp)) { - rc = bnxt_hwrm_vnic_rss_cfg_p5(bp, vnic); - if (rc) - return rc; - /* Configuring the hash mode has to be done in a - * different VNIC_RSS_CFG HWRM command by setting - * vnic_id & rss_ctx_id to INVALID. The only - * exception to this is if the USER doesn't want - * to change the default behavior. So, ideally - * bnxt_hwrm_vnic_rss_cfg_hash_mode_p5 should be - * called when user is explicitly changing the hash - * mode. However, this logic will unconditionally - * call bnxt_hwrm_vnic_rss_cfg_hash_mode_p5 to - * simplify the logic as there is no harm in calling - * bnxt_hwrm_vnic_rss_cfg_hash_mode_p5 even when - * user is not setting it explicitly. Because, this - * routine will convert the default value to inner - * which is our adapter's default behavior. - */ + /* Handle all the non-thor skus rss here */ + if (!BNXT_CHIP_P5_P7(bp)) + return bnxt_hwrm_vnic_rss_cfg_non_p5(bp, vnic); + + /* Handle Thor2 and Thor skus rss here */ + rc = bnxt_hwrm_vnic_rss_cfg_p5(bp, vnic); + + /* configure hash mode for Thor/Thor2 */ + if (!rc) return bnxt_hwrm_vnic_rss_cfg_hash_mode_p5(bp, vnic); - } - return bnxt_hwrm_vnic_rss_cfg_non_p5(bp, vnic); + return rc; } int bnxt_hwrm_vnic_plcmode_cfg(struct bnxt *bp, @@ -5343,7 +5396,7 @@ int bnxt_hwrm_clear_ntuple_filter(struct bnxt *bp, return 0; } -static int +int bnxt_vnic_rss_configure_p5(struct bnxt *bp, struct bnxt_vnic_info *vnic) { struct hwrm_vnic_rss_cfg_output *resp = bp->hwrm_cmd_resp_addr; @@ -5363,8 +5416,9 @@ bnxt_vnic_rss_configure_p5(struct bnxt *bp, struct bnxt_vnic_info *vnic) HWRM_PREP(&req, HWRM_VNIC_RSS_CFG, BNXT_USE_CHIMP_MB); req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id); - req.hash_type = rte_cpu_to_le_32(vnic->hash_type); + req.hash_type = rte_cpu_to_le_32(bnxt_sanitize_rss_type(bp, vnic->hash_type)); req.hash_mode_flags = vnic->hash_mode; + req.ring_select_mode = vnic->ring_select_mode; req.ring_grp_tbl_addr = rte_cpu_to_le_64(vnic->rss_table_dma_addr + diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h index 3d5194257b..56b232d7de 100644 --- a/drivers/net/bnxt/bnxt_hwrm.h +++ b/drivers/net/bnxt/bnxt_hwrm.h @@ -357,6 +357,7 @@ void bnxt_free_hwrm_tx_ring(struct bnxt *bp, int queue_index); int bnxt_alloc_hwrm_tx_ring(struct bnxt *bp, int queue_index); int bnxt_hwrm_config_host_mtu(struct bnxt *bp); int bnxt_vnic_rss_clear_p5(struct bnxt *bp, struct bnxt_vnic_info *vnic); +int bnxt_vnic_rss_configure_p5(struct bnxt *bp, struct bnxt_vnic_info *vnic); int bnxt_hwrm_func_backing_store_qcaps_v2(struct bnxt *bp); int bnxt_hwrm_func_backing_store_cfg_v2(struct bnxt *bp, struct bnxt_ctx_mem *ctxm); diff --git a/drivers/net/bnxt/bnxt_vnic.c b/drivers/net/bnxt/bnxt_vnic.c index bf1f0ea09f..5ea34f7cb6 100644 --- a/drivers/net/bnxt/bnxt_vnic.c +++ b/drivers/net/bnxt/bnxt_vnic.c @@ -268,6 +268,8 @@ uint32_t bnxt_rte_to_hwrm_hash_types(uint64_t rte_type) if ((rte_type & RTE_ETH_RSS_IPV4) || (rte_type & RTE_ETH_RSS_ECPRI)) hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4; + if (rte_type & RTE_ETH_RSS_ECPRI) + hwrm_type |= 
HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4; if (rte_type & RTE_ETH_RSS_NONFRAG_IPV4_TCP) hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4; if (rte_type & RTE_ETH_RSS_NONFRAG_IPV4_UDP) @@ -278,9 +280,12 @@ uint32_t bnxt_rte_to_hwrm_hash_types(uint64_t rte_type) hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6; if (rte_type & RTE_ETH_RSS_NONFRAG_IPV6_UDP) hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6; - if (rte_type & RTE_ETH_RSS_IPV4_CHKSUM) - hwrm_type |= - HWRM_VNIC_RSS_CFG_INPUT_RING_SELECT_MODE_TOEPLITZ_CHECKSUM; + if (rte_type & RTE_ETH_RSS_ESP) + hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_ESP_SPI_IPV4 | + HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_ESP_SPI_IPV6; + if (rte_type & RTE_ETH_RSS_AH) + hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_AH_SPI_IPV4 | + HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_AH_SPI_IPV6; return hwrm_type; } @@ -1327,7 +1332,9 @@ int bnxt_rte_flow_to_hwrm_ring_select_mode(enum rte_eth_hash_function hash_f, /* Requesting checksum mode alongside an explicit hash function makes no sense */ vnic->ring_select_mode = HWRM_VNIC_RSS_CFG_INPUT_RING_SELECT_MODE_TOEPLITZ_CHECKSUM; + vnic->hash_f_local = RTE_ETH_HASH_FUNCTION_TOEPLITZ; /* shadow copy types as !hash_f is always true with default func */ + vnic->rss_types_local = types; return 0; } PMD_DRV_LOG(ERR, "Hash function not supported with checksum type\n");
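Aside (not part of the patch): a sketch of an application first checking that the ESP/AH RSS types are reported in flow_type_rss_offloads and then requesting them. The helper name is hypothetical.

    #include <rte_ethdev.h>

    static int enable_ipsec_rss(uint16_t port_id)
    {
        struct rte_eth_dev_info info;
        struct rte_eth_rss_conf conf = { .rss_key = NULL };

        if (rte_eth_dev_info_get(port_id, &info) != 0)
            return -1;
        /* ESP/AH appear here only when the firmware advertises the
         * corresponding VNIC RSS capabilities (see the qcaps flags above). */
        if (!(info.flow_type_rss_offloads & (RTE_ETH_RSS_ESP | RTE_ETH_RSS_AH)))
            return -1;

        conf.rss_hf = RTE_ETH_RSS_ESP | RTE_ETH_RSS_AH;
        return rte_eth_dev_rss_hash_update(port_id, &conf);
    }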
From patchwork Wed Dec 27 04:21:13 2023 X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 135600 X-Patchwork-Delegate: ajit.khaparde@broadcom.com From: Ajit Khaparde To: dev@dpdk.org Cc: Damodharam Ammepalli Subject: [PATCH v3 12/18] net/bnxt: set allmulti mode if multicast filter fails Date: Tue, 26 Dec 2023 20:21:13 -0800 Message-Id: <20231227042119.72469-13-ajit.khaparde@broadcom.com> In-Reply-To: <20231227042119.72469-1-ajit.khaparde@broadcom.com> References: <20231227042119.72469-1-ajit.khaparde@broadcom.com> MIME-Version: 1.0 Fall back to all-multicast mode if the firmware rejects multicast filter programming. The firmware can reject the MC filter programming request if it is running low on resources when there is a large number of functions. The driver must be prepared to fall back to the all-multicast mode if the original MC filter programming request is rejected. Signed-off-by: Ajit Khaparde Reviewed-by: Damodharam Ammepalli --- drivers/net/bnxt/bnxt_ethdev.c | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-) diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index 5b775e7716..7aed6d3ab6 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -2947,7 +2947,17 @@ bnxt_dev_set_mc_addr_list_op(struct rte_eth_dev *eth_dev, vnic->flags &= ~BNXT_VNIC_INFO_MCAST; allmulti: - return bnxt_hwrm_cfa_l2_set_rx_mask(bp, vnic, 0, NULL); + rc = bnxt_hwrm_cfa_l2_set_rx_mask(bp, vnic, 0, NULL); + if (rc == -ENOSPC && (vnic->flags & BNXT_VNIC_INFO_MCAST)) { + /* If MCAST addition failed because FW ran out of + * multicast filters, enable all multicast mode.
+ */ + vnic->flags &= ~BNXT_VNIC_INFO_MCAST; + vnic->flags |= BNXT_VNIC_INFO_ALLMULTI; + goto allmulti; + } + + return rc; } static int
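Aside (not part of the patch): from the application's point of view the fallback is transparent -- a multicast filter list that the firmware cannot accommodate no longer fails the call outright, as sketched in this hypothetical snippet.

    #include <rte_ethdev.h>
    #include <rte_ether.h>

    static int add_mc_filter(uint16_t port_id)
    {
        struct rte_ether_addr mc[1] = {
            { .addr_bytes = { 0x01, 0x00, 0x5e, 0x00, 0x00, 0x01 } },
        };

        /* If the firmware returns -ENOSPC for the filter request, the
         * PMD now falls back to all-multicast mode instead of
         * propagating the error. */
        return rte_eth_dev_set_mc_addr_list(port_id, mc, 1);
    }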
From patchwork Wed Dec 27 04:21:14 2023 X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 135601 From: Ajit Khaparde To: dev@dpdk.org Cc: Jay Ding Subject: [PATCH v3 13/18] net/bnxt: add VF FLR async event handler Date: Tue, 26 Dec 2023 20:21:14 -0800 Message-Id: <20231227042119.72469-14-ajit.khaparde@broadcom.com> In-Reply-To: <20231227042119.72469-1-ajit.khaparde@broadcom.com> References: <20231227042119.72469-1-ajit.khaparde@broadcom.com>

From: Jay Ding

When a VF undergoes an FLR, the firmware indicates this via an async notification to the PF. Note that the PF driver needs to register for the notification with the firmware. Add support for VF_FLR async event handling when the driver is running on a PF.

Signed-off-by: Jay Ding Signed-off-by: Ajit Khaparde --- drivers/net/bnxt/bnxt_cpr.c | 20 ++++++++++++++++++++ drivers/net/bnxt/bnxt_hwrm.c | 6 ++++-- drivers/net/bnxt/bnxt_hwrm.h | 2 ++ 3 files changed, 26 insertions(+), 2 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_cpr.c b/drivers/net/bnxt/bnxt_cpr.c index 0733cf4df2..fb43bc58da 100644 --- a/drivers/net/bnxt/bnxt_cpr.c +++ b/drivers/net/bnxt/bnxt_cpr.c @@ -127,6 +127,23 @@ void bnxt_handle_vf_cfg_change(void *arg) } } +static void +bnxt_process_vf_flr(struct bnxt *bp, uint32_t data1) +{ + uint16_t pfid, vfid; + + if (!BNXT_TRUFLOW_EN(bp)) + return; + + pfid = (data1 & HWRM_ASYNC_EVENT_CMPL_VF_FLR_EVENT_DATA1_PF_ID_MASK) >> + HWRM_ASYNC_EVENT_CMPL_VF_FLR_EVENT_DATA1_PF_ID_SFT; + vfid = (data1 & HWRM_ASYNC_EVENT_CMPL_VF_FLR_EVENT_DATA1_VF_ID_MASK) >> + HWRM_ASYNC_EVENT_CMPL_VF_FLR_EVENT_DATA1_VF_ID_SFT; + + PMD_DRV_LOG(INFO, "VF FLR async event received pfid: %u, vfid: %u\n", + pfid, vfid); +} + /* * Async event handling */ @@ -264,6 +281,9 @@ void bnxt_handle_async_event(struct bnxt *bp, case HWRM_ASYNC_EVENT_CMPL_EVENT_ID_ERROR_REPORT: bnxt_handle_event_error_report(bp, data1, data2); break; + case HWRM_ASYNC_EVENT_CMPL_EVENT_ID_VF_FLR: + bnxt_process_vf_flr(bp, data1); + break; default: PMD_DRV_LOG(DEBUG, "handle_async_event id = 0x%x\n", event_id); break;

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c index 1ac3f30074..3c16abea69 100644 --- a/drivers/net/bnxt/bnxt_hwrm.c +++ b/drivers/net/bnxt/bnxt_hwrm.c @@ -1125,9 +1125,11 @@ int bnxt_hwrm_func_driver_register(struct bnxt *bp) req.async_event_fwd[1] |= rte_cpu_to_le_32(ASYNC_CMPL_EVENT_ID_DBG_NOTIFICATION); - if (BNXT_PF(bp) || BNXT_VF_IS_TRUSTED(bp)) + if (BNXT_PF(bp) || BNXT_VF_IS_TRUSTED(bp)) { req.async_event_fwd[1] |= - rte_cpu_to_le_32(ASYNC_CMPL_EVENT_ID_DEFAULT_VNIC_CHANGE); + rte_cpu_to_le_32(ASYNC_CMPL_EVENT_ID_DEFAULT_VNIC_CHANGE | + ASYNC_CMPL_EVENT_ID_VF_FLR); + } req.async_event_fwd[2] |= rte_cpu_to_le_32(ASYNC_CMPL_EVENT_ID_ECHO_REQUEST |

diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h index 56b232d7de..6116253787 100644 --- a/drivers/net/bnxt/bnxt_hwrm.h +++ b/drivers/net/bnxt/bnxt_hwrm.h @@ -29,6 +29,8 @@ struct hwrm_func_qstats_output; (1 << HWRM_ASYNC_EVENT_CMPL_EVENT_ID_ERROR_RECOVERY) #define ASYNC_CMPL_EVENT_ID_PF_DRVR_UNLOAD \ (1 << (HWRM_ASYNC_EVENT_CMPL_EVENT_ID_PF_DRVR_UNLOAD - 32)) +#define ASYNC_CMPL_EVENT_ID_VF_FLR \ + (1 << (HWRM_ASYNC_EVENT_CMPL_EVENT_ID_VF_FLR - 32)) #define ASYNC_CMPL_EVENT_ID_VF_CFG_CHANGE \ (1 << (HWRM_ASYNC_EVENT_CMPL_EVENT_ID_VF_CFG_CHANGE - 32)) #define ASYNC_CMPL_EVENT_ID_DBG_NOTIFICATION \
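The payload decode in bnxt_process_vf_flr() is plain mask-and-shift on data1. A standalone sketch with hypothetical mask and shift values (the real ones are the HWRM_ASYNC_EVENT_CMPL_VF_FLR_EVENT_DATA1_* definitions from the HSI header):

#include <stdint.h>
#include <stdio.h>

/* Hypothetical layout: PF id in data1[7:0], VF id in data1[23:8].
 * The actual masks/shifts come from the HWRM_ASYNC_EVENT_CMPL_VF_FLR
 * definitions in hsi_struct_def_dpdk.h. */
#define VF_FLR_PF_ID_MASK 0x000000ffu
#define VF_FLR_PF_ID_SFT  0
#define VF_FLR_VF_ID_MASK 0x00ffff00u
#define VF_FLR_VF_ID_SFT  8

int main(void)
{
	uint32_t data1 = 0x00012a03u; /* example event payload */
	uint16_t pfid = (data1 & VF_FLR_PF_ID_MASK) >> VF_FLR_PF_ID_SFT;
	uint16_t vfid = (data1 & VF_FLR_VF_ID_MASK) >> VF_FLR_VF_ID_SFT;

	printf("VF FLR: pfid %u vfid %u\n", pfid, vfid);
	return 0;
}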
From patchwork Wed Dec 27 04:21:15 2023 X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 135602 From: Ajit Khaparde To: dev@dpdk.org Cc: Damodharam Ammepalli Subject: [PATCH v3 14/18] net/bnxt: add tunnel TPA support Date: Tue, 26 Dec 2023 20:21:15 -0800 Message-Id: <20231227042119.72469-15-ajit.khaparde@broadcom.com> In-Reply-To: <20231227042119.72469-1-ajit.khaparde@broadcom.com> References: <20231227042119.72469-1-ajit.khaparde@broadcom.com>

From: Damodharam Ammepalli

This patch adds support for the tunnel TPA type. Tunnel TPA support is brought in by the updated bit field tnl_tpa_en (bit 4) in hwrm_vnic_tpa_cfg_input->enables; the firmware indicates the underlying hardware's capability through the VNIC capability query. This patch updates the HWRM_VNIC_TPA_CFG request with the vxlan, geneve and default tunnel-type bit fields. The patch also updates the driver to use the V3 TPA completion, which the P7 devices support.
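Besides the tunnel enable bitmap, the HWRM_VNIC_TPA_CFG hunk below sizes max_agg_segs in log2 units from the device MTU and the host page size. A standalone sketch of that computation (hypothetical helper names; constants and arithmetic copied from the hunk, including its mss = mtu - 40 shorthand):

#include <stdint.h>
#include <stdio.h>

#define TPA_MAX_PAGES 65536 /* BNXT_TPA_MAX_PAGES in the patch */

/* Stand-in for rte_log2_u32(), which rounds up to the next power of two. */
static uint32_t log2_ceil_u32(uint32_t v)
{
	uint32_t r = 0;

	while ((1u << r) < v)
		r++;
	return r;
}

/* Mirror of the max_agg_segs computation in the hunk below. */
static uint32_t tpa_max_agg_segs_log2(uint32_t mtu, uint32_t page_size)
{
	uint32_t mss = mtu - 40; /* the hunk's MTU-minus-headers shorthand */
	uint32_t max_mbuf_frags = TPA_MAX_PAGES / (page_size + 1);
	uint32_t n, nsegs;

	if (mss <= page_size) {
		n = page_size / mss;
		nsegs = (max_mbuf_frags - 1) * n;
	} else {
		n = mss / page_size;
		if (mss & (page_size - 1))
			n++;
		nsegs = (max_mbuf_frags - n) / n;
	}
	return log2_ceil_u32(nsegs);
}

int main(void)
{
	/* e.g. 1500-byte MTU with 4 KiB pages -> log2 segment count */
	printf("%u\n", tpa_max_agg_segs_log2(1500, 4096));
	return 0;
}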
Signed-off-by: Damodharam Ammepalli Reviewed-by: Ajit Khaparde --- drivers/net/bnxt/bnxt.h | 4 ++ drivers/net/bnxt/bnxt_hwrm.c | 74 ++++++++++++++++++++++++++++++++++++ drivers/net/bnxt/bnxt_rxr.c | 9 +++-- drivers/net/bnxt/bnxt_vnic.c | 16 ++++++++ 4 files changed, 100 insertions(+), 3 deletions(-) diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h index 576688bbff..2357e9f747 100644 --- a/drivers/net/bnxt/bnxt.h +++ b/drivers/net/bnxt/bnxt.h @@ -18,6 +18,7 @@ #include #include #include +#include #include "bnxt_cpr.h" #include "bnxt_util.h" @@ -119,6 +120,8 @@ (BNXT_CHIP_P5_P7(bp) ? TPA_MAX_SEGS_TH : \ TPA_MAX_SEGS) +#define BNXT_TPA_MAX_PAGES 65536 + /* * Define the number of async completion rings to be used. Set to zero for * configurations in which the maximum number of packet completion rings @@ -815,6 +818,7 @@ struct bnxt { #define BNXT_VNIC_CAP_ESP_SPI6_CAP BIT(12) #define BNXT_VNIC_CAP_AH_SPI_CAP (BNXT_VNIC_CAP_AH_SPI4_CAP | BNXT_VNIC_CAP_AH_SPI6_CAP) #define BNXT_VNIC_CAP_ESP_SPI_CAP (BNXT_VNIC_CAP_ESP_SPI4_CAP | BNXT_VNIC_CAP_ESP_SPI6_CAP) +#define BNXT_VNIC_CAP_VNIC_TUNNEL_TPA BIT(13) unsigned int rx_nr_rings; unsigned int rx_cp_nr_rings; diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c index 3c16abea69..f896a41653 100644 --- a/drivers/net/bnxt/bnxt_hwrm.c +++ b/drivers/net/bnxt/bnxt_hwrm.c @@ -1046,6 +1046,9 @@ int bnxt_hwrm_vnic_qcaps(struct bnxt *bp) if (flags & HWRM_VNIC_QCAPS_OUTPUT_FLAGS_RSS_IPSEC_ESP_SPI_IPV6_CAP) bp->vnic_cap_flags |= BNXT_VNIC_CAP_ESP_SPI6_CAP; + if (flags & HWRM_VNIC_QCAPS_OUTPUT_FLAGS_HW_TUNNEL_TPA_CAP) + bp->vnic_cap_flags |= BNXT_VNIC_CAP_VNIC_TUNNEL_TPA; + bp->max_tpa_v2 = rte_le_to_cpu_16(resp->max_aggs_supported); HWRM_UNLOCK(); @@ -2666,6 +2669,30 @@ int bnxt_hwrm_vnic_plcmode_cfg(struct bnxt *bp, return rc; } +#define BNXT_DFLT_TUNL_TPA_BMAP \ + (HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_GRE | \ + HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_IPV4 | \ + HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_IPV6) + +static void bnxt_vnic_update_tunl_tpa_bmap(struct bnxt *bp, + struct hwrm_vnic_tpa_cfg_input *req) +{ + uint32_t tunl_tpa_bmap = BNXT_DFLT_TUNL_TPA_BMAP; + + if (!(bp->vnic_cap_flags & BNXT_VNIC_CAP_VNIC_TUNNEL_TPA)) + return; + + if (bp->vxlan_port_cnt) + tunl_tpa_bmap |= HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_VXLAN | + HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_VXLAN_GPE; + + if (bp->geneve_port_cnt) + tunl_tpa_bmap |= HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_GENEVE; + + req->enables |= rte_cpu_to_le_32(HWRM_VNIC_TPA_CFG_INPUT_ENABLES_TNL_TPA_EN); + req->tnl_tpa_en_bitmap = rte_cpu_to_le_32(tunl_tpa_bmap); +} + int bnxt_hwrm_vnic_tpa_cfg(struct bnxt *bp, struct bnxt_vnic_info *vnic, bool enable) { @@ -2714,6 +2741,29 @@ int bnxt_hwrm_vnic_tpa_cfg(struct bnxt *bp, if (BNXT_CHIP_P5_P7(bp)) req.max_aggs = rte_cpu_to_le_16(bp->max_tpa_v2); + + /* For tpa v2 handle as per spec mss and log2 units */ + if (BNXT_CHIP_P7(bp)) { + uint32_t nsegs, n, segs = 0; + uint16_t mss = bp->eth_dev->data->mtu - 40; + size_t page_size = rte_mem_page_size(); + uint32_t max_mbuf_frags = + BNXT_TPA_MAX_PAGES / (rte_mem_page_size() + 1); + + /* Calculate the number of segs based on mss */ + if (mss <= page_size) { + n = page_size / mss; + nsegs = (max_mbuf_frags - 1) * n; + } else { + n = mss / page_size; + if (mss & (page_size - 1)) + n++; + nsegs = (max_mbuf_frags - n) / n; + } + segs = rte_log2_u32(nsegs); + req.max_agg_segs = rte_cpu_to_le_16(segs); + } + bnxt_vnic_update_tunl_tpa_bmap(bp, &req); } req.vnic_id = 
rte_cpu_to_le_16(vnic->fw_vnic_id); @@ -4242,6 +4292,27 @@ int bnxt_hwrm_pf_evb_mode(struct bnxt *bp) return rc; } +static int bnxt_hwrm_set_tpa(struct bnxt *bp) +{ + struct rte_eth_conf *dev_conf = &bp->eth_dev->data->dev_conf; + uint64_t rx_offloads = dev_conf->rxmode.offloads; + bool tpa_flags = 0; + int rc, i; + + tpa_flags = (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) ? true : false; + for (i = 0; i < bp->max_vnics; i++) { + struct bnxt_vnic_info *vnic = &bp->vnic_info[i]; + + if (vnic->fw_vnic_id == INVALID_HW_RING_ID) + continue; + + rc = bnxt_hwrm_vnic_tpa_cfg(bp, vnic, tpa_flags); + if (rc) + return rc; + } + return 0; +} + int bnxt_hwrm_tunnel_dst_port_alloc(struct bnxt *bp, uint16_t port, uint8_t tunnel_type) { @@ -4278,6 +4349,8 @@ int bnxt_hwrm_tunnel_dst_port_alloc(struct bnxt *bp, uint16_t port, HWRM_UNLOCK(); + bnxt_hwrm_set_tpa(bp); + return rc; } @@ -4346,6 +4419,7 @@ int bnxt_hwrm_tunnel_dst_port_free(struct bnxt *bp, uint16_t port, bp->ecpri_port_cnt = 0; } + bnxt_hwrm_set_tpa(bp); return rc; } diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c index d0706874a6..3542975600 100644 --- a/drivers/net/bnxt/bnxt_rxr.c +++ b/drivers/net/bnxt/bnxt_rxr.c @@ -153,7 +153,8 @@ static void bnxt_rx_ring_reset(void *arg) rxr = rxq->rx_ring; /* Disable and flush TPA before resetting the RX ring */ if (rxr->tpa_info) - bnxt_hwrm_vnic_tpa_cfg(bp, rxq->vnic, false); + bnxt_vnic_tpa_cfg(bp, rxq->queue_id, false); + rc = bnxt_hwrm_rx_ring_reset(bp, i); if (rc) { PMD_DRV_LOG(ERR, "Rx ring%d reset failed\n", i); @@ -163,12 +164,13 @@ static void bnxt_rx_ring_reset(void *arg) bnxt_rx_queue_release_mbufs(rxq); rxr->rx_raw_prod = 0; rxr->ag_raw_prod = 0; + rxr->ag_cons = 0; rxr->rx_next_cons = 0; bnxt_init_one_rx_ring(rxq); bnxt_db_write(&rxr->rx_db, rxr->rx_raw_prod); bnxt_db_write(&rxr->ag_db, rxr->ag_raw_prod); if (rxr->tpa_info) - bnxt_hwrm_vnic_tpa_cfg(bp, rxq->vnic, true); + bnxt_vnic_tpa_cfg(bp, rxq->queue_id, true); rxq->in_reset = 0; } @@ -1151,7 +1153,8 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt, return -EBUSY; if (cmp_type == RX_TPA_START_CMPL_TYPE_RX_TPA_START || - cmp_type == RX_TPA_START_V2_CMPL_TYPE_RX_TPA_START_V2) { + cmp_type == RX_TPA_START_V2_CMPL_TYPE_RX_TPA_START_V2 || + cmp_type == RX_TPA_START_V3_CMPL_TYPE_RX_TPA_START_V3) { bnxt_tpa_start(rxq, (struct rx_tpa_start_cmpl *)rxcmp, (struct rx_tpa_start_cmpl_hi *)rxcmp1); rc = -EINVAL; /* Continue w/o new mbuf */ diff --git a/drivers/net/bnxt/bnxt_vnic.c b/drivers/net/bnxt/bnxt_vnic.c index 5ea34f7cb6..5092a7d774 100644 --- a/drivers/net/bnxt/bnxt_vnic.c +++ b/drivers/net/bnxt/bnxt_vnic.c @@ -464,7 +464,9 @@ bnxt_vnic_queue_delete(struct bnxt *bp, uint16_t vnic_idx) static struct bnxt_vnic_info* bnxt_vnic_queue_create(struct bnxt *bp, int32_t vnic_id, uint16_t q_index) { + struct rte_eth_conf *dev_conf = &bp->eth_dev->data->dev_conf; uint8_t *rx_queue_state = bp->eth_dev->data->rx_queue_state; + uint64_t rx_offloads = dev_conf->rxmode.offloads; struct bnxt_vnic_info *vnic; struct bnxt_rx_queue *rxq = NULL; int32_t rc = -EINVAL; @@ -523,6 +525,12 @@ bnxt_vnic_queue_create(struct bnxt *bp, int32_t vnic_id, uint16_t q_index) goto cleanup; } + rc = bnxt_hwrm_vnic_tpa_cfg(bp, vnic, + (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) ? 
+ true : false); + if (rc) + PMD_DRV_LOG(DEBUG, "Failed to configure TPA on this vnic %d\n", q_index); + rc = bnxt_hwrm_vnic_plcmode_cfg(bp, vnic); if (rc) { PMD_DRV_LOG(DEBUG, "Failed to configure vnic plcmode %d\n", @@ -658,7 +666,9 @@ bnxt_vnic_rss_create(struct bnxt *bp, struct bnxt_vnic_rss_info *rss_info, uint16_t vnic_id) { + struct rte_eth_conf *dev_conf = &bp->eth_dev->data->dev_conf; uint8_t *rx_queue_state = bp->eth_dev->data->rx_queue_state; + uint64_t rx_offloads = dev_conf->rxmode.offloads; struct bnxt_vnic_info *vnic; struct bnxt_rx_queue *rxq = NULL; uint32_t idx, nr_ctxs, config_rss = 0; @@ -741,6 +751,12 @@ bnxt_vnic_rss_create(struct bnxt *bp, goto fail_cleanup; } + rc = bnxt_hwrm_vnic_tpa_cfg(bp, vnic, + (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) ? + true : false); + if (rc) + PMD_DRV_LOG(DEBUG, "Failed to configure TPA on this vnic %d\n", idx); + rc = bnxt_hwrm_vnic_plcmode_cfg(bp, vnic); if (rc) { PMD_DRV_LOG(ERR, "Failed to configure vnic plcmode %d\n",
From patchwork Wed Dec 27 04:21:16 2023 X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 135603 From: Ajit Khaparde To: dev@dpdk.org Cc: Damodharam Ammepalli Subject: [PATCH v3 15/18] net/bnxt: add 400G get support for P7 devices Date: Tue, 26 Dec 2023 20:21:16 -0800 Message-Id: <20231227042119.72469-16-ajit.khaparde@broadcom.com> In-Reply-To: <20231227042119.72469-1-ajit.khaparde@broadcom.com> References: <20231227042119.72469-1-ajit.khaparde@broadcom.com>

From: Damodharam Ammepalli

P7 devices report speeds via the speeds2 HSI fields. Add the required support to capture the capability from phy_qcaps and save the speeds2 fields into the driver's private structure. In fixed mode, update link_speed from the force_link_speeds2 field. Update logging to provide more information on the number of lanes and the link signal mode. Some code refactoring is done for PHY auto-detect and for displaying XCVR information.

Signed-off-by: Damodharam Ammepalli Reviewed-by: Ajit Khaparde --- drivers/net/bnxt/bnxt.h | 15 + drivers/net/bnxt/bnxt_ethdev.c | 57 ++- drivers/net/bnxt/bnxt_hwrm.c | 493 ++++++++++++++++++++++++- drivers/net/bnxt/bnxt_hwrm.h | 1 + drivers/net/bnxt/hsi_struct_def_dpdk.h | 10 +- 5 files changed, 568 insertions(+), 8 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h index 2357e9f747..858689533b 100644 --- a/drivers/net/bnxt/bnxt.h +++ b/drivers/net/bnxt/bnxt.h @@ -318,6 +318,16 @@ struct bnxt_link_info { uint16_t support_pam4_auto_speeds; uint8_t req_signal_mode; uint8_t module_status; + /* P7 speeds2 fields */ + bool support_speeds_v2; + uint16_t supported_speeds2_force_mode; + uint16_t supported_speeds2_auto_mode; + uint16_t support_speeds2; + uint16_t force_link_speeds2; + uint16_t auto_link_speeds2; + uint16_t cfg_auto_link_speeds2_mask; + uint8_t active_lanes; + uint8_t option_flags; }; #define BNXT_COS_QUEUE_COUNT 8 @@ -1156,6 +1166,11 @@ extern int bnxt_logtype_driver; #define PMD_DRV_LOG(level, fmt, args...)
\ PMD_DRV_LOG_RAW(level, fmt, ## args) +#define BNXT_LINK_SPEEDS_V2_OPTIONS(f) \ + ((f) & HWRM_PORT_PHY_QCFG_OUTPUT_OPTION_FLAGS_SPEEDS2_SUPPORTED) +#define BNXT_LINK_SPEEDS_V2_VF(bp) (BNXT_VF((bp)) && ((bp)->link_info->option_flags)) +#define BNXT_LINK_SPEEDS_V2(bp) (((bp)->link_info) && (((bp)->link_info->support_speeds_v2) || \ + BNXT_LINK_SPEEDS_V2_VF((bp)))) extern const struct rte_flow_ops bnxt_ulp_rte_flow_ops; int32_t bnxt_ulp_port_init(struct bnxt *bp); void bnxt_ulp_port_deinit(struct bnxt *bp); diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index 7aed6d3ab6..625e5f1f9a 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -697,7 +697,10 @@ static inline bool bnxt_force_link_config(struct bnxt *bp) static int bnxt_update_phy_setting(struct bnxt *bp) { + struct rte_eth_conf *dev_conf = &bp->eth_dev->data->dev_conf; + struct rte_eth_link *link = &bp->eth_dev->data->dev_link; struct rte_eth_link new; + uint32_t curr_speed_bit; int rc; rc = bnxt_get_hwrm_link_config(bp, &new); @@ -706,13 +709,17 @@ static int bnxt_update_phy_setting(struct bnxt *bp) return rc; } + /* convert to speedbit flag */ + curr_speed_bit = rte_eth_speed_bitflag((uint32_t)link->link_speed, 1); + /* * Device is not obliged link down in certain scenarios, even * when forced. When FW does not allow any user other than BMC * to shutdown the port, bnxt_get_hwrm_link_config() call always * returns link up. Force phy update always in that case. */ - if (!new.link_status || bnxt_force_link_config(bp)) { + if (!new.link_status || bnxt_force_link_config(bp) || + (BNXT_LINK_SPEEDS_V2(bp) && dev_conf->link_speeds != curr_speed_bit)) { rc = bnxt_set_hwrm_link_config(bp, true); if (rc) { PMD_DRV_LOG(ERR, "Failed to update PHY settings\n"); @@ -933,6 +940,50 @@ static int bnxt_shutdown_nic(struct bnxt *bp) * Device configuration and status function */ +static uint32_t bnxt_get_speed_capabilities_v2(struct bnxt *bp) +{ + uint32_t link_speed = 0; + uint32_t speed_capa = 0; + + if (bp->link_info == NULL) + return 0; + + link_speed = bp->link_info->support_speeds2; + + if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_1GB) + speed_capa |= RTE_ETH_LINK_SPEED_1G; + if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_10GB) + speed_capa |= RTE_ETH_LINK_SPEED_10G; + if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_25GB) + speed_capa |= RTE_ETH_LINK_SPEED_25G; + if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_40GB) + speed_capa |= RTE_ETH_LINK_SPEED_40G; + if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_50GB) + speed_capa |= RTE_ETH_LINK_SPEED_50G; + if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_100GB) + speed_capa |= RTE_ETH_LINK_SPEED_100G; + if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_50GB_PAM4_56) + speed_capa |= RTE_ETH_LINK_SPEED_50G; + if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_100GB_PAM4_56) + speed_capa |= RTE_ETH_LINK_SPEED_100G; + if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_200GB_PAM4_56) + speed_capa |= RTE_ETH_LINK_SPEED_200G; + if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_400GB_PAM4_56) + speed_capa |= RTE_ETH_LINK_SPEED_400G; + if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_100GB_PAM4_112) + speed_capa |= RTE_ETH_LINK_SPEED_100G; + if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_200GB_PAM4_112) + speed_capa |= RTE_ETH_LINK_SPEED_200G; + if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_400GB_PAM4_112) + 
speed_capa |= RTE_ETH_LINK_SPEED_400G; + + if (bp->link_info->auto_mode == + HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_MODE_NONE) + speed_capa |= RTE_ETH_LINK_SPEED_FIXED; + + return speed_capa; +} + uint32_t bnxt_get_speed_capabilities(struct bnxt *bp) { uint32_t pam4_link_speed = 0; @@ -942,6 +993,10 @@ uint32_t bnxt_get_speed_capabilities(struct bnxt *bp) if (bp->link_info == NULL) return 0; + /* P7 uses speeds_v2 */ + if (BNXT_LINK_SPEEDS_V2(bp)) + return bnxt_get_speed_capabilities_v2(bp); + link_speed = bp->link_info->support_speeds; /* If PAM4 is configured, use PAM4 supported speed */ diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c index f896a41653..4f202361ea 100644 --- a/drivers/net/bnxt/bnxt_hwrm.c +++ b/drivers/net/bnxt/bnxt_hwrm.c @@ -68,6 +68,282 @@ const char *bnxt_backing_store_types[] = { "Invalid type" }; +const char *media_type[] = { "Unknown", "Twisted Pair", + "Direct Attached Copper", "Fiber" +}; + +#define MAX_MEDIA_TYPE (sizeof(media_type) / sizeof(const char *)) + +const char *link_status_str[] = { "Down. No link or cable detected.", + "Down. No link, but a cable has been detected.", "Up.", +}; + +#define MAX_LINK_STR (sizeof(link_status_str) / sizeof(const char *)) + +const char *fec_mode[] = { + "No active FEC", + "FEC CLAUSE 74 (Fire Code).", + "FEC CLAUSE 91 RS(528,514).", + "FEC RS544_1XN", + "FEC RS(544,528)", + "FEC RS272_1XN", + "FEC RS(272,257)" +}; + +#define MAX_FEC_MODE (sizeof(fec_mode) / sizeof(const char *)) + +const char *signal_mode[] = { + "NRZ", "PAM4", "PAM4_112" +}; + +#define MAX_SIG_MODE (sizeof(signal_mode) / sizeof(const char *)) + +/* multi-purpose multi-key table container. + * Add a unique entry for a new PHY attribs as per HW CAS. + * Query it using a helper functions. + */ +struct link_speeds2_tbl { + uint16_t force_val; + uint16_t auto_val; + uint32_t rte_speed; + uint32_t rte_speed_num; + uint16_t hwrm_speed; + uint16_t sig_mode; + const char *desc; +} link_speeds2_tbl[] = { + { + 10, + 0, + RTE_ETH_LINK_SPEED_1G, + RTE_ETH_SPEED_NUM_1G, + HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_1GB, + BNXT_SIG_MODE_NRZ, + "1Gb NRZ", + }, { + 100, + 1, + RTE_ETH_LINK_SPEED_10G, + RTE_ETH_SPEED_NUM_10G, + HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_10GB, + BNXT_SIG_MODE_NRZ, + "10Gb NRZ", + }, { + 250, + 2, + RTE_ETH_LINK_SPEED_25G, + RTE_ETH_SPEED_NUM_25G, + HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_25GB, + BNXT_SIG_MODE_NRZ, + "25Gb NRZ", + }, { + 400, + 3, + RTE_ETH_LINK_SPEED_40G, + RTE_ETH_SPEED_NUM_40G, + HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_40GB, + BNXT_SIG_MODE_NRZ, + "40Gb NRZ", + }, { + 500, + 4, + RTE_ETH_LINK_SPEED_50G, + RTE_ETH_SPEED_NUM_50G, + HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_50GB, + BNXT_SIG_MODE_NRZ, + "50Gb NRZ", + }, { + 1000, + 5, + RTE_ETH_LINK_SPEED_100G, + RTE_ETH_SPEED_NUM_100G, + HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_100GB, + BNXT_SIG_MODE_NRZ, + "100Gb NRZ", + }, { + 501, + 6, + RTE_ETH_LINK_SPEED_50G, + RTE_ETH_SPEED_NUM_50G, + HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_50GB_PAM4_56, + BNXT_SIG_MODE_PAM4, + "50Gb (PAM4-56: 50G per lane)", + }, { + 1001, + 7, + RTE_ETH_LINK_SPEED_100G, + RTE_ETH_SPEED_NUM_100G, + HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_100GB_PAM4_56, + BNXT_SIG_MODE_PAM4, + "100Gb (PAM4-56: 50G per lane)", + }, { + 2001, + 8, + RTE_ETH_LINK_SPEED_200G, + RTE_ETH_SPEED_NUM_200G, + HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_200GB_PAM4_56, + BNXT_SIG_MODE_PAM4, + "200Gb (PAM4-56: 50G per lane)", + }, { + 4001, + 9, + RTE_ETH_LINK_SPEED_400G, + RTE_ETH_SPEED_NUM_400G, + 
HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_400GB_PAM4_56, + BNXT_SIG_MODE_PAM4, + "400Gb (PAM4-56: 50G per lane)", + }, { + 1002, + 10, + RTE_ETH_LINK_SPEED_100G, + RTE_ETH_SPEED_NUM_100G, + HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_100GB_PAM4_112, + BNXT_SIG_MODE_PAM4_112, + "100Gb (PAM4-112: 100G per lane)", + }, { + 2002, + 11, + RTE_ETH_LINK_SPEED_200G, + RTE_ETH_SPEED_NUM_200G, + HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_200GB_PAM4_112, + BNXT_SIG_MODE_PAM4_112, + "200Gb (PAM4-112: 100G per lane)", + }, { + 4002, + 12, + RTE_ETH_LINK_SPEED_400G, + RTE_ETH_SPEED_NUM_400G, + HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_400GB_PAM4_112, + BNXT_SIG_MODE_PAM4_112, + "400Gb (PAM4-112: 100G per lane)", + }, { + 0, + 13, + RTE_ETH_LINK_SPEED_AUTONEG, /* None matches, AN is default 0 */ + RTE_ETH_SPEED_NUM_NONE, /* None matches, No speed */ + HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_1GB, /* Placeholder for wrong HWRM */ + BNXT_SIG_MODE_NRZ, /* default sig */ + "Unknown", + }, +}; + +#define BNXT_SPEEDS2_TBL_SZ (sizeof(link_speeds2_tbl) / sizeof(*link_speeds2_tbl)) + +/* In hwrm_phy_qcfg reports trained up speeds in link_speed(offset:0x8[31:16]) */ +struct link_speeds_tbl { + uint16_t hwrm_speed; + uint32_t rte_speed_num; + const char *desc; +} link_speeds_tbl[] = { + { + HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100MB, + RTE_ETH_SPEED_NUM_100M, "100 MB", + }, { + HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_1GB, + RTE_ETH_SPEED_NUM_1G, "1 GB", + }, { + HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_2_5GB, + RTE_ETH_SPEED_NUM_2_5G, "2.5 GB", + }, { + HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_10GB, + RTE_ETH_SPEED_NUM_10G, "10 GB", + }, { + HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_20GB, + RTE_ETH_SPEED_NUM_20G, "20 GB", + }, { + HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_40GB, + RTE_ETH_SPEED_NUM_40G, "40 GB", + }, { + HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_50GB, + RTE_ETH_SPEED_NUM_50G, "50 GB", + }, { + HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100GB, + RTE_ETH_SPEED_NUM_100G, "100 GB", + }, { + HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_200GB, + RTE_ETH_SPEED_NUM_200G, "200 GB", + }, { + HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_400GB, + RTE_ETH_SPEED_NUM_400G, "400 GB", + }, { + 0, RTE_ETH_SPEED_NUM_NONE, "None", + }, +}; + +#define BNXT_SPEEDS_TBL_SZ (sizeof(link_speeds_tbl) / sizeof(*link_speeds_tbl)) + +static const char *bnxt_get_xcvr_type(uint32_t xcvr_identifier_type_tx_lpi_timer) +{ + uint32_t xcvr_type = HWRM_PORT_PHY_QCFG_OUTPUT_XCVR_IDENTIFIER_TYPE_MASK & + xcvr_identifier_type_tx_lpi_timer; + + /* Addressing only known CMIS types */ + switch (xcvr_type) { + case HWRM_PORT_PHY_QCFG_OUTPUT_XCVR_IDENTIFIER_TYPE_SFP: + return "SFP"; + case HWRM_PORT_PHY_QCFG_OUTPUT_XCVR_IDENTIFIER_TYPE_QSFP: + return "QSFP"; + case HWRM_PORT_PHY_QCFG_OUTPUT_XCVR_IDENTIFIER_TYPE_QSFPPLUS: + return "QSFP+"; + case HWRM_PORT_PHY_QCFG_OUTPUT_XCVR_IDENTIFIER_TYPE_QSFP28: + return "QSFP28"; + case HWRM_PORT_PHY_QCFG_OUTPUT_XCVR_IDENTIFIER_TYPE_QSFPDD: + return "QSFP-DD"; + case HWRM_PORT_PHY_QCFG_OUTPUT_XCVR_IDENTIFIER_TYPE_QSFP112: + return "QSFP112"; + case HWRM_PORT_PHY_QCFG_OUTPUT_XCVR_IDENTIFIER_TYPE_UNKNOWN: + return "Unknown"; + default: + /* All other/new CMIS variants belong here */ + return "QSFP-xx new CMIS variant"; + } +} + +/* Utility function to lookup speeds2 table and + * return a rte to hwrm speed matching row to the client + */ +static +struct link_speeds2_tbl *bnxt_get_rte_hwrm_speeds2_entry(uint32_t speed) +{ + int i, max; + + max = BNXT_SPEEDS2_TBL_SZ - 1; + speed &= ~RTE_ETH_LINK_SPEED_FIXED; + for (i = 0; i < max; i++) { + if (speed ==
link_speeds2_tbl[i].rte_speed) + break; + } + return (struct link_speeds2_tbl *)&link_speeds2_tbl[i]; +} + +/* Utility function to lookup speeds2 table and + * return a hwrm to rte speed matching row to the client + */ +static struct link_speeds2_tbl *bnxt_get_hwrm_to_rte_speeds2_entry(uint16_t speed) +{ + int i, max; + + max = BNXT_SPEEDS2_TBL_SZ - 1; + for (i = 0; i < max; i++) { + if (speed == link_speeds2_tbl[i].hwrm_speed) + break; + } + return (struct link_speeds2_tbl *)&link_speeds2_tbl[i]; +} + +/* Helper function to lookup auto link_speed table */ +static struct link_speeds_tbl *bnxt_get_hwrm_to_rte_speeds_entry(uint16_t speed) +{ + int i, max; + + max = BNXT_SPEEDS_TBL_SZ - 1; + + for (i = 0; i < max ; i++) { + if (speed == link_speeds_tbl[i].hwrm_speed) + break; + } + return (struct link_speeds_tbl *)&link_speeds_tbl[i]; +} + static int page_getenum(size_t size) { if (size <= 1 << 4) @@ -1564,15 +1840,64 @@ static int bnxt_hwrm_port_phy_qcfg(struct bnxt *bp, link_info->phy_ver[2] = resp->phy_bld; link_info->link_signal_mode = resp->active_fec_signal_mode & HWRM_PORT_PHY_QCFG_OUTPUT_SIGNAL_MODE_MASK; + link_info->option_flags = resp->option_flags; link_info->force_pam4_link_speed = rte_le_to_cpu_16(resp->force_pam4_link_speed); link_info->support_pam4_speeds = rte_le_to_cpu_16(resp->support_pam4_speeds); link_info->auto_pam4_link_speed_mask = rte_le_to_cpu_16(resp->auto_pam4_link_speed_mask); + /* P7 uses speeds2 fields */ + if (BNXT_LINK_SPEEDS_V2(bp) && BNXT_LINK_SPEEDS_V2_OPTIONS(link_info->option_flags)) { + link_info->support_speeds2 = rte_le_to_cpu_16(resp->support_speeds2); + link_info->force_link_speeds2 = rte_le_to_cpu_16(resp->force_link_speeds2); + link_info->auto_link_speeds2 = rte_le_to_cpu_16(resp->auto_link_speeds2); + link_info->active_lanes = resp->active_lanes; + if (!link_info->auto_mode) + link_info->link_speed = link_info->force_link_speeds2; + } link_info->module_status = resp->module_status; HWRM_UNLOCK(); + /* Display the captured P7 phy details */ + if (BNXT_LINK_SPEEDS_V2(bp)) { + PMD_DRV_LOG(DEBUG, "Phytype:%d, Media_type:%d, Status: %d, Link Signal:%d\n" + "Active Fec: %d Support_speeds2:%x, Force_link_speedsv2:%x\n" + "Auto_link_speedsv2:%x, Active_lanes:%d\n", + link_info->phy_type, + link_info->media_type, + link_info->phy_link_status, + link_info->link_signal_mode, + (resp->active_fec_signal_mode & + HWRM_PORT_PHY_QCFG_OUTPUT_ACTIVE_FEC_MASK) >> 4, + link_info->support_speeds2, link_info->force_link_speeds2, + link_info->auto_link_speeds2, + link_info->active_lanes); + + const char *desc; + + if (link_info->auto_mode) + desc = ((struct link_speeds_tbl *) + bnxt_get_hwrm_to_rte_speeds_entry(link_info->link_speed))->desc; + else + desc = ((struct link_speeds2_tbl *) + bnxt_get_hwrm_to_rte_speeds2_entry(link_info->link_speed))->desc; + + PMD_DRV_LOG(INFO, "Link Speed: %s %s, Status: %s Signal-mode: %s\n" + "Media type: %s, Xcvr type: %s, Active FEC: %s Lanes: %d\n", + desc, + !(link_info->auto_mode) ? 
"Forced" : "AutoNegotiated", + link_status_str[link_info->phy_link_status % MAX_LINK_STR], + signal_mode[link_info->link_signal_mode % MAX_SIG_MODE], + media_type[link_info->media_type % MAX_MEDIA_TYPE], + bnxt_get_xcvr_type(rte_le_to_cpu_32 + (resp->xcvr_identifier_type_tx_lpi_timer)), + fec_mode[((resp->active_fec_signal_mode & + HWRM_PORT_PHY_QCFG_OUTPUT_ACTIVE_FEC_MASK) >> 4) % + MAX_FEC_MODE], link_info->active_lanes); + return rc; + } + PMD_DRV_LOG(DEBUG, "Link Speed:%d,Auto:%d:%x:%x,Support:%x,Force:%x\n", link_info->link_speed, link_info->auto_mode, link_info->auto_link_speed, link_info->auto_link_speed_mask, @@ -1608,6 +1933,15 @@ int bnxt_hwrm_port_phy_qcaps(struct bnxt *bp) if (resp->supported_pam4_speeds_auto_mode) link_info->support_pam4_auto_speeds = rte_le_to_cpu_16(resp->supported_pam4_speeds_auto_mode); + /* P7 chips now report all speeds here */ + if (resp->flags2 & HWRM_PORT_PHY_QCAPS_OUTPUT_FLAGS2_SPEEDS2_SUPPORTED) + link_info->support_speeds_v2 = true; + if (link_info->support_speeds_v2) { + link_info->supported_speeds2_force_mode = + rte_le_to_cpu_16(resp->supported_speeds2_force_mode); + link_info->supported_speeds2_auto_mode = + rte_le_to_cpu_16(resp->supported_speeds2_auto_mode); + } HWRM_UNLOCK(); @@ -3268,7 +3602,14 @@ static uint16_t bnxt_check_eth_link_autoneg(uint32_t conf_link) return !conf_link; } -static uint16_t bnxt_parse_eth_link_speed(uint32_t conf_link_speed, +static uint16_t bnxt_parse_eth_link_speed_v2(uint32_t conf_link_speed) +{ + /* get bitmap value based on speed */ + return ((struct link_speeds2_tbl *) + bnxt_get_rte_hwrm_speeds2_entry(conf_link_speed))->force_val; +} + +static uint16_t bnxt_parse_eth_link_speed(struct bnxt *bp, uint32_t conf_link_speed, struct bnxt_link_info *link_info) { uint16_t support_pam4_speeds = link_info->support_pam4_speeds; @@ -3278,6 +3619,10 @@ static uint16_t bnxt_parse_eth_link_speed(uint32_t conf_link_speed, if (conf_link_speed == RTE_ETH_LINK_SPEED_AUTONEG) return RTE_ETH_LINK_SPEED_AUTONEG; + /* Handle P7 chips saperately. 
It got enhanced phy attribs to choose from */ + if (BNXT_LINK_SPEEDS_V2(bp)) + return bnxt_parse_eth_link_speed_v2(conf_link_speed); + switch (conf_link_speed & ~RTE_ETH_LINK_SPEED_FIXED) { case RTE_ETH_LINK_SPEED_100M: case RTE_ETH_LINK_SPEED_100M_HD: @@ -3349,6 +3694,9 @@ static uint16_t bnxt_parse_eth_link_speed(uint32_t conf_link_speed, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_20G | RTE_ETH_LINK_SPEED_25G | \ RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_50G | \ RTE_ETH_LINK_SPEED_100G | RTE_ETH_LINK_SPEED_200G) +#define BNXT_SUPPORTED_SPEEDS2 ((BNXT_SUPPORTED_SPEEDS | RTE_ETH_LINK_SPEED_400G) & \ + ~(RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_100M_HD | \ + RTE_ETH_LINK_SPEED_2_5G | RTE_ETH_LINK_SPEED_20G)) static int bnxt_validate_link_speed(struct bnxt *bp) { @@ -3388,11 +3736,25 @@ static int bnxt_validate_link_speed(struct bnxt *bp) return 0; } +static uint16_t +bnxt_parse_eth_link_speed_mask_v2(struct bnxt *bp, uint32_t link_speed) +{ + uint16_t ret = 0; + + if (link_speed == RTE_ETH_LINK_SPEED_AUTONEG) + return bp->link_info->supported_speeds2_auto_mode; + + return ret; +} + static uint16_t bnxt_parse_eth_link_speed_mask(struct bnxt *bp, uint32_t link_speed) { uint16_t ret = 0; + if (BNXT_LINK_SPEEDS_V2(bp)) + return bnxt_parse_eth_link_speed_mask_v2(bp, link_speed); + if (link_speed == RTE_ETH_LINK_SPEED_AUTONEG) { if (bp->link_info->support_speeds) return bp->link_info->support_speeds; @@ -3424,10 +3786,21 @@ bnxt_parse_eth_link_speed_mask(struct bnxt *bp, uint32_t link_speed) return ret; } -static uint32_t bnxt_parse_hw_link_speed(uint16_t hw_link_speed) +static uint32_t bnxt_parse_hw_link_speed_v2(uint16_t hw_link_speed) +{ + return ((struct link_speeds2_tbl *) + bnxt_get_hwrm_to_rte_speeds2_entry(hw_link_speed))->rte_speed_num; +} + +static uint32_t bnxt_parse_hw_link_speed(struct bnxt *bp, uint16_t hw_link_speed) { uint32_t eth_link_speed = RTE_ETH_SPEED_NUM_NONE; + /* query fixed speed2 table if not autoneg */ + if (BNXT_LINK_SPEEDS_V2(bp) && !bp->link_info->auto_mode) + return bnxt_parse_hw_link_speed_v2(hw_link_speed); + + /* for P7 and earlier nics link_speed carries AN'd speed */ switch (hw_link_speed) { case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100MB: eth_link_speed = RTE_ETH_SPEED_NUM_100M; @@ -3459,6 +3832,9 @@ static uint32_t bnxt_parse_hw_link_speed(uint16_t hw_link_speed) case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_200GB: eth_link_speed = RTE_ETH_SPEED_NUM_200G; break; + case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_400GB: + eth_link_speed = RTE_ETH_SPEED_NUM_400G; + break; case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_2GB: default: PMD_DRV_LOG(ERR, "HWRM link speed %d not defined\n", @@ -3505,8 +3881,7 @@ int bnxt_get_hwrm_link_config(struct bnxt *bp, struct rte_eth_link *link) } if (link_info->link_speed) - link->link_speed = - bnxt_parse_hw_link_speed(link_info->link_speed); + link->link_speed = bnxt_parse_hw_link_speed(bp, link_info->link_speed); else link->link_speed = RTE_ETH_SPEED_NUM_NONE; link->link_duplex = bnxt_parse_hw_link_duplex(link_info->duplex); @@ -3518,6 +3893,111 @@ int bnxt_get_hwrm_link_config(struct bnxt *bp, struct rte_eth_link *link) return rc; } +static int bnxt_hwrm_port_phy_cfg_v2(struct bnxt *bp, struct bnxt_link_info *conf) +{ + struct hwrm_port_phy_cfg_output *resp = bp->hwrm_cmd_resp_addr; + struct hwrm_port_phy_cfg_input req = {0}; + uint32_t enables = 0; + int rc = 0; + + HWRM_PREP(&req, HWRM_PORT_PHY_CFG, BNXT_USE_CHIMP_MB); + + if (!conf->link_up) { + req.flags = + rte_cpu_to_le_32(HWRM_PORT_PHY_CFG_INPUT_FLAGS_FORCE_LINK_DWN); + 
PMD_DRV_LOG(ERR, "Force Link Down\n"); + goto link_down; + } + + /* Setting Fixed Speed. But AutoNeg is ON, So disable it */ + if (bp->link_info->auto_mode && conf->link_speed) { + req.auto_mode = HWRM_PORT_PHY_CFG_INPUT_AUTO_MODE_NONE; + PMD_DRV_LOG(DEBUG, "Disabling AutoNeg\n"); + } + req.flags = rte_cpu_to_le_32(conf->phy_flags); + if (!conf->link_speed) { + /* No speeds specified. Enable AutoNeg - all speeds */ + enables |= HWRM_PORT_PHY_CFG_INPUT_ENABLES_AUTO_LINK_SPEEDS2_MASK; + enables |= HWRM_PORT_PHY_CFG_INPUT_ENABLES_AUTO_MODE; + req.auto_mode = HWRM_PORT_PHY_CFG_INPUT_AUTO_MODE_SPEED_MASK; + req.auto_link_speeds2_mask = + rte_cpu_to_le_16(bp->link_info->supported_speeds2_auto_mode); + } else { + enables |= HWRM_PORT_PHY_CFG_INPUT_ENABLES_FORCE_LINK_SPEEDS2; + req.force_link_speeds2 = rte_cpu_to_le_16(conf->link_speed); + } + + /* Fill rest of the req message */ + req.auto_duplex = conf->duplex; + if (req.auto_mode != HWRM_PORT_PHY_CFG_INPUT_AUTO_MODE_SPEED_MASK) + enables |= HWRM_PORT_PHY_CFG_INPUT_ENABLES_AUTO_DUPLEX; + req.auto_pause = conf->auto_pause; + req.force_pause = conf->force_pause; + if (req.auto_pause) + req.force_pause = 0; + /* Set force_pause if there is no auto or if there is a force */ + if (req.auto_pause && !req.force_pause) + enables |= HWRM_PORT_PHY_CFG_INPUT_ENABLES_AUTO_PAUSE; + else + enables |= HWRM_PORT_PHY_CFG_INPUT_ENABLES_FORCE_PAUSE; + req.enables = rte_cpu_to_le_32(enables); + +link_down: + rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB); + + HWRM_CHECK_RESULT(); + HWRM_UNLOCK(); + return rc; +} + +static int bnxt_set_hwrm_link_config_v2(struct bnxt *bp, bool link_up) +{ + struct rte_eth_conf *dev_conf = &bp->eth_dev->data->dev_conf; + struct bnxt_link_info link_req; + uint16_t speed, autoneg; + int rc = 0; + + memset(&link_req, 0, sizeof(link_req)); + link_req.link_up = link_up; + if (!link_up) + goto port_phy_cfg; + + autoneg = bnxt_check_eth_link_autoneg(dev_conf->link_speeds); + speed = bnxt_parse_eth_link_speed(bp, dev_conf->link_speeds, + bp->link_info); + link_req.phy_flags = HWRM_PORT_PHY_CFG_INPUT_FLAGS_RESET_PHY; + if (autoneg == 1) { + link_req.phy_flags |= + HWRM_PORT_PHY_CFG_INPUT_FLAGS_RESTART_AUTONEG; + link_req.cfg_auto_link_speeds2_mask = + bnxt_parse_eth_link_speed_mask(bp, dev_conf->link_speeds); + } else { + if (bp->link_info->phy_type == + HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_BASET || + bp->link_info->phy_type == + HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_BASETE || + bp->link_info->media_type == + HWRM_PORT_PHY_QCFG_OUTPUT_MEDIA_TYPE_TP) { + PMD_DRV_LOG(ERR, "10GBase-T devices must autoneg\n"); + return -EINVAL; + } + + link_req.phy_flags |= HWRM_PORT_PHY_CFG_INPUT_FLAGS_FORCE; + /* If user wants a particular speed try that first. 
*/ + link_req.link_speed = speed; + } + link_req.duplex = bnxt_parse_eth_link_duplex(dev_conf->link_speeds); + link_req.auto_pause = bp->link_info->auto_pause; + link_req.force_pause = bp->link_info->force_pause; + +port_phy_cfg: + rc = bnxt_hwrm_port_phy_cfg_v2(bp, &link_req); + if (rc) + PMD_DRV_LOG(ERR, "Set link config failed with rc %d\n", rc); + + return rc; +} + int bnxt_set_hwrm_link_config(struct bnxt *bp, bool link_up) { int rc = 0; @@ -3532,6 +4012,9 @@ int bnxt_set_hwrm_link_config(struct bnxt *bp, bool link_up) if (rc) goto error; + if (BNXT_LINK_SPEEDS_V2(bp)) + return bnxt_set_hwrm_link_config_v2(bp, link_up); + memset(&link_req, 0, sizeof(link_req)); link_req.link_up = link_up; if (!link_up) @@ -3557,7 +4040,7 @@ int bnxt_set_hwrm_link_config(struct bnxt *bp, bool link_up) PMD_DRV_LOG(DEBUG, "Disabling autoneg for 200G\n"); } - speed = bnxt_parse_eth_link_speed(dev_conf->link_speeds, + speed = bnxt_parse_eth_link_speed(bp, dev_conf->link_speeds, bp->link_info); link_req.phy_flags = HWRM_PORT_PHY_CFG_INPUT_FLAGS_RESET_PHY; /* Autoneg can be done only when the FW allows. */ diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h index 6116253787..179d5dc1f0 100644 --- a/drivers/net/bnxt/bnxt_hwrm.h +++ b/drivers/net/bnxt/bnxt_hwrm.h @@ -145,6 +145,7 @@ struct bnxt_pf_resource_info { #define BNXT_SIG_MODE_NRZ HWRM_PORT_PHY_QCFG_OUTPUT_SIGNAL_MODE_NRZ #define BNXT_SIG_MODE_PAM4 HWRM_PORT_PHY_QCFG_OUTPUT_SIGNAL_MODE_PAM4 +#define BNXT_SIG_MODE_PAM4_112 HWRM_PORT_PHY_QCFG_OUTPUT_SIGNAL_MODE_PAM4_112 #define BNXT_TUNNELED_OFFLOADS_CAP_VXLAN_EN(bp) \ (!((bp)->tunnel_disable_flag & HWRM_FUNC_QCAPS_OUTPUT_TUNNEL_DISABLE_FLAG_DISABLE_VXLAN)) diff --git a/drivers/net/bnxt/hsi_struct_def_dpdk.h b/drivers/net/bnxt/hsi_struct_def_dpdk.h index 65f3f0576b..b012a84d36 100644 --- a/drivers/net/bnxt/hsi_struct_def_dpdk.h +++ b/drivers/net/bnxt/hsi_struct_def_dpdk.h @@ -27273,11 +27273,17 @@ struct hwrm_port_phy_qcfg_output { /* QSFP+ */ #define HWRM_PORT_PHY_QCFG_OUTPUT_XCVR_IDENTIFIER_TYPE_QSFPPLUS \ (UINT32_C(0xd) << 24) - /* QSFP28 */ + /* QSFP28/QSFP56 or later */ #define HWRM_PORT_PHY_QCFG_OUTPUT_XCVR_IDENTIFIER_TYPE_QSFP28 \ (UINT32_C(0x11) << 24) + /* QSFP-DD */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_XCVR_IDENTIFIER_TYPE_QSFPDD \ + (UINT32_C(0x18) << 24) + /* QSFP112 */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_XCVR_IDENTIFIER_TYPE_QSFP112 \ + (UINT32_C(0x1e) << 24) #define HWRM_PORT_PHY_QCFG_OUTPUT_XCVR_IDENTIFIER_TYPE_LAST \ - HWRM_PORT_PHY_QCFG_OUTPUT_XCVR_IDENTIFIER_TYPE_QSFP28 + HWRM_PORT_PHY_QCFG_OUTPUT_XCVR_IDENTIFIER_TYPE_QSFP112 /* * This value represents the current configuration of * Forward Error Correction (FEC) on the port. 
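The speeds2 support above is deliberately table-driven: one row per speed/signal-mode combination, and lookup helpers that scan to one short of the table end so a miss falls through to the trailing "Unknown" sentinel row instead of returning NULL. A reduced standalone sketch of that pattern (example values only, not the real HWRM encodings):

#include <stdint.h>
#include <stdio.h>

struct speed_row {
	uint32_t rte_speed;  /* RTE_ETH_LINK_SPEED_*-style bit (example) */
	uint16_t hwrm_speed; /* force_link_speeds2-style encoding (example) */
	const char *desc;
};

/* The last row is the sentinel returned when nothing matches. */
static const struct speed_row tbl[] = {
	{ 1u << 3, 10,  "1Gb NRZ"  },
	{ 1u << 8, 100, "10Gb NRZ" },
	{ 0,       0,   "Unknown"  },
};

static const struct speed_row *lookup_rte(uint32_t rte_speed)
{
	size_t i, max = sizeof(tbl) / sizeof(tbl[0]) - 1; /* skip sentinel */

	for (i = 0; i < max; i++)
		if (tbl[i].rte_speed == rte_speed)
			break;
	return &tbl[i]; /* i == max on a miss: the sentinel row */
}

int main(void)
{
	printf("%s\n", lookup_rte(1u << 8)->desc);  /* 10Gb NRZ */
	printf("%s\n", lookup_rte(1u << 30)->desc); /* Unknown */
	return 0;
}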
From patchwork Wed Dec 27 04:21:17 2023 X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 135604 From: Ajit Khaparde To: dev@dpdk.org Cc: Damodharam Ammepalli Subject: [PATCH v3 16/18] net/bnxt: query extended stats from firmware Date: Tue, 26 Dec 2023 20:21:17 -0800 Message-Id: <20231227042119.72469-17-ajit.khaparde@broadcom.com> In-Reply-To: <20231227042119.72469-1-ajit.khaparde@broadcom.com> References: <20231227042119.72469-1-ajit.khaparde@broadcom.com>

From: Damodharam Ammepalli

Add driver support for the HWRM_STAT_EXT_CTX_QUERY message. In this patch only the P7 chipset is enabled for this HWRM command, while P5 and previous generations remain on HWRM_STAT_CTX_QUERY.
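One detail ahead of the diff: ring counters go back to zero when a stats context is reset, so the driver carries the previous snapshot forward whenever a fresh read returns zero (bnxt_update_prev_stat() in the hunks below). A minimal sketch of that carry logic:

#include <stdint.h>
#include <stdio.h>

/* Keep a counter monotonic across a stats-context reset: if the fresh
 * read is 0 but a previous value exists, reuse the previous value;
 * otherwise remember the fresh read (mirrors bnxt_update_prev_stat()). */
static void update_prev_stat(uint64_t *cntr, uint64_t *prev_cntr)
{
	if (cntr == NULL || prev_cntr == NULL)
		return;
	if (*prev_cntr && *cntr == 0)
		*cntr = *prev_cntr;
	else
		*prev_cntr = *cntr;
}

int main(void)
{
	uint64_t prev = 42, cur = 0; /* context reset: HW reports 0 */

	update_prev_stat(&cur, &prev);
	printf("%llu\n", (unsigned long long)cur); /* 42, carried forward */
	return 0;
}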
Signed-off-by: Damodharam Ammepalli Reviewed-by: Ajit Khaparde --- drivers/net/bnxt/bnxt.h | 49 ++++++ drivers/net/bnxt/bnxt_cpr.h | 3 +- drivers/net/bnxt/bnxt_ethdev.c | 36 ++++- drivers/net/bnxt/bnxt_hwrm.c | 117 ++++++++++++++ drivers/net/bnxt/bnxt_hwrm.h | 12 +- drivers/net/bnxt/bnxt_ring.c | 6 +- drivers/net/bnxt/bnxt_rxq.c | 8 +- drivers/net/bnxt/bnxt_stats.c | 279 ++++++++++++++++++++++++++++++--- 8 files changed, 483 insertions(+), 27 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h index 858689533b..d91f0e427d 100644 --- a/drivers/net/bnxt/bnxt.h +++ b/drivers/net/bnxt/bnxt.h @@ -705,6 +705,53 @@ struct bnxt_ring_stats { uint64_t rx_agg_aborts; }; +struct bnxt_ring_stats_ext { + /* Number of received unicast packets */ + uint64_t rx_ucast_pkts; + /* Number of received multicast packets */ + uint64_t rx_mcast_pkts; + /* Number of received broadcast packets */ + uint64_t rx_bcast_pkts; + /* Number of discarded packets on receive path */ + uint64_t rx_discard_pkts; + /* Number of packets on receive path with error */ + uint64_t rx_error_pkts; + /* Number of received bytes for unicast traffic */ + uint64_t rx_ucast_bytes; + /* Number of received bytes for multicast traffic */ + uint64_t rx_mcast_bytes; + /* Number of received bytes for broadcast traffic */ + uint64_t rx_bcast_bytes; + /* Number of transmitted unicast packets */ + uint64_t tx_ucast_pkts; + /* Number of transmitted multicast packets */ + uint64_t tx_mcast_pkts; + /* Number of transmitted broadcast packets */ + uint64_t tx_bcast_pkts; + /* Number of packets on transmit path with error */ + uint64_t tx_error_pkts; + /* Number of discarded packets on transmit path */ + uint64_t tx_discard_pkts; + /* Number of transmitted bytes for unicast traffic */ + uint64_t tx_ucast_bytes; + /* Number of transmitted bytes for multicast traffic */ + uint64_t tx_mcast_bytes; + /* Number of transmitted bytes for broadcast traffic */ + uint64_t tx_bcast_bytes; + /* Number of TPA eligible packets */ + uint64_t rx_tpa_eligible_pkt; + /* Number of TPA eligible bytes */ + uint64_t rx_tpa_eligible_bytes; + /* Number of TPA packets */ + uint64_t rx_tpa_pkt; + /* Number of TPA bytes */ + uint64_t rx_tpa_bytes; + /* Number of TPA errors */ + uint64_t rx_tpa_errors; + /* Number of TPA events */ + uint64_t rx_tpa_events; +}; + enum bnxt_session_type { BNXT_SESSION_TYPE_REGULAR = 0, BNXT_SESSION_TYPE_SHARED_COMMON, @@ -982,6 +1029,8 @@ struct bnxt { uint16_t tx_cfa_action; struct bnxt_ring_stats *prev_rx_ring_stats; struct bnxt_ring_stats *prev_tx_ring_stats; + struct bnxt_ring_stats_ext *prev_rx_ring_stats_ext; + struct bnxt_ring_stats_ext *prev_tx_ring_stats_ext; struct bnxt_vnic_queue_db vnic_queue_db; #define BNXT_MAX_MC_ADDRS ((bp)->max_mcast_addr)

diff --git a/drivers/net/bnxt/bnxt_cpr.h b/drivers/net/bnxt/bnxt_cpr.h index 26e81a6a7e..c7b3480dc9 100644 --- a/drivers/net/bnxt/bnxt_cpr.h +++ b/drivers/net/bnxt/bnxt_cpr.h @@ -68,7 +68,8 @@ struct bnxt_cp_ring_info { struct bnxt_db_info cp_db; rte_iova_t cp_desc_mapping; - struct ctx_hw_stats *hw_stats; + char *hw_stats; + uint16_t hw_ring_stats_size; rte_iova_t hw_stats_map; uint32_t hw_stats_ctx_id;

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index 625e5f1f9a..031028eda1 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -732,15 +732,49 @@
static int bnxt_update_phy_setting(struct bnxt *bp) static void bnxt_free_prev_ring_stats(struct bnxt *bp) { + /* tpa v2 devices use ext variant local struct */ + if (BNXT_TPA_V2_P7(bp)) { + rte_free(bp->prev_rx_ring_stats_ext); + rte_free(bp->prev_tx_ring_stats_ext); + bp->prev_rx_ring_stats_ext = NULL; + bp->prev_tx_ring_stats_ext = NULL; + return; + } rte_free(bp->prev_rx_ring_stats); rte_free(bp->prev_tx_ring_stats); - bp->prev_rx_ring_stats = NULL; bp->prev_tx_ring_stats = NULL; } +static int bnxt_alloc_prev_ring_ext_stats(struct bnxt *bp) +{ + bp->prev_rx_ring_stats_ext = rte_zmalloc("bnxt_prev_rx_ring_stats_ext", + sizeof(struct bnxt_ring_stats_ext) * + bp->rx_cp_nr_rings, + 0); + if (bp->prev_rx_ring_stats_ext == NULL) + return -ENOMEM; + + bp->prev_tx_ring_stats_ext = rte_zmalloc("bnxt_prev_tx_ring_stats_ext", + sizeof(struct bnxt_ring_stats_ext) * + bp->tx_cp_nr_rings, + 0); + + if (bp->tx_cp_nr_rings > 0 && bp->prev_tx_ring_stats_ext == NULL) + goto error; + + return 0; + +error: + bnxt_free_prev_ring_stats(bp); + return -ENOMEM; +} + static int bnxt_alloc_prev_ring_stats(struct bnxt *bp) { + if (BNXT_TPA_V2_P7(bp)) + return bnxt_alloc_prev_ring_ext_stats(bp); + bp->prev_rx_ring_stats = rte_zmalloc("bnxt_prev_rx_ring_stats", sizeof(struct bnxt_ring_stats) * bp->rx_cp_nr_rings, diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c index 4f202361ea..802973ba97 100644 --- a/drivers/net/bnxt/bnxt_hwrm.c +++ b/drivers/net/bnxt/bnxt_hwrm.c @@ -2386,6 +2386,8 @@ int bnxt_hwrm_stat_ctx_alloc(struct bnxt *bp, struct bnxt_cp_ring_info *cpr) HWRM_PREP(&req, HWRM_STAT_CTX_ALLOC, BNXT_USE_CHIMP_MB); + req.stats_dma_length = rte_cpu_to_le_16(BNXT_HWRM_CTX_GET_SIZE(bp)); + req.update_period_ms = rte_cpu_to_le_32(0); req.stats_dma_addr = rte_cpu_to_le_64(cpr->hw_stats_map); @@ -5187,6 +5189,8 @@ static void bnxt_update_prev_stat(uint64_t *cntr, uint64_t *prev_cntr) * returned by HW in this iteration, so use the previous * iteration's counter value */ + if (!cntr || !prev_cntr) + return; if (*prev_cntr && *cntr == 0) *cntr = *prev_cntr; else @@ -5295,6 +5299,119 @@ int bnxt_hwrm_ring_stats(struct bnxt *bp, uint32_t cid, int idx, return rc; } +int bnxt_hwrm_ring_stats_ext(struct bnxt *bp, uint32_t cid, int idx, + struct bnxt_ring_stats_ext *ring_stats, bool rx) +{ + int rc = 0; + struct hwrm_stat_ext_ctx_query_input req = {.req_type = 0}; + struct hwrm_stat_ext_ctx_query_output *resp = bp->hwrm_cmd_resp_addr; + + HWRM_PREP(&req, HWRM_STAT_EXT_CTX_QUERY, BNXT_USE_CHIMP_MB); + + req.stat_ctx_id = rte_cpu_to_le_32(cid); + rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB); + + HWRM_CHECK_RESULT(); + + if (rx) { + struct bnxt_ring_stats_ext *prev_stats = &bp->prev_rx_ring_stats_ext[idx]; + + ring_stats->rx_ucast_pkts = rte_le_to_cpu_64(resp->rx_ucast_pkts); + bnxt_update_prev_stat(&ring_stats->rx_ucast_pkts, + &prev_stats->rx_ucast_pkts); + + ring_stats->rx_mcast_pkts = rte_le_to_cpu_64(resp->rx_mcast_pkts); + bnxt_update_prev_stat(&ring_stats->rx_mcast_pkts, + &prev_stats->rx_mcast_pkts); + + ring_stats->rx_bcast_pkts = rte_le_to_cpu_64(resp->rx_bcast_pkts); + bnxt_update_prev_stat(&ring_stats->rx_bcast_pkts, + &prev_stats->rx_bcast_pkts); + + ring_stats->rx_ucast_bytes = rte_le_to_cpu_64(resp->rx_ucast_bytes); + bnxt_update_prev_stat(&ring_stats->rx_ucast_bytes, + &prev_stats->rx_ucast_bytes); + + ring_stats->rx_mcast_bytes = rte_le_to_cpu_64(resp->rx_mcast_bytes); + bnxt_update_prev_stat(&ring_stats->rx_mcast_bytes, + &prev_stats->rx_mcast_bytes); + + 
ring_stats->rx_bcast_bytes = rte_le_to_cpu_64(resp->rx_bcast_bytes); + bnxt_update_prev_stat(&ring_stats->rx_bcast_bytes, + &prev_stats->rx_bcast_bytes); + + ring_stats->rx_discard_pkts = rte_le_to_cpu_64(resp->rx_discard_pkts); + bnxt_update_prev_stat(&ring_stats->rx_discard_pkts, + &prev_stats->rx_discard_pkts); + + ring_stats->rx_error_pkts = rte_le_to_cpu_64(resp->rx_error_pkts); + bnxt_update_prev_stat(&ring_stats->rx_error_pkts, + &prev_stats->rx_error_pkts); + + ring_stats->rx_tpa_eligible_pkt = rte_le_to_cpu_64(resp->rx_tpa_eligible_pkt); + bnxt_update_prev_stat(&ring_stats->rx_tpa_eligible_pkt, + &prev_stats->rx_tpa_eligible_pkt); + + ring_stats->rx_tpa_eligible_bytes = rte_le_to_cpu_64(resp->rx_tpa_eligible_bytes); + bnxt_update_prev_stat(&ring_stats->rx_tpa_eligible_bytes, + &prev_stats->rx_tpa_eligible_bytes); + + ring_stats->rx_tpa_pkt = rte_le_to_cpu_64(resp->rx_tpa_pkt); + bnxt_update_prev_stat(&ring_stats->rx_tpa_pkt, + &prev_stats->rx_tpa_pkt); + + ring_stats->rx_tpa_bytes = rte_le_to_cpu_64(resp->rx_tpa_bytes); + bnxt_update_prev_stat(&ring_stats->rx_tpa_bytes, + &prev_stats->rx_tpa_bytes); + + ring_stats->rx_tpa_errors = rte_le_to_cpu_64(resp->rx_tpa_errors); + bnxt_update_prev_stat(&ring_stats->rx_tpa_errors, + &prev_stats->rx_tpa_errors); + + ring_stats->rx_tpa_events = rte_le_to_cpu_64(resp->rx_tpa_events); + bnxt_update_prev_stat(&ring_stats->rx_tpa_events, + &prev_stats->rx_tpa_events); + } else { + struct bnxt_ring_stats_ext *prev_stats = &bp->prev_tx_ring_stats_ext[idx]; + + ring_stats->tx_ucast_pkts = rte_le_to_cpu_64(resp->tx_ucast_pkts); + bnxt_update_prev_stat(&ring_stats->tx_ucast_pkts, + &prev_stats->tx_ucast_pkts); + + ring_stats->tx_mcast_pkts = rte_le_to_cpu_64(resp->tx_mcast_pkts); + bnxt_update_prev_stat(&ring_stats->tx_mcast_pkts, + &prev_stats->tx_mcast_pkts); + + ring_stats->tx_bcast_pkts = rte_le_to_cpu_64(resp->tx_bcast_pkts); + bnxt_update_prev_stat(&ring_stats->tx_bcast_pkts, + &prev_stats->tx_bcast_pkts); + + ring_stats->tx_ucast_bytes = rte_le_to_cpu_64(resp->tx_ucast_bytes); + bnxt_update_prev_stat(&ring_stats->tx_ucast_bytes, + &prev_stats->tx_ucast_bytes); + + ring_stats->tx_mcast_bytes = rte_le_to_cpu_64(resp->tx_mcast_bytes); + bnxt_update_prev_stat(&ring_stats->tx_mcast_bytes, + &prev_stats->tx_mcast_bytes); + + ring_stats->tx_bcast_bytes = rte_le_to_cpu_64(resp->tx_bcast_bytes); + bnxt_update_prev_stat(&ring_stats->tx_bcast_bytes, + &prev_stats->tx_bcast_bytes); + + ring_stats->tx_discard_pkts = rte_le_to_cpu_64(resp->tx_discard_pkts); + bnxt_update_prev_stat(&ring_stats->tx_discard_pkts, + &prev_stats->tx_discard_pkts); + + ring_stats->tx_error_pkts = rte_le_to_cpu_64(resp->tx_error_pkts); + bnxt_update_prev_stat(&ring_stats->tx_error_pkts, + &prev_stats->tx_error_pkts); + } + + HWRM_UNLOCK(); + + return rc; +} + int bnxt_hwrm_port_qstats(struct bnxt *bp) { struct hwrm_port_qstats_input req = {0}; diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h index 179d5dc1f0..19fb35f223 100644 --- a/drivers/net/bnxt/bnxt_hwrm.h +++ b/drivers/net/bnxt/bnxt_hwrm.h @@ -167,8 +167,14 @@ struct bnxt_pf_resource_info { BNXT_TUNNELED_OFFLOADS_CAP_GRE_EN(bp) && \ BNXT_TUNNELED_OFFLOADS_CAP_IPINIP_EN(bp)) -#define BNXT_SIG_MODE_NRZ HWRM_PORT_PHY_QCFG_OUTPUT_SIGNAL_MODE_NRZ -#define BNXT_SIG_MODE_PAM4 HWRM_PORT_PHY_QCFG_OUTPUT_SIGNAL_MODE_PAM4 +/* Is this tpa_v2 and P7 + * Just add P5 to this once we validate on Thor FW + */ +#define BNXT_TPA_V2_P7(bp) ((bp)->max_tpa_v2 && BNXT_CHIP_P7(bp)) +/* Get the size of the stat context size for 
DMA from HW */ +#define BNXT_HWRM_CTX_GET_SIZE(bp) (BNXT_TPA_V2_P7(bp) ? \ + sizeof(struct ctx_hw_stats_ext) : \ + sizeof(struct ctx_hw_stats)) int bnxt_hwrm_cfa_l2_clear_rx_mask(struct bnxt *bp, struct bnxt_vnic_info *vnic); @@ -352,6 +358,8 @@ int bnxt_hwrm_poll_ver_get(struct bnxt *bp); int bnxt_hwrm_rx_ring_reset(struct bnxt *bp, int queue_index); int bnxt_hwrm_ring_stats(struct bnxt *bp, uint32_t cid, int idx, struct bnxt_ring_stats *stats, bool rx); +int bnxt_hwrm_ring_stats_ext(struct bnxt *bp, uint32_t cid, int idx, + struct bnxt_ring_stats_ext *ring_stats, bool rx); int bnxt_hwrm_read_sfp_module_eeprom_info(struct bnxt *bp, uint16_t i2c_addr, uint16_t page_number, uint16_t start_addr, uint16_t data_length, uint8_t *buf); diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c index 4bf0b9c6ed..9e512321d9 100644 --- a/drivers/net/bnxt/bnxt_ring.c +++ b/drivers/net/bnxt/bnxt_ring.c @@ -119,8 +119,7 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx, int ag_ring_len = 0; int stats_len = (tx_ring_info || rx_ring_info) ? - RTE_CACHE_LINE_ROUNDUP(sizeof(struct hwrm_stat_ctx_query_output) - - sizeof (struct hwrm_resp_hdr)) : 0; + RTE_CACHE_LINE_ROUNDUP(BNXT_HWRM_CTX_GET_SIZE(bp)) : 0; stats_len = RTE_ALIGN(stats_len, 128); int cp_vmem_start = stats_len; @@ -305,8 +304,9 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx, *cp_ring->vmem = ((char *)mz->addr + stats_len); if (stats_len) { cp_ring_info->hw_stats = mz->addr; - cp_ring_info->hw_stats_map = mz_phys_addr; } + cp_ring_info->hw_stats_map = mz_phys_addr; + cp_ring_info->hw_stats_ctx_id = HWRM_NA_SIGNATURE; if (nq_ring_info) { diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c index 575e7f193f..913856e6eb 100644 --- a/drivers/net/bnxt/bnxt_rxq.c +++ b/drivers/net/bnxt/bnxt_rxq.c @@ -483,8 +483,12 @@ int bnxt_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) /* reset the previous stats for the rx_queue since the counters * will be cleared when the queue is started. */ - memset(&bp->prev_rx_ring_stats[rx_queue_id], 0, - sizeof(struct bnxt_ring_stats)); + if (BNXT_TPA_V2_P7(bp)) + memset(&bp->prev_rx_ring_stats_ext[rx_queue_id], 0, + sizeof(struct bnxt_ring_stats_ext)); + else + memset(&bp->prev_rx_ring_stats[rx_queue_id], 0, + sizeof(struct bnxt_ring_stats)); /* Set the queue state to started here. * We check the status of the queue while posting buffer. 
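
The bnxt_stats.c changes below export these per-ring counters through the standard ethdev xstats API. As a minimal consumer-side sketch of reading the new TPA counters (port 0, the 64-entry arrays, and the "tpa" substring filter are illustrative assumptions, not part of this series):

#include <inttypes.h>
#include <stdio.h>
#include <string.h>
#include <rte_ethdev.h>

static void dump_tpa_xstats(uint16_t port_id)
{
	struct rte_eth_xstat_name names[64];
	struct rte_eth_xstat xstats[64];
	int n, i;

	/* Both calls return the required count when the array is too small. */
	n = rte_eth_xstats_get_names(port_id, names, 64);
	if (n <= 0 || n > 64)
		return;
	if (rte_eth_xstats_get(port_id, xstats, n) != n)
		return;

	/* Print only the TPA (aggregation) counters added by this series. */
	for (i = 0; i < n; i++)
		if (strstr(names[i].name, "tpa") != NULL)
			printf("%s: %" PRIu64 "\n",
			       names[i].name, xstats[i].value);
}

On a BNXT_TPA_V2_P7() device this would list the rx_tpa_eligible_pkt, rx_tpa_pkt, rx_tpa_bytes, rx_tpa_errors and rx_tpa_events entries from bnxt_func_stats_ext_strings.
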
diff --git a/drivers/net/bnxt/bnxt_stats.c b/drivers/net/bnxt/bnxt_stats.c index 0e25207fc3..6a6feab6cf 100644 --- a/drivers/net/bnxt/bnxt_stats.c +++ b/drivers/net/bnxt/bnxt_stats.c @@ -258,6 +258,53 @@ static const struct bnxt_xstats_name_off bnxt_tx_stats_strings[] = { tx_stat_error)}, }; +static const struct bnxt_xstats_name_off bnxt_func_stats_ext_strings[] = { + {"tx_ucast_pkts", offsetof(struct hwrm_func_qstats_ext_output, + tx_ucast_pkts)}, + {"tx_mcast_pkts", offsetof(struct hwrm_func_qstats_ext_output, + tx_mcast_pkts)}, + {"tx_bcast_pkts", offsetof(struct hwrm_func_qstats_ext_output, + tx_bcast_pkts)}, + {"tx_discard_pkts", offsetof(struct hwrm_func_qstats_ext_output, + tx_discard_pkts)}, + {"tx_drop_pkts", offsetof(struct hwrm_func_qstats_ext_output, + tx_error_pkts)}, + {"tx_ucast_bytes", offsetof(struct hwrm_func_qstats_ext_output, + tx_ucast_bytes)}, + {"tx_mcast_bytes", offsetof(struct hwrm_func_qstats_ext_output, + tx_mcast_bytes)}, + {"tx_bcast_bytes", offsetof(struct hwrm_func_qstats_ext_output, + tx_bcast_bytes)}, + {"rx_ucast_pkts", offsetof(struct hwrm_func_qstats_ext_output, + rx_ucast_pkts)}, + {"rx_mcast_pkts", offsetof(struct hwrm_func_qstats_ext_output, + rx_mcast_pkts)}, + {"rx_bcast_pkts", offsetof(struct hwrm_func_qstats_ext_output, + rx_bcast_pkts)}, + {"rx_discard_pkts", offsetof(struct hwrm_func_qstats_ext_output, + rx_discard_pkts)}, + {"rx_drop_pkts", offsetof(struct hwrm_func_qstats_ext_output, + rx_error_pkts)}, + {"rx_ucast_bytes", offsetof(struct hwrm_func_qstats_ext_output, + rx_ucast_bytes)}, + {"rx_mcast_bytes", offsetof(struct hwrm_func_qstats_ext_output, + rx_mcast_bytes)}, + {"rx_bcast_bytes", offsetof(struct hwrm_func_qstats_ext_output, + rx_bcast_bytes)}, + {"rx_tpa_eligible_pkt", offsetof(struct hwrm_func_qstats_ext_output, + rx_tpa_eligible_pkt)}, + {"rx_tpa_eligible_bytes", offsetof(struct hwrm_func_qstats_ext_output, + rx_tpa_eligible_bytes)}, + {"rx_tpa_pkt", offsetof(struct hwrm_func_qstats_ext_output, + rx_tpa_pkt)}, + {"rx_tpa_bytes", offsetof(struct hwrm_func_qstats_ext_output, + rx_tpa_bytes)}, + {"rx_tpa_errors", offsetof(struct hwrm_func_qstats_ext_output, + rx_tpa_errors)}, + {"rx_tpa_events", offsetof(struct hwrm_func_qstats_ext_output, + rx_tpa_events)}, +}; + static const struct bnxt_xstats_name_off bnxt_func_stats_strings[] = { {"tx_ucast_pkts", offsetof(struct hwrm_func_qstats_output, tx_ucast_pkts)}, @@ -417,6 +464,12 @@ static const struct bnxt_xstats_name_off bnxt_rx_ext_stats_strings[] = { rx_discard_packets_cos6)}, {"rx_discard_packets_cos7", offsetof(struct rx_port_stats_ext, rx_discard_packets_cos7)}, + {"rx_fec_corrected_blocks", offsetof(struct rx_port_stats_ext, + rx_fec_corrected_blocks)}, + {"rx_fec_uncorrectable_blocks", offsetof(struct rx_port_stats_ext, + rx_fec_uncorrectable_blocks)}, + {"rx_filter_miss", offsetof(struct rx_port_stats_ext, + rx_filter_miss)}, }; static const struct bnxt_xstats_name_off bnxt_tx_ext_stats_strings[] = { @@ -506,6 +559,45 @@ void bnxt_free_stats(struct bnxt *bp) } } +static void bnxt_fill_rte_eth_stats_ext(struct rte_eth_stats *stats, + struct bnxt_ring_stats_ext *ring_stats, + unsigned int i, bool rx) +{ + if (rx) { + stats->q_ipackets[i] = ring_stats->rx_ucast_pkts; + stats->q_ipackets[i] += ring_stats->rx_mcast_pkts; + stats->q_ipackets[i] += ring_stats->rx_bcast_pkts; + + stats->ipackets += stats->q_ipackets[i]; + + stats->q_ibytes[i] = ring_stats->rx_ucast_bytes; + stats->q_ibytes[i] += ring_stats->rx_mcast_bytes; + stats->q_ibytes[i] += ring_stats->rx_bcast_bytes; + + 
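+		/* Fold this queue's Rx byte count into the device-wide total. */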
stats->ibytes += stats->q_ibytes[i]; + + stats->q_errors[i] = ring_stats->rx_discard_pkts; + stats->q_errors[i] += ring_stats->rx_error_pkts; + + stats->imissed += ring_stats->rx_discard_pkts; + stats->ierrors += ring_stats->rx_error_pkts; + } else { + stats->q_opackets[i] = ring_stats->tx_ucast_pkts; + stats->q_opackets[i] += ring_stats->tx_mcast_pkts; + stats->q_opackets[i] += ring_stats->tx_bcast_pkts; + + stats->opackets += stats->q_opackets[i]; + + stats->q_obytes[i] = ring_stats->tx_ucast_bytes; + stats->q_obytes[i] += ring_stats->tx_mcast_bytes; + stats->q_obytes[i] += ring_stats->tx_bcast_bytes; + + stats->obytes += stats->q_obytes[i]; + + stats->oerrors += ring_stats->tx_discard_pkts; + } +} + static void bnxt_fill_rte_eth_stats(struct rte_eth_stats *stats, struct bnxt_ring_stats *ring_stats, unsigned int i, bool rx) @@ -545,6 +637,57 @@ static void bnxt_fill_rte_eth_stats(struct rte_eth_stats *stats, } } +static int bnxt_stats_get_ext(struct rte_eth_dev *eth_dev, + struct rte_eth_stats *bnxt_stats) +{ + int rc = 0; + unsigned int i; + struct bnxt *bp = eth_dev->data->dev_private; + unsigned int num_q_stats; + + num_q_stats = RTE_MIN(bp->rx_cp_nr_rings, + (unsigned int)RTE_ETHDEV_QUEUE_STAT_CNTRS); + + for (i = 0; i < num_q_stats; i++) { + struct bnxt_rx_queue *rxq = bp->rx_queues[i]; + struct bnxt_cp_ring_info *cpr = rxq->cp_ring; + struct bnxt_ring_stats_ext ring_stats = {0}; + + if (!rxq->rx_started) + continue; + + rc = bnxt_hwrm_ring_stats_ext(bp, cpr->hw_stats_ctx_id, i, + &ring_stats, true); + if (unlikely(rc)) + return rc; + + bnxt_fill_rte_eth_stats_ext(bnxt_stats, &ring_stats, i, true); + bnxt_stats->rx_nombuf += + __atomic_load_n(&rxq->rx_mbuf_alloc_fail, __ATOMIC_RELAXED); + } + + num_q_stats = RTE_MIN(bp->tx_cp_nr_rings, + (unsigned int)RTE_ETHDEV_QUEUE_STAT_CNTRS); + + for (i = 0; i < num_q_stats; i++) { + struct bnxt_tx_queue *txq = bp->tx_queues[i]; + struct bnxt_cp_ring_info *cpr = txq->cp_ring; + struct bnxt_ring_stats_ext ring_stats = {0}; + + if (!txq->tx_started) + continue; + + rc = bnxt_hwrm_ring_stats_ext(bp, cpr->hw_stats_ctx_id, i, + &ring_stats, false); + if (unlikely(rc)) + return rc; + + bnxt_fill_rte_eth_stats_ext(bnxt_stats, &ring_stats, i, false); + } + + return rc; +} + int bnxt_stats_get_op(struct rte_eth_dev *eth_dev, struct rte_eth_stats *bnxt_stats) { @@ -560,6 +703,9 @@ int bnxt_stats_get_op(struct rte_eth_dev *eth_dev, if (!eth_dev->data->dev_started) return -EIO; + if (BNXT_TPA_V2_P7(bp)) + return bnxt_stats_get_ext(eth_dev, bnxt_stats); + num_q_stats = RTE_MIN(bp->rx_cp_nr_rings, (unsigned int)RTE_ETHDEV_QUEUE_STAT_CNTRS); @@ -609,8 +755,17 @@ static void bnxt_clear_prev_stat(struct bnxt *bp) * Clear the cached values of stats returned by HW in the previous * get operation. 
*/ - memset(bp->prev_rx_ring_stats, 0, sizeof(struct bnxt_ring_stats) * bp->rx_cp_nr_rings); - memset(bp->prev_tx_ring_stats, 0, sizeof(struct bnxt_ring_stats) * bp->tx_cp_nr_rings); + if (BNXT_TPA_V2_P7(bp)) { + memset(bp->prev_rx_ring_stats_ext, 0, + sizeof(struct bnxt_ring_stats_ext) * bp->rx_cp_nr_rings); + memset(bp->prev_tx_ring_stats_ext, 0, + sizeof(struct bnxt_ring_stats_ext) * bp->tx_cp_nr_rings); + } else { + memset(bp->prev_rx_ring_stats, 0, + sizeof(struct bnxt_ring_stats) * bp->rx_cp_nr_rings); + memset(bp->prev_tx_ring_stats, 0, + sizeof(struct bnxt_ring_stats) * bp->tx_cp_nr_rings); + } } int bnxt_stats_reset_op(struct rte_eth_dev *eth_dev) @@ -640,6 +795,42 @@ int bnxt_stats_reset_op(struct rte_eth_dev *eth_dev) return ret; } +static void bnxt_fill_func_qstats_ext(struct hwrm_func_qstats_ext_output *func_qstats, + struct bnxt_ring_stats_ext *ring_stats, + bool rx) +{ + if (rx) { + func_qstats->rx_ucast_pkts += ring_stats->rx_ucast_pkts; + func_qstats->rx_mcast_pkts += ring_stats->rx_mcast_pkts; + func_qstats->rx_bcast_pkts += ring_stats->rx_bcast_pkts; + + func_qstats->rx_ucast_bytes += ring_stats->rx_ucast_bytes; + func_qstats->rx_mcast_bytes += ring_stats->rx_mcast_bytes; + func_qstats->rx_bcast_bytes += ring_stats->rx_bcast_bytes; + + func_qstats->rx_discard_pkts += ring_stats->rx_discard_pkts; + func_qstats->rx_error_pkts += ring_stats->rx_error_pkts; + + func_qstats->rx_tpa_eligible_pkt += ring_stats->rx_tpa_eligible_pkt; + func_qstats->rx_tpa_eligible_bytes += ring_stats->rx_tpa_eligible_bytes; + func_qstats->rx_tpa_pkt += ring_stats->rx_tpa_pkt; + func_qstats->rx_tpa_bytes += ring_stats->rx_tpa_bytes; + func_qstats->rx_tpa_errors += ring_stats->rx_tpa_errors; + func_qstats->rx_tpa_events += ring_stats->rx_tpa_events; + } else { + func_qstats->tx_ucast_pkts += ring_stats->tx_ucast_pkts; + func_qstats->tx_mcast_pkts += ring_stats->tx_mcast_pkts; + func_qstats->tx_bcast_pkts += ring_stats->tx_bcast_pkts; + + func_qstats->tx_ucast_bytes += ring_stats->tx_ucast_bytes; + func_qstats->tx_mcast_bytes += ring_stats->tx_mcast_bytes; + func_qstats->tx_bcast_bytes += ring_stats->tx_bcast_bytes; + + func_qstats->tx_error_pkts += ring_stats->tx_error_pkts; + func_qstats->tx_discard_pkts += ring_stats->tx_discard_pkts; + } +} + static void bnxt_fill_func_qstats(struct hwrm_func_qstats_output *func_qstats, struct bnxt_ring_stats *ring_stats, bool rx) @@ -683,16 +874,21 @@ int bnxt_dev_xstats_get_op(struct rte_eth_dev *eth_dev, unsigned int tx_port_stats_ext_cnt; unsigned int stat_size = sizeof(uint64_t); struct hwrm_func_qstats_output func_qstats = {0}; - unsigned int stat_count; + struct hwrm_func_qstats_ext_output func_qstats_ext = {0}; + unsigned int stat_count, sz; int rc; rc = is_bnxt_in_error(bp); if (rc) return rc; + if (BNXT_TPA_V2_P7(bp)) + sz = RTE_DIM(bnxt_func_stats_ext_strings); + else + sz = RTE_DIM(bnxt_func_stats_strings); + stat_count = RTE_DIM(bnxt_rx_stats_strings) + - RTE_DIM(bnxt_tx_stats_strings) + - RTE_DIM(bnxt_func_stats_strings) + + RTE_DIM(bnxt_tx_stats_strings) + sz + RTE_DIM(bnxt_rx_ext_stats_strings) + RTE_DIM(bnxt_tx_ext_stats_strings) + bnxt_flow_stats_cnt(bp); @@ -704,32 +900,51 @@ int bnxt_dev_xstats_get_op(struct rte_eth_dev *eth_dev, struct bnxt_rx_queue *rxq = bp->rx_queues[i]; struct bnxt_cp_ring_info *cpr = rxq->cp_ring; struct bnxt_ring_stats ring_stats = {0}; + struct bnxt_ring_stats_ext ring_stats_ext = {0}; if (!rxq->rx_started) continue; - rc = bnxt_hwrm_ring_stats(bp, cpr->hw_stats_ctx_id, i, - &ring_stats, true); + if 
(BNXT_TPA_V2_P7(bp)) + rc = bnxt_hwrm_ring_stats_ext(bp, cpr->hw_stats_ctx_id, i, + &ring_stats_ext, true); + else + rc = bnxt_hwrm_ring_stats(bp, cpr->hw_stats_ctx_id, i, + &ring_stats, true); + if (unlikely(rc)) return rc; - bnxt_fill_func_qstats(&func_qstats, &ring_stats, true); + if (BNXT_TPA_V2_P7(bp)) + bnxt_fill_func_qstats_ext(&func_qstats_ext, + &ring_stats_ext, true); + else + bnxt_fill_func_qstats(&func_qstats, &ring_stats, true); } for (i = 0; i < bp->tx_cp_nr_rings; i++) { struct bnxt_tx_queue *txq = bp->tx_queues[i]; struct bnxt_cp_ring_info *cpr = txq->cp_ring; struct bnxt_ring_stats ring_stats = {0}; + struct bnxt_ring_stats_ext ring_stats_ext = {0}; if (!txq->tx_started) continue; - rc = bnxt_hwrm_ring_stats(bp, cpr->hw_stats_ctx_id, i, - &ring_stats, false); + if (BNXT_TPA_V2_P7(bp)) + rc = bnxt_hwrm_ring_stats_ext(bp, cpr->hw_stats_ctx_id, i, + &ring_stats_ext, false); + else + rc = bnxt_hwrm_ring_stats(bp, cpr->hw_stats_ctx_id, i, + &ring_stats, false); if (unlikely(rc)) return rc; - bnxt_fill_func_qstats(&func_qstats, &ring_stats, false); + if (BNXT_TPA_V2_P7(bp)) + bnxt_fill_func_qstats_ext(&func_qstats_ext, + &ring_stats_ext, false); + else + bnxt_fill_func_qstats(&func_qstats, &ring_stats, false); } bnxt_hwrm_port_qstats(bp); @@ -762,6 +977,15 @@ int bnxt_dev_xstats_get_op(struct rte_eth_dev *eth_dev, count++; } + if (BNXT_TPA_V2_P7(bp)) { + for (i = 0; i < RTE_DIM(bnxt_func_stats_ext_strings); i++) { + xstats[count].id = count; + xstats[count].value = *(uint64_t *)((char *)&func_qstats_ext + + bnxt_func_stats_ext_strings[i].offset); + count++; + } + goto skip_func_stats; + } for (i = 0; i < RTE_DIM(bnxt_func_stats_strings); i++) { xstats[count].id = count; xstats[count].value = *(uint64_t *)((char *)&func_qstats + @@ -769,6 +993,7 @@ int bnxt_dev_xstats_get_op(struct rte_eth_dev *eth_dev, count++; } +skip_func_stats: for (i = 0; i < rx_port_stats_ext_cnt; i++) { uint64_t *rx_stats_ext = (uint64_t *)bp->hw_rx_port_stats_ext; @@ -849,19 +1074,26 @@ int bnxt_dev_xstats_get_names_op(struct rte_eth_dev *eth_dev, unsigned int size) { struct bnxt *bp = (struct bnxt *)eth_dev->data->dev_private; - const unsigned int stat_cnt = RTE_DIM(bnxt_rx_stats_strings) + - RTE_DIM(bnxt_tx_stats_strings) + - RTE_DIM(bnxt_func_stats_strings) + - RTE_DIM(bnxt_rx_ext_stats_strings) + - RTE_DIM(bnxt_tx_ext_stats_strings) + - bnxt_flow_stats_cnt(bp); - unsigned int i, count = 0; + unsigned int stat_cnt; + unsigned int i, count = 0, sz; int rc; rc = is_bnxt_in_error(bp); if (rc) return rc; + if (BNXT_TPA_V2_P7(bp)) + sz = RTE_DIM(bnxt_func_stats_ext_strings); + else + sz = RTE_DIM(bnxt_func_stats_strings); + + stat_cnt = RTE_DIM(bnxt_rx_stats_strings) + + RTE_DIM(bnxt_tx_stats_strings) + + sz + + RTE_DIM(bnxt_rx_ext_stats_strings) + + RTE_DIM(bnxt_tx_ext_stats_strings) + + bnxt_flow_stats_cnt(bp); + if (xstats_names == NULL || size < stat_cnt) return stat_cnt; @@ -879,6 +1111,16 @@ int bnxt_dev_xstats_get_names_op(struct rte_eth_dev *eth_dev, count++; } + if (BNXT_TPA_V2_P7(bp)) { + for (i = 0; i < RTE_DIM(bnxt_func_stats_ext_strings); i++) { + strlcpy(xstats_names[count].name, + bnxt_func_stats_ext_strings[i].name, + sizeof(xstats_names[count].name)); + count++; + } + goto skip_func_stats; + } + for (i = 0; i < RTE_DIM(bnxt_func_stats_strings); i++) { strlcpy(xstats_names[count].name, bnxt_func_stats_strings[i].name, @@ -886,6 +1128,7 @@ int bnxt_dev_xstats_get_names_op(struct rte_eth_dev *eth_dev, count++; } +skip_func_stats: for (i = 0; i < RTE_DIM(bnxt_rx_ext_stats_strings); i++) { 
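+		/* Extended Rx port stat names follow the per-function stat names. */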
strlcpy(xstats_names[count].name,
	bnxt_rx_ext_stats_strings[i].name,

From patchwork Wed Dec 27 04:21:18 2023
X-Patchwork-Id: 135605
From: Ajit Khaparde
To: dev@dpdk.org
Cc: Damodharam Ammepalli
Subject: [PATCH v3 17/18] net/bnxt: add AVX2 support for compressed CQE
Date: Tue, 26 Dec 2023 20:21:18 -0800
Message-Id: <20231227042119.72469-18-ajit.khaparde@broadcom.com>
In-Reply-To: <20231227042119.72469-1-ajit.khaparde@broadcom.com>
References: <20231227042119.72469-1-ajit.khaparde@broadcom.com>
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org P7 device family supports 16 byte Rx completions. Add AVX2 vector mode for compressed Rx CQE. Signed-off-by: Ajit Khaparde Reviewed-by: Damodharam Ammepalli --- drivers/net/bnxt/bnxt_ethdev.c | 5 + drivers/net/bnxt/bnxt_rxr.h | 2 + drivers/net/bnxt/bnxt_rxtx_vec_avx2.c | 309 ++++++++++++++++++++++++++ 3 files changed, 316 insertions(+) diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index 031028eda1..bd8c7557dd 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -1406,6 +1406,8 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev) "Using AVX2 vector mode receive for port %d\n", eth_dev->data->port_id); bp->flags |= BNXT_FLAG_RX_VECTOR_PKT_MODE; + if (bnxt_compressed_rx_cqe_mode_enabled(bp)) + return bnxt_crx_pkts_vec_avx2; return bnxt_recv_pkts_vec_avx2; } #endif @@ -3124,6 +3126,9 @@ static const struct { {bnxt_recv_pkts, "Scalar"}, #if defined(RTE_ARCH_X86) {bnxt_recv_pkts_vec, "Vector SSE"}, +#endif +#if defined(RTE_ARCH_X86) && defined(CC_AVX2_SUPPORT) + {bnxt_crx_pkts_vec_avx2, "Vector AVX2"}, {bnxt_recv_pkts_vec_avx2, "Vector AVX2"}, #endif #if defined(RTE_ARCH_ARM64) diff --git a/drivers/net/bnxt/bnxt_rxr.h b/drivers/net/bnxt/bnxt_rxr.h index c51bb2d62c..a474a69ae3 100644 --- a/drivers/net/bnxt/bnxt_rxr.h +++ b/drivers/net/bnxt/bnxt_rxr.h @@ -162,6 +162,8 @@ int bnxt_rxq_vec_setup(struct bnxt_rx_queue *rxq); #if defined(RTE_ARCH_X86) uint16_t bnxt_recv_pkts_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); +uint16_t bnxt_crx_pkts_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts); #endif void bnxt_set_mark_in_mbuf(struct bnxt *bp, struct rx_pkt_cmpl_hi *rxcmp1, diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c b/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c index ea8dbaffba..ce6b597611 100644 --- a/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c +++ b/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c @@ -361,6 +361,294 @@ recv_burst_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) return nb_rx_pkts; } +static uint16_t +crx_burst_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) +{ + struct bnxt_rx_queue *rxq = rx_queue; + const __m256i mbuf_init = + _mm256_set_epi64x(0, 0, 0, rxq->mbuf_initializer); + struct bnxt_cp_ring_info *cpr = rxq->cp_ring; + struct bnxt_rx_ring_info *rxr = rxq->rx_ring; + uint16_t cp_ring_size = cpr->cp_ring_struct->ring_size; + uint16_t rx_ring_size = rxr->rx_ring_struct->ring_size; + struct cmpl_base *cp_desc_ring = cpr->cp_desc_ring; + uint64_t valid, desc_valid_mask = ~0ULL; + const __m256i info3_v_mask = _mm256_set1_epi32(CMPL_BASE_V); + uint32_t raw_cons = cpr->cp_raw_cons; + uint32_t cons, mbcons; + int nb_rx_pkts = 0; + int i; + const __m256i valid_target = + _mm256_set1_epi32(!!(raw_cons & cp_ring_size)); + const __m256i shuf_msk = + _mm256_set_epi8(15, 14, 13, 12, /* rss */ + 7, 6, /* vlan_tci */ + 3, 2, /* data_len */ + 0xFF, 0xFF, 3, 2, /* pkt_len */ + 0xFF, 0xFF, 0xFF, 0xFF, /* pkt_type (zeroes) */ + 15, 14, 13, 12, /* rss */ + 7, 6, /* vlan_tci */ + 3, 2, /* data_len */ + 0xFF, 0xFF, 3, 2, /* pkt_len */ + 0xFF, 0xFF, 0xFF, 0xFF); /* pkt_type (zeroes) */ + const __m256i flags_type_mask = + _mm256_set1_epi32(RX_PKT_CMPL_FLAGS_ITYPE_MASK); + const __m256i flags2_mask1 = + _mm256_set1_epi32(CMPL_FLAGS2_VLAN_TUN_MSK); + const __m256i flags2_mask2 = + _mm256_set1_epi32(RX_PKT_CMPL_FLAGS2_IP_TYPE); + const __m256i rss_mask = + _mm256_set1_epi32(RX_PKT_CMPL_FLAGS_RSS_VALID); + __m256i t0, t1, flags_type, 
flags2, index, errors; + __m256i ptype_idx, ptypes, is_tunnel; + __m256i mbuf01, mbuf23, mbuf45, mbuf67; + __m256i rearm0, rearm1, rearm2, rearm3, rearm4, rearm5, rearm6, rearm7; + __m256i ol_flags, ol_flags_hi; + __m256i rss_flags; + + /* Validate ptype table indexing at build time. */ + bnxt_check_ptype_constants(); + + /* If Rx Q was stopped return */ + if (unlikely(!rxq->rx_started)) + return 0; + + if (rxq->rxrearm_nb >= rxq->rx_free_thresh) + bnxt_rxq_rearm(rxq, rxr); + + nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, BNXT_RX_DESCS_PER_LOOP_VEC256); + + cons = raw_cons & (cp_ring_size - 1); + mbcons = raw_cons & (rx_ring_size - 1); + + /* Return immediately if there is not at least one completed packet. */ + if (!bnxt_cpr_cmp_valid(&cp_desc_ring[cons], raw_cons, cp_ring_size)) + return 0; + + /* Ensure that we do not go past the ends of the rings. */ + nb_pkts = RTE_MIN(nb_pkts, RTE_MIN(rx_ring_size - mbcons, + cp_ring_size - cons)); + /* + * If we are at the end of the ring, ensure that descriptors after the + * last valid entry are not treated as valid. Otherwise, force the + * maximum number of packets to receive to be a multiple of the per- + * loop count. + */ + if (nb_pkts < BNXT_RX_DESCS_PER_LOOP_VEC256) { + desc_valid_mask >>= + CHAR_BIT * (BNXT_RX_DESCS_PER_LOOP_VEC256 - nb_pkts); + } else { + nb_pkts = + RTE_ALIGN_FLOOR(nb_pkts, BNXT_RX_DESCS_PER_LOOP_VEC256); + } + + /* Handle RX burst request */ + for (i = 0; i < nb_pkts; i += BNXT_RX_DESCS_PER_LOOP_VEC256, + cons += BNXT_RX_DESCS_PER_LOOP_VEC256, + mbcons += BNXT_RX_DESCS_PER_LOOP_VEC256) { + __m256i rxcmp0_1, rxcmp2_3, rxcmp4_5, rxcmp6_7, info3_v; + __m256i errors_v2; + uint32_t num_valid; + + /* Copy eight mbuf pointers to output array. */ + t0 = _mm256_loadu_si256((void *)&rxr->rx_buf_ring[mbcons]); + _mm256_storeu_si256((void *)&rx_pkts[i], t0); +#ifdef RTE_ARCH_X86_64 + t0 = _mm256_loadu_si256((void *)&rxr->rx_buf_ring[mbcons + 4]); + _mm256_storeu_si256((void *)&rx_pkts[i + 4], t0); +#endif + + /* + * Load eight receive completion descriptors into 256-bit + * registers. Loads are issued in reverse order in order to + * ensure consistent state. + */ + rxcmp6_7 = _mm256_loadu_si256((void *)&cp_desc_ring[cons + 6]); + rte_compiler_barrier(); + rxcmp4_5 = _mm256_loadu_si256((void *)&cp_desc_ring[cons + 4]); + rte_compiler_barrier(); + rxcmp2_3 = _mm256_loadu_si256((void *)&cp_desc_ring[cons + 2]); + rte_compiler_barrier(); + rxcmp0_1 = _mm256_loadu_si256((void *)&cp_desc_ring[cons + 0]); + + /* Compute packet type table indices for eight packets. */ + t0 = _mm256_unpacklo_epi32(rxcmp0_1, rxcmp2_3); + t1 = _mm256_unpacklo_epi32(rxcmp4_5, rxcmp6_7); + flags_type = _mm256_unpacklo_epi64(t0, t1); + ptype_idx = _mm256_and_si256(flags_type, flags_type_mask); + ptype_idx = _mm256_srli_epi32(ptype_idx, + RX_PKT_CMPL_FLAGS_ITYPE_SFT - + BNXT_PTYPE_TBL_TYPE_SFT); + + t0 = _mm256_unpacklo_epi32(rxcmp0_1, rxcmp2_3); + t1 = _mm256_unpacklo_epi32(rxcmp4_5, rxcmp6_7); + flags2 = _mm256_unpackhi_epi64(t0, t1); + + t0 = _mm256_srli_epi32(_mm256_and_si256(flags2, flags2_mask1), + RX_PKT_CMPL_FLAGS2_META_FORMAT_SFT - + BNXT_PTYPE_TBL_VLAN_SFT); + ptype_idx = _mm256_or_si256(ptype_idx, t0); + + t0 = _mm256_srli_epi32(_mm256_and_si256(flags2, flags2_mask2), + RX_PKT_CMPL_FLAGS2_IP_TYPE_SFT - + BNXT_PTYPE_TBL_IP_VER_SFT); + ptype_idx = _mm256_or_si256(ptype_idx, t0); + + /* + * Load ptypes for eight packets using gather. 
Gather operations
+		 * have extremely high latency (~19 cycles); keep the issue of a
+		 * gather and the use of its result as far apart as possible.
+		 */
+		ptypes = _mm256_i32gather_epi32((int *)bnxt_ptype_table,
+						ptype_idx, sizeof(uint32_t));
+		/*
+		 * Compute ol_flags and checksum error table indices for eight
+		 * packets.
+		 */
+		is_tunnel = _mm256_and_si256(flags2, _mm256_set1_epi32(4));
+		is_tunnel = _mm256_slli_epi32(is_tunnel, 3);
+		flags2 = _mm256_and_si256(flags2, _mm256_set1_epi32(0x1F));
+
+		/* Extract errors_v2 fields for eight packets. */
+		t0 = _mm256_unpackhi_epi32(rxcmp0_1, rxcmp2_3);
+		t1 = _mm256_unpackhi_epi32(rxcmp4_5, rxcmp6_7);
+		errors_v2 = _mm256_unpacklo_epi64(t0, t1);
+
+		errors = _mm256_srli_epi32(errors_v2, 4);
+		errors = _mm256_and_si256(errors, _mm256_set1_epi32(0xF));
+		errors = _mm256_and_si256(errors, flags2);
+
+		index = _mm256_andnot_si256(errors, flags2);
+		errors = _mm256_or_si256(errors,
+					 _mm256_srli_epi32(is_tunnel, 1));
+		index = _mm256_or_si256(index, is_tunnel);
+
+		/*
+		 * Load ol_flags for eight packets using gather. Gather
+		 * operations have extremely high latency (~19 cycles); keep
+		 * the issue of a gather and the use of its result as far
+		 * apart as possible.
+		 */
+		ol_flags = _mm256_i32gather_epi32((int *)rxr->ol_flags_table,
+						  index, sizeof(uint32_t));
+		errors = _mm256_i32gather_epi32((int *)rxr->ol_flags_err_table,
+						errors, sizeof(uint32_t));
+
+		/*
+		 * Pack the eight 32-bit valid descriptor flags into 64
+		 * bits and count the number of set bits in order to determine
+		 * the number of valid descriptors.
+		 */
+		const __m256i perm_msk =
+			_mm256_set_epi32(7, 3, 6, 2, 5, 1, 4, 0);
+		info3_v = _mm256_permutevar8x32_epi32(errors_v2, perm_msk);
+		info3_v = _mm256_and_si256(info3_v, info3_v_mask);
+		info3_v = _mm256_xor_si256(info3_v, valid_target);
+
+		info3_v = _mm256_packs_epi32(info3_v, _mm256_setzero_si256());
+		valid = _mm_cvtsi128_si64(_mm256_extracti128_si256(info3_v, 1));
+		valid = (valid << CHAR_BIT) |
+			_mm_cvtsi128_si64(_mm256_castsi256_si128(info3_v));
+		num_valid = rte_popcount64(valid & desc_valid_mask);
+
+		if (num_valid == 0)
+			break;
+
+		/* Update mbuf rearm_data for eight packets. */
+		mbuf01 = _mm256_shuffle_epi8(rxcmp0_1, shuf_msk);
+		mbuf23 = _mm256_shuffle_epi8(rxcmp2_3, shuf_msk);
+		mbuf45 = _mm256_shuffle_epi8(rxcmp4_5, shuf_msk);
+		mbuf67 = _mm256_shuffle_epi8(rxcmp6_7, shuf_msk);
+
+		/* Blend in ptype field for two mbufs at a time. */
+		mbuf01 = _mm256_blend_epi32(mbuf01, ptypes, 0x11);
+		mbuf23 = _mm256_blend_epi32(mbuf23,
+					    _mm256_srli_si256(ptypes, 4), 0x11);
+		mbuf45 = _mm256_blend_epi32(mbuf45,
+					    _mm256_srli_si256(ptypes, 8), 0x11);
+		mbuf67 = _mm256_blend_epi32(mbuf67,
+					    _mm256_srli_si256(ptypes, 12), 0x11);
+
+		/* Unpack rearm data, set fixed fields for first four mbufs. */
+		rearm0 = _mm256_permute2f128_si256(mbuf_init, mbuf01, 0x20);
+		rearm1 = _mm256_blend_epi32(mbuf_init, mbuf01, 0xF0);
+		rearm2 = _mm256_permute2f128_si256(mbuf_init, mbuf23, 0x20);
+		rearm3 = _mm256_blend_epi32(mbuf_init, mbuf23, 0xF0);
+
+		/* Compute final ol_flags values for eight packets. */
+		rss_flags = _mm256_and_si256(flags_type, rss_mask);
+		rss_flags = _mm256_srli_epi32(rss_flags, 9);
+		ol_flags = _mm256_or_si256(ol_flags, errors);
+		ol_flags = _mm256_or_si256(ol_flags, rss_flags);
+		ol_flags_hi = _mm256_permute2f128_si256(ol_flags,
+							ol_flags, 0x11);
+
+		/* Set ol_flags fields for first four packets. */
+		rearm0 = _mm256_blend_epi32(rearm0,
+					    _mm256_slli_si256(ol_flags, 8),
+					    0x04);
+		rearm1 = _mm256_blend_epi32(rearm1,
+					    _mm256_slli_si256(ol_flags_hi, 8),
+					    0x04);
+		rearm2 = _mm256_blend_epi32(rearm2,
+					    _mm256_slli_si256(ol_flags, 4),
+					    0x04);
+		rearm3 = _mm256_blend_epi32(rearm3,
+					    _mm256_slli_si256(ol_flags_hi, 4),
+					    0x04);
+
+		/* Store all mbuf fields for first four packets. */
+		_mm256_storeu_si256((void *)&rx_pkts[i + 0]->rearm_data,
+				    rearm0);
+		_mm256_storeu_si256((void *)&rx_pkts[i + 1]->rearm_data,
+				    rearm1);
+		_mm256_storeu_si256((void *)&rx_pkts[i + 2]->rearm_data,
+				    rearm2);
+		_mm256_storeu_si256((void *)&rx_pkts[i + 3]->rearm_data,
+				    rearm3);
+
+		/* Unpack rearm data, set fixed fields for final four mbufs. */
+		rearm4 = _mm256_permute2f128_si256(mbuf_init, mbuf45, 0x20);
+		rearm5 = _mm256_blend_epi32(mbuf_init, mbuf45, 0xF0);
+		rearm6 = _mm256_permute2f128_si256(mbuf_init, mbuf67, 0x20);
+		rearm7 = _mm256_blend_epi32(mbuf_init, mbuf67, 0xF0);
+
+		/* Set ol_flags fields for final four packets. */
+		rearm4 = _mm256_blend_epi32(rearm4, ol_flags, 0x04);
+		rearm5 = _mm256_blend_epi32(rearm5, ol_flags_hi, 0x04);
+		rearm6 = _mm256_blend_epi32(rearm6,
+					    _mm256_srli_si256(ol_flags, 4),
+					    0x04);
+		rearm7 = _mm256_blend_epi32(rearm7,
+					    _mm256_srli_si256(ol_flags_hi, 4),
+					    0x04);
+
+		/* Store all mbuf fields for final four packets. */
+		_mm256_storeu_si256((void *)&rx_pkts[i + 4]->rearm_data,
+				    rearm4);
+		_mm256_storeu_si256((void *)&rx_pkts[i + 5]->rearm_data,
+				    rearm5);
+		_mm256_storeu_si256((void *)&rx_pkts[i + 6]->rearm_data,
+				    rearm6);
+		_mm256_storeu_si256((void *)&rx_pkts[i + 7]->rearm_data,
+				    rearm7);
+
+		nb_rx_pkts += num_valid;
+		if (num_valid < BNXT_RX_DESCS_PER_LOOP_VEC256)
+			break;
+	}
+
+	if (nb_rx_pkts) {
+		rxr->rx_raw_prod = RING_ADV(rxr->rx_raw_prod, nb_rx_pkts);
+
+		rxq->rxrearm_nb += nb_rx_pkts;
+		cpr->cp_raw_cons += nb_rx_pkts;
+		bnxt_db_cq(cpr);
+	}
+
+	return nb_rx_pkts;
+}
+
 uint16_t
 bnxt_recv_pkts_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts,
			 uint16_t nb_pkts)
@@ -382,6 +670,27 @@ bnxt_recv_pkts_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts,
	return cnt + recv_burst_vec_avx2(rx_queue, rx_pkts + cnt, nb_pkts);
 }

+uint16_t
+bnxt_crx_pkts_vec_avx2(void *rx_queue, struct rte_mbuf **rx_pkts,
+		       uint16_t nb_pkts)
+{
+	uint16_t cnt = 0;
+
+	while (nb_pkts > RTE_BNXT_MAX_RX_BURST) {
+		uint16_t burst;
+
+		burst = crx_burst_vec_avx2(rx_queue, rx_pkts + cnt,
+					   RTE_BNXT_MAX_RX_BURST);
+
+		cnt += burst;
+		nb_pkts -= burst;
+
+		if (burst < RTE_BNXT_MAX_RX_BURST)
+			return cnt;
+	}
+	return cnt + crx_burst_vec_avx2(rx_queue, rx_pkts + cnt, nb_pkts);
+}
+
 static void
 bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
 {
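
Both this AVX2 path and the SSE path in the next patch decide how many completions to consume from the completion record's valid (V) bit: the bit's expected polarity flips on every pass through the ring, and the loops XOR the extracted bits against valid_target before popcounting. A scalar sketch of the same test (illustrative only; it borrows the driver's struct cmpl_base and CMPL_BASE_V definitions):

#include <stdbool.h>
#include <stdint.h>
#include <rte_byteorder.h>
#include "hsi_struct_def_dpdk.h"	/* struct cmpl_base, CMPL_BASE_V */

/* A completion is ready when its V bit differs from the phase bit
 * !!(raw_cons & ring_size), which flips on every ring wraparound.
 */
static inline bool
crx_cqe_valid(const struct cmpl_base *cqe, uint32_t raw_cons,
	      uint32_t ring_size)
{
	bool phase = !!(raw_cons & ring_size);
	bool v = !!(rte_le_to_cpu_32(cqe->info3_v) & CMPL_BASE_V);

	return v != phase;
}

The vector loops evaluate this predicate for eight (AVX2) or four (SSE) completions at once, and desc_valid_mask clips any descriptors past the end of the burst.
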
From patchwork Wed Dec 27 04:21:19 2023
X-Patchwork-Id: 135606
From: Ajit Khaparde
To: dev@dpdk.org
Cc: Damodharam Ammepalli
Subject: [PATCH v3 18/18] net/bnxt: enable SSE mode for compressed CQE
Date: Tue, 26 Dec 2023 20:21:19 -0800
Message-Id: <20231227042119.72469-19-ajit.khaparde@broadcom.com>
In-Reply-To: <20231227042119.72469-1-ajit.khaparde@broadcom.com>
References: <20231227042119.72469-1-ajit.khaparde@broadcom.com>
List-Id: DPDK patches and discussions

P7 device family supports 16 byte Rx completions.
Enable SSE vector mode for compressed Rx CQE processing.

Signed-off-by: Ajit Khaparde
Reviewed-by: Damodharam Ammepalli
---
 drivers/net/bnxt/bnxt_ethdev.c       |  16 ++-
 drivers/net/bnxt/bnxt_rxr.h          |   2 +
 drivers/net/bnxt/bnxt_rxtx_vec_sse.c | 167 +++++++++++++++++++++++++--
 3 files changed, 173 insertions(+), 12 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index bd8c7557dd..f9cd234bb6 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1377,7 +1377,8 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev)
	 * asynchronous completions and receive completions can be placed in
	 * the same completion ring.
*/ - if (BNXT_TRUFLOW_EN(bp) || !BNXT_NUM_ASYNC_CPR(bp)) + if ((BNXT_TRUFLOW_EN(bp) && !BNXT_CHIP_P7(bp)) || + !BNXT_NUM_ASYNC_CPR(bp)) goto use_scalar_rx; /* @@ -1410,12 +1411,19 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev) return bnxt_crx_pkts_vec_avx2; return bnxt_recv_pkts_vec_avx2; } - #endif +#endif if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) { PMD_DRV_LOG(INFO, "Using SSE vector mode receive for port %d\n", eth_dev->data->port_id); bp->flags |= BNXT_FLAG_RX_VECTOR_PKT_MODE; + if (bnxt_compressed_rx_cqe_mode_enabled(bp)) { +#if defined(RTE_ARCH_ARM64) + goto use_scalar_rx; +#else + return bnxt_crx_pkts_vec; +#endif + } return bnxt_recv_pkts_vec; } @@ -1445,7 +1453,8 @@ bnxt_transmit_function(__rte_unused struct rte_eth_dev *eth_dev) */ if (eth_dev->data->scattered_rx || (offloads & ~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) || - BNXT_TRUFLOW_EN(bp) || bp->ieee_1588) + (BNXT_TRUFLOW_EN(bp) && !BNXT_CHIP_P7(bp)) || + bp->ieee_1588) goto use_scalar_tx; #if defined(RTE_ARCH_X86) @@ -3125,6 +3134,7 @@ static const struct { } bnxt_rx_burst_info[] = { {bnxt_recv_pkts, "Scalar"}, #if defined(RTE_ARCH_X86) + {bnxt_crx_pkts_vec, "Vector SSE"}, {bnxt_recv_pkts_vec, "Vector SSE"}, #endif #if defined(RTE_ARCH_X86) && defined(CC_AVX2_SUPPORT) diff --git a/drivers/net/bnxt/bnxt_rxr.h b/drivers/net/bnxt/bnxt_rxr.h index a474a69ae3..d36cbded1d 100644 --- a/drivers/net/bnxt/bnxt_rxr.h +++ b/drivers/net/bnxt/bnxt_rxr.h @@ -156,6 +156,8 @@ int bnxt_flush_rx_cmp(struct bnxt_cp_ring_info *cpr); #if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64) uint16_t bnxt_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); +uint16_t bnxt_crx_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts); int bnxt_rxq_vec_setup(struct bnxt_rx_queue *rxq); #endif diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c index e99a547f58..220aa82073 100644 --- a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c +++ b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c @@ -54,15 +54,9 @@ static inline void descs_to_mbufs(__m128i mm_rxcmp[4], __m128i mm_rxcmp1[4], - __m128i mbuf_init, struct rte_mbuf **mbuf, - struct bnxt_rx_ring_info *rxr) + __m128i mbuf_init, const __m128i shuf_msk, + struct rte_mbuf **mbuf, struct bnxt_rx_ring_info *rxr) { - const __m128i shuf_msk = - _mm_set_epi8(15, 14, 13, 12, /* rss */ - 0xFF, 0xFF, /* vlan_tci (zeroes) */ - 3, 2, /* data_len */ - 0xFF, 0xFF, 3, 2, /* pkt_len */ - 0xFF, 0xFF, 0xFF, 0xFF); /* pkt_type (zeroes) */ const __m128i flags_type_mask = _mm_set1_epi32(RX_PKT_CMPL_FLAGS_ITYPE_MASK); const __m128i flags2_mask1 = @@ -166,6 +160,12 @@ recv_burst_vec_sse(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) int nb_rx_pkts = 0; const __m128i valid_target = _mm_set1_epi32(!!(raw_cons & cp_ring_size)); + const __m128i shuf_msk = + _mm_set_epi8(15, 14, 13, 12, /* rss */ + 0xFF, 0xFF, /* vlan_tci (zeroes) */ + 3, 2, /* data_len */ + 0xFF, 0xFF, 3, 2, /* pkt_len */ + 0xFF, 0xFF, 0xFF, 0xFF); /* pkt_type (zeroes) */ int i; /* If Rx Q was stopped return */ @@ -264,7 +264,7 @@ recv_burst_vec_sse(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) if (num_valid == 0) break; - descs_to_mbufs(rxcmp, rxcmp1, mbuf_init, &rx_pkts[nb_rx_pkts], + descs_to_mbufs(rxcmp, rxcmp1, mbuf_init, shuf_msk, &rx_pkts[nb_rx_pkts], rxr); nb_rx_pkts += num_valid; @@ -283,6 +283,134 @@ recv_burst_vec_sse(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) return nb_rx_pkts; } +static uint16_t +crx_burst_vec_sse(void *rx_queue, struct 
rte_mbuf **rx_pkts, uint16_t nb_pkts) +{ + struct bnxt_rx_queue *rxq = rx_queue; + const __m128i mbuf_init = _mm_set_epi64x(0, rxq->mbuf_initializer); + struct bnxt_cp_ring_info *cpr = rxq->cp_ring; + struct bnxt_rx_ring_info *rxr = rxq->rx_ring; + uint16_t cp_ring_size = cpr->cp_ring_struct->ring_size; + uint16_t rx_ring_size = rxr->rx_ring_struct->ring_size; + struct cmpl_base *cp_desc_ring = cpr->cp_desc_ring; + uint64_t valid, desc_valid_mask = ~0ULL; + const __m128i info3_v_mask = _mm_set1_epi32(CMPL_BASE_V); + uint32_t raw_cons = cpr->cp_raw_cons; + uint32_t cons, mbcons; + int nb_rx_pkts = 0; + const __m128i valid_target = + _mm_set1_epi32(!!(raw_cons & cp_ring_size)); + const __m128i shuf_msk = + _mm_set_epi8(7, 6, 5, 4, /* rss */ + 0xFF, 0xFF, /* vlan_tci (zeroes) */ + 3, 2, /* data_len */ + 0xFF, 0xFF, 3, 2, /* pkt_len */ + 0xFF, 0xFF, 0xFF, 0xFF); /* pkt_type (zeroes) */ + int i; + + /* If Rx Q was stopped return */ + if (unlikely(!rxq->rx_started)) + return 0; + + if (rxq->rxrearm_nb >= rxq->rx_free_thresh) + bnxt_rxq_rearm(rxq, rxr); + + cons = raw_cons & (cp_ring_size - 1); + mbcons = raw_cons & (rx_ring_size - 1); + + /* Prefetch first four descriptor pairs. */ + rte_prefetch0(&cp_desc_ring[cons]); + + /* Ensure that we do not go past the ends of the rings. */ + nb_pkts = RTE_MIN(nb_pkts, RTE_MIN(rx_ring_size - mbcons, + cp_ring_size - cons)); + /* + * If we are at the end of the ring, ensure that descriptors after the + * last valid entry are not treated as valid. Otherwise, force the + * maximum number of packets to receive to be a multiple of the per- + * loop count. + */ + if (nb_pkts < BNXT_RX_DESCS_PER_LOOP_VEC128) { + desc_valid_mask >>= + 16 * (BNXT_RX_DESCS_PER_LOOP_VEC128 - nb_pkts); + } else { + nb_pkts = + RTE_ALIGN_FLOOR(nb_pkts, BNXT_RX_DESCS_PER_LOOP_VEC128); + } + + /* Handle RX burst request */ + for (i = 0; i < nb_pkts; i += BNXT_RX_DESCS_PER_LOOP_VEC128, + cons += BNXT_RX_DESCS_PER_LOOP_VEC128, + mbcons += BNXT_RX_DESCS_PER_LOOP_VEC128) { + __m128i rxcmp1[BNXT_RX_DESCS_PER_LOOP_VEC128]; + __m128i rxcmp[BNXT_RX_DESCS_PER_LOOP_VEC128]; + __m128i tmp0, tmp1, info3_v; + uint32_t num_valid; + + /* Copy four mbuf pointers to output array. */ + tmp0 = _mm_loadu_si128((void *)&rxr->rx_buf_ring[mbcons]); +#ifdef RTE_ARCH_X86_64 + tmp1 = _mm_loadu_si128((void *)&rxr->rx_buf_ring[mbcons + 2]); +#endif + _mm_storeu_si128((void *)&rx_pkts[i], tmp0); +#ifdef RTE_ARCH_X86_64 + _mm_storeu_si128((void *)&rx_pkts[i + 2], tmp1); +#endif + + /* Prefetch four descriptor pairs for next iteration. */ + if (i + BNXT_RX_DESCS_PER_LOOP_VEC128 < nb_pkts) + rte_prefetch0(&cp_desc_ring[cons + 4]); + + /* + * Load the four current descriptors into SSE registers in + * reverse order to ensure consistent state. + */ + rxcmp[3] = _mm_load_si128((void *)&cp_desc_ring[cons + 3]); + rte_compiler_barrier(); + rxcmp[2] = _mm_load_si128((void *)&cp_desc_ring[cons + 2]); + rte_compiler_barrier(); + rxcmp[1] = _mm_load_si128((void *)&cp_desc_ring[cons + 1]); + rte_compiler_barrier(); + rxcmp[0] = _mm_load_si128((void *)&cp_desc_ring[cons + 0]); + + tmp1 = _mm_unpackhi_epi32(rxcmp[2], rxcmp[3]); + tmp0 = _mm_unpackhi_epi32(rxcmp[0], rxcmp[1]); + + /* Isolate descriptor valid flags. */ + info3_v = _mm_and_si128(_mm_unpacklo_epi64(tmp0, tmp1), + info3_v_mask); + info3_v = _mm_xor_si128(info3_v, valid_target); + + /* + * Pack the 128-bit array of valid descriptor flags into 64 + * bits and count the number of set bits in order to determine + * the number of valid descriptors. 
+ */ + valid = _mm_cvtsi128_si64(_mm_packs_epi32(info3_v, info3_v)); + num_valid = rte_popcount64(valid & desc_valid_mask); + + if (num_valid == 0) + break; + + descs_to_mbufs(rxcmp, rxcmp1, mbuf_init, shuf_msk, &rx_pkts[nb_rx_pkts], + rxr); + nb_rx_pkts += num_valid; + + if (num_valid < BNXT_RX_DESCS_PER_LOOP_VEC128) + break; + } + + if (nb_rx_pkts) { + rxr->rx_raw_prod = RING_ADV(rxr->rx_raw_prod, nb_rx_pkts); + + rxq->rxrearm_nb += nb_rx_pkts; + cpr->cp_raw_cons += nb_rx_pkts; + bnxt_db_cq(cpr); + } + + return nb_rx_pkts; +} + uint16_t bnxt_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) { @@ -304,6 +432,27 @@ bnxt_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) return cnt + recv_burst_vec_sse(rx_queue, rx_pkts + cnt, nb_pkts); } +uint16_t +bnxt_crx_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) +{ + uint16_t cnt = 0; + + while (nb_pkts > RTE_BNXT_MAX_RX_BURST) { + uint16_t burst; + + burst = crx_burst_vec_sse(rx_queue, rx_pkts + cnt, + RTE_BNXT_MAX_RX_BURST); + + cnt += burst; + nb_pkts -= burst; + + if (burst < RTE_BNXT_MAX_RX_BURST) + return cnt; + } + + return cnt + crx_burst_vec_sse(rx_queue, rx_pkts + cnt, nb_pkts); +} + static void bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq) {
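
With the compressed-CQE handlers registered in bnxt_rx_burst_info by patches 17 and 18, the Rx path a queue ends up with can be checked at runtime through the standard ethdev burst-mode API. A small sketch (port and queue 0 are assumptions for illustration):

#include <stdio.h>
#include <rte_ethdev.h>

static void
show_rx_burst_mode(uint16_t port_id, uint16_t queue_id)
{
	struct rte_eth_burst_mode mode;

	/* Reports e.g. "Vector AVX2" or "Vector SSE" for the vector paths. */
	if (rte_eth_rx_burst_mode_get(port_id, queue_id, &mode) == 0)
		printf("port %u rxq %u Rx burst mode: %s\n",
		       port_id, queue_id, mode.info);
}

Note that the compressed and regular handlers share the same mode string, so the distinction is only which function pointer bnxt_receive_function() installed.
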