From patchwork Wed Sep 8 08:37:27 2021
X-Patchwork-Submitter: Jiawen Wu
X-Patchwork-Id: 98282
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Jiawen Wu
To: dev@dpdk.org
Cc: Jiawen Wu
Date: Wed, 8 Sep 2021 16:37:27 +0800
Message-Id: <20210908083758.312055-2-jiawenwu@trustnetic.com>
In-Reply-To: <20210908083758.312055-1-jiawenwu@trustnetic.com>
References: <20210908083758.312055-1-jiawenwu@trustnetic.com>
Subject: [dpdk-dev] [PATCH 01/32] net/ngbe: add packet type

Add packet type macro definition and convert ptype to ptid.

Signed-off-by: Jiawen Wu
---
 doc/guides/nics/features/ngbe.ini |   1 +
 doc/guides/nics/ngbe.rst          |   1 +
 drivers/net/ngbe/meson.build      |   1 +
 drivers/net/ngbe/ngbe_ethdev.c    |   9 +
 drivers/net/ngbe/ngbe_ethdev.h    |   4 +
 drivers/net/ngbe/ngbe_ptypes.c    | 300 ++++++++++++++++++++++++++++++
 drivers/net/ngbe/ngbe_ptypes.h    | 240 ++++++++++++++++++++++++
 drivers/net/ngbe/ngbe_rxtx.c      |  16 ++
 drivers/net/ngbe/ngbe_rxtx.h      |   2 +
 9 files changed, 574 insertions(+)
 create mode 100644 drivers/net/ngbe/ngbe_ptypes.c
 create mode 100644 drivers/net/ngbe/ngbe_ptypes.h

diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index 08d5f1b0dc..8b7588184a 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -8,6 +8,7 @@ Speed capabilities   = Y
 Link status          = Y
 Link status event    = Y
 Queue start/stop     = Y
+Packet type parsing  = Y
 Multiprocess aware   = Y
 Linux                = Y
 ARMv8                = Y
diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
index 3ba3bb755f..d044397cd5 100644
--- a/doc/guides/nics/ngbe.rst
+++ b/doc/guides/nics/ngbe.rst
@@ -11,6 +11,7 @@ for Wangxun 1 Gigabit Ethernet NICs.
Features -------- +- Packet type information - Link state information diff --git a/drivers/net/ngbe/meson.build b/drivers/net/ngbe/meson.build index 815ef4da23..05f94fe7d6 100644 --- a/drivers/net/ngbe/meson.build +++ b/drivers/net/ngbe/meson.build @@ -12,6 +12,7 @@ objs = [base_objs] sources = files( 'ngbe_ethdev.c', + 'ngbe_ptypes.c', 'ngbe_rxtx.c', ) diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c index 3b5c6615ad..4388d93560 100644 --- a/drivers/net/ngbe/ngbe_ethdev.c +++ b/drivers/net/ngbe/ngbe_ethdev.c @@ -667,6 +667,15 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) return 0; } +const uint32_t * +ngbe_dev_supported_ptypes_get(struct rte_eth_dev *dev) +{ + if (dev->rx_pkt_burst == ngbe_recv_pkts) + return ngbe_get_supported_ptypes(); + + return NULL; +} + /* return 0 means link status changed, -1 means not changed */ int ngbe_dev_link_update_share(struct rte_eth_dev *dev, diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h index 7fb72f3f1f..486c6c3839 100644 --- a/drivers/net/ngbe/ngbe_ethdev.h +++ b/drivers/net/ngbe/ngbe_ethdev.h @@ -6,6 +6,8 @@ #ifndef _NGBE_ETHDEV_H_ #define _NGBE_ETHDEV_H_ +#include "ngbe_ptypes.h" + /* need update link, bit flag */ #define NGBE_FLAG_NEED_LINK_UPDATE ((uint32_t)(1 << 0)) #define NGBE_FLAG_MAILBOX ((uint32_t)(1 << 1)) @@ -131,4 +133,6 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev, #define NGBE_DEFAULT_TX_HTHRESH 0 #define NGBE_DEFAULT_TX_WTHRESH 0 +const uint32_t *ngbe_dev_supported_ptypes_get(struct rte_eth_dev *dev); + #endif /* _NGBE_ETHDEV_H_ */ diff --git a/drivers/net/ngbe/ngbe_ptypes.c b/drivers/net/ngbe/ngbe_ptypes.c new file mode 100644 index 0000000000..d6d82105c9 --- /dev/null +++ b/drivers/net/ngbe/ngbe_ptypes.c @@ -0,0 +1,300 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2021 Beijing WangXun Technology Co., Ltd. 
+ */
+
+#include
+#include
+
+#include "base/ngbe_type.h"
+#include "ngbe_ptypes.h"
+
+/* The ngbe_ptype_lookup is used to convert from the 8-bit ptid in the
+ * hardware to a bit-field that can be used by SW to more easily determine the
+ * packet type.
+ *
+ * Macros are used to shorten the table lines and make this table human
+ * readable.
+ *
+ * We store the PTYPE in the top byte of the bit field - this is just so that
+ * we can check that the table doesn't have a row missing, as the index into
+ * the table should be the PTYPE.
+ */
+#define TPTE(ptid, l2, l3, l4, tun, el2, el3, el4) \
+	[ptid] = (RTE_PTYPE_L2_##l2 | \
+		RTE_PTYPE_L3_##l3 | \
+		RTE_PTYPE_L4_##l4 | \
+		RTE_PTYPE_TUNNEL_##tun | \
+		RTE_PTYPE_INNER_L2_##el2 | \
+		RTE_PTYPE_INNER_L3_##el3 | \
+		RTE_PTYPE_INNER_L4_##el4)
+
+#define RTE_PTYPE_L2_NONE		0
+#define RTE_PTYPE_L3_NONE		0
+#define RTE_PTYPE_L4_NONE		0
+#define RTE_PTYPE_TUNNEL_NONE		0
+#define RTE_PTYPE_INNER_L2_NONE		0
+#define RTE_PTYPE_INNER_L3_NONE		0
+#define RTE_PTYPE_INNER_L4_NONE		0
+
+static u32 ngbe_ptype_lookup[NGBE_PTID_MAX] __rte_cache_aligned = {
+	/* L2:0-3 L3:4-7 L4:8-11 TUN:12-15 EL2:16-19 EL3:20-23 EL4:24-27 */
+	/* L2: ETH */
+	TPTE(0x10, ETHER, NONE, NONE, NONE, NONE, NONE, NONE),
+	TPTE(0x11, ETHER, NONE, NONE, NONE, NONE, NONE, NONE),
+	TPTE(0x12, ETHER_TIMESYNC, NONE, NONE, NONE, NONE, NONE, NONE),
+	TPTE(0x13, ETHER_FIP, NONE, NONE, NONE, NONE, NONE, NONE),
+	TPTE(0x14, ETHER_LLDP, NONE, NONE, NONE, NONE, NONE, NONE),
+	TPTE(0x15, ETHER_CNM, NONE, NONE, NONE, NONE, NONE, NONE),
+	TPTE(0x16, ETHER_EAPOL, NONE, NONE, NONE, NONE, NONE, NONE),
+	TPTE(0x17, ETHER_ARP, NONE, NONE, NONE, NONE, NONE, NONE),
+	/* L2: Ethertype Filter */
+	TPTE(0x18, ETHER_FILTER, NONE, NONE, NONE, NONE, NONE, NONE),
+	TPTE(0x19, ETHER_FILTER, NONE, NONE, NONE, NONE, NONE, NONE),
+	TPTE(0x1A, ETHER_FILTER, NONE, NONE, NONE, NONE, NONE, NONE),
+	TPTE(0x1B, ETHER_FILTER, NONE, NONE, NONE, NONE, NONE, NONE),
+	TPTE(0x1C, ETHER_FILTER, NONE, NONE,
NONE, NONE, NONE, NONE), + TPTE(0x1D, ETHER_FILTER, NONE, NONE, NONE, NONE, NONE, NONE), + TPTE(0x1E, ETHER_FILTER, NONE, NONE, NONE, NONE, NONE, NONE), + TPTE(0x1F, ETHER_FILTER, NONE, NONE, NONE, NONE, NONE, NONE), + /* L3: IP */ + TPTE(0x20, ETHER, IPV4, NONFRAG, NONE, NONE, NONE, NONE), + TPTE(0x21, ETHER, IPV4, FRAG, NONE, NONE, NONE, NONE), + TPTE(0x22, ETHER, IPV4, NONFRAG, NONE, NONE, NONE, NONE), + TPTE(0x23, ETHER, IPV4, UDP, NONE, NONE, NONE, NONE), + TPTE(0x24, ETHER, IPV4, TCP, NONE, NONE, NONE, NONE), + TPTE(0x25, ETHER, IPV4, SCTP, NONE, NONE, NONE, NONE), + TPTE(0x29, ETHER, IPV6, FRAG, NONE, NONE, NONE, NONE), + TPTE(0x2A, ETHER, IPV6, NONFRAG, NONE, NONE, NONE, NONE), + TPTE(0x2B, ETHER, IPV6, UDP, NONE, NONE, NONE, NONE), + TPTE(0x2C, ETHER, IPV6, TCP, NONE, NONE, NONE, NONE), + TPTE(0x2D, ETHER, IPV6, SCTP, NONE, NONE, NONE, NONE), + /* IPv4 -> IPv4/IPv6 */ + TPTE(0x81, ETHER, IPV4, NONE, IP, NONE, IPV4, FRAG), + TPTE(0x82, ETHER, IPV4, NONE, IP, NONE, IPV4, NONFRAG), + TPTE(0x83, ETHER, IPV4, NONE, IP, NONE, IPV4, UDP), + TPTE(0x84, ETHER, IPV4, NONE, IP, NONE, IPV4, TCP), + TPTE(0x85, ETHER, IPV4, NONE, IP, NONE, IPV4, SCTP), + TPTE(0x89, ETHER, IPV4, NONE, IP, NONE, IPV6, FRAG), + TPTE(0x8A, ETHER, IPV4, NONE, IP, NONE, IPV6, NONFRAG), + TPTE(0x8B, ETHER, IPV4, NONE, IP, NONE, IPV6, UDP), + TPTE(0x8C, ETHER, IPV4, NONE, IP, NONE, IPV6, TCP), + TPTE(0x8D, ETHER, IPV4, NONE, IP, NONE, IPV6, SCTP), + /* IPv6 -> IPv4/IPv6 */ + TPTE(0xC1, ETHER, IPV6, NONE, IP, NONE, IPV4, FRAG), + TPTE(0xC2, ETHER, IPV6, NONE, IP, NONE, IPV4, NONFRAG), + TPTE(0xC3, ETHER, IPV6, NONE, IP, NONE, IPV4, UDP), + TPTE(0xC4, ETHER, IPV6, NONE, IP, NONE, IPV4, TCP), + TPTE(0xC5, ETHER, IPV6, NONE, IP, NONE, IPV4, SCTP), + TPTE(0xC9, ETHER, IPV6, NONE, IP, NONE, IPV6, FRAG), + TPTE(0xCA, ETHER, IPV6, NONE, IP, NONE, IPV6, NONFRAG), + TPTE(0xCB, ETHER, IPV6, NONE, IP, NONE, IPV6, UDP), + TPTE(0xCC, ETHER, IPV6, NONE, IP, NONE, IPV6, TCP), + TPTE(0xCD, ETHER, IPV6, NONE, 
IP, NONE, IPV6, SCTP), +}; + +u32 *ngbe_get_supported_ptypes(void) +{ + static u32 ptypes[] = { + /* For non-vec functions, + * refers to ngbe_rxd_pkt_info_to_pkt_type(); + */ + RTE_PTYPE_L2_ETHER, + RTE_PTYPE_L3_IPV4, + RTE_PTYPE_L3_IPV4_EXT, + RTE_PTYPE_L3_IPV6, + RTE_PTYPE_L3_IPV6_EXT, + RTE_PTYPE_L4_SCTP, + RTE_PTYPE_L4_TCP, + RTE_PTYPE_L4_UDP, + RTE_PTYPE_TUNNEL_IP, + RTE_PTYPE_INNER_L3_IPV6, + RTE_PTYPE_INNER_L3_IPV6_EXT, + RTE_PTYPE_INNER_L4_TCP, + RTE_PTYPE_INNER_L4_UDP, + RTE_PTYPE_UNKNOWN + }; + + return ptypes; +} + +static inline u8 +ngbe_encode_ptype_mac(u32 ptype) +{ + u8 ptid; + + ptid = NGBE_PTID_PKT_MAC; + + switch (ptype & RTE_PTYPE_L2_MASK) { + case RTE_PTYPE_UNKNOWN: + break; + case RTE_PTYPE_L2_ETHER_TIMESYNC: + ptid |= NGBE_PTID_TYP_TS; + break; + case RTE_PTYPE_L2_ETHER_ARP: + ptid |= NGBE_PTID_TYP_ARP; + break; + case RTE_PTYPE_L2_ETHER_LLDP: + ptid |= NGBE_PTID_TYP_LLDP; + break; + default: + ptid |= NGBE_PTID_TYP_MAC; + break; + } + + return ptid; +} + +static inline u8 +ngbe_encode_ptype_ip(u32 ptype) +{ + u8 ptid; + + ptid = NGBE_PTID_PKT_IP; + + switch (ptype & RTE_PTYPE_L3_MASK) { + case RTE_PTYPE_L3_IPV4: + case RTE_PTYPE_L3_IPV4_EXT: + case RTE_PTYPE_L3_IPV4_EXT_UNKNOWN: + break; + case RTE_PTYPE_L3_IPV6: + case RTE_PTYPE_L3_IPV6_EXT: + case RTE_PTYPE_L3_IPV6_EXT_UNKNOWN: + ptid |= NGBE_PTID_PKT_IPV6; + break; + default: + return ngbe_encode_ptype_mac(ptype); + } + + switch (ptype & RTE_PTYPE_L4_MASK) { + case RTE_PTYPE_L4_TCP: + ptid |= NGBE_PTID_TYP_TCP; + break; + case RTE_PTYPE_L4_UDP: + ptid |= NGBE_PTID_TYP_UDP; + break; + case RTE_PTYPE_L4_SCTP: + ptid |= NGBE_PTID_TYP_SCTP; + break; + case RTE_PTYPE_L4_FRAG: + ptid |= NGBE_PTID_TYP_IPFRAG; + break; + default: + ptid |= NGBE_PTID_TYP_IPDATA; + break; + } + + return ptid; +} + +static inline u8 +ngbe_encode_ptype_tunnel(u32 ptype) +{ + u8 ptid; + + ptid = NGBE_PTID_PKT_TUN; + + switch (ptype & RTE_PTYPE_L3_MASK) { + case RTE_PTYPE_L3_IPV4: + case RTE_PTYPE_L3_IPV4_EXT: + case 
RTE_PTYPE_L3_IPV4_EXT_UNKNOWN: + break; + case RTE_PTYPE_L3_IPV6: + case RTE_PTYPE_L3_IPV6_EXT: + case RTE_PTYPE_L3_IPV6_EXT_UNKNOWN: + ptid |= NGBE_PTID_TUN_IPV6; + break; + default: + return ngbe_encode_ptype_ip(ptype); + } + + /* VXLAN/GRE/Teredo/VXLAN-GPE are not supported in EM */ + switch (ptype & RTE_PTYPE_TUNNEL_MASK) { + case RTE_PTYPE_TUNNEL_IP: + ptid |= NGBE_PTID_TUN_EI; + break; + case RTE_PTYPE_TUNNEL_GRE: + case RTE_PTYPE_TUNNEL_VXLAN_GPE: + ptid |= NGBE_PTID_TUN_EIG; + break; + case RTE_PTYPE_TUNNEL_VXLAN: + case RTE_PTYPE_TUNNEL_NVGRE: + case RTE_PTYPE_TUNNEL_GENEVE: + case RTE_PTYPE_TUNNEL_GRENAT: + break; + default: + return ptid; + } + + switch (ptype & RTE_PTYPE_INNER_L2_MASK) { + case RTE_PTYPE_INNER_L2_ETHER: + ptid |= NGBE_PTID_TUN_EIGM; + break; + case RTE_PTYPE_INNER_L2_ETHER_VLAN: + ptid |= NGBE_PTID_TUN_EIGMV; + break; + case RTE_PTYPE_INNER_L2_ETHER_QINQ: + ptid |= NGBE_PTID_TUN_EIGMV; + break; + default: + break; + } + + switch (ptype & RTE_PTYPE_INNER_L3_MASK) { + case RTE_PTYPE_INNER_L3_IPV4: + case RTE_PTYPE_INNER_L3_IPV4_EXT: + case RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN: + break; + case RTE_PTYPE_INNER_L3_IPV6: + case RTE_PTYPE_INNER_L3_IPV6_EXT: + case RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN: + ptid |= NGBE_PTID_PKT_IPV6; + break; + default: + return ptid; + } + + switch (ptype & RTE_PTYPE_INNER_L4_MASK) { + case RTE_PTYPE_INNER_L4_TCP: + ptid |= NGBE_PTID_TYP_TCP; + break; + case RTE_PTYPE_INNER_L4_UDP: + ptid |= NGBE_PTID_TYP_UDP; + break; + case RTE_PTYPE_INNER_L4_SCTP: + ptid |= NGBE_PTID_TYP_SCTP; + break; + case RTE_PTYPE_INNER_L4_FRAG: + ptid |= NGBE_PTID_TYP_IPFRAG; + break; + default: + ptid |= NGBE_PTID_TYP_IPDATA; + break; + } + + return ptid; +} + +u32 ngbe_decode_ptype(u8 ptid) +{ + if (-1 != ngbe_etflt_id(ptid)) + return RTE_PTYPE_UNKNOWN; + + return ngbe_ptype_lookup[ptid]; +} + +u8 ngbe_encode_ptype(u32 ptype) +{ + u8 ptid = 0; + + if (ptype & RTE_PTYPE_TUNNEL_MASK) + ptid = ngbe_encode_ptype_tunnel(ptype); + else if 
(ptype & RTE_PTYPE_L3_MASK) + ptid = ngbe_encode_ptype_ip(ptype); + else if (ptype & RTE_PTYPE_L2_MASK) + ptid = ngbe_encode_ptype_mac(ptype); + else + ptid = NGBE_PTID_NULL; + + return ptid; +} + diff --git a/drivers/net/ngbe/ngbe_ptypes.h b/drivers/net/ngbe/ngbe_ptypes.h new file mode 100644 index 0000000000..2ac33d814b --- /dev/null +++ b/drivers/net/ngbe/ngbe_ptypes.h @@ -0,0 +1,240 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2021 Beijing WangXun Technology Co., Ltd. + */ + +#ifndef _NGBE_PTYPE_H_ +#define _NGBE_PTYPE_H_ + +/** + * PTID(Packet Type Identifier, 8bits) + * - Bit 3:0 detailed types. + * - Bit 5:4 basic types. + * - Bit 7:6 tunnel types. + **/ +#define NGBE_PTID_NULL 0 +#define NGBE_PTID_MAX 256 +#define NGBE_PTID_MASK 0xFF +#define NGBE_PTID_MASK_TUNNEL 0x7F + +/* TUN */ +#define NGBE_PTID_TUN_IPV6 0x40 +#define NGBE_PTID_TUN_EI 0x00 /* IP */ +#define NGBE_PTID_TUN_EIG 0x10 /* IP+GRE */ +#define NGBE_PTID_TUN_EIGM 0x20 /* IP+GRE+MAC */ +#define NGBE_PTID_TUN_EIGMV 0x30 /* IP+GRE+MAC+VLAN */ + +/* PKT for !TUN */ +#define NGBE_PTID_PKT_TUN (0x80) +#define NGBE_PTID_PKT_MAC (0x10) +#define NGBE_PTID_PKT_IP (0x20) + +/* TYP for PKT=mac */ +#define NGBE_PTID_TYP_MAC (0x01) +#define NGBE_PTID_TYP_TS (0x02) /* time sync */ +#define NGBE_PTID_TYP_FIP (0x03) +#define NGBE_PTID_TYP_LLDP (0x04) +#define NGBE_PTID_TYP_CNM (0x05) +#define NGBE_PTID_TYP_EAPOL (0x06) +#define NGBE_PTID_TYP_ARP (0x07) +#define NGBE_PTID_TYP_ETF (0x08) + +/* TYP for PKT=ip */ +#define NGBE_PTID_PKT_IPV6 (0x08) +#define NGBE_PTID_TYP_IPFRAG (0x01) +#define NGBE_PTID_TYP_IPDATA (0x02) +#define NGBE_PTID_TYP_UDP (0x03) +#define NGBE_PTID_TYP_TCP (0x04) +#define NGBE_PTID_TYP_SCTP (0x05) + +/* packet type non-ip values */ +enum ngbe_l2_ptids { + NGBE_PTID_L2_ABORTED = (NGBE_PTID_PKT_MAC), + NGBE_PTID_L2_MAC = (NGBE_PTID_PKT_MAC | NGBE_PTID_TYP_MAC), + NGBE_PTID_L2_TMST = (NGBE_PTID_PKT_MAC | NGBE_PTID_TYP_TS), + NGBE_PTID_L2_FIP = (NGBE_PTID_PKT_MAC | 
NGBE_PTID_TYP_FIP), + NGBE_PTID_L2_LLDP = (NGBE_PTID_PKT_MAC | NGBE_PTID_TYP_LLDP), + NGBE_PTID_L2_CNM = (NGBE_PTID_PKT_MAC | NGBE_PTID_TYP_CNM), + NGBE_PTID_L2_EAPOL = (NGBE_PTID_PKT_MAC | NGBE_PTID_TYP_EAPOL), + NGBE_PTID_L2_ARP = (NGBE_PTID_PKT_MAC | NGBE_PTID_TYP_ARP), + + NGBE_PTID_L2_IPV4_FRAG = (NGBE_PTID_PKT_IP | NGBE_PTID_TYP_IPFRAG), + NGBE_PTID_L2_IPV4 = (NGBE_PTID_PKT_IP | NGBE_PTID_TYP_IPDATA), + NGBE_PTID_L2_IPV4_UDP = (NGBE_PTID_PKT_IP | NGBE_PTID_TYP_UDP), + NGBE_PTID_L2_IPV4_TCP = (NGBE_PTID_PKT_IP | NGBE_PTID_TYP_TCP), + NGBE_PTID_L2_IPV4_SCTP = (NGBE_PTID_PKT_IP | NGBE_PTID_TYP_SCTP), + NGBE_PTID_L2_IPV6_FRAG = (NGBE_PTID_PKT_IP | NGBE_PTID_PKT_IPV6 | + NGBE_PTID_TYP_IPFRAG), + NGBE_PTID_L2_IPV6 = (NGBE_PTID_PKT_IP | NGBE_PTID_PKT_IPV6 | + NGBE_PTID_TYP_IPDATA), + NGBE_PTID_L2_IPV6_UDP = (NGBE_PTID_PKT_IP | NGBE_PTID_PKT_IPV6 | + NGBE_PTID_TYP_UDP), + NGBE_PTID_L2_IPV6_TCP = (NGBE_PTID_PKT_IP | NGBE_PTID_PKT_IPV6 | + NGBE_PTID_TYP_TCP), + NGBE_PTID_L2_IPV6_SCTP = (NGBE_PTID_PKT_IP | NGBE_PTID_PKT_IPV6 | + NGBE_PTID_TYP_SCTP), + + NGBE_PTID_L2_TUN4_MAC = (NGBE_PTID_PKT_TUN | + NGBE_PTID_TUN_EIGM), + NGBE_PTID_L2_TUN6_MAC = (NGBE_PTID_PKT_TUN | + NGBE_PTID_TUN_IPV6 | NGBE_PTID_TUN_EIGM), +}; + + +/* + * PTYPE(Packet Type, 32bits) + * - Bit 3:0 is for L2 types. + * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types. + * - Bit 11:8 is for L4 or outer L4 (for tunneling case) types. + * - Bit 15:12 is for tunnel types. + * - Bit 19:16 is for inner L2 types. + * - Bit 23:20 is for inner L3 types. + * - Bit 27:24 is for inner L4 types. + * - Bit 31:28 is reserved. 
+ * please ref to rte_mbuf.h: rte_mbuf.packet_type + */ +struct rte_ngbe_ptype { + u32 l2:4; /* outer mac */ + u32 l3:4; /* outer internet protocol */ + u32 l4:4; /* outer transport protocol */ + u32 tun:4; /* tunnel protocol */ + + u32 el2:4; /* inner mac */ + u32 el3:4; /* inner internet protocol */ + u32 el4:4; /* inner transport protocol */ + u32 rsv:3; + u32 known:1; +}; + +#ifndef RTE_PTYPE_UNKNOWN +#define RTE_PTYPE_UNKNOWN 0x00000000 +#define RTE_PTYPE_L2_ETHER 0x00000001 +#define RTE_PTYPE_L2_ETHER_TIMESYNC 0x00000002 +#define RTE_PTYPE_L2_ETHER_ARP 0x00000003 +#define RTE_PTYPE_L2_ETHER_LLDP 0x00000004 +#define RTE_PTYPE_L2_ETHER_NSH 0x00000005 +#define RTE_PTYPE_L2_ETHER_FCOE 0x00000009 +#define RTE_PTYPE_L3_IPV4 0x00000010 +#define RTE_PTYPE_L3_IPV4_EXT 0x00000030 +#define RTE_PTYPE_L3_IPV6 0x00000040 +#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN 0x00000090 +#define RTE_PTYPE_L3_IPV6_EXT 0x000000c0 +#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN 0x000000e0 +#define RTE_PTYPE_L4_TCP 0x00000100 +#define RTE_PTYPE_L4_UDP 0x00000200 +#define RTE_PTYPE_L4_FRAG 0x00000300 +#define RTE_PTYPE_L4_SCTP 0x00000400 +#define RTE_PTYPE_L4_ICMP 0x00000500 +#define RTE_PTYPE_L4_NONFRAG 0x00000600 +#define RTE_PTYPE_TUNNEL_IP 0x00001000 +#define RTE_PTYPE_TUNNEL_GRE 0x00002000 +#define RTE_PTYPE_TUNNEL_VXLAN 0x00003000 +#define RTE_PTYPE_TUNNEL_NVGRE 0x00004000 +#define RTE_PTYPE_TUNNEL_GENEVE 0x00005000 +#define RTE_PTYPE_TUNNEL_GRENAT 0x00006000 +#define RTE_PTYPE_INNER_L2_ETHER 0x00010000 +#define RTE_PTYPE_INNER_L2_ETHER_VLAN 0x00020000 +#define RTE_PTYPE_INNER_L3_IPV4 0x00100000 +#define RTE_PTYPE_INNER_L3_IPV4_EXT 0x00200000 +#define RTE_PTYPE_INNER_L3_IPV6 0x00300000 +#define RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x00400000 +#define RTE_PTYPE_INNER_L3_IPV6_EXT 0x00500000 +#define RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x00600000 +#define RTE_PTYPE_INNER_L4_TCP 0x01000000 +#define RTE_PTYPE_INNER_L4_UDP 0x02000000 +#define RTE_PTYPE_INNER_L4_FRAG 0x03000000 +#define 
RTE_PTYPE_INNER_L4_SCTP 0x04000000 +#define RTE_PTYPE_INNER_L4_ICMP 0x05000000 +#define RTE_PTYPE_INNER_L4_NONFRAG 0x06000000 +#endif /* !RTE_PTYPE_UNKNOWN */ +#define RTE_PTYPE_L3_IPV4u RTE_PTYPE_L3_IPV4_EXT_UNKNOWN +#define RTE_PTYPE_L3_IPV6u RTE_PTYPE_L3_IPV6_EXT_UNKNOWN +#define RTE_PTYPE_INNER_L3_IPV4u RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN +#define RTE_PTYPE_INNER_L3_IPV6u RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN +#define RTE_PTYPE_L2_ETHER_FIP RTE_PTYPE_L2_ETHER +#define RTE_PTYPE_L2_ETHER_CNM RTE_PTYPE_L2_ETHER +#define RTE_PTYPE_L2_ETHER_EAPOL RTE_PTYPE_L2_ETHER +#define RTE_PTYPE_L2_ETHER_FILTER RTE_PTYPE_L2_ETHER + +u32 *ngbe_get_supported_ptypes(void); +u32 ngbe_decode_ptype(u8 ptid); +u8 ngbe_encode_ptype(u32 ptype); + +/** + * PT(Packet Type, 32bits) + * - Bit 3:0 is for L2 types. + * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types. + * - Bit 11:8 is for L4 or outer L4 (for tunneling case) types. + * - Bit 15:12 is for tunnel types. + * - Bit 19:16 is for inner L2 types. + * - Bit 23:20 is for inner L3 types. + * - Bit 27:24 is for inner L4 types. + * - Bit 31:28 is reserved. 
+ * PT is a more accurate version of PTYPE + **/ +#define NGBE_PT_ETHER 0x00 +#define NGBE_PT_IPV4 0x01 +#define NGBE_PT_IPV4_TCP 0x11 +#define NGBE_PT_IPV4_UDP 0x21 +#define NGBE_PT_IPV4_SCTP 0x41 +#define NGBE_PT_IPV4_EXT 0x03 +#define NGBE_PT_IPV4_EXT_TCP 0x13 +#define NGBE_PT_IPV4_EXT_UDP 0x23 +#define NGBE_PT_IPV4_EXT_SCTP 0x43 +#define NGBE_PT_IPV6 0x04 +#define NGBE_PT_IPV6_TCP 0x14 +#define NGBE_PT_IPV6_UDP 0x24 +#define NGBE_PT_IPV6_SCTP 0x44 +#define NGBE_PT_IPV6_EXT 0x0C +#define NGBE_PT_IPV6_EXT_TCP 0x1C +#define NGBE_PT_IPV6_EXT_UDP 0x2C +#define NGBE_PT_IPV6_EXT_SCTP 0x4C +#define NGBE_PT_IPV4_IPV6 0x05 +#define NGBE_PT_IPV4_IPV6_TCP 0x15 +#define NGBE_PT_IPV4_IPV6_UDP 0x25 +#define NGBE_PT_IPV4_IPV6_SCTP 0x45 +#define NGBE_PT_IPV4_EXT_IPV6 0x07 +#define NGBE_PT_IPV4_EXT_IPV6_TCP 0x17 +#define NGBE_PT_IPV4_EXT_IPV6_UDP 0x27 +#define NGBE_PT_IPV4_EXT_IPV6_SCTP 0x47 +#define NGBE_PT_IPV4_IPV6_EXT 0x0D +#define NGBE_PT_IPV4_IPV6_EXT_TCP 0x1D +#define NGBE_PT_IPV4_IPV6_EXT_UDP 0x2D +#define NGBE_PT_IPV4_IPV6_EXT_SCTP 0x4D +#define NGBE_PT_IPV4_EXT_IPV6_EXT 0x0F +#define NGBE_PT_IPV4_EXT_IPV6_EXT_TCP 0x1F +#define NGBE_PT_IPV4_EXT_IPV6_EXT_UDP 0x2F +#define NGBE_PT_IPV4_EXT_IPV6_EXT_SCTP 0x4F + +#define NGBE_PT_MAX 256 + +/* ether type filter list: one static filter per filter consumer. This is + * to avoid filter collisions later. Add new filters + * here!! 
+ * EAPOL 802.1x (0x888e): Filter 0 + * FCoE (0x8906): Filter 2 + * 1588 (0x88f7): Filter 3 + * FIP (0x8914): Filter 4 + * LLDP (0x88CC): Filter 5 + * LACP (0x8809): Filter 6 + * FC (0x8808): Filter 7 + */ +#define NGBE_ETF_ID_EAPOL 0 +#define NGBE_ETF_ID_FCOE 2 +#define NGBE_ETF_ID_1588 3 +#define NGBE_ETF_ID_FIP 4 +#define NGBE_ETF_ID_LLDP 5 +#define NGBE_ETF_ID_LACP 6 +#define NGBE_ETF_ID_FC 7 +#define NGBE_ETF_ID_MAX 8 + +#define NGBE_PTID_ETF_MIN 0x18 +#define NGBE_PTID_ETF_MAX 0x1F +static inline int ngbe_etflt_id(u8 ptid) +{ + if (ptid >= NGBE_PTID_ETF_MIN && ptid <= NGBE_PTID_ETF_MAX) + return ptid - NGBE_PTID_ETF_MIN; + else + return -1; +} + +#endif /* _NGBE_PTYPE_H_ */ diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c index 5c06e0d550..a3ef0f7577 100644 --- a/drivers/net/ngbe/ngbe_rxtx.c +++ b/drivers/net/ngbe/ngbe_rxtx.c @@ -253,6 +253,16 @@ ngbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts, * Rx functions * **********************************************************************/ +static inline uint32_t +ngbe_rxd_pkt_info_to_pkt_type(uint32_t pkt_info, uint16_t ptid_mask) +{ + uint16_t ptid = NGBE_RXD_PTID(pkt_info); + + ptid &= ptid_mask; + + return ngbe_decode_ptype(ptid); +} + uint16_t ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) @@ -267,6 +277,7 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, struct ngbe_rx_desc rxd; uint64_t dma_addr; uint32_t staterr; + uint32_t pkt_info; uint16_t pkt_len; uint16_t rx_id; uint16_t nb_rx; @@ -378,6 +389,10 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, rxm->data_len = pkt_len; rxm->port = rxq->port_id; + pkt_info = rte_le_to_cpu_32(rxd.qw0.dw0); + rxm->packet_type = ngbe_rxd_pkt_info_to_pkt_type(pkt_info, + rxq->pkt_type_mask); + /* * Store the mbuf address into the next entry of the array * of returned packets. 
@@ -799,6 +814,7 @@ ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->port_id = dev->data->port_id;
 	rxq->drop_en = rx_conf->rx_drop_en;
 	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+	rxq->pkt_type_mask = NGBE_PTID_MASK;
 
 	/*
 	 * Allocate Rx ring hardware descriptors. A memzone large enough to
diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
index a89d59e06b..788d684def 100644
--- a/drivers/net/ngbe/ngbe_rxtx.h
+++ b/drivers/net/ngbe/ngbe_rxtx.h
@@ -238,6 +238,8 @@ struct ngbe_rx_queue {
 	uint16_t rx_free_thresh; /**< max free Rx desc to hold */
 	uint16_t queue_id; /**< RX queue index */
 	uint16_t reg_idx; /**< RX queue register index */
+	/** Packet type mask for different NICs */
+	uint16_t pkt_type_mask;
 	uint16_t port_id; /**< Device port identifier */
 	uint8_t drop_en; /**< If not 0, set SRRCTL.Drop_En */
 	uint8_t rx_deferred_start; /**< not in global dev start */

From patchwork Wed Sep 8 08:37:28 2021
X-Patchwork-Submitter: Jiawen Wu
X-Patchwork-Id: 98287
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Jiawen Wu
To: dev@dpdk.org
Cc: Jiawen Wu
Date: Wed, 8 Sep 2021 16:37:28 +0800
Message-Id: <20210908083758.312055-3-jiawenwu@trustnetic.com>
In-Reply-To: <20210908083758.312055-1-jiawenwu@trustnetic.com>
References: <20210908083758.312055-1-jiawenwu@trustnetic.com>
Subject: [dpdk-dev] [PATCH 02/32] net/ngbe: support scattered Rx

Add scattered Rx function to support receiving segmented mbufs.
Signed-off-by: Jiawen Wu --- doc/guides/nics/features/ngbe.ini | 1 + doc/guides/nics/ngbe.rst | 1 + drivers/net/ngbe/ngbe_ethdev.c | 20 +- drivers/net/ngbe/ngbe_ethdev.h | 8 + drivers/net/ngbe/ngbe_rxtx.c | 541 ++++++++++++++++++++++++++++++ drivers/net/ngbe/ngbe_rxtx.h | 5 + 6 files changed, 574 insertions(+), 2 deletions(-) diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini index 8b7588184a..f85754eb7a 100644 --- a/doc/guides/nics/features/ngbe.ini +++ b/doc/guides/nics/features/ngbe.ini @@ -8,6 +8,7 @@ Speed capabilities = Y Link status = Y Link status event = Y Queue start/stop = Y +Scattered Rx = Y Packet type parsing = Y Multiprocess aware = Y Linux = Y diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst index d044397cd5..463452ce8c 100644 --- a/doc/guides/nics/ngbe.rst +++ b/doc/guides/nics/ngbe.rst @@ -13,6 +13,7 @@ Features - Packet type information - Link state information +- Scattered for RX Prerequisites diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c index 4388d93560..fba0a2dcfd 100644 --- a/drivers/net/ngbe/ngbe_ethdev.c +++ b/drivers/net/ngbe/ngbe_ethdev.c @@ -140,8 +140,16 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused) eth_dev->rx_pkt_burst = &ngbe_recv_pkts; eth_dev->tx_pkt_burst = &ngbe_xmit_pkts_simple; - if (rte_eal_process_type() != RTE_PROC_PRIMARY) + /* + * For secondary processes, we don't initialise any further as primary + * has already done this work. Only check we don't need a different + * Rx and Tx function. 
+ */ + if (rte_eal_process_type() != RTE_PROC_PRIMARY) { + ngbe_set_rx_function(eth_dev); + return 0; + } rte_eth_copy_pci_info(eth_dev, pci_dev); @@ -528,6 +536,9 @@ ngbe_dev_stop(struct rte_eth_dev *dev) ngbe_dev_clear_queues(dev); + /* Clear stored conf */ + dev->data->scattered_rx = 0; + /* Clear recorded link status */ memset(&link, 0, sizeof(link)); rte_eth_linkstatus_set(dev, &link); @@ -628,6 +639,8 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) dev_info->max_tx_queues = (uint16_t)hw->mac.max_tx_queues; dev_info->min_rx_bufsize = 1024; dev_info->max_rx_pktlen = 15872; + dev_info->rx_offload_capa = (ngbe_get_rx_port_offloads(dev) | + dev_info->rx_queue_offload_capa); dev_info->default_rxconf = (struct rte_eth_rxconf) { .rx_thresh = { @@ -670,7 +683,10 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) const uint32_t * ngbe_dev_supported_ptypes_get(struct rte_eth_dev *dev) { - if (dev->rx_pkt_burst == ngbe_recv_pkts) + if (dev->rx_pkt_burst == ngbe_recv_pkts || + dev->rx_pkt_burst == ngbe_recv_pkts_sc_single_alloc || + dev->rx_pkt_burst == ngbe_recv_pkts_sc_bulk_alloc || + dev->rx_pkt_burst == ngbe_recv_pkts_bulk_alloc) return ngbe_get_supported_ptypes(); return NULL; diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h index 486c6c3839..e7fe9a03b7 100644 --- a/drivers/net/ngbe/ngbe_ethdev.h +++ b/drivers/net/ngbe/ngbe_ethdev.h @@ -106,6 +106,14 @@ int ngbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id); uint16_t ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); +uint16_t ngbe_recv_pkts_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts); + +uint16_t ngbe_recv_pkts_sc_single_alloc(void *rx_queue, + struct rte_mbuf **rx_pkts, uint16_t nb_pkts); +uint16_t ngbe_recv_pkts_sc_bulk_alloc(void *rx_queue, + struct rte_mbuf **rx_pkts, uint16_t nb_pkts); + uint16_t ngbe_xmit_pkts_simple(void *tx_queue, struct 
rte_mbuf **tx_pkts, uint16_t nb_pkts); diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c index a3ef0f7577..49fa978853 100644 --- a/drivers/net/ngbe/ngbe_rxtx.c +++ b/drivers/net/ngbe/ngbe_rxtx.c @@ -263,6 +263,243 @@ ngbe_rxd_pkt_info_to_pkt_type(uint32_t pkt_info, uint16_t ptid_mask) return ngbe_decode_ptype(ptid); } +/* + * LOOK_AHEAD defines how many desc statuses to check beyond the + * current descriptor. + * It must be a pound define for optimal performance. + * Do not change the value of LOOK_AHEAD, as the ngbe_rx_scan_hw_ring + * function only works with LOOK_AHEAD=8. + */ +#define LOOK_AHEAD 8 +#if (LOOK_AHEAD != 8) +#error "PMD NGBE: LOOK_AHEAD must be 8\n" +#endif +static inline int +ngbe_rx_scan_hw_ring(struct ngbe_rx_queue *rxq) +{ + volatile struct ngbe_rx_desc *rxdp; + struct ngbe_rx_entry *rxep; + struct rte_mbuf *mb; + uint16_t pkt_len; + int nb_dd; + uint32_t s[LOOK_AHEAD]; + uint32_t pkt_info[LOOK_AHEAD]; + int i, j, nb_rx = 0; + uint32_t status; + + /* get references to current descriptor and S/W ring entry */ + rxdp = &rxq->rx_ring[rxq->rx_tail]; + rxep = &rxq->sw_ring[rxq->rx_tail]; + + status = rxdp->qw1.lo.status; + /* check to make sure there is at least 1 packet to receive */ + if (!(status & rte_cpu_to_le_32(NGBE_RXD_STAT_DD))) + return 0; + + /* + * Scan LOOK_AHEAD descriptors at a time to determine which descriptors + * reference packets that are ready to be received. 
+ */ + for (i = 0; i < RTE_PMD_NGBE_RX_MAX_BURST; + i += LOOK_AHEAD, rxdp += LOOK_AHEAD, rxep += LOOK_AHEAD) { + /* Read desc statuses backwards to avoid race condition */ + for (j = 0; j < LOOK_AHEAD; j++) + s[j] = rte_le_to_cpu_32(rxdp[j].qw1.lo.status); + + rte_atomic_thread_fence(__ATOMIC_ACQUIRE); + + /* Compute how many status bits were set */ + for (nb_dd = 0; nb_dd < LOOK_AHEAD && + (s[nb_dd] & NGBE_RXD_STAT_DD); nb_dd++) + ; + + for (j = 0; j < nb_dd; j++) + pkt_info[j] = rte_le_to_cpu_32(rxdp[j].qw0.dw0); + + nb_rx += nb_dd; + + /* Translate descriptor info to mbuf format */ + for (j = 0; j < nb_dd; ++j) { + mb = rxep[j].mbuf; + pkt_len = rte_le_to_cpu_16(rxdp[j].qw1.hi.len); + mb->data_len = pkt_len; + mb->pkt_len = pkt_len; + + mb->packet_type = + ngbe_rxd_pkt_info_to_pkt_type(pkt_info[j], + rxq->pkt_type_mask); + } + + /* Move mbuf pointers from the S/W ring to the stage */ + for (j = 0; j < LOOK_AHEAD; ++j) + rxq->rx_stage[i + j] = rxep[j].mbuf; + + /* stop if all requested packets could not be received */ + if (nb_dd != LOOK_AHEAD) + break; + } + + /* clear software ring entries so we can cleanup correctly */ + for (i = 0; i < nb_rx; ++i) + rxq->sw_ring[rxq->rx_tail + i].mbuf = NULL; + + return nb_rx; +} + +static inline int +ngbe_rx_alloc_bufs(struct ngbe_rx_queue *rxq, bool reset_mbuf) +{ + volatile struct ngbe_rx_desc *rxdp; + struct ngbe_rx_entry *rxep; + struct rte_mbuf *mb; + uint16_t alloc_idx; + __le64 dma_addr; + int diag, i; + + /* allocate buffers in bulk directly into the S/W ring */ + alloc_idx = rxq->rx_free_trigger - (rxq->rx_free_thresh - 1); + rxep = &rxq->sw_ring[alloc_idx]; + diag = rte_mempool_get_bulk(rxq->mb_pool, (void *)rxep, + rxq->rx_free_thresh); + if (unlikely(diag != 0)) + return -ENOMEM; + + rxdp = &rxq->rx_ring[alloc_idx]; + for (i = 0; i < rxq->rx_free_thresh; ++i) { + /* populate the static rte mbuf fields */ + mb = rxep[i].mbuf; + if (reset_mbuf) + mb->port = rxq->port_id; + + rte_mbuf_refcnt_set(mb, 1); + 
mb->data_off = RTE_PKTMBUF_HEADROOM; + + /* populate the descriptors */ + dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mb)); + NGBE_RXD_HDRADDR(&rxdp[i], 0); + NGBE_RXD_PKTADDR(&rxdp[i], dma_addr); + } + + /* update state of internal queue structure */ + rxq->rx_free_trigger = rxq->rx_free_trigger + rxq->rx_free_thresh; + if (rxq->rx_free_trigger >= rxq->nb_rx_desc) + rxq->rx_free_trigger = rxq->rx_free_thresh - 1; + + /* no errors */ + return 0; +} + +static inline uint16_t +ngbe_rx_fill_from_stage(struct ngbe_rx_queue *rxq, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + struct rte_mbuf **stage = &rxq->rx_stage[rxq->rx_next_avail]; + int i; + + /* how many packets are ready to return? */ + nb_pkts = (uint16_t)RTE_MIN(nb_pkts, rxq->rx_nb_avail); + + /* copy mbuf pointers to the application's packet list */ + for (i = 0; i < nb_pkts; ++i) + rx_pkts[i] = stage[i]; + + /* update internal queue state */ + rxq->rx_nb_avail = (uint16_t)(rxq->rx_nb_avail - nb_pkts); + rxq->rx_next_avail = (uint16_t)(rxq->rx_next_avail + nb_pkts); + + return nb_pkts; +} + +static inline uint16_t +ngbe_rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + struct ngbe_rx_queue *rxq = (struct ngbe_rx_queue *)rx_queue; + struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id]; + uint16_t nb_rx = 0; + + /* Any previously recv'd pkts will be returned from the Rx stage */ + if (rxq->rx_nb_avail) + return ngbe_rx_fill_from_stage(rxq, rx_pkts, nb_pkts); + + /* Scan the H/W ring for packets to receive */ + nb_rx = (uint16_t)ngbe_rx_scan_hw_ring(rxq); + + /* update internal queue state */ + rxq->rx_next_avail = 0; + rxq->rx_nb_avail = nb_rx; + rxq->rx_tail = (uint16_t)(rxq->rx_tail + nb_rx); + + /* if required, allocate new buffers to replenish descriptors */ + if (rxq->rx_tail > rxq->rx_free_trigger) { + uint16_t cur_free_trigger = rxq->rx_free_trigger; + + if (ngbe_rx_alloc_bufs(rxq, true) != 0) { + int i, j; + + PMD_RX_LOG(DEBUG, "RX mbuf alloc failed 
port_id=%u " + "queue_id=%u", (uint16_t)rxq->port_id, + (uint16_t)rxq->queue_id); + + dev->data->rx_mbuf_alloc_failed += + rxq->rx_free_thresh; + + /* + * Need to rewind any previous receives if we cannot + * allocate new buffers to replenish the old ones. + */ + rxq->rx_nb_avail = 0; + rxq->rx_tail = (uint16_t)(rxq->rx_tail - nb_rx); + for (i = 0, j = rxq->rx_tail; i < nb_rx; ++i, ++j) + rxq->sw_ring[j].mbuf = rxq->rx_stage[i]; + + return 0; + } + + /* update tail pointer */ + rte_wmb(); + ngbe_set32_relaxed(rxq->rdt_reg_addr, cur_free_trigger); + } + + if (rxq->rx_tail >= rxq->nb_rx_desc) + rxq->rx_tail = 0; + + /* received any packets this loop? */ + if (rxq->rx_nb_avail) + return ngbe_rx_fill_from_stage(rxq, rx_pkts, nb_pkts); + + return 0; +} + +/* split requests into chunks of size RTE_PMD_NGBE_RX_MAX_BURST */ +uint16_t +ngbe_recv_pkts_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + uint16_t nb_rx; + + if (unlikely(nb_pkts == 0)) + return 0; + + if (likely(nb_pkts <= RTE_PMD_NGBE_RX_MAX_BURST)) + return ngbe_rx_recv_pkts(rx_queue, rx_pkts, nb_pkts); + + /* request is relatively large, chunk it up */ + nb_rx = 0; + while (nb_pkts) { + uint16_t ret, n; + + n = (uint16_t)RTE_MIN(nb_pkts, RTE_PMD_NGBE_RX_MAX_BURST); + ret = ngbe_rx_recv_pkts(rx_queue, &rx_pkts[nb_rx], n); + nb_rx = (uint16_t)(nb_rx + ret); + nb_pkts = (uint16_t)(nb_pkts - ret); + if (ret < n) + break; + } + + return nb_rx; +} + uint16_t ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) @@ -426,6 +663,246 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, return nb_rx; } +static inline void +ngbe_fill_cluster_head_buf(struct rte_mbuf *head, struct ngbe_rx_desc *desc, + struct ngbe_rx_queue *rxq, uint32_t staterr) +{ + uint32_t pkt_info; + + RTE_SET_USED(staterr); + head->port = rxq->port_id; + + pkt_info = rte_le_to_cpu_32(desc->qw0.dw0); + head->packet_type = ngbe_rxd_pkt_info_to_pkt_type(pkt_info, + rxq->pkt_type_mask); +} + +/** 
+ * ngbe_recv_pkts_sc - receive handler for scatter case. + * + * @rx_queue Rx queue handle + * @rx_pkts table of received packets + * @nb_pkts size of rx_pkts table + * @bulk_alloc if TRUE bulk allocation is used for a HW ring refilling + * + * Returns the number of received packets/clusters (according to the "bulk + * receive" interface). + */ +static inline uint16_t +ngbe_recv_pkts_sc(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts, + bool bulk_alloc) +{ + struct ngbe_rx_queue *rxq = rx_queue; + struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id]; + volatile struct ngbe_rx_desc *rx_ring = rxq->rx_ring; + struct ngbe_rx_entry *sw_ring = rxq->sw_ring; + struct ngbe_scattered_rx_entry *sw_sc_ring = rxq->sw_sc_ring; + uint16_t rx_id = rxq->rx_tail; + uint16_t nb_rx = 0; + uint16_t nb_hold = rxq->nb_rx_hold; + uint16_t prev_id = rxq->rx_tail; + + while (nb_rx < nb_pkts) { + bool eop; + struct ngbe_rx_entry *rxe; + struct ngbe_scattered_rx_entry *sc_entry; + struct ngbe_scattered_rx_entry *next_sc_entry = NULL; + struct ngbe_rx_entry *next_rxe = NULL; + struct rte_mbuf *first_seg; + struct rte_mbuf *rxm; + struct rte_mbuf *nmb = NULL; + struct ngbe_rx_desc rxd; + uint16_t data_len; + uint16_t next_id; + volatile struct ngbe_rx_desc *rxdp; + uint32_t staterr; + +next_desc: + rxdp = &rx_ring[rx_id]; + staterr = rte_le_to_cpu_32(rxdp->qw1.lo.status); + + if (!(staterr & NGBE_RXD_STAT_DD)) + break; + + rxd = *rxdp; + + PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_id=%u " + "staterr=0x%x data_len=%u", + rxq->port_id, rxq->queue_id, rx_id, staterr, + rte_le_to_cpu_16(rxd.qw1.hi.len)); + + if (!bulk_alloc) { + nmb = rte_mbuf_raw_alloc(rxq->mb_pool); + if (nmb == NULL) { + PMD_RX_LOG(DEBUG, "Rx mbuf alloc failed " + "port_id=%u queue_id=%u", + rxq->port_id, rxq->queue_id); + + dev->data->rx_mbuf_alloc_failed++; + break; + } + } else if (nb_hold > rxq->rx_free_thresh) { + uint16_t next_rdt = rxq->rx_free_trigger; + + if (!ngbe_rx_alloc_bufs(rxq, false)) { + 
rte_wmb(); + ngbe_set32_relaxed(rxq->rdt_reg_addr, + next_rdt); + nb_hold -= rxq->rx_free_thresh; + } else { + PMD_RX_LOG(DEBUG, "Rx bulk alloc failed " + "port_id=%u queue_id=%u", + rxq->port_id, rxq->queue_id); + + dev->data->rx_mbuf_alloc_failed++; + break; + } + } + + nb_hold++; + rxe = &sw_ring[rx_id]; + eop = staterr & NGBE_RXD_STAT_EOP; + + next_id = rx_id + 1; + if (next_id == rxq->nb_rx_desc) + next_id = 0; + + /* Prefetch next mbuf while processing current one. */ + rte_ngbe_prefetch(sw_ring[next_id].mbuf); + + /* + * When next Rx descriptor is on a cache-line boundary, + * prefetch the next 4 RX descriptors and the next 4 pointers + * to mbufs. + */ + if ((next_id & 0x3) == 0) { + rte_ngbe_prefetch(&rx_ring[next_id]); + rte_ngbe_prefetch(&sw_ring[next_id]); + } + + rxm = rxe->mbuf; + + if (!bulk_alloc) { + __le64 dma = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb)); + /* + * Update Rx descriptor with the physical address of the + * new data buffer of the new allocated mbuf. + */ + rxe->mbuf = nmb; + + rxm->data_off = RTE_PKTMBUF_HEADROOM; + NGBE_RXD_HDRADDR(rxdp, 0); + NGBE_RXD_PKTADDR(rxdp, dma); + } else { + rxe->mbuf = NULL; + } + + /* + * Set data length & data buffer address of mbuf. + */ + data_len = rte_le_to_cpu_16(rxd.qw1.hi.len); + rxm->data_len = data_len; + + if (!eop) { + uint16_t nextp_id; + + nextp_id = next_id; + next_sc_entry = &sw_sc_ring[nextp_id]; + next_rxe = &sw_ring[nextp_id]; + rte_ngbe_prefetch(next_rxe); + } + + sc_entry = &sw_sc_ring[rx_id]; + first_seg = sc_entry->fbuf; + sc_entry->fbuf = NULL; + + /* + * If this is the first buffer of the received packet, + * set the pointer to the first mbuf of the packet and + * initialize its context. + * Otherwise, update the total length and the number of segments + * of the current scattered packet, and update the pointer to + * the last mbuf of the current packet. 
+ */ + if (first_seg == NULL) { + first_seg = rxm; + first_seg->pkt_len = data_len; + first_seg->nb_segs = 1; + } else { + first_seg->pkt_len += data_len; + first_seg->nb_segs++; + } + + prev_id = rx_id; + rx_id = next_id; + + /* + * If this is not the last buffer of the received packet, update + * the pointer to the first mbuf at the NEXTP entry in the + * sw_sc_ring and continue to parse the Rx ring. + */ + if (!eop && next_rxe) { + rxm->next = next_rxe->mbuf; + next_sc_entry->fbuf = first_seg; + goto next_desc; + } + + /* Initialize the first mbuf of the returned packet */ + ngbe_fill_cluster_head_buf(first_seg, &rxd, rxq, staterr); + + /* Prefetch data of first segment, if configured to do so. */ + rte_packet_prefetch((char *)first_seg->buf_addr + + first_seg->data_off); + + /* + * Store the mbuf address into the next entry of the array + * of returned packets. + */ + rx_pkts[nb_rx++] = first_seg; + } + + /* + * Record index of the next Rx descriptor to probe. + */ + rxq->rx_tail = rx_id; + + /* + * If the number of free Rx descriptors is greater than the Rx free + * threshold of the queue, advance the Receive Descriptor Tail (RDT) + * register. + * Update the RDT with the value of the last processed Rx descriptor + * minus 1, to guarantee that the RDT register is never equal to the + * RDH register, which creates a "full" ring situation from the + * hardware point of view... 
+ */ + if (!bulk_alloc && nb_hold > rxq->rx_free_thresh) { + PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u " + "nb_hold=%u nb_rx=%u", + rxq->port_id, rxq->queue_id, rx_id, nb_hold, nb_rx); + + rte_wmb(); + ngbe_set32_relaxed(rxq->rdt_reg_addr, prev_id); + nb_hold = 0; + } + + rxq->nb_rx_hold = nb_hold; + return nb_rx; +} + +uint16_t +ngbe_recv_pkts_sc_single_alloc(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + return ngbe_recv_pkts_sc(rx_queue, rx_pkts, nb_pkts, false); +} + +uint16_t +ngbe_recv_pkts_sc_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + return ngbe_recv_pkts_sc(rx_queue, rx_pkts, nb_pkts, true); +} /********************************************************************* * @@ -777,6 +1254,12 @@ ngbe_reset_rx_queue(struct ngbe_adapter *adapter, struct ngbe_rx_queue *rxq) rxq->pkt_last_seg = NULL; } +uint64_t +ngbe_get_rx_port_offloads(struct rte_eth_dev *dev __rte_unused) +{ + return DEV_RX_OFFLOAD_SCATTER; +} + int ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, @@ -790,10 +1273,13 @@ ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev, struct ngbe_hw *hw; uint16_t len; struct ngbe_adapter *adapter = ngbe_dev_adapter(dev); + uint64_t offloads; PMD_INIT_FUNC_TRACE(); hw = ngbe_dev_hw(dev); + offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads; + /* Free memory prior to re-allocation if needed... 
*/ if (dev->data->rx_queues[queue_idx] != NULL) { ngbe_rx_queue_release(dev->data->rx_queues[queue_idx]); @@ -814,6 +1300,7 @@ ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev, rxq->port_id = dev->data->port_id; rxq->drop_en = rx_conf->rx_drop_en; rxq->rx_deferred_start = rx_conf->rx_deferred_start; + rxq->offloads = offloads; rxq->pkt_type_mask = NGBE_PTID_MASK; /* @@ -978,6 +1465,54 @@ ngbe_alloc_rx_queue_mbufs(struct ngbe_rx_queue *rxq) return 0; } +void +ngbe_set_rx_function(struct rte_eth_dev *dev) +{ + struct ngbe_adapter *adapter = ngbe_dev_adapter(dev); + + if (dev->data->scattered_rx) { + /* + * Set the scattered callback: there are bulk and + * single allocation versions. + */ + if (adapter->rx_bulk_alloc_allowed) { + PMD_INIT_LOG(DEBUG, "Using a Scattered with bulk " + "allocation callback (port=%d).", + dev->data->port_id); + dev->rx_pkt_burst = ngbe_recv_pkts_sc_bulk_alloc; + } else { + PMD_INIT_LOG(DEBUG, "Using Regular (non-vector, " + "single allocation) " + "Scattered Rx callback " + "(port=%d).", + dev->data->port_id); + + dev->rx_pkt_burst = ngbe_recv_pkts_sc_single_alloc; + } + /* + * Below we set "simple" callbacks according to port/queues parameters. + * If parameters allow we are going to choose between the following + * callbacks: + * - Bulk Allocation + * - Single buffer allocation (the simplest one) + */ + } else if (adapter->rx_bulk_alloc_allowed) { + PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are " + "satisfied. Rx Burst Bulk Alloc function " + "will be used on port=%d.", + dev->data->port_id); + + dev->rx_pkt_burst = ngbe_recv_pkts_bulk_alloc; + } else { + PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are not " + "satisfied, or Scattered Rx is requested " + "(port=%d).", + dev->data->port_id); + + dev->rx_pkt_burst = ngbe_recv_pkts; + } +} + /* * Initializes Receive Unit. 
*/ @@ -992,6 +1527,7 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev) uint32_t srrctl; uint16_t buf_size; uint16_t i; + struct rte_eth_rxmode *rx_conf = &dev->data->dev_conf.rxmode; PMD_INIT_FUNC_TRACE(); hw = ngbe_dev_hw(dev); @@ -1048,6 +1584,11 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev) wr32(hw, NGBE_RXCFG(rxq->reg_idx), srrctl); } + if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER) + dev->data->scattered_rx = 1; + + ngbe_set_rx_function(dev); + return 0; } diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h index 788d684def..07b5ac3fbe 100644 --- a/drivers/net/ngbe/ngbe_rxtx.h +++ b/drivers/net/ngbe/ngbe_rxtx.h @@ -243,6 +243,7 @@ struct ngbe_rx_queue { uint16_t port_id; /**< Device port identifier */ uint8_t drop_en; /**< If not 0, set SRRCTL.Drop_En */ uint8_t rx_deferred_start; /**< not in global dev start */ + uint64_t offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */ /** need to alloc dummy mbuf, for wraparound when scanning hw ring */ struct rte_mbuf fake_mbuf; /** hold packets to return to application */ @@ -308,4 +309,8 @@ struct ngbe_txq_ops { void (*reset)(struct ngbe_tx_queue *txq); }; +void ngbe_set_rx_function(struct rte_eth_dev *dev); + +uint64_t ngbe_get_rx_port_offloads(struct rte_eth_dev *dev); + #endif /* _NGBE_RXTX_H_ */

From patchwork Wed Sep 8 08:37:29 2021
X-Patchwork-Submitter: Jiawen Wu
X-Patchwork-Id: 98283
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Jiawen Wu
To: dev@dpdk.org
Cc: Jiawen Wu
Date: Wed, 8 Sep 2021 16:37:29 +0800
Message-Id: <20210908083758.312055-4-jiawenwu@trustnetic.com>
In-Reply-To: <20210908083758.312055-1-jiawenwu@trustnetic.com>
References: <20210908083758.312055-1-jiawenwu@trustnetic.com>
Subject: [dpdk-dev] [PATCH 03/32] net/ngbe: support Rx checksum offload

Support IP/L4 checksum offload on Rx, and convert it to mbuf flags.
Signed-off-by: Jiawen Wu --- doc/guides/nics/features/ngbe.ini | 2 + doc/guides/nics/ngbe.rst | 1 + drivers/net/ngbe/ngbe_rxtx.c | 75 +++++++++++++++++++++++++++++-- 3 files changed, 75 insertions(+), 3 deletions(-) diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini index f85754eb7a..2777ed5a62 100644 --- a/doc/guides/nics/features/ngbe.ini +++ b/doc/guides/nics/features/ngbe.ini @@ -9,6 +9,8 @@ Link status = Y Link status event = Y Queue start/stop = Y Scattered Rx = Y +L3 checksum offload = P +L4 checksum offload = P Packet type parsing = Y Multiprocess aware = Y Linux = Y diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst index 463452ce8c..0a14252ff2 100644 --- a/doc/guides/nics/ngbe.rst +++ b/doc/guides/nics/ngbe.rst @@ -12,6 +12,7 @@ Features -------- - Packet type information +- Checksum offload - Link state information - Scattered for RX diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c index 49fa978853..1661ecafa5 100644 --- a/drivers/net/ngbe/ngbe_rxtx.c +++ b/drivers/net/ngbe/ngbe_rxtx.c @@ -263,6 +263,31 @@ ngbe_rxd_pkt_info_to_pkt_type(uint32_t pkt_info, uint16_t ptid_mask) return ngbe_decode_ptype(ptid); } +static inline uint64_t +rx_desc_error_to_pkt_flags(uint32_t rx_status) +{ + uint64_t pkt_flags = 0; + + /* checksum offload can't be disabled */ + if (rx_status & NGBE_RXD_STAT_IPCS) { + pkt_flags |= (rx_status & NGBE_RXD_ERR_IPCS + ? PKT_RX_IP_CKSUM_BAD : PKT_RX_IP_CKSUM_GOOD); + } + + if (rx_status & NGBE_RXD_STAT_L4CS) { + pkt_flags |= (rx_status & NGBE_RXD_ERR_L4CS + ? PKT_RX_L4_CKSUM_BAD : PKT_RX_L4_CKSUM_GOOD); + } + + if (rx_status & NGBE_RXD_STAT_EIPCS && + rx_status & NGBE_RXD_ERR_EIPCS) { + pkt_flags |= PKT_RX_OUTER_IP_CKSUM_BAD; + } + + + return pkt_flags; +} + /* * LOOK_AHEAD defines how many desc statuses to check beyond the * current descriptor. 
@@ -281,6 +306,7 @@ ngbe_rx_scan_hw_ring(struct ngbe_rx_queue *rxq) struct ngbe_rx_entry *rxep; struct rte_mbuf *mb; uint16_t pkt_len; + uint64_t pkt_flags; int nb_dd; uint32_t s[LOOK_AHEAD]; uint32_t pkt_info[LOOK_AHEAD]; @@ -325,6 +351,9 @@ ngbe_rx_scan_hw_ring(struct ngbe_rx_queue *rxq) mb->data_len = pkt_len; mb->pkt_len = pkt_len; + /* convert descriptor fields to rte mbuf flags */ + pkt_flags = rx_desc_error_to_pkt_flags(s[j]); + mb->ol_flags = pkt_flags; mb->packet_type = ngbe_rxd_pkt_info_to_pkt_type(pkt_info[j], rxq->pkt_type_mask); @@ -519,6 +548,7 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t rx_id; uint16_t nb_rx; uint16_t nb_hold; + uint64_t pkt_flags; nb_rx = 0; nb_hold = 0; @@ -611,11 +641,14 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, /* * Initialize the returned mbuf. - * setup generic mbuf fields: + * 1) setup generic mbuf fields: * - number of segments, * - next segment, * - packet length, * - Rx port identifier. + * 2) integrate hardware offload data, if any: + * - IP checksum flag, + * - error flags. 
*/ pkt_len = (uint16_t)(rte_le_to_cpu_16(rxd.qw1.hi.len)); rxm->data_off = RTE_PKTMBUF_HEADROOM; @@ -627,6 +660,8 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, rxm->port = rxq->port_id; pkt_info = rte_le_to_cpu_32(rxd.qw0.dw0); + pkt_flags = rx_desc_error_to_pkt_flags(staterr); + rxm->ol_flags = pkt_flags; rxm->packet_type = ngbe_rxd_pkt_info_to_pkt_type(pkt_info, rxq->pkt_type_mask); @@ -663,16 +698,30 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, return nb_rx; } +/** + * ngbe_fill_cluster_head_buf - fill the first mbuf of the returned packet + * + * Fill the following info in the HEAD buffer of the Rx cluster: + * - RX port identifier + * - hardware offload data, if any: + * - IP checksum flag + * - error flags + * @head HEAD of the packet cluster + * @desc HW descriptor to get data from + * @rxq Pointer to the Rx queue + */ static inline void ngbe_fill_cluster_head_buf(struct rte_mbuf *head, struct ngbe_rx_desc *desc, struct ngbe_rx_queue *rxq, uint32_t staterr) { uint32_t pkt_info; + uint64_t pkt_flags; - RTE_SET_USED(staterr); head->port = rxq->port_id; pkt_info = rte_le_to_cpu_32(desc->qw0.dw0); + pkt_flags = rx_desc_error_to_pkt_flags(staterr); + head->ol_flags = pkt_flags; head->packet_type = ngbe_rxd_pkt_info_to_pkt_type(pkt_info, rxq->pkt_type_mask); } @@ -1257,7 +1306,14 @@ ngbe_reset_rx_queue(struct ngbe_adapter *adapter, struct ngbe_rx_queue *rxq) uint64_t ngbe_get_rx_port_offloads(struct rte_eth_dev *dev __rte_unused) { - return DEV_RX_OFFLOAD_SCATTER; + uint64_t offloads; + + offloads = DEV_RX_OFFLOAD_IPV4_CKSUM | + DEV_RX_OFFLOAD_UDP_CKSUM | + DEV_RX_OFFLOAD_TCP_CKSUM | + DEV_RX_OFFLOAD_SCATTER; + + return offloads; } int @@ -1525,6 +1581,7 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev) uint32_t fctrl; uint32_t hlreg0; uint32_t srrctl; + uint32_t rxcsum; uint16_t buf_size; uint16_t i; struct rte_eth_rxmode *rx_conf = &dev->data->dev_conf.rxmode; @@ -1586,6 +1643,18 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev) if 
(rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER) dev->data->scattered_rx = 1; + /* + * Setup the Checksum Register. + * Enable IP/L4 checksum computation by hardware if requested to do so. + */ + rxcsum = rd32(hw, NGBE_PSRCTL); + rxcsum |= NGBE_PSRCTL_PCSD; + if (rx_conf->offloads & DEV_RX_OFFLOAD_CHECKSUM) + rxcsum |= NGBE_PSRCTL_L4CSUM; + else + rxcsum &= ~NGBE_PSRCTL_L4CSUM; + + wr32(hw, NGBE_PSRCTL, rxcsum); ngbe_set_rx_function(dev);

From patchwork Wed Sep 8 08:37:30 2021
X-Patchwork-Submitter: Jiawen Wu
X-Patchwork-Id: 98284
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Jiawen Wu
To: dev@dpdk.org
Cc: Jiawen Wu
Date: Wed, 8 Sep 2021 16:37:30 +0800
Message-Id: <20210908083758.312055-5-jiawenwu@trustnetic.com>
In-Reply-To: <20210908083758.312055-1-jiawenwu@trustnetic.com>
References: <20210908083758.312055-1-jiawenwu@trustnetic.com>
Subject: [dpdk-dev] [PATCH 04/32] net/ngbe: support TSO

Add transmit datapath with offloads, and support TCP segmentation offload.

Signed-off-by: Jiawen Wu --- doc/guides/nics/features/ngbe.ini | 3 + doc/guides/nics/ngbe.rst | 3 +- drivers/net/ngbe/ngbe_ethdev.c | 19 +- drivers/net/ngbe/ngbe_ethdev.h | 6 + drivers/net/ngbe/ngbe_rxtx.c | 678 ++++++++++++++++++++++++++++++ drivers/net/ngbe/ngbe_rxtx.h | 58 +++ 6 files changed, 765 insertions(+), 2 deletions(-) diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini index 2777ed5a62..32f74a3084 100644 --- a/doc/guides/nics/features/ngbe.ini +++ b/doc/guides/nics/features/ngbe.ini @@ -9,8 +9,11 @@ Link status = Y Link status event = Y Queue start/stop = Y Scattered Rx = Y +TSO = Y L3 checksum offload = P L4 checksum offload = P +Inner L3 checksum = P +Inner L4 checksum = P Packet type parsing = Y Multiprocess aware = Y Linux = Y diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst index 0a14252ff2..6a6ae39243 100644 --- a/doc/guides/nics/ngbe.rst +++ b/doc/guides/nics/ngbe.rst @@ -13,8 +13,9 @@ Features - Packet type information - Checksum offload +- TSO offload - Link state information -- Scattered for RX +- Scattered and gather for TX and RX Prerequisites diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c index fba0a2dcfd..e7d63f1b14 100644 --- a/drivers/net/ngbe/ngbe_ethdev.c +++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -138,7 +138,8 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused) eth_dev->dev_ops = &ngbe_eth_dev_ops; eth_dev->rx_pkt_burst = &ngbe_recv_pkts; - eth_dev->tx_pkt_burst = &ngbe_xmit_pkts_simple; + eth_dev->tx_pkt_burst = &ngbe_xmit_pkts; + eth_dev->tx_pkt_prepare = &ngbe_prep_pkts; /* * For secondary processes, we don't initialise any further as primary @@ -146,6 +147,20 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused) * Rx and Tx function. */ if (rte_eal_process_type() != RTE_PROC_PRIMARY) { + struct ngbe_tx_queue *txq; + /* Use the Tx function set by the primary process, based on the + * last initialized queue; Tx queues may not have been initialized + * by the primary process yet. + */ + if (eth_dev->data->tx_queues) { + uint16_t nb_tx_queues = eth_dev->data->nb_tx_queues; + txq = eth_dev->data->tx_queues[nb_tx_queues - 1]; + ngbe_set_tx_function(eth_dev, txq); + } else { + /* Use default Tx function if we get here */ + PMD_INIT_LOG(NOTICE, + "No Tx queues configured yet.
Using default Tx function."); + } + ngbe_set_rx_function(eth_dev); return 0; @@ -641,6 +656,8 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) dev_info->max_rx_pktlen = 15872; dev_info->rx_offload_capa = (ngbe_get_rx_port_offloads(dev) | dev_info->rx_queue_offload_capa); + dev_info->tx_queue_offload_capa = 0; + dev_info->tx_offload_capa = ngbe_get_tx_port_offloads(dev); dev_info->default_rxconf = (struct rte_eth_rxconf) { .rx_thresh = { diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h index e7fe9a03b7..cbf3ab558f 100644 --- a/drivers/net/ngbe/ngbe_ethdev.h +++ b/drivers/net/ngbe/ngbe_ethdev.h @@ -114,9 +114,15 @@ uint16_t ngbe_recv_pkts_sc_single_alloc(void *rx_queue, uint16_t ngbe_recv_pkts_sc_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); +uint16_t ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); + uint16_t ngbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); +uint16_t ngbe_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); + void ngbe_set_ivar_map(struct ngbe_hw *hw, int8_t direction, uint8_t queue, uint8_t msix_vector); diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c index 1661ecafa5..21f5808787 100644 --- a/drivers/net/ngbe/ngbe_rxtx.c +++ b/drivers/net/ngbe/ngbe_rxtx.c @@ -9,11 +9,24 @@ #include #include #include +#include #include "ngbe_logs.h" #include "base/ngbe.h" #include "ngbe_ethdev.h" #include "ngbe_rxtx.h" +/* Bit Mask to indicate what bits required for building Tx context */ +static const u64 NGBE_TX_OFFLOAD_MASK = (PKT_TX_IP_CKSUM | + PKT_TX_OUTER_IPV6 | + PKT_TX_OUTER_IPV4 | + PKT_TX_IPV6 | + PKT_TX_IPV4 | + PKT_TX_L4_MASK | + PKT_TX_TCP_SEG | + PKT_TX_TUNNEL_MASK | + PKT_TX_OUTER_IP_CKSUM); +#define NGBE_TX_OFFLOAD_NOTSUP_MASK \ + (PKT_TX_OFFLOAD_MASK ^ NGBE_TX_OFFLOAD_MASK) /* * Prefetch a cache line into all cache levels. 
@@ -248,6 +261,614 @@ ngbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts, return nb_tx; } +static inline void +ngbe_set_xmit_ctx(struct ngbe_tx_queue *txq, + volatile struct ngbe_tx_ctx_desc *ctx_txd, + uint64_t ol_flags, union ngbe_tx_offload tx_offload) +{ + union ngbe_tx_offload tx_offload_mask; + uint32_t type_tucmd_mlhl; + uint32_t mss_l4len_idx; + uint32_t ctx_idx; + uint32_t vlan_macip_lens; + uint32_t tunnel_seed; + + ctx_idx = txq->ctx_curr; + tx_offload_mask.data[0] = 0; + tx_offload_mask.data[1] = 0; + + /* Specify which HW CTX to upload. */ + mss_l4len_idx = NGBE_TXD_IDX(ctx_idx); + type_tucmd_mlhl = NGBE_TXD_CTXT; + + tx_offload_mask.ptid |= ~0; + type_tucmd_mlhl |= NGBE_TXD_PTID(tx_offload.ptid); + + /* check if TCP segmentation required for this packet */ + if (ol_flags & PKT_TX_TCP_SEG) { + tx_offload_mask.l2_len |= ~0; + tx_offload_mask.l3_len |= ~0; + tx_offload_mask.l4_len |= ~0; + tx_offload_mask.tso_segsz |= ~0; + mss_l4len_idx |= NGBE_TXD_MSS(tx_offload.tso_segsz); + mss_l4len_idx |= NGBE_TXD_L4LEN(tx_offload.l4_len); + } else { /* no TSO, check if hardware checksum is needed */ + if (ol_flags & PKT_TX_IP_CKSUM) { + tx_offload_mask.l2_len |= ~0; + tx_offload_mask.l3_len |= ~0; + } + + switch (ol_flags & PKT_TX_L4_MASK) { + case PKT_TX_UDP_CKSUM: + mss_l4len_idx |= + NGBE_TXD_L4LEN(sizeof(struct rte_udp_hdr)); + tx_offload_mask.l2_len |= ~0; + tx_offload_mask.l3_len |= ~0; + break; + case PKT_TX_TCP_CKSUM: + mss_l4len_idx |= + NGBE_TXD_L4LEN(sizeof(struct rte_tcp_hdr)); + tx_offload_mask.l2_len |= ~0; + tx_offload_mask.l3_len |= ~0; + break; + case PKT_TX_SCTP_CKSUM: + mss_l4len_idx |= + NGBE_TXD_L4LEN(sizeof(struct rte_sctp_hdr)); + tx_offload_mask.l2_len |= ~0; + tx_offload_mask.l3_len |= ~0; + break; + default: + break; + } + } + + vlan_macip_lens = NGBE_TXD_IPLEN(tx_offload.l3_len >> 1); + + if (ol_flags & PKT_TX_TUNNEL_MASK) { + tx_offload_mask.outer_tun_len |= ~0; + tx_offload_mask.outer_l2_len |= ~0; + 
tx_offload_mask.outer_l3_len |= ~0; + tx_offload_mask.l2_len |= ~0; + tunnel_seed = NGBE_TXD_ETUNLEN(tx_offload.outer_tun_len >> 1); + tunnel_seed |= NGBE_TXD_EIPLEN(tx_offload.outer_l3_len >> 2); + + switch (ol_flags & PKT_TX_TUNNEL_MASK) { + case PKT_TX_TUNNEL_IPIP: + /* for non UDP / GRE tunneling, set to 0b */ + break; + default: + PMD_TX_LOG(ERR, "Tunnel type not supported"); + return; + } + vlan_macip_lens |= NGBE_TXD_MACLEN(tx_offload.outer_l2_len); + } else { + tunnel_seed = 0; + vlan_macip_lens |= NGBE_TXD_MACLEN(tx_offload.l2_len); + } + + txq->ctx_cache[ctx_idx].flags = ol_flags; + txq->ctx_cache[ctx_idx].tx_offload.data[0] = + tx_offload_mask.data[0] & tx_offload.data[0]; + txq->ctx_cache[ctx_idx].tx_offload.data[1] = + tx_offload_mask.data[1] & tx_offload.data[1]; + txq->ctx_cache[ctx_idx].tx_offload_mask = tx_offload_mask; + + ctx_txd->dw0 = rte_cpu_to_le_32(vlan_macip_lens); + ctx_txd->dw1 = rte_cpu_to_le_32(tunnel_seed); + ctx_txd->dw2 = rte_cpu_to_le_32(type_tucmd_mlhl); + ctx_txd->dw3 = rte_cpu_to_le_32(mss_l4len_idx); +} + +/* + * Check which hardware context can be used. Use the existing match + * or create a new context descriptor. 
+ */ +static inline uint32_t +what_ctx_update(struct ngbe_tx_queue *txq, uint64_t flags, + union ngbe_tx_offload tx_offload) +{ + /* If match with the current used context */ + if (likely(txq->ctx_cache[txq->ctx_curr].flags == flags && + (txq->ctx_cache[txq->ctx_curr].tx_offload.data[0] == + (txq->ctx_cache[txq->ctx_curr].tx_offload_mask.data[0] + & tx_offload.data[0])) && + (txq->ctx_cache[txq->ctx_curr].tx_offload.data[1] == + (txq->ctx_cache[txq->ctx_curr].tx_offload_mask.data[1] + & tx_offload.data[1])))) + return txq->ctx_curr; + + /* Check whether the other cached context matches */ + txq->ctx_curr ^= 1; + if (likely(txq->ctx_cache[txq->ctx_curr].flags == flags && + (txq->ctx_cache[txq->ctx_curr].tx_offload.data[0] == + (txq->ctx_cache[txq->ctx_curr].tx_offload_mask.data[0] + & tx_offload.data[0])) && + (txq->ctx_cache[txq->ctx_curr].tx_offload.data[1] == + (txq->ctx_cache[txq->ctx_curr].tx_offload_mask.data[1] + & tx_offload.data[1])))) + return txq->ctx_curr; + + /* No match: the caller must build a new context descriptor */ + return NGBE_CTX_NUM; +} + +static inline uint32_t +tx_desc_cksum_flags_to_olinfo(uint64_t ol_flags) +{ + uint32_t tmp = 0; + + if ((ol_flags & PKT_TX_L4_MASK) != PKT_TX_L4_NO_CKSUM) { + tmp |= NGBE_TXD_CC; + tmp |= NGBE_TXD_L4CS; + } + if (ol_flags & PKT_TX_IP_CKSUM) { + tmp |= NGBE_TXD_CC; + tmp |= NGBE_TXD_IPCS; + } + if (ol_flags & PKT_TX_OUTER_IP_CKSUM) { + tmp |= NGBE_TXD_CC; + tmp |= NGBE_TXD_EIPCS; + } + if (ol_flags & PKT_TX_TCP_SEG) { + tmp |= NGBE_TXD_CC; + /* implies IPv4 cksum */ + if (ol_flags & PKT_TX_IPV4) + tmp |= NGBE_TXD_IPCS; + tmp |= NGBE_TXD_L4CS; + } + + return tmp; +} + +static inline uint32_t +tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags) +{ + uint32_t cmdtype = 0; + + if (ol_flags & PKT_TX_TCP_SEG) + cmdtype |= NGBE_TXD_TSE; + return cmdtype; +} + +static inline uint8_t +tx_desc_ol_flags_to_ptid(uint64_t oflags, uint32_t ptype) +{ + bool tun; + + if (ptype) + return ngbe_encode_ptype(ptype); + + /* Only support flags in NGBE_TX_OFFLOAD_MASK
*/ + tun = !!(oflags & PKT_TX_TUNNEL_MASK); + + /* L2 level */ + ptype = RTE_PTYPE_L2_ETHER; + + /* L3 level */ + if (oflags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IP_CKSUM)) + ptype |= RTE_PTYPE_L3_IPV4; + else if (oflags & (PKT_TX_OUTER_IPV6)) + ptype |= RTE_PTYPE_L3_IPV6; + + if (oflags & (PKT_TX_IPV4 | PKT_TX_IP_CKSUM)) + ptype |= (tun ? RTE_PTYPE_INNER_L3_IPV4 : RTE_PTYPE_L3_IPV4); + else if (oflags & (PKT_TX_IPV6)) + ptype |= (tun ? RTE_PTYPE_INNER_L3_IPV6 : RTE_PTYPE_L3_IPV6); + + /* L4 level */ + switch (oflags & (PKT_TX_L4_MASK)) { + case PKT_TX_TCP_CKSUM: + ptype |= (tun ? RTE_PTYPE_INNER_L4_TCP : RTE_PTYPE_L4_TCP); + break; + case PKT_TX_UDP_CKSUM: + ptype |= (tun ? RTE_PTYPE_INNER_L4_UDP : RTE_PTYPE_L4_UDP); + break; + case PKT_TX_SCTP_CKSUM: + ptype |= (tun ? RTE_PTYPE_INNER_L4_SCTP : RTE_PTYPE_L4_SCTP); + break; + } + + if (oflags & PKT_TX_TCP_SEG) + ptype |= (tun ? RTE_PTYPE_INNER_L4_TCP : RTE_PTYPE_L4_TCP); + + /* Tunnel */ + switch (oflags & PKT_TX_TUNNEL_MASK) { + case PKT_TX_TUNNEL_IPIP: + case PKT_TX_TUNNEL_IP: + ptype |= RTE_PTYPE_L2_ETHER | + RTE_PTYPE_L3_IPV4 | + RTE_PTYPE_TUNNEL_IP; + break; + } + + return ngbe_encode_ptype(ptype); +} + +/* Reset transmit descriptors after they have been used */ +static inline int +ngbe_xmit_cleanup(struct ngbe_tx_queue *txq) +{ + struct ngbe_tx_entry *sw_ring = txq->sw_ring; + volatile struct ngbe_tx_desc *txr = txq->tx_ring; + uint16_t last_desc_cleaned = txq->last_desc_cleaned; + uint16_t nb_tx_desc = txq->nb_tx_desc; + uint16_t desc_to_clean_to; + uint16_t nb_tx_to_clean; + uint32_t status; + + /* Determine the last descriptor needing to be cleaned */ + desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->tx_free_thresh); + if (desc_to_clean_to >= nb_tx_desc) + desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc); + + /* Check to make sure the last descriptor to clean is done */ + desc_to_clean_to = sw_ring[desc_to_clean_to].last_id; + status = txr[desc_to_clean_to].dw3; + if (!(status & 
rte_cpu_to_le_32(NGBE_TXD_DD))) { + PMD_TX_LOG(DEBUG, + "Tx descriptor %4u is not done" + "(port=%d queue=%d)", + desc_to_clean_to, + txq->port_id, txq->queue_id); + if (txq->nb_tx_free >> 1 < txq->tx_free_thresh) + ngbe_set32_masked(txq->tdc_reg_addr, + NGBE_TXCFG_FLUSH, NGBE_TXCFG_FLUSH); + /* Failed to clean any descriptors, better luck next time */ + return -(1); + } + + /* Figure out how many descriptors will be cleaned */ + if (last_desc_cleaned > desc_to_clean_to) + nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) + + desc_to_clean_to); + else + nb_tx_to_clean = (uint16_t)(desc_to_clean_to - + last_desc_cleaned); + + PMD_TX_LOG(DEBUG, + "Cleaning %4u Tx descriptors: %4u to %4u (port=%d queue=%d)", + nb_tx_to_clean, last_desc_cleaned, desc_to_clean_to, + txq->port_id, txq->queue_id); + + /* + * The last descriptor to clean is done, so that means all the + * descriptors from the last descriptor that was cleaned + * up to the last descriptor with the RS bit set + * are done. Only reset the threshold descriptor. 
+ */ + txr[desc_to_clean_to].dw3 = 0; + + /* Update the txq to reflect the last descriptor that was cleaned */ + txq->last_desc_cleaned = desc_to_clean_to; + txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + nb_tx_to_clean); + + /* No Error */ + return 0; +} + +uint16_t +ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) +{ + struct ngbe_tx_queue *txq; + struct ngbe_tx_entry *sw_ring; + struct ngbe_tx_entry *txe, *txn; + volatile struct ngbe_tx_desc *txr; + volatile struct ngbe_tx_desc *txd; + struct rte_mbuf *tx_pkt; + struct rte_mbuf *m_seg; + uint64_t buf_dma_addr; + uint32_t olinfo_status; + uint32_t cmd_type_len; + uint32_t pkt_len; + uint16_t slen; + uint64_t ol_flags; + uint16_t tx_id; + uint16_t tx_last; + uint16_t nb_tx; + uint16_t nb_used; + uint64_t tx_ol_req; + uint32_t ctx = 0; + uint32_t new_ctx; + union ngbe_tx_offload tx_offload; + + tx_offload.data[0] = 0; + tx_offload.data[1] = 0; + txq = tx_queue; + sw_ring = txq->sw_ring; + txr = txq->tx_ring; + tx_id = txq->tx_tail; + txe = &sw_ring[tx_id]; + + /* Determine if the descriptor ring needs to be cleaned. */ + if (txq->nb_tx_free < txq->tx_free_thresh) + ngbe_xmit_cleanup(txq); + + rte_prefetch0(&txe->mbuf->pool); + + /* Tx loop */ + for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { + new_ctx = 0; + tx_pkt = *tx_pkts++; + pkt_len = tx_pkt->pkt_len; + + /* + * Determine how many (if any) context descriptors + * are needed for offload functionality. 
+ */ + ol_flags = tx_pkt->ol_flags; + + /* If hardware offload required */ + tx_ol_req = ol_flags & NGBE_TX_OFFLOAD_MASK; + if (tx_ol_req) { + tx_offload.ptid = tx_desc_ol_flags_to_ptid(tx_ol_req, + tx_pkt->packet_type); + tx_offload.l2_len = tx_pkt->l2_len; + tx_offload.l3_len = tx_pkt->l3_len; + tx_offload.l4_len = tx_pkt->l4_len; + tx_offload.tso_segsz = tx_pkt->tso_segsz; + tx_offload.outer_l2_len = tx_pkt->outer_l2_len; + tx_offload.outer_l3_len = tx_pkt->outer_l3_len; + tx_offload.outer_tun_len = 0; + + /* If new context need be built or reuse the exist ctx*/ + ctx = what_ctx_update(txq, tx_ol_req, tx_offload); + /* Only allocate context descriptor if required */ + new_ctx = (ctx == NGBE_CTX_NUM); + ctx = txq->ctx_curr; + } + + /* + * Keep track of how many descriptors are used this loop + * This will always be the number of segments + the number of + * Context descriptors required to transmit the packet + */ + nb_used = (uint16_t)(tx_pkt->nb_segs + new_ctx); + + /* + * The number of descriptors that must be allocated for a + * packet is the number of segments of that packet, plus 1 + * Context Descriptor for the hardware offload, if any. + * Determine the last Tx descriptor to allocate in the Tx ring + * for the packet, starting from the current position (tx_id) + * in the ring. + */ + tx_last = (uint16_t)(tx_id + nb_used - 1); + + /* Circular ring */ + if (tx_last >= txq->nb_tx_desc) + tx_last = (uint16_t)(tx_last - txq->nb_tx_desc); + + PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u pktlen=%u" + " tx_first=%u tx_last=%u", + (uint16_t)txq->port_id, + (uint16_t)txq->queue_id, + (uint32_t)pkt_len, + (uint16_t)tx_id, + (uint16_t)tx_last); + + /* + * Make sure there are enough Tx descriptors available to + * transmit the entire packet. 
+ * nb_used better be less than or equal to txq->tx_free_thresh + */ + if (nb_used > txq->nb_tx_free) { + PMD_TX_LOG(DEBUG, + "Not enough free Tx descriptors " + "nb_used=%4u nb_free=%4u " + "(port=%d queue=%d)", + nb_used, txq->nb_tx_free, + txq->port_id, txq->queue_id); + + if (ngbe_xmit_cleanup(txq) != 0) { + /* Could not clean any descriptors */ + if (nb_tx == 0) + return 0; + goto end_of_tx; + } + + /* nb_used better be <= txq->tx_free_thresh */ + if (unlikely(nb_used > txq->tx_free_thresh)) { + PMD_TX_LOG(DEBUG, + "The number of descriptors needed to " + "transmit the packet exceeds the " + "RS bit threshold. This will impact " + "performance." + "nb_used=%4u nb_free=%4u " + "tx_free_thresh=%4u. " + "(port=%d queue=%d)", + nb_used, txq->nb_tx_free, + txq->tx_free_thresh, + txq->port_id, txq->queue_id); + /* + * Loop here until there are enough Tx + * descriptors or until the ring cannot be + * cleaned. + */ + while (nb_used > txq->nb_tx_free) { + if (ngbe_xmit_cleanup(txq) != 0) { + /* + * Could not clean any + * descriptors + */ + if (nb_tx == 0) + return 0; + goto end_of_tx; + } + } + } + } + + /* + * By now there are enough free Tx descriptors to transmit + * the packet. + */ + + /* + * Set common flags of all Tx Data Descriptors. + * + * The following bits must be set in the first Data Descriptor + * and are ignored in the other ones: + * - NGBE_TXD_FCS + * + * The following bits must only be set in the last Data + * Descriptor: + * - NGBE_TXD_EOP + */ + cmd_type_len = NGBE_TXD_FCS; + + olinfo_status = 0; + if (tx_ol_req) { + if (ol_flags & PKT_TX_TCP_SEG) { + /* when TSO is on, paylen in descriptor is the + * not the packet len but the tcp payload len + */ + pkt_len -= (tx_offload.l2_len + + tx_offload.l3_len + tx_offload.l4_len); + pkt_len -= + (tx_pkt->ol_flags & PKT_TX_TUNNEL_MASK) + ? 
tx_offload.outer_l2_len + + tx_offload.outer_l3_len : 0; + } + + /* + * Setup the Tx Context Descriptor if required + */ + if (new_ctx) { + volatile struct ngbe_tx_ctx_desc *ctx_txd; + + ctx_txd = (volatile struct ngbe_tx_ctx_desc *) + &txr[tx_id]; + + txn = &sw_ring[txe->next_id]; + rte_prefetch0(&txn->mbuf->pool); + + if (txe->mbuf != NULL) { + rte_pktmbuf_free_seg(txe->mbuf); + txe->mbuf = NULL; + } + + ngbe_set_xmit_ctx(txq, ctx_txd, tx_ol_req, + tx_offload); + + txe->last_id = tx_last; + tx_id = txe->next_id; + txe = txn; + } + + /* + * Setup the Tx Data Descriptor, + * This path will go through + * whatever new/reuse the context descriptor + */ + cmd_type_len |= tx_desc_ol_flags_to_cmdtype(ol_flags); + olinfo_status |= + tx_desc_cksum_flags_to_olinfo(ol_flags); + olinfo_status |= NGBE_TXD_IDX(ctx); + } + + olinfo_status |= NGBE_TXD_PAYLEN(pkt_len); + + m_seg = tx_pkt; + do { + txd = &txr[tx_id]; + txn = &sw_ring[txe->next_id]; + rte_prefetch0(&txn->mbuf->pool); + + if (txe->mbuf != NULL) + rte_pktmbuf_free_seg(txe->mbuf); + txe->mbuf = m_seg; + + /* + * Set up Transmit Data Descriptor. 
+ */ + slen = m_seg->data_len; + buf_dma_addr = rte_mbuf_data_iova(m_seg); + txd->qw0 = rte_cpu_to_le_64(buf_dma_addr); + txd->dw2 = rte_cpu_to_le_32(cmd_type_len | slen); + txd->dw3 = rte_cpu_to_le_32(olinfo_status); + txe->last_id = tx_last; + tx_id = txe->next_id; + txe = txn; + m_seg = m_seg->next; + } while (m_seg != NULL); + + /* + * The last packet data descriptor needs End Of Packet (EOP) + */ + cmd_type_len |= NGBE_TXD_EOP; + txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_used); + + txd->dw2 |= rte_cpu_to_le_32(cmd_type_len); + } + +end_of_tx: + + rte_wmb(); + + /* + * Set the Transmit Descriptor Tail (TDT) + */ + PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u", + (uint16_t)txq->port_id, (uint16_t)txq->queue_id, + (uint16_t)tx_id, (uint16_t)nb_tx); + ngbe_set32_relaxed(txq->tdt_reg_addr, tx_id); + txq->tx_tail = tx_id; + + return nb_tx; +} + +/********************************************************************* + * + * Tx prep functions + * + **********************************************************************/ +uint16_t +ngbe_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) +{ + int i, ret; + uint64_t ol_flags; + struct rte_mbuf *m; + struct ngbe_tx_queue *txq = (struct ngbe_tx_queue *)tx_queue; + + for (i = 0; i < nb_pkts; i++) { + m = tx_pkts[i]; + ol_flags = m->ol_flags; + + /** + * Check if packet meets requirements for number of segments + * + * NOTE: for ngbe it's always (40 - WTHRESH) for both TSO and + * non-TSO + */ + + if (m->nb_segs > NGBE_TX_MAX_SEG - txq->wthresh) { + rte_errno = EINVAL; + return i; + } + + if (ol_flags & NGBE_TX_OFFLOAD_NOTSUP_MASK) { + rte_errno = ENOTSUP; + return i; + } + +#ifdef RTE_ETHDEV_DEBUG_TX + ret = rte_validate_tx_offload(m); + if (ret != 0) { + rte_errno = -ret; + return i; + } +#endif + ret = rte_net_intel_cksum_prepare(m); + if (ret != 0) { + rte_errno = -ret; + return i; + } + } + + return i; +} + /********************************************************************* 
* * Rx functions @@ -1044,6 +1665,56 @@ static const struct ngbe_txq_ops def_txq_ops = { .reset = ngbe_reset_tx_queue, }; +/* Takes an ethdev and a queue and sets up the tx function to be used based on + * the queue parameters. Used in tx_queue_setup by primary process and then + * in dev_init by secondary process when attaching to an existing ethdev. + */ +void +ngbe_set_tx_function(struct rte_eth_dev *dev, struct ngbe_tx_queue *txq) +{ + /* Use a simple Tx queue (no offloads, no multi segs) if possible */ + if (txq->offloads == 0 && + txq->tx_free_thresh >= RTE_PMD_NGBE_TX_MAX_BURST) { + PMD_INIT_LOG(DEBUG, "Using simple tx code path"); + dev->tx_pkt_burst = ngbe_xmit_pkts_simple; + dev->tx_pkt_prepare = NULL; + } else { + PMD_INIT_LOG(DEBUG, "Using full-featured tx code path"); + PMD_INIT_LOG(DEBUG, + " - offloads = 0x%" PRIx64, + txq->offloads); + PMD_INIT_LOG(DEBUG, + " - tx_free_thresh = %lu [RTE_PMD_NGBE_TX_MAX_BURST=%lu]", + (unsigned long)txq->tx_free_thresh, + (unsigned long)RTE_PMD_NGBE_TX_MAX_BURST); + dev->tx_pkt_burst = ngbe_xmit_pkts; + dev->tx_pkt_prepare = ngbe_prep_pkts; + } +} + +uint64_t +ngbe_get_tx_port_offloads(struct rte_eth_dev *dev) +{ + uint64_t tx_offload_capa; + + RTE_SET_USED(dev); + + tx_offload_capa = + DEV_TX_OFFLOAD_IPV4_CKSUM | + DEV_TX_OFFLOAD_UDP_CKSUM | + DEV_TX_OFFLOAD_TCP_CKSUM | + DEV_TX_OFFLOAD_SCTP_CKSUM | + DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | + DEV_TX_OFFLOAD_TCP_TSO | + DEV_TX_OFFLOAD_UDP_TSO | + DEV_TX_OFFLOAD_UDP_TNL_TSO | + DEV_TX_OFFLOAD_IP_TNL_TSO | + DEV_TX_OFFLOAD_IPIP_TNL_TSO | + DEV_TX_OFFLOAD_MULTI_SEGS; + + return tx_offload_capa; +} + int ngbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, @@ -1055,10 +1726,13 @@ ngbe_dev_tx_queue_setup(struct rte_eth_dev *dev, struct ngbe_tx_queue *txq; struct ngbe_hw *hw; uint16_t tx_free_thresh; + uint64_t offloads; PMD_INIT_FUNC_TRACE(); hw = ngbe_dev_hw(dev); + offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads; + /* * The Tx descriptor 
ring will be cleaned after txq->tx_free_thresh * descriptors are used or if the number of descriptors required @@ -1120,6 +1794,7 @@ ngbe_dev_tx_queue_setup(struct rte_eth_dev *dev, txq->queue_id = queue_idx; txq->reg_idx = queue_idx; txq->port_id = dev->data->port_id; + txq->offloads = offloads; txq->ops = &def_txq_ops; txq->tx_deferred_start = tx_conf->tx_deferred_start; @@ -1141,6 +1816,9 @@ ngbe_dev_tx_queue_setup(struct rte_eth_dev *dev, "sw_ring=%p hw_ring=%p dma_addr=0x%" PRIx64, txq->sw_ring, txq->tx_ring, txq->tx_ring_phys_addr); + /* set up scalar Tx function as appropriate */ + ngbe_set_tx_function(dev, txq); + txq->ops->reset(txq); dev->data->tx_queues[queue_idx] = txq; diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h index 07b5ac3fbe..27c83f45a7 100644 --- a/drivers/net/ngbe/ngbe_rxtx.h +++ b/drivers/net/ngbe/ngbe_rxtx.h @@ -135,8 +135,35 @@ struct ngbe_tx_ctx_desc { rte_le32_t dw3; /* w.mss_l4len_idx */ }; +/* @ngbe_tx_ctx_desc.dw0 */ +#define NGBE_TXD_IPLEN(v) LS(v, 0, 0x1FF) /* ip/fcoe header end */ +#define NGBE_TXD_MACLEN(v) LS(v, 9, 0x7F) /* desc mac len */ +#define NGBE_TXD_VLAN(v) LS(v, 16, 0xFFFF) /* vlan tag */ + +/* @ngbe_tx_ctx_desc.dw1 */ +/*** bit 0-31, when NGBE_TXD_DTYP_FCOE=0 ***/ +#define NGBE_TXD_IPSEC_SAIDX(v) LS(v, 0, 0x3FF) /* ipsec SA index */ +#define NGBE_TXD_ETYPE(v) LS(v, 11, 0x1) /* tunnel type */ +#define NGBE_TXD_ETYPE_UDP LS(0, 11, 0x1) +#define NGBE_TXD_ETYPE_GRE LS(1, 11, 0x1) +#define NGBE_TXD_EIPLEN(v) LS(v, 12, 0x7F) /* tunnel ip header */ +#define NGBE_TXD_DTYP_FCOE MS(16, 0x1) /* FCoE/IP descriptor */ +#define NGBE_TXD_ETUNLEN(v) LS(v, 21, 0xFF) /* tunnel header */ +#define NGBE_TXD_DECTTL(v) LS(v, 29, 0xF) /* decrease ip TTL */ + +/* @ngbe_tx_ctx_desc.dw2 */ +#define NGBE_TXD_IPSEC_ESPLEN(v) LS(v, 1, 0x1FF) /* ipsec ESP length */ +#define NGBE_TXD_SNAP MS(10, 0x1) /* SNAP indication */ +#define NGBE_TXD_TPID_SEL(v) LS(v, 11, 0x7) /* vlan tag index */ +#define NGBE_TXD_IPSEC_ESP MS(14, 0x1) 
/* ipsec type: esp=1 ah=0 */ +#define NGBE_TXD_IPSEC_ESPENC MS(15, 0x1) /* ESP encrypt */ +#define NGBE_TXD_CTXT MS(20, 0x1) /* context descriptor */ +#define NGBE_TXD_PTID(v) LS(v, 24, 0xFF) /* packet type */ /* @ngbe_tx_ctx_desc.dw3 */ #define NGBE_TXD_DD MS(0, 0x1) /* descriptor done */ +#define NGBE_TXD_IDX(v) LS(v, 4, 0x1) /* ctxt desc index */ +#define NGBE_TXD_L4LEN(v) LS(v, 8, 0xFF) /* l4 header length */ +#define NGBE_TXD_MSS(v) LS(v, 16, 0xFFFF) /* l4 MSS */ /** * Transmit Data Descriptor (NGBE_TXD_TYP=DATA) @@ -259,11 +286,34 @@ enum ngbe_ctx_num { NGBE_CTX_NUM = 2, /**< CTX NUMBER */ }; +/** Offload features */ +union ngbe_tx_offload { + uint64_t data[2]; + struct { + uint64_t ptid:8; /**< Packet Type Identifier. */ + uint64_t l2_len:7; /**< L2 (MAC) Header Length. */ + uint64_t l3_len:9; /**< L3 (IP) Header Length. */ + uint64_t l4_len:8; /**< L4 (TCP/UDP) Header Length. */ + uint64_t tso_segsz:16; /**< TCP TSO segment size */ + uint64_t vlan_tci:16; + /**< VLAN Tag Control Identifier (CPU order). */ + + /* fields for TX offloading of tunnels */ + uint64_t outer_tun_len:8; /**< Outer TUN (Tunnel) Hdr Length. */ + uint64_t outer_l2_len:8; /**< Outer L2 (MAC) Hdr Length. */ + uint64_t outer_l3_len:16; /**< Outer L3 (IP) Hdr Length. */ + }; +}; + /** * Structure to check if new context need be built */ struct ngbe_ctx_info { uint64_t flags; /**< ol_flags for context build. */ + /**< tx offload: vlan, tso, l2-l3-l4 lengths. */ + union ngbe_tx_offload tx_offload; + /** compare mask for tx offload. 
*/ + union ngbe_tx_offload tx_offload_mask; }; /** @@ -295,6 +345,7 @@ struct ngbe_tx_queue { uint8_t pthresh; /**< Prefetch threshold register */ uint8_t hthresh; /**< Host threshold register */ uint8_t wthresh; /**< Write-back threshold reg */ + uint64_t offloads; /**< Tx offload flags */ uint32_t ctx_curr; /**< Hardware context states */ /** Hardware context0 history */ struct ngbe_ctx_info ctx_cache[NGBE_CTX_NUM]; @@ -309,8 +360,15 @@ struct ngbe_txq_ops { void (*reset)(struct ngbe_tx_queue *txq); }; +/* Takes an ethdev and a queue and sets up the tx function to be used based on + * the queue parameters. Used in tx_queue_setup by primary process and then + * in dev_init by secondary process when attaching to an existing ethdev. + */ +void ngbe_set_tx_function(struct rte_eth_dev *dev, struct ngbe_tx_queue *txq); + void ngbe_set_rx_function(struct rte_eth_dev *dev); +uint64_t ngbe_get_tx_port_offloads(struct rte_eth_dev *dev); uint64_t ngbe_get_rx_port_offloads(struct rte_eth_dev *dev); #endif /* _NGBE_RXTX_H_ */ From patchwork Wed Sep 8 08:37:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawen Wu X-Patchwork-Id: 98285 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id AD1D7A0C56; Wed, 8 Sep 2021 10:36:54 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id CE32F41194; Wed, 8 Sep 2021 10:36:35 +0200 (CEST) Received: from smtpbgeu2.qq.com (smtpbgeu2.qq.com [18.194.254.142]) by mails.dpdk.org (Postfix) with ESMTP id AA06F4115D for ; Wed, 8 Sep 2021 10:36:32 +0200 (CEST) X-QQ-mid: bizesmtp47t1631090188tpo2dns4 Received: from wxdbg.localdomain.com (unknown [183.129.236.74]) by esmtp6.qq.com (ESMTP) with id ; Wed, 08 Sep 2021 16:36:27 +0800 (CST) 
X-QQ-SSF: 01400000002000E0G000B00A0000000 X-QQ-FEAT: ZTnzshg2nJbfAA/6OVfCVNaYwlduFdVqevo3yVDonY6zkgMaSps4c8T9ort0f RhWBJniyUjrl/jNLHTSH9v3tJpOmKqva+lhU1RcWYImdgYpKBKQxs2EqKm1sirUDP6k4Nxr 41IG0XKTxAPLNtoN1uR9VeDcFyRtE5MkLRDACvomuWho1LsKTlutyMFfGea0e/tyRCfMFQ5 xEMPmDuJVQ8gO2gxbpBC2+siI/Q5KAbP97nFv7X0P7/1NtEIrM2GuKyEQJuKrnWZfQC40hX pgYsaLfN9pGWuJrGV+AAxeOTEuxRRzGjXSXCs4vv8V61eWP2culOcmS8AFsK9Ryv2IzPdcJ QTmjgZpaPI34JxWauFOMWH1ZM6urnZeVr+CowgS X-QQ-GoodBg: 2 From: Jiawen Wu To: dev@dpdk.org Cc: Jiawen Wu Date: Wed, 8 Sep 2021 16:37:31 +0800 Message-Id: <20210908083758.312055-6-jiawenwu@trustnetic.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20210908083758.312055-1-jiawenwu@trustnetic.com> References: <20210908083758.312055-1-jiawenwu@trustnetic.com> MIME-Version: 1.0 X-QQ-SENDSIZE: 520 Feedback-ID: bizesmtp:trustnetic.com:qybgforeign:qybgforeign2 X-QQ-Bgrelay: 1 Subject: [dpdk-dev] [PATCH 05/32] net/ngbe: support CRC offload X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Support to strip or keep CRC in Rx path. 
Signed-off-by: Jiawen Wu --- doc/guides/nics/features/ngbe.ini | 1 + drivers/net/ngbe/ngbe_rxtx.c | 53 +++++++++++++++++++++++++++++-- drivers/net/ngbe/ngbe_rxtx.h | 1 + 3 files changed, 53 insertions(+), 2 deletions(-) diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini index 32f74a3084..2a472d9434 100644 --- a/doc/guides/nics/features/ngbe.ini +++ b/doc/guides/nics/features/ngbe.ini @@ -10,6 +10,7 @@ Link status event = Y Queue start/stop = Y Scattered Rx = Y TSO = Y +CRC offload = P L3 checksum offload = P L4 checksum offload = P Inner L3 checksum = P diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c index 21f5808787..f9d8cf9d19 100644 --- a/drivers/net/ngbe/ngbe_rxtx.c +++ b/drivers/net/ngbe/ngbe_rxtx.c @@ -968,7 +968,8 @@ ngbe_rx_scan_hw_ring(struct ngbe_rx_queue *rxq) /* Translate descriptor info to mbuf format */ for (j = 0; j < nb_dd; ++j) { mb = rxep[j].mbuf; - pkt_len = rte_le_to_cpu_16(rxdp[j].qw1.hi.len); + pkt_len = rte_le_to_cpu_16(rxdp[j].qw1.hi.len) - + rxq->crc_len; mb->data_len = pkt_len; mb->pkt_len = pkt_len; @@ -1271,7 +1272,8 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, * - IP checksum flag, * - error flags. */ - pkt_len = (uint16_t)(rte_le_to_cpu_16(rxd.qw1.hi.len)); + pkt_len = (uint16_t)(rte_le_to_cpu_16(rxd.qw1.hi.len) - + rxq->crc_len); rxm->data_off = RTE_PKTMBUF_HEADROOM; rte_packet_prefetch((char *)rxm->buf_addr + rxm->data_off); rxm->nb_segs = 1; @@ -1521,6 +1523,22 @@ ngbe_recv_pkts_sc(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts, /* Initialize the first mbuf of the returned packet */ ngbe_fill_cluster_head_buf(first_seg, &rxd, rxq, staterr); + /* Deal with the case when HW CRC strip is disabled. 
*/ + first_seg->pkt_len -= rxq->crc_len; + if (unlikely(rxm->data_len <= rxq->crc_len)) { + struct rte_mbuf *lp; + + for (lp = first_seg; lp->next != rxm; lp = lp->next) + ; + + first_seg->nb_segs--; + lp->data_len -= rxq->crc_len - rxm->data_len; + lp->next = NULL; + rte_pktmbuf_free_seg(rxm); + } else { + rxm->data_len -= rxq->crc_len; + } + /* Prefetch data of first segment, if configured to do so. */ rte_packet_prefetch((char *)first_seg->buf_addr + first_seg->data_off); @@ -1989,6 +2007,7 @@ ngbe_get_rx_port_offloads(struct rte_eth_dev *dev __rte_unused) offloads = DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM | + DEV_RX_OFFLOAD_KEEP_CRC | DEV_RX_OFFLOAD_SCATTER; return offloads; @@ -2032,6 +2051,10 @@ ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev, rxq->queue_id = queue_idx; rxq->reg_idx = queue_idx; rxq->port_id = dev->data->port_id; + if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) + rxq->crc_len = RTE_ETHER_CRC_LEN; + else + rxq->crc_len = 0; rxq->drop_en = rx_conf->rx_drop_en; rxq->rx_deferred_start = rx_conf->rx_deferred_start; rxq->offloads = offloads; @@ -2259,6 +2282,7 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev) uint32_t fctrl; uint32_t hlreg0; uint32_t srrctl; + uint32_t rdrxctl; uint32_t rxcsum; uint16_t buf_size; uint16_t i; @@ -2279,7 +2303,14 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev) fctrl |= NGBE_PSRCTL_BCA; wr32(hw, NGBE_PSRCTL, fctrl); + /* + * Configure CRC stripping, if any. + */ hlreg0 = rd32(hw, NGBE_SECRXCTL); + if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC) + hlreg0 &= ~NGBE_SECRXCTL_CRCSTRIP; + else + hlreg0 |= NGBE_SECRXCTL_CRCSTRIP; hlreg0 &= ~NGBE_SECRXCTL_XDSA; wr32(hw, NGBE_SECRXCTL, hlreg0); @@ -2290,6 +2321,15 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev) for (i = 0; i < dev->data->nb_rx_queues; i++) { rxq = dev->data->rx_queues[i]; + /* + * Reset crc_len in case it was changed after queue setup by a + * call to configure. 
+ */ + if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC) + rxq->crc_len = RTE_ETHER_CRC_LEN; + else + rxq->crc_len = 0; + /* Setup the Base and Length of the Rx Descriptor Rings */ bus_addr = rxq->rx_ring_phys_addr; wr32(hw, NGBE_RXBAL(rxq->reg_idx), @@ -2334,6 +2374,15 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev) wr32(hw, NGBE_PSRCTL, rxcsum); + if (hw->is_pf) { + rdrxctl = rd32(hw, NGBE_SECRXCTL); + if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC) + rdrxctl &= ~NGBE_SECRXCTL_CRCSTRIP; + else + rdrxctl |= NGBE_SECRXCTL_CRCSTRIP; + wr32(hw, NGBE_SECRXCTL, rdrxctl); + } + ngbe_set_rx_function(dev); return 0; diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h index 27c83f45a7..07b6e2374e 100644 --- a/drivers/net/ngbe/ngbe_rxtx.h +++ b/drivers/net/ngbe/ngbe_rxtx.h @@ -268,6 +268,7 @@ struct ngbe_rx_queue { /** Packet type mask for different NICs */ uint16_t pkt_type_mask; uint16_t port_id; /**< Device port identifier */ + uint8_t crc_len; /**< 0 if CRC stripped, 4 otherwise. 
*/ uint8_t drop_en; /**< If not 0, set SRRCTL.Drop_En */ uint8_t rx_deferred_start; /**< not in global dev start */ uint64_t offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */ From patchwork Wed Sep 8 08:37:32 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawen Wu X-Patchwork-Id: 98286 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id EDF6BA0C56; Wed, 8 Sep 2021 10:37:00 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id F419D4119A; Wed, 8 Sep 2021 10:36:36 +0200 (CEST) Received: from smtpbgeu1.qq.com (smtpbgeu1.qq.com [52.59.177.22]) by mails.dpdk.org (Postfix) with ESMTP id BE9AE41180 for ; Wed, 8 Sep 2021 10:36:33 +0200 (CEST) X-QQ-mid: bizesmtp47t1631090190tcr3p1oa Received: from wxdbg.localdomain.com (unknown [183.129.236.74]) by esmtp6.qq.com (ESMTP) with id ; Wed, 08 Sep 2021 16:36:29 +0800 (CST) X-QQ-SSF: 01400000002000E0G000B00A0000000 X-QQ-FEAT: 83ShfzFP0oC7bMyvJicNAZdrm6FzlE6nPH98ALn1W+2uRbos55SBlmTdfqlyi /aeITQA0Wr0EvdQoaZbSxTYpvrdxH9ylrOtw3NogUDnWf+U3brt3N5tPFMHXW4eADakv48w U8pTbCXQtlKc+RHmMwHO7nrM8nx6QqDoFoT9HR4kvM+oWe6WsDD9r19HguG9w+qh3/wrr/r OqZuPNRdPczDSTOn0mqhgRa0DvNojViJO1Sd+eSpXPu91x96HH30N4tLomIirqkpgap3+sP FYbJzw9ErHAHCKXLSXaGm9WCQkiif3cHQEPavLsMI737MKXT/nZtMCfp3RKrIg/A0nc0oXV UkBOAukFNRXbGlNN2echYo1EhMiS+tCEQxd+1cX X-QQ-GoodBg: 2 From: Jiawen Wu To: dev@dpdk.org Cc: Jiawen Wu Date: Wed, 8 Sep 2021 16:37:32 +0800 Message-Id: <20210908083758.312055-7-jiawenwu@trustnetic.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20210908083758.312055-1-jiawenwu@trustnetic.com> References: <20210908083758.312055-1-jiawenwu@trustnetic.com> MIME-Version: 1.0 X-QQ-SENDSIZE: 520 Feedback-ID: 
bizesmtp:trustnetic.com:qybgforeign:qybgforeign1 X-QQ-Bgrelay: 1 Subject: [dpdk-dev] [PATCH 06/32] net/ngbe: support jumbo frame X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add to support Rx jumbo frames. Signed-off-by: Jiawen Wu --- doc/guides/nics/features/ngbe.ini | 1 + doc/guides/nics/ngbe.rst | 1 + drivers/net/ngbe/ngbe_rxtx.c | 11 ++++++++++- 3 files changed, 12 insertions(+), 1 deletion(-) diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini index 2a472d9434..30fdfe62c7 100644 --- a/doc/guides/nics/features/ngbe.ini +++ b/doc/guides/nics/features/ngbe.ini @@ -8,6 +8,7 @@ Speed capabilities = Y Link status = Y Link status event = Y Queue start/stop = Y +Jumbo frame = Y Scattered Rx = Y TSO = Y CRC offload = P diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst index 6a6ae39243..702a455041 100644 --- a/doc/guides/nics/ngbe.rst +++ b/doc/guides/nics/ngbe.rst @@ -14,6 +14,7 @@ Features - Packet type information - Checksum offload - TSO offload +- Jumbo frames - Link state information - Scattered and gather for TX and RX diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c index f9d8cf9d19..4238fbe3b8 100644 --- a/drivers/net/ngbe/ngbe_rxtx.c +++ b/drivers/net/ngbe/ngbe_rxtx.c @@ -2008,6 +2008,7 @@ ngbe_get_rx_port_offloads(struct rte_eth_dev *dev __rte_unused) DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_KEEP_CRC | + DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_SCATTER; return offloads; @@ -2314,8 +2315,16 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev) hlreg0 &= ~NGBE_SECRXCTL_XDSA; wr32(hw, NGBE_SECRXCTL, hlreg0); - wr32m(hw, NGBE_FRMSZ, NGBE_FRMSZ_MAX_MASK, + /* + * Configure jumbo frame support, if any. 
+ */ + if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) { + wr32m(hw, NGBE_FRMSZ, NGBE_FRMSZ_MAX_MASK, + NGBE_FRMSZ_MAX(rx_conf->max_rx_pkt_len)); + } else { + wr32m(hw, NGBE_FRMSZ, NGBE_FRMSZ_MAX_MASK, NGBE_FRMSZ_MAX(NGBE_FRAME_SIZE_DFT)); + } /* Setup Rx queues */ for (i = 0; i < dev->data->nb_rx_queues; i++) { From patchwork Wed Sep 8 08:37:33 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawen Wu X-Patchwork-Id: 98288 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 780EAA0C56; Wed, 8 Sep 2021 10:37:13 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 80D7E411A5; Wed, 8 Sep 2021 10:36:39 +0200 (CEST) Received: from smtpbguseast2.qq.com (smtpbguseast2.qq.com [54.204.34.130]) by mails.dpdk.org (Postfix) with ESMTP id 4AFAA41162 for ; Wed, 8 Sep 2021 10:36:37 +0200 (CEST) X-QQ-mid: bizesmtp47t1631090192thx343wj Received: from wxdbg.localdomain.com (unknown [183.129.236.74]) by esmtp6.qq.com (ESMTP) with id ; Wed, 08 Sep 2021 16:36:31 +0800 (CST) X-QQ-SSF: 01400000002000E0G000B00A0000000 X-QQ-FEAT: 6PjtIMncaiyRnb1Dg142Oo4f0YtXKZA37B34vipdFJR3bw0QqZkIU65omX2Du vykj+iCYay4OQDeDdaHF60vSJnlqQqB5eGAlbN28SZQRlbwaCQ3tHg8/LfBqlBxJKy51ccA DDDPeAlF7OZ6l7BVx8evrZ3Rz5UEuKtLtgUHIwBO/AwIcFdK6ajkkLgMv3fO9619cgqGSxt PPD3dYgKncnOUnTo70UxUZHQaQazZBKdJBAgOp/vXg/TuuRDJMhGUJPw0fMa04c3qhBSViZ gtRMxpy/k90gIJ28MuraxhFtuBJ2AprnKvzpA43RXRrm8l9t5q1dmliBVcBhv8VS3FsAs4H 0GIr8dkISTBQHYCODF4MGrA2FkqJ4V99+Pju/II X-QQ-GoodBg: 2 From: Jiawen Wu To: dev@dpdk.org Cc: Jiawen Wu Date: Wed, 8 Sep 2021 16:37:33 +0800 Message-Id: <20210908083758.312055-8-jiawenwu@trustnetic.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20210908083758.312055-1-jiawenwu@trustnetic.com> 
References: <20210908083758.312055-1-jiawenwu@trustnetic.com> MIME-Version: 1.0 X-QQ-SENDSIZE: 520 Feedback-ID: bizesmtp:trustnetic.com:qybgforeign:qybgforeign1 X-QQ-Bgrelay: 1 Subject: [dpdk-dev] [PATCH 07/32] net/ngbe: support VLAN and QinQ offload X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Support to set VLAN and QinQ offload. Signed-off-by: Jiawen Wu --- doc/guides/nics/features/ngbe.ini | 2 + doc/guides/nics/ngbe.rst | 1 + drivers/net/ngbe/ngbe_ethdev.c | 273 ++++++++++++++++++++++++++++++ drivers/net/ngbe/ngbe_ethdev.h | 42 +++++ drivers/net/ngbe/ngbe_rxtx.c | 119 ++++++++++++- drivers/net/ngbe/ngbe_rxtx.h | 3 + 6 files changed, 434 insertions(+), 6 deletions(-) diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini index 30fdfe62c7..4ae2d66d15 100644 --- a/doc/guides/nics/features/ngbe.ini +++ b/doc/guides/nics/features/ngbe.ini @@ -12,6 +12,8 @@ Jumbo frame = Y Scattered Rx = Y TSO = Y CRC offload = P +VLAN offload = P +QinQ offload = P L3 checksum offload = P L4 checksum offload = P Inner L3 checksum = P diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst index 702a455041..9518a59443 100644 --- a/doc/guides/nics/ngbe.rst +++ b/doc/guides/nics/ngbe.rst @@ -13,6 +13,7 @@ Features - Packet type information - Checksum offload +- VLAN/QinQ stripping and inserting - TSO offload - Jumbo frames - Link state information diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c index e7d63f1b14..3903eb0a2c 100644 --- a/drivers/net/ngbe/ngbe_ethdev.c +++ b/drivers/net/ngbe/ngbe_ethdev.c @@ -17,6 +17,9 @@ static int ngbe_dev_close(struct rte_eth_dev *dev); static int ngbe_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete); +static void ngbe_vlan_hw_strip_enable(struct rte_eth_dev *dev, uint16_t queue); +static void 
ngbe_vlan_hw_strip_disable(struct rte_eth_dev *dev, + uint16_t queue); static void ngbe_dev_link_status_print(struct rte_eth_dev *dev); static int ngbe_dev_lsc_interrupt_setup(struct rte_eth_dev *dev, uint8_t on); @@ -27,6 +30,24 @@ static void ngbe_dev_interrupt_handler(void *param); static void ngbe_dev_interrupt_delayed_handler(void *param); static void ngbe_configure_msix(struct rte_eth_dev *dev); +#define NGBE_SET_HWSTRIP(h, q) do {\ + uint32_t idx = (q) / (sizeof((h)->bitmap[0]) * NBBY); \ + uint32_t bit = (q) % (sizeof((h)->bitmap[0]) * NBBY); \ + (h)->bitmap[idx] |= 1 << bit;\ + } while (0) + +#define NGBE_CLEAR_HWSTRIP(h, q) do {\ + uint32_t idx = (q) / (sizeof((h)->bitmap[0]) * NBBY); \ + uint32_t bit = (q) % (sizeof((h)->bitmap[0]) * NBBY); \ + (h)->bitmap[idx] &= ~(1 << bit);\ + } while (0) + +#define NGBE_GET_HWSTRIP(h, q, r) do {\ + uint32_t idx = (q) / (sizeof((h)->bitmap[0]) * NBBY); \ + uint32_t bit = (q) % (sizeof((h)->bitmap[0]) * NBBY); \ + (r) = (h)->bitmap[idx] >> bit & 1;\ + } while (0) + /* * The set of PCI devices this driver supports */ @@ -129,6 +150,8 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); struct ngbe_hw *hw = ngbe_dev_hw(eth_dev); + struct ngbe_vfta *shadow_vfta = NGBE_DEV_VFTA(eth_dev); + struct ngbe_hwstrip *hwstrip = NGBE_DEV_HWSTRIP(eth_dev); struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; const struct rte_memzone *mz; uint32_t ctrl_ext; @@ -242,6 +265,12 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused) return -ENOMEM; } + /* initialize the vfta */ + memset(shadow_vfta, 0, sizeof(*shadow_vfta)); + + /* initialize the hw strip bitmap*/ + memset(hwstrip, 0, sizeof(*hwstrip)); + ctrl_ext = rd32(hw, NGBE_PORTCTL); /* let hardware know driver is loaded */ ctrl_ext |= NGBE_PORTCTL_DRVLOAD; @@ -311,6 +340,237 @@ static struct rte_pci_driver rte_ngbe_pmd = { .remove = eth_ngbe_pci_remove, 
}; +void +ngbe_vlan_hw_filter_disable(struct rte_eth_dev *dev) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + uint32_t vlnctrl; + + PMD_INIT_FUNC_TRACE(); + + /* Filter Table Disable */ + vlnctrl = rd32(hw, NGBE_VLANCTL); + vlnctrl &= ~NGBE_VLANCTL_VFE; + wr32(hw, NGBE_VLANCTL, vlnctrl); +} + +void +ngbe_vlan_hw_filter_enable(struct rte_eth_dev *dev) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + struct ngbe_vfta *shadow_vfta = NGBE_DEV_VFTA(dev); + uint32_t vlnctrl; + uint16_t i; + + PMD_INIT_FUNC_TRACE(); + + /* Filter Table Enable */ + vlnctrl = rd32(hw, NGBE_VLANCTL); + vlnctrl &= ~NGBE_VLANCTL_CFIENA; + vlnctrl |= NGBE_VLANCTL_VFE; + wr32(hw, NGBE_VLANCTL, vlnctrl); + + /* write whatever is in local vfta copy */ + for (i = 0; i < NGBE_VFTA_SIZE; i++) + wr32(hw, NGBE_VLANTBL(i), shadow_vfta->vfta[i]); +} + +void +ngbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on) +{ + struct ngbe_hwstrip *hwstrip = NGBE_DEV_HWSTRIP(dev); + struct ngbe_rx_queue *rxq; + + if (queue >= NGBE_MAX_RX_QUEUE_NUM) + return; + + if (on) + NGBE_SET_HWSTRIP(hwstrip, queue); + else + NGBE_CLEAR_HWSTRIP(hwstrip, queue); + + if (queue >= dev->data->nb_rx_queues) + return; + + rxq = dev->data->rx_queues[queue]; + + if (on) { + rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED; + rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP; + } else { + rxq->vlan_flags = PKT_RX_VLAN; + rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP; + } +} + +static void +ngbe_vlan_hw_strip_disable(struct rte_eth_dev *dev, uint16_t queue) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + uint32_t ctrl; + + PMD_INIT_FUNC_TRACE(); + + ctrl = rd32(hw, NGBE_RXCFG(queue)); + ctrl &= ~NGBE_RXCFG_VLAN; + wr32(hw, NGBE_RXCFG(queue), ctrl); + + /* record those setting for HW strip per queue */ + ngbe_vlan_hw_strip_bitmap_set(dev, queue, 0); +} + +static void +ngbe_vlan_hw_strip_enable(struct rte_eth_dev *dev, uint16_t queue) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + uint32_t ctrl; + + PMD_INIT_FUNC_TRACE(); + + 
ctrl = rd32(hw, NGBE_RXCFG(queue)); + ctrl |= NGBE_RXCFG_VLAN; + wr32(hw, NGBE_RXCFG(queue), ctrl); + + /* record those setting for HW strip per queue */ + ngbe_vlan_hw_strip_bitmap_set(dev, queue, 1); +} + +static void +ngbe_vlan_hw_extend_disable(struct rte_eth_dev *dev) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + uint32_t ctrl; + + PMD_INIT_FUNC_TRACE(); + + ctrl = rd32(hw, NGBE_PORTCTL); + ctrl &= ~NGBE_PORTCTL_VLANEXT; + ctrl &= ~NGBE_PORTCTL_QINQ; + wr32(hw, NGBE_PORTCTL, ctrl); +} + +static void +ngbe_vlan_hw_extend_enable(struct rte_eth_dev *dev) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + uint32_t ctrl; + + PMD_INIT_FUNC_TRACE(); + + ctrl = rd32(hw, NGBE_PORTCTL); + ctrl |= NGBE_PORTCTL_VLANEXT | NGBE_PORTCTL_QINQ; + wr32(hw, NGBE_PORTCTL, ctrl); +} + +static void +ngbe_qinq_hw_strip_disable(struct rte_eth_dev *dev) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + uint32_t ctrl; + + PMD_INIT_FUNC_TRACE(); + + ctrl = rd32(hw, NGBE_PORTCTL); + ctrl &= ~NGBE_PORTCTL_QINQ; + wr32(hw, NGBE_PORTCTL, ctrl); +} + +static void +ngbe_qinq_hw_strip_enable(struct rte_eth_dev *dev) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + uint32_t ctrl; + + PMD_INIT_FUNC_TRACE(); + + ctrl = rd32(hw, NGBE_PORTCTL); + ctrl |= NGBE_PORTCTL_QINQ | NGBE_PORTCTL_VLANEXT; + wr32(hw, NGBE_PORTCTL, ctrl); +} + +void +ngbe_vlan_hw_strip_config(struct rte_eth_dev *dev) +{ + struct ngbe_rx_queue *rxq; + uint16_t i; + + PMD_INIT_FUNC_TRACE(); + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + rxq = dev->data->rx_queues[i]; + + if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) + ngbe_vlan_hw_strip_enable(dev, i); + else + ngbe_vlan_hw_strip_disable(dev, i); + } +} + +void +ngbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev, int mask) +{ + uint16_t i; + struct rte_eth_rxmode *rxmode; + struct ngbe_rx_queue *rxq; + + if (mask & ETH_VLAN_STRIP_MASK) { + rxmode = &dev->data->dev_conf.rxmode; + if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) + for (i = 0; i < dev->data->nb_rx_queues; 
i++) { + rxq = dev->data->rx_queues[i]; + rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP; + } + else + for (i = 0; i < dev->data->nb_rx_queues; i++) { + rxq = dev->data->rx_queues[i]; + rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP; + } + } +} + +static int +ngbe_vlan_offload_config(struct rte_eth_dev *dev, int mask) +{ + struct rte_eth_rxmode *rxmode; + rxmode = &dev->data->dev_conf.rxmode; + + if (mask & ETH_VLAN_STRIP_MASK) + ngbe_vlan_hw_strip_config(dev); + + if (mask & ETH_VLAN_FILTER_MASK) { + if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) + ngbe_vlan_hw_filter_enable(dev); + else + ngbe_vlan_hw_filter_disable(dev); + } + + if (mask & ETH_VLAN_EXTEND_MASK) { + if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND) + ngbe_vlan_hw_extend_enable(dev); + else + ngbe_vlan_hw_extend_disable(dev); + } + + if (mask & ETH_QINQ_STRIP_MASK) { + if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP) + ngbe_qinq_hw_strip_enable(dev); + else + ngbe_qinq_hw_strip_disable(dev); + } + + return 0; +} + +static int +ngbe_vlan_offload_set(struct rte_eth_dev *dev, int mask) +{ + ngbe_config_vlan_strip_on_all_queues(dev, mask); + + ngbe_vlan_offload_config(dev, mask); + + return 0; +} + static int ngbe_dev_configure(struct rte_eth_dev *dev) { @@ -363,6 +623,7 @@ ngbe_dev_start(struct rte_eth_dev *dev) bool link_up = false, negotiate = false; uint32_t speed = 0; uint32_t allowed_speeds = 0; + int mask = 0; int status; uint32_t *link_speeds; @@ -420,6 +681,16 @@ ngbe_dev_start(struct rte_eth_dev *dev) goto error; } + mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK | + ETH_VLAN_EXTEND_MASK; + err = ngbe_vlan_offload_config(dev, mask); + if (err != 0) { + PMD_INIT_LOG(ERR, "Unable to set VLAN offload"); + goto error; + } + + ngbe_configure_port(dev); + err = ngbe_dev_rxtx_start(dev); if (err < 0) { PMD_INIT_LOG(ERR, "Unable to start rxtx queues"); @@ -654,6 +925,7 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) dev_info->max_tx_queues = 
(uint16_t)hw->mac.max_tx_queues; dev_info->min_rx_bufsize = 1024; dev_info->max_rx_pktlen = 15872; + dev_info->rx_queue_offload_capa = ngbe_get_rx_queue_offloads(dev); dev_info->rx_offload_capa = (ngbe_get_rx_port_offloads(dev) | dev_info->rx_queue_offload_capa); dev_info->tx_queue_offload_capa = 0; @@ -1190,6 +1462,7 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = { .dev_close = ngbe_dev_close, .dev_reset = ngbe_dev_reset, .link_update = ngbe_dev_link_update, + .vlan_offload_set = ngbe_vlan_offload_set, .rx_queue_start = ngbe_dev_rx_queue_start, .rx_queue_stop = ngbe_dev_rx_queue_stop, .tx_queue_start = ngbe_dev_tx_queue_start, diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h index cbf3ab558f..8b3a1cdc3d 100644 --- a/drivers/net/ngbe/ngbe_ethdev.h +++ b/drivers/net/ngbe/ngbe_ethdev.h @@ -15,6 +15,17 @@ #define NGBE_FLAG_MACSEC ((uint32_t)(1 << 3)) #define NGBE_FLAG_NEED_LINK_CONFIG ((uint32_t)(1 << 4)) +#define NGBE_VFTA_SIZE 128 +#define NGBE_VLAN_TAG_SIZE 4 +/*Default value of Max Rx Queue*/ +#define NGBE_MAX_RX_QUEUE_NUM 8 + +#ifndef NBBY +#define NBBY 8 /* number of bits in a byte */ +#endif +#define NGBE_HWSTRIP_BITMAP_SIZE \ + (NGBE_MAX_RX_QUEUE_NUM / (sizeof(uint32_t) * NBBY)) + #define NGBE_QUEUE_ITR_INTERVAL_DEFAULT 500 /* 500us */ #define NGBE_MISC_VEC_ID RTE_INTR_VEC_ZERO_OFFSET @@ -29,12 +40,22 @@ struct ngbe_interrupt { uint64_t mask_orig; /* save mask during delayed handler */ }; +struct ngbe_vfta { + uint32_t vfta[NGBE_VFTA_SIZE]; +}; + +struct ngbe_hwstrip { + uint32_t bitmap[NGBE_HWSTRIP_BITMAP_SIZE]; +}; + /* * Structure to store private data for each driver instance (for each port). 
*/ struct ngbe_adapter { struct ngbe_hw hw; struct ngbe_interrupt intr; + struct ngbe_vfta shadow_vfta; + struct ngbe_hwstrip hwstrip; bool rx_bulk_alloc_allowed; }; @@ -64,6 +85,12 @@ ngbe_dev_intr(struct rte_eth_dev *dev) return intr; } +#define NGBE_DEV_VFTA(dev) \ + (&((struct ngbe_adapter *)(dev)->data->dev_private)->shadow_vfta) + +#define NGBE_DEV_HWSTRIP(dev) \ + (&((struct ngbe_adapter *)(dev)->data->dev_private)->hwstrip) + /* * Rx/Tx function prototypes */ @@ -126,10 +153,21 @@ uint16_t ngbe_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, void ngbe_set_ivar_map(struct ngbe_hw *hw, int8_t direction, uint8_t queue, uint8_t msix_vector); +void ngbe_configure_port(struct rte_eth_dev *dev); + int ngbe_dev_link_update_share(struct rte_eth_dev *dev, int wait_to_complete); +/* + * misc function prototypes + */ +void ngbe_vlan_hw_filter_enable(struct rte_eth_dev *dev); + +void ngbe_vlan_hw_filter_disable(struct rte_eth_dev *dev); + +void ngbe_vlan_hw_strip_config(struct rte_eth_dev *dev); + #define NGBE_LINK_DOWN_CHECK_TIMEOUT 4000 /* ms */ #define NGBE_LINK_UP_CHECK_TIMEOUT 1000 /* ms */ #define NGBE_VMDQ_NUM_UC_MAC 4096 /* Maximum nb. of UC MAC addr. 
*/ @@ -148,5 +186,9 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev, #define NGBE_DEFAULT_TX_WTHRESH 0 const uint32_t *ngbe_dev_supported_ptypes_get(struct rte_eth_dev *dev); +void ngbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, + uint16_t queue, bool on); +void ngbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev, + int mask); #endif /* _NGBE_ETHDEV_H_ */ diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c index 4238fbe3b8..1151173b02 100644 --- a/drivers/net/ngbe/ngbe_rxtx.c +++ b/drivers/net/ngbe/ngbe_rxtx.c @@ -21,6 +21,7 @@ static const u64 NGBE_TX_OFFLOAD_MASK = (PKT_TX_IP_CKSUM | PKT_TX_OUTER_IPV4 | PKT_TX_IPV6 | PKT_TX_IPV4 | + PKT_TX_VLAN_PKT | PKT_TX_L4_MASK | PKT_TX_TCP_SEG | PKT_TX_TUNNEL_MASK | @@ -346,6 +347,11 @@ ngbe_set_xmit_ctx(struct ngbe_tx_queue *txq, vlan_macip_lens |= NGBE_TXD_MACLEN(tx_offload.l2_len); } + if (ol_flags & PKT_TX_VLAN_PKT) { + tx_offload_mask.vlan_tci |= ~0; + vlan_macip_lens |= NGBE_TXD_VLAN(tx_offload.vlan_tci); + } + txq->ctx_cache[ctx_idx].flags = ol_flags; txq->ctx_cache[ctx_idx].tx_offload.data[0] = tx_offload_mask.data[0] & tx_offload.data[0]; @@ -416,6 +422,8 @@ tx_desc_cksum_flags_to_olinfo(uint64_t ol_flags) tmp |= NGBE_TXD_IPCS; tmp |= NGBE_TXD_L4CS; } + if (ol_flags & PKT_TX_VLAN_PKT) + tmp |= NGBE_TXD_CC; return tmp; } @@ -425,6 +433,8 @@ tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags) { uint32_t cmdtype = 0; + if (ol_flags & PKT_TX_VLAN_PKT) + cmdtype |= NGBE_TXD_VLE; if (ol_flags & PKT_TX_TCP_SEG) cmdtype |= NGBE_TXD_TSE; return cmdtype; @@ -443,6 +453,8 @@ tx_desc_ol_flags_to_ptid(uint64_t oflags, uint32_t ptype) /* L2 level */ ptype = RTE_PTYPE_L2_ETHER; + if (oflags & PKT_TX_VLAN) + ptype |= RTE_PTYPE_L2_ETHER_VLAN; /* L3 level */ if (oflags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IP_CKSUM)) @@ -606,6 +618,7 @@ ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, tx_offload.l2_len = tx_pkt->l2_len; tx_offload.l3_len = tx_pkt->l3_len; tx_offload.l4_len = 
tx_pkt->l4_len; + tx_offload.vlan_tci = tx_pkt->vlan_tci; tx_offload.tso_segsz = tx_pkt->tso_segsz; tx_offload.outer_l2_len = tx_pkt->outer_l2_len; tx_offload.outer_l3_len = tx_pkt->outer_l3_len; @@ -884,6 +897,23 @@ ngbe_rxd_pkt_info_to_pkt_type(uint32_t pkt_info, uint16_t ptid_mask) return ngbe_decode_ptype(ptid); } +static inline uint64_t +rx_desc_status_to_pkt_flags(uint32_t rx_status, uint64_t vlan_flags) +{ + uint64_t pkt_flags; + + /* + * Check if VLAN present only. + * Do not check whether L3/L4 rx checksum done by NIC or not, + * That can be found from rte_eth_rxmode.offloads flag + */ + pkt_flags = (rx_status & NGBE_RXD_STAT_VLAN && + vlan_flags & PKT_RX_VLAN_STRIPPED) + ? vlan_flags : 0; + + return pkt_flags; +} + static inline uint64_t rx_desc_error_to_pkt_flags(uint32_t rx_status) { @@ -972,9 +1002,12 @@ ngbe_rx_scan_hw_ring(struct ngbe_rx_queue *rxq) rxq->crc_len; mb->data_len = pkt_len; mb->pkt_len = pkt_len; + mb->vlan_tci = rte_le_to_cpu_16(rxdp[j].qw1.hi.tag); /* convert descriptor fields to rte mbuf flags */ - pkt_flags = rx_desc_error_to_pkt_flags(s[j]); + pkt_flags = rx_desc_status_to_pkt_flags(s[j], + rxq->vlan_flags); + pkt_flags |= rx_desc_error_to_pkt_flags(s[j]); mb->ol_flags = pkt_flags; mb->packet_type = ngbe_rxd_pkt_info_to_pkt_type(pkt_info[j], @@ -1270,6 +1303,7 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, * - Rx port identifier. * 2) integrate hardware offload data, if any: * - IP checksum flag, + * - VLAN TCI, if any, * - error flags. 
*/ pkt_len = (uint16_t)(rte_le_to_cpu_16(rxd.qw1.hi.len) - @@ -1283,7 +1317,12 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, rxm->port = rxq->port_id; pkt_info = rte_le_to_cpu_32(rxd.qw0.dw0); - pkt_flags = rx_desc_error_to_pkt_flags(staterr); + /* Only valid if PKT_RX_VLAN set in pkt_flags */ + rxm->vlan_tci = rte_le_to_cpu_16(rxd.qw1.hi.tag); + + pkt_flags = rx_desc_status_to_pkt_flags(staterr, + rxq->vlan_flags); + pkt_flags |= rx_desc_error_to_pkt_flags(staterr); rxm->ol_flags = pkt_flags; rxm->packet_type = ngbe_rxd_pkt_info_to_pkt_type(pkt_info, rxq->pkt_type_mask); @@ -1328,6 +1367,7 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, * - RX port identifier * - hardware offload data, if any: * - IP checksum flag + * - VLAN TCI, if any * - error flags * @head HEAD of the packet cluster * @desc HW descriptor to get data from @@ -1342,8 +1382,13 @@ ngbe_fill_cluster_head_buf(struct rte_mbuf *head, struct ngbe_rx_desc *desc, head->port = rxq->port_id; + /* The vlan_tci field is only valid when PKT_RX_VLAN is + * set in the pkt_flags field. 
+ */ + head->vlan_tci = rte_le_to_cpu_16(desc->qw1.hi.tag); pkt_info = rte_le_to_cpu_32(desc->qw0.dw0); - pkt_flags = rx_desc_error_to_pkt_flags(staterr); + pkt_flags = rx_desc_status_to_pkt_flags(staterr, rxq->vlan_flags); + pkt_flags |= rx_desc_error_to_pkt_flags(staterr); head->ol_flags = pkt_flags; head->packet_type = ngbe_rxd_pkt_info_to_pkt_type(pkt_info, rxq->pkt_type_mask); @@ -1714,10 +1759,10 @@ uint64_t ngbe_get_tx_port_offloads(struct rte_eth_dev *dev) { uint64_t tx_offload_capa; - - RTE_SET_USED(dev); + struct ngbe_hw *hw = ngbe_dev_hw(dev); tx_offload_capa = + DEV_TX_OFFLOAD_VLAN_INSERT | DEV_TX_OFFLOAD_IPV4_CKSUM | DEV_TX_OFFLOAD_UDP_CKSUM | DEV_TX_OFFLOAD_TCP_CKSUM | @@ -1730,6 +1775,9 @@ ngbe_get_tx_port_offloads(struct rte_eth_dev *dev) DEV_TX_OFFLOAD_IPIP_TNL_TSO | DEV_TX_OFFLOAD_MULTI_SEGS; + if (hw->is_pf) + tx_offload_capa |= DEV_TX_OFFLOAD_QINQ_INSERT; + return tx_offload_capa; } @@ -2000,17 +2048,29 @@ ngbe_reset_rx_queue(struct ngbe_adapter *adapter, struct ngbe_rx_queue *rxq) } uint64_t -ngbe_get_rx_port_offloads(struct rte_eth_dev *dev __rte_unused) +ngbe_get_rx_queue_offloads(struct rte_eth_dev *dev __rte_unused) +{ + return DEV_RX_OFFLOAD_VLAN_STRIP; +} + +uint64_t +ngbe_get_rx_port_offloads(struct rte_eth_dev *dev) { uint64_t offloads; + struct ngbe_hw *hw = ngbe_dev_hw(dev); offloads = DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_KEEP_CRC | DEV_RX_OFFLOAD_JUMBO_FRAME | + DEV_RX_OFFLOAD_VLAN_FILTER | DEV_RX_OFFLOAD_SCATTER; + if (hw->is_pf) + offloads |= (DEV_RX_OFFLOAD_QINQ_STRIP | + DEV_RX_OFFLOAD_VLAN_EXTEND); + return offloads; } @@ -2189,6 +2249,40 @@ ngbe_dev_free_queues(struct rte_eth_dev *dev) dev->data->nb_tx_queues = 0; } +void ngbe_configure_port(struct rte_eth_dev *dev) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + int i = 0; + uint16_t tpids[8] = {RTE_ETHER_TYPE_VLAN, RTE_ETHER_TYPE_QINQ, + 0x9100, 0x9200, + 0x0000, 0x0000, + 0x0000, 0x0000}; + + PMD_INIT_FUNC_TRACE(); + 
+ /* default outer vlan tpid */ + wr32(hw, NGBE_EXTAG, + NGBE_EXTAG_ETAG(RTE_ETHER_TYPE_ETAG) | + NGBE_EXTAG_VLAN(RTE_ETHER_TYPE_QINQ)); + + /* default inner vlan tpid */ + wr32m(hw, NGBE_VLANCTL, + NGBE_VLANCTL_TPID_MASK, + NGBE_VLANCTL_TPID(RTE_ETHER_TYPE_VLAN)); + wr32m(hw, NGBE_DMATXCTRL, + NGBE_DMATXCTRL_TPID_MASK, + NGBE_DMATXCTRL_TPID(RTE_ETHER_TYPE_VLAN)); + + /* default vlan tpid filters */ + for (i = 0; i < 8; i++) { + wr32m(hw, NGBE_TAGTPID(i / 2), + (i % 2 ? NGBE_TAGTPID_MSB_MASK + : NGBE_TAGTPID_LSB_MASK), + (i % 2 ? NGBE_TAGTPID_MSB(tpids[i]) + : NGBE_TAGTPID_LSB(tpids[i]))); + } +} + static int ngbe_alloc_rx_queue_mbufs(struct ngbe_rx_queue *rxq) { @@ -2326,6 +2420,12 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev) NGBE_FRMSZ_MAX(NGBE_FRAME_SIZE_DFT)); } + /* + * Assume no header split and no VLAN strip support + * on any Rx queue first . + */ + rx_conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP; + /* Setup Rx queues */ for (i = 0; i < dev->data->nb_rx_queues; i++) { rxq = dev->data->rx_queues[i]; @@ -2366,6 +2466,13 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev) srrctl |= NGBE_RXCFG_PKTLEN(buf_size); wr32(hw, NGBE_RXCFG(rxq->reg_idx), srrctl); + + /* It adds dual VLAN length for supporting dual VLAN */ + if (dev->data->dev_conf.rxmode.max_rx_pkt_len + + 2 * NGBE_VLAN_TAG_SIZE > buf_size) + dev->data->scattered_rx = 1; + if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) + rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP; } if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER) diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h index 07b6e2374e..812bc57c9e 100644 --- a/drivers/net/ngbe/ngbe_rxtx.h +++ b/drivers/net/ngbe/ngbe_rxtx.h @@ -271,6 +271,8 @@ struct ngbe_rx_queue { uint8_t crc_len; /**< 0 if CRC stripped, 4 otherwise. 
*/ uint8_t drop_en; /**< If not 0, set SRRCTL.Drop_En */ uint8_t rx_deferred_start; /**< not in global dev start */ + /** flags to set in mbuf when a vlan is detected */ + uint64_t vlan_flags; uint64_t offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */ /** need to alloc dummy mbuf, for wraparound when scanning hw ring */ struct rte_mbuf fake_mbuf; @@ -370,6 +372,7 @@ void ngbe_set_tx_function(struct rte_eth_dev *dev, struct ngbe_tx_queue *txq); void ngbe_set_rx_function(struct rte_eth_dev *dev); uint64_t ngbe_get_tx_port_offloads(struct rte_eth_dev *dev); +uint64_t ngbe_get_rx_queue_offloads(struct rte_eth_dev *dev); uint64_t ngbe_get_rx_port_offloads(struct rte_eth_dev *dev); #endif /* _NGBE_RXTX_H_ */

From patchwork Wed Sep 8 08:37:34 2021
From: Jiawen Wu
To: dev@dpdk.org
Date: Wed, 8 Sep 2021 16:37:34 +0800
Message-Id: <20210908083758.312055-9-jiawenwu@trustnetic.com>
In-Reply-To: <20210908083758.312055-1-jiawenwu@trustnetic.com>
References: <20210908083758.312055-1-jiawenwu@trustnetic.com>
Subject: [dpdk-dev] [PATCH 08/32] net/ngbe: support basic statistics

Support reading and clearing basic statistics, and configuring the per-queue stats counter mapping.
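The per-queue mapping and counter reads in this patch rest on two small pieces of arithmetic: packing one queue-to-counter field per queue into 32-bit mapping registers, and taking wrap-compensated deltas of free-running 32-bit hardware counters. A minimal standalone sketch of both — the field widths, masks, and helper names here are assumptions for illustration, not the driver's actual definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed layout: each 32-bit queue-stats mapping register holds four
 * 8-bit fields; field (queue_id % 4) of register (queue_id / 4) selects
 * the stat counter that queue feeds. */
#define QSM_BITS_PER_FIELD 8u
#define QSM_FIELDS_PER_REG 4u
#define QSM_FIELD_MASK     0xFFu

/* Which mapping register a given queue lands in. */
static uint8_t qsm_reg_index(uint16_t queue_id)
{
	return (uint8_t)(queue_id / QSM_FIELDS_PER_REG);
}

/* Return the register value after remapping queue_id to stat_idx:
 * clear the queue's old field, then OR in the new counter index. */
static uint32_t qsm_map(uint32_t reg, uint16_t queue_id, uint8_t stat_idx)
{
	uint32_t shift = QSM_BITS_PER_FIELD * (queue_id % QSM_FIELDS_PER_REG);

	reg &= ~(QSM_FIELD_MASK << shift);
	reg |= (uint32_t)stat_idx << shift;
	return reg;
}

/* Delta of a free-running 32-bit hardware counter since the last
 * snapshot, compensating for at most one wrap between reads. */
static uint64_t counter_delta_32(uint32_t current, uint32_t last)
{
	uint64_t cur = current;

	if (cur < last)		/* counter wrapped since the snapshot */
		cur += 0x100000000ULL;
	return (cur - last) & 0xFFFFFFFFULL;
}
```

The driver additionally snapshots each counter once at start (`hw->offset_loaded`, `hw->qp_last[]`) so that successive deltas can be accumulated into 64-bit software totals.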
Signed-off-by: Jiawen Wu --- doc/guides/nics/features/ngbe.ini | 2 + doc/guides/nics/ngbe.rst | 1 + drivers/net/ngbe/base/ngbe_dummy.h | 5 + drivers/net/ngbe/base/ngbe_hw.c | 101 ++++++++++ drivers/net/ngbe/base/ngbe_hw.h | 1 + drivers/net/ngbe/base/ngbe_type.h | 134 +++++++++++++ drivers/net/ngbe/ngbe_ethdev.c | 300 +++++++++++++++++++++++++++++ drivers/net/ngbe/ngbe_ethdev.h | 19 ++ 8 files changed, 563 insertions(+) diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini index 4ae2d66d15..f310fb102a 100644 --- a/doc/guides/nics/features/ngbe.ini +++ b/doc/guides/nics/features/ngbe.ini @@ -19,6 +19,8 @@ L4 checksum offload = P Inner L3 checksum = P Inner L4 checksum = P Packet type parsing = Y +Basic stats = Y +Stats per queue = Y Multiprocess aware = Y Linux = Y ARMv8 = Y diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst index 9518a59443..64c07e4741 100644 --- a/doc/guides/nics/ngbe.rst +++ b/doc/guides/nics/ngbe.rst @@ -15,6 +15,7 @@ Features - Checksum offload - VLAN/QinQ stripping and inserting - TSO offload +- Port hardware statistics - Jumbo frames - Link state information - Scattered and gather for TX and RX diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h index 8863acef0d..0def116c53 100644 --- a/drivers/net/ngbe/base/ngbe_dummy.h +++ b/drivers/net/ngbe/base/ngbe_dummy.h @@ -55,6 +55,10 @@ static inline s32 ngbe_mac_stop_hw_dummy(struct ngbe_hw *TUP0) { return NGBE_ERR_OPS_DUMMY; } +static inline s32 ngbe_mac_clear_hw_cntrs_dummy(struct ngbe_hw *TUP0) +{ + return NGBE_ERR_OPS_DUMMY; +} static inline s32 ngbe_mac_get_mac_addr_dummy(struct ngbe_hw *TUP0, u8 *TUP1) { return NGBE_ERR_OPS_DUMMY; @@ -178,6 +182,7 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw) hw->mac.reset_hw = ngbe_mac_reset_hw_dummy; hw->mac.start_hw = ngbe_mac_start_hw_dummy; hw->mac.stop_hw = ngbe_mac_stop_hw_dummy; + hw->mac.clear_hw_cntrs = ngbe_mac_clear_hw_cntrs_dummy; hw->mac.get_mac_addr = 
ngbe_mac_get_mac_addr_dummy; hw->mac.enable_rx_dma = ngbe_mac_enable_rx_dma_dummy; hw->mac.disable_sec_rx_path = ngbe_mac_disable_sec_rx_path_dummy; diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c index 6b575fc67b..f302df5d9d 100644 --- a/drivers/net/ngbe/base/ngbe_hw.c +++ b/drivers/net/ngbe/base/ngbe_hw.c @@ -19,6 +19,9 @@ s32 ngbe_start_hw(struct ngbe_hw *hw) { DEBUGFUNC("ngbe_start_hw"); + /* Clear statistics registers */ + hw->mac.clear_hw_cntrs(hw); + /* Clear adapter stopped flag */ hw->adapter_stopped = false; @@ -159,6 +162,7 @@ s32 ngbe_reset_hw_em(struct ngbe_hw *hw) msec_delay(50); ngbe_reset_misc_em(hw); + hw->mac.clear_hw_cntrs(hw); msec_delay(50); @@ -175,6 +179,102 @@ s32 ngbe_reset_hw_em(struct ngbe_hw *hw) return status; } +/** + * ngbe_clear_hw_cntrs - Generic clear hardware counters + * @hw: pointer to hardware structure + * + * Clears all hardware statistics counters by reading them from the hardware + * Statistics counters are clear on read. 
+ **/ +s32 ngbe_clear_hw_cntrs(struct ngbe_hw *hw) +{ + u16 i = 0; + + DEBUGFUNC("ngbe_clear_hw_cntrs"); + + /* QP Stats */ + /* don't write clear queue stats */ + for (i = 0; i < NGBE_MAX_QP; i++) { + hw->qp_last[i].rx_qp_packets = 0; + hw->qp_last[i].tx_qp_packets = 0; + hw->qp_last[i].rx_qp_bytes = 0; + hw->qp_last[i].tx_qp_bytes = 0; + hw->qp_last[i].rx_qp_mc_packets = 0; + hw->qp_last[i].tx_qp_mc_packets = 0; + hw->qp_last[i].rx_qp_bc_packets = 0; + hw->qp_last[i].tx_qp_bc_packets = 0; + } + + /* PB Stats */ + rd32(hw, NGBE_PBRXLNKXON); + rd32(hw, NGBE_PBRXLNKXOFF); + rd32(hw, NGBE_PBTXLNKXON); + rd32(hw, NGBE_PBTXLNKXOFF); + + /* DMA Stats */ + rd32(hw, NGBE_DMARXPKT); + rd32(hw, NGBE_DMATXPKT); + + rd64(hw, NGBE_DMARXOCTL); + rd64(hw, NGBE_DMATXOCTL); + + /* MAC Stats */ + rd64(hw, NGBE_MACRXERRCRCL); + rd64(hw, NGBE_MACRXMPKTL); + rd64(hw, NGBE_MACTXMPKTL); + + rd64(hw, NGBE_MACRXPKTL); + rd64(hw, NGBE_MACTXPKTL); + rd64(hw, NGBE_MACRXGBOCTL); + + rd64(hw, NGBE_MACRXOCTL); + rd32(hw, NGBE_MACTXOCTL); + + rd64(hw, NGBE_MACRX1TO64L); + rd64(hw, NGBE_MACRX65TO127L); + rd64(hw, NGBE_MACRX128TO255L); + rd64(hw, NGBE_MACRX256TO511L); + rd64(hw, NGBE_MACRX512TO1023L); + rd64(hw, NGBE_MACRX1024TOMAXL); + rd64(hw, NGBE_MACTX1TO64L); + rd64(hw, NGBE_MACTX65TO127L); + rd64(hw, NGBE_MACTX128TO255L); + rd64(hw, NGBE_MACTX256TO511L); + rd64(hw, NGBE_MACTX512TO1023L); + rd64(hw, NGBE_MACTX1024TOMAXL); + + rd64(hw, NGBE_MACRXERRLENL); + rd32(hw, NGBE_MACRXOVERSIZE); + rd32(hw, NGBE_MACRXJABBER); + + /* MACsec Stats */ + rd32(hw, NGBE_LSECTX_UTPKT); + rd32(hw, NGBE_LSECTX_ENCPKT); + rd32(hw, NGBE_LSECTX_PROTPKT); + rd32(hw, NGBE_LSECTX_ENCOCT); + rd32(hw, NGBE_LSECTX_PROTOCT); + rd32(hw, NGBE_LSECRX_UTPKT); + rd32(hw, NGBE_LSECRX_BTPKT); + rd32(hw, NGBE_LSECRX_NOSCIPKT); + rd32(hw, NGBE_LSECRX_UNSCIPKT); + rd32(hw, NGBE_LSECRX_DECOCT); + rd32(hw, NGBE_LSECRX_VLDOCT); + rd32(hw, NGBE_LSECRX_UNCHKPKT); + rd32(hw, NGBE_LSECRX_DLYPKT); + rd32(hw, NGBE_LSECRX_LATEPKT); + for (i 
= 0; i < 2; i++) { + rd32(hw, NGBE_LSECRX_OKPKT(i)); + rd32(hw, NGBE_LSECRX_INVPKT(i)); + rd32(hw, NGBE_LSECRX_BADPKT(i)); + } + for (i = 0; i < 4; i++) { + rd32(hw, NGBE_LSECRX_INVSAPKT(i)); + rd32(hw, NGBE_LSECRX_BADSAPKT(i)); + } + + return 0; +} + /** * ngbe_get_mac_addr - Generic get MAC address * @hw: pointer to hardware structure @@ -988,6 +1088,7 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw) mac->init_hw = ngbe_init_hw; mac->reset_hw = ngbe_reset_hw_em; mac->start_hw = ngbe_start_hw; + mac->clear_hw_cntrs = ngbe_clear_hw_cntrs; mac->enable_rx_dma = ngbe_enable_rx_dma; mac->get_mac_addr = ngbe_get_mac_addr; mac->stop_hw = ngbe_stop_hw; diff --git a/drivers/net/ngbe/base/ngbe_hw.h b/drivers/net/ngbe/base/ngbe_hw.h index 17a0a03c88..6a08c02bee 100644 --- a/drivers/net/ngbe/base/ngbe_hw.h +++ b/drivers/net/ngbe/base/ngbe_hw.h @@ -17,6 +17,7 @@ s32 ngbe_init_hw(struct ngbe_hw *hw); s32 ngbe_start_hw(struct ngbe_hw *hw); s32 ngbe_reset_hw_em(struct ngbe_hw *hw); s32 ngbe_stop_hw(struct ngbe_hw *hw); +s32 ngbe_clear_hw_cntrs(struct ngbe_hw *hw); s32 ngbe_get_mac_addr(struct ngbe_hw *hw, u8 *mac_addr); void ngbe_set_lan_id_multi_port(struct ngbe_hw *hw); diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h index 28540e4ba0..c13f0208fd 100644 --- a/drivers/net/ngbe/base/ngbe_type.h +++ b/drivers/net/ngbe/base/ngbe_type.h @@ -9,6 +9,7 @@ #define NGBE_LINK_UP_TIME 90 /* 9.0 Seconds */ #define NGBE_FRAME_SIZE_DFT (1522) /* Default frame size, +FCS */ +#define NGBE_MAX_QP (8) #define NGBE_ALIGN 128 /* as intel did */ #define NGBE_ISB_SIZE 16 @@ -77,6 +78,127 @@ struct ngbe_bus_info { u8 lan_id; }; +/* Statistics counters collected by the MAC */ +/* PB[] RxTx */ +struct ngbe_pb_stats { + u64 tx_pb_xon_packets; + u64 rx_pb_xon_packets; + u64 tx_pb_xoff_packets; + u64 rx_pb_xoff_packets; + u64 rx_pb_dropped; + u64 rx_pb_mbuf_alloc_errors; + u64 tx_pb_xon2off_packets; +}; + +/* QP[] RxTx */ +struct ngbe_qp_stats { + u64 rx_qp_packets; + u64 
tx_qp_packets; + u64 rx_qp_bytes; + u64 tx_qp_bytes; + u64 rx_qp_mc_packets; +}; + +struct ngbe_hw_stats { + /* MNG RxTx */ + u64 mng_bmc2host_packets; + u64 mng_host2bmc_packets; + /* Basix RxTx */ + u64 rx_drop_packets; + u64 tx_drop_packets; + u64 rx_dma_drop; + u64 tx_secdrp_packets; + u64 rx_packets; + u64 tx_packets; + u64 rx_bytes; + u64 tx_bytes; + u64 rx_total_bytes; + u64 rx_total_packets; + u64 tx_total_packets; + u64 rx_total_missed_packets; + u64 rx_broadcast_packets; + u64 tx_broadcast_packets; + u64 rx_multicast_packets; + u64 tx_multicast_packets; + u64 rx_management_packets; + u64 tx_management_packets; + u64 rx_management_dropped; + + /* Basic Error */ + u64 rx_crc_errors; + u64 rx_illegal_byte_errors; + u64 rx_error_bytes; + u64 rx_mac_short_packet_dropped; + u64 rx_length_errors; + u64 rx_undersize_errors; + u64 rx_fragment_errors; + u64 rx_oversize_errors; + u64 rx_jabber_errors; + u64 rx_l3_l4_xsum_error; + u64 mac_local_errors; + u64 mac_remote_errors; + + /* MACSEC */ + u64 tx_macsec_pkts_untagged; + u64 tx_macsec_pkts_encrypted; + u64 tx_macsec_pkts_protected; + u64 tx_macsec_octets_encrypted; + u64 tx_macsec_octets_protected; + u64 rx_macsec_pkts_untagged; + u64 rx_macsec_pkts_badtag; + u64 rx_macsec_pkts_nosci; + u64 rx_macsec_pkts_unknownsci; + u64 rx_macsec_octets_decrypted; + u64 rx_macsec_octets_validated; + u64 rx_macsec_sc_pkts_unchecked; + u64 rx_macsec_sc_pkts_delayed; + u64 rx_macsec_sc_pkts_late; + u64 rx_macsec_sa_pkts_ok; + u64 rx_macsec_sa_pkts_invalid; + u64 rx_macsec_sa_pkts_notvalid; + u64 rx_macsec_sa_pkts_unusedsa; + u64 rx_macsec_sa_pkts_notusingsa; + + /* MAC RxTx */ + u64 rx_size_64_packets; + u64 rx_size_65_to_127_packets; + u64 rx_size_128_to_255_packets; + u64 rx_size_256_to_511_packets; + u64 rx_size_512_to_1023_packets; + u64 rx_size_1024_to_max_packets; + u64 tx_size_64_packets; + u64 tx_size_65_to_127_packets; + u64 tx_size_128_to_255_packets; + u64 tx_size_256_to_511_packets; + u64 tx_size_512_to_1023_packets; 
+ u64 tx_size_1024_to_max_packets; + + /* Flow Control */ + u64 tx_xon_packets; + u64 rx_xon_packets; + u64 tx_xoff_packets; + u64 rx_xoff_packets; + + u64 rx_up_dropped; + + u64 rdb_pkt_cnt; + u64 rdb_repli_cnt; + u64 rdb_drp_cnt; + + /* QP[] RxTx */ + struct { + u64 rx_qp_packets; + u64 tx_qp_packets; + u64 rx_qp_bytes; + u64 tx_qp_bytes; + u64 rx_qp_mc_packets; + u64 tx_qp_mc_packets; + u64 rx_qp_bc_packets; + u64 tx_qp_bc_packets; + } qp[NGBE_MAX_QP]; + +}; + struct ngbe_rom_info { s32 (*init_params)(struct ngbe_hw *hw); s32 (*validate_checksum)(struct ngbe_hw *hw, u16 *checksum_val); @@ -96,6 +218,7 @@ struct ngbe_mac_info { s32 (*reset_hw)(struct ngbe_hw *hw); s32 (*start_hw)(struct ngbe_hw *hw); s32 (*stop_hw)(struct ngbe_hw *hw); + s32 (*clear_hw_cntrs)(struct ngbe_hw *hw); s32 (*get_mac_addr)(struct ngbe_hw *hw, u8 *mac_addr); s32 (*enable_rx_dma)(struct ngbe_hw *hw, u32 regval); s32 (*disable_sec_rx_path)(struct ngbe_hw *hw); @@ -195,7 +318,18 @@ struct ngbe_hw { u32 q_rx_regs[8 * 4]; u32 q_tx_regs[8 * 4]; + bool offset_loaded; bool is_pf; + struct { + u64 rx_qp_packets; + u64 tx_qp_packets; + u64 rx_qp_bytes; + u64 tx_qp_bytes; + u64 rx_qp_mc_packets; + u64 tx_qp_mc_packets; + u64 rx_qp_bc_packets; + u64 tx_qp_bc_packets; + } qp_last[NGBE_MAX_QP]; }; #include "ngbe_regs.h" diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c index 3903eb0a2c..3d459718b1 100644 --- a/drivers/net/ngbe/ngbe_ethdev.c +++ b/drivers/net/ngbe/ngbe_ethdev.c @@ -17,6 +17,7 @@ static int ngbe_dev_close(struct rte_eth_dev *dev); static int ngbe_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete); +static int ngbe_dev_stats_reset(struct rte_eth_dev *dev); static void ngbe_vlan_hw_strip_enable(struct rte_eth_dev *dev, uint16_t queue); static void ngbe_vlan_hw_strip_disable(struct rte_eth_dev *dev, uint16_t queue); @@ -122,6 +123,56 @@ ngbe_disable_intr(struct ngbe_hw *hw) ngbe_flush(hw); } +static int +ngbe_dev_queue_stats_mapping_set(struct 
rte_eth_dev *eth_dev, + uint16_t queue_id, + uint8_t stat_idx, + uint8_t is_rx) +{ + struct ngbe_stat_mappings *stat_mappings = + NGBE_DEV_STAT_MAPPINGS(eth_dev); + uint32_t qsmr_mask = 0; + uint32_t clearing_mask = QMAP_FIELD_RESERVED_BITS_MASK; + uint32_t q_map; + uint8_t n, offset; + + if (stat_idx & !QMAP_FIELD_RESERVED_BITS_MASK) + return -EIO; + + PMD_INIT_LOG(DEBUG, "Setting port %d, %s queue_id %d to stat index %d", + (int)(eth_dev->data->port_id), is_rx ? "RX" : "TX", + queue_id, stat_idx); + + n = (uint8_t)(queue_id / NB_QMAP_FIELDS_PER_QSM_REG); + if (n >= NGBE_NB_STAT_MAPPING) { + PMD_INIT_LOG(ERR, "Nb of stat mapping registers exceeded"); + return -EIO; + } + offset = (uint8_t)(queue_id % NB_QMAP_FIELDS_PER_QSM_REG); + + /* Now clear any previous stat_idx set */ + clearing_mask <<= (QSM_REG_NB_BITS_PER_QMAP_FIELD * offset); + if (!is_rx) + stat_mappings->tqsm[n] &= ~clearing_mask; + else + stat_mappings->rqsm[n] &= ~clearing_mask; + + q_map = (uint32_t)stat_idx; + q_map &= QMAP_FIELD_RESERVED_BITS_MASK; + qsmr_mask = q_map << (QSM_REG_NB_BITS_PER_QMAP_FIELD * offset); + if (!is_rx) + stat_mappings->tqsm[n] |= qsmr_mask; + else + stat_mappings->rqsm[n] |= qsmr_mask; + + PMD_INIT_LOG(DEBUG, "Set port %d, %s queue_id %d to stat index %d", + (int)(eth_dev->data->port_id), is_rx ? "RX" : "TX", + queue_id, stat_idx); + PMD_INIT_LOG(DEBUG, "%s[%d] = 0x%08x", is_rx ? "RQSMR" : "TQSM", n, + is_rx ? 
stat_mappings->rqsm[n] : stat_mappings->tqsm[n]); + return 0; +} + /* * Ensure that all locks are released before first NVM or PHY access */ @@ -236,6 +287,9 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused) return -EIO; } + /* Reset the hw statistics */ + ngbe_dev_stats_reset(eth_dev); + /* disable interrupt */ ngbe_disable_intr(hw); @@ -616,6 +670,7 @@ static int ngbe_dev_start(struct rte_eth_dev *dev) { struct ngbe_hw *hw = ngbe_dev_hw(dev); + struct ngbe_hw_stats *hw_stats = NGBE_DEV_STATS(dev); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; uint32_t intr_vector = 0; @@ -780,6 +835,9 @@ ngbe_dev_start(struct rte_eth_dev *dev) */ ngbe_dev_link_update(dev, 0); + ngbe_read_stats_registers(hw, hw_stats); + hw->offset_loaded = 1; + return 0; error: @@ -916,6 +974,245 @@ ngbe_dev_reset(struct rte_eth_dev *dev) return ret; } +#define UPDATE_QP_COUNTER_32bit(reg, last_counter, counter) \ + { \ + uint32_t current_counter = rd32(hw, reg); \ + if (current_counter < last_counter) \ + current_counter += 0x100000000LL; \ + if (!hw->offset_loaded) \ + last_counter = current_counter; \ + counter = current_counter - last_counter; \ + counter &= 0xFFFFFFFFLL; \ + } + +#define UPDATE_QP_COUNTER_36bit(reg_lsb, reg_msb, last_counter, counter) \ + { \ + uint64_t current_counter_lsb = rd32(hw, reg_lsb); \ + uint64_t current_counter_msb = rd32(hw, reg_msb); \ + uint64_t current_counter = (current_counter_msb << 32) | \ + current_counter_lsb; \ + if (current_counter < last_counter) \ + current_counter += 0x1000000000LL; \ + if (!hw->offset_loaded) \ + last_counter = current_counter; \ + counter = current_counter - last_counter; \ + counter &= 0xFFFFFFFFFLL; \ + } + +void +ngbe_read_stats_registers(struct ngbe_hw *hw, + struct ngbe_hw_stats *hw_stats) +{ + unsigned int i; + + /* QP Stats */ + for (i = 0; i < hw->nb_rx_queues; i++) { + UPDATE_QP_COUNTER_32bit(NGBE_QPRXPKT(i), + 
hw->qp_last[i].rx_qp_packets, + hw_stats->qp[i].rx_qp_packets); + UPDATE_QP_COUNTER_36bit(NGBE_QPRXOCTL(i), NGBE_QPRXOCTH(i), + hw->qp_last[i].rx_qp_bytes, + hw_stats->qp[i].rx_qp_bytes); + UPDATE_QP_COUNTER_32bit(NGBE_QPRXMPKT(i), + hw->qp_last[i].rx_qp_mc_packets, + hw_stats->qp[i].rx_qp_mc_packets); + UPDATE_QP_COUNTER_32bit(NGBE_QPRXBPKT(i), + hw->qp_last[i].rx_qp_bc_packets, + hw_stats->qp[i].rx_qp_bc_packets); + } + + for (i = 0; i < hw->nb_tx_queues; i++) { + UPDATE_QP_COUNTER_32bit(NGBE_QPTXPKT(i), + hw->qp_last[i].tx_qp_packets, + hw_stats->qp[i].tx_qp_packets); + UPDATE_QP_COUNTER_36bit(NGBE_QPTXOCTL(i), NGBE_QPTXOCTH(i), + hw->qp_last[i].tx_qp_bytes, + hw_stats->qp[i].tx_qp_bytes); + UPDATE_QP_COUNTER_32bit(NGBE_QPTXMPKT(i), + hw->qp_last[i].tx_qp_mc_packets, + hw_stats->qp[i].tx_qp_mc_packets); + UPDATE_QP_COUNTER_32bit(NGBE_QPTXBPKT(i), + hw->qp_last[i].tx_qp_bc_packets, + hw_stats->qp[i].tx_qp_bc_packets); + } + + /* PB Stats */ + hw_stats->rx_up_dropped += rd32(hw, NGBE_PBRXMISS); + hw_stats->rdb_pkt_cnt += rd32(hw, NGBE_PBRXPKT); + hw_stats->rdb_repli_cnt += rd32(hw, NGBE_PBRXREP); + hw_stats->rdb_drp_cnt += rd32(hw, NGBE_PBRXDROP); + hw_stats->tx_xoff_packets += rd32(hw, NGBE_PBTXLNKXOFF); + hw_stats->tx_xon_packets += rd32(hw, NGBE_PBTXLNKXON); + + hw_stats->rx_xon_packets += rd32(hw, NGBE_PBRXLNKXON); + hw_stats->rx_xoff_packets += rd32(hw, NGBE_PBRXLNKXOFF); + + /* DMA Stats */ + hw_stats->rx_drop_packets += rd32(hw, NGBE_DMARXDROP); + hw_stats->tx_drop_packets += rd32(hw, NGBE_DMATXDROP); + hw_stats->rx_dma_drop += rd32(hw, NGBE_DMARXDROP); + hw_stats->tx_secdrp_packets += rd32(hw, NGBE_DMATXSECDROP); + hw_stats->rx_packets += rd32(hw, NGBE_DMARXPKT); + hw_stats->tx_packets += rd32(hw, NGBE_DMATXPKT); + hw_stats->rx_bytes += rd64(hw, NGBE_DMARXOCTL); + hw_stats->tx_bytes += rd64(hw, NGBE_DMATXOCTL); + + /* MAC Stats */ + hw_stats->rx_crc_errors += rd64(hw, NGBE_MACRXERRCRCL); + hw_stats->rx_multicast_packets += rd64(hw, NGBE_MACRXMPKTL); + 
hw_stats->tx_multicast_packets += rd64(hw, NGBE_MACTXMPKTL); + + hw_stats->rx_total_packets += rd64(hw, NGBE_MACRXPKTL); + hw_stats->tx_total_packets += rd64(hw, NGBE_MACTXPKTL); + hw_stats->rx_total_bytes += rd64(hw, NGBE_MACRXGBOCTL); + + hw_stats->rx_broadcast_packets += rd64(hw, NGBE_MACRXOCTL); + hw_stats->tx_broadcast_packets += rd32(hw, NGBE_MACTXOCTL); + + hw_stats->rx_size_64_packets += rd64(hw, NGBE_MACRX1TO64L); + hw_stats->rx_size_65_to_127_packets += rd64(hw, NGBE_MACRX65TO127L); + hw_stats->rx_size_128_to_255_packets += rd64(hw, NGBE_MACRX128TO255L); + hw_stats->rx_size_256_to_511_packets += rd64(hw, NGBE_MACRX256TO511L); + hw_stats->rx_size_512_to_1023_packets += + rd64(hw, NGBE_MACRX512TO1023L); + hw_stats->rx_size_1024_to_max_packets += + rd64(hw, NGBE_MACRX1024TOMAXL); + hw_stats->tx_size_64_packets += rd64(hw, NGBE_MACTX1TO64L); + hw_stats->tx_size_65_to_127_packets += rd64(hw, NGBE_MACTX65TO127L); + hw_stats->tx_size_128_to_255_packets += rd64(hw, NGBE_MACTX128TO255L); + hw_stats->tx_size_256_to_511_packets += rd64(hw, NGBE_MACTX256TO511L); + hw_stats->tx_size_512_to_1023_packets += + rd64(hw, NGBE_MACTX512TO1023L); + hw_stats->tx_size_1024_to_max_packets += + rd64(hw, NGBE_MACTX1024TOMAXL); + + hw_stats->rx_undersize_errors += rd64(hw, NGBE_MACRXERRLENL); + hw_stats->rx_oversize_errors += rd32(hw, NGBE_MACRXOVERSIZE); + hw_stats->rx_jabber_errors += rd32(hw, NGBE_MACRXJABBER); + + /* MNG Stats */ + hw_stats->mng_bmc2host_packets = rd32(hw, NGBE_MNGBMC2OS); + hw_stats->mng_host2bmc_packets = rd32(hw, NGBE_MNGOS2BMC); + hw_stats->rx_management_packets = rd32(hw, NGBE_DMARXMNG); + hw_stats->tx_management_packets = rd32(hw, NGBE_DMATXMNG); + + /* MACsec Stats */ + hw_stats->tx_macsec_pkts_untagged += rd32(hw, NGBE_LSECTX_UTPKT); + hw_stats->tx_macsec_pkts_encrypted += + rd32(hw, NGBE_LSECTX_ENCPKT); + hw_stats->tx_macsec_pkts_protected += + rd32(hw, NGBE_LSECTX_PROTPKT); + hw_stats->tx_macsec_octets_encrypted += + rd32(hw, NGBE_LSECTX_ENCOCT); + 
hw_stats->tx_macsec_octets_protected += + rd32(hw, NGBE_LSECTX_PROTOCT); + hw_stats->rx_macsec_pkts_untagged += rd32(hw, NGBE_LSECRX_UTPKT); + hw_stats->rx_macsec_pkts_badtag += rd32(hw, NGBE_LSECRX_BTPKT); + hw_stats->rx_macsec_pkts_nosci += rd32(hw, NGBE_LSECRX_NOSCIPKT); + hw_stats->rx_macsec_pkts_unknownsci += rd32(hw, NGBE_LSECRX_UNSCIPKT); + hw_stats->rx_macsec_octets_decrypted += rd32(hw, NGBE_LSECRX_DECOCT); + hw_stats->rx_macsec_octets_validated += rd32(hw, NGBE_LSECRX_VLDOCT); + hw_stats->rx_macsec_sc_pkts_unchecked += + rd32(hw, NGBE_LSECRX_UNCHKPKT); + hw_stats->rx_macsec_sc_pkts_delayed += rd32(hw, NGBE_LSECRX_DLYPKT); + hw_stats->rx_macsec_sc_pkts_late += rd32(hw, NGBE_LSECRX_LATEPKT); + for (i = 0; i < 2; i++) { + hw_stats->rx_macsec_sa_pkts_ok += + rd32(hw, NGBE_LSECRX_OKPKT(i)); + hw_stats->rx_macsec_sa_pkts_invalid += + rd32(hw, NGBE_LSECRX_INVPKT(i)); + hw_stats->rx_macsec_sa_pkts_notvalid += + rd32(hw, NGBE_LSECRX_BADPKT(i)); + } + for (i = 0; i < 4; i++) { + hw_stats->rx_macsec_sa_pkts_unusedsa += + rd32(hw, NGBE_LSECRX_INVSAPKT(i)); + hw_stats->rx_macsec_sa_pkts_notusingsa += + rd32(hw, NGBE_LSECRX_BADSAPKT(i)); + } + hw_stats->rx_total_missed_packets = + hw_stats->rx_up_dropped; +} + +static int +ngbe_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + struct ngbe_hw_stats *hw_stats = NGBE_DEV_STATS(dev); + struct ngbe_stat_mappings *stat_mappings = + NGBE_DEV_STAT_MAPPINGS(dev); + uint32_t i, j; + + ngbe_read_stats_registers(hw, hw_stats); + + if (stats == NULL) + return -EINVAL; + + /* Fill out the rte_eth_stats statistics structure */ + stats->ipackets = hw_stats->rx_packets; + stats->ibytes = hw_stats->rx_bytes; + stats->opackets = hw_stats->tx_packets; + stats->obytes = hw_stats->tx_bytes; + + memset(&stats->q_ipackets, 0, sizeof(stats->q_ipackets)); + memset(&stats->q_opackets, 0, sizeof(stats->q_opackets)); + memset(&stats->q_ibytes, 0, sizeof(stats->q_ibytes)); + 
memset(&stats->q_obytes, 0, sizeof(stats->q_obytes)); + memset(&stats->q_errors, 0, sizeof(stats->q_errors)); + for (i = 0; i < NGBE_MAX_QP; i++) { + uint32_t n = i / NB_QMAP_FIELDS_PER_QSM_REG; + uint32_t offset = (i % NB_QMAP_FIELDS_PER_QSM_REG) * 8; + uint32_t q_map; + + q_map = (stat_mappings->rqsm[n] >> offset) + & QMAP_FIELD_RESERVED_BITS_MASK; + j = (q_map < RTE_ETHDEV_QUEUE_STAT_CNTRS + ? q_map : q_map % RTE_ETHDEV_QUEUE_STAT_CNTRS); + stats->q_ipackets[j] += hw_stats->qp[i].rx_qp_packets; + stats->q_ibytes[j] += hw_stats->qp[i].rx_qp_bytes; + + q_map = (stat_mappings->tqsm[n] >> offset) + & QMAP_FIELD_RESERVED_BITS_MASK; + j = (q_map < RTE_ETHDEV_QUEUE_STAT_CNTRS + ? q_map : q_map % RTE_ETHDEV_QUEUE_STAT_CNTRS); + stats->q_opackets[j] += hw_stats->qp[i].tx_qp_packets; + stats->q_obytes[j] += hw_stats->qp[i].tx_qp_bytes; + } + + /* Rx Errors */ + stats->imissed = hw_stats->rx_total_missed_packets + + hw_stats->rx_dma_drop; + stats->ierrors = hw_stats->rx_crc_errors + + hw_stats->rx_mac_short_packet_dropped + + hw_stats->rx_length_errors + + hw_stats->rx_undersize_errors + + hw_stats->rx_oversize_errors + + hw_stats->rx_illegal_byte_errors + + hw_stats->rx_error_bytes + + hw_stats->rx_fragment_errors; + + /* Tx Errors */ + stats->oerrors = 0; + return 0; +} + +static int +ngbe_dev_stats_reset(struct rte_eth_dev *dev) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + struct ngbe_hw_stats *hw_stats = NGBE_DEV_STATS(dev); + + /* HW registers are cleared on read */ + hw->offset_loaded = 0; + ngbe_dev_stats_get(dev, NULL); + hw->offset_loaded = 1; + + /* Reset software totals */ + memset(hw_stats, 0, sizeof(*hw_stats)); + + return 0; +} + static int ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) { @@ -1462,6 +1759,9 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = { .dev_close = ngbe_dev_close, .dev_reset = ngbe_dev_reset, .link_update = ngbe_dev_link_update, + .stats_get = ngbe_dev_stats_get, + .stats_reset = ngbe_dev_stats_reset, 
+ .queue_stats_mapping_set = ngbe_dev_queue_stats_mapping_set, .vlan_offload_set = ngbe_vlan_offload_set, .rx_queue_start = ngbe_dev_rx_queue_start, .rx_queue_stop = ngbe_dev_rx_queue_stop, diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h index 8b3a1cdc3d..c0f1a50c66 100644 --- a/drivers/net/ngbe/ngbe_ethdev.h +++ b/drivers/net/ngbe/ngbe_ethdev.h @@ -40,6 +40,15 @@ struct ngbe_interrupt { uint64_t mask_orig; /* save mask during delayed handler */ }; +#define NGBE_NB_STAT_MAPPING 32 +#define QSM_REG_NB_BITS_PER_QMAP_FIELD 8 +#define NB_QMAP_FIELDS_PER_QSM_REG 4 +#define QMAP_FIELD_RESERVED_BITS_MASK 0x0f +struct ngbe_stat_mappings { + uint32_t tqsm[NGBE_NB_STAT_MAPPING]; + uint32_t rqsm[NGBE_NB_STAT_MAPPING]; +}; + struct ngbe_vfta { uint32_t vfta[NGBE_VFTA_SIZE]; }; @@ -53,7 +62,9 @@ struct ngbe_hwstrip { */ struct ngbe_adapter { struct ngbe_hw hw; + struct ngbe_hw_stats stats; struct ngbe_interrupt intr; + struct ngbe_stat_mappings stat_mappings; struct ngbe_vfta shadow_vfta; struct ngbe_hwstrip hwstrip; bool rx_bulk_alloc_allowed; @@ -76,6 +87,9 @@ ngbe_dev_hw(struct rte_eth_dev *dev) return hw; } +#define NGBE_DEV_STATS(dev) \ + (&((struct ngbe_adapter *)(dev)->data->dev_private)->stats) + static inline struct ngbe_interrupt * ngbe_dev_intr(struct rte_eth_dev *dev) { @@ -85,6 +99,9 @@ ngbe_dev_intr(struct rte_eth_dev *dev) return intr; } +#define NGBE_DEV_STAT_MAPPINGS(dev) \ + (&((struct ngbe_adapter *)(dev)->data->dev_private)->stat_mappings) + #define NGBE_DEV_VFTA(dev) \ (&((struct ngbe_adapter *)(dev)->data->dev_private)->shadow_vfta) @@ -190,5 +207,7 @@ void ngbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on); void ngbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev, int mask); +void ngbe_read_stats_registers(struct ngbe_hw *hw, + struct ngbe_hw_stats *hw_stats); #endif /* _NGBE_ETHDEV_H_ */ From patchwork Wed Sep 8 08:37:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jiawen Wu
X-Patchwork-Id: 98294
From: Jiawen Wu
To: dev@dpdk.org
Date: Wed, 8 Sep 2021 16:37:35 +0800
Message-Id: <20210908083758.312055-10-jiawenwu@trustnetic.com>
In-Reply-To: <20210908083758.312055-1-jiawenwu@trustnetic.com>
Subject: [dpdk-dev] [PATCH 09/32] net/ngbe: support device xstats
Errors-To: dev-bounces@dpdk.org Sender: "dev" Add device extended stats get from reading hardware registers. Signed-off-by: Jiawen Wu --- doc/guides/nics/features/ngbe.ini | 1 + drivers/net/ngbe/ngbe_ethdev.c | 316 ++++++++++++++++++++++++++++++ drivers/net/ngbe/ngbe_ethdev.h | 6 + 3 files changed, 323 insertions(+) diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini index f310fb102a..42101020dd 100644 --- a/doc/guides/nics/features/ngbe.ini +++ b/doc/guides/nics/features/ngbe.ini @@ -20,6 +20,7 @@ Inner L3 checksum = P Inner L4 checksum = P Packet type parsing = Y Basic stats = Y +Extended stats = Y Stats per queue = Y Multiprocess aware = Y Linux = Y diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c index 3d459718b1..45d7c48011 100644 --- a/drivers/net/ngbe/ngbe_ethdev.c +++ b/drivers/net/ngbe/ngbe_ethdev.c @@ -84,6 +84,104 @@ static const struct rte_eth_desc_lim tx_desc_lim = { static const struct eth_dev_ops ngbe_eth_dev_ops; +#define HW_XSTAT(m) {#m, offsetof(struct ngbe_hw_stats, m)} +#define HW_XSTAT_NAME(m, n) {n, offsetof(struct ngbe_hw_stats, m)} +static const struct rte_ngbe_xstats_name_off rte_ngbe_stats_strings[] = { + /* MNG RxTx */ + HW_XSTAT(mng_bmc2host_packets), + HW_XSTAT(mng_host2bmc_packets), + /* Basic RxTx */ + HW_XSTAT(rx_packets), + HW_XSTAT(tx_packets), + HW_XSTAT(rx_bytes), + HW_XSTAT(tx_bytes), + HW_XSTAT(rx_total_bytes), + HW_XSTAT(rx_total_packets), + HW_XSTAT(tx_total_packets), + HW_XSTAT(rx_total_missed_packets), + HW_XSTAT(rx_broadcast_packets), + HW_XSTAT(rx_multicast_packets), + HW_XSTAT(rx_management_packets), + HW_XSTAT(tx_management_packets), + HW_XSTAT(rx_management_dropped), + + /* Basic Error */ + HW_XSTAT(rx_crc_errors), + HW_XSTAT(rx_illegal_byte_errors), + HW_XSTAT(rx_error_bytes), + HW_XSTAT(rx_mac_short_packet_dropped), + HW_XSTAT(rx_length_errors), + HW_XSTAT(rx_undersize_errors), + HW_XSTAT(rx_fragment_errors), + HW_XSTAT(rx_oversize_errors), + 
HW_XSTAT(rx_jabber_errors), + HW_XSTAT(rx_l3_l4_xsum_error), + HW_XSTAT(mac_local_errors), + HW_XSTAT(mac_remote_errors), + + /* MACSEC */ + HW_XSTAT(tx_macsec_pkts_untagged), + HW_XSTAT(tx_macsec_pkts_encrypted), + HW_XSTAT(tx_macsec_pkts_protected), + HW_XSTAT(tx_macsec_octets_encrypted), + HW_XSTAT(tx_macsec_octets_protected), + HW_XSTAT(rx_macsec_pkts_untagged), + HW_XSTAT(rx_macsec_pkts_badtag), + HW_XSTAT(rx_macsec_pkts_nosci), + HW_XSTAT(rx_macsec_pkts_unknownsci), + HW_XSTAT(rx_macsec_octets_decrypted), + HW_XSTAT(rx_macsec_octets_validated), + HW_XSTAT(rx_macsec_sc_pkts_unchecked), + HW_XSTAT(rx_macsec_sc_pkts_delayed), + HW_XSTAT(rx_macsec_sc_pkts_late), + HW_XSTAT(rx_macsec_sa_pkts_ok), + HW_XSTAT(rx_macsec_sa_pkts_invalid), + HW_XSTAT(rx_macsec_sa_pkts_notvalid), + HW_XSTAT(rx_macsec_sa_pkts_unusedsa), + HW_XSTAT(rx_macsec_sa_pkts_notusingsa), + + /* MAC RxTx */ + HW_XSTAT(rx_size_64_packets), + HW_XSTAT(rx_size_65_to_127_packets), + HW_XSTAT(rx_size_128_to_255_packets), + HW_XSTAT(rx_size_256_to_511_packets), + HW_XSTAT(rx_size_512_to_1023_packets), + HW_XSTAT(rx_size_1024_to_max_packets), + HW_XSTAT(tx_size_64_packets), + HW_XSTAT(tx_size_65_to_127_packets), + HW_XSTAT(tx_size_128_to_255_packets), + HW_XSTAT(tx_size_256_to_511_packets), + HW_XSTAT(tx_size_512_to_1023_packets), + HW_XSTAT(tx_size_1024_to_max_packets), + + /* Flow Control */ + HW_XSTAT(tx_xon_packets), + HW_XSTAT(rx_xon_packets), + HW_XSTAT(tx_xoff_packets), + HW_XSTAT(rx_xoff_packets), + + HW_XSTAT_NAME(tx_xon_packets, "tx_flow_control_xon_packets"), + HW_XSTAT_NAME(rx_xon_packets, "rx_flow_control_xon_packets"), + HW_XSTAT_NAME(tx_xoff_packets, "tx_flow_control_xoff_packets"), + HW_XSTAT_NAME(rx_xoff_packets, "rx_flow_control_xoff_packets"), +}; + +#define NGBE_NB_HW_STATS (sizeof(rte_ngbe_stats_strings) / \ + sizeof(rte_ngbe_stats_strings[0])) + +/* Per-queue statistics */ +#define QP_XSTAT(m) {#m, offsetof(struct ngbe_hw_stats, qp[0].m)} +static const struct rte_ngbe_xstats_name_off 
rte_ngbe_qp_strings[] = { + QP_XSTAT(rx_qp_packets), + QP_XSTAT(tx_qp_packets), + QP_XSTAT(rx_qp_bytes), + QP_XSTAT(tx_qp_bytes), + QP_XSTAT(rx_qp_mc_packets), +}; + +#define NGBE_NB_QP_STATS (sizeof(rte_ngbe_qp_strings) / \ + sizeof(rte_ngbe_qp_strings[0])) + static inline int32_t ngbe_pf_reset_hw(struct ngbe_hw *hw) { @@ -1213,6 +1311,219 @@ ngbe_dev_stats_reset(struct rte_eth_dev *dev) return 0; } +/* This function calculates the number of xstats based on the current config */ +static unsigned +ngbe_xstats_calc_num(struct rte_eth_dev *dev) +{ + int nb_queues = max(dev->data->nb_rx_queues, dev->data->nb_tx_queues); + return NGBE_NB_HW_STATS + + NGBE_NB_QP_STATS * nb_queues; +} + +static inline int +ngbe_get_name_by_id(uint32_t id, char *name, uint32_t size) +{ + int nb, st; + + /* Extended stats from ngbe_hw_stats */ + if (id < NGBE_NB_HW_STATS) { + snprintf(name, size, "[hw]%s", + rte_ngbe_stats_strings[id].name); + return 0; + } + id -= NGBE_NB_HW_STATS; + + /* Queue Stats */ + if (id < NGBE_NB_QP_STATS * NGBE_MAX_QP) { + nb = id / NGBE_NB_QP_STATS; + st = id % NGBE_NB_QP_STATS; + snprintf(name, size, "[q%u]%s", nb, + rte_ngbe_qp_strings[st].name); + return 0; + } + id -= NGBE_NB_QP_STATS * NGBE_MAX_QP; + + return -(int)(id + 1); +} + +static inline int +ngbe_get_offset_by_id(uint32_t id, uint32_t *offset) +{ + int nb, st; + + /* Extended stats from ngbe_hw_stats */ + if (id < NGBE_NB_HW_STATS) { + *offset = rte_ngbe_stats_strings[id].offset; + return 0; + } + id -= NGBE_NB_HW_STATS; + + /* Queue Stats */ + if (id < NGBE_NB_QP_STATS * NGBE_MAX_QP) { + nb = id / NGBE_NB_QP_STATS; + st = id % NGBE_NB_QP_STATS; + *offset = rte_ngbe_qp_strings[st].offset + + nb * (NGBE_NB_QP_STATS * sizeof(uint64_t)); + return 0; + } + + return -1; +} + +static int ngbe_dev_xstats_get_names(struct rte_eth_dev *dev, + struct rte_eth_xstat_name *xstats_names, unsigned int limit) +{ + unsigned int i, count; + + count = ngbe_xstats_calc_num(dev); + if (xstats_names == NULL) + return 
count; + + /* Note: limit >= cnt_stats checked upstream + * in rte_eth_xstats_names() + */ + limit = min(limit, count); + + /* Extended stats from ngbe_hw_stats */ + for (i = 0; i < limit; i++) { + if (ngbe_get_name_by_id(i, xstats_names[i].name, + sizeof(xstats_names[i].name))) { + PMD_INIT_LOG(WARNING, "id value %d isn't valid", i); + break; + } + } + + return i; +} + +static int ngbe_dev_xstats_get_names_by_id(struct rte_eth_dev *dev, + struct rte_eth_xstat_name *xstats_names, + const uint64_t *ids, + unsigned int limit) +{ + unsigned int i; + + if (ids == NULL) + return ngbe_dev_xstats_get_names(dev, xstats_names, limit); + + for (i = 0; i < limit; i++) { + if (ngbe_get_name_by_id(ids[i], xstats_names[i].name, + sizeof(xstats_names[i].name))) { + PMD_INIT_LOG(WARNING, "id value %d isn't valid", i); + return -1; + } + } + + return i; +} + +static int +ngbe_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats, + unsigned int limit) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + struct ngbe_hw_stats *hw_stats = NGBE_DEV_STATS(dev); + unsigned int i, count; + + ngbe_read_stats_registers(hw, hw_stats); + + /* If this is a reset xstats is NULL, and we have cleared the + * registers by reading them. 
+ */ + count = ngbe_xstats_calc_num(dev); + if (xstats == NULL) + return count; + + limit = min(limit, ngbe_xstats_calc_num(dev)); + + /* Extended stats from ngbe_hw_stats */ + for (i = 0; i < limit; i++) { + uint32_t offset = 0; + + if (ngbe_get_offset_by_id(i, &offset)) { + PMD_INIT_LOG(WARNING, "id value %d isn't valid", i); + break; + } + xstats[i].value = *(uint64_t *)(((char *)hw_stats) + offset); + xstats[i].id = i; + } + + return i; +} + +static int +ngbe_dev_xstats_get_(struct rte_eth_dev *dev, uint64_t *values, + unsigned int limit) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + struct ngbe_hw_stats *hw_stats = NGBE_DEV_STATS(dev); + unsigned int i, count; + + ngbe_read_stats_registers(hw, hw_stats); + + /* If this is a reset xstats is NULL, and we have cleared the + * registers by reading them. + */ + count = ngbe_xstats_calc_num(dev); + if (values == NULL) + return count; + + limit = min(limit, ngbe_xstats_calc_num(dev)); + + /* Extended stats from ngbe_hw_stats */ + for (i = 0; i < limit; i++) { + uint32_t offset; + + if (ngbe_get_offset_by_id(i, &offset)) { + PMD_INIT_LOG(WARNING, "id value %d isn't valid", i); + break; + } + values[i] = *(uint64_t *)(((char *)hw_stats) + offset); + } + + return i; +} + +static int +ngbe_dev_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids, + uint64_t *values, unsigned int limit) +{ + struct ngbe_hw_stats *hw_stats = NGBE_DEV_STATS(dev); + unsigned int i; + + if (ids == NULL) + return ngbe_dev_xstats_get_(dev, values, limit); + + for (i = 0; i < limit; i++) { + uint32_t offset; + + if (ngbe_get_offset_by_id(ids[i], &offset)) { + PMD_INIT_LOG(WARNING, "id value %d isn't valid", i); + break; + } + values[i] = *(uint64_t *)(((char *)hw_stats) + offset); + } + + return i; +} + +static int +ngbe_dev_xstats_reset(struct rte_eth_dev *dev) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + struct ngbe_hw_stats *hw_stats = NGBE_DEV_STATS(dev); + + /* HW registers are cleared on read */ + hw->offset_loaded = 0; + 
ngbe_read_stats_registers(hw, hw_stats); + hw->offset_loaded = 1; + + /* Reset software totals */ + memset(hw_stats, 0, sizeof(*hw_stats)); + + return 0; +} + static int ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) { @@ -1760,7 +2071,12 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = { .dev_reset = ngbe_dev_reset, .link_update = ngbe_dev_link_update, .stats_get = ngbe_dev_stats_get, + .xstats_get = ngbe_dev_xstats_get, + .xstats_get_by_id = ngbe_dev_xstats_get_by_id, .stats_reset = ngbe_dev_stats_reset, + .xstats_reset = ngbe_dev_xstats_reset, + .xstats_get_names = ngbe_dev_xstats_get_names, + .xstats_get_names_by_id = ngbe_dev_xstats_get_names_by_id, .queue_stats_mapping_set = ngbe_dev_queue_stats_mapping_set, .vlan_offload_set = ngbe_vlan_offload_set, .rx_queue_start = ngbe_dev_rx_queue_start, diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h index c0f1a50c66..1527dcc022 100644 --- a/drivers/net/ngbe/ngbe_ethdev.h +++ b/drivers/net/ngbe/ngbe_ethdev.h @@ -202,6 +202,12 @@ void ngbe_vlan_hw_strip_config(struct rte_eth_dev *dev); #define NGBE_DEFAULT_TX_HTHRESH 0 #define NGBE_DEFAULT_TX_WTHRESH 0 +/* store statistics names and its offset in stats structure */ +struct rte_ngbe_xstats_name_off { + char name[RTE_ETH_XSTATS_NAME_SIZE]; + unsigned int offset; +}; + const uint32_t *ngbe_dev_supported_ptypes_get(struct rte_eth_dev *dev); void ngbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on); From patchwork Wed Sep 8 08:37:36 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawen Wu X-Patchwork-Id: 98290 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id E09A3A0C56; Wed, 8 Sep 2021 10:37:30 +0200 (CEST) Received: from 
From: Jiawen Wu
To: dev@dpdk.org
Date: Wed, 8 Sep 2021 16:37:36 +0800
Message-Id: <20210908083758.312055-11-jiawenwu@trustnetic.com>
In-Reply-To: <20210908083758.312055-1-jiawenwu@trustnetic.com>
Subject: [dpdk-dev] [PATCH 10/32] net/ngbe: support MTU set

Support updating port MTU.
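The patch below derives the on-wire frame size from the requested MTU (MTU plus the 14-byte Ethernet header, 4-byte FCS, and 4 bytes for a VLAN tag) and range-checks it against the device limits before programming NGBE_FRMSZ. A minimal standalone sketch of that arithmetic, with constant names chosen for illustration rather than taken from the driver, might look like:

```c
#include <stdint.h>

#define ETH_HDR_LEN    14    /* destination MAC + source MAC + EtherType */
#define ETH_FCS_LEN     4    /* frame check sequence */
#define VLAN_TAG_LEN    4    /* one 802.1Q tag */
#define MIN_MTU        68    /* lower bound, as RTE_ETHER_MIN_MTU */
#define MAX_FRAME    9728    /* jumbo limit incl. FCS, as NGBE_FRAME_SIZE_MAX */

/* Translate an MTU into the frame size the NIC must accept.
 * Returns 0 on out-of-range input, mirroring the -EINVAL check in the patch. */
uint32_t mtu_to_frame_size(uint16_t mtu)
{
	uint32_t frame_size = (uint32_t)mtu + ETH_HDR_LEN + ETH_FCS_LEN + VLAN_TAG_LEN;

	if (mtu < MIN_MTU || frame_size > MAX_FRAME)
		return 0;
	return frame_size;
}
```

For the default 1500-byte MTU this yields 1522 bytes, consistent with NGBE_FRAME_SIZE_DFT ("Default frame size, +FCS") defined in ngbe_type.h.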
Signed-off-by: Jiawen Wu --- doc/guides/nics/features/ngbe.ini | 1 + drivers/net/ngbe/base/ngbe_type.h | 3 +++ drivers/net/ngbe/ngbe_ethdev.c | 41 +++++++++++++++++++++++++++++++ 3 files changed, 45 insertions(+) diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini index 42101020dd..bdb06916e1 100644 --- a/doc/guides/nics/features/ngbe.ini +++ b/doc/guides/nics/features/ngbe.ini @@ -8,6 +8,7 @@ Speed capabilities = Y Link status = Y Link status event = Y Queue start/stop = Y +MTU update = Y Jumbo frame = Y Scattered Rx = Y TSO = Y diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h index c13f0208fd..78fb0da7fa 100644 --- a/drivers/net/ngbe/base/ngbe_type.h +++ b/drivers/net/ngbe/base/ngbe_type.h @@ -8,6 +8,7 @@ #define NGBE_LINK_UP_TIME 90 /* 9.0 Seconds */ +#define NGBE_FRAME_SIZE_MAX (9728) /* Maximum frame size, +FCS */ #define NGBE_FRAME_SIZE_DFT (1522) /* Default frame size, +FCS */ #define NGBE_MAX_QP (8) @@ -316,6 +317,8 @@ struct ngbe_hw { u16 nb_rx_queues; u16 nb_tx_queues; + u32 mode; + u32 q_rx_regs[8 * 4]; u32 q_tx_regs[8 * 4]; bool offset_loaded; diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c index 45d7c48011..29f35d9e8d 100644 --- a/drivers/net/ngbe/ngbe_ethdev.c +++ b/drivers/net/ngbe/ngbe_ethdev.c @@ -1970,6 +1970,46 @@ ngbe_dev_interrupt_handler(void *param) ngbe_dev_interrupt_action(dev); } +static int +ngbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + struct rte_eth_dev_info dev_info; + uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + 4; + struct rte_eth_dev_data *dev_data = dev->data; + int ret; + + ret = ngbe_dev_info_get(dev, &dev_info); + if (ret != 0) + return ret; + + /* check that mtu is within the allowed range */ + if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen) + return -EINVAL; + + /* If device is started, refuse mtu that requires the support of + * 
scattered packets when this feature has not been enabled before. + */ + if (dev_data->dev_started && !dev_data->scattered_rx && + (frame_size + 2 * NGBE_VLAN_TAG_SIZE > + dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) { + PMD_INIT_LOG(ERR, "Stop port first."); + return -EINVAL; + } + + /* update max frame size */ + dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size; + + if (hw->mode) + wr32m(hw, NGBE_FRMSZ, NGBE_FRMSZ_MAX_MASK, + NGBE_FRAME_SIZE_MAX); + else + wr32m(hw, NGBE_FRMSZ, NGBE_FRMSZ_MAX_MASK, + NGBE_FRMSZ_MAX(frame_size)); + + return 0; +} + /** * Set the IVAR registers, mapping interrupt causes to vectors * @param hw @@ -2078,6 +2118,7 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = { .xstats_get_names = ngbe_dev_xstats_get_names, .xstats_get_names_by_id = ngbe_dev_xstats_get_names_by_id, .queue_stats_mapping_set = ngbe_dev_queue_stats_mapping_set, + .mtu_set = ngbe_dev_mtu_set, .vlan_offload_set = ngbe_vlan_offload_set, .rx_queue_start = ngbe_dev_rx_queue_start, .rx_queue_stop = ngbe_dev_rx_queue_stop, From patchwork Wed Sep 8 08:37:37 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawen Wu X-Patchwork-Id: 98291 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 98CA7A0C56; Wed, 8 Sep 2021 10:37:36 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 045F4411B5; Wed, 8 Sep 2021 10:36:47 +0200 (CEST) Received: from smtpbguseast2.qq.com (smtpbguseast2.qq.com [54.204.34.130]) by mails.dpdk.org (Postfix) with ESMTP id BA634411B8 for ; Wed, 8 Sep 2021 10:36:44 +0200 (CEST) X-QQ-mid: bizesmtp47t1631090200tow6w7aa Received: from wxdbg.localdomain.com (unknown [183.129.236.74]) by esmtp6.qq.com (ESMTP) with id ; Wed, 08 Sep 2021 
From: Jiawen Wu
To: dev@dpdk.org
Cc: Jiawen Wu
Date: Wed, 8 Sep 2021 16:37:37 +0800
Message-Id: <20210908083758.312055-12-jiawenwu@trustnetic.com>
In-Reply-To: <20210908083758.312055-1-jiawenwu@trustnetic.com>
References: <20210908083758.312055-1-jiawenwu@trustnetic.com>
Subject: [dpdk-dev] [PATCH 11/32] net/ngbe: add device promiscuous and allmulticast mode

Support enabling and disabling promiscuous and allmulticast mode for a port.
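The four handlers added below are plain read-modify-write updates of the PSRCTL register: enabling promiscuous mode sets both the unicast (UCP) and multicast (MCP) pass bits, and disabling it must keep MCP set while allmulticast mode is still on. A sketch of that bit logic on a plain integer (the bit values here are placeholders, not the real NGBE_PSRCTL layout):

```c
#include <stdint.h>

/* Placeholder bit positions; the real NGBE_PSRCTL_UCP/MCP values
 * live in the driver's register headers. */
#define PSRCTL_UCP 0x1u  /* pass all unicast */
#define PSRCTL_MCP 0x2u  /* pass all multicast */

/* promiscuous enable: set both pass bits */
static inline uint32_t promisc_enable(uint32_t fctrl)
{
	return fctrl | PSRCTL_UCP | PSRCTL_MCP;
}

/* promiscuous disable: drop UCP, keep MCP only while allmulticast is on */
static inline uint32_t promisc_disable(uint32_t fctrl, int all_multicast)
{
	fctrl &= ~PSRCTL_UCP;
	if (all_multicast)
		fctrl |= PSRCTL_MCP;
	else
		fctrl &= ~PSRCTL_MCP;
	return fctrl;
}

/* allmulticast disable is a no-op while promiscuous mode is active */
static inline uint32_t allmulti_disable(uint32_t fctrl, int promiscuous)
{
	return promiscuous ? fctrl : (fctrl & ~PSRCTL_MCP);
}
```

The interaction between the two modes is the subtle part: promiscuous mode implies allmulticast, so each disable path has to consult the other mode's flag before clearing MCP.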
Signed-off-by: Jiawen Wu --- doc/guides/nics/features/ngbe.ini | 2 + doc/guides/nics/ngbe.rst | 2 + drivers/net/ngbe/ngbe_ethdev.c | 63 +++++++++++++++++++++++++++++++ 3 files changed, 67 insertions(+) diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini index bdb06916e1..2f38f1e843 100644 --- a/doc/guides/nics/features/ngbe.ini +++ b/doc/guides/nics/features/ngbe.ini @@ -12,6 +12,8 @@ MTU update = Y Jumbo frame = Y Scattered Rx = Y TSO = Y +Promiscuous mode = Y +Allmulticast mode = Y CRC offload = P VLAN offload = P QinQ offload = P diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst index 64c07e4741..8333fba9cd 100644 --- a/doc/guides/nics/ngbe.rst +++ b/doc/guides/nics/ngbe.rst @@ -15,6 +15,8 @@ Features - Checksum offload - VLAN/QinQ stripping and inserting - TSO offload +- Promiscuous mode +- Multicast mode - Port hardware statistics - Jumbo frames - Link state information diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c index 29f35d9e8d..ce71edd6d8 100644 --- a/drivers/net/ngbe/ngbe_ethdev.c +++ b/drivers/net/ngbe/ngbe_ethdev.c @@ -1674,6 +1674,65 @@ ngbe_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete) return ngbe_dev_link_update_share(dev, wait_to_complete); } +static int +ngbe_dev_promiscuous_enable(struct rte_eth_dev *dev) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + uint32_t fctrl; + + fctrl = rd32(hw, NGBE_PSRCTL); + fctrl |= (NGBE_PSRCTL_UCP | NGBE_PSRCTL_MCP); + wr32(hw, NGBE_PSRCTL, fctrl); + + return 0; +} + +static int +ngbe_dev_promiscuous_disable(struct rte_eth_dev *dev) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + uint32_t fctrl; + + fctrl = rd32(hw, NGBE_PSRCTL); + fctrl &= (~NGBE_PSRCTL_UCP); + if (dev->data->all_multicast == 1) + fctrl |= NGBE_PSRCTL_MCP; + else + fctrl &= (~NGBE_PSRCTL_MCP); + wr32(hw, NGBE_PSRCTL, fctrl); + + return 0; +} + +static int +ngbe_dev_allmulticast_enable(struct rte_eth_dev *dev) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + 
uint32_t fctrl; + + fctrl = rd32(hw, NGBE_PSRCTL); + fctrl |= NGBE_PSRCTL_MCP; + wr32(hw, NGBE_PSRCTL, fctrl); + + return 0; +} + +static int +ngbe_dev_allmulticast_disable(struct rte_eth_dev *dev) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + uint32_t fctrl; + + if (dev->data->promiscuous == 1) + return 0; /* must remain in all_multicast mode */ + + fctrl = rd32(hw, NGBE_PSRCTL); + fctrl &= (~NGBE_PSRCTL_MCP); + wr32(hw, NGBE_PSRCTL, fctrl); + + return 0; +} + /** * It clears the interrupt causes and enables the interrupt. * It will be called once only during NIC initialized. @@ -2109,6 +2168,10 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = { .dev_stop = ngbe_dev_stop, .dev_close = ngbe_dev_close, .dev_reset = ngbe_dev_reset, + .promiscuous_enable = ngbe_dev_promiscuous_enable, + .promiscuous_disable = ngbe_dev_promiscuous_disable, + .allmulticast_enable = ngbe_dev_allmulticast_enable, + .allmulticast_disable = ngbe_dev_allmulticast_disable, .link_update = ngbe_dev_link_update, .stats_get = ngbe_dev_stats_get, .xstats_get = ngbe_dev_xstats_get,
From patchwork Wed Sep 8 08:37:38 2021
X-Patchwork-Submitter: Jiawen Wu
X-Patchwork-Id: 98292
From: Jiawen Wu
To: dev@dpdk.org
Cc: Jiawen Wu
Date: Wed, 8 Sep 2021 16:37:38 +0800
Message-Id: <20210908083758.312055-13-jiawenwu@trustnetic.com>
In-Reply-To: <20210908083758.312055-1-jiawenwu@trustnetic.com>
References: <20210908083758.312055-1-jiawenwu@trustnetic.com>
Subject: [dpdk-dev] [PATCH 12/32] net/ngbe: support getting FW version

Add firmware version get operation.
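The ngbe_fw_version_get() added in the patch below follows the usual ethdev convention: format the saved 32-bit EEPROM id as a hex string, return 0 on success, and when the caller's buffer is too small, return the number of bytes required (including the terminator) so the caller can retry with a bigger buffer. A sketch of that logic (the function name here is illustrative):

```c
#include <stdint.h>
#include <stdio.h>

/* Mirror of the fw_version_get convention: returns 0 on success,
 * -1 where the driver would return -EINVAL, or the required buffer
 * size (including the '\0') when `size` is too small. */
static int fw_version_format(uint32_t eeprom_id, char *buf, size_t size)
{
	int ret = snprintf(buf, size, "0x%08x", eeprom_id);

	if (ret < 0)
		return -1;

	ret += 1;                 /* count the terminating '\0' */
	if (size < (size_t)ret)
		return ret;       /* tell the caller how much is needed */
	return 0;
}
```

snprintf's return value is the length the full string would have had, which is what makes the too-small case cheap to report: "0x%08x" always needs 10 characters plus the terminator, so 11 bytes.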
Signed-off-by: Jiawen Wu --- doc/guides/nics/features/ngbe.ini | 1 + doc/guides/nics/ngbe.rst | 1 + drivers/net/ngbe/base/ngbe_dummy.h | 6 ++++ drivers/net/ngbe/base/ngbe_eeprom.c | 56 +++++++++++++++++++++++++++++ drivers/net/ngbe/base/ngbe_eeprom.h | 5 +++ drivers/net/ngbe/base/ngbe_hw.c | 3 ++ drivers/net/ngbe/base/ngbe_mng.c | 44 +++++++++++++++++++++++ drivers/net/ngbe/base/ngbe_mng.h | 5 +++ drivers/net/ngbe/base/ngbe_type.h | 2 ++ drivers/net/ngbe/ngbe_ethdev.c | 21 +++++++++++ 10 files changed, 144 insertions(+) diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini index 2f38f1e843..1006c3935b 100644 --- a/doc/guides/nics/features/ngbe.ini +++ b/doc/guides/nics/features/ngbe.ini @@ -25,6 +25,7 @@ Packet type parsing = Y Basic stats = Y Extended stats = Y Stats per queue = Y +FW version = Y Multiprocess aware = Y Linux = Y ARMv8 = Y diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst index 8333fba9cd..50a6e85c49 100644 --- a/doc/guides/nics/ngbe.rst +++ b/doc/guides/nics/ngbe.rst @@ -21,6 +21,7 @@ Features - Jumbo frames - Link state information - Scattered and gather for TX and RX +- FW version Prerequisites diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h index 0def116c53..689480cc9a 100644 --- a/drivers/net/ngbe/base/ngbe_dummy.h +++ b/drivers/net/ngbe/base/ngbe_dummy.h @@ -33,6 +33,11 @@ static inline s32 ngbe_rom_init_params_dummy(struct ngbe_hw *TUP0) { return NGBE_ERR_OPS_DUMMY; } +static inline s32 ngbe_rom_read32_dummy(struct ngbe_hw *TUP0, u32 TUP1, + u32 *TUP2) +{ + return NGBE_ERR_OPS_DUMMY; +} static inline s32 ngbe_rom_validate_checksum_dummy(struct ngbe_hw *TUP0, u16 *TUP1) { @@ -177,6 +182,7 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw) { hw->bus.set_lan_id = ngbe_bus_set_lan_id_dummy; hw->rom.init_params = ngbe_rom_init_params_dummy; + hw->rom.read32 = ngbe_rom_read32_dummy; hw->rom.validate_checksum = ngbe_rom_validate_checksum_dummy; 
hw->mac.init_hw = ngbe_mac_init_hw_dummy; hw->mac.reset_hw = ngbe_mac_reset_hw_dummy; diff --git a/drivers/net/ngbe/base/ngbe_eeprom.c b/drivers/net/ngbe/base/ngbe_eeprom.c index 3dcd5c2f6c..9ae2f0badb 100644 --- a/drivers/net/ngbe/base/ngbe_eeprom.c +++ b/drivers/net/ngbe/base/ngbe_eeprom.c @@ -161,6 +161,30 @@ void ngbe_release_eeprom_semaphore(struct ngbe_hw *hw) ngbe_flush(hw); } +/** + * ngbe_ee_read32 - Read EEPROM word using a host interface cmd + * @hw: pointer to hardware structure + * @offset: offset of word in the EEPROM to read + * @data: word read from the EEPROM + * + * Reads a 32 bit word from the EEPROM using the hostif. + **/ +s32 ngbe_ee_read32(struct ngbe_hw *hw, u32 addr, u32 *data) +{ + const u32 mask = NGBE_MNGSEM_SWMBX | NGBE_MNGSEM_SWFLASH; + int err; + + err = hw->mac.acquire_swfw_sync(hw, mask); + if (err) + return err; + + err = ngbe_hic_sr_read(hw, addr, (u8 *)data, 4); + + hw->mac.release_swfw_sync(hw, mask); + + return err; +} + /** * ngbe_validate_eeprom_checksum_em - Validate EEPROM checksum * @hw: pointer to hardware structure @@ -201,3 +225,35 @@ s32 ngbe_validate_eeprom_checksum_em(struct ngbe_hw *hw, return err; } +/** + * ngbe_save_eeprom_version + * @hw: pointer to hardware structure + * + * Save off EEPROM version number and Option Rom version which + * together make a unique identify for the eeprom + */ +s32 ngbe_save_eeprom_version(struct ngbe_hw *hw) +{ + u32 eeprom_verl = 0; + u32 etrack_id = 0; + u32 offset = (hw->rom.sw_addr + NGBE_EEPROM_VERSION_L) << 1; + + DEBUGFUNC("ngbe_save_eeprom_version"); + + if (hw->bus.lan_id == 0) { + hw->rom.read32(hw, offset, &eeprom_verl); + etrack_id = eeprom_verl; + wr32(hw, NGBE_EEPROM_VERSION_STORE_REG, etrack_id); + wr32(hw, NGBE_CALSUM_CAP_STATUS, + hw->rom.cksum_devcap | 0x10000); + } else if (hw->rom.cksum_devcap) { + etrack_id = hw->rom.saved_version; + } else { + hw->rom.read32(hw, offset, &eeprom_verl); + etrack_id = eeprom_verl; + } + + hw->eeprom_id = etrack_id; + + return 0; 
+} diff --git a/drivers/net/ngbe/base/ngbe_eeprom.h b/drivers/net/ngbe/base/ngbe_eeprom.h index b433077629..5f27425913 100644 --- a/drivers/net/ngbe/base/ngbe_eeprom.h +++ b/drivers/net/ngbe/base/ngbe_eeprom.h @@ -6,6 +6,8 @@ #ifndef _NGBE_EEPROM_H_ #define _NGBE_EEPROM_H_ +#define NGBE_EEPROM_VERSION_L 0x1D +#define NGBE_EEPROM_VERSION_H 0x1E #define NGBE_CALSUM_CAP_STATUS 0x10224 #define NGBE_EEPROM_VERSION_STORE_REG 0x1022C @@ -13,5 +15,8 @@ s32 ngbe_init_eeprom_params(struct ngbe_hw *hw); s32 ngbe_validate_eeprom_checksum_em(struct ngbe_hw *hw, u16 *checksum_val); s32 ngbe_get_eeprom_semaphore(struct ngbe_hw *hw); void ngbe_release_eeprom_semaphore(struct ngbe_hw *hw); +s32 ngbe_save_eeprom_version(struct ngbe_hw *hw); + +s32 ngbe_ee_read32(struct ngbe_hw *hw, u32 addr, u32 *data); #endif /* _NGBE_EEPROM_H_ */ diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c index f302df5d9d..0dabb6c1c7 100644 --- a/drivers/net/ngbe/base/ngbe_hw.c +++ b/drivers/net/ngbe/base/ngbe_hw.c @@ -44,6 +44,8 @@ s32 ngbe_init_hw(struct ngbe_hw *hw) DEBUGFUNC("ngbe_init_hw"); + ngbe_save_eeprom_version(hw); + /* Reset the hardware */ status = hw->mac.reset_hw(hw); if (status == 0) { @@ -1115,6 +1117,7 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw) /* EEPROM */ rom->init_params = ngbe_init_eeprom_params; + rom->read32 = ngbe_ee_read32; rom->validate_checksum = ngbe_validate_eeprom_checksum_em; mac->mcft_size = NGBE_EM_MC_TBL_SIZE; diff --git a/drivers/net/ngbe/base/ngbe_mng.c b/drivers/net/ngbe/base/ngbe_mng.c index 6ad2838ea7..9416ea4c8d 100644 --- a/drivers/net/ngbe/base/ngbe_mng.c +++ b/drivers/net/ngbe/base/ngbe_mng.c @@ -158,6 +158,50 @@ ngbe_host_interface_command(struct ngbe_hw *hw, u32 *buffer, return err; } +/** + * ngbe_hic_sr_read - Read EEPROM word using a host interface cmd + * assuming that the semaphore is already obtained. 
+ * @hw: pointer to hardware structure + * @offset: offset of word in the EEPROM to read + * @data: word read from the EEPROM + * + * Reads a 16 bit word from the EEPROM using the hostif. + **/ +s32 ngbe_hic_sr_read(struct ngbe_hw *hw, u32 addr, u8 *buf, int len) +{ + struct ngbe_hic_read_shadow_ram command; + u32 value; + int err, i = 0, j = 0; + + if (len > NGBE_PMMBX_DATA_SIZE) + return NGBE_ERR_HOST_INTERFACE_COMMAND; + + memset(&command, 0, sizeof(command)); + command.hdr.req.cmd = FW_READ_SHADOW_RAM_CMD; + command.hdr.req.buf_lenh = 0; + command.hdr.req.buf_lenl = FW_READ_SHADOW_RAM_LEN; + command.hdr.req.checksum = FW_DEFAULT_CHECKSUM; + command.address = cpu_to_be32(addr); + command.length = cpu_to_be16(len); + + err = ngbe_hic_unlocked(hw, (u32 *)&command, + sizeof(command), NGBE_HI_COMMAND_TIMEOUT); + if (err) + return err; + + while (i < (len >> 2)) { + value = rd32a(hw, NGBE_MNGMBX, FW_NVM_DATA_OFFSET + i); + ((u32 *)buf)[i] = value; + i++; + } + + value = rd32a(hw, NGBE_MNGMBX, FW_NVM_DATA_OFFSET + i); + for (i <<= 2; i < len; i++) + ((u8 *)buf)[i] = ((u8 *)&value)[j++]; + + return 0; +} + s32 ngbe_hic_check_cap(struct ngbe_hw *hw) { struct ngbe_hic_read_shadow_ram command; diff --git a/drivers/net/ngbe/base/ngbe_mng.h b/drivers/net/ngbe/base/ngbe_mng.h index e86893101b..6f368b028f 100644 --- a/drivers/net/ngbe/base/ngbe_mng.h +++ b/drivers/net/ngbe/base/ngbe_mng.h @@ -10,12 +10,16 @@ #define NGBE_PMMBX_QSIZE 64 /* Num of dwords in range */ #define NGBE_PMMBX_BSIZE (NGBE_PMMBX_QSIZE * 4) +#define NGBE_PMMBX_DATA_SIZE (NGBE_PMMBX_BSIZE - FW_NVM_DATA_OFFSET * 4) #define NGBE_HI_COMMAND_TIMEOUT 5000 /* Process HI command limit */ /* CEM Support */ #define FW_CEM_MAX_RETRIES 3 #define FW_CEM_RESP_STATUS_SUCCESS 0x1 +#define FW_READ_SHADOW_RAM_CMD 0x31 +#define FW_READ_SHADOW_RAM_LEN 0x6 #define FW_DEFAULT_CHECKSUM 0xFF /* checksum always 0xFF */ +#define FW_NVM_DATA_OFFSET 3 #define FW_EEPROM_CHECK_STATUS 0xE9 #define FW_CHECKSUM_CAP_ST_PASS 0x80658383 @@ 
-61,5 +65,6 @@ struct ngbe_hic_read_shadow_ram { u16 pad3; }; +s32 ngbe_hic_sr_read(struct ngbe_hw *hw, u32 addr, u8 *buf, int len); s32 ngbe_hic_check_cap(struct ngbe_hw *hw); #endif /* _NGBE_MNG_H_ */ diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h index 78fb0da7fa..2586eaf36a 100644 --- a/drivers/net/ngbe/base/ngbe_type.h +++ b/drivers/net/ngbe/base/ngbe_type.h @@ -202,6 +202,7 @@ struct ngbe_hw_stats { struct ngbe_rom_info { s32 (*init_params)(struct ngbe_hw *hw); + s32 (*read32)(struct ngbe_hw *hw, u32 addr, u32 *data); s32 (*validate_checksum)(struct ngbe_hw *hw, u16 *checksum_val); enum ngbe_eeprom_type type; @@ -310,6 +311,7 @@ struct ngbe_hw { u16 vendor_id; u16 sub_device_id; u16 sub_system_id; + u32 eeprom_id; bool adapter_stopped; uint64_t isb_dma; diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c index ce71edd6d8..5566bf26a9 100644 --- a/drivers/net/ngbe/ngbe_ethdev.c +++ b/drivers/net/ngbe/ngbe_ethdev.c @@ -1524,6 +1524,26 @@ ngbe_dev_xstats_reset(struct rte_eth_dev *dev) return 0; } +static int +ngbe_fw_version_get(struct rte_eth_dev *dev, char *fw_version, size_t fw_size) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + int ret; + + ret = snprintf(fw_version, fw_size, "0x%08x", hw->eeprom_id); + + if (ret < 0) + return -EINVAL; + + ret += 1; /* add the size of '\0' */ + if (fw_size < (size_t)ret) + return ret; + else + return 0; + + return 0; +} + static int ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) { @@ -2181,6 +2201,7 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = { .xstats_get_names = ngbe_dev_xstats_get_names, .xstats_get_names_by_id = ngbe_dev_xstats_get_names_by_id, .queue_stats_mapping_set = ngbe_dev_queue_stats_mapping_set, + .fw_version_get = ngbe_fw_version_get, .mtu_set = ngbe_dev_mtu_set, .vlan_offload_set = ngbe_vlan_offload_set, .rx_queue_start = ngbe_dev_rx_queue_start,
From patchwork Wed Sep 8 08:37:39 2021
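The ngbe_hic_sr_read() helper in the patch above copies the firmware response out of the management mailbox one 32-bit word at a time, then picks up any trailing bytes from a final word. A standalone sketch of that copy pattern (the `mbx` array stands in for the rd32a() register reads of NGBE_MNGMBX; memcpy is used here to sidestep the unaligned u32 stores that the original's pointer casts would perform):

```c
#include <stdint.h>
#include <string.h>

/* Copy `len` bytes of a mailbox response into buf: whole dwords
 * first, then the remainder byte by byte from one trailing dword,
 * following the two loops in ngbe_hic_sr_read(). Note that, like
 * the original, this reads one dword past a 4-byte-aligned length,
 * so `mbx` must have at least (len / 4) + 1 elements. */
static void mbx_copy(const uint32_t *mbx, uint8_t *buf, int len)
{
	uint32_t value;
	int i = 0, j = 0;

	while (i < (len >> 2)) {          /* full dwords */
		value = mbx[i];
		memcpy(&buf[i * 4], &value, 4);
		i++;
	}

	value = mbx[i];                   /* trailing partial dword */
	for (i <<= 2; i < len; i++)
		buf[i] = ((uint8_t *)&value)[j++];
}
```

The `i <<= 2` step converts the dword index into a byte index, so the tail loop resumes exactly where the word loop stopped.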
X-Patchwork-Submitter: Jiawen Wu
X-Patchwork-Id: 98293
From: Jiawen Wu
To: dev@dpdk.org
Cc: Jiawen Wu
Date: Wed, 8 Sep 2021 16:37:39 +0800
Message-Id: <20210908083758.312055-14-jiawenwu@trustnetic.com>
In-Reply-To: <20210908083758.312055-1-jiawenwu@trustnetic.com>
References: <20210908083758.312055-1-jiawenwu@trustnetic.com>
Subject: [dpdk-dev] [PATCH 13/32] net/ngbe: add loopback mode
Support loopback operation mode. Signed-off-by: Jiawen Wu --- drivers/net/ngbe/ngbe_ethdev.c | 6 ++++++ drivers/net/ngbe/ngbe_rxtx.c | 28 ++++++++++++++++++++++++++++ 2 files changed, 34 insertions(+) diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c index 5566bf26a9..9caca55df3 100644 --- a/drivers/net/ngbe/ngbe_ethdev.c +++ b/drivers/net/ngbe/ngbe_ethdev.c @@ -850,6 +850,10 @@ ngbe_dev_start(struct rte_eth_dev *dev) goto error; } + /* Skip link setup if loopback mode is enabled. */ + if (hw->is_pf && dev->data->dev_conf.lpbk_mode) + goto skip_link_setup; + err = hw->mac.check_link(hw, &speed, &link_up, 0); if (err != 0) goto error; @@ -893,6 +897,8 @@ ngbe_dev_start(struct rte_eth_dev *dev) if (err != 0) goto error; +skip_link_setup: + if (rte_intr_allow_others(intr_handle)) { ngbe_dev_misc_interrupt_setup(dev); /* check if lsc interrupt is enabled */ diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c index 1151173b02..22693c144a 100644 --- a/drivers/net/ngbe/ngbe_rxtx.c +++ b/drivers/net/ngbe/ngbe_rxtx.c @@ -2420,6 +2420,17 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev) NGBE_FRMSZ_MAX(NGBE_FRAME_SIZE_DFT)); } + /* + * If loopback mode is configured, set LPBK bit. + */ + hlreg0 = rd32(hw, NGBE_PSRCTL); + if (hw->is_pf && dev->data->dev_conf.lpbk_mode) + hlreg0 |= NGBE_PSRCTL_LBENA; + else + hlreg0 &= ~NGBE_PSRCTL_LBENA; + + wr32(hw, NGBE_PSRCTL, hlreg0); + /* * Assume no header split and no VLAN strip support * on any Rx queue first . @@ -2538,6 +2549,19 @@ ngbe_dev_tx_init(struct rte_eth_dev *dev) } } +/* + * Set up link loopback mode Tx->Rx. + */ +static inline void +ngbe_setup_loopback_link(struct ngbe_hw *hw) +{ + PMD_INIT_FUNC_TRACE(); + + wr32m(hw, NGBE_MACRXCFG, NGBE_MACRXCFG_LB, NGBE_MACRXCFG_LB); + + msec_delay(50); +} + /* * Start Transmit and Receive Units.
*/ @@ -2592,6 +2616,10 @@ ngbe_dev_rxtx_start(struct rte_eth_dev *dev) rxctrl |= NGBE_PBRXCTL_ENA; hw->mac.enable_rx_dma(hw, rxctrl); + /* If loopback mode is enabled, set up the link accordingly */ + if (hw->is_pf && dev->data->dev_conf.lpbk_mode) + ngbe_setup_loopback_link(hw); + return 0; }
From patchwork Wed Sep 8 08:37:40 2021
X-Patchwork-Submitter: Jiawen Wu
X-Patchwork-Id: 98295
From: Jiawen Wu
To: dev@dpdk.org
Cc: Jiawen Wu
Date: Wed, 8 Sep 2021 16:37:40 +0800
Message-Id: <20210908083758.312055-15-jiawenwu@trustnetic.com>
In-Reply-To: <20210908083758.312055-1-jiawenwu@trustnetic.com>
References:
<20210908083758.312055-1-jiawenwu@trustnetic.com>
Subject: [dpdk-dev] [PATCH 14/32] net/ngbe: support Rx interrupt

Support Rx queue interrupt. Signed-off-by: Jiawen Wu --- doc/guides/nics/features/ngbe.ini | 1 + doc/guides/nics/ngbe.rst | 1 + drivers/net/ngbe/ngbe_ethdev.c | 35 +++++++++++++++++++++++++++++++ 3 files changed, 37 insertions(+) diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini index 1006c3935b..d14469eb43 100644 --- a/doc/guides/nics/features/ngbe.ini +++ b/doc/guides/nics/features/ngbe.ini @@ -7,6 +7,7 @@ Speed capabilities = Y Link status = Y Link status event = Y +Rx interrupt = Y Queue start/stop = Y MTU update = Y Jumbo frame = Y diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst index 50a6e85c49..2783c4a3c4 100644 --- a/doc/guides/nics/ngbe.rst +++ b/doc/guides/nics/ngbe.rst @@ -20,6 +20,7 @@ Features - Port hardware statistics - Jumbo frames - Link state information +- Interrupt mode for RX - Scattered and gather for TX and RX - FW version diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c index 9caca55df3..52642161b7 100644 --- a/drivers/net/ngbe/ngbe_ethdev.c +++ b/drivers/net/ngbe/ngbe_ethdev.c @@ -2095,6 +2095,39 @@ ngbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) return 0; } +static int +ngbe_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) +{ + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); + struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + uint32_t mask; + struct ngbe_hw *hw = ngbe_dev_hw(dev); + + if (queue_id < 32) { + mask = rd32(hw, NGBE_IMS(0)); + mask &= (1 << queue_id); +
wr32(hw, NGBE_IMS(0), mask); + } + rte_intr_enable(intr_handle); + + return 0; +} + +static int +ngbe_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id) +{ + uint32_t mask; + struct ngbe_hw *hw = ngbe_dev_hw(dev); + + if (queue_id < 32) { + mask = rd32(hw, NGBE_IMS(0)); + mask &= ~(1 << queue_id); + wr32(hw, NGBE_IMS(0), mask); + } + + return 0; +} + /** * Set the IVAR registers, mapping interrupt causes to vectors * @param hw @@ -2215,6 +2248,8 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = { .tx_queue_start = ngbe_dev_tx_queue_start, .tx_queue_stop = ngbe_dev_tx_queue_stop, .rx_queue_setup = ngbe_dev_rx_queue_setup, + .rx_queue_intr_enable = ngbe_dev_rx_queue_intr_enable, + .rx_queue_intr_disable = ngbe_dev_rx_queue_intr_disable, .rx_queue_release = ngbe_dev_rx_queue_release, .tx_queue_setup = ngbe_dev_tx_queue_setup, .tx_queue_release = ngbe_dev_tx_queue_release,
From patchwork Wed Sep 8 08:37:41 2021
X-Patchwork-Submitter: Jiawen Wu
X-Patchwork-Id: 98296
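The Rx interrupt handlers above operate on one bit per queue in a 32-bit mask read from NGBE_IMS(0), with queue ids of 32 or more rejected by the `queue_id < 32` guard. A minimal sketch of that per-queue bit bookkeeping (the helper names are illustrative, and the actual write semantics of the IMS register are hardware-specific and not modeled here):

```c
#include <stdint.h>

/* Select/deselect a single queue's bit in a 32-bit interrupt mask;
 * out-of-range queue ids leave the mask untouched, mirroring the
 * `queue_id < 32` guard in the handlers. */
static inline uint32_t rxq_intr_bit_set(uint32_t mask, uint16_t queue_id)
{
	return queue_id < 32 ? (mask | (1u << queue_id)) : mask;
}

static inline uint32_t rxq_intr_bit_clear(uint32_t mask, uint16_t queue_id)
{
	return queue_id < 32 ? (mask & ~(1u << queue_id)) : mask;
}
```

The guard matters because `1u << queue_id` is undefined behavior in C for shift counts of 32 or more on a 32-bit type.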
From: Jiawen Wu
To: dev@dpdk.org
Cc: Jiawen Wu
Date: Wed, 8 Sep 2021 16:37:41 +0800
Message-Id: <20210908083758.312055-16-jiawenwu@trustnetic.com>
In-Reply-To: <20210908083758.312055-1-jiawenwu@trustnetic.com>
References: <20210908083758.312055-1-jiawenwu@trustnetic.com>
Subject: [dpdk-dev] [PATCH 15/32] net/ngbe: support MAC filters

Add MAC addresses to filter incoming packets, support setting multicast addresses for filtering, and support setting the unicast table array.
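The MAC-filter patch below hashes each multicast address into a 4096-bit table: ngbe_mta_vector() extracts 12 bits from the top of the address, and mc_filter_type (0..3) selects which 12. A standalone sketch of that extraction, using the same shifts as the patch with the driver's u8/u32 types replaced by stdint ones:

```c
#include <stdint.h>

/* Extract a 12-bit table index from bytes 4 and 5 of a MAC address;
 * filter_type picks which 12 of the top address bits are used, as in
 * ngbe_mta_vector(). */
static uint32_t mta_vector(int filter_type, const uint8_t *mc_addr)
{
	uint32_t vector = 0;

	switch (filter_type) {
	case 0: /* bits [47:36] of the address */
		vector = (mc_addr[4] >> 4) | ((uint16_t)mc_addr[5] << 4);
		break;
	case 1: /* bits [46:35] of the address */
		vector = (mc_addr[4] >> 3) | ((uint16_t)mc_addr[5] << 5);
		break;
	case 2: /* bits [45:34] of the address */
		vector = (mc_addr[4] >> 2) | ((uint16_t)mc_addr[5] << 6);
		break;
	case 3: /* bits [43:32] of the address */
		vector = mc_addr[4] | ((uint16_t)mc_addr[5] << 8);
		break;
	default:
		break;
	}

	return vector & 0xFFF;  /* clamp to the 4096-entry table */
}
```

The final mask keeps the vector inside the table regardless of filter type; hardware consults the same MO field of PSRCTL so that its lookup and the driver's shadow table agree.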
Signed-off-by: Jiawen Wu --- doc/guides/nics/features/ngbe.ini | 2 + doc/guides/nics/ngbe.rst | 1 + drivers/net/ngbe/base/ngbe_dummy.h | 6 + drivers/net/ngbe/base/ngbe_hw.c | 135 +++++++++++++++++++++- drivers/net/ngbe/base/ngbe_hw.h | 4 + drivers/net/ngbe/base/ngbe_type.h | 11 ++ drivers/net/ngbe/ngbe_ethdev.c | 175 +++++++++++++++++++++++++++++ drivers/net/ngbe/ngbe_ethdev.h | 13 +++ 8 files changed, 346 insertions(+), 1 deletion(-) diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini index d14469eb43..4b22dc683a 100644 --- a/doc/guides/nics/features/ngbe.ini +++ b/doc/guides/nics/features/ngbe.ini @@ -15,6 +15,8 @@ Scattered Rx = Y TSO = Y Promiscuous mode = Y Allmulticast mode = Y +Unicast MAC filter = Y +Multicast MAC filter = Y CRC offload = P VLAN offload = P QinQ offload = P diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst index 2783c4a3c4..4d01c27064 100644 --- a/doc/guides/nics/ngbe.rst +++ b/doc/guides/nics/ngbe.rst @@ -11,6 +11,7 @@ for Wangxun 1 Gigabit Ethernet NICs. 
Features -------- +- MAC filtering - Packet type information - Checksum offload - VLAN/QinQ stripping and inserting diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h index 689480cc9a..fe2d53f312 100644 --- a/drivers/net/ngbe/base/ngbe_dummy.h +++ b/drivers/net/ngbe/base/ngbe_dummy.h @@ -127,6 +127,11 @@ static inline s32 ngbe_mac_init_rx_addrs_dummy(struct ngbe_hw *TUP0) { return NGBE_ERR_OPS_DUMMY; } +static inline s32 ngbe_mac_update_mc_addr_list_dummy(struct ngbe_hw *TUP0, + u8 *TUP1, u32 TUP2, ngbe_mc_addr_itr TUP3, bool TUP4) +{ + return NGBE_ERR_OPS_DUMMY; +} static inline s32 ngbe_mac_init_thermal_ssth_dummy(struct ngbe_hw *TUP0) { return NGBE_ERR_OPS_DUMMY; @@ -203,6 +208,7 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw) hw->mac.set_vmdq = ngbe_mac_set_vmdq_dummy; hw->mac.clear_vmdq = ngbe_mac_clear_vmdq_dummy; hw->mac.init_rx_addrs = ngbe_mac_init_rx_addrs_dummy; + hw->mac.update_mc_addr_list = ngbe_mac_update_mc_addr_list_dummy; hw->mac.init_thermal_sensor_thresh = ngbe_mac_init_thermal_ssth_dummy; hw->mac.check_overtemp = ngbe_mac_check_overtemp_dummy; hw->phy.identify = ngbe_phy_identify_dummy; diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c index 0dabb6c1c7..897baf179d 100644 --- a/drivers/net/ngbe/base/ngbe_hw.c +++ b/drivers/net/ngbe/base/ngbe_hw.c @@ -567,6 +567,138 @@ s32 ngbe_init_rx_addrs(struct ngbe_hw *hw) return 0; } +/** + * ngbe_mta_vector - Determines bit-vector in multicast table to set + * @hw: pointer to hardware structure + * @mc_addr: the multicast address + * + * Extracts the 12 bits, from a multicast address, to determine which + * bit-vector to set in the multicast table. The hardware uses 12 bits, from + * incoming rx multicast addresses, to determine the bit-vector to check in + * the MTA. Which of the 4 combination, of 12-bits, the hardware uses is set + * by the MO field of the PSRCTRL. The MO field is set during initialization + * to mc_filter_type. 
+ **/ +static s32 ngbe_mta_vector(struct ngbe_hw *hw, u8 *mc_addr) +{ + u32 vector = 0; + + DEBUGFUNC("ngbe_mta_vector"); + + switch (hw->mac.mc_filter_type) { + case 0: /* use bits [47:36] of the address */ + vector = ((mc_addr[4] >> 4) | (((u16)mc_addr[5]) << 4)); + break; + case 1: /* use bits [46:35] of the address */ + vector = ((mc_addr[4] >> 3) | (((u16)mc_addr[5]) << 5)); + break; + case 2: /* use bits [45:34] of the address */ + vector = ((mc_addr[4] >> 2) | (((u16)mc_addr[5]) << 6)); + break; + case 3: /* use bits [43:32] of the address */ + vector = ((mc_addr[4]) | (((u16)mc_addr[5]) << 8)); + break; + default: /* Invalid mc_filter_type */ + DEBUGOUT("MC filter type param set incorrectly\n"); + ASSERT(0); + break; + } + + /* vector can only be 12-bits or boundary will be exceeded */ + vector &= 0xFFF; + return vector; +} + +/** + * ngbe_set_mta - Set bit-vector in multicast table + * @hw: pointer to hardware structure + * @mc_addr: Multicast address + * + * Sets the bit-vector in the multicast table. + **/ +void ngbe_set_mta(struct ngbe_hw *hw, u8 *mc_addr) +{ + u32 vector; + u32 vector_bit; + u32 vector_reg; + + DEBUGFUNC("ngbe_set_mta"); + + hw->addr_ctrl.mta_in_use++; + + vector = ngbe_mta_vector(hw, mc_addr); + DEBUGOUT(" bit-vector = 0x%03X\n", vector); + + /* + * The MTA is a register array of 128 32-bit registers. It is treated + * like an array of 4096 bits. We want to set bit + * BitArray[vector_value]. So we figure out what register the bit is + * in, read it, OR in the new bit, then write back the new value. The + * register is determined by the upper 7 bits of the vector value and + * the bit within that register are determined by the lower 5 bits of + * the value. 
+ */ + vector_reg = (vector >> 5) & 0x7F; + vector_bit = vector & 0x1F; + hw->mac.mta_shadow[vector_reg] |= (1 << vector_bit); +} + +/** + * ngbe_update_mc_addr_list - Updates MAC list of multicast addresses + * @hw: pointer to hardware structure + * @mc_addr_list: the list of new multicast addresses + * @mc_addr_count: number of addresses + * @next: iterator function to walk the multicast address list + * @clear: flag, when set clears the table beforehand + * + * When the clear flag is set, the given list replaces any existing list. + * Hashes the given addresses into the multicast table. + **/ +s32 ngbe_update_mc_addr_list(struct ngbe_hw *hw, u8 *mc_addr_list, + u32 mc_addr_count, ngbe_mc_addr_itr next, + bool clear) +{ + u32 i; + u32 vmdq; + + DEBUGFUNC("ngbe_update_mc_addr_list"); + + /* + * Set the new number of MC addresses that we are being requested to + * use. + */ + hw->addr_ctrl.num_mc_addrs = mc_addr_count; + hw->addr_ctrl.mta_in_use = 0; + + /* Clear mta_shadow */ + if (clear) { + DEBUGOUT(" Clearing MTA\n"); + memset(&hw->mac.mta_shadow, 0, sizeof(hw->mac.mta_shadow)); + } + + /* Update mta_shadow */ + for (i = 0; i < mc_addr_count; i++) { + DEBUGOUT(" Adding the multicast addresses:\n"); + ngbe_set_mta(hw, next(hw, &mc_addr_list, &vmdq)); + } + + /* Enable mta */ + for (i = 0; i < hw->mac.mcft_size; i++) + wr32a(hw, NGBE_MCADDRTBL(0), i, + hw->mac.mta_shadow[i]); + + if (hw->addr_ctrl.mta_in_use > 0) { + u32 psrctl = rd32(hw, NGBE_PSRCTL); + psrctl &= ~(NGBE_PSRCTL_ADHF12_MASK | NGBE_PSRCTL_MCHFENA); + psrctl |= NGBE_PSRCTL_MCHFENA | + NGBE_PSRCTL_ADHF12(hw->mac.mc_filter_type); + wr32(hw, NGBE_PSRCTL, psrctl); + } + + DEBUGOUT("ngbe update mc addr list complete\n"); + return 0; +} + /** * ngbe_acquire_swfw_sync - Acquire SWFW semaphore * @hw: pointer to hardware structure @@ -1099,10 +1231,11 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw) mac->disable_sec_rx_path = ngbe_disable_sec_rx_path; mac->enable_sec_rx_path = ngbe_enable_sec_rx_path; - /* RAR */ 
+ /* RAR, Multicast */ mac->set_rar = ngbe_set_rar; mac->clear_rar = ngbe_clear_rar; mac->init_rx_addrs = ngbe_init_rx_addrs; + mac->update_mc_addr_list = ngbe_update_mc_addr_list; mac->set_vmdq = ngbe_set_vmdq; mac->clear_vmdq = ngbe_clear_vmdq; diff --git a/drivers/net/ngbe/base/ngbe_hw.h b/drivers/net/ngbe/base/ngbe_hw.h index 6a08c02bee..f06baa4395 100644 --- a/drivers/net/ngbe/base/ngbe_hw.h +++ b/drivers/net/ngbe/base/ngbe_hw.h @@ -35,6 +35,9 @@ s32 ngbe_set_rar(struct ngbe_hw *hw, u32 index, u8 *addr, u32 vmdq, u32 enable_addr); s32 ngbe_clear_rar(struct ngbe_hw *hw, u32 index); s32 ngbe_init_rx_addrs(struct ngbe_hw *hw); +s32 ngbe_update_mc_addr_list(struct ngbe_hw *hw, u8 *mc_addr_list, + u32 mc_addr_count, + ngbe_mc_addr_itr func, bool clear); s32 ngbe_disable_sec_rx_path(struct ngbe_hw *hw); s32 ngbe_enable_sec_rx_path(struct ngbe_hw *hw); @@ -50,6 +53,7 @@ s32 ngbe_init_thermal_sensor_thresh(struct ngbe_hw *hw); s32 ngbe_mac_check_overtemp(struct ngbe_hw *hw); void ngbe_disable_rx(struct ngbe_hw *hw); void ngbe_enable_rx(struct ngbe_hw *hw); +void ngbe_set_mta(struct ngbe_hw *hw, u8 *mc_addr); s32 ngbe_init_shared_code(struct ngbe_hw *hw); s32 ngbe_set_mac_type(struct ngbe_hw *hw); s32 ngbe_init_ops_pf(struct ngbe_hw *hw); diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h index 2586eaf36a..3e62dde707 100644 --- a/drivers/net/ngbe/base/ngbe_type.h +++ b/drivers/net/ngbe/base/ngbe_type.h @@ -11,6 +11,7 @@ #define NGBE_FRAME_SIZE_MAX (9728) /* Maximum frame size, +FCS */ #define NGBE_FRAME_SIZE_DFT (1522) /* Default frame size, +FCS */ #define NGBE_MAX_QP (8) +#define NGBE_MAX_UTA 128 #define NGBE_ALIGN 128 /* as intel did */ #define NGBE_ISB_SIZE 16 @@ -68,6 +69,7 @@ enum ngbe_media_type { struct ngbe_hw; struct ngbe_addr_filter_info { + u32 num_mc_addrs; u32 mta_in_use; }; @@ -200,6 +202,10 @@ struct ngbe_hw_stats { }; +/* iterator type for walking multicast address lists */ +typedef u8* (*ngbe_mc_addr_itr) (struct 
ngbe_hw *hw, u8 **mc_addr_ptr, + u32 *vmdq); + struct ngbe_rom_info { s32 (*init_params)(struct ngbe_hw *hw); s32 (*read32)(struct ngbe_hw *hw, u32 addr, u32 *data); @@ -243,6 +249,9 @@ struct ngbe_mac_info { s32 (*set_vmdq)(struct ngbe_hw *hw, u32 rar, u32 vmdq); s32 (*clear_vmdq)(struct ngbe_hw *hw, u32 rar, u32 vmdq); s32 (*init_rx_addrs)(struct ngbe_hw *hw); + s32 (*update_mc_addr_list)(struct ngbe_hw *hw, u8 *mc_addr_list, + u32 mc_addr_count, + ngbe_mc_addr_itr func, bool clear); /* Manageability interface */ s32 (*init_thermal_sensor_thresh)(struct ngbe_hw *hw); @@ -251,6 +260,8 @@ struct ngbe_mac_info { enum ngbe_mac_type type; u8 addr[ETH_ADDR_LEN]; u8 perm_addr[ETH_ADDR_LEN]; +#define NGBE_MAX_MTA 128 + u32 mta_shadow[NGBE_MAX_MTA]; s32 mc_filter_type; u32 mcft_size; u32 num_rar_entries; diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c index 52642161b7..d076ba8036 100644 --- a/drivers/net/ngbe/ngbe_ethdev.c +++ b/drivers/net/ngbe/ngbe_ethdev.c @@ -1553,12 +1553,16 @@ ngbe_fw_version_get(struct rte_eth_dev *dev, char *fw_version, size_t fw_size) static int ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) { + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); struct ngbe_hw *hw = ngbe_dev_hw(dev); dev_info->max_rx_queues = (uint16_t)hw->mac.max_rx_queues; dev_info->max_tx_queues = (uint16_t)hw->mac.max_tx_queues; dev_info->min_rx_bufsize = 1024; dev_info->max_rx_pktlen = 15872; + dev_info->max_mac_addrs = hw->mac.num_rar_entries; + dev_info->max_hash_mac_addrs = NGBE_VMDQ_NUM_UC_MAC; + dev_info->max_vfs = pci_dev->max_vfs; dev_info->rx_queue_offload_capa = ngbe_get_rx_queue_offloads(dev); dev_info->rx_offload_capa = (ngbe_get_rx_port_offloads(dev) | dev_info->rx_queue_offload_capa); @@ -2055,6 +2059,36 @@ ngbe_dev_interrupt_handler(void *param) ngbe_dev_interrupt_action(dev); } +static int +ngbe_add_rar(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr, + uint32_t index, uint32_t pool) 
+{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + uint32_t enable_addr = 1; + + return ngbe_set_rar(hw, index, mac_addr->addr_bytes, + pool, enable_addr); +} + +static void +ngbe_remove_rar(struct rte_eth_dev *dev, uint32_t index) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + + ngbe_clear_rar(hw, index); +} + +static int +ngbe_set_default_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *addr) +{ + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); + + ngbe_remove_rar(dev, 0); + ngbe_add_rar(dev, addr, 0, pci_dev->max_vfs); + + return 0; +} + static int ngbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) { @@ -2095,6 +2129,116 @@ ngbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) return 0; } +static uint32_t +ngbe_uta_vector(struct ngbe_hw *hw, struct rte_ether_addr *uc_addr) +{ + uint32_t vector = 0; + + switch (hw->mac.mc_filter_type) { + case 0: /* use bits [47:36] of the address */ + vector = ((uc_addr->addr_bytes[4] >> 4) | + (((uint16_t)uc_addr->addr_bytes[5]) << 4)); + break; + case 1: /* use bits [46:35] of the address */ + vector = ((uc_addr->addr_bytes[4] >> 3) | + (((uint16_t)uc_addr->addr_bytes[5]) << 5)); + break; + case 2: /* use bits [45:34] of the address */ + vector = ((uc_addr->addr_bytes[4] >> 2) | + (((uint16_t)uc_addr->addr_bytes[5]) << 6)); + break; + case 3: /* use bits [43:32] of the address */ + vector = ((uc_addr->addr_bytes[4]) | + (((uint16_t)uc_addr->addr_bytes[5]) << 8)); + break; + default: /* Invalid mc_filter_type */ + break; + } + + /* vector can only be 12-bits or boundary will be exceeded */ + vector &= 0xFFF; + return vector; +} + +static int +ngbe_uc_hash_table_set(struct rte_eth_dev *dev, + struct rte_ether_addr *mac_addr, uint8_t on) +{ + uint32_t vector; + uint32_t uta_idx; + uint32_t reg_val; + uint32_t uta_mask; + uint32_t psrctl; + + struct ngbe_hw *hw = ngbe_dev_hw(dev); + struct ngbe_uta_info *uta_info = NGBE_DEV_UTA_INFO(dev); + + vector = ngbe_uta_vector(hw, mac_addr); + uta_idx = (vector >> 5) & 
0x7F; + uta_mask = 0x1UL << (vector & 0x1F); + + if (!!on == !!(uta_info->uta_shadow[uta_idx] & uta_mask)) + return 0; + + reg_val = rd32(hw, NGBE_UCADDRTBL(uta_idx)); + if (on) { + uta_info->uta_in_use++; + reg_val |= uta_mask; + uta_info->uta_shadow[uta_idx] |= uta_mask; + } else { + uta_info->uta_in_use--; + reg_val &= ~uta_mask; + uta_info->uta_shadow[uta_idx] &= ~uta_mask; + } + + wr32(hw, NGBE_UCADDRTBL(uta_idx), reg_val); + + psrctl = rd32(hw, NGBE_PSRCTL); + if (uta_info->uta_in_use > 0) + psrctl |= NGBE_PSRCTL_UCHFENA; + else + psrctl &= ~NGBE_PSRCTL_UCHFENA; + + psrctl &= ~NGBE_PSRCTL_ADHF12_MASK; + psrctl |= NGBE_PSRCTL_ADHF12(hw->mac.mc_filter_type); + wr32(hw, NGBE_PSRCTL, psrctl); + + return 0; +} + +static int +ngbe_uc_all_hash_table_set(struct rte_eth_dev *dev, uint8_t on) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + struct ngbe_uta_info *uta_info = NGBE_DEV_UTA_INFO(dev); + uint32_t psrctl; + int i; + + if (on) { + for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) { + uta_info->uta_shadow[i] = ~0; + wr32(hw, NGBE_UCADDRTBL(i), ~0); + } + } else { + for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) { + uta_info->uta_shadow[i] = 0; + wr32(hw, NGBE_UCADDRTBL(i), 0); + } + } + + psrctl = rd32(hw, NGBE_PSRCTL); + if (on) + psrctl |= NGBE_PSRCTL_UCHFENA; + else + psrctl &= ~NGBE_PSRCTL_UCHFENA; + + psrctl &= ~NGBE_PSRCTL_ADHF12_MASK; + psrctl |= NGBE_PSRCTL_ADHF12(hw->mac.mc_filter_type); + wr32(hw, NGBE_PSRCTL, psrctl); + + return 0; +} + static int ngbe_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) { @@ -2220,6 +2364,31 @@ ngbe_configure_msix(struct rte_eth_dev *dev) | NGBE_ITR_WRDSA); } +static u8 * +ngbe_dev_addr_list_itr(__rte_unused struct ngbe_hw *hw, + u8 **mc_addr_ptr, u32 *vmdq) +{ + u8 *mc_addr; + + *vmdq = 0; + mc_addr = *mc_addr_ptr; + *mc_addr_ptr = (mc_addr + sizeof(struct rte_ether_addr)); + return mc_addr; +} + +int +ngbe_dev_set_mc_addr_list(struct rte_eth_dev *dev, + struct rte_ether_addr *mc_addr_set, + uint32_t 
nb_mc_addr) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + u8 *mc_addr_list; + + mc_addr_list = (u8 *)mc_addr_set; + return hw->mac.update_mc_addr_list(hw, mc_addr_list, nb_mc_addr, + ngbe_dev_addr_list_itr, TRUE); +} + static const struct eth_dev_ops ngbe_eth_dev_ops = { .dev_configure = ngbe_dev_configure, .dev_infos_get = ngbe_dev_info_get, @@ -2253,6 +2422,12 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = { .rx_queue_release = ngbe_dev_rx_queue_release, .tx_queue_setup = ngbe_dev_tx_queue_setup, .tx_queue_release = ngbe_dev_tx_queue_release, + .mac_addr_add = ngbe_add_rar, + .mac_addr_remove = ngbe_remove_rar, + .mac_addr_set = ngbe_set_default_mac_addr, + .uc_hash_table_set = ngbe_uc_hash_table_set, + .uc_all_hash_table_set = ngbe_uc_all_hash_table_set, + .set_mc_addr_list = ngbe_dev_set_mc_addr_list, }; RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd); diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h index 1527dcc022..65dad4a72b 100644 --- a/drivers/net/ngbe/ngbe_ethdev.h +++ b/drivers/net/ngbe/ngbe_ethdev.h @@ -57,6 +57,12 @@ struct ngbe_hwstrip { uint32_t bitmap[NGBE_HWSTRIP_BITMAP_SIZE]; }; +struct ngbe_uta_info { + uint8_t uc_filter_type; + uint16_t uta_in_use; + uint32_t uta_shadow[NGBE_MAX_UTA]; +}; + /* * Structure to store private data for each driver instance (for each port). 
*/ @@ -67,6 +73,7 @@ struct ngbe_adapter { struct ngbe_stat_mappings stat_mappings; struct ngbe_vfta shadow_vfta; struct ngbe_hwstrip hwstrip; + struct ngbe_uta_info uta_info; bool rx_bulk_alloc_allowed; }; @@ -107,6 +114,9 @@ ngbe_dev_intr(struct rte_eth_dev *dev) #define NGBE_DEV_HWSTRIP(dev) \ (&((struct ngbe_adapter *)(dev)->data->dev_private)->hwstrip) +#define NGBE_DEV_UTA_INFO(dev) \ + (&((struct ngbe_adapter *)(dev)->data->dev_private)->uta_info) + /* * Rx/Tx function prototypes @@ -209,6 +219,9 @@ struct rte_ngbe_xstats_name_off { }; const uint32_t *ngbe_dev_supported_ptypes_get(struct rte_eth_dev *dev); +int ngbe_dev_set_mc_addr_list(struct rte_eth_dev *dev, + struct rte_ether_addr *mc_addr_set, + uint32_t nb_mc_addr); void ngbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on); void ngbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev, From patchwork Wed Sep 8 08:37:42 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawen Wu X-Patchwork-Id: 98298 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id BA982A0C56; Wed, 8 Sep 2021 10:38:17 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 7EAEC411E1; Wed, 8 Sep 2021 10:37:00 +0200 (CEST) Received: from smtpbgau1.qq.com (smtpbgau1.qq.com [54.206.16.166]) by mails.dpdk.org (Postfix) with ESMTP id 3E822411DC for ; Wed, 8 Sep 2021 10:36:56 +0200 (CEST) X-QQ-mid: bizesmtp47t1631090210thv83uoe Received: from wxdbg.localdomain.com (unknown [183.129.236.74]) by esmtp6.qq.com (ESMTP) with id ; Wed, 08 Sep 2021 16:36:49 +0800 (CST) X-QQ-SSF: 01400000002000E0G000B00A0000000 X-QQ-FEAT: r/cTxDoDoiGYKx6n32Xd3z36w0VmcjwGOG5SO2NuExKmVO1HIIzo32GDwyqsi 
From: Jiawen Wu To: dev@dpdk.org Cc: Jiawen Wu Date: Wed, 8 Sep 2021 16:37:42 +0800 Message-Id: <20210908083758.312055-17-jiawenwu@trustnetic.com> In-Reply-To: <20210908083758.312055-1-jiawenwu@trustnetic.com> Subject: [dpdk-dev] [PATCH 16/32] net/ngbe: support VLAN filter Support filtering of packets by VLAN tag identifier.
Signed-off-by: Jiawen Wu --- doc/guides/nics/features/ngbe.ini | 1 + doc/guides/nics/ngbe.rst | 2 +- drivers/net/ngbe/base/ngbe_dummy.h | 5 ++ drivers/net/ngbe/base/ngbe_hw.c | 29 +++++++ drivers/net/ngbe/base/ngbe_hw.h | 2 + drivers/net/ngbe/base/ngbe_type.h | 3 + drivers/net/ngbe/ngbe_ethdev.c | 128 +++++++++++++++++++++++++++++ 7 files changed, 169 insertions(+), 1 deletion(-) diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini index 4b22dc683a..265edba361 100644 --- a/doc/guides/nics/features/ngbe.ini +++ b/doc/guides/nics/features/ngbe.ini @@ -17,6 +17,7 @@ Promiscuous mode = Y Allmulticast mode = Y Unicast MAC filter = Y Multicast MAC filter = Y +VLAN filter = Y CRC offload = P VLAN offload = P QinQ offload = P diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst index 4d01c27064..3683862fd1 100644 --- a/doc/guides/nics/ngbe.rst +++ b/doc/guides/nics/ngbe.rst @@ -11,7 +11,7 @@ for Wangxun 1 Gigabit Ethernet NICs. Features -------- -- MAC filtering +- MAC/VLAN filtering - Packet type information - Checksum offload - VLAN/QinQ stripping and inserting diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h index fe2d53f312..7814fd6226 100644 --- a/drivers/net/ngbe/base/ngbe_dummy.h +++ b/drivers/net/ngbe/base/ngbe_dummy.h @@ -132,6 +132,10 @@ static inline s32 ngbe_mac_update_mc_addr_list_dummy(struct ngbe_hw *TUP0, { return NGBE_ERR_OPS_DUMMY; } +static inline s32 ngbe_mac_clear_vfta_dummy(struct ngbe_hw *TUP0) +{ + return NGBE_ERR_OPS_DUMMY; +} static inline s32 ngbe_mac_init_thermal_ssth_dummy(struct ngbe_hw *TUP0) { return NGBE_ERR_OPS_DUMMY; @@ -209,6 +213,7 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw) hw->mac.clear_vmdq = ngbe_mac_clear_vmdq_dummy; hw->mac.init_rx_addrs = ngbe_mac_init_rx_addrs_dummy; hw->mac.update_mc_addr_list = ngbe_mac_update_mc_addr_list_dummy; + hw->mac.clear_vfta = ngbe_mac_clear_vfta_dummy; hw->mac.init_thermal_sensor_thresh = 
ngbe_mac_init_thermal_ssth_dummy; hw->mac.check_overtemp = ngbe_mac_check_overtemp_dummy; hw->phy.identify = ngbe_phy_identify_dummy; diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c index 897baf179d..ce0867575a 100644 --- a/drivers/net/ngbe/base/ngbe_hw.c +++ b/drivers/net/ngbe/base/ngbe_hw.c @@ -19,6 +19,9 @@ s32 ngbe_start_hw(struct ngbe_hw *hw) { DEBUGFUNC("ngbe_start_hw"); + /* Clear the VLAN filter table */ + hw->mac.clear_vfta(hw); + /* Clear statistics registers */ hw->mac.clear_hw_cntrs(hw); @@ -910,6 +913,30 @@ s32 ngbe_init_uta_tables(struct ngbe_hw *hw) return 0; } +/** + * ngbe_clear_vfta - Clear VLAN filter table + * @hw: pointer to hardware structure + * + * Clears the VLAN filer table, and the VMDq index associated with the filter + **/ +s32 ngbe_clear_vfta(struct ngbe_hw *hw) +{ + u32 offset; + + DEBUGFUNC("ngbe_clear_vfta"); + + for (offset = 0; offset < hw->mac.vft_size; offset++) + wr32(hw, NGBE_VLANTBL(offset), 0); + + for (offset = 0; offset < NGBE_NUM_POOL; offset++) { + wr32(hw, NGBE_PSRVLANIDX, offset); + wr32(hw, NGBE_PSRVLAN, 0); + wr32(hw, NGBE_PSRVLANPLM(0), 0); + } + + return 0; +} + /** * ngbe_check_mac_link_em - Determine link and speed status * @hw: pointer to hardware structure @@ -1238,6 +1265,7 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw) mac->update_mc_addr_list = ngbe_update_mc_addr_list; mac->set_vmdq = ngbe_set_vmdq; mac->clear_vmdq = ngbe_clear_vmdq; + mac->clear_vfta = ngbe_clear_vfta; /* Link */ mac->get_link_capabilities = ngbe_get_link_capabilities_em; @@ -1254,6 +1282,7 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw) rom->validate_checksum = ngbe_validate_eeprom_checksum_em; mac->mcft_size = NGBE_EM_MC_TBL_SIZE; + mac->vft_size = NGBE_EM_VFT_TBL_SIZE; mac->num_rar_entries = NGBE_EM_RAR_ENTRIES; mac->max_rx_queues = NGBE_EM_MAX_RX_QUEUES; mac->max_tx_queues = NGBE_EM_MAX_TX_QUEUES; diff --git a/drivers/net/ngbe/base/ngbe_hw.h b/drivers/net/ngbe/base/ngbe_hw.h index f06baa4395..a27bd3e650 100644 
--- a/drivers/net/ngbe/base/ngbe_hw.h +++ b/drivers/net/ngbe/base/ngbe_hw.h @@ -12,6 +12,7 @@ #define NGBE_EM_MAX_RX_QUEUES 8 #define NGBE_EM_RAR_ENTRIES 32 #define NGBE_EM_MC_TBL_SIZE 32 +#define NGBE_EM_VFT_TBL_SIZE 128 s32 ngbe_init_hw(struct ngbe_hw *hw); s32 ngbe_start_hw(struct ngbe_hw *hw); @@ -48,6 +49,7 @@ void ngbe_release_swfw_sync(struct ngbe_hw *hw, u32 mask); s32 ngbe_set_vmdq(struct ngbe_hw *hw, u32 rar, u32 vmdq); s32 ngbe_clear_vmdq(struct ngbe_hw *hw, u32 rar, u32 vmdq); s32 ngbe_init_uta_tables(struct ngbe_hw *hw); +s32 ngbe_clear_vfta(struct ngbe_hw *hw); s32 ngbe_init_thermal_sensor_thresh(struct ngbe_hw *hw); s32 ngbe_mac_check_overtemp(struct ngbe_hw *hw); diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h index 3e62dde707..5a88d38e84 100644 --- a/drivers/net/ngbe/base/ngbe_type.h +++ b/drivers/net/ngbe/base/ngbe_type.h @@ -10,6 +10,7 @@ #define NGBE_FRAME_SIZE_MAX (9728) /* Maximum frame size, +FCS */ #define NGBE_FRAME_SIZE_DFT (1522) /* Default frame size, +FCS */ +#define NGBE_NUM_POOL (32) #define NGBE_MAX_QP (8) #define NGBE_MAX_UTA 128 @@ -252,6 +253,7 @@ struct ngbe_mac_info { s32 (*update_mc_addr_list)(struct ngbe_hw *hw, u8 *mc_addr_list, u32 mc_addr_count, ngbe_mc_addr_itr func, bool clear); + s32 (*clear_vfta)(struct ngbe_hw *hw); /* Manageability interface */ s32 (*init_thermal_sensor_thresh)(struct ngbe_hw *hw); @@ -264,6 +266,7 @@ struct ngbe_mac_info { u32 mta_shadow[NGBE_MAX_MTA]; s32 mc_filter_type; u32 mcft_size; + u32 vft_size; u32 num_rar_entries; u32 max_tx_queues; u32 max_rx_queues; diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c index d076ba8036..acc018c811 100644 --- a/drivers/net/ngbe/ngbe_ethdev.c +++ b/drivers/net/ngbe/ngbe_ethdev.c @@ -492,6 +492,131 @@ static struct rte_pci_driver rte_ngbe_pmd = { .remove = eth_ngbe_pci_remove, }; +static int +ngbe_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) +{ + struct ngbe_hw *hw = 
ngbe_dev_hw(dev); + struct ngbe_vfta *shadow_vfta = NGBE_DEV_VFTA(dev); + uint32_t vfta; + uint32_t vid_idx; + uint32_t vid_bit; + + vid_idx = (uint32_t)((vlan_id >> 5) & 0x7F); + vid_bit = (uint32_t)(1 << (vlan_id & 0x1F)); + vfta = rd32(hw, NGBE_VLANTBL(vid_idx)); + if (on) + vfta |= vid_bit; + else + vfta &= ~vid_bit; + wr32(hw, NGBE_VLANTBL(vid_idx), vfta); + + /* update local VFTA copy */ + shadow_vfta->vfta[vid_idx] = vfta; + + return 0; +} + +static void +ngbe_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + struct ngbe_rx_queue *rxq; + bool restart; + uint32_t rxcfg, rxbal, rxbah; + + if (on) + ngbe_vlan_hw_strip_enable(dev, queue); + else + ngbe_vlan_hw_strip_disable(dev, queue); + + rxq = dev->data->rx_queues[queue]; + rxbal = rd32(hw, NGBE_RXBAL(rxq->reg_idx)); + rxbah = rd32(hw, NGBE_RXBAH(rxq->reg_idx)); + rxcfg = rd32(hw, NGBE_RXCFG(rxq->reg_idx)); + if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) { + restart = (rxcfg & NGBE_RXCFG_ENA) && + !(rxcfg & NGBE_RXCFG_VLAN); + rxcfg |= NGBE_RXCFG_VLAN; + } else { + restart = (rxcfg & NGBE_RXCFG_ENA) && + (rxcfg & NGBE_RXCFG_VLAN); + rxcfg &= ~NGBE_RXCFG_VLAN; + } + rxcfg &= ~NGBE_RXCFG_ENA; + + if (restart) { + /* set vlan strip for ring */ + ngbe_dev_rx_queue_stop(dev, queue); + wr32(hw, NGBE_RXBAL(rxq->reg_idx), rxbal); + wr32(hw, NGBE_RXBAH(rxq->reg_idx), rxbah); + wr32(hw, NGBE_RXCFG(rxq->reg_idx), rxcfg); + ngbe_dev_rx_queue_start(dev, queue); + } +} + +static int +ngbe_vlan_tpid_set(struct rte_eth_dev *dev, + enum rte_vlan_type vlan_type, + uint16_t tpid) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + int ret = 0; + uint32_t portctrl, vlan_ext, qinq; + + portctrl = rd32(hw, NGBE_PORTCTL); + + vlan_ext = (portctrl & NGBE_PORTCTL_VLANEXT); + qinq = vlan_ext && (portctrl & NGBE_PORTCTL_QINQ); + switch (vlan_type) { + case ETH_VLAN_TYPE_INNER: + if (vlan_ext) { + wr32m(hw, NGBE_VLANCTL, + NGBE_VLANCTL_TPID_MASK, + NGBE_VLANCTL_TPID(tpid)); + 
wr32m(hw, NGBE_DMATXCTRL, + NGBE_DMATXCTRL_TPID_MASK, + NGBE_DMATXCTRL_TPID(tpid)); + } else { + ret = -ENOTSUP; + PMD_DRV_LOG(ERR, + "Inner type is not supported by single VLAN"); + } + + if (qinq) { + wr32m(hw, NGBE_TAGTPID(0), + NGBE_TAGTPID_LSB_MASK, + NGBE_TAGTPID_LSB(tpid)); + } + break; + case ETH_VLAN_TYPE_OUTER: + if (vlan_ext) { + /* Only the high 16-bits is valid */ + wr32m(hw, NGBE_EXTAG, + NGBE_EXTAG_VLAN_MASK, + NGBE_EXTAG_VLAN(tpid)); + } else { + wr32m(hw, NGBE_VLANCTL, + NGBE_VLANCTL_TPID_MASK, + NGBE_VLANCTL_TPID(tpid)); + wr32m(hw, NGBE_DMATXCTRL, + NGBE_DMATXCTRL_TPID_MASK, + NGBE_DMATXCTRL_TPID(tpid)); + } + + if (qinq) { + wr32m(hw, NGBE_TAGTPID(0), + NGBE_TAGTPID_MSB_MASK, + NGBE_TAGTPID_MSB(tpid)); + } + break; + default: + PMD_DRV_LOG(ERR, "Unsupported VLAN type %d", vlan_type); + return -EINVAL; + } + + return ret; +} + void ngbe_vlan_hw_filter_disable(struct rte_eth_dev *dev) { @@ -2411,7 +2536,10 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = { .queue_stats_mapping_set = ngbe_dev_queue_stats_mapping_set, .fw_version_get = ngbe_fw_version_get, .mtu_set = ngbe_dev_mtu_set, + .vlan_filter_set = ngbe_vlan_filter_set, + .vlan_tpid_set = ngbe_vlan_tpid_set, .vlan_offload_set = ngbe_vlan_offload_set, + .vlan_strip_queue_set = ngbe_vlan_strip_queue_set, .rx_queue_start = ngbe_dev_rx_queue_start, .rx_queue_stop = ngbe_dev_rx_queue_stop, .tx_queue_start = ngbe_dev_tx_queue_start, From patchwork Wed Sep 8 08:37:43 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawen Wu X-Patchwork-Id: 98297 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8C0A5A0C56; Wed, 8 Sep 2021 10:38:10 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP 
From: Jiawen Wu To: dev@dpdk.org Cc: Jiawen Wu Date: Wed, 8 Sep 2021 16:37:43 +0800 Message-Id: <20210908083758.312055-18-jiawenwu@trustnetic.com> In-Reply-To: <20210908083758.312055-1-jiawenwu@trustnetic.com> Subject: [dpdk-dev] [PATCH 17/32] net/ngbe: support RSS hash Support RSS hashing on Rx, and configuration of RSS hash computation.
Signed-off-by: Jiawen Wu --- doc/guides/nics/features/ngbe.ini | 3 + doc/guides/nics/ngbe.rst | 2 + drivers/net/ngbe/meson.build | 2 + drivers/net/ngbe/ngbe_ethdev.c | 99 +++++++++++++ drivers/net/ngbe/ngbe_ethdev.h | 27 ++++ drivers/net/ngbe/ngbe_rxtx.c | 235 ++++++++++++++++++++++++++++++ 6 files changed, 368 insertions(+) diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini index 265edba361..70d731a695 100644 --- a/doc/guides/nics/features/ngbe.ini +++ b/doc/guides/nics/features/ngbe.ini @@ -17,6 +17,9 @@ Promiscuous mode = Y Allmulticast mode = Y Unicast MAC filter = Y Multicast MAC filter = Y +RSS hash = Y +RSS key update = Y +RSS reta update = Y VLAN filter = Y CRC offload = P VLAN offload = P diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst index 3683862fd1..ce160e832c 100644 --- a/doc/guides/nics/ngbe.rst +++ b/doc/guides/nics/ngbe.rst @@ -11,6 +11,8 @@ for Wangxun 1 Gigabit Ethernet NICs. Features -------- +- Multiple queues for Tx and Rx +- Receiver Side Scaling (RSS) - MAC/VLAN filtering - Packet type information - Checksum offload diff --git a/drivers/net/ngbe/meson.build b/drivers/net/ngbe/meson.build index 05f94fe7d6..c55e6c20e8 100644 --- a/drivers/net/ngbe/meson.build +++ b/drivers/net/ngbe/meson.build @@ -16,4 +16,6 @@ sources = files( 'ngbe_rxtx.c', ) +deps += ['hash'] + includes += include_directories('base') diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c index acc018c811..0bc1400aea 100644 --- a/drivers/net/ngbe/ngbe_ethdev.c +++ b/drivers/net/ngbe/ngbe_ethdev.c @@ -856,6 +856,9 @@ ngbe_dev_configure(struct rte_eth_dev *dev) PMD_INIT_FUNC_TRACE(); + if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) + dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH; + /* set flag to update link status after init */ intr->flags |= NGBE_FLAG_NEED_LINK_UPDATE; @@ -1082,6 +1085,7 @@ static int ngbe_dev_stop(struct rte_eth_dev *dev) { struct rte_eth_link link; + 
struct ngbe_adapter *adapter = ngbe_dev_adapter(dev); struct ngbe_hw *hw = ngbe_dev_hw(dev); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; @@ -1129,6 +1133,8 @@ ngbe_dev_stop(struct rte_eth_dev *dev) intr_handle->intr_vec = NULL; } + adapter->rss_reta_updated = 0; + hw->adapter_stopped = true; dev->data->dev_started = 0; @@ -1718,6 +1724,10 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) dev_info->rx_desc_lim = rx_desc_lim; dev_info->tx_desc_lim = tx_desc_lim; + dev_info->hash_key_size = NGBE_HKEY_MAX_INDEX * sizeof(uint32_t); + dev_info->reta_size = ETH_RSS_RETA_SIZE_128; + dev_info->flow_type_rss_offloads = NGBE_RSS_OFFLOAD_ALL; + dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_100M | ETH_LINK_SPEED_10M; @@ -2184,6 +2194,91 @@ ngbe_dev_interrupt_handler(void *param) ngbe_dev_interrupt_action(dev); } +int +ngbe_dev_rss_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + uint8_t i, j, mask; + uint32_t reta; + uint16_t idx, shift; + struct ngbe_adapter *adapter = ngbe_dev_adapter(dev); + struct ngbe_hw *hw = ngbe_dev_hw(dev); + + PMD_INIT_FUNC_TRACE(); + + if (!hw->is_pf) { + PMD_DRV_LOG(ERR, "RSS reta update is not supported on this " + "NIC."); + return -ENOTSUP; + } + + if (reta_size != ETH_RSS_RETA_SIZE_128) { + PMD_DRV_LOG(ERR, "The size of hash lookup table configured " + "(%d) doesn't match the number hardware can supported " + "(%d)", reta_size, ETH_RSS_RETA_SIZE_128); + return -EINVAL; + } + + for (i = 0; i < reta_size; i += 4) { + idx = i / RTE_RETA_GROUP_SIZE; + shift = i % RTE_RETA_GROUP_SIZE; + mask = (uint8_t)RS64(reta_conf[idx].mask, shift, 0xF); + if (!mask) + continue; + + reta = rd32a(hw, NGBE_REG_RSSTBL, i >> 2); + for (j = 0; j < 4; j++) { + if (RS8(mask, j, 0x1)) { + reta &= ~(MS32(8 * j, 0xFF)); + reta |= LS32(reta_conf[idx].reta[shift + j], + 8 * j, 0xFF); + } + } + 
wr32a(hw, NGBE_REG_RSSTBL, i >> 2, reta); + } + adapter->rss_reta_updated = 1; + + return 0; +} + +int +ngbe_dev_rss_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + uint8_t i, j, mask; + uint32_t reta; + uint16_t idx, shift; + + PMD_INIT_FUNC_TRACE(); + + if (reta_size != ETH_RSS_RETA_SIZE_128) { + PMD_DRV_LOG(ERR, "The size of hash lookup table configured " + "(%d) doesn't match the number hardware can supported " + "(%d)", reta_size, ETH_RSS_RETA_SIZE_128); + return -EINVAL; + } + + for (i = 0; i < reta_size; i += 4) { + idx = i / RTE_RETA_GROUP_SIZE; + shift = i % RTE_RETA_GROUP_SIZE; + mask = (uint8_t)RS64(reta_conf[idx].mask, shift, 0xF); + if (!mask) + continue; + + reta = rd32a(hw, NGBE_REG_RSSTBL, i >> 2); + for (j = 0; j < 4; j++) { + if (RS8(mask, j, 0x1)) + reta_conf[idx].reta[shift + j] = + (uint16_t)RS32(reta, 8 * j, 0xFF); + } + } + + return 0; +} + static int ngbe_add_rar(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr, uint32_t index, uint32_t pool) @@ -2555,6 +2650,10 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = { .mac_addr_set = ngbe_set_default_mac_addr, .uc_hash_table_set = ngbe_uc_hash_table_set, .uc_all_hash_table_set = ngbe_uc_all_hash_table_set, + .reta_update = ngbe_dev_rss_reta_update, + .reta_query = ngbe_dev_rss_reta_query, + .rss_hash_update = ngbe_dev_rss_hash_update, + .rss_hash_conf_get = ngbe_dev_rss_hash_conf_get, .set_mc_addr_list = ngbe_dev_set_mc_addr_list, }; diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h index 65dad4a72b..083db6080b 100644 --- a/drivers/net/ngbe/ngbe_ethdev.h +++ b/drivers/net/ngbe/ngbe_ethdev.h @@ -17,6 +17,7 @@ #define NGBE_VFTA_SIZE 128 #define NGBE_VLAN_TAG_SIZE 4 +#define NGBE_HKEY_MAX_INDEX 10 /*Default value of Max Rx Queue*/ #define NGBE_MAX_RX_QUEUE_NUM 8 @@ -28,6 +29,17 @@ #define NGBE_QUEUE_ITR_INTERVAL_DEFAULT 500 /* 500us */ +#define 
NGBE_RSS_OFFLOAD_ALL ( \ + ETH_RSS_IPV4 | \ + ETH_RSS_NONFRAG_IPV4_TCP | \ + ETH_RSS_NONFRAG_IPV4_UDP | \ + ETH_RSS_IPV6 | \ + ETH_RSS_NONFRAG_IPV6_TCP | \ + ETH_RSS_NONFRAG_IPV6_UDP | \ + ETH_RSS_IPV6_EX | \ + ETH_RSS_IPV6_TCP_EX | \ + ETH_RSS_IPV6_UDP_EX) + #define NGBE_MISC_VEC_ID RTE_INTR_VEC_ZERO_OFFSET #define NGBE_RX_VEC_START RTE_INTR_VEC_RXTX_OFFSET @@ -75,6 +87,9 @@ struct ngbe_adapter { struct ngbe_hwstrip hwstrip; struct ngbe_uta_info uta_info; bool rx_bulk_alloc_allowed; + + /* For RSS reta table update */ + uint8_t rss_reta_updated; }; static inline struct ngbe_adapter * @@ -177,6 +192,12 @@ uint16_t ngbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t ngbe_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); +int ngbe_dev_rss_hash_update(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf); + +int ngbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf); + void ngbe_set_ivar_map(struct ngbe_hw *hw, int8_t direction, uint8_t queue, uint8_t msix_vector); @@ -222,6 +243,12 @@ const uint32_t *ngbe_dev_supported_ptypes_get(struct rte_eth_dev *dev); int ngbe_dev_set_mc_addr_list(struct rte_eth_dev *dev, struct rte_ether_addr *mc_addr_set, uint32_t nb_mc_addr); +int ngbe_dev_rss_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); +int ngbe_dev_rss_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); void ngbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on); void ngbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev, diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c index 22693c144a..04abc2bb47 100644 --- a/drivers/net/ngbe/ngbe_rxtx.c +++ b/drivers/net/ngbe/ngbe_rxtx.c @@ -897,6 +897,18 @@ ngbe_rxd_pkt_info_to_pkt_type(uint32_t pkt_info, uint16_t ptid_mask) return ngbe_decode_ptype(ptid); } +static inline uint64_t 
+ngbe_rxd_pkt_info_to_pkt_flags(uint32_t pkt_info) +{ + static uint64_t ip_rss_types_map[16] __rte_cache_aligned = { + 0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, + 0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH, + PKT_RX_RSS_HASH, 0, 0, 0, + 0, 0, 0, PKT_RX_FDIR, + }; + return ip_rss_types_map[NGBE_RXD_RSSTYPE(pkt_info)]; +} + static inline uint64_t rx_desc_status_to_pkt_flags(uint32_t rx_status, uint64_t vlan_flags) { @@ -1008,10 +1020,16 @@ ngbe_rx_scan_hw_ring(struct ngbe_rx_queue *rxq) pkt_flags = rx_desc_status_to_pkt_flags(s[j], rxq->vlan_flags); pkt_flags |= rx_desc_error_to_pkt_flags(s[j]); + pkt_flags |= + ngbe_rxd_pkt_info_to_pkt_flags(pkt_info[j]); mb->ol_flags = pkt_flags; mb->packet_type = ngbe_rxd_pkt_info_to_pkt_type(pkt_info[j], rxq->pkt_type_mask); + + if (likely(pkt_flags & PKT_RX_RSS_HASH)) + mb->hash.rss = + rte_le_to_cpu_32(rxdp[j].qw0.dw1); } /* Move mbuf pointers from the S/W ring to the stage */ @@ -1302,6 +1320,7 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, * - packet length, * - Rx port identifier. * 2) integrate hardware offload data, if any: + * - RSS flag & hash, * - IP checksum flag, * - VLAN TCI, if any, * - error flags. @@ -1323,10 +1342,14 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, pkt_flags = rx_desc_status_to_pkt_flags(staterr, rxq->vlan_flags); pkt_flags |= rx_desc_error_to_pkt_flags(staterr); + pkt_flags |= ngbe_rxd_pkt_info_to_pkt_flags(pkt_info); rxm->ol_flags = pkt_flags; rxm->packet_type = ngbe_rxd_pkt_info_to_pkt_type(pkt_info, rxq->pkt_type_mask); + if (likely(pkt_flags & PKT_RX_RSS_HASH)) + rxm->hash.rss = rte_le_to_cpu_32(rxd.qw0.dw1); + /* * Store the mbuf address into the next entry of the array * of returned packets. 
@@ -1366,6 +1389,7 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, * Fill the following info in the HEAD buffer of the Rx cluster: * - RX port identifier * - hardware offload data, if any: + * - RSS flag & hash * - IP checksum flag * - VLAN TCI, if any * - error flags @@ -1389,9 +1413,13 @@ ngbe_fill_cluster_head_buf(struct rte_mbuf *head, struct ngbe_rx_desc *desc, pkt_info = rte_le_to_cpu_32(desc->qw0.dw0); pkt_flags = rx_desc_status_to_pkt_flags(staterr, rxq->vlan_flags); pkt_flags |= rx_desc_error_to_pkt_flags(staterr); + pkt_flags |= ngbe_rxd_pkt_info_to_pkt_flags(pkt_info); head->ol_flags = pkt_flags; head->packet_type = ngbe_rxd_pkt_info_to_pkt_type(pkt_info, rxq->pkt_type_mask); + + if (likely(pkt_flags & PKT_RX_RSS_HASH)) + head->hash.rss = rte_le_to_cpu_32(desc->qw0.dw1); } /** @@ -2249,6 +2277,188 @@ ngbe_dev_free_queues(struct rte_eth_dev *dev) dev->data->nb_tx_queues = 0; } +/** + * Receive Side Scaling (RSS) + * + * Principles: + * The source and destination IP addresses of the IP header and the source + * and destination ports of TCP/UDP headers, if any, of received packets are + * hashed against a configurable random key to compute a 32-bit RSS hash result. + * The seven (7) LSBs of the 32-bit hash result are used as an index into a + * 128-entry redirection table (RETA). Each entry of the RETA provides a 3-bit + * RSS output index which is used as the Rx queue index where to store the + * received packets. + * The following output is supplied in the Rx write-back descriptor: + * - 32-bit result of the Microsoft RSS hash function, + * - 4-bit RSS type field. + */ + +/* + * Used as the default key. 
+ */ +static uint8_t rss_intel_key[40] = { + 0x6D, 0x5A, 0x56, 0xDA, 0x25, 0x5B, 0x0E, 0xC2, + 0x41, 0x67, 0x25, 0x3D, 0x43, 0xA3, 0x8F, 0xB0, + 0xD0, 0xCA, 0x2B, 0xCB, 0xAE, 0x7B, 0x30, 0xB4, + 0x77, 0xCB, 0x2D, 0xA3, 0x80, 0x30, 0xF2, 0x0C, + 0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA, +}; + +static void +ngbe_rss_disable(struct rte_eth_dev *dev) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + + wr32m(hw, NGBE_RACTL, NGBE_RACTL_RSSENA, 0); +} + +int +ngbe_dev_rss_hash_update(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + uint8_t *hash_key; + uint32_t mrqc; + uint32_t rss_key; + uint64_t rss_hf; + uint16_t i; + + if (!hw->is_pf) { + PMD_DRV_LOG(ERR, "RSS hash update is not supported on this " + "NIC."); + return -ENOTSUP; + } + + hash_key = rss_conf->rss_key; + if (hash_key) { + /* Fill in RSS hash key */ + for (i = 0; i < 10; i++) { + rss_key = LS32(hash_key[(i * 4) + 0], 0, 0xFF); + rss_key |= LS32(hash_key[(i * 4) + 1], 8, 0xFF); + rss_key |= LS32(hash_key[(i * 4) + 2], 16, 0xFF); + rss_key |= LS32(hash_key[(i * 4) + 3], 24, 0xFF); + wr32a(hw, NGBE_REG_RSSKEY, i, rss_key); + } + } + + /* Set configured hashing protocols */ + rss_hf = rss_conf->rss_hf & NGBE_RSS_OFFLOAD_ALL; + + mrqc = rd32(hw, NGBE_RACTL); + mrqc &= ~NGBE_RACTL_RSSMASK; + if (rss_hf & ETH_RSS_IPV4) + mrqc |= NGBE_RACTL_RSSIPV4; + if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) + mrqc |= NGBE_RACTL_RSSIPV4TCP; + if (rss_hf & ETH_RSS_IPV6 || + rss_hf & ETH_RSS_IPV6_EX) + mrqc |= NGBE_RACTL_RSSIPV6; + if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP || + rss_hf & ETH_RSS_IPV6_TCP_EX) + mrqc |= NGBE_RACTL_RSSIPV6TCP; + if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) + mrqc |= NGBE_RACTL_RSSIPV4UDP; + if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP || + rss_hf & ETH_RSS_IPV6_UDP_EX) + mrqc |= NGBE_RACTL_RSSIPV6UDP; + + if (rss_hf) + mrqc |= NGBE_RACTL_RSSENA; + else + mrqc &= ~NGBE_RACTL_RSSENA; + + wr32(hw, NGBE_RACTL, mrqc); + + return 0; +} + +int 
+ngbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + uint8_t *hash_key; + uint32_t mrqc; + uint32_t rss_key; + uint64_t rss_hf; + uint16_t i; + + hash_key = rss_conf->rss_key; + if (hash_key) { + /* Return RSS hash key */ + for (i = 0; i < 10; i++) { + rss_key = rd32a(hw, NGBE_REG_RSSKEY, i); + hash_key[(i * 4) + 0] = RS32(rss_key, 0, 0xFF); + hash_key[(i * 4) + 1] = RS32(rss_key, 8, 0xFF); + hash_key[(i * 4) + 2] = RS32(rss_key, 16, 0xFF); + hash_key[(i * 4) + 3] = RS32(rss_key, 24, 0xFF); + } + } + + rss_hf = 0; + + mrqc = rd32(hw, NGBE_RACTL); + if (mrqc & NGBE_RACTL_RSSIPV4) + rss_hf |= ETH_RSS_IPV4; + if (mrqc & NGBE_RACTL_RSSIPV4TCP) + rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP; + if (mrqc & NGBE_RACTL_RSSIPV6) + rss_hf |= ETH_RSS_IPV6 | + ETH_RSS_IPV6_EX; + if (mrqc & NGBE_RACTL_RSSIPV6TCP) + rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP | + ETH_RSS_IPV6_TCP_EX; + if (mrqc & NGBE_RACTL_RSSIPV4UDP) + rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP; + if (mrqc & NGBE_RACTL_RSSIPV6UDP) + rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP | + ETH_RSS_IPV6_UDP_EX; + if (!(mrqc & NGBE_RACTL_RSSENA)) + rss_hf = 0; + + rss_hf &= NGBE_RSS_OFFLOAD_ALL; + + rss_conf->rss_hf = rss_hf; + return 0; +} + +static void +ngbe_rss_configure(struct rte_eth_dev *dev) +{ + struct rte_eth_rss_conf rss_conf; + struct ngbe_adapter *adapter = ngbe_dev_adapter(dev); + struct ngbe_hw *hw = ngbe_dev_hw(dev); + uint32_t reta; + uint16_t i; + uint16_t j; + + PMD_INIT_FUNC_TRACE(); + + /* + * Fill in redirection table + * The byte-swap is needed because NIC registers are in + * little-endian order. 
+ */ + if (adapter->rss_reta_updated == 0) { + reta = 0; + for (i = 0, j = 0; i < ETH_RSS_RETA_SIZE_128; i++, j++) { + if (j == dev->data->nb_rx_queues) + j = 0; + reta = (reta >> 8) | LS32(j, 24, 0xFF); + if ((i & 3) == 3) + wr32a(hw, NGBE_REG_RSSTBL, i >> 2, reta); + } + } + /* + * Configure the RSS key and the RSS protocols used to compute + * the RSS hash of input packets. + */ + rss_conf = dev->data->dev_conf.rx_adv_conf.rss_conf; + if (rss_conf.rss_key == NULL) + rss_conf.rss_key = rss_intel_key; /* Default hash key */ + ngbe_dev_rss_hash_update(dev, &rss_conf); +} + void ngbe_configure_port(struct rte_eth_dev *dev) { struct ngbe_hw *hw = ngbe_dev_hw(dev); @@ -2317,6 +2527,24 @@ ngbe_alloc_rx_queue_mbufs(struct ngbe_rx_queue *rxq) return 0; } +static int +ngbe_dev_mq_rx_configure(struct rte_eth_dev *dev) +{ + switch (dev->data->dev_conf.rxmode.mq_mode) { + case ETH_MQ_RX_RSS: + ngbe_rss_configure(dev); + break; + + case ETH_MQ_RX_NONE: + default: + /* if mq_mode is none, disable rss mode.*/ + ngbe_rss_disable(dev); + break; + } + + return 0; +} + void ngbe_set_rx_function(struct rte_eth_dev *dev) { @@ -2488,8 +2716,15 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev) if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER) dev->data->scattered_rx = 1; + + /* + * Device configured with multiple RX queues. + */ + ngbe_dev_mq_rx_configure(dev); + /* * Setup the Checksum Register. + * Disable Full-Packet Checksum which is mutually exclusive with RSS. * Enable IP/L4 checksum computation by hardware if requested to do so. 
 */ rxcsum = rd32(hw, NGBE_PSRCTL); From patchwork Wed Sep 8 08:37:44 2021 X-Patchwork-Submitter: Jiawen Wu X-Patchwork-Id: 98299 From: Jiawen Wu To: dev@dpdk.org Cc: Jiawen Wu Date: Wed, 8 Sep 2021 16:37:44 +0800 Message-Id: <20210908083758.312055-19-jiawenwu@trustnetic.com> In-Reply-To: <20210908083758.312055-1-jiawenwu@trustnetic.com> Subject: [dpdk-dev] [PATCH 18/32] net/ngbe: support SRIOV
Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Initialize and configure PF module to support SRIOV. Signed-off-by: Jiawen Wu --- doc/guides/nics/features/ngbe.ini | 1 + drivers/net/ngbe/base/meson.build | 1 + drivers/net/ngbe/base/ngbe_dummy.h | 17 +++ drivers/net/ngbe/base/ngbe_hw.c | 47 ++++++- drivers/net/ngbe/base/ngbe_mbx.c | 30 +++++ drivers/net/ngbe/base/ngbe_mbx.h | 11 ++ drivers/net/ngbe/base/ngbe_type.h | 22 ++++ drivers/net/ngbe/meson.build | 1 + drivers/net/ngbe/ngbe_ethdev.c | 32 ++++- drivers/net/ngbe/ngbe_ethdev.h | 19 +++ drivers/net/ngbe/ngbe_pf.c | 196 +++++++++++++++++++++++++++++ drivers/net/ngbe/ngbe_rxtx.c | 26 ++-- 12 files changed, 390 insertions(+), 13 deletions(-) create mode 100644 drivers/net/ngbe/base/ngbe_mbx.c create mode 100644 drivers/net/ngbe/base/ngbe_mbx.h create mode 100644 drivers/net/ngbe/ngbe_pf.c diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini index 70d731a695..9a497ccae6 100644 --- a/doc/guides/nics/features/ngbe.ini +++ b/doc/guides/nics/features/ngbe.ini @@ -20,6 +20,7 @@ Multicast MAC filter = Y RSS hash = Y RSS key update = Y RSS reta update = Y +SR-IOV = Y VLAN filter = Y CRC offload = P VLAN offload = P diff --git a/drivers/net/ngbe/base/meson.build b/drivers/net/ngbe/base/meson.build index 6081281135..390b0f9c12 100644 --- a/drivers/net/ngbe/base/meson.build +++ b/drivers/net/ngbe/base/meson.build @@ -4,6 +4,7 @@ sources = [ 'ngbe_eeprom.c', 'ngbe_hw.c', + 'ngbe_mbx.c', 'ngbe_mng.c', 'ngbe_phy.c', 'ngbe_phy_rtl.c', diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h index 7814fd6226..5cb09bfcaa 100644 --- a/drivers/net/ngbe/base/ngbe_dummy.h +++ b/drivers/net/ngbe/base/ngbe_dummy.h @@ -136,6 +136,14 @@ static inline s32 ngbe_mac_clear_vfta_dummy(struct ngbe_hw *TUP0) { return NGBE_ERR_OPS_DUMMY; } +static inline void 
ngbe_mac_set_mac_anti_spoofing_dummy(struct ngbe_hw *TUP0, + bool TUP1, int TUP2) +{ +} +static inline void ngbe_mac_set_vlan_anti_spoofing_dummy(struct ngbe_hw *TUP0, + bool TUP1, int TUP2) +{ +} static inline s32 ngbe_mac_init_thermal_ssth_dummy(struct ngbe_hw *TUP0) { return NGBE_ERR_OPS_DUMMY; @@ -187,6 +195,12 @@ static inline s32 ngbe_phy_check_link_dummy(struct ngbe_hw *TUP0, u32 *TUP1, { return NGBE_ERR_OPS_DUMMY; } + +/* struct ngbe_mbx_operations */ +static inline void ngbe_mbx_init_params_dummy(struct ngbe_hw *TUP0) +{ +} + static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw) { hw->bus.set_lan_id = ngbe_bus_set_lan_id_dummy; @@ -214,6 +228,8 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw) hw->mac.init_rx_addrs = ngbe_mac_init_rx_addrs_dummy; hw->mac.update_mc_addr_list = ngbe_mac_update_mc_addr_list_dummy; hw->mac.clear_vfta = ngbe_mac_clear_vfta_dummy; + hw->mac.set_mac_anti_spoofing = ngbe_mac_set_mac_anti_spoofing_dummy; + hw->mac.set_vlan_anti_spoofing = ngbe_mac_set_vlan_anti_spoofing_dummy; hw->mac.init_thermal_sensor_thresh = ngbe_mac_init_thermal_ssth_dummy; hw->mac.check_overtemp = ngbe_mac_check_overtemp_dummy; hw->phy.identify = ngbe_phy_identify_dummy; @@ -225,6 +241,7 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw) hw->phy.write_reg_unlocked = ngbe_phy_write_reg_unlocked_dummy; hw->phy.setup_link = ngbe_phy_setup_link_dummy; hw->phy.check_link = ngbe_phy_check_link_dummy; + hw->mbx.init_params = ngbe_mbx_init_params_dummy; } #endif /* _NGBE_TYPE_DUMMY_H_ */ diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c index ce0867575a..8b45a91f78 100644 --- a/drivers/net/ngbe/base/ngbe_hw.c +++ b/drivers/net/ngbe/base/ngbe_hw.c @@ -4,6 +4,7 @@ */ #include "ngbe_type.h" +#include "ngbe_mbx.h" #include "ngbe_phy.h" #include "ngbe_eeprom.h" #include "ngbe_mng.h" @@ -1008,6 +1009,44 @@ s32 ngbe_setup_mac_link_em(struct ngbe_hw *hw, return status; } +/** + * ngbe_set_mac_anti_spoofing - 
Enable/Disable MAC anti-spoofing + * @hw: pointer to hardware structure + * @enable: enable or disable switch for MAC anti-spoofing + * @vf: Virtual Function pool - VF Pool to set for MAC anti-spoofing + * + **/ +void ngbe_set_mac_anti_spoofing(struct ngbe_hw *hw, bool enable, int vf) +{ + u32 pfvfspoof; + + pfvfspoof = rd32(hw, NGBE_POOLTXASMAC); + if (enable) + pfvfspoof |= (1 << vf); + else + pfvfspoof &= ~(1 << vf); + wr32(hw, NGBE_POOLTXASMAC, pfvfspoof); +} + +/** + * ngbe_set_vlan_anti_spoofing - Enable/Disable VLAN anti-spoofing + * @hw: pointer to hardware structure + * @enable: enable or disable switch for VLAN anti-spoofing + * @vf: Virtual Function pool - VF Pool to set for VLAN anti-spoofing + * + **/ +void ngbe_set_vlan_anti_spoofing(struct ngbe_hw *hw, bool enable, int vf) +{ + u32 pfvfspoof; + + pfvfspoof = rd32(hw, NGBE_POOLTXASVLAN); + if (enable) + pfvfspoof |= (1 << vf); + else + pfvfspoof &= ~(1 << vf); + wr32(hw, NGBE_POOLTXASVLAN, pfvfspoof); +} + /** * ngbe_init_thermal_sensor_thresh - Inits thermal sensor thresholds * @hw: pointer to hardware structure @@ -1231,6 +1270,7 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw) struct ngbe_mac_info *mac = &hw->mac; struct ngbe_phy_info *phy = &hw->phy; struct ngbe_rom_info *rom = &hw->rom; + struct ngbe_mbx_info *mbx = &hw->mbx; DEBUGFUNC("ngbe_init_ops_pf"); @@ -1258,7 +1298,8 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw) mac->disable_sec_rx_path = ngbe_disable_sec_rx_path; mac->enable_sec_rx_path = ngbe_enable_sec_rx_path; - /* RAR, Multicast */ + + /* RAR, Multicast, VLAN */ mac->set_rar = ngbe_set_rar; mac->clear_rar = ngbe_clear_rar; mac->init_rx_addrs = ngbe_init_rx_addrs; @@ -1266,6 +1307,8 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw) mac->set_vmdq = ngbe_set_vmdq; mac->clear_vmdq = ngbe_clear_vmdq; mac->clear_vfta = ngbe_clear_vfta; + mac->set_mac_anti_spoofing = ngbe_set_mac_anti_spoofing; + mac->set_vlan_anti_spoofing = ngbe_set_vlan_anti_spoofing; /* Link */ mac->get_link_capabilities = 
ngbe_get_link_capabilities_em; @@ -1276,6 +1319,8 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw) mac->init_thermal_sensor_thresh = ngbe_init_thermal_sensor_thresh; mac->check_overtemp = ngbe_mac_check_overtemp; + mbx->init_params = ngbe_init_mbx_params_pf; + /* EEPROM */ rom->init_params = ngbe_init_eeprom_params; rom->read32 = ngbe_ee_read32; diff --git a/drivers/net/ngbe/base/ngbe_mbx.c b/drivers/net/ngbe/base/ngbe_mbx.c new file mode 100644 index 0000000000..1ac9531ceb --- /dev/null +++ b/drivers/net/ngbe/base/ngbe_mbx.c @@ -0,0 +1,30 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2021 Beijing WangXun Technology Co., Ltd. + * Copyright(c) 2010-2017 Intel Corporation + */ + +#include "ngbe_type.h" + +#include "ngbe_mbx.h" + +/** + * ngbe_init_mbx_params_pf - set initial values for pf mailbox + * @hw: pointer to the HW structure + * + * Initializes the hw->mbx struct to correct values for pf mailbox + */ +void ngbe_init_mbx_params_pf(struct ngbe_hw *hw) +{ + struct ngbe_mbx_info *mbx = &hw->mbx; + + mbx->timeout = 0; + mbx->usec_delay = 0; + + mbx->size = NGBE_P2VMBX_SIZE; + + mbx->stats.msgs_tx = 0; + mbx->stats.msgs_rx = 0; + mbx->stats.reqs = 0; + mbx->stats.acks = 0; + mbx->stats.rsts = 0; +} diff --git a/drivers/net/ngbe/base/ngbe_mbx.h b/drivers/net/ngbe/base/ngbe_mbx.h new file mode 100644 index 0000000000..d280945baf --- /dev/null +++ b/drivers/net/ngbe/base/ngbe_mbx.h @@ -0,0 +1,11 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2021 Beijing WangXun Technology Co., Ltd. 
+ * Copyright(c) 2010-2017 Intel Corporation + */ + +#ifndef _NGBE_MBX_H_ +#define _NGBE_MBX_H_ + +void ngbe_init_mbx_params_pf(struct ngbe_hw *hw); + +#endif /* _NGBE_MBX_H_ */ diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h index 5a88d38e84..bc95fcf609 100644 --- a/drivers/net/ngbe/base/ngbe_type.h +++ b/drivers/net/ngbe/base/ngbe_type.h @@ -254,6 +254,9 @@ struct ngbe_mac_info { u32 mc_addr_count, ngbe_mc_addr_itr func, bool clear); s32 (*clear_vfta)(struct ngbe_hw *hw); + void (*set_mac_anti_spoofing)(struct ngbe_hw *hw, bool enable, int vf); + void (*set_vlan_anti_spoofing)(struct ngbe_hw *hw, + bool enable, int vf); /* Manageability interface */ s32 (*init_thermal_sensor_thresh)(struct ngbe_hw *hw); @@ -305,6 +308,24 @@ struct ngbe_phy_info { u32 autoneg_advertised; }; +struct ngbe_mbx_stats { + u32 msgs_tx; + u32 msgs_rx; + + u32 acks; + u32 reqs; + u32 rsts; +}; + +struct ngbe_mbx_info { + void (*init_params)(struct ngbe_hw *hw); + + struct ngbe_mbx_stats stats; + u32 timeout; + u32 usec_delay; + u16 size; +}; + enum ngbe_isb_idx { NGBE_ISB_HEADER, NGBE_ISB_MISC, @@ -321,6 +342,7 @@ struct ngbe_hw { struct ngbe_phy_info phy; struct ngbe_rom_info rom; struct ngbe_bus_info bus; + struct ngbe_mbx_info mbx; u16 device_id; u16 vendor_id; u16 sub_device_id; diff --git a/drivers/net/ngbe/meson.build b/drivers/net/ngbe/meson.build index c55e6c20e8..8b5195aab3 100644 --- a/drivers/net/ngbe/meson.build +++ b/drivers/net/ngbe/meson.build @@ -13,6 +13,7 @@ objs = [base_objs] sources = files( 'ngbe_ethdev.c', 'ngbe_ptypes.c', + 'ngbe_pf.c', 'ngbe_rxtx.c', ) diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c index 0bc1400aea..70e471b2c2 100644 --- a/drivers/net/ngbe/ngbe_ethdev.c +++ b/drivers/net/ngbe/ngbe_ethdev.c @@ -304,7 +304,7 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused) struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; const struct rte_memzone *mz; uint32_t 
ctrl_ext; - int err; + int err, ret; PMD_INIT_FUNC_TRACE(); @@ -423,6 +423,16 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused) /* initialize the hw strip bitmap*/ memset(hwstrip, 0, sizeof(*hwstrip)); + /* initialize PF if max_vfs not zero */ + ret = ngbe_pf_host_init(eth_dev); + if (ret) { + rte_free(eth_dev->data->mac_addrs); + eth_dev->data->mac_addrs = NULL; + rte_free(eth_dev->data->hash_mac_addrs); + eth_dev->data->hash_mac_addrs = NULL; + return ret; + } + ctrl_ext = rd32(hw, NGBE_PORTCTL); /* let hardware know driver is loaded */ ctrl_ext |= NGBE_PORTCTL_DRVLOAD; @@ -926,6 +936,9 @@ ngbe_dev_start(struct rte_eth_dev *dev) hw->mac.start_hw(hw); hw->mac.get_link_status = true; + /* configure PF module if SRIOV enabled */ + ngbe_pf_host_configure(dev); + ngbe_dev_phy_intr_setup(dev); /* check and configure queue intr-vector mapping */ @@ -1087,8 +1100,10 @@ ngbe_dev_stop(struct rte_eth_dev *dev) struct rte_eth_link link; struct ngbe_adapter *adapter = ngbe_dev_adapter(dev); struct ngbe_hw *hw = ngbe_dev_hw(dev); + struct ngbe_vf_info *vfinfo = *NGBE_DEV_VFDATA(dev); struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + int vf; if (hw->adapter_stopped) return 0; @@ -1111,6 +1126,9 @@ ngbe_dev_stop(struct rte_eth_dev *dev) /* stop adapter */ ngbe_stop_hw(hw); + for (vf = 0; vfinfo != NULL && vf < pci_dev->max_vfs; vf++) + vfinfo[vf].clear_to_send = false; + ngbe_dev_clear_queues(dev); /* Clear stored conf */ @@ -1183,6 +1201,9 @@ ngbe_dev_close(struct rte_eth_dev *dev) rte_delay_ms(100); } while (retries++ < (10 + NGBE_LINK_UP_TIME)); + /* uninitialize PF if max_vfs not zero */ + ngbe_pf_host_uninit(dev); + rte_free(dev->data->mac_addrs); dev->data->mac_addrs = NULL; @@ -1200,6 +1221,15 @@ ngbe_dev_reset(struct rte_eth_dev *dev) { int ret; + /* When a DPDK PMD PF begin to reset PF port, it should notify all + * its VF to make them align with it. 
The detailed notification + * mechanism is PMD specific. As to ngbe PF, it is rather complex. + * To avoid unexpected behavior in VF, currently reset of PF with + * SR-IOV activation is not supported. It might be supported later. + */ + if (dev->data->sriov.active) + return -ENOTSUP; + ret = eth_ngbe_dev_uninit(dev); if (ret != 0) return ret; diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h index 083db6080b..f5a1363d10 100644 --- a/drivers/net/ngbe/ngbe_ethdev.h +++ b/drivers/net/ngbe/ngbe_ethdev.h @@ -7,6 +7,8 @@ #define _NGBE_ETHDEV_H_ #include "ngbe_ptypes.h" +#include +#include /* need update link, bit flag */ #define NGBE_FLAG_NEED_LINK_UPDATE ((uint32_t)(1 << 0)) @@ -75,6 +77,12 @@ struct ngbe_uta_info { uint32_t uta_shadow[NGBE_MAX_UTA]; }; +struct ngbe_vf_info { + uint8_t vf_mac_addresses[RTE_ETHER_ADDR_LEN]; + bool clear_to_send; + uint16_t switch_domain_id; +}; + /* * Structure to store private data for each driver instance (for each port). */ @@ -85,6 +93,7 @@ struct ngbe_adapter { struct ngbe_stat_mappings stat_mappings; struct ngbe_vfta shadow_vfta; struct ngbe_hwstrip hwstrip; + struct ngbe_vf_info *vfdata; struct ngbe_uta_info uta_info; bool rx_bulk_alloc_allowed; @@ -129,6 +138,10 @@ ngbe_dev_intr(struct rte_eth_dev *dev) #define NGBE_DEV_HWSTRIP(dev) \ (&((struct ngbe_adapter *)(dev)->data->dev_private)->hwstrip) + +#define NGBE_DEV_VFDATA(dev) \ + (&((struct ngbe_adapter *)(dev)->data->dev_private)->vfdata) + #define NGBE_DEV_UTA_INFO(dev) \ (&((struct ngbe_adapter *)(dev)->data->dev_private)->uta_info) @@ -216,6 +229,12 @@ void ngbe_vlan_hw_filter_disable(struct rte_eth_dev *dev); void ngbe_vlan_hw_strip_config(struct rte_eth_dev *dev); +int ngbe_pf_host_init(struct rte_eth_dev *eth_dev); + +void ngbe_pf_host_uninit(struct rte_eth_dev *eth_dev); + +int ngbe_pf_host_configure(struct rte_eth_dev *eth_dev); + #define NGBE_LINK_DOWN_CHECK_TIMEOUT 4000 /* ms */ #define NGBE_LINK_UP_CHECK_TIMEOUT 1000 /* ms */ #define 
NGBE_VMDQ_NUM_UC_MAC 4096 /* Maximum nb. of UC MAC addr. */ diff --git a/drivers/net/ngbe/ngbe_pf.c b/drivers/net/ngbe/ngbe_pf.c new file mode 100644 index 0000000000..2f9dfc4284 --- /dev/null +++ b/drivers/net/ngbe/ngbe_pf.c @@ -0,0 +1,196 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2021 Beijing WangXun Technology Co., Ltd. + * Copyright(c) 2010-2017 Intel Corporation + */ + +#include +#include +#include +#include + +#include "base/ngbe.h" +#include "ngbe_ethdev.h" + +#define NGBE_MAX_VFTA (128) + +static inline uint16_t +dev_num_vf(struct rte_eth_dev *eth_dev) +{ + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + + /* EM only support 7 VFs. */ + return pci_dev->max_vfs; +} + +static inline +int ngbe_vf_perm_addr_gen(struct rte_eth_dev *dev, uint16_t vf_num) +{ + unsigned char vf_mac_addr[RTE_ETHER_ADDR_LEN]; + struct ngbe_vf_info *vfinfo = *NGBE_DEV_VFDATA(dev); + uint16_t vfn; + + for (vfn = 0; vfn < vf_num; vfn++) { + rte_eth_random_addr(vf_mac_addr); + /* keep the random address as default */ + memcpy(vfinfo[vfn].vf_mac_addresses, vf_mac_addr, + RTE_ETHER_ADDR_LEN); + } + + return 0; +} + +int ngbe_pf_host_init(struct rte_eth_dev *eth_dev) +{ + struct ngbe_vf_info **vfinfo = NGBE_DEV_VFDATA(eth_dev); + struct ngbe_uta_info *uta_info = NGBE_DEV_UTA_INFO(eth_dev); + struct ngbe_hw *hw = ngbe_dev_hw(eth_dev); + uint16_t vf_num; + uint8_t nb_queue = 1; + int ret = 0; + + PMD_INIT_FUNC_TRACE(); + + RTE_ETH_DEV_SRIOV(eth_dev).active = 0; + vf_num = dev_num_vf(eth_dev); + if (vf_num == 0) + return ret; + + *vfinfo = rte_zmalloc("vf_info", + sizeof(struct ngbe_vf_info) * vf_num, 0); + if (*vfinfo == NULL) { + PMD_INIT_LOG(ERR, + "Cannot allocate memory for private VF data\n"); + return -ENOMEM; + } + + ret = rte_eth_switch_domain_alloc(&(*vfinfo)->switch_domain_id); + if (ret) { + PMD_INIT_LOG(ERR, + "failed to allocate switch domain for device %d", ret); + rte_free(*vfinfo); + *vfinfo = NULL; + return ret; + } + + 
memset(uta_info, 0, sizeof(struct ngbe_uta_info)); + hw->mac.mc_filter_type = 0; + + RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_8_POOLS; + RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue; + RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx = + (uint16_t)(vf_num * nb_queue); + + ngbe_vf_perm_addr_gen(eth_dev, vf_num); + + /* init_mailbox_params */ + hw->mbx.init_params(hw); + + return ret; +} + +void ngbe_pf_host_uninit(struct rte_eth_dev *eth_dev) +{ + struct ngbe_vf_info **vfinfo; + uint16_t vf_num; + int ret; + + PMD_INIT_FUNC_TRACE(); + + RTE_ETH_DEV_SRIOV(eth_dev).active = 0; + RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = 0; + RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx = 0; + + vf_num = dev_num_vf(eth_dev); + if (vf_num == 0) + return; + + vfinfo = NGBE_DEV_VFDATA(eth_dev); + if (*vfinfo == NULL) + return; + + ret = rte_eth_switch_domain_free((*vfinfo)->switch_domain_id); + if (ret) + PMD_INIT_LOG(WARNING, "failed to free switch domain: %d", ret); + + rte_free(*vfinfo); + *vfinfo = NULL; +} + +int ngbe_pf_host_configure(struct rte_eth_dev *eth_dev) +{ + uint32_t vtctl, fcrth; + uint32_t vfre_offset; + uint16_t vf_num; + const uint8_t VFRE_SHIFT = 5; /* VFRE 32 bits per slot */ + const uint8_t VFRE_MASK = (uint8_t)((1U << VFRE_SHIFT) - 1); + struct ngbe_hw *hw = ngbe_dev_hw(eth_dev); + uint32_t gpie; + uint32_t gcr_ext; + uint32_t vlanctrl; + int i; + + vf_num = dev_num_vf(eth_dev); + if (vf_num == 0) + return -1; + + /* set the default pool for PF */ + vtctl = rd32(hw, NGBE_POOLCTL); + vtctl &= ~NGBE_POOLCTL_DEFPL_MASK; + vtctl |= NGBE_POOLCTL_DEFPL(vf_num); + vtctl |= NGBE_POOLCTL_RPLEN; + wr32(hw, NGBE_POOLCTL, vtctl); + + vfre_offset = vf_num & VFRE_MASK; + + /* Enable pools reserved to PF only */ + wr32(hw, NGBE_POOLRXENA(0), (~0U) << vfre_offset); + wr32(hw, NGBE_POOLTXENA(0), (~0U) << vfre_offset); + + wr32(hw, NGBE_PSRCTL, NGBE_PSRCTL_LBENA); + + /* clear VMDq map to permanent rar 0 */ + hw->mac.clear_vmdq(hw, 0, BIT_MASK32); + + /* clear VMDq map to scan rar 31 */ 
+ wr32(hw, NGBE_ETHADDRIDX, hw->mac.num_rar_entries); + wr32(hw, NGBE_ETHADDRASS, 0); + + /* set VMDq map to default PF pool */ + hw->mac.set_vmdq(hw, 0, vf_num); + + /* + * SW must set PORTCTL.VT_Mode the same as GPIE.VT_Mode + */ + gpie = rd32(hw, NGBE_GPIE); + gpie |= NGBE_GPIE_MSIX; + gcr_ext = rd32(hw, NGBE_PORTCTL); + gcr_ext &= ~NGBE_PORTCTL_NUMVT_MASK; + + if (RTE_ETH_DEV_SRIOV(eth_dev).active == ETH_8_POOLS) + gcr_ext |= NGBE_PORTCTL_NUMVT_8; + + wr32(hw, NGBE_PORTCTL, gcr_ext); + wr32(hw, NGBE_GPIE, gpie); + + /* + * enable vlan filtering and allow all vlan tags through + */ + vlanctrl = rd32(hw, NGBE_VLANCTL); + vlanctrl |= NGBE_VLANCTL_VFE; /* enable vlan filters */ + wr32(hw, NGBE_VLANCTL, vlanctrl); + + /* enable all vlan filters */ + for (i = 0; i < NGBE_MAX_VFTA; i++) + wr32(hw, NGBE_VLANTBL(i), 0xFFFFFFFF); + + /* Enable MAC Anti-Spoofing */ + hw->mac.set_mac_anti_spoofing(hw, FALSE, vf_num); + + /* set flow control threshold to max to avoid tx switch hang */ + wr32(hw, NGBE_FCWTRLO, 0); + fcrth = rd32(hw, NGBE_PBRXSIZE) - 32; + wr32(hw, NGBE_FCWTRHI, fcrth); + + return 0; +} + diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c index 04abc2bb47..91cafed7fc 100644 --- a/drivers/net/ngbe/ngbe_rxtx.c +++ b/drivers/net/ngbe/ngbe_rxtx.c @@ -1886,7 +1886,8 @@ ngbe_dev_tx_queue_setup(struct rte_eth_dev *dev, txq->hthresh = tx_conf->tx_thresh.hthresh; txq->wthresh = tx_conf->tx_thresh.wthresh; txq->queue_id = queue_idx; - txq->reg_idx = queue_idx; + txq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ? + queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx); txq->port_id = dev->data->port_id; txq->offloads = offloads; txq->ops = &def_txq_ops; @@ -2138,7 +2139,8 @@ ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev, rxq->nb_rx_desc = nb_desc; rxq->rx_free_thresh = rx_conf->rx_free_thresh; rxq->queue_id = queue_idx; - rxq->reg_idx = queue_idx; + rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ? 
+ queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx); rxq->port_id = dev->data->port_id; if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) rxq->crc_len = RTE_ETHER_CRC_LEN; @@ -2530,16 +2532,18 @@ ngbe_alloc_rx_queue_mbufs(struct ngbe_rx_queue *rxq) static int ngbe_dev_mq_rx_configure(struct rte_eth_dev *dev) { - switch (dev->data->dev_conf.rxmode.mq_mode) { - case ETH_MQ_RX_RSS: - ngbe_rss_configure(dev); - break; + if (RTE_ETH_DEV_SRIOV(dev).active == 0) { + switch (dev->data->dev_conf.rxmode.mq_mode) { + case ETH_MQ_RX_RSS: + ngbe_rss_configure(dev); + break; - case ETH_MQ_RX_NONE: - default: - /* if mq_mode is none, disable rss mode.*/ - ngbe_rss_disable(dev); - break; + case ETH_MQ_RX_NONE: + default: + /* if mq_mode is none, disable rss mode.*/ + ngbe_rss_disable(dev); + break; + } } return 0; From patchwork Wed Sep 8 08:37:45 2021 X-Patchwork-Submitter: Jiawen Wu X-Patchwork-Id: 98303
From: Jiawen Wu To: dev@dpdk.org Cc: Jiawen Wu Date: Wed, 8 Sep 2021 16:37:45 +0800 Message-Id: <20210908083758.312055-20-jiawenwu@trustnetic.com> In-Reply-To: <20210908083758.312055-1-jiawenwu@trustnetic.com> Subject: [dpdk-dev] [PATCH 19/32] net/ngbe: add mailbox process operations Add checks for VF function-level reset, mailbox messages, and ACKs from the VF, and process the pending messages. 
Signed-off-by: Jiawen Wu --- drivers/net/ngbe/base/ngbe.h | 4 + drivers/net/ngbe/base/ngbe_dummy.h | 39 ++ drivers/net/ngbe/base/ngbe_hw.c | 215 +++++++++++ drivers/net/ngbe/base/ngbe_hw.h | 8 + drivers/net/ngbe/base/ngbe_mbx.c | 297 +++++++++++++++ drivers/net/ngbe/base/ngbe_mbx.h | 78 ++++ drivers/net/ngbe/base/ngbe_type.h | 10 + drivers/net/ngbe/meson.build | 2 + drivers/net/ngbe/ngbe_ethdev.c | 7 + drivers/net/ngbe/ngbe_ethdev.h | 13 + drivers/net/ngbe/ngbe_pf.c | 564 +++++++++++++++++++++++++++++ drivers/net/ngbe/rte_pmd_ngbe.h | 39 ++ 12 files changed, 1276 insertions(+) create mode 100644 drivers/net/ngbe/rte_pmd_ngbe.h diff --git a/drivers/net/ngbe/base/ngbe.h b/drivers/net/ngbe/base/ngbe.h index fe85b07b57..1d17c2f115 100644 --- a/drivers/net/ngbe/base/ngbe.h +++ b/drivers/net/ngbe/base/ngbe.h @@ -6,6 +6,10 @@ #define _NGBE_H_ #include "ngbe_type.h" +#include "ngbe_mng.h" +#include "ngbe_mbx.h" +#include "ngbe_eeprom.h" +#include "ngbe_phy.h" #include "ngbe_hw.h" #endif /* _NGBE_H_ */ diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h index 5cb09bfcaa..940b448734 100644 --- a/drivers/net/ngbe/base/ngbe_dummy.h +++ b/drivers/net/ngbe/base/ngbe_dummy.h @@ -136,6 +136,16 @@ static inline s32 ngbe_mac_clear_vfta_dummy(struct ngbe_hw *TUP0) { return NGBE_ERR_OPS_DUMMY; } +static inline s32 ngbe_mac_set_vfta_dummy(struct ngbe_hw *TUP0, u32 TUP1, + u32 TUP2, bool TUP3, bool TUP4) +{ + return NGBE_ERR_OPS_DUMMY; +} +static inline s32 ngbe_mac_set_vlvf_dummy(struct ngbe_hw *TUP0, u32 TUP1, + u32 TUP2, bool TUP3, u32 *TUP4, u32 TUP5, bool TUP6) +{ + return NGBE_ERR_OPS_DUMMY; +} static inline void ngbe_mac_set_mac_anti_spoofing_dummy(struct ngbe_hw *TUP0, bool TUP1, int TUP2) { @@ -200,6 +210,28 @@ static inline s32 ngbe_phy_check_link_dummy(struct ngbe_hw *TUP0, u32 *TUP1, static inline void ngbe_mbx_init_params_dummy(struct ngbe_hw *TUP0) { } +static inline s32 ngbe_mbx_read_dummy(struct ngbe_hw *TUP0, u32 *TUP1, + u16 TUP2, u16 
TUP3) +{ + return NGBE_ERR_OPS_DUMMY; +} +static inline s32 ngbe_mbx_write_dummy(struct ngbe_hw *TUP0, u32 *TUP1, + u16 TUP2, u16 TUP3) +{ + return NGBE_ERR_OPS_DUMMY; +} +static inline s32 ngbe_mbx_check_for_msg_dummy(struct ngbe_hw *TUP0, u16 TUP1) +{ + return NGBE_ERR_OPS_DUMMY; +} +static inline s32 ngbe_mbx_check_for_ack_dummy(struct ngbe_hw *TUP0, u16 TUP1) +{ + return NGBE_ERR_OPS_DUMMY; +} +static inline s32 ngbe_mbx_check_for_rst_dummy(struct ngbe_hw *TUP0, u16 TUP1) +{ + return NGBE_ERR_OPS_DUMMY; +} static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw) { @@ -228,6 +260,8 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw) hw->mac.init_rx_addrs = ngbe_mac_init_rx_addrs_dummy; hw->mac.update_mc_addr_list = ngbe_mac_update_mc_addr_list_dummy; hw->mac.clear_vfta = ngbe_mac_clear_vfta_dummy; + hw->mac.set_vfta = ngbe_mac_set_vfta_dummy; + hw->mac.set_vlvf = ngbe_mac_set_vlvf_dummy; hw->mac.set_mac_anti_spoofing = ngbe_mac_set_mac_anti_spoofing_dummy; hw->mac.set_vlan_anti_spoofing = ngbe_mac_set_vlan_anti_spoofing_dummy; hw->mac.init_thermal_sensor_thresh = ngbe_mac_init_thermal_ssth_dummy; @@ -242,6 +276,11 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw) hw->phy.setup_link = ngbe_phy_setup_link_dummy; hw->phy.check_link = ngbe_phy_check_link_dummy; hw->mbx.init_params = ngbe_mbx_init_params_dummy; + hw->mbx.read = ngbe_mbx_read_dummy; + hw->mbx.write = ngbe_mbx_write_dummy; + hw->mbx.check_for_msg = ngbe_mbx_check_for_msg_dummy; + hw->mbx.check_for_ack = ngbe_mbx_check_for_ack_dummy; + hw->mbx.check_for_rst = ngbe_mbx_check_for_rst_dummy; } #endif /* _NGBE_TYPE_DUMMY_H_ */ diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c index 8b45a91f78..afde58a89e 100644 --- a/drivers/net/ngbe/base/ngbe_hw.c +++ b/drivers/net/ngbe/base/ngbe_hw.c @@ -914,6 +914,214 @@ s32 ngbe_init_uta_tables(struct ngbe_hw *hw) return 0; } +/** + * ngbe_find_vlvf_slot - find the vlanid or the first empty slot + * @hw: pointer to 
hardware structure + * @vlan: VLAN id to write to VLAN filter + * @vlvf_bypass: true to find vlanid only, false returns first empty slot if + * vlanid not found + * + * + * return the VLVF index where this VLAN id should be placed + * + **/ +s32 ngbe_find_vlvf_slot(struct ngbe_hw *hw, u32 vlan, bool vlvf_bypass) +{ + s32 regindex, first_empty_slot; + u32 bits; + + /* short cut the special case */ + if (vlan == 0) + return 0; + + /* if vlvf_bypass is set we don't want to use an empty slot, we + * will simply bypass the VLVF if there are no entries present in the + * VLVF that contain our VLAN + */ + first_empty_slot = vlvf_bypass ? NGBE_ERR_NO_SPACE : 0; + + /* add VLAN enable bit for comparison */ + vlan |= NGBE_PSRVLAN_EA; + + /* Search for the vlan id in the VLVF entries. Save off the first empty + * slot found along the way. + * + * pre-decrement loop covering (NGBE_NUM_POOL - 1) .. 1 + */ + for (regindex = NGBE_NUM_POOL; --regindex;) { + wr32(hw, NGBE_PSRVLANIDX, regindex); + bits = rd32(hw, NGBE_PSRVLAN); + if (bits == vlan) + return regindex; + if (!first_empty_slot && !bits) + first_empty_slot = regindex; + } + + /* If we are here then we didn't find the VLAN. Return first empty + * slot we found during our search, else error. + */ + if (!first_empty_slot) + DEBUGOUT("No space in VLVF.\n"); + + return first_empty_slot ? first_empty_slot : NGBE_ERR_NO_SPACE; +} + +/** + * ngbe_set_vfta - Set VLAN filter table + * @hw: pointer to hardware structure + * @vlan: VLAN id to write to VLAN filter + * @vind: VMDq output index that maps queue to VLAN id in VLVFB + * @vlan_on: boolean flag to turn on/off VLAN + * @vlvf_bypass: boolean flag indicating updating default pool is okay + * + * Turn on/off specified VLAN in the VLAN filter table. 
+ **/ +s32 ngbe_set_vfta(struct ngbe_hw *hw, u32 vlan, u32 vind, + bool vlan_on, bool vlvf_bypass) +{ + u32 regidx, vfta_delta, vfta; + s32 err; + + DEBUGFUNC("ngbe_set_vfta"); + + if (vlan > 4095 || vind > 63) + return NGBE_ERR_PARAM; + + /* + * this is a 2 part operation - first the VFTA, then the + * VLVF and VLVFB if VT Mode is set + * We don't write the VFTA until we know the VLVF part succeeded. + */ + + /* Part 1 + * The VFTA is a bitstring made up of 128 32-bit registers + * that enable the particular VLAN id, much like the MTA: + * bits[11-5]: which register + * bits[4-0]: which bit in the register + */ + regidx = vlan / 32; + vfta_delta = 1 << (vlan % 32); + vfta = rd32(hw, NGBE_VLANTBL(regidx)); + + /* + * vfta_delta represents the difference between the current value + * of vfta and the value we want in the register. Since the diff + * is an XOR mask we can just update the vfta using an XOR + */ + vfta_delta &= vlan_on ? ~vfta : vfta; + vfta ^= vfta_delta; + + /* Part 2 + * Call ngbe_set_vlvf to set VLVFB and VLVF + */ + err = ngbe_set_vlvf(hw, vlan, vind, vlan_on, &vfta_delta, + vfta, vlvf_bypass); + if (err != 0) { + if (vlvf_bypass) + goto vfta_update; + return err; + } + +vfta_update: + /* Update VFTA now that we are ready for traffic */ + if (vfta_delta) + wr32(hw, NGBE_VLANTBL(regidx), vfta); + + return 0; +} + +/** + * ngbe_set_vlvf - Set VLAN Pool Filter + * @hw: pointer to hardware structure + * @vlan: VLAN id to write to VLAN filter + * @vind: VMDq output index that maps queue to VLAN id in PSRVLANPLM + * @vlan_on: boolean flag to turn on/off VLAN in PSRVLAN + * @vfta_delta: pointer to the difference between the current value + * of PSRVLANPLM and the desired value + * @vfta: the desired value of the VFTA + * @vlvf_bypass: boolean flag indicating updating default pool is okay + * + * Turn on/off specified bit in VLVF table. 
+ **/ +s32 ngbe_set_vlvf(struct ngbe_hw *hw, u32 vlan, u32 vind, + bool vlan_on, u32 *vfta_delta, u32 vfta, + bool vlvf_bypass) +{ + u32 bits; + u32 portctl; + s32 vlvf_index; + + DEBUGFUNC("ngbe_set_vlvf"); + + if (vlan > 4095 || vind > 63) + return NGBE_ERR_PARAM; + + /* If VT Mode is set + * Either vlan_on + * make sure the vlan is in PSRVLAN + * set the vind bit in the matching PSRVLANPLM + * Or !vlan_on + * clear the pool bit and possibly the vind + */ + portctl = rd32(hw, NGBE_PORTCTL); + if (!(portctl & NGBE_PORTCTL_NUMVT_MASK)) + return 0; + + vlvf_index = ngbe_find_vlvf_slot(hw, vlan, vlvf_bypass); + if (vlvf_index < 0) + return vlvf_index; + + wr32(hw, NGBE_PSRVLANIDX, vlvf_index); + bits = rd32(hw, NGBE_PSRVLANPLM(vind / 32)); + + /* set the pool bit */ + bits |= 1 << (vind % 32); + if (vlan_on) + goto vlvf_update; + + /* clear the pool bit */ + bits ^= 1 << (vind % 32); + + if (!bits && + !rd32(hw, NGBE_PSRVLANPLM(vind / 32))) { + /* Clear PSRVLANPLM first, then disable PSRVLAN. Otherwise + * we run the risk of stray packets leaking into + * the PF via the default pool + */ + if (*vfta_delta) + wr32(hw, NGBE_PSRVLANPLM(vlan / 32), vfta); + + /* disable VLVF and clear remaining bit from pool */ + wr32(hw, NGBE_PSRVLAN, 0); + wr32(hw, NGBE_PSRVLANPLM(vind / 32), 0); + + return 0; + } + + /* If there are still bits set in the PSRVLANPLM registers + * for the VLAN ID indicated we need to see if the + * caller is requesting that we clear the PSRVLANPLM entry bit. + * If the caller has requested that we clear the PSRVLANPLM + * entry bit but there are still pools/VFs using this VLAN + * ID entry then ignore the request. We're not worried + * about the case where we're turning the PSRVLANPLM VLAN ID + * entry bit on, only when requested to turn it off as + * there may be multiple pools and/or VFs using the + * VLAN ID entry. In that case we cannot clear the + * PSRVLANPLM bit until all pools/VFs using that VLAN ID have also + * been cleared. 
This will be indicated by "bits" being + * zero. + */ + *vfta_delta = 0; + +vlvf_update: + /* record pool change and enable VLAN ID if not already enabled */ + wr32(hw, NGBE_PSRVLANPLM(vind / 32), bits); + wr32(hw, NGBE_PSRVLAN, NGBE_PSRVLAN_EA | vlan); + + return 0; +} + /** * ngbe_clear_vfta - Clear VLAN filter table * @hw: pointer to hardware structure @@ -1306,6 +1514,8 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw) mac->update_mc_addr_list = ngbe_update_mc_addr_list; mac->set_vmdq = ngbe_set_vmdq; mac->clear_vmdq = ngbe_clear_vmdq; + mac->set_vfta = ngbe_set_vfta; + mac->set_vlvf = ngbe_set_vlvf; mac->clear_vfta = ngbe_clear_vfta; mac->set_mac_anti_spoofing = ngbe_set_mac_anti_spoofing; mac->set_vlan_anti_spoofing = ngbe_set_vlan_anti_spoofing; @@ -1320,6 +1530,11 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw) mac->check_overtemp = ngbe_mac_check_overtemp; mbx->init_params = ngbe_init_mbx_params_pf; + mbx->read = ngbe_read_mbx_pf; + mbx->write = ngbe_write_mbx_pf; + mbx->check_for_msg = ngbe_check_for_msg_pf; + mbx->check_for_ack = ngbe_check_for_ack_pf; + mbx->check_for_rst = ngbe_check_for_rst_pf; /* EEPROM */ rom->init_params = ngbe_init_eeprom_params; diff --git a/drivers/net/ngbe/base/ngbe_hw.h b/drivers/net/ngbe/base/ngbe_hw.h index a27bd3e650..83ad646dde 100644 --- a/drivers/net/ngbe/base/ngbe_hw.h +++ b/drivers/net/ngbe/base/ngbe_hw.h @@ -49,8 +49,16 @@ void ngbe_release_swfw_sync(struct ngbe_hw *hw, u32 mask); s32 ngbe_set_vmdq(struct ngbe_hw *hw, u32 rar, u32 vmdq); s32 ngbe_clear_vmdq(struct ngbe_hw *hw, u32 rar, u32 vmdq); s32 ngbe_init_uta_tables(struct ngbe_hw *hw); +s32 ngbe_set_vfta(struct ngbe_hw *hw, u32 vlan, + u32 vind, bool vlan_on, bool vlvf_bypass); +s32 ngbe_set_vlvf(struct ngbe_hw *hw, u32 vlan, u32 vind, + bool vlan_on, u32 *vfta_delta, u32 vfta, + bool vlvf_bypass); s32 ngbe_clear_vfta(struct ngbe_hw *hw); +s32 ngbe_find_vlvf_slot(struct ngbe_hw *hw, u32 vlan, bool vlvf_bypass); +void ngbe_set_mac_anti_spoofing(struct ngbe_hw *hw, bool 
enable, int vf); +void ngbe_set_vlan_anti_spoofing(struct ngbe_hw *hw, bool enable, int vf); s32 ngbe_init_thermal_sensor_thresh(struct ngbe_hw *hw); s32 ngbe_mac_check_overtemp(struct ngbe_hw *hw); void ngbe_disable_rx(struct ngbe_hw *hw); diff --git a/drivers/net/ngbe/base/ngbe_mbx.c b/drivers/net/ngbe/base/ngbe_mbx.c index 1ac9531ceb..764ae81319 100644 --- a/drivers/net/ngbe/base/ngbe_mbx.c +++ b/drivers/net/ngbe/base/ngbe_mbx.c @@ -7,6 +7,303 @@ #include "ngbe_mbx.h" +/** + * ngbe_read_mbx - Reads a message from the mailbox + * @hw: pointer to the HW structure + * @msg: The message buffer + * @size: Length of buffer + * @mbx_id: id of mailbox to read + * + * returns 0 if it successfully read message from buffer + **/ +s32 ngbe_read_mbx(struct ngbe_hw *hw, u32 *msg, u16 size, u16 mbx_id) +{ + struct ngbe_mbx_info *mbx = &hw->mbx; + s32 ret_val = NGBE_ERR_MBX; + + DEBUGFUNC("ngbe_read_mbx"); + + /* limit read to size of mailbox */ + if (size > mbx->size) + size = mbx->size; + + if (mbx->read) + ret_val = mbx->read(hw, msg, size, mbx_id); + + return ret_val; +} + +/** + * ngbe_write_mbx - Write a message to the mailbox + * @hw: pointer to the HW structure + * @msg: The message buffer + * @size: Length of buffer + * @mbx_id: id of mailbox to write + * + * returns 0 if it successfully copied message into the buffer + **/ +s32 ngbe_write_mbx(struct ngbe_hw *hw, u32 *msg, u16 size, u16 mbx_id) +{ + struct ngbe_mbx_info *mbx = &hw->mbx; + s32 ret_val = 0; + + DEBUGFUNC("ngbe_write_mbx"); + + if (size > mbx->size) { + ret_val = NGBE_ERR_MBX; + DEBUGOUT("Invalid mailbox message size %d", size); + } else if (mbx->write) { + ret_val = mbx->write(hw, msg, size, mbx_id); + } + + return ret_val; +} + +/** + * ngbe_check_for_msg - checks to see if someone sent us mail + * @hw: pointer to the HW structure + * @mbx_id: id of mailbox to check + * + * returns 0 if the Status bit was found or else ERR_MBX + **/ +s32 ngbe_check_for_msg(struct ngbe_hw *hw, u16 mbx_id) +{ + struct 
ngbe_mbx_info *mbx = &hw->mbx; + s32 ret_val = NGBE_ERR_MBX; + + DEBUGFUNC("ngbe_check_for_msg"); + + if (mbx->check_for_msg) + ret_val = mbx->check_for_msg(hw, mbx_id); + + return ret_val; +} + +/** + * ngbe_check_for_ack - checks to see if someone sent us ACK + * @hw: pointer to the HW structure + * @mbx_id: id of mailbox to check + * + * returns 0 if the Status bit was found or else ERR_MBX + **/ +s32 ngbe_check_for_ack(struct ngbe_hw *hw, u16 mbx_id) +{ + struct ngbe_mbx_info *mbx = &hw->mbx; + s32 ret_val = NGBE_ERR_MBX; + + DEBUGFUNC("ngbe_check_for_ack"); + + if (mbx->check_for_ack) + ret_val = mbx->check_for_ack(hw, mbx_id); + + return ret_val; +} + +/** + * ngbe_check_for_rst - checks to see if other side has reset + * @hw: pointer to the HW structure + * @mbx_id: id of mailbox to check + * + * returns 0 if the Status bit was found or else ERR_MBX + **/ +s32 ngbe_check_for_rst(struct ngbe_hw *hw, u16 mbx_id) +{ + struct ngbe_mbx_info *mbx = &hw->mbx; + s32 ret_val = NGBE_ERR_MBX; + + DEBUGFUNC("ngbe_check_for_rst"); + + if (mbx->check_for_rst) + ret_val = mbx->check_for_rst(hw, mbx_id); + + return ret_val; +} + +STATIC s32 ngbe_check_for_bit_pf(struct ngbe_hw *hw, u32 mask) +{ + u32 mbvficr = rd32(hw, NGBE_MBVFICR); + s32 ret_val = NGBE_ERR_MBX; + + if (mbvficr & mask) { + ret_val = 0; + wr32(hw, NGBE_MBVFICR, mask); + } + + return ret_val; +} + +/** + * ngbe_check_for_msg_pf - checks to see if the VF has sent mail + * @hw: pointer to the HW structure + * @vf_number: the VF index + * + * returns 0 if the VF has set the Status bit or else ERR_MBX + **/ +s32 ngbe_check_for_msg_pf(struct ngbe_hw *hw, u16 vf_number) +{ + s32 ret_val = NGBE_ERR_MBX; + u32 vf_bit = vf_number; + + DEBUGFUNC("ngbe_check_for_msg_pf"); + + if (!ngbe_check_for_bit_pf(hw, NGBE_MBVFICR_VFREQ_VF1 << vf_bit)) { + ret_val = 0; + hw->mbx.stats.reqs++; + } + + return ret_val; +} + +/** + * ngbe_check_for_ack_pf - checks to see if the VF has ACKed + * @hw: pointer to the HW structure + * 
@vf_number: the VF index + * + * returns 0 if the VF has set the Status bit or else ERR_MBX + **/ +s32 ngbe_check_for_ack_pf(struct ngbe_hw *hw, u16 vf_number) +{ + s32 ret_val = NGBE_ERR_MBX; + u32 vf_bit = vf_number; + + DEBUGFUNC("ngbe_check_for_ack_pf"); + + if (!ngbe_check_for_bit_pf(hw, NGBE_MBVFICR_VFACK_VF1 << vf_bit)) { + ret_val = 0; + hw->mbx.stats.acks++; + } + + return ret_val; +} + +/** + * ngbe_check_for_rst_pf - checks to see if the VF has reset + * @hw: pointer to the HW structure + * @vf_number: the VF index + * + * returns 0 if the VF has set the Status bit or else ERR_MBX + **/ +s32 ngbe_check_for_rst_pf(struct ngbe_hw *hw, u16 vf_number) +{ + u32 vflre = 0; + s32 ret_val = NGBE_ERR_MBX; + + DEBUGFUNC("ngbe_check_for_rst_pf"); + + vflre = rd32(hw, NGBE_FLRVFE); + if (vflre & (1 << vf_number)) { + ret_val = 0; + wr32(hw, NGBE_FLRVFEC, (1 << vf_number)); + hw->mbx.stats.rsts++; + } + + return ret_val; +} + +/** + * ngbe_obtain_mbx_lock_pf - obtain mailbox lock + * @hw: pointer to the HW structure + * @vf_number: the VF index + * + * return 0 if we obtained the mailbox lock + **/ +STATIC s32 ngbe_obtain_mbx_lock_pf(struct ngbe_hw *hw, u16 vf_number) +{ + s32 ret_val = NGBE_ERR_MBX; + u32 p2v_mailbox; + + DEBUGFUNC("ngbe_obtain_mbx_lock_pf"); + + /* Take ownership of the buffer */ + wr32(hw, NGBE_MBCTL(vf_number), NGBE_MBCTL_PFU); + + /* reserve mailbox for vf use */ + p2v_mailbox = rd32(hw, NGBE_MBCTL(vf_number)); + if (p2v_mailbox & NGBE_MBCTL_PFU) + ret_val = 0; + else + DEBUGOUT("Failed to obtain mailbox lock for VF%d", vf_number); + + + return ret_val; +} + +/** + * ngbe_write_mbx_pf - Places a message in the mailbox + * @hw: pointer to the HW structure + * @msg: The message buffer + * @size: Length of buffer + * @vf_number: the VF index + * + * returns 0 if it successfully copied message into the buffer + **/ +s32 ngbe_write_mbx_pf(struct ngbe_hw *hw, u32 *msg, u16 size, u16 vf_number) +{ + s32 ret_val; + u16 i; + + 
DEBUGFUNC("ngbe_write_mbx_pf"); + + /* lock the mailbox to prevent pf/vf race condition */ + ret_val = ngbe_obtain_mbx_lock_pf(hw, vf_number); + if (ret_val) + goto out_no_write; + + /* flush msg and acks as we are overwriting the message buffer */ + ngbe_check_for_msg_pf(hw, vf_number); + ngbe_check_for_ack_pf(hw, vf_number); + + /* copy the caller specified message to the mailbox memory buffer */ + for (i = 0; i < size; i++) + wr32a(hw, NGBE_MBMEM(vf_number), i, msg[i]); + + /* Interrupt VF to tell it a message has been sent and release buffer*/ + wr32(hw, NGBE_MBCTL(vf_number), NGBE_MBCTL_STS); + + /* update stats */ + hw->mbx.stats.msgs_tx++; + +out_no_write: + return ret_val; +} + +/** + * ngbe_read_mbx_pf - Read a message from the mailbox + * @hw: pointer to the HW structure + * @msg: The message buffer + * @size: Length of buffer + * @vf_number: the VF index + * + * This function copies a message from the mailbox buffer to the caller's + * memory buffer. The presumption is that the caller knows that there was + * a message due to a VF request so no polling for message is needed. 
+ **/ +s32 ngbe_read_mbx_pf(struct ngbe_hw *hw, u32 *msg, u16 size, u16 vf_number) +{ + s32 ret_val; + u16 i; + + DEBUGFUNC("ngbe_read_mbx_pf"); + + /* lock the mailbox to prevent pf/vf race condition */ + ret_val = ngbe_obtain_mbx_lock_pf(hw, vf_number); + if (ret_val) + goto out_no_read; + + /* copy the message to the mailbox memory buffer */ + for (i = 0; i < size; i++) + msg[i] = rd32a(hw, NGBE_MBMEM(vf_number), i); + + /* Acknowledge the message and release buffer */ + wr32(hw, NGBE_MBCTL(vf_number), NGBE_MBCTL_ACK); + + /* update stats */ + hw->mbx.stats.msgs_rx++; + +out_no_read: + return ret_val; +} + /** * ngbe_init_mbx_params_pf - set initial values for pf mailbox * @hw: pointer to the HW structure diff --git a/drivers/net/ngbe/base/ngbe_mbx.h b/drivers/net/ngbe/base/ngbe_mbx.h index d280945baf..d47da2718c 100644 --- a/drivers/net/ngbe/base/ngbe_mbx.h +++ b/drivers/net/ngbe/base/ngbe_mbx.h @@ -6,6 +6,84 @@ #ifndef _NGBE_MBX_H_ #define _NGBE_MBX_H_ +#define NGBE_ERR_MBX -100 + +/* If it's a NGBE_VF_* msg then it originates in the VF and is sent to the + * PF. The reverse is true if it is NGBE_PF_*. 
+ * Message ACK's are the value or'd with 0xF0000000 + */ +/* Messages below or'd with this are the ACK */ +#define NGBE_VT_MSGTYPE_ACK 0x80000000 +/* Messages below or'd with this are the NACK */ +#define NGBE_VT_MSGTYPE_NACK 0x40000000 +/* Indicates that VF is still clear to send requests */ +#define NGBE_VT_MSGTYPE_CTS 0x20000000 + +#define NGBE_VT_MSGINFO_SHIFT 16 +/* bits 23:16 are used for extra info for certain messages */ +#define NGBE_VT_MSGINFO_MASK (0xFF << NGBE_VT_MSGINFO_SHIFT) + +/* + * each element denotes a version of the API; existing numbers may not + * change; any additions must go at the end + */ +enum ngbe_pfvf_api_rev { + ngbe_mbox_api_null, + ngbe_mbox_api_10, /* API version 1.0, linux/freebsd VF driver */ + ngbe_mbox_api_11, /* API version 1.1, linux/freebsd VF driver */ + ngbe_mbox_api_12, /* API version 1.2, linux/freebsd VF driver */ + ngbe_mbox_api_13, /* API version 1.3, linux/freebsd VF driver */ + ngbe_mbox_api_20, /* API version 2.0, solaris Phase1 VF driver */ + /* This value should always be last */ + ngbe_mbox_api_unknown, /* indicates that API version is not known */ +}; + +/* mailbox API, legacy requests */ +#define NGBE_VF_RESET 0x01 /* VF requests reset */ +#define NGBE_VF_SET_MAC_ADDR 0x02 /* VF requests PF to set MAC addr */ +#define NGBE_VF_SET_MULTICAST 0x03 /* VF requests PF to set MC addr */ +#define NGBE_VF_SET_VLAN 0x04 /* VF requests PF to set VLAN */ + +/* mailbox API, version 1.0 VF requests */ +#define NGBE_VF_SET_LPE 0x05 /* VF requests PF to set VMOLR.LPE */ +#define NGBE_VF_SET_MACVLAN 0x06 /* VF requests PF for unicast filter */ +#define NGBE_VF_API_NEGOTIATE 0x08 /* negotiate API version */ + +/* mailbox API, version 1.1 VF requests */ +#define NGBE_VF_GET_QUEUES 0x09 /* get queue configuration */ + +/* mailbox API, version 1.2 VF requests */ +#define NGBE_VF_GET_RETA 0x0a /* VF request for RETA */ +#define NGBE_VF_GET_RSS_KEY 0x0b /* get RSS key */ +#define NGBE_VF_UPDATE_XCAST_MODE 0x0c + +/* mode choices 
for NGBE_VF_UPDATE_XCAST_MODE */ +enum ngbevf_xcast_modes { + NGBEVF_XCAST_MODE_NONE = 0, + NGBEVF_XCAST_MODE_MULTI, + NGBEVF_XCAST_MODE_ALLMULTI, + NGBEVF_XCAST_MODE_PROMISC, +}; + +/* GET_QUEUES return data indices within the mailbox */ +#define NGBE_VF_TX_QUEUES 1 /* number of Tx queues supported */ +#define NGBE_VF_RX_QUEUES 2 /* number of Rx queues supported */ +#define NGBE_VF_TRANS_VLAN 3 /* Indication of port vlan */ +#define NGBE_VF_DEF_QUEUE 4 /* Default queue offset */ + +/* length of permanent address message returned from PF */ +#define NGBE_VF_PERMADDR_MSG_LEN 4 +s32 ngbe_read_mbx(struct ngbe_hw *hw, u32 *msg, u16 size, u16 mbx_id); +s32 ngbe_write_mbx(struct ngbe_hw *hw, u32 *msg, u16 size, u16 mbx_id); +s32 ngbe_check_for_msg(struct ngbe_hw *hw, u16 mbx_id); +s32 ngbe_check_for_ack(struct ngbe_hw *hw, u16 mbx_id); +s32 ngbe_check_for_rst(struct ngbe_hw *hw, u16 mbx_id); void ngbe_init_mbx_params_pf(struct ngbe_hw *hw); +s32 ngbe_read_mbx_pf(struct ngbe_hw *hw, u32 *msg, u16 size, u16 vf_number); +s32 ngbe_write_mbx_pf(struct ngbe_hw *hw, u32 *msg, u16 size, u16 vf_number); +s32 ngbe_check_for_msg_pf(struct ngbe_hw *hw, u16 vf_number); +s32 ngbe_check_for_ack_pf(struct ngbe_hw *hw, u16 vf_number); +s32 ngbe_check_for_rst_pf(struct ngbe_hw *hw, u16 vf_number); + #endif /* _NGBE_MBX_H_ */ diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h index bc95fcf609..7a85f82abd 100644 --- a/drivers/net/ngbe/base/ngbe_type.h +++ b/drivers/net/ngbe/base/ngbe_type.h @@ -254,6 +254,11 @@ struct ngbe_mac_info { u32 mc_addr_count, ngbe_mc_addr_itr func, bool clear); s32 (*clear_vfta)(struct ngbe_hw *hw); + s32 (*set_vfta)(struct ngbe_hw *hw, u32 vlan, + u32 vind, bool vlan_on, bool vlvf_bypass); + s32 (*set_vlvf)(struct ngbe_hw *hw, u32 vlan, u32 vind, + bool vlan_on, u32 *vfta_delta, u32 vfta, + bool vlvf_bypass); void (*set_mac_anti_spoofing)(struct ngbe_hw *hw, bool enable, int vf); void (*set_vlan_anti_spoofing)(struct ngbe_hw *hw, 
bool enable, int vf); @@ -319,6 +324,11 @@ struct ngbe_mbx_stats { struct ngbe_mbx_info { void (*init_params)(struct ngbe_hw *hw); + s32 (*read)(struct ngbe_hw *hw, u32 *msg, u16 size, u16 vf_number); + s32 (*write)(struct ngbe_hw *hw, u32 *msg, u16 size, u16 vf_number); + s32 (*check_for_msg)(struct ngbe_hw *hw, u16 mbx_id); + s32 (*check_for_ack)(struct ngbe_hw *hw, u16 mbx_id); + s32 (*check_for_rst)(struct ngbe_hw *hw, u16 mbx_id); struct ngbe_mbx_stats stats; u32 timeout; diff --git a/drivers/net/ngbe/meson.build b/drivers/net/ngbe/meson.build index 8b5195aab3..b276ec3341 100644 --- a/drivers/net/ngbe/meson.build +++ b/drivers/net/ngbe/meson.build @@ -20,3 +20,5 @@ sources = files( deps += ['hash'] includes += include_directories('base') + +install_headers('rte_pmd_ngbe.h') diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c index 70e471b2c2..52d7b6376d 100644 --- a/drivers/net/ngbe/ngbe_ethdev.c +++ b/drivers/net/ngbe/ngbe_ethdev.c @@ -2123,6 +2123,11 @@ ngbe_dev_interrupt_action(struct rte_eth_dev *dev) PMD_DRV_LOG(DEBUG, "intr action type %d", intr->flags); + if (intr->flags & NGBE_FLAG_MAILBOX) { + ngbe_pf_mbx_process(dev); + intr->flags &= ~NGBE_FLAG_MAILBOX; + } + if (intr->flags & NGBE_FLAG_NEED_LINK_UPDATE) { struct rte_eth_link link; @@ -2183,6 +2188,8 @@ ngbe_dev_interrupt_delayed_handler(void *param) ngbe_disable_intr(hw); eicr = ((u32 *)hw->isb_mem)[NGBE_ISB_MISC]; + if (eicr & NGBE_ICRMISC_VFMBX) + ngbe_pf_mbx_process(dev); if (intr->flags & NGBE_FLAG_NEED_LINK_UPDATE) { ngbe_dev_link_update(dev, 0); diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h index f5a1363d10..26911cc7d2 100644 --- a/drivers/net/ngbe/ngbe_ethdev.h +++ b/drivers/net/ngbe/ngbe_ethdev.h @@ -71,6 +71,11 @@ struct ngbe_hwstrip { uint32_t bitmap[NGBE_HWSTRIP_BITMAP_SIZE]; }; +/* + * VF data which used by PF host only + */ +#define NGBE_MAX_VF_MC_ENTRIES 30 + struct ngbe_uta_info { uint8_t uc_filter_type; uint16_t uta_in_use; @@ 
-79,8 +84,14 @@ struct ngbe_uta_info { struct ngbe_vf_info { uint8_t vf_mac_addresses[RTE_ETHER_ADDR_LEN]; + uint16_t vf_mc_hashes[NGBE_MAX_VF_MC_ENTRIES]; + uint16_t num_vf_mc_hashes; bool clear_to_send; + uint16_t vlan_count; + uint8_t api_version; uint16_t switch_domain_id; + uint16_t xcast_mode; + uint16_t mac_count; }; /* @@ -233,6 +244,8 @@ int ngbe_pf_host_init(struct rte_eth_dev *eth_dev); void ngbe_pf_host_uninit(struct rte_eth_dev *eth_dev); +void ngbe_pf_mbx_process(struct rte_eth_dev *eth_dev); + int ngbe_pf_host_configure(struct rte_eth_dev *eth_dev); #define NGBE_LINK_DOWN_CHECK_TIMEOUT 4000 /* ms */ diff --git a/drivers/net/ngbe/ngbe_pf.c b/drivers/net/ngbe/ngbe_pf.c index 2f9dfc4284..550f5d556b 100644 --- a/drivers/net/ngbe/ngbe_pf.c +++ b/drivers/net/ngbe/ngbe_pf.c @@ -10,8 +10,11 @@ #include "base/ngbe.h" #include "ngbe_ethdev.h" +#include "rte_pmd_ngbe.h" #define NGBE_MAX_VFTA (128) +#define NGBE_VF_MSG_SIZE_DEFAULT 1 +#define NGBE_VF_GET_QUEUE_MSG_SIZE 5 static inline uint16_t dev_num_vf(struct rte_eth_dev *eth_dev) @@ -39,6 +42,16 @@ int ngbe_vf_perm_addr_gen(struct rte_eth_dev *dev, uint16_t vf_num) return 0; } +static inline int +ngbe_mb_intr_setup(struct rte_eth_dev *dev) +{ + struct ngbe_interrupt *intr = ngbe_dev_intr(dev); + + intr->mask_misc |= NGBE_ICRMISC_VFMBX; + + return 0; +} + int ngbe_pf_host_init(struct rte_eth_dev *eth_dev) { struct ngbe_vf_info **vfinfo = NGBE_DEV_VFDATA(eth_dev); @@ -85,6 +98,9 @@ int ngbe_pf_host_init(struct rte_eth_dev *eth_dev) /* init_mailbox_params */ hw->mbx.init_params(hw); + /* set mb interrupt mask */ + ngbe_mb_intr_setup(eth_dev); + return ret; } @@ -194,3 +210,551 @@ int ngbe_pf_host_configure(struct rte_eth_dev *eth_dev) return 0; } +static void +ngbe_set_rx_mode(struct rte_eth_dev *eth_dev) +{ + struct rte_eth_dev_data *dev_data = eth_dev->data; + struct ngbe_hw *hw = ngbe_dev_hw(eth_dev); + u32 fctrl, vmolr; + uint16_t vfn = dev_num_vf(eth_dev); + + /* disable store-bad-packets */ + wr32m(hw, 
NGBE_SECRXCTL, NGBE_SECRXCTL_SAVEBAD, 0); + + /* Check for Promiscuous and All Multicast modes */ + fctrl = rd32m(hw, NGBE_PSRCTL, + ~(NGBE_PSRCTL_UCP | NGBE_PSRCTL_MCP)); + fctrl |= NGBE_PSRCTL_BCA | + NGBE_PSRCTL_MCHFENA; + + vmolr = rd32m(hw, NGBE_POOLETHCTL(vfn), + ~(NGBE_POOLETHCTL_UCP | + NGBE_POOLETHCTL_MCP | + NGBE_POOLETHCTL_UCHA | + NGBE_POOLETHCTL_MCHA)); + vmolr |= NGBE_POOLETHCTL_BCA | + NGBE_POOLETHCTL_UTA | + NGBE_POOLETHCTL_VLA; + + if (dev_data->promiscuous) { + fctrl |= NGBE_PSRCTL_UCP | + NGBE_PSRCTL_MCP; + /* pf don't want packets routing to vf, so clear UPE */ + vmolr |= NGBE_POOLETHCTL_MCP; + } else if (dev_data->all_multicast) { + fctrl |= NGBE_PSRCTL_MCP; + vmolr |= NGBE_POOLETHCTL_MCP; + } else { + vmolr |= NGBE_POOLETHCTL_UCHA; + vmolr |= NGBE_POOLETHCTL_MCHA; + } + + wr32(hw, NGBE_POOLETHCTL(vfn), vmolr); + + wr32(hw, NGBE_PSRCTL, fctrl); + + ngbe_vlan_hw_strip_config(eth_dev); +} + +static inline void +ngbe_vf_reset_event(struct rte_eth_dev *eth_dev, uint16_t vf) +{ + struct ngbe_hw *hw = ngbe_dev_hw(eth_dev); + struct ngbe_vf_info *vfinfo = *(NGBE_DEV_VFDATA(eth_dev)); + int rar_entry = hw->mac.num_rar_entries - (vf + 1); + uint32_t vmolr = rd32(hw, NGBE_POOLETHCTL(vf)); + + vmolr |= (NGBE_POOLETHCTL_UCHA | + NGBE_POOLETHCTL_BCA | NGBE_POOLETHCTL_UTA); + wr32(hw, NGBE_POOLETHCTL(vf), vmolr); + + wr32(hw, NGBE_POOLTAG(vf), 0); + + /* reset multicast table array for vf */ + vfinfo[vf].num_vf_mc_hashes = 0; + + /* reset rx mode */ + ngbe_set_rx_mode(eth_dev); + + hw->mac.clear_rar(hw, rar_entry); +} + +static inline void +ngbe_vf_reset_msg(struct rte_eth_dev *eth_dev, uint16_t vf) +{ + struct ngbe_hw *hw = ngbe_dev_hw(eth_dev); + uint32_t reg; + uint32_t vf_shift; + const uint8_t VFRE_SHIFT = 5; /* VFRE 32 bits per slot */ + const uint8_t VFRE_MASK = (uint8_t)((1U << VFRE_SHIFT) - 1); + uint8_t nb_q_per_pool; + int i; + + vf_shift = vf & VFRE_MASK; + + /* enable transmit for vf */ + reg = rd32(hw, NGBE_POOLTXENA(0)); + reg |= (1 << 
vf_shift); + wr32(hw, NGBE_POOLTXENA(0), reg); + + /* enable all queue drop for IOV */ + nb_q_per_pool = RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool; + for (i = vf * nb_q_per_pool; i < (vf + 1) * nb_q_per_pool; i++) { + ngbe_flush(hw); + reg = 1 << (i % 32); + wr32m(hw, NGBE_QPRXDROP, reg, reg); + } + + /* enable receive for vf */ + reg = rd32(hw, NGBE_POOLRXENA(0)); + reg |= (reg | (1 << vf_shift)); + wr32(hw, NGBE_POOLRXENA(0), reg); + + ngbe_vf_reset_event(eth_dev, vf); +} + +static int +ngbe_disable_vf_mc_promisc(struct rte_eth_dev *eth_dev, uint32_t vf) +{ + struct ngbe_hw *hw = ngbe_dev_hw(eth_dev); + uint32_t vmolr; + + vmolr = rd32(hw, NGBE_POOLETHCTL(vf)); + + PMD_DRV_LOG(INFO, "VF %u: disabling multicast promiscuous\n", vf); + + vmolr &= ~NGBE_POOLETHCTL_MCP; + + wr32(hw, NGBE_POOLETHCTL(vf), vmolr); + + return 0; +} + +static int +ngbe_vf_reset(struct rte_eth_dev *eth_dev, uint16_t vf, uint32_t *msgbuf) +{ + struct ngbe_hw *hw = ngbe_dev_hw(eth_dev); + struct ngbe_vf_info *vfinfo = *(NGBE_DEV_VFDATA(eth_dev)); + unsigned char *vf_mac = vfinfo[vf].vf_mac_addresses; + int rar_entry = hw->mac.num_rar_entries - (vf + 1); + uint8_t *new_mac = (uint8_t *)(&msgbuf[1]); + + ngbe_vf_reset_msg(eth_dev, vf); + + hw->mac.set_rar(hw, rar_entry, vf_mac, vf, true); + + /* Disable multicast promiscuous at reset */ + ngbe_disable_vf_mc_promisc(eth_dev, vf); + + /* reply to reset with ack and vf mac address */ + msgbuf[0] = NGBE_VF_RESET | NGBE_VT_MSGTYPE_ACK; + rte_memcpy(new_mac, vf_mac, RTE_ETHER_ADDR_LEN); + /* + * Piggyback the multicast filter type so VF can compute the + * correct vectors + */ + msgbuf[3] = hw->mac.mc_filter_type; + ngbe_write_mbx(hw, msgbuf, NGBE_VF_PERMADDR_MSG_LEN, vf); + + return 0; +} + +static int +ngbe_vf_set_mac_addr(struct rte_eth_dev *eth_dev, + uint32_t vf, uint32_t *msgbuf) +{ + struct ngbe_hw *hw = ngbe_dev_hw(eth_dev); + struct ngbe_vf_info *vfinfo = *(NGBE_DEV_VFDATA(eth_dev)); + int rar_entry = hw->mac.num_rar_entries - (vf + 1); + 
uint8_t *new_mac = (uint8_t *)(&msgbuf[1]); + struct rte_ether_addr *ea = (struct rte_ether_addr *)new_mac; + + if (rte_is_valid_assigned_ether_addr(ea)) { + rte_memcpy(vfinfo[vf].vf_mac_addresses, new_mac, 6); + return hw->mac.set_rar(hw, rar_entry, new_mac, vf, true); + } + return -1; +} + +static int +ngbe_vf_set_multicast(struct rte_eth_dev *eth_dev, + uint32_t vf, uint32_t *msgbuf) +{ + struct ngbe_hw *hw = ngbe_dev_hw(eth_dev); + struct ngbe_vf_info *vfinfo = *(NGBE_DEV_VFDATA(eth_dev)); + int nb_entries = (msgbuf[0] & NGBE_VT_MSGINFO_MASK) >> + NGBE_VT_MSGINFO_SHIFT; + uint16_t *hash_list = (uint16_t *)&msgbuf[1]; + uint32_t mta_idx; + uint32_t mta_shift; + const uint32_t NGBE_MTA_INDEX_MASK = 0x7F; + const uint32_t NGBE_MTA_BIT_SHIFT = 5; + const uint32_t NGBE_MTA_BIT_MASK = (0x1 << NGBE_MTA_BIT_SHIFT) - 1; + uint32_t reg_val; + int i; + u32 vmolr = rd32(hw, NGBE_POOLETHCTL(vf)); + + /* Disable multicast promiscuous first */ + ngbe_disable_vf_mc_promisc(eth_dev, vf); + + /* only so many hash values supported */ + nb_entries = RTE_MIN(nb_entries, NGBE_MAX_VF_MC_ENTRIES); + + /* store the mc entries */ + vfinfo->num_vf_mc_hashes = (uint16_t)nb_entries; + for (i = 0; i < nb_entries; i++) + vfinfo->vf_mc_hashes[i] = hash_list[i]; + + if (nb_entries == 0) { + vmolr &= ~NGBE_POOLETHCTL_MCHA; + wr32(hw, NGBE_POOLETHCTL(vf), vmolr); + return 0; + } + + for (i = 0; i < vfinfo->num_vf_mc_hashes; i++) { + mta_idx = (vfinfo->vf_mc_hashes[i] >> NGBE_MTA_BIT_SHIFT) + & NGBE_MTA_INDEX_MASK; + mta_shift = vfinfo->vf_mc_hashes[i] & NGBE_MTA_BIT_MASK; + reg_val = rd32(hw, NGBE_MCADDRTBL(mta_idx)); + reg_val |= (1 << mta_shift); + wr32(hw, NGBE_MCADDRTBL(mta_idx), reg_val); + } + + vmolr |= NGBE_POOLETHCTL_MCHA; + wr32(hw, NGBE_POOLETHCTL(vf), vmolr); + + return 0; +} + +static int +ngbe_vf_set_vlan(struct rte_eth_dev *eth_dev, uint32_t vf, uint32_t *msgbuf) +{ + int add, vid; + struct ngbe_hw *hw = ngbe_dev_hw(eth_dev); + struct ngbe_vf_info *vfinfo = 
*(NGBE_DEV_VFDATA(eth_dev)); + + add = (msgbuf[0] & NGBE_VT_MSGINFO_MASK) + >> NGBE_VT_MSGINFO_SHIFT; + vid = NGBE_PSRVLAN_VID(msgbuf[1]); + + if (add) + vfinfo[vf].vlan_count++; + else if (vfinfo[vf].vlan_count) + vfinfo[vf].vlan_count--; + return hw->mac.set_vfta(hw, vid, vf, (bool)add, false); +} + +static int +ngbe_set_vf_lpe(struct rte_eth_dev *eth_dev, + __rte_unused uint32_t vf, uint32_t *msgbuf) +{ + struct ngbe_hw *hw = ngbe_dev_hw(eth_dev); + uint32_t max_frame = msgbuf[1]; + uint32_t max_frs; + + if (max_frame < RTE_ETHER_MIN_LEN || + max_frame > RTE_ETHER_MAX_JUMBO_FRAME_LEN) + return -1; + + max_frs = rd32m(hw, NGBE_FRMSZ, NGBE_FRMSZ_MAX_MASK); + if (max_frs < max_frame) { + wr32m(hw, NGBE_FRMSZ, NGBE_FRMSZ_MAX_MASK, + NGBE_FRMSZ_MAX(max_frame)); + } + + return 0; +} + +static int +ngbe_negotiate_vf_api(struct rte_eth_dev *eth_dev, + uint32_t vf, uint32_t *msgbuf) +{ + uint32_t api_version = msgbuf[1]; + struct ngbe_vf_info *vfinfo = *NGBE_DEV_VFDATA(eth_dev); + + switch (api_version) { + case ngbe_mbox_api_10: + case ngbe_mbox_api_11: + case ngbe_mbox_api_12: + case ngbe_mbox_api_13: + vfinfo[vf].api_version = (uint8_t)api_version; + return 0; + default: + break; + } + + PMD_DRV_LOG(ERR, "Negotiate invalid api version %u from VF %d\n", + api_version, vf); + + return -1; +} + +static int +ngbe_get_vf_queues(struct rte_eth_dev *eth_dev, uint32_t vf, uint32_t *msgbuf) +{ + struct ngbe_vf_info *vfinfo = *NGBE_DEV_VFDATA(eth_dev); + uint32_t default_q = 0; + + /* Verify if the PF supports the mbox APIs version or not */ + switch (vfinfo[vf].api_version) { + case ngbe_mbox_api_20: + case ngbe_mbox_api_11: + case ngbe_mbox_api_12: + case ngbe_mbox_api_13: + break; + default: + return -1; + } + + /* Notify VF of Rx and Tx queue number */ + msgbuf[NGBE_VF_RX_QUEUES] = RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool; + msgbuf[NGBE_VF_TX_QUEUES] = RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool; + + /* Notify VF of default queue */ + msgbuf[NGBE_VF_DEF_QUEUE] = default_q; + + 
msgbuf[NGBE_VF_TRANS_VLAN] = 0; + + return 0; +} + +static int +ngbe_set_vf_mc_promisc(struct rte_eth_dev *eth_dev, + uint32_t vf, uint32_t *msgbuf) +{ + struct ngbe_vf_info *vfinfo = *(NGBE_DEV_VFDATA(eth_dev)); + struct ngbe_hw *hw = ngbe_dev_hw(eth_dev); + int xcast_mode = msgbuf[1]; /* msgbuf contains the flag to enable */ + u32 vmolr, fctrl, disable, enable; + + switch (vfinfo[vf].api_version) { + case ngbe_mbox_api_12: + /* promisc introduced in 1.3 version */ + if (xcast_mode == NGBEVF_XCAST_MODE_PROMISC) + return -EOPNOTSUPP; + break; + /* Fall through */ + case ngbe_mbox_api_13: + break; + default: + return -1; + } + + if (vfinfo[vf].xcast_mode == xcast_mode) + goto out; + + switch (xcast_mode) { + case NGBEVF_XCAST_MODE_NONE: + disable = NGBE_POOLETHCTL_BCA | NGBE_POOLETHCTL_MCHA | + NGBE_POOLETHCTL_MCP | NGBE_POOLETHCTL_UCP | + NGBE_POOLETHCTL_VLP; + enable = 0; + break; + case NGBEVF_XCAST_MODE_MULTI: + disable = NGBE_POOLETHCTL_MCP | NGBE_POOLETHCTL_UCP | + NGBE_POOLETHCTL_VLP; + enable = NGBE_POOLETHCTL_BCA | NGBE_POOLETHCTL_MCHA; + break; + case NGBEVF_XCAST_MODE_ALLMULTI: + disable = NGBE_POOLETHCTL_UCP | NGBE_POOLETHCTL_VLP; + enable = NGBE_POOLETHCTL_BCA | NGBE_POOLETHCTL_MCHA | + NGBE_POOLETHCTL_MCP; + break; + case NGBEVF_XCAST_MODE_PROMISC: + fctrl = rd32(hw, NGBE_PSRCTL); + if (!(fctrl & NGBE_PSRCTL_UCP)) { + /* VF promisc requires PF in promisc */ + PMD_DRV_LOG(ERR, + "Enabling VF promisc requires PF in promisc\n"); + return -1; + } + + disable = 0; + enable = NGBE_POOLETHCTL_BCA | NGBE_POOLETHCTL_MCHA | + NGBE_POOLETHCTL_MCP | NGBE_POOLETHCTL_UCP | + NGBE_POOLETHCTL_VLP; + break; + default: + return -1; + } + + vmolr = rd32(hw, NGBE_POOLETHCTL(vf)); + vmolr &= ~disable; + vmolr |= enable; + wr32(hw, NGBE_POOLETHCTL(vf), vmolr); + vfinfo[vf].xcast_mode = xcast_mode; + +out: + msgbuf[1] = xcast_mode; + + return 0; +} + +static int +ngbe_set_vf_macvlan_msg(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf) +{ + struct ngbe_hw *hw = 
ngbe_dev_hw(dev); + struct ngbe_vf_info *vf_info = *(NGBE_DEV_VFDATA(dev)); + uint8_t *new_mac = (uint8_t *)(&msgbuf[1]); + struct rte_ether_addr *ea = (struct rte_ether_addr *)new_mac; + int index = (msgbuf[0] & NGBE_VT_MSGINFO_MASK) >> + NGBE_VT_MSGINFO_SHIFT; + + if (index) { + if (!rte_is_valid_assigned_ether_addr(ea)) { + PMD_DRV_LOG(ERR, "set invalid mac vf:%d\n", vf); + return -1; + } + + vf_info[vf].mac_count++; + + hw->mac.set_rar(hw, vf_info[vf].mac_count, + new_mac, vf, true); + } else { + if (vf_info[vf].mac_count) { + hw->mac.clear_rar(hw, vf_info[vf].mac_count); + vf_info[vf].mac_count = 0; + } + } + return 0; +} + +static int +ngbe_rcv_msg_from_vf(struct rte_eth_dev *eth_dev, uint16_t vf) +{ + uint16_t mbx_size = NGBE_P2VMBX_SIZE; + uint16_t msg_size = NGBE_VF_MSG_SIZE_DEFAULT; + uint32_t msgbuf[NGBE_P2VMBX_SIZE]; + int32_t retval; + struct ngbe_hw *hw = ngbe_dev_hw(eth_dev); + struct ngbe_vf_info *vfinfo = *NGBE_DEV_VFDATA(eth_dev); + struct rte_pmd_ngbe_mb_event_param ret_param; + + retval = ngbe_read_mbx(hw, msgbuf, mbx_size, vf); + if (retval) { + PMD_DRV_LOG(ERR, "Error mbx recv msg from VF %d", vf); + return retval; + } + + /* do nothing if the message has already been processed */ + if (msgbuf[0] & (NGBE_VT_MSGTYPE_ACK | NGBE_VT_MSGTYPE_NACK)) + return retval; + + /* flush the ack before we write any messages back */ + ngbe_flush(hw); + + /** + * initialise structure to send to user application + * will return response from user in retval field + */ + ret_param.retval = RTE_PMD_NGBE_MB_EVENT_PROCEED; + ret_param.vfid = vf; + ret_param.msg_type = msgbuf[0] & 0xFFFF; + ret_param.msg = (void *)msgbuf; + + /* perform VF reset */ + if (msgbuf[0] == NGBE_VF_RESET) { + int ret = ngbe_vf_reset(eth_dev, vf, msgbuf); + + vfinfo[vf].clear_to_send = true; + + /* notify application about VF reset */ + rte_eth_dev_callback_process(eth_dev, RTE_ETH_EVENT_VF_MBOX, + &ret_param); + return ret; + } + + /** + * ask user application if we are allowed to perform those 
functions + * if we get ret_param.retval == RTE_PMD_NGBE_MB_EVENT_PROCEED + * then business as usual, + * if 0, do nothing and send ACK to VF + * if ret_param.retval > 1, do nothing and send NAK to VF + */ + rte_eth_dev_callback_process(eth_dev, RTE_ETH_EVENT_VF_MBOX, + &ret_param); + + retval = ret_param.retval; + + /* check & process VF to PF mailbox message */ + switch ((msgbuf[0] & 0xFFFF)) { + case NGBE_VF_SET_MAC_ADDR: + if (retval == RTE_PMD_NGBE_MB_EVENT_PROCEED) + retval = ngbe_vf_set_mac_addr(eth_dev, vf, msgbuf); + break; + case NGBE_VF_SET_MULTICAST: + if (retval == RTE_PMD_NGBE_MB_EVENT_PROCEED) + retval = ngbe_vf_set_multicast(eth_dev, vf, msgbuf); + break; + case NGBE_VF_SET_LPE: + if (retval == RTE_PMD_NGBE_MB_EVENT_PROCEED) + retval = ngbe_set_vf_lpe(eth_dev, vf, msgbuf); + break; + case NGBE_VF_SET_VLAN: + if (retval == RTE_PMD_NGBE_MB_EVENT_PROCEED) + retval = ngbe_vf_set_vlan(eth_dev, vf, msgbuf); + break; + case NGBE_VF_API_NEGOTIATE: + retval = ngbe_negotiate_vf_api(eth_dev, vf, msgbuf); + break; + case NGBE_VF_GET_QUEUES: + retval = ngbe_get_vf_queues(eth_dev, vf, msgbuf); + msg_size = NGBE_VF_GET_QUEUE_MSG_SIZE; + break; + case NGBE_VF_UPDATE_XCAST_MODE: + if (retval == RTE_PMD_NGBE_MB_EVENT_PROCEED) + retval = ngbe_set_vf_mc_promisc(eth_dev, vf, msgbuf); + break; + case NGBE_VF_SET_MACVLAN: + if (retval == RTE_PMD_NGBE_MB_EVENT_PROCEED) + retval = ngbe_set_vf_macvlan_msg(eth_dev, vf, msgbuf); + break; + default: + PMD_DRV_LOG(DEBUG, "Unhandled Msg %8.8x", (uint32_t)msgbuf[0]); + retval = NGBE_ERR_MBX; + break; + } + + /* respond to the VF according to the message processing result */ + if (retval) + msgbuf[0] |= NGBE_VT_MSGTYPE_NACK; + else + msgbuf[0] |= NGBE_VT_MSGTYPE_ACK; + + msgbuf[0] |= NGBE_VT_MSGTYPE_CTS; + + ngbe_write_mbx(hw, msgbuf, msg_size, vf); + + return retval; +} + +static inline void +ngbe_rcv_ack_from_vf(struct rte_eth_dev *eth_dev, uint16_t vf) +{ + uint32_t msg = NGBE_VT_MSGTYPE_NACK; + struct ngbe_hw *hw = 
ngbe_dev_hw(eth_dev); + struct ngbe_vf_info *vfinfo = *NGBE_DEV_VFDATA(eth_dev); + + if (!vfinfo[vf].clear_to_send) + ngbe_write_mbx(hw, &msg, 1, vf); +} + +void ngbe_pf_mbx_process(struct rte_eth_dev *eth_dev) +{ + uint16_t vf; + struct ngbe_hw *hw = ngbe_dev_hw(eth_dev); + + for (vf = 0; vf < dev_num_vf(eth_dev); vf++) { + /* check & process vf function level reset */ + if (!ngbe_check_for_rst(hw, vf)) + ngbe_vf_reset_event(eth_dev, vf); + + /* check & process vf mailbox messages */ + if (!ngbe_check_for_msg(hw, vf)) + ngbe_rcv_msg_from_vf(eth_dev, vf); + + /* check & process acks from vf */ + if (!ngbe_check_for_ack(hw, vf)) + ngbe_rcv_ack_from_vf(eth_dev, vf); + } +} diff --git a/drivers/net/ngbe/rte_pmd_ngbe.h b/drivers/net/ngbe/rte_pmd_ngbe.h new file mode 100644 index 0000000000..e895ecd7ef --- /dev/null +++ b/drivers/net/ngbe/rte_pmd_ngbe.h @@ -0,0 +1,39 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2021 Beijing WangXun Technology Co., Ltd. + * Copyright(c) 2010-2017 Intel Corporation + */ + +/** + * @file rte_pmd_ngbe.h + * ngbe PMD specific functions. + * + **/ + +#ifndef _PMD_NGBE_H_ +#define _PMD_NGBE_H_ + +#include +#include +#include + +/** + * Response sent back to ngbe driver from user app after callback + */ +enum rte_pmd_ngbe_mb_event_rsp { + RTE_PMD_NGBE_MB_EVENT_NOOP_ACK, /**< skip mbox request and ACK */ + RTE_PMD_NGBE_MB_EVENT_NOOP_NACK, /**< skip mbox request and NACK */ + RTE_PMD_NGBE_MB_EVENT_PROCEED, /**< proceed with mbox request */ + RTE_PMD_NGBE_MB_EVENT_MAX /**< max value of this enum */ +}; + +/** + * Data sent to the user application when the callback is executed. 
+ */ +struct rte_pmd_ngbe_mb_event_param { + uint16_t vfid; /**< Virtual Function number */ + uint16_t msg_type; /**< VF to PF message type, defined in ngbe_mbx.h */ + uint16_t retval; /**< return value */ + void *msg; /**< pointer to message */ +}; + +#endif /* _PMD_NGBE_H_ */ From patchwork Wed Sep 8 08:37:46 2021 X-Patchwork-Submitter: Jiawen Wu X-Patchwork-Id: 98300 X-Patchwork-Delegate: ferruh.yigit@amd.com From: Jiawen Wu To: dev@dpdk.org Cc: Jiawen Wu Date: Wed, 8 Sep 2021 16:37:46 +0800 Message-Id: <20210908083758.312055-21-jiawenwu@trustnetic.com> In-Reply-To: <20210908083758.312055-1-jiawenwu@trustnetic.com> References: <20210908083758.312055-1-jiawenwu@trustnetic.com> Subject: [dpdk-dev] [PATCH 20/32] net/ngbe: support flow control Support getting and setting flow control. Signed-off-by: Jiawen Wu --- doc/guides/nics/features/ngbe.ini | 1 + doc/guides/nics/ngbe.rst | 1 + drivers/net/ngbe/base/ngbe_dummy.h | 31 +++ drivers/net/ngbe/base/ngbe_hw.c | 334 +++++++++++++++++++++++++++ drivers/net/ngbe/base/ngbe_hw.h | 6 + drivers/net/ngbe/base/ngbe_phy.c | 9 + drivers/net/ngbe/base/ngbe_phy.h | 3 + drivers/net/ngbe/base/ngbe_phy_mvl.c | 57 +++++ drivers/net/ngbe/base/ngbe_phy_mvl.h | 4 + drivers/net/ngbe/base/ngbe_phy_rtl.c | 42 ++++ drivers/net/ngbe/base/ngbe_phy_rtl.h | 3 + drivers/net/ngbe/base/ngbe_phy_yt.c | 44 ++++ drivers/net/ngbe/base/ngbe_phy_yt.h | 6 + drivers/net/ngbe/base/ngbe_type.h | 32 +++ drivers/net/ngbe/ngbe_ethdev.c | 111 +++++++++ drivers/net/ngbe/ngbe_ethdev.h | 8 + 16 files changed, 692 insertions(+) diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini index 9a497ccae6..00150282cb 100644 --- a/doc/guides/nics/features/ngbe.ini +++ b/doc/guides/nics/features/ngbe.ini @@ -22,6 +22,7 @@ RSS key update = Y RSS reta update = Y SR-IOV = Y VLAN filter = Y +Flow control = Y CRC offload = P VLAN offload = P QinQ offload = P diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst index ce160e832c..09175e83cd 100644 --- a/doc/guides/nics/ngbe.rst +++ b/doc/guides/nics/ngbe.rst @@ -23,6 +23,7 @@ Features - Port hardware statistics - Jumbo frames - Link state information +- Link flow control - Interrupt mode for RX - Scattered and gather for TX and RX - FW version diff --git 
a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h index 940b448734..0baabcbae7 100644 --- a/drivers/net/ngbe/base/ngbe_dummy.h +++ b/drivers/net/ngbe/base/ngbe_dummy.h @@ -154,6 +154,17 @@ static inline void ngbe_mac_set_vlan_anti_spoofing_dummy(struct ngbe_hw *TUP0, bool TUP1, int TUP2) { } +static inline s32 ngbe_mac_fc_enable_dummy(struct ngbe_hw *TUP0) +{ + return NGBE_ERR_OPS_DUMMY; +} +static inline s32 ngbe_mac_setup_fc_dummy(struct ngbe_hw *TUP0) +{ + return NGBE_ERR_OPS_DUMMY; +} +static inline void ngbe_mac_fc_autoneg_dummy(struct ngbe_hw *TUP0) +{ +} static inline s32 ngbe_mac_init_thermal_ssth_dummy(struct ngbe_hw *TUP0) { return NGBE_ERR_OPS_DUMMY; @@ -205,6 +216,20 @@ static inline s32 ngbe_phy_check_link_dummy(struct ngbe_hw *TUP0, u32 *TUP1, { return NGBE_ERR_OPS_DUMMY; } +static inline s32 ngbe_get_phy_advertised_pause_dummy(struct ngbe_hw *TUP0, + u8 *TUP1) +{ + return NGBE_ERR_OPS_DUMMY; +} +static inline s32 ngbe_get_phy_lp_advertised_pause_dummy(struct ngbe_hw *TUP0, + u8 *TUP1) +{ + return NGBE_ERR_OPS_DUMMY; +} +static inline s32 ngbe_set_phy_pause_adv_dummy(struct ngbe_hw *TUP0, u16 TUP1) +{ + return NGBE_ERR_OPS_DUMMY; +} /* struct ngbe_mbx_operations */ static inline void ngbe_mbx_init_params_dummy(struct ngbe_hw *TUP0) @@ -264,6 +289,9 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw) hw->mac.set_vlvf = ngbe_mac_set_vlvf_dummy; hw->mac.set_mac_anti_spoofing = ngbe_mac_set_mac_anti_spoofing_dummy; hw->mac.set_vlan_anti_spoofing = ngbe_mac_set_vlan_anti_spoofing_dummy; + hw->mac.fc_enable = ngbe_mac_fc_enable_dummy; + hw->mac.setup_fc = ngbe_mac_setup_fc_dummy; + hw->mac.fc_autoneg = ngbe_mac_fc_autoneg_dummy; hw->mac.init_thermal_sensor_thresh = ngbe_mac_init_thermal_ssth_dummy; hw->mac.check_overtemp = ngbe_mac_check_overtemp_dummy; hw->phy.identify = ngbe_phy_identify_dummy; @@ -275,6 +303,9 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw) hw->phy.write_reg_unlocked = 
ngbe_phy_write_reg_unlocked_dummy; hw->phy.setup_link = ngbe_phy_setup_link_dummy; hw->phy.check_link = ngbe_phy_check_link_dummy; + hw->phy.get_adv_pause = ngbe_get_phy_advertised_pause_dummy; + hw->phy.get_lp_adv_pause = ngbe_get_phy_lp_advertised_pause_dummy; + hw->phy.set_pause_adv = ngbe_set_phy_pause_adv_dummy; hw->mbx.init_params = ngbe_mbx_init_params_dummy; hw->mbx.read = ngbe_mbx_read_dummy; hw->mbx.write = ngbe_mbx_write_dummy; diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c index afde58a89e..35351a2702 100644 --- a/drivers/net/ngbe/base/ngbe_hw.c +++ b/drivers/net/ngbe/base/ngbe_hw.c @@ -18,6 +18,8 @@ **/ s32 ngbe_start_hw(struct ngbe_hw *hw) { + s32 err; + DEBUGFUNC("ngbe_start_hw"); /* Clear the VLAN filter table */ @@ -26,6 +28,13 @@ s32 ngbe_start_hw(struct ngbe_hw *hw) /* Clear statistics registers */ hw->mac.clear_hw_cntrs(hw); + /* Setup flow control */ + err = hw->mac.setup_fc(hw); + if (err != 0 && err != NGBE_NOT_IMPLEMENTED) { + DEBUGOUT("Flow control setup failed, returning %d\n", err); + return err; + } + /* Clear adapter stopped flag */ hw->adapter_stopped = false; @@ -703,6 +712,326 @@ s32 ngbe_update_mc_addr_list(struct ngbe_hw *hw, u8 *mc_addr_list, return 0; } +/** + * ngbe_setup_fc_em - Set up flow control + * @hw: pointer to hardware structure + * + * Called at init time to set up flow control. + **/ +s32 ngbe_setup_fc_em(struct ngbe_hw *hw) +{ + s32 err = 0; + u16 reg_cu = 0; + + DEBUGFUNC("ngbe_setup_fc"); + + /* Validate the requested mode */ + if (hw->fc.strict_ieee && hw->fc.requested_mode == ngbe_fc_rx_pause) { + DEBUGOUT("ngbe_fc_rx_pause not valid in strict IEEE mode\n"); + err = NGBE_ERR_INVALID_LINK_SETTINGS; + goto out; + } + + /* + * 1gig parts do not have a word in the EEPROM to determine the + * default flow control setting, so we explicitly set it to full. 
+ */ + if (hw->fc.requested_mode == ngbe_fc_default) + hw->fc.requested_mode = ngbe_fc_full; + + /* + * The possible values of fc.requested_mode are: + * 0: Flow control is completely disabled + * 1: Rx flow control is enabled (we can receive pause frames, + * but not send pause frames). + * 2: Tx flow control is enabled (we can send pause frames but + * we do not support receiving pause frames). + * 3: Both Rx and Tx flow control (symmetric) are enabled. + * other: Invalid. + */ + switch (hw->fc.requested_mode) { + case ngbe_fc_none: + /* Flow control completely disabled by software override. */ + break; + case ngbe_fc_tx_pause: + /* + * Tx Flow control is enabled, and Rx Flow control is + * disabled by software override. + */ + if (hw->phy.type == ngbe_phy_mvl_sfi || + hw->phy.type == ngbe_phy_yt8521s_sfi) + reg_cu |= MVL_FANA_ASM_PAUSE; + else + reg_cu |= 0x800; /*need to merge rtl and mvl on page 0*/ + break; + case ngbe_fc_rx_pause: + /* + * Rx Flow control is enabled and Tx Flow control is + * disabled by software override. Since there really + * isn't a way to advertise that we are capable of RX + * Pause ONLY, we will advertise that we support both + * symmetric and asymmetric Rx PAUSE, as such we fall + * through to the fc_full statement. Later, we will + * disable the adapter's ability to send PAUSE frames. + */ + case ngbe_fc_full: + /* Flow control (both Rx and Tx) is enabled by SW override. */ + if (hw->phy.type == ngbe_phy_mvl_sfi || + hw->phy.type == ngbe_phy_yt8521s_sfi) + reg_cu |= MVL_FANA_SYM_PAUSE; + else + reg_cu |= 0xC00; /*need to merge rtl and mvl on page 0*/ + break; + default: + DEBUGOUT("Flow control param set incorrectly\n"); + err = NGBE_ERR_CONFIG; + goto out; + } + + err = hw->phy.set_pause_adv(hw, reg_cu); + +out: + return err; +} + +/** + * ngbe_fc_enable - Enable flow control + * @hw: pointer to hardware structure + * + * Enable flow control according to the current settings. 
+ **/ +s32 ngbe_fc_enable(struct ngbe_hw *hw) +{ + s32 err = 0; + u32 mflcn_reg, fccfg_reg; + u32 pause_time; + u32 fcrtl, fcrth; + + DEBUGFUNC("ngbe_fc_enable"); + + /* Validate the water mark configuration */ + if (!hw->fc.pause_time) { + err = NGBE_ERR_INVALID_LINK_SETTINGS; + goto out; + } + + /* Low water mark of zero causes XOFF floods */ + if ((hw->fc.current_mode & ngbe_fc_tx_pause) && hw->fc.high_water) { + if (!hw->fc.low_water || + hw->fc.low_water >= hw->fc.high_water) { + DEBUGOUT("Invalid water mark configuration\n"); + err = NGBE_ERR_INVALID_LINK_SETTINGS; + goto out; + } + } + + /* Negotiate the fc mode to use */ + hw->mac.fc_autoneg(hw); + + /* Disable any previous flow control settings */ + mflcn_reg = rd32(hw, NGBE_RXFCCFG); + mflcn_reg &= ~NGBE_RXFCCFG_FC; + + fccfg_reg = rd32(hw, NGBE_TXFCCFG); + fccfg_reg &= ~NGBE_TXFCCFG_FC; + /* + * The possible values of fc.current_mode are: + * 0: Flow control is completely disabled + * 1: Rx flow control is enabled (we can receive pause frames, + * but not send pause frames). + * 2: Tx flow control is enabled (we can send pause frames but + * we do not support receiving pause frames). + * 3: Both Rx and Tx flow control (symmetric) are enabled. + * other: Invalid. + */ + switch (hw->fc.current_mode) { + case ngbe_fc_none: + /* + * Flow control is disabled by software override or autoneg. + * The code below will actually disable it in the HW. + */ + break; + case ngbe_fc_rx_pause: + /* + * Rx Flow control is enabled and Tx Flow control is + * disabled by software override. Since there really + * isn't a way to advertise that we are capable of RX + * Pause ONLY, we will advertise that we support both + * symmetric and asymmetric Rx PAUSE. Later, we will + * disable the adapter's ability to send PAUSE frames. + */ + mflcn_reg |= NGBE_RXFCCFG_FC; + break; + case ngbe_fc_tx_pause: + /* + * Tx Flow control is enabled, and Rx Flow control is + * disabled by software override. 
+ */ + fccfg_reg |= NGBE_TXFCCFG_FC; + break; + case ngbe_fc_full: + /* Flow control (both Rx and Tx) is enabled by SW override. */ + mflcn_reg |= NGBE_RXFCCFG_FC; + fccfg_reg |= NGBE_TXFCCFG_FC; + break; + default: + DEBUGOUT("Flow control param set incorrectly\n"); + err = NGBE_ERR_CONFIG; + goto out; + } + + /* Set 802.3x based flow control settings. */ + wr32(hw, NGBE_RXFCCFG, mflcn_reg); + wr32(hw, NGBE_TXFCCFG, fccfg_reg); + + /* Set up and enable Rx high/low water mark thresholds, enable XON. */ + if ((hw->fc.current_mode & ngbe_fc_tx_pause) && + hw->fc.high_water) { + fcrtl = NGBE_FCWTRLO_TH(hw->fc.low_water) | + NGBE_FCWTRLO_XON; + fcrth = NGBE_FCWTRHI_TH(hw->fc.high_water) | + NGBE_FCWTRHI_XOFF; + } else { + /* + * In order to prevent Tx hangs when the internal Tx + * switch is enabled we must set the high water mark + * to the Rx packet buffer size - 24KB. This allows + * the Tx switch to function even under heavy Rx + * workloads. + */ + fcrtl = 0; + fcrth = rd32(hw, NGBE_PBRXSIZE) - 24576; + } + wr32(hw, NGBE_FCWTRLO, fcrtl); + wr32(hw, NGBE_FCWTRHI, fcrth); + + /* Configure pause time */ + pause_time = NGBE_RXFCFSH_TIME(hw->fc.pause_time); + wr32(hw, NGBE_FCXOFFTM, pause_time * 0x00010000); + + /* Configure flow control refresh threshold value */ + wr32(hw, NGBE_RXFCRFSH, hw->fc.pause_time / 2); + +out: + return err; +} + +/** + * ngbe_negotiate_fc - Negotiate flow control + * @hw: pointer to hardware structure + * @adv_reg: flow control advertised settings + * @lp_reg: link partner's flow control settings + * @adv_sym: symmetric pause bit in advertisement + * @adv_asm: asymmetric pause bit in advertisement + * @lp_sym: symmetric pause bit in link partner advertisement + * @lp_asm: asymmetric pause bit in link partner advertisement + * + * Find the intersection between advertised settings and link partner's + * advertised settings + **/ +s32 ngbe_negotiate_fc(struct ngbe_hw *hw, u32 adv_reg, u32 lp_reg, + u32 adv_sym, u32 adv_asm, u32 lp_sym, u32 
lp_asm) +{ + if ((!(adv_reg)) || (!(lp_reg))) { + DEBUGOUT("Local or link partner's advertised flow control " + "settings are NULL. Local: %x, link partner: %x\n", + adv_reg, lp_reg); + return NGBE_ERR_FC_NOT_NEGOTIATED; + } + + if ((adv_reg & adv_sym) && (lp_reg & lp_sym)) { + /* + * Now we need to check if the user selected Rx ONLY + * of pause frames. In this case, we had to advertise + * FULL flow control because we could not advertise RX + * ONLY. Hence, we must now check to see if we need to + * turn OFF the TRANSMISSION of PAUSE frames. + */ + if (hw->fc.requested_mode == ngbe_fc_full) { + hw->fc.current_mode = ngbe_fc_full; + DEBUGOUT("Flow Control = FULL.\n"); + } else { + hw->fc.current_mode = ngbe_fc_rx_pause; + DEBUGOUT("Flow Control=RX PAUSE frames only\n"); + } + } else if (!(adv_reg & adv_sym) && (adv_reg & adv_asm) && + (lp_reg & lp_sym) && (lp_reg & lp_asm)) { + hw->fc.current_mode = ngbe_fc_tx_pause; + DEBUGOUT("Flow Control = TX PAUSE frames only.\n"); + } else if ((adv_reg & adv_sym) && (adv_reg & adv_asm) && + !(lp_reg & lp_sym) && (lp_reg & lp_asm)) { + hw->fc.current_mode = ngbe_fc_rx_pause; + DEBUGOUT("Flow Control = RX PAUSE frames only.\n"); + } else { + hw->fc.current_mode = ngbe_fc_none; + DEBUGOUT("Flow Control = NONE.\n"); + } + return 0; +} + +/** + * ngbe_fc_autoneg_em - Enable flow control IEEE clause 37 + * @hw: pointer to hardware structure + * + * Enable flow control according to IEEE clause 37. 
+ **/ +STATIC s32 ngbe_fc_autoneg_em(struct ngbe_hw *hw) +{ + u8 technology_ability_reg = 0; + u8 lp_technology_ability_reg = 0; + + hw->phy.get_adv_pause(hw, &technology_ability_reg); + hw->phy.get_lp_adv_pause(hw, &lp_technology_ability_reg); + + return ngbe_negotiate_fc(hw, (u32)technology_ability_reg, + (u32)lp_technology_ability_reg, + NGBE_TAF_SYM_PAUSE, NGBE_TAF_ASM_PAUSE, + NGBE_TAF_SYM_PAUSE, NGBE_TAF_ASM_PAUSE); +} + +/** + * ngbe_fc_autoneg - Configure flow control + * @hw: pointer to hardware structure + * + * Compares our advertised flow control capabilities to those advertised by + * our link partner, and determines the proper flow control mode to use. + **/ +void ngbe_fc_autoneg(struct ngbe_hw *hw) +{ + s32 err = NGBE_ERR_FC_NOT_NEGOTIATED; + u32 speed; + bool link_up; + + DEBUGFUNC("ngbe_fc_autoneg"); + + /* + * AN should have completed when the cable was plugged in. + * Look for reasons to bail out. Bail out if: + * - FC autoneg is disabled, or if + * - link is not up. + */ + if (hw->fc.disable_fc_autoneg) { + DEBUGOUT("Flow control autoneg is disabled"); + goto out; + } + + hw->mac.check_link(hw, &speed, &link_up, false); + if (!link_up) { + DEBUGOUT("The link is down"); + goto out; + } + + err = ngbe_fc_autoneg_em(hw); + +out: + if (err == 0) { + hw->fc.fc_was_autonegged = true; + } else { + hw->fc.fc_was_autonegged = false; + hw->fc.current_mode = hw->fc.requested_mode; + } +} + /** * ngbe_acquire_swfw_sync - Acquire SWFW semaphore * @hw: pointer to hardware structure @@ -1520,6 +1849,11 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw) mac->set_mac_anti_spoofing = ngbe_set_mac_anti_spoofing; mac->set_vlan_anti_spoofing = ngbe_set_vlan_anti_spoofing; + /* Flow Control */ + mac->fc_enable = ngbe_fc_enable; + mac->fc_autoneg = ngbe_fc_autoneg; + mac->setup_fc = ngbe_setup_fc_em; + /* Link */ mac->get_link_capabilities = ngbe_get_link_capabilities_em; mac->check_link = ngbe_check_mac_link_em; diff --git a/drivers/net/ngbe/base/ngbe_hw.h 
b/drivers/net/ngbe/base/ngbe_hw.h index 83ad646dde..a84ddca6ac 100644 --- a/drivers/net/ngbe/base/ngbe_hw.h +++ b/drivers/net/ngbe/base/ngbe_hw.h @@ -42,6 +42,10 @@ s32 ngbe_update_mc_addr_list(struct ngbe_hw *hw, u8 *mc_addr_list, s32 ngbe_disable_sec_rx_path(struct ngbe_hw *hw); s32 ngbe_enable_sec_rx_path(struct ngbe_hw *hw); +s32 ngbe_setup_fc_em(struct ngbe_hw *hw); +s32 ngbe_fc_enable(struct ngbe_hw *hw); +void ngbe_fc_autoneg(struct ngbe_hw *hw); + s32 ngbe_validate_mac_addr(u8 *mac_addr); s32 ngbe_acquire_swfw_sync(struct ngbe_hw *hw, u32 mask); void ngbe_release_swfw_sync(struct ngbe_hw *hw, u32 mask); @@ -64,6 +68,8 @@ s32 ngbe_mac_check_overtemp(struct ngbe_hw *hw); void ngbe_disable_rx(struct ngbe_hw *hw); void ngbe_enable_rx(struct ngbe_hw *hw); void ngbe_set_mta(struct ngbe_hw *hw, u8 *mc_addr); +s32 ngbe_negotiate_fc(struct ngbe_hw *hw, u32 adv_reg, u32 lp_reg, + u32 adv_sym, u32 adv_asm, u32 lp_sym, u32 lp_asm); s32 ngbe_init_shared_code(struct ngbe_hw *hw); s32 ngbe_set_mac_type(struct ngbe_hw *hw); s32 ngbe_init_ops_pf(struct ngbe_hw *hw); diff --git a/drivers/net/ngbe/base/ngbe_phy.c b/drivers/net/ngbe/base/ngbe_phy.c index 691171ee9f..51b0a2ec60 100644 --- a/drivers/net/ngbe/base/ngbe_phy.c +++ b/drivers/net/ngbe/base/ngbe_phy.c @@ -429,18 +429,27 @@ s32 ngbe_init_phy(struct ngbe_hw *hw) hw->phy.init_hw = ngbe_init_phy_rtl; hw->phy.check_link = ngbe_check_phy_link_rtl; hw->phy.setup_link = ngbe_setup_phy_link_rtl; + hw->phy.get_adv_pause = ngbe_get_phy_advertised_pause_rtl; + hw->phy.get_lp_adv_pause = ngbe_get_phy_lp_advertised_pause_rtl; + hw->phy.set_pause_adv = ngbe_set_phy_pause_adv_rtl; break; case ngbe_phy_mvl: case ngbe_phy_mvl_sfi: hw->phy.init_hw = ngbe_init_phy_mvl; hw->phy.check_link = ngbe_check_phy_link_mvl; hw->phy.setup_link = ngbe_setup_phy_link_mvl; + hw->phy.get_adv_pause = ngbe_get_phy_advertised_pause_mvl; + hw->phy.get_lp_adv_pause = ngbe_get_phy_lp_advertised_pause_mvl; + hw->phy.set_pause_adv = ngbe_set_phy_pause_adv_mvl; 
break; case ngbe_phy_yt8521s: case ngbe_phy_yt8521s_sfi: hw->phy.init_hw = ngbe_init_phy_yt; hw->phy.check_link = ngbe_check_phy_link_yt; hw->phy.setup_link = ngbe_setup_phy_link_yt; + hw->phy.get_adv_pause = ngbe_get_phy_advertised_pause_yt; + hw->phy.get_lp_adv_pause = ngbe_get_phy_lp_advertised_pause_yt; + hw->phy.set_pause_adv = ngbe_set_phy_pause_adv_yt; default: break; } diff --git a/drivers/net/ngbe/base/ngbe_phy.h b/drivers/net/ngbe/base/ngbe_phy.h index 5d6ff1711c..f262ff3350 100644 --- a/drivers/net/ngbe/base/ngbe_phy.h +++ b/drivers/net/ngbe/base/ngbe_phy.h @@ -42,6 +42,9 @@ typedef struct mdi_reg mdi_reg_t; #define NGBE_MD22_PHY_ID_HIGH 0x2 /* PHY ID High Reg*/ #define NGBE_MD22_PHY_ID_LOW 0x3 /* PHY ID Low Reg*/ +#define NGBE_TAF_SYM_PAUSE 0x1 +#define NGBE_TAF_ASM_PAUSE 0x2 + s32 ngbe_mdi_map_register(mdi_reg_t *reg, mdi_reg_22_t *reg22); bool ngbe_validate_phy_addr(struct ngbe_hw *hw, u32 phy_addr); diff --git a/drivers/net/ngbe/base/ngbe_phy_mvl.c b/drivers/net/ngbe/base/ngbe_phy_mvl.c index 86b0a072c1..2eb351d258 100644 --- a/drivers/net/ngbe/base/ngbe_phy_mvl.c +++ b/drivers/net/ngbe/base/ngbe_phy_mvl.c @@ -209,6 +209,63 @@ s32 ngbe_reset_phy_mvl(struct ngbe_hw *hw) return status; } +s32 ngbe_get_phy_advertised_pause_mvl(struct ngbe_hw *hw, u8 *pause_bit) +{ + u16 value; + s32 status = 0; + + if (hw->phy.type == ngbe_phy_mvl) { + status = hw->phy.read_reg(hw, MVL_ANA, 0, &value); + value &= MVL_CANA_ASM_PAUSE | MVL_CANA_PAUSE; + *pause_bit = (u8)(value >> 10); + } else { + status = hw->phy.read_reg(hw, MVL_ANA, 0, &value); + value &= MVL_FANA_PAUSE_MASK; + *pause_bit = (u8)(value >> 7); + } + + return status; +} + +s32 ngbe_get_phy_lp_advertised_pause_mvl(struct ngbe_hw *hw, u8 *pause_bit) +{ + u16 value; + s32 status = 0; + + if (hw->phy.type == ngbe_phy_mvl) { + status = hw->phy.read_reg(hw, MVL_LPAR, 0, &value); + value &= MVL_CLPAR_ASM_PAUSE | MVL_CLPAR_PAUSE; + *pause_bit = (u8)(value >> 10); + } else { + status = hw->phy.read_reg(hw, 
MVL_LPAR, 0, &value); + value &= MVL_FLPAR_PAUSE_MASK; + *pause_bit = (u8)(value >> 7); + } + + return status; +} + +s32 ngbe_set_phy_pause_adv_mvl(struct ngbe_hw *hw, u16 pause_bit) +{ + u16 value; + s32 status = 0; + + DEBUGFUNC("ngbe_set_phy_pause_adv_mvl"); + + if (hw->phy.type == ngbe_phy_mvl) { + status = hw->phy.read_reg(hw, MVL_ANA, 0, &value); + value &= ~(MVL_CANA_ASM_PAUSE | MVL_CANA_PAUSE); + } else { + status = hw->phy.read_reg(hw, MVL_ANA, 0, &value); + value &= ~MVL_FANA_PAUSE_MASK; + } + + value |= pause_bit; + status = hw->phy.write_reg(hw, MVL_ANA, 0, value); + + return status; +} + s32 ngbe_check_phy_link_mvl(struct ngbe_hw *hw, u32 *speed, bool *link_up) { diff --git a/drivers/net/ngbe/base/ngbe_phy_mvl.h b/drivers/net/ngbe/base/ngbe_phy_mvl.h index 74d5ecba77..a2b5202d4b 100644 --- a/drivers/net/ngbe/base/ngbe_phy_mvl.h +++ b/drivers/net/ngbe/base/ngbe_phy_mvl.h @@ -94,4 +94,8 @@ s32 ngbe_check_phy_link_mvl(struct ngbe_hw *hw, u32 *speed, bool *link_up); s32 ngbe_setup_phy_link_mvl(struct ngbe_hw *hw, u32 speed, bool autoneg_wait_to_complete); +s32 ngbe_get_phy_advertised_pause_mvl(struct ngbe_hw *hw, u8 *pause_bit); +s32 ngbe_get_phy_lp_advertised_pause_mvl(struct ngbe_hw *hw, u8 *pause_bit); +s32 ngbe_set_phy_pause_adv_mvl(struct ngbe_hw *hw, u16 pause_bit); + #endif /* _NGBE_PHY_MVL_H_ */ diff --git a/drivers/net/ngbe/base/ngbe_phy_rtl.c b/drivers/net/ngbe/base/ngbe_phy_rtl.c index 83830921c2..7b08b7a46c 100644 --- a/drivers/net/ngbe/base/ngbe_phy_rtl.c +++ b/drivers/net/ngbe/base/ngbe_phy_rtl.c @@ -249,6 +249,48 @@ s32 ngbe_reset_phy_rtl(struct ngbe_hw *hw) return status; } +s32 ngbe_get_phy_advertised_pause_rtl(struct ngbe_hw *hw, u8 *pause_bit) +{ + u16 value; + s32 status = 0; + + status = hw->phy.read_reg(hw, RTL_ANAR, RTL_DEV_ZERO, &value); + value &= RTL_ANAR_APAUSE | RTL_ANAR_PAUSE; + *pause_bit = (u8)(value >> 10); + return status; +} + +s32 ngbe_get_phy_lp_advertised_pause_rtl(struct ngbe_hw *hw, u8 *pause_bit) +{ + u16 value; + 
s32 status = 0; + + status = hw->phy.read_reg(hw, RTL_INSR, 0xa43, &value); + + status = hw->phy.read_reg(hw, RTL_BMSR, RTL_DEV_ZERO, &value); + value = value & RTL_BMSR_ANC; + + /* if AN complete then check lp adv pause */ + status = hw->phy.read_reg(hw, RTL_ANLPAR, RTL_DEV_ZERO, &value); + value &= RTL_ANLPAR_LP; + *pause_bit = (u8)(value >> 10); + return status; +} + +s32 ngbe_set_phy_pause_adv_rtl(struct ngbe_hw *hw, u16 pause_bit) +{ + u16 value; + s32 status = 0; + + status = hw->phy.read_reg(hw, RTL_ANAR, RTL_DEV_ZERO, &value); + value &= ~(RTL_ANAR_APAUSE | RTL_ANAR_PAUSE); + value |= pause_bit; + + status = hw->phy.write_reg(hw, RTL_ANAR, RTL_DEV_ZERO, value); + + return status; +} + s32 ngbe_check_phy_link_rtl(struct ngbe_hw *hw, u32 *speed, bool *link_up) { s32 status = 0; diff --git a/drivers/net/ngbe/base/ngbe_phy_rtl.h b/drivers/net/ngbe/base/ngbe_phy_rtl.h index 9ce2058eac..d717a1915c 100644 --- a/drivers/net/ngbe/base/ngbe_phy_rtl.h +++ b/drivers/net/ngbe/base/ngbe_phy_rtl.h @@ -83,6 +83,9 @@ s32 ngbe_setup_phy_link_rtl(struct ngbe_hw *hw, s32 ngbe_init_phy_rtl(struct ngbe_hw *hw); s32 ngbe_reset_phy_rtl(struct ngbe_hw *hw); +s32 ngbe_get_phy_advertised_pause_rtl(struct ngbe_hw *hw, u8 *pause_bit); +s32 ngbe_get_phy_lp_advertised_pause_rtl(struct ngbe_hw *hw, u8 *pause_bit); +s32 ngbe_set_phy_pause_adv_rtl(struct ngbe_hw *hw, u16 pause_bit); s32 ngbe_check_phy_link_rtl(struct ngbe_hw *hw, u32 *speed, bool *link_up); diff --git a/drivers/net/ngbe/base/ngbe_phy_yt.c b/drivers/net/ngbe/base/ngbe_phy_yt.c index 2a7061c100..8db0f9ce48 100644 --- a/drivers/net/ngbe/base/ngbe_phy_yt.c +++ b/drivers/net/ngbe/base/ngbe_phy_yt.c @@ -234,6 +234,50 @@ s32 ngbe_reset_phy_yt(struct ngbe_hw *hw) return status; } +s32 ngbe_get_phy_advertised_pause_yt(struct ngbe_hw *hw, u8 *pause_bit) +{ + u16 value; + s32 status = 0; + + DEBUGFUNC("ngbe_get_phy_advertised_pause_yt"); + + status = hw->phy.read_reg(hw, YT_ANA, 0, &value); + value &= YT_FANA_PAUSE_MASK; + *pause_bit 
= (u8)(value >> 7); + + return status; +} + +s32 ngbe_get_phy_lp_advertised_pause_yt(struct ngbe_hw *hw, u8 *pause_bit) +{ + u16 value; + s32 status = 0; + + DEBUGFUNC("ngbe_get_phy_lp_advertised_pause_yt"); + + status = hw->phy.read_reg(hw, YT_LPAR, 0, &value); + value &= YT_FLPAR_PAUSE_MASK; + *pause_bit = (u8)(value >> 7); + + return status; +} + +s32 ngbe_set_phy_pause_adv_yt(struct ngbe_hw *hw, u16 pause_bit) +{ + u16 value; + s32 status = 0; + + DEBUGFUNC("ngbe_set_phy_pause_adv_yt"); + + + status = hw->phy.read_reg(hw, YT_ANA, 0, &value); + value &= ~YT_FANA_PAUSE_MASK; + value |= pause_bit; + status = hw->phy.write_reg(hw, YT_ANA, 0, value); + + return status; +} + s32 ngbe_check_phy_link_yt(struct ngbe_hw *hw, u32 *speed, bool *link_up) { diff --git a/drivers/net/ngbe/base/ngbe_phy_yt.h b/drivers/net/ngbe/base/ngbe_phy_yt.h index 157339cce8..e729e0c854 100644 --- a/drivers/net/ngbe/base/ngbe_phy_yt.h +++ b/drivers/net/ngbe/base/ngbe_phy_yt.h @@ -73,4 +73,10 @@ s32 ngbe_check_phy_link_yt(struct ngbe_hw *hw, u32 *speed, bool *link_up); s32 ngbe_setup_phy_link_yt(struct ngbe_hw *hw, u32 speed, bool autoneg_wait_to_complete); +s32 ngbe_get_phy_advertised_pause_yt(struct ngbe_hw *hw, + u8 *pause_bit); +s32 ngbe_get_phy_lp_advertised_pause_yt(struct ngbe_hw *hw, + u8 *pause_bit); +s32 ngbe_set_phy_pause_adv_yt(struct ngbe_hw *hw, u16 pause_bit); + #endif /* _NGBE_PHY_YT_H_ */ diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h index 7a85f82abd..310d32ecfa 100644 --- a/drivers/net/ngbe/base/ngbe_type.h +++ b/drivers/net/ngbe/base/ngbe_type.h @@ -67,6 +67,15 @@ enum ngbe_media_type { ngbe_media_type_virtual }; +/* Flow Control Settings */ +enum ngbe_fc_mode { + ngbe_fc_none = 0, + ngbe_fc_rx_pause, + ngbe_fc_tx_pause, + ngbe_fc_full, + ngbe_fc_default +}; + struct ngbe_hw; struct ngbe_addr_filter_info { @@ -82,6 +91,19 @@ struct ngbe_bus_info { u8 lan_id; }; +/* Flow control parameters */ +struct ngbe_fc_info { + u32 high_water; /* 
Flow Ctrl High-water */ + u32 low_water; /* Flow Ctrl Low-water */ + u16 pause_time; /* Flow Control Pause timer */ + bool send_xon; /* Flow control send XON */ + bool strict_ieee; /* Strict IEEE mode */ + bool disable_fc_autoneg; /* Do not autonegotiate FC */ + bool fc_was_autonegged; /* Is current_mode the result of autonegging? */ + enum ngbe_fc_mode current_mode; /* FC mode in effect */ + enum ngbe_fc_mode requested_mode; /* FC mode requested by caller */ +}; + /* Statistics counters collected by the MAC */ /* PB[] RxTx */ struct ngbe_pb_stats { @@ -263,6 +285,11 @@ struct ngbe_mac_info { void (*set_vlan_anti_spoofing)(struct ngbe_hw *hw, bool enable, int vf); + /* Flow Control */ + s32 (*fc_enable)(struct ngbe_hw *hw); + s32 (*setup_fc)(struct ngbe_hw *hw); + void (*fc_autoneg)(struct ngbe_hw *hw); + /* Manageability interface */ s32 (*init_thermal_sensor_thresh)(struct ngbe_hw *hw); s32 (*check_overtemp)(struct ngbe_hw *hw); @@ -302,6 +329,10 @@ struct ngbe_phy_info { s32 (*setup_link)(struct ngbe_hw *hw, u32 speed, bool autoneg_wait_to_complete); s32 (*check_link)(struct ngbe_hw *hw, u32 *speed, bool *link_up); + s32 (*set_phy_power)(struct ngbe_hw *hw, bool on); + s32 (*get_adv_pause)(struct ngbe_hw *hw, u8 *pause_bit); + s32 (*get_lp_adv_pause)(struct ngbe_hw *hw, u8 *pause_bit); + s32 (*set_pause_adv)(struct ngbe_hw *hw, u16 pause_bit); enum ngbe_media_type media_type; enum ngbe_phy_type type; @@ -349,6 +380,7 @@ struct ngbe_hw { void *back; struct ngbe_mac_info mac; struct ngbe_addr_filter_info addr_ctrl; + struct ngbe_fc_info fc; struct ngbe_phy_info phy; struct ngbe_rom_info rom; struct ngbe_bus_info bus; diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c index 52d7b6376d..e950146f42 100644 --- a/drivers/net/ngbe/ngbe_ethdev.c +++ b/drivers/net/ngbe/ngbe_ethdev.c @@ -366,6 +366,14 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused) /* Unlock any pending hardware semaphore */ 
ngbe_swfw_lock_reset(hw); + /* Get Hardware Flow Control setting */ + hw->fc.requested_mode = ngbe_fc_full; + hw->fc.current_mode = ngbe_fc_full; + hw->fc.pause_time = NGBE_FC_PAUSE_TIME; + hw->fc.low_water = NGBE_FC_XON_LOTH; + hw->fc.high_water = NGBE_FC_XOFF_HITH; + hw->fc.send_xon = 1; + err = hw->rom.init_params(hw); if (err != 0) { PMD_INIT_LOG(ERR, "The EEPROM init failed: %d", err); @@ -2231,6 +2239,107 @@ ngbe_dev_interrupt_handler(void *param) ngbe_dev_interrupt_action(dev); } +static int +ngbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + uint32_t mflcn_reg; + uint32_t fccfg_reg; + int rx_pause; + int tx_pause; + + fc_conf->pause_time = hw->fc.pause_time; + fc_conf->high_water = hw->fc.high_water; + fc_conf->low_water = hw->fc.low_water; + fc_conf->send_xon = hw->fc.send_xon; + fc_conf->autoneg = !hw->fc.disable_fc_autoneg; + + /* + * Return rx_pause status according to actual setting of + * RXFCCFG register. + */ + mflcn_reg = rd32(hw, NGBE_RXFCCFG); + if (mflcn_reg & NGBE_RXFCCFG_FC) + rx_pause = 1; + else + rx_pause = 0; + + /* + * Return tx_pause status according to actual setting of + * TXFCCFG register. 
+ */ + fccfg_reg = rd32(hw, NGBE_TXFCCFG); + if (fccfg_reg & NGBE_TXFCCFG_FC) + tx_pause = 1; + else + tx_pause = 0; + + if (rx_pause && tx_pause) + fc_conf->mode = RTE_FC_FULL; + else if (rx_pause) + fc_conf->mode = RTE_FC_RX_PAUSE; + else if (tx_pause) + fc_conf->mode = RTE_FC_TX_PAUSE; + else + fc_conf->mode = RTE_FC_NONE; + + return 0; +} + +static int +ngbe_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + int err; + uint32_t rx_buf_size; + uint32_t max_high_water; + enum ngbe_fc_mode rte_fcmode_2_ngbe_fcmode[] = { + ngbe_fc_none, + ngbe_fc_rx_pause, + ngbe_fc_tx_pause, + ngbe_fc_full + }; + + PMD_INIT_FUNC_TRACE(); + + rx_buf_size = rd32(hw, NGBE_PBRXSIZE); + PMD_INIT_LOG(DEBUG, "Rx packet buffer size = 0x%x", rx_buf_size); + + /* + * At least reserve one Ethernet frame for watermark + * high_water/low_water in kilo bytes for ngbe + */ + max_high_water = (rx_buf_size - RTE_ETHER_MAX_LEN) >> 10; + if (fc_conf->high_water > max_high_water || + fc_conf->high_water < fc_conf->low_water) { + PMD_INIT_LOG(ERR, "Invalid high/low water setup value in KB"); + PMD_INIT_LOG(ERR, "High_water must <= 0x%x", max_high_water); + return -EINVAL; + } + + hw->fc.requested_mode = rte_fcmode_2_ngbe_fcmode[fc_conf->mode]; + hw->fc.pause_time = fc_conf->pause_time; + hw->fc.high_water = fc_conf->high_water; + hw->fc.low_water = fc_conf->low_water; + hw->fc.send_xon = fc_conf->send_xon; + hw->fc.disable_fc_autoneg = !fc_conf->autoneg; + + err = hw->mac.fc_enable(hw); + + /* Not negotiated is not an error case */ + if (err == 0 || err == NGBE_ERR_FC_NOT_NEGOTIATED) { + wr32m(hw, NGBE_MACRXFLT, NGBE_MACRXFLT_CTL_MASK, + (fc_conf->mac_ctrl_frame_fwd + ? 
NGBE_MACRXFLT_CTL_NOPS : NGBE_MACRXFLT_CTL_DROP)); + ngbe_flush(hw); + + return 0; + } + + PMD_INIT_LOG(ERR, "ngbe_fc_enable = 0x%x", err); + return -EIO; +} + int ngbe_dev_rss_reta_update(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, @@ -2682,6 +2791,8 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = { .rx_queue_release = ngbe_dev_rx_queue_release, .tx_queue_setup = ngbe_dev_tx_queue_setup, .tx_queue_release = ngbe_dev_tx_queue_release, + .flow_ctrl_get = ngbe_flow_ctrl_get, + .flow_ctrl_set = ngbe_flow_ctrl_set, .mac_addr_add = ngbe_add_rar, .mac_addr_remove = ngbe_remove_rar, .mac_addr_set = ngbe_set_default_mac_addr, diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h index 26911cc7d2..c16c6568be 100644 --- a/drivers/net/ngbe/ngbe_ethdev.h +++ b/drivers/net/ngbe/ngbe_ethdev.h @@ -248,6 +248,14 @@ void ngbe_pf_mbx_process(struct rte_eth_dev *eth_dev); int ngbe_pf_host_configure(struct rte_eth_dev *eth_dev); +/* High threshold controlling when to start sending XOFF frames. */ +#define NGBE_FC_XOFF_HITH 128 /*KB*/ +/* Low threshold controlling when to start sending XON frames. */ +#define NGBE_FC_XON_LOTH 64 /*KB*/ + +/* Timer value included in XOFF frames. */ +#define NGBE_FC_PAUSE_TIME 0x680 + #define NGBE_LINK_DOWN_CHECK_TIMEOUT 4000 /* ms */ #define NGBE_LINK_UP_CHECK_TIMEOUT 1000 /* ms */ #define NGBE_VMDQ_NUM_UC_MAC 4096 /* Maximum nb. of UC MAC addr. 
*/

From patchwork Wed Sep 8 08:37:47 2021
From: Jiawen Wu
To: dev@dpdk.org
Date: Wed, 8 Sep 2021 16:37:47 +0800
Message-Id: <20210908083758.312055-22-jiawenwu@trustnetic.com>
In-Reply-To: <20210908083758.312055-1-jiawenwu@trustnetic.com>
Subject: [dpdk-dev] [PATCH 21/32] net/ngbe: support device LED on and off
Support device LED on and off. Signed-off-by: Jiawen Wu --- drivers/net/ngbe/base/ngbe_dummy.h | 10 +++++++ drivers/net/ngbe/base/ngbe_hw.c | 48 ++++++++++++++++++++++++++++++ drivers/net/ngbe/base/ngbe_hw.h | 3 ++ drivers/net/ngbe/base/ngbe_type.h | 4 +++ drivers/net/ngbe/ngbe_ethdev.c | 16 ++++++++++ 5 files changed, 81 insertions(+) diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h index 0baabcbae7..9930a3a1d6 100644 --- a/drivers/net/ngbe/base/ngbe_dummy.h +++ b/drivers/net/ngbe/base/ngbe_dummy.h @@ -104,6 +104,14 @@ static inline s32 ngbe_mac_get_link_capabilities_dummy(struct ngbe_hw *TUP0, { return NGBE_ERR_OPS_DUMMY; } +static inline s32 ngbe_mac_led_on_dummy(struct ngbe_hw *TUP0, u32 TUP1) +{ + return NGBE_ERR_OPS_DUMMY; +} +static inline s32 ngbe_mac_led_off_dummy(struct ngbe_hw *TUP0, u32 TUP1) +{ + return NGBE_ERR_OPS_DUMMY; +} static inline s32 ngbe_mac_set_rar_dummy(struct ngbe_hw *TUP0, u32 TUP1, u8 *TUP2, u32 TUP3, u32 TUP4) { @@ -278,6 +286,8 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw) hw->mac.setup_link = ngbe_mac_setup_link_dummy; hw->mac.check_link = ngbe_mac_check_link_dummy; hw->mac.get_link_capabilities = ngbe_mac_get_link_capabilities_dummy; + hw->mac.led_on = ngbe_mac_led_on_dummy; + hw->mac.led_off = ngbe_mac_led_off_dummy; hw->mac.set_rar = ngbe_mac_set_rar_dummy; hw->mac.clear_rar = ngbe_mac_clear_rar_dummy; hw->mac.set_vmdq = ngbe_mac_set_vmdq_dummy; diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c index 35351a2702..476e5f25cf 100644 --- a/drivers/net/ngbe/base/ngbe_hw.c +++ b/drivers/net/ngbe/base/ngbe_hw.c @@ -390,6 +390,50 @@ s32 ngbe_stop_hw(struct ngbe_hw *hw) return 0; } +/** + * ngbe_led_on - Turns on the software controllable LEDs.
+ * @hw: pointer to hardware structure + * @index: led number to turn on + **/ +s32 ngbe_led_on(struct ngbe_hw *hw, u32 index) +{ + u32 led_reg = rd32(hw, NGBE_LEDCTL); + + DEBUGFUNC("ngbe_led_on"); + + if (index > 3) + return NGBE_ERR_PARAM; + + /* To turn on the LED, set mode to ON. */ + led_reg |= NGBE_LEDCTL_100M; + wr32(hw, NGBE_LEDCTL, led_reg); + ngbe_flush(hw); + + return 0; +} + +/** + * ngbe_led_off - Turns off the software controllable LEDs. + * @hw: pointer to hardware structure + * @index: led number to turn off + **/ +s32 ngbe_led_off(struct ngbe_hw *hw, u32 index) +{ + u32 led_reg = rd32(hw, NGBE_LEDCTL); + + DEBUGFUNC("ngbe_led_off"); + + if (index > 3) + return NGBE_ERR_PARAM; + + /* To turn off the LED, set mode to OFF. */ + led_reg &= ~NGBE_LEDCTL_100M; + wr32(hw, NGBE_LEDCTL, led_reg); + ngbe_flush(hw); + + return 0; +} + /** * ngbe_validate_mac_addr - Validate MAC address * @mac_addr: pointer to MAC address. @@ -1836,6 +1880,10 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw) mac->disable_sec_rx_path = ngbe_disable_sec_rx_path; mac->enable_sec_rx_path = ngbe_enable_sec_rx_path; + /* LEDs */ + mac->led_on = ngbe_led_on; + mac->led_off = ngbe_led_off; + /* RAR, Multicast, VLAN */ mac->set_rar = ngbe_set_rar; mac->clear_rar = ngbe_clear_rar; diff --git a/drivers/net/ngbe/base/ngbe_hw.h b/drivers/net/ngbe/base/ngbe_hw.h index a84ddca6ac..ad7e8fc2d9 100644 --- a/drivers/net/ngbe/base/ngbe_hw.h +++ b/drivers/net/ngbe/base/ngbe_hw.h @@ -32,6 +32,9 @@ s32 ngbe_setup_mac_link_em(struct ngbe_hw *hw, u32 speed, bool autoneg_wait_to_complete); +s32 ngbe_led_on(struct ngbe_hw *hw, u32 index); +s32 ngbe_led_off(struct ngbe_hw *hw, u32 index); + s32 ngbe_set_rar(struct ngbe_hw *hw, u32 index, u8 *addr, u32 vmdq, u32 enable_addr); s32 ngbe_clear_rar(struct ngbe_hw *hw, u32 index); diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h index 310d32ecfa..886dffc0db 100644 --- a/drivers/net/ngbe/base/ngbe_type.h +++ 
b/drivers/net/ngbe/base/ngbe_type.h @@ -265,6 +265,10 @@ struct ngbe_mac_info { s32 (*get_link_capabilities)(struct ngbe_hw *hw, u32 *speed, bool *autoneg); + /* LED */ + s32 (*led_on)(struct ngbe_hw *hw, u32 index); + s32 (*led_off)(struct ngbe_hw *hw, u32 index); + /* RAR */ s32 (*set_rar)(struct ngbe_hw *hw, u32 index, u8 *addr, u32 vmdq, u32 enable_addr); diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c index e950146f42..6ed836df9e 100644 --- a/drivers/net/ngbe/ngbe_ethdev.c +++ b/drivers/net/ngbe/ngbe_ethdev.c @@ -2239,6 +2239,20 @@ ngbe_dev_interrupt_handler(void *param) ngbe_dev_interrupt_action(dev); } +static int +ngbe_dev_led_on(struct rte_eth_dev *dev) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + return hw->mac.led_on(hw, 0) == 0 ? 0 : -ENOTSUP; +} + +static int +ngbe_dev_led_off(struct rte_eth_dev *dev) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + return hw->mac.led_off(hw, 0) == 0 ? 0 : -ENOTSUP; +} + static int ngbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf) { @@ -2791,6 +2805,8 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = { .rx_queue_release = ngbe_dev_rx_queue_release, .tx_queue_setup = ngbe_dev_tx_queue_setup, .tx_queue_release = ngbe_dev_tx_queue_release, + .dev_led_on = ngbe_dev_led_on, + .dev_led_off = ngbe_dev_led_off, .flow_ctrl_get = ngbe_flow_ctrl_get, .flow_ctrl_set = ngbe_flow_ctrl_set, .mac_addr_add = ngbe_add_rar,

From patchwork Wed Sep 8 08:37:48 2021
From: Jiawen Wu
To: dev@dpdk.org
Date: Wed, 8 Sep 2021 16:37:48 +0800
Message-Id: <20210908083758.312055-23-jiawenwu@trustnetic.com>
In-Reply-To: <20210908083758.312055-1-jiawenwu@trustnetic.com>
Subject: [dpdk-dev] [PATCH 22/32] net/ngbe: support EEPROM dump

Support to get and set device EEPROM data.
Signed-off-by: Jiawen Wu --- doc/guides/nics/features/ngbe.ini | 1 + drivers/net/ngbe/base/ngbe_dummy.h | 12 +++++ drivers/net/ngbe/base/ngbe_eeprom.c | 77 +++++++++++++++++++++++++++++ drivers/net/ngbe/base/ngbe_eeprom.h | 5 ++ drivers/net/ngbe/base/ngbe_hw.c | 2 + drivers/net/ngbe/base/ngbe_mng.c | 41 +++++++++++++++ drivers/net/ngbe/base/ngbe_mng.h | 13 +++++ drivers/net/ngbe/base/ngbe_type.h | 4 ++ drivers/net/ngbe/ngbe_ethdev.c | 52 +++++++++++++++++++ 9 files changed, 207 insertions(+) diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini index 00150282cb..3c169ab774 100644 --- a/doc/guides/nics/features/ngbe.ini +++ b/doc/guides/nics/features/ngbe.ini @@ -35,6 +35,7 @@ Basic stats = Y Extended stats = Y Stats per queue = Y FW version = Y +EEPROM dump = Y Multiprocess aware = Y Linux = Y ARMv8 = Y diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h index 9930a3a1d6..61b0d82bfb 100644 --- a/drivers/net/ngbe/base/ngbe_dummy.h +++ b/drivers/net/ngbe/base/ngbe_dummy.h @@ -33,11 +33,21 @@ static inline s32 ngbe_rom_init_params_dummy(struct ngbe_hw *TUP0) { return NGBE_ERR_OPS_DUMMY; } +static inline s32 ngbe_rom_readw_buffer_dummy(struct ngbe_hw *TUP0, u32 TUP1, + u32 TUP2, void *TUP3) +{ + return NGBE_ERR_OPS_DUMMY; +} static inline s32 ngbe_rom_read32_dummy(struct ngbe_hw *TUP0, u32 TUP1, u32 *TUP2) { return NGBE_ERR_OPS_DUMMY; } +static inline s32 ngbe_rom_writew_buffer_dummy(struct ngbe_hw *TUP0, u32 TUP1, + u32 TUP2, void *TUP3) +{ + return NGBE_ERR_OPS_DUMMY; +} static inline s32 ngbe_rom_validate_checksum_dummy(struct ngbe_hw *TUP0, u16 *TUP1) { @@ -270,7 +280,9 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw) { hw->bus.set_lan_id = ngbe_bus_set_lan_id_dummy; hw->rom.init_params = ngbe_rom_init_params_dummy; + hw->rom.readw_buffer = ngbe_rom_readw_buffer_dummy; hw->rom.read32 = ngbe_rom_read32_dummy; + hw->rom.writew_buffer = ngbe_rom_writew_buffer_dummy; hw->rom.validate_checksum = 
ngbe_rom_validate_checksum_dummy; hw->mac.init_hw = ngbe_mac_init_hw_dummy; hw->mac.reset_hw = ngbe_mac_reset_hw_dummy; diff --git a/drivers/net/ngbe/base/ngbe_eeprom.c b/drivers/net/ngbe/base/ngbe_eeprom.c index 9ae2f0badb..f9a876e9bd 100644 --- a/drivers/net/ngbe/base/ngbe_eeprom.c +++ b/drivers/net/ngbe/base/ngbe_eeprom.c @@ -161,6 +161,45 @@ void ngbe_release_eeprom_semaphore(struct ngbe_hw *hw) ngbe_flush(hw); } +/** + * ngbe_ee_read_buffer- Read EEPROM word(s) using hostif + * @hw: pointer to hardware structure + * @offset: offset of word in the EEPROM to read + * @words: number of words + * @data: word(s) read from the EEPROM + * + * Reads a 16 bit word(s) from the EEPROM using the hostif. + **/ +s32 ngbe_ee_readw_buffer(struct ngbe_hw *hw, + u32 offset, u32 words, void *data) +{ + const u32 mask = NGBE_MNGSEM_SWMBX | NGBE_MNGSEM_SWFLASH; + u32 addr = (offset << 1); + u32 len = (words << 1); + u8 *buf = (u8 *)data; + int err; + + err = hw->mac.acquire_swfw_sync(hw, mask); + if (err) + return err; + + while (len) { + u32 seg = (len <= NGBE_PMMBX_DATA_SIZE + ? len : NGBE_PMMBX_DATA_SIZE); + + err = ngbe_hic_sr_read(hw, addr, buf, seg); + if (err) + break; + + len -= seg; + addr += seg; + buf += seg; + } + + hw->mac.release_swfw_sync(hw, mask); + return err; +} + /** * ngbe_ee_read32 - Read EEPROM word using a host interface cmd * @hw: pointer to hardware structure @@ -185,6 +224,44 @@ s32 ngbe_ee_read32(struct ngbe_hw *hw, u32 addr, u32 *data) return err; } +/** + * ngbe_ee_write_buffer - Write EEPROM word(s) using hostif + * @hw: pointer to hardware structure + * @offset: offset of word in the EEPROM to write + * @words: number of words + * @data: word(s) write to the EEPROM + * + * Write a 16 bit word(s) to the EEPROM using the hostif. 
+ **/ +s32 ngbe_ee_writew_buffer(struct ngbe_hw *hw, + u32 offset, u32 words, void *data) +{ + const u32 mask = NGBE_MNGSEM_SWMBX | NGBE_MNGSEM_SWFLASH; + u32 addr = (offset << 1); + u32 len = (words << 1); + u8 *buf = (u8 *)data; + int err; + + err = hw->mac.acquire_swfw_sync(hw, mask); + if (err) + return err; + + while (len) { + u32 seg = (len <= NGBE_PMMBX_DATA_SIZE + ? len : NGBE_PMMBX_DATA_SIZE); + + err = ngbe_hic_sr_write(hw, addr, buf, seg); + if (err) + break; + + len -= seg; + buf += seg; + } + + hw->mac.release_swfw_sync(hw, mask); + return err; +} + /** * ngbe_validate_eeprom_checksum_em - Validate EEPROM checksum * @hw: pointer to hardware structure diff --git a/drivers/net/ngbe/base/ngbe_eeprom.h b/drivers/net/ngbe/base/ngbe_eeprom.h index 5f27425913..26ac686723 100644 --- a/drivers/net/ngbe/base/ngbe_eeprom.h +++ b/drivers/net/ngbe/base/ngbe_eeprom.h @@ -17,6 +17,11 @@ s32 ngbe_get_eeprom_semaphore(struct ngbe_hw *hw); void ngbe_release_eeprom_semaphore(struct ngbe_hw *hw); s32 ngbe_save_eeprom_version(struct ngbe_hw *hw); +s32 ngbe_ee_readw_buffer(struct ngbe_hw *hw, u32 offset, u32 words, + void *data); s32 ngbe_ee_read32(struct ngbe_hw *hw, u32 addr, u32 *data); +s32 ngbe_ee_writew_buffer(struct ngbe_hw *hw, u32 offset, u32 words, + void *data); + #endif /* _NGBE_EEPROM_H_ */ diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c index 476e5f25cf..218e612461 100644 --- a/drivers/net/ngbe/base/ngbe_hw.c +++ b/drivers/net/ngbe/base/ngbe_hw.c @@ -1920,7 +1920,9 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw) /* EEPROM */ rom->init_params = ngbe_init_eeprom_params; + rom->readw_buffer = ngbe_ee_readw_buffer; rom->read32 = ngbe_ee_read32; + rom->writew_buffer = ngbe_ee_writew_buffer; rom->validate_checksum = ngbe_validate_eeprom_checksum_em; mac->mcft_size = NGBE_EM_MC_TBL_SIZE; diff --git a/drivers/net/ngbe/base/ngbe_mng.c b/drivers/net/ngbe/base/ngbe_mng.c index 9416ea4c8d..a3dd8093ce 100644 --- a/drivers/net/ngbe/base/ngbe_mng.c 
+++ b/drivers/net/ngbe/base/ngbe_mng.c @@ -202,6 +202,47 @@ s32 ngbe_hic_sr_read(struct ngbe_hw *hw, u32 addr, u8 *buf, int len) return 0; } +/** + * ngbe_hic_sr_write - Write EEPROM word using hostif + * @hw: pointer to hardware structure + * @offset: offset of word in the EEPROM to write + * @data: word write to the EEPROM + * + * Write a 16 bit word to the EEPROM using the hostif. + **/ +s32 ngbe_hic_sr_write(struct ngbe_hw *hw, u32 addr, u8 *buf, int len) +{ + struct ngbe_hic_write_shadow_ram command; + u32 value; + int err = 0, i = 0, j = 0; + + if (len > NGBE_PMMBX_DATA_SIZE) + return NGBE_ERR_HOST_INTERFACE_COMMAND; + + memset(&command, 0, sizeof(command)); + command.hdr.req.cmd = FW_WRITE_SHADOW_RAM_CMD; + command.hdr.req.buf_lenh = 0; + command.hdr.req.buf_lenl = FW_WRITE_SHADOW_RAM_LEN; + command.hdr.req.checksum = FW_DEFAULT_CHECKSUM; + command.address = cpu_to_be32(addr); + command.length = cpu_to_be16(len); + + while (i < (len >> 2)) { + value = ((u32 *)buf)[i]; + wr32a(hw, NGBE_MNGMBX, FW_NVM_DATA_OFFSET + i, value); + i++; + } + + for (i <<= 2; i < len; i++) + ((u8 *)&value)[j++] = ((u8 *)buf)[i]; + + wr32a(hw, NGBE_MNGMBX, FW_NVM_DATA_OFFSET + (i >> 2), value); + + UNREFERENCED_PARAMETER(&command); + + return err; +} + s32 ngbe_hic_check_cap(struct ngbe_hw *hw) { struct ngbe_hic_read_shadow_ram command; diff --git a/drivers/net/ngbe/base/ngbe_mng.h b/drivers/net/ngbe/base/ngbe_mng.h index 6f368b028f..e3d0309cbc 100644 --- a/drivers/net/ngbe/base/ngbe_mng.h +++ b/drivers/net/ngbe/base/ngbe_mng.h @@ -18,6 +18,8 @@ #define FW_CEM_RESP_STATUS_SUCCESS 0x1 #define FW_READ_SHADOW_RAM_CMD 0x31 #define FW_READ_SHADOW_RAM_LEN 0x6 +#define FW_WRITE_SHADOW_RAM_CMD 0x33 +#define FW_WRITE_SHADOW_RAM_LEN 0xA /* 8 plus 1 WORD to write */ #define FW_DEFAULT_CHECKSUM 0xFF /* checksum always 0xFF */ #define FW_NVM_DATA_OFFSET 3 #define FW_EEPROM_CHECK_STATUS 0xE9 @@ -65,6 +67,17 @@ struct ngbe_hic_read_shadow_ram { u16 pad3; }; +struct ngbe_hic_write_shadow_ram { + 
union ngbe_hic_hdr2 hdr; + u32 address; + u16 length; + u16 pad2; + u16 data; + u16 pad3; +}; + s32 ngbe_hic_sr_read(struct ngbe_hw *hw, u32 addr, u8 *buf, int len); +s32 ngbe_hic_sr_write(struct ngbe_hw *hw, u32 addr, u8 *buf, int len); + s32 ngbe_hic_check_cap(struct ngbe_hw *hw); #endif /* _NGBE_MNG_H_ */ diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h index 886dffc0db..32d3ab5d03 100644 --- a/drivers/net/ngbe/base/ngbe_type.h +++ b/drivers/net/ngbe/base/ngbe_type.h @@ -231,7 +231,11 @@ typedef u8* (*ngbe_mc_addr_itr) (struct ngbe_hw *hw, u8 **mc_addr_ptr, struct ngbe_rom_info { s32 (*init_params)(struct ngbe_hw *hw); + s32 (*readw_buffer)(struct ngbe_hw *hw, u32 offset, u32 words, + void *data); s32 (*read32)(struct ngbe_hw *hw, u32 addr, u32 *data); + s32 (*writew_buffer)(struct ngbe_hw *hw, u32 offset, u32 words, + void *data); s32 (*validate_checksum)(struct ngbe_hw *hw, u16 *checksum_val); enum ngbe_eeprom_type type; diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c index 6ed836df9e..1cf4ca54af 100644 --- a/drivers/net/ngbe/ngbe_ethdev.c +++ b/drivers/net/ngbe/ngbe_ethdev.c @@ -2769,6 +2769,55 @@ ngbe_dev_set_mc_addr_list(struct rte_eth_dev *dev, ngbe_dev_addr_list_itr, TRUE); } +static int +ngbe_get_eeprom_length(struct rte_eth_dev *dev) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + + /* Return unit is byte count */ + return hw->rom.word_size * 2; +} + +static int +ngbe_get_eeprom(struct rte_eth_dev *dev, + struct rte_dev_eeprom_info *in_eeprom) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + struct ngbe_rom_info *eeprom = &hw->rom; + uint16_t *data = in_eeprom->data; + int first, length; + + first = in_eeprom->offset >> 1; + length = in_eeprom->length >> 1; + if (first > hw->rom.word_size || + ((first + length) > hw->rom.word_size)) + return -EINVAL; + + in_eeprom->magic = hw->vendor_id | (hw->device_id << 16); + + return eeprom->readw_buffer(hw, first, length, data); +} + +static int 
+ngbe_set_eeprom(struct rte_eth_dev *dev, + struct rte_dev_eeprom_info *in_eeprom) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + struct ngbe_rom_info *eeprom = &hw->rom; + uint16_t *data = in_eeprom->data; + int first, length; + + first = in_eeprom->offset >> 1; + length = in_eeprom->length >> 1; + if (first > hw->rom.word_size || + ((first + length) > hw->rom.word_size)) + return -EINVAL; + + in_eeprom->magic = hw->vendor_id | (hw->device_id << 16); + + return eeprom->writew_buffer(hw, first, length, data); +} + static const struct eth_dev_ops ngbe_eth_dev_ops = { .dev_configure = ngbe_dev_configure, .dev_infos_get = ngbe_dev_info_get, @@ -2819,6 +2868,9 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = { .rss_hash_update = ngbe_dev_rss_hash_update, .rss_hash_conf_get = ngbe_dev_rss_hash_conf_get, .set_mc_addr_list = ngbe_dev_set_mc_addr_list, + .get_eeprom_length = ngbe_get_eeprom_length, + .get_eeprom = ngbe_get_eeprom, + .set_eeprom = ngbe_set_eeprom, }; RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);

From patchwork Wed Sep 8 08:37:49 2021
From: Jiawen Wu
To: dev@dpdk.org
Date: Wed, 8 Sep 2021 16:37:49 +0800
Message-Id: <20210908083758.312055-24-jiawenwu@trustnetic.com>
In-Reply-To: <20210908083758.312055-1-jiawenwu@trustnetic.com>
Subject: [dpdk-dev] [PATCH 23/32] net/ngbe: support register dump

Support to dump registers.
Signed-off-by: Jiawen Wu --- doc/guides/nics/features/ngbe.ini | 1 + drivers/net/ngbe/base/ngbe_type.h | 1 + drivers/net/ngbe/ngbe_ethdev.c | 108 +++++++++++++++++++++++++++++ drivers/net/ngbe/ngbe_regs_group.h | 54 +++++++++++++++ 4 files changed, 164 insertions(+) create mode 100644 drivers/net/ngbe/ngbe_regs_group.h diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini index 3c169ab774..1d6399a2e7 100644 --- a/doc/guides/nics/features/ngbe.ini +++ b/doc/guides/nics/features/ngbe.ini @@ -36,6 +36,7 @@ Extended stats = Y Stats per queue = Y FW version = Y EEPROM dump = Y +Registers dump = Y Multiprocess aware = Y Linux = Y ARMv8 = Y diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h index 32d3ab5d03..12847b7272 100644 --- a/drivers/net/ngbe/base/ngbe_type.h +++ b/drivers/net/ngbe/base/ngbe_type.h @@ -398,6 +398,7 @@ struct ngbe_hw { u16 sub_device_id; u16 sub_system_id; u32 eeprom_id; + u8 revision_id; bool adapter_stopped; uint64_t isb_dma; diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c index 1cf4ca54af..4d94bc8b83 100644 --- a/drivers/net/ngbe/ngbe_ethdev.c +++ b/drivers/net/ngbe/ngbe_ethdev.c @@ -13,6 +13,67 @@ #include "ngbe.h" #include "ngbe_ethdev.h" #include "ngbe_rxtx.h" +#include "ngbe_regs_group.h" + +static const struct reg_info ngbe_regs_general[] = { + {NGBE_RST, 1, 1, "NGBE_RST"}, + {NGBE_STAT, 1, 1, "NGBE_STAT"}, + {NGBE_PORTCTL, 1, 1, "NGBE_PORTCTL"}, + {NGBE_GPIODATA, 1, 1, "NGBE_GPIODATA"}, + {NGBE_GPIOCTL, 1, 1, "NGBE_GPIOCTL"}, + {NGBE_LEDCTL, 1, 1, "NGBE_LEDCTL"}, + {0, 0, 0, ""} +}; + +static const struct reg_info ngbe_regs_nvm[] = { + {0, 0, 0, ""} +}; + +static const struct reg_info ngbe_regs_interrupt[] = { + {0, 0, 0, ""} +}; + +static const struct reg_info ngbe_regs_fctl_others[] = { + {0, 0, 0, ""} +}; + +static const struct reg_info ngbe_regs_rxdma[] = { + {0, 0, 0, ""} +}; + +static const struct reg_info ngbe_regs_rx[] = { + {0, 0, 0, ""} +}; + 
+static struct reg_info ngbe_regs_tx[] = { + {0, 0, 0, ""} +}; + +static const struct reg_info ngbe_regs_wakeup[] = { + {0, 0, 0, ""} +}; + +static const struct reg_info ngbe_regs_mac[] = { + {0, 0, 0, ""} +}; + +static const struct reg_info ngbe_regs_diagnostic[] = { + {0, 0, 0, ""}, +}; + +/* PF registers */ +static const struct reg_info *ngbe_regs_others[] = { + ngbe_regs_general, + ngbe_regs_nvm, + ngbe_regs_interrupt, + ngbe_regs_fctl_others, + ngbe_regs_rxdma, + ngbe_regs_rx, + ngbe_regs_tx, + ngbe_regs_wakeup, + ngbe_regs_mac, + ngbe_regs_diagnostic, + NULL}; static int ngbe_dev_close(struct rte_eth_dev *dev); static int ngbe_dev_link_update(struct rte_eth_dev *dev, @@ -2769,6 +2830,52 @@ ngbe_dev_set_mc_addr_list(struct rte_eth_dev *dev, ngbe_dev_addr_list_itr, TRUE); } +static int +ngbe_get_reg_length(struct rte_eth_dev *dev __rte_unused) +{ + int count = 0; + int g_ind = 0; + const struct reg_info *reg_group; + const struct reg_info **reg_set = ngbe_regs_others; + + while ((reg_group = reg_set[g_ind++])) + count += ngbe_regs_group_count(reg_group); + + return count; +} + +static int +ngbe_get_regs(struct rte_eth_dev *dev, + struct rte_dev_reg_info *regs) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + uint32_t *data = regs->data; + int g_ind = 0; + int count = 0; + const struct reg_info *reg_group; + const struct reg_info **reg_set = ngbe_regs_others; + + if (data == NULL) { + regs->length = ngbe_get_reg_length(dev); + regs->width = sizeof(uint32_t); + return 0; + } + + /* Support only full register dump */ + if (regs->length == 0 || + regs->length == (uint32_t)ngbe_get_reg_length(dev)) { + regs->version = hw->mac.type << 24 | + hw->revision_id << 16 | + hw->device_id; + while ((reg_group = reg_set[g_ind++])) + count += ngbe_read_regs_group(dev, &data[count], + reg_group); + return 0; + } + + return -ENOTSUP; +} + static int ngbe_get_eeprom_length(struct rte_eth_dev *dev) { @@ -2868,6 +2975,7 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = { 
.rss_hash_update = ngbe_dev_rss_hash_update, .rss_hash_conf_get = ngbe_dev_rss_hash_conf_get, .set_mc_addr_list = ngbe_dev_set_mc_addr_list, + .get_reg = ngbe_get_regs, .get_eeprom_length = ngbe_get_eeprom_length, .get_eeprom = ngbe_get_eeprom, .set_eeprom = ngbe_set_eeprom, diff --git a/drivers/net/ngbe/ngbe_regs_group.h b/drivers/net/ngbe/ngbe_regs_group.h new file mode 100644 index 0000000000..cc4b69fd54 --- /dev/null +++ b/drivers/net/ngbe/ngbe_regs_group.h @@ -0,0 +1,54 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2021 Beijing WangXun Technology Co., Ltd. + * Copyright(c) 2010-2017 Intel Corporation + */ + +#ifndef _NGBE_REGS_GROUP_H_ +#define _NGBE_REGS_GROUP_H_ + +#include "ngbe_ethdev.h" + +struct ngbe_hw; +struct reg_info { + uint32_t base_addr; + uint32_t count; + uint32_t stride; + const char *name; +}; + +static inline int +ngbe_read_regs(struct ngbe_hw *hw, const struct reg_info *reg, + uint32_t *reg_buf) +{ + unsigned int i; + + for (i = 0; i < reg->count; i++) + reg_buf[i] = rd32(hw, reg->base_addr + i * reg->stride); + return reg->count; +} + +static inline int +ngbe_regs_group_count(const struct reg_info *regs) +{ + int count = 0; + int i = 0; + + while (regs[i].count) + count += regs[i++].count; + return count; +} + +static inline int +ngbe_read_regs_group(struct rte_eth_dev *dev, uint32_t *reg_buf, + const struct reg_info *regs) +{ + int count = 0; + int i = 0; + struct ngbe_hw *hw = ngbe_dev_hw(dev); + + while (regs[i].count) + count += ngbe_read_regs(hw, &regs[i++], &reg_buf[count]); + return count; +} + +#endif /* _NGBE_REGS_GROUP_H_ */

From patchwork Wed Sep 8 08:37:50 2021
From: Jiawen Wu
To: dev@dpdk.org
Date: Wed, 8 Sep 2021 16:37:50 +0800
Message-Id: <20210908083758.312055-25-jiawenwu@trustnetic.com>
Subject: [dpdk-dev] [PATCH 24/32] net/ngbe: support timesync

Add support for IEEE1588/802.1AS timestamping, and IEEE1588 timestamp offload on Tx.
Signed-off-by: Jiawen Wu --- doc/guides/nics/features/ngbe.ini | 1 + doc/guides/nics/ngbe.rst | 1 + drivers/net/ngbe/ngbe_ethdev.c | 216 ++++++++++++++++++++++++++++++ drivers/net/ngbe/ngbe_ethdev.h | 10 ++ drivers/net/ngbe/ngbe_rxtx.c | 33 ++++- 5 files changed, 260 insertions(+), 1 deletion(-) diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini index 1d6399a2e7..c780f1aa68 100644 --- a/doc/guides/nics/features/ngbe.ini +++ b/doc/guides/nics/features/ngbe.ini @@ -31,6 +31,7 @@ L4 checksum offload = P Inner L3 checksum = P Inner L4 checksum = P Packet type parsing = Y +Timesync = Y Basic stats = Y Extended stats = Y Stats per queue = Y diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst index 09175e83cd..67fc7c89cc 100644 --- a/doc/guides/nics/ngbe.rst +++ b/doc/guides/nics/ngbe.rst @@ -26,6 +26,7 @@ Features - Link flow control - Interrupt mode for RX - Scattered and gather for TX and RX +- IEEE 1588 - FW version diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c index 4d94bc8b83..506b94168c 100644 --- a/drivers/net/ngbe/ngbe_ethdev.c +++ b/drivers/net/ngbe/ngbe_ethdev.c @@ -2830,6 +2830,215 @@ ngbe_dev_set_mc_addr_list(struct rte_eth_dev *dev, ngbe_dev_addr_list_itr, TRUE); } +static uint64_t +ngbe_read_systime_cyclecounter(struct rte_eth_dev *dev) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + uint64_t systime_cycles; + + systime_cycles = (uint64_t)rd32(hw, NGBE_TSTIMEL); + systime_cycles |= (uint64_t)rd32(hw, NGBE_TSTIMEH) << 32; + + return systime_cycles; +} + +static uint64_t +ngbe_read_rx_tstamp_cyclecounter(struct rte_eth_dev *dev) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + uint64_t rx_tstamp_cycles; + + /* TSRXSTMPL stores ns and TSRXSTMPH stores seconds. 
*/ + rx_tstamp_cycles = (uint64_t)rd32(hw, NGBE_TSRXSTMPL); + rx_tstamp_cycles |= (uint64_t)rd32(hw, NGBE_TSRXSTMPH) << 32; + + return rx_tstamp_cycles; +} + +static uint64_t +ngbe_read_tx_tstamp_cyclecounter(struct rte_eth_dev *dev) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + uint64_t tx_tstamp_cycles; + + /* TSTXSTMPL stores ns and TSTXSTMPH stores seconds. */ + tx_tstamp_cycles = (uint64_t)rd32(hw, NGBE_TSTXSTMPL); + tx_tstamp_cycles |= (uint64_t)rd32(hw, NGBE_TSTXSTMPH) << 32; + + return tx_tstamp_cycles; +} + +static void +ngbe_start_timecounters(struct rte_eth_dev *dev) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + struct ngbe_adapter *adapter = ngbe_dev_adapter(dev); + uint32_t incval = 0; + uint32_t shift = 0; + + incval = NGBE_INCVAL_1GB; + shift = NGBE_INCVAL_SHIFT_1GB; + + wr32(hw, NGBE_TSTIMEINC, NGBE_TSTIMEINC_IV(incval)); + + memset(&adapter->systime_tc, 0, sizeof(struct rte_timecounter)); + memset(&adapter->rx_tstamp_tc, 0, sizeof(struct rte_timecounter)); + memset(&adapter->tx_tstamp_tc, 0, sizeof(struct rte_timecounter)); + + adapter->systime_tc.cc_mask = NGBE_CYCLECOUNTER_MASK; + adapter->systime_tc.cc_shift = shift; + adapter->systime_tc.nsec_mask = (1ULL << shift) - 1; + + adapter->rx_tstamp_tc.cc_mask = NGBE_CYCLECOUNTER_MASK; + adapter->rx_tstamp_tc.cc_shift = shift; + adapter->rx_tstamp_tc.nsec_mask = (1ULL << shift) - 1; + + adapter->tx_tstamp_tc.cc_mask = NGBE_CYCLECOUNTER_MASK; + adapter->tx_tstamp_tc.cc_shift = shift; + adapter->tx_tstamp_tc.nsec_mask = (1ULL << shift) - 1; +} + +static int +ngbe_timesync_adjust_time(struct rte_eth_dev *dev, int64_t delta) +{ + struct ngbe_adapter *adapter = ngbe_dev_adapter(dev); + + adapter->systime_tc.nsec += delta; + adapter->rx_tstamp_tc.nsec += delta; + adapter->tx_tstamp_tc.nsec += delta; + + return 0; +} + +static int +ngbe_timesync_write_time(struct rte_eth_dev *dev, const struct timespec *ts) +{ + uint64_t ns; + struct ngbe_adapter *adapter = ngbe_dev_adapter(dev); + + ns = 
rte_timespec_to_ns(ts); + /* Set the timecounters to a new value. */ + adapter->systime_tc.nsec = ns; + adapter->rx_tstamp_tc.nsec = ns; + adapter->tx_tstamp_tc.nsec = ns; + + return 0; +} + +static int +ngbe_timesync_read_time(struct rte_eth_dev *dev, struct timespec *ts) +{ + uint64_t ns, systime_cycles; + struct ngbe_adapter *adapter = ngbe_dev_adapter(dev); + + systime_cycles = ngbe_read_systime_cyclecounter(dev); + ns = rte_timecounter_update(&adapter->systime_tc, systime_cycles); + *ts = rte_ns_to_timespec(ns); + + return 0; +} + +static int +ngbe_timesync_enable(struct rte_eth_dev *dev) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + uint32_t tsync_ctl; + + /* Stop the timesync system time. */ + wr32(hw, NGBE_TSTIMEINC, 0x0); + /* Reset the timesync system time value. */ + wr32(hw, NGBE_TSTIMEL, 0x0); + wr32(hw, NGBE_TSTIMEH, 0x0); + + ngbe_start_timecounters(dev); + + /* Enable L2 filtering of IEEE1588/802.1AS Ethernet frame types. */ + wr32(hw, NGBE_ETFLT(NGBE_ETF_ID_1588), + RTE_ETHER_TYPE_1588 | NGBE_ETFLT_ENA | NGBE_ETFLT_1588); + + /* Enable timestamping of received PTP packets. */ + tsync_ctl = rd32(hw, NGBE_TSRXCTL); + tsync_ctl |= NGBE_TSRXCTL_ENA; + wr32(hw, NGBE_TSRXCTL, tsync_ctl); + + /* Enable timestamping of transmitted PTP packets. */ + tsync_ctl = rd32(hw, NGBE_TSTXCTL); + tsync_ctl |= NGBE_TSTXCTL_ENA; + wr32(hw, NGBE_TSTXCTL, tsync_ctl); + + ngbe_flush(hw); + + return 0; +} + +static int +ngbe_timesync_disable(struct rte_eth_dev *dev) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + uint32_t tsync_ctl; + + /* Disable timestamping of transmitted PTP packets. */ + tsync_ctl = rd32(hw, NGBE_TSTXCTL); + tsync_ctl &= ~NGBE_TSTXCTL_ENA; + wr32(hw, NGBE_TSTXCTL, tsync_ctl); + + /* Disable timestamping of received PTP packets. */ + tsync_ctl = rd32(hw, NGBE_TSRXCTL); + tsync_ctl &= ~NGBE_TSRXCTL_ENA; + wr32(hw, NGBE_TSRXCTL, tsync_ctl); + + /* Disable L2 filtering of IEEE1588/802.1AS Ethernet frame types. 
*/ + wr32(hw, NGBE_ETFLT(NGBE_ETF_ID_1588), 0); + + /* Stop incrementing the System Time registers. */ + wr32(hw, NGBE_TSTIMEINC, 0); + + return 0; +} + +static int +ngbe_timesync_read_rx_timestamp(struct rte_eth_dev *dev, + struct timespec *timestamp, + uint32_t flags __rte_unused) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + struct ngbe_adapter *adapter = ngbe_dev_adapter(dev); + uint32_t tsync_rxctl; + uint64_t rx_tstamp_cycles; + uint64_t ns; + + tsync_rxctl = rd32(hw, NGBE_TSRXCTL); + if ((tsync_rxctl & NGBE_TSRXCTL_VLD) == 0) + return -EINVAL; + + rx_tstamp_cycles = ngbe_read_rx_tstamp_cyclecounter(dev); + ns = rte_timecounter_update(&adapter->rx_tstamp_tc, rx_tstamp_cycles); + *timestamp = rte_ns_to_timespec(ns); + + return 0; +} + +static int +ngbe_timesync_read_tx_timestamp(struct rte_eth_dev *dev, + struct timespec *timestamp) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + struct ngbe_adapter *adapter = ngbe_dev_adapter(dev); + uint32_t tsync_txctl; + uint64_t tx_tstamp_cycles; + uint64_t ns; + + tsync_txctl = rd32(hw, NGBE_TSTXCTL); + if ((tsync_txctl & NGBE_TSTXCTL_VLD) == 0) + return -EINVAL; + + tx_tstamp_cycles = ngbe_read_tx_tstamp_cyclecounter(dev); + ns = rte_timecounter_update(&adapter->tx_tstamp_tc, tx_tstamp_cycles); + *timestamp = rte_ns_to_timespec(ns); + + return 0; +} + static int ngbe_get_reg_length(struct rte_eth_dev *dev __rte_unused) { @@ -2975,10 +3184,17 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = { .rss_hash_update = ngbe_dev_rss_hash_update, .rss_hash_conf_get = ngbe_dev_rss_hash_conf_get, .set_mc_addr_list = ngbe_dev_set_mc_addr_list, + .timesync_enable = ngbe_timesync_enable, + .timesync_disable = ngbe_timesync_disable, + .timesync_read_rx_timestamp = ngbe_timesync_read_rx_timestamp, + .timesync_read_tx_timestamp = ngbe_timesync_read_tx_timestamp, .get_reg = ngbe_get_regs, .get_eeprom_length = ngbe_get_eeprom_length, .get_eeprom = ngbe_get_eeprom, .set_eeprom = ngbe_set_eeprom, + .timesync_adjust_time =
ngbe_timesync_adjust_time, + .timesync_read_time = ngbe_timesync_read_time, + .timesync_write_time = ngbe_timesync_write_time, }; RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd); diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h index c16c6568be..b6e623ab0f 100644 --- a/drivers/net/ngbe/ngbe_ethdev.h +++ b/drivers/net/ngbe/ngbe_ethdev.h @@ -7,6 +7,7 @@ #define _NGBE_ETHDEV_H_ #include "ngbe_ptypes.h" +#include #include #include @@ -107,6 +108,9 @@ struct ngbe_adapter { struct ngbe_vf_info *vfdata; struct ngbe_uta_info uta_info; bool rx_bulk_alloc_allowed; + struct rte_timecounter systime_tc; + struct rte_timecounter rx_tstamp_tc; + struct rte_timecounter tx_tstamp_tc; /* For RSS reta table update */ uint8_t rss_reta_updated; @@ -273,6 +277,12 @@ int ngbe_pf_host_configure(struct rte_eth_dev *eth_dev); #define NGBE_DEFAULT_TX_HTHRESH 0 #define NGBE_DEFAULT_TX_WTHRESH 0 +/* Additional timesync values. */ +#define NGBE_INCVAL_1GB 0x2000000 /* all speed is same in Emerald */ +#define NGBE_INCVAL_SHIFT_1GB 22 /* all speed is same in Emerald */ + +#define NGBE_CYCLECOUNTER_MASK 0xffffffffffffffffULL + /* store statistics names and its offset in stats structure */ struct rte_ngbe_xstats_name_off { char name[RTE_ETH_XSTATS_NAME_SIZE]; diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c index 91cafed7fc..e0ca4af9d9 100644 --- a/drivers/net/ngbe/ngbe_rxtx.c +++ b/drivers/net/ngbe/ngbe_rxtx.c @@ -15,6 +15,13 @@ #include "base/ngbe.h" #include "ngbe_ethdev.h" #include "ngbe_rxtx.h" + +#ifdef RTE_LIBRTE_IEEE1588 +#define NGBE_TX_IEEE1588_TMST PKT_TX_IEEE1588_TMST +#else +#define NGBE_TX_IEEE1588_TMST 0 +#endif + /* Bit Mask to indicate what bits required for building Tx context */ static const u64 NGBE_TX_OFFLOAD_MASK = (PKT_TX_IP_CKSUM | PKT_TX_OUTER_IPV6 | @@ -25,7 +32,9 @@ static const u64 NGBE_TX_OFFLOAD_MASK = (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK | PKT_TX_TCP_SEG | PKT_TX_TUNNEL_MASK | - PKT_TX_OUTER_IP_CKSUM); + 
PKT_TX_OUTER_IP_CKSUM | + NGBE_TX_IEEE1588_TMST); + #define NGBE_TX_OFFLOAD_NOTSUP_MASK \ (PKT_TX_OFFLOAD_MASK ^ NGBE_TX_OFFLOAD_MASK) @@ -730,6 +739,11 @@ ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, */ cmd_type_len = NGBE_TXD_FCS; +#ifdef RTE_LIBRTE_IEEE1588 + if (ol_flags & PKT_TX_IEEE1588_TMST) + cmd_type_len |= NGBE_TXD_1588; +#endif + olinfo_status = 0; if (tx_ol_req) { if (ol_flags & PKT_TX_TCP_SEG) { @@ -906,7 +920,20 @@ ngbe_rxd_pkt_info_to_pkt_flags(uint32_t pkt_info) PKT_RX_RSS_HASH, 0, 0, 0, 0, 0, 0, PKT_RX_FDIR, }; +#ifdef RTE_LIBRTE_IEEE1588 + static uint64_t ip_pkt_etqf_map[8] = { + 0, 0, 0, PKT_RX_IEEE1588_PTP, + 0, 0, 0, 0, + }; + int etfid = ngbe_etflt_id(NGBE_RXD_PTID(pkt_info)); + if (likely(-1 != etfid)) + return ip_pkt_etqf_map[etfid] | + ip_rss_types_map[NGBE_RXD_RSSTYPE(pkt_info)]; + else + return ip_rss_types_map[NGBE_RXD_RSSTYPE(pkt_info)]; +#else return ip_rss_types_map[NGBE_RXD_RSSTYPE(pkt_info)]; +#endif } static inline uint64_t @@ -923,6 +950,10 @@ rx_desc_status_to_pkt_flags(uint32_t rx_status, uint64_t vlan_flags) vlan_flags & PKT_RX_VLAN_STRIPPED) ? 
vlan_flags : 0; +#ifdef RTE_LIBRTE_IEEE1588 + if (rx_status & NGBE_RXD_STAT_1588) + pkt_flags = pkt_flags | PKT_RX_IEEE1588_TMST; +#endif return pkt_flags; }

From patchwork Wed Sep 8 08:37:51 2021
From: Jiawen Wu
To: dev@dpdk.org
Date: Wed, 8 Sep 2021 16:37:51 +0800
Message-Id: <20210908083758.312055-26-jiawenwu@trustnetic.com>
Subject: [dpdk-dev] [PATCH 25/32] net/ngbe: add Rx and Tx queue info get

Add Rx and Tx queue information get operations.

Signed-off-by: Jiawen Wu --- drivers/net/ngbe/ngbe_ethdev.c | 2 ++ drivers/net/ngbe/ngbe_ethdev.h | 6 ++++++ drivers/net/ngbe/ngbe_rxtx.c | 37 ++++++++++++++++++++++++++++++ 3 files changed, 45 insertions(+) diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c index 506b94168c..2d0c9e3453 100644 --- a/drivers/net/ngbe/ngbe_ethdev.c +++ b/drivers/net/ngbe/ngbe_ethdev.c @@ -3184,6 +3184,8 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = { .rss_hash_update = ngbe_dev_rss_hash_update, .rss_hash_conf_get = ngbe_dev_rss_hash_conf_get, .set_mc_addr_list = ngbe_dev_set_mc_addr_list, + .rxq_info_get = ngbe_rxq_info_get, + .txq_info_get = ngbe_txq_info_get, .timesync_enable = ngbe_timesync_enable, .timesync_disable = ngbe_timesync_disable, .timesync_read_rx_timestamp = ngbe_timesync_read_rx_timestamp, diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h index b6e623ab0f..98df1c3bf0 100644 --- a/drivers/net/ngbe/ngbe_ethdev.h +++ b/drivers/net/ngbe/ngbe_ethdev.h @@ -200,6 +200,12 @@ int ngbe_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id); int ngbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id); +void ngbe_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, + struct rte_eth_rxq_info *qinfo); + +void ngbe_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, + struct rte_eth_txq_info *qinfo); + uint16_t ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c index e0ca4af9d9..ac97eec1c0 100644 --- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c @@ -3092,3 +3092,40 @@ ngbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) return 0; } + +void +ngbe_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, + struct rte_eth_rxq_info *qinfo) +{ + struct ngbe_rx_queue *rxq; + + rxq = dev->data->rx_queues[queue_id]; + + qinfo->mp = rxq->mb_pool; + qinfo->scattered_rx = dev->data->scattered_rx; + qinfo->nb_desc = rxq->nb_rx_desc; + + qinfo->conf.rx_free_thresh = rxq->rx_free_thresh; + qinfo->conf.rx_drop_en = rxq->drop_en; + qinfo->conf.rx_deferred_start = rxq->rx_deferred_start; + qinfo->conf.offloads = rxq->offloads; +} + +void +ngbe_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, + struct rte_eth_txq_info *qinfo) +{ + struct ngbe_tx_queue *txq; + + txq = dev->data->tx_queues[queue_id]; + + qinfo->nb_desc = txq->nb_tx_desc; + + qinfo->conf.tx_thresh.pthresh = txq->pthresh; + qinfo->conf.tx_thresh.hthresh = txq->hthresh; + qinfo->conf.tx_thresh.wthresh = txq->wthresh; + + qinfo->conf.tx_free_thresh = txq->tx_free_thresh; + qinfo->conf.offloads = txq->offloads; + qinfo->conf.tx_deferred_start = txq->tx_deferred_start; +}

From patchwork Wed Sep 8 08:37:52 2021
From: Jiawen Wu
To: dev@dpdk.org
Date: Wed, 8 Sep 2021 16:37:52 +0800
Message-Id: <20210908083758.312055-27-jiawenwu@trustnetic.com>
Subject: [dpdk-dev] [PATCH 26/32] net/ngbe: add Rx and Tx descriptor status

Support getting the number of used Rx descriptors, and checking the status of Rx and Tx descriptors.
Signed-off-by: Jiawen Wu --- doc/guides/nics/features/ngbe.ini | 2 + drivers/net/ngbe/ngbe_ethdev.c | 3 ++ drivers/net/ngbe/ngbe_ethdev.h | 6 +++ drivers/net/ngbe/ngbe_rxtx.c | 73 +++++++++++++++++++++++++++++++ 4 files changed, 84 insertions(+) diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini index c780f1aa68..56d5d71ea8 100644 --- a/doc/guides/nics/features/ngbe.ini +++ b/doc/guides/nics/features/ngbe.ini @@ -32,6 +32,8 @@ Inner L3 checksum = P Inner L4 checksum = P Packet type parsing = Y Timesync = Y +Rx descriptor status = Y +Tx descriptor status = Y Basic stats = Y Extended stats = Y Stats per queue = Y diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c index 2d0c9e3453..ec652aa359 100644 --- a/drivers/net/ngbe/ngbe_ethdev.c +++ b/drivers/net/ngbe/ngbe_ethdev.c @@ -370,6 +370,9 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused) PMD_INIT_FUNC_TRACE(); eth_dev->dev_ops = &ngbe_eth_dev_ops; + eth_dev->rx_queue_count = ngbe_dev_rx_queue_count; + eth_dev->rx_descriptor_status = ngbe_dev_rx_descriptor_status; + eth_dev->tx_descriptor_status = ngbe_dev_tx_descriptor_status; eth_dev->rx_pkt_burst = &ngbe_recv_pkts; eth_dev->tx_pkt_burst = &ngbe_xmit_pkts; eth_dev->tx_pkt_prepare = &ngbe_prep_pkts; diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h index 98df1c3bf0..aacc0b68b2 100644 --- a/drivers/net/ngbe/ngbe_ethdev.h +++ b/drivers/net/ngbe/ngbe_ethdev.h @@ -181,6 +181,12 @@ int ngbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id, uint16_t nb_tx_desc, unsigned int socket_id, const struct rte_eth_txconf *tx_conf); +uint32_t ngbe_dev_rx_queue_count(struct rte_eth_dev *dev, + uint16_t rx_queue_id); + +int ngbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset); +int ngbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset); + int ngbe_dev_rx_init(struct rte_eth_dev *dev); void ngbe_dev_tx_init(struct rte_eth_dev *dev); diff 
--git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c index ac97eec1c0..0b31474193 100644 --- a/drivers/net/ngbe/ngbe_rxtx.c +++ b/drivers/net/ngbe/ngbe_rxtx.c @@ -2263,6 +2263,79 @@ ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev, return 0; } +uint32_t +ngbe_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id) +{ +#define NGBE_RXQ_SCAN_INTERVAL 4 + volatile struct ngbe_rx_desc *rxdp; + struct ngbe_rx_queue *rxq; + uint32_t desc = 0; + + rxq = dev->data->rx_queues[rx_queue_id]; + rxdp = &rxq->rx_ring[rxq->rx_tail]; + + while ((desc < rxq->nb_rx_desc) && + (rxdp->qw1.lo.status & + rte_cpu_to_le_32(NGBE_RXD_STAT_DD))) { + desc += NGBE_RXQ_SCAN_INTERVAL; + rxdp += NGBE_RXQ_SCAN_INTERVAL; + if (rxq->rx_tail + desc >= rxq->nb_rx_desc) + rxdp = &(rxq->rx_ring[rxq->rx_tail + + desc - rxq->nb_rx_desc]); + } + + return desc; +} + +int +ngbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset) +{ + struct ngbe_rx_queue *rxq = rx_queue; + volatile uint32_t *status; + uint32_t nb_hold, desc; + + if (unlikely(offset >= rxq->nb_rx_desc)) + return -EINVAL; + + nb_hold = rxq->nb_rx_hold; + if (offset >= rxq->nb_rx_desc - nb_hold) + return RTE_ETH_RX_DESC_UNAVAIL; + + desc = rxq->rx_tail + offset; + if (desc >= rxq->nb_rx_desc) + desc -= rxq->nb_rx_desc; + + status = &rxq->rx_ring[desc].qw1.lo.status; + if (*status & rte_cpu_to_le_32(NGBE_RXD_STAT_DD)) + return RTE_ETH_RX_DESC_DONE; + + return RTE_ETH_RX_DESC_AVAIL; +} + +int +ngbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset) +{ + struct ngbe_tx_queue *txq = tx_queue; + volatile uint32_t *status; + uint32_t desc; + + if (unlikely(offset >= txq->nb_tx_desc)) + return -EINVAL; + + desc = txq->tx_tail + offset; + if (desc >= txq->nb_tx_desc) { + desc -= txq->nb_tx_desc; + if (desc >= txq->nb_tx_desc) + desc -= txq->nb_tx_desc; + } + + status = &txq->tx_ring[desc].dw3; + if (*status & rte_cpu_to_le_32(NGBE_TXD_DD)) + return RTE_ETH_TX_DESC_DONE; + + return RTE_ETH_TX_DESC_FULL; +} + void 
ngbe_dev_clear_queues(struct rte_eth_dev *dev) {

From patchwork Wed Sep 8 08:37:53 2021
From: Jiawen Wu
To: dev@dpdk.org
Date: Wed, 8 Sep 2021 16:37:53 +0800
Message-Id: <20210908083758.312055-28-jiawenwu@trustnetic.com>
Subject: [dpdk-dev] [PATCH 27/32] net/ngbe: add Tx done cleanup

Add support for API rte_eth_tx_done_cleanup().

Signed-off-by: Jiawen Wu --- drivers/net/ngbe/ngbe_ethdev.c | 1 + drivers/net/ngbe/ngbe_rxtx.c | 89 ++++++++++++++++++++++++++++++++++ drivers/net/ngbe/ngbe_rxtx.h | 1 + 3 files changed, 91 insertions(+) diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c index ec652aa359..4eaf9b0724 100644 --- a/drivers/net/ngbe/ngbe_ethdev.c +++ b/drivers/net/ngbe/ngbe_ethdev.c @@ -3200,6 +3200,7 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = { .timesync_adjust_time = ngbe_timesync_adjust_time, .timesync_read_time = ngbe_timesync_read_time, .timesync_write_time = ngbe_timesync_write_time, + .tx_done_cleanup = ngbe_dev_tx_done_cleanup, }; RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd); diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c index 0b31474193..bee4f04616 100644 --- a/drivers/net/ngbe/ngbe_rxtx.c +++ b/drivers/net/ngbe/ngbe_rxtx.c @@ -1717,6 +1717,95 @@ ngbe_tx_queue_release_mbufs(struct ngbe_tx_queue *txq) } } +static int +ngbe_tx_done_cleanup_full(struct ngbe_tx_queue *txq, uint32_t free_cnt) +{ + struct ngbe_tx_entry *swr_ring = txq->sw_ring; + uint16_t i, tx_last, tx_id; + uint16_t nb_tx_free_last; + uint16_t nb_tx_to_clean; + uint32_t pkt_cnt; + + /* Start freeing mbufs from the entry after tx_tail */ + tx_last = txq->tx_tail; + tx_id = swr_ring[tx_last].next_id; + + if (txq->nb_tx_free == 0 && ngbe_xmit_cleanup(txq)) + return 0; + + nb_tx_to_clean = txq->nb_tx_free; + nb_tx_free_last = txq->nb_tx_free; + if (!free_cnt) + free_cnt = txq->nb_tx_desc; + + /* Loop through swr_ring to count the number of + * freeable mbufs and packets.
+     */
+    for (pkt_cnt = 0; pkt_cnt < free_cnt; ) {
+        for (i = 0; i < nb_tx_to_clean &&
+            pkt_cnt < free_cnt &&
+            tx_id != tx_last; i++) {
+            if (swr_ring[tx_id].mbuf != NULL) {
+                rte_pktmbuf_free_seg(swr_ring[tx_id].mbuf);
+                swr_ring[tx_id].mbuf = NULL;
+
+                /*
+                 * last segment in the packet,
+                 * increment packet count
+                 */
+                pkt_cnt += (swr_ring[tx_id].last_id == tx_id);
+            }
+
+            tx_id = swr_ring[tx_id].next_id;
+        }
+
+        if (pkt_cnt < free_cnt) {
+            if (ngbe_xmit_cleanup(txq))
+                break;
+
+            nb_tx_to_clean = txq->nb_tx_free - nb_tx_free_last;
+            nb_tx_free_last = txq->nb_tx_free;
+        }
+    }
+
+    return (int)pkt_cnt;
+}
+
+static int
+ngbe_tx_done_cleanup_simple(struct ngbe_tx_queue *txq,
+            uint32_t free_cnt)
+{
+    int i, n, cnt;
+
+    if (free_cnt == 0 || free_cnt > txq->nb_tx_desc)
+        free_cnt = txq->nb_tx_desc;
+
+    cnt = free_cnt - free_cnt % txq->tx_free_thresh;
+
+    for (i = 0; i < cnt; i += n) {
+        if (txq->nb_tx_desc - txq->nb_tx_free < txq->tx_free_thresh)
+            break;
+
+        n = ngbe_tx_free_bufs(txq);
+
+        if (n == 0)
+            break;
+    }
+
+    return i;
+}
+
+int
+ngbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
+{
+    struct ngbe_tx_queue *txq = (struct ngbe_tx_queue *)tx_queue;
+    if (txq->offloads == 0 &&
+        txq->tx_free_thresh >= RTE_PMD_NGBE_TX_MAX_BURST)
+        return ngbe_tx_done_cleanup_simple(txq, free_cnt);
+
+    return ngbe_tx_done_cleanup_full(txq, free_cnt);
+}
+
 static void
 ngbe_tx_free_swring(struct ngbe_tx_queue *txq)
 {

diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
index 812bc57c9e..d63b25c1aa 100644
--- a/drivers/net/ngbe/ngbe_rxtx.h
+++ b/drivers/net/ngbe/ngbe_rxtx.h
@@ -370,6 +370,7 @@ struct ngbe_txq_ops {
 void ngbe_set_tx_function(struct rte_eth_dev *dev, struct ngbe_tx_queue *txq);
 void ngbe_set_rx_function(struct rte_eth_dev *dev);
+int ngbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt);
 uint64_t ngbe_get_tx_port_offloads(struct rte_eth_dev *dev);
 uint64_t ngbe_get_rx_queue_offloads(struct rte_eth_dev *dev);

From patchwork Wed Sep 8
08:37:54 2021
From: Jiawen Wu
To: dev@dpdk.org
Date: Wed, 8 Sep 2021 16:37:54 +0800
Message-Id: <20210908083758.312055-29-jiawenwu@trustnetic.com>
Subject: [dpdk-dev] [PATCH 28/32] net/ngbe: add IPsec context creation

Initialize the security context, and support getting security capabilities.

Signed-off-by: Jiawen Wu

---
 doc/guides/nics/features/ngbe.ini |   1 +
 drivers/net/ngbe/meson.build      |   3 +-
 drivers/net/ngbe/ngbe_ethdev.c    |  10 ++
 drivers/net/ngbe/ngbe_ethdev.h    |   4 +
 drivers/net/ngbe/ngbe_ipsec.c     | 178 ++++++++++++++++++++++++++++++
 5 files changed, 195 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/ngbe/ngbe_ipsec.c

diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index 56d5d71ea8..facdb5f006 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -23,6 +23,7 @@ RSS reta update = Y
 SR-IOV = Y
 VLAN filter = Y
 Flow control = Y
+Inline crypto = Y
 CRC offload = P
 VLAN offload = P
 QinQ offload = P

diff --git a/drivers/net/ngbe/meson.build b/drivers/net/ngbe/meson.build
index b276ec3341..f222595b19 100644
--- a/drivers/net/ngbe/meson.build
+++ b/drivers/net/ngbe/meson.build
@@ -12,12 +12,13 @@ objs = [base_objs]

 sources = files(
         'ngbe_ethdev.c',
+        'ngbe_ipsec.c',
         'ngbe_ptypes.c',
         'ngbe_pf.c',
         'ngbe_rxtx.c',
 )

-deps += ['hash']
+deps += ['hash', 'security']

 includes += include_directories('base')

diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 4eaf9b0724..b0e0f7411e 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -430,6 +430,12 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
     /* Unlock any pending hardware semaphore */
     ngbe_swfw_lock_reset(hw);

+#ifdef RTE_LIB_SECURITY
+    /* Initialize security_ctx only for primary process */
+    if (ngbe_ipsec_ctx_create(eth_dev))
+        return -ENOMEM;
+#endif
+
     /* Get Hardware Flow Control setting */
     hw->fc.requested_mode = ngbe_fc_full;
     hw->fc.current_mode = ngbe_fc_full;

@@ -1282,6 +1288,10 @@ ngbe_dev_close(struct rte_eth_dev *dev)
     rte_free(dev->data->hash_mac_addrs);
     dev->data->hash_mac_addrs = NULL;

+#ifdef RTE_LIB_SECURITY
+    rte_free(dev->security_ctx);
+#endif
+
     return ret;
 }

diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index aacc0b68b2..9eda024d65 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -264,6 +264,10 @@
 void ngbe_pf_mbx_process(struct rte_eth_dev *eth_dev);

 int ngbe_pf_host_configure(struct rte_eth_dev *eth_dev);

+#ifdef RTE_LIB_SECURITY
+int ngbe_ipsec_ctx_create(struct rte_eth_dev *dev);
+#endif
+
 /* High threshold controlling when to start sending XOFF frames. */
 #define NGBE_FC_XOFF_HITH 128 /*KB*/
 /* Low threshold controlling when to start sending XON frames. */

diff --git a/drivers/net/ngbe/ngbe_ipsec.c b/drivers/net/ngbe/ngbe_ipsec.c
new file mode 100644
index 0000000000..5f8b0bab29
--- /dev/null
+++ b/drivers/net/ngbe/ngbe_ipsec.c
@@ -0,0 +1,178 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2021 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#include
+#include
+#include
+
+#include "base/ngbe.h"
+#include "ngbe_ethdev.h"
+
+static const struct rte_security_capability *
+ngbe_crypto_capabilities_get(void *device __rte_unused)
+{
+    static const struct rte_cryptodev_capabilities
+    aes_gcm_gmac_crypto_capabilities[] = {
+        { /* AES GMAC (128-bit) */
+            .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+            {.sym = {
+                .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+                {.auth = {
+                    .algo = RTE_CRYPTO_AUTH_AES_GMAC,
+                    .block_size = 16,
+                    .key_size = {
+                        .min = 16,
+                        .max = 16,
+                        .increment = 0
+                    },
+                    .digest_size = {
+                        .min = 16,
+                        .max = 16,
+                        .increment = 0
+                    },
+                    .iv_size = {
+                        .min = 12,
+                        .max = 12,
+                        .increment = 0
+                    }
+                }, }
+            }, }
+        },
+        { /* AES GCM (128-bit) */
+            .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+            {.sym = {
+                .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+                {.aead = {
+                    .algo = RTE_CRYPTO_AEAD_AES_GCM,
+                    .block_size = 16,
+                    .key_size = {
+                        .min = 16,
+                        .max = 16,
+                        .increment = 0
+                    },
+                    .digest_size = {
+                        .min = 16,
+                        .max = 16,
+                        .increment = 0
+                    },
+                    .aad_size = {
+                        .min = 0,
+                        .max = 65535,
+                        .increment = 1
+                    },
+                    .iv_size = {
+                        .min = 12,
+                        .max = 12,
+                        .increment = 0
+                    }
+                }, }
+            }, }
+        },
+        {
+            .op = RTE_CRYPTO_OP_TYPE_UNDEFINED,
+            {.sym = {
+                .xform_type = RTE_CRYPTO_SYM_XFORM_NOT_SPECIFIED
+            }, }
+        },
+    };

+    static const struct rte_security_capability
+    ngbe_security_capabilities[] = {
+        { /* IPsec Inline Crypto ESP Transport Egress */
+            .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+            .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+            {.ipsec = {
+                .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+                .mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
+                .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+                .options = { 0 }
+            } },
+            .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
+            .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
+        },
+        { /* IPsec Inline Crypto ESP Transport Ingress */
+            .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+            .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+            {.ipsec = {
+                .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+                .mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
+                .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+                .options = { 0 }
+            } },
+            .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
+            .ol_flags = 0
+        },
+        { /* IPsec Inline Crypto ESP Tunnel Egress */
+            .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+            .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+            {.ipsec = {
+                .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+                .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+                .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+                .options = { 0 }
+            } },
+            .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
+            .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
+        },
+        { /* IPsec Inline Crypto ESP Tunnel Ingress */
+            .action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+            .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+            {.ipsec = {
+                .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+                .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+                .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+                .options = { 0 }
+            } },
+            .crypto_capabilities = aes_gcm_gmac_crypto_capabilities,
+            .ol_flags = 0
+        },
+        {
+            .action = RTE_SECURITY_ACTION_TYPE_NONE
+        }
+    };

+    return ngbe_security_capabilities;
+}
+
+static struct rte_security_ops ngbe_security_ops = {
+    .capabilities_get = ngbe_crypto_capabilities_get
+};
+
+static int
+ngbe_crypto_capable(struct rte_eth_dev *dev)
+{
+    struct ngbe_hw *hw = ngbe_dev_hw(dev);
+    uint32_t reg_i, reg, capable = 1;
+    /* test if rx crypto can be enabled and then write back initial value */
+    reg_i = rd32(hw, NGBE_SECRXCTL);
+    wr32m(hw, NGBE_SECRXCTL, NGBE_SECRXCTL_ODSA, 0);
+    reg = rd32m(hw, NGBE_SECRXCTL, NGBE_SECRXCTL_ODSA);
+    if (reg != 0)
+        capable = 0;
+    wr32(hw, NGBE_SECRXCTL, reg_i);
+    return capable;
+}
+
+int
+ngbe_ipsec_ctx_create(struct rte_eth_dev *dev)
+{
+    struct rte_security_ctx *ctx = NULL;
+
+    if (ngbe_crypto_capable(dev)) {
+        ctx = rte_malloc("rte_security_instances_ops",
+                sizeof(struct rte_security_ctx), 0);
+        if (ctx) {
+            ctx->device = (void
*)dev;
+            ctx->ops = &ngbe_security_ops;
+            ctx->sess_cnt = 0;
+            dev->security_ctx = ctx;
+        } else {
+            return -ENOMEM;
+        }
+    }
+    if (rte_security_dynfield_register() < 0)
+        return -rte_errno;
+    return 0;
+}

From patchwork Wed Sep 8 08:37:55 2021
From: Jiawen Wu
To: dev@dpdk.org
Date: Wed, 8 Sep 2021 16:37:55 +0800
Message-Id: <20210908083758.312055-30-jiawenwu@trustnetic.com>
Subject: [dpdk-dev] [PATCH 29/32] net/ngbe: create and destroy security session

Support configuring a security session; add create and destroy operations for security sessions.

Signed-off-by: Jiawen Wu

---
 drivers/net/ngbe/ngbe_ethdev.h |   8 +
 drivers/net/ngbe/ngbe_ipsec.c  | 377 +++++++++++++++++++++++++++++++++
 drivers/net/ngbe/ngbe_ipsec.h  |  78 +++++++
 3 files changed, 463 insertions(+)
 create mode 100644 drivers/net/ngbe/ngbe_ipsec.h

diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index 9eda024d65..e8ce01e1f4 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -7,6 +7,9 @@
 #define _NGBE_ETHDEV_H_

 #include "ngbe_ptypes.h"
+#ifdef RTE_LIB_SECURITY
+#include "ngbe_ipsec.h"
+#endif
 #include
 #include
 #include

@@ -107,6 +110,9 @@ struct ngbe_adapter {
     struct ngbe_hwstrip hwstrip;
     struct ngbe_vf_info *vfdata;
     struct ngbe_uta_info uta_info;
+#ifdef RTE_LIB_SECURITY
+    struct ngbe_ipsec ipsec;
+#endif
     bool rx_bulk_alloc_allowed;
     struct rte_timecounter systime_tc;
     struct rte_timecounter rx_tstamp_tc;

@@ -160,6 +166,8 @@ ngbe_dev_intr(struct rte_eth_dev *dev)
 #define NGBE_DEV_UTA_INFO(dev) \
     (&((struct ngbe_adapter *)(dev)->data->dev_private)->uta_info)
+#define NGBE_DEV_IPSEC(dev) \
+    (&((struct ngbe_adapter *)(dev)->data->dev_private)->ipsec)

 /*
  * Rx/Tx function prototypes

diff --git a/drivers/net/ngbe/ngbe_ipsec.c b/drivers/net/ngbe/ngbe_ipsec.c
index 5f8b0bab29..80151d45dc 100644
--- a/drivers/net/ngbe/ngbe_ipsec.c
+++ b/drivers/net/ngbe/ngbe_ipsec.c
@@ -9,6 +9,381 @@
 #include "base/ngbe.h"
 #include "ngbe_ethdev.h"
+#include "ngbe_ipsec.h"
+
+#define CMP_IP(a, b) (\
+    (a).ipv6[0] == (b).ipv6[0] && \
+    (a).ipv6[1] ==
(b).ipv6[1] && \ + (a).ipv6[2] == (b).ipv6[2] && \ + (a).ipv6[3] == (b).ipv6[3]) + +static int +ngbe_crypto_add_sa(struct ngbe_crypto_session *ic_session) +{ + struct rte_eth_dev *dev = ic_session->dev; + struct ngbe_hw *hw = ngbe_dev_hw(dev); + struct ngbe_ipsec *priv = NGBE_DEV_IPSEC(dev); + uint32_t reg_val; + int sa_index = -1; + + if (ic_session->op == NGBE_OP_AUTHENTICATED_DECRYPTION) { + int i, ip_index = -1; + uint8_t *key; + + /* Find a match in the IP table*/ + for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) { + if (CMP_IP(priv->rx_ip_tbl[i].ip, + ic_session->dst_ip)) { + ip_index = i; + break; + } + } + /* If no match, find a free entry in the IP table*/ + if (ip_index < 0) { + for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) { + if (priv->rx_ip_tbl[i].ref_count == 0) { + ip_index = i; + break; + } + } + } + + /* Fail if no match and no free entries*/ + if (ip_index < 0) { + PMD_DRV_LOG(ERR, + "No free entry left in the Rx IP table\n"); + return -1; + } + + /* Find a free entry in the SA table*/ + for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) { + if (priv->rx_sa_tbl[i].used == 0) { + sa_index = i; + break; + } + } + /* Fail if no free entries*/ + if (sa_index < 0) { + PMD_DRV_LOG(ERR, + "No free entry left in the Rx SA table\n"); + return -1; + } + + priv->rx_ip_tbl[ip_index].ip.ipv6[0] = + ic_session->dst_ip.ipv6[0]; + priv->rx_ip_tbl[ip_index].ip.ipv6[1] = + ic_session->dst_ip.ipv6[1]; + priv->rx_ip_tbl[ip_index].ip.ipv6[2] = + ic_session->dst_ip.ipv6[2]; + priv->rx_ip_tbl[ip_index].ip.ipv6[3] = + ic_session->dst_ip.ipv6[3]; + priv->rx_ip_tbl[ip_index].ref_count++; + + priv->rx_sa_tbl[sa_index].spi = ic_session->spi; + priv->rx_sa_tbl[sa_index].ip_index = ip_index; + priv->rx_sa_tbl[sa_index].mode = IPSRXMOD_VALID; + if (ic_session->op == NGBE_OP_AUTHENTICATED_DECRYPTION) + priv->rx_sa_tbl[sa_index].mode |= + (IPSRXMOD_PROTO | IPSRXMOD_DECRYPT); + if (ic_session->dst_ip.type == IPv6) { + priv->rx_sa_tbl[sa_index].mode |= IPSRXMOD_IPV6; + 
priv->rx_ip_tbl[ip_index].ip.type = IPv6; + } else if (ic_session->dst_ip.type == IPv4) { + priv->rx_ip_tbl[ip_index].ip.type = IPv4; + } + priv->rx_sa_tbl[sa_index].used = 1; + + /* write IP table entry*/ + reg_val = NGBE_IPSRXIDX_ENA | NGBE_IPSRXIDX_WRITE | + NGBE_IPSRXIDX_TB_IP | (ip_index << 3); + if (priv->rx_ip_tbl[ip_index].ip.type == IPv4) { + uint32_t ipv4 = priv->rx_ip_tbl[ip_index].ip.ipv4; + wr32(hw, NGBE_IPSRXADDR(0), rte_cpu_to_be_32(ipv4)); + wr32(hw, NGBE_IPSRXADDR(1), 0); + wr32(hw, NGBE_IPSRXADDR(2), 0); + wr32(hw, NGBE_IPSRXADDR(3), 0); + } else { + wr32(hw, NGBE_IPSRXADDR(0), + priv->rx_ip_tbl[ip_index].ip.ipv6[0]); + wr32(hw, NGBE_IPSRXADDR(1), + priv->rx_ip_tbl[ip_index].ip.ipv6[1]); + wr32(hw, NGBE_IPSRXADDR(2), + priv->rx_ip_tbl[ip_index].ip.ipv6[2]); + wr32(hw, NGBE_IPSRXADDR(3), + priv->rx_ip_tbl[ip_index].ip.ipv6[3]); + } + wr32w(hw, NGBE_IPSRXIDX, reg_val, NGBE_IPSRXIDX_WRITE, 1000); + + /* write SPI table entry*/ + reg_val = NGBE_IPSRXIDX_ENA | NGBE_IPSRXIDX_WRITE | + NGBE_IPSRXIDX_TB_SPI | (sa_index << 3); + wr32(hw, NGBE_IPSRXSPI, + priv->rx_sa_tbl[sa_index].spi); + wr32(hw, NGBE_IPSRXADDRIDX, + priv->rx_sa_tbl[sa_index].ip_index); + wr32w(hw, NGBE_IPSRXIDX, reg_val, NGBE_IPSRXIDX_WRITE, 1000); + + /* write Key table entry*/ + key = malloc(ic_session->key_len); + if (!key) + return -ENOMEM; + + memcpy(key, ic_session->key, ic_session->key_len); + + reg_val = NGBE_IPSRXIDX_ENA | NGBE_IPSRXIDX_WRITE | + NGBE_IPSRXIDX_TB_KEY | (sa_index << 3); + wr32(hw, NGBE_IPSRXKEY(0), + rte_cpu_to_be_32(*(uint32_t *)&key[12])); + wr32(hw, NGBE_IPSRXKEY(1), + rte_cpu_to_be_32(*(uint32_t *)&key[8])); + wr32(hw, NGBE_IPSRXKEY(2), + rte_cpu_to_be_32(*(uint32_t *)&key[4])); + wr32(hw, NGBE_IPSRXKEY(3), + rte_cpu_to_be_32(*(uint32_t *)&key[0])); + wr32(hw, NGBE_IPSRXSALT, + rte_cpu_to_be_32(ic_session->salt)); + wr32(hw, NGBE_IPSRXMODE, + priv->rx_sa_tbl[sa_index].mode); + wr32w(hw, NGBE_IPSRXIDX, reg_val, NGBE_IPSRXIDX_WRITE, 1000); + + free(key); + } 
else { /* sess->dir == RTE_CRYPTO_OUTBOUND */ + uint8_t *key; + int i; + + /* Find a free entry in the SA table*/ + for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) { + if (priv->tx_sa_tbl[i].used == 0) { + sa_index = i; + break; + } + } + /* Fail if no free entries*/ + if (sa_index < 0) { + PMD_DRV_LOG(ERR, + "No free entry left in the Tx SA table\n"); + return -1; + } + + priv->tx_sa_tbl[sa_index].spi = + rte_cpu_to_be_32(ic_session->spi); + priv->tx_sa_tbl[i].used = 1; + ic_session->sa_index = sa_index; + + key = malloc(ic_session->key_len); + if (!key) + return -ENOMEM; + + memcpy(key, ic_session->key, ic_session->key_len); + + /* write Key table entry*/ + reg_val = NGBE_IPSRXIDX_ENA | + NGBE_IPSRXIDX_WRITE | (sa_index << 3); + wr32(hw, NGBE_IPSTXKEY(0), + rte_cpu_to_be_32(*(uint32_t *)&key[12])); + wr32(hw, NGBE_IPSTXKEY(1), + rte_cpu_to_be_32(*(uint32_t *)&key[8])); + wr32(hw, NGBE_IPSTXKEY(2), + rte_cpu_to_be_32(*(uint32_t *)&key[4])); + wr32(hw, NGBE_IPSTXKEY(3), + rte_cpu_to_be_32(*(uint32_t *)&key[0])); + wr32(hw, NGBE_IPSTXSALT, + rte_cpu_to_be_32(ic_session->salt)); + wr32w(hw, NGBE_IPSTXIDX, reg_val, NGBE_IPSTXIDX_WRITE, 1000); + + free(key); + } + + return 0; +} + +static int +ngbe_crypto_remove_sa(struct rte_eth_dev *dev, + struct ngbe_crypto_session *ic_session) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + struct ngbe_ipsec *priv = NGBE_DEV_IPSEC(dev); + uint32_t reg_val; + int sa_index = -1; + + if (ic_session->op == NGBE_OP_AUTHENTICATED_DECRYPTION) { + int i, ip_index = -1; + + /* Find a match in the IP table*/ + for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) { + if (CMP_IP(priv->rx_ip_tbl[i].ip, ic_session->dst_ip)) { + ip_index = i; + break; + } + } + + /* Fail if no match*/ + if (ip_index < 0) { + PMD_DRV_LOG(ERR, + "Entry not found in the Rx IP table\n"); + return -1; + } + + /* Find a free entry in the SA table*/ + for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) { + if (priv->rx_sa_tbl[i].spi == + rte_cpu_to_be_32(ic_session->spi)) { + sa_index = i; + break; 
+            }
+        }
+        /* Fail if no match */
+        if (sa_index < 0) {
+            PMD_DRV_LOG(ERR,
+                "Entry not found in the Rx SA table\n");
+            return -1;
+        }
+
+        /* Disable and clear Rx SPI and key table entries */
+        reg_val = NGBE_IPSRXIDX_WRITE |
+            NGBE_IPSRXIDX_TB_SPI | (sa_index << 3);
+        wr32(hw, NGBE_IPSRXSPI, 0);
+        wr32(hw, NGBE_IPSRXADDRIDX, 0);
+        wr32w(hw, NGBE_IPSRXIDX, reg_val, NGBE_IPSRXIDX_WRITE, 1000);
+        reg_val = NGBE_IPSRXIDX_WRITE |
+            NGBE_IPSRXIDX_TB_KEY | (sa_index << 3);
+        wr32(hw, NGBE_IPSRXKEY(0), 0);
+        wr32(hw, NGBE_IPSRXKEY(1), 0);
+        wr32(hw, NGBE_IPSRXKEY(2), 0);
+        wr32(hw, NGBE_IPSRXKEY(3), 0);
+        wr32(hw, NGBE_IPSRXSALT, 0);
+        wr32(hw, NGBE_IPSRXMODE, 0);
+        wr32w(hw, NGBE_IPSRXIDX, reg_val, NGBE_IPSRXIDX_WRITE, 1000);
+        priv->rx_sa_tbl[sa_index].used = 0;
+
+        /* If last used then clear the IP table entry */
+        priv->rx_ip_tbl[ip_index].ref_count--;
+        if (priv->rx_ip_tbl[ip_index].ref_count == 0) {
+            reg_val = NGBE_IPSRXIDX_WRITE | NGBE_IPSRXIDX_TB_IP |
+                (ip_index << 3);
+            wr32(hw, NGBE_IPSRXADDR(0), 0);
+            wr32(hw, NGBE_IPSRXADDR(1), 0);
+            wr32(hw, NGBE_IPSRXADDR(2), 0);
+            wr32(hw, NGBE_IPSRXADDR(3), 0);
+        }
+    } else { /* session->dir == RTE_CRYPTO_OUTBOUND */
+        int i;
+
+        /* Find a match in the SA table */
+        for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) {
+            if (priv->tx_sa_tbl[i].spi ==
+                rte_cpu_to_be_32(ic_session->spi)) {
+                sa_index = i;
+                break;
+            }
+        }
+        /* Fail if no matching entry */
+        if (sa_index < 0) {
+            PMD_DRV_LOG(ERR,
+                "Entry not found in the Tx SA table\n");
+            return -1;
+        }
+        reg_val = NGBE_IPSRXIDX_WRITE | (sa_index << 3);
+        wr32(hw, NGBE_IPSTXKEY(0), 0);
+        wr32(hw, NGBE_IPSTXKEY(1), 0);
+        wr32(hw, NGBE_IPSTXKEY(2), 0);
+        wr32(hw, NGBE_IPSTXKEY(3), 0);
+        wr32(hw, NGBE_IPSTXSALT, 0);
+        wr32w(hw, NGBE_IPSTXIDX, reg_val, NGBE_IPSTXIDX_WRITE, 1000);
+
+        priv->tx_sa_tbl[sa_index].used = 0;
+    }
+
+    return 0;
+}
+
+static int
+ngbe_crypto_create_session(void *device,
+        struct rte_security_session_conf *conf,
+        struct rte_security_session *session,
+        struct
rte_mempool *mempool)
+{
+    struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;
+    struct ngbe_crypto_session *ic_session = NULL;
+    struct rte_crypto_aead_xform *aead_xform;
+    struct rte_eth_conf *dev_conf = &eth_dev->data->dev_conf;
+
+    if (rte_mempool_get(mempool, (void **)&ic_session)) {
+        PMD_DRV_LOG(ERR, "Cannot get object from ic_session mempool");
+        return -ENOMEM;
+    }
+
+    if (conf->crypto_xform->type != RTE_CRYPTO_SYM_XFORM_AEAD ||
+            conf->crypto_xform->aead.algo !=
+                RTE_CRYPTO_AEAD_AES_GCM) {
+        PMD_DRV_LOG(ERR, "Unsupported crypto transformation mode\n");
+        rte_mempool_put(mempool, (void *)ic_session);
+        return -ENOTSUP;
+    }
+    aead_xform = &conf->crypto_xform->aead;
+
+    if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
+        if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) {
+            ic_session->op = NGBE_OP_AUTHENTICATED_DECRYPTION;
+        } else {
+            PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
+            rte_mempool_put(mempool, (void *)ic_session);
+            return -ENOTSUP;
+        }
+    } else {
+        if (dev_conf->txmode.offloads & DEV_TX_OFFLOAD_SECURITY) {
+            ic_session->op = NGBE_OP_AUTHENTICATED_ENCRYPTION;
+        } else {
+            PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
+            rte_mempool_put(mempool, (void *)ic_session);
+            return -ENOTSUP;
+        }
+    }
+
+    ic_session->key = aead_xform->key.data;
+    ic_session->key_len = aead_xform->key.length;
+    memcpy(&ic_session->salt,
+        &aead_xform->key.data[aead_xform->key.length], 4);
+    ic_session->spi = conf->ipsec.spi;
+    ic_session->dev = eth_dev;
+
+    set_sec_session_private_data(session, ic_session);
+
+    if (ic_session->op == NGBE_OP_AUTHENTICATED_ENCRYPTION) {
+        if (ngbe_crypto_add_sa(ic_session)) {
+            PMD_DRV_LOG(ERR, "Failed to add SA\n");
+            rte_mempool_put(mempool, (void *)ic_session);
+            return -EPERM;
+        }
+    }
+
+    return 0;
+}
+
+static int
+ngbe_crypto_remove_session(void *device,
+        struct rte_security_session *session)
+{
+    struct rte_eth_dev *eth_dev = device;
+    struct ngbe_crypto_session *ic_session =
+        (struct
ngbe_crypto_session *) + get_sec_session_private_data(session); + struct rte_mempool *mempool = rte_mempool_from_obj(ic_session); + + if (eth_dev != ic_session->dev) { + PMD_DRV_LOG(ERR, "Session not bound to this device\n"); + return -ENODEV; + } + + if (ngbe_crypto_remove_sa(eth_dev, ic_session)) { + PMD_DRV_LOG(ERR, "Failed to remove session\n"); + return -EFAULT; + } + + rte_mempool_put(mempool, (void *)ic_session); + + return 0; +} static const struct rte_security_capability * ngbe_crypto_capabilities_get(void *device __rte_unused) @@ -137,6 +512,8 @@ ngbe_crypto_capabilities_get(void *device __rte_unused) } static struct rte_security_ops ngbe_security_ops = { + .session_create = ngbe_crypto_create_session, + .session_destroy = ngbe_crypto_remove_session, .capabilities_get = ngbe_crypto_capabilities_get }; diff --git a/drivers/net/ngbe/ngbe_ipsec.h b/drivers/net/ngbe/ngbe_ipsec.h new file mode 100644 index 0000000000..8442bb2157 --- /dev/null +++ b/drivers/net/ngbe/ngbe_ipsec.h @@ -0,0 +1,78 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2021 Beijing WangXun Technology Co., Ltd. + * Copyright(c) 2010-2017 Intel Corporation + */ + +#ifndef NGBE_IPSEC_H_ +#define NGBE_IPSEC_H_ + +#include +#include +#include + +#define IPSRXMOD_VALID 0x00000001 +#define IPSRXMOD_PROTO 0x00000004 +#define IPSRXMOD_DECRYPT 0x00000008 +#define IPSRXMOD_IPV6 0x00000010 + +#define IPSEC_MAX_RX_IP_COUNT 16 +#define IPSEC_MAX_SA_COUNT 16 + +enum ngbe_operation { + NGBE_OP_AUTHENTICATED_ENCRYPTION, + NGBE_OP_AUTHENTICATED_DECRYPTION +}; + +/** + * Generic IP address structure + * TODO: Find better location for this rte_net.h possibly. 
+ **/
+struct ipaddr {
+    enum ipaddr_type {
+        IPv4,
+        IPv6
+    } type;
+    /**< IP Address Type - IPv4/IPv6 */
+
+    union {
+        uint32_t ipv4;
+        uint32_t ipv6[4];
+    };
+};
+
+/** inline crypto private session structure */
+struct ngbe_crypto_session {
+    enum ngbe_operation op;
+    const uint8_t *key;
+    uint32_t key_len;
+    uint32_t salt;
+    uint32_t sa_index;
+    uint32_t spi;
+    struct ipaddr src_ip;
+    struct ipaddr dst_ip;
+    struct rte_eth_dev *dev;
+} __rte_cache_aligned;
+
+struct ngbe_crypto_rx_ip_table {
+    struct ipaddr ip;
+    uint16_t ref_count;
+};
+
+struct ngbe_crypto_rx_sa_table {
+    uint32_t spi;
+    uint32_t ip_index;
+    uint8_t mode;
+    uint8_t used;
+};
+
+struct ngbe_crypto_tx_sa_table {
+    uint32_t spi;
+    uint8_t used;
+};
+
+struct ngbe_ipsec {
+    struct ngbe_crypto_rx_ip_table rx_ip_tbl[IPSEC_MAX_RX_IP_COUNT];
+    struct ngbe_crypto_rx_sa_table rx_sa_tbl[IPSEC_MAX_SA_COUNT];
+    struct ngbe_crypto_tx_sa_table tx_sa_tbl[IPSEC_MAX_SA_COUNT];
+};
+
+#endif /*NGBE_IPSEC_H_*/

From patchwork Wed Sep 8 08:37:56 2021
From: Jiawen Wu
To: dev@dpdk.org
Date: Wed, 8 Sep 2021 16:37:56 +0800
Message-Id: <20210908083758.312055-31-jiawenwu@trustnetic.com>
Subject: [dpdk-dev] [PATCH 30/32] net/ngbe: support security operations

Support updating a security session, and clearing security session statistics.
Signed-off-by: Jiawen Wu

---
 drivers/net/ngbe/ngbe_ipsec.c | 41 +++++++++++++++++++++++++++++++++
 drivers/net/ngbe/ngbe_ipsec.h | 15 +++++++++++++
 2 files changed, 56 insertions(+)

diff --git a/drivers/net/ngbe/ngbe_ipsec.c b/drivers/net/ngbe/ngbe_ipsec.c
index 80151d45dc..cc79d7d88f 100644
--- a/drivers/net/ngbe/ngbe_ipsec.c
+++ b/drivers/net/ngbe/ngbe_ipsec.c
@@ -360,6 +360,12 @@ ngbe_crypto_create_session(void *device,
     return 0;
 }

+static unsigned int
+ngbe_crypto_session_get_size(__rte_unused void *device)
+{
+    return sizeof(struct ngbe_crypto_session);
+}
+
 static int
 ngbe_crypto_remove_session(void *device,
         struct rte_security_session *session)

@@ -385,6 +391,39 @@ ngbe_crypto_remove_session(void *device,
     return 0;
 }

+static inline uint8_t
+ngbe_crypto_compute_pad_len(struct rte_mbuf *m)
+{
+    if (m->nb_segs == 1) {
+        /* 16 bytes ICV + 2 bytes ESP trailer + payload padding size
+         * payload padding size is stored at
+         */
+        uint8_t *esp_pad_len = rte_pktmbuf_mtod_offset(m, uint8_t *,
+                    rte_pktmbuf_pkt_len(m) -
+                    (ESP_TRAILER_SIZE + ESP_ICV_SIZE));
+        return *esp_pad_len + ESP_TRAILER_SIZE + ESP_ICV_SIZE;
+    }
+    return 0;
+}
+
+static int
+ngbe_crypto_update_mb(void *device __rte_unused,
+        struct rte_security_session *session,
+        struct rte_mbuf *m, void *params __rte_unused)
+{
+    struct ngbe_crypto_session *ic_session =
+            get_sec_session_private_data(session);
+    if (ic_session->op == NGBE_OP_AUTHENTICATED_ENCRYPTION) {
+        union ngbe_crypto_tx_desc_md *mdata =
+            (union ngbe_crypto_tx_desc_md *)
+                rte_security_dynfield(m);
+        mdata->enc = 1;
+        mdata->sa_idx = ic_session->sa_index;
+        mdata->pad_len = ngbe_crypto_compute_pad_len(m);
+    }
+    return 0;
+}
+
 static const struct rte_security_capability *
 ngbe_crypto_capabilities_get(void *device __rte_unused)
 {

@@ -513,7 +552,9 @@ ngbe_crypto_capabilities_get(void *device __rte_unused)

 static struct rte_security_ops ngbe_security_ops = {
     .session_create = ngbe_crypto_create_session,
+    .session_get_size =
ngbe_crypto_session_get_size,
     .session_destroy = ngbe_crypto_remove_session,
+    .set_pkt_metadata = ngbe_crypto_update_mb,
     .capabilities_get = ngbe_crypto_capabilities_get
 };

diff --git a/drivers/net/ngbe/ngbe_ipsec.h b/drivers/net/ngbe/ngbe_ipsec.h
index 8442bb2157..fa5f21027b 100644
--- a/drivers/net/ngbe/ngbe_ipsec.h
+++ b/drivers/net/ngbe/ngbe_ipsec.h
@@ -18,6 +18,9 @@
 #define IPSEC_MAX_RX_IP_COUNT 16
 #define IPSEC_MAX_SA_COUNT 16

+#define ESP_ICV_SIZE 16
+#define ESP_TRAILER_SIZE 2
+
 enum ngbe_operation {
     NGBE_OP_AUTHENTICATED_ENCRYPTION,
     NGBE_OP_AUTHENTICATED_DECRYPTION

@@ -69,6 +72,18 @@ struct ngbe_crypto_tx_sa_table {
     uint8_t used;
 };

+union ngbe_crypto_tx_desc_md {
+    uint64_t data;
+    struct {
+        /**< SA table index */
+        uint32_t sa_idx;
+        /**< ICV and ESP trailer length */
+        uint8_t pad_len;
+        /**< enable encryption */
+        uint8_t enc;
+    };
+};
+
 struct ngbe_ipsec {
     struct ngbe_crypto_rx_ip_table rx_ip_tbl[IPSEC_MAX_RX_IP_COUNT];
     struct ngbe_crypto_rx_sa_table rx_sa_tbl[IPSEC_MAX_SA_COUNT];

From patchwork Wed Sep 8 08:37:57 2021
01400000002000E0G000B00A0000000 X-QQ-FEAT: xfGhp5cbJEeWl8e5KN4RzOTx70tMhXFhpyDUCZs/rrzq7kB0oy9QUkzyq/gAA O2gI2u1RH+7jtY9tTWhvq9xfJTNOiDHi9Me86xVCAcRlEz5RjCQlTqWX3DRRcDzFX1UEAwv 68AHglpcwFEhCncsWEZslVbS+l1xo/B/yFyQ1I50on+WoN5tx/ylJqhvuXCDkHpQWC8Ltqa VjwO8jv6fBsISGtMv2FYviIxbLnFXOw/ZDSMYRcIeqxHqbjUu9UrbTCU1wEw4IIWOc36iJJ AgPufqmZhnFcJNje4iDU27PNbwTdC4olQclPJYhycpB/6QpgfQ5p9G9hUasTrIxL1hB+VBQ 8vA1LLkocUgLLQNIdGhh5Whkp+qoEThbYsFy1NP X-QQ-GoodBg: 2 From: Jiawen Wu To: dev@dpdk.org Cc: Jiawen Wu Date: Wed, 8 Sep 2021 16:37:57 +0800 Message-Id: <20210908083758.312055-32-jiawenwu@trustnetic.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20210908083758.312055-1-jiawenwu@trustnetic.com> References: <20210908083758.312055-1-jiawenwu@trustnetic.com> MIME-Version: 1.0 X-QQ-SENDSIZE: 520 Feedback-ID: bizesmtp:trustnetic.com:qybgforeign:qybgforeign5 X-QQ-Bgrelay: 1 Subject: [dpdk-dev] [PATCH 31/32] net/ngbe: add security offload in Rx and Tx X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add security offload in Rx and Tx process. 
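The trailer arithmetic in ngbe_crypto_compute_pad_len() from the previous patch can be illustrated on a plain byte buffer. A minimal sketch, assuming the driver's 16-byte ICV and 2-byte ESP trailer constants; esp_pad_len() is a hypothetical stand-in that mirrors the driver's mbuf offset math without any DPDK types:

```c
#include <stddef.h>
#include <stdint.h>

#define ESP_ICV_SIZE 16    /* ICV length assumed by the driver */
#define ESP_TRAILER_SIZE 2 /* pad-length byte + next-header byte */

/* Hypothetical mirror of ngbe_crypto_compute_pad_len() on a flat buffer:
 * the pad-length byte sits (ESP_TRAILER_SIZE + ESP_ICV_SIZE) bytes from
 * the end of the frame, and the driver reports pad + trailer + ICV. */
static uint8_t esp_pad_len(const uint8_t *pkt, size_t pkt_len)
{
	uint8_t pad = pkt[pkt_len - (ESP_TRAILER_SIZE + ESP_ICV_SIZE)];

	return pad + ESP_TRAILER_SIZE + ESP_ICV_SIZE;
}
```

Note the driver only applies this for single-segment mbufs (nb_segs == 1), since the offset read assumes the trailer is contiguous.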
Signed-off-by: Jiawen Wu --- drivers/net/ngbe/ngbe_ipsec.c | 106 ++++++++++++++++++++++++++++++++++ drivers/net/ngbe/ngbe_ipsec.h | 2 + drivers/net/ngbe/ngbe_rxtx.c | 91 ++++++++++++++++++++++++++++- drivers/net/ngbe/ngbe_rxtx.h | 14 ++++- 4 files changed, 210 insertions(+), 3 deletions(-) diff --git a/drivers/net/ngbe/ngbe_ipsec.c b/drivers/net/ngbe/ngbe_ipsec.c index cc79d7d88f..54e05a834f 100644 --- a/drivers/net/ngbe/ngbe_ipsec.c +++ b/drivers/net/ngbe/ngbe_ipsec.c @@ -17,6 +17,55 @@ (a).ipv6[2] == (b).ipv6[2] && \ (a).ipv6[3] == (b).ipv6[3]) +static void +ngbe_crypto_clear_ipsec_tables(struct rte_eth_dev *dev) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + struct ngbe_ipsec *priv = NGBE_DEV_IPSEC(dev); + int i = 0; + + /* clear Rx IP table*/ + for (i = 0; i < IPSEC_MAX_RX_IP_COUNT; i++) { + uint16_t index = i << 3; + uint32_t reg_val = NGBE_IPSRXIDX_WRITE | + NGBE_IPSRXIDX_TB_IP | index; + wr32(hw, NGBE_IPSRXADDR(0), 0); + wr32(hw, NGBE_IPSRXADDR(1), 0); + wr32(hw, NGBE_IPSRXADDR(2), 0); + wr32(hw, NGBE_IPSRXADDR(3), 0); + wr32w(hw, NGBE_IPSRXIDX, reg_val, NGBE_IPSRXIDX_WRITE, 1000); + } + + /* clear Rx SPI and Rx/Tx SA tables*/ + for (i = 0; i < IPSEC_MAX_SA_COUNT; i++) { + uint32_t index = i << 3; + uint32_t reg_val = NGBE_IPSRXIDX_WRITE | + NGBE_IPSRXIDX_TB_SPI | index; + wr32(hw, NGBE_IPSRXSPI, 0); + wr32(hw, NGBE_IPSRXADDRIDX, 0); + wr32w(hw, NGBE_IPSRXIDX, reg_val, NGBE_IPSRXIDX_WRITE, 1000); + reg_val = NGBE_IPSRXIDX_WRITE | NGBE_IPSRXIDX_TB_KEY | index; + wr32(hw, NGBE_IPSRXKEY(0), 0); + wr32(hw, NGBE_IPSRXKEY(1), 0); + wr32(hw, NGBE_IPSRXKEY(2), 0); + wr32(hw, NGBE_IPSRXKEY(3), 0); + wr32(hw, NGBE_IPSRXSALT, 0); + wr32(hw, NGBE_IPSRXMODE, 0); + wr32w(hw, NGBE_IPSRXIDX, reg_val, NGBE_IPSRXIDX_WRITE, 1000); + reg_val = NGBE_IPSTXIDX_WRITE | index; + wr32(hw, NGBE_IPSTXKEY(0), 0); + wr32(hw, NGBE_IPSTXKEY(1), 0); + wr32(hw, NGBE_IPSTXKEY(2), 0); + wr32(hw, NGBE_IPSTXKEY(3), 0); + wr32(hw, NGBE_IPSTXSALT, 0); + wr32w(hw, NGBE_IPSTXIDX, reg_val, 
NGBE_IPSTXIDX_WRITE, 1000); + } + + memset(priv->rx_ip_tbl, 0, sizeof(priv->rx_ip_tbl)); + memset(priv->rx_sa_tbl, 0, sizeof(priv->rx_sa_tbl)); + memset(priv->tx_sa_tbl, 0, sizeof(priv->tx_sa_tbl)); +} + static int ngbe_crypto_add_sa(struct ngbe_crypto_session *ic_session) { @@ -550,6 +599,63 @@ ngbe_crypto_capabilities_get(void *device __rte_unused) return ngbe_security_capabilities; } +int +ngbe_crypto_enable_ipsec(struct rte_eth_dev *dev) +{ + struct ngbe_hw *hw = ngbe_dev_hw(dev); + uint32_t reg; + uint64_t rx_offloads; + uint64_t tx_offloads; + + rx_offloads = dev->data->dev_conf.rxmode.offloads; + tx_offloads = dev->data->dev_conf.txmode.offloads; + + /* sanity checks */ + if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) { + PMD_DRV_LOG(ERR, "RSC and IPsec not supported"); + return -1; + } + if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) { + PMD_DRV_LOG(ERR, "HW CRC strip needs to be enabled for IPsec"); + return -1; + } + + /* Set NGBE_SECTXBUFFAF to 0x14 as required in the datasheet*/ + wr32(hw, NGBE_SECTXBUFAF, 0x14); + + /* IFG needs to be set to 3 when we are using security. Otherwise a Tx + * hang will occur with heavy traffic. 
+ */ + reg = rd32(hw, NGBE_SECTXIFG); + reg = (reg & ~NGBE_SECTXIFG_MIN_MASK) | NGBE_SECTXIFG_MIN(0x3); + wr32(hw, NGBE_SECTXIFG, reg); + + reg = rd32(hw, NGBE_SECRXCTL); + reg |= NGBE_SECRXCTL_CRCSTRIP; + wr32(hw, NGBE_SECRXCTL, reg); + + if (rx_offloads & DEV_RX_OFFLOAD_SECURITY) { + wr32m(hw, NGBE_SECRXCTL, NGBE_SECRXCTL_ODSA, 0); + reg = rd32m(hw, NGBE_SECRXCTL, NGBE_SECRXCTL_ODSA); + if (reg != 0) { + PMD_DRV_LOG(ERR, "Error enabling Rx Crypto"); + return -1; + } + } + if (tx_offloads & DEV_TX_OFFLOAD_SECURITY) { + wr32(hw, NGBE_SECTXCTL, NGBE_SECTXCTL_STFWD); + reg = rd32(hw, NGBE_SECTXCTL); + if (reg != NGBE_SECTXCTL_STFWD) { + PMD_DRV_LOG(ERR, "Error enabling Rx Crypto"); + return -1; + } + } + + ngbe_crypto_clear_ipsec_tables(dev); + + return 0; +} + static struct rte_security_ops ngbe_security_ops = { .session_create = ngbe_crypto_create_session, .session_get_size = ngbe_crypto_session_get_size, diff --git a/drivers/net/ngbe/ngbe_ipsec.h b/drivers/net/ngbe/ngbe_ipsec.h index fa5f21027b..13273d91d8 100644 --- a/drivers/net/ngbe/ngbe_ipsec.h +++ b/drivers/net/ngbe/ngbe_ipsec.h @@ -90,4 +90,6 @@ struct ngbe_ipsec { struct ngbe_crypto_tx_sa_table tx_sa_tbl[IPSEC_MAX_SA_COUNT]; }; +int ngbe_crypto_enable_ipsec(struct rte_eth_dev *dev); + #endif /*NGBE_IPSEC_H_*/ diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c index bee4f04616..04c8ec4e88 100644 --- a/drivers/net/ngbe/ngbe_rxtx.c +++ b/drivers/net/ngbe/ngbe_rxtx.c @@ -33,6 +33,9 @@ static const u64 NGBE_TX_OFFLOAD_MASK = (PKT_TX_IP_CKSUM | PKT_TX_TCP_SEG | PKT_TX_TUNNEL_MASK | PKT_TX_OUTER_IP_CKSUM | +#ifdef RTE_LIB_SECURITY + PKT_TX_SEC_OFFLOAD | +#endif NGBE_TX_IEEE1588_TMST); #define NGBE_TX_OFFLOAD_NOTSUP_MASK \ @@ -274,7 +277,8 @@ ngbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts, static inline void ngbe_set_xmit_ctx(struct ngbe_tx_queue *txq, volatile struct ngbe_tx_ctx_desc *ctx_txd, - uint64_t ol_flags, union ngbe_tx_offload tx_offload) + uint64_t ol_flags, union 
ngbe_tx_offload tx_offload, + __rte_unused uint64_t *mdata) { union ngbe_tx_offload tx_offload_mask; uint32_t type_tucmd_mlhl; @@ -361,6 +365,19 @@ ngbe_set_xmit_ctx(struct ngbe_tx_queue *txq, vlan_macip_lens |= NGBE_TXD_VLAN(tx_offload.vlan_tci); } +#ifdef RTE_LIB_SECURITY + if (ol_flags & PKT_TX_SEC_OFFLOAD) { + union ngbe_crypto_tx_desc_md *md = + (union ngbe_crypto_tx_desc_md *)mdata; + tunnel_seed |= NGBE_TXD_IPSEC_SAIDX(md->sa_idx); + type_tucmd_mlhl |= md->enc ? + (NGBE_TXD_IPSEC_ESP | NGBE_TXD_IPSEC_ESPENC) : 0; + type_tucmd_mlhl |= NGBE_TXD_IPSEC_ESPLEN(md->pad_len); + tx_offload_mask.sa_idx |= ~0; + tx_offload_mask.sec_pad_len |= ~0; + } +#endif + txq->ctx_cache[ctx_idx].flags = ol_flags; txq->ctx_cache[ctx_idx].tx_offload.data[0] = tx_offload_mask.data[0] & tx_offload.data[0]; @@ -592,6 +609,9 @@ ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint32_t ctx = 0; uint32_t new_ctx; union ngbe_tx_offload tx_offload; +#ifdef RTE_LIB_SECURITY + uint8_t use_ipsec; +#endif tx_offload.data[0] = 0; tx_offload.data[1] = 0; @@ -618,6 +638,9 @@ ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, * are needed for offload functionality. 
*/ ol_flags = tx_pkt->ol_flags; +#ifdef RTE_LIB_SECURITY + use_ipsec = txq->using_ipsec && (ol_flags & PKT_TX_SEC_OFFLOAD); +#endif /* If hardware offload required */ tx_ol_req = ol_flags & NGBE_TX_OFFLOAD_MASK; @@ -633,6 +656,16 @@ ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, tx_offload.outer_l3_len = tx_pkt->outer_l3_len; tx_offload.outer_tun_len = 0; +#ifdef RTE_LIB_SECURITY + if (use_ipsec) { + union ngbe_crypto_tx_desc_md *ipsec_mdata = + (union ngbe_crypto_tx_desc_md *) + rte_security_dynfield(tx_pkt); + tx_offload.sa_idx = ipsec_mdata->sa_idx; + tx_offload.sec_pad_len = ipsec_mdata->pad_len; + } +#endif + /* If new context need be built or reuse the exist ctx*/ ctx = what_ctx_update(txq, tx_ol_req, tx_offload); /* Only allocate context descriptor if required */ @@ -776,7 +809,8 @@ ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, } ngbe_set_xmit_ctx(txq, ctx_txd, tx_ol_req, - tx_offload); + tx_offload, + rte_security_dynfield(tx_pkt)); txe->last_id = tx_last; tx_id = txe->next_id; @@ -795,6 +829,10 @@ ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, } olinfo_status |= NGBE_TXD_PAYLEN(pkt_len); +#ifdef RTE_LIB_SECURITY + if (use_ipsec) + olinfo_status |= NGBE_TXD_IPSEC; +#endif m_seg = tx_pkt; do { @@ -978,6 +1016,13 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status) pkt_flags |= PKT_RX_OUTER_IP_CKSUM_BAD; } +#ifdef RTE_LIB_SECURITY + if (rx_status & NGBE_RXD_STAT_SECP) { + pkt_flags |= PKT_RX_SEC_OFFLOAD; + if (rx_status & NGBE_RXD_ERR_SECERR) + pkt_flags |= PKT_RX_SEC_OFFLOAD_FAILED; + } +#endif return pkt_flags; } @@ -1800,6 +1845,9 @@ ngbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt) { struct ngbe_tx_queue *txq = (struct ngbe_tx_queue *)tx_queue; if (txq->offloads == 0 && +#ifdef RTE_LIB_SECURITY + !(txq->using_ipsec) && +#endif txq->tx_free_thresh >= RTE_PMD_NGBE_TX_MAX_BURST) return ngbe_tx_done_cleanup_simple(txq, free_cnt); @@ -1885,6 +1933,9 @@ ngbe_set_tx_function(struct rte_eth_dev *dev, struct ngbe_tx_queue 
*txq) { /* Use a simple Tx queue (no offloads, no multi segs) if possible */ if (txq->offloads == 0 && +#ifdef RTE_LIB_SECURITY + !(txq->using_ipsec) && +#endif txq->tx_free_thresh >= RTE_PMD_NGBE_TX_MAX_BURST) { PMD_INIT_LOG(DEBUG, "Using simple tx code path"); dev->tx_pkt_burst = ngbe_xmit_pkts_simple; @@ -1926,6 +1977,10 @@ ngbe_get_tx_port_offloads(struct rte_eth_dev *dev) if (hw->is_pf) tx_offload_capa |= DEV_TX_OFFLOAD_QINQ_INSERT; +#ifdef RTE_LIB_SECURITY + if (dev->security_ctx) + tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY; +#endif return tx_offload_capa; } @@ -2012,6 +2067,10 @@ ngbe_dev_tx_queue_setup(struct rte_eth_dev *dev, txq->offloads = offloads; txq->ops = &def_txq_ops; txq->tx_deferred_start = tx_conf->tx_deferred_start; +#ifdef RTE_LIB_SECURITY + txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads & + DEV_TX_OFFLOAD_SECURITY); +#endif txq->tdt_reg_addr = NGBE_REG_ADDR(hw, NGBE_TXWP(txq->reg_idx)); txq->tdc_reg_addr = NGBE_REG_ADDR(hw, NGBE_TXCFG(txq->reg_idx)); @@ -2220,6 +2279,11 @@ ngbe_get_rx_port_offloads(struct rte_eth_dev *dev) offloads |= (DEV_RX_OFFLOAD_QINQ_STRIP | DEV_RX_OFFLOAD_VLAN_EXTEND); +#ifdef RTE_LIB_SECURITY + if (dev->security_ctx) + offloads |= DEV_RX_OFFLOAD_SECURITY; +#endif + return offloads; } @@ -2745,6 +2809,7 @@ ngbe_dev_mq_rx_configure(struct rte_eth_dev *dev) void ngbe_set_rx_function(struct rte_eth_dev *dev) { + uint16_t i; struct ngbe_adapter *adapter = ngbe_dev_adapter(dev); if (dev->data->scattered_rx) { @@ -2788,6 +2853,15 @@ ngbe_set_rx_function(struct rte_eth_dev *dev) dev->rx_pkt_burst = ngbe_recv_pkts; } + +#ifdef RTE_LIB_SECURITY + for (i = 0; i < dev->data->nb_rx_queues; i++) { + struct ngbe_rx_queue *rxq = dev->data->rx_queues[i]; + + rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads & + DEV_RX_OFFLOAD_SECURITY); + } +#endif } /* @@ -3052,6 +3126,19 @@ ngbe_dev_rxtx_start(struct rte_eth_dev *dev) if (hw->is_pf && dev->data->dev_conf.lpbk_mode) ngbe_setup_loopback_link(hw); +#ifdef 
RTE_LIB_SECURITY + if ((dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) || + (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_SECURITY)) { + ret = ngbe_crypto_enable_ipsec(dev); + if (ret != 0) { + PMD_DRV_LOG(ERR, + "ngbe_crypto_enable_ipsec fails with %d.", + ret); + return ret; + } + } +#endif + return 0; } diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h index d63b25c1aa..67c1260f6f 100644 --- a/drivers/net/ngbe/ngbe_rxtx.h +++ b/drivers/net/ngbe/ngbe_rxtx.h @@ -261,7 +261,10 @@ struct ngbe_rx_queue { uint16_t rx_nb_avail; /**< nr of staged pkts ready to ret to app */ uint16_t rx_next_avail; /**< idx of next staged pkt to ret to app */ uint16_t rx_free_trigger; /**< triggers rx buffer allocation */ - +#ifdef RTE_LIB_SECURITY + uint8_t using_ipsec; + /** indicates that IPsec Rx feature is in use */ +#endif uint16_t rx_free_thresh; /**< max free Rx desc to hold */ uint16_t queue_id; /**< RX queue index */ uint16_t reg_idx; /**< RX queue register index */ @@ -305,6 +308,11 @@ union ngbe_tx_offload { uint64_t outer_tun_len:8; /**< Outer TUN (Tunnel) Hdr Length. */ uint64_t outer_l2_len:8; /**< Outer L2 (MAC) Hdr Length. */ uint64_t outer_l3_len:16; /**< Outer L3 (IP) Hdr Length. 
*/ +#ifdef RTE_LIB_SECURITY + /* inline ipsec related*/ + uint64_t sa_idx:8; /**< TX SA database entry index */ + uint64_t sec_pad_len:4; /**< padding length */ +#endif }; }; @@ -355,6 +363,10 @@ struct ngbe_tx_queue { uint8_t tx_deferred_start; /**< not in global dev start */ const struct ngbe_txq_ops *ops; /**< txq ops */ +#ifdef RTE_LIB_SECURITY + uint8_t using_ipsec; + /**< indicates that IPsec TX feature is in use */ +#endif }; struct ngbe_txq_ops {

From patchwork Wed Sep 8 08:37:58 2021 X-Patchwork-Submitter: Jiawen Wu X-Patchwork-Id: 98313 X-Patchwork-Delegate: ferruh.yigit@amd.com From: Jiawen Wu To: dev@dpdk.org Cc: Jiawen Wu Date: Wed, 8 Sep 2021 16:37:58 +0800 Message-Id: <20210908083758.312055-33-jiawenwu@trustnetic.com> In-Reply-To: <20210908083758.312055-1-jiawenwu@trustnetic.com> Subject: [dpdk-dev] [PATCH 32/32] doc: update for ngbe

Add ngbe PMD new features in release note 21.11.

Signed-off-by: Jiawen Wu --- doc/guides/rel_notes/release_21_11.rst | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst index 675b573834..81093cf6c0 100644 --- a/doc/guides/rel_notes/release_21_11.rst +++ b/doc/guides/rel_notes/release_21_11.rst @@ -62,6 +62,16 @@ New Features * Added bus-level parsing of the devargs syntax. * Kept compatibility with the legacy syntax as parsing fallback. +* **Updated Wangxun ngbe driver.** + Updated the Wangxun ngbe driver. Add more features to complete the driver, + some of them including: + + * Added offloads and packet type on RxTx. + * Added device basic statistics and extended stats. + * Added VLAN and MAC filters. + * Added multi-queue and RSS. + * Added SRIOV. + * Added IPsec. Removed Items -------------
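The sanity checks at the top of ngbe_crypto_enable_ipsec() in patch 31 (inline IPsec cannot coexist with LRO, and hardware CRC stripping must stay enabled) reduce to a pure predicate on the Rx offload mask. A minimal sketch, with hypothetical stand-in bit values for the DPDK offload flags (the real values come from rte_ethdev.h) and ipsec_offload_conf_ok() as an illustrative helper, not driver code:

```c
#include <stdint.h>

/* Hypothetical stand-ins for the DPDK Rx offload bits used by the patch;
 * the real definitions live in rte_ethdev.h. */
#define DEV_RX_OFFLOAD_TCP_LRO  (1ULL << 0)
#define DEV_RX_OFFLOAD_KEEP_CRC (1ULL << 1)
#define DEV_RX_OFFLOAD_SECURITY (1ULL << 2)

/* Mirrors the sanity checks in ngbe_crypto_enable_ipsec():
 * returns 1 if the Rx offload configuration is compatible with
 * inline IPsec, 0 otherwise. */
static int ipsec_offload_conf_ok(uint64_t rx_offloads)
{
	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
		return 0; /* "RSC and IPsec not supported" */
	if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC)
		return 0; /* "HW CRC strip needs to be enabled for IPsec" */
	return 1;
}
```

In the driver these checks run during ngbe_dev_rxtx_start() whenever DEV_RX_OFFLOAD_SECURITY or DEV_TX_OFFLOAD_SECURITY is configured, and a failure aborts device start.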