From patchwork Fri Jun 18 10:37:16 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 94446
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Fri, 18 Jun 2021 16:07:16 +0530
Message-ID: <20210618103741.26526-38-ndabilpuram@marvell.com>
In-Reply-To: <20210618103741.26526-1-ndabilpuram@marvell.com>
References: <20210306153404.10781-1-ndabilpuram@marvell.com>
 <20210618103741.26526-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v3 37/62] net/cnxk: add Rx/Tx burst mode get ops

From: Sunil Kumar Kori

Patch implements ethdev operations to get Rx and Tx burst mode. The
reported mode string describes the data path in use (scalar or vector
Neon) followed by the list of currently enabled Rx/Tx offloads.
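For context (illustrative only, not part of this patch), a minimal sketch of
how an application could read back the burst mode string exposed by these ops
through the generic ethdev API. The port and queues are assumed to be already
configured, and the helper name is made up for the example:

#include <stdio.h>
#include <rte_ethdev.h>

/* Print the Rx/Tx burst mode reported by the PMD for queue 0. */
static void
show_burst_mode(uint16_t port_id)
{
	struct rte_eth_burst_mode mode;

	if (rte_eth_rx_burst_mode_get(port_id, 0, &mode) == 0)
		printf("port %u Rx burst mode: %s\n", port_id, mode.info);

	if (rte_eth_tx_burst_mode_get(port_id, 0, &mode) == 0)
		printf("port %u Tx burst mode: %s\n", port_id, mode.info);
}

With the scalar data path and only RSS hash delivery enabled, the Rx string
built by this patch would read roughly "Scalar, Rx Offloads: RSS,".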
Signed-off-by: Sunil Kumar Kori
---
 doc/guides/nics/features/cnxk.ini     |   1 +
 doc/guides/nics/features/cnxk_vec.ini |   1 +
 doc/guides/nics/features/cnxk_vf.ini  |   1 +
 drivers/net/cnxk/cnxk_ethdev.c        |   2 +
 drivers/net/cnxk/cnxk_ethdev.h        |   4 ++
 drivers/net/cnxk/cnxk_ethdev_ops.c    | 127 ++++++++++++++++++++++++++++++++++
 6 files changed, 136 insertions(+)

diff --git a/doc/guides/nics/features/cnxk.ini b/doc/guides/nics/features/cnxk.ini
index b41af2d..298f167 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -12,6 +12,7 @@ Link status          = Y
 Link status event    = Y
 Runtime Rx queue setup = Y
 Runtime Tx queue setup = Y
+Burst mode info      = Y
 Fast mbuf free       = Y
 Free Tx mbuf on demand = Y
 Queue start/stop     = Y
diff --git a/doc/guides/nics/features/cnxk_vec.ini b/doc/guides/nics/features/cnxk_vec.ini
index 7fe8018..a673cc1 100644
--- a/doc/guides/nics/features/cnxk_vec.ini
+++ b/doc/guides/nics/features/cnxk_vec.ini
@@ -12,6 +12,7 @@ Link status          = Y
 Link status event    = Y
 Runtime Rx queue setup = Y
 Runtime Tx queue setup = Y
+Burst mode info      = Y
 Fast mbuf free       = Y
 Free Tx mbuf on demand = Y
 Queue start/stop     = Y
diff --git a/doc/guides/nics/features/cnxk_vf.ini b/doc/guides/nics/features/cnxk_vf.ini
index 5cc9f3f..335d082 100644
--- a/doc/guides/nics/features/cnxk_vf.ini
+++ b/doc/guides/nics/features/cnxk_vf.ini
@@ -11,6 +11,7 @@ Link status          = Y
 Link status event    = Y
 Runtime Rx queue setup = Y
 Runtime Tx queue setup = Y
+Burst mode info      = Y
 Fast mbuf free       = Y
 Free Tx mbuf on demand = Y
 Queue start/stop     = Y
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index f3d5a9d..4ec0dfb 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -1101,6 +1101,8 @@ struct eth_dev_ops cnxk_eth_dev_ops = {
 	.promiscuous_disable = cnxk_nix_promisc_disable,
 	.allmulticast_enable = cnxk_nix_allmulticast_enable,
 	.allmulticast_disable = cnxk_nix_allmulticast_disable,
+	.rx_burst_mode_get = cnxk_nix_rx_burst_mode_get,
+	.tx_burst_mode_get = cnxk_nix_tx_burst_mode_get,
 };
 
 static int
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 70bc374..aea0005 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -234,6 +234,10 @@ int cnxk_nix_allmulticast_enable(struct rte_eth_dev *eth_dev);
 int cnxk_nix_allmulticast_disable(struct rte_eth_dev *eth_dev);
 int cnxk_nix_info_get(struct rte_eth_dev *eth_dev,
 		      struct rte_eth_dev_info *dev_info);
+int cnxk_nix_rx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
+			       struct rte_eth_burst_mode *mode);
+int cnxk_nix_tx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
+			       struct rte_eth_burst_mode *mode);
 int cnxk_nix_configure(struct rte_eth_dev *eth_dev);
 int cnxk_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 			    uint16_t nb_desc, uint16_t fp_tx_q_sz,
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 61ecbab..7ae961a 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -72,6 +72,133 @@ cnxk_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
 }
 
 int
+cnxk_nix_rx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
+			   struct rte_eth_burst_mode *mode)
+{
+	ssize_t bytes = 0, str_size = RTE_ETH_BURST_MODE_INFO_SIZE, rc;
+	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	const struct burst_info {
+		uint64_t flags;
+		const char *output;
+	} rx_offload_map[] = {
+		{DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN Strip,"},
Strip,"}, + {DEV_RX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"}, + {DEV_RX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"}, + {DEV_RX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"}, + {DEV_RX_OFFLOAD_TCP_LRO, " TCP LRO,"}, + {DEV_RX_OFFLOAD_QINQ_STRIP, " QinQ VLAN Strip,"}, + {DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"}, + {DEV_RX_OFFLOAD_MACSEC_STRIP, " MACsec Strip,"}, + {DEV_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"}, + {DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"}, + {DEV_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"}, + {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo Frame,"}, + {DEV_RX_OFFLOAD_SCATTER, " Scattered,"}, + {DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"}, + {DEV_RX_OFFLOAD_SECURITY, " Security,"}, + {DEV_RX_OFFLOAD_KEEP_CRC, " Keep CRC,"}, + {DEV_RX_OFFLOAD_SCTP_CKSUM, " SCTP,"}, + {DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"}, + {DEV_RX_OFFLOAD_RSS_HASH, " RSS,"} + }; + static const char *const burst_mode[] = {"Vector Neon, Rx Offloads:", + "Scalar, Rx Offloads:" + }; + uint32_t i; + + PLT_SET_USED(queue_id); + + /* Update burst mode info */ + rc = rte_strscpy(mode->info + bytes, burst_mode[dev->scalar_ena], + str_size - bytes); + if (rc < 0) + goto done; + + bytes += rc; + + /* Update Rx offload info */ + for (i = 0; i < RTE_DIM(rx_offload_map); i++) { + if (dev->rx_offloads & rx_offload_map[i].flags) { + rc = rte_strscpy(mode->info + bytes, + rx_offload_map[i].output, + str_size - bytes); + if (rc < 0) + goto done; + + bytes += rc; + } + } + +done: + return 0; +} + +int +cnxk_nix_tx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id, + struct rte_eth_burst_mode *mode) +{ + ssize_t bytes = 0, str_size = RTE_ETH_BURST_MODE_INFO_SIZE, rc; + struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev); + const struct burst_info { + uint64_t flags; + const char *output; + } tx_offload_map[] = { + {DEV_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"}, + {DEV_TX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"}, + {DEV_TX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"}, + {DEV_TX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"}, + {DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP Checksum,"}, + {DEV_TX_OFFLOAD_TCP_TSO, " TCP TSO,"}, + {DEV_TX_OFFLOAD_UDP_TSO, " UDP TSO,"}, + {DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"}, + {DEV_TX_OFFLOAD_QINQ_INSERT, " QinQ VLAN Insert,"}, + {DEV_TX_OFFLOAD_VXLAN_TNL_TSO, " VXLAN Tunnel TSO,"}, + {DEV_TX_OFFLOAD_GRE_TNL_TSO, " GRE Tunnel TSO,"}, + {DEV_TX_OFFLOAD_IPIP_TNL_TSO, " IP-in-IP Tunnel TSO,"}, + {DEV_TX_OFFLOAD_GENEVE_TNL_TSO, " Geneve Tunnel TSO,"}, + {DEV_TX_OFFLOAD_MACSEC_INSERT, " MACsec Insert,"}, + {DEV_TX_OFFLOAD_MT_LOCKFREE, " Multi Thread Lockless Tx,"}, + {DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}, + {DEV_TX_OFFLOAD_MBUF_FAST_FREE, " H/W MBUF Free,"}, + {DEV_TX_OFFLOAD_SECURITY, " Security,"}, + {DEV_TX_OFFLOAD_UDP_TNL_TSO, " UDP Tunnel TSO,"}, + {DEV_TX_OFFLOAD_IP_TNL_TSO, " IP Tunnel TSO,"}, + {DEV_TX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"}, + {DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP, " Timestamp,"} + }; + static const char *const burst_mode[] = {"Vector Neon, Tx Offloads:", + "Scalar, Tx Offloads:" + }; + uint32_t i; + + PLT_SET_USED(queue_id); + + /* Update burst mode info */ + rc = rte_strscpy(mode->info + bytes, burst_mode[dev->scalar_ena], + str_size - bytes); + if (rc < 0) + goto done; + + bytes += rc; + + /* Update Tx offload info */ + for (i = 0; i < RTE_DIM(tx_offload_map); i++) { + if (dev->tx_offloads & tx_offload_map[i].flags) { + rc = rte_strscpy(mode->info + bytes, + tx_offload_map[i].output, + str_size - bytes); + if (rc < 0) + goto done; + + bytes += 
+		}
+	}
+
+done:
+	return 0;
+}
+
+int
 cnxk_nix_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
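
As a closing note (illustrative only, not part of this patch): the string
building above relies on rte_strscpy() returning the number of bytes copied
on success and a negative errno (e.g. -E2BIG) once the destination is full,
which is what ends the concatenation early on truncation. A simplified
standalone sketch of that append pattern, with a made-up helper name:

#include <rte_string_fns.h>

/* Append NUL-terminated tokens into a fixed-size buffer, stopping
 * silently once it is full -- the same pattern used by
 * cnxk_nix_rx_burst_mode_get()/cnxk_nix_tx_burst_mode_get().
 */
static void
append_tokens(char *dst, ssize_t dst_size, const char *const *tokens,
	      unsigned int n)
{
	ssize_t bytes = 0, rc;
	unsigned int i;

	for (i = 0; i < n; i++) {
		rc = rte_strscpy(dst + bytes, tokens[i], dst_size - bytes);
		if (rc < 0)
			return; /* truncated: keep what already fit */
		bytes += rc;
	}
}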