From patchwork Fri Jun 18 13:06:06 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 94480
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Fri, 18 Jun 2021 18:36:06 +0530
Message-ID: <20210618130606.21646-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH] net/octeontx2: use runtime lso format indices
List-Id: DPDK patches and discussions

Currently, the LSO formats configured at initialization are assumed to be
compile-time constants that start from 0. Change the slow-path and
fast-path logic so that LSO format indices are determined only at runtime.
Fixes: 3b635472a998 ("net/octeontx2: support TSO offload")
Cc: stable@dpdk.org

Signed-off-by: Nithin Dabilpuram
---
 drivers/event/octeontx2/otx2_worker.h |  2 +-
 drivers/net/octeontx2/otx2_ethdev.c   | 72 ++++++++++++++++++++---------------
 drivers/net/octeontx2/otx2_ethdev.h   | 13 ++++++-
 drivers/net/octeontx2/otx2_tx.c       |  8 +++-
 drivers/net/octeontx2/otx2_tx.h       | 12 +++---
 5 files changed, 67 insertions(+), 40 deletions(-)

diff --git a/drivers/event/octeontx2/otx2_worker.h b/drivers/event/octeontx2/otx2_worker.h
index fd149be..3e36dce 100644
--- a/drivers/event/octeontx2/otx2_worker.h
+++ b/drivers/event/octeontx2/otx2_worker.h
@@ -264,7 +264,7 @@ otx2_ssogws_prepare_pkt(const struct otx2_eth_txq *txq, struct rte_mbuf *m,
 			uint64_t *cmd, const uint32_t flags)
 {
 	otx2_lmt_mov(cmd, txq->cmd, otx2_nix_tx_ext_subs(flags));
-	otx2_nix_xmit_prepare(m, cmd, flags);
+	otx2_nix_xmit_prepare(m, cmd, flags, txq->lso_tun_fmt);
 }
 
 static __rte_always_inline uint16_t
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 0834de0..0a420c1 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1326,6 +1326,7 @@ otx2_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t sq,
 	txq->qconf.nb_desc = nb_desc;
 	memcpy(&txq->qconf.conf.tx, tx_conf, sizeof(struct rte_eth_txconf));
 
+	txq->lso_tun_fmt = dev->lso_tun_fmt;
 	otx2_nix_form_default_desc(txq);
 
 	otx2_nix_dbg("sq=%d fc=%p offload=0x%" PRIx64 " sqb=0x%" PRIx64 ""
@@ -1676,7 +1677,7 @@ nix_setup_lso_formats(struct otx2_eth_dev *dev)
 	struct otx2_mbox *mbox = dev->mbox;
 	struct nix_lso_format_cfg_rsp *rsp;
 	struct nix_lso_format_cfg *req;
-	uint8_t base;
+	uint8_t *fmt;
 	int rc;
 
 	/* Skip if TSO was not requested */
@@ -1691,11 +1692,9 @@ nix_setup_lso_formats(struct otx2_eth_dev *dev)
 	if (rc)
 		return rc;
 
-	base = rsp->lso_format_idx;
-	if (base != NIX_LSO_FORMAT_IDX_TSOV4)
+	if (rsp->lso_format_idx != NIX_LSO_FORMAT_IDX_TSOV4)
 		return -EFAULT;
-	dev->lso_base_idx = base;
-	otx2_nix_dbg("tcpv4 lso fmt=%u", base);
+	otx2_nix_dbg("tcpv4 lso fmt=%u", rsp->lso_format_idx);
 
 	/*
@@ -1707,9 +1706,9 @@ nix_setup_lso_formats(struct otx2_eth_dev *dev)
 	if (rc)
 		return rc;
 
-	if (rsp->lso_format_idx != base + 1)
+	if (rsp->lso_format_idx != NIX_LSO_FORMAT_IDX_TSOV6)
 		return -EFAULT;
-	otx2_nix_dbg("tcpv6 lso fmt=%u\n", base + 1);
+	otx2_nix_dbg("tcpv6 lso fmt=%u\n", rsp->lso_format_idx);
 
 	/*
 	 * IPv4/UDP/TUN HDR/IPv4/TCP LSO
@@ -1720,9 +1719,8 @@ nix_setup_lso_formats(struct otx2_eth_dev *dev)
 	if (rc)
 		return rc;
 
-	if (rsp->lso_format_idx != base + 2)
-		return -EFAULT;
-	otx2_nix_dbg("udp tun v4v4 fmt=%u\n", base + 2);
+	dev->lso_udp_tun_idx[NIX_LSO_TUN_V4V4] = rsp->lso_format_idx;
+	otx2_nix_dbg("udp tun v4v4 fmt=%u\n", rsp->lso_format_idx);
 
 	/*
 	 * IPv4/UDP/TUN HDR/IPv6/TCP LSO
@@ -1733,9 +1731,8 @@ nix_setup_lso_formats(struct otx2_eth_dev *dev)
 	if (rc)
 		return rc;
 
-	if (rsp->lso_format_idx != base + 3)
-		return -EFAULT;
-	otx2_nix_dbg("udp tun v4v6 fmt=%u\n", base + 3);
+	dev->lso_udp_tun_idx[NIX_LSO_TUN_V4V6] = rsp->lso_format_idx;
+	otx2_nix_dbg("udp tun v4v6 fmt=%u\n", rsp->lso_format_idx);
 
 	/*
 	 * IPv6/UDP/TUN HDR/IPv4/TCP LSO
@@ -1746,9 +1743,8 @@ nix_setup_lso_formats(struct otx2_eth_dev *dev)
 	if (rc)
 		return rc;
 
-	if (rsp->lso_format_idx != base + 4)
-		return -EFAULT;
-	otx2_nix_dbg("udp tun v6v4 fmt=%u\n", base + 4);
+	dev->lso_udp_tun_idx[NIX_LSO_TUN_V6V4] = rsp->lso_format_idx;
+	otx2_nix_dbg("udp tun v6v4 fmt=%u\n", rsp->lso_format_idx);
 
 	/*
 	 * IPv6/UDP/TUN HDR/IPv6/TCP LSO
@@ -1758,9 +1754,9 @@ nix_setup_lso_formats(struct otx2_eth_dev *dev)
 	rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
 	if (rc)
 		return rc;
-	if (rsp->lso_format_idx != base + 5)
-		return -EFAULT;
-	otx2_nix_dbg("udp tun v6v6 fmt=%u\n", base + 5);
+
+	dev->lso_udp_tun_idx[NIX_LSO_TUN_V6V6] = rsp->lso_format_idx;
+	otx2_nix_dbg("udp tun v6v6 fmt=%u\n", rsp->lso_format_idx);
 
 	/*
 	 * IPv4/TUN HDR/IPv4/TCP LSO
@@ -1771,9 +1767,8 @@ nix_setup_lso_formats(struct otx2_eth_dev *dev)
 	if (rc)
 		return rc;
 
-	if (rsp->lso_format_idx != base + 6)
-		return -EFAULT;
-	otx2_nix_dbg("tun v4v4 fmt=%u\n", base + 6);
+	dev->lso_tun_idx[NIX_LSO_TUN_V4V4] = rsp->lso_format_idx;
+	otx2_nix_dbg("tun v4v4 fmt=%u\n", rsp->lso_format_idx);
 
 	/*
 	 * IPv4/TUN HDR/IPv6/TCP LSO
@@ -1784,9 +1779,8 @@ nix_setup_lso_formats(struct otx2_eth_dev *dev)
 	if (rc)
 		return rc;
 
-	if (rsp->lso_format_idx != base + 7)
-		return -EFAULT;
-	otx2_nix_dbg("tun v4v6 fmt=%u\n", base + 7);
+	dev->lso_tun_idx[NIX_LSO_TUN_V4V6] = rsp->lso_format_idx;
+	otx2_nix_dbg("tun v4v6 fmt=%u\n", rsp->lso_format_idx);
 
 	/*
 	 * IPv6/TUN HDR/IPv4/TCP LSO
@@ -1797,9 +1791,8 @@ nix_setup_lso_formats(struct otx2_eth_dev *dev)
 	if (rc)
 		return rc;
 
-	if (rsp->lso_format_idx != base + 8)
-		return -EFAULT;
-	otx2_nix_dbg("tun v6v4 fmt=%u\n", base + 8);
+	dev->lso_tun_idx[NIX_LSO_TUN_V6V4] = rsp->lso_format_idx;
+	otx2_nix_dbg("tun v6v4 fmt=%u\n", rsp->lso_format_idx);
 
 	/*
 	 * IPv6/TUN HDR/IPv6/TCP LSO
@@ -1809,9 +1802,26 @@ nix_setup_lso_formats(struct otx2_eth_dev *dev)
 	rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
 	if (rc)
 		return rc;
-	if (rsp->lso_format_idx != base + 9)
-		return -EFAULT;
-	otx2_nix_dbg("tun v6v6 fmt=%u\n", base + 9);
+
+	dev->lso_tun_idx[NIX_LSO_TUN_V6V6] = rsp->lso_format_idx;
+	otx2_nix_dbg("tun v6v6 fmt=%u\n", rsp->lso_format_idx);
+
+	/* Save all tun formats into u64 for fast path.
+	 * Lower 32bit has non-udp tunnel formats.
+	 * Upper 32bit has udp tunnel formats.
+	 */
+	fmt = dev->lso_tun_idx;
+	dev->lso_tun_fmt = ((uint64_t)fmt[NIX_LSO_TUN_V4V4] |
+			    (uint64_t)fmt[NIX_LSO_TUN_V4V6] << 8 |
+			    (uint64_t)fmt[NIX_LSO_TUN_V6V4] << 16 |
+			    (uint64_t)fmt[NIX_LSO_TUN_V6V6] << 24);
+
+	fmt = dev->lso_udp_tun_idx;
+	dev->lso_tun_fmt |= ((uint64_t)fmt[NIX_LSO_TUN_V4V4] << 32 |
+			     (uint64_t)fmt[NIX_LSO_TUN_V4V6] << 40 |
+			     (uint64_t)fmt[NIX_LSO_TUN_V6V4] << 48 |
+			     (uint64_t)fmt[NIX_LSO_TUN_V6V6] << 56);
+
 	return 0;
 }
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index ac50da7..381e6b6 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -182,6 +182,14 @@ enum nix_q_size_e {
 	nix_q_size_max
 };
 
+enum nix_lso_tun_type {
+	NIX_LSO_TUN_V4V4,
+	NIX_LSO_TUN_V4V6,
+	NIX_LSO_TUN_V6V4,
+	NIX_LSO_TUN_V6V6,
+	NIX_LSO_TUN_MAX,
+};
+
 struct otx2_qint {
 	struct rte_eth_dev *eth_dev;
 	uint8_t qintx;
@@ -276,7 +284,9 @@ struct otx2_eth_dev {
 	uint8_t tx_chan_cnt;
 	uint8_t lso_tsov4_idx;
 	uint8_t lso_tsov6_idx;
-	uint8_t lso_base_idx;
+	uint8_t lso_udp_tun_idx[NIX_LSO_TUN_MAX];
+	uint8_t lso_tun_idx[NIX_LSO_TUN_MAX];
+	uint64_t lso_tun_fmt;
 	uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
 	uint8_t mkex_pfl_name[MKEX_NAME_LEN];
 	uint8_t max_mac_entries;
@@ -359,6 +369,7 @@ struct otx2_eth_txq {
 	rte_iova_t fc_iova;
 	uint16_t sqes_per_sqb_log2;
 	int16_t nb_sqb_bufs_adj;
+	uint64_t lso_tun_fmt;
 	RTE_MARKER slow_path_start;
 	uint16_t nb_sqb_bufs;
 	uint16_t sq;
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
index 439c46f..ff299f0 100644
--- a/drivers/net/octeontx2/otx2_tx.c
+++ b/drivers/net/octeontx2/otx2_tx.c
@@ -27,6 +27,7 @@ nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	struct otx2_eth_txq *txq = tx_queue; uint16_t i;
 	const rte_iova_t io_addr = txq->io_addr;
 	void *lmt_addr = txq->lmt_addr;
+	uint64_t lso_tun_fmt;
 
 	NIX_XMIT_FC_OR_RETURN(txq, pkts);
 
@@ -34,6 +35,7 @@ nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	/* Perform header writes before barrier for TSO */
 	if (flags & NIX_TX_OFFLOAD_TSO_F) {
+		lso_tun_fmt = txq->lso_tun_fmt;
 		for (i = 0; i < pkts; i++)
 			otx2_nix_xmit_prepare_tso(tx_pkts[i], flags);
 	}
@@ -45,7 +47,7 @@ nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	rte_io_wmb();
 
 	for (i = 0; i < pkts; i++) {
-		otx2_nix_xmit_prepare(tx_pkts[i], cmd, flags);
+		otx2_nix_xmit_prepare(tx_pkts[i], cmd, flags, lso_tun_fmt);
 		/* Passing no of segdw as 4: HDR + EXT + SG + SMEM */
 		otx2_nix_xmit_prepare_tstamp(cmd, &txq->cmd[0],
 					     tx_pkts[i]->ol_flags, 4, flags);
@@ -65,6 +67,7 @@ nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,
 	struct otx2_eth_txq *txq = tx_queue; uint64_t i;
 	const rte_iova_t io_addr = txq->io_addr;
 	void *lmt_addr = txq->lmt_addr;
+	uint64_t lso_tun_fmt;
 	uint16_t segdw;
 
 	NIX_XMIT_FC_OR_RETURN(txq, pkts);
@@ -73,6 +76,7 @@ nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,
 	/* Perform header writes before barrier for TSO */
 	if (flags & NIX_TX_OFFLOAD_TSO_F) {
+		lso_tun_fmt = txq->lso_tun_fmt;
 		for (i = 0; i < pkts; i++)
 			otx2_nix_xmit_prepare_tso(tx_pkts[i], flags);
 	}
@@ -84,7 +88,7 @@ nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,
 	rte_io_wmb();
 
 	for (i = 0; i < pkts; i++) {
-		otx2_nix_xmit_prepare(tx_pkts[i], cmd, flags);
+		otx2_nix_xmit_prepare(tx_pkts[i], cmd, flags, lso_tun_fmt);
 		segdw = otx2_nix_prepare_mseg(tx_pkts[i], cmd, flags);
 		otx2_nix_xmit_prepare_tstamp(cmd, &txq->cmd[0],
 					     tx_pkts[i]->ol_flags, segdw,
diff --git a/drivers/net/octeontx2/otx2_tx.h b/drivers/net/octeontx2/otx2_tx.h
index a97b160..486248d 100644
--- a/drivers/net/octeontx2/otx2_tx.h
+++ b/drivers/net/octeontx2/otx2_tx.h
@@ -197,7 +197,8 @@ otx2_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 }
 
 static __rte_always_inline void
-otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
+otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
+		      const uint64_t lso_tun_fmt)
 {
 	struct nix_send_ext_s *send_hdr_ext;
 	struct nix_send_hdr_s *send_hdr;
@@ -339,14 +340,15 @@ otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
 		    (ol_flags & PKT_TX_TUNNEL_MASK)) {
 			const uint8_t is_udp_tun = (NIX_UDP_TUN_BITMASK >>
 				((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) & 0x1;
+			uint8_t shift = is_udp_tun ? 32 : 0;
+
+			shift += (!!(ol_flags & PKT_TX_OUTER_IPV6) << 4);
+			shift += (!!(ol_flags & PKT_TX_IPV6) << 3);
 
 			w1.il4type = NIX_SENDL4TYPE_TCP_CKSUM;
 			w1.ol4type = is_udp_tun ? NIX_SENDL4TYPE_UDP_CKSUM : 0;
 			/* Update format for UDP tunneled packet */
-			send_hdr_ext->w0.lso_format += is_udp_tun ? 2 : 6;
-
-			send_hdr_ext->w0.lso_format +=
-				!!(ol_flags & PKT_TX_OUTER_IPV6) << 1;
+			send_hdr_ext->w0.lso_format = (lso_tun_fmt >> shift);
 		}
 	}