From patchwork Thu Nov 17 07:25:57 2022
X-Patchwork-Submitter: Ashwin Sekhar T K <asekhar@marvell.com>
X-Patchwork-Id: 119925
X-Patchwork-Delegate: jerinj@marvell.com
From: Ashwin Sekhar T K <asekhar@marvell.com>
To: dev@dpdk.org, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori,
 Satha Rao
Subject: [PATCH v1 2/3] net/cnxk: add sg2 descriptor support
Date: Thu, 17 Nov 2022 12:55:57 +0530
Message-ID: <20221117072558.3582292-2-asekhar@marvell.com>
In-Reply-To: <20221117072558.3582292-1-asekhar@marvell.com>
References: <20221117072558.3582292-1-asekhar@marvell.com>
List-Id: DPDK patches and discussions

Add support for creating packets whose segments come from different
mempools. This is enabled by using SG2 subdescriptors, which carry a
per-segment aura (pool). SG2 descriptors are used only when the
segment is to be freed by the HW; segments freed by software continue
to use the plain SG descriptor.
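For context, the case this patch handles can be produced as in the
minimal sketch below. It is illustrative only and not part of this
patch; the helper name, pool handles, and segment sizes are
hypothetical.

#include <rte_mbuf.h>

/* Build a two-segment packet whose segments come from different
 * mempools. When such a chain is transmitted and freed by the NIX HW,
 * each segment must be returned to its own aura, which is what the
 * SG2 subdescriptor encodes.
 */
static struct rte_mbuf *
make_two_pool_pkt(struct rte_mempool *hdr_pool, struct rte_mempool *data_pool)
{
	struct rte_mbuf *hdr = rte_pktmbuf_alloc(hdr_pool);
	struct rte_mbuf *data = rte_pktmbuf_alloc(data_pool);

	if (hdr == NULL || data == NULL)
		goto fail;

	/* Reserve some payload space in each segment */
	if (rte_pktmbuf_append(hdr, 64) == NULL ||
	    rte_pktmbuf_append(data, 1024) == NULL)
		goto fail;

	/* Chain them; hdr->pool != data->pool from here on */
	if (rte_pktmbuf_chain(hdr, data) != 0)
		goto fail;

	return hdr;
fail:
	rte_pktmbuf_free(hdr);
	rte_pktmbuf_free(data);
	return NULL;
}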
Signed-off-by: Ashwin Sekhar T K <asekhar@marvell.com>
---
 drivers/net/cnxk/cn10k_tx.h | 161 +++++++++++++++++++++++++++---------
 1 file changed, 123 insertions(+), 38 deletions(-)

diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index a4c578354c..3f08a8a473 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -54,6 +54,36 @@
 
 #define NIX_NB_SEGS_TO_SEGDW(x) ((NIX_SEGDW_MAGIC >> ((x) << 2)) & 0xF)
 
+static __plt_always_inline uint8_t
+cn10k_nix_mbuf_sg_dwords(struct rte_mbuf *m)
+{
+	uint32_t nb_segs = m->nb_segs;
+	uint16_t aura0, aura;
+	int segw, sg_segs;
+
+	aura0 = roc_npa_aura_handle_to_aura(m->pool->pool_id);
+
+	nb_segs--;
+	segw = 2;
+	sg_segs = 1;
+	while (nb_segs) {
+		m = m->next;
+		aura = roc_npa_aura_handle_to_aura(m->pool->pool_id);
+		if (aura != aura0) {
+			segw += 2 + (sg_segs == 2);
+			sg_segs = 0;
+		} else {
+			segw += (sg_segs == 0); /* SUBDC */
+			segw += 1; /* IOVA */
+			sg_segs += 1;
+			sg_segs %= 3;
+		}
+		nb_segs--;
+	}
+
+	return (segw + 1) / 2;
+}
+
 static __plt_always_inline void
 cn10k_nix_vwqe_wait_fc(struct cn10k_eth_txq *txq, int64_t req)
 {
@@ -915,15 +945,15 @@ cn10k_nix_xmit_prepare_tstamp(struct cn10k_eth_txq *txq, uintptr_t lmt_addr,
 static __rte_always_inline uint16_t
 cn10k_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
 {
+	uint64_t prefree = 0, aura0, aura, nb_segs, segdw;
 	struct nix_send_hdr_s *send_hdr;
-	union nix_send_sg_s *sg;
+	union nix_send_sg_s *sg, l_sg;
+	union nix_send_sg2_s l_sg2;
 	struct rte_mbuf *m_next;
-	uint64_t *slist, sg_u;
+	uint8_t off, is_sg2;
 	uint64_t len, dlen;
 	uint64_t ol_flags;
-	uint64_t nb_segs;
-	uint64_t segdw;
-	uint8_t off, i;
+	uint64_t *slist;
 
 	send_hdr = (struct nix_send_hdr_s *)cmd;
 
@@ -938,20 +968,22 @@ cn10k_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
 	ol_flags = m->ol_flags;
 
 	/* Start from second segment, first segment is already there */
-	i = 1;
-	sg_u = sg->u;
-	len -= sg_u & 0xFFFF;
+	is_sg2 = 0;
+	l_sg.u = sg->u;
+	len -= l_sg.u & 0xFFFF;
 	nb_segs = m->nb_segs - 1;
 	m_next = m->next;
 	slist = &cmd[3 + off + 1];
 
 	/* Set invert df if buffer is not to be freed by H/W */
-	if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)
-		sg_u |= (cnxk_nix_prefree_seg(m) << 55);
+	if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
+		prefree = cnxk_nix_prefree_seg(m);
+		l_sg.i1 = prefree;
+	}
 
-	/* Mark mempool object as "put" since it is freed by NIX */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-	if (!(sg_u & (1ULL << 55)))
+	/* Mark mempool object as "put" since it is freed by NIX */
+	if (!prefree)
 		RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 	rte_io_wmb();
 #endif
@@ -964,55 +996,103 @@
 	if (!(flags & NIX_TX_MULTI_SEG_F))
 		goto done;
 
+	aura0 = send_hdr->w0.aura;
 	m = m_next;
 	if (!m)
 		goto done;
 
 	/* Fill mbuf segments */
 	do {
+		uint64_t iova;
+
+		/* Save the current mbuf properties. These can get cleared in
+		 * cnxk_nix_prefree_seg()
+		 */
 		m_next = m->next;
+		iova = rte_mbuf_data_iova(m);
 		dlen = m->data_len;
 		len -= dlen;
-		sg_u = sg_u | ((uint64_t)dlen << (i << 4));
-		*slist = rte_mbuf_data_iova(m);
-		/* Set invert df if buffer is not to be freed by H/W */
-		if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)
-			sg_u |= (cnxk_nix_prefree_seg(m) << (i + 55));
-		/* Mark mempool object as "put" since it is freed by NIX
-		 */
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-		if (!(sg_u & (1ULL << (i + 55))))
-			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
-#endif
-		slist++;
-		i++;
+
 		nb_segs--;
-		if (i > 2 && nb_segs) {
-			i = 0;
+		aura = aura0;
+		prefree = 0;
+
+		if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
+			aura = roc_npa_aura_handle_to_aura(m->pool->pool_id);
+			prefree = cnxk_nix_prefree_seg(m);
+			is_sg2 = aura != aura0 && !prefree;
+		}
+
+		if (unlikely(is_sg2)) {
+			/* This mbuf belongs to a different pool and
+			 * DF bit is not to be set, so use SG2 subdesc
+			 * so that it is freed to the appropriate pool.
+			 */
+
+			/* Write the previous descriptor out */
+			sg->u = l_sg.u;
+
+			/* If the current SG subdc does not have any
+			 * iovas in it, then the SG2 subdc can overwrite
+			 * that SG subdc.
+			 *
+			 * If the current SG subdc has 2 iovas in it, then
+			 * the current iova word should be left empty.
+			 */
+			slist += (-1 + (int)l_sg.segs);
+			sg = (union nix_send_sg_s *)slist;
+
+			l_sg2.u = l_sg.u & 0xC00000000000000; /* LD_TYPE */
+			l_sg2.subdc = NIX_SUBDC_SG2;
+			l_sg2.aura = aura;
+			l_sg2.seg1_size = dlen;
+			l_sg.u = l_sg2.u;
+
+			slist++;
+			*slist = iova;
+			slist++;
+		} else {
+			*slist = iova;
+			/* Set invert df if buffer is not to be freed by H/W */
+			l_sg.u |= (prefree << (l_sg.segs + 55));
+			/* Set the segment length */
+			l_sg.u |= ((uint64_t)dlen << (l_sg.segs << 4));
+			l_sg.segs += 1;
+			slist++;
+		}
+
+		if ((is_sg2 || l_sg.segs > 2) && nb_segs) {
+			sg->u = l_sg.u;
 			/* Next SG subdesc */
-			*(uint64_t *)slist = sg_u & 0xFC00000000000000;
-			sg->u = sg_u;
-			sg->segs = 3;
 			sg = (union nix_send_sg_s *)slist;
-			sg_u = sg->u;
+			l_sg.u &= 0xC00000000000000; /* LD_TYPE */
+			l_sg.subdc = NIX_SUBDC_SG;
 			slist++;
 		}
 		m->next = NULL;
+
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+		/* Mark mempool object as "put" since it is freed by NIX
+		 */
+		if (!prefree)
+			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
+#endif
 		m = m_next;
 	} while (nb_segs);
 
 done:
	/* Add remaining bytes of security data to last seg */
 	if (flags & NIX_TX_OFFLOAD_SECURITY_F && ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD && len) {
-		uint8_t shft = ((i - 1) << 4);
+		uint8_t shft = (l_sg.subdc == NIX_SUBDC_SG) ? ((l_sg.segs - 1) << 4) : 0;
 
-		dlen = ((sg_u >> shft) & 0xFFFFULL) + len;
-		sg_u = sg_u & ~(0xFFFFULL << shft);
-		sg_u |= dlen << shft;
+		dlen = ((l_sg.u >> shft) & 0xFFFFULL) + len;
+		l_sg.u = l_sg.u & ~(0xFFFFULL << shft);
+		l_sg.u |= dlen << shft;
 	}
 
-	sg->u = sg_u;
-	sg->segs = i;
+	/* Write the last subdc out */
+	sg->u = l_sg.u;
+
 	segdw = (uint64_t *)slist - (uint64_t *)&cmd[2 + off];
 	/* Roundup extra dwords to multiple of 2 */
 	segdw = (segdw >> 1) + (segdw & 0x1);
@@ -1827,7 +1907,12 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, uint64_t *ws,
 			struct rte_mbuf *m = tx_pkts[j];
 
 			/* Get dwords based on nb_segs. */
-			segdw[j] = NIX_NB_SEGS_TO_SEGDW(m->nb_segs);
+			if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F &&
+			      flags & NIX_TX_MULTI_SEG_F))
+				segdw[j] = NIX_NB_SEGS_TO_SEGDW(m->nb_segs);
+			else
+				segdw[j] = cn10k_nix_mbuf_sg_dwords(m);
+
 			/* Add dwords based on offloads. */
 			segdw[j] += 1 + /* SEND HDR */
 				    !!(flags & NIX_TX_NEED_EXT_HDR) +
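Editor's note: the dword accounting in cn10k_nix_mbuf_sg_dwords() above
can be checked on the host with the standalone sketch below. It mirrors
the driver's arithmetic over plain arrays; the helper name and the
aura arrays are hypothetical, not part of the patch.

#include <assert.h>
#include <stdint.h>

/* Host-side mirror of cn10k_nix_mbuf_sg_dwords(): given the aura of
 * each segment, count how many 16-byte dwords (two 64-bit words each)
 * the SG/SG2 subdescriptors and IOVA words occupy. auras[] stands in
 * for walking the mbuf chain.
 */
static uint8_t
sg_dwords(const uint16_t *auras, uint32_t nb_segs)
{
	uint16_t aura0 = auras[0];
	int segw = 2, sg_segs = 1; /* First SG subdesc word + first IOVA */
	uint32_t i;

	for (i = 1; i < nb_segs; i++) {
		if (auras[i] != aura0) {
			/* SG2 subdesc takes 2 words, +1 pad word if the
			 * current SG subdesc already holds 2 IOVAs.
			 */
			segw += 2 + (sg_segs == 2);
			sg_segs = 0;
		} else {
			segw += (sg_segs == 0); /* New SG subdesc word */
			segw += 1;              /* IOVA word */
			sg_segs = (sg_segs + 1) % 3;
		}
	}
	return (segw + 1) / 2; /* Round 64-bit words up to dwords */
}

int main(void)
{
	/* 3 segments from one aura: SG word + 3 IOVAs = 4 words = 2 dwords */
	const uint16_t same[] = { 5, 5, 5 };
	assert(sg_dwords(same, 3) == 2);

	/* 2nd segment from another pool: SG word + IOVA, then a 2-word
	 * SG2 carrying the other aura = 4 words = 2 dwords.
	 */
	const uint16_t mixed[] = { 5, 9 };
	assert(sg_dwords(mixed, 2) == 2);
	return 0;
}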