From patchwork Sun Dec 11 21:52:54 2022
X-Patchwork-Submitter: Qi Zhang <qi.z.zhang@intel.com>
X-Patchwork-Id: 120724
X-Patchwork-Delegate: thomas@monjalon.net
From: Qi Zhang <qi.z.zhang@intel.com>
To: mb@smartsharesystems.com, bruce.richardson@intel.com, wenzhuo.lu@intel.com
Cc: dev@dpdk.org, wenjun1.wu@intel.com, Qi Zhang <qi.z.zhang@intel.com>,
 stable@dpdk.org
Subject: [PATCH 1/3] net/ice: support no IOVA as PA mode
Date: Sun, 11 Dec 2022 16:52:54 -0500
Message-Id: <20221211215256.370099-2-qi.z.zhang@intel.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20221211215256.370099-1-qi.z.zhang@intel.com>
References: <20221211215256.370099-1-qi.z.zhang@intel.com>

Remove the buf_iova access when RTE_IOVA_AS_PA is not defined and use
the buffer's virtual address (buf_addr) to fill the Rx and Tx
descriptors instead.
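For illustration only, not part of the patch itself: a minimal scalar sketch
of what the new _PKT_DATA_OFF_U64() helper resolves to in each build mode.
struct mbuf_stub and pkt_data_addr() are hypothetical stand-ins for
struct rte_mbuf and the Tx descriptor address computation, kept only to show
the address arithmetic.

#include <stdint.h>

/* Hypothetical, simplified stand-in for struct rte_mbuf (illustration only). */
struct mbuf_stub {
	void     *buf_addr; /* virtual address of the data buffer */
	uint64_t  buf_iova; /* bus/IOVA address; only meaningful in IOVA-as-PA builds */
	uint16_t  data_off; /* offset of the first data byte inside the buffer */
};

/* Scalar equivalent of _PKT_DATA_OFF_U64(): the low quadword written into
 * the Tx descriptor is either the IOVA or the virtual address of the first
 * data byte, depending on the build mode. */
static uint64_t
pkt_data_addr(const struct mbuf_stub *pkt, int iova_as_pa)
{
	if (iova_as_pa)
		return pkt->buf_iova + pkt->data_off;               /* RTE_IOVA_AS_PA build */
	return (uint64_t)(uintptr_t)pkt->buf_addr + pkt->data_off;  /* VA-only build */
}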
Cc: stable@dpdk.org
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/ice_rxtx_common_avx.h | 24 ++++++++++++++++++++++++
 drivers/net/ice/ice_rxtx_vec_avx2.c   | 11 +++++------
 drivers/net/ice/ice_rxtx_vec_avx512.c | 17 +++++++++++------
 3 files changed, 40 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ice/ice_rxtx_common_avx.h b/drivers/net/ice/ice_rxtx_common_avx.h
index 81e0db5dd3..377740d43b 100644
--- a/drivers/net/ice/ice_rxtx_common_avx.h
+++ b/drivers/net/ice/ice_rxtx_common_avx.h
@@ -11,6 +11,12 @@
 #pragma GCC diagnostic ignored "-Wcast-qual"
 #endif
 
+#if RTE_IOVA_AS_PA
+#define _PKT_DATA_OFF_U64(pkt) ((pkt)->buf_iova + (pkt)->data_off)
+#else
+#define _PKT_DATA_OFF_U64(pkt) ((u64)(pkt)->buf_addr + (pkt)->data_off)
+#endif
+
 #ifdef __AVX2__
 static __rte_always_inline void
 ice_rxq_rearm_common(struct ice_rx_queue *rxq, __rte_unused bool avx512)
@@ -54,9 +60,15 @@ ice_rxq_rearm_common(struct ice_rx_queue *rxq, __rte_unused bool avx512)
 		mb0 = rxep[0].mbuf;
 		mb1 = rxep[1].mbuf;
 
+#if RTE_IOVA_AS_PA
 		/* load buf_addr(lo 64bit) and buf_iova(hi 64bit) */
 		RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, buf_iova) !=
 				offsetof(struct rte_mbuf, buf_addr) + 8);
+#else
+		/* load buf_addr(lo 64bit) and next(hi 64bit) */
+		RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, next) !=
+				offsetof(struct rte_mbuf, buf_addr) + 8);
+#endif
 		vaddr0 = _mm_loadu_si128((__m128i *)&mb0->buf_addr);
 		vaddr1 = _mm_loadu_si128((__m128i *)&mb1->buf_addr);
@@ -97,9 +109,15 @@ ice_rxq_rearm_common(struct ice_rx_queue *rxq, __rte_unused bool avx512)
 		mb6 = rxep[6].mbuf;
 		mb7 = rxep[7].mbuf;
 
+#if RTE_IOVA_AS_PA
 		/* load buf_addr(lo 64bit) and buf_iova(hi 64bit) */
 		RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, buf_iova) !=
 				offsetof(struct rte_mbuf, buf_addr) + 8);
+#else
+		/* load buf_addr(lo 64bit) and next(hi 64bit) */
+		RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, next) !=
+				offsetof(struct rte_mbuf, buf_addr) + 8);
+#endif
 		vaddr0 = _mm_loadu_si128((__m128i *)&mb0->buf_addr);
 		vaddr1 = _mm_loadu_si128((__m128i *)&mb1->buf_addr);
 		vaddr2 = _mm_loadu_si128((__m128i *)&mb2->buf_addr);
@@ -161,9 +179,15 @@ ice_rxq_rearm_common(struct ice_rx_queue *rxq, __rte_unused bool avx512)
 		mb2 = rxep[2].mbuf;
 		mb3 = rxep[3].mbuf;
 
+#if RTE_IOVA_AS_PA
 		/* load buf_addr(lo 64bit) and buf_iova(hi 64bit) */
 		RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, buf_iova) !=
 				offsetof(struct rte_mbuf, buf_addr) + 8);
+#else
+		/* load buf_addr(lo 64bit) and next(hi 64bit) */
+		RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, next) !=
+				offsetof(struct rte_mbuf, buf_addr) + 8);
+#endif
 		vaddr0 = _mm_loadu_si128((__m128i *)&mb0->buf_addr);
 		vaddr1 = _mm_loadu_si128((__m128i *)&mb1->buf_addr);
 		vaddr2 = _mm_loadu_si128((__m128i *)&mb2->buf_addr);
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 31d6af42fd..b0fd51d37e 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -821,8 +821,7 @@ ice_vtx1(volatile struct ice_tx_desc *txdp,
 	if (offload)
 		ice_txd_enable_offload(pkt, &high_qw);
 
-	__m128i descriptor = _mm_set_epi64x(high_qw,
-				pkt->buf_iova + pkt->data_off);
+	__m128i descriptor = _mm_set_epi64x(high_qw, _PKT_DATA_OFF_U64(pkt));
 	_mm_store_si128((__m128i *)txdp, descriptor);
 }
 
@@ -869,15 +868,15 @@ ice_vtx(volatile struct ice_tx_desc *txdp,
 		__m256i desc2_3 =
 			_mm256_set_epi64x
 				(hi_qw3,
-				 pkt[3]->buf_iova + pkt[3]->data_off,
+				 _PKT_DATA_OFF_U64(pkt[3]),
 				 hi_qw2,
-				 pkt[2]->buf_iova + pkt[2]->data_off);
+				 _PKT_DATA_OFF_U64(pkt[2]));
 		__m256i desc0_1 =
 			_mm256_set_epi64x
 				(hi_qw1,
-				 pkt[1]->buf_iova + pkt[1]->data_off,
+				 _PKT_DATA_OFF_U64(pkt[1]),
 				 hi_qw0,
-				 pkt[0]->buf_iova + pkt[0]->data_off);
+				 _PKT_DATA_OFF_U64(pkt[0]));
 		_mm256_store_si256((void *)(txdp + 2), desc2_3);
 		_mm256_store_si256((void *)txdp, desc0_1);
 	}
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 5bfd5152df..3c74331a5d 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -56,8 +56,14 @@ ice_rxq_rearm(struct ice_rx_queue *rxq)
 		}
 	}
 
+#if RTE_IOVA_AS_PA
 	const __m512i iova_offsets = _mm512_set1_epi64
 		(offsetof(struct rte_mbuf, buf_iova));
+#else
+	const __m512i iova_offsets = _mm512_set1_epi64
+		(offsetof(struct rte_mbuf, buf_addr));
+#endif
+
 	const __m512i headroom = _mm512_set1_epi64(RTE_PKTMBUF_HEADROOM);
 
 #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
@@ -1092,8 +1098,7 @@ ice_vtx1(volatile struct ice_tx_desc *txdp,
 	if (do_offload)
 		ice_txd_enable_offload(pkt, &high_qw);
 
-	__m128i descriptor = _mm_set_epi64x(high_qw,
-				pkt->buf_iova + pkt->data_off);
+	__m128i descriptor = _mm_set_epi64x(high_qw, _PKT_DATA_OFF_U64(pkt));
 	_mm_store_si128((__m128i *)txdp, descriptor);
 }
 
@@ -1133,13 +1138,13 @@ ice_vtx(volatile struct ice_tx_desc *txdp, struct rte_mbuf **pkt,
 		__m512i desc0_3 =
 			_mm512_set_epi64
 				(hi_qw3,
-				 pkt[3]->buf_iova + pkt[3]->data_off,
+				 _PKT_DATA_OFF_U64(pkt[3]),
 				 hi_qw2,
-				 pkt[2]->buf_iova + pkt[2]->data_off,
+				 _PKT_DATA_OFF_U64(pkt[2]),
 				 hi_qw1,
-				 pkt[1]->buf_iova + pkt[1]->data_off,
+				 _PKT_DATA_OFF_U64(pkt[1]),
 				 hi_qw0,
-				 pkt[0]->buf_iova + pkt[0]->data_off);
+				 _PKT_DATA_OFF_U64(pkt[0]));
 		_mm512_storeu_si512((void *)txdp, desc0_3);
 	}
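
Side note on the Rx rearm hunk in ice_rxtx_vec_avx512.c, for illustration
only: the hunk's context suggests iova_offsets is the per-mbuf offset of the
address field the rearm path loads, and headroom the amount added to it.
Assuming that usage, the sketch below shows the same selection in scalar
form; struct mbuf_stub, rearm_base_addr() and PKTMBUF_HEADROOM_STUB are
hypothetical stand-ins for struct rte_mbuf, the per-lane 64-bit load and
RTE_PKTMBUF_HEADROOM.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define PKTMBUF_HEADROOM_STUB 128 /* stand-in for RTE_PKTMBUF_HEADROOM */

/* Hypothetical, simplified stand-in for struct rte_mbuf (illustration only). */
struct mbuf_stub {
	void     *buf_addr; /* virtual address of the data buffer */
	uint64_t  buf_iova; /* bus/IOVA address, used only in IOVA-as-PA builds */
};

/* Scalar view of the rearm change: pick the field to read (buf_iova with
 * IOVA-as-PA, buf_addr otherwise), then add the headroom, per 64-bit lane. */
static uint64_t
rearm_base_addr(const struct mbuf_stub *mb, int iova_as_pa)
{
	size_t off = iova_as_pa ? offsetof(struct mbuf_stub, buf_iova)
				: offsetof(struct mbuf_stub, buf_addr);
	uint64_t base;

	memcpy(&base, (const uint8_t *)mb + off, sizeof(base));
	return base + PKTMBUF_HEADROOM_STUB;
}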