From patchwork Fri Oct 16 09:44:27 2020
X-Patchwork-Submitter: "Guo, Jia"
X-Patchwork-Id: 81073
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Jeff Guo
To: jingjing.wu@intel.com, qi.z.zhang@intel.com, beilei.xing@intel.com,
 haiyue.wang@intel.com, qiming.yang@intel.com
Cc: dev@dpdk.org, ferruh.yigit@intel.com, mb@smartsharesystems.com,
 stephen@networkplumber.org, barbette@kth.se, Feifei.wang2@arm.com,
 bruce.richardson@intel.com, jia.guo@intel.com, helin.zhang@intel.com
Date: Fri, 16 Oct 2020 17:44:27 +0800
Message-Id: <20201016094431.96889-2-jia.guo@intel.com>
In-Reply-To: <20201016094431.96889-1-jia.guo@intel.com>
References: <20200827075452.1751-1-jia.guo@intel.com>
 <20201016094431.96889-1-jia.guo@intel.com>
Subject: [dpdk-dev] [PATCH v5 1/5] net/ixgbe: fix vector rx burst for ixgbe

The limitation of burst size in vector rx was removed, since the driver
should retrieve as many received packets as possible. The scattered
receive path now also uses a wrapper function to achieve the goal of
maximizing the burst.
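For illustration, the wrapper pattern applied throughout this series boils
down to the following standalone sketch. It is a simplified model, not the
driver code itself; demo_recv_burst32(), demo_recv_pkts() and DEMO_MAX_BURST
are hypothetical names:

#include <stdint.h>

#define DEMO_MAX_BURST 32	/* per-call limit of the inner burst routine */

/* Stand-in for the fixed-size vector routine; a real one would scan
 * descriptors and fill pkts[]. Returns how many packets it produced. */
static uint16_t
demo_recv_burst32(void *rxq, void **pkts, uint16_t nb_pkts)
{
	(void)rxq;
	(void)pkts;
	return nb_pkts;	/* sketch only: pretend the ring is never empty */
}

/* Wrapper: keep calling the fixed-size routine until the request is
 * satisfied or the inner routine returns a short (i.e. final) burst. */
static uint16_t
demo_recv_pkts(void *rxq, void **pkts, uint16_t nb_pkts)
{
	uint16_t retval = 0;

	while (nb_pkts > DEMO_MAX_BURST) {
		uint16_t burst;

		burst = demo_recv_burst32(rxq, pkts + retval, DEMO_MAX_BURST);
		retval += burst;
		nb_pkts -= burst;
		if (burst < DEMO_MAX_BURST)
			return retval;	/* ring drained, stop early */
	}

	return retval + demo_recv_burst32(rxq, pkts + retval, nb_pkts);
}

Returning early on a short inner burst avoids polling an already-drained
ring a second time.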
Bugzilla ID: 516
Fixes: b20971b6cca0 ("net/ixgbe: implement vector driver for ARM")
Fixes: 0e51f9dc4860 ("net/ixgbe: rename x86 vector driver file")

Signed-off-by: Jeff Guo
Tested-by: Feifei Wang
Acked-by: Morten Brørup
---
 drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c | 61 +++++++++++++++----------
 drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c  | 47 +++++++++++++------
 2 files changed, 70 insertions(+), 38 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
index aa27ee1777..4c81ae9dcf 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
@@ -130,17 +130,6 @@ desc_to_olflags_v(uint8x16x2_t sterr_tmp1, uint8x16x2_t sterr_tmp2,
 	rx_pkts[3]->ol_flags = vol.e[3];
 }
 
-/*
- * vPMD raw receive routine, only accept(nb_pkts >= RTE_IXGBE_DESCS_PER_LOOP)
- *
- * Notice:
- * - nb_pkts < RTE_IXGBE_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > RTE_IXGBE_MAX_RX_BURST, only scan RTE_IXGBE_MAX_RX_BURST
- *   numbers of DD bit
- * - floor align nb_pkts to a RTE_IXGBE_DESC_PER_LOOP power-of-two
- * - don't support ol_flags for rss and csum err
- */
-
 #define IXGBE_VPMD_DESC_EOP_MASK	0x02020202
 #define IXGBE_UINT8_BIT			(CHAR_BIT * sizeof(uint8_t))
 
@@ -206,6 +195,13 @@ desc_to_ptype_v(uint64x2_t descs[4], uint16_t pkt_type_mask,
 				vgetq_lane_u32(tunnel_check, 3));
 }
 
+/**
+ * vPMD raw receive routine, only accept(nb_pkts >= RTE_IXGBE_DESCS_PER_LOOP)
+ *
+ * Notice:
+ * - nb_pkts < RTE_IXGBE_DESCS_PER_LOOP, just return no packet
+ * - floor align nb_pkts to a RTE_IXGBE_DESC_PER_LOOP power-of-two
+ */
 static inline uint16_t
 _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 		   uint16_t nb_pkts, uint8_t *split_packet)
@@ -226,9 +222,6 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	uint16x8_t crc_adjust = {0, 0, rxq->crc_len, 0,
 				 rxq->crc_len, 0, 0, 0};
 
-	/* nb_pkts shall be less equal than RTE_IXGBE_MAX_RX_BURST */
-	nb_pkts = RTE_MIN(nb_pkts, RTE_IXGBE_MAX_RX_BURST);
-
 	/* nb_pkts has to be floor-aligned to RTE_IXGBE_DESCS_PER_LOOP */
 	nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, RTE_IXGBE_DESCS_PER_LOOP);
 
@@ -382,13 +375,11 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	return nb_pkts_recd;
 }
 
-/*
+/**
  * vPMD receive routine, only accept(nb_pkts >= RTE_IXGBE_DESCS_PER_LOOP)
  *
  * Notice:
  * - nb_pkts < RTE_IXGBE_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > RTE_IXGBE_MAX_RX_BURST, only scan RTE_IXGBE_MAX_RX_BURST
- *   numbers of DD bit
  * - floor align nb_pkts to a RTE_IXGBE_DESC_PER_LOOP power-of-two
  * - don't support ol_flags for rss and csum err
  */
@@ -399,19 +390,17 @@ ixgbe_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 	return _recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL);
 }
 
-/*
+/**
  * vPMD receive routine that reassembles scattered packets
  *
  * Notice:
  * - don't support ol_flags for rss and csum err
  * - nb_pkts < RTE_IXGBE_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > RTE_IXGBE_MAX_RX_BURST, only scan RTE_IXGBE_MAX_RX_BURST
- *   numbers of DD bit
  * - floor align nb_pkts to a RTE_IXGBE_DESC_PER_LOOP power-of-two
  */
-uint16_t
-ixgbe_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
-			      uint16_t nb_pkts)
+static uint16_t
+ixgbe_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			       uint16_t nb_pkts)
 {
 	struct ixgbe_rx_queue *rxq = rx_queue;
 	uint8_t split_flags[RTE_IXGBE_MAX_RX_BURST] = {0};
@@ -443,6 +432,32 @@ ixgbe_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 						 &split_flags[i]);
 }
 
+/**
+ * vPMD receive routine that reassembles scattered packets.
+ */
+uint16_t
+ixgbe_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			      uint16_t nb_pkts)
+{
+	uint16_t retval = 0;
+
+	while (nb_pkts > RTE_IXGBE_MAX_RX_BURST) {
+		uint16_t burst;
+
+		burst = ixgbe_recv_scattered_burst_vec(rx_queue,
+						       rx_pkts + retval,
+						       RTE_IXGBE_MAX_RX_BURST);
+		retval += burst;
+		nb_pkts -= burst;
+		if (burst < RTE_IXGBE_MAX_RX_BURST)
+			return retval;
+	}
+
+	return retval + ixgbe_recv_scattered_burst_vec(rx_queue,
+						       rx_pkts + retval,
+						       nb_pkts);
+}
+
 static inline void
 vtx1(volatile union ixgbe_adv_tx_desc *txdp,
 		struct rte_mbuf *pkt, uint64_t flags)
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
index e77a7f31ce..2bea39a41c 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
@@ -302,13 +302,11 @@ desc_to_ptype_v(__m128i descs[4], uint16_t pkt_type_mask,
 	get_packet_type(3, pkt_info, etqf_check, tunnel_check);
 }
 
-/*
+/**
  * vPMD raw receive routine, only accept(nb_pkts >= RTE_IXGBE_DESCS_PER_LOOP)
  *
  * Notice:
  * - nb_pkts < RTE_IXGBE_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > RTE_IXGBE_MAX_RX_BURST, only scan RTE_IXGBE_MAX_RX_BURST
- *   numbers of DD bit
  * - floor align nb_pkts to a RTE_IXGBE_DESC_PER_LOOP power-of-two
  */
 static inline uint16_t
@@ -344,9 +342,6 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	__m128i mbuf_init;
 	uint8_t vlan_flags;
 
-	/* nb_pkts shall be less equal than RTE_IXGBE_MAX_RX_BURST */
-	nb_pkts = RTE_MIN(nb_pkts, RTE_IXGBE_MAX_RX_BURST);
-
 	/* nb_pkts has to be floor-aligned to RTE_IXGBE_DESCS_PER_LOOP */
 	nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, RTE_IXGBE_DESCS_PER_LOOP);
 
@@ -556,13 +551,11 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	return nb_pkts_recd;
 }
 
-/*
+/**
  * vPMD receive routine, only accept(nb_pkts >= RTE_IXGBE_DESCS_PER_LOOP)
  *
  * Notice:
  * - nb_pkts < RTE_IXGBE_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > RTE_IXGBE_MAX_RX_BURST, only scan RTE_IXGBE_MAX_RX_BURST
- *   numbers of DD bit
  * - floor align nb_pkts to a RTE_IXGBE_DESC_PER_LOOP power-of-two
  */
 uint16_t
@@ -572,18 +565,16 @@ ixgbe_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 	return _recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL);
 }
 
-/*
+/**
  * vPMD receive routine that reassembles scattered packets
 *
 * Notice:
 * - nb_pkts < RTE_IXGBE_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > RTE_IXGBE_MAX_RX_BURST, only scan RTE_IXGBE_MAX_RX_BURST
- *   numbers of DD bit
 * - floor align nb_pkts to a RTE_IXGBE_DESC_PER_LOOP power-of-two
  */
-uint16_t
-ixgbe_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
-			      uint16_t nb_pkts)
+static uint16_t
+ixgbe_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			       uint16_t nb_pkts)
 {
 	struct ixgbe_rx_queue *rxq = rx_queue;
 	uint8_t split_flags[RTE_IXGBE_MAX_RX_BURST] = {0};
@@ -615,6 +606,32 @@ ixgbe_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 						 &split_flags[i]);
 }
 
+/**
+ * vPMD receive routine that reassembles scattered packets.
+ */
+uint16_t
+ixgbe_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			      uint16_t nb_pkts)
+{
+	uint16_t retval = 0;
+
+	while (nb_pkts > RTE_IXGBE_MAX_RX_BURST) {
+		uint16_t burst;
+
+		burst = ixgbe_recv_scattered_burst_vec(rx_queue,
+						       rx_pkts + retval,
+						       RTE_IXGBE_MAX_RX_BURST);
+		retval += burst;
+		nb_pkts -= burst;
+		if (burst < RTE_IXGBE_MAX_RX_BURST)
+			return retval;
+	}
+
+	return retval + ixgbe_recv_scattered_burst_vec(rx_queue,
+						       rx_pkts + retval,
+						       nb_pkts);
+}
+
 static inline void
 vtx1(volatile union ixgbe_adv_tx_desc *txdp,
 		struct rte_mbuf *pkt, uint64_t flags)

From patchwork Fri Oct 16 09:44:28 2020
X-Patchwork-Submitter: "Guo, Jia"
X-Patchwork-Id: 81074
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Jeff Guo
To: jingjing.wu@intel.com, qi.z.zhang@intel.com, beilei.xing@intel.com,
 haiyue.wang@intel.com, qiming.yang@intel.com
Cc: dev@dpdk.org, ferruh.yigit@intel.com, mb@smartsharesystems.com,
 stephen@networkplumber.org, barbette@kth.se, Feifei.wang2@arm.com,
 bruce.richardson@intel.com, jia.guo@intel.com, helin.zhang@intel.com
Date: Fri, 16 Oct 2020 17:44:28 +0800
Message-Id: <20201016094431.96889-3-jia.guo@intel.com>
In-Reply-To: <20201016094431.96889-1-jia.guo@intel.com>
References: <20200827075452.1751-1-jia.guo@intel.com>
 <20201016094431.96889-1-jia.guo@intel.com>
Subject: [dpdk-dev] [PATCH v5 2/5] net/i40e: fix vector rx burst for i40e

The limitation of burst size in vector rx was removed, since the driver
should retrieve as many received packets as possible. The scattered
receive path now also uses a wrapper function to achieve the goal of
maximizing the burst.
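The "floor align" notice that these patches keep in the comments refers to
the rounding the raw routine applies before scanning descriptors: with four
descriptors handled per vector loop, a request is silently truncated to a
multiple of four. A small illustration, assuming the usual power-of-two
behaviour of RTE_ALIGN_FLOOR from rte_common.h (ALIGN_FLOOR here is a local
stand-in, not the DPDK macro itself):

#include <stdint.h>
#include <stdio.h>

/* Same shape as DPDK's RTE_ALIGN_FLOOR for power-of-two alignments. */
#define ALIGN_FLOOR(val, align) ((val) & ~((align) - 1))

int
main(void)
{
	uint16_t descs_per_loop = 4;	/* e.g. RTE_I40E_DESCS_PER_LOOP */

	/* Prints: 3 -> 0, 19 -> 16, 35 -> 32. A request smaller than one
	 * vector loop therefore yields no packets at all. */
	for (uint16_t nb_pkts = 3; nb_pkts <= 35; nb_pkts += 16)
		printf("%u -> %u\n", (unsigned)nb_pkts,
		       (unsigned)ALIGN_FLOOR(nb_pkts, descs_per_loop));
	return 0;
}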
Bugzilla ID: 516
Fixes: 5b463eda8d26 ("net/i40e: make vector driver filenames consistent")
Fixes: ae0eb310f253 ("net/i40e: implement vector PMD for ARM")
Fixes: c3def6a8724c ("net/i40e: implement vector PMD for altivec")

Signed-off-by: Jeff Guo
Acked-by: Morten Brørup
---
 drivers/net/i40e/i40e_rxtx_vec_altivec.c | 59 +++++++++++++++++-------
 drivers/net/i40e/i40e_rxtx_vec_neon.c    | 48 ++++++++++++++-----
 drivers/net/i40e/i40e_rxtx_vec_sse.c     | 48 ++++++++++++++-----
 3 files changed, 114 insertions(+), 41 deletions(-)

diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index 6862a017e1..d3238bfb6a 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -188,11 +188,13 @@ desc_to_ptype_v(vector unsigned long descs[4], struct rte_mbuf **rx_pkts,
 		ptype_tbl[(*(vector unsigned char *)&ptype1)[8]];
 }
 
- /* Notice:
-  * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet
-  * - nb_pkts > RTE_I40E_VPMD_RX_BURST, only scan RTE_I40E_VPMD_RX_BURST
-  *   numbers of DD bits
-  */
+/**
+ * vPMD raw receive routine, only accept(nb_pkts >= RTE_I40E_DESCS_PER_LOOP)
+ *
+ * Notice:
+ * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet
+ * - floor align nb_pkts to a RTE_I40E_DESCS_PER_LOOP power-of-two
+ */
 static inline uint16_t
 _recv_raw_pkts_vec(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 		   uint16_t nb_pkts, uint8_t *split_packet)
@@ -214,9 +216,6 @@ _recv_raw_pkts_vec(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	};
 	vector unsigned long dd_check, eop_check;
 
-	/* nb_pkts shall be less equal than RTE_I40E_MAX_RX_BURST */
-	nb_pkts = RTE_MIN(nb_pkts, RTE_I40E_MAX_RX_BURST);
-
 	/* nb_pkts has to be floor-aligned to RTE_I40E_DESCS_PER_LOOP */
 	nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, RTE_I40E_DESCS_PER_LOOP);
 
@@ -459,15 +458,15 @@ i40e_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 	return _recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL);
 }
 
- /* vPMD receive routine that reassembles scattered packets
-  * Notice:
-  * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet
-  * - nb_pkts > RTE_I40E_VPMD_RX_BURST, only scan RTE_I40E_VPMD_RX_BURST
-  *   numbers of DD bits
-  */
-uint16_t
-i40e_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
-			     uint16_t nb_pkts)
+/**
+ * vPMD receive routine that reassembles single burst of 32 scattered packets
+ *
+ * Notice:
+ * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet
+ */
+static uint16_t
+i40e_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			      uint16_t nb_pkts)
 {
 	struct i40e_rx_queue *rxq = rx_queue;
 	uint8_t split_flags[RTE_I40E_VPMD_RX_BURST] = {0};
@@ -500,6 +499,32 @@ i40e_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 						 &split_flags[i]);
 }
 
+/**
+ * vPMD receive routine that reassembles scattered packets.
+ */
+uint16_t
+i40e_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			     uint16_t nb_pkts)
+{
+	uint16_t retval = 0;
+
+	while (nb_pkts > RTE_I40E_VPMD_RX_BURST) {
+		uint16_t burst;
+
+		burst = i40e_recv_scattered_burst_vec(rx_queue,
+						      rx_pkts + retval,
+						      RTE_I40E_VPMD_RX_BURST);
+		retval += burst;
+		nb_pkts -= burst;
+		if (burst < RTE_I40E_VPMD_RX_BURST)
+			return retval;
+	}
+
+	return retval + i40e_recv_scattered_burst_vec(rx_queue,
+						      rx_pkts + retval,
+						      nb_pkts);
+}
+
 static inline void
 vtx1(volatile struct i40e_tx_desc *txdp,
 	struct rte_mbuf *pkt, uint64_t flags)
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index 543ecadb07..f094de69ae 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -187,11 +187,12 @@ desc_to_ptype_v(uint64x2_t descs[4], struct rte_mbuf **__rte_restrict rx_pkts,
 
 }
 
- /*
+/**
+ * vPMD raw receive routine, only accept(nb_pkts >= RTE_I40E_DESCS_PER_LOOP)
+ *
  * Notice:
  * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > RTE_I40E_VPMD_RX_BURST, only scan RTE_I40E_VPMD_RX_BURST
- *   numbers of DD bits
+ * - floor align nb_pkts to a RTE_I40E_DESCS_PER_LOOP power-of-two
  */
 static inline uint16_t
 _recv_raw_pkts_vec(struct i40e_rx_queue *__rte_restrict rxq,
@@ -230,9 +231,6 @@ _recv_raw_pkts_vec(struct i40e_rx_queue *__rte_restrict rxq,
 		0, 0, 0 /* ignore non-length fields */
 		};
 
-	/* nb_pkts shall be less equal than RTE_I40E_MAX_RX_BURST */
-	nb_pkts = RTE_MIN(nb_pkts, RTE_I40E_MAX_RX_BURST);
-
 	/* nb_pkts has to be floor-aligned to RTE_I40E_DESCS_PER_LOOP */
 	nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, RTE_I40E_DESCS_PER_LOOP);
 
@@ -439,15 +437,15 @@ i40e_recv_pkts_vec(void *__rte_restrict rx_queue,
 	return _recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL);
 }
 
- /* vPMD receive routine that reassembles scattered packets
+/**
+ * vPMD receive routine that reassembles single burst of 32 scattered packets
+ *
  * Notice:
  * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > RTE_I40E_VPMD_RX_BURST, only scan RTE_I40E_VPMD_RX_BURST
- *   numbers of DD bits
  */
-uint16_t
-i40e_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
-			     uint16_t nb_pkts)
+static uint16_t
+i40e_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			      uint16_t nb_pkts)
 {
 	struct i40e_rx_queue *rxq = rx_queue;
 
@@ -482,6 +480,32 @@ i40e_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 						 &split_flags[i]);
 }
 
+/**
+ * vPMD receive routine that reassembles scattered packets.
+ */
+uint16_t
+i40e_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			     uint16_t nb_pkts)
+{
+	uint16_t retval = 0;
+
+	while (nb_pkts > RTE_I40E_VPMD_RX_BURST) {
+		uint16_t burst;
+
+		burst = i40e_recv_scattered_burst_vec(rx_queue,
+						      rx_pkts + retval,
+						      RTE_I40E_VPMD_RX_BURST);
+		retval += burst;
+		nb_pkts -= burst;
+		if (burst < RTE_I40E_VPMD_RX_BURST)
+			return retval;
+	}
+
+	return retval + i40e_recv_scattered_burst_vec(rx_queue,
+						      rx_pkts + retval,
+						      nb_pkts);
+}
+
 static inline void
 vtx1(volatile struct i40e_tx_desc *txdp,
 		struct rte_mbuf *pkt, uint64_t flags)
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index 240ce478ab..4b2b6a28fc 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -342,11 +342,12 @@ desc_to_ptype_v(__m128i descs[4], struct rte_mbuf **rx_pkts,
 	rx_pkts[3]->packet_type = ptype_tbl[_mm_extract_epi8(ptype1, 8)];
 }
 
- /*
+/**
+ * vPMD raw receive routine, only accept(nb_pkts >= RTE_I40E_DESCS_PER_LOOP)
+ *
  * Notice:
  * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > RTE_I40E_VPMD_RX_BURST, only scan RTE_I40E_VPMD_RX_BURST
- *   numbers of DD bits
+ * - floor align nb_pkts to a RTE_I40E_DESCS_PER_LOOP power-of-two
  */
 static inline uint16_t
 _recv_raw_pkts_vec(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
@@ -378,9 +379,6 @@ _recv_raw_pkts_vec(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 		offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8);
 	__m128i dd_check, eop_check;
 
-	/* nb_pkts shall be less equal than RTE_I40E_MAX_RX_BURST */
-	nb_pkts = RTE_MIN(nb_pkts, RTE_I40E_MAX_RX_BURST);
-
 	/* nb_pkts has to be floor-aligned to RTE_I40E_DESCS_PER_LOOP */
 	nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, RTE_I40E_DESCS_PER_LOOP);
 
@@ -605,15 +603,15 @@ i40e_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 	return _recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL);
 }
 
- /* vPMD receive routine that reassembles scattered packets
+/**
+ * vPMD receive routine that reassembles single burst of 32 scattered packets
+ *
  * Notice:
  * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > RTE_I40E_VPMD_RX_BURST, only scan RTE_I40E_VPMD_RX_BURST
- *   numbers of DD bits
  */
-uint16_t
-i40e_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
-			     uint16_t nb_pkts)
+static uint16_t
+i40e_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			      uint16_t nb_pkts)
 {
 	struct i40e_rx_queue *rxq = rx_queue;
 
@@ -648,6 +646,32 @@ i40e_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 						 &split_flags[i]);
 }
 
+/**
+ * vPMD receive routine that reassembles scattered packets.
+ */
+uint16_t
+i40e_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			     uint16_t nb_pkts)
+{
+	uint16_t retval = 0;
+
+	while (nb_pkts > RTE_I40E_VPMD_RX_BURST) {
+		uint16_t burst;
+
+		burst = i40e_recv_scattered_burst_vec(rx_queue,
+						      rx_pkts + retval,
+						      RTE_I40E_VPMD_RX_BURST);
+		retval += burst;
+		nb_pkts -= burst;
+		if (burst < RTE_I40E_VPMD_RX_BURST)
+			return retval;
+	}
+
+	return retval + i40e_recv_scattered_burst_vec(rx_queue,
+						      rx_pkts + retval,
+						      nb_pkts);
+}
+
 static inline void
 vtx1(volatile struct i40e_tx_desc *txdp,
 		struct rte_mbuf *pkt, uint64_t flags)

From patchwork Fri Oct 16 09:44:29 2020
X-Patchwork-Submitter: "Guo, Jia"
X-Patchwork-Id: 81075
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Jeff Guo
To: jingjing.wu@intel.com, qi.z.zhang@intel.com, beilei.xing@intel.com,
 haiyue.wang@intel.com, qiming.yang@intel.com
Cc: dev@dpdk.org, ferruh.yigit@intel.com, mb@smartsharesystems.com,
 stephen@networkplumber.org, barbette@kth.se, Feifei.wang2@arm.com,
 bruce.richardson@intel.com, jia.guo@intel.com, helin.zhang@intel.com,
 Yingya Han
Date: Fri, 16 Oct 2020 17:44:29 +0800
Message-Id: <20201016094431.96889-4-jia.guo@intel.com>
In-Reply-To: <20201016094431.96889-1-jia.guo@intel.com>
References: <20200827075452.1751-1-jia.guo@intel.com>
 <20201016094431.96889-1-jia.guo@intel.com>
Subject: [dpdk-dev] [PATCH v5 3/5] net/ice: fix vector rx burst for ice

The limitation of burst size in vector rx was removed, since the driver
should retrieve as many received packets as possible. The scattered
receive path now also uses a wrapper function to achieve the goal of
maximizing the burst.
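The reason each inner call stays capped at 32 packets is visible in the
diff that follows: the burst routine keeps a fixed-size split-flag scratch
array on its stack (e.g. uint8_t split_flags[ICE_VPMD_RX_BURST] = {0}), one
flag per received packet. A minimal sketch of that invariant, with
hypothetical demo_* names rather than the driver's exact code:

#include <assert.h>
#include <stdint.h>

#define DEMO_VPMD_RX_BURST 32	/* matches the split_flags capacity */

static uint16_t
demo_recv_scattered_burst(void **pkts, uint16_t nb_pkts)
{
	/* One flag per packet marks whether it ends a scattered frame;
	 * the array's capacity is what fixes the per-call burst cap. */
	uint8_t split_flags[DEMO_VPMD_RX_BURST] = {0};

	assert(nb_pkts <= DEMO_VPMD_RX_BURST);
	/* ... the raw vector receive would fill pkts[] and
	 * split_flags[0..nb_pkts-1], then segments flagged as split
	 * would be stitched back into whole packets ... */
	(void)pkts;
	(void)split_flags;
	return nb_pkts;	/* sketch only */
}

The wrapper therefore never has to grow this buffer; it just calls the
capped routine repeatedly.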
Bugzilla ID: 516
Fixes: c68a52b8b38c ("net/ice: support vector SSE in Rx")

Signed-off-by: Jeff Guo
Tested-by: Yingya Han
Acked-by: Morten Brørup
---
 drivers/net/ice/ice_rxtx_vec_sse.c | 46 +++++++++++++++++++++++-------
 1 file changed, 35 insertions(+), 11 deletions(-)

diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 1afd96ac9d..e950c1b922 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -254,10 +254,11 @@ ice_rx_desc_to_ptype_v(__m128i descs[4], struct rte_mbuf **rx_pkts,
 }
 
 /**
+ * vPMD raw receive routine, only accept(nb_pkts >= ICE_DESCS_PER_LOOP)
+ *
  * Notice:
  * - nb_pkts < ICE_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > ICE_VPMD_RX_BURST, only scan ICE_VPMD_RX_BURST
- *   numbers of DD bits
+ * - floor align nb_pkts to a ICE_DESCS_PER_LOOP power-of-two
  */
 static inline uint16_t
 _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
@@ -314,9 +315,6 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	const __m128i eop_check = _mm_set_epi64x(0x0000000200000002LL,
 						 0x0000000200000002LL);
 
-	/* nb_pkts shall be less equal than ICE_MAX_RX_BURST */
-	nb_pkts = RTE_MIN(nb_pkts, ICE_MAX_RX_BURST);
-
 	/* nb_pkts has to be floor-aligned to ICE_DESCS_PER_LOOP */
 	nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, ICE_DESCS_PER_LOOP);
 
@@ -560,15 +558,15 @@ ice_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 	return _ice_recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL);
 }
 
-/* vPMD receive routine that reassembles scattered packets
+/**
+ * vPMD receive routine that reassembles single burst of 32 scattered packets
+ *
  * Notice:
  * - nb_pkts < ICE_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > ICE_VPMD_RX_BURST, only scan ICE_VPMD_RX_BURST
- *   numbers of DD bits
  */
-uint16_t
-ice_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
-			    uint16_t nb_pkts)
+static uint16_t
+ice_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			     uint16_t nb_pkts)
 {
 	struct ice_rx_queue *rxq = rx_queue;
 	uint8_t split_flags[ICE_VPMD_RX_BURST] = {0};
@@ -602,6 +600,32 @@ ice_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 						 &split_flags[i]);
 }
 
+/**
+ * vPMD receive routine that reassembles scattered packets.
+ */
+uint16_t
+ice_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			    uint16_t nb_pkts)
+{
+	uint16_t retval = 0;
+
+	while (nb_pkts > ICE_VPMD_RX_BURST) {
+		uint16_t burst;
+
+		burst = ice_recv_scattered_burst_vec(rx_queue,
+						     rx_pkts + retval,
+						     ICE_VPMD_RX_BURST);
+		retval += burst;
+		nb_pkts -= burst;
+		if (burst < ICE_VPMD_RX_BURST)
+			return retval;
+	}
+
+	return retval + ice_recv_scattered_burst_vec(rx_queue,
+						     rx_pkts + retval,
+						     nb_pkts);
+}
+
 static inline void
 ice_vtx1(volatile struct ice_tx_desc *txdp, struct rte_mbuf *pkt,
	 uint64_t flags)

From patchwork Fri Oct 16 09:44:30 2020
X-Patchwork-Submitter: "Guo, Jia"
X-Patchwork-Id: 81076
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Jeff Guo
To: jingjing.wu@intel.com, qi.z.zhang@intel.com, beilei.xing@intel.com,
 haiyue.wang@intel.com, qiming.yang@intel.com
Cc: dev@dpdk.org, ferruh.yigit@intel.com, mb@smartsharesystems.com,
 stephen@networkplumber.org, barbette@kth.se, Feifei.wang2@arm.com,
 bruce.richardson@intel.com, jia.guo@intel.com, helin.zhang@intel.com
Date: Fri, 16 Oct 2020 17:44:30 +0800
Message-Id: <20201016094431.96889-5-jia.guo@intel.com>
In-Reply-To: <20201016094431.96889-1-jia.guo@intel.com>
References: <20200827075452.1751-1-jia.guo@intel.com>
 <20201016094431.96889-1-jia.guo@intel.com>
Subject: [dpdk-dev] [PATCH v5 4/5] net/fm10k: fix vector rx burst for fm10k

The scattered receive path now uses a wrapper function to achieve the
goal of maximizing the burst.
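From the application side, the practical effect of this series is that a
single rte_eth_rx_burst() call with a large mbuf table can now drain more
than 32 packets from a vector-driven scattered Rx queue. A usage sketch
under that assumption (the port/queue ids, RX_BURST size and demo_rx_loop()
are illustrative, not part of any patch):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define RX_BURST 128	/* deliberately larger than the old 32-packet cap */

static void
demo_rx_loop(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *pkts[RX_BURST];
	uint16_t nb_rx;

	for (;;) {
		/* May now return up to RX_BURST packets in one call. */
		nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, RX_BURST);

		for (uint16_t i = 0; i < nb_rx; i++)
			rte_pktmbuf_free(pkts[i]);	/* placeholder work */
	}
}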
Bugzilla ID: 516
Fixes: fe65e1e1ce61 ("fm10k: add vector scatter Rx")

Signed-off-by: Jeff Guo
Acked-by: Morten Brørup
---
 drivers/net/fm10k/fm10k_rxtx_vec.c | 39 ++++++++++++++++++++++++------
 1 file changed, 31 insertions(+), 8 deletions(-)

diff --git a/drivers/net/fm10k/fm10k_rxtx_vec.c b/drivers/net/fm10k/fm10k_rxtx_vec.c
index eff3933b5c..6fcc939ad9 100644
--- a/drivers/net/fm10k/fm10k_rxtx_vec.c
+++ b/drivers/net/fm10k/fm10k_rxtx_vec.c
@@ -645,18 +645,15 @@ fm10k_reassemble_packets(struct fm10k_rx_queue *rxq,
 	return pkt_idx;
 }
 
-/*
- * vPMD receive routine that reassembles scattered packets
+/**
+ * vPMD receive routine that reassembles single burst of 32 scattered packets
  *
  * Notice:
  * - don't support ol_flags for rss and csum err
- * - nb_pkts > RTE_FM10K_MAX_RX_BURST, only scan RTE_FM10K_MAX_RX_BURST
- *   numbers of DD bit
  */
-uint16_t
-fm10k_recv_scattered_pkts_vec(void *rx_queue,
-			      struct rte_mbuf **rx_pkts,
-			      uint16_t nb_pkts)
+static uint16_t
+fm10k_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			       uint16_t nb_pkts)
 {
 	struct fm10k_rx_queue *rxq = rx_queue;
 	uint8_t split_flags[RTE_FM10K_MAX_RX_BURST] = {0};
@@ -691,6 +688,32 @@ fm10k_recv_scattered_pkts_vec(void *rx_queue,
 					&split_flags[i]);
 }
 
+/**
+ * vPMD receive routine that reassembles scattered packets.
+ */
+uint16_t
+fm10k_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			      uint16_t nb_pkts)
+{
+	uint16_t retval = 0;
+
+	while (nb_pkts > RTE_FM10K_MAX_RX_BURST) {
+		uint16_t burst;
+
+		burst = fm10k_recv_scattered_burst_vec(rx_queue,
+						       rx_pkts + retval,
+						       RTE_FM10K_MAX_RX_BURST);
+		retval += burst;
+		nb_pkts -= burst;
+		if (burst < RTE_FM10K_MAX_RX_BURST)
+			return retval;
+	}
+
+	return retval + fm10k_recv_scattered_burst_vec(rx_queue,
+						       rx_pkts + retval,
+						       nb_pkts);
+}
+
 static const struct fm10k_txq_ops vec_txq_ops = {
	.reset = fm10k_reset_tx_queue,
 };

From patchwork Fri Oct 16 09:44:31 2020
X-Patchwork-Submitter: "Guo, Jia"
X-Patchwork-Id: 81077
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Jeff Guo
To: jingjing.wu@intel.com, qi.z.zhang@intel.com, beilei.xing@intel.com,
 haiyue.wang@intel.com, qiming.yang@intel.com
Cc: dev@dpdk.org, ferruh.yigit@intel.com, mb@smartsharesystems.com,
 stephen@networkplumber.org, barbette@kth.se, Feifei.wang2@arm.com,
 bruce.richardson@intel.com, jia.guo@intel.com, helin.zhang@intel.com
Date: Fri, 16 Oct 2020 17:44:31 +0800
Message-Id: <20201016094431.96889-6-jia.guo@intel.com>
In-Reply-To: <20201016094431.96889-1-jia.guo@intel.com>
References: <20200827075452.1751-1-jia.guo@intel.com>
 <20201016094431.96889-1-jia.guo@intel.com>
Subject: [dpdk-dev] [PATCH v5 5/5] net/iavf: fix vector rx burst for iavf

The limitation of burst size in vector rx was removed, since the driver
should retrieve as many received packets as possible. The scattered
receive path now also uses a wrapper function to achieve the goal of
maximizing the burst.

Bugzilla ID: 516
Fixes: 319c421f3890 ("net/avf: enable SSE Rx Tx")
Fixes: 1162f5a0ef31 ("net/iavf: support flexible Rx descriptor in SSE path")
Fixes: 5b6e8859081d ("net/iavf: support flexible Rx descriptor in AVX path")

Signed-off-by: Jeff Guo
Acked-by: Morten Brørup
Tested-by: Ling, Wei
---
 drivers/net/iavf/iavf_rxtx_vec_sse.c | 103 ++++++++++++++++++++-------
 1 file changed, 78 insertions(+), 25 deletions(-)

diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 85c5bd4af0..11acaa029e 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -379,10 +379,12 @@ flex_desc_to_ptype_v(__m128i descs[4], struct rte_mbuf **rx_pkts,
 	rx_pkts[3]->packet_type = type_table[_mm_extract_epi16(ptype_all, 7)];
 }
 
-/* Notice:
+/**
+ * vPMD raw receive routine, only accept(nb_pkts >= IAVF_VPMD_DESCS_PER_LOOP)
+ *
+ * Notice:
  * - nb_pkts < IAVF_VPMD_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > IAVF_VPMD_RX_MAX_BURST, only scan IAVF_VPMD_RX_MAX_BURST
- *   numbers of DD bits
+ * - floor align nb_pkts to a IAVF_VPMD_DESCS_PER_LOOP power-of-two
 */
 static inline uint16_t
 _recv_raw_pkts_vec(struct iavf_rx_queue *rxq, struct rte_mbuf **rx_pkts,
@@ -413,9 +415,6 @@ _recv_raw_pkts_vec(struct iavf_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 		offsetof(struct rte_mbuf, rx_descriptor_fields1) + 8);
 	__m128i dd_check, eop_check;
 
-	/* nb_pkts shall be less equal than IAVF_VPMD_RX_MAX_BURST */
-	nb_pkts = RTE_MIN(nb_pkts, IAVF_VPMD_RX_MAX_BURST);
-
 	/* nb_pkts has to be floor-aligned to IAVF_VPMD_DESCS_PER_LOOP */
 	nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, IAVF_VPMD_DESCS_PER_LOOP);
 
@@ -627,10 +626,13 @@ _recv_raw_pkts_vec(struct iavf_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	return nb_pkts_recd;
 }
 
-/* Notice:
+/**
+ * vPMD raw receive routine for flex RxD,
+ * only accept(nb_pkts >= IAVF_VPMD_DESCS_PER_LOOP)
+ *
+ * Notice:
  * - nb_pkts < IAVF_VPMD_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > IAVF_VPMD_RX_MAX_BURST, only scan IAVF_VPMD_RX_MAX_BURST
- *   numbers of DD bits
+ * - floor align nb_pkts to a IAVF_VPMD_DESCS_PER_LOOP power-of-two
 */
 static inline uint16_t
 _recv_raw_pkts_vec_flex_rxd(struct iavf_rx_queue *rxq,
@@ -688,9 +690,6 @@ _recv_raw_pkts_vec_flex_rxd(struct iavf_rx_queue *rxq,
 	const __m128i eop_check = _mm_set_epi64x(0x0000000200000002LL,
 						 0x0000000200000002LL);
 
-	/* nb_pkts shall be less equal than IAVF_VPMD_RX_MAX_BURST */
-	nb_pkts = RTE_MIN(nb_pkts, IAVF_VPMD_RX_MAX_BURST);
-
 	/* nb_pkts has to be floor-aligned to IAVF_VPMD_DESCS_PER_LOOP */
 	nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, IAVF_VPMD_DESCS_PER_LOOP);
 
@@ -945,15 +944,15 @@ iavf_recv_pkts_vec_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts,
 	return _recv_raw_pkts_vec_flex_rxd(rx_queue, rx_pkts, nb_pkts, NULL);
 }
 
-/* vPMD receive routine that reassembles scattered packets
+/**
+ * vPMD receive routine that reassembles single burst of 32 scattered packets
+ *
  * Notice:
  * - nb_pkts < IAVF_VPMD_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > VPMD_RX_MAX_BURST, only scan IAVF_VPMD_RX_MAX_BURST
- *   numbers of DD bits
 */
-uint16_t
-iavf_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
-			     uint16_t nb_pkts)
+static uint16_t
+iavf_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			      uint16_t nb_pkts)
 {
 	struct iavf_rx_queue *rxq = rx_queue;
 	uint8_t split_flags[IAVF_VPMD_RX_MAX_BURST] = {0};
@@ -986,16 +985,43 @@ iavf_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 					     &split_flags[i]);
 }
 
-/* vPMD receive routine that reassembles scattered packets for flex RxD
+/**
+ * vPMD receive routine that reassembles scattered packets.
+ */
+uint16_t
+iavf_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
+			     uint16_t nb_pkts)
+{
+	uint16_t retval = 0;
+
+	while (nb_pkts > IAVF_VPMD_RX_MAX_BURST) {
+		uint16_t burst;
+
+		burst = iavf_recv_scattered_burst_vec(rx_queue,
+						      rx_pkts + retval,
+						      IAVF_VPMD_RX_MAX_BURST);
+		retval += burst;
+		nb_pkts -= burst;
+		if (burst < IAVF_VPMD_RX_MAX_BURST)
+			return retval;
+	}
+
+	return retval + iavf_recv_scattered_burst_vec(rx_queue,
+						      rx_pkts + retval,
+						      nb_pkts);
+}
+
+/**
+ * vPMD receive routine that reassembles single burst of 32 scattered packets
+ * for flex RxD
+ *
  * Notice:
  * - nb_pkts < IAVF_VPMD_DESCS_PER_LOOP, just return no packet
- * - nb_pkts > VPMD_RX_MAX_BURST, only scan IAVF_VPMD_RX_MAX_BURST
- *   numbers of DD bits
 */
-uint16_t
-iavf_recv_scattered_pkts_vec_flex_rxd(void *rx_queue,
-				      struct rte_mbuf **rx_pkts,
-				      uint16_t nb_pkts)
+static uint16_t
+iavf_recv_scattered_burst_vec_flex_rxd(void *rx_queue,
+				       struct rte_mbuf **rx_pkts,
+				       uint16_t nb_pkts)
 {
 	struct iavf_rx_queue *rxq = rx_queue;
 	uint8_t split_flags[IAVF_VPMD_RX_MAX_BURST] = {0};
@@ -1028,6 +1054,33 @@ iavf_recv_scattered_pkts_vec_flex_rxd(void *rx_queue,
 					     &split_flags[i]);
 }
 
+/**
+ * vPMD receive routine that reassembles scattered packets for flex RxD
+ */
+uint16_t
+iavf_recv_scattered_pkts_vec_flex_rxd(void *rx_queue,
+				      struct rte_mbuf **rx_pkts,
+				      uint16_t nb_pkts)
+{
+	uint16_t retval = 0;
+
+	while (nb_pkts > IAVF_VPMD_RX_MAX_BURST) {
+		uint16_t burst;
+
+		burst = iavf_recv_scattered_burst_vec_flex_rxd(rx_queue,
+						rx_pkts + retval,
+						IAVF_VPMD_RX_MAX_BURST);
+		retval += burst;
+		nb_pkts -= burst;
+		if (burst < IAVF_VPMD_RX_MAX_BURST)
+			return retval;
+	}
+
+	return retval + iavf_recv_scattered_burst_vec_flex_rxd(rx_queue,
+						rx_pkts + retval,
+						nb_pkts);
+}
+
 static inline void
 vtx1(volatile struct iavf_tx_desc *txdp,
	struct rte_mbuf *pkt, uint64_t flags)
 {
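As a sanity check of the wrapper's termination logic shared by all five
patches, the following toy harness replays the loop against a mock ring
holding 45 packets: a request for 100 ends after inner bursts of 32 and 13,
since a short burst signals the ring is drained. Plain C with no DPDK
dependency; all names are illustrative:

#include <stdint.h>
#include <stdio.h>

#define DEMO_MAX_BURST 32

static uint16_t ring_avail = 45;	/* packets sitting in the mock ring */

static uint16_t
mock_recv_burst(uint16_t nb_pkts)
{
	uint16_t n = (nb_pkts < ring_avail) ? nb_pkts : ring_avail;

	ring_avail -= n;
	return n;
}

static uint16_t
demo_recv_pkts(uint16_t nb_pkts)
{
	uint16_t retval = 0;

	while (nb_pkts > DEMO_MAX_BURST) {
		uint16_t burst = mock_recv_burst(DEMO_MAX_BURST);

		retval += burst;
		nb_pkts -= burst;
		if (burst < DEMO_MAX_BURST)
			return retval;	/* short burst: stop polling */
	}
	return retval + mock_recv_burst(nb_pkts);
}

int
main(void)
{
	printf("received %u packets\n", demo_recv_pkts(100));	/* 45 */
	return 0;
}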