From patchwork Fri Jan 8 08:21:23 2021
From: Ruifeng Wang
To: Jerin Jacob, Ruifeng Wang, Bruce Richardson, Vladimir Medvedkin, Jianbo Liu
Cc: dev@dpdk.org, nd@arm.com, drc@linux.vnet.ibm.com, honnappa.nagarahalli@arm.com, stable@dpdk.org
Date: Fri, 8 Jan 2021 08:21:23 +0000
Message-Id: <20210108082127.1061538-2-ruifeng.wang@arm.com>
In-Reply-To: <20210108082127.1061538-1-ruifeng.wang@arm.com>
Subject: [dpdk-dev] [PATCH 1/4] lpm: fix vector lookup for Arm

rte_lpm_lookupx4 could return a wrong next hop when more than 256 tbl8
groups are created. The cause is an incorrect type cast of the tbl8
group index stored in the tbl24 entry: the cast truncates the group
index, so the wrong tbl8 group is searched. Fix the issue by applying
the proper mask to the tbl24 entry to extract the tbl8 group index.
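To illustrate the truncation, a minimal standalone sketch (not DPDK
code; the two constants mirror their rte_lpm.h definitions):

#include <stdint.h>
#include <stdio.h>

#define RTE_LPM_TBL8_GROUP_NUM_ENTRIES	256
#define RTE_LPM_VALID_EXT_ENTRY_BITMASK	0x03000000

int main(void)
{
	/* A tbl24 entry with the flag bits (24-25) set and tbl8 group
	 * index 300 in the low 24 bits -- an index above 255. */
	uint32_t entry = RTE_LPM_VALID_EXT_ENTRY_BITMASK | 300;

	/* Old code: the cast keeps only the low 8 bits, 300 -> 44. */
	uint32_t wrong = (uint8_t)entry * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;

	/* Fixed code: the mask keeps all 24 index bits while stripping
	 * the flag bits, so the full group index survives. */
	uint32_t right = (entry & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;

	printf("truncated base: %u (group 44), masked base: %u (group 300)\n",
			wrong, right);
	return 0;
}
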
Fixes: cbc2f1dccfba ("lpm/arm: support NEON")
Cc: jerinj@marvell.com
Cc: stable@dpdk.org

Signed-off-by: Ruifeng Wang
---
 lib/librte_lpm/rte_lpm_neon.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/lib/librte_lpm/rte_lpm_neon.h b/lib/librte_lpm/rte_lpm_neon.h
index 6c131d312..4642a866f 100644
--- a/lib/librte_lpm/rte_lpm_neon.h
+++ b/lib/librte_lpm/rte_lpm_neon.h
@@ -81,28 +81,28 @@ rte_lpm_lookupx4(const struct rte_lpm *lpm, xmm_t ip, uint32_t hop[4],
 	if (unlikely((pt & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[0] = i8.u32[0] +
-			(uint8_t)tbl[0] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[0] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[0]];
 		tbl[0] = *ptbl;
 	}
 	if (unlikely((pt >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[1] = i8.u32[1] +
-			(uint8_t)tbl[1] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[1] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[1]];
 		tbl[1] = *ptbl;
 	}
 	if (unlikely((pt2 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[2] = i8.u32[2] +
-			(uint8_t)tbl[2] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[2] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[2]];
 		tbl[2] = *ptbl;
 	}
 	if (unlikely((pt2 >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[3] = i8.u32[3] +
-			(uint8_t)tbl[3] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[3] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[3]];
 		tbl[3] = *ptbl;
 	}

From patchwork Fri Jan 8 08:21:24 2021
From: Ruifeng Wang
To: Bruce Richardson, Konstantin Ananyev, Vladimir Medvedkin, Michal Kobylinski, David Hunt
Cc: dev@dpdk.org, nd@arm.com, jerinj@marvell.com, drc@linux.vnet.ibm.com, honnappa.nagarahalli@arm.com, Ruifeng Wang, stable@dpdk.org
Date: Fri, 8 Jan 2021 08:21:24 +0000
Message-Id: <20210108082127.1061538-3-ruifeng.wang@arm.com>
In-Reply-To: <20210108082127.1061538-1-ruifeng.wang@arm.com>
Subject: [dpdk-dev] [PATCH 2/4] lpm: fix vector lookup for x86

rte_lpm_lookupx4 could return a wrong next hop when more than 256 tbl8
groups are created. The cause is an incorrect type cast of the tbl8
group index stored in the tbl24 entry: the cast truncates the group
index, so the wrong tbl8 group is searched. Fix the issue by applying
the proper mask to the tbl24 entry to extract the tbl8 group index.

Fixes: dc81ebbacaeb ("lpm: extend IPv4 next hop field")
Cc: michalx.kobylinski@intel.com
Cc: stable@dpdk.org

Signed-off-by: Ruifeng Wang
Acked-by: Vladimir Medvedkin
---
 lib/librte_lpm/rte_lpm_sse.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/lib/librte_lpm/rte_lpm_sse.h b/lib/librte_lpm/rte_lpm_sse.h
index 44770b6ff..eaa863c52 100644
--- a/lib/librte_lpm/rte_lpm_sse.h
+++ b/lib/librte_lpm/rte_lpm_sse.h
@@ -82,28 +82,28 @@ rte_lpm_lookupx4(const struct rte_lpm *lpm, xmm_t ip, uint32_t hop[4],
 	if (unlikely((pt & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[0] = i8.u32[0] +
-			(uint8_t)tbl[0] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[0] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[0]];
 		tbl[0] = *ptbl;
 	}
 	if (unlikely((pt >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[1] = i8.u32[1] +
-			(uint8_t)tbl[1] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[1] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[1]];
 		tbl[1] = *ptbl;
 	}
 	if (unlikely((pt2 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[2] = i8.u32[2] +
-			(uint8_t)tbl[2] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[2] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[2]];
 		tbl[2] = *ptbl;
 	}
 	if (unlikely((pt2 >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[3] = i8.u32[3] +
-			(uint8_t)tbl[3] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[3] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[3]];
 		tbl[3] = *ptbl;
 	}

From patchwork Fri Jan 8 08:21:25 2021
From: Ruifeng Wang
To: David Christensen, Bruce Richardson, Vladimir Medvedkin, Chao Zhu, Gowrishankar Muthukrishnan
Cc: dev@dpdk.org, nd@arm.com, jerinj@marvell.com, honnappa.nagarahalli@arm.com, Ruifeng Wang, stable@dpdk.org
Date: Fri, 8 Jan 2021 08:21:25 +0000
Message-Id: <20210108082127.1061538-4-ruifeng.wang@arm.com>
In-Reply-To: <20210108082127.1061538-1-ruifeng.wang@arm.com>
Subject: [dpdk-dev] [PATCH 3/4] lpm: fix vector lookup for ppc64

rte_lpm_lookupx4 could return a wrong next hop when more than 256 tbl8
groups are created. The cause is an incorrect type cast of the tbl8
group index stored in the tbl24 entry: the cast truncates the group
index, so the wrong tbl8 group is searched. Fix the issue by applying
the proper mask to the tbl24 entry to extract the tbl8 group index.

Fixes: d2cc7959342b ("lpm: add AltiVec for ppc64")
Cc: gowrishankar.m@linux.vnet.ibm.com
Cc: stable@dpdk.org

Signed-off-by: Ruifeng Wang
Tested-by: David Christensen
---
 lib/librte_lpm/rte_lpm_altivec.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/lib/librte_lpm/rte_lpm_altivec.h b/lib/librte_lpm/rte_lpm_altivec.h
index 228c41b38..4fbc1b595 100644
--- a/lib/librte_lpm/rte_lpm_altivec.h
+++ b/lib/librte_lpm/rte_lpm_altivec.h
@@ -88,28 +88,28 @@ rte_lpm_lookupx4(const struct rte_lpm *lpm, xmm_t ip, uint32_t hop[4],
 	if (unlikely((pt & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[0] = i8.u32[0] +
-			(uint8_t)tbl[0] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[0] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[0]];
 		tbl[0] = *ptbl;
 	}
 	if (unlikely((pt >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[1] = i8.u32[1] +
-			(uint8_t)tbl[1] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[1] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[1]];
 		tbl[1] = *ptbl;
 	}
 	if (unlikely((pt2 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[2] = i8.u32[2] +
-			(uint8_t)tbl[2] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[2] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[2]];
 		tbl[2] = *ptbl;
 	}
 	if (unlikely((pt2 >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
 		i8.u32[3] = i8.u32[3] +
-			(uint8_t)tbl[3] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
+			(tbl[3] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[3]];
 		tbl[3] = *ptbl;
 	}

From patchwork Fri Jan 8 08:21:26 2021
From: Ruifeng Wang
To: Bruce Richardson, Vladimir Medvedkin
Cc: dev@dpdk.org, nd@arm.com, jerinj@marvell.com, drc@linux.vnet.ibm.com, honnappa.nagarahalli@arm.com, Ruifeng Wang
Date: Fri, 8 Jan 2021 08:21:26 +0000
Message-Id: <20210108082127.1061538-5-ruifeng.wang@arm.com>
In-Reply-To: <20210108082127.1061538-1-ruifeng.wang@arm.com>
Subject: [dpdk-dev] [PATCH 4/4] test/lpm: improve coverage on tbl8

The existing test case creates 256 tbl8 groups, a number that covers
only an 8-bit next_hop/group field. Since the next_hop/group field has
been extended to 24 bits, creating more than 256 groups in the test
improves coverage. Coverage was not expanded to the maximum supported
number of groups because that would take too much time to run for this
fast-test.

Signed-off-by: Ruifeng Wang
Tested-by: David Christensen
---
 app/test/test_lpm.c | 22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/app/test/test_lpm.c b/app/test/test_lpm.c
index 258b2f67c..ee6c4280b 100644
--- a/app/test/test_lpm.c
+++ b/app/test/test_lpm.c
@@ -993,7 +993,7 @@ test13(void)
 }
 
 /*
- * Fore TBL8 extension exhaustion. Add 256 rules that require a tbl8 extension.
+ * For TBL8 extension exhaustion. Add 512 rules that require a tbl8 extension.
  * No more tbl8 extensions will be allowed. Now add one more rule that required
  * a tbl8 extension and get fail.
  * */
@@ -1008,28 +1008,34 @@ test14(void)
 	struct rte_lpm_config config;
 
 	config.max_rules = 256 * 32;
-	config.number_tbl8s = NUMBER_TBL8S;
+	config.number_tbl8s = 512;
 	config.flags = 0;
 
-	uint32_t ip, next_hop_add, next_hop_return;
+	uint32_t ip, next_hop_base, next_hop_return;
 	uint8_t depth;
 	int32_t status = 0;
+	xmm_t ipx4;
+	uint32_t hop[4];
 
 	/* Add enough space for 256 rules for every depth */
 	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
 	TEST_LPM_ASSERT(lpm != NULL);
 
 	depth = 32;
-	next_hop_add = 100;
+	next_hop_base = 100;
 	ip = RTE_IPV4(0, 0, 0, 0);
 
 	/* Add 256 rules that require a tbl8 extension */
-	for (; ip <= RTE_IPV4(0, 0, 255, 0); ip += 256) {
-		status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	for (; ip <= RTE_IPV4(0, 1, 255, 0); ip += 256) {
+		status = rte_lpm_add(lpm, ip, depth, next_hop_base + ip);
 		TEST_LPM_ASSERT(status == 0);
 		status = rte_lpm_lookup(lpm, ip, &next_hop_return);
 		TEST_LPM_ASSERT((status == 0) &&
-				(next_hop_return == next_hop_add));
+				(next_hop_return == next_hop_base + ip));
+
+		ipx4 = vect_set_epi32(ip + 3, ip + 2, ip + 1, ip);
+		rte_lpm_lookupx4(lpm, ipx4, hop, UINT32_MAX);
+		TEST_LPM_ASSERT(hop[0] == next_hop_base + ip);
 	}
 
 	/* All tbl8 extensions have been used above. Try to add one more and
@@ -1037,7 +1043,7 @@
 	ip = RTE_IPV4(1, 0, 0, 0);
 	depth = 32;
 
-	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	status = rte_lpm_add(lpm, ip, depth, next_hop_base + ip);
 	TEST_LPM_ASSERT(status < 0);
 
 	rte_lpm_free(lpm);
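
For reference, the scenario the extended loop exercises, condensed into
a standalone sketch (a sketch only: it borrows vect_set_epi32 from the
test's test_xmmt_ops.h helper header, omits error handling, and the
function and table names are illustrative):

#include <stdint.h>
#include <rte_ip.h>
#include <rte_lpm.h>
#include "test_xmmt_ops.h"	/* per-arch vect_set_epi32(), as used by test_lpm.c */

static void
lookup_high_tbl8_group(void)
{
	struct rte_lpm_config config = {
		.max_rules = 256 * 32,
		.number_tbl8s = 512,	/* more groups than an 8-bit index can address */
		.flags = 0,
	};
	struct rte_lpm *lpm = rte_lpm_create("tbl8_high", SOCKET_ID_ANY, &config);
	uint32_t ip, hop[4];

	/* Each /32 rule under a distinct /24 allocates a fresh tbl8 group,
	 * so the later rules land in groups with indexes above 255. */
	for (ip = RTE_IPV4(0, 0, 0, 0); ip <= RTE_IPV4(0, 1, 255, 0); ip += 256)
		rte_lpm_add(lpm, ip, 32, 100 + ip);

	/* Before the fix, this lookup went through a truncated group
	 * index and could return a next hop from the wrong group. */
	ip = RTE_IPV4(0, 1, 255, 0);
	rte_lpm_lookupx4(lpm, vect_set_epi32(ip + 3, ip + 2, ip + 1, ip),
			hop, UINT32_MAX);
	/* hop[0] is expected to be 100 + ip */

	rte_lpm_free(lpm);
}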