From patchwork Wed Jun 19 05:36:13 2019
X-Patchwork-Submitter: Ruifeng Wang
X-Patchwork-Id: 54934
X-Patchwork-Delegate: thomas@monjalon.net
From: Ruifeng Wang
To: bruce.richardson@intel.com, vladimir.medvedkin@intel.com
Cc: dev@dpdk.org, honnappa.nagarahalli@arm.com, gavin.hu@arm.com, nd@arm.com,
 Ruifeng Wang
Date: Wed, 19 Jun 2019 13:36:13 +0800
Message-Id: <20190619053615.24613-1-ruifeng.wang@arm.com>
X-Mailer: git-send-email 2.17.1
Subject: [dpdk-dev] [PATCH v2 1/3] lib/lpm: memory orderings to avoid race
 conditions for v1604

When a tbl8 group is getting attached to a tbl24 entry, lookup
might fail even though the entry is configured in the table.

For example, consider an LPM table configured with 10.10.10.1/24.
When a new entry 10.10.10.32/28 is being added, a new tbl8 group is
allocated and the tbl24 entry is changed to point to the tbl8 group.
If the tbl24 entry is written before the tbl8 group entries are
updated, a lookup on 10.10.10.9 will return failure.

Correct memory orderings are required to ensure that the store to
tbl24 does not happen before the stores to the tbl8 group entries
complete.

The orderings have an impact on the LPM performance tests.
On the Arm A72 platform, the delete operation shows 2.7% degradation,
while add and lookup show no notable performance change.
On the x86 E5 platform, the add operation shows 4.3% degradation and
the delete operation 2.2% - 10.2% degradation; lookup shows no
performance change.

Signed-off-by: Honnappa Nagarahalli
Signed-off-by: Ruifeng Wang
---
v2: no changes
v1: initial version

 lib/librte_lpm/rte_lpm.c | 32 +++++++++++++++++++++++++-------
 lib/librte_lpm/rte_lpm.h |  4 ++++
 2 files changed, 29 insertions(+), 7 deletions(-)

diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
index 6b7b28a2e..6ec450a08 100644
--- a/lib/librte_lpm/rte_lpm.c
+++ b/lib/librte_lpm/rte_lpm.c
@@ -806,7 +806,8 @@ add_depth_small_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
 			/* Setting tbl24 entry in one go to avoid race
 			 * conditions
 			 */
-			lpm->tbl24[i] = new_tbl24_entry;
+			__atomic_store(&lpm->tbl24[i], &new_tbl24_entry,
+					__ATOMIC_RELEASE);
 
 			continue;
 		}
@@ -1017,7 +1018,11 @@ add_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
 			.depth = 0,
 		};
 
-		lpm->tbl24[tbl24_index] = new_tbl24_entry;
+		/* The tbl24 entry must be written only after the
+		 * tbl8 entries are written.
+		 */
+		__atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
+				__ATOMIC_RELEASE);
 
 	} /* If valid entry but not extended calculate the index into Table8. */
 	else if (lpm->tbl24[tbl24_index].valid_group == 0) {
@@ -1063,7 +1068,11 @@ add_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
 			.depth = 0,
 		};
 
-		lpm->tbl24[tbl24_index] = new_tbl24_entry;
+		/* The tbl24 entry must be written only after the
+		 * tbl8 entries are written.
+		 */
+		__atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
+				__ATOMIC_RELEASE);
 
 	} else { /*
 		* If it is valid, extended entry calculate the index into tbl8.
@@ -1391,6 +1400,7 @@ delete_depth_small_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
 	/* Calculate the range and index into Table24. */
 	tbl24_range = depth_to_range(depth);
 	tbl24_index = (ip_masked >> 8);
+	struct rte_lpm_tbl_entry zero_tbl24_entry = {0};
 
 	/*
 	 * Firstly check the sub_rule_index. A -1 indicates no replacement rule
@@ -1405,7 +1415,8 @@ delete_depth_small_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
 
 			if (lpm->tbl24[i].valid_group == 0 &&
 					lpm->tbl24[i].depth <= depth) {
-				lpm->tbl24[i].valid = INVALID;
+				__atomic_store(&lpm->tbl24[i],
+					&zero_tbl24_entry, __ATOMIC_RELEASE);
 			} else if (lpm->tbl24[i].valid_group == 1) {
 				/*
 				 * If TBL24 entry is extended, then there has
@@ -1450,7 +1461,8 @@ delete_depth_small_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
 
 			if (lpm->tbl24[i].valid_group == 0 &&
 					lpm->tbl24[i].depth <= depth) {
-				lpm->tbl24[i] = new_tbl24_entry;
+				__atomic_store(&lpm->tbl24[i], &new_tbl24_entry,
+						__ATOMIC_RELEASE);
 			} else if (lpm->tbl24[i].valid_group == 1) {
 				/*
 				 * If TBL24 entry is extended, then there has
@@ -1713,8 +1725,11 @@ delete_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
 	tbl8_recycle_index = tbl8_recycle_check_v1604(lpm->tbl8, tbl8_group_start);
 
 	if (tbl8_recycle_index == -EINVAL) {
-		/* Set tbl24 before freeing tbl8 to avoid race condition. */
+		/* Set tbl24 before freeing tbl8 to avoid race condition.
+		 * Prevent the free of the tbl8 group from hoisting.
+		 */
 		lpm->tbl24[tbl24_index].valid = 0;
+		__atomic_thread_fence(__ATOMIC_RELEASE);
 		tbl8_free_v1604(lpm->tbl8, tbl8_group_start);
 	} else if (tbl8_recycle_index > -1) {
 		/* Update tbl24 entry. */
@@ -1725,8 +1740,11 @@ delete_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
 			.depth = lpm->tbl8[tbl8_recycle_index].depth,
 		};
 
-		/* Set tbl24 before freeing tbl8 to avoid race condition. */
+		/* Set tbl24 before freeing tbl8 to avoid race condition.
+		 * Prevent the free of the tbl8 group from hoisting.
+		 */
 		lpm->tbl24[tbl24_index] = new_tbl24_entry;
+		__atomic_thread_fence(__ATOMIC_RELEASE);
 		tbl8_free_v1604(lpm->tbl8, tbl8_group_start);
 	}
 #undef group_idx
diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
index b886f54b4..6f5704c5c 100644
--- a/lib/librte_lpm/rte_lpm.h
+++ b/lib/librte_lpm/rte_lpm.h
@@ -354,6 +354,10 @@ rte_lpm_lookup(struct rte_lpm *lpm, uint32_t ip, uint32_t *next_hop)
 	ptbl = (const uint32_t *)(&lpm->tbl24[tbl24_index]);
 	tbl_entry = *ptbl;
 
+	/* Memory ordering is not required in lookup. Because dataflow
+	 * dependency exists, compiler or HW won't be able to re-order
+	 * the operations.
+	 */
 	/* Copy tbl8 entry (only if needed) */
 	if (unlikely((tbl_entry & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
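
The race and the fix can be seen in isolation in the following standalone
sketch. It is not rte_lpm code: the names (dir_entry, groups, publish_route,
lookup_route) are invented for illustration, and it assumes the GCC/Clang
__atomic builtins and POSIX threads. The writer fills the new group and only
then release-stores the directory entry; the reader needs no barrier because
its group access depends on the value it just loaded, which is the same
reasoning as the lookup comment added to rte_lpm.h above.

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define GROUP_SIZE	16
#define DIR_VALID	(1u << 31)

static uint32_t groups[4][GROUP_SIZE];	/* stands in for the tbl8 groups */
static uint32_t dir_entry;		/* stands in for one tbl24 entry */

static void *publish_route(void *arg)
{
	uint32_t grp = 0;
	(void)arg;

	/* 1. Fill every entry of the new group first (plain stores). */
	for (int i = 0; i < GROUP_SIZE; i++)
		groups[grp][i] = 100 + i;

	/* 2. Publish the directory entry with RELEASE semantics: any reader
	 *    that sees this store also sees the group stores above.
	 */
	__atomic_store_n(&dir_entry, DIR_VALID | grp, __ATOMIC_RELEASE);
	return NULL;
}

static void *lookup_route(void *arg)
{
	(void)arg;

	/* Reader side: the group access depends on the value just loaded
	 * (an address dependency), so neither the compiler nor the CPU may
	 * perform it before the load completes.
	 */
	uint32_t e = __atomic_load_n(&dir_entry, __ATOMIC_RELAXED);
	if (e & DIR_VALID)
		printf("next hop %u\n", groups[e & ~DIR_VALID][9]);
	return NULL;
}

int main(void)
{
	pthread_t w, r;
	pthread_create(&w, NULL, publish_route, NULL);
	pthread_create(&r, NULL, lookup_route, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	return 0;
}

Without the release store, the reader may see a valid directory entry while
the group entries are still being written, which is exactly the failed lookup
on 10.10.10.9 described in the commit message.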

From patchwork Wed Jun 19 05:36:14 2019
X-Patchwork-Submitter: Ruifeng Wang
X-Patchwork-Id: 54935
X-Patchwork-Delegate: thomas@monjalon.net
From: Ruifeng Wang
To: bruce.richardson@intel.com, vladimir.medvedkin@intel.com
Cc: dev@dpdk.org, honnappa.nagarahalli@arm.com, gavin.hu@arm.com, nd@arm.com,
 Ruifeng Wang
Date: Wed, 19 Jun 2019 13:36:14 +0800
Message-Id: <20190619053615.24613-2-ruifeng.wang@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190619053615.24613-1-ruifeng.wang@arm.com>
References: <20190619053615.24613-1-ruifeng.wang@arm.com>
Subject: [dpdk-dev] [PATCH v2 2/3] lib/lpm: memory orderings to avoid race
 conditions for v20

When a tbl8 group is getting attached to a tbl24 entry, lookup
might fail even though the entry is configured in the table.

For example, consider an LPM table configured with 10.10.10.1/24.
When a new entry 10.10.10.32/28 is being added, a new tbl8 group is
allocated and the tbl24 entry is changed to point to the tbl8 group.
If the tbl24 entry is written before the tbl8 group entries are
updated, a lookup on 10.10.10.9 will return failure.

Correct memory orderings are required to ensure that the store to
tbl24 does not happen before the stores to the tbl8 group entries
complete.

Suggested-by: Honnappa Nagarahalli
Signed-off-by: Ruifeng Wang
Reviewed-by: Honnappa Nagarahalli
Reviewed-by: Gavin Hu
---
v2: fixed clang build issue by supplying the alignment attribute.
v1: initial version

 lib/librte_lpm/rte_lpm.c | 31 ++++++++++++++++++++++++-------
 lib/librte_lpm/rte_lpm.h |  4 ++--
 2 files changed, 26 insertions(+), 9 deletions(-)

diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
index 6ec450a08..0addff5d4 100644
--- a/lib/librte_lpm/rte_lpm.c
+++ b/lib/librte_lpm/rte_lpm.c
@@ -737,7 +737,8 @@ add_depth_small_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
 			/* Setting tbl24 entry in one go to avoid race
 			 * conditions
 			 */
-			lpm->tbl24[i] = new_tbl24_entry;
+			__atomic_store(&lpm->tbl24[i], &new_tbl24_entry,
+					__ATOMIC_RELEASE);
 
 			continue;
 		}
@@ -892,7 +893,8 @@ add_depth_big_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked, uint8_t depth,
 			.depth = 0,
 		};
 
-		lpm->tbl24[tbl24_index] = new_tbl24_entry;
+		__atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
+				__ATOMIC_RELEASE);
 
 	} /* If valid entry but not extended calculate the index into Table8. */
 	else if (lpm->tbl24[tbl24_index].valid_group == 0) {
@@ -938,7 +940,8 @@ add_depth_big_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked, uint8_t depth,
 			.depth = 0,
 		};
 
-		lpm->tbl24[tbl24_index] = new_tbl24_entry;
+		__atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
+				__ATOMIC_RELEASE);
 
 	} else { /*
 		* If it is valid, extended entry calculate the index into tbl8.
@@ -1320,7 +1323,14 @@ delete_depth_small_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked,
 
 			if (lpm->tbl24[i].valid_group == 0 &&
 					lpm->tbl24[i].depth <= depth) {
-				lpm->tbl24[i].valid = INVALID;
+				struct rte_lpm_tbl_entry_v20 zero_tbl_entry = {
+					.valid = INVALID,
+					.depth = 0,
+					.valid_group = 0,
+				};
+				zero_tbl_entry.next_hop = 0;
+				__atomic_store(&lpm->tbl24[i], &zero_tbl_entry,
+						__ATOMIC_RELEASE);
 			} else if (lpm->tbl24[i].valid_group == 1) {
 				/*
 				 * If TBL24 entry is extended, then there has
@@ -1365,7 +1375,8 @@ delete_depth_small_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked,
 
 			if (lpm->tbl24[i].valid_group == 0 &&
 					lpm->tbl24[i].depth <= depth) {
-				lpm->tbl24[i] = new_tbl24_entry;
+				__atomic_store(&lpm->tbl24[i], &new_tbl24_entry,
+						__ATOMIC_RELEASE);
 			} else if (lpm->tbl24[i].valid_group == 1) {
 				/*
 				 * If TBL24 entry is extended, then there has
@@ -1647,8 +1658,11 @@ delete_depth_big_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked,
 	tbl8_recycle_index = tbl8_recycle_check_v20(lpm->tbl8, tbl8_group_start);
 
 	if (tbl8_recycle_index == -EINVAL) {
-		/* Set tbl24 before freeing tbl8 to avoid race condition. */
+		/* Set tbl24 before freeing tbl8 to avoid race condition.
+		 * Prevent the free of the tbl8 group from hoisting.
+		 */
 		lpm->tbl24[tbl24_index].valid = 0;
+		__atomic_thread_fence(__ATOMIC_RELEASE);
 		tbl8_free_v20(lpm->tbl8, tbl8_group_start);
 	} else if (tbl8_recycle_index > -1) {
 		/* Update tbl24 entry. */
@@ -1659,8 +1673,11 @@ delete_depth_big_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked,
 			.depth = lpm->tbl8[tbl8_recycle_index].depth,
 		};
 
-		/* Set tbl24 before freeing tbl8 to avoid race condition. */
+		/* Set tbl24 before freeing tbl8 to avoid race condition.
+		 * Prevent the free of the tbl8 group from hoisting.
+		 */
 		lpm->tbl24[tbl24_index] = new_tbl24_entry;
+		__atomic_thread_fence(__ATOMIC_RELEASE);
 		tbl8_free_v20(lpm->tbl8, tbl8_group_start);
 	}
 
diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
index 6f5704c5c..98c70ecbe 100644
--- a/lib/librte_lpm/rte_lpm.h
+++ b/lib/librte_lpm/rte_lpm.h
@@ -88,7 +88,7 @@ struct rte_lpm_tbl_entry_v20 {
 	 */
 	uint8_t valid_group :1;
 	uint8_t depth :6; /**< Rule depth. */
-};
+} __rte_aligned(2);
 
 __extension__
 struct rte_lpm_tbl_entry {
@@ -121,7 +121,7 @@ struct rte_lpm_tbl_entry_v20 {
 		uint8_t group_idx;
 		uint8_t next_hop;
 	};
-};
+} __rte_aligned(2);
 
 __extension__
 struct rte_lpm_tbl_entry {
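
The delete path above pairs the same idea with a fence: the tbl24 entry is
taken out of service before the tbl8 group is recycled, and the release fence
keeps the recycling stores from being hoisted above that update. The sketch
below shows the shape of that sequence outside of DPDK; the names (dir,
groups, retire_group) are invented for the example, the 2-byte aligned entry
mirrors why this patch adds __rte_aligned(2) (so the whole entry can be
replaced with a single atomic 16-bit store), and the GCC/Clang __atomic
builtins are assumed.

#include <stdint.h>
#include <string.h>

#define GROUP_SIZE 16

struct entry {
	uint16_t bits;	/* valid, valid_group, depth, next hop packed */
} __attribute__((aligned(2)));	/* naturally aligned: one 16-bit atomic store */

static struct entry dir[256];			/* stands in for tbl24       */
static struct entry groups[4][GROUP_SIZE];	/* stands in for tbl8 groups */
static uint8_t group_in_use[4];

static void retire_group(uint32_t dir_idx, uint32_t grp)
{
	struct entry invalid = { .bits = 0 };

	/* 1. Unpublish: readers that load dir[dir_idx] from now on can no
	 *    longer reach the group. The whole entry is replaced atomically.
	 */
	__atomic_store(&dir[dir_idx], &invalid, __ATOMIC_RELEASE);

	/* 2. Keep the recycling stores below from being reordered before the
	 *    store that unpublished the entry.
	 */
	__atomic_thread_fence(__ATOMIC_RELEASE);

	/* 3. Recycle the group. Readers that loaded the old entry earlier may
	 *    still be walking it; deferring the reuse (e.g. RCU) is a separate
	 *    problem that this series does not try to solve.
	 */
	memset(groups[grp], 0, sizeof(groups[grp]));
	group_in_use[grp] = 0;
}

int main(void)
{
	group_in_use[0] = 1;
	dir[9].bits = 0x8000;	/* pretend entry 9 points at group 0 */
	retire_group(9, 0);
	return 0;
}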

From patchwork Wed Jun 19 05:36:15 2019
X-Patchwork-Submitter: Ruifeng Wang
X-Patchwork-Id: 54936
X-Patchwork-Delegate: thomas@monjalon.net
From: Ruifeng Wang
To: bruce.richardson@intel.com, vladimir.medvedkin@intel.com
Cc: dev@dpdk.org, honnappa.nagarahalli@arm.com, gavin.hu@arm.com, nd@arm.com,
 Ruifeng Wang
Date: Wed, 19 Jun 2019 13:36:15 +0800
Message-Id: <20190619053615.24613-3-ruifeng.wang@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190619053615.24613-1-ruifeng.wang@arm.com>
References: <20190619053615.24613-1-ruifeng.wang@arm.com>
Subject: [dpdk-dev] [PATCH v2 3/3] lib/lpm: remove unnecessary inline

Tests showed that the 'inline' keyword caused a performance drop on
some x86 platforms after the memory ordering patches were applied.
Removing the 'inline' keyword restored the previous performance on
x86 and had no impact on arm64 platforms.
Suggested-by: Medvedkin Vladimir
Signed-off-by: Ruifeng Wang
Reviewed-by: Gavin Hu
---
v2: initial version, to recover rte_lpm_add() performance

 lib/librte_lpm/rte_lpm.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
index 0addff5d4..c97b602e6 100644
--- a/lib/librte_lpm/rte_lpm.c
+++ b/lib/librte_lpm/rte_lpm.c
@@ -778,7 +778,7 @@ add_depth_small_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
 	return 0;
 }
 
-static inline int32_t
+static int32_t
 add_depth_small_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
 		uint32_t next_hop)
 {
@@ -975,7 +975,7 @@ add_depth_big_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked, uint8_t depth,
 	return 0;
 }
 
-static inline int32_t
+static int32_t
 add_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
 		uint32_t next_hop)
 {
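
Taken together, the three patches keep the existing lock-free reader model of
rte_lpm safe: a single writer updates the table while data-plane lcores keep
calling rte_lpm_lookup() with no reader-side lock. The fragment below is a
sketch of that usage against the rte_lpm API of this release; the IPV4_ADDR
helper is local to the example, EAL arguments and error handling are trimmed,
and the reader would normally run on a separate lcore rather than inline in
main().

#include <stdio.h>
#include <stdint.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_lpm.h>

#define IPV4_ADDR(a, b, c, d) \
	((uint32_t)(((a) << 24) | ((b) << 16) | ((c) << 8) | (d)))

int main(int argc, char **argv)
{
	struct rte_lpm_config cfg = {
		.max_rules = 1024,
		.number_tbl8s = 256,
		.flags = 0,
	};
	uint32_t next_hop;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	struct rte_lpm *lpm = rte_lpm_create("lpm_example", rte_socket_id(), &cfg);
	if (lpm == NULL)
		return -1;

	/* Control-plane writer: 10.10.10.0/24 first, then the more specific
	 * 10.10.10.32/28. The second add allocates a tbl8 group; with the
	 * release ordering in place, the tbl24 entry is published only after
	 * the group entries are fully written.
	 */
	rte_lpm_add(lpm, IPV4_ADDR(10, 10, 10, 0), 24, 1);
	rte_lpm_add(lpm, IPV4_ADDR(10, 10, 10, 32), 28, 2);

	/* Data-plane reader (normally another lcore, no lock): it must never
	 * observe a half-initialised tbl8 group, so 10.10.10.9 keeps resolving
	 * to next hop 1 throughout the add.
	 */
	if (rte_lpm_lookup(lpm, IPV4_ADDR(10, 10, 10, 9), &next_hop) == 0)
		printf("10.10.10.9 -> next hop %u\n", next_hop);

	rte_lpm_free(lpm);
	return 0;
}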