Patch Detail

get: Show a patch.
patch: Update a patch.
put: Update a patch.
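The endpoints above return the patch record as JSON (the raw body of the GET response follows below). As a minimal sketch of consuming it, the snippet parses a hand-trimmed copy of a few fields from that response; it does not fetch anything live, and the chosen field subset is illustrative only:

```python
import json

# Fields hand-copied from the GET /api/patches/103288/?format=api body
# reproduced below; a live fetch would use e.g.
#   urllib.request.urlopen("http://patchwork.dpdk.org/api/patches/103288/?format=api")
sample = """{
  "id": 103288,
  "name": "[v9,1/5] eal: add a new generic helper for wait scheme",
  "state": "accepted",
  "submitter": {"id": 1771, "name": "Feifei Wang", "email": "feifei.wang2@arm.com"},
  "series": [{"id": 20158, "name": "add new helper for wait scheme", "version": 9}]
}"""

patch = json.loads(sample)
print(patch["state"], patch["submitter"]["name"])  # accepted Feifei Wang
```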
GET /api/patches/103288/?format=api
{ "id": 103288, "url": "
http://patchwork.dpdk.org/api/patches/103288/?format=api", "web_url": "http://patchwork.dpdk.org/project/dpdk/patch/20211101060007.2632418-2-feifei.wang2@arm.com/", "project": { "id": 1, "url": "http://patchwork.dpdk.org/api/projects/1/?format=api", "name": "DPDK", "link_name": "dpdk", "list_id": "dev.dpdk.org", "list_email": "dev@dpdk.org", "web_url": "http://core.dpdk.org", "scm_url": "git://dpdk.org/dpdk", "webscm_url": "http://git.dpdk.org/dpdk", "list_archive_url": "https://inbox.dpdk.org/dev", "list_archive_url_format": "https://inbox.dpdk.org/dev/{}", "commit_url_format": "" }, "msgid": "<20211101060007.2632418-2-feifei.wang2@arm.com>", "list_archive_url": "https://inbox.dpdk.org/dev/20211101060007.2632418-2-feifei.wang2@arm.com", "date": "2021-11-01T06:00:03", "name": "[v9,1/5] eal: add a new generic helper for wait scheme", "commit_ref": null, "pull_url": null, "state": "accepted", "archived": true, "hash": "edd4547e2fa76d1d2ff4739e8f75716a8267bc5a", "submitter": { "id": 1771, "url": "http://patchwork.dpdk.org/api/people/1771/?format=api", "name": "Feifei Wang", "email": "feifei.wang2@arm.com" }, "delegate": { "id": 24651, "url": "http://patchwork.dpdk.org/api/users/24651/?format=api", "username": "dmarchand", "first_name": "David", "last_name": "Marchand", "email": "david.marchand@redhat.com" }, "mbox": "http://patchwork.dpdk.org/project/dpdk/patch/20211101060007.2632418-2-feifei.wang2@arm.com/mbox/", "series": [ { "id": 20158, "url": "http://patchwork.dpdk.org/api/series/20158/?format=api", "web_url": "http://patchwork.dpdk.org/project/dpdk/list/?series=20158", "date": "2021-11-01T06:00:02", "name": "add new helper for wait scheme", "version": 9, "mbox": "http://patchwork.dpdk.org/series/20158/mbox/" } ], "comments": "http://patchwork.dpdk.org/api/patches/103288/comments/", "check": "success", "checks": "http://patchwork.dpdk.org/api/patches/103288/checks/", "tags": {}, "related": [], "headers": { "Return-Path": "<dev-bounces@dpdk.org>", "X-Original-To": 
"patchwork@inbox.dpdk.org", "Delivered-To": "patchwork@inbox.dpdk.org", "Received": [ "from mails.dpdk.org (mails.dpdk.org [217.70.189.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id A168EA0C52;\n\tMon, 1 Nov 2021 07:00:24 +0100 (CET)", "from [217.70.189.124] (localhost [127.0.0.1])\n\tby mails.dpdk.org (Postfix) with ESMTP id F1154410FF;\n\tMon, 1 Nov 2021 07:00:21 +0100 (CET)", "from foss.arm.com (foss.arm.com [217.140.110.172])\n by mails.dpdk.org (Postfix) with ESMTP id 39594410FE\n for <dev@dpdk.org>; Mon, 1 Nov 2021 07:00:20 +0100 (CET)", "from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])\n by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 97B85D6E;\n Sun, 31 Oct 2021 23:00:19 -0700 (PDT)", "from net-x86-dell-8268.shanghai.arm.com\n (net-x86-dell-8268.shanghai.arm.com [10.169.210.102])\n by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 7A3303F70D;\n Sun, 31 Oct 2021 23:00:16 -0700 (PDT)" ], "From": "Feifei Wang <feifei.wang2@arm.com>", "To": "Ruifeng Wang <ruifeng.wang@arm.com>", "Cc": "dev@dpdk.org, nd@arm.com, jerinjacobk@gmail.com,\n stephen@networkplumber.org, thomas@monjalon.net, david.marchand@redhat.com,\n Feifei Wang <feifei.wang2@arm.com>,\n Konstantin Ananyev <konstantin.ananyev@intel.com>,\n Jerin Jacob <jerinj@marvell.com>", "Date": "Mon, 1 Nov 2021 14:00:03 +0800", "Message-Id": "<20211101060007.2632418-2-feifei.wang2@arm.com>", "X-Mailer": "git-send-email 2.25.1", "In-Reply-To": "<20211101060007.2632418-1-feifei.wang2@arm.com>", "References": "<20210902053253.3017858-1-feifei.wang2@arm.com>\n <20211101060007.2632418-1-feifei.wang2@arm.com>", "MIME-Version": "1.0", "Content-Type": "text/plain; charset=UTF-8", "Content-Transfer-Encoding": "8bit", "Subject": "[dpdk-dev] [PATCH v9 1/5] eal: add a new generic helper for wait\n scheme", "X-BeenThere": "dev@dpdk.org", "X-Mailman-Version": "2.1.29", "Precedence": "list", "List-Id": "DPDK patches and discussions <dev.dpdk.org>", "List-Unsubscribe": 
"<https://mails.dpdk.org/options/dev>,\n <mailto:dev-request@dpdk.org?subject=unsubscribe>", "List-Archive": "<http://mails.dpdk.org/archives/dev/>", "List-Post": "<mailto:dev@dpdk.org>", "List-Help": "<mailto:dev-request@dpdk.org?subject=help>", "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n <mailto:dev-request@dpdk.org?subject=subscribe>", "Errors-To": "dev-bounces@dpdk.org", "Sender": "\"dev\" <dev-bounces@dpdk.org>" }, "content": "Add a new generic helper which is a macro for wait scheme.\n\nFurthermore, to prevent compilation warning in arm:\n----------------------------------------------\n'warning: implicit declaration of function ...'\n----------------------------------------------\nDelete 'undef' constructions for '__LOAD_EXC_xx', '__SEVL' and '__WFE'.\nAnd add ‘__RTE_ARM’ for these macros to fix the namespace.\nThis is because original macros are undefine at the end of the file.\nIf new macro 'rte_wait_event' calls them in other files, they will be\nseen as 'not defined'.\n\nSigned-off-by: Feifei Wang <feifei.wang2@arm.com>\nReviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>\nAcked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>\nAcked-by: Jerin Jacob <jerinj@marvell.com>\n---\n lib/eal/arm/include/rte_pause_64.h | 202 +++++++++++++++++-----------\n lib/eal/include/generic/rte_pause.h | 29 ++++\n 2 files changed, 155 insertions(+), 76 deletions(-)", "diff": "diff --git a/lib/eal/arm/include/rte_pause_64.h b/lib/eal/arm/include/rte_pause_64.h\nindex e87d10b8cc..0ca03c6130 100644\n--- a/lib/eal/arm/include/rte_pause_64.h\n+++ b/lib/eal/arm/include/rte_pause_64.h\n@@ -26,47 +26,120 @@ static inline void rte_pause(void)\n #ifdef RTE_WAIT_UNTIL_EQUAL_ARCH_DEFINED\n \n /* Send an event to quit WFE. */\n-#define __SEVL() { asm volatile(\"sevl\" : : : \"memory\"); }\n+#define __RTE_ARM_SEVL() { asm volatile(\"sevl\" : : : \"memory\"); }\n \n /* Put processor into low power WFE(Wait For Event) state. 
*/\n-#define __WFE() { asm volatile(\"wfe\" : : : \"memory\"); }\n+#define __RTE_ARM_WFE() { asm volatile(\"wfe\" : : : \"memory\"); }\n \n-static __rte_always_inline void\n-rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,\n-\t\tint memorder)\n-{\n-\tuint16_t value;\n-\n-\tassert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);\n-\n-\t/*\n-\t * Atomic exclusive load from addr, it returns the 16-bit content of\n-\t * *addr while making it 'monitored',when it is written by someone\n-\t * else, the 'monitored' state is cleared and a event is generated\n-\t * implicitly to exit WFE.\n-\t */\n-#define __LOAD_EXC_16(src, dst, memorder) { \\\n+/*\n+ * Atomic exclusive load from addr, it returns the 16-bit content of\n+ * *addr while making it 'monitored', when it is written by someone\n+ * else, the 'monitored' state is cleared and an event is generated\n+ * implicitly to exit WFE.\n+ */\n+#define __RTE_ARM_LOAD_EXC_16(src, dst, memorder) { \\\n \tif (memorder == __ATOMIC_RELAXED) { \\\n \t\tasm volatile(\"ldxrh %w[tmp], [%x[addr]]\" \\\n \t\t\t: [tmp] \"=&r\" (dst) \\\n-\t\t\t: [addr] \"r\"(src) \\\n+\t\t\t: [addr] \"r\" (src) \\\n \t\t\t: \"memory\"); \\\n \t} else { \\\n \t\tasm volatile(\"ldaxrh %w[tmp], [%x[addr]]\" \\\n \t\t\t: [tmp] \"=&r\" (dst) \\\n-\t\t\t: [addr] \"r\"(src) \\\n+\t\t\t: [addr] \"r\" (src) \\\n \t\t\t: \"memory\"); \\\n \t} }\n \n-\t__LOAD_EXC_16(addr, value, memorder)\n+/*\n+ * Atomic exclusive load from addr, it returns the 32-bit content of\n+ * *addr while making it 'monitored', when it is written by someone\n+ * else, the 'monitored' state is cleared and an event is generated\n+ * implicitly to exit WFE.\n+ */\n+#define __RTE_ARM_LOAD_EXC_32(src, dst, memorder) { \\\n+\tif (memorder == __ATOMIC_RELAXED) { \\\n+\t\tasm volatile(\"ldxr %w[tmp], [%x[addr]]\" \\\n+\t\t\t: [tmp] \"=&r\" (dst) \\\n+\t\t\t: [addr] \"r\" (src) \\\n+\t\t\t: \"memory\"); \\\n+\t} else { \\\n+\t\tasm volatile(\"ldaxr %w[tmp], 
[%x[addr]]\" \\\n+\t\t\t: [tmp] \"=&r\" (dst) \\\n+\t\t\t: [addr] \"r\" (src) \\\n+\t\t\t: \"memory\"); \\\n+\t} }\n+\n+/*\n+ * Atomic exclusive load from addr, it returns the 64-bit content of\n+ * *addr while making it 'monitored', when it is written by someone\n+ * else, the 'monitored' state is cleared and an event is generated\n+ * implicitly to exit WFE.\n+ */\n+#define __RTE_ARM_LOAD_EXC_64(src, dst, memorder) { \\\n+\tif (memorder == __ATOMIC_RELAXED) { \\\n+\t\tasm volatile(\"ldxr %x[tmp], [%x[addr]]\" \\\n+\t\t\t: [tmp] \"=&r\" (dst) \\\n+\t\t\t: [addr] \"r\" (src) \\\n+\t\t\t: \"memory\"); \\\n+\t} else { \\\n+\t\tasm volatile(\"ldaxr %x[tmp], [%x[addr]]\" \\\n+\t\t\t: [tmp] \"=&r\" (dst) \\\n+\t\t\t: [addr] \"r\" (src) \\\n+\t\t\t: \"memory\"); \\\n+\t} }\n+\n+/*\n+ * Atomic exclusive load from addr, it returns the 128-bit content of\n+ * *addr while making it 'monitored', when it is written by someone\n+ * else, the 'monitored' state is cleared and an event is generated\n+ * implicitly to exit WFE.\n+ */\n+#define __RTE_ARM_LOAD_EXC_128(src, dst, memorder) { \\\n+\tvolatile rte_int128_t *dst_128 = (volatile rte_int128_t *)&dst; \\\n+\tif (memorder == __ATOMIC_RELAXED) { \\\n+\t\tasm volatile(\"ldxp %x[tmp0], %x[tmp1], [%x[addr]]\" \\\n+\t\t\t: [tmp0] \"=&r\" (dst_128->val[0]), \\\n+\t\t\t [tmp1] \"=&r\" (dst_128->val[1]) \\\n+\t\t\t: [addr] \"r\" (src) \\\n+\t\t\t: \"memory\"); \\\n+\t} else { \\\n+\t\tasm volatile(\"ldaxp %x[tmp0], %x[tmp1], [%x[addr]]\" \\\n+\t\t\t: [tmp0] \"=&r\" (dst_128->val[0]), \\\n+\t\t\t [tmp1] \"=&r\" (dst_128->val[1]) \\\n+\t\t\t: [addr] \"r\" (src) \\\n+\t\t\t: \"memory\"); \\\n+\t} } \\\n+\n+#define __RTE_ARM_LOAD_EXC(src, dst, memorder, size) { \\\n+\tRTE_BUILD_BUG_ON(size != 16 && size != 32 && size != 64 \\\n+\t\t&& size != 128); \\\n+\tif (size == 16) \\\n+\t\t__RTE_ARM_LOAD_EXC_16(src, dst, memorder) \\\n+\telse if (size == 32) \\\n+\t\t__RTE_ARM_LOAD_EXC_32(src, dst, memorder) \\\n+\telse if (size == 64) 
\\\n+\t\t__RTE_ARM_LOAD_EXC_64(src, dst, memorder) \\\n+\telse if (size == 128) \\\n+\t\t__RTE_ARM_LOAD_EXC_128(src, dst, memorder) \\\n+}\n+\n+static __rte_always_inline void\n+rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,\n+\t\tint memorder)\n+{\n+\tuint16_t value;\n+\n+\tRTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE &&\n+\t\tmemorder != __ATOMIC_RELAXED);\n+\n+\t__RTE_ARM_LOAD_EXC_16(addr, value, memorder)\n \tif (value != expected) {\n-\t\t__SEVL()\n+\t\t__RTE_ARM_SEVL()\n \t\tdo {\n-\t\t\t__WFE()\n-\t\t\t__LOAD_EXC_16(addr, value, memorder)\n+\t\t\t__RTE_ARM_WFE()\n+\t\t\t__RTE_ARM_LOAD_EXC_16(addr, value, memorder)\n \t\t} while (value != expected);\n \t}\n-#undef __LOAD_EXC_16\n }\n \n static __rte_always_inline void\n@@ -75,36 +148,17 @@ rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,\n {\n \tuint32_t value;\n \n-\tassert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);\n-\n-\t/*\n-\t * Atomic exclusive load from addr, it returns the 32-bit content of\n-\t * *addr while making it 'monitored',when it is written by someone\n-\t * else, the 'monitored' state is cleared and a event is generated\n-\t * implicitly to exit WFE.\n-\t */\n-#define __LOAD_EXC_32(src, dst, memorder) { \\\n-\tif (memorder == __ATOMIC_RELAXED) { \\\n-\t\tasm volatile(\"ldxr %w[tmp], [%x[addr]]\" \\\n-\t\t\t: [tmp] \"=&r\" (dst) \\\n-\t\t\t: [addr] \"r\"(src) \\\n-\t\t\t: \"memory\"); \\\n-\t} else { \\\n-\t\tasm volatile(\"ldaxr %w[tmp], [%x[addr]]\" \\\n-\t\t\t: [tmp] \"=&r\" (dst) \\\n-\t\t\t: [addr] \"r\"(src) \\\n-\t\t\t: \"memory\"); \\\n-\t} }\n+\tRTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE &&\n+\t\tmemorder != __ATOMIC_RELAXED);\n \n-\t__LOAD_EXC_32(addr, value, memorder)\n+\t__RTE_ARM_LOAD_EXC_32(addr, value, memorder)\n \tif (value != expected) {\n-\t\t__SEVL()\n+\t\t__RTE_ARM_SEVL()\n \t\tdo {\n-\t\t\t__WFE()\n-\t\t\t__LOAD_EXC_32(addr, value, memorder)\n+\t\t\t__RTE_ARM_WFE()\n+\t\t\t__RTE_ARM_LOAD_EXC_32(addr, value, 
memorder)\n \t\t} while (value != expected);\n \t}\n-#undef __LOAD_EXC_32\n }\n \n static __rte_always_inline void\n@@ -113,40 +167,36 @@ rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t expected,\n {\n \tuint64_t value;\n \n-\tassert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);\n-\n-\t/*\n-\t * Atomic exclusive load from addr, it returns the 64-bit content of\n-\t * *addr while making it 'monitored',when it is written by someone\n-\t * else, the 'monitored' state is cleared and a event is generated\n-\t * implicitly to exit WFE.\n-\t */\n-#define __LOAD_EXC_64(src, dst, memorder) { \\\n-\tif (memorder == __ATOMIC_RELAXED) { \\\n-\t\tasm volatile(\"ldxr %x[tmp], [%x[addr]]\" \\\n-\t\t\t: [tmp] \"=&r\" (dst) \\\n-\t\t\t: [addr] \"r\"(src) \\\n-\t\t\t: \"memory\"); \\\n-\t} else { \\\n-\t\tasm volatile(\"ldaxr %x[tmp], [%x[addr]]\" \\\n-\t\t\t: [tmp] \"=&r\" (dst) \\\n-\t\t\t: [addr] \"r\"(src) \\\n-\t\t\t: \"memory\"); \\\n-\t} }\n+\tRTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE &&\n+\t\tmemorder != __ATOMIC_RELAXED);\n \n-\t__LOAD_EXC_64(addr, value, memorder)\n+\t__RTE_ARM_LOAD_EXC_64(addr, value, memorder)\n \tif (value != expected) {\n-\t\t__SEVL()\n+\t\t__RTE_ARM_SEVL()\n \t\tdo {\n-\t\t\t__WFE()\n-\t\t\t__LOAD_EXC_64(addr, value, memorder)\n+\t\t\t__RTE_ARM_WFE()\n+\t\t\t__RTE_ARM_LOAD_EXC_64(addr, value, memorder)\n \t\t} while (value != expected);\n \t}\n }\n-#undef __LOAD_EXC_64\n \n-#undef __SEVL\n-#undef __WFE\n+#define RTE_WAIT_UNTIL_MASKED(addr, mask, cond, expected, memorder) \\\n+do { \\\n+\tRTE_BUILD_BUG_ON(!__builtin_constant_p(memorder)); \\\n+\tRTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE && \\\n+\t\tmemorder != __ATOMIC_RELAXED); \\\n+\tconst uint32_t size = sizeof(*(addr)) << 3; \\\n+\ttypeof(*(addr)) expected_value = (expected); \\\n+\ttypeof(*(addr)) value; \\\n+\t__RTE_ARM_LOAD_EXC((addr), value, memorder, size) \\\n+\tif (!((value & (mask)) cond expected_value)) { \\\n+\t\t__RTE_ARM_SEVL() \\\n+\t\tdo { 
\\n+\t\t\t__RTE_ARM_WFE() \\n+\t\t\t__RTE_ARM_LOAD_EXC((addr), value, memorder, size) \\n+\t\t} while (!((value & (mask)) cond expected_value)); \\n+\t} \\n+} while (0)\n \n #endif\n \ndiff --git a/lib/eal/include/generic/rte_pause.h b/lib/eal/include/generic/rte_pause.h\nindex 668ee4a184..5894a0ad94 100644\n--- a/lib/eal/include/generic/rte_pause.h\n+++ b/lib/eal/include/generic/rte_pause.h\n@@ -111,6 +111,35 @@ rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t expected,\n \twhile (__atomic_load_n(addr, memorder) != expected)\n \t\trte_pause();\n }\n+\n+/*\n+ * Wait until *addr & mask makes the condition true. With a relaxed memory\n+ * ordering model, the loads around this helper can be reordered.\n+ *\n+ * @param addr\n+ * A pointer to the memory location.\n+ * @param mask\n+ * A mask of value bits in interest.\n+ * @param cond\n+ * A symbol representing the condition.\n+ * @param expected\n+ * An expected value to be in the memory location.\n+ * @param memorder\n+ * Two different memory orders that can be specified:\n+ * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to\n+ * C++11 memory orders with the same names, see the C++11 standard or\n+ * the GCC wiki on atomic synchronization for detailed definition.\n+ */\n+#define RTE_WAIT_UNTIL_MASKED(addr, mask, cond, expected, memorder) \\\n+do { \\\n+\tRTE_BUILD_BUG_ON(!__builtin_constant_p(memorder)); \\\n+\tRTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE && \\\n+\t\tmemorder != __ATOMIC_RELAXED); \\\n+\ttypeof(*(addr)) expected_value = (expected); \\\n+\twhile (!((__atomic_load_n((addr), (memorder)) & (mask)) cond \\\n+\t\texpected_value)) \\\n+\t\trte_pause(); \\\n+} while (0)\n #endif\n \n #endif /* _RTE_PAUSE_H_ */\n", "prefixes": [ "v9", "1/5" ] }
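The core addition in the diff above is the `RTE_WAIT_UNTIL_MASKED(addr, mask, cond, expected, memorder)` macro: spin (via `rte_pause()`, or SEVL/WFE on arm64) until `(*addr & mask) cond expected` holds. A rough Python model of that loop's logic, purely illustrative — the function and helper names here are invented for the sketch, and atomicity/memory ordering are not modeled:

```python
import operator

def wait_until_masked(load, mask, cond, expected, pause=lambda: None):
    # Analogue of RTE_WAIT_UNTIL_MASKED: poll until (load() & mask) cond
    # expected is true, calling pause() (stand-in for rte_pause()/WFE)
    # between polls.
    while not cond(load() & mask, expected):
        pause()

# Toy shared word: a "status register" whose ready bit (bit 0) is set
# only on the third load.
polls = iter([0x00, 0x04, 0x05])
state = {"val": 0}

def load():
    state["val"] = next(polls, state["val"])
    return state["val"]

wait_until_masked(load, mask=0x1, cond=operator.eq, expected=0x1)
print(hex(state["val"]))  # 0x5
```

Note that in the real macro `cond` is a bare C operator token (e.g. `==`, `&`) pasted into the expression, which is why the patch requires `memorder` to be a compile-time constant via `RTE_BUILD_BUG_ON`.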