From patchwork Fri Feb 25 16:38:05 2022
X-Patchwork-Submitter: Luc Pelletier
X-Patchwork-Id: 108360
X-Patchwork-Delegate: david.marchand@redhat.com
From: Luc Pelletier
To: bruce.richardson@intel.com, konstantin.ananyev@intel.com
Cc: dev@dpdk.org, Luc Pelletier, Xiaoyun Li, stable@dpdk.org
Subject: [PATCH v7] eal: fix rte_memcpy strict aliasing/alignment bugs
Date: Fri, 25 Feb 2022 11:38:05 -0500
Message-Id: <20220225163804.506142-1-lucp.at.work@gmail.com>
In-Reply-To: <20220115194102.444140-1-lucp.at.work@gmail.com>
References: <20220115194102.444140-1-lucp.at.work@gmail.com>
List-Id: DPDK patches and discussions

Calls to rte_memcpy with 1 < n < 16 could result in unaligned loads and
stores, which are undefined behaviour according to the C standard, as
well as in strict aliasing violations.

The code was changed to perform the load/store operations through a
packed structure that is allowed to alias other types (using the
__may_alias__ attribute). The resulting code has the same performance
as the original and is also compliant with the C standard.

Fixes: d35cc1fe6a7a ("eal/x86: revert select optimized memcpy at run-time")
Cc: Xiaoyun Li
Cc: stable@dpdk.org

Signed-off-by: Luc Pelletier
Acked-by: Konstantin Ananyev
Tested-by: Konstantin Ananyev
---
v7:
 * Fix coding style issue by adding a new __rte_may_alias macro rather
   than using __attribute__ directly.

v6:
 * Refocus the patch on the strict aliasing problems discovered
   following discussions in this thread.
 * Modify the code to use __may_alias__ and a packed structure. This
   fixes both the undefined behaviour of unaligned accesses (not the
   main concern) and the strict aliasing violations (which can cause
   major bugs, as demonstrated in a previous message in this thread).
 * Rename the new function from rte_mov15_or_less_unaligned to
   rte_mov15_or_less.
 * Modify the code that copies <= 15 bytes to call rte_mov15_or_less.

v5:
 * Replace the assembly with pure C code that uses a packed structure
   to make unaligned loads conform to the C standard.

v4:
 * Add volatile to the asm statements, which is required under gcc.

v3:
 * Remove the for loop and go back to using assembly.

v2:
 * Replace the assembly with a regular for loop that copies bytes one
   by one.

v1:
 * Fix the undefined behaviour of unaligned stores/loads by using
   assembly to perform the stores/loads.
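Note for reviewers (not part of the commit message): below is a
minimal, standalone sketch of the packed __may_alias__ struct technique
the patch relies on, for anyone who wants to experiment with it outside
of DPDK. The names u32_alias and load_u32_unaligned are made up for
this example and do not appear in the patch:

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

/*
 * A packed struct has an alignment requirement of 1, so accessing it
 * through a misaligned pointer is well-defined; __may_alias__ exempts
 * accesses through it from type-based (strict) aliasing analysis.
 */
struct u32_alias {
	uint32_t val;
} __attribute__((__packed__, __may_alias__));

static uint32_t
load_u32_unaligned(const void *p)
{
	/* OK even if p is not 4-byte aligned and points into a buffer
	 * with a different effective type. */
	return ((const struct u32_alias *)p)->val;
}

int
main(void)
{
	uint8_t buf[8] = {1, 2, 3, 4, 5, 6, 7, 8};

	/* buf + 1 is misaligned for uint32_t: a plain
	 * *(const uint32_t *)(buf + 1) load would be undefined
	 * behaviour, but the aliasing struct makes it well-defined. */
	printf("0x%08" PRIx32 "\n", load_u32_unaligned(buf + 1));
	return 0;
}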
 lib/eal/include/rte_common.h     |   5 ++
 lib/eal/x86/include/rte_memcpy.h | 133 ++++++++++++------------------
 2 files changed, 56 insertions(+), 82 deletions(-)

diff --git a/lib/eal/include/rte_common.h b/lib/eal/include/rte_common.h
index 4a399cc7c8..2f1ec69f3d 100644
--- a/lib/eal/include/rte_common.h
+++ b/lib/eal/include/rte_common.h
@@ -85,6 +85,11 @@ typedef uint16_t unaligned_uint16_t;
  */
 #define __rte_packed __attribute__((__packed__))
 
+/**
+ * Macro to mark a type that is not subject to type-based aliasing rules
+ */
+#define __rte_may_alias __attribute__((__may_alias__))
+
 /******* Macro to mark functions and fields scheduled for removal *****/
 #define __rte_deprecated __attribute__((__deprecated__))
 #define __rte_deprecated_msg(msg) __attribute__((__deprecated__(msg)))
diff --git a/lib/eal/x86/include/rte_memcpy.h b/lib/eal/x86/include/rte_memcpy.h
index 1b6c6e585f..18aa4e43a7 100644
--- a/lib/eal/x86/include/rte_memcpy.h
+++ b/lib/eal/x86/include/rte_memcpy.h
@@ -45,6 +45,52 @@ extern "C" {
 static __rte_always_inline void *
 rte_memcpy(void *dst, const void *src, size_t n);
 
+/**
+ * Copy bytes from one location to another,
+ * locations should not overlap.
+ * Use with n <= 15.
+ */
+static __rte_always_inline void *
+rte_mov15_or_less(void *dst, const void *src, size_t n)
+{
+	/**
+	 * Use the following structs to avoid violating C standard
+	 * alignment requirements and to avoid strict aliasing bugs
+	 */
+	struct rte_uint64_alias {
+		uint64_t val;
+	} __rte_packed __rte_may_alias;
+	struct rte_uint32_alias {
+		uint32_t val;
+	} __rte_packed __rte_may_alias;
+	struct rte_uint16_alias {
+		uint16_t val;
+	} __rte_packed __rte_may_alias;
+
+	void *ret = dst;
+	if (n & 8) {
+		((struct rte_uint64_alias *)dst)->val =
+			((const struct rte_uint64_alias *)src)->val;
+		src = (const uint64_t *)src + 1;
+		dst = (uint64_t *)dst + 1;
+	}
+	if (n & 4) {
+		((struct rte_uint32_alias *)dst)->val =
+			((const struct rte_uint32_alias *)src)->val;
+		src = (const uint32_t *)src + 1;
+		dst = (uint32_t *)dst + 1;
+	}
+	if (n & 2) {
+		((struct rte_uint16_alias *)dst)->val =
+			((const struct rte_uint16_alias *)src)->val;
+		src = (const uint16_t *)src + 1;
+		dst = (uint16_t *)dst + 1;
+	}
+	if (n & 1)
+		*(uint8_t *)dst = *(const uint8_t *)src;
+	return ret;
+}
+
 #if defined __AVX512F__ && defined RTE_MEMCPY_AVX512
 
 #define ALIGNMENT_MASK 0x3F
@@ -171,8 +217,6 @@ rte_mov512blocks(uint8_t *dst, const uint8_t *src, size_t n)
 static __rte_always_inline void *
 rte_memcpy_generic(void *dst, const void *src, size_t n)
 {
-	uintptr_t dstu = (uintptr_t)dst;
-	uintptr_t srcu = (uintptr_t)src;
 	void *ret = dst;
 	size_t dstofss;
 	size_t bits;
@@ -181,24 +225,7 @@ rte_memcpy_generic(void *dst, const void *src, size_t n)
 	 * Copy less than 16 bytes
 	 */
 	if (n < 16) {
-		if (n & 0x01) {
-			*(uint8_t *)dstu = *(const uint8_t *)srcu;
-			srcu = (uintptr_t)((const uint8_t *)srcu + 1);
-			dstu = (uintptr_t)((uint8_t *)dstu + 1);
-		}
-		if (n & 0x02) {
-			*(uint16_t *)dstu = *(const uint16_t *)srcu;
-			srcu = (uintptr_t)((const uint16_t *)srcu + 1);
-			dstu = (uintptr_t)((uint16_t *)dstu + 1);
-		}
-		if (n & 0x04) {
-			*(uint32_t *)dstu = *(const uint32_t *)srcu;
-			srcu = (uintptr_t)((const uint32_t *)srcu + 1);
-			dstu = (uintptr_t)((uint32_t *)dstu + 1);
-		}
-		if (n & 0x08)
-			*(uint64_t *)dstu = *(const uint64_t *)srcu;
-		return ret;
+		return rte_mov15_or_less(dst, src, n);
 	}
 
 	/**
@@ -379,8 +406,6 @@ rte_mov128blocks(uint8_t *dst, const uint8_t *src, size_t n)
 static __rte_always_inline void *
 rte_memcpy_generic(void *dst, const void *src, size_t n)
 {
-	uintptr_t dstu = (uintptr_t)dst;
-	uintptr_t srcu = (uintptr_t)src;
 	void *ret = dst;
 	size_t dstofss;
 	size_t bits;
@@ -389,25 +414,7 @@ rte_memcpy_generic(void *dst, const void *src, size_t n)
 	 * Copy less than 16 bytes
 	 */
 	if (n < 16) {
-		if (n & 0x01) {
-			*(uint8_t *)dstu = *(const uint8_t *)srcu;
-			srcu = (uintptr_t)((const uint8_t *)srcu + 1);
-			dstu = (uintptr_t)((uint8_t *)dstu + 1);
-		}
-		if (n & 0x02) {
-			*(uint16_t *)dstu = *(const uint16_t *)srcu;
-			srcu = (uintptr_t)((const uint16_t *)srcu + 1);
-			dstu = (uintptr_t)((uint16_t *)dstu + 1);
-		}
-		if (n & 0x04) {
-			*(uint32_t *)dstu = *(const uint32_t *)srcu;
-			srcu = (uintptr_t)((const uint32_t *)srcu + 1);
-			dstu = (uintptr_t)((uint32_t *)dstu + 1);
-		}
-		if (n & 0x08) {
-			*(uint64_t *)dstu = *(const uint64_t *)srcu;
-		}
-		return ret;
+		return rte_mov15_or_less(dst, src, n);
 	}
 
 	/**
@@ -672,8 +679,6 @@ static __rte_always_inline void *
 rte_memcpy_generic(void *dst, const void *src, size_t n)
 {
 	__m128i xmm0, xmm1, xmm2, xmm3, xmm4, xmm5, xmm6, xmm7, xmm8;
-	uintptr_t dstu = (uintptr_t)dst;
-	uintptr_t srcu = (uintptr_t)src;
 	void *ret = dst;
 	size_t dstofss;
 	size_t srcofs;
@@ -682,25 +687,7 @@ rte_memcpy_generic(void *dst, const void *src, size_t n)
 	 * Copy less than 16 bytes
 	 */
 	if (n < 16) {
-		if (n & 0x01) {
-			*(uint8_t *)dstu = *(const uint8_t *)srcu;
-			srcu = (uintptr_t)((const uint8_t *)srcu + 1);
-			dstu = (uintptr_t)((uint8_t *)dstu + 1);
-		}
-		if (n & 0x02) {
-			*(uint16_t *)dstu = *(const uint16_t *)srcu;
-			srcu = (uintptr_t)((const uint16_t *)srcu + 1);
-			dstu = (uintptr_t)((uint16_t *)dstu + 1);
-		}
-		if (n & 0x04) {
-			*(uint32_t *)dstu = *(const uint32_t *)srcu;
-			srcu = (uintptr_t)((const uint32_t *)srcu + 1);
-			dstu = (uintptr_t)((uint32_t *)dstu + 1);
-		}
-		if (n & 0x08) {
-			*(uint64_t *)dstu = *(const uint64_t *)srcu;
-		}
-		return ret;
+		return rte_mov15_or_less(dst, src, n);
 	}
 
 	/**
@@ -818,27 +805,9 @@ rte_memcpy_aligned(void *dst, const void *src, size_t n)
 {
 	void *ret = dst;
 
-	/* Copy size <= 16 bytes */
+	/* Copy size < 16 bytes */
 	if (n < 16) {
-		if (n & 0x01) {
-			*(uint8_t *)dst = *(const uint8_t *)src;
-			src = (const uint8_t *)src + 1;
-			dst = (uint8_t *)dst + 1;
-		}
-		if (n & 0x02) {
-			*(uint16_t *)dst = *(const uint16_t *)src;
-			src = (const uint16_t *)src + 1;
-			dst = (uint16_t *)dst + 1;
-		}
-		if (n & 0x04) {
-			*(uint32_t *)dst = *(const uint32_t *)src;
-			src = (const uint32_t *)src + 1;
-			dst = (uint32_t *)dst + 1;
-		}
-		if (n & 0x08)
-			*(uint64_t *)dst = *(const uint64_t *)src;
-
-		return ret;
+		return rte_mov15_or_less(dst, src, n);
 	}
 
 	/* Copy 16 <= size <= 32 bytes */
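As an aside for anyone testing this: since every n <= 15 decomposes
into at most one 8-, one 4-, one 2- and one 1-byte chunk (the 8/4/2/1
bits of n), the copy logic can be checked exhaustively. The sketch
below re-implements the same chunking locally (mov15_or_less_ref is a
stand-in written for this test, not the header's rte_mov15_or_less)
and verifies it against memcpy/memcmp for every size:

#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Local re-implementation of the chunking logic, for illustration
 * only; memcpy stands in for the aliasing-struct loads/stores. */
static void *
mov15_or_less_ref(void *dst, const void *src, size_t n)
{
	void *ret = dst;
	if (n & 8) {
		memcpy(dst, src, 8);
		src = (const uint8_t *)src + 8;
		dst = (uint8_t *)dst + 8;
	}
	if (n & 4) {
		memcpy(dst, src, 4);
		src = (const uint8_t *)src + 4;
		dst = (uint8_t *)dst + 4;
	}
	if (n & 2) {
		memcpy(dst, src, 2);
		src = (const uint8_t *)src + 2;
		dst = (uint8_t *)dst + 2;
	}
	if (n & 1)
		*(uint8_t *)dst = *(const uint8_t *)src;
	return ret;
}

int
main(void)
{
	uint8_t src[16], dst[16];
	size_t n, i;

	for (n = 0; n <= 15; n++) {
		for (i = 0; i < sizeof(src); i++) {
			src[i] = (uint8_t)(i + 1); /* nonzero pattern */
			dst[i] = 0;
		}
		mov15_or_less_ref(dst, src, n);
		/* Exactly the first n bytes were copied... */
		assert(memcmp(dst, src, n) == 0);
		/* ...and the bytes past n were left untouched. */
		for (i = n; i < sizeof(dst); i++)
			assert(dst[i] == 0);
	}
	return 0;
}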