From patchwork Mon Aug 23 08:44:10 2021
X-Patchwork-Submitter: Aman Kumar <aman.kumar@vvdntech.in>
X-Patchwork-Id: 97215
X-Patchwork-Delegate: thomas@monjalon.net
From: Aman Kumar <aman.kumar@vvdntech.in>
To: dev@dpdk.org
Cc: rasland@nvidia.com, asafp@nvidia.com, shys@nvidia.com,
 viacheslavo@nvidia.com, akozyrev@nvidia.com, matan@nvidia.com,
 anatoly.burakov@intel.com, keesang.song@amd.com, aman.kumar@vvdntech.in
Date: Mon, 23 Aug 2021 14:14:10 +0530
Message-Id: <20210823084411.29592-1-aman.kumar@vvdntech.in>
Subject: [dpdk-dev] [PATCH 1/2] lib/eal: add amd epyc2 memcpy routine to eal

This patch provides rte_memcpy* calls optimized for AMD EPYC Gen2
platforms. This option is disabled by default and can be enabled by
setting the 'rte_memcpy_amdepyc2' meson build option.
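
As a minimal sketch of enabling this path in a native build (the build
directory name is arbitrary):

    meson setup build -Drte_memcpy_amdepyc2=true
    ninja -C build

Left at its default of false, the option keeps the existing
rte_memcpy_generic implementation, so builds for other platforms are
unaffected.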
Signed-off-by: Aman Kumar --- lib/eal/x86/include/meson.build | 1 + lib/eal/x86/include/rte_memcpy.h | 502 +++++++++++++++++++++++++++++++ meson_options.txt | 2 + 3 files changed, 505 insertions(+) diff --git a/lib/eal/x86/include/meson.build b/lib/eal/x86/include/meson.build index 12c2e00035..a03683779d 100644 --- a/lib/eal/x86/include/meson.build +++ b/lib/eal/x86/include/meson.build @@ -27,3 +27,4 @@ arch_indirect_headers = files( ) install_headers(arch_headers + arch_indirect_headers, subdir: get_option('include_subdir_arch')) dpdk_chkinc_headers += arch_headers +dpdk_conf.set('RTE_MEMCPY_AMDEPYC2', get_option('rte_memcpy_amdepyc2')) diff --git a/lib/eal/x86/include/rte_memcpy.h b/lib/eal/x86/include/rte_memcpy.h index 79f381dd9b..47dda9cb87 100644 --- a/lib/eal/x86/include/rte_memcpy.h +++ b/lib/eal/x86/include/rte_memcpy.h @@ -368,6 +368,498 @@ rte_mov128blocks(uint8_t *dst, const uint8_t *src, size_t n) } } +#if defined RTE_MEMCPY_AMDEPYC2 + +/** + * Copy 16 bytes from one location to another, + * with temporal stores + */ +static __rte_always_inline void +rte_copy16_ts(uint8_t *dst, uint8_t *src) +{ + __m128i var128; + + var128 = _mm_stream_load_si128((__m128i *)src); + _mm_storeu_si128((__m128i *)dst, var128); +} + +/** + * Copy 32 bytes from one location to another, + * with temporal stores + */ +static __rte_always_inline void +rte_copy32_ts(uint8_t *dst, uint8_t *src) +{ + __m256i ymm0; + + ymm0 = _mm256_stream_load_si256((const __m256i *)src); + _mm256_storeu_si256((__m256i *)dst, ymm0); +} + +/** + * Copy 64 bytes from one location to another, + * with temporal stores + */ +static __rte_always_inline void +rte_copy64_ts(uint8_t *dst, uint8_t *src) +{ + rte_copy32_ts(dst + 0 * 32, src + 0 * 32); + rte_copy32_ts(dst + 1 * 32, src + 1 * 32); +} + +/** + * Copy 128 bytes from one location to another, + * with temporal stores + */ +static __rte_always_inline void +rte_copy128_ts(uint8_t *dst, uint8_t *src) +{ + rte_copy32_ts(dst + 0 * 32, src + 0 * 32); + rte_copy32_ts(dst + 1 * 32, src + 1 * 32); + rte_copy32_ts(dst + 2 * 32, src + 2 * 32); + rte_copy32_ts(dst + 3 * 32, src + 3 * 32); +} + +/** + * Copy len bytes from one location to another, + * with temporal stores 16B aligned + */ +static __rte_always_inline void * +rte_memcpy_aligned_tstore16_generic(void *dst, void *src, int len) +{ + void *dest = dst; + + while (len >= 128) { + rte_copy128_ts((uint8_t *)dst, (uint8_t *)src); + dst = (uint8_t *)dst + 128; + src = (uint8_t *)src + 128; + len -= 128; + } + while (len >= 64) { + rte_copy64_ts((uint8_t *)dst, (uint8_t *)src); + dst = (uint8_t *)dst + 64; + src = (uint8_t *)src + 64; + len -= 64; + } + while (len >= 32) { + rte_copy32_ts((uint8_t *)dst, (uint8_t *)src); + dst = (uint8_t *)dst + 32; + src = (uint8_t *)src + 32; + len -= 32; + } + if (len >= 16) { + rte_copy16_ts((uint8_t *)dst, (uint8_t *)src); + dst = (uint8_t *)dst + 16; + src = (uint8_t *)src + 16; + len -= 16; + } + if (len >= 8) { + *(uint64_t *)dst = *(const uint64_t *)src; + dst = (uint8_t *)dst + 8; + src = (uint8_t *)src + 8; + len -= 8; + } + if (len >= 4) { + *(uint32_t *)dst = *(const uint32_t *)src; + dst = (uint8_t *)dst + 4; + src = (uint8_t *)src + 4; + len -= 4; + } + if (len != 0) { + dst = (uint8_t *)dst - (4 - len); + src = (uint8_t *)src - (4 - len); + *(uint32_t *)dst = *(const uint32_t *)src; + } + + return dest; +} + +static __rte_always_inline void * +rte_memcpy_aligned_ntload_tstore16_amdepyc2(void *dst, + const void *src, + size_t size) +{ + asm volatile goto("movq %0, %%rsi\n\t" + "movq 
%1, %%rdi\n\t" + "movq %2, %%rdx\n\t" + "cmpq $(128), %%rdx\n\t" + "jb 202f\n\t" + "201:\n\t" + "vmovntdqa (%%rsi), %%ymm0\n\t" + "vmovntdqa 32(%%rsi), %%ymm1\n\t" + "vmovntdqa 64(%%rsi), %%ymm2\n\t" + "vmovntdqa 96(%%rsi), %%ymm3\n\t" + "vmovdqu %%ymm0, (%%rdi)\n\t" + "vmovdqu %%ymm1, 32(%%rdi)\n\t" + "vmovdqu %%ymm2, 64(%%rdi)\n\t" + "vmovdqu %%ymm3, 96(%%rdi)\n\t" + "addq $128, %%rsi\n\t" + "addq $128, %%rdi\n\t" + "subq $128, %%rdx\n\t" + "jz %l[done]\n\t" + "cmpq $128, %%rdx\n\t" /*Vector Size 32B. */ + "jae 201b\n\t" + "202:\n\t" + "cmpq $64, %%rdx\n\t" + "jb 203f\n\t" + "vmovntdqa (%%rsi), %%ymm0\n\t" + "vmovntdqa 32(%%rsi), %%ymm1\n\t" + "vmovdqu %%ymm0, (%%rdi)\n\t" + "vmovdqu %%ymm1, 32(%%rdi)\n\t" + "addq $64, %%rsi\n\t" + "addq $64, %%rdi\n\t" + "subq $64, %%rdx\n\t" + "jz %l[done]\n\t" + "203:\n\t" + "cmpq $32, %%rdx\n\t" + "jb 204f\n\t" + "vmovntdqa (%%rsi), %%ymm0\n\t" + "vmovdqu %%ymm0, (%%rdi)\n\t" + "addq $32, %%rsi\n\t" + "addq $32, %%rdi\n\t" + "subq $32, %%rdx\n\t" + "jz %l[done]\n\t" + "204:\n\t" + "cmpb $16, %%dl\n\t" + "jb 205f\n\t" + "vmovntdqa (%%rsi), %%xmm0\n\t" + "vmovdqu %%xmm0, (%%rdi)\n\t" + "addq $16, %%rsi\n\t" + "addq $16, %%rdi\n\t" + "subq $16, %%rdx\n\t" + "jz %l[done]\n\t" + "205:\n\t" + "cmpb $2, %%dl\n\t" + "jb 208f\n\t" + "cmpb $4, %%dl\n\t" + "jbe 207f\n\t" + "cmpb $8, %%dl\n\t" + "jbe 206f\n\t" + "movq -8(%%rsi,%%rdx), %%rcx\n\t" + "movq (%%rsi), %%rsi\n\t" + "movq %%rcx, -8(%%rdi,%%rdx)\n\t" + "movq %%rsi, (%%rdi)\n\t" + "jmp %l[done]\n\t" + "206:\n\t" + "movl -4(%%rsi,%%rdx), %%ecx\n\t" + "movl (%%rsi), %%esi\n\t" + "movl %%ecx, -4(%%rdi,%%rdx)\n\t" + "movl %%esi, (%%rdi)\n\t" + "jmp %l[done]\n\t" + "207:\n\t" + "movzwl -2(%%rsi,%%rdx), %%ecx\n\t" + "movzwl (%%rsi), %%esi\n\t" + "movw %%cx, -2(%%rdi,%%rdx)\n\t" + "movw %%si, (%%rdi)\n\t" + "jmp %l[done]\n\t" + "208:\n\t" + "movzbl (%%rsi), %%ecx\n\t" + "movb %%cl, (%%rdi)" + : + : "r"(src), "r"(dst), "r"(size) + : "rcx", "rdx", "rsi", "rdi", "ymm0", "ymm1", "ymm2", "ymm3", "memory" + : done + ); +done: + return dst; +} + +static __rte_always_inline void * +rte_memcpy_generic(void *dst, const void *src, size_t len) +{ + asm goto("movq %0, %%rsi\n\t" + "movq %1, %%rdi\n\t" + "movq %2, %%rdx\n\t" + "movq %%rdi, %%rax\n\t" + "cmp $32, %%rdx\n\t" + "jb 101f\n\t" + "cmp $(32 * 2), %%rdx\n\t" + "ja 108f\n\t" + "vmovdqu (%%rsi), %%ymm0\n\t" + "vmovdqu -32(%%rsi,%%rdx), %%ymm1\n\t" + "vmovdqu %%ymm0, (%%rdi)\n\t" + "vmovdqu %%ymm1, -32(%%rdi,%%rdx)\n\t" + "vzeroupper\n\t" + "jmp %l[done]\n\t" + "101:\n\t" + /* Less than 1 VEC. */ + "cmpb $32, %%dl\n\t" + "jae 103f\n\t" + "cmpb $16, %%dl\n\t" + "jae 104f\n\t" + "cmpb $8, %%dl\n\t" + "jae 105f\n\t" + "cmpb $4, %%dl\n\t" + "jae 106f\n\t" + "cmpb $1, %%dl\n\t" + "ja 107f\n\t" + "jb 102f\n\t" + "movzbl (%%rsi), %%ecx\n\t" + "movb %%cl, (%%rdi)\n\t" + "102:\n\t" + "jmp %l[done]\n\t" + "103:\n\t" + /* From 32 to 63. No branch when size == 32. */ + "vmovdqu (%%rsi), %%ymm0\n\t" + "vmovdqu -32(%%rsi,%%rdx), %%ymm1\n\t" + "vmovdqu %%ymm0, (%%rdi)\n\t" + "vmovdqu %%ymm1, -32(%%rdi,%%rdx)\n\t" + "vzeroupper\n\t" + "jmp %l[done]\n\t" + /* From 16 to 31. No branch when size == 16. */ + "104:\n\t" + "vmovdqu (%%rsi), %%xmm0\n\t" + "vmovdqu -16(%%rsi,%%rdx), %%xmm1\n\t" + "vmovdqu %%xmm0, (%%rdi)\n\t" + "vmovdqu %%xmm1, -16(%%rdi,%%rdx)\n\t" + "jmp %l[done]\n\t" + "105:\n\t" + /* From 8 to 15. No branch when size == 8. 
*/ + "movq -8(%%rsi,%%rdx), %%rcx\n\t" + "movq (%%rsi), %%rsi\n\t" + "movq %%rcx, -8(%%rdi,%%rdx)\n\t" + "movq %%rsi, (%%rdi)\n\t" + "jmp %l[done]\n\t" + "106:\n\t" + /* From 4 to 7. No branch when size == 4. */ + "movl -4(%%rsi,%%rdx), %%ecx\n\t" + "movl (%%rsi), %%esi\n\t" + "movl %%ecx, -4(%%rdi,%%rdx)\n\t" + "movl %%esi, (%%rdi)\n\t" + "jmp %l[done]\n\t" + "107:\n\t" + /* From 2 to 3. No branch when size == 2. */ + "movzwl -2(%%rsi,%%rdx), %%ecx\n\t" + "movzwl (%%rsi), %%esi\n\t" + "movw %%cx, -2(%%rdi,%%rdx)\n\t" + "movw %%si, (%%rdi)\n\t" + "jmp %l[done]\n\t" + "108:\n\t" + /* More than 2 * VEC and there may be overlap between destination */ + /* and source. */ + "cmpq $(32 * 8), %%rdx\n\t" + "ja 111f\n\t" + "cmpq $(32 * 4), %%rdx\n\t" + "jb 109f\n\t" + /* Copy from 4 * VEC to 8 * VEC, inclusively. */ + "vmovdqu (%%rsi), %%ymm0\n\t" + "vmovdqu 32(%%rsi), %%ymm1\n\t" + "vmovdqu (32 * 2)(%%rsi), %%ymm2\n\t" + "vmovdqu (32 * 3)(%%rsi), %%ymm3\n\t" + "vmovdqu -32(%%rsi,%%rdx), %%ymm4\n\t" + "vmovdqu -(32 * 2)(%%rsi,%%rdx), %%ymm5\n\t" + "vmovdqu -(32 * 3)(%%rsi,%%rdx), %%ymm6\n\t" + "vmovdqu -(32 * 4)(%%rsi,%%rdx), %%ymm7\n\t" + "vmovdqu %%ymm0, (%%rdi)\n\t" + "vmovdqu %%ymm1, 32(%%rdi)\n\t" + "vmovdqu %%ymm2, (32 * 2)(%%rdi)\n\t" + "vmovdqu %%ymm3, (32 * 3)(%%rdi)\n\t" + "vmovdqu %%ymm4, -32(%%rdi,%%rdx)\n\t" + "vmovdqu %%ymm5, -(32 * 2)(%%rdi,%%rdx)\n\t" + "vmovdqu %%ymm6, -(32 * 3)(%%rdi,%%rdx)\n\t" + "vmovdqu %%ymm7, -(32 * 4)(%%rdi,%%rdx)\n\t" + "vzeroupper\n\t" + "jmp %l[done]\n\t" + "109:\n\t" + /* Copy from 2 * VEC to 4 * VEC. */ + "vmovdqu (%%rsi), %%ymm0\n\t" + "vmovdqu 32(%%rsi), %%ymm1\n\t" + "vmovdqu -32(%%rsi,%%rdx), %%ymm2\n\t" + "vmovdqu -(32 * 2)(%%rsi,%%rdx), %%ymm3\n\t" + "vmovdqu %%ymm0, (%%rdi)\n\t" + "vmovdqu %%ymm1, 32(%%rdi)\n\t" + "vmovdqu %%ymm2, -32(%%rdi,%%rdx)\n\t" + "vmovdqu %%ymm3, -(32 * 2)(%%rdi,%%rdx)\n\t" + "vzeroupper\n\t" + "110:\n\t" + "jmp %l[done]\n\t" + "111:\n\t" + "cmpq %%rsi, %%rdi\n\t" + "ja 113f\n\t" + /* Source == destination is less common. */ + "je 110b\n\t" + /* Load the first VEC and last 4 * VEC to + * support overlapping addresses. + */ + "vmovdqu (%%rsi), %%ymm4\n\t" + "vmovdqu -32(%%rsi, %%rdx), %%ymm5\n\t" + "vmovdqu -(32 * 2)(%%rsi, %%rdx), %%ymm6\n\t" + "vmovdqu -(32 * 3)(%%rsi, %%rdx), %%ymm7\n\t" + "vmovdqu -(32 * 4)(%%rsi, %%rdx), %%ymm8\n\t" + /* Save start and stop of the destination buffer. */ + "movq %%rdi, %%r11\n\t" + "leaq -32(%%rdi, %%rdx), %%rcx\n\t" + /* Align destination for aligned stores in the loop. Compute */ + /* how much destination is misaligned. */ + "movq %%rdi, %%r8\n\t" + "andq $(32 - 1), %%r8\n\t" + /* Get the negative of offset for alignment. */ + "subq $32, %%r8\n\t" + /* Adjust source. */ + "subq %%r8, %%rsi\n\t" + /* Adjust destination which should be aligned now. */ + "subq %%r8, %%rdi\n\t" + /* Adjust length. */ + "addq %%r8, %%rdx\n\t" + /* Check non-temporal store threshold. */ + "cmpq $(1024*1024), %%rdx\n\t" + "ja 115f\n\t" + "112:\n\t" + /* Copy 4 * VEC a time forward. */ + "vmovdqu (%%rsi), %%ymm0\n\t" + "vmovdqu 32(%%rsi), %%ymm1\n\t" + "vmovdqu (32 * 2)(%%rsi), %%ymm2\n\t" + "vmovdqu (32 * 3)(%%rsi), %%ymm3\n\t" + "addq $(32 * 4), %%rsi\n\t" + "subq $(32 * 4), %%rdx\n\t" + "vmovdqa %%ymm0, (%%rdi)\n\t" + "vmovdqa %%ymm1, 32(%%rdi)\n\t" + "vmovdqa %%ymm2, (32 * 2)(%%rdi)\n\t" + "vmovdqa %%ymm3, (32 * 3)(%%rdi)\n\t" + "addq $(32 * 4), %%rdi\n\t" + "cmpq $(32 * 4), %%rdx\n\t" + "ja 112b\n\t" + /* Store the last 4 * VEC. 
*/ + "vmovdqu %%ymm5, (%%rcx)\n\t" + "vmovdqu %%ymm6, -32(%%rcx)\n\t" + "vmovdqu %%ymm7, -(32 * 2)(%%rcx)\n\t" + "vmovdqu %%ymm8, -(32 * 3)(%%rcx)\n\t" + /* Store the first VEC. */ + "vmovdqu %%ymm4, (%%r11)\n\t" + "vzeroupper\n\t" + "jmp %l[done]\n\t" + "113:\n\t" + /* Load the first 4*VEC and last VEC to support overlapping addresses.*/ + "vmovdqu (%%rsi), %%ymm4\n\t" + "vmovdqu 32(%%rsi), %%ymm5\n\t" + "vmovdqu (32 * 2)(%%rsi), %%ymm6\n\t" + "vmovdqu (32 * 3)(%%rsi), %%ymm7\n\t" + "vmovdqu -32(%%rsi,%%rdx), %%ymm8\n\t" + /* Save stop of the destination buffer. */ + "leaq -32(%%rdi, %%rdx), %%r11\n\t" + /* Align destination end for aligned stores in the loop. Compute */ + /* how much destination end is misaligned. */ + "leaq -32(%%rsi, %%rdx), %%rcx\n\t" + "movq %%r11, %%r9\n\t" + "movq %%r11, %%r8\n\t" + "andq $(32 - 1), %%r8\n\t" + /* Adjust source. */ + "subq %%r8, %%rcx\n\t" + /* Adjust the end of destination which should be aligned now. */ + "subq %%r8, %%r9\n\t" + /* Adjust length. */ + "subq %%r8, %%rdx\n\t" + /* Check non-temporal store threshold. */ + "cmpq $(1024*1024), %%rdx\n\t" + "ja 117f\n\t" + "114:\n\t" + /* Copy 4 * VEC a time backward. */ + "vmovdqu (%%rcx), %%ymm0\n\t" + "vmovdqu -32(%%rcx), %%ymm1\n\t" + "vmovdqu -(32 * 2)(%%rcx), %%ymm2\n\t" + "vmovdqu -(32 * 3)(%%rcx), %%ymm3\n\t" + "subq $(32 * 4), %%rcx\n\t" + "subq $(32 * 4), %%rdx\n\t" + "vmovdqa %%ymm0, (%%r9)\n\t" + "vmovdqa %%ymm1, -32(%%r9)\n\t" + "vmovdqa %%ymm2, -(32 * 2)(%%r9)\n\t" + "vmovdqa %%ymm3, -(32 * 3)(%%r9)\n\t" + "subq $(32 * 4), %%r9\n\t" + "cmpq $(32 * 4), %%rdx\n\t" + "ja 114b\n\t" + /* Store the first 4 * VEC. */ + "vmovdqu %%ymm4, (%%rdi)\n\t" + "vmovdqu %%ymm5, 32(%%rdi)\n\t" + "vmovdqu %%ymm6, (32 * 2)(%%rdi)\n\t" + "vmovdqu %%ymm7, (32 * 3)(%%rdi)\n\t" + /* Store the last VEC. */ + "vmovdqu %%ymm8, (%%r11)\n\t" + "vzeroupper\n\t" + "jmp %l[done]\n\t" + + "115:\n\t" + /* Don't use non-temporal store if there is overlap between */ + /* destination and source since destination may be in cache */ + /* when source is loaded. */ + "leaq (%%rdi, %%rdx), %%r10\n\t" + "cmpq %%r10, %%rsi\n\t" + "jb 112b\n\t" + "116:\n\t" + /* Copy 4 * VEC a time forward with non-temporal stores. */ + "prefetcht0 (32*4*2)(%%rsi)\n\t" + "prefetcht0 (32*4*2 + 64)(%%rsi)\n\t" + "prefetcht0 (32*4*3)(%%rsi)\n\t" + "prefetcht0 (32*4*3 + 64)(%%rsi)\n\t" + "vmovdqu (%%rsi), %%ymm0\n\t" + "vmovdqu 32(%%rsi), %%ymm1\n\t" + "vmovdqu (32 * 2)(%%rsi), %%ymm2\n\t" + "vmovdqu (32 * 3)(%%rsi), %%ymm3\n\t" + "addq $(32*4), %%rsi\n\t" + "subq $(32*4), %%rdx\n\t" + "vmovntdq %%ymm0, (%%rdi)\n\t" + "vmovntdq %%ymm1, 32(%%rdi)\n\t" + "vmovntdq %%ymm2, (32 * 2)(%%rdi)\n\t" + "vmovntdq %%ymm3, (32 * 3)(%%rdi)\n\t" + "addq $(32*4), %%rdi\n\t" + "cmpq $(32*4), %%rdx\n\t" + "ja 116b\n\t" + "sfence\n\t" + /* Store the last 4 * VEC. */ + "vmovdqu %%ymm5, (%%rcx)\n\t" + "vmovdqu %%ymm6, -32(%%rcx)\n\t" + "vmovdqu %%ymm7, -(32 * 2)(%%rcx)\n\t" + "vmovdqu %%ymm8, -(32 * 3)(%%rcx)\n\t" + /* Store the first VEC. */ + "vmovdqu %%ymm4, (%%r11)\n\t" + "vzeroupper\n\t" + "jmp %l[done]\n\t" + "117:\n\t" + /* Don't use non-temporal store if there is overlap between */ + /* destination and source since destination may be in cache */ + /* when source is loaded. */ + "leaq (%%rcx, %%rdx), %%r10\n\t" + "cmpq %%r10, %%r9\n\t" + "jb 114b\n\t" + "118:\n\t" + /* Copy 4 * VEC a time backward with non-temporal stores. 
*/ + "prefetcht0 (-32 * 4 * 2)(%%rcx)\n\t" + "prefetcht0 (-32 * 4 * 2 - 64)(%%rcx)\n\t" + "prefetcht0 (-32 * 4 * 3)(%%rcx)\n\t" + "prefetcht0 (-32 * 4 * 3 - 64)(%%rcx)\n\t" + "vmovdqu (%%rcx), %%ymm0\n\t" + "vmovdqu -32(%%rcx), %%ymm1\n\t" + "vmovdqu -(32 * 2)(%%rcx), %%ymm2\n\t" + "vmovdqu -(32 * 3)(%%rcx), %%ymm3\n\t" + "subq $(32*4), %%rcx\n\t" + "subq $(32*4), %%rdx\n\t" + "vmovntdq %%ymm0, (%%r9)\n\t" + "vmovntdq %%ymm1, -32(%%r9)\n\t" + "vmovntdq %%ymm2, -(32 * 2)(%%r9)\n\t" + "vmovntdq %%ymm3, -(32 * 3)(%%r9)\n\t" + "subq $(32 * 4), %%r9\n\t" + "cmpq $(32 * 4), %%rdx\n\t" + "ja 118b\n\t" + "sfence\n\t" + /* Store the first 4 * VEC. */ + "vmovdqu %%ymm4, (%%rdi)\n\t" + "vmovdqu %%ymm5, 32(%%rdi)\n\t" + "vmovdqu %%ymm6, (32 * 2)(%%rdi)\n\t" + "vmovdqu %%ymm7, (32 * 3)(%%rdi)\n\t" + /* Store the last VEC. */ + "vmovdqu %%ymm8, (%%r11)\n\t" + "vzeroupper\n\t" + "jmp %l[done]" + : + : "r"(src), "r"(dst), "r"(len) + : "rax", "rcx", "rdx", "rdi", "rsi", "r8", "r9", "r10", "r11", "r12", "ymm0", + "ymm1", "ymm2", "ymm3", "ymm4", "ymm5", "ymm6", "ymm7", "ymm8", "memory" + : done + ); +done: + return dst; +} + +#else static __rte_always_inline void * rte_memcpy_generic(void *dst, const void *src, size_t n) { @@ -479,6 +971,8 @@ rte_memcpy_generic(void *dst, const void *src, size_t n) goto COPY_BLOCK_128_BACK31; } +#endif /* RTE_MEMCPY_AMDEPYC2 */ + #else /* __AVX512F__ */ #define ALIGNMENT_MASK 0x0F @@ -874,6 +1368,14 @@ rte_memcpy(void *dst, const void *src, size_t n) return rte_memcpy_generic(dst, src, n); } +#if defined __AVX2__ && defined(RTE_MEMCPY_AMDEPYC2) +static __rte_always_inline void * +rte_memcpy_aligned_tstore16(void *dst, void *src, int len) +{ + return rte_memcpy_aligned_ntload_tstore16_amdepyc2(dst, src, len); +} +#endif + #if defined(RTE_TOOLCHAIN_GCC) && (GCC_VERSION >= 100000) #pragma GCC diagnostic pop #endif diff --git a/meson_options.txt b/meson_options.txt index 0e92734c49..e232c9c340 100644 --- a/meson_options.txt +++ b/meson_options.txt @@ -42,6 +42,8 @@ option('platform', type: 'string', value: 'native', description: 'Platform to build, either "native", "generic" or a SoC. 
Please refer to the Linux build guide for more information.')
 option('enable_trace_fp', type: 'boolean', value: false, description:
        'enable fast path trace points.')
+option('rte_memcpy_amdepyc2', type: 'boolean', value: false, description:
+       'to enable amd epyc memcpy routines')
 option('tests', type: 'boolean', value: true, description:
        'build unit tests')
 option('use_hpet', type: 'boolean', value: false, description:

From patchwork Mon Aug 23 08:44:11 2021
X-Patchwork-Submitter: Aman Kumar <aman.kumar@vvdntech.in>
X-Patchwork-Id: 97216
X-Patchwork-Delegate: thomas@monjalon.net
From: Aman Kumar <aman.kumar@vvdntech.in>
To: dev@dpdk.org
Cc: rasland@nvidia.com, asafp@nvidia.com, shys@nvidia.com,
 viacheslavo@nvidia.com, akozyrev@nvidia.com, matan@nvidia.com,
 anatoly.burakov@intel.com, keesang.song@amd.com, aman.kumar@vvdntech.in
Date: Mon, 23 Aug 2021 14:14:11 +0530
Message-Id: <20210823084411.29592-2-aman.kumar@vvdntech.in>
In-Reply-To: <20210823084411.29592-1-aman.kumar@vvdntech.in>
References: <20210823084411.29592-1-aman.kumar@vvdntech.in>
Subject: [dpdk-dev] [PATCH 2/2] net/mlx5: optimize mprq memcpy for AMD EPYC2 platforms

Add a non-temporal load and temporal store variant of the MPRQ memcpy.
Enable 'mlx5_ntload_tstore' in the meson build configuration to turn on
this optimization, which utilizes the AMD EPYC2 optimized rte_memcpy*
routines added in the previous patch.

Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
---
 drivers/net/mlx5/meson.build |  1 +
 drivers/net/mlx5/mlx5.c      | 12 ++++++++++++
 drivers/net/mlx5/mlx5.h      |  3 +++
 drivers/net/mlx5/mlx5_rx.h   | 24 ++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_rxq.c  |  4 ++++
 meson_options.txt            |  2 ++
 6 files changed, 46 insertions(+)

diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build
index dac7f1fabf..0d2888742c 100644
--- a/drivers/net/mlx5/meson.build
+++ b/drivers/net/mlx5/meson.build
@@ -61,6 +61,7 @@ foreach option:cflags_options
         cflags += option
     endif
 endforeach
+dpdk_conf.set('RTE_LIBRTE_MLX5_NTLOAD_TSTORE_ALIGN_COPY', get_option('mlx5_ntload_tstore'))
 if get_option('buildtype').contains('debug')
     cflags += [ '-pedantic', '-DPEDANTIC' ]
 else
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index f84e061fe7..cf57867c25 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -161,6 +161,11 @@
 /* Configure timeout of LRO session (in microseconds). */
 #define MLX5_LRO_TIMEOUT_USEC "lro_timeout_usec"
 
+#ifdef RTE_LIBRTE_MLX5_NTLOAD_TSTORE_ALIGN_COPY
+/* mprq_tstore_memcpy */
+#define MLX5_MPRQ_TSTORE_MEMCPY "mprq_tstore_memcpy"
+#endif
+
 /*
  * Device parameter to configure the total data buffer size for a single
  * hairpin queue (logarithm value).
@@ -1991,6 +1996,10 @@ mlx5_args_check(const char *key, const char *val, void *opaque)
 		config->decap_en = !!tmp;
 	} else if (strcmp(MLX5_ALLOW_DUPLICATE_PATTERN, key) == 0) {
 		config->allow_duplicate_pattern = !!tmp;
+#ifdef RTE_LIBRTE_MLX5_NTLOAD_TSTORE_ALIGN_COPY
+	} else if (strcmp(MLX5_MPRQ_TSTORE_MEMCPY, key) == 0) {
+		config->mprq_tstore_memcpy = tmp;
+#endif
 	} else {
 		DRV_LOG(WARNING, "%s: unknown parameter", key);
 		rte_errno = EINVAL;
@@ -2051,6 +2060,9 @@ mlx5_args(struct mlx5_dev_config *config, struct rte_devargs *devargs)
 		MLX5_SYS_MEM_EN,
 		MLX5_DECAP_EN,
 		MLX5_ALLOW_DUPLICATE_PATTERN,
+#ifdef RTE_LIBRTE_MLX5_NTLOAD_TSTORE_ALIGN_COPY
+		MLX5_MPRQ_TSTORE_MEMCPY,
+#endif
 		NULL,
 	};
 	struct rte_kvargs *kvlist;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index e02714e231..7d5617f5ca 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -298,6 +298,9 @@ struct mlx5_dev_config {
 	int tx_skew; /* Tx scheduling skew between WQE and data on wire. */
 	struct mlx5_hca_attr hca_attr; /* HCA attributes. */
 	struct mlx5_lro_config lro; /* LRO configuration. */
+#ifdef RTE_LIBRTE_MLX5_NTLOAD_TSTORE_ALIGN_COPY
+	unsigned int mprq_tstore_memcpy:1;
+#endif
 };
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 3f2b99fb65..19318bdd1b 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -148,6 +148,9 @@ struct mlx5_rxq_data {
 	uint32_t rxseg_n; /* Number of split segment descriptions. */
 	struct mlx5_eth_rxseg rxseg[MLX5_MAX_RXQ_NSEG];
 	/* Buffer split segment descriptions - sizes, offsets, pools. */
+#ifdef RTE_LIBRTE_MLX5_NTLOAD_TSTORE_ALIGN_COPY
+	unsigned int mprq_tstore_memcpy:1;
+#endif
 } __rte_cache_aligned;
 
 enum mlx5_rxq_type {
@@ -422,6 +425,15 @@ mprq_buf_to_pkt(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt, uint32_t len,
 	const uint32_t offset = strd_idx * strd_sz + strd_shift;
 	void *addr = RTE_PTR_ADD(mlx5_mprq_buf_addr(buf, strd_n), offset);
 
+#ifdef RTE_LIBRTE_MLX5_NTLOAD_TSTORE_ALIGN_COPY
+	if (unlikely(!rxq->mprq_tstore_memcpy) &&
+	    len <= rxq->mprq_max_memcpy_len) {
+		rte_prefetch1(addr);
+		if (len > RTE_CACHE_LINE_SIZE)
+			rte_prefetch2((void *)((uintptr_t)addr +
+					       RTE_CACHE_LINE_SIZE));
+	}
+#endif
 	/*
 	 * Memcpy packets to the target mbuf if:
 	 * - The size of packet is smaller than mprq_max_memcpy_len.
@@ -433,8 +445,20 @@ mprq_buf_to_pkt(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt, uint32_t len,
 	    (hdrm_overlap > 0 && !rxq->strd_scatter_en)) {
 		if (likely(len <=
 			   (uint32_t)(pkt->buf_len - RTE_PKTMBUF_HEADROOM))) {
+#ifdef RTE_LIBRTE_MLX5_NTLOAD_TSTORE_ALIGN_COPY
+			uintptr_t data_addr;
+
+			data_addr = (uintptr_t)rte_pktmbuf_mtod(pkt, void *);
+			if (!((data_addr | (uintptr_t)addr) & ALIGNMENT_MASK) &&
+			    rxq->mprq_tstore_memcpy)
+				rte_memcpy_aligned_tstore16((void *)data_addr,
+							    addr, len);
+			else
+				rte_memcpy((void *)data_addr, addr, len);
+#else
 			rte_memcpy(rte_pktmbuf_mtod(pkt, void *),
 				   addr, len);
+#endif
 			DATA_LEN(pkt) = len;
 		} else if (rxq->strd_scatter_en) {
 			struct rte_mbuf *prev = pkt;
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index abd8ce7989..a1b0fa6455 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1449,6 +1449,10 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	tmpl->socket = socket;
 	if (dev->data->dev_conf.intr_conf.rxq)
 		tmpl->irq = 1;
+#ifdef RTE_LIBRTE_MLX5_NTLOAD_TSTORE_ALIGN_COPY
+	tmpl->rxq.mprq_tstore_memcpy = config->mprq_tstore_memcpy;
+#endif
+
 	/*
 	 * This Rx queue can be configured as a Multi-Packet RQ if all of the
 	 * following conditions are met:
diff --git a/meson_options.txt b/meson_options.txt
index e232c9c340..cc7b629d17 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -38,6 +38,8 @@ option('max_lcores', type: 'integer', value: 128, description:
        'maximum number of cores/threads supported by EAL')
 option('max_numa_nodes', type: 'integer', value: 32, description:
        'maximum number of NUMA nodes supported by EAL')
+option('mlx5_ntload_tstore', type: 'boolean', value: false, description:
+       'to enable optimized MPRQ in RX datapath')
 option('platform', type: 'string', value: 'native', description:
        'Platform to build, either "native", "generic" or a SoC. Please refer to the Linux build guide for more information.')
 option('enable_trace_fp', type: 'boolean', value: false, description:
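
As a usage sketch for the series (the PCI address and queue counts below
are hypothetical, and 'mprq_en=1' assumes a device and configuration
where MPRQ is supported):

    meson setup build -Drte_memcpy_amdepyc2=true -Dmlx5_ntload_tstore=true
    ninja -C build
    ./build/app/dpdk-testpmd -a 0000:03:00.0,mprq_en=1,mprq_tstore_memcpy=1 \
        -- --rxq=4 --txq=4

Note that the non-temporal copy is taken only when both the stride
address and the mbuf data address pass the ALIGNMENT_MASK check above;
otherwise the code falls back to plain rte_memcpy().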