[v3,3/3] lib/eal: add temporal store memcpy support on AMD platform
Commit Message
This patch provides a rte_memcpy* call with temporal stores.
Use -Dcpu_instruction_set=znverX with build to enable this API.
Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
---
config/x86/meson.build | 2 +
lib/eal/x86/include/rte_memcpy.h | 114 +++++++++++++++++++++++++++++++
2 files changed, 116 insertions(+)
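For example, assuming a znver2 target, the build that enables this path would be (using the cpu_instruction_set option named above):

    meson setup build -Dcpu_instruction_set=znver2
    ninja -C build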
Comments
26/10/2021 17:56, Aman Kumar:
> This patch provides a rte_memcpy* call with temporal stores.
> Use -Dcpu_instruction_set=znverX with build to enable this API.
>
> Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
> ---
> config/x86/meson.build | 2 +
> lib/eal/x86/include/rte_memcpy.h | 114 +++++++++++++++++++++++++++++++
It looks better as C code.
Do you achieve the same performance as the asm version?
> +#if defined RTE_MEMCPY_AMDEPYC
[...]
> +static __rte_always_inline void *
> +rte_memcpy_aligned_tstore16_generic(void *dst, void *src, int len)
So to be clear, an application will benefit from this optimization if
1/ DPDK is specifically compiled for AMD
2/ the application is compiled against the above DPDK build (because of inlining)
I guess there is no good way to benefit from the optimization
without specific compilation, because of the inlining constraint.
Another design, with less constraint but less performance,
would be to have a function pointer assigned at runtime based on the CPU.
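As an illustration of that alternative, a runtime dispatch could look roughly like the sketch below. This is hypothetical and not part of the patch; the AVX2 flag check and the memcpy_ts_* names are assumptions:

#include <string.h>
#include <rte_common.h>
#include <rte_cpuflags.h>
#include <rte_memcpy.h>

/* Hypothetical function-pointer dispatch, resolved once at startup. */
typedef void *(*memcpy_ts_fn)(void *dst, void *src, int len);

static void *
memcpy_ts_fallback(void *dst, void *src, int len)
{
	return memcpy(dst, src, (size_t)len); /* plain copy on other CPUs */
}

static memcpy_ts_fn memcpy_ts = memcpy_ts_fallback;

RTE_INIT(memcpy_ts_init)
{
#ifdef RTE_MEMCPY_AMDEPYC
	/* The 256-bit stream loads in the tstore16 path need AVX2. */
	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2))
		memcpy_ts = rte_memcpy_aligned_tstore16;
#endif
}

Every call then goes through the pointer, which avoids the inlining constraint at the cost of an indirect call per copy.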
On Tue, 26 Oct 2021 21:26:45 +0530
Aman Kumar <aman.kumar@vvdntech.in> wrote:
> This patch provides a rte_memcpy* call with temporal stores.
> Use -Dcpu_instruction_set=znverX with build to enable this API.
>
> Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
Ok, but it would be better to get this into glibc.
It would benefit a wider array of platforms and get more testing.
On Tue, Oct 26, 2021 at 9:44 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> 26/10/2021 17:56, Aman Kumar:
> > This patch provides a rte_memcpy* call with temporal stores.
> > Use -Dcpu_instruction_set=znverX with build to enable this API.
> >
> > Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
> > ---
> > config/x86/meson.build | 2 +
> > lib/eal/x86/include/rte_memcpy.h | 114 +++++++++++++++++++++++++++++++
>
> It looks better as C code.
> Do you achieve the same performance as the asm version?
>
In a few corner cases the assembly performed better, but overall we observe
very similar performance.
> > +#if defined RTE_MEMCPY_AMDEPYC
> [...]
> > +static __rte_always_inline void *
> > +rte_memcpy_aligned_tstore16_generic(void *dst, void *src, int len)
>
> So to be clear, an application will benefit from this optimization if
> 1/ DPDK is specifically compiled for AMD
> 2/ the application is compiled against the above DPDK build (because of
> inlining)
>
> I guess there is no good way to benefit from the optimization
> without specific compilation, because of the inlining constraint.
> Another design, with less constraint but less performance,
> would be to have a function pointer assigned at runtime based on the CPU.
>
You're right. We need to build DPDK and apps with this flag enabled to get
the benefit.
In future versions, we will try to adapt in a more dynamic way. Thanks.
On Wed, Oct 27, 2021 at 2:41 AM Stephen Hemminger <
stephen@networkplumber.org> wrote:
> On Tue, 26 Oct 2021 21:26:45 +0530
> Aman Kumar <aman.kumar@vvdntech.in> wrote:
>
> > This patch provides a rte_memcpy* call with temporal stores.
> > Use -Dcpu_instruction_set=znverX with build to enable this API.
> >
> > Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
>
> Ok, but it would be better to get this into glibc.
> It would benefit a wider array of platforms and get more testing.
>
Yes, we've considered this. It may go into glibc in the future.
27/10/2021 08:34, Aman Kumar:
> On Tue, Oct 26, 2021 at 9:44 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> > 26/10/2021 17:56, Aman Kumar:
> > > This patch provides a rte_memcpy* call with temporal stores.
> > > Use -Dcpu_instruction_set=znverX with build to enable this API.
> > >
> > > Signed-off-by: Aman Kumar <aman.kumar@vvdntech.in>
> > > ---
> > > config/x86/meson.build | 2 +
> > > lib/eal/x86/include/rte_memcpy.h | 114 +++++++++++++++++++++++++++++++
> >
> > It looks better as C code.
> > Do you achieve the same performance as the asm version?
> >
>
> In a few corner cases the assembly performed better, but overall we observe
> very similar performance.
>
> > > +#if defined RTE_MEMCPY_AMDEPYC
> > [...]
> > > +static __rte_always_inline void *
> > > +rte_memcpy_aligned_tstore16_generic(void *dst, void *src, int len)
> >
> > So to be clear, an application will benefit from this optimization if
> > 1/ DPDK is specifically compiled for AMD
> > 2/ the application is compiled against the above DPDK build (because of
> > inlining)
> >
> > I guess there is no good way to benefit from the optimization
> > without specific compilation, because of the inlining constraint.
> > Another design, with less constraint but less performance,
> > would be to have a function pointer assigned at runtime based on the CPU.
> >
>
> You're right. We need to build DPDK and apps with this flag enabled to get
> the benefit.
So generic x86 packages, as shipped by Linux distributions, won't have this optimization.
> In future versions, we will try to adapt in a more dynamic way. Thanks.
No, I was trying to say that unfortunately there is probably no solution.
@@ -76,8 +76,10 @@ dpdk_conf.set('RTE_MAX_NUMA_NODES', 32)
 # AMD platform support
 if get_option('cpu_instruction_set') == 'znver3'
     dpdk_conf.set('RTE_MAX_LCORE', 512)
+    dpdk_conf.set('RTE_MEMCPY_AMDEPYC', 1)
 elif get_option('cpu_instruction_set') == 'znver2'
     dpdk_conf.set('RTE_MAX_LCORE', 512)
+    dpdk_conf.set('RTE_MEMCPY_AMDEPYC', 1)
 elif get_option('cpu_instruction_set') == 'znver1'
     dpdk_conf.set('RTE_MAX_LCORE', 256)
 endif
@@ -376,6 +376,120 @@ rte_mov128blocks(uint8_t *dst, const uint8_t *src, size_t n)
}
}
+#if defined RTE_MEMCPY_AMDEPYC
+
+/**
+ * Copy 16 bytes from one location to another,
+ * with temporal stores
+ */
+static __rte_always_inline void
+rte_copy16_ts(uint8_t *dst, uint8_t *src)
+{
+	__m128i var128;
+
+	var128 = _mm_stream_load_si128((__m128i *)src);
+	_mm_storeu_si128((__m128i *)dst, var128);
+}
+
+/**
+ * Copy 32 bytes from one location to another,
+ * with temporal stores
+ */
+static __rte_always_inline void
+rte_copy32_ts(uint8_t *dst, uint8_t *src)
+{
+	__m256i ymm0;
+
+	ymm0 = _mm256_stream_load_si256((const __m256i *)src);
+	_mm256_storeu_si256((__m256i *)dst, ymm0);
+}
+
+/**
+ * Copy 64 bytes from one location to another,
+ * with temporal stores
+ */
+static __rte_always_inline void
+rte_copy64_ts(uint8_t *dst, uint8_t *src)
+{
+	rte_copy32_ts(dst + 0 * 32, src + 0 * 32);
+	rte_copy32_ts(dst + 1 * 32, src + 1 * 32);
+}
+
+/**
+ * Copy 128 bytes from one location to another,
+ * with temporal stores
+ */
+static __rte_always_inline void
+rte_copy128_ts(uint8_t *dst, uint8_t *src)
+{
+	rte_copy32_ts(dst + 0 * 32, src + 0 * 32);
+	rte_copy32_ts(dst + 1 * 32, src + 1 * 32);
+	rte_copy32_ts(dst + 2 * 32, src + 2 * 32);
+	rte_copy32_ts(dst + 3 * 32, src + 3 * 32);
+}
+
+/**
+ * Copy len bytes from one 16B-aligned location to another,
+ * with temporal stores
+ */
+static __rte_always_inline void *
+rte_memcpy_aligned_tstore16_generic(void *dst, void *src, int len)
+{
+	void *dest = dst;
+
+	while (len >= 128) {
+		rte_copy128_ts((uint8_t *)dst, (uint8_t *)src);
+		dst = (uint8_t *)dst + 128;
+		src = (uint8_t *)src + 128;
+		len -= 128;
+	}
+	while (len >= 64) {
+		rte_copy64_ts((uint8_t *)dst, (uint8_t *)src);
+		dst = (uint8_t *)dst + 64;
+		src = (uint8_t *)src + 64;
+		len -= 64;
+	}
+	while (len >= 32) {
+		rte_copy32_ts((uint8_t *)dst, (uint8_t *)src);
+		dst = (uint8_t *)dst + 32;
+		src = (uint8_t *)src + 32;
+		len -= 32;
+	}
+	if (len >= 16) {
+		rte_copy16_ts((uint8_t *)dst, (uint8_t *)src);
+		dst = (uint8_t *)dst + 16;
+		src = (uint8_t *)src + 16;
+		len -= 16;
+	}
+	if (len >= 8) {
+		*(uint64_t *)dst = *(const uint64_t *)src;
+		dst = (uint8_t *)dst + 8;
+		src = (uint8_t *)src + 8;
+		len -= 8;
+	}
+	if (len >= 4) {
+		*(uint32_t *)dst = *(const uint32_t *)src;
+		dst = (uint8_t *)dst + 4;
+		src = (uint8_t *)src + 4;
+		len -= 4;
+	}
+	if (len != 0) {
+		/*
+		 * Copy the remaining 1-3 bytes with one overlapping
+		 * 4-byte access ending at dst + len; this assumes the
+		 * overall copy length is at least 4 bytes.
+		 */
+		dst = (uint8_t *)dst - (4 - len);
+		src = (uint8_t *)src - (4 - len);
+		*(uint32_t *)dst = *(const uint32_t *)src;
+	}
+
+	return dest;
+}
+
+static __rte_always_inline void *
+rte_memcpy_aligned_tstore16(void *dst, void *src, int len)
+{
+	return rte_memcpy_aligned_tstore16_generic(dst, src, len);
+}
+
+#endif /* RTE_MEMCPY_AMDEPYC */
+
static __rte_always_inline void *
rte_memcpy_generic(void *dst, const void *src, size_t n)
{