
We've been testing a server with 2x Xeon Gold 6154 CPUs on a Supermicro X11DPH-I motherboard with 96GB of RAM, and found some very strange memory performance issues compared to running with only 1 CPU (one socket empty), a similar dual-CPU Haswell Xeon E5-2687Wv3 system (used for this series of tests; other Broadwells perform similarly), Broadwell-E i7s, and Skylake-X i9s.

You would expect the Skylake Xeons, with their faster memory, to outperform the Haswells on the various memcpy functions and even on memory allocation (not covered in the tests below, as we found a workaround). Instead, with both CPUs installed, the Skylake Xeons run at almost half the speed of the Haswell Xeons, and even slower relative to an i7-6800k. Stranger still, when using Windows VirtualAllocExNuma to assign the NUMA node for memory allocation, plain memory copy functions predictably perform worse on the remote node than on the local node, but memory copy functions using the SSE, MMX, and AVX registers perform much faster on the remote NUMA node than on the local one (what?). As noted above, if we pull one of the Skylake Xeons, the system performs more or less as expected (still somewhat slower than Haswell, but not dramatically so).
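
For reference, the NUMA-pinned allocations use VirtualAllocExNuma roughly as follows (a minimal sketch; the helper name is ours, and the physical pages are only bound to the node on first touch):

    #include <windows.h>

    // Minimal sketch: reserve and commit a buffer whose physical pages
    // are preferred on a given NUMA node. VirtualAlloc* returns memory
    // aligned far beyond 64 bytes, so the alignment requirement is free.
    void* alloc_on_node(SIZE_T size, DWORD node)   // e.g. 16MB, node 0 or 1
    {
        return VirtualAllocExNuma(GetCurrentProcess(),
                                  nullptr,                   // any address
                                  size,
                                  MEM_RESERVE | MEM_COMMIT,
                                  PAGE_READWRITE,
                                  node);                     // preferred node
    }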

I'm not sure if this is a bug in the motherboard or the CPUs, or something to do with UPI vs. QPI, or none of the above, but no combination of BIOS settings seems to help. Disabling NUMA in the BIOS (not included in the test results) does improve the performance of all the copy functions that use the SSE, MMX, and AVX registers, but then all the plain memory copy functions suffer large losses as well.

For our test program, we tested both inline assembly functions and _mm intrinsics. We built everything on Windows 10 with Visual Studio 2017, except the assembly functions: since MSVC++ won't compile inline asm for x64, we compiled them with gcc from mingw/msys into an object file (using the -c -O2 flags), which we added to the MSVC++ linker inputs.
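
The split build looks roughly like this (the file name here is a placeholder, not the actual one from the repo):

    gcc -c -O2 asm_memcpy.c -o asm_memcpy.obj

The resulting asm_memcpy.obj is then added to the MSVC++ project's linker inputs alongside the objects Visual Studio produces.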

If the system has NUMA nodes, we test memory allocation with both operator new and VirtualAllocExNuma for each NUMA node, take a cumulative average over copies of 100 memory buffer pairs of 16MB each for each memory copy function, and rotate which memory allocation we are on between each set of tests.

All 100 source and 100 destination buffers are 64-byte aligned (for compatibility up to AVX-512 with the streaming functions) and initialized once: the source buffers to incrementing data and the destination buffers to 0xff.

The number of copies averaged varied between machines and configurations, since the test ran much faster on some and much slower on others.
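
The harness boils down to something like this (a simplified sketch using QueryPerformanceCounter; the names are illustrative, and the real loop lives in the linked repo):

    #include <windows.h>

    typedef void (*memcpy_fn)(void* dst, const void* src, size_t n);

    // Time `copies` calls of one copy function, rotating through the
    // 100 pre-allocated 16MB buffer pairs, and return the average in us.
    double average_us(memcpy_fn fn, void* dst[100], void* src[100],
                      size_t n, int copies)
    {
        LARGE_INTEGER freq, t0, t1;
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&t0);
        for (int i = 0; i < copies; ++i)
            fn(dst[i % 100], src[i % 100], n);   // rotate buffer pairs
        QueryPerformanceCounter(&t1);
        return (double)(t1.QuadPart - t0.QuadPart) * 1e6
               / (double)freq.QuadPart / copies;
    }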

Results were as follows:

Haswell Xeon E5-2687Wv3 1 CPU (1 empty socket) on Supermicro X10DAi with 32GB DDR4-2400 (10c/20t, 25 MB of L3 cache). But remember, the benchmark rotates through 100 pairs of 16MB buffers, so we probably aren't getting L3 cache hits.

---------------------------------------------------------------------------
Averaging 7000 copies of 16MB of data per function for operator new
---------------------------------------------------------------------------
std::memcpy                      averaging 2264.48 microseconds
asm_memcpy (asm)                 averaging 2322.71 microseconds
sse_memcpy (intrinsic)           averaging 1569.67 microseconds
sse_memcpy (asm)                 averaging 1589.31 microseconds
sse2_memcpy (intrinsic)          averaging 1561.19 microseconds
sse2_memcpy (asm)                averaging 1664.18 microseconds
mmx_memcpy (asm)                 averaging 2497.73 microseconds
mmx2_memcpy (asm)                averaging 1626.68 microseconds
avx_memcpy (intrinsic)           averaging 1625.12 microseconds
avx_memcpy (asm)                 averaging 1592.58 microseconds
avx512_memcpy (intrinsic)        unsupported on this CPU
rep movsb (asm)                  averaging 2260.6 microseconds

Haswell Xeon E5-2687Wv3, 2 CPUs on Supermicro X10DAi with 64GB RAM

---------------------------------------------------------------------------
Averaging 6900 copies of 16MB of data per function for VirtualAllocExNuma to NUMA node 0 (local)
---------------------------------------------------------------------------
std::memcpy                      averaging 3179.8 microseconds
asm_memcpy (asm)                 averaging 3177.15 microseconds
sse_memcpy (intrinsic)           averaging 1633.87 microseconds
sse_memcpy (asm)                 averaging 1663.8 microseconds
sse2_memcpy (intrinsic)          averaging 1620.86 microseconds
sse2_memcpy (asm)                averaging 1727.36 microseconds
mmx_memcpy (asm)                 averaging 2623.07 microseconds
mmx2_memcpy (asm)                averaging 1691.1 microseconds
avx_memcpy (intrinsic)           averaging 1704.33 microseconds
avx_memcpy (asm)                 averaging 1692.69 microseconds
avx512_memcpy (intrinsic)        unsupported on this CPU
rep movsb (asm)                  averaging 3185.84 microseconds
---------------------------------------------------------------------------
Averaging 6900 copies of 16MB of data per function for VirtualAllocExNuma to NUMA node 1
---------------------------------------------------------------------------
std::memcpy                      averaging 3992.46 microseconds
asm_memcpy (asm)                 averaging 4039.11 microseconds
sse_memcpy (intrinsic)           averaging 3174.69 microseconds
sse_memcpy (asm)                 averaging 3129.18 microseconds
sse2_memcpy (intrinsic)          averaging 3161.9 microseconds
sse2_memcpy (asm)                averaging 3141.33 microseconds
mmx_memcpy (asm)                 averaging 4010.17 microseconds
mmx2_memcpy (asm)                averaging 3211.75 microseconds
avx_memcpy (intrinsic)           averaging 3003.14 microseconds
avx_memcpy (asm)                 averaging 2980.97 microseconds
avx512_memcpy (intrinsic)        unsupported on this CPU
rep movsb (asm)                  averaging 3987.91 microseconds
---------------------------------------------------------------------------
Averaging 6900 copies of 16MB of data per function for operator new
---------------------------------------------------------------------------
std::memcpy                      averaging 3172.95 microseconds
asm_memcpy (asm)                 averaging 3173.5 microseconds
sse_memcpy (intrinsic)           averaging 1623.84 microseconds
sse_memcpy (asm)                 averaging 1657.07 microseconds
sse2_memcpy (intrinsic)          averaging 1616.95 microseconds
sse2_memcpy (asm)                averaging 1739.05 microseconds
mmx_memcpy (asm)                 averaging 2623.71 microseconds
mmx2_memcpy (asm)                averaging 1699.33 microseconds
avx_memcpy (intrinsic)           averaging 1710.09 microseconds
avx_memcpy (asm)                 averaging 1688.34 microseconds
avx512_memcpy (intrinsic)        unsupported on this CPU
rep movsb (asm)                  averaging 3175.14 microseconds

Skylake Xeon Gold 6154 1 CPU (1 empty socket) on Supermicro X11DPH-I with 48GB DDR4-2666 (18c/36t, 24.75 MB of L3 cache)

---------------------------------------------------------------------------
Averaging 5000 copies of 16MB of data per function for operator new
---------------------------------------------------------------------------
std::memcpy                      averaging 1832.42 microseconds
asm_memcpy (asm)                 averaging 1837.62 microseconds
sse_memcpy (intrinsic)           averaging 1647.84 microseconds
sse_memcpy (asm)                 averaging 1710.53 microseconds
sse2_memcpy (intrinsic)          averaging 1645.54 microseconds
sse2_memcpy (asm)                averaging 1794.36 microseconds
mmx_memcpy (asm)                 averaging 2030.51 microseconds
mmx2_memcpy (asm)                averaging 1816.82 microseconds
avx_memcpy (intrinsic)           averaging 1686.49 microseconds
avx_memcpy (asm)                 averaging 1716.15 microseconds
avx512_memcpy (intrinsic)        averaging 1761.6 microseconds
rep movsb (asm)                  averaging 1977.6 microseconds

Skylake Xeon Gold 6154, 2 CPUs on Supermicro X11DPH-I with 96GB DDR4-2666

---------------------------------------------------------------------------
Averaging 4100 copies of 16MB of data per function for VirtualAllocExNuma to NUMA node 0 (local)
---------------------------------------------------------------------------
std::memcpy                      averaging 3131.6 microseconds
asm_memcpy (asm)                 averaging 3070.57 microseconds
sse_memcpy (intrinsic)           averaging 3297.72 microseconds
sse_memcpy (asm)                 averaging 3423.38 microseconds
sse2_memcpy (intrinsic)          averaging 3274.31 microseconds
sse2_memcpy (asm)                averaging 3413.48 microseconds
mmx_memcpy (asm)                 averaging 2069.53 microseconds
mmx2_memcpy (asm)                averaging 3694.91 microseconds
avx_memcpy (intrinsic)           averaging 3118.75 microseconds
avx_memcpy (asm)                 averaging 3224.36 microseconds
avx512_memcpy (intrinsic)        averaging 3156.56 microseconds
rep movsb (asm)                  averaging 3155.36 microseconds
---------------------------------------------------------------------------
Averaging 4100 copies of 16MB of data per function for VirtualAllocExNuma to NUMA node 1
---------------------------------------------------------------------------
std::memcpy                      averaging 5309.77 microseconds
asm_memcpy (asm)                 averaging 5330.78 microseconds
sse_memcpy (intrinsic)           averaging 2350.61 microseconds
sse_memcpy (asm)                 averaging 2402.57 microseconds
sse2_memcpy (intrinsic)          averaging 2338.61 microseconds
sse2_memcpy (asm)                averaging 2475.51 microseconds
mmx_memcpy (asm)                 averaging 2883.97 microseconds
mmx2_memcpy (asm)                averaging 2517.69 microseconds
avx_memcpy (intrinsic)           averaging 2356.07 microseconds
avx_memcpy (asm)                 averaging 2415.22 microseconds
avx512_memcpy (intrinsic)        averaging 2487.01 microseconds
rep movsb (asm)                  averaging 5372.98 microseconds
---------------------------------------------------------------------------
Averaging 4100 copies of 16MB of data per function for operator new
---------------------------------------------------------------------------
std::memcpy                      averaging 3075.1 microseconds
asm_memcpy (asm)                 averaging 3061.97 microseconds
sse_memcpy (intrinsic)           averaging 3281.17 microseconds
sse_memcpy (asm)                 averaging 3421.38 microseconds
sse2_memcpy (intrinsic)          averaging 3268.79 microseconds
sse2_memcpy (asm)                averaging 3435.76 microseconds
mmx_memcpy (asm)                 averaging 2061.27 microseconds
mmx2_memcpy (asm)                averaging 3694.48 microseconds
avx_memcpy (intrinsic)           averaging 3111.16 microseconds
avx_memcpy (asm)                 averaging 3227.45 microseconds
avx512_memcpy (intrinsic)        averaging 3148.65 microseconds
rep movsb (asm)                  averaging 2967.45 microseconds

Skylake-X i9-7940X on ASUS ROG Rampage VI Extreme with 32GB DDR4-4266 (14c/28t, 19.25 MB of L3 cache) (overclocked to 3.8GHz/4.4GHz turbo, DDR at 4040MHz, Target AVX Frequency 3737MHz, Target AVX-512 Frequency 3535MHz, target cache frequency 2424MHz)

---------------------------------------------------------------------------
Averaging 6500 copies of 16MB of data per function for operator new
---------------------------------------------------------------------------
std::memcpy                      averaging 1750.87 microseconds
asm_memcpy (asm)                 averaging 1748.22 microseconds
sse_memcpy (intrinsic)           averaging 1743.39 microseconds
sse_memcpy (asm)                 averaging 3120.18 microseconds
sse2_memcpy (intrinsic)          averaging 1743.37 microseconds
sse2_memcpy (asm)                averaging 2868.52 microseconds
mmx_memcpy (asm)                 averaging 2255.17 microseconds
mmx2_memcpy (asm)                averaging 3434.58 microseconds
avx_memcpy (intrinsic)           averaging 1698.49 microseconds
avx_memcpy (asm)                 averaging 2840.65 microseconds
avx512_memcpy (intrinsic)        averaging 1670.05 microseconds
rep movsb (asm)                  averaging 1718.77 microseconds

Broadwell i7-6800k on ASUS X99 with 24GB DDR4-2400 (6c/12t, 15 MB of L3 cache)

---------------------------------------------------------------------------
Averaging 64900 copies of 16MB of data per function for operator new
---------------------------------------------------------------------------
std::memcpy                      averaging 2522.1 microseconds
asm_memcpy (asm)                 averaging 2615.92 microseconds
sse_memcpy (intrinsic)           averaging 1621.81 microseconds
sse_memcpy (asm)                 averaging 1669.39 microseconds
sse2_memcpy (intrinsic)          averaging 1617.04 microseconds
sse2_memcpy (asm)                averaging 1719.06 microseconds
mmx_memcpy (asm)                 averaging 3021.02 microseconds
mmx2_memcpy (asm)                averaging 1691.68 microseconds
avx_memcpy (intrinsic)           averaging 1654.41 microseconds
avx_memcpy (asm)                 averaging 1666.84 microseconds
avx512_memcpy (intrinsic)        unsupported on this CPU
rep movsb (asm)                  averaging 2520.13 microseconds

The assembly functions are derived from fast_memcpy in xine-libs, and are used mostly just for comparison against MSVC++'s optimizer.

Source code for the test is available at https://github.com/marcmicalizzi/memcpy_test (it's a bit long to include in the post).

Has anyone else run into this, or does anyone have any insight into why this might be happening?


Update 2018-05-15 13:40 EST

As suggested by Peter Cordes, I've updated the test to compare prefetched vs. non-prefetched and NT stores vs. regular stores, and to tune the prefetching done in each function. (I don't have any meaningful experience writing prefetch code, so if I'm making mistakes here, please let me know and I'll adjust the tests accordingly. The prefetching does have an impact, so at the very least it's doing something.) These changes are reflected in the latest revision at the GitHub link above, for anyone looking for the source code.
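
For context, the prefetching variants boil down to an inner loop like this (an illustrative AVX example, not the exact repo code; the 512-byte prefetch distance is a placeholder for the value being tuned):

    #include <immintrin.h>

    // Illustrative: AVX copy with software prefetch and NT stores.
    // Assumes 32-byte-aligned buffers and n a multiple of 32, as in the tests.
    void avx_copy_nt_prefetch(void* dst, const void* src, size_t n)
    {
        char*       d = (char*)dst;
        const char* s = (const char*)src;
        for (size_t i = 0; i < n; i += 32) {
            _mm_prefetch(s + i + 512, _MM_HINT_NTA);    // or _MM_HINT_T0
            __m256i v = _mm256_load_si256((const __m256i*)(s + i));
            _mm256_stream_si256((__m256i*)(d + i), v);  // NT store
        }
        _mm_sfence();  // make the NT stores globally visible before reuse
    }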

I've also added an SSE4.1 memcpy, since prior to SSE4.1 there are no _mm_stream_load SSE intrinsics (I specifically used _mm_stream_load_si128), so sse_memcpy and sse2_memcpy can't be fully non-temporal (their loads can't use NT hints); likewise, the avx_memcpy function uses AVX2 functions for its stream loads.
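
The SSE4.1 version pairs NT loads with NT stores, roughly like this (a minimal sketch under the same alignment assumptions; note that _mm_stream_load_si128 only truly bypasses the cache on WC memory, but it's the only SSE-era NT load available):

    #include <smmintrin.h>  // SSE4.1: _mm_stream_load_si128

    // Minimal sketch: copy with NT loads and NT stores.
    // Assumes 16-byte-aligned buffers and n a multiple of 16.
    void sse41_memcpy(void* dst, const void* src, size_t n)
    {
        __m128i*       d = (__m128i*)dst;
        const __m128i* s = (const __m128i*)src;
        for (size_t i = 0; i < n / 16; ++i) {
            __m128i v = _mm_stream_load_si128((__m128i*)(s + i)); // NT load
            _mm_stream_si128(d + i, v);                           // NT store
        }
        _mm_sfence();
    }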

I opted not to test pure-store and pure-load access patterns yet, as I'm not sure a pure-store test would be meaningful: without a load into the registers being stored, the data would be meaningless and unverifiable.

The interesting result of the new test is that on the dual-socket Skylake Xeon setup, and only on that setup, the regular store functions were actually significantly faster than the NT streaming functions for 16MB memory copies. Also only on that setup (and only with LLC prefetch enabled in the BIOS), prefetchnta outperforms both prefetcht0 and no prefetch in some tests (SSE, SSE4.1).

The raw results of the new test are too long to add to this post, so they are posted in the same git repository as the source code, under results-2018-05-15.

I still don't understand why, with streaming NT stores, the remote NUMA node is faster on the dual-socket Skylake setup, although using regular stores on the local NUMA node is still faster than that.

  • Haven't had a chance to digest your data yet, but see also Why is Skylake so much better than Broadwell-E for single-threaded memory throughput? (comparing a quad-core Skylake against a many-core Broadwell, and seeing the downside of higher memory/L3 latency in many-core systems where single-core bandwidth is limited by max memory concurrency in one core, not by DRAM controllers.) SKX has high latency / low bandwidth per core to L3 / memory in general, according to Mysticial's testing and other results. You're probably seeing that. May 14, 2018 at 22:37
  • Are any of your copies using NT stores? I just checked, and all of your copies except MMX are using prefetchnta and NT stores! That's a huge important fact you left out of your question! See Enhanced REP MOVSB for memcpy for more discussion of ERMSB rep movsb vs. NT vector stores vs. regular vector stores. Messing around with that would be more useful than MMX vs. SSE. Probably just use AVX and/or AVX512 and try NT vs. regular, and/or leaving out the SW prefetch. May 14, 2018 at 23:08
  • Did you tune the prefetch distance for your SKX machines? SKX prefetchnta bypasses L3 as well as L2 (because L3 is non-inclusive), so it's more sensitive to prefetch distance (too late and data has to come all the way from DRAM again, not just L3), so it's more "brittle" (sensitive to tuning the right distance). Your prefetch distances look fairly low, though, under 500 bytes if I'm reading the asm correctly. @Mysticial's testing on SKX has found that prefetchnta can be a big slowdown on that uarch, and he doesn't recommend it. May 14, 2018 at 23:16
  • You definitely have some interesting results here, but we need to untangle them from various effects. Having numbers both with and without NT stores may tell us something useful about NUMA behaviour. Populating a 2nd socket forces even local L3 misses to snoop the remote CPU, at least on Broadwell/Haswell. Dual-socket E5 Xeons don't have a snoop filter. I think Gold Xeons do have snoop filters, because they're capable of operating in more than dual-socket systems. But I'm not sure how big it is, or what that really means :P I haven't done memory perf tuning on multi-socket. May 14, 2018 at 23:24
  • SKX is a fundamentally different interconnect; a mesh instead of a ring. It's an interesting result, but not unbelievable and may not be a sign of a misconfiguration. IDK, hopefully someone else with more experience with the hardware can shed more light. May 14, 2018 at 23:35

1 Answer


Is your memory the incorrect rank? Perhaps your board does something odd with memory ranking when you add that second CPU. I know quad-CPU machines do all kinds of strange things to make the memory work properly, and if you have incorrectly ranked memory it will sometimes work but clock back to around 1/4 or 1/2 of the speed. Perhaps SuperMicro did something on that board that makes the dual-CPU DDR4 configuration behave like quad-channel and applies similar math. Incorrect rank == 1/2 speed.

  • Doesn't appear to be the case; all the memory is 1R8 and matches the rank from the Supermicro QVL for the motherboard. Was worth a check though! May 1, 2019 at 23:11
  • I know this is a different system entirely, but this is what I was referring to: qrl.dell.com/Files/en-us/Html/Manuals/R920/… You'll note that the rank requirements change when you increase the number of sticks/CPUs. May 2, 2019 at 18:05
