Memory Interleaving switching

Hi,

I have a 4-channel CPU with DIMMs in all channels, but the measured memory bandwidth is below expectations, most probably because memory interleaving is switched off (checked via WMI and by reading the CPU memory controller registers).
Unfortunately the BIOS/UEFI has no related setting, but there is a theoretical way to configure the memory interleaving properly by changing the memory controller registers directly.
I believe changing the registers on a running system would crash it immediately (I haven't tried yet), as the interleaving has to be set at a very early stage, before the system boots.
Could it be done, at least theoretically, by extending UEFI?
Is there a technology like a UEFI extension/plugin/driver that could be used for that?
Could you please point me to where to look?

Thank you

I’m not sure what your question is, but memory controller configuration is not something software can do anything about.

The memory controller is a (CPU-internal) PCI device with a set of registers that can be read and written. You may want to review the CPU datasheet.
BIOS/UEFI (software as well) surely does exactly that, in volume.
Unfortunately the BIOS/UEFI incarnation on my motherboard doesn’t, by default, configure the registers I would like configured.
I know how to configure the registers; I’m just looking for the proper place to put the code.

He’s not saying it’s impossible. He’s saying those registers do not belong to you. You don’t know which other operating-system components assume a certain set of values.

Hardware registers belong to the physical system, I mean the platform, not an OS, so formally they belong to me )). There are a lot of user-mode programs that read these registers (CPU-Z, HWiNFO, etc.; pciutils, edac-utils, msr-tools on Linux). There are even user-mode programs that write to memory controller registers; the Intel overclocking utility, for instance, modifies memory controller registers to change memory timings and voltages.
I just realized the memory interleaving has to be adjusted before the memory is used, so before the OS starts loading. The only software known to me that runs before the OS is UEFI, so my humble question is whether the interleaving-related changes can safely be made with a UEFI extension (if such a thing exists).

You can write your own UEFI extension, and that would be the only place to mess with those settings. I think there are open-source kits for building UEFI.
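The main open-source kit is TianoCore EDK2. As a sketch only (this shows the driver plumbing, not the actual interleaving programming; the bus/device/function and offset are placeholders, and note that by the DXE phase main memory is already initialized, which is likely too late for this particular register — see below in the thread), a DXE driver entry point that touches a memory-controller config register could look like:

```c
/* Sketch of an EDK2 (TianoCore) DXE driver entry point.
 * CAVEAT: by the DXE phase main memory is already up, so this is
 * probably too late to change interleaving; the real work belongs
 * in the platform's PEI memory-initialization module. */
#include <Uefi.h>
#include <Library/PciLib.h>
#include <Library/DebugLib.h>

EFI_STATUS
EFIAPI
InterleaveDriverEntry (
  IN EFI_HANDLE        ImageHandle,
  IN EFI_SYSTEM_TABLE  *SystemTable
  )
{
  UINT32  Value;

  /* Hypothetical memory-controller register at bus 0, dev 0,
   * func 0, offset 0x40 -- a placeholder, not a real offset. */
  Value = PciRead32 (PCI_LIB_ADDRESS (0, 0, 0, 0x40));
  DEBUG ((DEBUG_INFO, "MC reg = 0x%08x\n", Value));

  /* A write would go here, e.g. PciWrite32 (...), once the correct
   * register and value are known from the CPU datasheet. */
  return EFI_SUCCESS;
}
```

The driver would be built with an EDK2 .inf file and either added to the firmware volume or loaded from the UEFI shell for experiments.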

Hardware registers belong to the physical system, I mean the platform, not an OS,

You’re nitpicking semantics. In an operating system, each hardware resource is owned by an operating system entity. That operating system entity has the right to assume that no one else is mucking with the resources it owns. In this case, you don’t own the memory controller.

That’s why I think that before the OS is loaded, at the UEFI stage (while there is no OS at all), I can temporarily take possession of the memory controller. I’m just not sure whether UEFI has its own notion of ownership. Writing a UEFI driver is not a problem (eventually); I would just like to know in advance whether there could be an ownership-related conflict at that stage as well.

Whether there is any conflict depends on what your motherboard manufacturer has put in their code, and that is likely to be quite different from one model to another.

It is still not clear what you are trying to do. Memory interleaving is a way for NUMA hardware to present memory to a non-NUMA-aware OS in a way that is less likely to perform badly by concentrating memory access on a single node. Standard desktop systems don’t have NUMA nodes, and Windows has been NUMA-aware for many years.

There is a misunderstanding. Memory interleaving is not NUMA-related at all.
It’s a way to increase memory performance by interleaving memory channels, individually for each CPU. It’s analogous to striping (RAID 0) for disks: striping typically applies to 2 disks, but memory interleaving can be 2-, 3-, or 4-way depending on the number of memory channels.
The interleaving is controlled by a single CPU (memory controller) register; I just need to put the proper value into that register. The register itself definitely depends on the CPU, but I only need to switch interleaving on for my current CPU, so the solution doesn’t have to be universal.
Obviously, since interleaving rearranges the entire memory layout, the register modification has to be made at a very early stage, before any stack/data is allocated in real memory and only CPU-internal memory is in use. So I focused on UEFI, but it looks like the UEFI driver infrastructure runs on already-prepared (in my case, non-interleaved) memory. It looks like I need to patch an initial UEFI stage, where all the other CPU registers are set according to the UEFI settings.
I don’t think it’s too motherboard-dependent; UEFI is a standard product of third-party companies, just customized by motherboard vendors.

and only CPU-internal memory is in use,

What CPU internal memory? For this to work, you’d have to be executing out of ROM, right? Even fetching the next instruction would fail.

The CPU has a small internal memory, 64 KB or so if I remember correctly.
Anyway, it can be found in the CPU datasheets.
It’s used by UEFI in the SEC boot stage, when the code still runs from flash/EEPROM and the stack lives in that CPU-internal memory (cache configured as RAM).
The procedure is described at uefi.org.
It’s not a miracle anyway. A memory controller exists in any modern processor.
It has to be initialized, and while the initialization procedure is not yet finished, the initialization code may not use main memory (otherwise failure is guaranteed).
The memory controller has a lot of registers to be set during that procedure; the interleaving-related one is just one of them. So I’m looking for that initialization place, to add one register write.
Can’t believe it? Consider ECC.
When UEFI starts, it identifies the CPU and realizes the CPU supports ECC.
But ECC should be switched on only if the memory is ECC-capable, so UEFI checks the memory capabilities and only then instructs the memory controller to use ECC. None of that code may use the real memory,
so it uses the (small) CPU-embedded one.

There is too much here for a simple post, but I don’t think you will realize any material gain in memory performance.

On another computer, which does have the interleaving option in its BIOS: +82% with 2-way interleaving.

So on another computer with a different BIOS, a different CPU and probably a different chipset, you see an 82% increase in whatever metric you are using to measure this?

Precisely. It’s not my machine, though; I asked a colleague to try.
The problem arose while checking my 2 PCs.
1st: 2 channels, 2-way interleaved DDR4-2133 shows 24 GB/s memory bandwidth (memtest86) (btw, faster than a single/non-interleaved DIMM can theoretically reach; DDR4-2133 = PC4-17000).
2nd: 4 channels, non-interleaved DDR4-2400 – 19 GB/s (the same memtest86) (a very close match for single/non-interleaved DDR4-2400 = PC4-19200).
So the slower memory works noticeably faster, and the main difference is interleaving. Neither machine has an interleaving option in BIOS.
The interleaving-related difference was initially discovered via
“wmic memorychip get InterleavePosition,InterleaveDataDepth”

I think you are missing the point. This test does not tell you anything about the performance effect of memory interleaving because of these hardware differences.

Also, memtest86 is a program that runs a memory diagnostic. The diagnostic writes patterns to ranges of adjacent addresses and then reads them back, looking for electrical faults that can be exposed this way. This memory access pattern is very artificial compared to what a user-mode program on a virtual-memory OS (Windows) generates. Even the ‘benchmark’ tests are more dependent on the CPU microarchitecture than on the memory bus.

A much more useful tool is the Intel VTune memory access analysis. This tool uses the performance-monitoring registers in the CPU(s) to look into the actual behavior of the memory controller(s), cache effectiveness, and the memory coherency protocol. Most often, memory fences are the limiting factor for application performance; false sharing is often a problem as well. The raw performance of the RAM chips themselves is usually far down the list.