VSpace Address 0xDD00_0000 - 0xDD20_0000 in Qemu Virt?

So, backstory: I’m trying to instantiate a Linux kernel VM with 4GB of RAM. I’ve increased QEMU’s RAM size to something ridiculous like 10GB, turned highmem=on, and gotten the VM from its initial ~1GB up to 2.2GB. But the second I try to go over that (RAM_SIZE above 0x9000_0000), these are the errors I run into:

sel4utils_reserve_range_at_no_alloc@vspace.c:595 Range not available at 0x40000000, size 0xc0000000
vm_reserve_memory_at@guest_memory.c:335 Failed to allocate vm reservation: Unable to create vspace reservation at address 0x40000000 of size 3221225472
vm_ram_register_at@guest_ram.c:369 Unable to reserve ram region at addr 0x40000000 of size 0xc0000000
Assertion failed: !err (/host/test-vm-project/projects/vm/components/VM_Arm/src/modules/init_ram.c: init_ram_module: 21)
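
(For context, the QEMU invocation behind this is along these lines; the flags are illustrative and the kernel image path is just what my build produces:)

qemu-system-aarch64 \
    -machine virt,virtualization=on,highmem=on \
    -cpu cortex-a57 -nographic -m 10G \
    -kernel images/capdl-loader-image-arm-qemu-arm-virt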

To figure out what was going on, I put some print statements in vspace_internal.h inside is_available_range() to see which region was breaking RAM initialization, which resulted in:

install_vm_devices@main.c:704 module name: init_ram
bad 1 s: 0x00000000DD000000;  e: 0x00000000DD200000
bad 3 s: 0x00000000DD000000; ns: 0x00000000DD200000
bad 3 s: 0x00000000C0000000; ns: 0x0000000000000000
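
(The prints were essentially the macro below, dropped in front of each failing return in is_available_range() and numbered so I could tell which check fired; this is a sketch, and the s/e vs. s/ns labels varied per site:)

#include <inttypes.h>
#include <stdio.h>

/* Placed before each failing return in is_available_range();
 * n identifies the return site, s and e are the bounds of the
 * range being checked. */
#define BAD(n, s, e) \
    printf("bad %d s: 0x%016" PRIx64 ";  e: 0x%016" PRIx64 "\n", \
           (n), (uint64_t)(s), (uint64_t)(e))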

(A "bad" line here means the check returned false.) So it looks like... something... is getting reserved in that region. When I size RAM down to 0x9000_0000 and check the device tree through /proc from inside the Linux VM, there is a PCI device within that range, but when I look at the generated kernel.dts from the build I don’t see anything that would occupy it. Additionally, I am using the dts overlay to explicitly reserve that memory range for vm-memory, roughly as sketched below.
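
The overlay reservation is the stock devicetree reserved-memory pattern; the node name, cell sizes, and reg values below are from my setup and are only a sketch:

/ {
    reserved-memory {
        #address-cells = <0x2>;
        #size-cells = <0x2>;
        ranges;

        /* Keep the 3GB guest RAM window at 0x40000000 reserved. */
        vm-memory@40000000 {
            reg = <0x0 0x40000000 0x0 0xc0000000>;
            no-map;
        };
    };
};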

I’ve been going through the ninja build logs and all the included projects looking for something dynamic that might be sneaking its way into that region of memory, but I haven’t found anything and am now kind of at a loss. I’ve also checked that I am indeed building a 64-bit kernel, and I am.

I’ve also checked through /proc that the VM is itself receiving an updated device tree.

Has anyone else run into this problem before or have any other leads?

This question seems to run counter to the secure nature of seL4, but is there a way to probe what has reserved that memory in the vspace? Through a badge number or capability, say?
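
The closest I’ve gotten is walking the range with vspace_get_cap() from libsel4vspace, along the lines of the sketch below, but that only reports pages with frames actually mapped; an empty reservation lives in sel4utils’ internal bookkeeping and never shows up (dump_mapped_pages is just a name I made up):

#include <stdio.h>
#include <inttypes.h>
#include <sel4/sel4.h>
#include <utils/util.h>
#include <vspace/vspace.h>

/* Walk a virtual address range and report which 4K pages have a
 * frame capability mapped. Mapped pages only; bare reservations
 * are tracked inside sel4utils and won't appear here. */
static void dump_mapped_pages(vspace_t *vspace, uintptr_t start, uintptr_t end)
{
    for (uintptr_t va = start; va < end; va += PAGE_SIZE_4K) {
        seL4_CPtr cap = vspace_get_cap(vspace, (void *)va);
        if (cap != seL4_CapNull) {
            printf("0x%016" PRIxPTR " -> cap slot 0x%lx\n", va, (unsigned long)cap);
        }
    }
}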

Is there another path I should go down to try to resolve this?

Thank you in advance for any help or guidance.

(audible sigh) As is often the case with posting on forums after half of a good night’s sleep and a whiskey, I figured it out.

The offending file was:
projects/seL4_projects_libs/libsel4vmmplatsupport/plat_include/qemu-arm-virt/sel4vmmplatsupport/plat/vpci.h

I found it by setting ZF_LOG_LEVEL to 0, spotting the PCI device being instantiated in the resulting log, and going from there.

To give your VM more RAM than 0x9000_0000 (on qemu-arm-virt), you’ll need to move:

/* PCI host bridge configuration space */
#define PCI_CFG_REGION_ADDR 0xDE000000
/* PCI host bridge IO space */
#define PCI_IO_REGION_ADDR 0xDD000000
/* PCI memory space */
#define PCI_MEM_REGION_ADDR 0xDF000000ull
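
For example, the failing 3GB reservation above spans 0x4000_0000-0x1_0000_0000, which swallows all three regions (and the full 4GB only makes it worse). The relocated values below are purely illustrative; pick addresses that are free in both the guest’s memory map and the VMM’s vspace:

/* Illustrative only: moved below the 0x40000000 guest RAM base so a
 * large RAM window no longer overlaps the virtual PCI regions. */
#define PCI_CFG_REGION_ADDR 0x3D000000
#define PCI_IO_REGION_ADDR 0x3C000000
#define PCI_MEM_REGION_ADDR 0x3E000000ull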

Hopefully this saves someone some effort in the future.