Pre-RFC: external tools


I have been experimenting with the prospect of running external tools over the kernel XML
during the kernel build, with the external tools written in other languages, generating bindings for other languages, and the like.

No one is relying on this in production; it is merely one experiment towards external tools.

I’ve set up a manifest which contains a small patch to the kernel’s build system.
It is currently just a prototype and doesn’t actually generate any useful code, or compile that code with the correct toolchains, etc. The pieces involved are:

  1. kernel patch
  2. external tools
  3. manifest
  4. generated sources

The basic premise is that it sets up a directory kernel/external,
and the kernel build simply checks whether that directory contains a CMakeLists.txt; if it does, it adds it via add_subdirectory.
There is currently nothing more to it than that.
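As a rough illustration, the kernel-side patch could be as small as a conditional add_subdirectory in the kernel’s top-level CMakeLists.txt. This is a hypothetical sketch of the idea, not the actual patch:

```cmake
# Hypothetical sketch of the kernel-side hook: if kernel/external/
# contains a CMakeLists.txt, pull it into the kernel build.
if(EXISTS "${CMAKE_CURRENT_SOURCE_DIR}/external/CMakeLists.txt")
    add_subdirectory(external)
endif()
```

Builds without a populated kernel/external directory are unaffected, since the check simply fails.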

The manifest then links the appropriate repositories into kernel/external.
The external tools repo builds a set of Rust code for the host system,
and the generated sources repo then runs those previously compiled host tools.

This needs both a Rust compiler and a mechanism for bridging CMake and the Rust build. For that I ignored the Rust support in the kernel, since it depends on xargo, which is in maintenance mode. The repositories which I link into kernel/external instead use corrosion (https://github.com/corrosion-rs/corrosion), which integrates Rust into CMake builds.
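For context, a minimal Corrosion-based CMakeLists.txt in the external tools repo might look like the following. This is a sketch under the assumption that the Rust tools live in a crate at rust_tools/ (the path and project name are hypothetical):

```cmake
cmake_minimum_required(VERSION 3.15)
project(external_tools)

# Corrosion provides the CMake <-> cargo bridge.
find_package(Corrosion REQUIRED)

# Import the crate; Corrosion creates CMake targets for its
# binaries, built with the host Rust toolchain.
corrosion_import_crate(MANIFEST_PATH rust_tools/Cargo.toml)
```

The generated-sources repo can then invoke the imported tool targets as custom commands.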

```sh
git clone https://github.com/corrosion-rs/corrosion.git
cmake -Scorrosion -Bbuild -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=~/.local
cmake --build build --config Release
cmake --install build --config Release
export CMAKE_PREFIX_PATH=$HOME/.local/lib64/cmake
repo init -u <manifest-url> -m external_tools.xml -b external_tools
repo sync
mkdir build
cd build
# configure and build the patched kernel from here as usual
```

This should build and run some tools alongside building the (patched) kernel.
I wanted to post an RFC here because, even though the patch is relatively tiny, I could see it being objectionable.

All bets are basically off when it comes to verification and how this could affect the kernel itself.

I haven’t investigated what the minimum CMake version required for my tools is, or managed to fix all the Rust/CMake warnings produced by this Rust tooling.
However, there shouldn’t be any impact in that regard when building the kernel without linking in the external tools.

Anyhow, I appreciate your time and consideration regarding whether this might be an acceptable path forwards.

Couldn’t the same be achieved by wrapping the kernel CMake build in another CMake build that runs on the sources and/or generated build artifacts? CMake dependency management should be able to figure out which steps can be skipped on rebuilds. For a custom project that generates a system, a wrapper CMake project is needed anyway. Thus, I wonder if it’s worth the effort to maintain a custom plugin interface - what would really need such fine-grained access?

Thanks for having a look, Axel.

My thinking involved the following:

  • The existing tools are in a subdirectory of the kernel, and these largely mirror those.
  • The tools need to read XML files out of the kernel directory,
    but in top-level manifests the kernel directory isn’t hard-coded in any way:
    in the “Incorporating into your project” page of the seL4 docs it is sometimes in seL4/,
    and sometimes in kernel/.
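Concretely, because kernel/external is entered from inside the kernel tree, the tools can locate the interface XML relative to their own directory, without knowing where the kernel sits in the manifest. A hypothetical sketch (the XML path is an assumption based on the current kernel repository layout):

```cmake
# Hypothetical: resolve kernel XML relative to kernel/external,
# so no manifest-level path needs to be hard-coded.
get_filename_component(KERNEL_ROOT "${CMAKE_CURRENT_SOURCE_DIR}/.." ABSOLUTE)
set(SYSCALL_XML "${KERNEL_ROOT}/libsel4/include/api/syscall.xml")
```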

This isn’t intended to be an OS personality, but to be used by one, so I didn’t think prescribing a manifest layout was appropriate.

I hadn’t really considered making the kernel a subdirectory, like:

  • external_tools/…
  • external_tools/kernel

which, if I understand correctly, is what you are proposing.
This still allows the tools to find the sources, and doesn’t require modifications to the kernel.
It’s definitely worth trying out.
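If I follow, the wrapper would be something like the following. This is a hypothetical sketch of the proposed layout, assuming the kernel build can be entered via add_subdirectory (directory names are made up):

```cmake
# Hypothetical top-level CMakeLists.txt for the wrapper layout:
# the unmodified kernel becomes a subdirectory of the tools project.
cmake_minimum_required(VERSION 3.16)
project(external_tools C)

add_subdirectory(kernel)  # unpatched seL4 kernel sources
add_subdirectory(tools)   # generators that read kernel/*.xml
```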

I guess my feeling, though, is that this is at least trying to build a more maintainable tooling layout for downstream people who largely have their own generators. They are unlikely to want to restructure their projects to accommodate it, and that is a restructuring seL4 goes to lengths to avoid imposing.

Perhaps not needing to patch the build system favors your proposed solution, though.

You have to determine whether adding more generators to the kernel is really a “kernel thing”, i.e. something the kernel cares about. libsel4 seems a bit special, because the kernel itself also uses these headers for all API/ABI related parts. For other languages the connection seems much looser and the kernel actually does not care at all - or does it? If not, all these generators could well run after the whole kernel is built. It would be a convenience task to provide better example wrapper projects that can be used as a baseline for further customization. In the end this leads to providing an “SDK” for seL4. (Disclaimer: this is what my company Hensoldt Cyber is working on with TRENTOS as well.)

Yeah, I would say the kernel doesn’t really care.

This situation is very much similar to that of libsel4. But libsel4 has the benefit of its code and the kernel code being generated by the same code generator, since the kernel code and this code must be in one-to-one correspondence. The kernel only cares insofar as its system call numbers are recognized.

At the same time (and I want to make very clear I’m not proposing this), these tools could be used to generate code for both the kernel and libsel4, in the same way that the existing tools do.

The tradeoffs are different (different host tool requirements: Python vs Rust). The large difference at that point is the installation and number of host dependencies that need to be installed before the build: you can get away with just a Rust compiler and some tools for CMake integration, and pull in the other dependencies (the equivalent of the Python sel4-deps package) at build time.
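For example, even Corrosion itself could be fetched at configure time rather than pre-installed, leaving the Rust toolchain as the only real host prerequisite. A sketch using CMake’s FetchContent (the tag shown is an assumption; substitute a current release):

```cmake
# Hypothetical: fetch Corrosion at configure time instead of
# requiring a pre-installed copy on the host.
include(FetchContent)
FetchContent_Declare(
    Corrosion
    GIT_REPOSITORY https://github.com/corrosion-rs/corrosion.git
    GIT_TAG v0.4.4  # assumed tag; pin to a released version
)
FetchContent_MakeAvailable(Corrosion)
```

Crate dependencies themselves are then resolved by cargo during the build.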

In this latter case, where these actually replace the kernel-provided tools, this is very much a “kernel thing”. And it is certainly no accident: I am considering the case where an OS personality already requires e.g. a Rust compiler, in which case we could potentially reduce the number of host dependencies for the seL4 build process.

In this sense it sort of depends on how it is being used, since the same tools can generate both kernel and downstream code.

I’m not sure I’m fully getting the dependency issue you mention. For the build environment, I’d delegate this to tools like Docker. An SDK would just provide the proper containers for building, and ideally also the container build details, so one can further customize the containers if needed.
I’m not a big fan of installing things at build time - better to fail with an error and try to tell the user what might be needed.

My primary interests lie in self-hosted capability systems,
so it’s much easier for me to port a tool which doesn’t require porting an entire Python interpreter or POSIX emulation.

Also, I feel like all these host tools sometimes make upgrades difficult, since they sometimes require everyone to upgrade their pre-installed host tools in sync, and I’m uncertain whether the build process has mechanisms to check that these are at the required versions.

As such, I just wanted to experiment here with a parallel set of tools which made different choices
regarding the host tools sync problem, while making it easier to eventually port the tools to run natively on seL4.
There is still a lot of work to get the tooling compiling itself, but I consider having easier-to-port code generation tools minor progress in that direction.