What benefits of seL4 would be retained by porting the kernel implementation alone to another platform, such as Power ISA, without also porting verification/proofs or having it supported in any way by the developers of seL4?
As someone interested in seeing a general-purpose operating system built on top of seL4, I think there is a lot of value in using an object capability-secure microkernel even without a full verification story.
Using seL4 in a configuration without complete verification still gives you the benefits of the small trusted computing base, the capability model, and the quality of the implementation code.
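To make the capability point concrete, here is a minimal sketch, assuming the standard libsel4 user-level API, of a client invoking a server through an endpoint capability. The slot name `server_ep` and the message layout are made up for this example; the point is that the client has no way to reach the server except through a capability it explicitly holds:

```c
/* Minimal sketch using the libsel4 API: all authority is a capability.
 * `server_ep` is a hypothetical CSlot holding an endpoint capability. */
#include <sel4/sel4.h>

seL4_Word ask_server(seL4_CPtr server_ep, seL4_Word request)
{
    /* label 0, no caps transferred, 1 message register */
    seL4_MessageInfo_t info = seL4_MessageInfo_new(0, 0, 0, 1);
    seL4_SetMR(0, request);
    /* blocks until the server replies; the kernel checks the capability
     * on every invocation, so authority is never ambient */
    info = seL4_Call(server_ep, info);
    return seL4_GetMR(0);
}
```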
Porting seL4 to a new platform or architecture, even without porting any verification, introduces a maintenance overhead.
We currently test a large selection of configurations over a range of hardware platforms and maintain compatibility with the architectures and platforms listed on the hardware page.
A new platform on an existing architecture is difficult to maintain if we cannot run tests on the hardware, or at least on an emulation of the platform. Maintenance is also difficult because we need to implement and maintain a minimal set of drivers for each platform (e.g. IRQ controllers, kernel and user-level timers, and serial devices).
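To give a flavour of the per-platform driver code involved, here is a minimal sketch of a polled serial transmit routine of the kind a kernel needs for debug output. The MMIO base address and register layout are hypothetical; every real platform has its own:

```c
/* Illustrative only: a polled MMIO UART transmit routine. The addresses
 * and status bit below are invented for this sketch, not taken from seL4. */
#include <stdint.h>

#define UART_BASE    0x10000000UL                        /* hypothetical */
#define UART_TXDATA  (*(volatile uint32_t *)(UART_BASE + 0x00))
#define UART_STATUS  (*(volatile uint32_t *)(UART_BASE + 0x04))
#define UART_TX_FULL (1u << 0)                           /* hypothetical */

static void uart_putchar(char c)
{
    /* spin until the transmit FIFO has room, then write the byte */
    while (UART_STATUS & UART_TX_FULL) {
    }
    UART_TXDATA = (uint32_t)c;
}
```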
Adding an entirely new architecture is a much greater maintenance burden. Kernel code for managing the IRQ controller, virtual addressing, and mode and context switching all needs to be added and maintained for that architecture (and that's before adding any support for SMP or hardware virtualization). A new architecture also increases the work required to implement any new feature that depends on architecture-specific functionality.
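As a rough sketch of the surface area involved (all names below are invented for illustration, not seL4's actual internal API), a new architecture has to supply implementations along these lines before the generic kernel can run at all:

```c
/* Hypothetical per-architecture interface, for illustration only. */
typedef struct arch_tcb     arch_tcb_t;     /* saved register context */
typedef struct vspace_root  vspace_root_t;  /* top-level page table   */

void arch_irq_init(void);                        /* probe IRQ controller */
void arch_irq_mask(unsigned int irq);            /* mask an IRQ source   */
void arch_irq_ack(unsigned int irq);             /* acknowledge/EOI      */
void arch_map_kernel_window(void);               /* kernel VA mappings   */
void arch_set_vspace_root(vspace_root_t *root);  /* address-space switch */
void arch_switch_context(arch_tcb_t *from,
                         arch_tcb_t *to);        /* save/restore regs    */
```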
If seL4 were to be ported to a new architecture, the first major issue to resolve is how ongoing support and maintenance will be provided. If that can be resolved, then using it without any verification would be reasonable in low-assurance scenarios.
There are a bunch of aspects to this question, and some have been answered already, especially the small trusted computing base, the capability model, etc. I'll focus on the verification side only.
tl;dr: there is a benefit; you'll get a lower defect density, it's just not as strong as full verification.
If you port seL4 to a new architecture, you'll end up with a portion of code that has undergone formal verification in a different context, and a portion that is new and has not undergone formal verification yet. If the new architecture is similar to existing architectures, that new code will probably follow patterns similar to the ones used in the verified code.
None of that constitutes a new proof or gives any guarantees, but compared to an implementation that is entirely new code and has only undergone normal testing, there are still some benefits. The verified generic code can still fail in a new context, when it is used in ways that don't satisfy its preconditions. So the question is: how likely is that? It depends, of course, on the quality of the code that calls it. That is basically unknown until you start verification on the new code, but if you're not doing anything unexpected in the new architecture and you keep to the established patterns in the rest of the implementation, the odds are low that this code will fail those preconditions. Changes to proofs of generic code have so far been rare in the architecture ports that we did, i.e. the proofs mostly still applied when we checked.
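A toy illustration of what failing a precondition means in practice (none of this is seL4 code): suppose a helper was verified correct under the assumption that its index argument is in range. The proof says nothing about callers that break that assumption, so the new, unverified architecture code has to uphold it:

```c
/* Toy example: verified code is only correct under its preconditions. */
#include <assert.h>

#define N_SLOTS 256
static unsigned long slots[N_SLOTS];

/* Suppose this was proved correct under the precondition idx < N_SLOTS;
 * the precondition lives in the proof, not in the C code itself. */
unsigned long read_slot(unsigned int idx)
{
    return slots[idx];
}

unsigned long arch_lookup_irq(unsigned int hw_irq)
{
    /* New, unverified port code: if hw_irq can exceed N_SLOTS on this
     * platform, the verified helper is called outside its contract.
     * The runtime check stands in for the missing proof obligation. */
    assert(hw_irq < N_SLOTS);
    return read_slot(hw_irq);
}
```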
So assuming that, and assuming a standard unverified defect density for the new code, even if you don’t formally verify the new architecture port (yet), you end up with code that overall has a much lower defect density than normal OS code.
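As a back-of-the-envelope illustration (the 80/20 split and the treatment of verified code as defect-free are hypothetical simplifications, not measured numbers): if a fraction f of the port is verified generic code and the rest is new code with a typical defect density d, then roughly

```latex
\text{overall density} \approx f \cdot d_{\text{verified}} + (1 - f) \cdot d_{\text{new}}
                       \approx 0.8 \cdot 0 + 0.2 \cdot d_{\text{new}}
                       = 0.2 \cdot d_{\text{new}}
```

i.e. on the order of a fifth of the defect density of an equivalent fully unverified kernel, with the caveat that the proofs only eliminate the classes of defects they actually cover.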