Is there any value to porting seL4 to a platform without also verifying the port?

There are a bunch of aspects to this question, and some have been answered already, especially the small trusted computing base, the capability model, etc. I’ll focus on the verification side only.

tl;dr: yes, there is a benefit, namely a lower defect density; it’s just not as strong as full verification.

If you port seL4 to a new architecture, you’ll end up with a portion of code that has undergone formal verification in a different context, and a portion that is new and has not undergone formal verification yet. If the new architecture is similar to existing architectures, that new code is probably following patterns similar to the ones used in verified code.

None of that constitutes a new proof or gives any guarantees, but compared to an implementation that is entirely new code and has only undergone normal testing, there are still benefits. The verified generic code can still fail in the new context if it is used in ways that don’t satisfy its preconditions. So the question is: how likely is that? It depends, of course, on the quality of the code that calls it, which is basically unknown until you start verifying that new code. But if you’re not doing anything unexpected in the new architecture and you keep to the established patterns in the rest of the implementation, the odds are low that this code will violate those preconditions. In the architecture ports we have done so far, changes to the proofs of the generic code have been rare, i.e. they mostly still applied when we checked.

So assuming that, and assuming a standard unverified defect density for the new code, even if you don’t formally verify the new architecture port (yet), you end up with code that overall has a much lower defect density than normal OS code.
