Nice question! The answer is a bit long, but the short version is: yes, we do have to watch out for it, although not too much – the tools guard against most of it, and there are additional things we do about the rest.
From what little I understand about formal specifications and proofs, I think it is possible to write some logic in such a way that it can unintentionally be used to prove that anything is true.
These would be called “inconsistent” specifications, or inconsistencies in the spec, the basic logic, or the theorems. Tautologies are statements that are always true, usually trivially so, which makes them harmless to introduce, but you wouldn’t want your main theorem to be a tautology in the trivial sense.
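To make that concrete, here is a toy sketch (in Lean, purely for illustration – the seL4 proofs are written in Isabelle/HOL, and none of the names below come from them): a single inconsistent axiom lets you prove any statement whatsoever, while a tautology is harmless but says nothing.

```lean
-- Toy illustration only; not from the seL4 proofs.
-- An inconsistent axiom lets us "prove" absolutely anything:
axiom broken : False

theorem anything (P : Prop) : P :=
  broken.elim

-- A tautology, by contrast, is harmless but uninformative:
theorem harmless (P : Prop) : P → P :=
  fun h => h
```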
An inconsistency in the logic would make the entire development meaningless, so this is something to look out for. The basic logic we use (Higher Order Logic) is well understood in the literature and we don’t change it, so that part is fine. Logical definitions, as we use them in the seL4 specs, are checked by the tool before being admitted and can’t introduce inconsistency into the logic, so that part is fine as well.
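As a small illustration of why checked definitions are safe (again a Lean sketch with made-up names, not anything from the actual proofs): a plain definition only abbreviates a term that already exists, and the tool refuses definitions that could smuggle in a contradiction, e.g. circular ones.

```lean
-- A plain definition merely names an existing term, so it cannot make the
-- logic inconsistent:
def doubled (n : Nat) : Nat := n + n

-- A definition like the following would let us derive f n = f n + 1 and
-- hence False, so the tool refuses to admit it (it cannot be shown to
-- terminate):
-- def f (n : Nat) : Nat := f n + 1
```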
Where things are more dangerous is the introduction of axioms or explicit assumptions into the proof. The difference between them in this context is that axioms must be universally true for all values of the variables they mention, whereas assumptions to theorems merely constrain the values that the theorem talks about, i.e. not all values have to satisfy the assumptions, just the useful ones we want to talk about. What we want to avoid is that there are no such values at all (which would make the theorem vacuously true, i.e. a trivial tautology).
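A minimal sketch of that distinction (Lean again, invented names): an axiom is asserted for all values, an assumption only restricts the values a particular theorem is about, and an unsatisfiable assumption makes a theorem vacuously provable.

```lean
-- An axiom is asserted for all values of its variables (and had better be
-- universally true):
axiom postulated : ∀ n : Nat, n + 0 = n

-- An assumption only restricts which values this particular theorem is
-- about; not every n has to satisfy it:
theorem small_is_smaller (n : Nat) (h : n < 10) : n < 20 :=
  Nat.lt_trans h (by decide)

-- The danger: if no value can satisfy the assumption, the theorem is
-- vacuously true and tells us nothing:
theorem vacuous (n : Nat) (h : n < 0) : n = 42 :=
  absurd h (Nat.not_lt_zero n)
```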
We use axioms extremely sparingly and carefully, and we manually review them. For some, we can prove that a model exists, i.e. that they are an abstraction of a more complex world than we want to talk about in the proof, which makes them safe. The main (in fact, only, IIRC) use of axioms in seL4 is for the machine interface, where the properties we postulate are very straightforward. We initially used axioms for this instead of assumptions for technical reasons in the proof tool; those restrictions could now be lifted, I think. This is something to consider, because it would reduce the need for manually checking those axioms.
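The “prove that a model exists” idea looks roughly like this (a Lean sketch; the interface and its properties are invented here, the real machine interface is much richer): instead of postulating properties as bare axioms, you can package them up and exhibit one concrete instance, which shows they cannot be contradictory.

```lean
-- Hypothetical machine-interface properties, packaged as a structure
-- rather than postulated as axioms:
structure MachineIface where
  wordSize     : Nat
  wordSize_pos : 0 < wordSize

-- Exhibiting one concrete model shows the properties are consistent:
-- anything provable from them also holds of this instance, so they cannot
-- prove False.
def simpleModel : MachineIface :=
  { wordSize := 32, wordSize_pos := by decide }
```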
There is a second method for making sure axioms haven’t introduced an inconsistency into the logic, which is trying to automatically derive “False”. We don’t do that explicitly (because, if the logic is consistent, the attempt has to fail), but we would notice if the automated proof tools could suddenly prove everything. It’d make our lives much easier.
For important assumptions, we usually prove that they are at least not always false, i.e. that we are not proving something useless. That does not prove that the theorem is useful in reality, but it does make sure that the proof can’t succeed by a vacuous shortcut through contradictory assumptions.
An instance of that technique is the invariant proof for the abstract specification. These proofs roughly have the form “if the invariants are true before a kernel call, prove that they are true after”. If the invariants were internally inconsistent, that is, false for all states, that statement would just be “prove that False implies False”, which is trivially true. We prove that there exists a (small, simple to write down) state that does satisfy the invariants, so we know we are not proving a trivial statement.
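In miniature, the pattern looks like this (Lean sketch, all names invented; the real invariants and kernel state are vastly bigger): a preservation theorem of the form “invariants before imply invariants after”, plus an explicit witness state showing the invariants are satisfiable at all.

```lean
-- A toy "kernel state" and invariant:
structure KState where
  counter : Nat
  bound   : Nat

def invs (s : KState) : Prop := s.counter ≤ s.bound

-- A toy "kernel call":
def step (s : KState) : KState := { s with bound := s.bound + 1 }

-- Preservation: if the invariant holds before the step, it holds after.
theorem invs_preserved (s : KState) (h : invs s) : invs (step s) :=
  show s.counter ≤ s.bound + 1 from Nat.le_trans h (Nat.le_succ s.bound)

-- The witness: a small, explicitly written-down state that satisfies the
-- invariant, so the preservation theorem is not just "False implies False".
theorem invs_satisfiable : invs { counter := 0, bound := 1 } :=
  Nat.zero_le 1
```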
Similarly, the abstract specification could be inconsistent in the sense that it contains no executions at all. We could exclude that by actually executing a few steps on a small example state in the theorem prover, but the refinement proof down to C already requires us to show that there exists an abstract spec execution for every C execution, so once the full proof chain is assembled, the intermediate steps don’t actually have to be checked – the proof could not succeed if the spec was empty.
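Why the refinement proof already covers this can be sketched as follows (Lean, shapes only; the real proof uses a much richer notion of execution and state correspondence): forward simulation says every concrete step from a related state is matched by some abstract step, so as soon as there is any concrete execution, the abstract spec cannot be empty.

```lean
variable {AState CState : Type}

-- Forward-simulation shape of refinement: every concrete step from a
-- related state is matched by some abstract step.
def refines (astep : AState → AState → Prop)
            (cstep : CState → CState → Prop)
            (R : AState → CState → Prop) : Prop :=
  ∀ (a : AState) (c c' : CState),
    R a c → cstep c c' → ∃ a', astep a a' ∧ R a' c'

-- So any concrete step from a related pair forces the existence of at
-- least one abstract step: the abstract spec is not empty of executions.
theorem abstract_step_exists
    {astep : AState → AState → Prop} {cstep : CState → CState → Prop}
    {R : AState → CState → Prop}
    (href : refines astep cstep R)
    {a : AState} {c c' : CState} (hR : R a c) (hc : cstep c c') :
    ∃ a', astep a a' :=
  match href a c c' hR hc with
  | ⟨a', ha, _⟩ => ⟨a', ha⟩
```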
The actual test for theorem assumptions is to use the theorems (e.g. the integrity security theorem) on an example system and state, and to prove that the assumptions are satisfied by that system. We do that as part of the regression tests on (very) small examples for the security theorems, and we have applied them to actual larger systems as well (not part of the ongoing regression tests, though).
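In toy form (Lean, names made up), “testing” a theorem’s assumptions looks like this: prove the assumptions for a concrete example and instantiate the theorem with it.

```lean
-- A theorem with an explicit assumption; the conclusion is false for
-- n = 0, so the assumption genuinely matters:
theorem pred_lt (n : Nat) (h : 0 < n) : n - 1 < n :=
  Nat.sub_lt h (by decide)

-- The "example system": a concrete value for which we discharge the
-- assumption...
def exampleSystem : Nat := 3

theorem exampleSystem_ok : 0 < exampleSystem := by decide

-- ...and then instantiate the theorem on it, so we know the assumptions
-- are not impossible to satisfy:
theorem exampleSystem_result : exampleSystem - 1 < exampleSystem :=
  pred_lt exampleSystem exampleSystem_ok
```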
The main missing (as in not proved, only manually validated) large-ish assumption is that the initial state after boot satisfies the invariants. That part is something we can and should prove at some point; it’s mainly a question of team scheduling and funding. This proof would not fully remove the assumptions about the initial state, but it would reduce them from “state after the kernel init code has run” to “state after the boot loader has run”, which is much simpler. You could in theory, with a boot loader proof, further reduce this to “state after the machine has been switched on”. At that point you would hit hardware verification, which is also possible, but may eventually have assumptions about things like input voltage etc. You’d probably want to stop well before that; the main point is that there will always be residual assumptions left that have to be validated outside the proof world. You’d want those to be simple to actually check.