Right now, seL4 is essentially a black box. If I base a project on seL4, and later become blocked by a missing kernel feature, I am stuck. This increases the risk of the project.
Ideally, I would be able to at least try to implement the missing feature myself, as well as the corresponding proofs. However, there is currently no documentation for how to do this. Would it be possible to have a tutorial that explains how to make a simple change, and how to update the proofs accordingly?
It would certainly be possible, and it’s probably something the foundation should do. It’d have to draw together quite a bit of material, e.g. you’d have to learn how to prove things. That is all possible, but it requires work to put together.
I’d love to create some courses and tutorials on this. The main missing ingredient is time.
I already know how to prove things, but I learned in the context of geometry, not software. So there is probably a bit of a leap to be made.
That said, I have noticed that I often find myself writing informal proofs as I write code. For instance, “we checked that this memory was valid in the last iteration of the loop, so it is safe to use it as an array index.” How well does such reasoning carry over to writing formal proofs in Isabelle?
It certainly helps. Learning Isabelle is probably not the hard part; it’s more about getting to grips with a very large number of existing theorems, understanding how they fit together, and also understanding the kernel deeply enough to make formal statements about which things are invariant, etc.
That makes sense, thank you. Would it be fair to say that the proofs are essentially the reasoning behind the correctness of each line of code, expressed in a format that a computer can check? And does that mean that if one is able to understand why a particular piece of code is correct, one should be able to express it in such a way that the proofs are accepted, without an exorbitant amount of effort?
I believe one reason that seL4 has had few external contributors is that people (erroneously) believe that the proofs are black magic that one needs a Ph.D. to understand or contribute to. I sincerely hope that that is not the case, and would love for formal verification to be accessible to any programmer who decides that the labor involved is a worthwhile investment.
Essentially yes, although per program statement rather than per line. And there is more than one proof about each statement, since there are multiple different properties we prove (functional correctness, integrity, confidentiality, etc.).
Yes-ish, but in reality not really: the thing is that it can require a large amount of effort to actually deeply and formally understand why a program statement is correct. This is not because those statements are so complex, but because the context is large, and the kernel, necessarily (as a microkernel), is not very modular.

To understand why some assignment to a pointer somewhere in the code is correct, you also have to understand what other code uses that pointer, what invariants it relies on, when those invariants might be temporarily broken, what that means for the security properties, and maybe also what that means for the proofs of other program statements that use this pointer for their own purposes, with their own invariants and conditions.

If you understand all of that fully and formally, then it is mostly a matter of writing it all down at the correct points in the correct parts of the proofs that are affected. In reality, you rarely do understand all of it fully when you start proving. Instead, we find that we think we understand everything, start writing the proof, and during that process discover the parts that we missed, didn’t think about, or where we simply made a mistake in what we thought. That is one of the big advantages of machine-checked proofs: they let you find the proof and the invariants/properties incrementally, with imperfect understanding, but with the guarantee that you have not made any mistakes in reasoning.
I can confirm that this is not the case, because a large part of our proof engineering team does not have a PhD. Another large part of the team does have a PhD, but it is really not necessary for doing these proofs. When we advertise for the proof engineering team, we usually look for people with a degree in CS or maths, with experience in functional languages, C, and formal proof. The formal proof part is probably the smallest of these to learn.
I’m less sure about “any” programmer. A certain affinity for formal and mathematical reasoning is probably required, otherwise you’ll likely lose patience pretty quickly. But I can also say that I have successfully taught people with a primary degree in accounting how to do Isabelle proofs and program verification. (They were doing a secondary degree in computer science at the time, but it was their first year in that, so there was not a lot of previous experience with programming or computer science, just some affinity for maths and logic, and the necessary interest. I am still very impressed by those students, because they also learned functional programming at the same time.)
Sorry about joining in on this conversation a bit late.
My understanding is that with seL4 we have a minimum essential environment. For practical purposes I would want to treat it as a black box and build whatever I require on top of that provably secure foundation.
In an ideal world you would be correct, but in practice, OS writers may legitimately need features that seL4 does not currently provide. The most obvious example is platform ports, of both the code and the proofs. However, there are other cases too. For instance, the inability to listen on multiple notification objects at once (in the same thread) could be a problem. Or the limitation could be something else entirely.
Of course, seL4 cannot (and should not) try to implement every feature someone might want. But I also do not want seL4 to impose unnecessary constraints on systems based on it.