I am currently researching formal verification of systems from a security perspective.
I found some interesting and relevant papers about seL4, but could not find the actual properties which seL4 was verified to satisfy.
Is there a list of security properties used for the formal verification of seL4?
Thank you!
I was not familiar with the first paper, and it does seem comprehensive. I will dig into it and see if I find what I’m looking for.
I was already familiar with the second paper you mentioned (as well as with some of your other work), and I have a lot of interest in the AADL integration.
I will try to share my understanding so far and raise a few related questions. Some of this discussion relates to establishing the value of formally verified software implementations.
From a “user” perspective, trying to build a system on seL4, these are the security properties that are formally verified:
Authority Confinement (Theorem 4, further supported by Theorem 6). This checks for enforcement of a static policy; specifically, it does not cover policy changes (e.g., via the granting mechanism). I could use a tip on how a security analyst should interpret this property. Is it a “building block” for the other properties, stating that the security policy is enforced throughout runtime (thus leading to integrity, for example)?
Integrity (Theorem 5, further supported by Theorem 7). This one focuses on ensuring that a component can make only the modifications it is specified to perform. It does not relate to the integrity of data stored in memory; e.g., it does not cover buffer overflows corrupting data.
Confidentiality (Theorem 8). This one focuses strictly on the confidentiality of information flows (a take on noninterference), to guarantee non-leakage. It does not relate to other aspects of confidentiality, such as those relevant to stored data, or even to the data carried in these information flows (e.g., when one process is used to route sensitive data between other processes).
Any other properties that I missed?
Are there formal expressions of the properties using LTL (or a similar language)?
Is there a “cookbook” that tells the designer how to properly design a system so as to fully benefit from the verified properties? For example, with respect to Authority Confinement, the first paper mentions the use of static policies for untrusted objects. Has this been incorporated into any seL4 user manual / guidelines?
Yes, authority confinement is mostly a building block. It is asking the question “will subject X ever be able to gain authority A to subject Y?”
Authority confinement means that you can statically predict how far authority can travel in a system. It explicitly does include reasoning about granting or sharing capabilities – it basically assumes that if an untrusted component can grant then it will grant. If you do that, you can draw static boundaries around a dynamic system.
The policy you get from this saturation strategy is the maximal static policy of the system, even if the “real” policy of the system might be dynamic.
The prediction of how far authority can travel in the system gets more precise if you can assume behaviour of trusted components (e.g. because you have verified them). Then you do not need to assume that they will grant all authority they can grant, and therefore you can get tighter bounds on how much authority other components can get.
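To make the saturation idea concrete, here is a minimal sketch in Python of computing such a maximal static policy. This is my own illustrative model (the edge labels, the `saturate` function, and the `trusted` parameter are made up for the example), not the actual Isabelle formalisation:

```python
# Illustrative model of the saturation step (my own sketch, not seL4/Isabelle):
# authority is a labelled edge (subject, label, target), e.g. ("A", "grant", "B").

def saturate(edges, trusted=frozenset()):
    """Compute a maximal static policy: assume every untrusted subject that
    *can* grant authority *will* grant it, and iterate to a fixed point."""
    policy = set(edges)
    changed = True
    while changed:
        changed = False
        for (x, label, y) in list(policy):
            # If untrusted x can grant to y, y may end up with any
            # authority x holds, so copy x's edges over to y.
            if label == "grant" and x not in trusted:
                for (x2, label2, z) in list(policy):
                    if x2 == x and (y, label2, z) not in policy:
                        policy.add((y, label2, z))
                        changed = True
    return policy

# Example: A can grant to B and write to O; after saturation B can write to O too.
print(sorted(saturate({("A", "grant", "B"), ("A", "write", "O")})))
```

The `trusted` parameter is the point in the last paragraph: trusted components are assumed not to hand out their authority, so their edges are not saturated, which is what gives you the tighter bounds.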
Yes, if I understand you correctly, that is the right interpretation. It is not about checksums or buffers, but about whether the kernel will allow a component to write to a specific memory page or modify the state of a kernel object (via system calls).
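As a toy illustration of that reading (same disclaimer: my own model, not kernel code), integrity says that every modification a component makes must be covered by a write edge in the policy:

```python
# Illustrative integrity check over the same toy model (not kernel code):
# a step by `subject` that modifies `obj` is only acceptable if the policy
# gives `subject` write authority over `obj`.

def integrity_ok(policy, subject, modified_objects):
    """True iff every object the subject modified is one it may write to."""
    return all((subject, "write", obj) in policy for obj in modified_objects)

policy = {("A", "write", "O")}
print(integrity_ok(policy, "A", {"O"}))       # True: the write is covered
print(integrity_ok(policy, "A", {"O", "P"}))  # False: A has no write edge to P
```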
Correct. The confidentiality theorem assumes that all components are untrusted, so a component that takes in sensitive data and has access to two different data sinks will be assumed to leak that information to both of them.
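Concretely, in the same toy model, that worst-case assumption is just a reachability computation: sensitive data is assumed to travel along every read/write path available to untrusted components:

```python
# Illustrative worst-case information flow (again my own sketch): data is
# assumed to reach every sink reachable via read/write edges, because
# untrusted components are assumed to forward whatever they can read.

def reachable_sinks(policy, source):
    """Components and objects that data starting at `source` may reach."""
    reached, frontier = {source}, [source]
    while frontier:
        node = frontier.pop()
        for (x, label, y) in policy:
            # A component that reads `node` learns the data; anything the
            # current node can write to is assumed to receive it as well.
            if label == "read" and y == node and x not in reached:
                reached.add(x); frontier.append(x)
            if label == "write" and x == node and y not in reached:
                reached.add(y); frontier.append(y)
    return reached

policy = {("C", "read", "secret"), ("C", "write", "sink1"), ("C", "write", "sink2")}
print(sorted(reachable_sinks(policy, "secret")))  # ['C', 'secret', 'sink1', 'sink2']
```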
For high-level user-relevant properties, these are the main ones. There is also the absence of buffer overflows, crashes, etc. in the kernel, as well as termination of kernel calls, which can be security relevant, but we usually count those more as functional correctness.
They are expressed in higher-order logic in Isabelle/HOL (LTL is insufficient). They are not small properties; you can find authority confinement and integrity (proved together) in the “Access” session of the l4v repository, and confidentiality in the “InfoFlow” session. There is a Chrome extension that renders the mathematical Isabelle symbols on GitHub, if you don’t want to download Isabelle to view them.
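For a feel of their shape, here is a loose paraphrase of the integrity statement in LaTeX notation; `pas_refined` is a name used in the Access session, while `writable` and `state_of` are my own shorthand, so treat this as a sketch rather than the actual Isabelle text:

```latex
% Loose paraphrase, not the actual Isabelle/HOL statement:
% if the access-control policy p is an abstraction of state s (pas_refined)
% and the kernel takes a step on behalf of some subject, then every object
% outside that subject's write authority is left unchanged.
\forall s\ s'.\;
  \mathit{pas\_refined}\; p\; s \;\land\; s \longrightarrow s'
  \;\Longrightarrow\;
  \forall x.\; \lnot\, \mathit{writable}\; p\; \mathit{subject}\; x
    \Longrightarrow \mathit{state\_of}\; s'\; x = \mathit{state\_of}\; s\; x
```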
There isn’t a cookbook as such yet, because there are many ways in which you could use these. However, both the CAmkES component system on top of seL4 and the seL4 Core Platform provide higher-level abstractions that make use of these theorems: they give you components that are isolated from each other in terms of authority confinement and integrity, and that can further be constrained to meet the configuration assumptions of the confidentiality theorem.
One easy high-level guideline is: if, after initialisation, the system no longer contains any untyped capabilities, components (= sets of threads together with the kernel objects they have capabilities to) do not hold capabilities with grant rights, and components do not share capability spaces, then authority cannot travel beyond the current component boundaries, and components can only read from / write to the objects they have capabilities to.
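That guideline is directly checkable on a modelled post-initialisation snapshot; here is a minimal sketch, where the structure and field names (`cspace`, `caps`) are hypothetical, purely to make the conditions concrete, and not an actual seL4 or capDL API:

```python
# Illustrative check of the guideline on a modelled post-init snapshot
# (hypothetical structure, not an seL4 or capDL API):
# - no untyped capabilities remain,
# - no capability carries grant rights,
# - no two components share a capability space (CSpace).

def guideline_holds(components):
    """components: dict mapping name -> {"cspace": id, "caps": [(type, rights)]}."""
    cspaces_seen = set()
    for comp in components.values():
        if comp["cspace"] in cspaces_seen:
            return False                   # shared CSpace
        cspaces_seen.add(comp["cspace"])
        for cap_type, rights in comp["caps"]:
            if cap_type == "untyped":      # untyped capability still present
                return False
            if "grant" in rights:          # grant right still present
                return False
    return True

snapshot = {
    "A": {"cspace": 1, "caps": [("frame", {"read", "write"})]},
    "B": {"cspace": 2, "caps": [("endpoint", {"read"})]},
}
print(guideline_holds(snapshot))  # True: authority stays within these boundaries
```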
I agree that it would be nice to write up some further high-level guidelines along these lines, but so far that has not happened.
Thank you very much, Gerwin.
This is very informative and of great help. I also liked Matthew Brecknell’s talk, so thank you for the reference.
Then it is perfectly “allowed” for a user application to overflow its own buffer (e.g., due to a bug), right?
If I understood correctly, all buffer-related protections apply to the kernel’s memory space and not to user applications; i.e., seL4 does not provide such memory management/protection for applications.
Were these, by any chance, integrated with the AADL-related work you published?
BTW, I liked how you treat the “security policy” (when discussing granting authority) in a way that echoes the definition in NIST SP 800-160 vol. 1.
I wonder how many of the “security policy” aspects we can incorporate and even formalise (e.g., as assumptions), so that the formal proofs will have a clearer interface with prospective users.
Can you please explain what you mean by “capabilities”? Is this formally defined? (And if so, how and where? In Isabelle/HOL?)
I’m interested because I’m actually working on a CHERI-related project (with hardware-enforced “capabilities” used to protect memory regions).