
seL4 is finally free! Can you afford not to use it?

The world’s most highly assured operating system (OS) kernel

… the only one with a formal proof of implementation correctness (freedom from bugs), the only one with a proof of security that extends all the way to the executable code, the only protected-mode OS with sound worst-case execution time (WCET) bounds, is finally free. It is available under the same license as the Linux kernel, the GPL v2. Everyone can use it, and everyone should. And, as I’ll explain, they may, one day, be considered negligent for not using seL4, and such liability could reasonably start with any system that is designed from now on.

Critical systems

Let’s expand on this a little. Let’s assume you, or your company, are building a device or system which has safety or security requirements. There are lots of these, where failure could lead to death or injury, loss of privacy, or loss of money. Many systems that are part of everyday life fall into that category, not only national-security types of systems (but those obviously do too).

Medical implants

Medical implants could kill a patient or fail to keep them alive if they fail. Such systems may fail because their critical functionality (that keeps the patient alive) is compromised by a “non-critical” part that misbehaves (either through a bug triggered during normal operation, or a vulnerability exploited by an attacker). Most implants have such “non-critical” code; in fact, it tends to be the vast majority of the software on the device. Most devices these days have some wireless communication capability, used for monitoring the patient, the state of the device, and maybe even to allow the physician to adjust its operation. On systems not built on seL4, there can be no guarantee that this “non-critical” software is free from dangerous faults, nor can there be a guarantee that its failure cannot cause the critical bits to fail.

Cars

Complex software in a car could misbehave and compromise critical components like engine control, brakes, or air bags. Attacks on cars are well documented. The CAN networks in cars are fundamentally vulnerable, so many of the devices on the bus can be the starting point of an attack. Furthermore, functionalities that used to run on separate ECUs (microprocessors) are increasingly consolidated onto shared processors, creating more potential vulnerabilities. Car hacks clearly have the potential to kill people.

Industrial automation

Industrial automation systems are in many ways – other than scale ;-) – similar to medical implants: some critical functionality (which may or may not be small but is likely to be highly assured) runs beside large, complex software stacks that support monitoring, control, reprogramming etc., and are far too complex for high assurance to be feasible. If such a system misbehaves it may cause millions of dollars worth of damage, or even endanger lives.

Others

There are many other systems where failure is expensive or dangerous. They include systems performing financial transactions (eg point-of-sale systems or ATMs), voting machines, systems holding or providing access to personal data (cards with medical history, etc).

There are many more examples.

Limitations

Of course, seL4 cannot give a complete guarantee of no failure either. The hardware could fail, the system could be ill-designed or wrongly configured, or a few other things could go wrong. But it definitely (for a well-designed and -configured system) makes it impossible for a compromised “non-critical” subsystem to interfere, through a software fault, with the critical part. This is by far the most likely source of error, and seL4 gives you the highest degree of certainty achievable to date that this won’t happen.

So, why would you not want to use seL4 in your next design?

Let’s examine the reasons people usually give.

License

seL4 is published under the GPLv2. This means that any improvements will have to be open-sourced as well, which is a concern to some people.

I claim this is mostly a misconception. Nothing you build on top of seL4 will be affected by the kernel’s license (just as you can build closed-source stuff on Linux). seL4 confines the GPL to the kernel, and our license files have explicit wording (taken from Linux) confirming that.

How about changes to seL4 itself? Reality is that you aren’t likely to make significant changes to seL4. For one, any change will invalidate the proofs. Ok, maybe you don’t care too much about this, as even if it’s not 100% verified, even a slightly modified seL4 system will be massively more dependable than anything else. So maybe you want to do some “improvements” and then keep them to yourself?

As long as those are actual “improvements”, I suspect that you won’t be able to outrun the momentum that’s behind seL4 at present. No-one knows better than us how to improve seL4 even more (despite it already being much better than any comparable system), and we keep doing this. And if you have a real use case that requires some improvements, you’re better off talking to us than trying it on your own.

Or maybe your improvements are really adaptations to proprietary hardware. In most cases that will be a non-issue, as seL4 contains no device drivers (other than a timer and the interrupt-controller driver), so any proprietary stuff isn’t likely to be visible to the kernel. But if you really have to modify the kernel in a way that exposes sensitive IP, then you can always get access under a different license (for a fee). If this is your situation then talk to us.

Processor architecture

Maybe you want to run seL4 on a processor that isn’t presently supported. As long as it’s a supported CPU architecture (ARM and x86) then that isn’t a big deal, and we can help you with the port. If you need it on a different architecture (say Power) then that would involve a fair bit more work. Still, you should talk to us about a port: the cost is likely to be a small fraction of your overall project cost, and will be tiny compared to the potential cost of not using seL4 (more on that below).

Performance

I thought our work in the past 20 years was enough to debunk the “microkernels are slow” or even “protection is too expensive” nonsense. Ok, some microkernels are slow, and, in fact, almost all commercial ones are, especially the so-called high-assurance ones (OKL4 is an exception). But seL4 isn’t. It runs loops around all the crap out there! And OKL4, shipped in billions of power-sensitive mobile devices, has shown that a fast microkernel doesn’t cause a performance problem. Performance is definitely one of the more stupid excuses for not using seL4!

Resource usage

Unless you have an extremely constrained system, seL4 is small (compared to just about everything else). If you’re running on a low-end microcontroller (with not only serious resource constraints but also no memory protection) you should look at our eChronos system instead.

Legacy

Your design has to re-use a lot of legacy software that is dependent on your old OS environment. Whether that’s a real argument probably depends a lot on how well your legacy software is written. If it is essentially unstructured homebrew that doesn’t abstract common functionality into libraries, then you may have a real problem – one that is going to bite you hard sooner or later, whether you migrate to seL4 or not. If that isn’t the case, then there is likely to be a sensible migration path, the cost of which is likely to be dwarfed by the potential cost of not using seL4 (see below).

Environment and support

seL4 is a young system, with a limited support environment in terms of libraries and tools. That is true, but it’s also changing rapidly. We’ll see userland developing quickly. And don’t think that just because it’s open source there isn’t good support. Between NICTA and GD and our partners, we can certainly provide highly professional support for your seL4-based development. Just talk to us.

So, what’s the trade-off?

Well, ask yourself: You’re developing some critical system. Critical in the sense that failure will be very costly, in terms of lost business, lost privacy, loss of life or limb. Imagine you’re developing this based on yesterday’s technology, the sort of stuff others are trying to sell you, and what you may have been using believing (at the time at least) that it was state of the art. Now it ain’t.

So, imagine you are developing this critical system and decide not to use seL4. And, sooner or later, it fails, with the above costly consequences. You may find yourself in court, being sued for many millions, trying to explain why you developed your critical system using yesterday’s technology, rather than the state of the art. I wouldn’t want to be in your shoes. In fact, I might serve as an expert witness to the other side, which argues that you should have known, and that doing what you did was reckless.

Do you want to run that risk? For what benefit?

Think carefully.

Clarification on seL4 media reports

The open-sourcing of seL4 has created a bit of media coverage, generally very positive, good to see.

Some of it, however, is inaccurate and potentially misleading. I’ve commented where I could, but some sites don’t allow comments.

One frequent error is the (explicit or implicit) claim that seL4 was developed for or under the DARPA HACMS program, stated eg at Security Affairs, and seemingly originated at The Register. This is incorrect: seL4 pre-dates HACMS by several years, and was, in fact, a main motivation for HACMS. We are a “performer” under this project, and, together with our partners, are building an seL4-based high-assurance software stack for a drone, please see the SMACCM Project page for details. [Note 31/7/14: The Register has now corrected their article.]

LinuxPro states that DARPA and NICTA released the software; in fact, the release was by General Dynamics (who own seL4) and NICTA. Also, seL4 wasn’t developed for drones, it’s designed as a general-purpose platform suitable for all sorts of security- and safety-critical systems. I for one would like to see it making pacemakers secure; I may need one eventually, and with present technology would feel rather uncomfortable about that.

I’ll keep adding if I find more…

Sense and Nonsense of Conference Rankings

The CORE Rankings

Some may know that a few years ago, the Australian Research Council (ARC) had a ranking of publication outlets produced. For computer science, the exercise was outsourced to CORE, the association of Australian and NZ CS departments (Oz equivalent of CRA). It categorised conferences (and journals) into A*, A, B and C venues.

I have in the past stated what I think of that list: very little. In short, I think it’s highly compromised and an embarrassment for Australian computer science. And I’m outright appalled when I see that other countries are adopting the “CORE Rankings”!

The ARC disendorsed the rankings in 2011. Yet, in 2013, CORE decided to maintain and update the list. I argued that updating with a process similar to the original one will not improve the list.

The Fellows Letter

Now, some senior colleagues (Fellows of the Australian Academy of Science) have written an open letter, denouncing not only the CORE list, but basically any use of publication venues as an indicator of research quality.

The letter was, apparently, written by Prof Bob Williamson from the ANU, a fellow group leader at NICTA. Bob is a guy I have a lot of respect for, and we rarely disagree. Here we do completely. I also highly respect the other Fellows (one of them is my boss).

The Fellows essentially argue (with more clarification by Bob) that looking at where a person has published is useless, and the right way to judge a researcher’s work is to read their papers.

What I think

With all respect, I think this is just plain nonsense:

  1. These rankings exist, like it or not. In fact, we all use them all the time. (Ok, I cannot prove that the “all” bit is strictly true; some, like Bob, may not, but the rest of us do.) When I look at a CV, the first thing I look for is where they published. And I claim that is what most people do. And I claim it makes sense.

    Fact is that we know what the “good” venues are in our respective disciplines. This is where we send our papers, this is where we tell our students and ECRs they need to get their papers accepted. They are the yardsticks of the community; like it or not, it is where you publish to have impact. Publishing in the right venues leads to high citations, publishing in the wrong ones doesn’t.

    Of course, we really only understand the venues in our own sub-disciplines, and maybe a few neighbouring ones. So, collecting and documenting these top venues across all of CS isn’t a bad thing, it creates clarity.

  2. The idea that someone can judge a person’s work simply by reading some of their papers (even the self-selected best ones), with respect, borders on arrogance. In effect, what this is saying is that someone from a different sub-discipline can judge what is good/significant/relevant work!

    If this were true, then we as a community could reduce our workload drastically: We’d stop having conference PCs where everyone has to read 30 papers, and every paper gets at least half a dozen reviews before being accepted (as at OSDI, where I’m presently struggling to get all my reviews done). Instead, every conference would simply convene a handful of Bobs, divide the submissions between them, and each would decide which of their share of the papers should be accepted.

    Of course, things don’t work like this, for good reasons. I’ve served on enough top-tier conference PCs to have experienced plenty of cases where the reviews of discipline experts diverge drastically on multiple papers. In my present OSDI stack of 29 papers this is true for about 35% of papers: 10 papers have at least one clear reject and one clear accept! And it is the reason why each paper gets at least 6 reviews: we get the full spectrum, and then at the PC meeting work out who’s right and who’s wrong. The result is still imperfect, but vastly superior to relying on a single opinion.

    Now these reviewers are the discipline experts (in this case, leading researchers in “systems”, incorporating mostly operating systems and distributed systems). If you get such a diversity of opinions within such a relatively narrow subdiscipline, how much would you get across all of computer science? I certainly would not claim to be able to judge the quality of a paper in 80% of computer science, and if someone thinks they can, then my respect for them takes a serious hit.

    In summary, I think the idea that someone, even one of the brightest computer scientists, can judge an arbitrary CS paper for its significance is simply indefensible. An expert PC of a top conference accepting a paper has far more significance than the opinion of a discipline outsider, even a bright one!

  3. Of course, that doesn’t justify using the publication outlets as the only criterion for promotion/hiring or anything else. That’s why we do interviews, request letters etc. Also, I definitely believe that citations are a better metric (still imperfect). But citations are a useless measure for fresh PhDs, and mostly of not much use for ECRs.

  4. Nor do I want to defend the present CORE list in any way. I said that before, but I’m repeating for completeness: the present CORE list is the result of an utterly broken process, is completely compromised, and an embarrassment for Australian computer science. And any attempt to fix it by using the existing process (or some minor variant of it) is not going to fix this. The list must either be abandoned or re-done from scratch, using a sound, robust and transparent process.

  5. My arguments are only about top venues. A track record of publishing in those means something, and identifying across all of CS what those top venues are has a value. By the same token I believe trying to categorise further (i.e. B- and C-grade venues, as done in the CORE list) is a complete waste of time. Publishing in such venues means nothing (other than positively establishing that someone has low standards). So, if we bother to have a list, it should only be a list of discipline top venues, nothing more.

Closing the gap: Real OS security finally in reach!

A few years ago we formally verified our seL4 microkernel down to C code. That proof, unprecedented and powerful as it is, didn’t actually establish the security (or safety) of seL4. But the latest achievements of NICTA’s Trustworthy Systems team actually do provide a proof of security.

What the original seL4 verification achieved was a proof that the kernel’s C code correctly implements its specification. The specification is a formal (mathematical) model of the kernel, and the proof showed that the C implementation is “correct”, in the sense that all possible behaviours, according to a semantics of C, are allowed by the spec.

This is both very powerful and limited. It’s powerful in the sense that it completely rules out any programming errors (aka bugs). It’s still limited in three important respects:

  1. Ok, the code adheres to the spec, but who says the spec is any good? How do you know it’s “right”, i.e. that it does what you expect it to do, and that it is “secure” in any real sense?
  2. The C standard is not a formal specification, and is, in fact, ambiguous in many places (the toy example after this list gives a flavour of this). In order to conduct the proof, we had to pick a subset which is unambiguous, and we proved that our kernel implementation stays within that subset. However, we have no guarantee that the compiler, even on our subset, assumes the same semantics!
  3. Everyone knows compilers are buggy. So, even if we agree with the compiler about the semantics of our C subset, we can still lose if the compiler produces incorrect code.
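
To get a feel for the second limitation, here is a toy example (my own, not taken from the verification work) of the kind of ambiguity the C standard permits; the verified code has to stay within a subset where such cases cannot bite:

```c
/* Toy example (not seL4 code): the C standard leaves the evaluation
 * order of the two operands unspecified, so if f() and g() have side
 * effects, two conforming compilers may legitimately produce programs
 * with different behaviour. */
int f(void);
int g(void);

int sum(void)
{
    return f() + g();   /* which call runs first? The standard doesn't say. */
}
```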

Two sets of recent results remove these limitations. One set of work addresses the first limitation, and another the remaining two.

Integrity and confidentiality proofs

What does it mean for the kernel spec to be “good”? It means that the kernel can be used to build the kind of systems we want to build, with the right properties. The specific properties we are after are the classical security properties: Integrity, confidentiality and availability. (For safety a further property is required, timeliness. seL4 has this too, but I’ll discuss this another time.)

Availability is actually a non-issue in seL4: the kernel’s approach to resource management is not to do any, but to leave it to user-level code. The exact model is beyond the level of technicality of this blog; I refer the interested reader to the paper describing the general idea. The upshot is that denial of kernel services is impossible by construction. (This does not mean that availability is necessarily easy to enforce in seL4-based systems, only that you can’t DOS the kernel, or a userland partition segregated by correct use of kernel mechanisms.)

Integrity, at the level of the microkernel, means that the kernel will never allow you to perform a store (write) operation to a piece of memory for which you haven’t explicitly been given write privilege; including when the kernel acts on behalf of user code. The NICTA team proved this safety property for seL4 about two years ago (see the integrity paper for details). In a nutshell, the proof shows that whenever the kernel operates, it will only write to memory if it has been provided with a write “capability” to the respective memory region, and it will only make a region accessible for writing by user code (eg by mapping it into an address space) if provided with a write capability.
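
As a rough illustration of the shape of this guarantee, here is a sketch in C (entirely my own; the struct, field names and rights encoding are invented and far simpler than seL4’s real capability model):

```c
/* Illustrative sketch only -- not seL4 code or its API. */
typedef struct {
    unsigned long base, size;   /* memory region the capability covers */
    unsigned int  rights;       /* access-rights bitmask               */
} cap_t;

#define CAP_WRITE 0x2u          /* hypothetical write-right bit */

/* Integrity: the kernel performs a write on behalf of a user only if
 * the presented capability covers the address and grants write access. */
static int kernel_write_word(const cap_t *cap, unsigned long addr,
                             unsigned long value)
{
    if (!(cap->rights & CAP_WRITE) ||
        addr < cap->base ||
        addr + sizeof(unsigned long) > cap->base + cap->size)
        return -1;              /* no authority: no write happens */

    *(volatile unsigned long *)addr = value;
    return 0;
}
```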

Confidentiality is the dual to integrity: it’s about read accesses. The figure at the left illustrates this, by showing allowed (black) and forbidden (red) read accesses. However, it turns out that confidentiality is harder to prove than integrity, for two reasons: 1) a violation of confidentiality is unobservable in the state of the entity whose confidentiality has been violated; 2) confidentiality can be violated not only by explicit reads (loads or instruction fetches) but also by side channels, which leak information by means outside the system’s protection model.

A NICTA team led by researcher Toby Murray has just recently succeeded in proving that seL4 protects confidentiality. They did this by proving what is called “non-interference”. The details are highly technical, but the basic idea is as follows: Assume there is a process A in the system, which holds a secret X. Then they look at arbitrary execution traces of an arbitrary second process B. They proved that if the initial states of any two such traces are identical except for the value of X, then the two traces must be indistinguishable to B. This implies that B cannot infer X, and thus A’s confidentiality is maintained. The technical paper on this work will be presented at next month’s IEEE “Oakland” security conference.
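
Schematically, and using my own notation rather than the paper’s, the proved statement has the following shape, where s ~B s' means that states s and s' look identical to B:

```latex
% Non-interference, schematically (notation mine, not the paper's):
% if two system states differ only in data B may not observe (such as
% A's secret X), then they remain indistinguishable to B after any
% number of execution steps.
\[
  \forall s, s' .\;
    s \sim_B s'
    \;\Longrightarrow\;
    \forall n .\; \mathit{step}^{\,n}(s) \sim_B \mathit{step}^{\,n}(s')
\]
```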

Binary verification

In order to remove from our trusted computing base the C compiler, as well as our assumptions on the exact semantics of C, the team proved that the kernel binary (i.e., the output of the compilation and linking process) is consistent with the formal model of the kernel implementation. This formal model is actually what had earlier been proved to be correct with respect to the kernel spec (it had been obtained by formalising the C code using the formal C semantics). By verifying the binary against the formal model, the formalisation process (and thus our assumptions on the C semantics) are taken out of the trusted computing base.

[Figure: Binary verification workflow and proof chain]

The verification of the binary went in multiple phases, as indicated by the figure on the right. First, the binary was formalised. This was done with the help of a formal model of the ARMv6/v7 architecture. That formalisation of the architecture had previously been done at Cambridge University, in close collaboration with ARM, and is extensively validated. It’s considered to be the best and most dependable formalisation of any mainstream processor.

The formalised binary, which is now a complex expression in mathematical logic inside the theorem prover HOL4, was then “de-compiled” into a more high-level functional form (the left box called “function code” in the figure). This translation is completely done in the theorem prover, and proved correct. Similarly, the formal kernel model is transformed, using a set of “rewrite rules” which perform a similar operation as the compiler does, into a functional form at a similar abstraction level as the output of the above de-compilation. Again, this operation is done in the theorem prover, by application of mathematical logic, and is proved to be correct.

We now have two similar representations of the code, one obtained by de-compilation of the binary, the other by a partial compilation process of the formalised C. What is left is to show their equivalence. Readers with a CS background will now say “but that’s equivalent to the halting problem, and therefore impossible!” In fact, showing the equivalence of two arbitrary programs is impossible. However, we aren’t dealing with two arbitrary programs, but with two different representations of the same program (modulo compiler bugs and semantics mismatch). The operations performed by a compiler are well-understood, and the re-writing attempts to retrace them. As long as the compiler doesn’t change the logic too much (which it might do at high optimisation levels) there is hope we can show the equivalence.

And indeed, it worked. The approach chosen by Tom Sewell, the PhD student who did most of the work, was to compare the two programs chunk-by-chunk. For each chunk he used essentially a brute-force approach, by letting an SMT solver loose on it. Again, some trickery was needed, especially to deal with compiler optimisations of loops, but in the end, Tom succeeded. Details will be presented at the PLDI conference in June.
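
To give a flavour of what such a chunk-level query looks like, here is a toy example (mine, not from the paper): the two functions below compute the same value for every unsigned input, and proving or refuting that equivalence is exactly the kind of question handed to the SMT solver.

```c
/* Two representations of the same loop-free "chunk": one as written in
 * the C source, one as a compiler might have rearranged it.  The solver
 * either proves  forall x: chunk_source(x) == chunk_compiled(x)
 * or returns a concrete counterexample input. */
unsigned chunk_source(unsigned x)   { return 2u * x + 4u; }
unsigned chunk_compiled(unsigned x) { return (x + 2u) << 1; }
```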

The resulting tool is fairly general, in that it is not in any way specific to seL4 code, although it has limitations which are ok for seL4 but might be show-stoppers for other code. Specifically, the tools can’t deal with nested loops (but seL4 doesn’t have any, nor should any microkernel!) At present, the tools grok the complete seL4 binary when compiled by gcc with the -O1 optimisation flag. We normally compile the kernel with -O2, and this introduces a few optimisations which presently defeat the tool, but we’re hopeful this will be fixed too.

This is actually quite a significant result. You may have heard of Ken Thompson’s famous Turing Award Lecture “Reflections on trusting trust”, where he describes how to hack a compiler so it will automatically build a back door into the OS. With this tool, seL4 is no longer susceptible to this hack! If the compiler had built in a back door, the tool would fail to verify the correctness of the binary. But regular compiler bugs would be found as well, of course.

Where are we now?

We now have: 1) Proofs that our abstract kernel spec provides integrity and confidentiality; 2) A proof that the formalised C code is a correct implementation of the kernel spec; and 3) a proof that the kernel binary is a correct translation of the formalised C code. Together this means that we have a proof that the kernel binary has the right security properties of enforcing confidentiality and integrity!

There are still some limitations, though. For example, we haven’t yet verified the kernel’s initialisation (boot-strap) code; this is work in progress (and is nearing completion). There are also bits of unverified assembler code in the kernel, and we don’t yet model the operation of the MMU in detail. So there are still some corners in which bugs could, in theory, be hiding, but we’re getting closer to sweeping out the last refuges of potential bugs.

A more fundamental issue regards the side channels mentioned above. These are traditionally classified into two categories: storage channels and timing channels. The difference is essentially whether a common notion of time is required to exploit them.

In principle our non-interference proof covers storage channels, up to the point to which hardware state that could constitute a channel is exposed in the hardware model that is used in the proof. Presently that model is fairly high-level, and therefore might hide potential storage channels. It does cover things like general-purpose CPU registers, so we know that the kernel doesn’t forget to scrub those in a context switch, but modern CPUs have more complex state that could potentially create a channel. Nevertheless, the approach can be adapted to an arbitrarily detailed model of the hardware, which would indeed eliminate all storage channels.

Timing channels are a different matter. The general consensus is that it is impossible to close all timing channels, so people focus on limiting their bandwidth. So far our proofs have no concept of time, and therefore cannot cover timing channels. However, we do have some promising research in that area. I’ll talk about that another time.

When “persistent” storage isn’t

A couple of weeks ago we published our RapiLog paper. It describes how to leverage formal verification to perform logically synchronous disk writes at the performance of asynchronous I/O (see the previous blog on the implications of this work).

A frequent comment from reviewers of the paper was: “why don’t you just use an SSD and get better performance without any extra work?”

My answer generally was “SSDs are more expensive [in terms of $/GiB], and that’s not going to change any time soon.” In fact, the RapiLog paper shows that we match SSD performance using normal disks.

What I didn’t actually know at the time is that, unlike disks, SSDs tend to be really crappy as “persistent” storage media. A paper by Zheng et al. that appeared at the Usenix FAST conference in February studies the behaviour of a wide range of different SSDs when electrical power is cut – exactly the scenario we cover with RapiLog. They find that of the 15 different SSD models tested, 13 exhibit data loss or corruption when power is lost! In other words: SSDs are a highly unreliable storage medium when there is a chance of power loss. They also tested normal disks, and found out that cheap disks aren’t completely reliable either.

They did their best to stress their storage devices, by using concurrent random writes. In RapiLog, we ensure that writes are almost completely sequential, which helps to de-stress the media. And, of course, we give the device enough time to write its data before the power goes.

I quote from the summary of Zheng et al.: “We recommend system builders either not use SSDs for important information that needs to be durable or that they test their actual SSD models carefully under actual power failures beforehand. Failure to do so risks massive data loss.”

For me, the take-away is that putting the database log on an SSD undermines durability just as much as running the database on a normal disk in unsafe mode (asynchronous logging or write cache enabled) does. In contrast, using RapiLog with a normal disk gives you the performance of an SSD with the durability of a normal disk. Why would you want to use anything else?

Green computing through formal verification?

This may seem like a long bow to draw, and I admit there’s a bit of marketing in the title, but there’s substance behind it: Verification can save energy in data centres, by significantly speeding up transaction processing.

How does that work?

The key is the defensive way many systems are built to cope with failure. Specifically database systems, which are behind e-commerce and many other web sites, are designed to protect against data loss in the case of system crashes. This is important: if you do a transaction over the web, you want to make sure that you aren’t charged twice, that if they charge you, they are actually going to deliver, etc. That wouldn’t be ensured (even if the vendor is trustworthy) if the server crashed in the middle of the transaction.

Databases avoid losses by ensuring that transactions (which is now a technical term) have the so-called ACID properties of atomicity, consistency, isolation and durability. The ones relevant to this discussion are atomicity (meaning that a transaction is always either completed in totality or not at all) and durability (meaning that once a transaction is completed successfully, its results cannot be lost). Databases achieve atomicity by operating (at least logically) on a temporary copy of the data and making all changes visible at once, without ever exposing intermediate state.

Durability is generally ensured by write-ahead logging: The database appends the intended effect of the transaction to a log, which is kept in persistent storage (disk), before letting the transaction complete and making its results visible to other transactions. If the system (the database itself or the underlying operating system) crashes halfway through, the database at restart goes through a recovery process. It will replay operations from the log, as long as the log records a complete transaction, and discards incomplete transaction logs. This ensures that only complete transaction effects are visible (atomicity).

Durability is only ensured if it can be guaranteed that the complete log entry of any completed transaction can be recovered. This requires that the database only continues transaction processing once it is certain that the complete transaction log is actually safely recorded on the disk.

This certainty does not normally exist for write operations to disk. Because disks are very slow (compared to RAM), the operating system does its best to hide the latency of writes. It caches writes to files in buffers in RAM (the so-called block cache), and performs the physical writes asynchronously, that is, it overlaps them with continuing execution. This is fine, except if you want to ensure durability of your transactions!

Databases therefore perform log writes synchronously. They use an OS system call called “sync()” (to be precise, it’s “fsync()” in Unix/Linux systems). This call simply blocks the caller until any pending writes on the respective file have finished. The database then knows that the data are on disk, and can proceed, knowing that durability is maintained.
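
In code, a synchronous log append looks roughly like the following (a minimal sketch using standard POSIX calls; the function and its arguments are made up for illustration):

```c
#include <unistd.h>

/* Append a log record and wait until it has reached stable storage.
 * Only after fsync() returns may the transaction be acknowledged as
 * committed -- this blocking wait is the latency cost discussed above. */
int append_log_record(int log_fd, const void *rec, size_t len)
{
    if (write(log_fd, rec, len) != (ssize_t)len)
        return -1;              /* short write or I/O error */
    if (fsync(log_fd) != 0)
        return -1;              /* data may not be durable  */
    return 0;                   /* record is on disk: safe to commit */
}
```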

What’s the problem with this?

There’s a significant cost, in terms of latency. Disk writes are slow, and the database has to wait until completion. In some cases, it can spend the majority of the time waiting and twiddling thumbs.

What does formal verification have to do with this?

If the underlying operating system is guaranteed not to crash, then we don’t need to block. The database can continue processing, and the OS does the write asynchronously.

This is essentially the thought I had while sitting in a database seminar a few years ago. I was thinking “you guys go through such a lot of trouble to ensure you don’t lose data. We could do that so much easier!”

In fact, that’s what we’re doing now, in a system we call RapiLog. We’re using the stability guarantees of our formally verified seL4 microkernel to get the best of both worlds: the performance advantage of asynchronous I/O, and the durability guarantees of synchronous writes. And does it ever work! We’re observing a doubling of database throughput under the right circumstances!

And increased throughput means that you can process more requests with the same hardware. This means that your data centre can be smaller, which not only reduces capital cost, but also running cost, most of which is the cost of electricity for powering the computers and for air conditioning. That’s the link to green computing.

And the cool thing is that we can do this with unmodified legacy systems: We run an unmodified database binary (including commercial databases where we have no source) on an unmodified Linux system. We just put it into a virtual machine and let an seL4-based “virtual disk” do all the magic.

How about power blackouts?

Good question! What happens if mains power suddenly goes down?

For the traditional database setup that isn’t a problem. Either the transaction is recorded in a log entry that is safely on disk when power goes, in which case the transaction is durable and will be recovered at restart time. Or there is no (complete) log entry, in which case the transaction was never completed, and is discarded at restart. But with seL4?

Turns out we can deal with that too. Computer power supplies have capacitors which store some electrical energy. When mains power is cut, that energy can run the computer for a bit. Not a long time, only some 100 milliseconds. But that is plenty of time to flush out any remaining log data to the disk. Turns out that all we need is about 20 milliseconds, which is very comfortably inside the window given by the capacitors. All we need is an immediate warning (in the form of an interrupt raised by the hardware).
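
A back-of-the-envelope check (the numbers below are illustrative assumptions, not measurements from the paper) shows why the window is comfortable:

```c
#include <stdio.h>

int main(void)
{
    /* Assumed figures, for illustration only. */
    const double buffered_mib   = 2.0;    /* dirty log data still in RAM  */
    const double seq_write_mibs = 100.0;  /* sequential disk bandwidth    */
    const double holdup_ms      = 100.0;  /* energy left in the PSU caps  */

    double flush_ms = buffered_mib / seq_write_mibs * 1000.0;
    printf("flush takes ~%.0f ms, window is ~%.0f ms\n", flush_ms, holdup_ms);
    return 0;
}
```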

Implications for system complexity

Maybe even more important than the performance improvement is the potential for reducing system complexity. Complex systems are expensive (in terms of development and maintenance). They are also error prone, meaning more likely to fail. Reducing complexity is always a good thing. And we have shown that some of the complexity (in this case in database systems) can be avoided if we can provide guarantees about the reliability of the software. Which is what our Trustworthy Systems work is all about.

Want to know more?

Read the paper. It will be presented at next month’s EuroSys Conference in Prague.

Giving it away? Part 2: On microkernels and the national interest

In my previous blog I addressed a number of misconceptions which were contained in Nick Falkner’s blog on the OK Labs sale, and the newspaper article it was based on.

Note that, as far as the newspaper article reported facts, it got them essentially right. However, it drew the wrong conclusions, based on an incorrect understanding of the situation and the realities of commercialisation, and these incorrect conclusions triggered Nick’s blog. My previous blog addressed issues around commercialisation of ICT IP. Now I’d like to address some of the specifics of the OK Labs situation, and the NICTA IP involved.

Before delving deeper, I must say that there are severe limits to what I can reveal. I was a director of OK Labs, and as such bound by law to confidentiality with regard to information which I obtained as a director. Beyond that there are confidentiality clauses affecting the main shareholders (which includes myself as well as NICTA). I also was an employee of OK Labs from early 2007 until mid 2010. Essentially I have to restrict my comments to what’s on the public record or was known to me before the respective agreements were signed.

A tale of three microkernels and four names

First there is the issue of the three kernels, the first one appearing under two names, which continues to create confusion, even though the facts were essentially correctly reported in the newspaper article.

L4

Before OK Labs there was NICTA’s version of the L4 microkernel. This was an evolution of the open-source Pistachio microkernel, originally mostly developed at Karlsruhe University in Germany. We had ported it to a number of architectures, including ARM, had optimised it for use in resource-constrained embedded systems, and had designed and implemented a really cool way of doing context switches really fast (a factor of 50 faster than Linux). We had also ported Linux to run on top (i.e. used L4 as a hypervisor to support a virtualised Linux). Thanks to the fast context-switching technology, that virtualised Linux ran faster than native.

As I said, this microkernel started off as open-source (BSD license), and remained open-source. While the BSD license would have allowed us to fork a closed-source version (while acknowledging the original authors) this would have been a stupid thing to do. We wanted our research outcomes to be used as widely as possible.

Qualcomm

In 2004, that L4 microkernel caught the attention of Qualcomm. They had two specific (and quite different) technical problems for which they were looking for solutions. One required a fast, real-time capable kernel with memory protection. The other required virtualization of Linux on ARM. Our L4 provided both, and nothing else out there came close.

Qualcomm engaged NICTA in a consulting arrangement to help them deploy L4 on their wireless communication chips. The initial evaluations and prototyping went well, and they decided to use L4 as the basis of their firmware.

This was all before OK Labs was founded. In fact, at the time we created OK Labs, the first phones with L4 inside were already shipping in Japan! And all based on the open-source kernel.

OKL4 microkernel

The engagement with Qualcomm grew to a volume where it was too significant a development/engineering effort to be done inside the research organisation. In fact, the consulting revenue started to threaten NICTA’s tax-free status! Furthermore, we saw a commercial opportunity which required taking business risks, something you can’t do with taxpayer $$. This is why we decided to spin the activity out as OK Labs. OK Labs marketed L4 under the name “OKL4 microkernel”, and continued its development into a commercial-grade platform.

OK Labs initially operated as a services business, serving Qualcomm, but also other customers. Note that they didn’t even need NICTA’s permission to do this: they took an open-source release and supported it. Anyone could have done this (but, of course, the people who had created the technology in the first place were best placed for it). Among others, this meant that there was never any question of royalties to NICTA.

Also, it is important to note that Qualcomm would almost certainly not have adopted L4 if it wasn’t open source. Their style is to do stuff in-house, and it would have been their natural approach to just re-do L4. The engagement with us was unusual for them, but it led to NICTA technology being deployed in over 1.5 billion devices.

OKL4 Microvisor

OK Labs later decided to become a product company, and seek VC investment to enable this. They developed their own product, the OKL4 Microvisor. This is the successor of the OKL4 microkernel, and was developed by OK Labs from scratch; NICTA (or anyone else) has no claim to it. It is licensed (and is shipping) on a royalty basis, which is exactly what you expect from a product company.

seL4

Then there is the third microkernel, seL4. This was developed from scratch by NICTA, and its implementation mathematically proved correct with respect to a specification.

[Figure: International newspaper clips reporting on the correctness proof of seL4]

The correctness proof was the big-news event that made headlines around the world. It is truly groundbreaking, but primarily as a scientific achievement: something people had tried since the ’70s and later put into the too-hard basket. But, as per my atomic-bomb metaphor in the previous blog, once people know it’s possible they can figure out how to do it themselves. Particularly since we had published the basics of the approach (after all, doing research is NICTA’s prime job, and it’s not research if it isn’t published). And it’s seL4’s development (and all the stuff that made its verification possible) that took 25 person years. This is the effort behind the biggest ICT research success that came out of Australia in a long time. It’s fair to say that this has put NICTA on the map internationally.

Commercialising seL4

seL4 is nevertheless something that can be turned into an exciting product, but that needs work. As argued in the previous blog, that’s not something you do in a research lab, it’s company business. That’s why NICTA needed a commercialisation channel.

The way they decided to do it was to license seL4 exclusively to OK Labs, with a buy-out option (i.e. the option to acquire the IP outright) on achieving certain milestones (for the reasons explained in the previous blog). In exchange, NICTA took equity (i.e. a shareholding) in OK Labs, as a way to get returns back if commercialisation succeeded. Using OK Labs as the commercialisation vehicle was an obvious choice: Firstly, OK Labs was developing the market and distribution for this kind of technology. Secondly, OK Labs does all its engineering in Australia, and any alternative would have been overseas. A reasonable deal.

How about national benefit?

The (more or less clearly stated) implication from the commentators that NICTA made a mistake is totally unfounded. And that should not come as a surprise: the people involved in the decision knew what they were doing. The director of the NICTA Lab where the work happened was Dr Terry Percival. He happens to be the person whose name is on the much-lauded CSIRO wifi patent! And NICTA’s CEO at the time was Dr David Skellern. He was the co-founder of Radiata, which implemented CSIRO’s wifi invention in hardware and was sold for big bucks to Cisco! Those guys knew a bit about how to commercialise IP!

There are comments about the “non-discussion of how much money changed hands”. Well, that happens to be part of the stuff I can’t talk about, for the reasons listed at the beginning.

Also, national benefits aren’t simply measured in money returned to NICTA. There are other benefits here. For one, there is a publicly stated intent by OK Labs’ purchaser, General Dynamics (GD), to not only maintain but actually expand the engineering operation in Australia. Also, one thing we learn is that technology like seL4 isn’t trivial to commercialise, it requires a big investment. GD has the resources to do this, and is active in the right markets, so has the distribution channels. Finally, there is a lot of on-going research in NICTA which builds on seL4, and is building other pieces which will be required to make best use of seL4. NICTA owns all this, and is certain to stay in the loop. Furthermore, we now have a lot of credibility in the high-security and safety-critical space. This has already shown tangible outcomes, some of which will be announced in the next few weeks.

Did we make all the best decisions? This is hard to say even with the benefit of hindsight. We certainly made good decisions based on the data at hand. The only sensible alternative (both then and with the benefit of hindsight) would have been to open-source seL4, as we had done with L4 earlier. This might or might not have been the best for maximising our long-term impact and national benefit.

We can’t really tell, but what we do know is that we’re in a very strong position of being able to do more high-impact research. In fact, we have already changed the way people think about security/safety-critical systems, and we are likely to completely change the way future such systems will be designed, implemented and verified.
