Sense and Nonsense of Conference Rankings

The CORE Rankings

Some may know that a few years ago, the Australian Research Council (ARC) had a ranking of publication outlets produced. For computer science, the exercise was outsourced to CORE, the association of Australian and NZ CS departments (Oz equivalent of CRA). It categorised conferences (and journals) into A*, A, B and C venues.

I have in the past stated what I think of that list: very little. In short, I think it’s highly compromised and an embarrassment for Australian computer science. And I’m outright appalled when I see that other countries are adopting the “CORE Rankings”!

The ARC disendorsed the rankings in 2011. Yet, in 2013, CORE decided to maintain and update it. I argued that updating it with a process similar to the original one would not improve the list.

The Fellows Letter

Now, some senior colleagues (Fellows of the Australian Academy of Science) have written an open letter, denouncing not only the CORE list, but basically any use of publication venues as an indicator of research quality.

The letter was, apparently, written by Prof Bob Williamson from the ANU, a fellow group leader at NICTA. Bob is a guy I have a lot of respect for, and we rarely disagree. Here we disagree completely. I also highly respect the other Fellows (one of them is my boss).

The Fellows essentially argue (with more clarification by Bob) that looking at where a person has published is useless, and the right way to judge a researcher’s work is to read their papers.

What I think

With all respect, I think this is just plain nonsense:

  1. These rankings exist, like it or not. In fact, we all use them all the time. (Ok, I cannot prove that the “all” bit is strictly true; some, like Bob, may not, but the rest of us do.) When I look at a CV, the first thing I look for is where they published. And I claim that is what most people do. And I claim it makes sense.

    Fact is that we know what the “good” venues are in our respective disciplines. This is where we send our papers, this is where we tell our students and ECRs they need to get their papers accepted. They are the yardsticks of the community; like it or not, it is where you publish to have impact. Publishing in the right venues leads to high citations, publishing in the wrong ones doesn’t.

    Of course, we really only understand the venues in our own sub-disciplines, and maybe a few neighbouring ones. So, collecting and documenting these top venues across all of CS isn’t a bad thing; it creates clarity.

  2. The idea that someone can judge a person’s work simply by reading some of their papers (even the self-selected best ones), with respect, borders on arrogance. In effect, what this is saying is that someone from a different sub-discipline can judge what is good/significant/relevant work!

    If this were true, then we as a community could reduce our workload drastically: We’d stop having conference PCs where everyone has to read 30 papers, and every paper gets at least half a dozen reviews before being accepted (as at OSDI, where I’m presently struggling to get all my reviews done). Instead, every conference would simply convene a handful of Bobs, divide the submissions between them, and each would decide which of their share of the papers should be accepted.

    Of course, things don’t work like this, for good reasons. I’ve served on enough top-tier conference PCs to have experienced plenty of cases where the reviews of discipline experts diverge drastically on multiple papers. In my present OSDI stack of 29 papers this is true for about 35% of papers: 10 papers have at least one clear reject and one clear accept! And it is the reason why each paper gets at least 6 reviews: we get the full spectrum, and then at the PC meeting work out who’s right and who’s wrong. The result is still imperfect, but vastly superior to relying on a single opinion.

    Now these reviewers are the discipline experts (in this case, leading researchers in “systems”, incorporating mostly operating systems and distributed systems). If you get such a diversity of opinions within such a relatively narrow subdiscipline, how much more would you get across all of computer science? I certainly would not claim to be able to judge the quality of a paper in 80% of computer science, and if someone thinks they can, then my respect for them takes a serious hit.

    In summary, I think the idea that someone, even one of the brightest computer scientists, can judge an arbitrary CS paper for its significance is simply indefensible. An expert PC of a top conference accepting a paper has far more significance than the opinion of a discipline outsider, even a bright one!

  3. Of course, that doesn’t justify using the publication outlets as the only criterion for promotion/hiring or anything else. That’s why we do interviews, request letters etc. Also, I definitely believe that citations are a better metric (still imperfect). But citations are a useless measure for fresh PhDs, and mostly not of much use for ECRs.

  4. Nor do I want to defend the present CORE list in any way. I said that before, but I’m repeating for completeness: the present CORE list is the result of an utterly broken process, is completely compromised, and an embarrassment for Australian computer science. And any attempt to fix it by using the existing process (or some minor variant of it) is not going to fix this. The list must either be abandoned or re-done from scratch, using a sound, robust and transparent process.

  5. My arguments are only about top venues. A track record of publishing in those means something, and identifying across all of CS what those top venues are has a value. By the same token I believe trying to categorise further (i.e. B- and C-grade venues, as done in the CORE list) is a complete waste of time. Publishing in such venues means nothing (other than positively establishing that someone has low standards). So, if we bother to have a list, it should only be a list of discipline top venues, nothing more.

Closing the gap: Real OS security finally in reach!

A few years ago we formally verified our seL4 microkernel down to C code. That proof, unprecedented and powerful as it is, didn’t actually establish the security (or safety) of seL4. But the latest achievements of NICTA’s Trustworthy Systems team actually do provide a proof of security.

What the original seL4 verification achieved was a proof that the kernel’s C code correctly implements its specification. The specification is a formal (mathematical) model of the kernel, and the proof showed that the C implementation is “correct”, in the sense that all possible behaviours, according to a semantics of C, are allowed by the spec.

This is both very powerful and limited. It’s powerful in the sense that it completely rules out any programming errors (aka bugs). It’s still limited in three important respects:

  1. Ok, the code adheres to the spec, but who says the spec is any good? How do you know it’s “right”, i.e. that it does what you expect it to, and that it is “secure” in any real sense?
  2. The C standard is not a formal specification, and is, in fact, ambiguous in many places. In order to conduct the proof, we had to pick a subset which is unambiguous, and we proved that our kernel implementation stays within that subset. However, we have no guarantee that the compiler, even on our subset, assumes the same semantics!
  3. Everyone knows compilers are buggy. So, even if we agree with the compiler about the semantics of our C subset, we can still lose if the compiler produces incorrect code.

Two sets of recent results remove these limitations. One set of work addresses the first limitation, and another the remaining two.

Integrity and confidentiality proofs

What does it mean for the kernel spec to be “good”? It means that the kernel can be used to build the kind of systems we want to build, with the right properties. The specific properties we are after are the classical security properties: Integrity, confidentiality and availability. (For safety a further property is required, timeliness. seL4 has this too, but I’ll discuss this another time.)

Availability is actually a non-issue in seL4: the kernel’s approach to resource management is not to do any, but to leave it to user-level code. The exact model is beyond the level of technicality of this blog, I refer the interested reader to the paper describing the general idea. The upshot is that denial of kernel services is impossible by construction. (This does not mean that availability is necessarily easy to enforce in seL4-based systems, only that you can’t DOS the kernel, or a userland partition segregated by correct use of kernel mechanisms.)

Integrity, at the level of the microkernel, means that the kernel will never allow you to perform a store (write) operation to a piece of memory for which you haven’t explicitly been given write privilege; including when the kernel acts on behalf of user code. The NICTA team proved this safety property for seL4 about two years ago (see the integrity paper for details). In a nutshell, the proof shows that whenever the kernel operates, it will only write to memory if it has been provided with a write “capability” to the respective memory region, and it will only make a region accessible for writing by user code (e.g. by mapping it into an address space) if provided with a write capability.
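
To give a flavour of that invariant, here is a toy sketch in Python. It is purely illustrative, not seL4 code, and the names (Capability, has_write_cap, kernel_write) are made up for the example:

    # Toy model of the integrity invariant: a store only happens if the
    # invoking thread holds a write capability for the target address.
    # Purely illustrative; not seL4 code.
    class Capability:
        def __init__(self, region, rights):
            self.region = region      # (start, end) address range
            self.rights = rights      # e.g. {"read", "write"}

    def has_write_cap(caps, addr):
        """True iff some held capability grants write access to addr."""
        return any(c.region[0] <= addr < c.region[1] and "write" in c.rights
                   for c in caps)

    def kernel_write(memory, caps, addr, value):
        # The integrity property says the (model) kernel never performs the
        # store unless the capability check succeeds; otherwise it refuses.
        if not has_write_cap(caps, addr):
            raise PermissionError("no write capability for address %#x" % addr)
        memory[addr] = value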

Confidentiality is the dual of integrity: it’s about read accesses. The figure at the left illustrates this, by showing allowed (black) and forbidden (red) read accesses. However, it turns out that confidentiality is harder to prove than integrity, for two reasons: (1) a violation of confidentiality is unobservable in the state of the entity whose confidentiality has been violated; (2) confidentiality can be violated not only by explicit reads (loads or instruction fetches) but also by side channels, which leak information by means outside the system’s protection model.

A NICTA team led by researcher Toby Murray has just recently succeeded in proving that seL4 protects confidentiality. They did this by proving what is called “non-interference”. The details are highly technical, but the basic idea is as follows: Assume there is a process A in the system, which holds a secret X. Then look at arbitrary execution traces of an arbitrary second process B. The team proved that if the initial states of any two such traces are identical except for the value of X, then the two traces must be indistinguishable to B. This implies that B cannot infer X, and thus A’s confidentiality is maintained. The technical paper on this work will be presented at next month’s IEEE “Oakland” security conference.
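
Written out schematically (this is my informal paraphrase, not the formal theorem from the paper), with obs_B projecting a trace onto what B can observe, the property reads:

    % Informal paraphrase of non-interference (my notation, not the paper's
    % formal statement): if two initial states agree on everything except A's
    % secret X, then B's observations of the resulting execution traces are
    % identical, so B can learn nothing about X.
    \[
      s_1 \sim_{\bar{X}} s_2
      \;\Longrightarrow\;
      \mathit{obs}_B\bigl(\mathit{trace}(s_1)\bigr)
        = \mathit{obs}_B\bigl(\mathit{trace}(s_2)\bigr)
    \]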

Binary verification

In order to remove from our trusted computing base the C compiler, as well as our assumptions on the exact semantics of C, the team proved that the kernel binary (i.e., the output of the compilation and linking process) is consistent with the formal model of the kernel implementation. This formal model is actually what had earlier been proved to be correct with respect to the kernel spec (it had been obtained by formalising the C code using the formal C semantics). By verifying the binary against the formal model, the formalisation process (and thus our assumptions on the C semantics) are taken out of the trusted computing base.

[Figure: Binary verification workflow and proof chain]

The verification of the binary went in multiple phases, as indicated by the figure on the right. First, the binary was formalised. This was done with the help of a formal model of the ARMv6/v7 architecture. That formalisation of the architecture had previously been done at Cambridge University, in close collaboration with ARM, and is extensively validated. It’s considered to be the best and most dependable formalisation of any mainstream processor.

The formalised binary, which is now a complex expression in mathematical logic inside the theorem prover HOL4, was then “de-compiled” into a more high-level functional form (the left box called “function code” in the figure). This translation is completely done in the theorem prover, and proved correct. Similarly, the formal kernel model is transformed, using a set of “rewrite rules” which perform a similar operation as the compiler does, into a functional form at a similar abstraction level as the output of the above de-compilation. Again, this operation is done in the theorem prover, by application of mathematical logic, and is proved to be correct.

We now have two similar representations of the code, one obtained by de-compilation of the binary, the other by a partial compilation process of the formalised C. What is left is to show their equivalence. Readers with a CS background will now say “but that’s equivalent to the halting problem, and therefore impossible!” In fact, showing the equivalence of two arbitrary programs is impossible. However, we aren’t dealing with two arbitrary programs, but with two different representations of the same program (modulo compiler bugs and semantics mismatch). The operations performed by a compiler are well-understood, and the re-writing attempts to retrace them. As long as the compiler doesn’t change the logic too much (which it might do at high optimisation levels) there is hope we can show the equivalence.

And indeed, it worked. The approach chosen by Tom Sewell, the PhD student who did most of the work, was to compare the two programs chunk-by-chunk. For each chunk he used essentially a brute-force approach, by letting an SMT solver loose on it. Again, some trickery was needed, especially to deal with compiler optimisations of loops, but in the end, Tom succeeded. Details will be presented at the PLDI conference in June.
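
To give a flavour of such chunk-wise equivalence checking, here is a toy example using the Z3 SMT solver’s Python bindings. This only illustrates the principle, it is not the actual HOL4-based toolchain: suppose the C side computes x * 8 and the compiler emitted a left shift instead.

    # Toy illustration of SMT-based equivalence checking (not the real tool):
    # prove that two representations of the same computation agree on all inputs.
    from z3 import BitVec, Solver, unsat

    x = BitVec("x", 32)
    lhs = x * 8        # as it appears in the formalised C
    rhs = x << 3       # as the compiler emitted it in the binary

    s = Solver()
    s.add(lhs != rhs)              # ask the solver for a counterexample
    assert s.check() == unsat      # there is none: the two chunks are equivalent
    print("chunks proved equivalent")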

The resulting tool is fairly general, in that it is not in any way specific to seL4 code, although it has limitations which are ok for seL4 but might be show-stoppers for other code. Specifically, the tool can’t deal with nested loops (but seL4 doesn’t have any, nor should any microkernel!). At present, the tool groks the complete seL4 binary when compiled by gcc with the -O1 optimisation flag. We normally compile the kernel with -O2, and this introduces a few optimisations which presently defeat the tool, but we’re hopeful this will be fixed too.

This is actually quite a significant result. You may have heard of Ken Thompson’s famous Turing Award Lecture “Reflections on Trusting Trust”, where he describes how to hack a compiler so it will automatically build a back door into the OS. With this tool, seL4 is no longer susceptible to this hack! If the compiler had built in a back door, the tool would fail to verify the correctness of the binary. But regular compiler bugs would be found as well, of course.

Where are we now?

We now have: 1) proofs that our abstract kernel spec provides integrity and confidentiality; 2) a proof that the formalised C code is a correct implementation of the kernel spec; and 3) a proof that the kernel binary is a correct translation of the formalised C code. Together this means that we have a proof that the kernel binary has the right security properties of enforcing confidentiality and integrity!
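
Schematically, the chain composes as follows (my informal notation, not the formal theorems):

    % Informal summary of the proof chain: the binary refines the C model, the
    % C model refines the abstract spec, and the security properties proved of
    % the spec therefore carry over to the binary.
    \[
      \text{binary} \sqsubseteq \text{C model} \sqsubseteq \text{spec}
      \quad\wedge\quad
      \text{spec} \models \text{integrity} \wedge \text{confidentiality}
      \;\Longrightarrow\;
      \text{binary} \models \text{integrity} \wedge \text{confidentiality}
    \]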

There are still some limitations, though. For example, we haven’t yet verified the kernel’s initialisation (boot-strap) code; this is work in progress (and is nearing completion). There are also bits of unverified assembler code in the kernel, and we don’t yet model the operation of the MMU in detail. So there are still some corners in which bugs could, in theory, be hiding, but we’re getting closer to sweeping out the last refuges of potential bugs.

A more fundamental issue regards the side channels mentioned above. These are traditionally classified into two categories: storage channels and timing channels. The difference is essentially whether a common notion of time is required to exploit them.

In principle our non-interference proof covers storage channels, up to the degree to which hardware state that could constitute a channel is exposed in the hardware model used in the proof. Presently that model is fairly high-level, and therefore might hide potential storage channels. It does cover things like general-purpose CPU registers, so we know that the kernel doesn’t forget to scrub those in a context switch, but modern CPUs have more complex state that could potentially create a channel. Nevertheless, the approach can be adapted to an arbitrarily detailed model of the hardware, which would indeed eliminate all storage channels.

Timing channels are a different matter. The general consensus is that it is impossible to close all timing channels, so people focus on limiting their bandwidth. So far our proofs have no concept of time, and therefore cannot cover timing channels. However, we do have some promising research in that area. I’ll talk about that another time.

When “persistent” storage isn’t

A couple of weeks ago we published our RapiLog paper. It describes how to leverage formal verification to perform logically synchronous disk writes at the performance of asynchronous I/O (see the previous blog on the implications of this work).

A frequent comment from reviewers of the paper was: “why don’t you just use an SSD and get better performance without any extra work?”

My answer generally was “SSDs are more expensive [in terms of $/GiB], and that’s not going to change any time soon.” In fact, the RapiLog paper shows that we match SSD performance using normal disks.

What I didn’t actually know at the time is that, unlike disks, SSDs tend to be really crappy as “persistent” storage media. A paper by Zheng et al. that appeared at the Usenix FAST conference in February studies the behaviour of a wide range of different SSDs when electrical power is cut – exactly the scenario we cover with RapiLog. They find that of the 15 different SSD models tested, 13 exhibit data loss or corruption when power is lost! In other words: SSDs are a highly unreliable storage medium when there is a chance of power loss. They also tested normal disks, and found out that cheap disks aren’t completely reliable either.

They did their best to stress their storage devices, by using concurrent random writes. In RapiLog, we ensure that writes are almost completely sequential, which helps to de-stress the media. And, of course, we give the device enough time to write its data before the power goes.

I quote from the summary of Zheng et al.: “We recommend system builders either not use SSDs for important information that needs to be durable or that they test their actual SSD models carefully under actual power failures beforehand. Failure to do so risks massive data loss.”

For me, the take-away is that putting the database log on an SSD undermines durability just as much as running the database on a normal disk in unsafe mode (asynchronous logging or write cache enabled). In contrast, using RapiLog with a normal disk gives you the performance of an SSD with the durability of a normal disk. Why would you want to use anything else?

Green computing through formal verification?

This may seem like a long bow to draw, and I admit there’s a bit of marketing in the title, but there’s substance behind it: Verification can save energy in data centres, by significantly speeding up transaction processing.

How does that work?

The key is the defensive way many systems are built to cope with failure. Specifically, database systems, which are behind e-commerce and many other web sites, are designed to protect against data loss in the case of system crashes. This is important: if you do a transaction over the web, you want to make sure that you aren’t charged twice, and that if they charge you, they are actually going to deliver, etc. That wouldn’t be ensured (even if the vendor is trustworthy) if the server crashed in the middle of the transaction.

Databases avoid losses by ensuring that transactions (which is now a technical term) have the so-called ACID properties of atomicity, consistency, isolation and durability. The ones relevant to this discussion are atomicity (meaning that a transaction is always either completed in totality or not at all) and durability (meaning that once a transaction is completed successfully, its results cannot be lost). Databases achieve atomicity by operating (at least logically) on a temporary copy of the data and making all changes visible at once, without ever exposing intermediate state.

Durability is generally ensured by write-ahead logging: The database appends the intended effect of the transaction to a log, which is kept in persistent storage (disk), before letting the transaction complete and making its results visible to other transactions. If the system (the database itself or the underlying operating system) crashes halfway through, the database at restart goes through a recovery process. It replays operations from the log, as long as the log records a complete transaction, and discards the log entries of incomplete transactions. This ensures that only complete transaction effects are visible (atomicity).
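
As a heavily simplified illustration of that recovery rule, here is a Python sketch over a hypothetical log format; it is not any particular database’s code:

    # Heavily simplified write-ahead-log recovery over a hypothetical log format:
    # replay only transactions whose records are complete (a commit marker was
    # logged); anything without a commit marker is discarded.
    def recover(log_records, database):
        pending = {}                              # txn id -> buffered operations
        for rec in log_records:
            if rec.kind == "update":
                pending.setdefault(rec.txn, []).append(rec.operation)
            elif rec.kind == "commit":            # transaction complete: replay it
                for op in pending.pop(rec.txn, []):
                    op.apply(database)
        # transactions still in 'pending' never committed and are simply dropped
        return database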

Durability is only ensured if it can be guaranteed that the complete log entry of any completed transaction can be recovered. This requires that the database only continues transaction processing once it is certain that the complete transaction log is actually safely recorded on the disk.

This certainty does not normally exist for write operations to disk. Because disks are very slow (compared to RAM), the operating system does its best to hide the latency of writes. It caches writes to files in buffers in RAM (the so-called block cache), and performs the physical writes asynchronously, that is, it overlaps them with continuing execution. This is fine, except if you want to ensure the durability of your transactions!

Databases therefore perform log writes synchronously. They use an OS system call called “sync()” (to be precise, it’s “fsync()” in Unix/Linux systems). This call simply blocks the caller until any pending writes on the respective file have finished. The database then knows that the data are on disk, and can proceed, knowing that durability is maintained.
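
In code, a synchronous (durable) log append looks roughly like this; a sketch only, not any particular database’s implementation:

    # Sketch of a durable log append: the function only returns once the OS
    # confirms the record has reached the disk, so the transaction may then
    # safely be reported as committed.
    import os

    def append_log_record_durably(log_path, record):
        with open(log_path, "ab") as f:
            f.write(record)
            f.flush()               # push the user-space buffer to the OS
            os.fsync(f.fileno())    # block until the OS has written it to disk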

What’s the problem with this?

There’s a significant cost, in terms of latency. Disk writes are slow, and the database has to wait until completion. In some cases, it can spend the majority of the time waiting and twiddling thumbs.

What does formal verification have to do with this?

If the underlying operating system is guaranteed not to crash, then we don’t need to block. The database can continue processing, and the OS does the write asynchronously.
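
Conceptually, the commit path then degenerates to an ordinary buffered write. The following is a sketch of the idea only, not RapiLog’s actual mechanism, which sits in an seL4-based virtual disk underneath an unmodified database, as described further down:

    # Sketch of the idea only (not RapiLog's actual mechanism): if the layer
    # below is guaranteed not to crash and to flush buffered data eventually,
    # the blocking fsync can be dropped and the write completes asynchronously.
    def append_log_record(log_file, record):
        log_file.write(record)   # hand the record to the trusted layer below
        # no fsync: the reliable layer writes it out in the background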

This is essentially the thought I had while sitting in a database seminar a few years ago. I was thinking “you guys go through such a lot of trouble to ensure you don’t lose data. We could do that so much easier!”

In fact, that’s what we’re doing now, in a system we call RapiLog. We’re using the stability guarantees of our formally verified seL4 microkernel to get the best of both worlds: the performance advantage of asynchronous I/O, and the durability guarantees of synchronous writes. And does it ever work! We’re observing a doubling of database throughput under the right circumstances!

And increased throughput means that you can process more requests with the same hardware. This means that your data centre can be smaller, which not only reduces capital cost, but also running cost, most of which is the cost of electricity for powering the computers and for air conditioning. That’s the link to green computing.

And the cool thing is that we can do this with unmodified legacy systems: We run an unmodified database binary (including commercial databases where we have no source) on an unmodified Linux system. We just put it into a virtual machine and let an seL4-based “virtual disk” do all the magic.

How about power blackouts?

Good question! What happens if mains power suddenly goes down?

For the traditional database setup that isn’t a problem. Either the transaction is recorded in a log entry that is safely on disk when power goes, in which case the transaction is durable and will be recovered at restart time. Or there is no (complete) log entry, in which case the transaction was never completed, and is discarded at restart. But with seL4?

Turns out we can deal with that too. Computer power supplies have capacitors which store some electrical energy. When mains power is cut, that energy can run the computer for a bit. Not a long time, only some 100 milliseconds. But that is plenty of time to flush out any remaining log data to the disk. Turns out that all we need is about 20 milliseconds, which is very comfortably inside the window given by the capacitors. All we need is an immediate warning (in the form of an interrupt raised by the hardware).

Implications for system complexity

Maybe even more important than the performance improvement is the potential for reducing system complexity. Complex systems are expensive (in terms of development and maintenance). They are also error prone, meaning more likely to fail. Reducing complexity is always a good thing. And we have shown that some of the complexity (in this case in database systems) can be avoided if we can provide guarantees about the reliability of the software. Which is what our Trustworthy Systems work is all about.

Want to know more?

Read the paper. It will be presented at next month’s EuroSys Conference in Prague.

Giving it away? Part 2: On microkernels and the national interest

In my previous blog I addressed a number of misconceptions which were contained in Nick Falkner’s blog on the OK Labs sale, and the newspaper article it was based on.

Note that, as far as the newspaper article reported facts, it got them essentially right. However, it drew the wrong conclusions, based on an incorrect understanding of the situation and the realities of commercialisation, and these incorrect conclusions triggered Nick’s blog. My previous blog addressed issues around commercialisation of ICT IP. Now I’d like to address some of the specifics of the OK Labs situation, and the NICTA IP involved.

Before delving deeper, I must say that there are severe limits to what I can reveal. I was a director of OK Labs, and as such bound by law to confidentiality with regard to information which I obtained as a director. Beyond that there are confidentiality clauses affecting the main shareholders (which includes myself as well as NICTA). I also was an employee of OK Labs from early 2007 until mid 2010. Essentially I have to restrict my comments to what’s on the public record or was known to me before the respective agreements were signed.

A tale of three microkernels and four names

First there is the issue of the three kernels, the first one appearing under two names, which continues to create confusion, even though the facts were essentially correctly reported in the newspaper article.

L4

Before OK Labs there was NICTA’s version of the L4 microkernel. This was an evolution of the open-source Pistachio microkernel, originally mostly developed at Karlsruhe University in Germany. We had ported it to a number of architectures, including ARM, had optimised it for use in resource-constrained embedded systems, and had designed and implemented some really cool ways of doing context switches really fast (a factor of 50 faster than Linux). We had also ported Linux to run on top (i.e. used L4 as a hypervisor to support a virtualised Linux). Thanks to the fast context-switching technology, that virtualised Linux ran faster than native.

As I said, this microkernel started off as open-source (BSD license), and remained open-source. While the BSD license would have allowed us to fork a closed-source version (while acknowledging the original authors) this would have been a stupid thing to do. We wanted our research outcomes to be used as widely as possible.

Qualcomm

In 2004, that L4 microkernel caught the attention of Qualcomm. They had two specific (and quite different) technical problems for which they were looking for solutions. One required a fast, real-time capable kernel with memory protection. The other required virtualization of Linux on ARM. Our L4 provided both, and nothing else out there came close.

Qualcomm engaged NICTA in a consulting arrangement to help them deploy L4 on their wireless communication chips. The initial evaluations and prototyping went well, and they decided to use L4 as the basis of their firmware.

This was all before OK Labs was founded. In fact, at the time we created OK Labs, the first phones with L4 inside were already shipping in Japan! And all based on the open-source kernel.

OKL4 microkernel

The engagement with Qualcomm grew to a volume where it was too significant a development/engineering effort to be done inside the research organisation. In fact, the consulting revenue started to threaten NICTA’s tax-free status! Furthermore, we saw a commercial opportunity which required taking business risks, something you can’t do with taxpayer $$. This is why we decided to spin the activity out as OK Labs. OK Labs marketed L4 under the name “OKL4 microkernel”, and continued its development into a commercial-grade platform.

OK Labs initially operated as a services business, serving Qualcomm, but also other customers. Note that they didn’t even need NICTA’s permission to do this: they took an open-source release and supported it. Anyone could have done this (but, of course, the people who had created the technology in the first place were best placed for it). Among others, this meant that there was never any question of royalties to NICTA.

Also, it is important to note that Qualcomm would almost certainly not have adopted L4 if it wasn’t open source. Their style is to do stuff in-house, and it would have been their natural approach to just re-do L4. The engagement with us was unusual for them, but it led to NICTA technology being deployed in over 1.5 billion devices.

OKL4 Microvisor

OK Labs later decided to become a product company, and to seek VC investment to enable this. They developed their own product, the OKL4 Microvisor. This is the successor of the OKL4 microkernel, and was developed by OK Labs from scratch; NICTA (or anyone else) has no claim to it. It is licensed (and is shipping) on a royalty basis, which is exactly what you expect from a product company.

seL4

Then there is the third microkernel, seL4. This was developed from scratch by NICTA, and its implementation mathematically proved correct with respect to a specification.

[Figure: International newspaper clips reporting on the correctness proof of seL4]

The correctness proof was the big-news event that made headlines around the world. It is truly groundbreaking, but primarily as a scientific achievement: something people had tried since the ’70s and later put into the too-hard basket. But, as per my atomic-bomb metaphor in the previous blog, once people know it’s possible they can figure out how to do it themselves. Particularly since we had published the basics of the approach (after all, doing research is NICTA’s prime job, and it’s not research if it isn’t published). And it’s seL4’s development (and all the stuff that made its verification possible) that took 25 person-years. This is the effort behind the biggest ICT research success that came out of Australia in a long time. It’s fair to say that this has put NICTA on the map internationally.

Commercialising seL4

seL4 is nevertheless something that can be turned into an exciting product, but that needs work. As argued in the previous blog, that’s not something you do in a research lab, it’s company business. That’s why NICTA needed a commercialisation channel.

The way they decided to do it was to license seL4 exclusively to OK Labs, with a buy-out option (i.e. the option to acquire the IP outright) on achieving certain milestones (for the reasons explained in the previous blog). In exchange, NICTA took equity (i.e. a shareholding) in OK Labs, as a way to get returns back if commercialisation succeeded. Using OK Labs as the commercialisation vehicle was an obvious choice: Firstly, OK Labs was developing the market and distribution for this kind of technology. Secondly, OK Labs does all its engineering in Australia, and any alternative would have been overseas. A reasonable deal.

How about national benefit?

The (more or less clearly stated) implication from the commentators that NICTA made a mistake is totally unfounded. And that should not come as a surprise: the people involved in the decision knew what they were doing. The director of the NICTA Lab where the work happened was Dr Terry Percival. He happens to be the person whose name is on the much-lauded CSIRO wifi patent! And NICTA’s CEO at the time was Dr David Skellern. He was the co-founder of Radiata, which implemented CSIRO’s wifi invention in hardware, and got sold for big bucks to Cisco! Those guys knew a bit about how to commercialise IP!

There are comments about the “non-discussion of how much money changed hands”. Well, that happens to be part of the stuff I can’t talk about, for the reasons listed at the beginning.

Also, national benefits aren’t simply measured in money returned to NICTA. There are other benefits here. For one, there is a publicly stated intent by OK Labs’ purchaser, General Dynamics (GD), to not only maintain but actually expand the engineering operation in Australia. Also, one thing we learn is that technology like seL4 isn’t trivial to commercialise; it requires a big investment. GD has the resources to do this, and is active in the right markets, so has the distribution channels. Finally, there is a lot of ongoing research in NICTA which builds on seL4, and is building other pieces which will be required to make best use of seL4. NICTA owns all this, and is certain to stay in the loop. Furthermore, we now have a lot of credibility in the high-security and safety-critical space. This has already shown tangible outcomes, some of which will be announced in the next few weeks.

Did we make all the best decisions? This is hard to say even with the benefit of hindsight. We certainly made good decisions based on the data at hand. The only sensible alternative (both then and with the benefit of hindsight) would have been to open-source seL4, as we had done with L4 earlier. This might or might not have been the best for maximising our long-term impact and national benefit.

We can’t really tell, but what we do know is that we’re in a very strong position of being able to do more high-impact research. In fact, we have already changed the way people think about security/safety-critical systems, and we are likely to completely change the way future such systems will be designed, implemented and verified.

Giving it away? Part 1: On Commercialisation of ICT IP

There has been some publicity here in Oz about the sale of OK Labs and its implications. Unfortunately not all of that publicity was particularly well-informed. One particular instance is a blog which in turn refers to an article in The Australian newspaper.

The blog, and, to a degree, the newspaper article exhibit some significant misconceptions about how IP commercialisation works, as well as a number of mis-interpretations of publicly-stated facts about the OK Labs sale.

Retracted papers – what happened at USENIX ATC’12?

At the opening session of this year’s USENIX Annual Technical Conference I mentioned that accepted papers were withdrawn, and commended the authors on their integrity. This created some interest, including people tweeting about it. I think the episode deserves some publicity.

There were, in fact, two papers which were withdrawn. Paper A was a “full” paper which was conditionally accepted by the program committee (PC), while Paper B was a “short” paper which was accepted. What happened?

Paper A

Paper A is actually a good example of the scientific peer-review process working well. The paper investigated a particular performance effect in storage devices, and at the PC meeting there was intense discussion of its merits.

The experts debated whether the result could be true and there was significant concern that the effect might be a fluke, potentially an issue with a particular set of devices. On the other hand, one PC member with access to confidential manufacturer data from a similar (yet unreleased) device had evidence that it might be real. If the effect was real, the work would be quite interesting, and definitely worth publishing.

After much debate, we decided to accept the paper conditionally, subject to the authors providing satisfactory answers to critical questions from the shepherd (who was an expert on these devices and the main sceptic).

I should add that, like most systems conferences, ATC uses shepherding of accepted papers. The shepherd is a PC member who needs to approve the final (camera-ready) version of the paper. Specifically, the shepherd ensures that all issues raised by the reviewers are dealt with by the authors. In this sense, all acceptances are somewhat conditional, but this case was different: there were fundamental doubts on whether the paper was sound, and the shepherd’s job was to investigate and either put the doubts to rest or kill the paper.

As it turned out, the reviewers (PC members) had indeed touched a sore point, as the authors noted when trying to provide answers to the shepherd’s questions. In the end, they found that the student doing the experiment had stuffed up, and misrepresented a crucial number by two orders of magnitude! And, for good measure, the student had made another mistake of similar magnitude, which had confused things further. (This is really bad, of course, but it’s a student, still learning the ropes, and it’s the professors’ responsibility to ensure that everything is correct.)

To their credit, the authors reacted quickly, and within less than a week admitted that they had stuffed up, apologised profoundly, and withdrew the paper. Btw, the “sceptic” on the PC still thinks that the work is likely to lead to publishable results, so the authors should definitely complete their investigation!

Paper B

Paper B’s story was a bit more surprising. It was also about a performance issue, this time with networking equipment. Since it was a “short” paper, no earth-shaking results were expected. Yet it was somewhat borderline, with most reviewers being lukewarm at best, if not negative, about it, thinking it didn’t really amount to much of significance. It got up because one reviewer championed it, arguing that it was a surprising result coupled with some nice performance tuning work by the authors. In the end, the paper was accepted, subject to standard shepherding.

When preparing the final version, and re-running experiments to address issues raised by reviewers, the authors found that they could not reproduce the original results. This meant they had no real story that they could back with solid experimental data. So they had to withdraw the paper, and did so without any attempt to bullshit themselves out of the situation.

Lessons

Of course, neither team should have got into this situation in the first place. In both cases there was a failure of the authors’ internal process which allowed (potentially) incorrect results to get out.

However, nobody is perfect, and shit happens. And more of it happens when you’re under time pressure (as we almost always are working towards a paper deadline). And shit happens in the best of families (one of the papers was from very well-known and respected authors).

But it is important to be alert, particularly in a university environment, where inevitably the experimental work, and the low-level evaluation, is done by students, who have limited experience. The profs must be sure they know what’s going on and that they can be confident of the integrity of the experimental data.

The important thing is what both teams did once they realised they had a problem: if you have stuffed up, own up to it rather than making it worse by trying to hide it! Retracting an accepted paper is very painful and highly embarrassing, but that is the smaller evil compared to publishing nonsense. Someone is bound to find out, and then you’re looking really bad! In this case, it was only a few people on the program committee who knew about it, and they weighed the integrity of the authors’ actions higher than their mistakes.

So, let this be an example of what to avoid, but also of how to recover in an honourable way from a bad situation. I commend both teams on their actions.
