The ARC released the composition of the ERA’15 Research Evaluation Committees (RECs) a few days ago. The one relevant to us is the Mathematics, Information and Computing Sciences (MIC) REC. So I was a bit surprised when I looked at it and recognised almost no names.
For those living outside Australia, outside academia, or with their heads firmly burrowed in the sand: ERA is the Excellence in Research for Australia exercise the Australian Research Council (ARC, the Oz equivalent of the NSF) has been running since 2010. It aims to evaluate the quality of research done at Australian universities. I was involved in the previous two rounds, in 2010 as a member of the MIC panel, and in 2012 as a peer reviewer.
The ERA exercise is considered extremely important: universities take it very seriously, and a lot of time and effort goes into it. The outcomes are very closely watched, universities use them to identify their strengths and weaknesses, and everyone expects that government funding for universities will increasingly be tied to ERA rankings.
The panel is really important, as it makes the assessment decisions. Assessment is done for “units of evaluation” – the Cartesian product of universities and 4-digit field-of-research (FOR) codes. The 4-digit FORs relevant to computer science and information systems are the various sub-codes of the 2-digit (high-level) code 08 – Information and Computing Sciences.
For most other science and engineering disciplines, assessment is relatively straightforward: you look at journal citation data, which is a pretty clear indication of research impact, which in turn is a not unreasonable proxy for research quality. In CS, where some 80% of publications are in conferences, this doesn’t work (as I experienced first-hand in the ERA’10 round): the official citation providers don’t understand CS, they index conferences not at all (or only very haphazardly), they don’t count citations of journal papers by conference papers, and the resulting impact factors are useless. As a result, the ARC moved to peer review for CS in 2012 (as was already used by Pure Maths and a few other disciplines in 2010).
Yes, the obvious (to any CS person) answer is to use Google Scholar. But for some reason or other, this doesn’t seem to work for the ARC.
Peer review works by institutions nominating 30% of their publications for peer review (the better ones, of course), with several peer reviewers each reviewing a subset of those (I think the recommended subset is about 20%). Each peer reviewer then writes a report, and the panel uses those reports to come up with a final assessment. (Panelists typically do a share of the peer reviewing themselves.)
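To put those percentages into perspective, here is a tiny back-of-the-envelope sketch (in Python, purely my own illustration – the 30% and 20% figures are just the ones quoted above, not official ARC parameters):

```python
# Back-of-the-envelope arithmetic for the ERA peer-review workload.
# The 30% and 20% figures are the (recalled, unofficial) ones from the text.
publications = 1000                      # hypothetical total output of one unit of evaluation
nominated = round(publications * 0.30)   # institutions nominate ~30% of their outputs
per_reviewer = round(nominated * 0.20)   # each reviewer reads ~20% of the nominated ones

print(f"Nominated for peer review: {nominated}")      # 300
print(f"Read by each peer reviewer: {per_reviewer}")  # 60
```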
Peer review is inevitably much more subjective than looking at citation data. You’d like to think that the people doing this are leaders in the field, able to objectively assess the quality of the work of others. A mediocre researcher is likely to emphasise factors that would make themselves look good (although they are, of course, excluded from any discussion of their own university). Basically, I’d trust the judgment of someone with an ordinary research track record much less than that of a star in the field.
So, how does the MIC panel fare? Half of it are mathematicians, and I’m going to ignore those, as I wouldn’t be qualified to say anything about their standing. But for CS folks, citation counts and h-indices as per Google Scholar, seen in the context of the number of years since their PhD, are a very good indication. So let’s look at the rest of the MIC panellists, i.e. the people from computer science, information systems or IT in general.
| Name | Institution | Years since PhD | Cites | h-index |
|------|-------------|-----------------|-------|---------|
| Leon Sterling (Chair) | Swinburne | ~25 | 5,800 | 28 |
| Deborah Bunker | USyd | 15? | max cite = 45 | n/a |
[Note that Prof Bunker has no public Scholar profile, but according to Scholar, her highest-cited paper has 45 citations. Prof Sterling’s public Scholar profile includes as the top-cited publication (3.3k cites) a book written by someone else; subtracting this leads to the 5,800 cites I put in the table. Note that his most-cited remaining publication is actually a textbook; if you subtract this as well, the number of cites is 3.2k.]
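For reference, since the table leans on it: the h-index is the largest h such that the author has at least h papers with at least h citations each. A minimal sketch of the computation (in Python; just an illustration, not anything the ARC or Scholar provides):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    # Rank papers by citation count, highest first; h is the last rank
    # at which the citation count still reaches the rank.
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers cited 10, 8, 5, 4 and 3 times give an h-index of 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```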
Even without looking at the citation data, one notices that only three of the group are from the research-intensive Group of Eight (Go8) universities, plus one member from overseas. That in itself seems a bit surprising.
Looking at the citation data, one person is clearly in the “star” category: the international member, Michael Papazoglou. None of the others strike me as overly impressive: an h-index of around 30 is good but not great, and similarly for citations around the 3,000 mark. And in two cases I can really only wonder how they could possibly have been selected. Can we really not come up with a more impressive field of Australian CS researchers?
Given the importance of ERA, I’m honestly worried. These folks have the power to do a lot of damage to Australian CS research by failing to properly distinguish between high- and low-quality research.
But maybe I’m missing something. Let me know if you spot what I’ve missed.
What did I learn from all this?
Clearly, OK Labs was, first off, a huge missed opportunity. Missed by a combination of greed, arrogance and stupidity, aided by my initial naiveté and general lack of ability to effectively counter-steer.
That arrogance and greed are a bad way of dealing with customers and partners was always clear to me, although a few initial successes made this seem less obvious for a while. In the end, it came back to haunt us.
Strategically, it was a mistake to look for investment without a clear idea of which market to address and what product to build. We didn’t need the investment to survive; we had a steady stream of revenue that would have allowed us to grow organically for a while, until we had a clearer idea of what to grow into. Instead we went looking for investment with barely half-baked ideas, a more than fuzzy value proposition and a business plan that was a joke. As a result, we ended up with 2nd- or 3rd-tier investors and a board that was more of a problem than a solution. But, of course, Steve, being the get-rich-fast type, wanted a quick exit.
A fast and big success isn’t generally going to happen with the type of technology we had (a very few extreme examples to the contrary notwithstanding). Our kind of technology will eventually take over, but it will take a while. It isn’t like selling something that helps you improve your business, or provides extra functionality for your existing product – something that can be trialled with low risk.
Our low-level technology requires the adopter to completely re-design the critical guts of their own products. This not only takes time, it also bears a high risk. Customers embarking on that route cannot risk failure. Accordingly, they will take a lot of time to evaluate the technology, weigh the risks and upsides associated with it, trial it on a few less critical products, etc. All this takes time, and it is unreasonable to expect significant penetration of even a single customer in less than about five years.
So, it’s the wrong place for a get-rich-fast play. But understanding that takes more technical insight than Steve will ever have, and executing it requires more patience than he could possibly muster. And, at the time, I didn’t have enough understanding of business realities to see this.
What this also implies is that, for a company such as OK Labs, it is essential that the person in control has a good understanding of the technology (ours as well as the customers’). This is not the place for an MBA CEO (and Steve is a far cry from an MBA anyway). Business sense is important (and I don’t claim I’ve got enough of it), but in a high-tech company, the technologist must be in control. This is probably the most important general lesson.
Personally, I learnt that I should have trusted my instincts more. While not trained for the business environment, the sceptical-by-default approach that is core to success in research (and which I actively instil in my students) serves one well beyond direct technical issues. The challenge is, of course, that people not used to it can easily mistake it for negativity – particularly people who are not trained to analyse a situation logically, or are incapable of doing so.
I’ve learnt that, when everything a person says about stuff you understand well is clearly bullshit, then there’s a high probability that everything else they say is bullshit too. I should have assumed that, but I gave the benefit of the doubt for far too long.
As mentioned earlier, I also learnt that it was a big mistake to be a part-timer at the company. This could have worked (and could potentially have been quite powerful) if the CEO had been highly capable, there had been a high degree of mutual trust and respect, and we had been aligned on the goals. But I wasn’t in that situation, unfortunately, and my part-time status allowed me to be sidelined.
Finally, I have always believed that ethics are important, in research as well as in business. The OK Labs experience confirms this for me. Screwing staff, partners and customers may produce some short-term success, but it will bite you in the end.
© 2014 by Gernot Heiser. All rights reserved. Permission granted for verbatim reproduction, provided the reproduction is of the complete, unmodified text, is not made for commercial gain, and this copyright note is included in full. Fair-use abstracting permitted.