
Sense and Nonsense of Conference Rankings

2014/07/06

The CORE Rankings

Some may know that a few years ago, the Australian Research Council (ARC) commissioned a ranking of publication outlets. For computer science, the exercise was outsourced to CORE, the association of Australian and NZ CS departments (the Oz equivalent of the CRA). It categorised conferences (and journals) into A*, A, B and C venues.

I have in the past stated what I think of that list: very little. In short, I think it’s highly compromised and an embarrassment for Australian computer science. And I’m outright appalled when I see that other countries are adopting the “CORE Rankings”!

The ARC disendorsed the rankings in 2011. Yet, in 2013, CORE decided to maintain and update the list. I argued that updating it with a process similar to the original one would not improve it.

The Fellows Letter

Now, some senior colleagues (Fellows of the Australian Academy of Science) have written an open letter, denouncing not only the CORE list, but basically any use of publication venues as an indicator of research quality.

The letter was, apparently, written by Prof Bob Williamson from the ANU, a fellow group leader at NICTA. Bob is a guy I have a lot of respect for, and we rarely disagree. Here we do, completely. I also highly respect the other Fellows (one of them is my boss).

The Fellows essentially argue (with more clarification by Bob) that looking at where a person has published is useless, and the right way to judge a researcher’s work is to read their papers.

What I think

With all respect, I think this is just plain nonsense:

  1. These rankings exist, like it or not. In fact, we all use them all the time. (Ok, I cannot prove that the “all” bit is strictly true; some, like Bob, may not, but the rest of us do.) When I look at a CV, the first thing I look for is where the person has published. And I claim that is what most people do. And I claim it makes sense.

    Fact is that we know what the “good” venues are in our respective disciplines. This is where we send our papers, this is where we tell our students and ECRs they need to get their papers accepted. These venues are the yardsticks of the community; like it or not, they are where you publish to have impact. Publishing in the right venues leads to high citations; publishing in the wrong ones doesn’t.

    Of course, we really only understand the venues in our own sub-disciplines, and maybe a few neighbouring ones. So, collecting and documenting these top venues across all of CS isn’t a bad thing; it creates clarity.

  2. The idea that someone can judge a person’s work simply by reading some of their papers (even the self-selected best ones), with respect, borders on arrogance. In effect, what this is saying is that someone from a different sub-discipline can judge what is good/significant/relevant work!

    If this were true, then we as a community could reduce our workload drastically: we’d stop having conference PCs where everyone has to read 30 papers, and where every paper gets at least half a dozen reviews before being accepted (as at OSDI, where I’m presently struggling to get all my reviews done). Instead, every conference would simply convene a handful of Bobs, divide the submissions between them, and each would decide which of their share of the papers should be accepted.

    Of course, things don’t work like this, for good reasons. I’ve served on enough top-tier conference PCs to have experienced plenty of cases where the reviews of discipline experts diverge drastically on multiple papers. In my present OSDI stack of 29 papers this is true for about 35% of them: 10 of the 29 have at least one clear reject and one clear accept! And that is the reason why each paper gets at least 6 reviews: we get the full spectrum, and then at the PC meeting work out who’s right and who’s wrong. The result is still imperfect, but vastly superior to relying on a single opinion.

    Now these reviewers are the discipline experts (in this case, leading researchers in “systems”, incorporating mostly operating systems and distributed systems). If you get such a diversity of opinions within such a relatively narrow sub-discipline, how much more would you get across all of computer science? I certainly would not claim to be able to judge the quality of a paper in 80% of computer science, and if someone thinks they can, then my respect for them takes a serious hit.

    In summary, I think the idea that someone, even one of the brightest computer scientists, can judge an arbitrary CS paper for its significance is simply indefensible. An expert PC of a top conference accepting a paper has far more significance than the opinion of a discipline outsider, even a bright one!

  3. Of course, that doesn’t justify using publication outlets as the only criterion for promotion/hiring or anything else. That’s why we do interviews, request letters, etc. Also, I definitely believe that citations are a better metric (still imperfect). But citations are a useless measure for fresh PhDs, and of little use for most ECRs.

  4. Nor do I want to defend the present CORE list in any way. I have said this before, but I’m repeating it for completeness: the present CORE list is the result of an utterly broken process, is completely compromised, and is an embarrassment for Australian computer science. And any attempt to fix it using the existing process (or some minor variant of it) is not going to succeed. The list must either be abandoned or re-done from scratch, using a sound, robust and transparent process.

  5. My arguments are only about top venues. A track record of publishing in those means something, and identifying across all of CS what those top venues are has value. By the same token, I believe that trying to categorise further (i.e. into B- and C-grade venues, as done in the CORE list) is a complete waste of time. Publishing in such venues means nothing (other than positively establishing that someone has low standards). So, if we bother to have a list, it should only be a list of discipline top venues, nothing more.



One Comment
  1. Sort-of related: Blog by Bertrand Meyer on the nonsense of Web of Science metrics: http://cacm.acm.org/blogs/blog-cacm/181755-what-is-your-research-culture-part-3-the-web-of-science/fulltext
