Preprints and a problem with academic publishing

This is the 250th post on the Theory, Evolution, and Games Group blog. Although my posting pace has slowed in recent months, I see this as a milestone along the continuing road of open science, and I want to take this post as an opportunity to make some comments on that movement.

To get this far, I’ve relied on a lot of help and encouragement: both directly, from all the wonderful guest posts and comments, and indirectly, from general recognition. Most recently, this has taken the form of the Canadian blogging and science outreach network Science Borealis recognizing us as one of the top 12 science blogs in Canada.

Given this connection, it is natural to also view me as an ally of other movements associated with open science, like (1) preprints and (2) post-publication peer review (PPPR). To some extent, I do support both of these activities. First, I regularly post my papers to ArXiv & BioRxiv. In the two preceding months alone, I’ve put out a paper on the complexity of evolutionary equilibria and joint work on how fibroblasts and alectinib switch the games that cancers play. Another will follow later this month, based on our project during the 2016 IMO Workshop. And I’ve been doing this for a while: the first draft of my evolutionary equilibria paper, for example, is older than BioRxiv — which only launched in November 2013, more than 20 years after physicists, mathematicians, and computer scientists started using ArXiv.

Second, some might think of my blog posts as PPPRs: occasionally, I write detailed comments on preprints and published papers, such as my post on fusion and sex in proto-cells, which commented on a preprint by Sam Sinai, Jason Olejarz, and their colleagues. Finally, I am impressed by (and happy about) the now-iconic graphic on the growth of preprints in biology.

But that doesn’t mean I find these ideas to be beyond criticism, and — more importantly — it doesn’t mean that there aren’t poor reasons for supporting preprints and PPPR.

Recently, I’ve seen a number of articles and tweets written on this topic, both for and against (or neutral toward) preprints, and for PPPR. Even Nature is telling us to embrace preprints. In the coming series of posts, I want to share some of my reflections on the case for preprints, and also argue that there isn’t anything all that revolutionary or transformative in them. If we want progress, then we should instead think in terms of working papers. As for post-publication peer review, we should instead promote a culture of commentaries, glosses, and literature review/synthesis.

Currently, we do not publish papers to share ideas. We have ideas just to publish papers. And we need to change this aspect of academic culture.

In this post, I will sketch some of the problems with academic publishing: problems that I think any model of sharing results will have to address.

Academia has a problem

As Julia Galef has noted, there is an unfortunate effect whereby pointing out that something is a bad argument for X is often read as an argument for not-X. That is not my intention with the ideas of open science. I think open science is very important. The criticisms I offer in this post and the ones that follow are made with the hope of improving open science and making sure that we don’t lose sight of the radical ideas underlying it, and that we don’t settle for minor variants on the status quo.

As such, I want to take a moment to say that I am largely in agreement with people who view the existing academic publishing system — especially the for-profit parts of it — as a strain on science. The status quo seems like a way for the academic publishing industry to steal money from taxpayers and university students (or their parents) by exploiting its entrenched position as a (partial) arbiter of academics’ worth. I also view our lackluster efforts against this (Germany’s recent move against Elsevier notwithstanding) as a way that the academy aids and abets this theft. There is definitely some element of bureaucrats entrenching bad publication practices — especially in places where you are paid in proportion to the ‘quality’ of the journal that you publish in instead of the quality of your work — but academics themselves are also at fault.

Academics still maintain a lot of autonomy over how hiring decisions are made, how our students are trained, and what feedback we provide in reviews (both of grants and of finished work). It is in these conversations that we should make sure that phrases like “impact factor” and “regularly publishes in X” aren’t mentioned, and that people are instead evaluated based on the specifics of their work. Further, these specifics should be ascertained by directly reading the work, not by relying on inequality-enhancing, friend-counting methods like reference letters.

Of course, this is much harder than relying on easy metrics like citation counts, publication venues, and numbers of papers. Like many weapons of math destruction, these metrics feel “objective”. It is especially tempting to let them influence our self-evaluations. It is difficult to compare the merit of my own ideas to the ideas of others, and it is clear that there is a bias: part of the reason why I work on my ideas and not others’ is that I think these ideas are inherently more interesting — not a good starting point for a dispassionate evaluation. However, we all have the same Google Scholar pages and, within a subdiscipline, are aiming to send our papers to the same places. It is easy to obsess over your own metrics or to make a spreadsheet of how your metrics fail to measure up to those of academics that you look up to. I’d know: my spreadsheet has over 119 rows and is still growing.

Promoting preprints is now gaining traction as part of our academic autonomy. Part of the hope is to eliminate gatekeepers like editorial boards, although in many cases — especially for good journals from scientific societies — these gatekeepers are other academics, so removing them doesn’t have a transformative impact on the net amount of academic autonomy. Given that editors tend to be more senior and powerful academics, it is tempting to argue that preprints democratize the academy. After all, preprints take some power away from senior editors and give junior academics an opportunity to self-publish. However, I think this is a misguided plus. Many of us have always had the opportunity to self-publish on our own websites or blogs — I did this with one of my first papers on unitary t-designs in 2009 — but most don’t because we know it won’t be read. As such, preprints replace — in part — the semi-transparent gatekeeper of editors with the non-transparent gatekeeper of popularity and self-promotion. This is not necessarily good or bad, and I’ll attempt a more careful analysis of popularity versus editors in a later post.

What about the issues of building up and evaluating academics? Do preprints really address the above issues with evaluation? Do they deal with the academy’s prestige problem?

Not in their current form.

The reason that they are called “preprints” is that (most) people still send those papers to journals. People continue to produce derivative, incremental work to post to the preprint servers. They do not view the publication process as complete until the paper reaches a journal. Nor do they revise papers in response to comments after those papers have appeared in a journal. In fact, given that people still terminate their paper pipelines with a journal publication, it seems like a more transformative move would be to push harder for publishing in society journals over glam-mags and for-profits, or to move away from our library subscriptions toward Sci-Hub. At least then, we could address the theft aspects of academic publishing.

If we want preprints to address the prestige-gating aspects of academic publishing, then the real challenge isn’t the method of publishing but the culture around it.

Of course, we can still consider other advertised positives of preprints, like the questions of speed and public access. I’ll address both in the next post.


About Artem Kaznatcheev
From the Department of Computer Science at Oxford University and Department of Translational Hematology & Oncology Research at Cleveland Clinic, I marvel at the world through algorithmic lenses. My mind is drawn to evolutionary dynamics, theoretical computer science, mathematical oncology, computational learning theory, and philosophy of science. Previously I was at the Department of Integrated Mathematical Oncology at Moffitt Cancer Center, and the School of Computer Science and Department of Psychology at McGill University. In a past life, I worried about quantum queries at the Institute for Quantum Computing and Department of Combinatorics & Optimization at University of Waterloo and as a visitor to the Centre for Quantum Technologies at National University of Singapore. Meander with me on Google+ and Twitter.
