Academia must seem like a strange world to most. You get a job in a university lab, do... something... and move to a different university and repeat. Most people only hear about research second-hand, through BBC press releases or New Scientist. Nerdy rock stars like Stephen Hawking, or this guy, are always making waves, causing a scene about black holes or string theory or some other exciting and bizarre thing. For the vast majority though, being a scientist is just another day job. No Nobel prizes, and no celebrity status. How do these scientists succeed then, if they're not getting into New Scientist? Do they just get paid for being a good lab hand? God, no. They succeed by publishing papers in peer-reviewed journals that remain largely walled off from the rest of society. But how does that work? Here's the advertised recipe for scientific research success; follow it to the letter and you'll do fine:
- Join a university, work in a lab for a few years and research whatever it is you're paid to research. Some labs will let you decide your research. Some won't. This is the easy bit, and it's not easy.
- Write your results up and submit to a journal with a high impact factor (I'll get to that later).
- Wait nervously while the editor decides if the research is novel enough for their journal.
- Wait nervously while a small group of anonymous peers picks your research apart for experimental/theoretical flaws and/or questionable conclusions.
- Pat yourself on the back for a job well done. Maybe sneak a grant application in when you get the time. Grant applications look great.
All seems reasonable. The system promotes healthy competition, and selects for sound research. Bad science gets filtered out, and successful scientists advance their careers. Right? Well, that's the idea, but it doesn't quite work that way. There are pitfalls to this recipe that can't be avoided by just being good at your job. You've also got to be lucky.
The first problem is that little phrase I used earlier: impact factor. This is, simply put, a number measuring how frequently papers in that journal are cited by other papers: roughly, the citations a journal's recent papers receive in a year, divided by the number of papers it published over the preceding two years. If a paper is cited a lot, it means it had a high impact on research, thus adding to the overall impact of the journal. Makes sense then that scientists flock to publish in these top journals - if you get in, you're golden. The problem is that it's self-serving. The higher the journal's impact, the more people read said journal, and the more citations it's naturally going to pick up.
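To make that circularity concrete, here's a minimal sketch of the standard two-year impact factor calculation. The function name and the numbers are made up for illustration; they don't correspond to any real journal.

```python
def impact_factor(citations_this_year, papers_prev_two_years):
    """Two-year impact factor: citations received this year to papers
    the journal published in the previous two years, divided by the
    number of citable papers it published in those two years."""
    return citations_this_year / papers_prev_two_years

# A hypothetical journal whose last two years of papers (200 of them)
# picked up 8,200 citations this year:
print(impact_factor(8200, 200))  # 41.0
```

Note the feedback loop hiding in the numerator: a widely read journal accumulates citations faster, which raises the score, which attracts more readers and submissions.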
The big gun journals (Nature and Science are the big two) then, to help maintain this impact factor, prefer to publish "exciting" science. From a business point of view, this helps keep the journal at the cutting edge. The problem is that scientists then avoid submitting negative results. A negative result is the unfortunate outcome of spending months soundly testing your idea, and being proven wrong. Scientists are supposed to be ego-less, and only care about finding the truth, so these results should be just as important as positive results, maybe more so. The perception is the opposite; negative results are uninteresting. If you've proven your idea wrong, publishing it would at least prevent other groups from heading into the same dead end. Instead, you find the dead end in the science maze, turn around, and pretend it never happened. Further, not only do you avoid negative results, you chase areas of research that are deemed "Nature-worthy". But who decides what's good enough for Nature anyway? This leads us nicely onto our next obstacle: the editor.
Before your paper can be judged on its merits, you must get past the journal's editor. This is someone with a decent scientific background and some basic understanding of a wide variety of fields of research. They're the jack-of-all-trades of scientists, with enough knowledge to dabble here and there. The problem is that an editor is no expert on your ultra-specific research, so they're often not a particularly good judge of its merits. Particularly at the topmost journals, an editor's primary role is to filter out the non-exciting research (the "guff", as my colleague calls it). Filters I understand - Science doesn't want to waste time sending any old crackpot idea off to be peer-reviewed - but the impact-chasing approach goes well beyond that. The editor doesn't question your work's validity, only its punchiness - if it doesn't mention black holes or cancer you'll struggle. So you write a cover letter specifically for the editor, justifying why your paper is fantastic, and how their journal would be sorely wounded if it missed the opportunity to publish it. To get past the editor, you have to be both scientist and salesman, well versed in the art of bullsh#$!ing.
Let's assume you've made it this far. You've gotten past the feeling of unease over the impact factor system, and you've sly-talked your way past the boring-filter. Now, a group of anonymous peers, scientists chosen for being experts in your field, are independently judging your work (we call these guys and gals referees, but try not to imagine rugby legend Nigel Owens). This is the peer-review stage, and it's a cornerstone of science, there to keep bad science in check. Phew. Panic over. It's in the hands of reasonable scientists now, so there's nothing dodgy from here on out... Right?
If you've ever been involved in medical testing, you've likely been part of a double-blind experiment. What this means is that neither the patient nor the clinician administering the drug knows whether the patient is taking the test drug or a placebo. Only after the trial is over and the data collected are the patients who took the drug revealed and their outcomes compared. This may seem pretty harsh if you're desperate for some new drug to cure your incurable disease, but it's vital. By keeping everyone in the dark, it removes false positives that could be caused by the placebo effect (where people feel better simply because they think they're taking medicine) or misinterpretations of data caused by the expectations of the scientists. The data is laid bare, and only then do we know if the drug worked. If it did, send it out to the masses. If it didn't, we don't need to waste any more time, money, and lives on it.
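The mechanics of blinding are simple enough to sketch in a few lines. This is a toy illustration, not real trial software: each patient code is randomly assigned to an arm, and the assignment key stays sealed (unseen by patients and clinicians) until all outcomes are recorded.

```python
import random

def allocate(patient_ids, seed=42):
    """Randomly assign each patient code to 'drug' or 'placebo'.
    The returned key is the sealed assignment: nobody involved in
    the trial looks at it until the data are collected."""
    rng = random.Random(seed)  # seeded for reproducibility in this sketch
    return {pid: rng.choice(["drug", "placebo"]) for pid in patient_ids}

patients = [f"P{i:03d}" for i in range(6)]
sealed_key = allocate(patients)

# During the trial, everyone works with the codes in `patients` only.
# After the trial, unblind once and compare outcomes by arm:
arms = {"drug": [], "placebo": []}
for pid, arm in sealed_key.items():
    arms[arm].append(pid)
```

The point of the sealed key is exactly the point of the paragraph above: no one's expectations can leak into the data, because no one knows which code means what until the data already exist.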
Given that this approach is common in research, it would seem obvious then to make peer-reviewing double-blind, to stop grudges (yes, grudges happen, and it's embarrassing) or biases getting involved. Prejudices are rampant in academia too, as Fiona Ingleby of the University of Sussex found out just last year, when a referee responded to her submission with the following comments:
"It would probably also be benficial to find one or two male biologists to work with..." "...to serve as a possible check against interpretations that may sometimes be drifting too far away from empirical evidence into ideologically biased assumptions".
"...it might well be that on average men publish in better journals … perhaps simply because men, perhaps, on average work more hours per week than women, due to marginally better health and stamina”.
The journal apologized for this anonymous referee's comments, but so far double-blind refereeing is still rare. There are valid (but weak) arguments against double-blind reviewing, most of which revolve around the fact that a scientific paper rarely stands alone. More often, a paper draws from previous publications by the same authors, so a referee who moonlights as a consultant detective would probably figure out who you are. I don't buy this argument at all. It being a bit more difficult to maintain double-blindedness is not a valid counter-argument when referee bias is the alternative. Sure, a sleuth referee could occasionally track your identity, and then some bias will creep in, but it doesn't need to be a perfect system, it just needs to be better!
One could argue that the anonymous sexist referee above was just a bad egg. A sole bad egg, in a giant shiny box of golden goose eggs. I would argue, however, that he was just the one who was open about his sexism; there are bound to be many who share his views without being so candid, and even more with unconscious prejudices. People are often more bigoted than they realise, and it's very important, both for science in general and for individual authors' careers, that good research not be hindered by backward referees.
Finally, we're at the end of the road, and you've made it. You got good positive results in the lab. You gave the journal your best sales pitch and the editor bought it. The referee(s) agreed with your findings, and also didn't hold your background against you. Well done. Your career is now on solid ground, but you're exhausted, and you've forgotten what being interested in doing science felt like. You go on the journal's website to find your paper, just to see it online and remind yourself that all the hard work was wor... Oh, it's behind a paywall. I'm so sorry.
Don't listen to me though, I'm just grumbling because I haven't published in a while.