By A.J. Peters
Oh my, does it get any better than this?
In 2005, three MIT students developed software that could churn out nonsense disguised as academic papers. According to The Guardian, they wanted to expose how academic conferences charge substantial entry fees but have (laughably) low standards for the papers they accept. Their gibberish paper was accepted.
Since then, the software they used to create the paper has been released online, and is apparently responsible for over 100 papers that have been published by the Institute of Electrical and Electronics Engineers (IEEE). Another 16 were printed by the German publisher Springer.
Here’s a snippet from one of the original papers, courtesy of MIT News:
Many physicists would agree that, had it not been for congestion control, the evaluation of web browsers might never have occurred. In fact, few hackers worldwide would disagree with the essential unification of voice-over-IP and public-private key pair. In order to solve this riddle, we confirm that SMPs can be made stochastic, cacheable, and interposable.
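The generator behind papers like this works by randomly expanding a context-free grammar, filling templates with jargon until a plausible-sounding sentence emerges. Here is a minimal sketch of that idea in Python; the toy grammar below is invented for illustration and is not SCIgen's actual grammar.

```python
import random

# A toy context-free grammar in the spirit of SCIgen. Every key is a
# non-terminal symbol; every value is a list of possible expansions.
# (This grammar is made up for illustration only.)
GRAMMAR = {
    "SENTENCE": [["Many", "EXPERTS", "would agree that", "CLAIM", "."]],
    "EXPERTS": [["physicists"], ["hackers worldwide"], ["theorists"]],
    "CLAIM": [["SUBJECT", "can be made", "ADJ", "and", "ADJ"]],
    "SUBJECT": [["SMPs"], ["web browsers"], ["public-private key pairs"]],
    "ADJ": [["stochastic"], ["cacheable"], ["interposable"]],
}

def expand(symbol):
    """Recursively expand a grammar symbol into a list of words."""
    if symbol not in GRAMMAR:
        return [symbol]  # terminal: emit the word as-is
    production = random.choice(GRAMMAR[symbol])
    words = []
    for part in production:
        words.extend(expand(part))
    return words

print(" ".join(expand("SENTENCE")))
```

Each run picks random expansions, so the output is grammatical but meaningless, which is exactly the point: reviewers who accepted such papers could not have read them closely.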
Of course, there’s a side to all this humor that’s not so innocent or inconsequential. What does it say about the publish-or-perish academic environment, where researchers are pressured to release papers that no one will read, just to claim the achievement on their résumé? As Slate sums up:
Over the course of the second half of the 20th century, two things took place. First, academic publishing became an enormously lucrative business. And second, because administrators erroneously believed it to be a means of objective measurement, the advancement of academic careers became conditional on contributions to the business of academic publishing.
As Peter Higgs said after he won last year’s Nobel Prize in physics, “Today I wouldn’t get an academic job. It’s as simple as that. I don’t think I would be regarded as productive enough.” Jens Skou, a 1997 Nobel Laureate, put it this way in his Nobel biographical statement: today’s system puts pressure on scientists for “too fast publication, and to publish too short papers, and the evaluation process use[s] a lot of manpower. It does not give time to become absorbed in a problem as the previous system [did].”
Leave your thoughts in the comments: what’s a better way to evaluate and provide funding for meaningful research?