REASON #1: The Epidemic of Communication
Blink, and another proceedings appears. Most papers turn out to be early progress reports, quickly superseded, yet pitched as mature and significant. The right place to hear about each other's fresh ideas is in our offices, not in print.
REASON #2: Superficial Reviewing
Improvements have been made, especially in the most prestigious conferences. But it is simply unreasonable to expect that people reviewing on the order of ten papers for at least one conference per year will apply the same effort as in reviewing a few journal papers. And the difference in quality shows, as anybody who has processed many journal reviews (e.g., as an AE or Guest Editor of a Special Issue) can attest. In any case, the process is doomed: there simply aren't enough experienced reviewers.
REASON #3: Journals Work
The tradition of peer-reviewed journal articles has served science well for hundreds of years. Why change it? It allows for major revisions and second-round reviews, often markedly improving papers. And single-blind reviewing helps detect incremental work. The argument about turnaround time is naïve (at least in our fields); there just aren't that many ideas or results that are so important as to warrant speedy publication. Besides, journals have sped up the review process.
REASON #4: Noisy Personnel Decisions
Important career evaluations are increasingly based on the number of papers appearing in certain conferences. Given the vagaries of the review process, and the sheer numbers of papers individuals submit, this is adding considerable noise to the decision-making process.
REASON #5: Irrational Exuberance
Community-wide, do we really believe that every few months there are several hundred advances worthy of our attention? How many good ideas does the typical researcher (or anybody for that matter) have in a lifetime? A Poisson distribution with a small parameter?
REASON #6: Preferential Treatment
Area Chairs and other committee members involved in the submission process are advantaged, and there is a clear conflict of interest. When the committee is holed up in an airport hotel room over a weekend to make final decisions, it is simply human nature to favor those you are face-to-face with.
REASON #7: Limited Accountability
Another corrupting factor is excessive anonymity in the chain of communication. With journals, there is two-way communication between reviewers and AEs, and between AEs and authors. In particular, as an AE, my identity is known to the authors and I take responsibility for my decisions. In contrast, Area Chairs are often anonymous, and Program Chairs defer to them. In the end, nobody is explicitly accountable.
REASON #8: Poor Scholarship
Apparently, for many researchers, doing a literature search is now reduced to looking at the proceedings of a few conferences over the last few years. As a result, ideas are reinvented and recycled, and credit is randomly distributed. Such lapses are more likely to be caught in journal reviewing.
REASON #9: Diminished Real Productivity
Needless to say, productivity should not be measured by the number of pages written. But as far as I can see, our young colleagues are spending as much time, or more, writing up their results and searching for minimum publishable units and catchy names as they are thinking intensely and creatively.
REASON #10: The Fog of Progress
Are we making progress? Are we steadily (if slowly) building on solid foundations? It is difficult to know. Given all the noise due to the sheer volume of papers, the signal---the important, lasting work---is awfully difficult to detect.
At the FBI building in Washington, D.C., there used to be a red light that blinked every time a major crime was committed in the United States. As the story goes, a visiting child once asked, "Why not just turn off the light?" The same remedy works in our case. Researchers in other disciplines assemble and hear about each other's work, which is useful and fun, without refereed conference proceedings; a summary (e.g., abstracts) is enough, and technical reports provide a measure of protection.
My own (half-serious) suggestion is to limit everybody to twenty lifetime papers. That way, we would think twice about spending a chip. And we might actually understand what others are doing---we might even be eager to hear about so-and-so's #9 (or #19).
DISCLOSURE: Although my opinions were largely formed many years ago, I have nonetheless submitted a few dozen papers to computer vision and other conferences over the past twenty years, mostly at the instigation of students and collaborators. In view of current practices, I can well understand their motivation.
Donald Geman
Johns Hopkins University
November 2007