Some mathematicians I know appear to believe that there are too many papers published, and we need a barrier to publication. While I sympathize to some extent, let’s assume that there is no way to prevent publication (arXiv, own webpages …). Obvious questions then are: 1) how do we find the papers we really want to read (which excludes 99% of those in refereed journals anyway) and 2) how can we decide which ones are not even worth looking at because they have not been formally refereed to an acceptable standard?
I would like to gauge people’s opinions here on 2), which I find a little disturbing. It has the feel of a socially constructed in-group to me.
Filtering out the papers whose content is not sound research is useful, and should be doable without too many ill effects. In fact, some examples posted around here show that it would mean filtering out more than we actually do in publishing at large. The PLoS approach is a reasonable model for this: filter out only the papers that should not be published at all, and let the finer sorting be done afterward. If one wants to filter out many more papers, then the problem becomes much more difficult.
On a light note, since when has 70 000 000 been a very large number? (and it’s down to 60 000 000 now)
I should add that Zhang’s result was already known in some circles for a month before the talk (I learned this fact from Cathy ’Mathbabe’ O’Neil). It had already been noticed and partially filtered before most of us heard of it.
Zhang’s earlier paper on Landau-Siegel zeros has been on the arXiv for six years without filtering. The last I heard (a few weeks ago), experts thought there was probably an unfixable mistake but it hadn’t yet been definitively sorted out. I’m not sure what was different in the two cases that led to one paper being carefully studied and the other ignored for six years. Did Zhang push harder for the bounded gaps paper to be read carefully, perhaps because he was more confident it was correct? Was it just because the paper was more convincing? (Maybe this would be evidence that our current filtering system works surprisingly well.) Did he get lucky in having the bounded gaps paper taken more seriously?
First I should say that I neglected to credit Igor Pak’s blog post, which noted (correctly, in my opinion) that mediocre mathematicians, and mathematicians from poor countries or without access to good libraries, are the ones who would be hurt by free universal electronic publishing. They need the help of an editor and a referee (hence an organized editorial system, which costs money) to improve and, if appropriate, promote their papers. I am merely extending his observation to mathematicians who are not necessarily mediocre but who have mediocre reputations.
Was the earlier paper submitted to a journal? Does anyone know the result of that (and can share it)? From this distance, not actually knowing anything, it seems to me that this paper was carefully studied because it was submitted to the Annals and the editors found referees who took it seriously.
The current system gives an explicit way for an author to ask that a paper be evaluated and commented upon, namely submission to a journal. We then think that the journal has a responsibility to make reasonable efforts to (find people to) evaluate and comment upon the paper.
In a publish-then-filter system, I don’t see any mechanism to ensure this happens to every paper.
Tim Gowers suggested that the “how’s my preprint” service be separated from the evaluation service. I don’t see why this couldn’t be done. There are already commercial services of this kind, though they are not yet common. So what’s wrong with the idea?