Vanilla 1.1.10 is a product of Lussumo.
Welcome to Math2.0
  1. I'd like to suggest a single process that in principle could combine 1) refereeing + editor evaluation, 2) old-fashioned post-publication reviews, 3) comments or notes on papers as expected to appear in the future epijournals (or WDML). This depends naturally on the potential success of epijournals but I wonder if you would find the whole idea totally ridiculous or potentially workable.

    What's in a peer review of a paper? I can think of six essential ingredients: a) purely mathematical content, such as counterexamples, corrections and simplifications, b) a review and assessment of achievements (or shortcomings) of the paper, c) pointers to related or overlapping work, d) remarks on exposition and suggestions on how to improve it, including misprint corrections, e) reviewer's personal formal assessment of quality and novelty of results and quality of exposition, f) recommendations regarding acceptance for publication. Not all of these ingredients are sensitive to disclosure of identity of the reviewer. If the disclosure were available separately for each ingredient, I'd say, at the risk of oversimplification, that (a) and (b) would rather benefit from openness, (c) and (d) may have both positive and negative effects, but none of them significant, whereas (e) and (f) would rather benefit from secrecy.

    The option of disclosure could be ingredient-specific if different ingredients have different origins. In the end, the decision about acceptance is normally left to the editor, who is normally different from the referee. The editor cannot normally manage (e) and (f) because he'd have to study the paper and the review much more deeply than he normally can. However if the editor happens to be as familiar with the narrow area of the paper as the referee himself, and is willing to contribute say 1/5 of the time that the referee contributed, then the editor will effectively have his own version of (f) (which might be counterbalanced by (f) of the referee), and (e) becomes largely redundant in the presence of (b), (c) and (d).

    Now let us assume that we have an epijournal with ranked branching comments on submitted papers (like on Reddit but more in the style of StackExchange) and with good incentives for the majority of comments to be thoughtful and of scholarly value (this could include reputation, area-specific reputation, and various forced delays). Some comments could in fact be questions ("Does anybody know what this phrase is supposed to mean?") and others open problems ("Can this lemma be proved without assuming that condition?"), often of unknown difficulty, and this is one reason for branching. Regardless of questions and problems, let us consider a new type of comment - "exercise". It is similar to a problem, except that it is supposed to be easy enough to be answerable by every careful reader in seconds or minutes (not hours) and to have a computer-verifiable answer. An exercise can refer to the paper itself or to any comment on the paper that is not an exercise, and may be written by any user, including the authors of the paper. Evaluation of an exercise is supposed to carry a rather precise meaning: upvote if you can solve it and think that other careful readers should be able to, and downvote if you can't solve it with reasonable effort, or think it is irrelevant to the paper.

    Forgetting about incentives for now, let us further assume that we have a paper in the epijournal that happens to have enough comments that their collection amounts to parts (a) through (d) of a full peer review. Plus a bunch of exercises addressing mainly the introduction of the paper and the comments of types (a) and (b). We're assuming that each of these comments and exercises has been endorsed by a sufficient number of upvotes, and that the numbers of the sufficiently endorsed comments and exercises are beyond some barriers. Then a reader, presumably with area-specific reputation qualifications, who solves the exercises will be in the position of the paper-friendly editor who can do the (f) part. Such readers could be allowed to vote anonymously on acceptance of the paper. The question of acceptance would now be solved by this "informed democracy" procedure. (It so happens that I recently had to write a longer text about informed democracy in a political context, but it's only available in Russian at the moment.)
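    The acceptance gate just described can be sketched in code. This is only a toy illustration: every threshold, name and number below is a made-up assumption for the sake of concreteness, not part of the actual proposal.

```python
# Toy sketch of the proposed acceptance gate. All thresholds and names
# are illustrative assumptions.
from dataclasses import dataclass, field

ENDORSE_THRESHOLD = 5   # net upvotes for a comment/exercise to count as endorsed
MIN_COMMENTS = 8        # barrier on endorsed review comments (parts (a)-(d))
MIN_EXERCISES = 4       # barrier on endorsed exercises
MIN_REPUTATION = 100    # assumed area-specific reputation qualification

@dataclass
class Item:
    """A comment or an exercise together with its evaluation."""
    upvotes: int
    downvotes: int = 0

    @property
    def endorsed(self) -> bool:
        return self.upvotes - self.downvotes >= ENDORSE_THRESHOLD

@dataclass
class Paper:
    comments: list = field(default_factory=list)   # parts (a)-(d) of the review
    exercises: list = field(default_factory=list)

def ready_for_vote(paper: Paper) -> bool:
    """The paper enters the anonymous acceptance vote only once enough
    endorsed comments and exercises have accumulated."""
    n_comments = sum(c.endorsed for c in paper.comments)
    n_exercises = sum(e.endorsed for e in paper.exercises)
    return n_comments >= MIN_COMMENTS and n_exercises >= MIN_EXERCISES

def may_vote(reputation: int, exercises_solved: int, paper: Paper) -> bool:
    """Only readers with area-specific reputation who have solved the
    endorsed exercises qualify to vote on acceptance (part (f))."""
    required = sum(e.endorsed for e in paper.exercises)
    return reputation >= MIN_REPUTATION and exercises_solved >= required
```

    Under these invented numbers, a paper with eight endorsed comments and four endorsed exercises passes the gate, and a reader qualifies to vote only after solving all the endorsed exercises.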

    This leaves the question of incentives: who would write all those comments that are effectively full paper reviews, and also the exercises? Obviously, all those folks who would normally do the refereeing and/or reviewing for MR and Zentralblatt and spend their time on MathOverflow. There are some dangers here, namely that (non-math) journals with open refereeing have a higher decline rate among potential referees, whereas MR reviews are of course rarely as detailed as a regular referee's report. On the other hand, one could imagine some alternative mechanisms to the usual drafting of referees and post-publication reviewers, which would make writing open peer reviews more of a research activity and less of a distraction. People subscribing to arXiv categories might also subscribe to updates on the commenting of papers in their areas, and incorporate such commenting into regular academic habits. One could also imagine authors eager to get their work published asking friends and colleagues to comment on their work, since a sufficient number of comments and exercises is a prerequisite for voting on acceptance of the paper. While this might lead to conflicts of interest, generally it seems to be a more transparent process than the writing of recommendation letters, say.
    • CommentRowNumber2.
    • CommentAuthorAlexander Woo
    • CommentTimeJun 4th 2013

    How does this work for a mathematics professor at Alma College (or your choice of highly teaching-oriented institution) who publishes one paper in an obscure area of graph theory every several years and has no friends or colleagues active in research, much less in his area? Who would write comments for his or her paper?

    For that matter, how would it have worked for Yitang Zhang before he became famous?

    • CommentRowNumber3.
    • CommentAuthorSergey Melikhov
    • CommentTimeJun 5th 2013
    • (edited Jun 5th 2013)
    Alexander, I'm not saying that recruitment of reviewers could be avoided completely. (At least, not until great cultural changes in some distant generations.) But it could be reduced by observing some lag after initial submission of the paper. If the paper gets enough reviewing material anyway, it might be unnecessary to solicit invited reviews. In fact, the voting on accepting or rejecting the paper should include a third option to solicit additional reviews, in case the existing material looks obscure or biased.

    Alexander's comment also reminds me to mention that while one can judge the proposed system by its apparent effectiveness, it also brings one additional issue into the discussion of new forms of journals: Grothendieck's critique of the mathematical community as an undemocratic meritocracy in his "Récoltes et Semailles". Before Yitang Zhang became famous, he had no chance of being on the editorial board of a good journal, but in the proposed system he could have qualified for voting on some papers in his area.
  2. Sergey, a few comments on your first message.

    1. Epijournals should not be expected to have a commenting feature, at least not in the episciences project. This issue is debated, and we want to focus on our priority: epi means over, so “epijournal” is mostly a French translation of “overlay journal”. It might happen at some point that comments could be open in some of our venues, but it is neither a priority nor something meant to be universal.

    2. Commenting features have been tried (notably by PLOS ONE) but as far as I know, none of them have attracted many comments yet. So incentives are probably the first thing to think about, not the last one.

    3. In any case, without the current process of submission (on the initiative of the author) and explicit recruiting of referees, I very much doubt that most papers will get feedback. Incentives strong enough to motivate people to comment on more than the 10% most visible papers, and that are not meant as a favor to a third party that asks it of you, are very likely to have huge side effects.

    4. What you propose is highly sophisticated, and very different from the current habits. It makes it all the more difficult to enforce on the community. If you want post-publication refereeing, you should try to keep it simple and related to habits that already exist (in this regard, something MathOverflow-like might have a chance if it is simple enough).

  3. Benoit, thanks for an extended critique.

    1. Sorry for being sloppy in referring to epijournals. From the discussion in Terry Tao's blog, I got the impression that while there's a determined opposition to the commenting feature and everybody agrees that at the very least authors and entire epijournals should be able to opt out of it, the feature also has strong support and is arguably the most interesting aspect of epijournals for potential authors, readers and the general public. (This is not to say that sustainable open access is unimportant, but it's probably more of a concern for editors and campaigners as opposed to lay mathematicians, and is also being pursued in other ways; whereas the commenting feature does seem to be entering the world of math journals exclusively through epijournals, AFAIK.) So I took it more or less for granted that at least some epijournals are expected to have commenting as an option. Sorry if this turns out to be wrong in the end.

    Let me then abbreviate potential epijournals with the commenting feature as 2G journals (for lack of a better term), and the model that I'm trying to describe as 3G journals (modesty being sacrificed for brevity). I don't really mean to suggest that this is exactly how actual "3G journals" should be in my opinion, but rather that there seems to be a logical need for a next step after 2G journals, and what is being discussed here might not be entirely irrelevant to that next step.

    2. Part of my point is that the problem of incentives in the 2G journal model (which is obvious already from the universal failure of many sites offering the possibility to comment on any paper in the arXiv) might be just solved by the 3G model. The 3G model attempts to aggregate the traditional incentives of referees, post-publication reviewers and mere "commentators" (such as authors of the old-fashioned short notes on others' and own more substantial work, modern bloggers and one type of MathOverflow users) into one big incentive to contribute. Moreover, it attempts to increase their traditional incentives, in several ways.

    When I get an invitation to referee for a traditional journal, I often find myself in the situation that I'd be quite interested to leave some feedback on the paper, but I'm rather unwilling to read all of it carefully, to assume the responsibility of having checked it for errors, and especially to argue that the paper should or should not be published. In fact, my natural reaction to 90% of modern papers in my fields of interest is that "this could well become a really nice paper if the author pursues this and that issue further and comes up with somewhat more definite and convincing results in those directions". But this is usually not an appropriate conclusion for a referee's report. So as a referee, I'm often forced to make decisions which are neither honest nor, in my opinion, beneficial to the authors, the readers and the collective knowledge. (I realize that this attitude may not be so typical of an average referee, but surely other referees have their own dilemmas arising from the combination of the "naturally open" aspects (a), (b) of peer review with the "naturally secret" aspects (e), (f)). To summarize, I'd often be much more willing to leave some feedback instead of a full review.

    This raises the question of who is going to check the paper for errors if no single contributor to the collective peer review cares to read the entire paper. My answer is that, firstly, the practice nowadays is that only a few top journals really care to have their papers actually checked for errors, and even they can't guarantee anything. There are not so few papers with critical errors in the Annals and Inventiones, and many (most?) papers in lower-rated journals are not even fully read by the referees. Secondly, claiming that "the proofs in this paper seem to be correct" in public is not the same as claiming it in private correspondence with the editor. The former carries more responsibility and therefore more weight. Also, claims such as "the proofs in sections 5 through 7 seem to be correct, but I'm not convinced that it's my job to check the other 10 sections" don't normally make it into a traditional referee's report. But it would be much more valuable to know that a certain known expert actually claims to have verified 3 sections than to know that an obscure referee is expected to have verified all 13 sections. This enables division of labor and should result in much more reliable literature, since transparency of verification is vital for its reliability.

    The expert who considers checking the 3 sections may actually be much more eager to do so if he's able to state the result of his work publicly. But comments of the type "I think I've checked sections 5 through 7" are not really considered appropriate for post-publication reviews, and even blogs and MathOverflow don't particularly encourage such comments. The 3G model would specifically encourage them, which I think would result in a very positive change in contemporary math culture.

    Similarly, a referee who has found a counterexample to some lemma or a trivial proof of another lemma may feel far more rewarded for his work if his comments are made public under his name. A post-publication reviewer will likely feel more rewarded by a high evaluation of his comment by peers, and an area-specific reputation increase, as opposed to the eight AMS bucks or a Springer discount. At least in this respect the 3G model is, I think, very MathOverflow-like, and much more so than a 2G model could possibly be. Now MathOverflow is after all a Q&A site where mathematical novelty does not really count whereas presentation and social skills are crucial. This need not be so in the 3G model, which may in the end give more reliable and transparent bits of information about a job seeker than traditional numerical values based on impact factors and other journal metrics. In the end, citation indices of the paper measured in any way say something (perhaps everything) about its influence, but don't say so much about how substantial or important it is. There are plenty of papers that I think are substantial and important but I'm not going to cite them because I'm simply doing something else in my papers; but I could well comment on them, which could help increase their visibility and appreciation. Overreliance on citation indices, and therefore on influence alone, seems to be a significant factor in present day math culture, which I'm not sure is entirely positive. To summarize, a person "at Alma College who publishes one paper in an obscure area of graph theory every several years and has no friends or colleagues active in research" may actually be quite eager to contribute to open reviews just because that could be their best chance to get noticed at all.

    In conclusion, it seems to me that collaborative open refereeing, with the (e) and (f) parts of the process detached into anonymous voting, could in fact attract potential referees much better, and not worse than traditional refereeing and post-publication reviewing. I didn't say anything about motivations for writing the exercises, but that should probably take a rather small fraction of time and effort, and could be largely done by the original authors of the paper. Also, exercises could be made mandatory for some types of extended comments, in the sense that such a comment would not only need to pass a rank threshold itself, but also have its exercises pass some rank thresholds in order to qualify for contributing to the ultimate "discussion index" of the paper.

    3. When (and if) a first 3G journal is launched, surely 99% of its submissions would not get enough peer review material without the journal editors recruiting the referees just like they normally did. With time, the figure could drop, depending on the overall success of the project, and if eventually it stabilizes at say 50%, that would already make a huge difference in that the refereeing process would become less of a distraction and more of attending a "seminar 2.0".

    4. The sophistication of the 3G model must be similar to the sophistication of a new hi-tech gadget on the market: it's not easy to describe in plain words (without pointing at the actual device), but most people eventually find it rather easy to use. Surely, TeX, MathOverflow and even arXiv submission are no less sophisticated to a mathematician who has never seen them than the 3G model is to a MathOverflow user. So I don't think that the "sophistication" barrier is so drastic that a single 3G journal in, say, higher categories wouldn't attract enough authors and reviewers just because the process is so "involved". On the contrary, there's some game/fun aspect to it which should make some people curious enough to try.
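    As a footnote to point 2 above: the rule that an extended comment counts toward the paper's "discussion index" only if both the comment itself and its mandatory exercises clear rank thresholds could be sketched as follows. The thresholds and function names are invented purely for illustration.

```python
# Illustrative sketch of the discussion-index rule: an extended comment
# contributes only if it clears its own rank threshold AND every one of
# its mandatory exercises clears the exercise threshold. Numbers are
# made-up assumptions.

COMMENT_RANK_MIN = 5
EXERCISE_RANK_MIN = 3

def comment_contribution(comment_rank: int, exercise_ranks: list[int]) -> int:
    """Return the comment's contribution to the paper's discussion index,
    or 0 if the comment or any of its required exercises falls short."""
    if comment_rank < COMMENT_RANK_MIN:
        return 0
    if any(r < EXERCISE_RANK_MIN for r in exercise_ranks):
        return 0
    return comment_rank

def discussion_index(comments: list[tuple[int, list[int]]]) -> int:
    """Roughly 'the sum of user evaluations of comments', counting only
    comments that qualify together with their exercises."""
    return sum(comment_contribution(rank, exs) for rank, exs in comments)
```

    So a well-ranked comment whose attached exercise was downvoted into obscurity would contribute nothing, which is the intended pressure toward writing solvable, relevant exercises.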
    • CommentRowNumber6.
    • CommentAuthorAlexander Woo
    • CommentTimeJun 6th 2013

    Dear Sergey,

    I fear that you are not really getting my concern.

    The problem here is that, under the current system, papers of close-to-zero interest still do get refereed for third-and-fourth-tier journals. This refereeing provides a valuable service for the authors of these papers and the communities they work in.

    Unless asked by an editor, no expert is going to touch one of these papers even with a ten-foot pole. How will your system cater to these papers?

    I wrote about this problem in more detail long ago in


    Part of the problem is that you are assuming the mathematician at Alma College wants to get noticed by a research community. This is NOT going to be the case. The mathematician at Alma College is first and foremost a teacher. They are doing research primarily to be a living example of mathematics to their students, and not really for the purpose of contributing to mathematical knowledge. They simply want some expert out there to check to make sure they haven’t forgotten how to do mathematics and are still providing a good example of doing mathematics to their students.

    The current system organizes the provision of a subsidy (in refereeing time) by those with more expertise to those with less expertise. The provision of this subsidy has tremendous benefits to mathematics, and especially to education at the undergraduate level. I don’t see how a collaborative open refereeing system organizes the provision of this subsidy.

    • CommentRowNumber7.
    • CommentAuthorSergey Melikhov
    • CommentTimeJun 6th 2013
    • (edited Jun 6th 2013)

    Dear Alexander,

    thank you for reiterating your concern. I see that the procedure that I was trying to describe was not really clear, even in my mind.

    As I said, I'm not suggesting that editors stop drafting referees for the obscure paper in graph theory. I said that maybe in a few generations this would no longer be needed because of cultural changes. By this I only mean that all papers will somehow get reviewed and even overreviewed. (For example, if people find out that in order to get a position at Alma College a number of insightful comments on others' work helps better than an original but totally unilluminating paper.) Unless this happens, I'm all for keeping the current system of refereeing requests for those papers that don't otherwise get enough useful comments.

    One apparent problem with what I've proposed above is that the "discussion index" of the paper, which is roughly the sum of user evaluations of comments and questions, is not really an accurate measure of anything. If the discussion index is low, it could mean that there are not enough good comments, or it could mean that an invited referee has written a perfect review but nobody cares about it (as in the case of your Alma lecturer).

    Both problems admit the same solution, however: to ask a new person to look again at the paper. If there already are a lot of good comments, they can simply upvote each comment, and after a few iterations the desired discussion index would be reached. If the existing review is insufficient, the new reviewer can improve it or write a better one, or decline and pass on to the next potential reviewer. Note that the declining reviewer can give a partial feedback, and these accumulate, unlike in traditional journals.
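    The iteration just described can be modelled as a toy loop: while the paper's discussion index is below the journal's threshold, draft one more reviewer. In this deliberately simplified sketch every drafted reviewer endorses each existing comment once; declining with partial feedback, improving reviews and so on are left out, and all names are hypothetical.

```python
# Toy model of iterated reviewer drafting: each drafted reviewer upvotes
# every existing comment once, so partial feedback accumulates instead of
# being lost, unlike with traditional journals. A simplifying assumption,
# not a real protocol.

def reach_threshold(comment_scores: list[int], threshold: int, pool_size: int):
    """Draft reviewers one at a time until the discussion index (the sum
    of comment scores) reaches the threshold. Returns the number of
    reviewers drafted, or None if the reviewer pool is exhausted first."""
    drafted = 0
    while sum(comment_scores) < threshold:
        if drafted == pool_size:
            return None
        comment_scores = [s + 1 for s in comment_scores]
        drafted += 1
    return drafted
```

    For example, two comments scored 2 and 1 reach a threshold of 9 after three drafted reviewers, whereas a lone comment chasing a threshold of 100 simply exhausts a small pool.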

    The problem with this method is that it clearly takes more time and effort on the part of reviewers than in most traditional journals. This could be seen as an advantage, because more people will have seen the obscure paper of the Alma lecturer, or a disadvantage, if one assumes that obscure papers are unlikely to be interesting.
    In the latter case some familiar solution can be used: either a many-tier family of 3G journals, where higher thresholds for the discussion index correspond to higher quality standards for accepted papers, or having the poll on paper acceptance triggered manually by the editor, regardless of the discussion index.

    In any case, I don't see how the Alma lecturer could possibly suffer from this system. On the contrary, publicity raises the chances that their paper will eventually get a genuine peer review.

    What I didn't specify is how the proposed system would enable revisions of the paper so that comments and exercises don't just all expire with every new revision. This is arguably a technical issue, and not an easy one, so let me not address it today. Another technical issue is that if several people work on reviewing a paper, one might well want to improve another's comments, which calls for a MathOverflow-style wiki-editing of branching comments (where different edits compete with each other). This is not entirely trivial and I did actually describe such a system in my note about (political) informed democracy. But I doubt that it's appropriate to dwell on such issues at this point.
    • CommentRowNumber8.
    • CommentAuthorAlexander Woo
    • CommentTimeJun 6th 2013

    Sorry, Sergey, for the iterated questions, but…

    Thank you for answering my concerns about refereeing. Now onto the next question: Who is going to bother voting on whether or not to accept this hypothetical paper? I seriously doubt anyone would care enough about the paper to even spend fifteen minutes to read it and do the exercises. The assigned referee does it because he or she has been asked to. Who else? Does the editor also need to ask specific individuals to read the referee report(s), skim the paper, and vote?

    There are a fair number of write-only journals out there, and they serve a valuable purpose. How does acceptance or rejection work for write-only journals?

    As you perfectly well know, in the United States where almost all citizens purport to value being in a democracy, half the eligible population does not bother voting, and, even worse, in some parts of the country, when it comes to local elections for mayor, school board, and the like (where each vote has a much greater effect than in a Presidential election!), three quarters of the people don’t bother voting. Voting on a paper is perhaps more like jury service than voting in an election, and my experience talking to other people tells me that almost everyone would avoid jury service if they had a free choice in the matter.

    I don’t think I spend 15 minutes on more than a dozen recent papers (say written within the last 3 years) per year, not counting the three or four I get asked to referee. I might be comparatively lazy as far as reading papers is concerned, but I don’t think I’m a whole order of magnitude lazier than average.

    (Completely separately - just for the record - I am sure that, assuming you have done enough research to earn a PhD, your teaching ability (in the style of teaching desired there) will matter far more than any amount or type of research activity as far as getting a position at Alma College is concerned.)

  4. Alexander, your questions are very helpful to sort out the details of my proposal and to judge how far it might be from being realistic. I appreciate your attention.

    I didn't mean to suggest that a high fraction of people working in some area is expected to vote on every paper in that area. That is certainly not feasible. But let us not forget that popular democratic ideas and values diverge significantly from the reality (in the US, say) and from political theory. In the 1940s, Joseph Schumpeter redefined the democratic method as "that institutional arrangement for arriving at political decisions in which individuals acquire the power to decide by means of a competitive struggle for the people’s vote". Since the 1950s and Robert Dahl, variations of this definition have been referred to as "polyarchy", and it is argued (or at least it was so argued before the emergence of the internet) that, firstly, this is about the best political system that is practically feasible (as opposed to the utopia of genuine democracy, either direct or representative) and, secondly, that it has been more or less achieved in the US and Western Europe. So polyarchy might be a more accurate description of what is commonly referred to as democracy, and this concerns us here because informed democracy is not at all supposed to be a polyarchy. In informed democracy, individuals acquire the power to cast a vote towards a decision by means of demonstrating a minimal competence regarding the advantages and disadvantages of that decision, as judged on the basis of a competitive struggle for the people's vote between arguments pro and contra, and qualification tests.
    A little involved, unfortunately, but it doesn't seem to simplify.

    If we follow the model of a multi-tier system of 3G journals, I would think that as long as journals of higher acceptance standards manage to get enough people to review the paper, they should experience no lack of voters. So this seems to be more of a concern for lower-class journals. For a lower-class journal, I think it's perfectly fine if just 3-5 people vote on each paper, as long as they have no conflicts of interest. This raises the issue that the authors could recruit their friends to vote. So I think there should be a considerable reputation barrier (which brings significant complications into launching a new 3G journal in a new area). Still, there could be many cases such as a student's adviser voting on their paper. Some of this could be explicitly prohibited, perhaps to the point of disallowing people from the same institution to vote on each other's papers.

    With the division of labor offered by collaborative open refereeing, I think it is realistic that instead of just one referee doing all the work on a paper, there will be, even for a boring paper in a lower-class journal, 2-3 people looking at the paper somewhat beyond the introduction, and 3-5 further people looking at least at their reviews. Most of these people would be drafted by the editor, where the "editor" could in fact be a simple computer program. If the paper is not ever going to get 5 readers looking beyond the introduction, and 10 additional readers of the introduction and/or review, does it really make sense to publish it in a journal and not just keep it on the arXiv? The author surely needs help from the experts, but the experts also have other things to do than helping every fermatist, so the dividing line has to be drawn somewhere. Conference talks, say, do often gather more than 15 people, and even a 20-minute talk often goes somewhat beyond the introduction. Now if the paper does have those 15 potential readers, it should not be impossible to locate some of them by trial and error, even for a Turing machine. This is one important distinction of the modern era from pre-internet times: Google can find.

    This is not all, however, because not all of those 2-3 plus 3-5 people who have already glanced over the paper and its review would qualify to vote. Some of them may have insufficient reputation and others conflicts of interest. Well, if 3 expert voters have not been found, the automatic "editor" may continue bothering mathematicians until it finds those voters, indeed.
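    The automatic "editor" of the last two paragraphs might, in a toy form, look like this. The reputation threshold, the quorum size and the same-institution conflict-of-interest rule are all illustrative assumptions.

```python
# Toy version of the automatic "editor": keep contacting candidate
# mathematicians until a quorum qualifies to vote -- enough reputation
# and no conflict of interest (here: same institution as an author).
# Thresholds and the candidate format are invented for illustration.

MIN_REPUTATION = 50
QUORUM = 3

def find_voters(candidates, author_institutions):
    """candidates: iterable of (name, reputation, institution) tuples,
    e.g. as located by trial and error among arXiv subscribers in the
    area. Returns the first QUORUM qualified voters, or None if the
    candidate pool is exhausted first."""
    voters = []
    for name, reputation, institution in candidates:
        if reputation < MIN_REPUTATION:
            continue   # insufficient area-specific reputation
        if institution in author_institutions:
            continue   # conflict of interest: same institution as an author
        voters.append(name)
        if len(voters) == QUORUM:
            return voters
    return None
```

    The point of the sketch is only that the filtering is mechanical; the hard part, as discussed above, is having a large enough candidate pool at all.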

    Alexander - I think you have somewhat convinced me, and I now admit that the proposed system does create some barriers that may be essential (though arguably not fatal) for the teacher at Alma College. Of course, old-fashioned journals could coexist with these "3G journals" as one possible solution. On the other hand, I do think that there's something fundamentally wrong with the current incentives for doing research, and especially with the trend. I also think there is something fundamentally different between an obscure mathematician who can't find any job but is capable of proving very powerful results (as in Yitang Zhang's case) and a college teacher who does not really care if their work is ever going to be noticed by the research community (as you said) but still wants free advice from an expert. The current trend actually seems to be that the notion of "research" is being gradually redefined, in part by the needs of such college teachers, and at the expense of obscure but genuine mathematicians such as Perelman, Mochizuki and Zhang. This cultural change may have positive aspects, such as the pressure to write in more accessible language and with more colorful pictures, but it also includes increasing overreliance on fashion, in the form of the considerable preference that the current system gives to a weak paper in a fashionable area over any paper that is orthogonal to fashion trends.
    • CommentRowNumber10.
    • CommentAuthorAlexander Woo
    • CommentTimeJun 7th 2013

    As heretical as this viewpoint might be on a forum most of whose audience is research mathematicians, I think broadening (or at least maintaining) access to a mathematical education in contact with the actual practice of mathematics is far more important for society than accumulating proofs of theorems. (Of course, as anyone vaguely conversant with economics knows, it does NOT follow that everyone should personally focus more on education rather than research.)

    I think almost everyone agrees that this goal of broadening access to a genuine mathematical education is an important one, even if they might disagree with me on its relative importance. I just want to make sure this goal does not get lost when we discuss future models for publishing.