    • Comment 1
    • Author: Andrew Stacey
    • Time: Feb 7th 2012
    • (edited Feb 9th 2012)

    I hope that the title is legible; if not, it is meant to read

    \langle\text{Peer Review}, \text{Refereeing}\rangle = 0.

    It’s a common defence of current journals: that journals provide peer review and that’s part of the value that they add to an article.

    Total bunkum.

    I will admit that my experience of peer review and of refereeing is limited. But it is extremely varied. I’ve participated in refereeing from both sides: as an author and as a referee. I’ve also participated in peer review, mainly as a reviewer. And the two have almost nothing in common.

    “Refereeing” is something that is very hard to define. It can vary immensely, from actual peer review to simply glancing at the paper in question. That is the main problem with it: the questions asked by editors and journals vary so much, and the standard of refereeing is so variable, that the range of things “This article has been refereed” could mean is so large as to make the phrase effectively meaningless.

    “Peer review” on the other hand is easier to define. It is when your peers look through your paper and tear it to shreds. It can be a lot of fun - despite my description. So long as everyone sticks with it, it almost always leads to a better paper than was first produced. One of the most extensive peer review sessions that I participated in took place on the n-Category Café. After considerable discussion about the paper in question, John Baez (one of the authors) remarked:

    Thanks for the new round of comments, Andrew. I hope that when we submit this paper for publication — where? does anyone have suggestions? — you are the referee. And I hope that when you referee it, you write: “Thanks to the careful vetting this paper has received at the n-Category Café, it needs no further improvements.”

    to which I replied:

    I suspect that by participating in this discussion then I’ve removed myself from the list of potential referees. But actually you want a completely different referee who, nonetheless, says exactly the same thing.

    What I was trying to convey was that the referee should look at the peer review and see that it has already been gone over with a fine-toothed comb and therefore be able to concentrate on the evaluation part of their job: is the subject matter of the paper suitable for this journal? Not: is it written well enough for this journal?

    The problem is, of course, that not everyone has access to the resources of the n-Category Café. That’s why I like Tim Gowers’ suggestion in his more modest proposal: it offers a way of opening this up to all mathematicians. As always, the difficulty is in the details! So could such a system work? What would be the incentives? How could we ensure that peer review was available to all, without the system getting swamped by papers that really need another revision even before they are looked at by another mathematician?

    For closing remarks, I would like to note that whilst I think that peer review and refereeing are orthogonal, they can happen simultaneously. I have no idea who the referee was on my first solo paper, but whoever it was - and the editor - worked extremely hard to show me how to make it far, far better than it was originally (and if you think it isn’t a great paper, just be thankful that you never read the version I originally submitted!).

    • Comment 2
    • Author: Terence Tao
    • Time: Feb 9th 2012

    As an editor of the Journal of the AMS, I solicited two types of reports for a given submission: quick opinions, which focused on the significance of the result, the degree of technical advance, and the quality of exposition, and full reports, which focused more on the correctness of the arguments, the relationship to the existing literature, and suggestions for improving the paper. In practice, of course, the distinction was blurry, as the referees would naturally offer opinions beyond whatever narrow parameters I might specify. The editorial board would then consider all of these factors when voting on a JAMS paper. (This did unfortunately lead to some upset authors wondering why a paper with good full reports but unenthusiastic quick opinions was rejected, but that is par for the course at a top-tier journal.)

    But JAMS is one of the “big three” maths journals, with a particularly thorough refereeing process. At the more median level of maths journal, one typically just relies on one or two referee reports, and so the feedback one obtains is indeed much more variable. But it is usually good enough for the basic purpose of deciding whether the paper reaches an acceptable level of quality to be published in a median-level journal, as opposed to the more demanding question of whether it is of such exceptional quality in all aspects that it deserves to go into a top-tier journal.

    • Comment 3
    • Author: Charles Rezk
    • Time: Feb 9th 2012

    It should probably be pointed out that the kind of public peer review that you might see on n-Category Cafe is not going to be for everyone. Some folks can thrive on that sort of thing, but others are going to be very reluctant to expose themselves to scrutiny that way. Even if care is taken to make everything constructive and polite. Perhaps especially if they are early in their career, or feel otherwise vulnerable.
    • Comment 4
    • Author: John Baez
    • Time: Feb 10th 2012

    I agree with Charles Rezk’s remark here. Many of us here are somewhat more bold or adventurous or tough-skinned or foolhardy than the average mathematician, so we should always keep that in mind when imagining what systems others would like to use.

    • Comment 5

    I very much agree with Andrew: despite the quite different perception that still exists between an article made public via the web (say, via the arXiv), and so exposed to any possible positive or negative comment by peers (though not formally refereed), and a published article, we should be aware of and face the reality that a "published paper" is in most cases just a paper about which a single anonymous referee has said "ok, this is good for me".
    • Comment 6
    • Author: Andrew Stacey
    • Time: Feb 11th 2012

    What Terry writes about JAMS sounds great - until I start thinking about the details. So a paper in JAMS has undergone a rigorous peer review and I can cite it without fear of embarrassment - or at least, the editors of JAMS will be just as embarrassed as me. But what if a paper in JAMS cites a paper in a “lesser” journal? Can I now assume that the other article has been vetted? And how far back does this guarantee extend? Is it true that all articles in JAMS have this level of certification?

    It’s too random, and it hints at a two-tier system with an inner circle of “vetted articles”. I wouldn’t mind this too much if it were all explicit and there were a list of which journals were Top Notch and which just published any old rubbish.

    One of my biggest gripes with the current system is that it is too random. If you get a good editor and a good referee then, yes, the system can really work well for you. But if you get a busy editor and a rubbish referee, then that’s your bad luck and there’s not a lot you can do. I was really lucky with my first solo article: I had a fantastic editor and a fantastic referee, and they turned the article into something far better than it originally was. But I’ve had many (more) experiences of the other type, sadly. We’re meant to be professionals, so we should be professional about this. A physicist of my acquaintance is fond of quoting the difference between a professional and an amateur: “An amateur practises until he always gets it right; a professional practises until she never gets it wrong.”

    I also agree with Charles that the kind of peer review that goes on at the n-Café isn’t for everyone. But I would be willing to bet that it is for more people than currently take part, and a system like the one Tim Gowers outlines would make it possible to join in without hosting one’s own blog. The great thing about it is that it improves the situation for everyone, whether they take part or not, since the papers that have gone through it will be better papers and thus easier to referee, easier to read, and easier to understand. Moreover, the aim of Tim Gowers’ suggestion - as I see it - is to be a stage before the paper is submitted, so it isn’t about reviewing the paper afterwards but about polishing the paper beforehand.

    I vividly remember what it was like to be a new mathematician. I almost felt as though I wanted to publish my papers without telling anyone about them, for fear that someone would come along and say that they were wrong, or that I was doing it all wrong, or that I was a rubbish mathematician. If I could just sneak a couple of papers into the system without anyone really looking at them, I’d feel a bit more confident! Nonsense, I know, but I do have sympathy with those who don’t want to open their work up to such scrutiny in such a public way.

    • Comment 7
    • Author: Hal Swyers
    • Time: Feb 17th 2012

    I hope an engineer's opinion is welcome.
    Looking at your problem, it seems that there are a few overlapping issues:

    1) Cost and fairness - Why should a community pay fees for a product that profits off others’ labor? Granted, there is a need to sustain an organization that can handle the costs of publication, but in this era of online publication, is the current system fair to the community?

    2) Credibility - In most cases it is assumed that a paper that has survived some peer review process and has been published in a major journal is credible. There is a type of economy in this process, centered on the exchange of reputation, which is the reward for the person submitting the paper. The process is also important to those who wish to protect the interests of the community.

    3) Volume and timing of content - With the increase in the number of trained professionals, and the increase in the availability of media, the expectations of individuals and of the community have changed. Being first is always important, along with publishing frequently. In addition, the accessibility of material to the community matters for the citation system: having material behind a cost barrier places individuals behind the wall at a disadvantage, the exchange of information is restricted, and the citation system is probably biased.

    I would think that the biggest question would center around how one retains and tasks reviewers. If the review process is entirely volunteer, then the question is how one retains a sufficient critical mass of reviewers to ensure there is adequate throughput of material.

    I would offer the solution of building upon a barter system. What is it that people want? For authors, I think one element is to have their work reviewed in order to gain credibility and authority. From a reviewer’s standpoint, I would think it is to retain credibility and authority and to maintain the community standard.

    With that in mind, it would seem natural that the buy-in for having your paper reviewed is an agreement to review papers yourself once you reach a certain level of authority. The review process can still be blind and randomized, which would allow for the protection of authority and prevent influence and corruption. The first step, though, would be to identify individuals who would be the first "investors" in the community: people with sufficient credibility and authority to create a pool of "capital" that would attract others into the system.
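    To make the accounting concrete, here is a minimal sketch (in Python) of how such a credit-and-assignment scheme might look. Everything in it - the member names, the thresholds, and the idea that a submission costs a fixed number of review credits - is an illustrative assumption layered on top of the proposal above, not part of it: submitting a paper debits an author's balance, refereeing earns credits back, and reviewers are drawn blindly and at random from members past an authority threshold, never including the author.

```python
import random
from dataclasses import dataclass

# Hypothetical parameters, not from the proposal itself: how many accepted
# papers qualify someone as a reviewer, and how many reviews a submission "costs".
AUTHORITY_THRESHOLD = 1
SUBMISSION_COST = 2

@dataclass
class Member:
    name: str
    accepted_papers: int = 0
    credits: int = 0          # positive: reviews banked; negative: reviews owed

@dataclass
class Submission:
    paper_id: str
    author: str

class ReviewPool:
    def __init__(self, members):
        self.members = {m.name: m for m in members}
        self.queue = []

    def submit(self, paper_id, author):
        """Submitting a paper debits the author's review-credit balance."""
        self.members[author].credits -= SUBMISSION_COST
        self.queue.append(Submission(paper_id, author))

    def eligible_reviewers(self, submission):
        """Anyone past the authority threshold, except the author (blindness)."""
        return [m for m in self.members.values()
                if m.name != submission.author
                and m.accepted_papers >= AUTHORITY_THRESHOLD]

    def assign(self, submission):
        """Randomized, blind assignment; doing the review earns a credit back."""
        reviewer = random.choice(self.eligible_reviewers(submission))
        reviewer.credits += 1
        return reviewer.name   # would be kept hidden from the author in practice

# Example: two established "investors" seed the pool; a newcomer submits.
pool = ReviewPool([Member("A", accepted_papers=3),
                   Member("B", accepted_papers=2),
                   Member("C", accepted_papers=0)])
pool.submit("paper-1", author="C")
print(pool.assign(pool.queue[0]))  # prints "A" or "B", never "C"
```

    One side effect of tracking an explicit balance is that the "critical mass of reviewers" question above becomes measurable: the pool only stays in balance if, on average, members past the threshold perform about as many reviews as their own submissions cost.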

    Those are my thoughts, I hope people don't mind me sharing.