    • Comment 1
    • Author: Mark C. Wilson
    • Date: Mar 9th 2013 (edited Mar 10th 2013)

    This is the title of a paper by Brembs and Munafo (http://arxiv.org/pdf/1301.3748v1.pdf), which Tim Gowers recently shared on Google+. It would be nice if these kinds of discussions could occur here at Math2.0 rather than in assorted blog comments scattered all over the internet. The whole point of this forum is to centralize such discussions!

    The conclusions:

    1) Journal rank is a weak to moderate predictor of scientific impact;
    2) Journal rank is a moderate to strong predictor of both intentional and unintentional scientific unreliability;
    3) Journal rank is expensive, delays science and frustrates researchers; and,
    4) Journal rank as established by IF violates even the most basic scientific standards, but predicts subjective judgments of journal quality.

    Tim says: “It’s hard to believe that the conclusions of this article apply to mathematics – surely an article published in the Annals is, on average, genuinely better than an article published in, for example, the Journal of the LMS. But probably a corresponding statement would seem obviously true to people in the medical sciences. It would be interesting to do a similar study in mathematics.”

    Point 1) has been discussed in the IMU Citation Statistics report from 2008 (highly skewed citation distributions), 3) is uncontroversial and 4) seems hardly surprising. Clearly 2) is the interesting one. In some fields there are “glamour mags”, such as Science and Nature, which tend to publish rather abbreviated descriptions of “exciting” research, not all of which turns out to be reliable. The career benefits of publishing in such venues are apparently very large. It seems that in mathematics, we don’t have this situation. Many people would like to publish in Annals of Math or Inventiones, but (presumably) they can’t bypass normal peer review just because the result is exciting. I would guess that the refereeing would be much tougher if one claimed to have proved the Riemann hypothesis. And because the entire work can (in principle) be checked by referees, there shouldn’t be anywhere to hide (in principle). In experimental sciences, there are many things that can go wrong: poor design, statistical errors, bad interpretation, outright faking of results.
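
    To illustrate the skewness point in 1), here is a small, purely illustrative simulation (my own sketch, not taken from the IMU report or from Brembs and Munafo; the lognormal distributions and all parameters are invented for illustration). The point is that a journal-level mean citation rate, which is roughly what the impact factor measures, can say rather little about how any individual article in that journal fares:

    # Illustrative sketch only: invented heavy-tailed citation distributions
    # for two hypothetical journals, to show that a journal-level mean
    # (an impact-factor-like number) is a weak predictor of the citations
    # of any single article.
    import numpy as np

    rng = np.random.default_rng(0)

    # Per-article citation counts drawn from lognormal (heavy-tailed)
    # distributions; journal B is slightly "better" on average.
    journal_a = rng.lognormal(mean=1.0, sigma=1.2, size=2000)
    journal_b = rng.lognormal(mean=1.4, sigma=1.2, size=2000)

    print("mean (IF-like): A = %.1f, B = %.1f" % (journal_a.mean(), journal_b.mean()))
    print("median article: A = %.1f, B = %.1f" % (np.median(journal_a), np.median(journal_b)))

    # How often does a random article from the "lower-ranked" journal A
    # out-cite a random article from the "higher-ranked" journal B?
    wins = (rng.choice(journal_a, 10000) > rng.choice(journal_b, 10000)).mean()
    print("P(random A article beats random B article) = %.2f" % wins)

    With these made-up parameters, the “better” journal has a clearly higher mean (driven by its tail), yet a random article from the “worse” journal still out-cites a random article from the “better” one a substantial fraction of the time.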

    So, is mathematics a special case? My impression is that mathematicians don’t quote impact factors, so perhaps our community is lucky. However, too much rejection of quantitative measures can lead to other forms of error. I would like to know why “surely an article published in the Annals is, on average, genuinely better than an article published in, for example, the Journal of the LMS”. I have noticed a lack of consensus on what the “top” journals are. Of course, maybe it really doesn’t matter so much, in the age of the individual article in which we now live.

    So my main question is: exactly what ought we to conclude when evaluating a candidate whose papers are mostly in Annals of Math, as opposed to one whose papers are largely in Proc LMS, if reading the papers is out of the question?

    • Comment 2
    • Author: Henry Cohn
    • Date: Mar 9th 2013

    It’s hard to believe that the conclusions of this article apply to mathematics – surely an article published in the Annals is, on average, genuinely better than an article published in, for example, the Journal of the LMS. But probably a corresponding statement would seem obviously true to people in the medical sciences.

    I’m skeptical that the corresponding statement would seem true in the medical sciences (for some of the same reasons Mark discusses above). I hear a lot of complaints from scientists about Science and Nature publishing too many over-hyped papers that don’t turn out to be as important as claimed, or even correct, in a way that rarely happens in mathematics.

    3) is uncontroversial

    It seems pretty controversial to me. Journal rank does not delay science at all if your field uses preprints, and it is not intrinsically expensive. (It does make it more difficult to start new journals, and some of the old journals are unnecessarily expensive, but that’s a pretty strained interpretation of “journal rank is expensive”. The Annals and JAMS are not particularly expensive.) Maybe it does frustrate researchers, but I see no reason to think it is more frustrating than other forms of assessment.

    exactly what ought we to conclude when evaluating a candidate whose papers are mostly in Annals of Math, as opposed to one whose papers are largely in Proc LMS, if reading the papers is out of the question?

    This is certainly a striking difference between the CVs, and if you had to make a decision on this basis, of course it would make sense to go with the Annals papers. However, hiring based primarily on CV lines makes no sense (and the whole point of letters of recommendation is to add context and expert judgement). A properly functioning hiring system should never come down to counting papers in different journals.

    • Comment 3
    • Author: Mark C. Wilson
    • Date: Jun 14th 2013 (edited Jun 14th 2013)

    I went to an internal promotions workshop yesterday. At my allegedly top-100, and in any case certainly respectable, research university, subjective journal rankings (and perhaps impact factors) of a researcher’s papers are still considered important evidence of the quality of that researcher. I attempted to argue against this, but had trouble articulating what I wanted to say. Note that for promotions, reading papers is out of the question. Letters of reference are provided by applicants for sufficiently high positions (this will include me this year, perhaps).

    Presumably, the argument for the status quo is as follows. Journals with higher JIF are generally considered more prestigious internationally. These attract more and better submissions on average, and allow editors to be more choosy. So if you have a paper in one of those, that is membership in an exclusive club. Surely that is positively correlated with quality somehow.

    Is there a way to make more rigorous either this argument, or the counter-argument that it isn’t really useful to look at journals at all? One thing that bothers me is that we don’t actually know anything about the peer review mechanisms of most journals. For all we know, the editor is acting as a dictator. Igor Pak’s refereeing war stories are worth looking at in this regard (a shortened, edited - by me - version is quoted below):


    I recall … Don Knuth submitted a paper under an assumed name with an obscure college address, in order to get full refereeing treatment (the paper was accepted and eventually published under Knuth’s real name). I tried this once, to an unexpected outcome (let me not name the journal or describe the stupendous effort I made to create a fake identity). The referee wrote that the paper was correct, rather interesting but “not quite good enough” for their allegedly excellent journal. The editor was very sympathetic if a bit condescending, asking me not to lose hope, to work on my papers harder and to submit them again. So I tried submitting to a competing journal of equal stature, this time under my own name. The paper was accepted in a matter of weeks.

    A combinatorialist I know (who shall remain anonymous) had the following story with Duke J. Math. A year and a half after submission, the paper was rejected with three (!) reports mostly describing typos. The authors were dismayed and consulted a CS colleague. That colleague noticed that the three reports were in .pdf but had been made by cropping from longer files. It turns out that if the cropping is done straightforwardly, the cropped portions are still hidden in the files. Using some hacking software, the top portions of the reports were uncovered. The authors discovered that they were extremely positive, giving great praise of the paper. Now the authors believe that the editor despised combinatorics (or their branch of combinatorics) and was fishing for a bad report. After three tries, he gave up and sent them the cropped reports.

    Another one of my stories is with the Journal of the AMS. A year after submission, one of my papers was rejected with the following remarkable referee report, which I quote here in full: “The results are probably well known. The authors should consult with experts.”