Welcome to Math2.0
    • CommentRowNumber1.
    • CommentAuthorMark C. Wilson
    • CommentTimeApr 23rd 2012

    There has been much discussion here of the future of peer-reviewed journals. For simplicity I assume here that we stick with the current journal system (we can discuss more radical proposals later). I see two main problems: 1) a lot of low-quality articles and 2) difficulty in finding high-quality relevant literature. These may be related to some extent, because if you don’t even know about a paper that already does what you intend to do (only better, and several years earlier), you will likely publish something unnecessary.

    It is clear that the refereeing system is under strain: submissions to established, reputable journals take a very long time to referee (in my experience the delay is increasing, and six months is now very standard for a first report). I have read reports (I can’t recall where) that editors are finding it harder to get good referees. As I said in the “Why referee” thread, this is not surprising given the lack of rewards for conscientious referees. The proliferation of new outlets (many of the vanity-press, low-quality type) puts even more load on the system.

    In order to solve 1) we need good-quality reviewing, both pre- and post-publication. I focus here on pre-publication review (“refereeing”). This takes time, so there is an upper bound on how many refereeing assignments one can perform. For example, I have a paper for a SIAM journal now that is extremely long and dense, and I am dreading it (unfortunately it looks good and is well written, so I can’t just reject it). Suppose there are N active researchers, each submitting an average of c papers per year, with k the mean number of authors per paper and r the mean number of referees per paper. Plausible current values for c, k, r might be 3, 2, 3, but I have no serious data. The community then produces Nc/k papers per year, each requiring r reports, so the mean number of referee reports required per researcher is cr/k, which exceeds c whenever r > k. Thus we need to referee more papers than we submit, on average. I don’t know many who do that. In other threads we have discussed dividing up the refereeing work so that, say, graduate students do more of the time-intensive line-by-line checking, while more senior people comment on significance and connection to the literature. Presumably junior people would referee more papers than they submit under this system, as would senior ones?
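    The back-of-the-envelope arithmetic above can be checked in a few lines of Python. The values c = 3, k = 2, r = 3 are the comment’s own rough guesses, not real data:

    ```python
    # Referee-load estimate: N active researchers each submit c papers/year,
    # papers have k authors on average, and each paper needs r referee reports.
    # Total papers per year = N*c/k, so total reports = N*c*r/k, which works
    # out to c*r/k reports per researcher per year.

    def reports_per_researcher(c: float, k: float, r: float) -> float:
        """Mean number of referee reports each researcher owes per year."""
        return c * r / k

    c, k, r = 3, 2, 3          # rough guesses from the discussion, not data
    load = reports_per_researcher(c, k, r)
    print(load)                # 4.5 reports/year, vs. c = 3 papers submitted
    print(load > c)            # True whenever r > k
    ```

    With these guesses each researcher owes about 4.5 reports per year while submitting only 3 papers, which is the sense in which we all need to referee more than we publish.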

    I know too many people who publish way more than they referee. How can incentives be put in place to ensure that researchers do their fair share of peer review?

    Suppose we can somehow stabilize the situation so that researchers all do their fair share of high-quality refereeing. Unfortunately, there are still at least several hundred articles per year that an individual researcher cannot in good conscience ignore. It is then important to be able to find good-quality relevant research quickly. Abstracts and keyword searches are useful tools. In some fields (e.g. computer science) we can rely on the programme committees of conferences to create artificial scarcity: we restrict our attention to a very small number of outlets and mostly ignore anything outside them. I don’t see that working well in mathematics, which is more organic as a subject and has more reputable outlets.

    One suggestion is to be more restrictive about format. I have advocated that every paper should have a compulsory section entitled “Our contributions” which justifies the paper. This could be broken into parts describing methodology, results, models, terminology, etc., and given to authors as a standard questionnaire. The keywords could also be made less free-form.

    I see a lot of papers that don’t cite relevant literature (e.g. my papers!). In many cases I suspect the authors are using the “restrict to a few conferences” strategy and not seriously using Google etc. Once I have some ideas, I spend at least an hour searching before proceeding to write a paper. How can something like that be incentivized? What do other people do?

    (Noam Nisan’s post on Readerless Publications, http://agtb.wordpress.com/2009/05/14/readerless-publications/, started me thinking about this issue a long time ago.)

    • CommentRowNumber2.
    • CommentAuthorMark C. Wilson
    • CommentTimeApr 24th 2012

    Since no one has commented, perhaps it wasn’t written well. I would appreciate discussion of:

    • How can incentives be put in place to ensure that researchers do their fair share of pre-publication peer review? I am concerned that the whole peer-review system could collapse, leaving us with only post-publication review. I think the latter could easily advantage socially well-connected researchers even more than the current system does. Of course that may not be bad for science (see the interesting STOC paper by Kleinberg and Oren <www.cs.cornell.edu/home/kleinber/stoc11-credit.pdf>), but it would really annoy me.

    • How can we improve searching for research results? This includes better citation methods, abstracting, keyword searching, post-publication review, …

    • CommentRowNumber3.
    • CommentAuthorHenry Cohn
    • CommentTimeApr 24th 2012

    Sorry, I’ve been meaning to reply but I’ve been distracted. I’ll follow up later, but here are some brief thoughts:

    I’m less worried about the collapse of peer review, but it is definitely something we should be keeping an eye on.

    Searchability is a big deal. I think part of the problem is psychological: we all know the terrible feeling when you have an interesting idea, get excited about it, and then start to wonder whether it’s already known. It can take real effort to make yourself do a thorough search, since you hope not to find anything, and the lazy/dishonest approach is just to write the paper and hope for the best. Anything that makes searching easier should help us get past this psychological barrier.

    One thing I’d love to see is better incentives for exposition, since that would really help with systematizing knowledge, collecting references, etc. Of course we can’t just decide to value it more (it has to be an organic process somehow), but perhaps we could find better ways to recognize it. For example, there’s a much bigger range of prestige for research journals than for exposition. A few expository papers get singled out for prizes, but there’s less recognition for excellent expositions that don’t happen to catch the eye of a prize committee. A new journal sounds hard, but perhaps some other form of distinction could be useful?

    On the recognition of good exposition, and the need for this kind of contribution to mathematics, there is a really interesting experiment underway in physics: living reviews. A new kind of electronic journal, or an electronic appendix to an existing journal, invites scientists in the field to write and maintain large, up-to-date reviews of a given subdomain. It is a lot of work, but it is also prestigious, because the invitation comes from a distinguished editorial board. The electronic format is essential, since it allows the review to evolve.

    I think we could set up something like that in maths, and that it would be very effective. In fact, I think it could also help to partially solve another common problem in math departments: people who get lost and stop, or almost stop, doing research despite having made very valuable contributions. I think they would have more time to write such a review than prominent mathematicians, but could do it just as well, and it might even get them back on track. If a very prestigious journal were to set up this kind of review, I am pretty sure it would develop quite fast.

    • CommentRowNumber5.
    • CommentAuthorHenry Cohn
    • CommentTimeApr 25th 2012

    The Electronic Journal of Combinatorics has “dynamic surveys”, some of which have been extremely successful. For example, Radziszowski has maintained the definitive account of progress on small Ramsey numbers, with 13 updates over the last 17 years.

    • CommentRowNumber6.
    • CommentAuthorDavidRoberts
    • CommentTimeApr 26th 2012
    • CommentRowNumber7.
    • CommentAuthorHenry Cohn
    • CommentTimeApr 27th 2012

    Has Publications of the nLab dealt with the archiving issue? (I couldn’t tell from looking at the web site.) This seems like a big deal to me, and it takes some serious work to get right, both in preserving the bits (via Portico or LOCKSS or the like) and in keeping everything in formats that maximize the chances of being readable in the future.

    It is definitely not enough to rely on archive.org and personal back-ups to preserve the site, or to hope that content will be shifted to updated formats over time. The former is awfully risky, and the latter depends on maintaining a critical mass of enthusiastic volunteers.