
Welcome to Math2.0
    • Comment 1. Olivier GERARD (Mar 20th 2012)

    Questions I asked myself several times:

    • Do journals keep track of referee attribution in the long term (i.e. longer than the lifetime of the editorial board)?
    • Are there rules about revealing who anonymously refereed which paper?
    • What happens when there is a public or legal contestation of the refereeing process?
    • What are the positions of editors and current publishers on this process?

    For traditional historical records, every country has embargo laws stating that a record cannot be made public until, say, 50 years after some milestone (for instance, a trial verdict, or the death of the last participant).

    In the process of digitizing all previous issues, it would be very useful to also keep track of referee/author correspondence and to know who did what, even if that record is kept closed for 50 years.

    • Comment 2. Terence Tao (Mar 21st 2012)

    In general, maths journals are a small enough enterprise that this sort of data is only kept on an ad hoc basis by the individual editors or the editorial software (or, going back a decade or so, the journal secretary). In particular, there are usually no long term storage plans for this sort of data that last beyond a few changes of the editorial board and the editorial software platform.

    While some mathematical correspondence is definitely worth preserving (Grothendieck-Serre being a famous example), I think author-referee correspondence is pretty low on the list of priorities for archiving.

    Also, I have never heard of any non-frivolous legal proceedings concerning publication or rejection of a journal article in mathematics. This is likely due in part to the lack of direct monetary value attached to these sorts of decisions, as well as the fact that an author can simply take his or her article to a different journal to publish it. (Crushing blows to an author’s ego are, generally speaking, not considered sufficient grounds for legal action.)

    • Comment 3. Mark C. Wilson (Mar 21st 2012; edited Mar 22nd 2012)

    This comment, and the attitude I (perhaps wrongly) infer from it, bother me (see also the thread on journal best practices). In my experience as author and referee (not editor), some strange things happen. We may not want them archived, but perhaps they should be made more transparent at the time.

    An example: in one journal I was one of two referees for a paper. I gave a very thorough report asking for changes. The other referee gave a couple of lines saying the paper was too badly written to save. The paper was eventually resubmitted to me to referee (perhaps at another journal - can’t remember now) and again I asked for changes (the writing was pretty poor, but there were some ideas there - perhaps I was too generous). I waited for another submission to come back for refereeing. Then to my surprise I saw the paper had been published. I asked the editor why, the next time I saw him. He said that it was published owing to some kind of clerical error related to embarrassment at how long the paper had been waiting in the system. This is not the kind of behaviour I expect from an ARC ERA A-rated journal.

    Of course, one could argue that journals are outmoded, etc, but I am assuming that for the purposes of this thread, we are not talking about changing the basic system. I think that if journals are to keep their place, some more consistent and transparent editorial behaviour is required. If journals are going to be used for serious evaluation of correctness and significance (as their proponents seem to think), we need more than just a handful of people appointed by some publisher doing things on an ad hoc basis. That kind of in-group behaviour annoys me immensely (as a non-member, and even as a member). It reduces the trust necessary for research to work well, and it runs the risk of making poor decisions (both on inclusion and rejection). I guess I am more comfortable with the wisdom of crowds than that of “experts”.

    • Comment 4. darij grinberg (Mar 21st 2012)

    Mark: in this case you would be able to post your referee report on the kind of electronic unsolicited peer-review system that we’re planning to make here.

    Actually I guess that most of the day-zero contributions to that system will be referee reports made for journals that were ignored by the authors and the editors.

    • Comment 5. Terence Tao (Mar 22nd 2012; edited Mar 22nd 2012)

    I should perhaps add that with the most modern versions of editorial software, archiving referee reports etc. is significantly easier than it used to be. When I first started out as an editor about a decade ago, we were still handling referee requests by physical mail, forwarding them on to the journal secretary, who would then photocopy them for the other editors before we would physically meet to discuss and vote on the papers. It was difficult enough just to organise this process efficiently that we did not spend much additional effort to ensure that records would be systematically kept beyond a period of, say, two or three years. About two years ago, I had to revive a paper that had first been submitted to the same journal back in 1998, but whose initial submission had stalled for a number of reasons (including health problems of the corresponding author). There were decade-old referee reports for the paper, but unfortunately neither the authors nor the previous editors (from about three editorial boards ago!) could retrieve them, so we had to start afresh. It would of course have been nice to have the old records, but given the resources available to run a non-commercial maths journal (basically, a small number of hours of secretarial time), and the number of changes that took place over the intervening years (such as the installation of two completely different online editorial systems), I can understand why the records were not kept.

    Nowadays, my editorial work is primarily done online through a centralised system which can automatically keep these sorts of records, although there is still a significant portion of the work (e.g. private emails to, or personal discussions with, experts about a paper, or board meetings via phone conference or in person) that is not directly recorded through the system, and is thus still only retained on an ad hoc basis. But these systems are at least good enough to avoid the sort of disaster you mention, where a report is simply lost. (This has happened to me once as an author, because the journal involved was changing editorial software and a referee report came in during the transition and was recorded in the wrong place.) Being able to ensure long-term accessibility of the editorial data when one changes the software would be a nice feature to have, but I would imagine it to be a low-priority one, given how rarely one needs to access data beyond, say, two or three years in the past.

    Regarding "in-group" issues: as I said, maths journals tend to be small enterprises (with perhaps a hundred or so submissions a year), and as such do not have the volume to be run by more than a small number of people, at least until a viable crowdsourced model emerges. But one can at least ensure that the people involved have diverse and complementary areas of expertise, and that major editorial decisions (including acceptance or rejection of viable papers) are made by the entire board and not by an individual editor.

    Regarding consistency and transparency: each journal does have its own traditions and guidelines, based in part on institutional memory of previous submissions, but again due to the small volume involved (and the unique nature of each submission), these are rarely formalised (and I would think it would indeed be quite difficult to write down such a formalisation which did not lead to pathologies of one sort or another, in which one paper clearly has a better case for publication than another, but which is technically inferior according to the written guidelines). This can perhaps be contrasted with the approval process for a government funding agency (such as the NSF), which is much higher volume (and has far more monetary value attached to a given decision), and is thus bound by many more explicit rules and procedures (e.g. referee reports need to assign numerical ratings to various aspects of proposals, which have to be given nontrivial weight when considering the proposal). Such a process is certainly more consistent and transparent than what goes on at most journals, but if one applied an NSF-style system to a small mathematics journal, the result would simply be a more bureaucratic process, without much gain in journal quality or efficiency.

    • Comment 6. Scott Morrison (Mar 22nd 2012)

    I know that the LMS journals actually destroy this correspondence once the paper has been published, because under the British legal system it may be impossible to prevent a sufficiently motivated (and lawyered) author from obtaining access to the correspondence otherwise.

    • Comment 7. Henry Cohn (Mar 22nd 2012)

    SIAM retains some information long-term (well, as long as they keep this editorial software, so who knows), including the number of papers refereed, average time spent refereeing, and date of last request. They certainly keep the actual assignments and reports for some time, since editors can go back and access submissions they handled in the past, but I don’t know whether they eventually disappear.

    • Comment 8. Tom Leinster (Mar 22nd 2012; edited Mar 22nd 2012)

    Re #6: in the UK there’s the Data Protection Act, introduced in the late 1980s, I think. As I understand it, there are two main principles: (1) you can only hold information on other people for as long as it’s “needed” (whatever that means in the context), and (2) you have a right to see any data held about you.

    For example, universities can hold personal information about students while they’re enrolled and for a reasonable period afterwards, but can’t hang on to it indefinitely without good reason. And at any time, you can ask your university to give you a copy of the records it holds on you.

    The Act creates some difficulty with confidential letters of recommendation etc., in that a sufficiently motivated candidate could probably gain access to them. I’ve never heard of it being done; the social consequences are probably enough to put anyone off.

    When I read the comments above, I wondered whether UK-based journals would even be allowed to keep correspondence of this nature in the long term. I have only the haziest understanding of what counts as personal data and what doesn’t. But presumably the scenario that Scott (#6) is describing is this: I, as an author, claim that the record of who refereed my paper counts as personal data, and attempt to use the Data Protection Act to get access to it. In anticipation of this, the LMS destroys the data, even if it’s not actually obliged to.

    • Comment 9. Scott Morrison (Mar 22nd 2012)

    Thanks for clarifying, Tom. I learnt about this from Rob Kirby; Mathematical Sciences Publishers (MSP) now does a lot of the “backend” for the LMS publishing, and one of the software customizations they needed from MSP was this sort of data management.

    • Comment 10. Andrew Stacey (Mar 22nd 2012)

    “We may not want them archived, but perhaps they should be made more transparent at the time.”

    I agree with this. It’s not that I particularly want the referee’s report to be made public, just that I’d like to be able to challenge it if I feel that there’s something wrong with it. I have actually tried this, but to no avail. In one case I never even got a response from the editor, and that concerned a factual error in the “report” which was the reason for rejecting the paper.

    More generally, I would really like it if there were some way to find out what criteria a paper was accepted against. Each journal may have its own system and its own way of doing things, and there may well be some exemplary journals out there, but as has been pointed out in other discussions, as a reader I don’t have a lot of choice in the matter. If the article that I want to read has been published in a particular journal, then I have to read it in that particular journal. I can’t say, “Actually, I’d rather read the JAMS version of this article.”

    In addition, I don’t read enough papers to have a good sense of what each journal’s style is. Again, as I’ve said elsewhere, I just read the article. The fact that this article was published by the AMS and that one by the LMS doesn’t enter into my consciousness, so I have no yardstick against which to measure articles. If I want to know about a particular article - how reliable it might be, how well it might have been refereed - then knowing what journal it was in means nothing.

    So both as an author and as a reader, I’d like more transparency: knowing the criteria that a journal uses in refereeing its papers.

    Actually, as a referee I’d like this too. I don’t have as much experience on this side of the track, but what I do have suggests that the system ain’t working too well. A worryingly large proportion of my (admittedly small number of) referee requests have been for articles that are pretty clearly outside my specialist area. I’m still convinced that one request came because if you search for “determinant pascal triangle” then my name comes out top (well, it does when I search for that).

    Of course, I would never sue a journal for rejecting a paper (at least, I hope I wouldn’t). But there’s a long way between “I’m completely happy” and “You’ll be hearing from my solicitors”.

    • Comment 11. Mark C. Wilson (Mar 22nd 2012)

    Andrew - this clear list of criteria is one of the “best practices for journals” I was trying to get discussed in another thread. I hope that almost all journals would actually use the exact same criteria; it is just the interpretation and the rejection threshold that should differ. But at the moment we have no way of knowing. At a minimum, I would like to see each journal’s webpage list the referee report template it uses. Of course, some don’t even use one, but they ought to.