    • Comment 1: Andrew Stacey, Feb 3rd 2012

    I’ll start the discussion on the future of journal publishing by repeating something that I put in the comments on Tim Gowers’ blog [1]. I don’t claim to be the only one who had this idea, nor the first. Here’s what I said:

    My proposal would be to have “boards” that produce a list of “important papers” each time period (monthly, quarterly, annually – there’d be a place for each). The characteristics that I would consider important would be:

    1. The papers themselves reside on the arXiv. A board certifies a particular version, so the author can update their paper if they wish.

    2. A paper can be “certified” by any number of boards. This would mean that boards can have different but overlapping scopes. For example, the Edinburgh Mathematical Society might wish to produce a list of significant papers with Scottish authors. Some of these will be in topology, whereupon a topological journal might also wish to include them on its list.

    3. A paper can be recommended to a board in one of several ways: an author can submit their paper; the board can simply decide to list a particular paper (without the author’s permission); or an “interested party” can recommend a particular paper by someone else.

    4. Refereeing can be more finely grained. The “added value” from the listing can be the amount of refereeing that happened, and (as with our Publications of the nLab) the type of refereeing can be shown. In the case of a paper that the board has decided themselves to list, the letter to the author might say, “We’d like to list your paper in our yearly summary of advances in Topology. However, our referee has said that it needs the following polishing before we do that. Would you be willing to do this so that we can list it?”
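    As a purely illustrative sketch (my own, not part of Andrew's proposal), here is what one board's listing record under points 1-4 might contain in Python; the class name, field names and identifiers are all hypothetical assumptions:

        from dataclasses import dataclass

        @dataclass
        class Listing:
            """One board's certification of one specific arXiv version (point 1)."""
            arxiv_id: str        # hypothetical identifier, e.g. "1202.0001"
            version: int         # the exact version being certified; later author updates are not covered
            board: str           # e.g. "Edinburgh Mathematical Society"
            period: str          # which monthly/quarterly/annual list the paper appears on
            refereeing: str      # how much refereeing happened, displayed with the listing (point 4)
            recommended_by: str  # "author", "board decision" or "interested party" (point 3)

        # Point 2: the same paper (same arXiv id and version) may be listed by any number of boards.
        listings = [
            Listing("1202.0001", 2, "Edinburgh Mathematical Society", "2012 annual", "read in detail", "author"),
            Listing("1202.0001", 2, "Topology board", "2012 Q1", "main theorem checked", "board decision"),
        ]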


    [1] I can’t find the actual comment right now; I’ll add the link when I can.

    • Comment 2: Kevin Walker, Feb 12th 2012
    I like this idea. In fact, I had a very similar idea independently.

    When I ran the proposal by another mathematician (one who has not been active in these journal reform discussions), her immediate response was "Where are you going to find people willing to serve on all these boards?" But if that's the only serious difficulty, I don't think it's an insuperable one.

    Further thoughts:

    This need not involve any (additional) refereeing at all. Boards could consider only papers which have already been refereed in traditional journals, or papers (preprints) with which they are personally familiar and which they are confident are substantially correct. In this version, the boards are not (initially) a replacement for traditional journals, but rather a parallel system which tries to give due recognition to the best papers.

    (In my opinion the current system for assessing papers --- based on vague, murky and not-universally-agreed-upon notions about which journals are more prestigious than other journals --- leaves very much to be desired. It came about through historical accident, not through some rational design process. No sane person, tasked with designing a system for assessing papers in order to help with hiring decisions etc., would dream up the current arrangement. So even if there were no problem with over-priced, pre-internet fossil journals, we would still want to make some changes to the status quo.)

    This sort of review board fits into a paradigm that is already familiar to mathematicians: prize committees, NSF grant review boards, etc.

    As Andrew points out, this can start small and requires no central planning.
    • Comment 3: Yiftach Barnea, Feb 12th 2012
    This seems very similar to the overlay journal discussion. You might like to move there.
    • Comment 4: Kevin Walker, Feb 12th 2012
    I think there are significant differences, but I will keep an eye on the overlay journal discussion also.
    • Comment 5: Noah Snyder, Feb 12th 2012
    I think the idea of PLoS ONE is that in order to be accepted a paper needs to be correct, but not necessarily interesting; interestingness is then measured in other ways, such as download statistics, review boards, and citations.
    • Comment 6: zskoda, Feb 12th 2012 (edited Feb 12th 2012)

    The idea I proposed after the one-hour discussion on the interaction of mathematicians and physicists in a special session at the 2000 orbifold conference in Madison, namely to add a comment section to the arXiv, was somewhat different. It is based on the following observations:

    1) In postings and publications, people post a paper only once a very high threshold of importance and quantity of results has been reached.

    2) In discussions not aimed at concrete results, on the contrary, voluntary, vague and opinionated contributions can sometimes ruin the level, the trust and the balance.

    To answer this I proposed (paraphrasing) a comment section consisting of concrete contributions that refer to work started or published elsewhere: scientific contributions, remarks and corrections which are not significant or elaborated enough to warrant a journal-level publication, but which have concrete achievements, concrete corrections or other added value that makes them useful to have in concrete form in public.

    For example

    • you find a minor gap in the proof of a theorem, or even of a lemma, in some paper, and you write a corrected proof

    • you describe a mathematical relation between the result of the paper and some earlier known result

    • you offer an alternative proof of a published result

    • you write a historical note on the context surrounding a result

    • you find a new example of a definition from the paper for which few examples have been offered so far

    • you offer an intuitive explanation or heuristic for why something described in a paper is true, or for what lies behind a notion in the paper

    • you compile a worthwhile list of typographical and other minor errors in a longer reference

    All such comments fall below the publication threshold, even for proceedings, yet they could be very useful. On the other hand, such concrete contributions tailored to supplement existing publications are noncontroversial, unlike pure opinion pieces or vague, non-concrete comments.
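    Purely as an illustration (my own sketch, not part of Zoran's proposal), such typed contributions could be recorded along the following lines in Python; the type names simply mirror the bullet list above, and the identifiers are hypothetical:

        from dataclasses import dataclass
        from enum import Enum

        class ContributionType(Enum):
            GAP_FIX = "corrected proof of a minor gap"
            RELATION = "relation to an earlier known result"
            ALTERNATIVE_PROOF = "alternative proof of a published result"
            HISTORICAL_NOTE = "historical note on the context of a result"
            NEW_EXAMPLE = "new example of a definition"
            HEURISTIC = "intuitive explanation or heuristic"
            ERRATA = "list of typographical and other minor errors"

        @dataclass
        class ArxivComment:
            """A concrete, below-publication-threshold contribution attached to an existing paper."""
            target_arxiv_id: str    # the published work the contribution refers to (hypothetical id below)
            kind: ContributionType  # which of the concrete categories above it falls into
            body: str               # the contribution itself

        note = ArxivComment("math/0101001", ContributionType.GAP_FIX,
                            "Lemma 3.2 needs n >= 2 in the induction step; here is the corrected argument ...")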

    • Comment 7: Marcin Kotowski, Feb 12th 2012 (edited Feb 12th 2012)
    Incidentally, some people already have their own "one-man review boards"; see e.g. "My choices" by Oded Goldreich (http://www.wisdom.weizmann.ac.il/~oded/my-choice.html) in theoretical computer science.

    "Comments and remarks below the publication threshold" are a very valuable resource that, now, remains mostly hidden from the public. In any given field, there is usually a vast amount of "tacit knowledge", which results and papers contain bugs, which techniques can be improved etc. that is accessible only via personal contact, coffee-break talks and the like. Such comments are very useful when you're reading a paper from outside your main field, when it's often difficult to discern which parts of proof are technical, but standard, and which require new insights. Even a short comment like "proof of Lemma 10.4 follows standard methods in probabilistic combinatorics, see [paper/textbook] for exposition" can save the less experienced reader a lot of time. Not to mention that many papers contain unclear or opaque passages, sometimes due to bad or sloppy writing, that require a lot of work to plow through - a single reader who understand a difficult part and summarizes it clearly in a comment could save everyone else a lot of effort.
    • Comment 8

    First, thanks to Andrew and Scott for creating this site.

    Second, some comments.

    1) May I suggest a name: "arXiv++"? (I already proposed it on Gowers' blog.)

    2) I think there should be two options available:

    a) A discussion area around each paper (e.g. questions and remarks: a small MathOverflow attached to each paper). I would call this part "trackback++".

    b) Certification ("virtual journals"), or, as I would call them, "rating agencies".

    3) An objection to "A paper can be recommended to a board in one of several ways": in my opinion, at the first stage there should be absolutely NO pressure. Only if the author wants does he allow comments/questions on his paper, and only if the author wants does he ask some "boards" to certify his paper. I think that in the future, if the new publishing model is adopted, no one will forbid discussion of their paper anyway, since refusing would look very suspicious: "why is the author not open to discussion?"

    4) Henry Cohn emphasized the importance of the "incomparability of achievements" ("I am not a number, I am a free man"). In my opinion this can easily be achieved in the new publishing system: the "boards" (= "rating agencies") can give multi-scale ratings. E.g. "Standard & Gowers" puts several ratings on one paper: "Exposition BB+; Depth AA-; Originality AAA+", and another rating agency, say "Poor & Tao", gives the ratings "Breakthrough AA; Tool Development BB".
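    To make the multi-scale idea concrete, here is a tiny sketch (my own, purely illustrative); the agency names and grades are just the joke examples from point 4, and the paper identifier is hypothetical:

        # Each "rating agency" (board) attaches its own set of scales to a paper;
        # nothing forces the scales of different agencies to be mutually comparable.
        ratings = {
            "arXiv:1202.0001": {
                "Standard & Gowers": {"Exposition": "BB+", "Depth": "AA-", "Originality": "AAA+"},
                "Poor & Tao": {"Breakthrough": "AA", "Tool Development": "BB"},
            },
        }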
    • Comment 9: zskoda, Feb 16th 2012 (edited Feb 16th 2012)

    8: If any linear-scale rating is proposed for anything, plain numbers are more comprehensible. I do not understand BB+, AA-, AAA+, as I have no interest in Wall Street myths, and this kind of double and triple lettering would be confusing to me.

    • Comment 10

    @Zoran: that Wall Street jargon was meant as a kind of joke...

    I mean linear scaleS: several linear scales for one paper. That was my point: not ONE, but several scales, and maybe different scales from different "journals"; in this way we can avoid directly comparing people.

    You might make a breakthrough but write a terrible paper, and you will get the ratings (Breakthrough: 10, Exposition: 0). Another person might write a good exposition and get (Breakthrough: 0, Exposition: 10).

    There will be no obvious way to compare these two ratings; that is exactly what I want.
    • Comment 11

    On the other hand, we NEED to compare different people. It might be unfortunate, but this is life.

    It might seem that this contradicts my previous point that we should have several incomparable ratings.

    I think there is a solution here: several "ratings" can be averaged to get just one number, to allow quick comparison of different people. But this comparison should be widely understood to be "dirty", i.e. enough to create a short-list, but definitely not enough to give any more accuracy than that.
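    A minimal sketch of this "dirty" averaging step, assuming the numeric scales from comment 10 (the equal-weight mean is my own simplification, not something specified here):

        def shortlist_score(ratings):
            """Collapse several incomparable scales into one rough number, for short-listing only."""
            return sum(ratings.values()) / len(ratings)

        # The two papers from comment 10 end up with the same averaged score,
        # which is exactly why this single number should never be used for more than a short-list.
        print(shortlist_score({"Breakthrough": 10, "Exposition": 0}))   # 5.0
        print(shortlist_score({"Breakthrough": 0, "Exposition": 10}))   # 5.0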
    • Comment 12: zskoda, Feb 17th 2012

    Sasha, Garfield, the creator of the impact factor, wrote extensively that it was intended as a statistical measure for comparing institutions rather than individuals. This is well known, but the measure is used contrary to that well-known intention. Once administrators have the numbers, they forget the warning signs.

    • Comment 13

    Zoran, "if you do not deal with politics, politics will deal with you": the phrase was popular after Hitler came to power, when many people understood that a passive position might not be good.

    I mean that administrators WILL create ratings, whether we want them to or not; this is life. So it is better that mathematicians create the ratings themselves and find something suitable for us, rather than allowing administrators to use some crazy ratings...
    • Comment 14: zskoda, Feb 18th 2012

    Sasha, publicizing and being loud about these practices, the failures of the systems, etc. has a role. When we serve on committees we can practice some civil disobedience. A colleague of mine, when he served on a promotion board at a time when they would COUNT papers, would, whenever he saw a couple of publications with more or less the same results (self-plagiarism), STAPLE them together and count them as the same paper. The law in Croatia says that for such-and-such a title one needs to be a scientist with world-level achievements and at least so-and-so many papers. Now they read this as: if one has the minimum and comes up for selection, one has to be elected to the title. They neglect the qualitative phrase “world-level achievement”, which gives them the right to dismiss the mere number. Why are people not informed of such key details? Why do we not do some counterpropaganda? Why would I care about politics dealing with me? I fight it without respect. And you should too.

    • Comment 15: jhjensen, Mar 14th 2012 (edited Mar 18th 2012 by Scott Morrison)

    I have created a chemistry overlay journal using Blogger.com: http://compchemhighlights.org. The main point I want to make here is that anyone can set up such a site in a few hours. The real challenges are “social”: building prestige and recognition associated with the journal, and recruiting editors. It is not clear how best to do this, but I think the best approach is to create many such journals and see what works and what doesn’t. (I will cross-post over at the overlay journal site.)