Welcome to Math2.0
    • CommentRowNumber1.
    • CommentAuthorJohn Baez
    • CommentTimeFeb 15th 2012
    • (edited Feb 15th 2012)

    Richard Poynder has interviewed Jan Velterop, who began his publishing career at Elsevier in the mid-1970s, and subsequently worked for a number of other leading publishers, including Academic Press, Nature, and Springer. He helped start journal bundling, but in 2000 he joined BioMed Central, the first commercial open-access science publisher, and in 2001 was involved in starting the Budapest Open Access Initiative. Now he’s CEO of Academic Concept Knowledge Limited (AQnowledge), a new company developing tools for “semantic knowledge navigation”.

    A couple of quotes:

    The evolution of scientific communication will go on, without any doubt, and although that may not mean the total demise of the traditional models, these models will necessarily change. After all, some dinosaur lineages survived as well. We call them birds. And there are some very attractive ones. They are smaller than the dinosaurs they evolved from, though. Much smaller.

    Instead of peer review, I think that in many cases an endorsement system like the one employed at ArXiv is sufficient to keep out sloppy methods and crackpots. The other elements of peer-review, such as a value judgement or placing an article in a ‘relevancy’ or ‘quality’ category (whatever that means anyway), can easily be done post-publication as and when the community thinks it serves a purpose. Essentially replacing the ‘filter first, then publish’ by ‘publish first, then filter’. The entire web works that way, and the exceptionalism of scientific publishing is no longer plausible, in my view. Of course there is a lot of rubbish on the web, but people are on the whole very discerning and only the most gullible run the risk of being taken in by that rubbish. Scientists are supposed to be sceptical and their critical thinking skills will ensure that, as a given community in a given discipline or sub-discipline, they are not easily fooled. Members of the public accessing scientific literature will get the same level of reliability from ArXiv-like repositories as from peer-reviewed journals.

    Q: What is your estimate of the potential savings to the research community of moving from today’s system to an endorsement model?

    A: Well, that seems a rather simple sum. If there are 1.5 million articles a year published, and the average savings are in the order of $2000 (assuming the arXiv per-article cost of some $7 is valid elsewhere for ArXiv-like outfits as well, and no journals are published in print), the savings amount to in the order of $3 billion a year.
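
    Spelled out with the round figures quoted in that answer, the back-of-the-envelope sum is
    $$ 1.5 \times 10^{6}\ \tfrac{\text{articles}}{\text{year}} \times \$2000\ \tfrac{\text{saved}}{\text{article}} = \$3 \times 10^{9}\ \text{per year}. $$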

    • CommentRowNumber2.
    • CommentAuthorJohn Iskra
    • CommentTimeFeb 18th 2012
    Like. Especially the second block.
    • CommentRowNumber3.
    • CommentAuthorjoyal
    • CommentTimeFeb 18th 2012
    I like the idea of "publish first, then filter". It does not seem entirely utopic to imagine that in some future the editors of journal will systematically explore the recent papers on the arXiv in search of the best papers to be "published" or "certified" by their journal. They would contact the author of a good paper to offer him assistance for correction and improvement. The reputation of their journal will depend on its collection of papers. Different journals may compete for the best papers. Some journal may publish papers by invitation only. Of course, the authors may also "submit" their paper to the journal of their choice.
    • CommentRowNumber4.
    • CommentAuthorKloeckner
    • CommentTimeFeb 18th 2012
    @joyal: I do not think it likely that editors will seek out papers (except perhaps at a very few high-ranking journals), nor do I think it desirable. Being an editor is already a great deal of work, and digging through the arXiv seems completely unrealistic. It would probably end up with only well-known scientists having their work picked up.

    That said, the "publish first, then filter" principle does make a lot of sense nowadays. But I do not think we can or should expect the submission itself to be initiated by someone other than the author.
    • CommentRowNumber5.
    • CommentAuthorzskoda
    • CommentTimeFeb 18th 2012

    I like the idea of editor-initiated submissions for some class of future journals. In fact, many people are persuaded even nowadays to write up their results for a particular journal or proceedings. Reviews of Modern Physics features almost exclusively invited papers, which are very highly cited. Prizes often function that way too; MathReviews has a "featured article" category for reviews, and I think it is handled pretty fairly (breakthrough papers are simply the ones chosen, not most papers by one author and none by another).

    • CommentRowNumber6.
    • CommentAuthorjoyal
    • CommentTimeFeb 18th 2012
    Kloeckner: I do not want to argue too much. Let me just say that it does not take a specialist long to know whether a paper in his field is really interesting. Journals are in *need* of excellent papers, just as much as authors need to publish in highly ranked journals. I know that Gian-Carlo Rota occasionally invited people to publish their latest results in Advances in Mathematics. I see no reason why this cannot be done systematically.
    • CommentRowNumber7.
    • CommentAuthorLee Worden
    • CommentTimeFeb 19th 2012
    I wonder if, after weeding out the clearly bad stuff (via an endorsement system like arXiv's, or refereeing by a single person), much of the problem of separating good papers from mediocre ones could be addressed just by the number and quality of the citations they receive. It might be possible to use something like Google's PageRank to estimate a paper's quality by weighting citations from other high-quality papers more heavily than other citations, which would deal with the problem of coalitions of "research spammers" who cite each other...
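    As a rough sketch of the weighting Lee describes, here is a PageRank-style power iteration over a toy citation graph; the paper names, damping factor, and iteration count are illustrative assumptions, not anything taken from the thread:

```python
# Toy citation graph: each paper maps to the list of papers it cites.
# The papers here are made up purely for illustration.
citations = {
    "survey": ["X"],        # a widely cited survey that in turn cites X
    "p1": ["survey", "Y"],  # p1, p2, p3 all cite the survey; p1 also cites Y
    "p2": ["survey"],
    "p3": ["survey"],
    "X": [],                # X and Y each receive exactly one citation
    "Y": [],
}

def citation_rank(graph, damping=0.85, iterations=100):
    """PageRank-style power iteration: a citation counts for more when
    the citing paper is itself highly ranked."""
    papers = list(graph)
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in papers}
        for citing, cited in graph.items():
            if cited:   # pass this paper's rank on to the papers it cites
                share = damping * rank[citing] / len(cited)
                for target in cited:
                    new[target] += share
            else:       # papers that cite nothing spread their rank evenly
                for target in papers:
                    new[target] += damping * rank[citing] / n
        rank = new
    return rank

for paper, score in sorted(citation_rank(citations).items(),
                           key=lambda kv: -kv[1]):
    print(f"{paper}: {score:.3f}")
```

    In this toy run X and Y each receive a single citation, but X's comes from the highly ranked survey, so X ends up well ahead of Y: the source of a citation matters, not just the raw count.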
    • CommentRowNumber8.
    • CommentAuthorzskoda
    • CommentTimeFeb 19th 2012
    • (edited Feb 19th 2012)

    7: If something is easy to use, a simple advance can be incorporated into other easy, unimportant papers within weeks, while a deep advance often cannot be absorbed for decades. The number of citations to Grothendieck's Pursuing Stacks in actual publications was close to zero for the first several years, even though it was an epochal work by somebody very well known, and key people had access to the copies. In another thread I gave an example of a person writing two papers on the same topic, a weak paper at the beginning of the project and a real breakthrough a year later; the first has 5 citations and the second 72. It is easier to write another variant of a trivial paper, and hence to cite it, than to write something essential. The majority of mathematicians write trivialities. Right, it is important who cites you. But also why. Some papers are reviews, so they collect citations without containing anything new. And when judging an author, one has to take into account that there are coauthors. And, as you say, citation spammers.