Welcome to Math2.0
    • CommentRowNumber1.
    • CommentAuthorHenry Cohn
    • CommentTimeMar 22nd 2012

    There have been a number of astonishing failures of peer review in Elsevier journals. In addition to Chaos, Solitons & Fractals, here are five other cases (in four different journals):

    E. Oxenhielm, On the second part of Hilbert’s 16th problem, accepted and made available online by Nonlinear Analysis in 2003 before the acceptance was withdrawn after publicity. [There was news coverage, as well as a refutation of the paper’s ridiculous claims of a proof.]

    L. A. V. Carvalho, On some contradictory computations in multi-dimensional mathematics, Nonlinear Analysis 63 (2005), 725-734. [The paper drew severe criticism, but it has not been retracted.]

    “Rohollah Mosallahnezhad”, Cooperative, compact algorithms for randomized algorithms, accepted by Applied Mathematics and Computation in 2007 and formatted/copy-edited, but withdrawn after publicity. [In this unbelievable case, computer-generated nonsense was accepted for publication.]

    M. Sivasubramanian, New parallel theory, Applied Mathematics Letters 23 (2010), 1137-1139.

    M. Sivasubramanian and S. Kalimuthu, A computer application in mathematics, Computers & Mathematics with Applications 59 (2010), 296-297. [This paper supposedly proved the parallel postulate using “computer magnification”.]

    These aren’t poor judgement calls or mathematical mistakes that were uncovered too late in seemingly valuable articles. Each one is a case in which I find it difficult to believe any competent referee could possibly have recommended acceptance, even for a journal with the lowest possible standards. (And the Oxenhielm paper is the only one for which I can really imagine even an incompetent referee recommending acceptance, perhaps someone from a different field who does not understand proofs.) It’s really troubling that this is happening at multiple journals.

    What on earth happened in these cases? Were there actually any referee reports at all? If there were, did the referees even look at the papers? What sort of referee reports could they possibly have provided, and how were these referees chosen? Could there really be editors who are simply accepting papers without review, and if so why?

    Or were these papers somehow accepted by accident, due to editorial error or some flaw in Elsevier’s infrastructure? If so, why doesn’t this seem to occur in Elsevier’s stronger journals? Are the editors making sloppy mistakes because they aren’t taking their responsibilities seriously, or because Elsevier provides insufficient support for their lower-tier journals and the editors are therefore not able to do their jobs properly?

    How many other papers have been published after grossly inadequate review, especially papers that don’t look obviously absurd and might be hiding in plain sight? (They might be perfectly good papers that should have been accepted, but only after being refereed first, or papers that are problematic in more subtle ways than the examples listed above.) Elsevier is damaging the mathematical literature, as well as the reputations of many authors who submitted papers to these journals in good faith, with the expectation of a thorough and fair review.

    If Elsevier thinks nobody reads or cares about the papers in these journals, then why are they selling them to libraries? [Presumably to drive up their bundle prices via quantity rather than quality, but do they really have so little respect for the community?] If they believe the papers matter, as I hope they do, then they owe it to the community to run the journals properly.

    What I take away from this is that Elsevier is incapable of ensuring the quality of their lower-ranked journals or, worse yet, is capable of doing it but chooses not to.

    Does anyone have further information on the scale or scope of this problem? In discussing particular cases, I’d prefer to limit it to clear-cut failures of peer review, not debatable cases, subtle errors being discovered in apparently legitimate papers, plagiarism that might be difficult to detect, or mostly competent papers that are just sloppy or uninteresting. (And in this discussion I’d prefer to focus on what is actually happening now, rather than how hypothetical review systems might perform better.)

    There are plenty of examples in some of the more obscure journals, and there’s a lot of discussion of such cases on the internet (three of the examples above came from such discussions). However, they generally don’t tell us much about the publishing industry. What we learn is that anyone can set up a junk journal, and that even a journal being run in good faith can find it difficult to maintain minimal standards. These cases are unfortunate, but the Elsevier cases are much more remarkable, because they are endorsed by a major publisher that is responsible for many excellent journals.

    I understand that the academic community is in an adversarial business relationship with Elsevier, in which they will take as much money from us as they can, until we stop them. This is a serious problem, but we have an economic framework for understanding why they are doing it and how we can stop them (for example, the boycott). Is our scholarly relationship with Elsevier also adversarial? I certainly hope not, but these examples do not inspire optimism.

    My impression is that Elsevier has far more of these extraordinary failures than other major publishers do. Is that correct? If anyone can supply comparable failures for publishers like Springer, I’ll be very interested. I’d also love to see further examples for Elsevier, or any explanations of how this is happening.

    • CommentRowNumber2.
    • CommentAuthordarij grinberg
    • CommentTimeMar 22nd 2012
    • (edited Mar 22nd 2012)

    As far as Applied Mathematics Letters is concerned, here is another case: apparently a creationist noticed the, to put it modestly, low standards of refereeing and tried to get a free ride.

    Note that “withdrawn from publication” doesn’t mean it had not already been published on the website. As the second link shows, it is really the letter from Rodin that got the paper revoked. Lots of damage has been done in spite of the revocation; on ID blogs, the case is being traded as an example of Darwinist censorship in science publishing…

    (Wondering if the title is plagiarized from Lieb and Yngvason…)

    • CommentRowNumber3.
    • CommentAuthorMark C. Wilson
    • CommentTimeMar 22nd 2012
    • (edited Mar 22nd 2012)

    Google found this one (search terms: “peer review failures”). It is not a math paper, but the journal is very highly respected, I believe. It is not an Elsevier journal.

    Paper: See comments:

    Retraction Watch website/blog is interesting - not many math ones there, but it is amazing what famous authors can get away with and for how long.

    I found this piece by Michael Nielsen very interesting (for the Einstein quote alone). It raises the question of whether the fraction of math papers with substantial errors has decreased in the last few decades. I wonder whether anyone has studied peer review in math as a research problem.

    • CommentRowNumber4.
    • CommentAuthorHenry Cohn
    • CommentTimeMar 22nd 2012
    • (edited Mar 22nd 2012)

    The PNAS example is fascinating, although it’s a different sort of issue. The paper was definitely reviewed, namely by Lynn Margulis (who was a pretty famous scientist: National Medal of Science, etc.), and accepted on her recommendation; she seems to have been well aware of the contents, so the reason for acceptance was that she was eccentric, not sloppy. The issue there is whether it’s wise to have a system like the one PNAS had at that point, where some papers were published because they were communicated by an academy member. That system was already on the way out before the caterpillar paper (Abhinav Kumar and I published a PNAS paper about five months before, and when we saw there were two options and asked which submission method was typical, we were told by an academy member that direct submission was the only reasonable choice and would eventually be the only option).

    It would be great to have some statistical information about peer review in mathematics, although gathering this information could be labor-intensive. I spent half an hour on a fishing expedition, looking through Elsevier journals to see if I could spot papers that might be examples of peer review failure. Fortunately, most articles don’t seem to have anything obviously wrong with them, and I didn’t locate anything I recognized as remotely comparable to the examples listed above.

    • CommentRowNumber5.
    • CommentAuthorTom Leinster
    • CommentTimeMar 22nd 2012

    Mark (#3): that page by Michael Nielsen is excellent. The example of Jan Hendrik Schön’s fraudulent papers in Nature etc. is a good one for this page, too.

    I can’t resist reproducing that Einstein quote that Mark mentioned:

    The Einstein-Rosen paper was sent out for review, and came back with a (correct, as it turned out) negative report. Einstein’s indignant reply to the editor is amusing to modern scientific sensibilities, and suggests someone quite unfamiliar with peer review:

    Dear Sir,

    We (Mr. Rosen and I) had sent you our manuscript for publication and had not authorized you to show it to specialists before it is printed. I see no reason to address the in any case erroneous comments of your anonymous expert. On the basis of this incident I prefer to publish the paper elsewhere.


    • CommentRowNumber6.
    • CommentAuthorHenry Cohn
    • CommentTimeMar 22nd 2012

    Jan Hendrik Schön’s papers are definitely one of the most dramatic cases of fraud in the history of science, but I don’t see them as a failure of peer review, since peer review has never been intended to catch intentional fraud in reporting experimental results. (Attempted replication is the only reliable way to deal with that, and it has never been part of peer review.) Schön’s fraud was particularly difficult to detect, since he spent a lot of time talking with theorists about what he might find. This meant that his results were very plausible, since he never claimed to find anything that could be ruled out on theoretical grounds, and furthermore some theorists became big advocates of his work, because they were so excited to see their predictions supported by beautiful experiments. One theory is that Schön may have hoped he would never get caught, because if the theories were right then future experiments would actually agree with his.

    The Einstein letter is really amazing, although I tend to think it shows Einstein being a jerk rather than Einstein being actually unfamiliar with peer review. (He must have been familiar with the concept, even if he had managed to avoid it himself.)

    • CommentRowNumber7.
    • CommentAuthorTom Leinster
    • CommentTimeMar 22nd 2012

    Re Schön: that’s interesting, Henry; thanks for the further information.

    What I like about Nielsen’s page is that it’s an honest effort to be realistic about what peer review can and can’t do. Personally, I’m sometimes uneasy when I see public defenders of science attaching such a great deal of importance to peer review, because I know that between ourselves, we often complain to each other about lazy or blinkered referees, the arbitrariness of the system, etc. Being a supporter of peer review (as I am) does not mean ignoring its faults or limitations.

    • CommentRowNumber8.
    • CommentAuthorHenry Cohn
    • CommentTimeMar 22nd 2012
    • (edited Mar 22nd 2012)

    The book Plastic Fantastic: How the Biggest Fraud in Physics Shook the Scientific World by Eugenie Samuel Reich has an incredible amount of information on the Schön case. One big factor here is that he was apparently genuinely talented, so even when people were unable to replicate his experiments they were willing to believe that he was just more talented and had perfected the skills needed to carry out these very delicate experiments. However, as his claims got more ambitious, he wound up with too many people working on replication and getting frustrated, and they became more and more incredulous at his amazing publication rate. Furthermore, he got lazy and re-used some of the same numbers in two plots of supposedly different data, and eventually someone noticed. At that point everyone got very suspicious, Schön made a lot of flimsy excuses (he had erased all his original data when he ran out of space on his hard drive, the organic transistors he had built had all been destroyed during testing and measurement, etc.), and he couldn’t replicate any of it himself, which is the point at which it became clear it was fraud.

    Regarding what peer review can or can’t do, medical research seems to be a particularly problematic case.

    Someone should carry out a survey of mathematicians’ experiences with peer review (if it hasn’t been done already). It would be easy to do it online, but it would be even better to try to get a less biased sample.

    • CommentRowNumber9.
    • CommentAuthorMark C. Wilson
    • CommentTimeMar 22nd 2012
    • (edited Mar 22nd 2012)

    Google search again finds something. I haven’t read it carefully, but it seems highly relevant (although apparently not itself peer reviewed!): “Peer review and knowledge by testimony in mathematics”.

    • CommentRowNumber10.
    • CommentAuthorjoyal
    • CommentTimeMar 23rd 2012
    Tom (#5): I can't resist adding more to the story of the Einstein-Rosen paper,

    since it shows how useful peer-review can be.

    Sean Carroll: "After this incident, Einstein vowed never again to publish in Physical Review — and he didn’t. The Einstein-Rosen paper eventually appeared in the Journal of the Franklin Institute, but its conclusions were dramatically altered — the authors chose new coordinates, and showed that they had actually discovered a solution for cylindrical gravitational waves, now known as the “Einstein-Rosen metric.” It’s a little unclear how exactly Einstein changed his mind — whether it was of his own accord, through the influence of the referee’s report, or by talking to Robertson personally. But it’s pretty clear that he would have loved the innovation of …"
    • CommentRowNumber11.
    • CommentAuthorDavidRoberts
    • CommentTimeMar 23rd 2012
    • CommentRowNumber12.
    • CommentAuthorDavidRoberts
    • CommentTimeMar 23rd 2012
    • (edited Mar 23rd 2012)

    Also, in a comment on that article, Greg Kuperberg links to a paper he wrote in 2002 on exactly the sort of things we are discussing here (but this is off-topic for this thread).

    • CommentRowNumber13.
    • CommentAuthorjoyal
    • CommentTimeMar 23rd 2012
    Thanks for the correction David (#11). Gosh! I also wrote the name of Einstein incorrectly...
    • CommentRowNumber14.
    • CommentAuthorMark C. Wilson
    • CommentTimeApr 17th 2012
    • (edited Apr 17th 2012)

    A new one out today on Retraction Watch: Once again, a nonsubscriber must pay to read the retraction …

    The common thread seems to be a certain editor.

    • CommentRowNumber15.
    • CommentAuthorHenry Cohn
    • CommentTimeApr 17th 2012

    It’s actually the fifth paper listed above, but I’m really glad they have finally retracted it. I’m also curious exactly what they mean when they say it was “accepted because of an administrative error”. Do they mean it was accepted by mistake, when the editor intended to reject it? If so, what kind of system are they using that allows such mistakes, and why don’t they occur at stronger journals? Or was it accepted intentionally, and the error was an error of judgement?

    Unfortunately, I suspect this is just the tip of the iceberg, since the absence of peer review is only visible for crackpot papers. I’d guess that there must have been hundreds, and perhaps thousands, of papers accepted to these journals with little or no peer review, since it’s really not plausible that the editors got particularly sloppy when dealing with crackpot work, but we can’t tell just from looking at the papers. We have no way of gauging the true extent of the problem, and presumably Elsevier won’t tell us (they may not even know themselves), but this should be a major scandal.

    Rodin may have been part of the problem - I have no idea - but I don’t think this is the full answer. For example, I don’t think that he has ever been an editor of Nonlinear Analysis, which has published a couple of these papers, or Applied Mathematics and Computation. There appears to be a systemic problem, going beyond any single editor.

    • CommentRowNumber16.
    • CommentAuthorzskoda
    • CommentTimeApr 18th 2012
    • (edited Apr 18th 2012)

    Once they appoint a new or replacement editor, that editor had better ask for his role to be made clear, and try to establish clear ground for the work going forward. This should include investigating and publicizing all the data on the previous errors, including the exact correspondence in such cases. I would not go into a contaminated area without a good decontamination procedure.

    In Croatia some time ago, a person from the ministry sent an anonymous defamation of a certain scientist, and I was among the people who discovered that the file carried the automatically inserted fingerprint of his computer, which was easy to find. (I had once discovered 19 students in Wisconsin copying and minimally altering a colleague’s course project, by analyzing various fingerprints in their computer assignments; one was suspended from the university, so I was quite familiar with such things.) The person from the ministry claimed that he had nothing to do with it. I responded: give us the computer data that will make it clear, and I listed which things should be inspected. If he were not guilty, he would have been happy to provide them and clear himself of suspicion. He ignored the request, so I considered him likely to be guilty. If Elsevier keeps hiding behind statements like “administrative error” without substantiating them (where is the correspondence from the above case? What about the referees: were they humans or computer-generated entities? etc.), people will be less likely to trust them. They ignored the El Naschie case by publishing papers accepted during his period even after he was fired, and by not admitting the reason for firing him (according to El Naschie himself, he claims in court that it was a result of the scandal, a fact which even Elsevier does not admit). Ignoring problems is a habit over there.

    I was threatened from the official Chaos, Solitons & Fractals address, under Elsevier’s domain name. This address was listed as the official Elsevier address of the journal! That means I was threatened by a person authorized by Elsevier to use that email address. The letter was signed by a (fictitious?) lawyer who does not seem to exist or to be related to Elsevier; no contact data were supplied. Later, EN sent me two threatening letters from a possibly semi-official address, and this was AFTER he stepped down from the editorial board. This email address seems to be related to the same Elsevier journal, but who knows; EN is well known for false affiliation claims, so his having additional Elsevier-like email addresses would not surprise me. Is it Elsevier’s policy to allow its official addresses to be used for threats? Threats were also made to my institute: the director of my institute at the time received an email from the CSF address asking him to fire me from my job, and this was conditioned to take place within several days, also under a legal threat.

    • CommentRowNumber17.
    • CommentAuthorMark C. Wilson
    • CommentTimeDec 6th 2012
    • (edited Dec 6th 2012)

    Here’s another from Elsevier’s Applied Mathematics Letters, seemingly under Rodin’s watch. The author S. Kalimuthu has appeared earlier in this thread.

    • CommentRowNumber18.
    • CommentAuthorrlshuler
    • CommentTimeJun 2nd 2013
    • (edited Jun 2nd 2013)
    Thanks to Tom Leinster for Einstein's response to Physical Review. Something must have been different in the early 1900s for Einstein, without a PhD and with only a patent office clerk job, to get three important papers published, and noticed, but we rarely get insight into what. Note that PR is not purely mathematical, and a proof-only paper is difficult to achieve in physics without accepting theorems not themselves "proved."

    A case in point is General Relativity, which almost all physicists agree needs modification for reconciliation with quantum mechanics, yet proof-only papers are routinely based on it for exotic untestable scenarios. PR told me in an email communication that they have a policy against publishing any non-GR papers until the alternative theory is developed to the same extent as GR, which is very impractical given the billions of dollars, space missions, etc., expended on GR. On the other hand, the exclusion is understandable given the huge proliferation of alternative theories which are really not very well thought out. In defense of PR, I got more useful feedback from their reviewer than from any other. It's interesting that (as of the last time I checked) Verlinde's paper on entropic gravity had not been published other than on arXiv, probably due to this criterion, but works citing it had been published.

    A second interesting difference between the math and physics publishing worlds is that the arXiv does not actually accept physics papers from non-established authors except in specific physics sub-fields. So Einstein would NOT have been able to publish on arXiv. Many leading physicists do not even know of this policy, because their papers are accepted (I have discussed it with several). Even if a submitter gets a qualified sponsor, a paper that does not meet PR's criteria with regard to GR alternatives will not be accepted from a non-famous submitter. There are interlocking reviews between PR and the physics section of arXiv, which refer to each other's reviews. Once a review is made, even a major revision may not be considered; only the previous review is referred to.

    Something akin to the modern dilemma must have, however, developed in physics by 1924, as Bose was unsuccessful in getting his famous boson statistics paper published until he asked for Einstein's help.
    • CommentRowNumber19.
    • CommentAuthorScott Morrison
    • CommentTimeJun 3rd 2013


    I’m seriously doubting your claim “So Einstein would NOT have been able to publish on arXiv.” There are so many implicit counterfactuals in this statement that I’m unsure exactly what you mean. Are you really claiming that it is impossible for someone outside academia to find an endorser for their first posting on the arXiv? Or something else?

    • CommentRowNumber20.
    • CommentAuthorDavidRoberts
    • CommentTimeJun 4th 2013

    Einstein did have a PhD; the thesis was completed in April 1905, and he’d already had a paper published in Annalen der Physik four years prior.

    But this isn’t the place to air grievances about the arXiv or leading journals not accepting papers on alternatives to relativity.