In the last year I have read a lot of discussion in blogs on experimental sciences about a crisis in replicability (e.g. read Scientific Utopia II, mentioned in another thread, for many details). The incentive for novelty in publication is making the literature less reliable than we would like. Some solutions have been proposed.
In mathematics (traditional, with traditional proofs) do we have this problem? I was just reading about citations on Math Overflow and elsewhere. It seems that a substantial proportion of mathematicians are happy to cite a result without even reading it, let alone understanding the proof. Personally I don’t do this, but then I don’t work on really deep stuff. Some people think that prepublication peer review ought to catch errors in proofs, but many referees, with or without journal advice, don’t check all the details of proofs.
So, how likely is it that there are nontrivial errors in papers? I realize that most papers will not be of interest to most people, and I sometimes see the argument that “important results will be checked properly, the others don’t matter”, but this worries me. What about “uninteresting” pure math results leading to algorithms used by engineers? I can see some serious consequences if we neglect replicability.
Anecdotes:
Many years ago David Riley and I worked on a paper that cited an old Russian work. We could never actually understand the proof of a main theorem there, and we got no reply from the authors when we emailed them. In the end we raised our doubts in our paper, but not many people will read that one.
I have a paper by colleagues which was reviewed in Math Reviews - the review says the whole basis of the paper is fallacious, based on an error in probabilistic reasoning. The paper has not been retracted. What to do?
Ideas:
This is a real mystery. I’m confident in the well-studied part of mathematics, but the further you get from that, the less clear things are. I don’t see this as a serious problem for the field, but it’s something it would be nice to settle (and clean up if necessary).
What I hope is that someday machine checking of proofs will reach the point of being able to help with this. Right now, it’s just not feasible (giving a formal proof of a nontrivial theorem is a major undertaking - for example, Gonthier and his collaborators recently completed checking the Feit-Thompson theorem, but formalizing it took them six years). I can imagine that far in the future, formalizing a proof will be no harder than LaTeXing it is now, but we’re a long way from that.
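To give a flavor of what machine-checked proofs look like today, here is a toy sketch in Lean (my own illustrative example, not anything from the formalizations mentioned above): even a fact as trivial as 0 + n = n must be spelled out step by step for the checker, which hints at why formalizing a serious theorem takes years.

```lean
-- A toy machine-checked proof: 0 + n = n for natural numbers.
-- (Lean defines addition by recursion on the second argument,
-- so this direction is not true "by definition" and needs induction.)
theorem zero_add_example (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl                         -- base case: 0 + 0 = 0 holds definitionally
  | succ k ih => rw [Nat.add_succ, ih]  -- 0 + (k+1) = (0 + k) + 1, then apply the hypothesis
```

Scaling this kind of explicit reasoning from one-line facts to something like the Feit-Thompson theorem is exactly the multi-year effort described above.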
It seems that a substantial proportion of mathematicians are happy to cite a result without even reading it, let alone understanding the proof.
I’d distinguish between two types of citation, namely for results you seriously rely on and for results you mention. In the former case, ideally you should understand these tools, although that’s not always feasible. In the latter case, I don’t think there’s any reason to feel bad about mentioning papers for which you don’t understand the proofs.
What about “uninteresting” pure math results leading to algorithms used by engineers?
I’m not worried about this. Engineers are used to free-wheeling mathematics, and nothing we could do is likely to throw them off. In particular, there are very few cases where engineers just trust mathematicians that something works as a black box, without being able to test it themselves. (The main examples I can think of are in cryptography.) And that’s an intrinsically dangerous situation, even if the mathematics is 100% rigorous, because someone who doesn’t understand the mathematics and can’t fully test it has no good way to find subtle implementation errors.
And even if there are caveats and warnings about the limitations of a particular piece of well-understood mathematics, I know from experience that people will misuse it anyway if they have a mind to.
What worries me is that we lack a culture of retractions and corrections. On the one hand, there’s the tendency to respond to certain errors and gaps with “well, an experienced scholar surely has no problem fixing the proof” (which I expect to happen frequently, since many referees will skip proofs of lemmas and the like that they already know to be correct). On the other, I know a number of papers with simply incorrect results – incorrect to the level of “the opposite has been published and acknowledged by the original author” – yet no correction can be found for the original.
I think we could really do with a pure-mathematics version of https://retractionwatch.wordpress.com/ if only to raise awareness.
I’d avoid a comparison with Retraction Watch. My impression is that scientific papers are generally not retracted just because they are shown to be wrong by further experiments, but rather because of fraud, other misconduct, or really egregious mistakes. This makes retractions pretty humiliating (for example, a large majority of the ones in Retraction Watch seem to involve serious ethical problems).
I think the fraud problem in mathematics is negligible, while egregious mistakes are presumably uncommon and in any case not that hard to catch. The big issue is what happens when an honest, competent mathematician makes a subtle error, and it’s important to handle this in a way that doesn’t overly penalize the authors. (Publishing an incorrect proof should arguably be considered somewhat worse than not publishing it in the first place, because it causes problems for other researchers, and to avoid creating an incentive to publish a questionable proof on the grounds that there’s no downside. On the other hand, a retraction shouldn’t necessarily be humiliating.)
It would be helpful if it were easier to find retractions. For example, the paper Probabilistic computation and linear time was retracted, but the ACM does not seem to link the retraction to the original paper (except that it automatically appears in the list of papers that cite the original, if anyone looks there). A Google Scholar search for the title Probabilistic computation and linear time leads to the original paper but not the retraction (although an ordinary search finds the retraction).
I think (hope?) actual journals do a much better job of handling retractions, and the problem is that this was a CS conference. There have always been issues with retractions in conferences, and some conferences have had actual policies of not issuing any retractions or corrections. The argument is that CS conference papers aren’t real papers, but just extended abstracts of longer papers to be published in journals, and it doesn’t make any sense to retract an abstract. (Suppose you give a talk in a seminar, and an abstract is circulated to the attendees. If you discover a mistake a year later, you don’t ask the organizers to retract your previous abstract, since it’s considered ephemeral.) However, in practice they function as real papers, and in any case it’s bad for the community if it’s impossible to formally retract a conference paper.
The ACM seems to be handling this a bit better, by allowing actual retractions, but at least in this case they seem to be treating it as a new paper, rather than linking it to the old one.
Math Reviews is a resource that, with better incentives, could be very useful here. My impression is that more and more “reviews” just quote the abstract these days. And we only get 8 AMS “dollars” per review. Many good researchers do not review at all. If it were seen as a more prestigious activity (perhaps give reviewers ratings), maybe these errors could be clearly pointed out more often.
Replying to myself:
Over the years I have randomly come across a few nice Math Reviews that have helped to correct errors or demolish an entire paper (or at least claim to):
MR0039515 (12,561a) (a well-known classic mentioned by Halmos in his automathography) MR1399602 (98h:20061) MR0255592 (41 #253) MR2149410 (2006a:11175)
I just did some searching on MathSciNet using various combinations of keywords "wrong", "error", "counterexample" and found some more. I am not sure how to estimate how many substantial errors there are in published theorems. This is just a sample:
MR2866190 (2012k:60085) MR2747412 (2012a:08004) MR2175197 (2006f:53051)
For easier searching, perhaps it would be better if reviewers could have a standard template, where they can tick the box marked "some of the proofs are wrong and I include a counterexample".
And of course MathSciNet is not open access, which means incorrect theorems could easily be applied without the citing author ever learning that they are wrong.
This one is amusing, and it testifies to the “quality” of the journal (the representative of the journal protested against the review instead of being thankful for the service of finding the cheater).
MR2082710 (2005f:37075) Aydin, Bünyamin (TR-CUM) Statistical convergent topological sequence entropy maps of the circle. (English summary) Entropy 6 (2004), no. 2, 257–261 (electronic). 37E10 (37B40)
The author managed to publish a copy of a paper by R. Hric [Comment. Math. Univ. Carolin. 41 (2000), no. 1, 53–59; MR1756926 (2001c:37016); also available at www.emis.ams.org/journals/CMUC/pdf/cmuc0001/hric.pdf] under his own name.
the representative of the journal has protested against the review instead of being thankful for the service of finding the cheater
Where did you find their response? The published paper does not seem to give any indication that there might be a problem. Comparing the two papers side by side leaves no doubt of serious plagiarism (and it’s pretty clear which direction it must have been in, since the Entropy paper was submitted three years after the other was published; also, look at Definition 1 in the Entropy paper). If MDPI knows about this problem and has decided to ignore it, that’s outrageous.
In MDPI’s long list of questionable open access journals, I noticed one called Axioms that deals with “mathematical logic and mathematical physics”.
Where did you find their response?
The reviewer for MR2005f:37075, S. K., had problems with them; you may ask him privately for details.
There is a similar current case involving plagiarism in several books recently published by Springer. The editorial office of the mathematics series, warned about the plagiarism by a French professor (the case is not yet public, so I will not list her name), said that they have nothing to do with the case. That is formally true, because the books were published in the physics section, but they still represent the same publisher, and it is a neighbouring area with which they have to cooperate. If I work in the faculty of science and somebody from the street tells me about a problem in my department, I will not say “I am not THE professor who caused the trouble”; I will find out in my own house what happened. It is now so often the case that people who represent businesses are happy to represent them when it is about praising their company, but when it is about any trouble they say “it is not my position/job” and do not even inform their superiors, or the company, that a customer is unhappy.
In MDPI’s long list of questionable open access journals, I noticed one called Axioms that deals with “mathematical logic and mathematical physics”.
Not only is it a strange combination of topics, which can hardly yield a coherent editorial board, but the board also includes some editors who belong to neither of the two areas (mathematical physics and mathematical logic) named in the journal’s declaration. For example,
Matthias Dehmer is Professor for Bioinformatics and Systems Biology at UMIT, Institute for Bioinformatics and Translational Research
Hans J. Haubold is from UN Office for Outer Space Affairs, Vienna International Centre
Zhenyuan Wang (Department of Mathematics, University of Nebraska at Omaha) is interested in nonadditive measures; nonlinear integrals; data mining; fuzzy sets; probability and statistics; and optimization
Emil Saucan: discrete differential geometry; geometric function theory; geometric modeling; geometric and topological methods for imaging and vision; manifold learning
This editorial board looks absolutely random…
Regarding the strange combination of topics, the opening editorial is entitled Another Journal on Mathematical Logic and Mathematical Physics?. See also the paper Axiomatic of Fuzzy Complex Numbers by the editor in chief of the journal. I wouldn’t go so far as to say editors should not publish papers in their own journals (I think it’s OK if there are strict procedures in place to avoid conflicts of interest; in particular, editors should have no role in judging their own papers and ideally should not even know who made the decision). However, a paper by the editor in chief should represent their ideal vision of the journal, especially in volume 1, issue 1, and this paper is, at the very least, highly unconventional.
(By the way, I had no idea there even was a UN Office for Outer Space Affairs.)
Are these editors even aware they are on the editorial board?
Are these editors even aware they are on the editorial board?
I’ve seen cases of journals fraudulently listing people as being on the editorial board, but I’d guess that this isn’t one of them.
Among the 30 people listed on the editorial board web page (not counting the assistant editor, production editor, or publisher, who work for MDPI), seven are authors of papers in the journal:
Angel Garrido, Kevin Knuth, Rémi Léandre, Radko Mesiar, Florin Nichita, Emil Saucan, Hari Srivastava
Four others list the editorial board on their web pages or publicly available CVs.