# Mathematical Proofs - a world of precise certainty?

Page last updated 06 Oct 2021

Mathematicians would have you believe that the world they inhabit is a world of precision and certainty, a world where there is no vagueness, and no uncertainty.

But that is not the case. That precise and certain world is an idealistic concept, but it is not the world that real world mathematicians work in. It is a world that mathematicians desire so strongly that they cannot bring themselves to admit that the world they work in isn’t as clear-cut as they would like it to be. To see why this is the case, we need to consider the cornerstones of mathematics - mathematical proofs.

## What do we mean by a ‘Mathematical Proof’ ?

There is a widespread notion that once something has been proved mathematically, then it is, as it were, set in stone - that we have a mathematical proof that remains ‘true for all time’.

That is certainly a very optimistic idea, but the reality is that most mathematical proofs that anyone will encounter fall a long way short of this idealistic concept. A mathematical proof is ‘set in stone’ and is ‘true for all time’ only if certain conditions are fulfilled…

- **Every** assumption and **every** rule that defines how one step in the proof follows from a previous step are precisely defined. (A proof is always dependent on the system of assumptions and rules of inference that is used to create it - so in a system with a different set of assumptions and rules, it might be possible to prove the converse of a result that is proved in another system.)
- **Every** possible step in the proof follows precisely those rules of inference.
- The system of assumptions and rules is consistent - i.e: they cannot possibly give rise to a contradiction.

Now, practically every proof you will encounter includes at least some natural language, and you hardly ever encounter a proof where *every* assumption and rule used is explicitly stated. Essentially, a mathematical proof isn’t a rigorously proven mathematical proof unless it is proven within a totally formal system, since it is only in such a system that there are precisely defined assumptions and rules that define how one step in the proof follows from a previous step - in a formal proof, these are the axioms and the rules of inference of the system.
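To make the contrast concrete, here is a minimal sketch of what a fully formal proof looks like, written in the Lean proof assistant. (The names `N`, `add` and `one_plus_one` are illustrative choices made for this sketch, not anything from the literature.) The ‘axioms’ are the inductive definition and the recursion equations, every symbol used is explicitly defined, and every inference step is mechanically checked by the proof checker:

```lean
-- Peano-style natural numbers defined from scratch,
-- so that every assumption is stated explicitly.
inductive N where
  | zero : N
  | succ : N → N

open N

-- Addition, defined by recursion on the second argument.
def add : N → N → N
  | n, zero   => n
  | n, succ m => succ (add n m)

-- 1 + 1 = 2, derived purely from the definitions above;
-- the checker verifies every reduction step.
theorem one_plus_one :
    add (succ zero) (succ zero) = succ (succ zero) := rfl
```

Even a proof of this kind guarantees nothing absolute, of course: it is only as trustworthy as the consistency of the formal system and the correctness of the checking software, which is precisely the point made below.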

And the vast majority of mathematical proofs are not fully formal proofs - the vast majority of what are commonly called mathematical proofs have not been proved to a level of rigor that enables one to say that they are ‘set in stone’ for all time.

And even if a proof is claimed to be proven in a formal system, that doesn’t necessarily mean that the proof is ‘true for all time’ - because if that formal system is inconsistent - capable of producing contradictory results - then that system could prove both a statement and the converse of that statement. (Note: despite this, it seems to be the case that creators of formal proofs are always absolutely convinced that their proof is absolutely right, even though they have not also proved that their formal system is consistent.)

Another problem with fully formal proofs, especially computerized formal proofs, is that it is not always made clear what the formal proof actually proves - in some cases it has been claimed that a formal proof proves a particular statement, whereas a careful logical analysis shows that in fact, it proves something quite different. (Footnote: See the page Computer proofs.)

Note that showing that a proof is wrong does not mean that the conclusion of the proof is necessarily wrong - it may be possible to prove it by a different method. For example, it would be easy to come up with a proof of **1 + 1 = 2** that contained an error; showing that the proof is wrong does not mean that **1 + 1 = 2** is an incorrect statement.

### Clinging to the belief that you work in a world of absolute certainty

Now, mathematicians are aware of all the above - it’s just that they don’t like to be reminded of how different their actual world is from the idealistic world they crave so much. A common attitude among mathematicians is that of Dr Alex Kasman of the College of Charleston (also mentioned on this site’s webpage Review of Book ‘The Shackles of Conviction’), who states in an article Mathematics and Skepticism:

“If you cannot completely prove your claims in mathematics, the new results will not be accepted by the mathematical community, they will not be published in a journal, and -- to be blunt -- you won’t be a mathematician for long…”

Dr Kasman no doubt sincerely believes this, but the evidence that he is wrong is overwhelming. There have been many instances where errors in mathematical proofs were not discovered until several years after they were first published. And these haven’t all been proofs that were published in an obscure journal that very few people had actually read. Several such proofs had received widespread attention before the error was discovered - for example, a flawed proof by Kempe (Footnote: A proof of the four color map theorem was published by the prominent mathematician Alfred Kempe in 1879 and received widespread acclaim. Eleven years later, an error was found in the proof.) and a flawed proof by Tait. (Footnote: Another proof of the four color map theorem was published by Peter Guthrie Tait in 1880. Again, it was eleven years before anyone discovered that there was an error in the proof.) See also the webpage Mathoverflow: Widely accepted mathematical results that were later shown to be wrong, which details numerous instances of proofs later discovered to be erroneous.

While several such instances are proofs that were published and later shown to have relied on a *hidden* fallacious assumption, the fact is that several proofs have been published even though they include steps that are blatant unproven assumptions (and that are not axioms), yet these proofs continue to be accepted as though every step in the proof logically follows from the preceding steps. Gödel’s proof of incompleteness is a prime example of a case where a crucially important step is assumed and glossed over in a few lines, but which can be shown to be erroneous, see The flaw in Gödel’s incompleteness proof.

Kasman continues:

“A valid proof of a mathematical theorem is most certainly a more rigorous and certain thing than what passes for proof in the other sciences…” and “In contrast, something which seems true (such as the apparent fact that any even integer can be written as a sum of two prime numbers) is not given the status of a fact at all unless it has been proven.”

As indicated above, this is quite simply wrong. Most mathematical proofs that have been and continue to be published are not fully rigorous proofs. There are many proofs where the author makes an assertion that encompasses several single steps, and in so doing, he assumes that there cannot be an error in any of those single steps. And most mathematicians, when examining such a proof, will make a judgment as to whether that assumption is justified. So, when a mathematician says that he can find no error in a proof, that does not necessarily mean that there is no error in the proof. It may mean only that he has made a judgment, one that relies on assumptions about how one statement leads to another.

But, despite the inherent uncertainty surrounding mathematical proofs, for some unfathomable reason, mathematicians refuse to believe that it is possible for a proof to become a widely accepted proof in the mathematical community unless it actually is a rigorous mathematical proof. They seem to think that, unlike all other areas of human endeavour, the study of mathematics is somehow immune to human error.

Mathematicians like to believe that they are working in an environment of certainty, whereas the reality is that they are always attempting to move closer to an environment of certainty.

See What is a proof ? by Keith Devlin, former Dean of Science at Saint Mary’s College of California, who points out the difference between what a mathematician would like a proof to be and what passes for a ‘proof’ in the real world.

Also see the article Formal proof - theory and practice (Footnote: Harrison, J. “Formal proof - theory and practice”, 2008, Notices of the American Mathematical Society, 55, pp 1395-1406.) by John Harrison, an advocate of computerized proof checking software, and the article on computerized proof checking software.

### Why mathematics is in many ways like a science

The philosopher Karl Popper (1902-1994) argued that scientific theories can never be considered to be unquestionably correct, and that they are inherently hypothetical. He argued that the only way we have of reckoning whether a scientific theory might or might not be correct is by testing the predictions of the theory against actual real events. He went on to argue that if you look at it logically, then it doesn’t matter how many tests fit in with the predictions of the theory, you still can’t be absolutely sure that the theory will always and invariably give a correct prediction. On the other hand, one single demonstration that a prediction of the theory is wrong is enough to show that the scientific theory isn’t completely correct. Popper called this the falsifiability of a scientific theory.

It appears that he never tried to apply his notion of falsifiability to mathematics. But in many ways the reality of real world mathematical proofs is similar to that of scientific theories. While it is very difficult to demonstrate with absolute certainty that a mathematical proof of a proposition is correct, if a flaw is found in a proof, that immediately invalidates the proof. (Note: that does not necessarily mean that the proposition of the proof is incorrect - it may be possible to prove the proposition by a different method of proof.)

Also see Is that a fact? by Keith Devlin, former Dean of Science at Saint Mary’s College of California, who also sees similarities in the way we practice science and mathematics.

