Copyright © James R Meyer 2012 - 2017 www.jamesrmeyer.com
I have to say that I don’t consider Wang’s paradox very interesting in itself; indeed, I find it rather trite. However, when I came across it in a paper, A Defense of Strict Finitism (Footnote: Jean Paul Van Bendegem, A Defense of Strict Finitism, Constructivist Foundations 7.2 (2012) pp. 141-149. Online at A Defense of Strict Finitism.) by Jean Paul Van Bendegem, which invokes Wang’s paradox in an attempt to defend the notion of Strict Finitism (Footnote: Strict Finitism is subject to various definitions, but it is generally agreed to be a system of mathematics in which there are only finitely many natural numbers.), I felt I had to put a virtual pen to virtual paper.
Wang’s paradox is based on what is known as the Sorites paradox. In this supposed paradox one typically considers a heap of sand, from which grains are removed one at a time. Eventually only one grain of sand will be left, and a single grain of sand is not a heap. The ‘Sorites paradox’ is that one cannot define at what number of grains the heap stops being a heap. Clearly, the ‘paradox’ here results from the imprecision of the term ‘heap’. Wang’s paradox applies this to natural numbers as:
The number 1 is small,
If the number n is small, then the number n + 1 is also small,
Therefore, every natural number is small.
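The dissolution of the ‘paradox’ can be made concrete with a small sketch (Python; the cutoff value is a hypothetical, deliberately arbitrary stand-in for any precise definition of ‘small’): once ‘small’ is given a precise meaning, the inductive premise fails at exactly one point, and no paradox arises.

```python
# A hypothetical precise definition of 'small': n is small iff n <= CUTOFF.
# The cutoff 1000 is arbitrary - any precise choice behaves the same way.
CUTOFF = 1000

def is_small(n: int) -> bool:
    return n <= CUTOFF

# Premise P1 holds: the number 1 is small.
assert is_small(1)

# Premise P2 fails at exactly one point: the cutoff itself.
failures = [n for n in range(1, 2001) if is_small(n) and not is_small(n + 1)]
assert failures == [CUTOFF]
```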
The reason why this may appear paradoxical is dealt with below by reference to Van Bendegem’s paper, which demonstrates a typical case of a failure to see the fundamental fallacy underlying common interpretations of Wang’s paradox.
In Van Bendegem’s paper, he considers a common argument against the notion of Strict Finitism, stating, “This first argument against Strict Finitism is without doubt the most popular one. It runs as follows. It is totally senseless to claim that one can only count up to a certain number n and then suddenly stop. Put otherwise: if one can write down a number (sign) n, then one can also write down number (sign) n + 1.”
Van Bendegem bases his counter-argument around three statements:
(P1) A(1)
(P2) (∀n)(A(n) ⊃ A(n + 1))
(C) (∀n)A(n)
i.e., (P1) says that the number 1 has the property A, (P2) that if n has property A, then the next number in line also has it, and (C) says that all numbers have the property A.
He then introduces Wang’s paradox by defining A(n) as:
“the number n is small.”
and he says that “If (P1) and (P2) are acceptable, then (C) follows, or in other words, all numbers are [small]. Which is obviously wrong.”
He then considers how we might deal with this. But the two obvious options are either:
to accept that every natural number is ‘small’, or
to reject the premise (P2), i.e., to hold that some number is ‘small’ while its successor is not.
While Van Bendegem considers that the first option is ‘obviously wrong’ without providing any logical reason why it might be wrong, in purely mathematical terms it does not actually matter which option I choose, because there is no prior mathematical definition of ‘small’ for a number. What we can say is that, of two different natural numbers, one is always smaller than the other, and in mathematical terms:
x is smaller than y implies that there exists a nonzero z, where x + z = y.
Here the term ‘smaller’ is a relative term. However, as Van Bendegem uses it, the term ‘small’ is an absolute term, and ‘small’ as an absolute term has no definition in mathematics. So one can apply any definition one likes for the term ‘small’.
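The relative notion of ‘smaller’ given above can be sketched as follows (Python; the function name is my own choice for illustration), and over a modest range it coincides with the usual ordering:

```python
def smaller(x: int, y: int) -> bool:
    # x is smaller than y iff there exists a nonzero natural z with x + z == y.
    return any(x + z == y for z in range(1, y + 1))

# Over a modest range, this relative notion agrees with the usual ordering x < y.
for x in range(0, 20):
    for y in range(0, 20):
        assert smaller(x, y) == (x < y)
```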
Van Bendegem rejects the hypothesis that the property of being ‘small’ is a property to which P2 applies, i.e., he rejects the hypothesis that all numbers are ‘small’, and then he states that this results in the expression of classical logic:
(∃n)(A(n) & ¬A(n + 1))
and he claims that “We have hereby transformed the vague A to a precise one.”
Well, no, he hasn’t; all he has done is to claim that A is a property for which P2 does not apply, and the result of
(∃n)(A(n) & ¬A(n + 1))
is simply another expression of the negation of P2. This expression (∃n)(A(n) & ¬A(n + 1)) is a precise expression of classical logic only when A is precisely defined. Van Bendegem’s claim that by this expression he has transformed the vague notion of ‘small’ (i.e., the property A) into a precise one is preposterous nonsense; the actual number at which a number stops being ‘small’ has still not been defined. All that (∃n)(A(n) & ¬A(n + 1)) tells us is that A is a variable whose domain is all properties for which P2 does not apply.
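That (∃n)(A(n) & ¬A(n + 1)) is nothing more than the negation of (P2) can be checked exhaustively over a finite domain (a Python sketch; the domain size of 6 is an arbitrary choice):

```python
from itertools import product

N = 6  # an arbitrary finite domain {1, ..., N}

# Enumerate every possible property A on {1, ..., N+1} as a tuple of truth values.
for values in product([False, True], repeat=N + 1):
    def A(n):
        return values[n - 1]
    p2_holds = all((not A(n)) or A(n + 1) for n in range(1, N + 1))
    counterexample_exists = any(A(n) and not A(n + 1) for n in range(1, N + 1))
    # (∃n)(A(n) & ¬A(n+1)) holds exactly when (P2) fails, and vice versa.
    assert counterexample_exists == (not p2_holds)
```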
Van Bendegem goes on to ask the question, “Who would be prepared to claim that 1000 is small, whereas 1001 is not?”, but this is disingenuous, since all he is doing is passing the buck on defining a ‘small’ number to the reader, while refusing to do so himself. But I have no problem at all in defining it for any given set of natural numbers - I can define it quite simply as the smallest number in the set. So, for example, for all natural numbers excluding zero I define small to apply to the number 1 and to no other number in the set.
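That stipulated definition can be written out as a sketch (Python; the function and variable names are my own, for illustration only):

```python
def is_small(n: int, numbers: set) -> bool:
    # Under this stipulated definition, a number is 'small' iff it is
    # the smallest member of the given set.
    return n == min(numbers)

# For the natural numbers excluding zero (a finite stand-in here),
# only the number 1 is 'small'.
naturals_up_to_10 = set(range(1, 11))
assert is_small(1, naturals_up_to_10)
assert not any(is_small(n, naturals_up_to_10) for n in range(2, 11))
```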
So, given that definition, where’s the problem? Well, perhaps that definition of a small number might seem to be unreasonable, because someone might say, for example, “Surely three is a small number?”. But here we see that we are attaching common everyday notions to numbers, which are actually not properties of the numbers themselves. We commonly would say that three is a small number because, in everyday situations, we regard three things as a small number of things, but we would regard a thousand things as a large number of things.
Van Bendegem’s fallacious argument is simply an illustration of the pitfalls of using common language terms for mathematical properties. If, in the first instance, Van Bendegem had simply said:
A(n) is defined as: “the number n is qwelky”
instead of:
A(n) is defined as: “the number n is small”,
then his rhetorical question:
“Who would be prepared to claim that 1000 is ‘small’, whereas 1001 is not?”
would instead have been:
“Who would be prepared to claim that 1000 is ‘qwelky’, whereas 1001 is not?”
and the reliance of the rhetoric of the question on non-mathematical notions now becomes obvious. And his earlier assertion that it is ‘obviously wrong’ to apply a definition that all numbers are ‘small’ is now seen to be simply an appeal to everyday intuition rather than being based on any logical considerations. It is evident that Van Bendegem’s argument is simply the construction of a straw man that he can easily demolish, and it is not an argument that has any logical value.
Van Bendegem goes on to use Wang’s paradox in an attempt to justify the assertion that numbers greater than a certain value cannot be mathematically ‘useful’. He refers to the definitions of the propositions P1 and P2 and states: “For property A(n), take ‘I can write down the number (sign) n’ ”, so that (P1) and (P2) become:
I can write down the number 1.
If I can write down the number n, then I can also write down the number n + 1.
Van Bendegem asserts that “This claim also seems perfectly acceptable.” No, it does not: the claim is absurd. The original expressions P1 and P2 are strictly logical mathematical expressions, and have no connection to physical actions. Physical actions obviously have limitations - but what has that got to do with the theory of mathematics?
Now, it is possible to have relatively simple physical representations of hugely large numbers, where the number is defined by a relatively simple pattern. So we could have two such numbers where it would be impossible to have a representation of every single number that lies between them, since there would be numbers between the two that do not have any simple pattern or representation. Hence there is no simple linear progression where the possibility of physical representation is directly proportional to the size of a number. Van Bendegem attempts to deal with this by defining a specific physical representation as follows:
I can write down the number 1 on paper in the decimal system.
If I can write down the number n, then I can also write down the number n + 1 on paper in the decimal system.
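The earlier point, that some huge numbers have compact representations while their full decimal expansions could never be written down, can be illustrated with a sketch (Python; the particular number is an arbitrary example):

```python
# A compactly defined huge number: 10**(10**100), i.e. a one followed by
# 10**100 zeros. Its defining expression is only 13 characters long, yet its
# decimal expansion has 10**100 + 1 digits - far more symbols than could
# ever be physically written down.
expression = "10**(10**100)"
decimal_digit_count = 10**100 + 1  # computed from the definition, not by expansion

assert len(expression) == 13
# More digits than the commonly cited estimate of about 10**80 atoms
# in the observable universe.
assert decimal_digit_count > 10**80
```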
Van Bendegem admits that: “Of course, your average classical mathematician can reply that this precisely shows that there is no room in mathematics for properties such as ‘to write down the number n on paper in the decimal system.’ In all honesty, I have nothing against this.” But while he claims to have nothing against it, what he says later belies this; he goes on to state that there is a gray area between numbers that can definitely be written down, and those that definitely cannot, and claims: “If statements about numbers in the gray zone are indetermined, then nothing much can be said about them in mathematical terms anyway. In other words: these numbers are not useful.”
But in saying this, he is simply replacing the original statements (P1) and (P2) by:
The number 1 is mathematically useful.
If the number n is mathematically useful, then the number n + 1 is also mathematically useful,
and then surreptitiously stating that, well, actually, there is a limiting number beyond which (P2) does not apply, so that a third statement (P3) also applies:
There is a number m such that for all n > m, the number n is not mathematically useful.
Here Van Bendegem’s argument is laughable. He refers to a gray area given by the first definition, which admits of any physical representation. Then he confuses the issue by referring to representations that are only decimal representations. But this is a red herring, since we do in fact know of precise definitions of numbers that have been used in mathematical contexts (e.g., Graham’s number), and where it has been calculated that it would be impossible to represent them in standard decimal notation in our universe. Such numbers are far beyond any possible decimal representation, yet they are mathematically ‘useful’; few mathematicians would be prepared to claim that such numbers are mathematically useless. At this point Van Bendegem has thrown logic out of the window and supplanted it with nonsense.
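Graham’s number itself is far too large to compute with, but the point about precisely defined numbers that cannot be written in decimal can be illustrated with a vastly smaller stand-in (a Python sketch; 2**(2**100) is my own arbitrary example, not a number from Van Bendegem’s paper):

```python
import math

# 2**(2**100) is precisely defined by a short expression, yet its decimal
# expansion has floor(2**100 * log10(2)) + 1 digits - around 3.8 * 10**29
# of them, which is impossible to write out in full.
digits = math.floor(2**100 * math.log10(2)) + 1

# More digits than could ever be physically written down, yet the number
# itself is precisely defined.
assert digits > 10**29
```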
There are several papers on this subject: for example, Wang’s Paradox (Footnote: Michael Dummett, Wang’s paradox, Synthese 30 (1975) 3-4 pp. 301-324. Online at Wang’s Paradox.) by Michael Dummett argues against Strict Finitism, while in Strict Finitism Refuted? (Footnote: Ofra Magidor, Strict finitism refuted?, Proceedings of the Aristotelian Society vol.107 (2007) No.1 pt 3 pp. 403-411. Online at Strict Finitism Refuted?) Ofra Magidor disputes Dummett’s argument.