Logic and Language

Copyright © James R Meyer 2012 - 2018 www.jamesrmeyer.com


Lebesgue measure is a theory that arose from the concept of a “real number line”. Mathematicians began to contemplate what it meant to refer to distances between points on such a line in the case of sets of points with rather involved definitions. But before we go into that, let’s talk about the concept of a “real number line”.

Many of the mathematical concepts that are used today had origins that were based on Platonist beliefs - beliefs that mathematical things ‘exist’ - as real as physical things, but in some non-physical way (whatever that might mean). And mathematicians noticed that you could have a concept of a “real number line”, where given any real number, you could have a corresponding point on your “real number line”.

And, being Platonists, they assumed that such a “real number line” actually exists as a mathematical object, and is composed of an accumulation of points. This was a fundamental error. The reality is that the notion of a real number line is a notion that is inherently a fractal, where no matter how close one zooms in, the line always looks the same. It may be a simple one-dimensional fractal, but a fractal it is, and that means that there never is a situation where the fractality ends and - behold - you then have a solid line where you cannot fit in any more points.

And so there cannot be an actual sequence of all the real numbers between any two values (such as 0 and 1) where every number is set in order according to its value, since for any real number, there is no ‘next’ number. Similarly, there cannot be an actual sequence of points that somehow make an actual line. Moreover, by the very definition of a point, a point has no length or width, so that it is impossible for a collection of points to constitute a line.

But when you define a line where one end corresponds to 0 and the other end corresponds to 1, you are only defining a concept, not describing any actual thing. And since there is no limit to how many real numbers you can have between 0 and 1, there is similarly no limit to the number of points you can define on this line. But you never actually reach a state where the line is ‘filled’ with points.
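This limitlessness can be illustrated with a small sketch (added here for illustration, using Python’s exact `fractions` arithmetic): between any two distinct numbers, their midpoint is a third number strictly between them, so the process of defining new points never terminates.

```python
from fractions import Fraction

def between(a, b):
    """Return a number strictly between a and b (their midpoint)."""
    return (a + b) / 2

# Repeatedly 'zoom in' towards 0: each step yields a new point,
# and the process could continue without limit.
a, b = Fraction(0), Fraction(1)
points = []
for _ in range(5):
    m = between(a, b)
    points.append(m)
    b = m  # repeat on the left half

print([str(p) for p in points])  # ['1/2', '1/4', '1/8', '1/16', '1/32']
```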

This is in direct opposition to the Platonist stance which insists that all the points on the line ‘exist’ simultaneously, thus constituting an entire continuous “real number line”.

First, a couple of definitions:

An **open interval** is an interval that does not include the endpoints that define that interval (for example, the open interval whose endpoints are 0 and 1 includes every number between 0 and 1, but not 0 or 1 themselves).

A *closed interval* is an interval whose endpoints are included in the interval.

Now, let’s consider the definition of a set **A** of ever decreasing intervals:

We start with the closed interval between 0 and 1. Now take a suitable listing (Footnote: See One-to-one correspondences and Listing the rationals.) of the rational numbers between 0 and 1 (for details see below *A specific listing of rational numbers*). Then, going through this list of rational numbers, for the first rational we define an *open* interval of length 1/10 with that rational as its midpoint, for the second rational an open interval of length 1/100, and in general, for the *n*-th rational an open interval of length 1/10^{n} with that rational as its midpoint. The set **A** consists of all points included in at least one of these intervals.
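The construction can be sketched numerically (a rough illustration, assuming, as in the formal definition later on this page, that the *n*-th rational is the midpoint of an open interval of length 1/10^{n}; the short `rationals` list is a hypothetical stand-in for a complete listing):

```python
from fractions import Fraction

# Hypothetical first few terms of some listing of the rationals in (0, 1);
# any complete listing would do for the purposes of this sketch.
rationals = [Fraction(1, 2), Fraction(1, 3), Fraction(2, 3), Fraction(1, 4)]

intervals = []
for n, q in enumerate(rationals, start=1):
    half = Fraction(1, 2 * 10**n)           # half-width 1/(2*10^n)
    intervals.append((q - half, q + half))  # open interval of length 1/10^n

total = sum(right - left for left, right in intervals)
print(total)  # 1111/10000, i.e. 1/10 + 1/100 + 1/1000 + 1/10000
```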

Given this definition, it is easy to show logically that the definition excludes the possibility that *any* point in the closed interval between 0 and 1 is not included in some defined interval. Since each rational is the midpoint of its defined interval, both endpoints of that interval are rationals. And since every rational is included in the listing, each endpoint is itself the midpoint of some interval. Hence every defined interval must overlap some other defined interval - which means that the definition of the recursive decreasing intervals excludes the possibility of any point between 0 and 1 remaining outside every defined interval.

Some people appear to have some difficulty accepting that that has to be the case, so I have added the following:

We started with a closed interval whose endpoints are rational numbers. At every iteration, we remove an open interval whose endpoints are rational numbers. This means that any interval that is left after an iteration must be a closed interval with rational endpoints. (Footnote: Note that a single point is a closed interval with identical endpoints, so if a single point might remain after an iteration, it would have to be a rational point.)

It follows that there cannot be any number not included by some such iteration. For if there were any number remaining, it would have to be within a closed interval with rational endpoints. That is impossible, since those rational endpoints, by the original definition, must be themselves midpoints of some interval that is defined, by the definition, to be in the set **A.**

Note that we can in fact define the set **A** without any reference to iterations, see below A definition without iterations.

But some people still refuse to accept the logic behind this and try to devise various arguments against it. You can see an example of such an argument by a well-known professor at Fallacy by hidden definition.

But, according to Lebesgue measure theory, there remain in the interval 0 to 1 infinitely many isolated points (Footnote: Clearly, there cannot be any intervals that have more than a single point remaining, since for any two points, there is a rational between them.) not covered by any interval of the set **A** at all!

Yes, really!

Besides, the theory claims that although there are infinitely many intervals between these isolated points, the isolated points constitute a ‘bigger’ infinity than that of the intervals - that there are somehow ‘more’ of these points than the intervals between them! And that somehow (although exactly how is never divulged), because there is a ‘bigger’ infinity of these isolated points, they have a total measure of at least 8/9, even though each such isolated point has a measure of precisely zero. Welcome to fantasy land.

In Lebesgue measure theory, the way to obtain the measure of what is not included in that set of defined ever decreasing intervals is to first assume that the measure of the set **A** isn’t actually the limiting value of 1/10 + 1/100 + 1/1000 + … (that is, 1/9), but some smaller value, on account of the overlapping of the intervals - and then to obtain the measure of the remainder by subtracting that value from the total length of 1.

This is, of course, absurd, since all the points that supposedly account for this measure of 8/9 must be isolated, and each point has precisely zero length.
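The arithmetic behind the 1/9 and 8/9 figures quoted here can be checked exactly (a sketch using Python’s `fractions`; the geometric-series formula gives the limiting value of 1/10 + 1/100 + …):

```python
from fractions import Fraction

ratio = Fraction(1, 10)
limit = ratio / (1 - ratio)  # limiting value of 1/10 + 1/100 + ... = 1/9
remainder = 1 - limit        # the measure then assigned to what is left over
print(limit, remainder)      # 1/9 8/9
```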

But, as is so often the case, Platonists don’t let contradictions get in the way of their beloved and bizarre notions. As a hilarious example of how Platonists manage to congratulate themselves on pretending that there isn’t a contradiction involved, see the web-page An apparent inconsistency of Lebesgue measure. As Wilfrid Hodges (Footnote: Wilfrid Hodges, An Editor Recalls Some Hopeless Papers, The Bulletin of Symbolic Logic, Vol 4, Number 1, March 1998.) has remarked (with reference to flawed attempts to attack the diagonal argument): *‘to attack an argument, you must find something wrong in it. Several authors believed that you can avoid [that] by simply doing something else.’* But that is precisely what the protagonists on that web-page do - they attempt to avoid the contradiction by doing something else, rather than finding something wrong with the contradictory statement. While on the one hand it is hilarious, it is also pathetic and sad that they are unable to see that the presence of a contradiction is telling them that there is something fundamentally wrong with their mathematical foundations. They are so sure that there is nothing wrong with the theory of Lebesgue measure that they cannot contemplate the possibility that it might be an inconsistent theory.

This brings us nicely to Lebesgue’s theory of measure.

Lebesgue’s theory of measure is a theory that has to be bolted on to conventional number theory. (Footnote: Note that Lebesgue measure theory has never had any confirmation of any efficacy in relation to any real world application - unlike the conventional use of numbers, which has been used time and time again in real world applications.) The reason for this necessity for bolting on is that in conventional number theory, for any two different numbers, there is a numerical value that is simply the difference between those two numbers, while the difference between a number and itself is precisely zero. But when you have the concept of a “real number line”, the notion of an interval now corresponds to the notion of the difference between two numbers. And what people refer to as a single point on the real number line corresponds to a single number; this isn’t really an interval, but sometimes it is referred to as a degenerate interval - in which case the measure of such a degenerate interval is precisely zero.

A measure, in its very simplest form, is simply the difference between two real numbers. And one expects that more complex measures would be dependent on multiples of such basic measures. But Lebesgue measure manages to assume that a collection of isolated zeros (each consisting of the difference between a number and itself) can somehow constitute a measure that is greater than zero.

Yes, really! I’m not kidding.

The key assertions in Lebesgue theory are essentially: (Footnote: These are, of course, somewhat simplified here, but the essential facets of the theory are given by this.)

- A: For any denumerable set of isolated points, the *Lebesgue measure* of that set is zero.
- B: For a set of non-overlapping intervals - but **only** provided the intervals are denumerable - the *Lebesgue measure* is the sum of the lengths of all of the intervals. (Footnote: It also assumes that there is always a simple summation of the lengths of infinitely many ever-decreasing intervals, which is incorrect, see below Different orders of summation.)
- C: For a set of numbers between two numbers **a** and **b** that is not made up of either of the two above types, the *Lebesgue measure* cannot be deduced directly, but is given by subtracting the total of the *Lebesgue measures* of the sets of type A and B from the overall length between **a** and **b**. (Footnote: Note: In order to avoid contradictions when Lebesgue theory is used along with set theory and the axiom of choice, there must be sets of points that don’t have any measure - not a zero measure, nor some finite measure, nor an infinite measure - just no measure at all. Which means that, when using set theory and the axiom of choice, there are sets of points for which you cannot use the Lebesgue theory of measure to deduce a measure for those sets.)
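The three assertions above can be caricatured in code (a deliberately simplified sketch of rules A, B and C as stated here, not of Lebesgue’s actual outer-measure construction; the function names are illustrative only):

```python
from fractions import Fraction

# Rule A: any denumerable set of isolated points is assigned measure zero.
def measure_isolated_points(points):
    return Fraction(0)

# Rule B: denumerably many non-overlapping intervals: the sum of the lengths.
def measure_intervals(intervals):
    return sum(right - left for left, right in intervals)

# Rule C: whatever remains of [a, b] gets b - a minus the covered measures.
def measure_remainder(a, b, covered_intervals, covered_points):
    return (b - a) - measure_intervals(covered_intervals) \
                   - measure_isolated_points(covered_points)

# Example: two disjoint intervals inside [0, 1]:
covered = [(Fraction(1, 4), Fraction(1, 2)), (Fraction(3, 4), Fraction(1))]
print(measure_intervals(covered))                                # 1/2
print(measure_remainder(Fraction(0), Fraction(1), covered, []))  # 1/2
```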

The Lebesgue theory of measure is based around the requirement that if an interval is split into two sets of points, then the *Lebesgue measures* of the two sets must always sum to the total length of the interval. Now, while it might be nice to have that requirement satisfied, the Lebesgue method of doing so comes at a high price. The downsides are many. One downside is that it is never explained how a collection of infinitely many zeros (the measures of single isolated points) can amount to a finite non-zero value.

But the principal downside is that it leads to a direct contradiction - as in the case described above of ever decreasing intervals.

The problems arise because of a failure to acknowledge that some definitions involve limitlessness, such as the recursive algorithm defined above that never terminates. Now, although such a definition involves limitlessness, what you can do is apply a limiting condition. But you must be careful. If there is a choice of limiting conditions that can be applied, then you must be sure to choose the limiting condition that corresponds to whatever aspect of the limitlessness you are attempting to calculate a limiting value for. In the case of the ever decreasing intervals as described above, you can either:

(i) calculate a limiting condition for the total length of the intervals, without any consideration of the relationships between the endpoints of the intervals, or

(ii) calculate a limiting condition for the totality of points that are in the set of points given by all defined intervals, without any consideration of the actual lengths of the intervals.

In case (i), you get a value of number theory: a numerical value of 1/9.

In case (ii), you get a value of set theory: the set of points between 0 and 1.

These are two completely different types of values. To assume that value (i) must imply value (ii) is absurd, and indicates a complete failure to understand limitlessness.

For a formal paper on some of the problems of calculating the measure of sets that are defined in terms of limitlessness, see On Smith-Volterra-Cantor sets and their measure (PDF).

In the assertion that the set **A** has a measure of 1/9, it is assumed that this is a very simple matter - that one simply adds up the lengths of the intervals as

*R*_{1} − *L*_{1} + *R*_{2} − *L*_{2} + *R*_{3} − *L*_{3} + …

where *L*_{1} and *R*_{1} are the left and right endpoints of the first interval, *L*_{2} and *R*_{2} are the left and right endpoints of the second interval, *L*_{3} and *R*_{3} are the left and right endpoints of the third interval, and so on.

For the special case where each subsequent interval is added so as to ‘touch’ (be adjacent to) the previous one - the endpoints coincide - then we can have:

1/10 + 1/100 + 1/1000 + 1/10000 + …

= (0.10 − 0.00) + (0.11 − 0.10) + (0.111 − 0.110) + (0.1111 − 0.1110) + …

For a finite sum, the left endpoint of one interval coincides with the right endpoint of the previous interval, and so the corresponding endpoint numbers cancel out - in the above, 0.10, 0.11, and 0.111 cancel out, leaving 0.1111 as the correct summation. If the process continues infinitely, the *limiting* value is 0.111… which is equal to 1/9.
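The telescoping cancellation for adjacent intervals can be verified with exact fractions (a sketch using the endpoints of the example):

```python
from fractions import Fraction

# Adjacent intervals: each left endpoint equals the previous right endpoint.
endpoints = [Fraction(0), Fraction(1, 10), Fraction(11, 100),
             Fraction(111, 1000), Fraction(1111, 10000)]

# The sum of (R - L) over consecutive pairs telescopes to last - first:
total = sum(r - l for l, r in zip(endpoints, endpoints[1:]))
print(total == endpoints[-1] - endpoints[0])  # True
print(total)                                  # 1111/10000, i.e. 0.1111
```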

But for the case of intervals that are not adjacent, and where the process continues infinitely, there is not necessarily any simple such summation. It is well known that for an infinite series that has both positive and negative terms, the limiting sum is dependent both on the values and the order in which they appear in the series (see Sums of infinitely many fractions: 1). But there are infinitely many ways in which we can order the addition of the intervals, and in fact, as previously noted, we can define the set **A** in a way that does not specify any order of addition of lengths at all, see below A definition without iterations.
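The order-dependence referred to here is the classical rearrangement behaviour of series with mixed signs. As an illustrative sketch (not from the original text): reordering the alternating harmonic series changes its limit.

```python
import math

# Alternating harmonic series 1 - 1/2 + 1/3 - 1/4 + ... converges to ln(2).
s1 = sum((-1) ** (k + 1) / k for k in range(1, 200001))

# Rearranged (one positive term, then two negative terms) the same terms
# converge to ln(2)/2 instead - same values, different order, different limit.
s2, pos, neg = 0.0, 1, 2
for _ in range(100000):
    s2 += 1 / pos
    s2 -= 1 / neg + 1 / (neg + 2)
    pos += 2
    neg += 4

print(round(s1, 3), round(s2, 3))  # approximately 0.693 and 0.347
```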

Furthermore, the two endpoints of any given interval do not need to appear consecutively in any such ordering. For example, if *L*_{1}, *R*_{1}, *L*_{2}, *R*_{2}, *L*_{3}, *R*_{3}, *L*_{4}, *R*_{4} are in ascending order, then the total length of the intervals (*L*_{1}, *R*_{1}), (*L*_{2}, *R*_{2}), (*L*_{3}, *R*_{3}), (*L*_{4}, *R*_{4}) can be given by
*R*_{4} − *L*_{1} + *R*_{1} − *L*_{2} + *R*_{2} − *L*_{3} + *R*_{3} − *L*_{4}.

The simplistic summation of infinitely many interval lengths overlooks the crucially important fact that there can be any order of summation and subtraction of the endpoints, which can result in different limiting values. The naive assumption that one can always calculate the size of such a set by simply adding the lengths is completely erroneous. Since there are infinitely many different possible orderings, ignoring the fact that different orderings can result in different limiting values is absurd. The assertion that the total length of the set **A** must be 1/9 is an absurdity that should be obliterated from mathematics.

If Platonism is correct, then the measure of any set of points must be an intrinsic property of the set - rather than being merely a human invention that is used for certain purposes. And so, if Platonism is correct, then there can only be one correct calculation of the measure of any set of points. Clearly, Lebesgue measure cannot be the correct Platonist theory of measure, since it leads directly to a blatant contradiction. There is no logical reason to suppose that Lebesgue theory is a theory that reflects some Platonist measure that exists independently of the human mind. It follows that there is no reason to promote Lebesgue measure theory as the ‘correct’ theory of measure.

It should be noted that while the above iterative definition of the set **A** is a fairly informal definition, we can formally define it without any reference to iteration by:

*r* ∈ **A** ⇔ ∃*n* ∈ N (*n* > 0) such that *q*(*n*) − 1/(2·10^{n}) < *r* < *q*(*n*) + 1/(2·10^{n})

where *q*(*n*) is a function that lists the rational numbers between 0 and 1, defined for *n* > 0.
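That formal definition translates directly into a membership test (a sketch; `q` here is a hypothetical stand-in for a listing, and since only finitely many *n* can be checked, a `True` result confirms membership while a `False` result is only provisional):

```python
from fractions import Fraction

def q(n):
    # Hypothetical stand-in for a listing of the rationals in (0, 1);
    # here just a few terms, for illustration only.
    listing = [Fraction(1, 2), Fraction(1, 3), Fraction(2, 3), Fraction(1, 4)]
    return listing[n - 1]

def in_A(r, max_n=4):
    """Check r against the first max_n open intervals of the definition."""
    for n in range(1, max_n + 1):
        half = Fraction(1, 2 * 10**n)
        if q(n) - half < r < q(n) + half:  # strict: open interval
            return True
    return False  # provisional: only finitely many n were checked

print(in_A(Fraction(1, 2)))                    # True (midpoint of interval 1)
print(in_A(Fraction(1, 2) + Fraction(1, 20)))  # False for these first intervals
```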

Any number that satisfies the definition must be in the set **A**. And, as stated above, for any given *q*(*n*) each left and right endpoint is defined as not included in the interval for that *q*(*n*). Hence if there could be any number in the interval 0 to 1 and not in the set **A**, it would have to be in a closed interval whose endpoints are rational numbers – which is impossible - since every rational is the midpoint of some interval.

Some people have suggested that they can circumvent the contradiction by using enumerations (see also One-to-one correspondences and Listing the rationals) of the rationals that are defined in terms of various conditional requirements, which render the enumeration and the sequence of intervals interdependent. Rather than trying to construct a set of rules as to which enumerations are applicable, all that is required is one specific enumeration. We can define that the set **A** is to be given by one specific enumeration using the pattern of rationals:

1/2, 1/3, 1/4, 1/5, 1/6, …

2/3, 2/4, 2/5, 2/6, …

3/4, 3/5, 3/6, …

4/5, 4/6, …

5/6, …

We go through this pattern, leaving out any duplicates, which gives the first terms of the enumeration as

1/2, 1/3, 2/3, 1/4, 3/4, 1/5, 2/5, 3/5, 4/5, 1/6, 5/6, …

Given this enumeration, there are no points in the interval **0** to **1** that are not in the set **A**.

Note that this enumeration follows a pattern in which, for each subsequent denominator, the values run from the lowest to the highest value of the numerator. For every subsequent denominator, this gives a pattern of rationals across the interval **0** to **1** which is mirrored about 1/2. This patterning continues infinitely as the terms progress.
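The enumeration above can be generated directly (a sketch; note that `Fraction` automatically reduces terms such as 2/4 to 1/2, which implements the ‘leaving out any duplicates’ step):

```python
from fractions import Fraction

def enumerate_rationals(max_denominator):
    """List the rationals in (0, 1) by increasing denominator, with
    numerators running from lowest to highest, skipping duplicates."""
    seen, out = set(), []
    for d in range(2, max_denominator + 1):
        for n in range(1, d):
            r = Fraction(n, d)  # reduced automatically, e.g. 2/4 -> 1/2
            if r not in seen:
                seen.add(r)
                out.append(r)
    return out

print([str(r) for r in enumerate_rationals(6)])
# ['1/2', '1/3', '2/3', '1/4', '3/4', '1/5', '2/5', '3/5', '4/5', '1/6', '5/6']
```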
