Logic and Language

Copyright © James R Meyer 2012 - 2017 www.jamesrmeyer.com

Lebesgue measure is a theory that arose from the concept of a “real number line”. Mathematicians began to contemplate what it meant to refer to distances between points on such a line in the case of sets of points with rather involved definitions. But before we go into that, let’s talk about the concept of a “real number line”.

Many of the mathematical concepts that are used today had origins that were based on Platonist beliefs - beliefs that mathematical things ‘exist’ - as real as physical things, but in some non-physical way (whatever that might mean). And mathematicians noticed that you could have a concept of a “real number line”, where given any real number, you could have a corresponding point on your “real number line”.

And, being Platonists, they assumed that such a “real number line” actually exists as a mathematical object, and is composed of an accumulation of points. This was a fundamental error. The reality is that the notion of a real number line is a notion that is inherently a fractal, where no matter how close one zooms in, the line always looks the same. It may be a simple one-dimensional fractal, but a fractal it is, and that means that there never is a situation where the fractality ends and - behold - you then have a solid line where you cannot fit in any more points.

And so there cannot be an actual sequence of all the real numbers between any two values (such as 0 and 1) where every number is set in order according to its value, since for any real number, there is no ‘next’ number. Similarly, there cannot be an actual sequence of points that somehow make an actual line. Moreover, by the very definition of a point, a point has no length or width, so that it is impossible for a collection of points to constitute a line.
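The density claim here - that for any real number there is no ‘next’ number - is easy to illustrate: between any two distinct numbers there is always, for example, their midpoint. A minimal sketch using exact rational arithmetic:

```python
from fractions import Fraction

def between(a, b):
    """Return a number strictly between a and b (for a < b):
    evidence that no number has an immediate 'next' neighbour."""
    return (a + b) / 2

# However close we squeeze b towards a, there is always
# another number strictly in between.
a, b = Fraction(0), Fraction(1)
for _ in range(5):
    b = between(a, b)
    print(b)   # 1/2, 1/4, 1/8, 1/16, 1/32
```

The same argument applies at every scale, which is exactly the ‘fractal’ character described above.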

But when you define a line where one end corresponds to 0 and the other end corresponds to 1, you should recognize that you are only defining a concept, not describing any actual thing. And since there is no limit to how many real numbers you can have between 0 and 1, there is similarly no limit to the number of points you can define on this line - but you never actually reach a state where the line is ‘filled’ with points.

This is in direct opposition to the Platonist stance which insists that all the points on the line ‘exist’ simultaneously, thus constituting an entire continuous “real number line”.

Let’s consider a set of ever decreasing intervals, defined like this:

Take any listing of the rational numbers between 0 and 1. (Footnote: See One-to-one correspondences and Listing the rationals.) Then, going through this list of rational numbers, for the first rational define an interval 1/10 wide with that rational at the midpoint of the interval. For the next rational, define an interval 1/100 wide with that rational at its midpoint, and so on, each defined interval being one tenth the width of the previous one.
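The construction can be sketched in code. The particular listing of the rationals used below (ordering by denominator, then numerator, in lowest terms) is just one possible listing, chosen for illustration, and the widths are taken as 1/10, 1/100, 1/1000, …, each one tenth of the previous (the pattern consistent with the total of 1/9 referred to later):

```python
from fractions import Fraction
from math import gcd

def list_rationals(n):
    """Return the first n rationals strictly between 0 and 1,
    ordered by denominator then numerator (one possible listing)."""
    found = []
    d = 2
    while len(found) < n:
        for p in range(1, d):
            if gcd(p, d) == 1:
                found.append(Fraction(p, d))
                if len(found) == n:
                    break
        d += 1
    return found

# The k-th rational (k = 1, 2, 3, ...) is taken as the midpoint of an
# interval of width 1/10^k; note that both endpoints are rational.
for k, q in enumerate(list_rationals(4), start=1):
    width = Fraction(1, 10 ** k)
    lo, hi = q - width / 2, q + width / 2
    print(f"interval {k}: midpoint {q}, width {width}, from {lo} to {hi}")
```

This is only a finite prefix of the construction, of course; the definition itself never terminates.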

Given this definition, it is easy to show logically that this definition excludes the possibility that *any* point could fail to be included in some defined interval. Since each rational is the midpoint of its defined interval, then both endpoints of that interval are rationals. And since every rational is included in the listing, then each endpoint is itself the midpoint of some interval. Hence every defined interval must overlap some other defined interval - which means that the definition of the recursive decreasing intervals excludes the possibility of any point between 0 and 1 lying outside every defined interval.

But, according to Lebesgue measure theory, there are infinitely many points not covered by any interval! In Lebesgue measure theory, the way to obtain the measure of what is not included in that set of defined ever decreasing intervals is to first assume that the limiting value of the sum of the measures of the defined intervals isn’t actually a limiting value, but an *actual* sum of infinitely many intervals - which gives you a sum of 1/9, and hence a measure of 8/9 for the points supposedly not included in any interval.
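That limiting value is a simple geometric series - the interval widths 1/10, 1/100, 1/1000, … sum towards (1/10) ÷ (1 − 1/10) = 1/9 - which can be checked with exact arithmetic:

```python
from fractions import Fraction

# Partial sums of the interval widths 1/10 + 1/100 + 1/1000 + ...
total = Fraction(0)
for k in range(1, 8):
    total += Fraction(1, 10 ** k)
    print(k, total)

# Geometric series limit: (1/10) / (1 - 1/10) = 1/9
limit = Fraction(1, 10) / (1 - Fraction(1, 10))
print(limit, 1 - limit)   # 1/9 and 8/9
```

Every partial sum falls strictly short of 1/9; the value 1/9 is only ever a limiting value of the sequence of partial sums.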

This is, of course, absurd, since for *every* point between 0 and 1 (as indicated above) there must be a defined interval that includes that point.

But, as is so often the case, Platonists don’t let contradictions get in the way of their beloved and bizarre notions. Which brings us nicely to Lebesgue’s theory of measure.

Lebesgue’s theory of measure is a theory that has to be bolted on to conventional number theory. (Footnote: Note that Lebesgue measure theory has never had any confirmation of any efficacy in relation to any real world application - unlike the conventional use of numbers, which has been used time and time again in real world applications.) The reason this bolting on is necessary is that in conventional number theory, for any two different numbers, there is a numerical value that is simply the difference between those two numbers, while the difference between a number and itself is precisely zero. But when you have the concept of a “real number line”, the notion of an interval corresponds to the notion of the difference between two numbers. And what people refer to as a single point on the real number line corresponds to a single number; this isn’t really an interval, but it is sometimes referred to as a degenerate interval - in which case the measure of such a degenerate interval is precisely zero.

A measure, in its very simplest form, is simply the difference between two real numbers. And one expects that more complex measures would be dependent on multiples of such basic measures. But Lebesgue measure manages to assume that a collection of isolated zeros (each consisting of the difference between a number and itself) can somehow constitute a measure that is greater than zero.

Yes, really! I’m not kidding.

The key assertions in Lebesgue theory are essentially: (Footnote: These are, of course, somewhat simplified here, but the essential facets of the theory are given by this.)

- (A) For any set of isolated points that is listable, the *Lebesgue measure* of that set is zero.
- (B) For a set of non-overlapping intervals, but **only** provided the intervals are denumerable, the *Lebesgue measure* is the sum of the lengths of all of the intervals.
- (C) For a set of numbers between two numbers **a** and **b** that is not made up of either of the two above types, the *Lebesgue measure* cannot be deduced directly, but is given by subtracting the total of the *Lebesgue measures* of the sets of type A and type B from the overall length between **a** and **b**. (Footnote: Note: In order to avoid contradictions when Lebesgue theory is used along with set theory and the axiom of choice, this means there must be sets of points that don’t have any measure - not a zero measure, nor some finite measure, nor an infinite measure - just no measure at all.)
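As a rough illustration of how these three rules combine in practice, here is a simplified sketch only (actual Lebesgue measure is defined via outer measure, not by shortcuts like these; the function names are mine, not standard terminology):

```python
from fractions import Fraction

def measure_listable_points(points):
    # Rule A: any listable set of isolated points gets measure zero.
    return Fraction(0)

def measure_intervals(intervals):
    # Rule B: denumerable non-overlapping intervals: sum of the lengths.
    return sum((hi - lo for lo, hi in intervals), Fraction(0))

def measure_remainder(a, b, point_sets, interval_sets):
    # Rule C: whatever is left of [a, b] gets the overall length
    # minus the measures of the type-A and type-B sets.
    covered = sum((measure_listable_points(s) for s in point_sets), Fraction(0))
    covered += sum((measure_intervals(s) for s in interval_sets), Fraction(0))
    return (b - a) - covered

# Example: the rationals in [0, 1] are listable, so by rule A they get
# measure 0, and the irrationals get measure 1 - 0 = 1 by rule C.
rationals_placeholder = ["a listing of the rationals"]   # stand-in only
print(measure_remainder(Fraction(0), Fraction(1), [rationals_placeholder], []))
```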

The Lebesgue theory of measure is based around the requirement that if an interval is split into two sets of points, then the *Lebesgue measures* of the two sets must always sum to the total length of the interval. Now, while it might be nice to have that requirement satisfied, the Lebesgue method of doing so comes at a high price. The downsides are many. One downside is that it is never explained how a collection of infinitely many zeros (the measures of single isolated points) can add up to a finite non-zero value.

But the principal downside is that it leads to a direct contradiction - as in the case described above of ever decreasing intervals.

The problems arise because of a failure to acknowledge that some definitions involve limitlessness, such as the recursive algorithm defined above, which never terminates. Even though a definition involves limitlessness, what you can do is apply a limiting condition. But you must be careful. If there is a choice of limiting conditions that can be applied, then you must be sure to choose the one that corresponds to whatever aspect of the limitlessness you are attempting to calculate a limiting value for. In the case of the ever decreasing intervals described above, you can either:

(i) calculate a limiting condition for the total length of the intervals, without including any consideration of the relationships between the endpoints of the intervals,

or

(ii) calculate a limiting condition for the totality of points that are in the set of points given by all defined intervals, without including any consideration of the actual lengths of the intervals.

In case (i), you get a value of number theory: a numerical value of 1/9.

In case (ii), you get a value of set theory: the set of points between 0 and 1.

These are two completely different types of values. To assume that the value given by case (i) must also apply to case (ii) is absurd, and indicates a complete failure to understand limitlessness.

If Platonism is correct, then the measure of any set of points must be an intrinsic property of the set - rather than being merely a human invention that is used for certain purposes. And so, if Platonism is correct, then there can only be one correct calculation of the measure of any set of points. Clearly, Lebesgue measure cannot be the correct Platonist theory of measure, since it leads directly to a blatant contradiction. There is no logical reason to suppose that Lebesgue theory is a theory that reflects some Platonist measure that exists independently of the human mind. It follows that there is no reason to promote Lebesgue measure theory as the ‘correct’ theory of measure.
