
Lebesgue Measure

Lebesgue measure is a theory that arose from the concept of a “real number line”. Mathematicians began to contemplate what it meant to refer to distances between points on such a line in the case of sets of points with rather involved definitions. But before we go into that, let’s talk about the concept of a “real number line”.


The “real number line”

Many of the mathematical concepts in use today had origins based on Platonist beliefs - beliefs that mathematical things ‘exist’ - as real as physical things, but in some non-physical way (whatever that might mean). And mathematicians noticed that you could have a concept of a “real number line”, where every real number has a corresponding point on that line.


And, being Platonists, they assumed that such a “real number line” actually exists as a mathematical object, and is composed of an accumulation of points. This was a fundamental error. The reality is that the notion of a real number line is inherently fractal: no matter how closely one zooms in, the line always looks the same. It may be a simple one-dimensional fractal, but a fractal it is, and that means that there is never a situation where the fractality ends and - behold - you then have a solid line into which you cannot fit any more points.


And so there cannot be an actual sequence of all the real numbers between any two values (such as 0 and 1) where every number is set in order according to its value, since for any real number, there is no ‘next’ number. Similarly, there cannot be an actual sequence of points that somehow make an actual line. Moreover, by the very definition of a point, a point has no length or width, so that it is impossible for a collection of points to constitute a line.


But when you define a line where one end corresponds to 0 and the other corresponds to 1, you are only defining a concept, not describing any actual thing. And since there is no limit to how many real numbers you can have between 0 and 1, there is similarly no limit to the number of points you can define on this line - but you never actually reach a state where the line is ‘filled’ with points.


This is in direct opposition to the Platonist stance which insists that all the points on the line ‘exist’ simultaneously, thus constituting an entire continuous “real number line”.


A theory of how to measure - or not?

First, a couple of definitions:

An open interval is an interval that does not include the endpoints that define that interval (for example, the open interval whose endpoints are 1/3 and 1/2 is the set of all points between 1/3 and 1/2, but not including the points 1/3 and 1/2).

A closed interval is an interval whose endpoints are included in the interval.


Now, let’s consider a set A of ever decreasing intervals, defined like this:

We start with the closed interval between 0 and 1. Now take a suitable listing (Footnote: See One-to-one correspondences and Listing the rationals.) of the rational numbers between 0 and 1 (for details see below A specific listing of rational numbers). Then, going through this list of rational numbers, for the first rational we define an associated open interval 1/10 wide with that rational at the midpoint of the interval; our set now includes all the numbers in that interval (not including the endpoints). For the next number, define an associated open interval 1/100 wide with that rational at the midpoint of the interval; we add those numbers to our set. For the next number, define an associated open interval 1/1000 wide with that rational at the midpoint of the interval; we add those numbers to our set. And so on, with each subsequent open interval being 1/10 of the length of the previous interval.
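As a purely illustrative sketch of this construction (not part of any formal argument), the first few intervals can be generated in Python. It assumes one possible listing of the rationals in (0, 1) - ordered by denominator, then numerator, skipping duplicates - as a stand-in for whatever listing is chosen:

```python
from fractions import Fraction

def listing(count):
    """One possible listing of the rationals strictly between 0 and 1:
    ordered by denominator, then numerator, skipping duplicates."""
    seen, result, d = set(), [], 2
    while len(result) < count:
        for n in range(1, d):
            q = Fraction(n, d)
            if q not in seen:
                seen.add(q)
                result.append(q)
                if len(result) == count:
                    break
        d += 1
    return result

def interval(n, q):
    """The open interval of width 1/10^n centred on the nth rational q."""
    half = Fraction(1, 2 * 10**n)
    return (q - half, q + half)

for n, q in enumerate(listing(5), start=1):
    left, right = interval(n, q)
    print(f"interval {n}: ({left}, {right}), width {right - left}")
```

Note that each interval is stated exactly as a pair of rational endpoints; the widths 1/10, 1/100, … follow the definition above.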


Given this definition, it is easy to show logically that it excludes the possibility of any point in the closed interval between 0 and 1 not being included in some defined interval. Since each rational is the midpoint of its defined interval, both endpoints of that interval are rationals. And since every rational is included in the listing, each endpoint is itself the midpoint of some interval. Hence every defined interval must overlap some other defined interval - which means that the definition of the recursive decreasing intervals excludes any possibility whatever of a point that is not covered by some defined interval.


Some people appear to have some difficulty accepting that that has to be the case, so I have added the following:


We started with a closed interval whose endpoints are rational numbers. At every iteration, we remove an open interval whose endpoints are rational numbers. This means that any interval that is left after an iteration must be a closed interval with rational endpoints. (Footnote: Note that a single point is a closed interval with identical endpoints, so if a single point might remain after an iteration, it would have to be a rational point.)


It follows that there cannot be any number not included by some such iteration. For if there were any number remaining, it would have to be within a closed interval with rational endpoints. That is impossible, since those rational endpoints, by the original definition, must be themselves midpoints of some interval that is defined, by the definition, to be in the set A.
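The endpoint observation above can be checked mechanically for any particular interval. The sketch below (again assuming the denominator-ordered listing used for illustration) confirms that the endpoints of the first interval, 9/20 and 11/20, are rationals strictly between 0 and 1 and therefore occur somewhere in the listing:

```python
from fractions import Fraction

def rationals_between_0_and_1():
    """Generate the rationals strictly between 0 and 1, ordered by
    denominator, then numerator, skipping duplicates."""
    seen, d = set(), 2
    while True:
        for n in range(1, d):
            q = Fraction(n, d)
            if q not in seen:
                seen.add(q)
                yield q
        d += 1

def index_in_listing(target, limit=1000):
    """Return the 1-based position of target among the first `limit`
    terms of the listing, or None if not found within that range."""
    gen = rationals_between_0_and_1()
    for i in range(1, limit + 1):
        if next(gen) == target:
            return i
    return None

# The first interval is centred on 1/2 with width 1/10,
# so its endpoints are 9/20 and 11/20 - both rational.
print(index_in_listing(Fraction(9, 20)), index_in_listing(Fraction(11, 20)))
```

Each endpoint found in the listing is, by the definition, the midpoint of its own (much narrower) interval.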


Note that we can in fact define the set A without any reference to iterations, see below A definition without iterations.


But some people still refuse to accept the logic behind this and try to devise various arguments against it. You can see an example of such an argument by a well-known professor at Fallacy by hidden definition.


But, according to Lebesgue measure theory, there remain in the interval 0 to 1 infinitely many isolated points (Footnote: Clearly, there cannot be any intervals that have more than a single point remaining, since for any two points, there is a rational between them.) not covered by any interval at all of the set A!


Yes, really!


Besides, it claims that although there are infinitely many intervals between these isolated points, these isolated points constitute a ‘bigger’ infinity than that of the intervals - that there are somehow ‘more’ of these points than the intervals between them! And that somehow (although exactly how is never divulged) because there is a ‘bigger’ infinity of these isolated points, they have a total measure of at least 8/9 even though each such isolated point has a measure of precisely zero. Welcome to fantasy land.


In Lebesgue measure theory, the way to obtain the measure of what is not included in that set of defined ever decreasing intervals is to first assume that the limiting value of the sum of the measures of the defined intervals isn’t actually the limiting value of 1/10 + 1/100 + 1/1000 + 1/10000 + …, but an actual sum of infinitely many intervals - which gives you a sum of 1/9 for the total measure of all of the defined intervals. (Footnote: See Geometric Series for how this is calculated.) (Footnote: Note that this is a maximum value, since if intervals overlap, the limiting sum will be less than 1/9.) Which means that since the length of the original interval (from 0 to 1) is 1, then the remaining length, according to Lebesgue measure, is at least 8/9. And so, according to this theory, there must be points remaining that account for this value 8/9.
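For reference, the limiting value of the geometric series mentioned here is easy to check numerically. This sketch computes exact partial sums with Python’s fractions module, showing that the shortfall from 1/9 after n terms is exactly 1/(9·10^n):

```python
from fractions import Fraction

def partial_sum(n):
    """Exact partial sum 1/10 + 1/100 + ... + 1/10^n."""
    return sum(Fraction(1, 10**k) for k in range(1, n + 1))

# The partial sums approach 1/9; the shortfall after n terms is 1/(9*10^n).
for n in (1, 2, 3, 10):
    print(n, partial_sum(n), Fraction(1, 9) - partial_sum(n))
```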


This is, of course, absurd, since all the points that supposedly account for this measure of 8/9 must be isolated, and each point has precisely zero length. Besides, for every point between 0 and 1 (as indicated above) there must be a defined interval that includes that point.



And it is a simple matter to demonstrate that that method of determining the measure of the set A leads directly to a contradiction. Informally, we can take every interval of the set A and widen it by an arbitrarily small fraction, to give a new set B (this informal notion is fully formalized below).


Since the points that are supposedly not in the set A are isolated points that are separated by intervals that are all of finite non-zero size (Footnote: Given the claim is that there are irrationals that are not in the set A, it follows that any such points must be isolated - between any two irrationals there exists a rational number, and that rational must be covered by an interval. Furthermore, any such points must be separated by one or more non-degenerate intervals, since all the intervals defined by the set A are of non-zero measure.) then any increase in the width of the intervals means that the endpoints of each interval of the set A are covered by the widened intervals. Hence the new set must be the entire interval between 0 and 1 and so its measure is 1. But at the same time each interval is only increased by an arbitrarily small fraction - here we shall use the example of 1/10. Hence, since the set A was claimed to have a measure of no more than 1/9, according to this method, the set B cannot have a measure greater than 11/10 × 1/9, which is clearly much less than 1.
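The arithmetic of the two conflicting values that this argument attributes to the set B can be laid out explicitly. This sketch simply records the numbers involved, exactly as stated above (1/9 as the claimed maximum measure of A, and a widening of each interval by 1/10 of its width):

```python
from fractions import Fraction

measure_A_claimed = Fraction(1, 9)   # claimed (maximum) measure of the set A
widening_factor = Fraction(11, 10)   # each interval widened by 1/10 of its width

# On one hand, the widened intervals are argued to cover all of (0, 1):
measure_B_as_cover = Fraction(1)

# On the other hand, summing the widened lengths gives at most:
measure_B_as_sum = widening_factor * measure_A_claimed

print(measure_B_as_sum)   # 11/90
print(measure_B_as_sum < measure_B_as_cover)
```

That is, 11/10 × 1/9 = 11/90, which is indeed much less than 1.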


We have a blatant contradiction which demonstrates that the above method of determining the measure of the set A is not logically valid. (Footnote: Note that some sort of reference to non-denumerability does not change the outcome - a similar argument can be applied to the denumerable intervals that are in the definition of the set A. Simply widen each interval associated with a rational by say 1/10 of its width. Since the intervals between the irrationals that are alleged to be not in the set A must be comprised of these intervals, then the finite increase in width of every such interval associated with a rational means that every interval between the irrationals that are alleged to be not in the set A also widens by a small finite amount on either side - and so covers the irrationals that are claimed to be not in the set A. This can also be fully formalized.)


Formal definition of the set B:

This definition assumes that there exists a set A defined as above, and that the set A is such that there are irrational points between 0 and 1 that are not in the set A.


[r1 ∉ A ∧ r2 ∉ A ∧ ∀x (r1 < x < r2 ⇒ x ∈ A)] ⇒

[∀y (r1 − p(r2 − r1)/2 < y < r2 + p(r2 − r1)/2 ⇔ y ∈ B)]


In the above, p is the fraction by which the measure of each interval is increased; e.g., p = 1/10 gives that the measure of each interval is increased by 1/10, with each side of the interval extended by 1/20 of its length.


But, as is so often the case, Platonists don’t let contradictions get in the way of their beloved and bizarre notions. As a hilarious example of how Platonists manage to congratulate themselves on pretending that there isn’t a contradiction involved, see the web-page An apparent inconsistency of Lebesgue measure. As Wilfrid Hodges (Footnote: Wilfrid Hodges, An Editor Recalls Some Hopeless Papers, The Bulletin of Symbolic Logic, Vol 4, Number 1, March 1998.) has remarked (with reference to flawed attempts to attack the diagonal argument): ‘to attack an argument, you must find something wrong in it. Several authors believed that you can avoid [that] by simply doing something else.’ But that is precisely what the protagonists on that web-page do - they attempt to avoid the contradiction by doing something other than finding something wrong with the contradictory statement. But while on the one hand it is hilarious, it is also pathetic and sad that they are unable to see that the presence of a contradiction is telling them that there is something fundamentally wrong with their mathematical foundations. They are so sure that there is nothing wrong with the theory of Lebesgue measure that they cannot contemplate the possibility that it might be an inconsistent theory.


This brings us nicely to Lebesgue’s theory of measure.


Lebesgue’s theory of measure

Lebesgue’s theory of measure is a theory that has to be bolted on to conventional number theory. (Footnote: Note that Lebesgue measure theory has never had any confirmation of any efficacy in relation to any real world application - unlike the conventional use of numbers, which has been used time and time again in real world applications.) The reason this bolting on is necessary is that in conventional number theory, for any two different numbers, there is a numerical value that is simply the difference between those two numbers, while the difference between a number and itself is precisely zero. But when you have the concept of a “real number line”, the notion of an interval corresponds to the notion of the difference between two numbers. And what people refer to as a single point on the real number line corresponds to a single number; this isn’t really an interval, but it is sometimes referred to as a degenerate interval - in which case the measure of such a degenerate interval is precisely zero.


A measure, in its very simplest form, is simply the difference between two real numbers. And one expects that more complex measures would be dependent on multiples of such basic measures. But Lebesgue measure manages to assume that a collection of isolated zeros (each consisting of the difference between a number and itself) can somehow constitute a measure that is greater than zero.


Yes, really! I’m not kidding.


The key assertions in Lebesgue theory are essentially: (Footnote: These are, of course, somewhat simplified here, but the essential facets of the theory are given by this.)

  1. For any set of isolated points that is denumerable, the Lebesgue measure of that set is zero.
  2. For a set of non-overlapping intervals, but only provided the intervals are denumerable, the Lebesgue measure is the sum of the lengths of all of the intervals. (Footnote: It also assumes that there is always a simple summation of the lengths of infinitely many ever-decreasing intervals, which is incorrect, see below Different orders of summation.)
  3. For a set of numbers between two numbers a and b that is not made up of either of the two above types, the Lebesgue measure cannot be deduced directly, but is given by subtracting the total of the Lebesgue measures of the sets of types 1 and 2 from the overall length between a and b. (Footnote: Note: In order to avoid contradictions when Lebesgue theory is used along with set theory and the axiom of choice, there must be sets of points that don’t have any measure - not a zero measure, nor some finite measure, nor an infinite measure - just no measure at all. Which means that, when using set theory and the axiom of choice, there are sets of points for which you cannot use the Lebesgue theory of measure to deduce a measure for those sets.)


The Lebesgue theory of measure is based around the requirement that if an interval is split into two sets of points, then the sum of the Lebesgue measures of the two sets must always sum up to the total length of the interval. Now, while it might be nice to have that requirement satisfied, the Lebesgue method of doing so comes at a high price. The downsides are many. One downside is that it is never explained how a collection of infinitely many zeros (the measures of single isolated points) can be a finite non-zero value.


But the principal downside is that it leads to a direct contradiction - as in the case described above of ever decreasing intervals.


The problems arise because of a failure to acknowledge that some definitions involve limitlessness, such as the recursive algorithm defined above that never terminates. Although a definition involves limitlessness, what you can do is apply a limiting condition. But you must be careful. If there is a choice of limiting conditions that can be applied, then you must be sure to choose the limiting condition that corresponds to whatever aspect of the limitlessness you are attempting to calculate a limiting value for. In the case of the ever decreasing intervals as described above, you can either:


(i) calculate a limiting condition for the total length of the intervals, without including any consideration of the relationships between the endpoints of the intervals, or


(ii) calculate a limiting condition for the totality of points that are in the set of points given by all defined intervals, without including any consideration of the actual lengths of the intervals.


In case (i), you get a value of number theory: a numerical value of 1/9.


In case (ii), you get a value of set theory: the set of points between 0 and 1.


These are two completely different types of values. To assume that the value in case (i) must imply the value in case (ii) is absurd, and indicates a complete failure to understand limitlessness.


For a formal paper on some of the problems of calculating the measure of sets that are defined in terms of limitlessness, see On Smith-Volterra-Cantor sets and their measure (PDF).


Different orders of summation

In the assertion that the set A has a measure of 1/9, it is assumed that it is a very simple matter - that one simply adds up an interval of length 1/10, then adds another interval of length 1/100, and so on - and that you can extrapolate that to infinity. But the definition of each of those lengths is dependent on the interval it is associated with - each length is defined by the left endpoint and the right endpoint in each case. So while it is simply asserted that the total length is obtained by adding up infinitely many decreasing fractions, this conceals the fact that the calculation that is actually being defined is:


(R1 − L1) + (R2 − L2) + (R3 − L3) + …


where L1 and R1 are the left and right endpoints of the first interval, L2 and R2 are the left and right endpoints of the second interval, L3 and R3 are the left and right endpoints of the third interval, and so on.


For the special case where each subsequent interval is added so as to ‘touch’ (be adjacent to) the previous one - the endpoints coincide - then we can have:

1/10 + 1/100 + 1/1000 + 1/10000
= (0.10 − 0.00) + (0.11 − 0.10) + (0.111 − 0.110) + (0.1111 − 0.1110)


For a finite sum, the left endpoint of one interval coincides with the right endpoint of the previous interval, and so the corresponding endpoint numbers cancel out - in the above 0.10, 0.11, and 0.111 cancel out, leaving 0.1111 as the correct summation. If the process continues infinitely, the limiting value is 0.111… which is equal to 1/9.


But for the case of intervals that are not adjacent, and where the process continues infinitely, there is not necessarily any simple such summation. It is well known that for an infinite series that has both positive and negative terms, the limiting sum is dependent both on the values and the order in which they appear in the series (see Sums of infinitely many fractions: 1). But there are infinitely many ways in which we can order the addition of the intervals, and in fact, as previously noted, we can define the set A in a way that does not specify any order of addition of lengths at all, see below A definition without iterations.


Furthermore, the two endpoints of any given interval do not need to appear consecutively in any such ordering. For example, if L1, R1, L2, R2, L3, R3, L4, R4 are in ascending order, then the total length of the intervals (L1, R1), (L2, R2), (L3, R3), (L4, R4) can be given by (R4 − L1) + (R3 − L2) + (R2 − L3) + (R1 − L4).
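For a finite collection of intervals, any such re-pairing of endpoints gives the same total, since a finite sum can be rearranged freely (it is only for infinite series with positive and negative terms that the order can affect the limiting value). This sketch checks the finite case with four concrete intervals, whose endpoint values are arbitrary choices for illustration:

```python
from fractions import Fraction

# Four disjoint intervals with endpoints in ascending order:
# L1 < R1 < L2 < R2 < L3 < R3 < L4 < R4
L = [Fraction(0), Fraction(1, 4), Fraction(1, 2), Fraction(3, 4)]
R = [Fraction(1, 8), Fraction(3, 8), Fraction(5, 8), Fraction(7, 8)]

# The usual pairing: (R1 - L1) + (R2 - L2) + (R3 - L3) + (R4 - L4)
usual = sum(r - l for l, r in zip(L, R))

# The re-paired version: (R4 - L1) + (R3 - L2) + (R2 - L3) + (R1 - L4)
repaired = sum(r - l for l, r in zip(L, reversed(R)))

print(usual, repaired, usual == repaired)
```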


The simplistic summation of infinitely many interval lengths overlooks the crucially important fact that there can be any order of summation and subtraction of the endpoints, which can result in different limiting values. The naive assumption that one can always calculate the size of such a set by simply adding the lengths is completely erroneous. Since there are infinitely many different possible orderings, ignoring the fact that different orderings can result in different limiting values is absurd. The assertion that the total length of the set A must be 1/9 is an absurdity that should be obliterated from mathematics.


One correct calculation of measure?

If Platonism is correct, then the measure of any set of points must be an intrinsic property of the set - rather than being merely a human invention that is used for certain purposes. And so, if Platonism is correct, then there can only be one correct calculation of the measure of any set of points. Clearly, Lebesgue measure cannot be the correct Platonist theory of measure, since it leads directly to a blatant contradiction. There is no logical reason to suppose that Lebesgue theory is a theory that reflects some Platonist measure that exists independently of the human mind. It follows that there is no reason to promote Lebesgue measure theory as the ‘correct’ theory of measure.


Also see: Fallacy by hidden definition for absurd arguments that the measure of the set A must be 1/9.


A definition without iterations

It should be noted that while the above iterative definition of the set A is a fairly informal definition, we can formally define it without any reference to iteration by:


r ∈ A  ⇔  ∃n ∈ N [n > 0 ∧ q(n) − 1/(2·10^n) < r < q(n) + 1/(2·10^n)]


where q(n) is a function that lists the rational numbers between 0 and 1, defined for n > 0.


Any number that satisfies the definition must be in the set A. And, as stated above, for any given q(n) each left and right endpoint is defined as not included in the interval for that q(n). Hence if there could be any number in the interval 0 to 1 and not in the set A, it would have to be in a closed interval whose endpoints are rational numbers – which is impossible - since every rational is the midpoint of some interval.
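The non-iterative definition translates directly into a membership check, though a program can only ever examine finitely many n, so a positive result confirms membership in A while a negative result at any finite depth is inconclusive. A sketch, assuming the denominator-ordered listing for q(n) described in the section below on a specific listing:

```python
from fractions import Fraction

def q(n):
    """The nth rational strictly between 0 and 1 (1-based), ordered by
    denominator, then numerator, skipping duplicates."""
    seen, count, d = set(), 0, 2
    while True:
        for num in range(1, d):
            r = Fraction(num, d)
            if r not in seen:
                seen.add(r)
                count += 1
                if count == n:
                    return r
        d += 1

def in_A_up_to(r, max_n):
    """Check whether r lies strictly inside the interval of some q(n)
    for n <= max_n. True confirms r is in A; False is inconclusive."""
    return any(abs(r - q(n)) < Fraction(1, 2 * 10**n)
               for n in range(1, max_n + 1))

print(in_A_up_to(Fraction(1, 2), 1))   # 1/2 is the midpoint of the first interval
# 9/20 is an endpoint of the first interval (so excluded from it); it is
# the midpoint of a much later interval, so a shallow search is inconclusive:
print(in_A_up_to(Fraction(9, 20), 5))
```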


A specific listing of rational numbers

Some people have suggested that they can circumvent the contradiction by using enumerations (see also One-to-one correspondences and Listing the rationals) of the rationals that are defined in terms of various conditional requirements, which render the enumeration and the sequence of intervals interdependent. Rather than trying to construct a set of rules as to which enumerations are applicable, all that is required is one specific enumeration. We can define that the set A is to be given by one specific enumeration using the pattern of rationals:

1/2  1/3  1/4  1/5  1/6
     2/3  2/4  2/5  2/6
          3/4  3/5  3/6
               4/5  4/6

We go through this pattern, leaving out any duplicates, which gives the first terms of the enumeration as

1/2, 1/3, 2/3, 1/4, 3/4, 1/5, 2/5, 3/5, 4/5, 1/6, 5/6, …


Given this enumeration, there are no points in the interval 0 to 1 that are not in the set A.


Note that this enumeration follows a pattern that for each subsequent denominator, the values run from the lowest to the highest value of the numerator. For every subsequent denominator, this gives a pattern of rationals across the interval 0 to 1 which is mirrored about 1/2. This patterning continues infinitely as the terms progress.
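This specific enumeration is easy to generate programmatically. The sketch below reproduces the first terms, skipping duplicates automatically because Python’s Fraction type stores fractions in lowest terms:

```python
from fractions import Fraction

def enumerate_rationals(count):
    """First `count` terms of the enumeration: for each denominator in
    turn, numerators run from lowest to highest, duplicates skipped."""
    seen, terms, d = set(), [], 2
    while len(terms) < count:
        for n in range(1, d):
            q = Fraction(n, d)
            if q not in seen:
                seen.add(q)
                terms.append(q)
                if len(terms) == count:
                    break
        d += 1
    return terms

print(", ".join(str(q) for q in enumerate_rationals(11)))
# 1/2, 1/3, 2/3, 1/4, 3/4, 1/5, 2/5, 3/5, 4/5, 1/6, 5/6
```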








Copyright © James R Meyer 2012 - 2018