Logic and Language




The Origins of Transfinite Numbers

Page last updated 02 Jul 2023

 

The origins of transfinite numbers can be seen in Cantor’s definitive account in his major work Über unendliche lineare Punktmannigfaltigkeiten (On infinite linear point-sets), especially in Part 2 and Part 5 (Grundlagen). Cantor’s transfinite numbers come in two varieties, the transfinite ordinals and the transfinite cardinals. Here we will concentrate on Cantor’s transfinite ordinal integers; for an overview of the transfinite cardinal numbers see Cardinal Numbers.

 

In Part 2 of that work Cantor describes what he calls derivatives of sets of points. Cantor’s use of the term derivative here is different to the term as it is used in calculus: the derivative of a set of points S is the set of all limit points of S (informally, a limit point of a set S is a point which has points of S, other than itself, arbitrarily close to it). Given a set S of points, one can define the first derivative set of points S′, and from that, the second derivative set of points S′′, and so on. The derived sets can have certain points in common, and the intersection of all the sets in a given sequence of derived sets is defined as the set that contains every point that is common to all the derived sets in the sequence.
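
As a rough illustration of the definition just given (this is merely an illustrative sketch in Python; the function, the tolerances and the finite sample are my own choices, not anything in Cantor’s text), the following checks numerically that 0 is a limit point of the set S = {1/n : n ≥ 1}, whereas the point 1 is not, so that the first derived set of S would be {0}:

```python
# A rough numerical check (illustrative only): for every tolerance tried, some
# point of S other than the candidate lies within that tolerance of it.
def is_limit_point(candidate, points, tolerances=(1e-1, 1e-3, 1e-5)):
    """Check, up to the finite sample and tolerances given, whether `candidate`
    has points of `points`, other than itself, within every tolerance."""
    return all(
        any(p != candidate and abs(p - candidate) < eps for p in points)
        for eps in tolerances
    )

sample = [1.0 / n for n in range(1, 10**6)]    # finite approximation of S
print(is_limit_point(0.0, sample))             # True: 0 is a limit point of S
print(is_limit_point(1.0, sample))             # False: 1 is an isolated point of S
```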

 

In the following, the details of the derivative operation do not concern us, and so we will simply use the general term ‘operation’.

 

In any such case of repeated operations, each giving a further intersection of the sets that result from those operations, there are two possibilities. Either there is some finite number of repetitions after which a further operation would produce no change in the elements of the intersection of the sets defined up to that point; in that case the process of repeated operations terminates, and there is a well-defined resultant intersection set. Or else there is a change at every repeated operation; in that case there is no such finite number and, ipso facto, no final definitive intersection set, because the set of intersections changes at every repetition.

 

With regard to the latter case, Cantor simply announces that a set “emerges” from the continuing repetitions of the operation, which he denotes by an infinity symbol:

“This point-set R, which emerges from the set P, is denoted by the use of the symbol ω, viz:

P^(ω)

and is called the derivative of P whose order is ω.”

 

Limits and infinity

Cantor makes no mention whatsoever of a limit value. He goes on to state that the specified operation can also be applied to this set P^(ω) and he uses a notation of his own devising to indicate that operation and further such operations. He designates the resultant set of one such operation on the “emergent” set by P^(ω + 1) and for v repetitions on the “emergent” set by P^(ω + v). He provides no reason why one should use this notation rather than continuing with his prior notation, which would be to indicate a repetition of v operations by v prime symbols or by a superscript in brackets, which would give us, for example:

(P^(ω))′′′ or (P^(ω))^(v)

 

Unfortunately, the notation that he does use tends to confuse two concepts - the concept of repeated operations, and the concept that one can simply use numbers to indicate various steps in a sequence. But his chosen method of notation is only of incidental interest here - it is Cantor’s prior glib sidestepping of the question of what he actually means by a set “emerging” from limitlessly many repetitions of an operation that points us in the precise direction in which we need to go.

 

We know that Cantor was well versed in the concept of mathematical limits, and frequently used them. But many of his works, especially his Grundlagen, and particularly its Section 9, show us quite clearly that for Cantor the notion of a limit is only a way of acknowledging the “existence” of a set that simultaneously contains limitlessly many “existing” elements, and which, for him, already has an “existing” Platonist reality, where this Platonist reality has primacy over any notion of a limit. (Footnote: In particular, see this paragraph in Section 9 of Cantor’s Grundlagen where he asserts that irrational numbers
… have a reality in our minds that is just as certain as for the rational numbers …
and that the value of an irrational number
… does not have to be obtained by a limiting process but rather, on the contrary, one can in general be convinced, by such possession of that reality in our minds, of the efficiency and soundness of the limiting processes. )
In several places of his Grundlagen we see that while Cantor understood the notion of a limit, his insistence that an “actual” infinite “exists” meant that he really believed that limitlessly many entities do actually “exist” simultaneously in some sort of Platonist realm. This belief was intimately associated with his deep religious beliefs, see Cantor’s religious beliefs and his transfinite numbers.

 

What Cantor said enables us to see why he used the notation that he did for his repeated operations; it was because, while he accepted that a limit value could be calculated for his “emergent” set, he really believed that this value was actually the result of limitlessly many repeated operations, and that the calculation of a limit value was simply a convenient human way to access the required already “existing” value. (Footnote: Of course, if a limitless sequence of operations does not have a limit value, for example if the values produced by the operations continuously oscillate over two or more values, then ipso facto, there could not be any singular specific value for Cantor’s “infini-th” P^(ω).)

 

In this way Cantor’s beliefs resulted in two primary errors that are still present in mathematical theories widely promulgated today:

  1. The notion that one can have an integer that is greater than any integer in the set of all integers, a set in which there is no limit on how large an integer can be,
    and
  2. The notion that, in general, there can be an intersection of limitlessly many sets.

 

We will deal with these in turn:

 

Error 1: The notion that one can have an integer that is greater than any integer in the set of all integers, a set in which there is no limit on how large an integer can be

The absurdity of this notion should be obvious; it is an oxymoron, since by definition, there is no largest integer of limitlessly many integers, with each one bigger than the previous one. In Section 11 of his Grundlagen, Cantor tries to circumvent this contradiction by claiming that while it might be contradictory to speak of a larger integer within a set which has the first cardinal number, it would not be contradictory if that larger integer instead belonged to a set which is not of the first cardinal number. (Footnote: “ As contradictory as it would be to speak of a largest number in the number-class (I), on the other hand it is not abhorrent to think of a new number … which is to be an expression for the fact that the natural succession of the complete number-class (I) is given according to established rules … It is even permissible to think of the newly created number ω as the limit towards which the numbers v approach, provided nothing else is understood by that.”)

 

This is a claim that is not only totally devoid of logic, but is self-contradictory; the impossibility of an integer that is larger than any integer arises directly from the fact that there is no limit to the size that an integer can be. Note that the claim that ordinals are not integers does not circumvent this contradiction, see the Appendix: Ordinal numbers and Integers below.

 

Cantor’s claim is intimately intertwined with his terminology; for example, ω + 1 supposedly indicates a limitlessly large integer value plus one. The absurdity of the notion of such transfinite numbers is that it deliberately assumes that there can be an ordering of integers where the first transfinite ordinal integer comes “after” the “end” of the finite integers, that is, the claim is that there is a sequence of integers and transfinite integers where the transfinite ones come “after” all the finite ones.

 

Suppose that someone suggested, for example, that one could define a new number by taking a rational number whose decimal expansion is such that at some digit in the expansion, all the subsequent digits are 3, which gives a non-terminating sequence of the digit 3, and then one “adds” the digit 7 after the “end” of the decimal expansion to create a new number. One would not be surprised if the response to such a suggestion was that it was the crackpot ravings of an eccentric crank who might be in need of a psychiatric evaluation. Yet the theory of transfinite ordinals that mathematicians eulogize today is built on this very same principle, with the underlying inanity carefully hidden behind sanitized terminology. It is not surprising that in the 130 years or so since its introduction, transfinite number theory has never had any useful scientific or technological application.

 

How can people believe that there can be any logical underlying foundation for this contradictory notion? Perhaps it arises from a naive intuition that since some quantity is approaching a limit value as the number of iterations increases, then the number of iterations is getting “closer” to infinity. This, of course, is nonsense; one can never be any “closer” to infinity. (Footnote: As Wittgenstein remarked, “Where the nonsense starts is with our habit of thinking of a large number as closer to infinity than a small one.” (Ludwig Wittgenstein, ‘Philosophical Remarks’, Blackwell, 1975, § 138).) It may also be the case that the fallacy arises from some sort of erroneous intuitive association of the values of the integers that represent the iterations and the associated values of the sequence. But the iteration integers are absolute fixed values, whereas the “approaching a limit” describes relative values, and these are two very distinct concepts.

 

When Cantor states in his Grundlagen:

“It is even permissible to think of the newly created number ω as the limit towards which the numbers v approach, provided nothing else is understood by that, so that ω is to be the first integer that follows all numbers v, i.e: is to be called larger than each of the numbers v,”

he is promoting a concept that actually demands the application of a complete misunderstanding of the concept of a limit. A limit is a value that cannot be exceeded, but there is no limit to the upper value of a number, so there cannot be any sort of applicable limit, nor can there be any logical concept that limitlessly many increasing integers approach a limit. Cantor’s attempt to use the concept of a limit to mean precisely the opposite of its real meaning should not fool anyone.

 

This is not to say that there cannot be a symbol to represent the concept of the absence of any limit to the value of any real number, see the appendix below: The symbols ∞ and −∞

 

Error 2: The notion that, in general, there can be an intersection of limitlessly many sets

The fact that an error often does not appear to result in any difficulty can never serve to prove that it can never do so; it only takes one instance to demonstrate that the error is not trivial. The error here is the same error as I have previously referred to regarding the addition of limitlessly many values, see Sums of infinitely many fractions: 1 and Sums of infinitely many fractions: 2. The fact is that, like addition and multiplication, an intersection of more than two sets necessarily involves repeating the operation of intersection in some way; the notion of an intersection of sets is an iterative notion. The common idea that an intersection of more than two sets can somehow simply “exist” without any iteration is never explained; it is simply assumed that there “exists” such an intersection in some Platonist realm. The untoward consequences of this naive intuitive assumption are swept under the carpet, discernible only by the observant.

 

The general case for the intersection of sets is that an iterative process is required: the intersection of two sets must be obtained, then the intersection of that resultant set and a third set is obtained, then the intersection of that resultant set and a fourth set is obtained, and so on. It is certainly not the case that, for every case of intersecting sets, there is a simple definition that can specify every one of the sets to be intersected by changing a few parameters. Of course, cases involving simple definitions are commonly used to justify certain results, but these are not representative of the general case, and assuming a general rule based on specific cases is a classic logical error.
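
As a simple illustration of this iterative view (a minimal sketch in Python; the function name is my own), the intersection of a finite family of sets can be computed by folding the two-set intersection operation across the family, one set at a time:

```python
# A minimal sketch of the iterative view: the intersection of a finite family of
# sets is obtained by repeatedly applying the two-set intersection operation.
from functools import reduce

def iterated_intersection(family):
    """Fold the two-set intersection operation across a sequence of sets."""
    return reduce(lambda acc, s: acc & s, family)

family = [{1, 2, 3, 4}, {2, 3, 4}, {3, 4, 5}]
print(iterated_intersection(family))   # {3, 4}
```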

 

For example, in standard set theory, the intersection of limitlessly many intervals, where each is inside the previous one, is conventionally indicated without any indication of a limit by:

⋂_{n=1}^{∞} [a_n , b_n]

The Nested Intervals theorem of set theory states that the value of this expression is either a closed non-degenerate interval or a single point. The repeated unit incrementation of n never terminates, and the question of what is really defined by ∞ for the upper “value” of the incrementation of n is carefully avoided. (Footnote: Without any application of a limit, there must be an intersection operation that results in the “last” closed interval or the “final” single point. For example, take the sequence of intervals where each nested interval is, say, 0.8 of the width of the previous one. If there could be an actual intersection of infinitely many such intervals that was a closed non-single point interval, then there must also be an interval enclosing it that is 1.25 times its width, and those two must be the “final” two sets of all the intersections. But of course, that is impossible, since the definition itself means that there is no last intersection, and there are limitlessly many iterations of the intersection operations. On the other hand, if the resultant interval could be a single point, then since it has no width there cannot be any previous interval from which it can be generated, since 1.25 times zero is also zero.) But given a specific case such as

⋂_{n=1}^{∞} [ −1/n , 1/n ]

we can see that up to any given value of n, the result is straightforward, and it simply gives a non-degenerate interval whose endpoints are the points −1/n and 1/n. And we can also see that all sets for all values less than any given n are redundant and make no difference whatsoever to the final result. Hence we might argue that we can ignore all sets for all finite n, which leaves us with the result of [ −1/∞ , 1/∞ ] and of course, this requires the application of a limit to give the result of [0, 0] or simply [0]. On the other hand, we could also argue that for all finite n, there exists some m > n such that [ −1/m , 1/m ] is within [ −1/n , 1/n ], so for all finite n, there remain elements in the intersection besides zero.
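
The following minimal sketch (my own illustration, in Python) shows this concretely: at any finite stage N the iterated intersection of the closed intervals [ −1/n , 1/n ] is simply the innermost interval [ −1/N , 1/N ], whose endpoints shrink towards 0 but never reach it at any finite stage:

```python
# A minimal sketch of the iterated intersection of the nested closed intervals
# [-1/n, 1/n]: at any finite stage N the result is the innermost interval
# [-1/N, 1/N]; only the application of a limit gives the single point 0.
from fractions import Fraction

def intersect(a, b):
    """Intersection of two closed intervals given as (low, high) pairs."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

for N in (1, 10, 1000):
    result = (Fraction(-1), Fraction(1))
    for n in range(1, N + 1):
        result = intersect(result, (Fraction(-1, n), Fraction(1, n)))
    print(N, result)   # always the closed interval [-1/N, 1/N]
```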

 

This demonstrates that there is no simple answer to the question of what constitutes the intersection of infinitely many sets unless there is a logical application of limits. The claim that limits are not used to obtain results in set theory is not tenable, since it is quite clear that in many cases limits are being used implicitly. Note that some proofs of the Nested Intervals theorem actually use limits explicitly in the proof, e.g. Planetmath Nested Intervals. This bizarre approach where, on the one hand, when it is convenient, all the limitlessly many sets are assumed to simultaneously “exist”, and, on the other hand, when it is convenient, the assumption that they all simultaneously “exist” is ignored and a limit is applied, inevitably results in confusion, and depending on the particular case, can result in contradictions.

 

Limits are not optional

The above gave an example of nested intervals that are defined by the closed intervals [ −1/n , 1/n ], where the endpoints −1/n and 1/n are included in the intervals, and where the limit of the intersection of the intervals is the single point zero, represented as a degenerate closed interval by [0].

 

However, if we now consider the corresponding open intervals ( −1/n , 1/n ), where the endpoints −1/n and 1/n are not included in the intervals, the limit of the recursive intersections is not an interval at all: the limit would be (0, 0), which is not an interval, since 0 cannot both be an endpoint that is not included in the interval and at the same time be within the interval.
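
The contrast with the closed intervals can be seen in another minimal sketch (again my own illustration): the point 0 lies strictly inside every open interval ( −1/n , 1/n ), while any other point is excluded once n is large enough, so the element-wise definition leaves only the single point 0:

```python
# A minimal sketch of the element-wise view for the open intervals (-1/n, 1/n):
# 0 lies strictly inside every interval, while any other point x falls outside
# every interval with n large enough (n > 1/|x|).
def in_open_interval(x, n):
    """Is x strictly inside the open interval (-1/n, 1/n)?"""
    return -1.0 / n < x < 1.0 / n

def survives_first_k(x, k):
    """Does x lie in every open interval (-1/n, 1/n) for n = 1..k?"""
    return all(in_open_interval(x, n) for n in range(1, k + 1))

print(survives_first_k(0.0, 10**5))     # True: 0 is inside every interval tested
print(survives_first_k(0.001, 10**5))   # False: excluded once n reaches 1000
```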

 

But by the definition that the intersection of a collection of sets is the set of elements that are common to every set of the collection, without any reference to limits, the result obtained is the same as that for the closed intervals, that is, that the intersection of the nested open intervals is a degenerate closed interval [0]. That definition fails to discern any difference between the two cases. (Footnote: Note that the Nested Intervals theorem of set theory only applies for closed intervals, so it does not assert that there must be a closed non-degenerate interval or a single point for nested open intervals; but neither does it define whether limits should or should not be applied.)

 

Since no two intervals of the sequence overlap each other, the result that the intersection could be an interval with closed endpoints when every interval in the sequence has open endpoints is a direct contradiction. And while the above case might be considered trivial, the fact is, as shown on the pages Understanding Limits and Infinity and Understanding sets of decreasing intervals (and see also Lebesgue Measure), that there are cases where it is impossible to avoid severe contradictions that lead to absurdities unless the limit definition is applied, which demonstrates the fallacy of the notion that the use of limits in such cases can be optional. Suggestions to the contrary, made in the pretense that such contradictions do not exist, are a disgraceful rejection of the basic tenets of mathematics.

 

Unions involving limitlessly many sets

In general the same considerations apply to unions involving limitlessly many sets, though there are some special cases where it can appear that a result is obtained without any limit. An example of such a case is a union involving infinitely many sets, where there is only one limit set (by the term limit set here we mean the set that is the limit of the sets of the relevant sequence; an example is given below). In such cases, a result can be generated by subtracting that limit set from the set that is the limit of the union of all the sets of the sequence.

 

This gives a result that is in accordance with the intuitive notion of the union of infinitely many sets. However, it is most unfortunate that such special cases have been assumed to be indicative of the general case, an assumption which is most definitely incorrect; it is worth noting that some of the worst errors in the history of mathematics have resulted from the erroneous assumption that a special case represents the general case.

 

A simple example of such a special case is the union involving infinitely many sets, where each set is associated with some natural number n, n ≠ 0, and whose elements r satisfy the condition 1/(n + 1) ≤ r ≤ 1/n for that natural number n. This gives the sequence of closed intervals of the form [ 1/(n + 1) , 1/n ].

 

Conventionally, the union is assumed to be the set of all points between 0 and 1, including 1 but not 0, i.e., the semi-closed interval (0, 1].

 

On the other hand, the limit of the union of such sets is the set of all points between 0 and 1, including both 1 and 0, i.e., the closed interval [0, 1]. This follows since the limit of both 1/(n + 1) and 1/n is 0, and in this case we have a single limit set, which is the degenerate single point interval that is the point 0. The union of infinitely many sets is such that the union of n sets approaches the limit set as n increases, but never reaches it - otherwise there would be some iteration n at which that limit set was reached, contradicting the fact that there is no limit on the number of iterations with an associated change of the resultant union set.
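
A minimal sketch (my own illustration, in Python) makes the situation at finite stages explicit: since consecutive intervals share an endpoint, the union of the first N intervals merges into the single closed interval [ 1/(N + 1) , 1 ], whose left endpoint decreases towards 0 but is never 0 at any finite stage:

```python
# A minimal sketch of the union of the closed intervals [1/(n+1), 1/n]:
# consecutive intervals share an endpoint, so at any finite stage N the union
# merges into the single closed interval [1/(N+1), 1].
from fractions import Fraction

def union_up_to(N):
    """Merge the closed intervals [1/(n+1), 1/n] for n = 1..N, listed left to right."""
    intervals = [(Fraction(1, n + 1), Fraction(1, n)) for n in range(N, 0, -1)]
    merged = [intervals[0]]
    for lo, hi in intervals[1:]:
        last_lo, last_hi = merged[-1]
        if lo <= last_hi:                     # touching or overlapping: merge
            merged[-1] = (last_lo, max(last_hi, hi))
        else:
            merged.append((lo, hi))
    return merged

for N in (1, 10, 1000):
    print(N, union_up_to(N))   # a single interval [1/(N+1), 1]; left end shrinks towards 0
```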

 

The conventional account simply ignores the inherent contradiction in the claim that a union of intervals where:

  • every interval is left-end closed, and
  • every interval has a very definite left-end point which is within the interval, and
  • every such left-end point of any such interval has a very definite specific real number difference to the left-end points of the previous and subsequent sets of the sequence

is an interval that is:

  • left-end open, and
  • has no definitive left-end point that is within the interval.

 

In such cases, the conventional result is not obtained by the limitless repetitions of union operations; the points in the resultant set are not given by repeatedly adding the next set between 1/(n + 1) and 1/n for the next n, but by an implicit consideration of the limit of the union of such sets as n increases. Once one has the value of that limit set, one can always subtract a single point from the set, be it 0 or any other point.

 

If we suppose that no limit is implicitly invoked, then the alternative is that the union consists of all the sets between 1/(n + 1) and 1/n for all n. But for every such interval, there is always an interval between 0 and the smaller endpoint 1/(n + 1) of that interval, and ipso facto, for every such interval, there remain points in the interval between 0 and the smaller endpoint 1/(n + 1) of that interval. (Footnote: Between any two different rational numbers there is at least one irrational number; in fact, between any two different real numbers there are always infinitely many real numbers. And for every endpoint 1/(n + 1) there is an irrational number that is greater than zero but less than that 1/(n + 1). Hence without the application of a limit, there always remain points greater than zero and less than every 1/(n + 1) that cannot be in the supposed union set.) Since there is no “final” n, there always remain points greater than 0 that are not in the union so defined, unless a limit is applied.

 

This is a subtle point, which is why the implicit invocation of a limit in such special cases is not usually recognized. But in any case, as already noted, one should never assume that a generalization follows from a special case.

 

 

Transfinite numbers and today’s mathematics

The errors indicated above are errors that are the direct result of misunderstandings on the part of Georg Cantor. Some of these difficulties can be attributed to the fact that mathematicians at the time were to a large extent groping in the dark, without having any fully coherent picture of what they were trying to pin down.

 

But the problem that we have today is that there seems to be no attempt whatsoever to move forward away from such misunderstandings. Instead of mathematicians continually striving to refine and improve mathematics by a progressive elimination of the causes of contradictions, they seem to want to see mathematics fossilized at a point in time, like a species for which evolution has stopped.

 

Why?

 

It seems to be the case that in many spheres, once a set of initial tenets is taken as fundamental truths, most of its proponents will brush aside any consequential problems, and twist the facts until they fit into a picture that, for them, seems to accord with their fundamental tenets.

 

We see this in many religions, in young earth creationism, in cults, but why do we also see it in mathematics, a subject of study that surely should be striving to be as logical as possible, rather than relying on 130 year-old fundamental tenets that had their origins in faith-based beliefs rather than reason?

 

 

Cantor’s “Freedom of Mathematics”

When Cantor was trying to get his ideas accepted in the face of opposition from people like Leopold Kronecker, he became a champion for the idea that there should be complete freedom in mathematics, provided that it satisfies certain conditions such as avoiding contradictions. In Section 8 of his Grundlagen Cantor says:

“Mathematics is completely free in its development and is only bound to the self-evident consideration that its concepts are both free of contradictions and that they are in fixed relationships to proven concepts that have already been previously established … every mathematical concept also carries the necessary corrective in itself; if it is sterile or inexpedient, it very soon shows it through its uselessness and it is then dropped because of lack of success. On the other hand, every superfluous constriction of the impulse for mathematical research seems to me to bring with it a much greater danger, and one that is all the greater as no justification can really be drawn for it from the nature of science; for the essence of mathematics lies precisely in its freedom.”

 

What has happened to Cantor’s pleas for freedom in mathematics? Today, can anyone get any analysis of contradictions in today’s set theories published in a mainstream mathematical journal?

 

No.

 

The origins of today’s set theories can be traced back directly to Cantor. But despite Cantor’s admonition that freedom should be paramount in mathematics, today it is forbidden to even suggest that the contradictions that arise in such theories may indicate that there is a problem with such theories. One is not even allowed to call them contradictions; the only allowable term is “paradox”, fostering the pretense that giving something a different name can somehow change its intrinsic character.

 

And so, instead of Cantor’s freedom of mathematical ideas, instead of welcoming attempts to move mathematics forward rather than leaving it stuck in a century-old rut, today’s mathematics stifles new ideas, obstructs any progress, and has become a dull, ubiquitous, monotonous mediocracy whose primary objective is maintaining the status quo. See also Cantor’s invention of transfinite numbers.

 

 

Misleading Histories

Some writers make it seem as though the notion of transfinite numbers arises naturally, even inevitably, from Cantor’s study of trigonometric series, but this is an egregious misrepresentation of the facts. It is true that Cantor’s study of trigonometric series led to his study of sets of points, and that this study of sets of points led to his invention of his transfinite numbers. But readers should be aware that there never was any natural progression from trigonometric series to transfinite numbers.

 

While a casual reading of an article such as Dauben’s “The Trigonometric Background to Georg Cantor’s Theory of Sets” (Footnote: Joseph W. Dauben, “The trigonometric background to Georg Cantor’s theory of sets”, Archive for History of Exact Sciences 7.3 (1971), pp. 181-216.) might lead one to believe his claim that “All of this was to lead, in a natural and direct way, to… [Cantor’s theory of sets]”, the reality, as shown on this page, is that it is transparently obvious that there was nothing natural or inevitable about Cantor’s leap in the dark from sets of points to his notion of numbers that could be bigger than any limitlessly large number. (Footnote: It might be thought that if a study of sets of points leads to the development of a set theory, then that must lead in turn to transfinite numbers, but that is not the case. Cantor’s set theory, as can be clearly discerned in his Grundlagen, is a theory that is based on the completely illogical assumption that the absence of a one-to-one correspondence between two infinite sets indicates that one of the sets with limitlessly many elements must have “more” elements than the other set with limitlessly many elements. The same applies to almost all current set theories, see Proof of more Real numbers than Natural numbers? and Why do people believe weird things?
A study of sets of points that does not make that illogical assumption leads directly and naturally to a theory of sets that does not include any transfinite number anywhere. What is astonishing is that most of today’s mathematicians seem to be content to continue making that illogical assumption without ever questioning why they are doing so.)

 

There is no intrinsic path, and there never was any such path, from trigonometric series to transfinite numbers, waiting to be uncovered by a canny explorer. Cantor’s leap was a leap of faith, not a brilliant discovery of a hidden link; it was a leap of faith prompted by his deep religious convictions, not by any logical considerations.

 

The reality is that there is no valid justification, and there never was any valid justification, for the inclusion of the inane notion of transfinite numbers in mathematics, and no juggling of the facts regarding their origins is ever going to change that.

 

Also see Shaughan Lavine’s “Understanding the Infinite” for an example of a disgraceful misrepresentation of historical facts, one that is also replete with logical fallacies.


Appendix: Ordinal numbers and Integers

Some people attempt to paper over the absurdities by saying that transfinite ordinals are not integers, with the pretense that assigning a different name to a defined entity can somehow erase the contradictions generated by its definition. The fact is that Cantor, the originator of these transfinite ordinal numbers, referred to them both as integers and as ordinals, and I do likewise. See for example Section 2 of Cantor’s Grundlagen where he refers to them as integers - but he also says that while the finite ordinals follow the same rules as integers, with his transfinite integers there are somewhat different rules of addition and multiplication, see Section 3 of his Grundlagen.

 

But the crucial point is that Cantor claimed that these transfinite numbers exist as members of a well-defined ordered sequence, which is of course the fundamental property of natural numbers. (Footnote: Just as is the case for natural numbers, Cantor’s transfinite ordinals can increment or decrement by one or finite multiples of the unit integer - but unlike natural numbers, they can also increment or decrement by a quantity greater than a limitlessly large value, whatever that might mean. ) Calling them ordinals rather than integers does not somehow obliterate the absurdity of the notion that there is a thing/number (Cantor’s ω), regardless of its name, that somehow describes a position “after” the non-existent “end” of a limitlessly continuing sequence of increasing natural numbers.

 

Furthermore, since there is no difference in the properties of all natural numbers and all finite ordinal numbers, then ipso facto, since there is no upper limit to the size of finite natural numbers, there cannot be any upper limit to the size of finite ordinal numbers - in other words, there is no number that is greater than all finite natural numbers, and hence no number that is greater than all finite ordinal numbers. Changing names does not change facts.

 

Note that Cantor’s notions regarding these transfinite ordinal numbers were prompted by his completely unproven assumption that there must be sets with “more” elements than the limitlessly large set of natural numbers, see for example, Proof of more Real numbers than Natural numbers?, a notion that was intimately connected with his religious beliefs, see Cantor’s religious beliefs and his transfinite numbers. (Footnote: Cantor’s Grundlagen demonstrates that he believed that, although the natural numbers cannot set the real numbers into a sequence, there must be other types of numbers that can do so - but there is no proof whatsoever to support that notion. ) Take away Cantor’s contradictory assumptions and the absurdity of the transfinite ordinal numbers becomes ever more apparent. It is astonishing that these absurd beliefs are still held today by people who call themselves logicians.


 

Appendix: Ordinal numbers as Sets

Every finite number has a finite successor, but transfinite ordinal theory is an exercise in pretending that this isn’t the case. And in conventional set theory, all numbers are sets, and so ordinal numbers are also sets. So, we might suppose that if an ordinal is the set of all the preceding ordinals, then, for example, where # represents the first ordinal, we would have:

1st = #

2nd = (#)

3rd = (#, (#))

4th = (#, (#), (#, (#)))
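
A minimal sketch of this construction (my own illustration, using Python tuples in place of the ( ) notation above and the string "#" for the first ordinal): each finite ordinal is the collection of all the preceding ordinals, and however many times the successor operation is applied, it only ever produces further finite ordinals:

```python
# A minimal sketch of the finite ordinals as collections of all preceding ordinals.
# Every application of successor() yields another finite ordinal; no number of
# applications produces anything like the disputed ω.
def successor(ordinal):
    """The next ordinal: the elements of `ordinal` together with `ordinal` itself."""
    if ordinal == "#":
        return ("#",)
    return ordinal + (ordinal,)

first = "#"
second = successor(first)    # ('#',)
third = successor(second)    # ('#', ('#',))
fourth = successor(third)    # ('#', ('#',), ('#', ('#',)))
print(fourth)
```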

There’s no problem with using this definition for finite ordinals. The problem only arises when it is assumed that there can logically be an entity:

ω = (#, (#), (#, (#)), (#, (#), (#, (#))), …) where ω is also an ordinal.

 

Clearly, any such set ω cannot have only finitely many elements, which leaves the possibility that it could have infinitely many elements. Now, every ordinal contains as an element the ordinal that is its immediate predecessor, apart from the initial ordinal.

 

In the above ω is supposedly a set that contains an infinite quantity of ordinals, where it is also the case, since it must itself be an ordinal, that for every ordinal in the set ω, except the initial ordinal, there is a predecessor. It is also the case that every finite ordinal must be an element of ω. But here’s the rub - by definition, every finite ordinal has a successor that is also a finite ordinal - and since every finite ordinal has a finite successor, there cannot be a successor ordinal that is not finite. Hence there can be no ω. In other words, the set of all finite ordinals cannot itself be an ordinal, since every ordinal other than the initial ordinal must have a predecessor which is an element of that ordinal, and for that set there is no such predecessor.

 

The symbols ∞ and −∞

While there is no limit to the size (positive or negative) of any integer or real number, the symbols for plus or minus infinity can be used to represent that absence of any limit, in a similar fashion to the way in which the symbol for zero serves for the concept of absence of any quantity. And in the same way that zero behaves differently to other real numbers in certain cases (such as 0/0 having no definitive value), the symbols ∞ and −∞ also have certain characteristics which mean they also behave differently to all real numbers (for example, ∞ − ∞ has no definitive value).
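
As a loose analogy only (my own, and using IEEE 754 floating point arithmetic rather than the symbols as discussed above), Python’s float infinities show the same sort of behaviour: adding a finite number changes nothing, and expressions such as ∞ − ∞ or ∞/∞ have no definitive value:

```python
# A loose analogy using IEEE 754 floats: infinities behave differently from
# ordinary numbers, and inf - inf or inf / inf have no definitive value (nan).
import math

inf = math.inf
print(inf + 1 == inf)          # True: adding a finite number changes nothing
print(1 / inf)                 # 0.0: the reciprocal of an unbounded quantity
print(inf - inf)               # nan: no definitive value
print(inf / inf)               # nan: no definitive value
print(math.isnan(inf - inf))   # True
```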

 

Note that there is a significant difference between the concepts represented by zero and ±∞:

Given any natural number, one can repeatedly subtract 1 from that number and one must always reach zero - and one can also continue subtracting to produce negative integers. On the other hand, the repetition of additions (or subtractions) of 1 applied to any natural number can never attain anything other than some finite integer value. In other words, while zero can be obtained by addition and subtraction of integers (and also by certain combinations of non-integers), the same cannot be said for ±∞. As such, while zero can be said to have a definitive position in relation to positive and negative real numbers, the same cannot be said for ±∞.

 

One can observe that, if the notion of infinitely large numbers that are “larger” than the “first infinitely large number” has a meaningful and coherent basis, then it might be expected that, as x increases, the limit of 1/x should have a range of different values when x is replaced by “different” “infinitely large numbers”, rather than the limit always simply being the singular value of zero.

 


Copyright   James R Meyer   2012 - 2024
https://www.jamesrmeyer.com