J. R. Lucas: Talk on The Implications of Gödel’s Theorem, 1998

This is a copy of a web-page by J. R. Lucas, the text of a talk given to the Sigma Club on February 26th, 1998, on The Implications of Gödel’s Theorem. It was previously publicly available as published by J. R. Lucas on his web-pages before his death. See the sitemap for other online articles by Lucas.

 

Let me start with a disclaimer. I am not going to be primarily concerned with the Gödelian argument against mechanism (Minds, Machines and Gödel), although that is what I am primarily associated with in the public mind. Not that I don’t stand by it. Although there have been many criticisms, some of them ill informed and evidently based on not having read what I had actually written, the critics had a strong tendency to disagree with one another more than they did with me, or later with Roger Penrose. At first I tried to answer critical arguments, but became somewhat bored with having to make the same points over and over again. I often wondered what Gödel himself thought, having a strong sense of his being in sympathy with my general line of argument. In fact he had expressed his views in a lecture (his Gibbs Lecture) given on Boxing Day, 1951, which was finally published in the third volume of his Collected Works in 1995. Gödel argues for a disjunction: an Either/Or, with the strong suggestion that the second disjunct is untenable, and hence by Modus Tollendo Ponens that the first disjunct must be true.

So the following disjunctive conclusion is inevitable: Either mathematics is incompletable in this sense, that its evident axioms can never be comprised in a finite rule, that is to say, the human mind (even within the realm of pure mathematics) infinitely surpasses the powers of any finite machine, or else there exist absolutely unsolvable diophantine problems of the type specified. (Footnote: Kurt Gödel: Collected Works, III, ed. Feferman, Oxford, 1995, p. 310. )

It is clear that Gödel thought the second disjunct false, so that by Modus Tollendo Ponens he was implicitly denying that any Turing machine could emulate the powers of the human mind. (Footnote: It is necessary to stress this point, as many critics try to distance Gödel’s thought from that of Lucas and Penrose. Although there are significant differences between the three approaches, their conclusions are substantially similar. See H.Wang, From Mathematics to Philosophy, London, 1974, pp.324-326; and Reflections on Kurt Gödel, MIT Press, Cambridge, Mass., USA, 1987, p.48, pp.117-118.)

 

Roger Penrose uses not Gödel’s theorem itself but one of its corollaries, Turing’s theorem, which he applies to the whole world-wide activity of all mathematicians put together, and claims that their creative activity cannot be completely accounted for by any algorithm, any set of rigid rules that a Turing machine could be programmed to follow. I used Gödel’s theorem itself, and considered only individuals, reasonably numerate (able to follow and understand Gödel’s theorem) but not professional mathematicians. I did not give a direct argument, but rather a schema, a schema of disproof, whereby any claim that a particular individual could be adequately represented by a Turing machine could be refuted. My version was, designedly, much less formal than the others, partly because I was addressing a not-very-numerate audience, but chiefly because I was not giving a direct disproof, but rather a schema which needed to be adapted to refute the particular case being propounded by the other side. I was trying to convey the spirit of disproof, not to dot every i and cross every t, which might have worked against one claim but would have failed against others. I also chose, in arguing against the thesis that the mind can be represented by a Turing machine, to use Gödel’s theorem, not Turing’s. This was in part because, once again, Gödel’s theorem is easier to get the flavour of than Turing’s theorem, but also because it involves the concept of truth, itself a peculiarly mental concept.

 

Other differences need to be noted. Gödel was a convinced dualist. Penrose is a materialist, but thinks that physics needs to be radically revised in order to accommodate mental phenomena. Quite apart from this, he reckons physics must be developed to account for the phenomenon of the collapse of the psi function in quantum mechanics. He hopes to produce a unified theory which will be both a non-algorithmic theory of quantum collapse and accommodate the phenomenon of mind. I acknowledge the importance of both these problems, but think they are separate. Instead of trying to expand physics in order to have a physical theory of mind, I distinguish sharply two different types of explanation, the regularity explanations we use to explain natural phenomena and the rational explanations we use to justify and explain the actions of rational agents. In that sense I am a dualist. I have difficulty with the full-blooded Cartesian dualism of different sorts of substance, which was, I think, Gödel’s position, and am, as regards substance only a one-and-a-halfist at most. But, as regards explanation, I am at least a dualist, indeed, a many-times-more-than-dualist.

 

Gödel disliked controversy. That may be one reason why he did not publish his Gibbs lecture. But another was that he did not feel the need to refute materialism: he thought it obvious that minds were essentially different from, and irreducible to, matter; why waste effort flogging a dead horse? Looking back on it from the end of the Twentieth Century, I think he may have been right. I came upon the Gödelian argument in my search for a refutation of materialism, and adduced it as a schema of disproof which would get under the skin of even a hard-nosed determinist. But other arguments I appealed to then have now matured, and carry the field. (Footnote: J.R.Lucas, The Freedom of the Will, Oxford, 1970, Section 20, pp.107-113. ) Quantum mechanics seemed indeterminist then, but is now almost incontrovertibly so. I claimed then that there was evidence for the brain being sufficiently sensitive for quantum mechanical fluctuations to have significant effects, but the evidence is better now. And although AI enthusiasts still regard me as a Very Bad man, few of them now think a Turing machine could be programmed to simulate the output of a human mind.

 

Let me therefore turn to the other aspects of Gödel’s theorem which have entirely altered our understanding of mathematics, logic and reasoning generally. Gödel’s great achievement was to produce a water-tight version of the Epimenides paradox. By means of his Gödel numbering, he was able to produce a well-formed formula which expressed a coded proposition to the effect:

 

“This well-formed formula is unprovable-in-Peano-Arithmetic.”

 

It follows, provided Peano Arithmetic is consistent, that if the Gödelian formula were to be provable, it would be true, contrary to what it itself claims. So it must be unprovable, and hence true. Truth is not the same as provability. Truth outruns provability. This is the most fundamental consequence, with wide implications for philosophy; but first let me deal with consequences within mathematical logic.
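
Before turning to those consequences, it may help to give a concrete feel for the coding just mentioned. The following toy sketch in Python is illustrative only: the symbol table is invented, and the prime-power encoding follows the general idea of Gödel numbering rather than Gödel’s own scheme. Each symbol of a formula is given a code, and the whole formula is packed into a single number from which it can be uniquely recovered.

# A toy Goedel numbering: each symbol gets a code, and a formula
# s1 s2 ... sk is encoded as 2^c1 * 3^c2 * 5^c3 * ... (primes in order).
# The symbol table below is invented purely for illustration.
SYMBOLS = {'0': 1, 'S': 2, '+': 3, '*': 4, '=': 5,
           '(': 6, ')': 7, '~': 8, '&': 9, 'A': 10, 'x': 11}

def primes():
    """Yield the primes 2, 3, 5, ... by trial division (fine for toy inputs)."""
    n = 2
    while True:
        if all(n % p for p in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

def goedel_number(formula):
    """Encode a string of symbols as a single natural number."""
    g = 1
    for p, ch in zip(primes(), formula):
        g *= p ** SYMBOLS[ch]
    return g

def decode(g):
    """Recover the symbol string from its Goedel number by factorisation."""
    inverse = {v: k for k, v in SYMBOLS.items()}
    out = []
    for p in primes():
        if g == 1:
            break
        e = 0
        while g % p == 0:
            g //= p
            e += 1
        out.append(inverse[e])
    return ''.join(out)

n = goedel_number('S0=S0')         # the formula "S0 = S0" as a single number
print(n, decode(n) == 'S0=S0')     # the number encodes and recovers the formula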

 

Gödel’s First Theorem shows that any consistent system of first-order logic together with Peano’s postulates (or other postulates sufficient for defining the natural numbers, addition and multiplication) contains a closed well-formed formula which is neither provable nor disprovable. Elementary number theory is syntactically incomplete. Admittedly, the Gödelian formula as constructed by Gödel is very artificial, but there is not just one but infinitely many, since there are infinitely many possible numerical codings of the primitive terms of Peano Arithmetic. There was a possibility that famous unsolved problems - the Four-Colour Theorem, Fermat’s Last Theorem, Goldbach’s Conjecture - might actually be undecidable within Peano Arithmetic. A doctrine of Assurance, preached by Hilbert and Peirce, was no longer available. Although the Four-Colour Theorem and Fermat’s Last Theorem have subsequently been proved, some significant well-formed formulae are undecidable in Peano Arithmetic. (Footnote: J.Paris and L.Harrington, “A Mathematical Incompleteness in Peano Arithmetic”, in J.Barwise, ed., Handbook of Mathematical Logic, Amsterdam, 1977.)

 

When von Neumann heard of Gödel’s theorem, he was lecturing on Hilbert’s Programme; he immediately discontinued those lectures, and expounded Gödel’s work instead, because it had conclusively shown Hilbert’s Programme to be unachievable. Hilbert had hoped to rescue mathematics from the paradoxes of Cantorian Set theory and Transfinite Arithmetic by axiomatizing them, and then showing metamathematically that the axiomatic systems were consistent. Granted a satisfactory consistency proof, a mathematician could continue working in a theory, whatever objections the critics might raise, without fear of inconsistency. But Gödel’s Second Theorem showed that if first-order arithmetic is, indeed, consistent, it is impossible to prove its consistency within first-order arithmetic itself. That is not to say that we cannot prove the consistency of first-order arithmetic: Gentzen did so; but he had to use transfinite induction. And the whole point of Hilbert’s Programme was to secure the reliability of disputed branches of mathematics by proving their consistency by methods so simple as to admit of no dispute. If Cantor’s Transfinite Arithmetic is in issue, it is no good proving consistency by means of transfinite induction. Hilbert’s hope of Sicherheit is necessarily unattainable.

 

Gödel showed that first-order logic was complete, and it had seemed natural to suppose that it was decidable. But it cannot be, or we should be able to decide well-formed formulae of Sorites Arithmetic by expressing them as implications with the first four of Peano’s postulates as the antecedent and the well-formed formula in question as the consequent. Nonetheless we have got a method, albeit a very tedious one, of telling if a well-formed formula is a theorem of first-order logic, if indeed it is one; we just grind out proofs of all the theorems, one by one, until eventually we come upon a proof of the theorem in question. Hence, since the logic is not decidable, there cannot be a comparable method of grinding out all the non-theorems. The theorems are recursively enumerable, the non-theorems are not. This is Church’s Theorem. It is the best illustration of the difference between one-way decidability and two-way decidability. It is further illuminating, because it forces a distinction between general methods and particular insights. We do not have a general method for telling that a well-formed formula in first-order logic is not a theorem, but we might still be able to do so in each individual case, sometimes going outside the resources of first-order logic itself. I like to entertain the idea of a von Neumann machine, which is a black box, with John von Neumann inside: when fed with a problem, he unfailingly comes up with an answer, applying whatever ingenious insight is best suited to the problem. A von Neumann machine might be uniformly successful in detecting non-theorems, but by all sorts of different and devious means, and without at all compromising Church’s Theorem. It has always been one of the paradoxes of our thinking about mathematics that, although it is in fact one of the most difficult of disciplines, we hold that it should be accessible to the meanest intellect. We feel that a proof is valid only if the merest moron - even a Turing machine - should be able to follow it through and be forced to acknowledge its cogency, and we seek algorithms which can similarly be carried out entirely “mechanically”. Gödel’s theorem shows that mathematical insight need not be algorithmic. (Footnote: For many illuminating examples, see Roger Penrose, “On Understanding Understanding”, International Studies in the Philosophy of Science, vol.11, no.1, March 1997, pp.7-20.)
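
The one-way character of this can be made vivid with a toy sketch in Python. The “formal system” below is a deliberately trivial stand-in for first-order logic (a single axiom and two rewrite rules, invented purely for illustration): grinding out theorems one by one will eventually confirm any theorem, but on a non-theorem the grinding simply never ends, which is why the positive test cannot be converted into a two-way decision procedure.

from collections import deque

# A deliberately trivial formal system, standing in for first-order logic:
# a single axiom and two rewrite rules that generate new theorems from old.
AXIOM = 'I'
RULES = [lambda s: s + 'U',        # rule 1: append U
         lambda s: 'M' + s]        # rule 2: prefix M

def enumerate_theorems():
    """Grind out all theorems, breadth-first, one by one (never terminates)."""
    queue, seen = deque([AXIOM]), {AXIOM}
    while queue:
        t = queue.popleft()
        yield t
        for rule in RULES:
            new = rule(t)
            if new not in seen:
                seen.add(new)
                queue.append(new)

def semi_decide(target, max_steps=None):
    """One-way test: halts with True if target is a theorem; otherwise it
    would run forever (here cut off after max_steps, purely for the demo)."""
    for i, t in enumerate(enumerate_theorems()):
        if t == target:
            return True
        if max_steps is not None and i >= max_steps:
            return None    # gave up: this tells us nothing about non-theoremhood
    return None

print(semi_decide('MMIU', max_steps=10_000))   # True: it is eventually generated
print(semi_decide('IUI', max_steps=10_000))    # None: never generated, but the
                                               # procedure itself cannot say so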

 

The Gödelian formula cannot be proved. Hence its negation cannot be disproved. So the conjunction of the negation of the Gödelian formula and Peano Arithmetic is consistent. So, by Gödel’s Completeness theorem, the conjunction of the negation of the Gödelian formula and Peano Arithmetic has a model. But this model is not isomorphic with the model of the conjunction of the Gödelian formula itself and Peano Arithmetic: hence first-order Peano Arithmetic has more than one model, which differ significantly from each other. There are deviant models of Peano Arithmetic in first-order logic. The non-standard models differ from the standard one in what happens over the horizon beyond infinity. They have a concertina effect of lots of sequences, each isomorphic with the integers (of order-type ω* + ω), together having the order-type η of the rational numbers. The details need not bother us. (Footnote: See more fully, G.S.Boolos and R.C.Jeffrey, Computability and Logic, Cambridge, 1974, 1980, 1989, Section 17, pp.193-195. ) What is important is that no first-order characterization of the natural numbers, with addition and multiplication, can characterize them adequately, yet nevertheless we know very well which is the “right” model. This supports mathematical realism. We are acquainted with the natural numbers, because they exist independently of us, and we apprehend them. A partial characterization is enough for us to be able to recognise them, much as with material objects. Goodstein acknowledges the force of this consideration, but resists “neo-realism”, suggesting instead that we are led to make more precise what we meant to mean by natural number in much the same way as a novelist may feel that he has not managed to write the story he had intended to. (Footnote: R.L.Goodstein, “The Significance of Incompleteness Theorems”, British Journal for the Philosophy of Science, 14, 1963-64, p.215. ) Perhaps. But then one of Wittgenstein’s favourite contentions must be abandoned. Rules and meanings have a life of their own, and guide us in definite directions, not leaving us a free choice how they should be developed.

 

It has often been seen as a merit of first-order logic that it can be done algorithmically: it is the logic that computers can do. Gödel’s theorem shows the inadequacy of first-order logic, and leads us, pace Quine, to look further afield, and in particular to recognise the merits of second-order logic, which is not semantically complete and not recursively axiomatizable. Once we have second-order logic, we can characterize the natural numbers categorically and adequately: Peano’s postulates are monomorphic. The reason is that in second-order logic the fifth postulate is expressed:

 

(∀F){[F(0) & (∀m)(F(m) ⇒ F(m′))] ⇒ (∀n)F(n)}

 

with quantification over all predicates F, whereas in first-order logic it is expressed by an axiom schema:

 

[F(0) & (∀m)(F(m) ⇒ F(m′))] ⇒ (∀n)F(n)

 

with the F as a free variable, not bound by any quantifier. If you name any particular F you please, then provided that it is hereditary and holds of 0, it holds of all the natural numbers; but that is a less stringent condition than the second-order one, where it holds of every F, whether you name it or not. Indeed, there can be only a denumerable infinity, ℵ₀, of Fs you might possibly name, whereas there may well be a non-denumerable infinity, ℵ₁, of properties over which the quantifier (∀F) ranges.
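
The contrast can also be seen in a proof assistant, where quantification over predicates is available as a matter of course. The following is a minimal sketch in Lean 4 (an illustration of the second-order formulation only, not of anything further in the talk): induction is stated once and for all, with the predicate F bound by a quantifier, instead of being laid down as a schema to be instantiated separately for each named F.

-- Induction with the predicate F bound: one theorem covers every predicate
-- at once, mirroring the second-order formulation rather than the schema.
theorem induction_over_predicates
    (F : Nat → Prop) (h0 : F 0) (hstep : ∀ m, F m → F (m + 1)) :
    ∀ n, F n := by
  intro n
  induction n with
  | zero => exact h0
  | succ m ih => exact hstep m ih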

 

Second-order logic is able to characterize the natural numbers. If we can avail ourselves of second-order logic, we do not need to posit some platonist acquaintance with the natural numbers in order to distinguish the intended model from the non-standard ones. But second-order logic is, on Quine’s showing, itself platonist. If “to be is to be a value of a variable”, then qualities and relations must exist if we are to be able to quantify over monadic and polyadic predicates. Either way, Gödel’s theorem pushes us towards platonic realism.

 

If truth out-runs provability, Verificationism, and in particular Mathematical Intuitionism, are wrong. Truth cannot be defined in terms of provability, for in any serious intellectual endeavour we shall be able to formulate simple mathematical arguments, and thus shall be subject to Gödel’s incompleteness theorem. However far we go in formalising our canons of proof, we shall be able to devise propositions which are not, according to those canons, provable, but are none the less, true. So it is one thing to be provable, and a different thing to be true. Many modern philosophers have held the opposite. The Logical Positivists of the Vienna Circle proposed the Verification Principle that the meaning of a proposition was its method of verification; Wittgenstein held that meaning was constituted by use, and use has often been construed as “assertibility conditions”. But if truth outruns provability, then the meaning of a proposition claimed to be true cannot be given by its method of verification, and the use of terms is not confined to their assertibility conditions.

 

The Gödelian formula cannot be proved. Hence its negation cannot be disproved. So I can assert its negation without being convicted of inconsistency. But its negation is false. I can assert a falsehood without its being proved to be false. It is possible to be wrong without being provably so. There are claims which cannot be proved or disproved, but which can be true and can be false. I can claim truth, but may be wrong. I am fallible. I make claims which may be false - but, also and importantly, may be true. I may have some warrant for them, but not a watertight guarantee; and though I may be subsequently vindicated and seen to have been right, I may also be found to be wrong and required to withdraw my original claim even though reasonable and well warranted when it was made. I often have to stick my neck out in the pursuit of truth. Nothing Venture, Nothing Win: if I want to have a chance of being right, I have to run the risk of being wrong.

 

If truth can outrun provability, reality can outrun knowledge. Many modern thinkers have thought to evade scepticism by cutting down reality to what can be securely known. Idealists, phenomenalists and mathematical intuitionists reconstrue our statements about reality as being really statements about knowledge, or about sense-data, or about mathematical proofs. But once we can separate the content of a truth-claim from the warrant for asserting it, we can allow that we may be making claims about reality even though we do not know them to be true. Warranted assertibility is one thing, intelligibility is another. I can understand Fermat’s Last Theorem or Goldbach’s Conjecture even though I cannot prove them; and, similarly, speculations about quarks or the beginning of the universe. Verificationist arguments fail. But so also do all-embracing sceptical arguments. For although there is a difference between being provable in some specified system and being true, the truth of the Gödelian formula can be established incontrovertibly. There is no doubt that the Gödelian formula is unprovable-in-the-given-system of first-order arithmetic, and hence that it is true. Thus it is provable, but not in the-given-system of first-order arithmetic. The sceptic is entitled to point out that the claims about reality go beyond the empirical evidence on which they are based, but not to maintain that there can be no other considerations leading us to affirm them. Although at any stage he may legitimately ask the realist to prove his claims, he is not justified in laying down what form a proof must take or in drawing widespread conclusions from the realist’s failing to meet the challenge. In any system there will be propositions we cannot prove, but it does not follow that they are not true - quite the contrary.

 

Quine was a reluctant realist. Hence his suspicion of second-order logic. Ontologically, it was, in his eyes, extravagant. Furthermore, second-order logic lacks various meta-logical features that were seen as merits of first-order logic. The completeness and compactness theorems fail. Whereas in first-order logic every formula that is true under all interpretations is a theorem, and can be proved from the axioms by means of the rules of inference in a finite number of steps, there are in second-order logic many well-formed formulae which are true under all reasonable interpretations but are not provable from the axioms in a finite number of steps by means of the rules of inference. Many logicians have seen this as a defect of second-order logic; but the completeness of first-order logic is only eunuch-completeness; eunuchs can do everything they want to do, but cannot want to do what other men desire, and similarly first-order logic can prove every valid well-formed formula that it can express, but lacks the apparatus to formulate well-formed formulae involving quantification over predicates. But still, it can be urged, second-order logic is not recursively axiomatizable: a computer cannot be programmed to produce from any given axioms, by means of any properly specified rules of inference, all those formulae that are “logically valid”, that is, true under all reasonable interpretations. But, again, this is no demerit. That second-order logic is in this sense not mechanical shows that logic is much more juicy than the Logical Positivists thought. Ayer dismissed the whole of mathematics as just one big tautology. To make out that the propositions of logic are merely analytic is a faceable claim so long as logic is taken to be first-order logic, for then anyone who denied a logically valid proposition could be taken through a proof of it step by step, and forced at some stage into self-contradiction. But once we allow that logic includes second-order logic, the charge of its being merely analytic loses its cogency. Not all logically valid well-formed formulae - those true under all reasonable interpretations - are analytic; some are synthetic, though not depending on experience for their truth, nor known only a posteriori. Synthetic a priori propositions are no longer ruled out. (Footnote: I.M.Copi, The Journal of Philosophy, 46, 1949, pp.243-245; A.R.Turquette, The Journal of Philosophy, 47, 1950, pp.125-129; and I.M.Copi, pp.633-636.)

 

If some logical truths can be synthetic, we may wonder, with Kant, how they can be known. Sense-experience would be an unsuitable warrant, since it is characteristic of sense-experience that it can - logically can - be other than it is. Rather, we are led to affirm logical truths by rational considerations, such as those adduced in the proof of Gödel’s theorem, but ones that cannot be completely codified in a set of canonical rules. Contrary to the teaching of many influential philosophers in the Twentieth Century, being reasonable is not simply a matter of following a rule. Reason outruns rules, just as truth outruns provability. Reason is creative and original. Although any bit of reasoning can be codified into a set of rules, there will always be further exercises of reason, however much we codify existing patterns of rationality, which go beyond the rules and yet are evidently reasonable and right. Reason is a source of knowledge. It is not confined to empty manipulations of experiential contents, or servile machinations towards passionate ends. Although we do well also to attend to actual experience and be guided in our practical thinking by what human beings actually want, we are not confined to those realms, but can assess their deliverances rationally, and perhaps reach new conclusions which go beyond the empirical evidence or transcend the motives and ambitions of contemporary men.

 

Reason is creative and original. It goes beyond antecedently established canons of right reasoning. And it can do so in a personal way, so that one man’s original insight may differ from another’s without either being wrong. Just as different men, using different codings, may pick out different Gödelian formulae, each of them true, so in other disciplines too, different thinkers may develop the subject in their own individual ways without any of them being necessarily wrong. We have been too long in thrall to a monolithic view of reason, supposing that it must yield just one right answer valid for all men in all places and at all times. And then we have felt that reason’s uniform light obliterated all personal idiosyncrasy and individuality, and that real fulfilment was to be found in feeling and sensibility rather than rationality and common sense, and that the life of reason was a poor thing, cold and lacking all romance. But it is a false antithesis resting on a false view of reason. Reason not only can be original, but original in very variegated ways, well capable of accommodating the variety of individual genius.

 

Moral and political philosophy will be different once reason is allowed to regain its ancient sway. Instead of seeking - vainly as it has appeared - for some fundamental principle from which all else follows deductively, we can live with our actual, somewhat piecemeal, approach to moral reasoning and moral reform. Many different arguments, of many different types, may be legitimately adduced, depending on the actual question to be addressed. Although the way Gödel’s theorem was proved follows a somewhat standard route, the upshot of the proof is that reasoning, even mathematical reasoning, comes in all sorts of novel forms, which we cannot anticipate but can recognise as cogent when they are presented to us. Solvitur ambulando: we do not need an exhaustive critique of pure reason, but can be content to hope that we shall continue to recognise good reasons when they are offered us. So, too, in politics, we can be much more relaxed once we are no longer compelled, on a priori grounds, to despair of reason. If reason is nothing, we should be pessimistic about the outcome of political debate, and compelled to view the political process as a crude conflict of particular interests, with the weaker always being trampled on by the strong, and with propaganda being the only way of bringing others to share one’s views. Power-sharing would always, then, be a dangerous weakening of one’s defences, and indoctrination the only means of transmitting one’s values to future generations. Many élites have excluded all others, fearing to let go of the ears of the wolf, and often the bitterest quarrels have been over the control of education. But once reason is acknowledged to have some sway over men’s minds, the case is greatly altered. We can do business with reasonable men, knowing that should we concede the force of their arguments they will not automatically construe our concession as a sign of weakness, but will be the readier in their turn to grant the cogency of good arguments adduced by us. And, while always aware of the importance of education, our attitude will be very different if in the end we rely on the unfettered judgement of educated men rather than the conditioned responses of those whose upbringing we have been able to control. If the Other Side is Wrong, their arguments, however meretricious, will not in the end survive exposure to ours. Magna est Veritas, we shall be justified in believing, et Praevalebit.

 

The thrust of Gödel’s theorem is revolutionary, but in a conservative direction. Many old truths, discarded by modern fashionable philosophers, are rehabilitated; in some cases shown to be definitely true, in others not to be necessarily false. Claims which would rule out traditional tenets and natural patterns of reasoning can be articulated precisely and formally in mathematical logic, and then shown not to hold in one central, incontrovertible instance. The instance is the Achilles’ heel of all wide-ranging metaphysical systems, that is, the consideration of how they fare when assessed according to the precepts they promulgate. Many in that trial are hoist with their own petard. In general, however, it is difficult to get a grip on the argument, because it is inherently self-referential, and self-reference is tricky, and subject to reasonable doubts as to whether the referring expression actually refers. Gödel’s theorem overcomes this difficulty. It overcomes the normal slipperiness of wide-ranging metaphysical arguments by concentrating on a small fragment of mathematics formalised in precise logical terms. A complicated coding system guarantees reference. In most cases that would still be not enough to secure self-reference, but Gödel was dealing with an infinite set of numbers, and it is one of the peculiar features - a defining feature according to Dedekind’s definition of infinity - of infinite sets that they can be mapped by a one-one correlation into a proper subset of themselves. So it is possible for an infinite set to contain within itself a complete model of itself, in a way that is not possible for a finite model (e.g. a model village, containing a model of the model village, itself containing a model of the model village, itself containing …). Once we embrace infinity we open the way to a coding that can code the whole system and yet still be part of it. Infinity - Dedekindian infinity - makes Gödelian self-reference possible, and thus provides an incontrovertible instance in which limited and finitely specified accounts of truth and rationality are evidently inadequate.
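
Dedekind’s criterion is easy to exhibit for the natural numbers. The small Python check below (illustrative only, not anything in the talk itself) shows the correlation n ↦ 2n mapping the naturals one-one into the even numbers, a proper part of themselves, which is the kind of feature the argument above appeals to.

# Dedekind's mark of an infinite set: a one-one map into a proper subset.
# For the natural numbers, n -> 2n sends the whole of N into the even
# numbers, a proper part of N, without ever sending two numbers to one.

def into_proper_subset(n):
    """A one-one correlation of N with its proper subset of even numbers."""
    return 2 * n

sample = range(10)
images = [into_proper_subset(n) for n in sample]
assert len(set(images)) == len(sample)        # one-one on the sample
assert all(m % 2 == 0 for m in images)        # lands inside the even numbers
assert 1 not in images                        # 1 is in N but missed: the subset is proper
print(list(zip(sample, images)))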

Postscript:

from David Miller, University of Warwick. Date: Thu, 14 May 1998 13:30:10 +0100 (BST)

 

Dear John

 

Almost by chance I came across your two talks on the Web. Since they are clearly designed for a general audience, and since I teach this term a short course on the theme The Liar to Gödel I am moved to make a few critical comments, not on the principal thesis but on the details. I hope that you do not mind.

 

Your remark “Gödel showed that first-order logic was complete, and it had seemed natural to suppose that it was decidable. But it cannot be, or we should be able to decide well-formed formulae of Peano Arithmetic by expressing them as implications with Peano’s postulates as the antecedent and the well-formed formula in question as the consequent” seems to assume that PA is finitely axiomatizable. Ryll-Nardzewski showed that it isn’t. So you need to substitute for PA a finitely axiomatizable essentially undecidable theory, such as Robinson’s Q. For first order PA consists of a few trivial axioms (for example, that if x′ = z′ then x = z); the axioms for + and ×; and the induction scheme, which says that for any first order formula F the following

 

(Az){[F0 & (An)(Fn ⇒ Fn′)] ⇒ (An)Fn}

 

is an axiom (where (Az) universally quantifies all the free variables left in F). This is a scheme and Ryll-Nardzewski showed that there is no finite set of first-order sentences equivalent to PA.

 

How do we prove Church’s theorem from Gödel’s theorem for the finitely axiomatized Q? By reductio. (See Boolos & Jeffrey, Chapter 15, Theorem 2.) Suppose that first order logic is decidable. Then for any sentence A of first order arithmetic we can decide whether Q → A is valid (here Q is the conjunction of the axioms of Q), and therefore whether or not A is a theorem of Q. This would make Q a decidable theory. But if we use PA instead of Q we cannot do this. The conditional X → A is not generally defined when X is (the conjunction of) an infinite set (I have one or two interesting results on this), and it seems that we can do no better than use finite conjunctions X of axioms of PA. For any sentence A of PA, we can decide whether the conditional X → A is or is not valid. But since there are infinitely many such X, we do not in this way get a decision procedure for PA. If A does follow from PA, we can establish that it follows from some X; but if it doesn’t, we have no way of knowing how big an X we need.

 

Kind regards -

 

David Miller

 

 

Dear David,

 

Touché. I should have said “SA” for Sorites Arithmetic, which is first-order logic plus the first four of Peano’s postulates, and serves (I hope) as well as Robinson’s Q for representing recursive functions.

 

I will enter suitable corrections. I wonder whether it might be helpful to put a precis of your E-mail into a postscript to the article. It would get the points across, without interrupting the flow of text.
