
Language, Logic and Understanding

We make many decisions during the course of the day. Sometimes these decisions are guided by emotion, sometimes we just rely on a hunch, sometimes we rely on experience, and sometimes we analyze a situation logically and make a decision according to this logical analysis. But very few things in life are easy to analyze in a completely logical way; in most cases, our actual decisions are based on a combination of emotion, experience, and a little bit of logic.

 

However, sometimes we want a conclusion that is not based on any emotion or hunch, but is arrived at purely by means of logical argument. This site is devoted to showing that many results commonly accepted as the product of a completely logical argument are in fact flawed, because they fail to acknowledge the significance of the way language is used in the argument. Unless every aspect of a statement is carefully analyzed with regard to its use of language, an ostensibly logical statement may contain subtle errors that render it illogical. Even fairly innocuous-looking statements can be difficult to analyze, see Natural Language and Reality.

 

Intuition

This site explains how intuitive errors may occur; in most cases it is because insufficient attention has been given to the use of language. If you are visiting this site for the first time, I suggest these pages as suitable starting points:

In principle, a logical argument should never rely on an unstated intuitive assumption. It is well known that intuition can lead to erroneous results, and there are many examples of this having happened. So every logical argument should be carefully examined to ensure that it contains no intuitive assumptions. But there seems to be a blind spot when it comes to the possibility that the way language is used in an argument might affect the validity of the argument. This possibility is commonly dismissed without any justification. But everything that is referred to by a logical argument must be referred to by symbols that belong to some language. Since that is the case, the fact that those symbols belong to some language is an inherent part of the argument, and not something that can simply be ignored.

 

Self-reference

Much of this website deals with the confusion that occurs when levels of language are not clearly delineated. Kurt Gödel set the ball rolling on this in 1931 with his incompleteness theorem, which hides its language confusion under an impressive-looking facade of complexity. Amazingly, it has long been accepted as correct even though Gödel never actually proved the crucial step in his proof, and even though his proof leads to a blatant contradiction, see Gödel’s contradiction. And in the years since then, there seems to have been an alarming increase in the willingness of certain academics to forgo the need for clear, precise, logical proofs of any claim. There are now numerous people who like to call themselves “logicians”, but who are content to simply make a crucial assumption rather than attempt to prove it, and who then proceed to base an entire structure of claims on that assumption. That assumption is that a completely formal language can actually reference itself - that is, that in a completely formal language there can be a sentence that explicitly refers to that entire sentence itself.

 

Despite their self-appellation as “logicians”, that is not logic, and the inane results built on such assumptions are not logical conclusions - they are worthless. For an example of this sort of nonsense, see Halbach and Zhang: Yablo without Gödel.

 

Opinionated?

Most of this site is, naturally enough, based on logical and factual analysis. To provide some contrast, I decided to include some viewpoint-based material here - this is where I get an opportunity to voice my opinion on various matters. Feel free to disagree.

 

25 June 2018    The duplicity of Mark Chu-Carroll

Mark Chu-Carroll is a computer scientist and software engineer. He writes the blog Good Math, Bad Math, which has the headline:

“Good Math, Bad Math: Finding the fun in good math. Squashing bad math and the fools who promote it.”

 

I previously had a great deal of respect for Chu-Carroll, and I even wrote in an earlier blog post that, in general, he doesn’t dismiss anyone as a crank unless he can provide a reasoned explanation as to why they are wrong.

 

But it seems that when he is on the ropes, he plays dirty. I recently posted some comments on one of his blog pages, Why we need formality in mathematics, and when it became evident that my comments were irking him, he resorted to various well-known dishonest debating tricks.

 

One of those tricks was suddenly questioning the meaning of a term whose meaning is perfectly clear, in this case the term “finite representation”.

 

Chu-Carroll asks in a comment:

What do you mean by real numbers that have no finite representation? That’s one of those informal terms that sounds nice, but could mean several different things.

 

But in his previous comment he had used that very term without any quibble, indicating that it was quite clear to him what the term means, saying:

…how can I prove that there are sets without finite representation in set theory? Very easily…

 

And he had used that very term himself in two of his previous blogs, where he makes it very clear what he thinks it means for a number to have or not to have a finite representation - in his blog You can’t even describe most numbers! he says:

The basics are really easy to explain: A describable number is a number for which there is some finite representation. An indescribable number is a number for which there is no finite notation.

 

And he wrote precisely the same thing in another blog You can’t write that number; in fact, you can’t write most numbers.

 

So in my next comment I ask him why he is now asking me what “finite representation” means:

You ask me now?… Is that intended as a joke?

and point out:

… you have described and explained it several times on your blog (do a Google advanced site search for the phrase “finite representation”).

 

In his reply he ignores what I said, and tries to confuse the issue by asking me what the correct definition of “infinite representation” is - a term I never use, since I consider it meaningless - and suggests that I must choose which one of his list of five different definitions of “infinite representation” is “correct”:

Which one?

(1) A number has an infinite representation if its decimal (or binary if you prefer) expansion has an infinite number of digits.

(2) A number has an infinite representation if its expansion in every integral number base has an infinite number of digits.

(3) A number has an infinite representation if its expansion in every integral number base has a non-terminating, non-repeating sequence of digits?

(4) A number has an infinite representation if there is no finite-length program in a recursive computing system that produces its digits.

(5) A number has an infinite representation if there is no way of uniquely identifying the number in a finite amount of space.

 

Chu-Carroll simply assumes that:

not having a finite representation

implies:

having some sort of representation that is not finite

which is an implication that lacks any logical foundation. He assumes that the notion of a number that might “exist” but have only an infinite representation is a valid notion, but I have no idea what an infinite representation might be. So the answer to the question of which of his five choices I would choose is: none of them, since I consider the concept to be meaningless.

 

In the same comment he has the gall to tell me that “you’re not arguing honestly” on the basis that I hadn’t defined the term “finite representation”, even though I had shown that he had a very clear understanding of the term. In my next comment I again point out that Chu-Carroll had himself already used the very term “finite representation” in his blogs, but I provided my own definition anyway:

A real number having a finite representation: there is a definition in a given formal system that can be written down with a finite number of symbols, and which precisely defines the entire expansion of that number (to a given base).

A real number not having a finite representation: there is no definition in any formal system that can be written down with a finite number of symbols, and which precisely defines the entire expansion of that number (to a given base).
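
As a concrete illustration of the first definition (using, purely as my own simplification, a short Python program as a stand-in for a finite definition in a formal system), the finite piece of text below precisely defines the entire decimal expansion of the square root of 2 - a number whose expansion never terminates, yet which clearly has a finite representation:

    # A finite piece of text that defines the entire decimal expansion of sqrt(2).
    # The expansion itself never terminates, but this representation of it is finite.
    def sqrt2_digits(n):
        # Return sqrt(2) truncated to n decimal places, as a digit string
        # (the integer floor of sqrt(2) * 10^n), using only integer arithmetic.
        target = 2 * 10 ** (2 * n)
        lo, hi = 0, 2 * 10 ** n
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if mid * mid <= target:
                lo = mid
            else:
                hi = mid - 1
        return str(lo)

    print(sqrt2_digits(10))  # 14142135623, i.e. 1.4142135623...

The point is simply that having a non-terminating expansion and having no finite representation are not at all the same thing.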

 

Yet, after I provided that definition, he continues in his next comment to berate me for not defining his term “infinite representation”, a term that I do not use:

… the phrase “infinite representation” can have multiple meanings, and I carefully gave you a list of options. (Which, I will note, you ignored and gave your own, less precise definition.)

 

In my next comment, I point out that, contrary to what Chu-Carroll had written:

I never used the term “infinite representation”. That is why I did not define it, and why I ignored your definitions of it. The term I actually used was “finite representation”, whose meaning is perfectly clear to you, as used in your blogs…

 

Chu-Carroll’s reply is:

Your problem in this entire discussion is that you don’t understand any of the things that you’re talking about.

 

And that just about sums up Chu-Carroll’s blatant duplicity - when he finally realizes that he can no longer pretend that he doesn’t know what the term “finite representation” means, he changes tack and resorts to claiming that I don’t understand anything of what we have been discussing, even though it was I who had to spend several comments showing him that the term “finite representation” was easily understandable, even by him.


6 Dec 2017    Man versus Machine

These days there is more and more discussion of artificial intelligence and the changes it will bring to our lives in the future. There are more and more tasks that AI can perform better than any human can. And now we are seeing that AI systems can themselves design new and improved AI systems, see Google’s New AI Designs AI Better Than Humans Could. Elon Musk has spoken about the potential dangers of AI on multiple occasions. Ray Kurzweil, Google’s Director of Engineering, has claimed that by 2029 AI will have human-level intelligence and comfortably pass any valid Turing test, with responses at least on a par with those of any human, regardless of the questions posed to it. He has also claimed that by 2045 a ‘Singularity’ will be reached, where AI is significantly smarter than any human, see Kurzweil Claims That the Singularity Will Happen by 2045. And Masayoshi Son, CEO of Softbank, has predicted that by 2047 a single computer chip will have an IQ equivalent to 10,000, see Softbank CEO: The Singularity Will Happen by 2047.

 

But despite the evidence that AI is progressing by leaps and bounds, there are many Luddites who prefer to believe that human brains/minds are somehow different in principle from any machine, and that human minds/brains will always have some sort of superiority over any AI machine. The two principal claims that these Luddites commonly bring to the table in support of their viewpoint are that it is impossible for any AI to fully understand Gödel’s incompleteness theorem, and that it is impossible for any AI to achieve consciousness. Often the two claims are intertwined.

 

However, despite how frequently these claims are made, it is difficult to find attempts to provide a logically reasoned argument to support them. Perhaps the best in-depth attempts to give some sort of supporting argument for these claims in clear, unambiguous language have been given by Roger Penrose, in his books The Emperor’s New Mind (Footnote: Roger Penrose, The Emperor’s New Mind, Oxford University Press, (1989) ISBN: 0198519737 The Emperor’s New Mind: Details) and Shadows of the Mind. (Footnote: Roger Penrose, Shadows of the Mind, Oxford University Press, (1994) ISBN: 0198539789 Shadows of the Mind: Details) In these books, Penrose claims that no AI can ever surpass human brains in finding mathematical “truths”. He bases his claims on Gödel’s proof of incompleteness, since, according to Penrose, a human can “see” the “truth” of Gödel’s proof of incompleteness, but no formal system can “see” that “truth”. Therefore, according to Penrose, mathematical insight cannot be mechanized, and there can be no AI that can replicate what a human does when “seeing” the “truth” of Gödel’s proof of incompleteness. Hence, claims Penrose, human brains must be inherently superior to any AI system. Penrose totally ignores the possibility that what he thinks he might be “seeing” in Gödel’s proof of incompleteness isn’t any profound truth, but a logical error, see Gödel’s theorem. But of course, he wouldn’t want to contemplate that possibility, as it would utterly destroy his claim that the human mind is superior to any possible AI.

 

Anyway, let’s consider Penrose’s claims. If a person can understand a mathematical proof, where the proof proceeds without making any non-rule-based assumptions other than the fundamental axioms, then for each individual step of the proof, that person must be able to deduce the result of that step from the information preceding that step. Every such deduction is given by a set of clearly defined rules, and so any such deduction can be replicated by an algorithm. This applies to every step of a proof. The totality of all such deductions/algorithms for a given proof can be replicated by one single algorithm that goes through all the steps of the individual algorithms for that proof.
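
To make that point concrete, here is a rough sketch in Python (a toy system of my own with modus ponens as its only rule of inference - it is not intended to represent any system that Penrose discusses) of how a rule-governed deduction step can be checked, and hence replicated, by an algorithm:

    # A toy illustration: each deduction step is governed by a clearly defined rule
    # (here, only modus ponens), so an algorithm can replicate/verify every step.
    # Formulas are nested tuples, e.g. ("->", "P", "Q") stands for P -> Q.

    def modus_ponens(premise, implication):
        # From A and A -> B, conclude B; otherwise the step is not a valid application.
        if isinstance(implication, tuple) and implication[0] == "->" and implication[1] == premise:
            return implication[2]
        return None

    def check_proof(axioms, steps):
        # Each step names two already-established lines; the algorithm reproduces
        # the conclusion of each step and rejects the proof if any step fails.
        established = set(axioms)
        for premise, implication in steps:
            if premise not in established or implication not in established:
                return False
            conclusion = modus_ponens(premise, implication)
            if conclusion is None:
                return False
            established.add(conclusion)
        return True

    # From P and P -> Q, the algorithm reproduces the deduction of Q.
    print(check_proof({"P", ("->", "P", "Q")}, [("P", ("->", "P", "Q"))]))  # True

Nothing in the checking of any individual step requires anything beyond the mechanical application of the rule.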

 

So, if Gödel’s “truth” result that Penrose is able to “see”, but which no AI system can ever “see”, is not given by some such algorithm, it must be the case that what Penrose calls “seeing” the “truth” of a proposition cannot be obtained by any logical deduction - which means it must be obtained by some leap of faith, where an assumption is used instead of a deduction. Penrose tries to camouflage what he is claiming by referring to mathematicians having insights, or intuition. But the crucial point that Penrose ignores is that if a mathematician has some sort of intuition or insight or hunch, his next step should always be to confirm that his hunch was correct, by subsequently proving rigorously what he had previously suspected. Otherwise, mathematics is simply a childish game where no-one knows or cares whether mathematical claims are correct.

 

As Daniel Dennett says, “Whenever we say we solved some problem ‘by intuition’ all that really means is we don’t know how we solved it”. (Footnote: Daniel C Dennett, Darwin’s dangerous idea: Evolution and the meanings of life, first ed 1995, pub Penguin (New ed Sept. 1996) ISBN: 9780140167344.) When Penrose says that he cannot conceive of any algorithm that could enable one to “see” the “truth” of a certain mathematical statement, he is really saying that he doesn’t know how he is able to “see” the “truth” of that statement. What he does suggest is that there might be some quantum process occurring in the brain that accounts for this “seeing” of the “truth”, but that doesn’t even begin to explain how such a process might bypass the need for a logical deduction of all the steps that lead to that “seeing” of the “truth”. It’s a rather desperate argument, invoking a claim that there has to be some magical property that we can’t understand but which can enable us to “see” the “truth”. Penrose’s argument is so absurd that it beggars belief that a competent mathematician could make it; it is an appeal to rely on faith rather than logic and evidence.

 

Penrose also uses a fallacious argument which involves creating a straw man; he argues that there can be no universal infallible method for creating a proof of a mathematical proposition. But no-one is claiming that there is!

 

What Penrose does is to conflate the notion of having a universal infallible method for proving a mathematical proposition with the simple fact that most proofs are obtained by the use of a fallible trial and error method - a method that is in principle reducible to an algorithm. The fact that this method may not be able to provide a proof of every possible mathematical statement is completely and utterly irrelevant. After all, there is no reason to suppose that human insight or intuition can provide a proof of every possible mathematical statement either. Daniel Dennett has written a perspicacious article (Footnote: Daniel C. Dennett, Murmurs in the cathedral: Review of R Penrose, The Emperor’s New Mind, Times Literary Supplement, 29 September, 1989) on Penrose’s book The Emperor’s New Mind, and is not fooled by Penrose’s fallacious posturing. He points out that if we compare Penrose’s argument regarding the discovery of correct mathematical proofs to the notion of achieving checkmate in a chess game, then we end up with this argument:

  • X is superbly capable of achieving checkmate.
  • There is no (practical) algorithm guaranteed to achieve checkmate,
    therefore
  • X does not owe its power to achieve checkmate to an algorithm.

 

The point Dennett makes is that Penrose makes an elementary error in logical deduction - simply because mathematicians sometimes produce correct mathematical proofs, that does not prove that they are not using some sort of algorithm to achieve those results. The evidence that does exist indicates that mathematicians actually use a trial and error method. Many mathematicians will attest that they pursued several false trails before finding the one that led to a proof. Furthermore, there have been many cases of proofs that were published, but later found to be incorrect. (Footnote: For example, a proof of the four color map theorem was published by the prominent mathematician Alfred Kempe in 1879, and it received widespread acclaim. Eleven years later, an error was found in the proof. Another proof of the four color map theorem was published by Peter Guthrie Tait in 1880. Again, it was eleven years before anyone discovered that there was an error in the proof.)
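
And that trial and error method is itself, in principle, an algorithm. The sketch below (again my own toy system, and certainly not a claim about how mathematicians search for proofs in practice) simply enumerates candidate derivations and checks each one mechanically - it is fallible, in that a failed search proves nothing, but it is an algorithm nonetheless:

    from itertools import product

    def modus_ponens(premise, implication):
        # As in the earlier sketch: from A and A -> B, conclude B.
        if isinstance(implication, tuple) and implication[0] == "->" and implication[1] == premise:
            return implication[2]
        return None

    def search_proof(axioms, goal, max_steps=3):
        # Blind trial and error: try every sequence of up to max_steps rule
        # applications whose inputs are drawn from the axioms.
        axioms = list(axioms)
        for length in range(1, max_steps + 1):
            for candidate in product(axioms, repeat=2 * length):
                derived = set()
                for i in range(length):
                    conclusion = modus_ponens(candidate[2 * i], candidate[2 * i + 1])
                    if conclusion is not None:
                        derived.add(conclusion)
                if goal in derived:
                    return candidate   # a proof found by blind search
        return None                    # failure proves nothing: the search is fallible

    print(search_proof({"P", ("->", "P", "Q")}, "Q"))  # finds ('P', ('->', 'P', 'Q'))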

 

Penrose also invokes consciousness to try to shore up his arguments, stating:

I believe, also, that our consciousness is a crucial ingredient in our comprehension of mathematical truth. We must “see” the truth of a mathematical argument to be convinced of its validity. This “seeing” is the very essence of consciousness. It must be present whenever we directly perceive mathematical truth. When we convince ourselves of the validity of Gödel’s theorem we not only “see” it, but by so doing we reveal the very non-algorithmic nature of the “seeing” process itself. (Footnote: From the book, The Emperor’s New Mind, Oxford University Press, (1989) ISBN: 0198519737 The Emperor’s New Mind: Details)

 

But what is consciousness? Part of the problem when trying to counter claims that AI can never have consciousness is that the claimants never define clearly and precisely what they actually mean by the term consciousness. But it seems to me that consciousness is simply the result of a person having in the brain a model of himself and his environment. Having such a model enables the person to make predictions of what is likely to happen in different scenarios. This applies also to other entities in the environment, such as other humans, so that the person can make predictions as to what another person is likely to do in a certain set of circumstances. It’s easy to see why such a model system might have evolved: the better a person can predict what might happen in various scenarios, the better his choice of what to do next.
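
As a very crude sketch of that idea (no more than a toy illustration of prediction from an internal model - it is certainly not offered as a theory of consciousness), an agent that carries a model of itself and its environment can run each available action through that model and choose the action with the best predicted outcome:

    # A toy agent: its internal model maps (state, action) to (predicted next state, payoff).
    def predict(model, action):
        return model["transitions"].get((model["self_state"], action), (model["self_state"], 0))

    def choose_action(model, actions):
        # Run every candidate action through the internal model; pick the best prediction.
        return max(actions, key=lambda action: predict(model, action)[1])

    model = {
        "self_state": "hungry",
        "transitions": {
            ("hungry", "eat"): ("sated", 10),
            ("hungry", "sleep"): ("hungry", -1),
        },
    }
    print(choose_action(model, ["eat", "sleep"]))  # eat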

 

At the moment, we do not have AI systems that have complex models of themselves and their environment. But in principle there is no reason to suppose that such AI systems are impossible. It is easy to imagine AI systems that have very simple models of themselves and their environment. Given that this is the case, why should we then imagine that there is some limitation on how complex such models might become? Such imagined limitations are mere wishful thinking, and they go against all the lessons of the past.


16 August 2017     A John Searle Inanity

Recently, I was looking up a passage in the book The Mystery of Consciousness (Footnote: The Mystery of Consciousness by John Searle, 1997. ISBN 0-940322-06-4. pp 85–86.) by John Searle. (Footnote: See also the article on Searle’s Chinese Room) Admittedly, the book is now 20 years old, but I could not help laughing at an argument that Searle puts forward in it. Searle argues that there is something non-computational about human consciousness, and at one point he argues that a completely computational process can result in a system that is incapable of description by a computational algorithm. He states:

“… there is no problem whatever in supposing that a set of relations that are non-computable at some level of description can be the result of processes that are computable at some other level.”

 

He bases this belief on the assignment of LPNs (vehicle license plate numbers, which are assigned by a governmental body) to VINs (vehicle identification numbers, which are assigned at the vehicle factory), and states:

“Here is an example. Every registered car in California has both a vehicle identification number (VIN) and a license plate number (LPN). For registered cars there is a perfect match: for every LPN there is a VIN and vice versa, and this match continues indefinitely into the future because as new cars are manufactured each gets a VIN, and as they come in to use in California each is assigned a LPN. But there is no way to compute one from the other. To put this in mathematical jargon, if we construe each series as potentially infinite, the function from VIN to LPN is a non-computable function. But so what? Non-computability by itself is of little significance and does not imply that the processes that produce the non-computable relations must therefore be non-computable. For all I know, the assignment of VINs at the car factories may be done by computer, and if it isn’t, it certainly could be. The assignment of LPNs ideally is done by one of the oldest algorithms known: first come, first served.”

 

This must be one of the most asinine statements by someone who has gained general recognition as a profound philosopher.

 

As Searle says, the assignment of LPNs could be done by computer. But of course the next VIN that will arrive in an application for an LPN is random - the computer does not know what the next VIN accompanying an application for an LPN will be, and obviously it cannot compute that. What Searle is talking about being computable is the list of correspondences between VINs and LPNs that exists at a particular time, and only after all such correspondences up to that time have been assigned.

 

But a correspondence between a VIN and an LPN before an application for an LPN has been submitted is obviously not computable. In short, Searle is comparing chalk and cheese. His argument is a completely nonsensical absurdity. When Searle says that “Non-computability … does not imply that the processes that produce the non-computable relations must therefore be non-computable”, he is implying that a computable process can produce a non-computable relationship. This, of course, is complete nonsense, and Searle can provide no evidence whatsoever to support his crazy notions.

 

In the case of VINs and LPNs, every computational process involved (the assignment of an LPN when an application for an LPN accompanies a VIN) produces a correspondence which is quite obviously computable, given the information regarding the VIN, the date/time of the LPN application, and the current LPN at that given date/time. But it is equally obvious that no computer, and no computable process, can predict in advance what LPN will be linked to a VIN before the assignment of the LPN has been computed. Neither can humans or human consciousness.
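
To see just how mundane the process is, here is a rough sketch (with made-up plate numbers and placeholder VINs - none of this is Searle’s own example code) of a first come, first served assignment. Once an application has arrived, the VIN-to-LPN correspondence is a trivial table lookup; what no algorithm, and no human, can do is predict the pairing before the application arrives, because it depends entirely on the order in which applications happen to turn up:

    # First come, first served: plates are issued in order of application.
    class Registry:
        def __init__(self):
            self.next_plate = 0
            self.table = {}                      # the record of assignments made so far

        def register(self, vin):
            # The next unused plate number is issued to the next VIN that applies.
            if vin not in self.table:
                self.table[vin] = "CA-{:06d}".format(self.next_plate)
                self.next_plate += 1
            return self.table[vin]

        def lookup(self, vin):
            # Computable after the fact: just read the table of past assignments.
            return self.table.get(vin)

    registry = Registry()
    arrivals = ["VIN-EXAMPLE-A", "VIN-EXAMPLE-B"]   # arrival order is an external input
    for vin in arrivals:
        print(vin, "->", registry.register(vin))    # VIN-EXAMPLE-A -> CA-000000, etc.
    print(registry.lookup("VIN-EXAMPLE-A"))         # CA-000000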

 

In short, Searle’s argument says nothing about whether a state of human consciousness might be something that is non-computable, regardless of how it arises.


10 Feb 2017    Fake News and Fake Mathematics

Currently we hear a lot about fake news. What we don’t hear much about is fake mathematics. At this point you might be wondering what I mean by fake mathematics.

 

Fake news might be described as material that is fabricated without any supporting evidence, and which is presented in such a way that naive observers are willing to believe the material without subjecting it to any detailed examination, especially if it concurs with their underlying philosophy.

 

In a similar vein, fake mathematics might be described as material that is fabricated without any supporting evidence, and which is presented in such a way that naive observers are willing to believe the material without subjecting it to any detailed examination, especially if it concurs with their underlying philosophy.

 

While we don’t hear much about it, fake mathematics has been prevalent for a great many years. To show that this is the case, we only have to carry out a simple thought experiment. In this thought experiment, we imagine a mathematical world different from the one we see today. In our thought experiment, the only proofs accepted by the mathematical community are proofs that have been logically proved, and no step of a proof is allowed to be assumed correct rather than proven. We now suppose that in this mathematical world (as in our actual world) Gödel submitted his paper on Incompleteness (Footnote: Gödel’s paper was written in German, viewable online Gödel’s original proof in German: here PDF. The English translation of the paper is entitled “On Formally Undecidable Propositions of Principia Mathematica and Related Systems”, viewable online Gödel’s Proof - English translation: here.) to various journals. Unfortunately for Gödel, in this mathematical world, all the reviewers rejected his paper because (as in our actual world (Footnote: Peter Smith, although a staunch advocate of Gödel’s proof, acknowledges in his paper Expounding the First Incompleteness Theorem (PDF) that “Gödel only sketches a proof… The crucial step is just asserted.”)) it failed to prove a crucial step in the proof, and Gödel merely assumed that the crucial step (Proposition V in his paper) was correct. This was completely unacceptable to the reviewers, and Gödel’s paper was never published in this hypothetical mathematical world.

 

But, as the years rolled on in this mathematical world, large numbers of people still attempted to prove what Gödel tried to prove, but what he never actually did prove. And all these people either tried to rely on an unproven assumption - just like Gödel did - or else they made basic logical errors. (Footnote: See, for example:
The Flaw in Gödel’s Proof of his Incompleteness Theorem
Paper(PDF): The Fundamental Flaw in Gödel’s Proof of his Incompleteness Theorem
Analysis of Other Incompleteness Proofs
Common Errors in Incompleteness Proofs
Yet another flawed incompleteness proof)
In this alternative mathematical world, such people are ridiculed and are called cranks - because what they are doing strikes against the fundamental ethos of this mathematical world, where the establishment of a logical proof of any claim is of paramount importance.

 

Now, let us look instead at the mathematical world that we actually inhabit. In our actual mathematical world, such people aren’t called cranks. No, often they are professors who hold prestigious positions within our mathematical world. Yes, in our current mathematical world, people who should be called cranks, and who should be reprimanded for promoting fake mathematics, are accepted and even applauded for what they do. In the actual mathematical world that we inhabit, fake mathematics sits alongside normal mathematics, instead of being banished forever from it. Surely this is unacceptable in a community in the 21st century that claims to be based on rationality?



