
Language, Logic and Understanding

We make many decisions during the course of the day. Sometimes these decisions are guided by emotion, sometimes we just rely on a hunch, sometimes we rely on experience, and sometimes we analyze a situation logically and make a decision according to this logical analysis. But very few things in life are easy to analyze in a completely logical way; in most cases, our actual decisions are based on a combination of emotion, experience, and a little bit of logic.

 

However, sometimes we want a conclusion that isn’t based on any emotion or hunch - a conclusion that is arrived at purely by means of logical argument. This site is devoted to showing how many results that are commonly accepted as the product of a completely logical argument are in fact flawed by a failure to acknowledge the significance of the way language is used in the argument. When a statement refers to some aspect of language itself, even a seemingly innocuous statement can contain subtle errors of logic, and unless every such reference is very carefully analyzed, an ostensibly logical statement may turn out to be illogical. Even fairly innocuous-looking statements can be difficult to analyze, see Natural Language and Reality.

 

This site explains how such errors may occur; in most cases it is because insufficient attention has been given to the way in which such statements refer to language. If you are visiting this site for the first time, I suggest these pages as suitable starting points:

 

Self-reference

Much of this website deals with the confusion that occurs when levels of language are not clearly delineated. But there seems to be an alarming increase in the willingness of certain academics to forgo the need for clear, precise, logical proofs of their claims. There are now numerous people who like to call themselves “logicians” but who are content simply to make a crucial assumption rather than actually attempt to prove it, and who proceed to build an entire structure of claims on that assumption. The assumption is that a completely formal language can actually reference itself - that is, that in a completely formal language there can be a sentence that explicitly refers to that entire sentence itself.

 

Despite their self-appellation as “logicians”, that isn’t logic, and the inane results of these assumptions aren’t logical - they are worthless. For an example of this sort of nonsense, see Halbach and Zhang: Yablo without Gödel.

 

Opinionated?

Most of this site is, naturally enough, based on logical and factual analysis. To provide some contrast, I decided to include some viewpoint-based material here - this is where I get an opportunity to voice my opinion on various matters. Feel free to disagree.

 

6 Dec 2017    Man versus Machine

These days there is more and more discussion of artificial intelligence and the changes it will bring to our lives in the future. We see that there are more and more tasks that AI can perform better than any human can, and we are now seeing that AI systems can themselves design new and improved AI systems, see Google’s New AI Designs AI Better Than Humans Could. Elon Musk has spoken about the potential dangers of AI on multiple occasions. Ray Kurzweil, Google’s Director of Engineering, has claimed that by 2029 AI will have human-level intelligence and will comfortably pass any valid Turing test, with responses at least on a par with those of any human, regardless of the questions posed to it. He has also claimed that by 2045 a ‘Singularity’ will be reached, where AI is significantly smarter than any human, see Kurzweil Claims That the Singularity Will Happen by 2045. And Masayoshi Son, CEO of Softbank, has predicted that by 2047 a single computer chip will have an IQ equivalent to 10,000, see Softbank CEO: The Singularity Will Happen by 2047.

 

But, despite the evidence that AI is progressing by leaps and bounds, there are nevertheless many Luddites who prefer to believe that human brains/minds are somehow different in principle from any machine, and that human minds/brains will always have some sort of superiority over any AI machine. The two principal claims that these Luddites commonly bring to the table in support of their viewpoint are that it is impossible for any AI to fully understand Gödel’s incompleteness theorem, and that it is impossible for any AI to achieve consciousness. Often the two claims are intertwined.

 

However, despite how frequently these claims are made, it is difficult to find any attempt to provide a logically reasoned argument in support of them. Perhaps the best in-depth attempts to give some sort of supporting argument in clear unambiguous language have been made by Roger Penrose, in his books The Emperor’s New Mind (Footnote: Roger Penrose, The Emperor’s New Mind, Oxford University Press (1989), ISBN: 0198519737. The Emperor’s New Mind: Details) and Shadows of the Mind. (Footnote: Roger Penrose, Shadows of the Mind, Oxford University Press (1994), ISBN: 0198539789. Shadows of the Mind: Details) In these books, Penrose claims that no AI can ever surpass human brains in finding mathematical “truths”. He bases his claims on Gödel’s proof of incompleteness: according to Penrose, a human can “see” the “truth” of Gödel’s proof of incompleteness, but no formal system can “see” that “truth”. Therefore, according to Penrose, mathematical insight cannot be mechanized, and there can be no AI that can replicate what a human does when “seeing” the “truth” of Gödel’s proof of incompleteness. Hence, claims Penrose, human brains must be inherently superior to any AI system. Penrose totally ignores the possibility that what he thinks he is “seeing” in Gödel’s proof of incompleteness isn’t any profound truth, but a logical error, see Gödel’s theorem. But of course, he wouldn’t want to contemplate that possibility, as it would utterly destroy his claim that the human mind is superior to any possible AI.

 

Anyway, let’s consider Penrose’s claims. If a person can understand a mathematical proof, where the proof proceeds without making any non-rule-based assumptions other than the fundamental axioms, then for each individual step of the proof, that person must be able to deduce the result of that step from the information preceding it. Every such deduction is given by a set of clearly defined rules, and so any such deduction can be replicated by an algorithm. This applies to every step of a proof, and the totality of all such deductions for a given proof can be replicated by one single algorithm that goes through all the steps of the individual algorithms for that proof.
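To make this concrete, here is a minimal sketch of the idea (my own toy illustration in Python - nothing of the kind appears in Penrose’s books). The system below has modus ponens as its only inference rule, and checking an entire proof is nothing more than a loop of mechanical per-step checks:

```python
# A toy proof checker: the only inference rule is modus ponens, and an
# implication p -> q is represented as the tuple (p, q). Checking a whole
# proof is a loop over mechanical per-step checks - a single algorithm.

def is_valid_proof(premises, proof):
    derived = set(premises)
    for formula in proof:
        if formula not in derived:
            # Modus ponens: accept the formula only if some already-derived
            # implication (p, formula) has an already-derived antecedent p.
            if not any(isinstance(f, tuple) and f[1] == formula and f[0] in derived
                       for f in derived):
                return False
        derived.add(formula)
    return True

premises = {"A", ("A", "B"), ("B", "C")}            # A, A -> B, B -> C
print(is_valid_proof(premises, ["A", "B", "C"]))    # True: each step checks
print(is_valid_proof(premises, ["C"]))              # False: C is not yet derivable
```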

 

So, if Gödel’s “truth” result that Penrose is able to “see”, but which no AI system can ever “see”, is not given by some such algorithm, it must be the case that what Penrose calls “seeing” the “truth” of a proposition cannot be obtained by any logical deduction - which means it must be obtained by some leap of faith, where an assumption is used instead of a deduction. Penrose tries to camouflage what he is claiming by referring to mathematicians having insights, or intuition. But the crucial point that Penrose ignores is that if a mathematician has some sort of intuition or insight or hunch, his next step should always be to confirm that his hunch was correct, by subsequently proving rigorously what he had previously suspected. Otherwise, mathematics is simply a childish game where no-one knows or cares whether mathematical claims are correct.

 

As Daniel Dennett says, “Whenever we say we solved some problem ‘by intuition’ all that really means is we don’t know how we solved it”. (Footnote: Daniel C Dennett, Darwin’s dangerous idea: Evolution and the meanings of life, first ed 1995, pub Penguin (New ed Sept. 1996) ISBN: 9780140167344.) When Penrose says that he cannot conceive of any algorithm that could enable one to “see” the “truth” of a certain mathematical statement, then he is really saying that he doesn’t know how he is able to “see” the “truth” of that statement. What he does do is to suggest that there might be some quantum process occurring in the brain that accounts for this “seeing” the “truth”, but that doesn’t even begin to explain how that might bypass the need for a logical deduction of all steps that lead to that “seeing” the “truth”. It’s a rather desperate argument, invoking a claim that there has to be some magical property that we can’t understand but which can enable us to “see” the “truth”. Penrose’s argument is so absurd that it beggars belief that a competent mathematician could make it; it is an appeal to rely on faith rather than logic and evidence.

 

Penrose also uses a fallacious argument which involves creating a straw man; he argues that there can be no universal infallible method for creating a proof of a mathematical proposition. But no-one is claiming that there is!

 

What Penrose does is conflate the notion of having a universal infallible method for proving a mathematical proposition with the simple fact that most proofs are obtained by the use of a fallible trial-and-error method - a method that is in principle reducible to an algorithm. The fact that this method may not be able to provide a proof of every possible mathematical statement is completely and utterly irrelevant. After all, there is no reason to suppose that human insight or intuition can provide a proof of every possible mathematical statement either. Daniel Dennett has written a perspicacious article (Footnote: Daniel C. Dennett, Murmurs in the cathedral: Review of R Penrose, The Emperor’s New Mind, Times Literary Supplement, 29 September, 1989) on Penrose’s book The Emperor’s New Mind, and is not fooled by Penrose’s fallacious posturing. He points out that if we compare Penrose’s argument regarding discovering correct mathematical proofs to the notion of achieving checkmate in a chess game, then we end up with this argument:

  • X is superbly capable of achieving checkmate.
  • There is no (practical) algorithm guaranteed to achieve checkmate,
    therefore
  • X does not owe its power to achieve checkmate to an algorithm.

 

The point Dennett makes is that Penrose commits an elementary error in logical deduction: simply because mathematicians sometimes produce correct mathematical proofs, it does not follow that they are not using some sort of algorithm to achieve those results. The evidence that does exist indicates that mathematicians actually use a trial-and-error method. Many mathematicians will attest that they pursued several false trails before finding the correct one that leads to a proof. Furthermore, there have been many cases of proofs that were published, but later found to be incorrect. (Footnote: For example, a proof of the four color map theorem published by the prominent mathematician Alfred Kempe in 1879 received widespread acclaim; eleven years later, an error was found in the proof. Another proof of the four color map theorem was published by Peter Guthrie Tait in 1880. Again, it was eleven years before anyone discovered that there was an error in the proof.)
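Indeed, the trial-and-error method itself is straightforwardly algorithmic. Continuing the toy system sketched above (and reusing its is_valid_proof checker), a brute-force search simply proposes candidate proofs and mechanically checks each one:

```python
# The fallible trial-and-error method as a plain algorithm: propose candidate
# proofs in order of length and mechanically check each one. It may fail to
# find a proof within its bound, but nothing about it is non-algorithmic.
from itertools import product

def search_for_proof(premises, goal, formulas, max_len=4):
    for length in range(1, max_len + 1):
        for candidate in product(formulas, repeat=length):
            if candidate[-1] == goal and is_valid_proof(premises, list(candidate)):
                return list(candidate)    # a trail that happened to check out
    return None                           # every trail within the bound was false

print(search_for_proof({"A", ("A", "B"), ("B", "C")}, "C", ["A", "B", "C"]))
# -> ['B', 'C']: B follows from A and A -> B, then C from B and B -> C
```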

 

Penrose also invokes consciousness to help shore up his arguments, stating:

I believe, also, that our consciousness is a crucial ingredient in our comprehension of mathematical truth. We must “see” the truth of a mathematical argument to be convinced of its validity. This “seeing” is the very essence of consciousness. It must be present whenever we directly perceive mathematical truth. When we convince ourselves of the validity of Gödel’s theorem we not only “see” it, but by so doing we reveal the very non-algorithmic nature of the “seeing” process itself. (Footnote: From the book, The Emperor’s New Mind, Oxford University Press, (1989) ISBN: 0198519737 The Emperor’s New Mind: Details)

 

But what is consciousness? Part of the problem when trying to counter claims that AI can never have consciousness is that the claimants never define clearly and precisely what they actually mean by the term consciousness. But it seems to me that consciousness is simply the result of a person having in the brain a model of himself and his environment. Having such a model enables the person to make predictions of what is likely to happen in different scenarios. This applies also to other entities in the environment, such as other humans, so that the person can make predictions as to what another person is likely to do in a certain set of circumstances. It’s easy to see why such a model system might have evolved: the better a person can predict what might happen in various scenarios, the better his choice of which action to take next.
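As a purely illustrative sketch of this idea - with a made-up one-dimensional world, and making no claim that this is how brains actually work - even a trivial self-model lets an agent choose actions by prediction rather than by trial in the real world:

```python
# An agent with a minimal model of itself - just its believed position on a
# line - chooses actions by simulating them on the model, not on the world.

MOVES = {"left": -1, "stay": 0, "right": +1}

def predict(believed_position, action):
    # Run the action on the internal model, not on the real environment.
    return believed_position + MOVES[action]

def choose_action(believed_position, goal):
    # Pick whichever action the model predicts will end up nearest the goal.
    return min(MOVES, key=lambda a: abs(goal - predict(believed_position, a)))

print(choose_action(believed_position=2, goal=5))   # -> 'right'
```

On this view there is nothing mysterious in the mechanism itself; the interesting question is only how rich the model is.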

 

At the moment, we do not have AI systems that have complex models of themselves and their environment. But in principle there is no reason to suppose that such AI systems are impossible. It is easy to imagine AI systems that have very simple models of themselves and their environment. Given that is the case, why should we imagine that there is some limitation on how complex such models might become? Such imagined limitations are mere wishful thinking, and they go against all the lessons of the past.

 

 


 

 

16 August 2017     A John Searle Inanity

Recently, I was looking up a passage in the book The Mystery of Consciousness (Footnote: The Mystery of Consciousness by John Searle, 1997, ISBN 0-940322-06-4, pp 85–86.) by John Searle. (Footnote: See also the article on Searle’s Chinese Room.) Admittedly, the book is now 20 years old, but I could not help laughing at an argument that Searle puts forward in it. Searle argues in the book that there is something non-computational about human consciousness, and at one point he argues that a completely computational process can result in a system that is incapable of description by a computational algorithm. He states:

“… there is no problem whatever in supposing that a set of relations that are non-computable at some level of description can be the result of processes that are computable at some other level.”

 

He bases this belief on the assignment of LPNs (vehicle license plate numbers, assigned by a governmental body) to VINs (vehicle identification numbers, assigned at the vehicle factory), and states:

“Here is an example. Every registered car in California has both a vehicle identification number (VIN) and a license plate number (LPN). For registered cars there is a perfect match: for every LPN there is a VIN and vice versa, and this match continues indefinitely into the future because as new cars are manufactured each gets a VIN, and as they come in to use in California each is assigned a LPN. But there is no way to compute one from the other. To put this in mathematical jargon, if we construe each series as potentially infinite, the function from VIN to LPN is a non-computable function. But so what? Non-computability by itself is of little significance and does not imply that the processes that produce the non-computable relations must therefore be non-computable. For all I know, the assignment of VINs at the car factories may be done by computer, and if it isn’t, it certainly could be. The assignment of LPNs ideally is done by one of the oldest algorithms known: first come, first served.”

 

This must be one of the most asinine statements by someone who has gained general recognition as a profound philosopher.

 

As Searle says, the assignment of LPNs could be done by computer. But of course the next VIN that will arrive in an application for an LPN is random - the computer does not know which VIN will accompany the next application for an LPN, and obviously it cannot compute that. What Searle is describing as computable is the list of correspondences between VINs and LPNs that exists at a particular time, and only after all such correspondences up to that time have been assigned.

 

But a correspondence between a VIN and an LPN before an application for an LPN has been submitted is obviously not computable. In short, Searle is comparing chalk and cheese. His argument is a completely nonsensical absurdity. When Searle says that “Non-computability … does not imply that the processes that produce the non-computable relations must therefore be non-computable”, he is implying that a computable process can produce a non-computable relationship. This, of course, is complete nonsense, and Searle can provide no evidence whatsoever to support his crazy notions.

 

In the case of VINs and LPNs, every computational process involved (the assignment of an LPN when an application for an LPN accompanies a VIN) produces a correspondence which is quite obviously computable, given the information regarding the VIN, the date/time of the LPN application, and the current LPN at that given date/time. But it is equally obvious that no computer, and no computable process, can predict in advance what LPN will be linked to a VIN before the assignment of the LPN has been computed. Neither can humans or human consciousness.
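The distinction is easy to illustrate (all names and formats below are invented for the example):

```python
# First come, first served is trivially computable GIVEN the arrival order;
# without that input there is nothing - for a computer or for a human - to
# compute.

def assign_lpns(vins_in_arrival_order):
    # The LPN is simply the position in the queue of applications.
    return {vin: f"LPN-{i:06d}" for i, vin in enumerate(vins_in_arrival_order, 1)}

arrivals = ["VIN-9Z31", "VIN-0A77", "VIN-5K02"]     # known only after the fact
table = assign_lpns(arrivals)
print(table["VIN-0A77"])                            # 'LPN-000002'

# Before the arrivals happen, no function of the VIN alone can produce
# 'LPN-000002': the relation lacks an input, while the process that builds
# it remains entirely computable.
```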

 

In short, Searle’s argument says nothing about whether a state of human consciousness might be something that is non-computable, regardless of how it arises.

 

 


 

 

10 Feb 2017    Fake News and Fake Mathematics

Currently we hear a lot about fake news. What we don’t hear much about is fake mathematics. At this point you might be wondering what I mean by fake mathematics.

 

Fake news might be described as material that is fabricated without any supporting evidence, and which is presented in such a way that naive observers are willing to believe the material without subjecting it to any detailed examination, especially if it concurs with their underlying philosophy.

 

In a similar vein, fake mathematics might be described as material that is fabricated without any supporting evidence, and which is presented in such a way that naive observers are willing to believe the material without subjecting it to any detailed examination, especially if it concurs with their underlying philosophy.

 

While we don’t hear much about it, fake mathematics has been prevalent for a great many years. To show that this is the case, we only have to carry out a simple thought experiment, in which we imagine an alternative mathematical world to the one we see today. In our thought experiment, the only proofs accepted by the mathematical community are proofs that have been established logically, with no proof step allowed to be assumed correct rather than proven. We now suppose that in this mathematical world (as in our actual world) Gödel submitted his paper on Incompleteness (Footnote: Gödel’s paper was written in German, viewable online: Gödel’s original proof in German PDF. The English translation of the paper is entitled “On Formally Undecidable Propositions of Principia Mathematica and Related Systems”, viewable online: Gödel’s Proof - English translation.) to various journals. Unfortunately for Gödel, in this mathematical world, all the reviewers rejected his paper because (as in our actual world (Footnote: Peter Smith, although a staunch advocate of Gödel’s proof, acknowledges in his paper Expounding the First Incompleteness Theorem (PDF) that “Gödel only sketches a proof… The crucial step is just asserted.”)) it failed to prove a crucial step in the proof: Gödel merely assumed that the crucial step (the Proposition V of his paper) was correct. This was completely unacceptable to the reviewers, and Gödel’s paper was never published in this hypothetical mathematical world.

 

But, as the years rolled on in this mathematical world, large numbers of people still attempted to prove what Gödel tried to prove, but what he never actually did prove. And all these people either tried to rely on an unproven assumption - just like Gödel did - or else they made basic logical errors. (Footnote: See, for example:
The Flaw in Gödel’s Proof of his Incompleteness Theorem
Paper (PDF): The Fundamental Flaw in Gödel’s Proof of his Incompleteness Theorem
Analysis of Other Incompleteness Proofs
Common Errors in Incompleteness Proofs
Yet another flawed incompleteness proof)
In this alternative mathematical world, such people are ridiculed and are called cranks - because what they are doing strikes against the fundamental ethos of this mathematical world, where the establishment of a logical proof of any claim is of paramount importance.

 

Now, let us look instead at the mathematical world that we actually inhabit. In our actual mathematical world, such people aren’t called cranks. No, often they are professors holding prestigious positions within our mathematical world. Yes, in our current mathematical world, people who should be called cranks, and who should be reprimanded for promoting fake mathematics, are accepted and even applauded for what they do. In the actual mathematical world that we inhabit, fake mathematics sits alongside normal mathematics, instead of being banished from it forever. Surely this is unacceptable in a 21st century community that claims to be based on rationality?

 

 


 

 

13 Jan 2017    Ned Block’s Blockhead Machine argument

In a paper (Ned Block, Psychologism and behaviourism, Philosophical Review, 90 (1981), 5-43, available online: Psychologism and behaviourism), Ned Block conceives of a theoretical computer system (now commonly referred to as Blockhead) as part of a thought experiment. Block argues that the internal mechanism of a system is important in determining whether that system is intelligent, and claims that he can show that a non-intelligent system could pass the Turing test.

 

Block asks us to imagine a conversation lasting any given amount of time. He argues that there are only a finite number of syntactically and grammatically correct sentences that can be used to start a conversation, and that from this point on there is a limit to how many valid responses can be made to the first sentence, and then again to the second sentence, and so on until the conversation ends.

 

Block then asks us to imagine a computer which has been programmed with every one of these possible sentences. Although it has been claimed that the number of sentences required for an hour-long conversation would be greater than the number of particles in the universe, Block argues that hypothetically such a machine could exist, so that his argument is still valid as a theoretical argument rather than one which can be applied in practice.

 

Given this hypothetical machine, Block invites us to agree that such a machine could continue a conversation with a person on any topic, because the computer would be programmed for not only every sentence but for every sequence of sentences that might be inputted to it. On this basis, Block claims that the hypothetical machine would be able to pass the Turing test despite the machine having no attributes that we would assign as indicative of intelligence.
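The idea is easy to sketch at a ludicrously reduced scale (the sentences below are invented; the real machine would need an entry for every possible sequence):

```python
# An absurdly scaled-down sketch of the Blockhead idea: every response is
# retrieved from a table keyed on the entire sequence of inputs so far -
# pure lookup, with no reasoning anywhere.

blockhead = {
    ("Hello",): "Hi there.",
    ("Hello", "How are you?"): "Fine, thanks. And you?",
    ("Hello", "How are you?", "Good."): "Glad to hear it.",
}

def respond(history):
    # The real thought experiment requires an entry for EVERY sensible
    # sequence of sentences; that is where the more-particles-than-the-
    # universe size estimate comes from.
    return blockhead.get(tuple(history), "(no entry programmed)")

print(respond(["Hello", "How are you?"]))           # 'Fine, thanks. And you?'
```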

 

Block claims that his argument shows that the internal composition of the machine must be considered in any assessment of whether that machine can be considered to be intelligent, and that the Turing test on its own cannot suffice. That is to say, his claim is:
Premise: the Blockhead machine does not use any intelligence to produce its response, yet it can pass the Turing test
Therefore
Conclusion: the Turing test is not a sufficient condition for intelligence.

 

There are two principal flaws in Block’s argument:

 

1. Ignoring the time taken to react to a question

The principal assumption in Block’s argument is that the Blockhead machine, although impossibly large, is not infinitely large, and can contain all of what would be considered intelligent responses that a human might make. Now, the Turing test is a test of whether a machine can emulate a human, and the longer the period of the test, the more difficult it is for a machine to pass it. Block claims that the Blockhead machine he describes, as long as it is large enough, suffices for a Turing test of any duration.

 

However, Block makes no mention of the time taken for his finite but impossibly huge Blockhead machine to produce a response. He admits that his machine may be larger than the observable universe, but insists that it is a valid theoretical concept since it is nevertheless finite. But if the hypothesis results in an imaginary machine that is larger than the observable universe, it follows that there would be physical limitations on this imaginary machine. For example, the response time to at least some questions will be greater than 24 hours, so that there will always be a possibility that the response time to at least one question will be greater than the time allocated for the test. (It might be mentioned that Block says that there may be new discoveries in sub-atomic physics that would enable his Blockhead machine to be made on a human scale, but this is mere speculation which is completely at odds with current scientific opinion.)

 

As is well known, the Turing test is, above all, a behavioural test - an assessment of the interaction of the entity being tested with a specific environment. A human judge will decide that a response time over a certain interval (which will be measured in seconds or minutes, depending on the question, rather than millions of years) is too long to be a normal human response time, and on that basis can decide that the Blockhead machine is not a human (at least not a normal one), nor a human-like intelligence. After all, we do not find it surprising that IQ tests are time-limited; we would class something that takes thirty minutes to solve a puzzle as less intelligent than something that takes one minute to perform the same task.

 

2. Ignoring the possibility of a human introducing new words or symbols.

In any conversation, a human can introduce new words, and use those new words in subsequent conversation to refer to things that he might otherwise use natural language to refer to. A human could easily invent words of 60 characters or more; using the English alphabet of 26 letters, the number of possible words of up to 60 characters is greater than the number of atoms estimated to be in the universe. Hence no machine of finite size could cope with every possible new word of up to 60 characters. In other words, such a machine could never exist, even hypothetically, in our universe.
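The arithmetic is easy to verify (taking a common estimate of roughly 10^80 atoms in the observable universe):

```python
# Checking the arithmetic above: the count of possible words of up to 60
# letters versus a ~10^80 estimate for atoms in the observable universe.
words = sum(26**k for k in range(1, 61))   # all words of 1 to 60 letters
print(f"{words:.2e}")                      # ~8.2e+84
print(words > 10**80)                      # True
```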

 

Block’s description of his machine is that it will deal only with “sensible strings”, and that the machine will be programmed using “imagination and judgment about what is to count as a sensible string”. But a sensible string would, of course, be any string that a human might use to test the respondent in a Turing test - that is, including strings that contain such invented words of up to 60 characters! The size of the machine is now ridiculously enormous.

 

But even beyond that, Block’s argument looks even more preposterous when one considers that a human can also introduce new symbols and use those new symbols in subsequent conversation. The only limitations on the use of new symbols are their overall size and that different symbols must be easy to distinguish. There are thousands of Chinese characters, and a human could easily invent new ones. After all, humans must have invented all our language symbols at some point in time, so there is no reason to suppose that a human could not introduce a new symbol in a conversation.

 

Block’s machine is now becoming more and more ludicrously massive.

 

Block’s response

Block responds to the type of criticisms above by pleading (in his reply to objection 6 in his paper) that:

My argument requires only that the machine be logically possible, not that it be feasible or even nomologically possible.

 

This, of course, is absurdity masquerading as meaningful philosophy. The reality is that humans are physical entities that are subject to the limitations of the physical world. The Turing test is a test envisaged to be applied to physical entities that are also subject to physical limitations.

 

When Block claims that a hypothetical non-physically realizable machine could pass a test that is designed to be applied in the real physical world to real physical entities, he is simply imagining a magic machine that happens to possess some physical attributes (such as the ability to manipulate symbols) but also possesses magical properties that have no possible physical realization. So the conclusion is?

 

A magic machine can do magic things that no physically realizable thing can do.

 

Eh - didn’t we already know that?

 

 


 

 

 


 

 


 


 

NEWS

Lebesgue Measure

There is now a new page on Lebesgue measure theory and how it is contradictory.

 

 

Illogical Assumptions

There is now a new page Halbach and Zhang’s Yablo without Gödel which demonstrates the illogical assumptions used by Halbach and Zhang.

 

 

Peter Smith’s ‘Proof’

It has come to my notice that, when asked about the demonstration of the flaw in his proof (see A Fundamental Flaw in an Incompleteness Proof by Peter Smith PDF), Smith refuses to engage in any logical discussion, and instead attempts to deflect attention away from any such discussion. If any other reader has tried to engage with Smith regarding my demonstration of the flaw, I would be interested to know what the outcome was.

 

 

Easy Footnotes

I found that making, adding or deleting footnotes in the traditional manner proved to be a major pain. So I developed a different system for footnotes which makes inserting or changing footnotes a doddle. You can check it out at Easy Footnotes for Web Pages (Accessibility friendly).

 

 

O’Connor’s “computer checked” proof

I have now added a new section to my paper on Russell O’Connor’s claim of a computer verified incompleteness proof. This shows that the flaw in the proof arises from a reliance on definitions that include unacceptable assumptions - assumptions that are not actually checked by the computer code. See also the new page Representability.

 

 

New page on Chaitin’s Constant

There is now a new page on Chaitin’s Constant (Chaitin’s Omega), which demonstrates that Chaitin has failed to prove that it is actually algorithmically irreducible.

 


 

Links  

 

For convenience, there are now two pages on this site with links to various material relating to Gödel and the Incompleteness Theorem

 

– a page with general links:

Gödel Links

 

– and a page relating specifically to the Gödel mind-machine debate:

Gödel, Minds, and Machines

 


 


 

