tech / sci.logic / Re: The Psychology of Self-Reference

Subject                                       Author
* Re: The Psychology of Self-Reference        _ Olcott
+- Re: The Psychology of Self-Reference       Richard Damon
`* Re: The Psychology of Self-Reference       Dan Christensen
 `- Re: The Psychology of Self-Reference      olcott

Re: The Psychology of Self-Reference

<a243c2e7-b0cd-4a11-9841-3a3f67d5e8e2n@googlegroups.com>

https://www.novabbs.com/tech/article-flat.php?id=718&group=sci.logic#718

Newsgroups: sci.logic
Subject: Re: The Psychology of Self-Reference
From: polco...@gmail.com (_ Olcott)
References: <cbiciv02k04@drn.newsguy.com>
 by: _ Olcott - Sun, 28 May 2023 18:55 UTC

On Friday, June 25, 2004 at 6:30:39 PM UTC-5, Daryl McCullough wrote:
> It is becoming increasingly clear that Peter Olcott and Herc have
> no coherent mathematical argument for rejecting Godel's theorem
> and Turing's proof of the unsolvability of the halting problem.
> Their objections are really psychological---they feel that the
> proofs are somehow a cheat, but they lack the mathematical ability
> to say why.
> I'd like to talk about the psychology of why people sometimes feel
> that Godel's and Turing's proofs are somehow cheats. Partly, it is
> the fault of informal intuitive expositions of the results.
> Both Godel's proof and Turing's proof have the flavor of using
> self-reference to force someone to make a mistake. Both cases
> seem a little like the following paradox (call it the "Gotcha"
> paradox).
> You ask someone (we'll call him "Jack") to give a truthful
> yes/no answer to the following question:
> Will Jack's answer to this question be no?
> Jack can't possibly give a correct yes/no answer to the question.
> While the Gotcha paradox gives some of the flavor of Godel's
> proof or Turing's proof, there is one big difference, and this
> difference is what makes people feel like there is something
> fishy going on: In the case of the Gotcha paradox, it
> is possible for Jack to *know* the answer, but to be
> prevented by the rules from *saying* the answer.
> In other words, there is a gap between what Jack knows
> and what he can say. He knows that the answer to the question
> is "no", but he can't say that answer, because that would
> make the answer incorrect. So this informal paradox doesn't
> really reveal any limitations in Jack's knowledge---it's
> just a quirk of the rules that prevents Jack from telling
> the answer. It's a little like the following one-question
> quiz:
> ----------------------
> |    5  5  5  5      |
> |   How many 5's     |
> |   appear inside    |
> |     this box?      |
> |   Answer: ___      |
> |                    |
> ----------------------
> If you write "5" in the space provided, then the correct answer
> is "6", and if you write "6" the correct answer is "5". The fact
> that you can't write the correct answer in the space provided
> doesn't prove that you have problems counting.
> Someone hearing some variant of the Gotcha paradox might be led
> to think (as Peter Olcott and Herc do) that Godel's and Turing's
> proofs might be cheats in a similar way.
> Of course, the difference is that there is no "gap" involved in
> Turing's or Godel's proofs. It makes no sense to suppose that
> Peano Arithmetic really knows that the Godel statement is true,
> but just can't say it, because there is no notion of PA "knowing"
> something independently of what it can prove. In the case of Turing's
> proof, given a purported solution H to the halting problem,
> one comes up with a program Q(x) such that
> Q halts on its own input if and only if H(Q,Q) = false
> There is no sense in which H "knows" that the answer is true
> but is unable to say it.
> We could try to modify the Gotcha paradox to eliminate the gap
> between what you know and what you can say. Let's consider the
> following statement (called "U" for "Unbelievable").
> U: Jack will never believe this statement.
> Apparently, if Jack believes U, then U is false. So we are left
> with two possibilities:
> Either (A) Jack believes some false statement, or (B)
> there is some true statement that Jack doesn't believe.
> This is a lot like Godel's sentence G that shows that PA is
> either inconsistent or incomplete. However, it still seems like
> a joke, or a trick, rather than something that reveals any
> limitations in Jack's knowledge. U doesn't seem to have any
> real content, so who cares whether it is true or not, or whether
> Jack believes it or not. It isn't a claim about anything tangible,
> so who could ever tell if Jack believes it or not, or what it even
> *means* for Jack to believe it?
> Okay, let's try one more time to get something meaningful that
> really reveals a gap in Jack's knowledge akin to Godel's
> incompleteness. Suppose that at some future time, the mechanisms
> behind the human mind are finally understood. Suppose that it is
> possible to insert probes into a person's brain to discover what
> the person is thinking, and what he believes.
> So we take our subject, Jack, and hook him up with our brain scanning
> machine. We give Jack a computer monitor on which we can display
> statements for Jack to consider, and we connect his brain scanning
> machine to a bell in such a way that if Jack agrees with the statement
> on the screen (that is, if the scanning machine determines that Jack
> believes the statement) then the bell will ring. Then we display
> on the screen the following statement:
> The bell will not ring.
> Now, there is no way out for Jack. The statement is now a completely
> concrete claim---there is no ambiguity about what it means, and there
> is no ambiguity about whether it is true or false. There is no "knowledge
> gap" possible---either Jack believes that the statement is true, or
> he doesn't.
> Does Jack believe the statement, or not? It seems to me that in this
> circumstance, Jack is forced to doubt his own reasoning ability, or
> to doubt the truth of the circumstances (that the brain scanning machine
> works as advertised, or that it is connected to the bell as described).
> If he *really* believes in the soundness of his own reasoning, and he
> really believes in the truth of the claims about the scanning machine,
> then it logically follows that the bell will not ring. But as soon as
> he makes that inference, the bell will ring, showing that he made a
> mistake, somewhere. So the only way for Jack to avoid making a mistake
> is if he considers it *possible* that he or his information is mistaken.
> --
> Daryl McCullough
> Ithaca, NY
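
A minimal sketch (in Python; not from either post) of the diagonal construction Daryl describes above: given a purported halting decider H, one builds Q so that Q halts on its own input exactly when H(Q, Q) answers false.

def make_q(h):
    # h(program, data) is a purported halting decider: True means
    # "program halts when run on data". No correct h can exist, so any
    # stub passed in here is purely for illustration.
    def q(x):
        if h(x, x):          # h says x halts on its own input ...
            while True:      # ... so q deliberately loops forever
                pass
        # ... otherwise q halts immediately
    return q

# For q = make_q(h):  q(q) halts  if and only if  h(q, q) is False,
# so h necessarily gives the wrong answer about the pair (q, q).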

After nearly two decades of pondering I have arrived at some resolution. Self-contradictory expressions of language such as the Liar Paradox are not truth bearers and thus have no Boolean value.

When Gödel's g expresses its own unprovability within formal system F, a proof of g in F would require a sequence of inference steps proving that no such sequence of inference steps exists in F; thus g is self-contradictory in F. When we examine the same statement in metamathematics we are outside the scope of that self-contradiction.

*The same thing works for the Liar Paradox*
This sentence is not true: "This sentence is not true" is true.

When Jack is asked his question, the Jack/question pair is inside the scope of self-contradiction. When anyone else is asked the same question, they are outside that scope. When a question is asked within the scope of self-contradiction it is an incorrect question, because a correct answer cannot possibly exist.
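
The formal core of the "not a truth bearer" claim can be machine-checked: the Liar schema L <-> ~L has no consistent Boolean assignment. A minimal Python sketch (the names are illustrative only):

# Try both truth values for the Liar sentence L and keep any that satisfy L <-> ~L.
solutions = [v for v in (True, False) if v == (not v)]
print(solutions)   # prints []  --  no Boolean value is consistent with the schema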

Re: The Psychology of Self-Reference

<fLNcM.3545896$9sn9.2406800@fx17.iad>

https://www.novabbs.com/tech/article-flat.php?id=720&group=sci.logic#720

Newsgroups: sci.logic
Subject: Re: The Psychology of Self-Reference
From: Rich...@Damon-Family.org (Richard Damon)
References: <cbiciv02k04@drn.newsguy.com>
 <a243c2e7-b0cd-4a11-9841-3a3f67d5e8e2n@googlegroups.com>
 by: Richard Damon - Sun, 28 May 2023 19:15 UTC

On 5/28/23 2:55 PM, _ Olcott wrote:

> After nearly two decades of pondering I have derived some resolution. Self-contradictory expressions of language such as the Liar Paradox are not truth bearers thus have no Boolean value.
>
> When Gödel's g expresses its unprovability within formal system F the proof of g in F requires a sequence of inference steps proving that no such sequence of inference steps exists in F, thus is self-contradictory in F. When we examine the same statement in metamathematics we are outside of the scope of self-contradiction.

So, you just don't understand what you are reading.

He proves that there is no *FINITE* sequence of inference steps that
could form a PROOF of the statement, and at the same time shows that
there *IS* an INFINITE series of steps that shows that the statement is
TRUE.

This has long been one of your problems: you don't understand the
difference in implication between a finite series of steps and an
infinite series, likely because your mind can't comprehend the infinite.

This is strange for someone who claims to be God, as God, by most
definitions, needs to be infinite himself.

>
> *The same thing works for the Liar Paradox*
> This sentence is not true: "This sentence is not true" is true.
>
> When Jack is asked his question the Jack/Question pair is inside the scope of self-contradiction. When anyone else is asked the same question they are outside of the scope of self-contradiction. When a question is asked within the scope of self-contradiction it is an incorrect question because a correct answer cannot possibly exist.

Which are irrelevant, and your bringing them up just shows how broken
your case is.

Re: The Psychology of Self-Reference

<6d5aafde-e8ac-43d0-b593-d950ee51af7en@googlegroups.com>

https://www.novabbs.com/tech/article-flat.php?id=721&group=sci.logic#721

Newsgroups: sci.logic
Subject: Re: The Psychology of Self-Reference
From: Dan_Chri...@sympatico.ca (Dan Christensen)
References: <cbiciv02k04@drn.newsguy.com>
 <a243c2e7-b0cd-4a11-9841-3a3f67d5e8e2n@googlegroups.com>
 by: Dan Christensen - Sun, 28 May 2023 20:22 UTC

On Sunday, May 28, 2023 at 2:55:17 PM UTC-4, _ Olcott wrote:
> On Friday, June 25, 2004 at 6:30:39 PM UTC-5, Daryl McCullough wrote:
> > [full quote of Daryl McCullough's 2004 post snipped; quoted in full in the first message above]

> After nearly two decades of pondering I have derived some resolution. Self-contradictory expressions of language such as the Liar Paradox are not truth bearers thus have no Boolean value.
>

Once upon a midnight dreary, while I pondered weak and weary...

(P & ~P) is always false. (P <=> ~P) is always false.

Evermore!

Dan

Download my DC Proof 2.0 freeware at http://www.dcproof.com
Visit my Math Blog at http://www.dcproof.wordpress.com

Re: The Psychology of Self-Reference

<u50do6$10lva$1@dont-email.me>

https://www.novabbs.com/tech/article-flat.php?id=722&group=sci.logic#722

Newsgroups: sci.logic
Subject: Re: The Psychology of Self-Reference
From: polco...@gmail.com (olcott)
References: <cbiciv02k04@drn.newsguy.com>
 <a243c2e7-b0cd-4a11-9841-3a3f67d5e8e2n@googlegroups.com>
 <6d5aafde-e8ac-43d0-b593-d950ee51af7en@googlegroups.com>
 by: olcott - Sun, 28 May 2023 20:32 UTC

On 5/28/2023 3:22 PM, Dan Christensen wrote:
> On Sunday, May 28, 2023 at 2:55:17 PM UTC-4, _ Olcott wrote:
>> On Friday, June 25, 2004 at 6:30:39 PM UTC-5, Daryl McCullough wrote:
>>> [full quote of Daryl McCullough's 2004 post snipped; quoted in full in the first message above]
>
>> After nearly two decades of pondering I have derived some resolution. Self-contradictory expressions of language such as the Liar Paradox are not truth bearers thus have no Boolean value.
>>
>
> Once upon a midnight dreary, while I pondered weak and weary...
>
> (P & ~P) is always false. (P <=> ~P) is always false.
>
> Evermore!
>
> Dan
>
> Download my DC Proof 2.0 freeware at http://www.dcproof.com
> Visit my Math Blog at http://www.dcproof.wordpress.com
>
>

It is not merely that P & ~P is false; it is that every expression of
language that is isomorphic to the Liar Paradox cannot be resolved to
true or false, because it is semantically unsound.

?- G = not(provable(F, G)).
G = not(provable(F, G)).

?- unify_with_occurs_check(G, not(provable(F, G))).
false.
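
Here unify_with_occurs_check/2 fails because unifying G with not(provable(F, G)) would require constructing an infinite (cyclic) term, while the plain =/2 query above succeeds only by building exactly such a cyclic term, since most Prolog systems omit the occurs check by default.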

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

