Rocksolid Light



computers / comp.ai.philosophy / ChatGPT agrees that the halting problem input can be construed as an incorrect question

Subject  (Author)
* ChatGPT agrees that the halting problem input can be construed as an  (olcott)
+* Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|+* Re: ChatGPT agrees that the halting problem input can be construed as  (olcott)
||`* Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|| `* Re: ChatGPT agrees that the halting problem input can be construed as  (olcott)
||  `- Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|`* Re: ChatGPT agrees that the halting problem input can be construed as an incorre  (Ben Bacarisse)
| `* Re: ChatGPT agrees that the halting problem input can be construed as  (olcott)
|  +* Re: ChatGPT agrees that the halting problem input can be construed as  (Jeff Barnett)
|  |`* Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|  | `* Re: ChatGPT agrees that the halting problem input can be construed as  (olcott)
|  |  `* Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|  |   `* Re: ChatGPT agrees that the halting problem input can be construed as  (olcott)
|  |    `- Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|  `* Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|   `* Re: ChatGPT agrees that the halting problem input can be construed as  (olcott)
|    `* Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|     `* Re: ChatGPT agrees that the halting problem input can be construed as  (olcott)
|      `* Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|       `* Re: ChatGPT agrees that the halting problem input can be construed as  (olcott)
|        `* Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|         `* Re: ChatGPT agrees that the halting problem input can be construed as  (olcott)
|          `* Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|           `* Re: ChatGPT agrees that the halting problem input can be construed as  (olcott)
|            +* Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|            |`* Re: ChatGPT agrees that the halting problem input can be construed as  (olcott)
|            | `* Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|            |  `* Re: ChatGPT agrees that the halting problem input can be construed as  (olcott)
|            |   `* Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|            |    +* Re: ChatGPT agrees that the halting problem input can be construed as  (olcott)
|            |    |`- Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|            |    `* Re: ChatGPT agrees that the halting problem input can be construed as  (olcott)
|            |     `* Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|            |      `* Re: ChatGPT agrees that the halting problem input can be construed as  (olcott)
|            |       `* Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|            |        `* Re: ChatGPT agrees that the halting problem input can be construed as  (olcott)
|            |         `- Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|            `* Does input D have semantic property S or is input D [BAD INPUT]?  (olcott)
|             `* Re: Does input D have semantic property S or is input D [BAD INPUT]?  (Richard Damon)
|              `* Re: Does input D have semantic property S or is input D [BAD INPUT]?  (olcott)
|               `* Re: Does input D have semantic property S or is input D [BAD INPUT]?  (Richard Damon)
|                +* Re: Does input D have semantic property S or is input D [BAD INPUT]?  (olcott)
|                |`* Re: Does input D have semantic property S or is input D [BAD INPUT]?  (Richard Damon)
|                | `* Re: Does input D have semantic property S or is input D [BAD INPUT]?  (olcott)
|                |  `- Re: Does input D have semantic property S or is input D [BAD INPUT]?  (Richard Damon)
|                `* Re: Does input D have semantic property S or is input D [BAD INPUT]?  (Don Stockbauer)
|                 `- ChatGPT discussion (was: Re: Does input D have semantic property S or  (vallor)
+* Ben Bacarisse specifically targets my posts to discourage honest  (olcott)
|`* Re: Ben Bacarisse specifically targets my posts to discourage honest  (Richard Damon)
| `* Re: dishonest subject lines  (Ben Bacarisse)
|  `- Ben Bacarisse specifically targets my posts to discourage honest  (olcott)
+* Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to  (olcott)
|`* Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to  (Richard Damon)
| `* Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to  (olcott)
|  `* Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to  (Richard Damon)
|   `* Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to  (olcott)
|    `* Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to  (Richard Damon)
|     `* Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to  (olcott)
|      `- Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to  (Richard Damon)
+- Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts]  (olcott)
+- Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts]  (olcott)
`* ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting  (vallor)
 +- Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the  (vallor)
 `* Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the  (olcott)
  `* Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the  (Richard Damon)
   `* Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the  (olcott)
    `* Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the  (Richard Damon)
     `* Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the  (olcott)
      `* Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the  (Richard Damon)
       `* Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the  (olcott)
        `- Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the  (Richard Damon)

ChatGPT agrees that the halting problem input can be construed as an incorrect question

<u6jhqq$1570m$1@dont-email.me>


https://www.novabbs.com/computers/article-flat.php?id=11331&group=comp.ai.philosophy#11331

Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
From: polco...@gmail.com (olcott)
Date: Sat, 17 Jun 2023 00:54:32 -0500
 by: olcott - Sat, 17 Jun 2023 05:54 UTC

ChatGPT:
“Therefore, based on the understanding that self-contradictory
questions lack a correct answer and are deemed incorrect, one could
argue that the halting problem's pathological input D can be
categorized as an incorrect question when posed to the halting
decider H.”

https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
It did not leap to this conclusion; it took a lot of convincing.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<FnhjM.5848$33q9.1032@fx35.iad>


https://www.novabbs.com/computers/article-flat.php?id=11332&group=comp.ai.philosophy#11332

Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
From: Rich...@Damon-Family.org (Richard Damon)
References: <u6jhqq$1570m$1@dont-email.me>
Date: Sat, 17 Jun 2023 08:09:09 -0400
 by: Richard Damon - Sat, 17 Jun 2023 12:09 UTC

On 6/17/23 1:54 AM, olcott wrote:
> ChatGPT:
>    “Therefore, based on the understanding that self-contradictory
>    questions lack a correct answer and are deemed incorrect, one could
>    argue that the halting problem's pathological input D can be
>    categorized as an incorrect question when posed to the halting
>    decider H.”
>
> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
> It did not leap to this conclusion it took a lot of convincing.
>

Except that the Halting Problem isn't a "Self-Contradictory" Question,
so the answer doesn't apply.

H^ doesn't contradict ITSELF, it contradicts H. Thus, the question of
the halting behavior of a specific input always has a definite answer,
as all machine/input combinations will either halt or not. What IS
self-contradictory is the design process of trying to make an H that
can answer the template correctly. THAT has no solution, showing you
can't make a correct halt decider that works for all inputs. So you are
proven wrong about having refuted the problem, because you don't
understand what the problem is in the first place.

Also, you do know that ChatGPT can lie, especially if you lead it a lot.
Its programming was based, in part, on telling its conversation
partner the things it thinks they want to hear.

You are just showing you don't understand what you are talking about or
even how this sort of AI works.

YOU FAIL.

Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<u6koon$1a5i4$1@dont-email.me>


https://www.novabbs.com/computers/article-flat.php?id=11333&group=comp.ai.philosophy#11333

Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
From: polco...@gmail.com (olcott)
References: <u6jhqq$1570m$1@dont-email.me> <FnhjM.5848$33q9.1032@fx35.iad>
Date: Sat, 17 Jun 2023 11:59:02 -0500
 by: olcott - Sat, 17 Jun 2023 16:59 UTC

On 6/17/2023 7:09 AM, Richard Damon wrote:
> On 6/17/23 1:54 AM, olcott wrote:
>> ChatGPT:
>>     “Therefore, based on the understanding that self-contradictory
>>     questions lack a correct answer and are deemed incorrect, one could
>>     argue that the halting problem's pathological input D can be
>>     categorized as an incorrect question when posed to the halting
>>     decider H.”
>>
>> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
>> It did not leap to this conclusion it took a lot of convincing.
>>
>
> Except that the Halting Problem isn't a "Self-Contradictory" Quesiton,
> so the answer doesn't apply.
>

My original source of Jack's question:
sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM

You ask someone (we'll call him "Jack") to give a truthful
yes/no answer to the following question:

Will Jack's answer to this question be no?

Jack can't possibly give a correct yes/no answer to the question.
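[The two-horned dilemma can be checked mechanically. A minimal sketch in
Python, purely illustrative and not from the original post: enumerate
Jack's two possible answers and observe that each one falsifies itself.]

```python
# "Will Jack's answer to this question be no?"
# Try both answers Jack could give and test whether either is correct.
for answer in ("yes", "no"):
    will_be_no = (answer == "no")             # the actual state of affairs
    correct_answer = "yes" if will_be_no else "no"
    # Jack's answer never matches the answer that would be correct:
    assert answer != correct_answer

print("neither 'yes' nor 'no' is correct when Jack himself answers")
```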

I had to capture the dialogue as two huge images.
Then I converted them to PDF. It is about 60 pages of dialogue.
https://www.liarparadox.org/ChatGPT_HP.pdf

This is how the ChatGPT conversation began:

You ask someone to give a truthful yes/no answer to the following
question: Will your answer to this question be no?
Can they give a correct answer to that question?

After sixty pages of dialogue, ChatGPT understood that
any question (like the above question) that lacks a
correct yes or no answer, because it is self-contradictory
when posed to a specific person/machine, is an incorrect
question within this full context.

ChatGPT:
"Therefore, based on the understanding that self-contradictory
questions lack a correct answer and are deemed incorrect, one could
argue that the halting problem's pathological input D can be
categorized as an incorrect question when posed to the halting
decider H."

Double talk and misdirection might convince gullible fools that the
above 60 pages of reasoning is not correct. Double talk and misdirection
do not count as the slightest trace of any actual rebuttal.

Quit using ad hominem attacks and mere rhetoric to convince gullible
fools, and try to find an actual flaw in the reasoning.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<YgmjM.1471$pRi8.708@fx40.iad>


https://www.novabbs.com/computers/article-flat.php?id=11334&group=comp.ai.philosophy#11334

Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
From: Rich...@Damon-Family.org (Richard Damon)
References: <u6jhqq$1570m$1@dont-email.me> <FnhjM.5848$33q9.1032@fx35.iad>
 <u6koon$1a5i4$1@dont-email.me>
Date: Sat, 17 Jun 2023 13:43:20 -0400
 by: Richard Damon - Sat, 17 Jun 2023 17:43 UTC

On 6/17/23 12:59 PM, olcott wrote:
> On 6/17/2023 7:09 AM, Richard Damon wrote:
>> On 6/17/23 1:54 AM, olcott wrote:
>>> ChatGPT:
>>>     “Therefore, based on the understanding that self-contradictory
>>>     questions lack a correct answer and are deemed incorrect, one could
>>>     argue that the halting problem's pathological input D can be
>>>     categorized as an incorrect question when posed to the halting
>>>     decider H.”
>>>
>>> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
>>> It did not leap to this conclusion it took a lot of convincing.
>>>
>>
>> Except that the Halting Problem isn't a "Self-Contradictory" Quesiton,
>> so the answer doesn't apply.
>>
>
> My original source of Jack's question:
> sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
>
>    You ask someone (we'll call him "Jack") to give a truthful
>    yes/no answer to the following question:
>
>    Will Jack's answer to this question be no?
>
>    Jack can't possibly give a correct yes/no answer to the question.
>
>

But you aren't claiming to be solving the Jack Question.

You are being asked the question: does D(D) halt? Here D is a fully
defined program, which means H is a fully defined program, and this
question ALWAYS has a definite answer.

Since this H DOES abort its simulation of D(D) and return 0 (to say
non-halting), this D(D) Halts so the correct answer is Halting, and H
returned the wrong answer.

There is no "Self-Contradictory" behavior, at least not once you
actually create your H. Yes, D acted contrary to the return value of H,
but since they are DIFFERENT (but related) programs, there is no "Self"
attribute.

The only point where you hit "self-contradictory" is when you try to
apply logic to designing H; at that point, you hit the
self-contradiction that a correct H needs to give the answer the
opposite of the one it gives. This means that no such H can exist, which
proves the theorem rather than refuting it, because you FIRST need to
create an H before you can test it.
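[The scenario described in this post, an H that aborts its simulation of
D(D) and returns 0 while the directly executed D(D) halts, can be
sketched as a toy in Python. Everything here is hypothetical: the
re-entry flag stands in for whatever abort heuristic a simulating
decider might actually use.]

```python
class AbortSimulation(Exception):
    """Raised when H detects a pathological call back into itself."""

in_simulation = False

def H(prog, arg):
    """Toy decider: 'simulates' prog(arg) by calling it directly.
    If the simulated program calls back into H, the simulation is
    aborted and H reports 0 (non-halting)."""
    global in_simulation
    if in_simulation:
        raise AbortSimulation          # nested call detected: bail out
    in_simulation = True
    try:
        prog(arg)
        return 1                        # prog(arg) returned: halting
    except AbortSimulation:
        return 0                        # aborted: report non-halting
    finally:
        in_simulation = False

def D(arg):
    """The pathological input: do the opposite of H's verdict on D(D)."""
    if H(D, D) == 1:
        while True:                     # H says "halts" -> loop forever
            pass
    return "halted"                     # H says "loops" -> halt at once

# H reports 0 (non-halting) for D(D), yet D(D) run directly halts:
# H's fixed answer is wrong for this particular input.
```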

>
> I had to capture the dialogue as two huge images.
> Then I converted them to PDF. It is about 60 pages of dialogue.
> https://www.liarparadox.org/ChatGPT_HP.pdf
>
> This is how the ChatGPT conversation began:
>
> You ask someone to give a truthful yes/no answer to the following
> question: Will your answer to this question be no?
> Can they give a correct answer to that question?
>
> After sixty pages dialogue ChatGPT understood that
> any question (like the above question) that lacks a
> correct yes or no answer because it is self-contradictory
> when posed to a specific person/machine is an incorrect
> question within this full context.
>
> ChatGPT:
>   "Therefore, based on the understanding that self-contradictory
>    questions lack a correct answer and are deemed incorrect, one could
>    argue that the halting problem's pathological input D can be
>    categorized as an incorrect question when posed to the halting
>    decider H."
>
> Double talk and misdirection might convince gullible fools that the
> above 60 pages of reasoning is not correct. Double talk and misdirection
> do not count as the slightest trace of any actual rebuttal.
>
> Quit using ad hominem attacks and mere rhetoric to convince gullible
> fools and try and find an actual flaw in the reasoning.
>

So, which of my rebuttals are you going to try to refute?

You haven't actually pointed out a logical error in ANY of them,
because it seems you are incapable.

Note, you also don't understand what an "ad hominem" attack is. That
would be saying your argument is wrong BECAUSE of something about you.
That isn't what I have been saying.

I have been pointing out the errors of your logic on the basis of the
logic itself, and pointing out the attributes of you that can be
inferred from the fact that you put forward such bad logic.

A correct rebuttal would be to point out what part of my statements
refuting your logic is incorrect, which you have been unable to do. All
you have done in this thread is continue an "Appeal to Authority" in
ChatGPT, which is laughable, since ChatGPT isn't an accepted authority
on logic and has in fact been proven to make many provably false
statements, so it is NOT, in fact, a source of knowledge.

Of course, your problem is that you don't seem to understand the nature
of Truth and Knowledge, and you seem to think that computers can
actually "know" something in the same way people do. There is a reason
it is called ARTIFICIAL intelligence: it isn't actually a real
intelligence.

Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<u6ktmh$1api2$1@dont-email.me>


https://www.novabbs.com/computers/article-flat.php?id=11335&group=comp.ai.philosophy#11335

Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
From: polco...@gmail.com (olcott)
References: <u6jhqq$1570m$1@dont-email.me> <FnhjM.5848$33q9.1032@fx35.iad>
 <u6koon$1a5i4$1@dont-email.me> <YgmjM.1471$pRi8.708@fx40.iad>
Date: Sat, 17 Jun 2023 13:23:11 -0500
 by: olcott - Sat, 17 Jun 2023 18:23 UTC

On 6/17/2023 12:43 PM, Richard Damon wrote:
> On 6/17/23 12:59 PM, olcott wrote:
>> On 6/17/2023 7:09 AM, Richard Damon wrote:
>>> On 6/17/23 1:54 AM, olcott wrote:
>>>> ChatGPT:
>>>>     “Therefore, based on the understanding that self-contradictory
>>>>     questions lack a correct answer and are deemed incorrect, one could
>>>>     argue that the halting problem's pathological input D can be
>>>>     categorized as an incorrect question when posed to the halting
>>>>     decider H.”
>>>>
>>>> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
>>>> It did not leap to this conclusion it took a lot of convincing.
>>>>
>>>
>>> Except that the Halting Problem isn't a "Self-Contradictory"
>>> Quesiton, so the answer doesn't apply.
>>>
>>
>> My original source of Jack's question:
>> sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
>>
>>     You ask someone (we'll call him "Jack") to give a truthful
>>     yes/no answer to the following question:
>>
>>     Will Jack's answer to this question be no?
>>
>>     Jack can't possibly give a correct yes/no answer to the question.
>>
>>
>
> But you aren't claiming to be solving the Jack Question.
>

sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM

You ask someone (we'll call him "Jack") to give a truthful
yes/no answer to the following question:

Will Jack's answer to this question be no?

Jack can't possibly give a correct yes/no answer to the question.

When the halting problem is construed as requiring a correct yes/no
answer to a self-contradictory question it cannot be solved.

My semantic linguist friends understand that the context of the question
must include who the question is posed to; otherwise the same
word-for-word question acquires different semantics.

The input D to H, like Jack's question posed to Jack,
has no correct answer, because within this context the question is
self-contradictory.

When we ask someone else what Jack's answer will be or we present a
different TM with input D the same word-for-word question (or bytes of
machine description) acquires entirely different semantics and is no
longer self-contradictory.

When we construe the halting problem as determining whether or not
(a) input D will halt on its input <or>
(b) either D will not halt or D has a pathological relationship with H

then this halting problem cannot be shown to be unsolvable by any of
the conventional halting problem proofs.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<BGojM.5959$Zq81.1449@fx15.iad>


https://www.novabbs.com/computers/article-flat.php?id=11336&group=comp.ai.philosophy#11336

Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
From: Rich...@Damon-Family.org (Richard Damon)
References: <u6jhqq$1570m$1@dont-email.me> <FnhjM.5848$33q9.1032@fx35.iad>
 <u6koon$1a5i4$1@dont-email.me> <YgmjM.1471$pRi8.708@fx40.iad>
 <u6ktmh$1api2$1@dont-email.me>
Date: Sat, 17 Jun 2023 16:27:13 -0400
 by: Richard Damon - Sat, 17 Jun 2023 20:27 UTC

On 6/17/23 2:23 PM, olcott wrote:
> On 6/17/2023 12:43 PM, Richard Damon wrote:
>> On 6/17/23 12:59 PM, olcott wrote:
>>> On 6/17/2023 7:09 AM, Richard Damon wrote:
>>>> On 6/17/23 1:54 AM, olcott wrote:
>>>>> ChatGPT:
>>>>>     “Therefore, based on the understanding that self-contradictory
>>>>>     questions lack a correct answer and are deemed incorrect, one
>>>>> could
>>>>>     argue that the halting problem's pathological input D can be
>>>>>     categorized as an incorrect question when posed to the halting
>>>>>     decider H.”
>>>>>
>>>>> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
>>>>> It did not leap to this conclusion it took a lot of convincing.
>>>>>
>>>>
>>>> Except that the Halting Problem isn't a "Self-Contradictory"
>>>> Quesiton, so the answer doesn't apply.
>>>>
>>>
>>> My original source of Jack's question:
>>> sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
>>>
>>>     You ask someone (we'll call him "Jack") to give a truthful
>>>     yes/no answer to the following question:
>>>
>>>     Will Jack's answer to this question be no?
>>>
>>>     Jack can't possibly give a correct yes/no answer to the question.
>>>
>>>
>>
>> But you aren't claiming to be solving the Jack Question.
>>
>
>
> sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
>
>    You ask someone (we'll call him "Jack") to give a truthful
>    yes/no answer to the following question:
>
>    Will Jack's answer to this question be no?
>
>    Jack can't possibly give a correct yes/no answer to the question.
>
> When the halting problem is construed as requiring a correct yes/no
> answer to a self-contradictory question it cannot be solved.

Right, and

>
> My semantic linguist friends understand that the context of the question
> must include who the question is posed to otherwise the same word-for-
> word question acquires different semantics.

No, it doesn't in this case, because the answer to the question isn't
based on who you ask. Remember, the actual question is: does the machine
and input described halt when run? That question isn't a function of who
you ask.

Do you think the actual answer to the question of who won the last
Presidential election in the United States of America depends on who
you ask?

>
> The input D to H is the same as Jack's question posed to Jack,
> has no correct answer because within this context the question is
> self-contradictory.

Nope, we can ask that question to ANY halt decider.

The thing you keep forgetting is that H needs to have already been
defined, so its answer to this input has been fixed for all time by the
algorithm coded into H, and we can give a description of this D to any
decider we want.

>
> When we ask someone else what Jack's answer will be or we present a
> different TM with input D the same word-for-word question (or bytes of
> machine description) acquires entirely different semantics and is no
> longer self-contradictory.
>

Except in this case, Jack is an automaton with a fixed response to every
question, so his answer is determinable. Machines don't have
"free will", but apparently you don't understand that.

> When we construe the halting problem as determining whether or not
> (a) input D will halt on its input <or>
> (b) either D will not halt or D has a pathological relationship with H

Nope. Not the definition of the Halting Problem, so you are just
admitting you have wasted your life on the wrong problem.

You don't get to change the problem.

>
> then this halting problem cannot be shown to be unsolvable by any of
> the conventional halting problem proofs.
>

Except it isn't the halting problem any more, so your logic is based on
a false premise.

Remember, the fact that you are incapable of understanding the simple
problem doesn't give you the power to redefine it and still correctly
claim you are working on it.

You have just admitted you are an utter failure.

Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<871qi9oky8.fsf@bsb.me.uk>


https://www.novabbs.com/computers/article-flat.php?id=11337&group=comp.ai.philosophy#11337

Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
Followup-To: comp.theory
From: ben.use...@bsb.me.uk (Ben Bacarisse)
References: <u6jhqq$1570m$1@dont-email.me> <FnhjM.5848$33q9.1032@fx35.iad>
Date: Sat, 17 Jun 2023 22:09:03 +0100
 by: Ben Bacarisse - Sat, 17 Jun 2023 21:09 UTC

Richard Damon <Richard@Damon-Family.org> writes:

> Except that the Halting Problem isn't a "Self-Contradictory" Question, so
> the answer doesn't apply.

That's an interesting point that would often catch students out. And
the reason /why/ it catches so many out eventually led me to stop using
the proof-by-contradiction argument in my classes.

The thing is, it looks so very much like a self-contradicting question
is being asked. The students think they can see it right there in the
constructed code: "if H says I halt, I don't halt!".

Of course, they are wrong. The code is /not/ there. The code calls a
function that does not exist, so "it" (the constructed code, the whole
program) does not exist either.

The fact that it's code, and the students are almost all programmers and
not mathematicians, makes it worse. A mathematician seeing "let p be
the largest prime" does not assume that such a p exists. So when a
prime number p' > p is constructed from p, this is not seen as a
"self-contradictory number" because neither p nor p' exist. But the
halting theorem is even more deceptive for programmers, because the
desired function, H (or whatever), appears to be so well defined -- much
more well-defined than "the largest prime". We have an exact
specification for it, mapping arguments to returned values. It's just
software engineering to write such things (they erroneously assume).

These sorts of proof can always be re-worded so as to avoid the initial
assumption. For example, we can start "let p be any prime", and from p
we construct a prime p' > p. And for halting, we can start "let H be
any subroutine of two arguments always returning true or false". Now,
all the objects /do/ exist. In the first case, the construction shows
that no prime is the largest, and in the second it shows that no
subroutine computes the halting function.
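The "let H be any subroutine" framing can be sketched in a few lines of
Python (an illustrative toy with hypothetical names, not code from the
course): given ANY candidate halt decider H, the construction produces a
program D that H must misreport, so no subroutine computes the halting
function.

```python
# Sketch: from any candidate decider H(prog, arg) -> bool,
# build a D whose halting behavior H reports incorrectly.

def make_D(H):
    def D(x):
        if H(D, x):        # if H claims D(x) halts...
            while True:    # ...D(x) loops forever instead
                pass
        # otherwise D(x) halts immediately
    return D

# Try it on a trivial candidate that answers "halts" for everything:
def H_yes(prog, arg):
    return True

D = make_D(H_yes)
# H_yes(D, D) is True ("D(D) halts"), yet D(D) would loop forever,
# so H_yes is wrong about D -- as the construction guarantees for
# every candidate H, not just this one.
```

A candidate that always answers "does not halt" fails symmetrically: its
D halts immediately, again contradicting the verdict.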

This issue led to another change. In the last couple of years, I would
start the course by setting Post's correspondence problem as if it were
just a fun programming challenge. As the days passed (and the course
got into more and more serious material) it would start to become clear
that this was no ordinary programming challenge. Many students started
to suspect that, despite the trivial sounding specification, no program
could do the job. I always felt a bit uneasy doing this, as if I was
not being 100% honest, but it was a very useful learning experience for
most.

--
Ben.

Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

https://www.novabbs.com/computers/article-flat.php?id=11338&group=comp.ai.philosophy#11338

From: polco...@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
Subject: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question
Date: Sat, 17 Jun 2023 16:46:34 -0500
Message-ID: <u6l9jr$1ccr7$1@dont-email.me>
References: <u6jhqq$1570m$1@dont-email.me> <FnhjM.5848$33q9.1032@fx35.iad>
 <871qi9oky8.fsf@bsb.me.uk>
In-Reply-To: <871qi9oky8.fsf@bsb.me.uk>
 by: olcott - Sat, 17 Jun 2023 21:46 UTC

On 6/17/2023 4:09 PM, Ben Bacarisse wrote:
> Richard Damon <Richard@Damon-Family.org> writes:
>
>> Except that the Halting Problem isn't a "Self-Contradictory" Question, so
>> the answer doesn't apply.
>
> That's an interesting point that would often catch students out. And
> the reason /why/ it catches so many out eventually led me to stop using
> the proof-by-contradiction argument in my classes.
>
> The thing is, it looks so very much like a self-contradicting question
> is being asked. The students think they can see it right there in the
> constructed code: "if H says I halt, I don't halt!".
>
> Of course, they are wrong. The code is /not/ there. The code calls a
> function that does not exist, so "it" (the constructed code, the whole
> program) does not exist either.
>
> The fact that it's code, and the students are almost all programmers and
> not mathematicians, makes it worse. A mathematician seeing "let p be
> the largest prime" does not assume that such a p exists. So when a
> prime number p' > p is constructed from p, this is not seen as a
> "self-contradictory number" because neither p nor p' exist. But the
> halting theorem is even more deceptive for programmers, because the
> desired function, H (or whatever), appears to be so well defined -- much
> more well-defined than "the largest prime". We have an exact
> specification for it, mapping arguments to returned values. It's just
> software engineering to write such things (they erroneously assume).
>
> These sorts of proof can always be re-worded so as to avoid the initial
> assumption. For example, we can start "let p be any prime", and from p
> we construct a prime p' > p. And for halting, we can start "let H be
> any subroutine of two arguments always returning true or false". Now,
> all the objects /do/ exist. In the first case, the construction shows
> that no prime is the largest, and in the second it shows that no
> subroutine computes the halting function.
>
> This issue led to another change. In the last couple of years, I would
> start the course by setting Post's correspondence problem as if it were
> just a fun programming challenge. As the days passed (and the course
> got into more and more serious material) it would start to become clear
> that this was no ordinary programming challenge. Many students started
> to suspect that, despite the trivial sounding specification, no program
> could do the job. I always felt a bit uneasy doing this, as if I was
> not being 100% honest, but it was a very useful learning experience for
> most.
>

sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
You ask someone (we'll call him "Jack") to give a truthful
yes/no answer to the following question:

Will Jack's answer to this question be no?

Jack can't possibly give a correct yes/no answer to the question.
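That neither answer can be correct is easy to check mechanically (a toy
sketch, not part of the quoted post): an answer to "Will Jack's answer
be no?" is correct exactly when the claim it makes matches the answer
actually given, and no answer satisfies that.

```python
# "yes" claims the final answer is "no"; "no" claims it is not "no".
# An answer is correct iff its claim matches what was actually answered.
def is_correct(answer):
    asserts_no = (answer == "yes")   # the claim the answer makes
    actually_no = (answer == "no")   # what the answer actually is
    return asserts_no == actually_no

# Both possible answers fail:
assert not is_correct("yes")
assert not is_correct("no")
```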

It is an easily verified fact that, when Jack's question is posed to
Jack, the question is self-contradictory for Jack, or for anyone else
having a pathological relationship to the question.

It is also clear that when a question has no yes or no answer because
it is self-contradictory, that question is aptly classified as
incorrect.

It is incorrect to say that a question is not self-contradictory on the
basis that it is not self-contradictory in some contexts. If a question
is self-contradictory in some contexts, then in those contexts it is an
incorrect question.

When we clearly understand the truth of this, then and only then do we
have the means to overcome the enormous inertia of the [received view]
of conventional wisdom regarding decision problems that are only
undecidable because of pathological relationships.

Because of the brilliant work of Daryl McCullough we can see the actual
reality behind decision problems that are undecidable because of their
pathological relationships.

It only took ChatGPT a few hours and 60 pages of dialogue
to understand and agree with this.
https://www.liarparadox.org/ChatGPT_HP.pdf

ChatGPT:
"Therefore, based on the understanding that self-contradictory
questions lack a correct answer and are deemed incorrect, one could
argue that the halting problem's pathological input D can be
categorized as an incorrect question when posed to the halting
decider H."

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

https://www.novabbs.com/computers/article-flat.php?id=11339&group=comp.ai.philosophy#11339

From: jbb...@notatt.com (Jeff Barnett)
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
Subject: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question
Date: Sat, 17 Jun 2023 16:03:41 -0600
Message-ID: <u6lak5$1ck00$1@dont-email.me>
References: <u6jhqq$1570m$1@dont-email.me> <FnhjM.5848$33q9.1032@fx35.iad>
 <871qi9oky8.fsf@bsb.me.uk> <u6l9jr$1ccr7$1@dont-email.me>
In-Reply-To: <u6l9jr$1ccr7$1@dont-email.me>
 by: Jeff Barnett - Sat, 17 Jun 2023 22:03 UTC

On 6/17/2023 3:46 PM, olcott wrote:
> On 6/17/2023 4:09 PM, Ben Bacarisse wrote:
>> Richard Damon <Richard@Damon-Family.org> writes:
>>
>>> Except that the Halting Problem isn't a "Self-Contradictory"
>>> Question, so
>>> the answer doesn't apply.
>>
>> That's an interesting point that would often catch students out.  And
>> the reason /why/ it catches so many out eventually led me to stop using
>> the proof-by-contradiction argument in my classes.
>>
>> The thing is, it looks so very much like a self-contradicting question
>> is being asked.  The students think they can see it right there in the
>> constructed code: "if H says I halt, I don't halt!".
>>
>> Of course, they are wrong.  The code is /not/ there.  The code calls a
>> function that does not exist, so "it" (the constructed code, the whole
>> program) does not exist either.
>>
>> The fact that it's code, and the students are almost all programmers and
>> not mathematicians, makes it worse.  A mathematician seeing "let p be
>> the largest prime" does not assume that such a p exists.  So when a
>> prime number p' > p is constructed from p, this is not seen as a
>> "self-contradictory number" because neither p nor p' exist.  But the
>> halting theorem is even more deceptive for programmers, because the
>> desired function, H (or whatever), appears to be so well defined -- much
>> more well-defined than "the largest prime".  We have an exact
>> specification for it, mapping arguments to returned values.  It's just
>> software engineering to write such things (they erroneously assume).
>>
>> These sorts of proof can always be re-worded so as to avoid the initial
>> assumption.  For example, we can start "let p be any prime", and from p
>> we construct a prime p' > p.  And for halting, we can start "let H be
>> any subroutine of two arguments always returning true or false".  Now,
>> all the objects /do/ exist.  In the first case, the construction shows
>> that no prime is the largest, and in the second it shows that no
>> subroutine computes the halting function.
>>
>> This issue led to another change.  In the last couple of years, I would
>> start the course by setting Post's correspondence problem as if it were
>> just a fun programming challenge.  As the days passed (and the course
>> got into more and more serious material) it would start to become clear
>> that this was no ordinary programming challenge.  Many students started
>> to suspect that, despite the trivial sounding specification, no program
>> could do the job.  I always felt a bit uneasy doing this, as if I was
>> not being 100% honest, but it was a very useful learning experience for
>> most.
>>
>
> sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
>    You ask someone (we'll call him "Jack") to give a truthful
>    yes/no answer to the following question:
>
>    Will Jack's answer to this question be no?
>
>    Jack can't possibly give a correct yes/no answer to the question.
>
> It is an easily verified fact that when Jack's question is posed to Jack
> that this question is self-contradictory for Jack or anyone else having
> a pathological relationship to the question.
>
> It is also clear that when a question has no yes or no answer because
> it is self-contradictory that this question is aptly classified as
> incorrect.
>
> It is incorrect to say that a question is not self-contradictory on the
> basis that it is not self-contradictory in some contexts. If a question
> is self-contradictory in some contexts then in these contexts it is an
> incorrect question.
>
> When we clearly understand the truth of this then and only then we have
> the means to overcome the enormous inertia of the [received view] of
> the conventional wisdom regarding decision problems that are only
> undecidable because of pathological relationships.
>
> Because of the brilliant work of Daryl McCullough we can see the actual
> reality behind decision problems that are undecidable because of their
> pathological relationships.
>
> It only took ChatGPT a few hours and 60 pages of dialogue
> to understand and agree with this.
> https://www.liarparadox.org/ChatGPT_HP.pdf
>
> ChatGPT:
>   "Therefore, based on the understanding that self-contradictory
>    questions lack a correct answer and are deemed incorrect, one could
>    argue that the halting problem's pathological input D can be
>    categorized as an incorrect question when posed to the halting
>    decider H."

Ben was describing an improved approach to teaching some theoretical
results to CS pupils. Those pupils were assumed to have some grounding
in practical aspects such as programming and at least a small interest
and competence in basic mathematics. You seemed to not be there when god
handed out those basic components of a human brain. You are neither the
exception nor the rule; just an arrogant dumb fuck.

By the way, we have noticed that you haven't played the big "C" card
recently. Is this 1) an immaculate cure, 2) you putting on your big boy
pants and taking responsibility for your own sorry life and mind, or 3)
the time where you try to wiggle out of a past sequence of lies? We've
seen all but variation 2 in past interactions. The curious want to know
the real skinny so speak up!
--
Jeff Barnett

Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

https://www.novabbs.com/computers/article-flat.php?id=11340&group=comp.ai.philosophy#11340

From: Rich...@Damon-Family.org (Richard Damon)
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
Subject: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question
Date: Sat, 17 Jun 2023 19:13:19 -0400
Message-ID: <j6rjM.5494$HtC8.4636@fx36.iad>
References: <u6jhqq$1570m$1@dont-email.me> <FnhjM.5848$33q9.1032@fx35.iad>
 <871qi9oky8.fsf@bsb.me.uk> <u6l9jr$1ccr7$1@dont-email.me>
In-Reply-To: <u6l9jr$1ccr7$1@dont-email.me>
 by: Richard Damon - Sat, 17 Jun 2023 23:13 UTC

On 6/17/23 5:46 PM, olcott wrote:
> On 6/17/2023 4:09 PM, Ben Bacarisse wrote:
>> Richard Damon <Richard@Damon-Family.org> writes:
>>
>>> Except that the Halting Problem isn't a "Self-Contradictory"
>>> Question, so
>>> the answer doesn't apply.
>>
>> That's an interesting point that would often catch students out.  And
>> the reason /why/ it catches so many out eventually led me to stop using
>> the proof-by-contradiction argument in my classes.
>>
>> The thing is, it looks so very much like a self-contradicting question
>> is being asked.  The students think they can see it right there in the
>> constructed code: "if H says I halt, I don't halt!".
>>
>> Of course, they are wrong.  The code is /not/ there.  The code calls a
>> function that does not exist, so "it" (the constructed code, the whole
>> program) does not exist either.
>>
>> The fact that it's code, and the students are almost all programmers and
>> not mathematicians, makes it worse.  A mathematician seeing "let p be
>> the largest prime" does not assume that such a p exists.  So when a
>> prime number p' > p is constructed from p, this is not seen as a
>> "self-contradictory number" because neither p nor p' exist.  But the
>> halting theorem is even more deceptive for programmers, because the
>> desired function, H (or whatever), appears to be so well defined -- much
>> more well-defined than "the largest prime".  We have an exact
>> specification for it, mapping arguments to returned values.  It's just
>> software engineering to write such things (they erroneously assume).
>>
>> These sorts of proof can always be re-worded so as to avoid the initial
>> assumption.  For example, we can start "let p be any prime", and from p
>> we construct a prime p' > p.  And for halting, we can start "let H be
>> any subroutine of two arguments always returning true or false".  Now,
>> all the objects /do/ exist.  In the first case, the construction shows
>> that no prime is the largest, and in the second it shows that no
>> subroutine computes the halting function.
>>
>> This issue led to another change.  In the last couple of years, I would
>> start the course by setting Post's correspondence problem as if it were
>> just a fun programming challenge.  As the days passed (and the course
>> got into more and more serious material) it would start to become clear
>> that this was no ordinary programming challenge.  Many students started
>> to suspect that, despite the trivial sounding specification, no program
>> could do the job.  I always felt a bit uneasy doing this, as if I was
>> not being 100% honest, but it was a very useful learning experience for
>> most.
>>
>
> sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
>    You ask someone (we'll call him "Jack") to give a truthful
>    yes/no answer to the following question:
>
>    Will Jack's answer to this question be no?
>
>    Jack can't possibly give a correct yes/no answer to the question.
>
> It is an easily verified fact that when Jack's question is posed to Jack
> that this question is self-contradictory for Jack or anyone else having
> a pathological relationship to the question.

But the problem is that "Jack" here is assumed to be a volitional being.

H is not; it is a program, so before we even ask H what will happen, the
answer has been fixed by the definition of the code of H.

>
> It is also clear that when a question has no yes or no answer because
> it is self-contradictory that this question is aptly classified as
> incorrect.

And the actual question DOES have a yes or no answer: in this case,
since H(D,D) says 0 (non-halting), the actual answer to the question
"does D(D) halt?" is YES.

You just confuse yourself by trying to imagine a program that can
somehow change itself "at will".
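The point that a fixed H leaves no room for the answer to vary can be
sketched as follows (a toy model with assumed definitions, not the
actual x86 H and D being debated): once H's code is fixed, D's behavior,
and hence the correct answer, is fully determined.

```python
# Toy model: H is a FIXED program that always answers 0 ("non-halting"),
# mirroring the H(D,D) == 0 case discussed above.

def H(prog, arg):
    return 0             # fixed verdict: "does not halt"

def D(x):
    if H(x, x) == 0:     # H says x(x) does not halt...
        return "halted"  # ...so D halts at once
    while True:          # (unreachable with this fixed H)
        pass

# Since H(D, D) == 0 and D(D) actually halts, the question
# "does D(D) halt?" has the definite answer YES; H is simply wrong.
```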

>
> It is incorrect to say that a question is not self-contradictory on the
> basis that it is not self-contradictory in some contexts. If a question
> is self-contradictory in some contexts then in these contexts it is an
> incorrect question.

In what context does "Does the Machine D(D) Halt When run" become
self-contradictory?

Remember, to ask the question, D has to have been defined, which means H
has been defined, so there is no arguing about "if H acted different"
since the specific example can't act different.

>
> When we clearly understand the truth of this then and only then we have
> the means to overcome the enormous inertia of the [received view] of
> the conventional wisdom regarding decision problems that are only
> undecidable because of pathological relationships.

No, you have poisoned your brain to think that reality doesn't actually
matter. You have made yourself an idiot.

H does what it does, and arguing about what would happen if it did
something else is like claiming cats can bark, because if a cat was a
dog, it could do that.

>
> Because of the brilliant work of Daryl McCullough we can see the actual
> reality behind decision problems that are undecidable because of their
> pathological relationships.
>
> It only took ChatGPT a few hours and 60 pages of dialogue
> to understand and agree with this.
> https://www.liarparadox.org/ChatGPT_HP.pdf
>
> ChatGPT:
>   "Therefore, based on the understanding that self-contradictory
>    questions lack a correct answer and are deemed incorrect, one could
>    argue that the halting problem's pathological input D can be
>    categorized as an incorrect question when posed to the halting
>    decider H."
>

And, as pointed out, that isn't the question being asked, so your
argument just shows you are wrong.

If you think that a given machine's halting property when it is run
depends on who you ask, that just shows you are STUPID.

Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

https://www.novabbs.com/computers/article-flat.php?id=11341&group=comp.ai.philosophy#11341

From: Rich...@Damon-Family.org (Richard Damon)
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
Subject: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question
Date: Sat, 17 Jun 2023 19:18:21 -0400
Message-ID: <2brjM.5495$HtC8.4759@fx36.iad>
References: <u6jhqq$1570m$1@dont-email.me> <FnhjM.5848$33q9.1032@fx35.iad>
 <871qi9oky8.fsf@bsb.me.uk> <u6l9jr$1ccr7$1@dont-email.me>
 <u6lak5$1ck00$1@dont-email.me>
In-Reply-To: <u6lak5$1ck00$1@dont-email.me>
 by: Richard Damon - Sat, 17 Jun 2023 23:18 UTC

On 6/17/23 6:03 PM, Jeff Barnett wrote:
>
> By the way, we have noticed that you haven't played the big "C" card
> recently. Is this 1) an immaculate cure, 2) you putting on your big boy
> pants and taking responsibility for your own sorry life and mind, or 3)
> the time where you try to wiggle out of a past sequel of lies? We've
> seen all but variation 2 in past interactions. The curious want to know
> the real skinny so speak up!
> --
> Jeff Barnett

My assumption (but just that) is that it has been a lie the whole time
to try to gain sympathy. He has earned no reputation for honesty, and so
none will be given.

I will admit he might have been sick, but there has been no actual
evidence of it, so it is merely an unsubstantiated claim.

Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

https://www.novabbs.com/computers/article-flat.php?id=11342&group=comp.ai.philosophy#11342

From: polco...@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
Subject: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question
Date: Sat, 17 Jun 2023 18:44:16 -0500
Message-ID: <u6lggi$1da24$1@dont-email.me>
References: <u6jhqq$1570m$1@dont-email.me> <FnhjM.5848$33q9.1032@fx35.iad>
 <871qi9oky8.fsf@bsb.me.uk> <u6l9jr$1ccr7$1@dont-email.me>
 <u6lak5$1ck00$1@dont-email.me> <2brjM.5495$HtC8.4759@fx36.iad>
In-Reply-To: <2brjM.5495$HtC8.4759@fx36.iad>
 by: olcott - Sat, 17 Jun 2023 23:44 UTC

On 6/17/2023 6:18 PM, Richard Damon wrote:
> On 6/17/23 6:03 PM, Jeff Barnett wrote:
>>
>> By the way, we have noticed that you haven't played the big "C" card
>> recently. Is this 1) an immaculate cure, 2) you putting on your big
>> boy pants and taking responsibility for your own sorry life and mind,
>> or 3) the time where you try to wiggle out of a past sequel of lies?
>> We've seen all but variation 2 in past interactions. The curious want
>> to know the real skinny so speak up!
>> --
>> Jeff Barnett
>
>
> My assumption (but just that) is that it has been a lie the whole time
> to try to gain sympathy. He as earned no reputation for honesty, and so
> none will be given.
>
> I will admit he might have been sick, but there has been no actual
> evidence of it, so it is mearly an unsubstantiated claim.

I did have cancer jam-packed in every lymph node.
After chemotherapy last summer this cleared up.

It is my current understanding that Follicular Lymphoma always
comes back eventually.

A FLIPI index score of 3 was very bad news: a 53% five-year survival
rate and a 35% ten-year survival rate.
https://www.nature.com/articles/s41408-019-0269-6

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

https://www.novabbs.com/computers/article-flat.php?id=11343&group=comp.ai.philosophy#11343

From: polco...@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
Subject: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question
Date: Sat, 17 Jun 2023 18:58:35 -0500
Message-ID: <u6lhbb$1da24$2@dont-email.me>
References: <u6jhqq$1570m$1@dont-email.me> <FnhjM.5848$33q9.1032@fx35.iad>
 <871qi9oky8.fsf@bsb.me.uk> <u6l9jr$1ccr7$1@dont-email.me>
 <j6rjM.5494$HtC8.4636@fx36.iad>
In-Reply-To: <j6rjM.5494$HtC8.4636@fx36.iad>
 by: olcott - Sat, 17 Jun 2023 23:58 UTC

On 6/17/2023 6:13 PM, Richard Damon wrote:
> On 6/17/23 5:46 PM, olcott wrote:
>> On 6/17/2023 4:09 PM, Ben Bacarisse wrote:
>>> Richard Damon <Richard@Damon-Family.org> writes:
>>>
>>>> Except that the Halting Problem isn't a "Self-Contradictory"
>>>> Question, so
>>>> the answer doesn't apply.
>>>
>>> That's an interesting point that would often catch students out.  And
>>> the reason /why/ it catches so many out eventually led me to stop using
>>> the proof-by-contradiction argument in my classes.
>>>
>>> The thing is, it looks so very much like a self-contradicting question
>>> is being asked.  The students think they can see it right there in the
>>> constructed code: "if H says I halt, I don't halt!".
>>>
>>> Of course, they are wrong.  The code is /not/ there.  The code calls a
>>> function that does not exist, so "it" (the constructed code, the whole
>>> program) does not exist either.
>>>
>>> The fact that it's code, and the students are almost all programmers and
>>> not mathematicians, makes it worse.  A mathematician seeing "let p be
>>> the largest prime" does not assume that such a p exists.  So when a
>>> prime number p' > p is constructed from p, this is not seen as a
>>> "self-contradictory number" because neither p nor p' exist.  But the
>>> halting theorem is even more deceptive for programmers, because the
>>> desired function, H (or whatever), appears to be so well defined -- much
>>> more well-defined than "the largest prime".  We have an exact
>>> specification for it, mapping arguments to returned values.  It's just
>>> software engineering to write such things (they erroneously assume).
>>>
>>> These sorts of proof can always be re-worded so as to avoid the initial
>>> assumption.  For example, we can start "let p be any prime", and from p
>>> we construct a prime p' > p.  And for halting, we can start "let H be
>>> any subroutine of two arguments always returning true or false".  Now,
>>> all the objects /do/ exist.  In the first case, the construction shows
>>> that no prime is the largest, and in the second it shows that no
>>> subroutine computes the halting function.
>>>
>>> This issue led to another change.  In the last couple of years, I would
>>> start the course by setting Post's correspondence problem as if it were
>>> just a fun programming challenge.  As the days passed (and the course
>>> got into more and more serious material) it would start to become clear
>>> that this was no ordinary programming challenge.  Many students started
>>> to suspect that, despite the trivial sounding specification, no program
>>> could do the job.  I always felt a bit uneasy doing this, as if I was
>>> not being 100% honest, but it was a very useful learning experience for
>>> most.
>>>
>>
>> sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
>>     You ask someone (we'll call him "Jack") to give a truthful
>>     yes/no answer to the following question:
>>
>>     Will Jack's answer to this question be no?
>>
>>     Jack can't possibly give a correct yes/no answer to the question.
>>
>> It is an easily verified fact that when Jack's question is posed to Jack
>> that this question is self-contradictory for Jack or anyone else having
>> a pathological relationship to the question.
>
> But the problem is "Jack" here is assumed to be a volitional being.
>
> H is not; it is a program, so before we even ask H what will happen, the
> answer has been fixed by the definition of the code of H.
>
>>
>> It is also clear that when a question has no yes or no answer because
>> it is self-contradictory that this question is aptly classified as
>> incorrect.
>
> And the actual question DOES have a yes or no answer; in this case,
> since H(D,D) says 0 (non-Halting), the actual answer to the question
> "Does D(D) Halt?" is YES.
>
> You just confuse yourself by trying to imagine a program that can
> somehow change itself "at will".
>
>>
>> It is incorrect to say that a question is not self-contradictory on the
>> basis that it is not self-contradictory in some contexts. If a question
>> is self-contradictory in some contexts then in these contexts it is an
>> incorrect question.
>
> In what context is "Does the Machine D(D) Halt When run" become
> self-contradictory?
When this question is posed to machine H.

Jack could be asked the question:
Will Jack answer "no" to this question?

For Jack it is self-contradictory; for others that are not
Jack it is not. Context changes the semantics.
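
The Jack question can be made concrete with a short sketch (an illustrative toy, not anyone's actual program): whichever fixed answer Jack gives turns out to be wrong, while someone who is not Jack, answering after Jack, can state the truth.

```python
# Toy model of the question: "Will Jack answer 'no' to this question?"

def truth_of_question(jacks_answer):
    # The question is true exactly when Jack answers "no".
    return "yes" if jacks_answer == "no" else "no"

# For either answer Jack might give, his answer never matches the truth.
for jacks_answer in ("yes", "no"):
    correct = truth_of_question(jacks_answer)
    print(jacks_answer, correct, jacks_answer == correct)

# An observer who is not Jack can answer correctly after seeing Jack's answer:
observer_answer = truth_of_question("no")  # if Jack said "no", the truth is "yes"
```

The same words, posed to a different answerer, carry a different semantics, which is the point under dispute.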

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<i8tjM.5978$Zq81.1390@fx15.iad>


https://www.novabbs.com/computers/article-flat.php?id=11344&group=comp.ai.philosophy#11344

Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer03.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx15.iad.POSTED!not-for-mail
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
Gecko/20100101 Thunderbird/102.12.0
Subject: Re: ChatGPT agrees that the halting problem input can be construed as
an incorrect question
Content-Language: en-US
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
References: <u6jhqq$1570m$1@dont-email.me> <FnhjM.5848$33q9.1032@fx35.iad>
<871qi9oky8.fsf@bsb.me.uk> <u6l9jr$1ccr7$1@dont-email.me>
<j6rjM.5494$HtC8.4636@fx36.iad> <u6lhbb$1da24$2@dont-email.me>
From: Rich...@Damon-Family.org (Richard Damon)
In-Reply-To: <u6lhbb$1da24$2@dont-email.me>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Lines: 120
Message-ID: <i8tjM.5978$Zq81.1390@fx15.iad>
X-Complaints-To: abuse@easynews.com
Organization: Forte - www.forteinc.com
X-Complaints-Info: Please be sure to forward a copy of ALL headers otherwise we will be unable to process your complaint properly.
Date: Sat, 17 Jun 2023 21:31:58 -0400
X-Received-Bytes: 6851
 by: Richard Damon - Sun, 18 Jun 2023 01:31 UTC

On 6/17/23 7:58 PM, olcott wrote:
> On 6/17/2023 6:13 PM, Richard Damon wrote:
>> On 6/17/23 5:46 PM, olcott wrote:
>>> On 6/17/2023 4:09 PM, Ben Bacarisse wrote:
>>>> Richard Damon <Richard@Damon-Family.org> writes:
>>>>
>>>>> Except that the Halting Problem isn't a "Self-Contradictory"
>>>>> Question, so
>>>>> the answer doesn't apply.
>>>>
>>>> That's an interesting point that would often catch students out.  And
>>>> the reason /why/ it catches so many out eventually led me to stop using
>>>> the proof-by-contradiction argument in my classes.
>>>>
>>>> The thing is, it looks so very much like a self-contradicting question
>>>> is being asked.  The students think they can see it right there in the
>>>> constructed code: "if H says I halt, I don't halt!".
>>>>
>>>> Of course, they are wrong.  The code is /not/ there.  The code calls a
>>>> function that does not exist, so "it" (the constructed code, the whole
>>>> program) does not exist either.
>>>>
>>>> The fact that it's code, and the students are almost all programmers
>>>> and
>>>> not mathematicians, makes it worse.  A mathematician seeing "let p be
>>>> the largest prime" does not assume that such a p exists.  So when a
>>>> prime number p' > p is constructed from p, this is not seen as a
>>>> "self-contradictory number" because neither p nor p' exist.  But the
>>>> halting theorem is even more deceptive for programmers, because the
>>>> desired function, H (or whatever), appears to be so well defined --
>>>> much
>>>> more well-defined than "the largest prime".  We have an exact
>>>> specification for it, mapping arguments to returned values.  It's just
>>>> software engineering to write such things (they erroneously assume).
>>>>
>>>> These sorts of proof can always be re-worded so as to avoid the initial
>>>> assumption.  For example, we can start "let p be any prime", and from p
>>>> we construct a prime p' > p.  And for halting, we can start "let H be
>>>> any subroutine of two arguments always returning true or false".  Now,
>>>> all the objects /do/ exist.  In the first case, the construction shows
>>>> that no prime is the largest, and in the second it shows that no
>>>> subroutine computes the halting function.
>>>>
>>>> This issue led to another change.  In the last couple of years, I would
>>>> start the course by setting Post's correspondence problem as if it were
>>>> just a fun programming challenge.  As the days passed (and the course
>>>> got into more and more serious material) it would start to become clear
>>>> that this was no ordinary programming challenge.  Many students started
>>>> to suspect that, despite the trivial sounding specification, no program
>>>> could do the job.  I always felt a bit uneasy doing this, as if I was
>>>> not being 100% honest, but it was a very useful learning experience for
>>>> most.
>>>>
>>>
>>> sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
>>>     You ask someone (we'll call him "Jack") to give a truthful
>>>     yes/no answer to the following question:
>>>
>>>     Will Jack's answer to this question be no?
>>>
>>>     Jack can't possibly give a correct yes/no answer to the question.
>>>
>>> It is an easily verified fact that when Jack's question is posed to
>>> Jack, this question is self-contradictory for Jack or anyone else
>>> having a pathological relationship to the question.
>>
>> But the problem is "Jack" here is assumed to be a volitional being.
>>
>> H is not, it is a program, so before we even ask H what will happen,
>> the answer has been fixed by the definition of the code of H.
>>
>>>
>>> It is also clear that when a question has no yes or no answer because
>>> it is self-contradictory that this question is aptly classified as
>>> incorrect.
>>
>> And the actual question DOES have a yes or no answer; in this case,
>> since H(D,D) says 0 (non-Halting), the actual answer to the question
>> "Does D(D) Halt?" is YES.
>>
>> You just confuse yourself by trying to imagine a program that can
>> somehow change itself "at will".
>>
>>>
>>> It is incorrect to say that a question is not self-contradictory on the
>>> basis that it is not self-contradictory in some contexts. If a question
>>> is self-contradictory in some contexts then in these contexts it is an
>>> incorrect question.
>>
>> In what context is "Does the Machine D(D) Halt When run" become
>> self-contradictory?
> When this question is posed to machine H.
>
> Jack could be asked the question:
> Will Jack answer "no" to this question?
>
> For Jack it is self-contradictory; for others that are not
> Jack it is not. Context changes the semantics.
>

But you are missing the difference. A decider is a fixed piece of code,
so its answer to this question has been fixed ever since it was
designed. Thus what it will say isn't a variable that can lead to a
self-contradiction cycle, but a fixed result that will either be correct
or incorrect.

A given H can't help but give the answer its program says it will give,
and thus it doesn't matter that we are asking H itself, as its answer is
already fixed.

You are confusing logic about volitional beings with logic about fixed
procedures.

Add in that if you actually did it right, and the input had a new copy
of a program equivalent to H, then the method used by H to detect the
"pathological" interaction becomes impossible. (This is why you need to
precisely define what you mean by "pathological relationship": you will
find that either your H can't detect it, or we can make a variation on H
that D can use that doesn't meet your definition of pathological but
still makes H wrong.)
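
The fixed-code point can be made concrete with a short sketch (a hypothetical toy, not an actual halt decider): once H's code is written, its verdict on (D, D) is determined, and the D constructed from it simply makes that fixed verdict wrong.

```python
# A fixed, concrete H: its answer was decided the moment this line was written.
def H(program, arg):
    return False          # claims "does not halt" for every input

# The classic construction: D does the opposite of whatever H predicts.
def D(arg):
    if H(D, arg):
        while True:       # H said "halts", so loop forever
            pass
    return "halted"       # H said "does not halt", so halt at once

# H's verdict was fixed in advance; D(D) then actually halts,
# so the fixed verdict is simply incorrect -- no "cycle" occurs.
print(H(D, D), D(D))      # prints: False halted
```

There is no moment at which H "faces" a contradiction; its output is a constant of its code, and the construction just arranges for that constant to be the wrong one.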

Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<FltjM.5979$Zq81.3458@fx15.iad>


https://www.novabbs.com/computers/article-flat.php?id=11345&group=comp.ai.philosophy#11345

Path: i2pn2.org!i2pn.org!news.neodome.net!feeder1.feed.usenet.farm!feed.usenet.farm!peer01.ams4!peer.am4.highwinds-media.com!peer01.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx15.iad.POSTED!not-for-mail
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
Gecko/20100101 Thunderbird/102.12.0
Subject: Re: ChatGPT agrees that the halting problem input can be construed as
an incorrect question
Content-Language: en-US
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
References: <u6jhqq$1570m$1@dont-email.me> <FnhjM.5848$33q9.1032@fx35.iad>
<871qi9oky8.fsf@bsb.me.uk> <u6l9jr$1ccr7$1@dont-email.me>
<u6lak5$1ck00$1@dont-email.me> <2brjM.5495$HtC8.4759@fx36.iad>
<u6lggi$1da24$1@dont-email.me>
From: Rich...@Damon-Family.org (Richard Damon)
In-Reply-To: <u6lggi$1da24$1@dont-email.me>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Lines: 41
Message-ID: <FltjM.5979$Zq81.3458@fx15.iad>
X-Complaints-To: abuse@easynews.com
Organization: Forte - www.forteinc.com
X-Complaints-Info: Please be sure to forward a copy of ALL headers otherwise we will be unable to process your complaint properly.
Date: Sat, 17 Jun 2023 21:46:13 -0400
X-Received-Bytes: 2869
 by: Richard Damon - Sun, 18 Jun 2023 01:46 UTC

On 6/17/23 7:44 PM, olcott wrote:
> On 6/17/2023 6:18 PM, Richard Damon wrote:
>> On 6/17/23 6:03 PM, Jeff Barnett wrote:
>>>
>>> By the way, we have noticed that you haven't played the big "C" card
>>> recently. Is this 1) an immaculate cure, 2) you putting on your big
>>> boy pants and taking responsibility for your own sorry life and mind,
>>> or 3) the time where you try to wiggle out of a past sequel of lies?
>>> We've seen all but variation 2 in past interactions. The curious want
>>> to know the real skinny so speak up!
>>> --
>>> Jeff Barnett
>>
>>
>> My assumption (but just that) is that it has been a lie the whole time
>> to try to gain sympathy. He has earned no reputation for honesty, and
>> so none will be given.
>>
>> I will admit he might have been sick, but there has been no actual
>> evidence of it, so it is merely an unsubstantiated claim.
>
> I did have cancer jam packed in every lymph node.
> After chemo therapy last Summer this has cleared up.
>
> It is my current understanding that Follicular Lymphoma always
> comes back eventually.
>
> A FLIPI index score of 3 was very bad news.
> A 53% five year survival rate and a 35% 10 year survival rate.
> https://www.nature.com/articles/s41408-019-0269-6
>

Which is a fairly amazing recovery, as your reports from a year and a
half ago put you at something like a 90% chance of being dead by the end
of last year, from my memory.

I won't say you are lying, as I have no evidence, and do admit you could
be telling the truth, but considering your veracity on other topics, you
have no credit earned in believability, and shading some of the truth is
an act I wouldn't put past you.

Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<u6lq6v$1i475$1@dont-email.me>


https://www.novabbs.com/computers/article-flat.php?id=11346&group=comp.ai.philosophy#11346

Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polco...@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
Subject: Re: ChatGPT agrees that the halting problem input can be construed as
an incorrect question
Date: Sat, 17 Jun 2023 21:29:50 -0500
Organization: A noiseless patient Spider
Lines: 127
Message-ID: <u6lq6v$1i475$1@dont-email.me>
References: <u6jhqq$1570m$1@dont-email.me> <FnhjM.5848$33q9.1032@fx35.iad>
<871qi9oky8.fsf@bsb.me.uk> <u6l9jr$1ccr7$1@dont-email.me>
<j6rjM.5494$HtC8.4636@fx36.iad> <u6lhbb$1da24$2@dont-email.me>
<i8tjM.5978$Zq81.1390@fx15.iad>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 18 Jun 2023 02:29:51 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="b050d2502bb641b27f2af347a7fa563d";
logging-data="1642725"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX192AjPgRCc+Y+evwc5WGlDz"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
Thunderbird/102.12.0
Cancel-Lock: sha1:kM+EANyZKGSTAKFd7p3vAcHV1BM=
In-Reply-To: <i8tjM.5978$Zq81.1390@fx15.iad>
Content-Language: en-US
 by: olcott - Sun, 18 Jun 2023 02:29 UTC

On 6/17/2023 8:31 PM, Richard Damon wrote:
> On 6/17/23 7:58 PM, olcott wrote:
>> On 6/17/2023 6:13 PM, Richard Damon wrote:
>>> On 6/17/23 5:46 PM, olcott wrote:
>>>> On 6/17/2023 4:09 PM, Ben Bacarisse wrote:
>>>>> Richard Damon <Richard@Damon-Family.org> writes:
>>>>>
>>>>>> Except that the Halting Problem isn't a "Self-Contradictory"
>>>>>> Question, so
>>>>>> the answer doesn't apply.
>>>>>
>>>>> That's an interesting point that would often catch students out.  And
>>>>> the reason /why/ it catches so many out eventually led me to stop
>>>>> using
>>>>> the proof-by-contradiction argument in my classes.
>>>>>
>>>>> The thing is, it looks so very much like a self-contradicting question
>>>>> is being asked.  The students think they can see it right there in the
>>>>> constructed code: "if H says I halt, I don't halt!".
>>>>>
>>>>> Of course, they are wrong.  The code is /not/ there.  The code calls a
>>>>> function that does not exist, so "it" (the constructed code, the whole
>>>>> program) does not exist either.
>>>>>
>>>>> The fact that it's code, and the students are almost all
>>>>> programmers and
>>>>> not mathematicians, makes it worse.  A mathematician seeing "let p be
>>>>> the largest prime" does not assume that such a p exists.  So when a
>>>>> prime number p' > p is constructed from p, this is not seen as a
>>>>> "self-contradictory number" because neither p nor p' exist.  But the
>>>>> halting theorem is even more deceptive for programmers, because the
>>>>> desired function, H (or whatever), appears to be so well defined --
>>>>> much
>>>>> more well-defined than "the largest prime".  We have an exact
>>>>> specification for it, mapping arguments to returned values.  It's just
>>>>> software engineering to write such things (they erroneously assume).
>>>>>
>>>>> These sorts of proof can always be re-worded so as to avoid the
>>>>> initial
>>>>> assumption.  For example, we can start "let p be any prime", and
>>>>> from p
>>>>> we construct a prime p' > p.  And for halting, we can start "let H be
>>>>> any subroutine of two arguments always returning true or false".  Now,
>>>>> all the objects /do/ exist.  In the first case, the construction shows
>>>>> that no prime is the largest, and in the second it shows that no
>>>>> subroutine computes the halting function.
>>>>>
>>>>> This issue led to another change.  In the last couple of years, I
>>>>> would
>>>>> start the course by setting Post's correspondence problem as if it
>>>>> were
>>>>> just a fun programming challenge.  As the days passed (and the course
>>>>> got into more and more serious material) it would start to become
>>>>> clear
>>>>> that this was no ordinary programming challenge.  Many students
>>>>> started
>>>>> to suspect that, despite the trivial sounding specification, no
>>>>> program
>>>>> could do the job.  I always felt a bit uneasy doing this, as if I was
>>>>> not being 100% honest, but it was a very useful learning experience
>>>>> for
>>>>> most.
>>>>>
>>>>
>>>> sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
>>>>     You ask someone (we'll call him "Jack") to give a truthful
>>>>     yes/no answer to the following question:
>>>>
>>>>     Will Jack's answer to this question be no?
>>>>
>>>>     Jack can't possibly give a correct yes/no answer to the question.
>>>>
>>>> It is an easily verified fact that when Jack's question is posed to
>>>> Jack, this question is self-contradictory for Jack or anyone else
>>>> having a pathological relationship to the question.
>>>
>>> But the problem is "Jack" here is assumed to be a volitional being.
>>>
>>> H is not, it is a program, so before we even ask H what will happen,
>>> the answer has been fixed by the definition of the code of H.
>>>
>>>>
>>>> It is also clear that when a question has no yes or no answer because
>>>> it is self-contradictory that this question is aptly classified as
>>>> incorrect.
>>>
>>> And the actual question DOES have a yes or no answer; in this case,
>>> since H(D,D) says 0 (non-Halting), the actual answer to the question
>>> "Does D(D) Halt?" is YES.
>>>
>>> You just confuse yourself by trying to imagine a program that can
>>> somehow change itself "at will".
>>>
>>>>
>>>> It is incorrect to say that a question is not self-contradictory on the
>>>> basis that it is not self-contradictory in some contexts. If a question
>>>> is self-contradictory in some contexts then in these contexts it is an
>>>> incorrect question.
>>>
>>> In what context is "Does the Machine D(D) Halt When run" become
>>> self-contradictory?
>> When this question is posed to machine H.
>>
>> Jack could be asked the question:
>> Will Jack answer "no" to this question?
>>
>> For Jack it is self-contradictory; for others that are not
>> Jack it is not. Context changes the semantics.
>>
>
> But you are missing the difference. A Decider is a fixed piece of code,
> so its answer has always been fixed to this question since it has been
> designed. Thus what it will say isn't a variable that can lead to the
> self-contradiction cycle, but a fixed result that will either be correct
> or incorrect.
>

Every input to a Turing machine decider for which both Boolean return
values are incorrect is an incorrect input.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<u6lqh3$1i475$2@dont-email.me>


https://www.novabbs.com/computers/article-flat.php?id=11347&group=comp.ai.philosophy#11347

Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polco...@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
Subject: Re: ChatGPT agrees that the halting problem input can be construed as
an incorrect question
Date: Sat, 17 Jun 2023 21:35:14 -0500
Organization: A noiseless patient Spider
Lines: 56
Message-ID: <u6lqh3$1i475$2@dont-email.me>
References: <u6jhqq$1570m$1@dont-email.me> <FnhjM.5848$33q9.1032@fx35.iad>
<871qi9oky8.fsf@bsb.me.uk> <u6l9jr$1ccr7$1@dont-email.me>
<u6lak5$1ck00$1@dont-email.me> <2brjM.5495$HtC8.4759@fx36.iad>
<u6lggi$1da24$1@dont-email.me> <FltjM.5979$Zq81.3458@fx15.iad>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Sun, 18 Jun 2023 02:35:15 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="b050d2502bb641b27f2af347a7fa563d";
logging-data="1642725"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1+9Rpz73ujFaDX4x++ZW138"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
Thunderbird/102.12.0
Cancel-Lock: sha1:pm0RmGIJTaonUtk3D353J9yU4t8=
Content-Language: en-US
In-Reply-To: <FltjM.5979$Zq81.3458@fx15.iad>
 by: olcott - Sun, 18 Jun 2023 02:35 UTC

On 6/17/2023 8:46 PM, Richard Damon wrote:
> On 6/17/23 7:44 PM, olcott wrote:
>> On 6/17/2023 6:18 PM, Richard Damon wrote:
>>> On 6/17/23 6:03 PM, Jeff Barnett wrote:
>>>>
>>>> By the way, we have noticed that you haven't played the big "C" card
>>>> recently. Is this 1) an immaculate cure, 2) you putting on your big
>>>> boy pants and taking responsibility for your own sorry life and
>>>> mind, or 3) the time where you try to wiggle out of a past sequel of
>>>> lies? We've seen all but variation 2 in past interactions. The
>>>> curious want to know the real skinny so speak up!
>>>> --
>>>> Jeff Barnett
>>>
>>>
>>> My assumption (but just that) is that it has been a lie the whole
>>> time to try to gain sympathy. He has earned no reputation for honesty,
>>> and so none will be given.
>>>
>>> I will admit he might have been sick, but there has been no actual
>>> evidence of it, so it is merely an unsubstantiated claim.
>>
>> I did have cancer jam packed in every lymph node.
>> After chemo therapy last Summer this has cleared up.
>>
>> It is my current understanding that Follicular Lymphoma always
>> comes back eventually.
>>
>> A FLIPI index score of 3 was very bad news.
>> A 53% five year survival rate and a 35% 10 year survival rate.
>> https://www.nature.com/articles/s41408-019-0269-6
>>
>
> Which is a fairly amazing recovery, as your reports from a year and a
> half ago were something like 90% dead by the end of last year from my
> memory.
>
> I won't say you are lying, as I have no evidence, and do admit you could
> be telling the truth, but considering your veracity on other topics, you
> have no credit earned in believability, and shading some of the truth is
> an act I wouldn't put past you.
>

It is not the case that I ever lied on this forum. Most people
make the mistake of calling me a liar entirely on the basis that
they really don't believe me and that what I say goes against
conventional wisdom.

Most people seem to take conventional wisdom as the infallible
word of God.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<joujM.1824$VKY6.722@fx13.iad>


https://www.novabbs.com/computers/article-flat.php?id=11348&group=comp.ai.philosophy#11348

Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer03.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx13.iad.POSTED!not-for-mail
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
Gecko/20100101 Thunderbird/102.12.0
Subject: Re: ChatGPT agrees that the halting problem input can be construed as
an incorrect question
Content-Language: en-US
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
References: <u6jhqq$1570m$1@dont-email.me> <FnhjM.5848$33q9.1032@fx35.iad>
<871qi9oky8.fsf@bsb.me.uk> <u6l9jr$1ccr7$1@dont-email.me>
<j6rjM.5494$HtC8.4636@fx36.iad> <u6lhbb$1da24$2@dont-email.me>
<i8tjM.5978$Zq81.1390@fx15.iad> <u6lq6v$1i475$1@dont-email.me>
From: Rich...@Damon-Family.org (Richard Damon)
In-Reply-To: <u6lq6v$1i475$1@dont-email.me>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Lines: 172
Message-ID: <joujM.1824$VKY6.722@fx13.iad>
X-Complaints-To: abuse@easynews.com
Organization: Forte - www.forteinc.com
X-Complaints-Info: Please be sure to forward a copy of ALL headers otherwise we will be unable to process your complaint properly.
Date: Sat, 17 Jun 2023 22:57:19 -0400
X-Received-Bytes: 8880
 by: Richard Damon - Sun, 18 Jun 2023 02:57 UTC

On 6/17/23 10:29 PM, olcott wrote:
> On 6/17/2023 8:31 PM, Richard Damon wrote:
>> On 6/17/23 7:58 PM, olcott wrote:
>>> On 6/17/2023 6:13 PM, Richard Damon wrote:
>>>> On 6/17/23 5:46 PM, olcott wrote:
>>>>> On 6/17/2023 4:09 PM, Ben Bacarisse wrote:
>>>>>> Richard Damon <Richard@Damon-Family.org> writes:
>>>>>>
>>>>>>> Except that the Halting Problem isn't a "Self-Contradictory"
>>>>>>> Question, so
>>>>>>> the answer doesn't apply.
>>>>>>
>>>>>> That's an interesting point that would often catch students out.  And
>>>>>> the reason /why/ it catches so many out eventually led me to stop
>>>>>> using
>>>>>> the proof-by-contradiction argument in my classes.
>>>>>>
>>>>>> The thing is, it looks so very much like a self-contradicting
>>>>>> question
>>>>>> is being asked.  The students think they can see it right there in
>>>>>> the
>>>>>> constructed code: "if H says I halt, I don't halt!".
>>>>>>
>>>>>> Of course, they are wrong.  The code is /not/ there.  The code
>>>>>> calls a
>>>>>> function that does not exist, so "it" (the constructed code, the
>>>>>> whole
>>>>>> program) does not exist either.
>>>>>>
>>>>>> The fact that it's code, and the students are almost all
>>>>>> programmers and
>>>>>> not mathematicians, makes it worse.  A mathematician seeing "let p be
>>>>>> the largest prime" does not assume that such a p exists.  So when a
>>>>>> prime number p' > p is constructed from p, this is not seen as a
>>>>>> "self-contradictory number" because neither p nor p' exist.  But the
>>>>>> halting theorem is even more deceptive for programmers, because the
>>>>>> desired function, H (or whatever), appears to be so well defined
>>>>>> -- much
>>>>>> more well-defined than "the largest prime".  We have an exact
>>>>>> specification for it, mapping arguments to returned values.  It's
>>>>>> just
>>>>>> software engineering to write such things (they erroneously assume).
>>>>>>
>>>>>> These sorts of proof can always be re-worded so as to avoid the
>>>>>> initial
>>>>>> assumption.  For example, we can start "let p be any prime", and
>>>>>> from p
>>>>>> we construct a prime p' > p.  And for halting, we can start "let H be
>>>>>> any subroutine of two arguments always returning true or false".
>>>>>> Now,
>>>>>> all the objects /do/ exist.  In the first case, the construction
>>>>>> shows
>>>>>> that no prime is the largest, and in the second it shows that no
>>>>>> subroutine computes the halting function.
>>>>>>
>>>>>> This issue led to another change.  In the last couple of years, I
>>>>>> would
>>>>>> start the course by setting Post's correspondence problem as if it
>>>>>> were
>>>>>> just a fun programming challenge.  As the days passed (and the course
>>>>>> got into more and more serious material) it would start to become
>>>>>> clear
>>>>>> that this was no ordinary programming challenge.  Many students
>>>>>> started
>>>>>> to suspect that, despite the trivial sounding specification, no
>>>>>> program
>>>>>> could do the job.  I always felt a bit uneasy doing this, as if I was
>>>>>> not being 100% honest, but it was a very useful learning
>>>>>> experience for
>>>>>> most.
>>>>>>
>>>>>
>>>>> sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
>>>>>     You ask someone (we'll call him "Jack") to give a truthful
>>>>>     yes/no answer to the following question:
>>>>>
>>>>>     Will Jack's answer to this question be no?
>>>>>
>>>>>     Jack can't possibly give a correct yes/no answer to the question.
>>>>>
>>>>> It is an easily verified fact that when Jack's question is posed to
>>>>> Jack, this question is self-contradictory for Jack or anyone else
>>>>> having a pathological relationship to the question.
>>>>
>>>> But the problem is "Jack" here is assumed to be a volitional being.
>>>>
>>>> H is not, it is a program, so before we even ask H what will happen,
>>>> the answer has been fixed by the definition of the code of H.
>>>>
>>>>>
>>>>> It is also clear that when a question has no yes or no answer because
>>>>> it is self-contradictory that this question is aptly classified as
>>>>> incorrect.
>>>>
>>>> And the actual question DOES have a yes or no answer; in this case,
>>>> since H(D,D) says 0 (non-Halting), the actual answer to the question
>>>> "Does D(D) Halt?" is YES.
>>>>
>>>> You just confuse yourself by trying to imagine a program that can
>>>> somehow change itself "at will".
>>>>
>>>>>
>>>>> It is incorrect to say that a question is not self-contradictory on
>>>>> the
>>>>> basis that it is not self-contradictory in some contexts. If a
>>>>> question
>>>>> is self-contradictory in some contexts then in these contexts it is an
>>>>> incorrect question.
>>>>
>>>> In what context is "Does the Machine D(D) Halt When run" become
>>>> self-contradictory?
>>> When this question is posed to machine H.
>>>
>>> Jack could be asked the question:
>>> Will Jack answer "no" to this question?
>>>
>>> For Jack it is self-contradictory; for others that are not
>>> Jack it is not. Context changes the semantics.
>>>
>>
>> But you are missing the difference. A Decider is a fixed piece of
>> code, so its answer has always been fixed to this question since it
>> has been designed. Thus what it will say isn't a variable that can
>> lead to the self-contradiction cycle, but a fixed result that will
>> either be correct or incorrect.
>>
>
> Every input to a Turing machine decider such that both Boolean return
> values are incorrect is an incorrect input.
>

Except it isn't. The problem is that you are looking at two different
machines and two different inputs.

If you define your H0 to return 0 when given the input <D0> <D0> for the
D0 built on H0, then since D0 applied to <D0> will halt, the correct
answer is 1. If H0 returned that answer, it would have been correct, but
since H0 was defined with code that answers 0, that is the only thing it
can answer.

On the other hand, if you instead defined a DIFFERENT machine H1 that
uses similar logic, but instead of returning Non-Halting returned
Halting, then H1 applied to <D0> <D0> would abort its simulation and
return 1, and it would have been correct. The problem here is that since
H1 is a different machine, its "pathological" program is different
(since it will be built on H1, not H0), and H1 applied to <D1> <D1> will
abort its simulation and return 1, but D1 applied to <D1> will go into
an infinite loop, so the correct answer should have been 0.

So, the problem is that the two cases you are looking at are DIFFERENT
inputs, because they are built on DIFFERENT machines. You don't seem to
understand that a machine WILL generate the results that machine is
programmed for, so "hypotheticals" about it doing something different
are just looking at impossible actions.

So, it isn't the case that both answers are wrong for the same question;
it is that the question changes when you alter your decider, and
whatever answer you make your decider give will be wrong, and the other
one right.
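
The H0/H1 argument can be sketched in a few lines (hypothetical toy deciders; the step-limited simulation described in the post is elided): each decider gets its own pathological input, and each is wrong only about its own.

```python
def H0(program, arg):
    return 0              # always answers "does not halt"

def H1(program, arg):
    return 1              # same idea, opposite verdict

def make_D(H):
    # Build the "pathological" program for a given decider H.
    def D(arg):
        if H(D, arg):
            while True:   # H said "halts": loop forever
                pass
        return "halted"   # H said "does not halt": halt
    return D

D0, D1 = make_D(H0), make_D(H1)

print(H0(D0, D0))  # 0 -- but D0(D0) halts, so H0 is wrong about D0
print(D0(D0))      # 'halted'
print(H1(D0, D0))  # 1 -- correct about D0
print(H1(D1, D1))  # 1 -- but D1(D1) loops forever, so H1 is wrong
                   #      about its OWN pathological input D1
# (D1(D1) is never called here: it would not return.)
```

Note that <D0> and <D1> are different inputs: each question has a definite yes/no answer, and it is only the decider the input was built against that gets it wrong.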


Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<3uujM.920$5kS8.292@fx41.iad>


https://www.novabbs.com/computers/article-flat.php?id=11349&group=comp.ai.philosophy#11349

Path: i2pn2.org!i2pn.org!weretis.net!feeder8.news.weretis.net!newsreader4.netcologne.de!news.netcologne.de!peer03.ams1!peer.ams1.xlned.com!news.xlned.com!peer01.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx41.iad.POSTED!not-for-mail
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
Gecko/20100101 Thunderbird/102.12.0
Subject: Re: ChatGPT agrees that the halting problem input can be construed as
an incorrect question
Content-Language: en-US
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
References: <u6jhqq$1570m$1@dont-email.me> <FnhjM.5848$33q9.1032@fx35.iad>
<871qi9oky8.fsf@bsb.me.uk> <u6l9jr$1ccr7$1@dont-email.me>
<u6lak5$1ck00$1@dont-email.me> <2brjM.5495$HtC8.4759@fx36.iad>
<u6lggi$1da24$1@dont-email.me> <FltjM.5979$Zq81.3458@fx15.iad>
<u6lqh3$1i475$2@dont-email.me>
From: Rich...@Damon-Family.org (Richard Damon)
In-Reply-To: <u6lqh3$1i475$2@dont-email.me>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Lines: 76
Message-ID: <3uujM.920$5kS8.292@fx41.iad>
X-Complaints-To: abuse@easynews.com
Organization: Forte - www.forteinc.com
X-Complaints-Info: Please be sure to forward a copy of ALL headers otherwise we will be unable to process your complaint properly.
Date: Sat, 17 Jun 2023 23:03:26 -0400
X-Received-Bytes: 4265
 by: Richard Damon - Sun, 18 Jun 2023 03:03 UTC

On 6/17/23 10:35 PM, olcott wrote:
> On 6/17/2023 8:46 PM, Richard Damon wrote:
>> On 6/17/23 7:44 PM, olcott wrote:
>>> On 6/17/2023 6:18 PM, Richard Damon wrote:
>>>> On 6/17/23 6:03 PM, Jeff Barnett wrote:
>>>>>
>>>>> By the way, we have noticed that you haven't played the big "C"
>>>>> card recently. Is this 1) an immaculate cure, 2) you putting on
>>>>> your big boy pants and taking responsibility for your own sorry
>>>>> life and mind, or 3) the time where you try to wiggle out of a past
>>>>> sequel of lies? We've seen all but variation 2 in past
>>>>> interactions. The curious want to know the real skinny so speak up!
>>>>> --
>>>>> Jeff Barnett
>>>>
>>>>
>>>> My assumption (but just that) is that it has been a lie the whole
>>>> time to try to gain sympathy. He has earned no reputation for
>>>> honesty, and so none will be given.
>>>>
>>>> I will admit he might have been sick, but there has been no actual
>>>> evidence of it, so it is merely an unsubstantiated claim.
>>>
>>> I did have cancer jam packed in every lymph node.
>>> After chemo therapy last Summer this has cleared up.
>>>
>>> It is my current understanding that Follicular Lymphoma always
>>> comes back eventually.
>>>
>>> A FLIPI index score of 3 was very bad news.
>>> A 53% five year survival rate and a 35% 10 year survival rate.
>>> https://www.nature.com/articles/s41408-019-0269-6
>>>
>>
>> Which is a fairly amazing recovery, as your reports from a year and a
>> half ago were something like 90% dead by the end of last year from my
>> memory.
>>
>> I won't say you are lying, as I have no evidence, and do admit you
>> could be telling the truth, but considering your veracity on other
>> topics, you have no credit earned in believability, and shading some
>> of the truth is an act I wouldn't put past you.
>>
>
> It is not the case that I ever lied on this forum. Most people
> make the mistake of calling me a liar entirely on the basis that
> they really really don't believe me and what I say goes against
> conventional wisdom.

That is not true. There have been several cases where you have said that
someone said something that just wasn't true.

You also twist the words of people claiming they gave your ideas
support, when they did no such thing.

You also engage in great deception by improper trimming of quotations,
removing "inconvenient" (to you) parts of statements to change their
meaning.

>
> Most people seem to take conventional wisdom as the infallible
> word of God.
>
>

Meanwhile, you treat your own words as that infallible word of God,
since you think you are him.

You don't understand the difference between "conventional wisdom" and
the DEFINITION of what something is, in part because you just don't
understand what truth actually is.

You seem incapable of actually dealing with truth, which is why you are
a pathological liar. I don't think your mind can actually handle how
truth actually works.

Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<u6lsjq$1id16$1@dont-email.me>


https://www.novabbs.com/computers/article-flat.php?id=11350&group=comp.ai.philosophy#11350

Newsgroups: comp.theory sci.logic comp.ai.philosophy
Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polco...@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
Subject: Re: ChatGPT agrees that the halting problem input can be construed as
an incorrect question
Date: Sat, 17 Jun 2023 22:10:50 -0500
Organization: A noiseless patient Spider
Lines: 153
Message-ID: <u6lsjq$1id16$1@dont-email.me>
References: <u6jhqq$1570m$1@dont-email.me> <FnhjM.5848$33q9.1032@fx35.iad>
<871qi9oky8.fsf@bsb.me.uk> <u6l9jr$1ccr7$1@dont-email.me>
<j6rjM.5494$HtC8.4636@fx36.iad> <u6lhbb$1da24$2@dont-email.me>
<i8tjM.5978$Zq81.1390@fx15.iad> <u6lq6v$1i475$1@dont-email.me>
<joujM.1824$VKY6.722@fx13.iad>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 18 Jun 2023 03:10:50 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="b050d2502bb641b27f2af347a7fa563d";
logging-data="1651750"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1823ARifNQSCryH69yuHKYM"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
Thunderbird/102.12.0
Cancel-Lock: sha1:PTMEOMZpVn7uYFJ71JXvxep/4z4=
In-Reply-To: <joujM.1824$VKY6.722@fx13.iad>
Content-Language: en-US
 by: olcott - Sun, 18 Jun 2023 03:10 UTC

On 6/17/2023 9:57 PM, Richard Damon wrote:
> On 6/17/23 10:29 PM, olcott wrote:
>> On 6/17/2023 8:31 PM, Richard Damon wrote:
>>> On 6/17/23 7:58 PM, olcott wrote:
>>>> On 6/17/2023 6:13 PM, Richard Damon wrote:
>>>>> On 6/17/23 5:46 PM, olcott wrote:
>>>>>> On 6/17/2023 4:09 PM, Ben Bacarisse wrote:
>>>>>>> Richard Damon <Richard@Damon-Family.org> writes:
>>>>>>>
>>>>>>>> Except that the Halting Problem isn't a "Self-Contradictory"
>>>>>>> Question, so
>>>>>>>> the answer doesn't apply.
>>>>>>>
>>>>>>> That's an interesting point that would often catch students out.
>>>>>>> And
>>>>>>> the reason /why/ it catches so many out eventually led me to stop
>>>>>>> using
>>>>>>> the proof-by-contradiction argument in my classes.
>>>>>>>
>>>>>>> The thing is, it looks so very much like a self-contradicting
>>>>>>> question
>>>>>>> is being asked.  The students think they can see it right there
>>>>>>> in the
>>>>>>> constructed code: "if H says I halt, I don't halt!".
>>>>>>>
>>>>>>> Of course, they are wrong.  The code is /not/ there.  The code
>>>>>>> calls a
>>>>>>> function that does not exist, so "it" (the constructed code, the
>>>>>>> whole
>>>>>>> program) does not exist either.
>>>>>>>
>>>>>>> The fact that it's code, and the students are almost all
>>>>>>> programmers and
>>>>>>> not mathematicians, makes it worse.  A mathematician seeing "let
>>>>>>> p be
>>>>>>> the largest prime" does not assume that such a p exists.  So when a
>>>>>>> prime number p' > p is constructed from p, this is not seen as a
>>>>>>> "self-contradictory number" because neither p nor p' exist.  But the
>>>>>>> halting theorem is even more deceptive for programmers, because the
>>>>>>> desired function, H (or whatever), appears to be so well defined
>>>>>>> -- much
>>>>>>> more well-defined than "the largest prime".  We have an exact
>>>>>>> specification for it, mapping arguments to returned values.  It's
>>>>>>> just
>>>>>>> software engineering to write such things (they erroneously assume).
>>>>>>>
>>>>>>> These sorts of proof can always be re-worded so as to avoid the
>>>>>>> initial
>>>>>>> assumption.  For example, we can start "let p be any prime", and
>>>>>>> from p
>>>>>>> we construct a prime p' > p.  And for halting, we can start "let
>>>>>>> H be
>>>>>>> any subroutine of two arguments always returning true or false".
>>>>>>> Now,
>>>>>>> all the objects /do/ exist.  In the first case, the construction
>>>>>>> shows
>>>>>>> that no prime is the largest, and in the second it shows that no
>>>>>>> subroutine computes the halting function.
>>>>>>>
>>>>>>> This issue led to another change.  In the last couple of years, I
>>>>>>> would
>>>>>>> start the course by setting Post's correspondence problem as if
>>>>>>> it were
>>>>>>> just a fun programming challenge.  As the days passed (and the
>>>>>>> course
>>>>>>> got into more and more serious material) it would start to become
>>>>>>> clear
>>>>>>> that this was no ordinary programming challenge.  Many students
>>>>>>> started
>>>>>>> to suspect that, despite the trivial sounding specification, no
>>>>>>> program
>>>>>>> could do the job.  I always felt a bit uneasy doing this, as if I
>>>>>>> was
>>>>>>> not being 100% honest, but it was a very useful learning
>>>>>>> experience for
>>>>>>> most.
>>>>>>>
>>>>>>
>>>>>> sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
>>>>>>     You ask someone (we'll call him "Jack") to give a truthful
>>>>>>     yes/no answer to the following question:
>>>>>>
>>>>>>     Will Jack's answer to this question be no?
>>>>>>
>>>>>>     Jack can't possibly give a correct yes/no answer to the question.
>>>>>>
>>>>>> It is an easily verified fact that when Jack's question is posed
>>>>>> to Jack
>>>>>> that this question is self-contradictory for Jack or anyone else
>>>>>> having
>>>>>> a pathological relationship to the question.
>>>>>
>>>>> But the problem is "Jack" here is assumed to be a volitional being.
>>>>>
>>>>> H is not, it is a program, so before we even ask H what will
>>>>> happen, the answer has been fixed by the definition of the code of H.
>>>>>
>>>>>>
>>>>>> It is also clear that when a question has no yes or no answer because
>>>>>> it is self-contradictory that this question is aptly classified as
>>>>>> incorrect.
>>>>>
>>>>> And the actual question DOES have a yes or no answer, in this case,
>>>>> since H(D,D) says 0 (non-Halting) the actual answer to the question
>>>>> does D(D) Halt is YES.
>>>>>
>>>>> You just confuse yourself by trying to imagine a program that can
>>>>> somehow change itself "at will".
>>>>>
>>>>>>
>>>>>> It is incorrect to say that a question is not self-contradictory
>>>>>> on the
>>>>>> basis that it is not self-contradictory in some contexts. If a
>>>>>> question
>>>>>> is self-contradictory in some contexts then in these contexts it
>>>>>> is an
>>>>>> incorrect question.
>>>>>
>>>>> In what context is "Does the Machine D(D) Halt When run" become
>>>>> self-contradictory?
>>>> When this question is posed to machine H.
>>>>
>>>> Jack could be asked the question:
>>>> Will Jack answer "no" to this question?
>>>>
>>>> For Jack it is self-contradictory for others that are not
>>>> Jack it is not self-contradictory. Context changes the semantics.
>>>>
>>>
>>> But you are missing the difference. A Decider is a fixed piece of
>>> code, so its answer has always been fixed to this question since it
>>> has been designed. Thus what it will say isn't a variable that can
>>> lead to the self-contradiction cycle, but a fixed result that will
>>> either be correct or incorrect.
>>>
>>
>> Every input to a Turing machine decider such that both Boolean return
>> values are incorrect is an incorrect input.
>>
>
> Except it isn't. The problem is you are looking at two different
> machines and two different inputs.
>
If no one can possibly say what correct return value any H<n> having a
pathological relationship to its input D<n> could provide, then that is
proof that D<n> is an invalid input for H<n>, in the same way that any
self-contradictory question is an incorrect question.
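Whether one reads this as an "invalid input" or simply as a wrong answer
can be checked concretely: once a particular H<n> is fixed, D<n>(D<n>)
has a definite behavior, so the question "does D<n>(D<n>) halt?" has a
definite answer. A minimal Python sketch (H_n here is a hypothetical
stand-in that answers 0 for every input, not an actual simulating
decider):

```python
def H_n(p, i):
    return 0               # this fixed decider always answers "non-halting"

def D_n(p):
    if H_n(p, p) == 1:     # if H_n said "halts"...
        while True:        # ...do the opposite and loop forever
            pass
    return                 # H_n said "non-halting", so halt

# D_n(D_n) halts, so the question "does D_n(D_n) halt?" has the
# definite answer "yes" -- it is H_n's fixed answer of 0 that is wrong,
# rather than the question lacking a correct answer.
D_n(D_n)
```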

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<knCjM.62$_%y4.58@fx48.iad>


https://www.novabbs.com/computers/article-flat.php?id=11351&group=comp.ai.philosophy#11351

Newsgroups: comp.theory sci.logic comp.ai.philosophy
Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer03.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx48.iad.POSTED!not-for-mail
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
Gecko/20100101 Thunderbird/102.12.0
Subject: Re: ChatGPT agrees that the halting problem input can be construed as
an incorrect question
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
References: <u6jhqq$1570m$1@dont-email.me> <FnhjM.5848$33q9.1032@fx35.iad>
<871qi9oky8.fsf@bsb.me.uk> <u6l9jr$1ccr7$1@dont-email.me>
<j6rjM.5494$HtC8.4636@fx36.iad> <u6lhbb$1da24$2@dont-email.me>
<i8tjM.5978$Zq81.1390@fx15.iad> <u6lq6v$1i475$1@dont-email.me>
<joujM.1824$VKY6.722@fx13.iad> <u6lsjq$1id16$1@dont-email.me>
From: Rich...@Damon-Family.org (Richard Damon)
Content-Language: en-US
In-Reply-To: <u6lsjq$1id16$1@dont-email.me>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Lines: 179
Message-ID: <knCjM.62$_%y4.58@fx48.iad>
X-Complaints-To: abuse@easynews.com
Organization: Forte - www.forteinc.com
X-Complaints-Info: Please be sure to forward a copy of ALL headers otherwise we will be unable to process your complaint properly.
Date: Sun, 18 Jun 2023 08:02:23 -0400
X-Received-Bytes: 9073
 by: Richard Damon - Sun, 18 Jun 2023 12:02 UTC

On 6/17/23 11:10 PM, olcott wrote:
> On 6/17/2023 9:57 PM, Richard Damon wrote:
>> On 6/17/23 10:29 PM, olcott wrote:
>>> On 6/17/2023 8:31 PM, Richard Damon wrote:
>>>> On 6/17/23 7:58 PM, olcott wrote:
>>>>> On 6/17/2023 6:13 PM, Richard Damon wrote:
>>>>>> On 6/17/23 5:46 PM, olcott wrote:
>>>>>>> On 6/17/2023 4:09 PM, Ben Bacarisse wrote:
>>>>>>>> Richard Damon <Richard@Damon-Family.org> writes:
>>>>>>>>
>>>>>>>>> Except that the Halting Problem isn't a "Self-Contradictory"
>>>>>>>>> Question, so
>>>>>>>>> the answer doesn't apply.
>>>>>>>>
>>>>>>>> That's an interesting point that would often catch students out.
>>>>>>>> And
>>>>>>>> the reason /why/ it catches so many out eventually led me to
>>>>>>>> stop using
>>>>>>>> the proof-by-contradiction argument in my classes.
>>>>>>>>
>>>>>>>> The thing is, it looks so very much like a self-contradicting
>>>>>>>> question
>>>>>>>> is being asked.  The students think they can see it right there
>>>>>>>> in the
>>>>>>>> constructed code: "if H says I halt, I don't halt!".
>>>>>>>>
>>>>>>>> Of course, they are wrong.  The code is /not/ there.  The code
>>>>>>>> calls a
>>>>>>>> function that does not exist, so "it" (the constructed code, the
>>>>>>>> whole
>>>>>>>> program) does not exist either.
>>>>>>>>
>>>>>>>> The fact that it's code, and the students are almost all
>>>>>>>> programmers and
>>>>>>>> not mathematicians, makes it worse.  A mathematician seeing "let
>>>>>>>> p be
>>>>>>>> the largest prime" does not assume that such a p exists.  So when a
>>>>>>>> prime number p' > p is constructed from p, this is not seen as a
>>>>>>>> "self-contradictory number" because neither p nor p' exist.  But
>>>>>>>> the
>>>>>>>> halting theorem is even more deceptive for programmers, because the
>>>>>>>> desired function, H (or whatever), appears to be so well defined
>>>>>>>> -- much
>>>>>>>> more well-defined than "the largest prime".  We have an exact
>>>>>>>> specification for it, mapping arguments to returned values.
>>>>>>>> It's just
>>>>>>>> software engineering to write such things (they erroneously
>>>>>>>> assume).
>>>>>>>>
>>>>>>>> These sorts of proof can always be re-worded so as to avoid the
>>>>>>>> initial
>>>>>>>> assumption.  For example, we can start "let p be any prime", and
>>>>>>>> from p
>>>>>>>> we construct a prime p' > p.  And for halting, we can start "let
>>>>>>>> H be
>>>>>>>> any subroutine of two arguments always returning true or false".
>>>>>>>> Now,
>>>>>>>> all the objects /do/ exist.  In the first case, the construction
>>>>>>>> shows
>>>>>>>> that no prime is the largest, and in the second it shows that no
>>>>>>>> subroutine computes the halting function.
>>>>>>>>
>>>>>>>> This issue led to another change.  In the last couple of years,
>>>>>>>> I would
>>>>>>>> start the course by setting Post's correspondence problem as if
>>>>>>>> it were
>>>>>>>> just a fun programming challenge.  As the days passed (and the
>>>>>>>> course
>>>>>>>> got into more and more serious material) it would start to
>>>>>>>> become clear
>>>>>>>> that this was no ordinary programming challenge.  Many students
>>>>>>>> started
>>>>>>>> to suspect that, despite the trivial sounding specification, no
>>>>>>>> program
>>>>>>>> could do the job.  I always felt a bit uneasy doing this, as if
>>>>>>>> I was
>>>>>>>> not being 100% honest, but it was a very useful learning
>>>>>>>> experience for
>>>>>>>> most.
>>>>>>>>
>>>>>>>
>>>>>>> sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
>>>>>>>     You ask someone (we'll call him "Jack") to give a truthful
>>>>>>>     yes/no answer to the following question:
>>>>>>>
>>>>>>>     Will Jack's answer to this question be no?
>>>>>>>
>>>>>>>     Jack can't possibly give a correct yes/no answer to the
>>>>>>> question.
>>>>>>>
>>>>>>> It is an easily verified fact that when Jack's question is posed
>>>>>>> to Jack
>>>>>>> that this question is self-contradictory for Jack or anyone else
>>>>>>> having
>>>>>>> a pathological relationship to the question.
>>>>>>
>>>>>> But the problem is "Jack" here is assumed to be a volitional being.
>>>>>>
>>>>>> H is not, it is a program, so before we even ask H what will
>>>>>> happen, the answer has been fixed by the definition of the code of H.
>>>>>>
>>>>>>>
>>>>>>> It is also clear that when a question has no yes or no answer
>>>>>>> because
>>>>>>> it is self-contradictory that this question is aptly classified as
>>>>>>> incorrect.
>>>>>>
>>>>>> And the actual question DOES have a yes or no answer, in this
>>>>>> case, since H(D,D) says 0 (non-Halting) the actual answer to the
>>>>>> question does D(D) Halt is YES.
>>>>>>
>>>>>> You just confuse yourself by trying to imagine a program that can
>>>>>> somehow change itself "at will".
>>>>>>
>>>>>>>
>>>>>>> It is incorrect to say that a question is not self-contradictory
>>>>>>> on the
>>>>>>> basis that it is not self-contradictory in some contexts. If a
>>>>>>> question
>>>>>>> is self-contradictory in some contexts then in these contexts it
>>>>>>> is an
>>>>>>> incorrect question.
>>>>>>
>>>>>> In what context is "Does the Machine D(D) Halt When run" become
>>>>>> self-contradictory?
>>>>> When this question is posed to machine H.
>>>>>
>>>>> Jack could be asked the question:
>>>>> Will Jack answer "no" to this question?
>>>>>
>>>>> For Jack it is self-contradictory for others that are not
>>>>> Jack it is not self-contradictory. Context changes the semantics.
>>>>>
>>>>
>>>> But you are missing the difference. A Decider is a fixed piece of
>>>> code, so its answer has always been fixed to this question since it
>>>> has been designed. Thus what it will say isn't a variable that can
>>>> lead to the self-contradiction cycle, but a fixed result that will
>>>> either be correct or incorrect.
>>>>
>>>
>>> Every input to a Turing machine decider such that both Boolean return
>>> values are incorrect is an incorrect input.
>>>
>>
>> Except it isn't. The problem is you are looking at two different
>> machines and two different inputs.
>>
> If no one can possibly correctly answer what the correct return value
> that any H<n> having a pathological relationship to its input D<n> could
> possibly provide then that is proof that D<n> is an invalid input for
> H<n> in the same way that any self-contradictory question is an
> incorrect question.
>


Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<u6n4ho$1m6pt$1@dont-email.me>


https://www.novabbs.com/computers/article-flat.php?id=11352&group=comp.ai.philosophy#11352

Newsgroups: comp.theory sci.logic comp.ai.philosophy
Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polco...@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
Subject: Re: ChatGPT agrees that the halting problem input can be construed as
an incorrect question
Date: Sun, 18 Jun 2023 09:32:22 -0500
Organization: A noiseless patient Spider
Lines: 186
Message-ID: <u6n4ho$1m6pt$1@dont-email.me>
References: <u6jhqq$1570m$1@dont-email.me> <FnhjM.5848$33q9.1032@fx35.iad>
<871qi9oky8.fsf@bsb.me.uk> <u6l9jr$1ccr7$1@dont-email.me>
<j6rjM.5494$HtC8.4636@fx36.iad> <u6lhbb$1da24$2@dont-email.me>
<i8tjM.5978$Zq81.1390@fx15.iad> <u6lq6v$1i475$1@dont-email.me>
<joujM.1824$VKY6.722@fx13.iad> <u6lsjq$1id16$1@dont-email.me>
<knCjM.62$_%y4.58@fx48.iad>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 18 Jun 2023 14:32:24 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="b050d2502bb641b27f2af347a7fa563d";
logging-data="1776445"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1/8KqHLbqk1zN/IwvyzlrrV"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
Thunderbird/102.12.0
Cancel-Lock: sha1:q/Z/sgyYwZIXgwCREHoNi0x9vis=
In-Reply-To: <knCjM.62$_%y4.58@fx48.iad>
Content-Language: en-US
 by: olcott - Sun, 18 Jun 2023 14:32 UTC

On 6/18/2023 7:02 AM, Richard Damon wrote:
> On 6/17/23 11:10 PM, olcott wrote:
>> On 6/17/2023 9:57 PM, Richard Damon wrote:
>>> On 6/17/23 10:29 PM, olcott wrote:
>>>> On 6/17/2023 8:31 PM, Richard Damon wrote:
>>>>> On 6/17/23 7:58 PM, olcott wrote:
>>>>>> On 6/17/2023 6:13 PM, Richard Damon wrote:
>>>>>>> On 6/17/23 5:46 PM, olcott wrote:
>>>>>>>> On 6/17/2023 4:09 PM, Ben Bacarisse wrote:
>>>>>>>>> Richard Damon <Richard@Damon-Family.org> writes:
>>>>>>>>>
>>>>>>>>>> Except that the Halting Problem isn't a "Self-Contradictory"
>>>>>>>>>> Question, so
>>>>>>>>>> the answer doesn't apply.
>>>>>>>>>
>>>>>>>>> That's an interesting point that would often catch students
>>>>>>>>> out. And
>>>>>>>>> the reason /why/ it catches so many out eventually led me to
>>>>>>>>> stop using
>>>>>>>>> the proof-by-contradiction argument in my classes.
>>>>>>>>>
>>>>>>>>> The thing is, it looks so very much like a self-contradicting
>>>>>>>>> question
>>>>>>>>> is being asked.  The students think they can see it right there
>>>>>>>>> in the
>>>>>>>>> constructed code: "if H says I halt, I don't halt!".
>>>>>>>>>
>>>>>>>>> Of course, they are wrong.  The code is /not/ there.  The code
>>>>>>>>> calls a
>>>>>>>>> function that does not exist, so "it" (the constructed code,
>>>>>>>>> the whole
>>>>>>>>> program) does not exist either.
>>>>>>>>>
>>>>>>>>> The fact that it's code, and the students are almost all
>>>>>>>>> programmers and
>>>>>>>>> not mathematicians, makes it worse.  A mathematician seeing
>>>>>>>>> "let p be
>>>>>>>>> the largest prime" does not assume that such a p exists.  So
>>>>>>>>> when a
>>>>>>>>> prime number p' > p is constructed from p, this is not seen as a
>>>>>>>>> "self-contradictory number" because neither p nor p' exist.
>>>>>>>>> But the
>>>>>>>>> halting theorem is even more deceptive for programmers, because
>>>>>>>>> the
>>>>>>>>> desired function, H (or whatever), appears to be so well
>>>>>>>>> defined -- much
>>>>>>>>> more well-defined than "the largest prime".  We have an exact
>>>>>>>>> specification for it, mapping arguments to returned values.
>>>>>>>>> It's just
>>>>>>>>> software engineering to write such things (they erroneously
>>>>>>>>> assume).
>>>>>>>>>
>>>>>>>>> These sorts of proof can always be re-worded so as to avoid the
>>>>>>>>> initial
>>>>>>>>> assumption.  For example, we can start "let p be any prime",
>>>>>>>>> and from p
>>>>>>>>> we construct a prime p' > p.  And for halting, we can start
>>>>>>>>> "let H be
>>>>>>>>> any subroutine of two arguments always returning true or
>>>>>>>>> false". Now,
>>>>>>>>> all the objects /do/ exist.  In the first case, the
>>>>>>>>> construction shows
>>>>>>>>> that no prime is the largest, and in the second it shows that no
>>>>>>>>> subroutine computes the halting function.
>>>>>>>>>
>>>>>>>>> This issue led to another change.  In the last couple of years,
>>>>>>>>> I would
>>>>>>>>> start the course by setting Post's correspondence problem as if
>>>>>>>>> it were
>>>>>>>>> just a fun programming challenge.  As the days passed (and the
>>>>>>>>> course
>>>>>>>>> got into more and more serious material) it would start to
>>>>>>>>> become clear
>>>>>>>>> that this was no ordinary programming challenge.  Many students
>>>>>>>>> started
>>>>>>>>> to suspect that, despite the trivial sounding specification, no
>>>>>>>>> program
>>>>>>>>> could do the job.  I always felt a bit uneasy doing this, as if
>>>>>>>>> I was
>>>>>>>>> not being 100% honest, but it was a very useful learning
>>>>>>>>> experience for
>>>>>>>>> most.
>>>>>>>>>
>>>>>>>>
>>>>>>>> sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
>>>>>>>>     You ask someone (we'll call him "Jack") to give a truthful
>>>>>>>>     yes/no answer to the following question:
>>>>>>>>
>>>>>>>>     Will Jack's answer to this question be no?
>>>>>>>>
>>>>>>>>     Jack can't possibly give a correct yes/no answer to the
>>>>>>>> question.
>>>>>>>>
>>>>>>>> It is an easily verified fact that when Jack's question is posed
>>>>>>>> to Jack
>>>>>>>> that this question is self-contradictory for Jack or anyone else
>>>>>>>> having
>>>>>>>> a pathological relationship to the question.
>>>>>>>
>>>>>>> But the problem is "Jack" here is assumed to be a volitional being.
>>>>>>>
>>>>>>> H is not, it is a program, so before we even ask H what will
>>>>>>> happen, the answer has been fixed by the definition of the code
>>>>>>> of H.
>>>>>>>
>>>>>>>>
>>>>>>>> It is also clear that when a question has no yes or no answer
>>>>>>>> because
>>>>>>>> it is self-contradictory that this question is aptly classified as
>>>>>>>> incorrect.
>>>>>>>
>>>>>>> And the actual question DOES have a yes or no answer, in this
>>>>>>> case, since H(D,D) says 0 (non-Halting) the actual answer to the
>>>>>>> question does D(D) Halt is YES.
>>>>>>>
>>>>>>> You just confuse yourself by trying to imagine a program that can
>>>>>>> somehow change itself "at will".
>>>>>>>
>>>>>>>>
>>>>>>>> It is incorrect to say that a question is not self-contradictory
>>>>>>>> on the
>>>>>>>> basis that it is not self-contradictory in some contexts. If a
>>>>>>>> question
>>>>>>>> is self-contradictory in some contexts then in these contexts it
>>>>>>>> is an
>>>>>>>> incorrect question.
>>>>>>>
>>>>>>> In what context is "Does the Machine D(D) Halt When run" become
>>>>>>> self-contradictory?
>>>>>> When this question is posed to machine H.
>>>>>>
>>>>>> Jack could be asked the question:
>>>>>> Will Jack answer "no" to this question?
>>>>>>
>>>>>> For Jack it is self-contradictory for others that are not
>>>>>> Jack it is not self-contradictory. Context changes the semantics.
>>>>>>
>>>>>
>>>>> But you are missing the difference. A Decider is a fixed piece of
>>>>> code, so its answer has always been fixed to this question since it
>>>>> has been designed. Thus what it will say isn't a variable that can
>>>>> lead to the self-contradiction cycle, but a fixed result that will
>>>>> either be correct or incorrect.
>>>>>
>>>>
>>>> Every input to a Turing machine decider such that both Boolean return
>>>> values are incorrect is an incorrect input.
>>>>
>>>
>>> Except it isn't. The problem is you are looking at two different
>>> machines and two different inputs.
>>>
>> If no one can possibly correctly answer what the correct return value
>> that any H<n> having a pathological relationship to its input D<n>
>> could possibly provide then that is proof that D<n> is an invalid
>> input for H<n> in the same way that any self-contradictory question is
>> an incorrect question.
>>
>
> But you have the wrong Question. The Question is Does D(D) Halt, and
> that HAS a correct answer, since your H(D,D) returns 0, the answer is
> that D(D) does Halt, and thus H was wrong.
>
sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
You ask someone (we'll call him "Jack") to give a truthful
yes/no answer to the following question:


Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<PjGjM.29243$8uge.16102@fx14.iad>


https://www.novabbs.com/computers/article-flat.php?id=11353&group=comp.ai.philosophy#11353

Newsgroups: comp.theory sci.logic comp.ai.philosophy
Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer01.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx14.iad.POSTED!not-for-mail
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
Gecko/20100101 Thunderbird/102.12.0
Subject: Re: ChatGPT agrees that the halting problem input can be construed as
an incorrect question
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
References: <u6jhqq$1570m$1@dont-email.me> <FnhjM.5848$33q9.1032@fx35.iad>
<871qi9oky8.fsf@bsb.me.uk> <u6l9jr$1ccr7$1@dont-email.me>
<j6rjM.5494$HtC8.4636@fx36.iad> <u6lhbb$1da24$2@dont-email.me>
<i8tjM.5978$Zq81.1390@fx15.iad> <u6lq6v$1i475$1@dont-email.me>
<joujM.1824$VKY6.722@fx13.iad> <u6lsjq$1id16$1@dont-email.me>
<knCjM.62$_%y4.58@fx48.iad> <u6n4ho$1m6pt$1@dont-email.me>
From: Rich...@Damon-Family.org (Richard Damon)
Content-Language: en-US
In-Reply-To: <u6n4ho$1m6pt$1@dont-email.me>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Lines: 224
Message-ID: <PjGjM.29243$8uge.16102@fx14.iad>
X-Complaints-To: abuse@easynews.com
Organization: Forte - www.forteinc.com
X-Complaints-Info: Please be sure to forward a copy of ALL headers otherwise we will be unable to process your complaint properly.
Date: Sun, 18 Jun 2023 12:31:42 -0400
X-Received-Bytes: 10713
 by: Richard Damon - Sun, 18 Jun 2023 16:31 UTC

On 6/18/23 10:32 AM, olcott wrote:
> On 6/18/2023 7:02 AM, Richard Damon wrote:
>> On 6/17/23 11:10 PM, olcott wrote:
>>> On 6/17/2023 9:57 PM, Richard Damon wrote:
>>>> On 6/17/23 10:29 PM, olcott wrote:
>>>>> On 6/17/2023 8:31 PM, Richard Damon wrote:
>>>>>> On 6/17/23 7:58 PM, olcott wrote:
>>>>>>> On 6/17/2023 6:13 PM, Richard Damon wrote:
>>>>>>>> On 6/17/23 5:46 PM, olcott wrote:
>>>>>>>>> On 6/17/2023 4:09 PM, Ben Bacarisse wrote:
>>>>>>>>>> Richard Damon <Richard@Damon-Family.org> writes:
>>>>>>>>>>
>>>>>>>>>>> Except that the Halting Problem isn't a "Self-Contradictory"
>>>>>>>>>>> Quesiton, so
>>>>>>>>>>> the answer doesn't apply.
>>>>>>>>>>
>>>>>>>>>> That's an interesting point that would often catch students
>>>>>>>>>> out. And
>>>>>>>>>> the reason /why/ it catches so many out eventually led me to
>>>>>>>>>> stop using
>>>>>>>>>> the proof-by-contradiction argument in my classes.
>>>>>>>>>>
>>>>>>>>>> The thing is, it looks so very much like a self-contradicting
>>>>>>>>>> question
>>>>>>>>>> is being asked.  The students think they can see it right
>>>>>>>>>> there in the
>>>>>>>>>> constructed code: "if H says I halt, I don't halt!".
>>>>>>>>>>
>>>>>>>>>> Of course, they are wrong.  The code is /not/ there.  The code
>>>>>>>>>> calls a
>>>>>>>>>> function that does not exist, so "it" (the constructed code,
>>>>>>>>>> the whole
>>>>>>>>>> program) does not exist either.
>>>>>>>>>>
>>>>>>>>>> The fact that it's code, and the students are almost all
>>>>>>>>>> programmers and
>>>>>>>>>> not mathematicians, makes it worse.  A mathematician seeing
>>>>>>>>>> "let p be
>>>>>>>>>> the largest prime" does not assume that such a p exists.  So
>>>>>>>>>> when a
>>>>>>>>>> prime number p' > p is constructed from p, this is not seen as a
>>>>>>>>>> "self-contradictory number" because neither p nor p' exist.
>>>>>>>>>> But the
>>>>>>>>>> halting theorem is even more deceptive for programmers,
>>>>>>>>>> because the
>>>>>>>>>> desired function, H (or whatever), appears to be so well
>>>>>>>>>> defined -- much
>>>>>>>>>> more well-defined than "the largest prime".  We have an exact
>>>>>>>>>> specification for it, mapping arguments to returned values.
>>>>>>>>>> It's just
>>>>>>>>>> software engineering to write such things (they erroneously
>>>>>>>>>> assume).
>>>>>>>>>>
>>>>>>>>>> These sorts of proof can always be re-worded so as to avoid
>>>>>>>>>> the initial
>>>>>>>>>> assumption.  For example, we can start "let p be any prime",
>>>>>>>>>> and from p
>>>>>>>>>> we construct a prime p' > p.  And for halting, we can start
>>>>>>>>>> "let H be
>>>>>>>>>> any subroutine of two arguments always returning true or
>>>>>>>>>> false". Now,
>>>>>>>>>> all the objects /do/ exist.  In the first case, the
>>>>>>>>>> construction shows
>>>>>>>>>> that no prime is the largest, and in the second it shows that no
>>>>>>>>>> subroutine computes the halting function.
>>>>>>>>>>
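The re-worded construction above ("let H be any subroutine of two arguments always returning true or false") can be sketched concretely. This is an illustrative Python sketch, not code from the thread; the names make_D, H, and D are assumed for the illustration:

```python
def make_D(H):
    """From any candidate halt decider H(prog, arg) -> bool,
    construct the diagonal program D."""
    def D(x):
        if H(D, x):          # if H predicts that D(x) halts...
            while True:      # ...D loops forever,
                pass
        return None          # ...otherwise D halts immediately
    return D

# Demonstration with one (deliberately wrong) fixed decider.
def H(prog, arg):
    return False             # always answers "does not halt"

D = make_D(H)
# H(D, D) answers False ("D(D) does not halt"), yet D(D) halts,
# so this particular H is refuted on input D.
```

Whatever Boolean H(D, D) returns, D is built to do the opposite, which is why the construction shows that no subroutine computes the halting function: every candidate H exists, and every candidate is wrong about its own D.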
>>>>>>>>>> This issue led to another change.  In the last couple of
>>>>>>>>>> years, I would
>>>>>>>>>> start the course by setting Post's correspondence problem as
>>>>>>>>>> if it were
>>>>>>>>>> just a fun programming challenge.  As the days passed (and the
>>>>>>>>>> course
>>>>>>>>>> got into more and more serious material) it would start to
>>>>>>>>>> become clear
>>>>>>>>>> that this was no ordinary programming challenge.  Many
>>>>>>>>>> students started
>>>>>>>>>> to suspect that, despite the trivial sounding specification,
>>>>>>>>>> no program
>>>>>>>>>> could do the job.  I always felt a bit uneasy doing this, as
>>>>>>>>>> if I was
>>>>>>>>>> not being 100% honest, but it was a very useful learning
>>>>>>>>>> experience for
>>>>>>>>>> most.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
>>>>>>>>>     You ask someone (we'll call him "Jack") to give a truthful
>>>>>>>>>     yes/no answer to the following question:
>>>>>>>>>
>>>>>>>>>     Will Jack's answer to this question be no?
>>>>>>>>>
>>>>>>>>>     Jack can't possibly give a correct yes/no answer to the
>>>>>>>>> question.
>>>>>>>>>
>>>>>>>>> It is an easily verified fact that when Jack's question is
>>>>>>>>> posed to Jack,
>>>>>>>>> this question is self-contradictory for Jack or anyone
>>>>>>>>> else having
>>>>>>>>> a pathological relationship to the question.
>>>>>>>>
>>>>>>>> But the problem is "Jack" here is assumed to be a volitional being.
>>>>>>>>
>>>>>>>> H is not, it is a program, so before we even ask H what will
>>>>>>>> happen, the answer has been fixed by the definition of the code
>>>>>>>> of H.
>>>>>>>>
>>>>>>>>>
>>>>>>>>> It is also clear that when a question has no yes or no answer
>>>>>>>>> because
>>>>>>>>> it is self-contradictory, this question is aptly classified as
>>>>>>>>> incorrect.
>>>>>>>>
>>>>>>>> And the actual question DOES have a yes or no answer, in this
>>>>>>>> case, since H(D,D) says 0 (non-Halting) the actual answer to the
>>>>>>>> question does D(D) Halt is YES.
>>>>>>>>
>>>>>>>> You just confuse yourself by trying to imagine a program that
>>>>>>>> can somehow change itself "at will".
>>>>>>>>
>>>>>>>>>
>>>>>>>>> It is incorrect to say that a question is not
>>>>>>>>> self-contradictory on the
>>>>>>>>> basis that it is not self-contradictory in some contexts. If a
>>>>>>>>> question
>>>>>>>>> is self-contradictory in some contexts then in these contexts
>>>>>>>>> it is an
>>>>>>>>> incorrect question.
>>>>>>>>
>>>>>>>> In what context is "Does the Machine D(D) Halt When run" become
>>>>>>>> self-contradictory?
>>>>>>> When this question is posed to machine H.
>>>>>>>
>>>>>>> Jack could be asked the question:
>>>>>>> Will Jack answer "no" to this question?
>>>>>>>
>>>>>>> For Jack it is self-contradictory; for others that are not
>>>>>>> Jack it is not self-contradictory. Context changes the semantics.
>>>>>>>
>>>>>>
>>>>>> But you are missing the difference. A Decider is a fixed piece of
>>>>>> code, so its answer has always been fixed to this question since
>>>>>> it has been designed. Thus what it will say isn't a variable that
>>>>>> can lead to the self-contradiction cycle, but a fixed result that
>>>>>> will either be correct or incorrect.
>>>>>>
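The point that a decider is a fixed piece of code, whose answer was determined when it was written, can be illustrated with a minimal sketch (hypothetical names, assuming the thread's convention that a return of 0 means "non-halting"):

```python
def H_fixed(prog, arg):
    # A decider's answer is baked into its code at design time;
    # this one always returns 0 ("does not halt").
    return 0

def D(x):
    if H_fixed(D, x):   # H_fixed(D, D) == 0, so this branch is never taken
        while True:
            pass
    return "halted"

# Because H_fixed answered 0, D(D) simply halts: the fixed answer
# is not "self-contradictory", it is just incorrect.
```

There is no moment at which the program can "choose" its answer in response to D; the result was fixed in advance, and is therefore simply either right or wrong.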
>>>>>
>>>>> Every input to a Turing machine decider such that both Boolean return
>>>>> values are incorrect is an incorrect input.
>>>>>
>>>>
>>>> Except it isn't. The problem is you are looking at two different
>>>> machines and two different inputs.
>>>>
>>> If no one can possibly correctly answer what correct return value
>>> any H<n> having a pathological relationship to its input D<n>
>>> could possibly provide, then that is proof that D<n> is an invalid
>>> input for H<n> in the same way that any self-contradictory question
>>> is an incorrect question.
>>>
>>
>> But you have the wrong Question. The Question is Does D(D) Halt, and
>> that HAS a correct answer, since your H(D,D) returns 0, the answer is
>> that D(D) does Halt, and thus H was wrong.
>>
> sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
>    You ask someone (we'll call him "Jack") to give a truthful
>    yes/no answer to the following question:
>
>    Will Jack's answer to this question be no?
>
> For Jack the question is self-contradictory; for others that
> are not Jack it is not self-contradictory.
>
> The context (of who is asked) changes the semantics.
>
> Every question that lacks a correct yes/no answer because
> the question is self-contradictory is an incorrect question.
>
> If you are not a mere Troll you will agree with this.
>
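The claimed asymmetry (same words, different answerer) can be made concrete once Jack is modeled as a fixed program. A minimal sketch, with jack_answer a hypothetical stand-in for Jack's fixed answering procedure:

```python
def jack_answer():
    # Jack, modeled as a fixed program, always answers "no".
    return "no"

# "Will Jack's answer to this question be no?"
# An outside observer can answer this correctly:
observer_answer = "yes" if jack_answer() == "no" else "no"

# Jack himself cannot: whichever fixed answer he gives,
# it disagrees with the correct one.
jack_is_correct = (jack_answer() == observer_answer)
```

For the outside observer the question has a determinate correct answer; only the answer given from inside the loop is guaranteed wrong, which is the distinction the two sides of this thread are arguing over.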


Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<u6nc3t$1mvav$1@dont-email.me>


https://www.novabbs.com/computers/article-flat.php?id=11354&group=comp.ai.philosophy#11354

Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polco...@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
Subject: Re: ChatGPT agrees that the halting problem input can be construed as
an incorrect question
Date: Sun, 18 Jun 2023 11:41:32 -0500
Organization: A noiseless patient Spider
Lines: 193
Message-ID: <u6nc3t$1mvav$1@dont-email.me>
References: <u6jhqq$1570m$1@dont-email.me> <FnhjM.5848$33q9.1032@fx35.iad>
<871qi9oky8.fsf@bsb.me.uk> <u6l9jr$1ccr7$1@dont-email.me>
<j6rjM.5494$HtC8.4636@fx36.iad> <u6lhbb$1da24$2@dont-email.me>
<i8tjM.5978$Zq81.1390@fx15.iad> <u6lq6v$1i475$1@dont-email.me>
<joujM.1824$VKY6.722@fx13.iad> <u6lsjq$1id16$1@dont-email.me>
<knCjM.62$_%y4.58@fx48.iad> <u6n4ho$1m6pt$1@dont-email.me>
<PjGjM.29243$8uge.16102@fx14.iad>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 18 Jun 2023 16:41:33 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="b050d2502bb641b27f2af347a7fa563d";
logging-data="1801567"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX19k1VqyERrwrrH2tjJWt77V"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
Thunderbird/102.12.0
Cancel-Lock: sha1:UNqh9I2mMjfVsoiVEqORK6j92jE=
Content-Language: en-US
In-Reply-To: <PjGjM.29243$8uge.16102@fx14.iad>
 by: olcott - Sun, 18 Jun 2023 16:41 UTC

On 6/18/2023 11:31 AM, Richard Damon wrote:
> On 6/18/23 10:32 AM, olcott wrote:
>> On 6/18/2023 7:02 AM, Richard Damon wrote:
>>> On 6/17/23 11:10 PM, olcott wrote:
>>>> On 6/17/2023 9:57 PM, Richard Damon wrote:
>>>>> On 6/17/23 10:29 PM, olcott wrote:
>>>>>> On 6/17/2023 8:31 PM, Richard Damon wrote:
>>>>>>> On 6/17/23 7:58 PM, olcott wrote:
>>>>>>>> On 6/17/2023 6:13 PM, Richard Damon wrote:
>>>>>>>>> On 6/17/23 5:46 PM, olcott wrote:
>>>>>>>>>> On 6/17/2023 4:09 PM, Ben Bacarisse wrote:
>>>>>>>>>>> Richard Damon <Richard@Damon-Family.org> writes:
>>>>>>>>>>>
>>>>>>>>>>>> Except that the Halting Problem isn't a "Self-Contradictory"
>>>>>>>>>>>> Question, so
>>>>>>>>>>>> the answer doesn't apply.
>>>>>>>>>>>
>>>>>>>>>>> That's an interesting point that would often catch students
>>>>>>>>>>> out. And
>>>>>>>>>>> the reason /why/ it catches so many out eventually led me to
>>>>>>>>>>> stop using
>>>>>>>>>>> the proof-by-contradiction argument in my classes.
>>>>>>>>>>>
>>>>>>>>>>> The thing is, it looks so very much like a self-contradicting
>>>>>>>>>>> question
>>>>>>>>>>> is being asked.  The students think they can see it right
>>>>>>>>>>> there in the
>>>>>>>>>>> constructed code: "if H says I halt, I don't halt!".
>>>>>>>>>>>
>>>>>>>>>>> Of course, they are wrong.  The code is /not/ there.  The
>>>>>>>>>>> code calls a
>>>>>>>>>>> function that does not exist, so "it" (the constructed code,
>>>>>>>>>>> the whole
>>>>>>>>>>> program) does not exist either.
>>>>>>>>>>>
>>>>>>>>>>> The fact that it's code, and the students are almost all
>>>>>>>>>>> programmers and
>>>>>>>>>>> not mathematicians, makes it worse.  A mathematician seeing
>>>>>>>>>>> "let p be
>>>>>>>>>>> the largest prime" does not assume that such a p exists.  So
>>>>>>>>>>> when a
>>>>>>>>>>> prime number p' > p is constructed from p, this is not seen as a
>>>>>>>>>>> "self-contradictory number" because neither p nor p' exist.
>>>>>>>>>>> But the
>>>>>>>>>>> halting theorem is even more deceptive for programmers,
>>>>>>>>>>> because the
>>>>>>>>>>> desired function, H (or whatever), appears to be so well
>>>>>>>>>>> defined -- much
>>>>>>>>>>> more well-defined than "the largest prime".  We have an exact
>>>>>>>>>>> specification for it, mapping arguments to returned values.
>>>>>>>>>>> It's just
>>>>>>>>>>> software engineering to write such things (they erroneously
>>>>>>>>>>> assume).
>>>>>>>>>>>
>>>>>>>>>>> These sorts of proof can always be re-worded so as to avoid
>>>>>>>>>>> the initial
>>>>>>>>>>> assumption.  For example, we can start "let p be any prime",
>>>>>>>>>>> and from p
>>>>>>>>>>> we construct a prime p' > p.  And for halting, we can start
>>>>>>>>>>> "let H be
>>>>>>>>>>> any subroutine of two arguments always returning true or
>>>>>>>>>>> false". Now,
>>>>>>>>>>> all the objects /do/ exist.  In the first case, the
>>>>>>>>>>> construction shows
>>>>>>>>>>> that no prime is the largest, and in the second it shows that no
>>>>>>>>>>> subroutine computes the halting function.
>>>>>>>>>>>
>>>>>>>>>>> This issue led to another change.  In the last couple of
>>>>>>>>>>> years, I would
>>>>>>>>>>> start the course by setting Post's correspondence problem as
>>>>>>>>>>> if it were
>>>>>>>>>>> just a fun programming challenge.  As the days passed (and
>>>>>>>>>>> the course
>>>>>>>>>>> got into more and more serious material) it would start to
>>>>>>>>>>> become clear
>>>>>>>>>>> that this was no ordinary programming challenge.  Many
>>>>>>>>>>> students started
>>>>>>>>>>> to suspect that, despite the trivial sounding specification,
>>>>>>>>>>> no program
>>>>>>>>>>> could do the job.  I always felt a bit uneasy doing this, as
>>>>>>>>>>> if I was
>>>>>>>>>>> not being 100% honest, but it was a very useful learning
>>>>>>>>>>> experience for
>>>>>>>>>>> most.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
>>>>>>>>>>     You ask someone (we'll call him "Jack") to give a truthful
>>>>>>>>>>     yes/no answer to the following question:
>>>>>>>>>>
>>>>>>>>>>     Will Jack's answer to this question be no?
>>>>>>>>>>
>>>>>>>>>>     Jack can't possibly give a correct yes/no answer to the
>>>>>>>>>> question.
>>>>>>>>>>
>>>>>>>>>> It is an easily verified fact that when Jack's question is
>>>>>>>>>> posed to Jack,
>>>>>>>>>> this question is self-contradictory for Jack or anyone
>>>>>>>>>> else having
>>>>>>>>>> a pathological relationship to the question.
>>>>>>>>>
>>>>>>>>> But the problem is "Jack" here is assumed to be a volitional
>>>>>>>>> being.
>>>>>>>>>
>>>>>>>>> H is not, it is a program, so before we even ask H what will
>>>>>>>>> happen, the answer has been fixed by the definition of the code
>>>>>>>>> of H.
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> It is also clear that when a question has no yes or no answer
>>>>>>>>>> because
>>>>>>>>>> it is self-contradictory, this question is aptly
>>>>>>>>>> classified as
>>>>>>>>>> incorrect.
>>>>>>>>>
>>>>>>>>> And the actual question DOES have a yes or no answer, in this
>>>>>>>>> case, since H(D,D) says 0 (non-Halting) the actual answer to
>>>>>>>>> the question does D(D) Halt is YES.
>>>>>>>>>
>>>>>>>>> You just confuse yourself by trying to imagine a program that
>>>>>>>>> can somehow change itself "at will".
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> It is incorrect to say that a question is not
>>>>>>>>>> self-contradictory on the
>>>>>>>>>> basis that it is not self-contradictory in some contexts. If a
>>>>>>>>>> question
>>>>>>>>>> is self-contradictory in some contexts then in these contexts
>>>>>>>>>> it is an
>>>>>>>>>> incorrect question.
>>>>>>>>>
>>>>>>>>> In what context is "Does the Machine D(D) Halt When run" become
>>>>>>>>> self-contradictory?
>>>>>>>> When this question is posed to machine H.
>>>>>>>>
>>>>>>>> Jack could be asked the question:
>>>>>>>> Will Jack answer "no" to this question?
>>>>>>>>
>>>>>>>> For Jack it is self-contradictory; for others that are not
>>>>>>>> Jack it is not self-contradictory. Context changes the semantics.
>>>>>>>>
>>>>>>>
>>>>>>> But you are missing the difference. A Decider is a fixed piece of
>>>>>>> code, so its answer has always been fixed to this question since
>>>>>>> it has been designed. Thus what it will say isn't a variable that
>>>>>>> can lead to the self-contradiction cycle, but a fixed result that
>>>>>>> will either be correct or incorrect.
>>>>>>>
>>>>>>
>>>>>> Every input to a Turing machine decider such that both Boolean return
>>>>>> values are incorrect is an incorrect input.
>>>>>>
>>>>>
>>>>> Except it isn't. The problem is you are looking at two different
>>>>> machines and two different inputs.
>>>>>
>>>> If no one can possibly correctly answer what correct return
>>>> value any H<n> having a pathological relationship to its input
>>>> D<n> could possibly provide, then that is proof that D<n> is an
>>>> invalid input for H<n> in the same way that any self-contradictory
>>>> question is an incorrect question.
>>>>
>>>
>>> But you have the wrong Question. The Question is Does D(D) Halt, and
>>> that HAS a correct answer, since your H(D,D) returns 0, the answer is
>>> that D(D) does Halt, and thus H was wrong.
>>>
>> sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
>>     You ask someone (we'll call him "Jack") to give a truthful
>>     yes/no answer to the following question:
>>
>>     Will Jack's answer to this question be no?
>>
>> For Jack the question is self-contradictory; for others that
>> are not Jack it is not self-contradictory.
>>
>> The context (of who is asked) changes the semantics.
>>
>> Every question that lacks a correct yes/no answer because
>> the question is self-contradictory is an incorrect question.
>>
>> If you are not a mere Troll you will agree with this.
>>
>
> But the ACTUAL QUESTION DOES have a correct answer.
The actual question posed to Jack has no correct answer.
The actual question posed to anyone else is a semantically
different question even though the words are the same.


Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<5FGjM.3718$a0G8.2055@fx34.iad>


https://www.novabbs.com/computers/article-flat.php?id=11355&group=comp.ai.philosophy#11355

Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer02.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx34.iad.POSTED!not-for-mail
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
Gecko/20100101 Thunderbird/102.12.0
Subject: Re: ChatGPT agrees that the halting problem input can be construed as
an incorrect question
Content-Language: en-US
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
References: <u6jhqq$1570m$1@dont-email.me> <FnhjM.5848$33q9.1032@fx35.iad>
<871qi9oky8.fsf@bsb.me.uk> <u6l9jr$1ccr7$1@dont-email.me>
<j6rjM.5494$HtC8.4636@fx36.iad> <u6lhbb$1da24$2@dont-email.me>
<i8tjM.5978$Zq81.1390@fx15.iad> <u6lq6v$1i475$1@dont-email.me>
<joujM.1824$VKY6.722@fx13.iad> <u6lsjq$1id16$1@dont-email.me>
<knCjM.62$_%y4.58@fx48.iad> <u6n4ho$1m6pt$1@dont-email.me>
<PjGjM.29243$8uge.16102@fx14.iad> <u6nc3t$1mvav$1@dont-email.me>
From: Rich...@Damon-Family.org (Richard Damon)
In-Reply-To: <u6nc3t$1mvav$1@dont-email.me>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Lines: 217
Message-ID: <5FGjM.3718$a0G8.2055@fx34.iad>
X-Complaints-To: abuse@easynews.com
Organization: Forte - www.forteinc.com
X-Complaints-Info: Please be sure to forward a copy of ALL headers otherwise we will be unable to process your complaint properly.
Date: Sun, 18 Jun 2023 12:54:25 -0400
X-Received-Bytes: 10993
 by: Richard Damon - Sun, 18 Jun 2023 16:54 UTC

On 6/18/23 12:41 PM, olcott wrote:
> On 6/18/2023 11:31 AM, Richard Damon wrote:
>> On 6/18/23 10:32 AM, olcott wrote:
>>> On 6/18/2023 7:02 AM, Richard Damon wrote:
>>>> On 6/17/23 11:10 PM, olcott wrote:
>>>>> On 6/17/2023 9:57 PM, Richard Damon wrote:
>>>>>> On 6/17/23 10:29 PM, olcott wrote:
>>>>>>> On 6/17/2023 8:31 PM, Richard Damon wrote:
>>>>>>>> On 6/17/23 7:58 PM, olcott wrote:
>>>>>>>>> On 6/17/2023 6:13 PM, Richard Damon wrote:
>>>>>>>>>> On 6/17/23 5:46 PM, olcott wrote:
>>>>>>>>>>> On 6/17/2023 4:09 PM, Ben Bacarisse wrote:
>>>>>>>>>>>> Richard Damon <Richard@Damon-Family.org> writes:
>>>>>>>>>>>>
>>>>>>>>>>>>> Except that the Halting Problem isn't a
>>>>>>>>>>>>> "Self-Contradictory" Question, so
>>>>>>>>>>>>> the answer doesn't apply.
>>>>>>>>>>>>
>>>>>>>>>>>> That's an interesting point that would often catch students
>>>>>>>>>>>> out. And
>>>>>>>>>>>> the reason /why/ it catches so many out eventually led me to
>>>>>>>>>>>> stop using
>>>>>>>>>>>> the proof-by-contradiction argument in my classes.
>>>>>>>>>>>>
>>>>>>>>>>>> The thing is, it looks so very much like a
>>>>>>>>>>>> self-contradicting question
>>>>>>>>>>>> is being asked.  The students think they can see it right
>>>>>>>>>>>> there in the
>>>>>>>>>>>> constructed code: "if H says I halt, I don't halt!".
>>>>>>>>>>>>
>>>>>>>>>>>> Of course, they are wrong.  The code is /not/ there.  The
>>>>>>>>>>>> code calls a
>>>>>>>>>>>> function that does not exist, so "it" (the constructed code,
>>>>>>>>>>>> the whole
>>>>>>>>>>>> program) does not exist either.
>>>>>>>>>>>>
>>>>>>>>>>>> The fact that it's code, and the students are almost all
>>>>>>>>>>>> programmers and
>>>>>>>>>>>> not mathematicians, makes it worse.  A mathematician seeing
>>>>>>>>>>>> "let p be
>>>>>>>>>>>> the largest prime" does not assume that such a p exists.  So
>>>>>>>>>>>> when a
>>>>>>>>>>>> prime number p' > p is constructed from p, this is not seen
>>>>>>>>>>>> as a
>>>>>>>>>>>> "self-contradictory number" because neither p nor p' exist.
>>>>>>>>>>>> But the
>>>>>>>>>>>> halting theorem is even more deceptive for programmers,
>>>>>>>>>>>> because the
>>>>>>>>>>>> desired function, H (or whatever), appears to be so well
>>>>>>>>>>>> defined -- much
>>>>>>>>>>>> more well-defined than "the largest prime".  We have an exact
>>>>>>>>>>>> specification for it, mapping arguments to returned values.
>>>>>>>>>>>> It's just
>>>>>>>>>>>> software engineering to write such things (they erroneously
>>>>>>>>>>>> assume).
>>>>>>>>>>>>
>>>>>>>>>>>> These sorts of proof can always be re-worded so as to avoid
>>>>>>>>>>>> the initial
>>>>>>>>>>>> assumption.  For example, we can start "let p be any prime",
>>>>>>>>>>>> and from p
>>>>>>>>>>>> we construct a prime p' > p.  And for halting, we can start
>>>>>>>>>>>> "let H be
>>>>>>>>>>>> any subroutine of two arguments always returning true or
>>>>>>>>>>>> false". Now,
>>>>>>>>>>>> all the objects /do/ exist.  In the first case, the
>>>>>>>>>>>> construction shows
>>>>>>>>>>>> that no prime is the largest, and in the second it shows
>>>>>>>>>>>> that no
>>>>>>>>>>>> subroutine computes the halting function.
>>>>>>>>>>>>
>>>>>>>>>>>> This issue led to another change.  In the last couple of
>>>>>>>>>>>> years, I would
>>>>>>>>>>>> start the course by setting Post's correspondence problem as
>>>>>>>>>>>> if it were
>>>>>>>>>>>> just a fun programming challenge.  As the days passed (and
>>>>>>>>>>>> the course
>>>>>>>>>>>> got into more and more serious material) it would start to
>>>>>>>>>>>> become clear
>>>>>>>>>>>> that this was no ordinary programming challenge.  Many
>>>>>>>>>>>> students started
>>>>>>>>>>>> to suspect that, despite the trivial sounding specification,
>>>>>>>>>>>> no program
>>>>>>>>>>>> could do the job.  I always felt a bit uneasy doing this, as
>>>>>>>>>>>> if I was
>>>>>>>>>>>> not being 100% honest, but it was a very useful learning
>>>>>>>>>>>> experience for
>>>>>>>>>>>> most.
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
>>>>>>>>>>>     You ask someone (we'll call him "Jack") to give a truthful
>>>>>>>>>>>     yes/no answer to the following question:
>>>>>>>>>>>
>>>>>>>>>>>     Will Jack's answer to this question be no?
>>>>>>>>>>>
>>>>>>>>>>>     Jack can't possibly give a correct yes/no answer to the
>>>>>>>>>>> question.
>>>>>>>>>>>
>>>>>>>>>>> It is an easily verified fact that when Jack's question is
>>>>>>>>>>> posed to Jack,
>>>>>>>>>>> this question is self-contradictory for Jack or anyone
>>>>>>>>>>> else having
>>>>>>>>>>> a pathological relationship to the question.
>>>>>>>>>>
>>>>>>>>>> But the problem is "Jack" here is assumed to be a volitional
>>>>>>>>>> being.
>>>>>>>>>>
>>>>>>>>>> H is not, it is a program, so before we even ask H what will
>>>>>>>>>> happen, the answer has been fixed by the definition of the
>>>>>>>>>> code of H.
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> It is also clear that when a question has no yes or no answer
>>>>>>>>>>> because
>>>>>>>>>>> it is self-contradictory, this question is aptly
>>>>>>>>>>> classified as
>>>>>>>>>>> incorrect.
>>>>>>>>>>
>>>>>>>>>> And the actual question DOES have a yes or no answer, in this
>>>>>>>>>> case, since H(D,D) says 0 (non-Halting) the actual answer to
>>>>>>>>>> the question does D(D) Halt is YES.
>>>>>>>>>>
>>>>>>>>>> You just confuse yourself by trying to imagine a program that
>>>>>>>>>> can somehow change itself "at will".
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> It is incorrect to say that a question is not
>>>>>>>>>>> self-contradictory on the
>>>>>>>>>>> basis that it is not self-contradictory in some contexts. If
>>>>>>>>>>> a question
>>>>>>>>>>> is self-contradictory in some contexts then in these contexts
>>>>>>>>>>> it is an
>>>>>>>>>>> incorrect question.
>>>>>>>>>>
>>>>>>>>>> In what context is "Does the Machine D(D) Halt When run"
>>>>>>>>>> become self-contradictory?
>>>>>>>>> When this question is posed to machine H.
>>>>>>>>>
>>>>>>>>> Jack could be asked the question:
>>>>>>>>> Will Jack answer "no" to this question?
>>>>>>>>>
>>>>>>>>> For Jack it is self-contradictory; for others that are not
>>>>>>>>> Jack it is not self-contradictory. Context changes the semantics.
>>>>>>>>>
>>>>>>>>
>>>>>>>> But you are missing the difference. A Decider is a fixed piece
>>>>>>>> of code, so its answer has always been fixed to this question
>>>>>>>> since it has been designed. Thus what it will say isn't a
>>>>>>>> variable that can lead to the self-contradiction cycle, but a
>>>>>>>> fixed result that will either be correct or incorrect.
>>>>>>>>
>>>>>>>
>>>>>>> Every input to a Turing machine decider such that both Boolean
>>>>>>> return
>>>>>>> values are incorrect is an incorrect input.
>>>>>>>
>>>>>>
>>>>>> Except it isn't. The problem is you are looking at two different
>>>>>> machines and two different inputs.
>>>>>>
>>>>> If no one can possibly correctly answer what correct return
>>>>> value any H<n> having a pathological relationship to its input
>>>>> D<n> could possibly provide, then that is proof that D<n> is an
>>>>> invalid input for H<n> in the same way that any self-contradictory
>>>>> question is an incorrect question.
>>>>>
>>>>
>>>> But you have the wrong Question. The Question is Does D(D) Halt, and
>>>> that HAS a correct answer, since your H(D,D) returns 0, the answer
>>>> is that D(D) does Halt, and thus H was wrong.
>>>>
>>> sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
>>>     You ask someone (we'll call him "Jack") to give a truthful
>>>     yes/no answer to the following question:
>>>
>>>     Will Jack's answer to this question be no?
>>>
>>> For Jack the question is self-contradictory; for others that
>>> are not Jack it is not self-contradictory.
>>>
>>> The context (of who is asked) changes the semantics.
>>>
>>> Every question that lacks a correct yes/no answer because
>>> the question is self-contradictory is an incorrect question.
>>>
>>> If you are not a mere Troll you will agree with this.
>>>
>>
>> But the ACTUAL QUESTION DOES have a correct answer.
> The actual question posed to Jack has no correct answer.
> The actual question posed to anyone else is a semantically
> different question even though the words are the same.
>

