

computers / comp.ai.philosophy / Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question)

Subject  (Author)

* ChatGPT agrees that the halting problem input can be construed as an  (olcott)
+* Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|+* Re: ChatGPT agrees that the halting problem input can be construed as  (olcott)
||`* Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|| `* Re: ChatGPT agrees that the halting problem input can be construed as  (olcott)
||  `- Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|`* Re: ChatGPT agrees that the halting problem input can be construed as an incorre  (Ben Bacarisse)
| `* Re: ChatGPT agrees that the halting problem input can be construed as  (olcott)
|  +* Re: ChatGPT agrees that the halting problem input can be construed as  (Jeff Barnett)
|  |`* Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|  | `* Re: ChatGPT agrees that the halting problem input can be construed as  (olcott)
|  |  `* Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|  |   `* Re: ChatGPT agrees that the halting problem input can be construed as  (olcott)
|  |    `- Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|  `* Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|   `* Re: ChatGPT agrees that the halting problem input can be construed as  (olcott)
|    `* Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|     `* Re: ChatGPT agrees that the halting problem input can be construed as  (olcott)
|      `* Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|       `* Re: ChatGPT agrees that the halting problem input can be construed as  (olcott)
|        `* Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|         `* Re: ChatGPT agrees that the halting problem input can be construed as  (olcott)
|          `* Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|           `* Re: ChatGPT agrees that the halting problem input can be construed as  (olcott)
|            +* Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|            |`* Re: ChatGPT agrees that the halting problem input can be construed as  (olcott)
|            | `* Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|            |  `* Re: ChatGPT agrees that the halting problem input can be construed as  (olcott)
|            |   `* Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|            |    +* Re: ChatGPT agrees that the halting problem input can be construed as  (olcott)
|            |    |`- Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|            |    `* Re: ChatGPT agrees that the halting problem input can be construed as  (olcott)
|            |     `* Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|            |      `* Re: ChatGPT agrees that the halting problem input can be construed as  (olcott)
|            |       `* Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|            |        `* Re: ChatGPT agrees that the halting problem input can be construed as  (olcott)
|            |         `- Re: ChatGPT agrees that the halting problem input can be construed as  (Richard Damon)
|            `* Does input D have semantic property S or is input D [BAD INPUT]?  (olcott)
|             `* Re: Does input D have semantic property S or is input D [BAD INPUT]?  (Richard Damon)
|              `* Re: Does input D have semantic property S or is input D [BAD INPUT]?  (olcott)
|               `* Re: Does input D have semantic property S or is input D [BAD INPUT]?  (Richard Damon)
|                +* Re: Does input D have semantic property S or is input D [BAD INPUT]?  (olcott)
|                |`* Re: Does input D have semantic property S or is input D [BAD INPUT]?  (Richard Damon)
|                | `* Re: Does input D have semantic property S or is input D [BAD INPUT]?  (olcott)
|                |  `- Re: Does input D have semantic property S or is input D [BAD INPUT]?  (Richard Damon)
|                `* Re: Does input D have semantic property S or is input D [BAD INPUT]?  (Don Stockbauer)
|                 `- ChatGPT discussion (was: Re: Does input D have semantic property S or  (vallor)
+* Ben Bacarisse specifically targets my posts to discourage honest  (olcott)
|`* Re: Ben Bacarisse specifically targets my posts to discourage honest  (Richard Damon)
| `* Re: dishonest subject lines  (Ben Bacarisse)
|  `- Ben Bacarisse specifically targets my posts to discourage honest  (olcott)
+* Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to  (olcott)
|`* Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to  (Richard Damon)
| `* Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to  (olcott)
|  `* Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to  (Richard Damon)
|   `* Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to  (olcott)
|    `* Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to  (Richard Damon)
|     `* Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to  (olcott)
|      `- Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to  (Richard Damon)
+- Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts]  (olcott)
+- Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts]  (olcott)
`* ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting  (vallor)
 +- Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the  (vallor)
 `* Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the  (olcott)
  `* Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the  (Richard Damon)
   `* Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the  (olcott)
    `* Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the  (Richard Damon)
     `* Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the  (olcott)
      `* Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the  (Richard Damon)
       `* Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the  (olcott)
        `- Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the  (Richard Damon)

Ben Bacarisse specifically targets my posts to discourage honest dialogue

<u6sne8$2gcmh$1@dont-email.me>

https://www.novabbs.com/computers/article-flat.php?id=11398&group=comp.ai.philosophy#11398
Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polco...@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
Subject: Ben Bacarisse specifically targets my posts to discourage honest
dialogue
Date: Tue, 20 Jun 2023 12:25:28 -0500
Organization: A noiseless patient Spider
Lines: 42
Message-ID: <u6sne8$2gcmh$1@dont-email.me>
References: <u6jhqq$1570m$1@dont-email.me>
<580d711a-afd0-4eae-b2bd-84b0126905d3n@googlegroups.com>
<u6ptvb$23qkq$1@dont-email.me>
<6a7076e8-79b9-49cb-8da9-dc538329cf89n@googlegroups.com>
<87wmzzjjv6.fsf@bsb.me.uk> <u6sfa8$2fgi8$2@dont-email.me>
<eTjkM.3650$WpOe.1930@fx18.iad> <87r0q69l5l.fsf_-_@bsb.me.uk>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Tue, 20 Jun 2023 17:25:29 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="09a376d1ca8466ab1cfa2e779f87b5bd";
logging-data="2634449"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1+jAg3wzxDZSdGMhNS9s7v/"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
Thunderbird/102.12.0
Cancel-Lock: sha1:XpKEQc+o0qNZH3OZRtpHRW/xrFA=
Content-Language: en-US
In-Reply-To: <87r0q69l5l.fsf_-_@bsb.me.uk>
 by: olcott - Tue, 20 Jun 2023 17:25 UTC

Quit doing that Ben !!!
Quit doing that Ben !!!
Quit doing that Ben !!!
Quit doing that Ben !!!
Quit doing that Ben !!!
Quit doing that Ben !!!
Quit doing that Ben !!!

On 6/20/2023 11:02 AM, Ben Bacarisse wrote:
> Richard Damon <Richard@Damon-Family.org> writes:
>
>>> On 6/19/2023 3:08 PM, Ben Bacarisse wrote:
>
>>>> Me: "do you still assert that [...] false is the "correct" answer even
>>>>      though P(P) halts?"
>>>>
>>>> PO: Yes that is the correct answer even though P(P) halts.
> <cut>
>>>> This was not a slip of the tongue.  He has been quite clear that he is
>>>> talking about something other than what the world calls halting.  It's
>>>> about what /would/ happen if the program were slight different, not
>>>> about what actually happens:
>>>>
>>>> PO: "A non-halting computation is every computation that never halts
>>>>      unless its simulation is aborted.  This maps to every element of the
>>>>      conventional halting problem set of non-halting computations and a
>>>>      few more."
>
>> Ben is just pointing out the ERRORS in your logic
>
> I don't think I pointed to any errors of logic. I just quoted PO so
> that readers can see what he's talking about.
>
> Why do you keep making posts with personally derogatory subject lines?
> You are just amplifying his nasty voice.
>

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to discourage honest dialogue]

<u6t0c2$2h8h7$1@dont-email.me>

https://www.novabbs.com/computers/article-flat.php?id=11402&group=comp.ai.philosophy#11402
Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polco...@gmail.com (olcott)
Newsgroups: sci.logic,comp.theory,comp.ai.philosophy
Subject: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to
discourage honest dialogue]
Date: Tue, 20 Jun 2023 14:57:53 -0500
Organization: A noiseless patient Spider
Lines: 57
Message-ID: <u6t0c2$2h8h7$1@dont-email.me>
References: <u6jhqq$1570m$1@dont-email.me>
<580d711a-afd0-4eae-b2bd-84b0126905d3n@googlegroups.com>
<u6ptvb$23qkq$1@dont-email.me>
<6a7076e8-79b9-49cb-8da9-dc538329cf89n@googlegroups.com>
<87wmzzjjv6.fsf@bsb.me.uk>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Tue, 20 Jun 2023 19:57:54 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="09a376d1ca8466ab1cfa2e779f87b5bd";
logging-data="2662951"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX19C5Bx4+gNkDmO0TV0WDyei"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
Thunderbird/102.12.0
Cancel-Lock: sha1:YzMPYNHuhv1hcug/UvO02EFxFXM=
Content-Language: en-US
In-Reply-To: <87wmzzjjv6.fsf@bsb.me.uk>
 by: olcott - Tue, 20 Jun 2023 19:57 UTC

On 6/19/2023 3:08 PM, Ben Bacarisse wrote:
> Fritz Feldhase <franz.fritschee.ff@gmail.com> writes:
>
>> On Monday, June 19, 2023 at 5:58:39 PM UTC+2, olcott wrote:
>>
>>> the full semantics of the question <bla>
>>
>> Look, dumbo, we are asking the simple question: "Does D(D) halt?"
>>
>> Now, D(D) either halts or doesn't halt.
>>
>> Hence the CORRECT yes/no-answer to the question "Does D(D) halt?" is
>> "yes" iff D(D) halts and "no" if D(D) doesn't halt.
>
> Just a reminder that you are arguing with someone who has declared that
> the wrong answer is the right one:
>
> Me: "do you still assert that [...] false is the "correct" answer even
> though P(P) halts?"
>
> PO: Yes that is the correct answer even though P(P) halts.
>

Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to
discourage honest dialogue]

*Ben Bacarisse targets my posts to discourage honest dialogue*
*Ben Bacarisse targets my posts to discourage honest dialogue*
*Ben Bacarisse targets my posts to discourage honest dialogue*

When Ben pointed out that H(P,P) reports that P(P) does not halt even
though P(P) does halt, this seems to be a contradiction to people who
lack a complete understanding.

Because of this I changed the semantic meaning of a return value of 0
from H to mean either
(a) that P(P) does not halt, or
(b) that P(P) specifically targets H to do the opposite of whatever
Boolean value H returns.

When H(P,P) reports that P, as correctly simulated by H, cannot
possibly reach its own last instruction, this is an easily verified
fact; thus P(P) does not halt from the point of view of H.

When a return value of 0 from H for input P means either that P does
not halt or that P specifically targets H to do the opposite of
whatever Boolean value H returns, not even people with little
understanding can say that this is contradictory.
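The "targets H" wording refers to the standard diagonal construction. A minimal Python sketch of that construction (the names make_D and H_always_no are illustrative stand-ins, not anyone's actual code, and the toy "decider" is deliberately wrong) shows how D is built from a claimed halting decider H and why D(D) does the opposite of whatever H predicts:

```python
def make_D(H):
    """Build the diagonal program D from a claimed halting decider H.

    D(x) consults H about D(x) and then does the opposite:
    it loops forever when H predicts halting, and halts when
    H predicts non-halting.
    """
    def D(x):
        if H(D, x):          # H claims D(x) halts...
            while True:      # ...so D(x) loops forever instead
                pass
        return 0             # H claims D(x) loops, so D(x) halts
    return D

def H_always_no(prog, arg):
    # A trivially wrong stand-in "decider" that always predicts
    # non-halting; any total H is defeated by its own D the same way.
    return False

D = make_D(H_always_no)
# D(D) halts (returns 0), so H_always_no's prediction of non-halting
# was exactly the wrong answer -- the situation Ben and Fritz describe.
```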

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts]

<u6t0fd$2h8h7$2@dont-email.me>

https://www.novabbs.com/computers/article-flat.php?id=11403&group=comp.ai.philosophy#11403
Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polco...@gmail.com (olcott)
Newsgroups: sci.logic,comp.theory,comp.ai.philosophy
Subject: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts]
Date: Tue, 20 Jun 2023 14:59:40 -0500
Organization: A noiseless patient Spider
Lines: 53
Message-ID: <u6t0fd$2h8h7$2@dont-email.me>
References: <u6jhqq$1570m$1@dont-email.me>
<580d711a-afd0-4eae-b2bd-84b0126905d3n@googlegroups.com>
<u6ptvb$23qkq$1@dont-email.me>
<6a7076e8-79b9-49cb-8da9-dc538329cf89n@googlegroups.com>
<87wmzzjjv6.fsf@bsb.me.uk>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Tue, 20 Jun 2023 19:59:41 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="09a376d1ca8466ab1cfa2e779f87b5bd";
logging-data="2662951"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1+VikG8VKwcdbH48LxnqEsY"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
Thunderbird/102.12.0
Cancel-Lock: sha1:NC1kpOzji36ozSYRsj0k+XXMiUU=
Content-Language: en-US
In-Reply-To: <87wmzzjjv6.fsf@bsb.me.uk>
 by: olcott - Tue, 20 Jun 2023 19:59 UTC

On 6/19/2023 3:08 PM, Ben Bacarisse wrote:
> Fritz Feldhase <franz.fritschee.ff@gmail.com> writes:
>
>> On Monday, June 19, 2023 at 5:58:39 PM UTC+2, olcott wrote:
>>
>>> the full semantics of the question <bla>
>>
>> Look, dumbo, we are asking the simple question: "Does D(D) halt?"
>>
>> Now, D(D) either halts or doesn't halt.
>>
>> Hence the CORRECT yes/no-answer to the question "Does D(D) halt?" is
>> "yes" iff D(D) halts and "no" if D(D) doesn't halt.
>
> Just a reminder that you are arguing with someone who has declared that
> the wrong answer is the right one:
>
> Me: "do you still assert that [...] false is the "correct" answer even
> though P(P) halts?"
>
> PO: Yes that is the correct answer even though P(P) halts.
>

*Ben Bacarisse targets my posts to discourage honest dialogue*
*Ben Bacarisse targets my posts to discourage honest dialogue*
*Ben Bacarisse targets my posts to discourage honest dialogue*

When Ben pointed out that H(P,P) reports that P(P) does not halt even
though P(P) does halt, this seems to be a contradiction to people who
lack a complete understanding.

Because of this I changed the semantic meaning of a return value of 0
from H to mean either
(a) that P(P) does not halt, or
(b) that P(P) specifically targets H to do the opposite of whatever
Boolean value H returns.

When H(P,P) reports that P, as correctly simulated by H, cannot
possibly reach its own last instruction, this is an easily verified
fact; thus P(P) does not halt from the point of view of H.

When a return value of 0 from H for input P means either that P does
not halt or that P specifically targets H to do the opposite of
whatever Boolean value H returns, not even people with little
understanding can say that this is contradictory.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts]

<u6t0hk$2h8h7$3@dont-email.me>

https://www.novabbs.com/computers/article-flat.php?id=11404&group=comp.ai.philosophy#11404
Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polco...@gmail.com (olcott)
Newsgroups: sci.logic,comp.theory,comp.ai.philosophy
Subject: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts]
Date: Tue, 20 Jun 2023 15:00:52 -0500
Organization: A noiseless patient Spider
Lines: 52
Message-ID: <u6t0hk$2h8h7$3@dont-email.me>
References: <u6jhqq$1570m$1@dont-email.me>
<580d711a-afd0-4eae-b2bd-84b0126905d3n@googlegroups.com>
<u6ptvb$23qkq$1@dont-email.me>
<6a7076e8-79b9-49cb-8da9-dc538329cf89n@googlegroups.com>
<87wmzzjjv6.fsf@bsb.me.uk>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Tue, 20 Jun 2023 20:00:53 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="09a376d1ca8466ab1cfa2e779f87b5bd";
logging-data="2662951"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1+ZOmb1fQ9ANtmpiQ6jJ+q1"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
Thunderbird/102.12.0
Cancel-Lock: sha1:K/doZxYYO8D1XY5X56qXnhzgGQo=
In-Reply-To: <87wmzzjjv6.fsf@bsb.me.uk>
Content-Language: en-US
 by: olcott - Tue, 20 Jun 2023 20:00 UTC

On 6/19/2023 3:08 PM, Ben Bacarisse wrote:
> Fritz Feldhase <franz.fritschee.ff@gmail.com> writes:
>
>> On Monday, June 19, 2023 at 5:58:39 PM UTC+2, olcott wrote:
>>
>>> the full semantics of the question <bla>
>>
>> Look, dumbo, we are asking the simple question: "Does D(D) halt?"
>>
>> Now, D(D) either halts or doesn't halt.
>>
>> Hence the CORRECT yes/no-answer to the question "Does D(D) halt?" is
>> "yes" iff D(D) halts and "no" if D(D) doesn't halt.
>
> Just a reminder that you are arguing with someone who has declared that
> the wrong answer is the right one:
>
> Me: "do you still assert that [...] false is the "correct" answer even
> though P(P) halts?"
>
> PO: Yes that is the correct answer even though P(P) halts.
>

*Ben Bacarisse targets my posts to discourage honest dialogue*
*Ben Bacarisse targets my posts to discourage honest dialogue*
*Ben Bacarisse targets my posts to discourage honest dialogue*

When Ben pointed out that H(P,P) reports that P(P) does not halt even
though P(P) does halt, this seems to be a contradiction to people who
lack a complete understanding.

Because of this I changed the semantic meaning of a return value of 0
from H to mean either
(a) that P(P) does not halt, or
(b) that P(P) specifically targets H to do the opposite of whatever
Boolean value H returns.

When H(P,P) reports that P, as correctly simulated by H, cannot
possibly reach its own last instruction, this is an easily verified
fact; thus P(P) does not halt from the point of view of H.

When a return value of 0 from H for input P means either that P does
not halt or that P specifically targets H to do the opposite of
whatever Boolean value H returns, not even people with little
understanding can say that this is contradictory.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to discourage honest dialogue]

<z3okM.866$Ect9.422@fx44.iad>

https://www.novabbs.com/computers/article-flat.php?id=11406&group=comp.ai.philosophy#11406
Path: i2pn2.org!i2pn.org!weretis.net!feeder6.news.weretis.net!nntp.club.cc.cmu.edu!45.76.7.193.MISMATCH!3.us.feeder.erje.net!feeder.erje.net!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer03.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx44.iad.POSTED!not-for-mail
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
Gecko/20100101 Thunderbird/102.12.0
Subject: Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to
discourage honest dialogue]
Content-Language: en-US
Newsgroups: sci.logic,comp.theory,comp.ai.philosophy
References: <u6jhqq$1570m$1@dont-email.me>
<580d711a-afd0-4eae-b2bd-84b0126905d3n@googlegroups.com>
<u6ptvb$23qkq$1@dont-email.me>
<6a7076e8-79b9-49cb-8da9-dc538329cf89n@googlegroups.com>
<87wmzzjjv6.fsf@bsb.me.uk> <u6t0c2$2h8h7$1@dont-email.me>
From: Rich...@Damon-Family.org (Richard Damon)
In-Reply-To: <u6t0c2$2h8h7$1@dont-email.me>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Lines: 67
Message-ID: <z3okM.866$Ect9.422@fx44.iad>
X-Complaints-To: abuse@easynews.com
Organization: Forte - www.forteinc.com
X-Complaints-Info: Please be sure to forward a copy of ALL headers otherwise we will be unable to process your complaint properly.
Date: Tue, 20 Jun 2023 16:34:39 -0400
X-Received-Bytes: 3391
 by: Richard Damon - Tue, 20 Jun 2023 20:34 UTC

On 6/20/23 3:57 PM, olcott wrote:
> On 6/19/2023 3:08 PM, Ben Bacarisse wrote:
>> Fritz Feldhase <franz.fritschee.ff@gmail.com> writes:
>>
>>> On Monday, June 19, 2023 at 5:58:39 PM UTC+2, olcott wrote:
>>>
>>>> the full semantics of the question <bla>
>>>
>>> Look, dumbo, we are asking the simple question: "Does D(D) halt?"
>>>
>>> Now, D(D) either halts or doesn't halt.
>>>
>>> Hence the CORRECT yes/no-answer to the question "Does D(D) halt?" is
>>> "yes" iff D(D) halts and "no" if D(D) doesn't halt.
>>
>> Just a reminder that you are arguing with someone who has declared that
>> the wrong answer is the right one:
>>
>> Me: "do you still assert that [...] false is the "correct" answer even
>>      though P(P) halts?"
>>
>> PO: Yes that is the correct answer even though P(P) halts.
>>
>
> Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to
> discourage honest dialogue]
>
> *Ben Bacarisse targets my posts to discourage honest dialogue*
> *Ben Bacarisse targets my posts to discourage honest dialogue*
> *Ben Bacarisse targets my posts to discourage honest dialogue*

No, YOU DO by claiming your words don't actually mean what they say.

>
> When Ben pointed out that H(P,P) reports that P(P) does not halt when
> P(P) does halt this seems to be a contradiction to people that lack a
> complete understanding.

But since P(P) (now D(D) ) does halt, how do you explain that H saying
it doesn't is correct?

>
> Because of this I changed the semantic meaning of a return value of 0
> from H to mean either

So you are admitting to LYING about the problem you are doing.

OLCOTT --- ADMITTED LIAR

> (a) that P(P) does not halt <or>
> (b) P(P) specifically targets H to do the opposite of whatever Boolean
> value that H returns.
>
> When H(P,P) reports that P correctly simulated by H cannot possibly
> reach its own last instruction this is an easily verified fact, thus
> P(P) does not halt from the point of view of H.
>
> When H returns 0 for input P means either that P does not halt or
> P specifically targets H to do the opposite of whatever Boolean
> value that H returns not even people with little understanding can
> say that this is contradictory.
>
>
>
>
>

Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to discourage honest dialogue]

<u6t30e$2hh2e$2@dont-email.me>

https://www.novabbs.com/computers/article-flat.php?id=11408&group=comp.ai.philosophy#11408
Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polco...@gmail.com (olcott)
Newsgroups: sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to
discourage honest dialogue]
Date: Tue, 20 Jun 2023 15:42:53 -0500
Organization: A noiseless patient Spider
Lines: 64
Message-ID: <u6t30e$2hh2e$2@dont-email.me>
References: <u6jhqq$1570m$1@dont-email.me>
<580d711a-afd0-4eae-b2bd-84b0126905d3n@googlegroups.com>
<u6ptvb$23qkq$1@dont-email.me>
<6a7076e8-79b9-49cb-8da9-dc538329cf89n@googlegroups.com>
<87wmzzjjv6.fsf@bsb.me.uk> <u6t0c2$2h8h7$1@dont-email.me>
<z3okM.866$Ect9.422@fx44.iad>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Tue, 20 Jun 2023 20:42:54 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="09a376d1ca8466ab1cfa2e779f87b5bd";
logging-data="2671694"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1++3KgcLER2twlljE9fZ0gy"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
Thunderbird/102.12.0
Cancel-Lock: sha1:EGcJyIBITI+C2TYCJTMYkKtG0Tg=
In-Reply-To: <z3okM.866$Ect9.422@fx44.iad>
Content-Language: en-US
 by: olcott - Tue, 20 Jun 2023 20:42 UTC

On 6/20/2023 3:34 PM, Richard Damon wrote:
> On 6/20/23 3:57 PM, olcott wrote:
>> On 6/19/2023 3:08 PM, Ben Bacarisse wrote:
>>> Fritz Feldhase <franz.fritschee.ff@gmail.com> writes:
>>>
>>>> On Monday, June 19, 2023 at 5:58:39 PM UTC+2, olcott wrote:
>>>>
>>>>> the full semantics of the question <bla>
>>>>
>>>> Look, dumbo, we are asking the simple question: "Does D(D) halt?"
>>>>
>>>> Now, D(D) either halts or doesn't halt.
>>>>
>>>> Hence the CORRECT yes/no-answer to the question "Does D(D) halt?" is
>>>> "yes" iff D(D) halts and "no" if D(D) doesn't halt.
>>>
>>> Just a reminder that you are arguing with someone who has declared that
>>> the wrong answer is the right one:
>>>
>>> Me: "do you still assert that [...] false is the "correct" answer even
>>>      though P(P) halts?"
>>>
>>> PO: Yes that is the correct answer even though P(P) halts.
>>>
>>
>> Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to
>> discourage honest dialogue]
>>
>> *Ben Bacarisse targets my posts to discourage honest dialogue*
>> *Ben Bacarisse targets my posts to discourage honest dialogue*
>> *Ben Bacarisse targets my posts to discourage honest dialogue*
>
> No, YOU DO by claiming your words don't actually mean what they say.
>
>>
>> When Ben pointed out that H(P,P) reports that P(P) does not halt when
>> P(P) does halt this seems to be a contradiction to people that lack a
>> complete understanding.
>
> But since P(P) (now D(D) ) does halt, how do you explain that H saying
> it doesn't is correct?
>
>>
>> Because of this I changed the semantic meaning of a return value of 0
>> from H to mean either
>
> So you are admitting to LYING about the problem you are doing.
>
> OLCOTT --- ADMITTED LIAR
>

When H(P,P) reports that P, as correctly simulated by H, cannot
possibly reach its own last instruction, this is an easily verified
fact; thus P(P) does not halt from the point of view of H.

This is the same thing as the Facebook post where two people are looking
at the same symbol that is a "9" or a "6" depending on your point of
view.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to discourage honest dialogue]

<okokM.3740$VKY6.505@fx13.iad>

https://www.novabbs.com/computers/article-flat.php?id=11410&group=comp.ai.philosophy#11410
Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer03.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx13.iad.POSTED!not-for-mail
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
Gecko/20100101 Thunderbird/102.12.0
Subject: Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to
discourage honest dialogue]
Content-Language: en-US
Newsgroups: sci.logic,comp.theory,comp.ai.philosophy
References: <u6jhqq$1570m$1@dont-email.me>
<580d711a-afd0-4eae-b2bd-84b0126905d3n@googlegroups.com>
<u6ptvb$23qkq$1@dont-email.me>
<6a7076e8-79b9-49cb-8da9-dc538329cf89n@googlegroups.com>
<87wmzzjjv6.fsf@bsb.me.uk> <u6t0c2$2h8h7$1@dont-email.me>
<z3okM.866$Ect9.422@fx44.iad> <u6t30e$2hh2e$2@dont-email.me>
From: Rich...@Damon-Family.org (Richard Damon)
In-Reply-To: <u6t30e$2hh2e$2@dont-email.me>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Lines: 73
Message-ID: <okokM.3740$VKY6.505@fx13.iad>
X-Complaints-To: abuse@easynews.com
Organization: Forte - www.forteinc.com
X-Complaints-Info: Please be sure to forward a copy of ALL headers otherwise we will be unable to process your complaint properly.
Date: Tue, 20 Jun 2023 16:52:36 -0400
X-Received-Bytes: 3817
 by: Richard Damon - Tue, 20 Jun 2023 20:52 UTC

On 6/20/23 4:42 PM, olcott wrote:
> On 6/20/2023 3:34 PM, Richard Damon wrote:
>> On 6/20/23 3:57 PM, olcott wrote:
>>> On 6/19/2023 3:08 PM, Ben Bacarisse wrote:
>>>> Fritz Feldhase <franz.fritschee.ff@gmail.com> writes:
>>>>
>>>>> On Monday, June 19, 2023 at 5:58:39 PM UTC+2, olcott wrote:
>>>>>
>>>>>> the full semantics of the question <bla>
>>>>>
>>>>> Look, dumbo, we are asking the simple question: "Does D(D) halt?"
>>>>>
>>>>> Now, D(D) either halts or doesn't halt.
>>>>>
>>>>> Hence the CORRECT yes/no-answer to the question "Does D(D) halt?" is
>>>>> "yes" iff D(D) halts and "no" if D(D) doesn't halt.
>>>>
>>>> Just a reminder that you are arguing with someone who has declared that
>>>> the wrong answer is the right one:
>>>>
>>>> Me: "do you still assert that [...] false is the "correct" answer even
>>>>      though P(P) halts?"
>>>>
>>>> PO: Yes that is the correct answer even though P(P) halts.
>>>>
>>>
>>> Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to
>>> discourage honest dialogue]
>>>
>>> *Ben Bacarisse targets my posts to discourage honest dialogue*
>>> *Ben Bacarisse targets my posts to discourage honest dialogue*
>>> *Ben Bacarisse targets my posts to discourage honest dialogue*
>>
>> No, YOU DO by claiming your words don't actually mean what they say.
>>
>>>
>>> When Ben pointed out that H(P,P) reports that P(P) does not halt when
>>> P(P) does halt this seems to be a contradiction to people that lack a
>>> complete understanding.
>>
>> But since P(P) (now D(D) ) does halt, how do you explain that H saying
>> it doesn't is correct?
>>
>>>
>>> Because of this I changed the semantic meaning of a return value of 0
>>> from H to mean either
>>
>> So you are admitting to LYING about the problem you are doing.
>>
>> OLCOTT --- ADMITTED LIAR
>>
>
> When H(P,P) reports that P correctly simulated by H cannot possibly
> reach its own last instruction this is an easily verified fact, thus
> P(P) does not halt from the point of view of H.

Which isn't the Halting Problem criterion, so you are lying about
working on the halting problem.

Note also, your H never actually DOES a "Correct Simulation" if it
answers the question, so your criterion is just invalid, so again, YOU LIE.
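The distinction here, that a simulation which is aborted tells you nothing about whether the simulated program halts, can be made concrete with a toy model (program and aborting_simulator are hypothetical stand-ins, with one generator yield modelling one simulation step):

```python
def program():
    # A toy "program" that halts after 10 steps; each yield is one step.
    for _ in range(10):
        yield

def aborting_simulator(prog, max_steps):
    """Simulate prog for at most max_steps steps.

    Returns "halts" only if the program finishes within the budget;
    "aborted" means the simulator gave up, which by itself says
    nothing about whether the program halts.
    """
    g = prog()
    for _ in range(max_steps):
        try:
            next(g)
        except StopIteration:
            return "halts"
    return "aborted"

# With too small a budget the simulator aborts a program that actually
# halts, so "did not finish under my simulation" is not the same
# property as "does not halt".
```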

>
> This is the same thing as the Facebook post where two people are looking
> at the same symbol that is a "9" or a "6" depending on your point of
> view.
>

Nope. You are just too stupid to think.

You are so stupid, you don't see that you are lying, which is why you
are a pathological liar. You are just proving it.

Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to discourage honest dialogue]

<u6t6a1$2i0mf$1@dont-email.me>

  copy mid

https://www.novabbs.com/computers/article-flat.php?id=11412&group=comp.ai.philosophy#11412

Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polco...@gmail.com (olcott)
Newsgroups: sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to
discourage honest dialogue]
Date: Tue, 20 Jun 2023 16:39:11 -0500
Organization: A noiseless patient Spider
Lines: 82
Message-ID: <u6t6a1$2i0mf$1@dont-email.me>
References: <u6jhqq$1570m$1@dont-email.me>
<580d711a-afd0-4eae-b2bd-84b0126905d3n@googlegroups.com>
<u6ptvb$23qkq$1@dont-email.me>
<6a7076e8-79b9-49cb-8da9-dc538329cf89n@googlegroups.com>
<87wmzzjjv6.fsf@bsb.me.uk> <u6t0c2$2h8h7$1@dont-email.me>
<z3okM.866$Ect9.422@fx44.iad> <u6t30e$2hh2e$2@dont-email.me>
<okokM.3740$VKY6.505@fx13.iad>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Tue, 20 Jun 2023 21:39:13 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="09a376d1ca8466ab1cfa2e779f87b5bd";
logging-data="2687695"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1+iG4RKc2VaEVXGL63ai3sx"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
Thunderbird/102.12.0
Cancel-Lock: sha1:3fIw1dJuEQPi1/F7Et2H2Ln+dns=
Content-Language: en-US
In-Reply-To: <okokM.3740$VKY6.505@fx13.iad>
 by: olcott - Tue, 20 Jun 2023 21:39 UTC

On 6/20/2023 3:52 PM, Richard Damon wrote:
> On 6/20/23 4:42 PM, olcott wrote:
>> On 6/20/2023 3:34 PM, Richard Damon wrote:
>>> On 6/20/23 3:57 PM, olcott wrote:
>>>> On 6/19/2023 3:08 PM, Ben Bacarisse wrote:
>>>>> Fritz Feldhase <franz.fritschee.ff@gmail.com> writes:
>>>>>
>>>>>> On Monday, June 19, 2023 at 5:58:39 PM UTC+2, olcott wrote:
>>>>>>
>>>>>>> the full semantics of the question <bla>
>>>>>>
>>>>>> Look, dumbo, we are asking the simple question: "Does D(D) halt?"
>>>>>>
>>>>>> Now, D(D) either halts or doesn't halt.
>>>>>>
>>>>>> Hence the CORRECT yes/no-answer to the question "Does D(D) halt?" is
>>>>>> "yes" iff D(D) halts and "no" if D(D) doesn't halt.
>>>>>
>>>>> Just a reminder that you are arguing with someone who has declared
>>>>> that
>>>>> the wrong answer is the right one:
>>>>>
>>>>> Me: "do you still assert that [...] false is the "correct" answer even
>>>>>      though P(P) halts?"
>>>>>
>>>>> PO: Yes that is the correct answer even though P(P) halts.
>>>>>
>>>>
>>>> Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to
>>>> discourage honest dialogue]
>>>>
>>>> *Ben Bacarisse targets my posts to discourage honest dialogue*
>>>> *Ben Bacarisse targets my posts to discourage honest dialogue*
>>>> *Ben Bacarisse targets my posts to discourage honest dialogue*
>>>
>>> No, YOU DO by claiming your words don't actually mean what they say.
>>>
>>>>
>>>> When Ben pointed out that H(P,P) reports that P(P) does not halt when
>>>> P(P) does halt this seems to be a contradiction to people that lack a
>>>> complete understanding.
>>>
>>> But since P(P) (now D(D) ) does halt, how do you explain that H
>>> saying it doesn't is correct?
>>>
>>>>
>>>> Because of this I changed the semantic meaning of a return value of 0
>>>> from H to mean either
>>>
>>> So you are admitting to LYING about the problem you are doing.
>>>
>>> OLCOTT --- ADMITTED LIAR
>>>
>>
>> When H(P,P) reports that P correctly simulated by H cannot possibly
>> reach its own last instruction this is an easily verified fact, thus
>> P(P) does not halt from the point of view of H.
>
> Which isn't the Halting Problem criteria, so you are lying about working
> on the halting problem.
>

Try and explain how any H can be defined that can be embedded
within Linz Ĥ such that embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ transitions to Ĥ.qy or Ĥ.qn
consistently with the behavior of Ĥ applied to ⟨Ĥ⟩.

If it is impossible to do this then you have affirmed that ⟨Ĥ⟩ ⟨Ĥ⟩ is a
self-contradictory input to embedded_H.

If it is possible to do this then explain the details of how it is done.

https://www.liarparadox.org/Linz_Proof.pdf

Once we know that the halting problem question is an incorrect question
then we can transform it into a correct question.
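
The dilemma posed here can be checked with a toy model. The following is a Python sketch, not Linz's Turing-machine formalism; `h_hat_halts`, `verdict_is_consistent`, and the verdict strings "qy"/"qn" are illustrative names loosely following the Linz proof. Whichever transition embedded_H takes, the behavior it induces in Ĥ applied to ⟨Ĥ⟩ contradicts that verdict.

```python
# Toy model of the Linz template (a sketch, not the formal Turing-machine
# construction).  embedded_H must take one of two transitions on <H^> <H^>:
#   "qy": it claims H^ applied to <H^> halts, and H^ then loops forever
#   "qn": it claims H^ applied to <H^> does not halt, and H^ then halts

def h_hat_halts(verdict):
    """Actual behavior of H^ applied to <H^> given embedded_H's verdict."""
    if verdict == "qy":
        return False  # H^ enters its infinite loop, so it does not halt
    return True       # H^ reaches its final state, so it halts

def verdict_is_consistent(verdict):
    """Does the verdict match the behavior it causes?"""
    predicted_halts = (verdict == "qy")
    return predicted_halts == h_hat_halts(verdict)

# Neither available transition is consistent with the behavior it produces:
assert not verdict_is_consistent("qy")
assert not verdict_is_consistent("qn")
```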

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to discourage honest dialogue]

<xdpkM.838$3XE8.316@fx42.iad>


https://www.novabbs.com/computers/article-flat.php?id=11413&group=comp.ai.philosophy#11413

Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer03.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx42.iad.POSTED!not-for-mail
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
Gecko/20100101 Thunderbird/102.12.0
Subject: Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to
discourage honest dialogue]
Content-Language: en-US
Newsgroups: sci.logic,comp.theory,comp.ai.philosophy
References: <u6jhqq$1570m$1@dont-email.me>
<580d711a-afd0-4eae-b2bd-84b0126905d3n@googlegroups.com>
<u6ptvb$23qkq$1@dont-email.me>
<6a7076e8-79b9-49cb-8da9-dc538329cf89n@googlegroups.com>
<87wmzzjjv6.fsf@bsb.me.uk> <u6t0c2$2h8h7$1@dont-email.me>
<z3okM.866$Ect9.422@fx44.iad> <u6t30e$2hh2e$2@dont-email.me>
<okokM.3740$VKY6.505@fx13.iad> <u6t6a1$2i0mf$1@dont-email.me>
From: Rich...@Damon-Family.org (Richard Damon)
In-Reply-To: <u6t6a1$2i0mf$1@dont-email.me>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Lines: 115
Message-ID: <xdpkM.838$3XE8.316@fx42.iad>
X-Complaints-To: abuse@easynews.com
Organization: Forte - www.forteinc.com
X-Complaints-Info: Please be sure to forward a copy of ALL headers otherwise we will be unable to process your complaint properly.
Date: Tue, 20 Jun 2023 17:53:32 -0400
X-Received-Bytes: 5438
 by: Richard Damon - Tue, 20 Jun 2023 21:53 UTC

On 6/20/23 5:39 PM, olcott wrote:
> On 6/20/2023 3:52 PM, Richard Damon wrote:
>> On 6/20/23 4:42 PM, olcott wrote:
>>> On 6/20/2023 3:34 PM, Richard Damon wrote:
>>>> On 6/20/23 3:57 PM, olcott wrote:
>>>>> On 6/19/2023 3:08 PM, Ben Bacarisse wrote:
>>>>>> Fritz Feldhase <franz.fritschee.ff@gmail.com> writes:
>>>>>>
>>>>>>> On Monday, June 19, 2023 at 5:58:39 PM UTC+2, olcott wrote:
>>>>>>>
>>>>>>>> the full semantics of the question <bla>
>>>>>>>
>>>>>>> Look, dumbo, we are asking the simple question: "Does D(D) halt?"
>>>>>>>
>>>>>>> Now, D(D) either halts or doesn't halt.
>>>>>>>
>>>>>>> Hence the CORRECT yes/no-answer to the question "Does D(D) halt?" is
>>>>>>> "yes" iff D(D) halts and "no" if D(D) doesn't halt.
>>>>>>
>>>>>> Just a reminder that you are arguing with someone who has declared
>>>>>> that
>>>>>> the wrong answer is the right one:
>>>>>>
>>>>>> Me: "do you still assert that [...] false is the "correct" answer
>>>>>> even
>>>>>>      though P(P) halts?"
>>>>>>
>>>>>> PO: Yes that is the correct answer even though P(P) halts.
>>>>>>
>>>>>
>>>>> Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to
>>>>> discourage honest dialogue]
>>>>>
>>>>> *Ben Bacarisse targets my posts to discourage honest dialogue*
>>>>> *Ben Bacarisse targets my posts to discourage honest dialogue*
>>>>> *Ben Bacarisse targets my posts to discourage honest dialogue*
>>>>
>>>> No, YOU DO by claiming your words don't actually mean what they say.
>>>>
>>>>>
>>>>> When Ben pointed out that H(P,P) reports that P(P) does not halt when
>>>>> P(P) does halt this seems to be a contradiction to people that lack a
>>>>> complete understanding.
>>>>
>>>> But since P(P) (now D(D) ) does halt, how do you explain that H
>>>> saying it doesn't is correct?
>>>>
>>>>>
>>>>> Because of this I changed the semantic meaning of a return value of 0
>>>>> from H to mean either
>>>>
>>>> So you are admitting to LYING about the problem you are doing.
>>>>
>>>> OLCOTT --- ADMITTED LIAR
>>>>
>>>
>>> When H(P,P) reports that P correctly simulated by H cannot possibly
>>> reach its own last instruction this is an easily verified fact, thus
>>> P(P) does not halt from the point of view of H.
>>
>> Which isn't the Halting Problem criteria, so you are lying about working
>> on the halting problem.
>>
>
> Try and explain how any H can be defined that can be embedded
> within Linz Ĥ such that embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ transitions to Ĥ.qy or Ĥ.qn
> consistently with the behavior of Ĥ applied to ⟨Ĥ⟩.

It can't, that is what the Theorem Proves.

That is because the Halting Function just isn't computable,

>
> If it is impossible to do this then you have affirmed that ⟨Ĥ⟩ ⟨Ĥ⟩ is a
> self-contradictory input to embedded_H.

Nope, because it just doesn't exist.

Since no H can exist that meets the requirements, an H that meets the
requirements doesn't exist, and so no H^ exists.

>
> If it is possible to do this then explain the details of how it is done.
>
> https://www.liarparadox.org/Linz_Proof.pdf
>
> Once we know that the halting problem question is an incorrect question
> then we can transform it into a correct question.
>

But it isn't an "Incorrect Question", but the definition of what a
"Correct Question" is.

Remember, the Question of the Halting Problem Theorem is, Can an H exist
that meets the requirements.

This Question has an answer of NO.

The Question of the Requirements is to decide if a given input will
Halt or Not.

This question has an answer for any input you can actually create.

The answer for the D built on your claimed H, is that it Halts, while
your claimed H says it doesn't.
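
This point, that the D built on a claimed H halts while that H says it doesn't, can be demonstrated with a minimal Python sketch. The stub `H_claims_no` and the builder `make_D` are illustrative names; the stub is hard-wired to the "does not halt" verdict, since no correct decider can exist.

```python
# Minimal sketch of the D-built-on-H construction.  H_claims_no is a
# hypothetical stub standing in for a decider whose verdict on D(D) is
# "does not halt" (return False).

def H_claims_no(prog, arg):
    return False  # claims: prog(arg) does not halt

def make_D(H):
    """Build the pathological D that does the opposite of H's prediction."""
    def D(x):
        if H(x, x):          # had H predicted halting, D would loop forever
            while True:
                pass
        return "halted"      # H predicted non-halting, so D halts immediately
    return D

D = make_D(H_claims_no)
assert D(D) == "halted"            # D(D) actually halts ...
assert H_claims_no(D, D) is False  # ... while H said it does not
```

The opposite stub (always returning True) cannot be demonstrated at runtime, since D then loops forever; that asymmetry is the point.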

The "requirement" you are claiming is that we must create a halt
decider for this template. There is no such requirement, since the
answer to the Halting Problem Theorem is NO. Your thinking is just
stuck in a rabbit hole trying to require the impossible, because you
refuse to face the reality that some things are just impossible, and
that is ok.

This is perhaps part of your mental defect.

Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to discourage honest dialogue]

<u6t7vm$2i77d$1@dont-email.me>


https://www.novabbs.com/computers/article-flat.php?id=11415&group=comp.ai.philosophy#11415

Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polco...@gmail.com (olcott)
Newsgroups: sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to
discourage honest dialogue]
Date: Tue, 20 Jun 2023 17:07:49 -0500
Organization: A noiseless patient Spider
Lines: 113
Message-ID: <u6t7vm$2i77d$1@dont-email.me>
References: <u6jhqq$1570m$1@dont-email.me>
<580d711a-afd0-4eae-b2bd-84b0126905d3n@googlegroups.com>
<u6ptvb$23qkq$1@dont-email.me>
<6a7076e8-79b9-49cb-8da9-dc538329cf89n@googlegroups.com>
<87wmzzjjv6.fsf@bsb.me.uk> <u6t0c2$2h8h7$1@dont-email.me>
<z3okM.866$Ect9.422@fx44.iad> <u6t30e$2hh2e$2@dont-email.me>
<okokM.3740$VKY6.505@fx13.iad> <u6t6a1$2i0mf$1@dont-email.me>
<xdpkM.838$3XE8.316@fx42.iad>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Tue, 20 Jun 2023 22:07:50 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="ed643445410f82d7b44362150a53c93e";
logging-data="2694381"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX18dwh5Vv7BA4tM0Hk503Mom"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
Thunderbird/102.12.0
Cancel-Lock: sha1:8LM1TBYfSoR5HpUHmowjEwZtrrI=
Content-Language: en-US
In-Reply-To: <xdpkM.838$3XE8.316@fx42.iad>
 by: olcott - Tue, 20 Jun 2023 22:07 UTC

On 6/20/2023 4:53 PM, Richard Damon wrote:
> On 6/20/23 5:39 PM, olcott wrote:
>> On 6/20/2023 3:52 PM, Richard Damon wrote:
>>> On 6/20/23 4:42 PM, olcott wrote:
>>>> On 6/20/2023 3:34 PM, Richard Damon wrote:
>>>>> On 6/20/23 3:57 PM, olcott wrote:
>>>>>> On 6/19/2023 3:08 PM, Ben Bacarisse wrote:
>>>>>>> Fritz Feldhase <franz.fritschee.ff@gmail.com> writes:
>>>>>>>
>>>>>>>> On Monday, June 19, 2023 at 5:58:39 PM UTC+2, olcott wrote:
>>>>>>>>
>>>>>>>>> the full semantics of the question <bla>
>>>>>>>>
>>>>>>>> Look, dumbo, we are asking the simple question: "Does D(D) halt?"
>>>>>>>>
>>>>>>>> Now, D(D) either halts or doesn't halt.
>>>>>>>>
>>>>>>>> Hence the CORRECT yes/no-answer to the question "Does D(D)
>>>>>>>> halt?" is
>>>>>>>> "yes" iff D(D) halts and "no" if D(D) doesn't halt.
>>>>>>>
>>>>>>> Just a reminder that you are arguing with someone who has
>>>>>>> declared that
>>>>>>> the wrong answer is the right one:
>>>>>>>
>>>>>>> Me: "do you still assert that [...] false is the "correct" answer
>>>>>>> even
>>>>>>>      though P(P) halts?"
>>>>>>>
>>>>>>> PO: Yes that is the correct answer even though P(P) halts.
>>>>>>>
>>>>>>
>>>>>> Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to
>>>>>> discourage honest dialogue]
>>>>>>
>>>>>> *Ben Bacarisse targets my posts to discourage honest dialogue*
>>>>>> *Ben Bacarisse targets my posts to discourage honest dialogue*
>>>>>> *Ben Bacarisse targets my posts to discourage honest dialogue*
>>>>>
>>>>> No, YOU DO by claiming your words don't actually mean what they say.
>>>>>
>>>>>>
>>>>>> When Ben pointed out that H(P,P) reports that P(P) does not halt when
>>>>>> P(P) does halt this seems to be a contradiction to people that lack a
>>>>>> complete understanding.
>>>>>
>>>>> But since P(P) (now D(D) ) does halt, how do you explain that H
>>>>> saying it doesn't is correct?
>>>>>
>>>>>>
>>>>>> Because of this I changed the semantic meaning of a return value of 0
>>>>>> from H to mean either
>>>>>
>>>>> So you are admitting to LYING about the problem you are doing.
>>>>>
>>>>> OLCOTT --- ADMITTED LIAR
>>>>>
>>>>
>>>> When H(P,P) reports that P correctly simulated by H cannot possibly
>>>> reach its own last instruction this is an easily verified fact, thus
>>>> P(P) does not halt from the point of view of H.
>>>
>>> Which isn't the Halting Problem criteria, so you are lying about
>>> working on the halting problem.
>>>
>>
>> Try and explain how any H can be defined that can be embedded
>> within Linz Ĥ such that embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ transitions to Ĥ.qy or Ĥ.qn
>> consistently with the behavior of Ĥ applied to ⟨Ĥ⟩.
>
> It can't, that is what the Theorem Proves.
>
> That is because the Halting Function just isn't computable,
>
>>
>> If it is impossible to do this then you have affirmed that ⟨Ĥ⟩ ⟨Ĥ⟩ is a
>> self-contradictory input to embedded_H.
>
> Nope, because it just doesn't exist.
>
> Since no H can exist that meets the requirements, an H that meets the
> requirements doesn't exist, and so no H^ exists.
>
>>
>> If it is possible to do this then explain the details of how it is done.
>>
>> https://www.liarparadox.org/Linz_Proof.pdf
>>
>> Once we know that the halting problem question is an incorrect question
>> then we can transform it into a correct question.
>>
>
> But it isn't an "Incorrect Question", but the definition of what a
> "Correct Question" is.
>
> Remember, the Question of the Halting Problem Theorem is, Can an H exist
> that meets the requirements.
>
> This Question has an answer of NO.
>

That is exactly analogous to:
(1) Can anyone correctly answer this question:
(2) Will your answer to this question be no?

The answer to (1) is "no" only because (2) is self-contradictory.
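
Setting aside whether the claimed analogy holds, the riddle itself can be checked mechanically. The following is a toy Python sketch; `answer_is_correct` is an illustrative name. For each of the two possible answers, test whether it truthfully predicts itself.

```python
# "Will your answer to this question be no?"
# An answer is correct iff what it asserts matches what the answer
# actually turns out to be.

def answer_is_correct(answer):
    asserts_no = (answer == "yes")   # "yes" asserts the answer will be "no"
    actually_no = (answer == "no")   # what the answer actually is
    return asserts_no == actually_no

# Neither possible answer is correct, so (2) has no correct answer for the
# person asked, and the answer to (1) is "no":
assert not answer_is_correct("yes")
assert not answer_is_correct("no")
```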

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to discourage honest dialogue]

<o4qkM.9813$8fUf.4480@fx16.iad>


https://www.novabbs.com/computers/article-flat.php?id=11417&group=comp.ai.philosophy#11417

Path: i2pn2.org!i2pn.org!news.neodome.net!feeder1.feed.usenet.farm!feed.usenet.farm!peer01.ams4!peer.am4.highwinds-media.com!peer03.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx16.iad.POSTED!not-for-mail
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
Gecko/20100101 Thunderbird/102.12.0
Subject: Re: Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to
discourage honest dialogue]
Content-Language: en-US
Newsgroups: sci.logic,comp.theory,comp.ai.philosophy
References: <u6jhqq$1570m$1@dont-email.me>
<580d711a-afd0-4eae-b2bd-84b0126905d3n@googlegroups.com>
<u6ptvb$23qkq$1@dont-email.me>
<6a7076e8-79b9-49cb-8da9-dc538329cf89n@googlegroups.com>
<87wmzzjjv6.fsf@bsb.me.uk> <u6t0c2$2h8h7$1@dont-email.me>
<z3okM.866$Ect9.422@fx44.iad> <u6t30e$2hh2e$2@dont-email.me>
<okokM.3740$VKY6.505@fx13.iad> <u6t6a1$2i0mf$1@dont-email.me>
<xdpkM.838$3XE8.316@fx42.iad> <u6t7vm$2i77d$1@dont-email.me>
From: Rich...@Damon-Family.org (Richard Damon)
In-Reply-To: <u6t7vm$2i77d$1@dont-email.me>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Lines: 130
Message-ID: <o4qkM.9813$8fUf.4480@fx16.iad>
X-Complaints-To: abuse@easynews.com
Organization: Forte - www.forteinc.com
X-Complaints-Info: Please be sure to forward a copy of ALL headers otherwise we will be unable to process your complaint properly.
Date: Tue, 20 Jun 2023 18:52:03 -0400
X-Received-Bytes: 6167
 by: Richard Damon - Tue, 20 Jun 2023 22:52 UTC

On 6/20/23 6:07 PM, olcott wrote:
> On 6/20/2023 4:53 PM, Richard Damon wrote:
>> On 6/20/23 5:39 PM, olcott wrote:
>>> On 6/20/2023 3:52 PM, Richard Damon wrote:
>>>> On 6/20/23 4:42 PM, olcott wrote:
>>>>> On 6/20/2023 3:34 PM, Richard Damon wrote:
>>>>>> On 6/20/23 3:57 PM, olcott wrote:
>>>>>>> On 6/19/2023 3:08 PM, Ben Bacarisse wrote:
>>>>>>>> Fritz Feldhase <franz.fritschee.ff@gmail.com> writes:
>>>>>>>>
>>>>>>>>> On Monday, June 19, 2023 at 5:58:39 PM UTC+2, olcott wrote:
>>>>>>>>>
>>>>>>>>>> the full semantics of the question <bla>
>>>>>>>>>
>>>>>>>>> Look, dumbo, we are asking the simple question: "Does D(D) halt?"
>>>>>>>>>
>>>>>>>>> Now, D(D) either halts or doesn't halt.
>>>>>>>>>
>>>>>>>>> Hence the CORRECT yes/no-answer to the question "Does D(D)
>>>>>>>>> halt?" is
>>>>>>>>> "yes" iff D(D) halts and "no" if D(D) doesn't halt.
>>>>>>>>
>>>>>>>> Just a reminder that you are arguing with someone who has
>>>>>>>> declared that
>>>>>>>> the wrong answer is the right one:
>>>>>>>>
>>>>>>>> Me: "do you still assert that [...] false is the "correct"
>>>>>>>> answer even
>>>>>>>>      though P(P) halts?"
>>>>>>>>
>>>>>>>> PO: Yes that is the correct answer even though P(P) halts.
>>>>>>>>
>>>>>>>
>>>>>>> Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to
>>>>>>> discourage honest dialogue]
>>>>>>>
>>>>>>> *Ben Bacarisse targets my posts to discourage honest dialogue*
>>>>>>> *Ben Bacarisse targets my posts to discourage honest dialogue*
>>>>>>> *Ben Bacarisse targets my posts to discourage honest dialogue*
>>>>>>
>>>>>> No, YOU DO by claiming your words don't actually mean what they say.
>>>>>>
>>>>>>>
>>>>>>> When Ben pointed out that H(P,P) reports that P(P) does not halt
>>>>>>> when
>>>>>>> P(P) does halt this seems to be a contradiction to people that
>>>>>>> lack a
>>>>>>> complete understanding.
>>>>>>
>>>>>> But since P(P) (now D(D) ) does halt, how do you explain that H
>>>>>> saying it doesn't is correct?
>>>>>>
>>>>>>>
>>>>>>> Because of this I changed the semantic meaning of a return value
>>>>>>> of 0
>>>>>>> from H to mean either
>>>>>>
>>>>>> So you are admitting to LYING about the problem you are doing.
>>>>>>
>>>>>> OLCOTT --- ADMITTED LIAR
>>>>>>
>>>>>
>>>>> When H(P,P) reports that P correctly simulated by H cannot possibly
>>>>> reach its own last instruction this is an easily verified fact, thus
>>>>> P(P) does not halt from the point of view of H.
>>>>
>>>> Which isn't the Halting Problem criteria, so you are lying about
>>>> working on the halting problem.
>>>>
>>>
>>> Try and explain how any H can be defined that can be embedded
>>> within Linz Ĥ such that embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ transitions to Ĥ.qy or Ĥ.qn
>>> consistently with the behavior of Ĥ applied to ⟨Ĥ⟩.
>>
>> It can't, that is what the Theorem Proves.
>>
>> That is because the Halting Function just isn't computable,
>>
>>>
>>> If it is impossible to do this then you have affirmed that ⟨Ĥ⟩ ⟨Ĥ⟩ is a
>>> self-contradictory input to embedded_H.
>>
>> Nope, because it just doesn't exist.
>>
>> Since no H can exist that meets the requirements, an H that meets the
>> requirements doesn't exist, and so no H^ exists.
>>
>>>
>>> If it is possible to do this then explain the details of how it is done.
>>>
>>> https://www.liarparadox.org/Linz_Proof.pdf
>>>
>>> Once we know that the halting problem question is an incorrect question
>>> then we can transform it into a correct question.
>>>
>>
>> But it isn't an "Incorrect Question", but the definition of what a
>> "Correct Question" is.
>>
>> Remember, the Question of the Halting Problem Theorem is, Can an H
>> exist that meets the requirements.
>>
>> This Question has an answer of NO.
>>
>
> That is exactly analogous to:
> (1) Can anyone correctly answer this question:
> (2) Will your answer to this question be no?
>
> The answer to (1) is "no" only because (2) is self-contradictory.
>

Nope, totally different questions, but you are too stupid to understand.

The question is NOT about some future event, but about something that
has already been determined. To ask about a machine, the machine must
exist, and thus the answer is fixed.

We conventionally talk about the machine's behavior in the future, as
there is no sense deciding on a machine we have already run, but its
behavior is NOT just in the future, but was fixed as soon as the machine
was created.

Not so with a question about a volitional being's future behavior.

Thus, the questions are VERY different.

Maybe you are just stuck on the idea of Free Will and Determinism and
can't figure out what is ruled by what.

ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<u6vhv2$2urnr$1@dont-email.me>


https://www.novabbs.com/computers/article-flat.php?id=11426&group=comp.ai.philosophy#11426

Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: val...@cultnix.org (vallor)
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
Subject: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting
problem input can be construed as an incorrect question
Date: Wed, 21 Jun 2023 19:10:26 -0000 (UTC)
Organization: A noiseless patient Spider
Lines: 20
Message-ID: <u6vhv2$2urnr$1@dont-email.me>
References: <u6jhqq$1570m$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Injection-Date: Wed, 21 Jun 2023 19:10:26 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="e0ea919d9145ac3370d4cdb17386be7a";
logging-data="3108603"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1/STCkT7vxU08J5lqriI/mw"
User-Agent: Pan/0.154 (Izium; dfc8674 gitlab.gnome.org/GNOME/pan.git)
Cancel-Lock: sha1:RGdr3AK28w9Smxed9DDexkaQaYA=
X-Face: \}2`P"_@pS86<'EM:'b.Ml}8IuMK"pV"?FReF$'c.S%u9<Q#U*4QO)$l81M`{Q/n
XL'`91kd%N::LG:=*\35JS0prp\VJN^<s"b#bff@fA7]5lJA.jn,x_d%Md$,{.EZ
 by: vallor - Wed, 21 Jun 2023 19:10 UTC

On Sat, 17 Jun 2023 00:54:32 -0500, olcott wrote:

> ChatGPT:
> “Therefore, based on the understanding that self-contradictory
> questions lack a correct answer and are deemed incorrect, one could
> argue that the halting problem's pathological input D can be
> categorized as an incorrect question when posed to the halting
> decider H.”
>
> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a It did
> not leap to this conclusion it took a lot of convincing.

Chatbots are highly unreliable at reasoning. They are designed
to give you the illusion that they know what they're talking about,
but they are the world's best BS artists.

(Try playing a game of chess with ChatGPT, you'll see what I mean.)

--
-v

Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<a7IkM.796$L836.451@fx47.iad>


https://www.novabbs.com/computers/article-flat.php?id=11427&group=comp.ai.philosophy#11427

Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer01.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx47.iad.POSTED!not-for-mail
From: val...@vallor.earth (vallor)
Subject: Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the
halting problem input can be construed as an incorrect question
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy,comp.ai.shells
References: <u6jhqq$1570m$1@dont-email.me> <u6vhv2$2urnr$1@dont-email.me>
MIME-Version: 1.0
User-Agent: Pan/0.154 (Izium; dfc8674 gitlab.gnome.org/GNOME/pan.git)
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Lines: 13
Message-ID: <a7IkM.796$L836.451@fx47.iad>
X-Complaints-To: abuse@blocknews.net
NNTP-Posting-Date: Wed, 21 Jun 2023 19:23:50 UTC
Organization: blocknews - www.blocknews.net
Date: Wed, 21 Jun 2023 19:23:50 GMT
X-Received-Bytes: 1200
 by: vallor - Wed, 21 Jun 2023 19:23 UTC

On Wed, 21 Jun 2023 19:10:26 -0000 (UTC), vallor wrote:
> Chatbots are highly unreliable at reasoning. They are designed to give
> you the illusion that they know what they're talking about,
> but they are the world's best BS artists.
>
> (Try playing a game of chess with ChatGPT, you'll see what I mean.)

Can't even get two moves into the game:

https://chat.openai.com/share/8a315ec0-f0c4-4a4e-8019-dcb070790e5c

--
-v

Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<u6vkro$30a76$1@dont-email.me>


https://www.novabbs.com/computers/article-flat.php?id=11428&group=comp.ai.philosophy#11428

Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polco...@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
Subject: Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the
halting problem input can be construed as an incorrect question
Date: Wed, 21 Jun 2023 14:59:52 -0500
Organization: A noiseless patient Spider
Lines: 48
Message-ID: <u6vkro$30a76$1@dont-email.me>
References: <u6jhqq$1570m$1@dont-email.me> <u6vhv2$2urnr$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Wed, 21 Jun 2023 19:59:52 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="ed643445410f82d7b44362150a53c93e";
logging-data="3156198"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX19STrABteRno/rqQlHWQCqa"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
Thunderbird/102.12.0
Cancel-Lock: sha1:mSnx27vS24ZNF+CMEzZL9cv3Uvo=
Content-Language: en-US
In-Reply-To: <u6vhv2$2urnr$1@dont-email.me>
 by: olcott - Wed, 21 Jun 2023 19:59 UTC

On 6/21/2023 2:10 PM, vallor wrote:
> On Sat, 17 Jun 2023 00:54:32 -0500, olcott wrote:
>
>> ChatGPT:
>> “Therefore, based on the understanding that self-contradictory
>> questions lack a correct answer and are deemed incorrect, one could
>> argue that the halting problem's pathological input D can be
>> categorized as an incorrect question when posed to the halting
>> decider H.”
>>
>> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a It did
>> not leap to this conclusion it took a lot of convincing.
>
> Chatbots are highly unreliable at reasoning. They are designed
> to give you the illusion that they know what they're talking about,
> but they are the world's best BS artists.
>
> (Try playing a game of chess with ChatGPT, you'll see what I mean.)
>

I already know that, and much worse: they simply make up facts
on the fly, citing purely fictional textbooks that have photos and back
stories for their purely fictional authors. The fake textbooks themselves
are complete and convincing.

In my case ChatGPT was able to be convinced by clearly correct
reasoning.

https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
It did not leap to this conclusion it took a lot of convincing.

People are not convinced by this same reasoning only because they spend
99.9% of their attention on rebuttal, thus there is not enough attention
left over for comprehension.

The only reason that the halting problem cannot be solved is that the
halting question is phrased incorrectly. The way that the halting
problem is phrased allows inputs that contradict every Boolean return
value from a set of specific deciders.

Each of the halting problem's instances is exactly isomorphic to
requiring a correct answer to this question:
Is this sentence true or false: "This sentence is not true".

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<QiLkM.629$sW_c.553@fx07.iad>


https://www.novabbs.com/computers/article-flat.php?id=11429&group=comp.ai.philosophy#11429

Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer03.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx07.iad.POSTED!not-for-mail
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
Gecko/20100101 Thunderbird/102.12.0
Subject: Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the
halting problem input can be construed as an incorrect question
Content-Language: en-US
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
References: <u6jhqq$1570m$1@dont-email.me> <u6vhv2$2urnr$1@dont-email.me>
<u6vkro$30a76$1@dont-email.me>
From: Rich...@Damon-Family.org (Richard Damon)
In-Reply-To: <u6vkro$30a76$1@dont-email.me>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Lines: 84
Message-ID: <QiLkM.629$sW_c.553@fx07.iad>
X-Complaints-To: abuse@easynews.com
Organization: Forte - www.forteinc.com
X-Complaints-Info: Please be sure to forward a copy of ALL headers otherwise we will be unable to process your complaint properly.
Date: Wed, 21 Jun 2023 19:01:04 -0400
X-Received-Bytes: 4254
 by: Richard Damon - Wed, 21 Jun 2023 23:01 UTC

On 6/21/23 3:59 PM, olcott wrote:
> On 6/21/2023 2:10 PM, vallor wrote:
>> On Sat, 17 Jun 2023 00:54:32 -0500, olcott wrote:
>>
>>> ChatGPT:
>>>      “Therefore, based on the understanding that self-contradictory
>>>      questions lack a correct answer and are deemed incorrect, one could
>>>      argue that the halting problem's pathological input D can be
>>>      categorized as an incorrect question when posed to the halting
>>>      decider H.”
>>>
>>> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a It did
>>> not leap to this conclusion it took a lot of convincing.
>>
>> Chatbots are highly unreliable at reasoning.  They are designed
>> to give you the illusion that they know what they're talking about,
>> but they are the world's best BS artists.
>>
>> (Try playing a game of chess with ChatGPT, you'll see what I mean.)
>>
>
> I already know that and much worse than that they simply make up facts
> on the fly citing purely fictional textbooks that have photos and back
> stories for the purely fictional authors. The fake textbooks themselves
> are complete and convincing.
>
> In my case ChatGPT was able to be convinced by clearly correct
> reasoning.
>

So, you admit that they will lie and tell you what you want to hear, yet
you think the fact that it agrees with you means something?

> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
> It did not leap to this conclusion it took a lot of convincing.

Which is a good sign that it was learning what you wanted it to say, so
it finally said it.

>
> People are not convinced by this same reasoning only because they spend
> 99.9% of their attention on rebuttal thus there is not enough attention
> left over for comprehension.

No, people can apply REAL "Correct Reasoning" and see the error in what
you call "Correct Reasoning". Your problem is that your idea of correct
isn't.

>
> The only reason that the halting problem cannot be solved is that the
> halting question is phrased incorrectly. The way that the halting
> problem is phrased allows inputs that contradict every Boolean return
> value from a set of specific deciders.

Nope, it is phrased exactly as needed. Your alterations allow the
decider to give a false answer and still be considered "correct" by your
faulty logic.

>
> Each of the halting problems instances is exactly isomorphic to
> requiring a correct answer to this question:
> Is this sentence true or false: "This sentence is not true".
>

Nope.

How is "Does the machine represented by the input to the decider halt?"
isomorphic to your statement?

Note, the actual Halting Problem question always has a definite answer.

Your claimed isomorphic question does not.

So they CAN'T be Isomorphic.

Note, your altered question of what H can return isn't the actual
question, but you don't seem to be able to understand that.

Your question is asked before H exists, and its lack of an answer merely
shows that a correct H can't actually exist.

The actual question can only be asked once H is fully defined, and at
that point H is just wrong; you can't ask what it could return to be
right, since it can only return the one answer it was programmed to give.

Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<u705ae$323du$1@dont-email.me>


https://www.novabbs.com/computers/article-flat.php?id=11431&group=comp.ai.philosophy#11431

Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polco...@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
Subject: Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the
halting problem input can be construed as an incorrect question
Date: Wed, 21 Jun 2023 19:40:45 -0500
Organization: A noiseless patient Spider
Lines: 89
Message-ID: <u705ae$323du$1@dont-email.me>
References: <u6jhqq$1570m$1@dont-email.me> <u6vhv2$2urnr$1@dont-email.me>
<u6vkro$30a76$1@dont-email.me> <QiLkM.629$sW_c.553@fx07.iad>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Thu, 22 Jun 2023 00:40:46 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="2545df0e4b7cf57e65f9ecaa42564d8d";
logging-data="3214782"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX19sj+a1jNVSH7zW8CO1bPnF"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
Thunderbird/102.12.0
Cancel-Lock: sha1:nN0TxlXFH9IxAqTvJEOoY+7Hcl8=
In-Reply-To: <QiLkM.629$sW_c.553@fx07.iad>
Content-Language: en-US
 by: olcott - Thu, 22 Jun 2023 00:40 UTC

On 6/21/2023 6:01 PM, Richard Damon wrote:
> On 6/21/23 3:59 PM, olcott wrote:
>> On 6/21/2023 2:10 PM, vallor wrote:
>>> On Sat, 17 Jun 2023 00:54:32 -0500, olcott wrote:
>>>
>>>> ChatGPT:
>>>>      “Therefore, based on the understanding that self-contradictory
>>>>      questions lack a correct answer and are deemed incorrect, one
>>>> could
>>>>      argue that the halting problem's pathological input D can be
>>>>      categorized as an incorrect question when posed to the halting
>>>>      decider H.”
>>>>
>>>> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a It did
>>>> not leap to this conclusion it took a lot of convincing.
>>>
>>> Chatbots are highly unreliable at reasoning.  They are designed
>>> to give you the illusion that they know what they're talking about,
>>> but they are the world's best BS artists.
>>>
>>> (Try playing a game of chess with ChatGPT, you'll see what I mean.)
>>>
>>
>> I already know that and much worse than that they simply make up facts
>> on the fly citing purely fictional textbooks that have photos and back
>> stories for the purely fictional authors. The fake textbooks themselves
>> are complete and convincing.
>>
>> In my case ChatGPT was able to be convinced by clearly correct
>> reasoning.
>>
>
> So, you admit that they will lie and tell you what you want to hear, you
> think the fact that it agrees with you means something?
>
>> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
>> It did not leap to this conclusion it took a lot of convincing.
>
> Which is a good sign that it was learning what you wanted it to say so it
> finally said it.
>
>>
>> People are not convinced by this same reasoning only because they spend
>> 99.9% of their attention on rebuttal thus there is not enough attention
>> left over for comprehension.
>
> No, people can apply REAL "Correct Reasoning" and see the error in what
> you call "Correct Reasoning". Your problem is that your idea of correct
> isn't.
>
>>
>> The only reason that the halting problem cannot be solved is that the
>> halting question is phrased incorrectly. The way that the halting
>> problem is phrased allows inputs that contradict every Boolean return
>> value from a set of specific deciders.
>
> Nope, it is phrased exactly as needed. Your alterations allow the
> decider to give false answer and still be considered "correct" by your
> faulty logic.
>
>>
>> Each of the halting problems instances is exactly isomorphic to
>> requiring a correct answer to this question:
>> Is this sentence true or false: "This sentence is not true".
>>
>
> Nope.
>
> How is "Does the machine represented by the input to the decider halt?"
> isomorphic to your statement?
>

The halting problem instances that ask:
"Does this input halt"

are isomorphic to asking Jack this question:
"Will Jack's answer to this question be no?"

Which are both isomorphic to asking if this expression
is true or false: "This sentence is not true"
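The liar sentence's lack of a consistent truth value can itself be checked mechanically; a minimal Python sketch (my own illustration, assuming the usual reading that an assignment v is consistent only if v equals the truth value of what the sentence asserts):

```python
# S = "This sentence is not true". An assignment v to S is consistent
# only if v equals the truth value of what S asserts, namely (not v).
consistent_values = [v for v in (True, False) if v == (not v)]
assert consistent_values == []   # neither True nor False works
```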

That you are unwilling to validate my work merely means that
someone else will get the credit for validating my work.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<dDOkM.10063$jlQ4.3709@fx12.iad>


https://www.novabbs.com/computers/article-flat.php?id=11432&group=comp.ai.philosophy#11432

Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer01.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx12.iad.POSTED!not-for-mail
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
Gecko/20100101 Thunderbird/102.12.0
Subject: Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the
halting problem input can be construed as an incorrect question
Content-Language: en-US
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
References: <u6jhqq$1570m$1@dont-email.me> <u6vhv2$2urnr$1@dont-email.me>
<u6vkro$30a76$1@dont-email.me> <QiLkM.629$sW_c.553@fx07.iad>
<u705ae$323du$1@dont-email.me>
From: Rich...@Damon-Family.org (Richard Damon)
In-Reply-To: <u705ae$323du$1@dont-email.me>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Lines: 122
Message-ID: <dDOkM.10063$jlQ4.3709@fx12.iad>
X-Complaints-To: abuse@easynews.com
Organization: Forte - www.forteinc.com
X-Complaints-Info: Please be sure to forward a copy of ALL headers otherwise we will be unable to process your complaint properly.
Date: Wed, 21 Jun 2023 22:47:37 -0400
X-Received-Bytes: 5915
 by: Richard Damon - Thu, 22 Jun 2023 02:47 UTC

On 6/21/23 8:40 PM, olcott wrote:
> On 6/21/2023 6:01 PM, Richard Damon wrote:
>> On 6/21/23 3:59 PM, olcott wrote:
>>> On 6/21/2023 2:10 PM, vallor wrote:
>>>> On Sat, 17 Jun 2023 00:54:32 -0500, olcott wrote:
>>>>
>>>>> ChatGPT:
>>>>>      “Therefore, based on the understanding that self-contradictory
>>>>>      questions lack a correct answer and are deemed incorrect, one
>>>>> could
>>>>>      argue that the halting problem's pathological input D can be
>>>>>      categorized as an incorrect question when posed to the halting
>>>>>      decider H.”
>>>>>
>>>>> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a It did
>>>>> not leap to this conclusion it took a lot of convincing.
>>>>
>>>> Chatbots are highly unreliable at reasoning.  They are designed
>>>> to give you the illusion that they know what they're talking about,
>>>> but they are the world's best BS artists.
>>>>
>>>> (Try playing a game of chess with ChatGPT, you'll see what I mean.)
>>>>
>>>
>>> I already know that and much worse than that they simply make up facts
>>> on the fly citing purely fictional textbooks that have photos and back
>>> stories for the purely fictional authors. The fake textbooks themselves
>>> are complete and convincing.
>>>
>>> In my case ChatGPT was able to be convinced by clearly correct
>>> reasoning.
>>>
>>
>> So, you admit that they will lie and tell you what you want to hear,
>> you think the fact that it agrees with you means something?
>>
>>> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
>>> It did not leap to this conclusion it took a lot of convincing.
>>
>> Which is a good sign that it was learning what you wanted it to say so
>> it finally said it.
>>
>>>
>>> People are not convinced by this same reasoning only because they spend
>>> 99.9% of their attention on rebuttal thus there is not enough attention
>>> left over for comprehension.
>>
>> No, people can apply REAL "Correct Reasoning" and see the error in
>> what you call "Correct Reasoning". Your problem is that your idea of
>> correct isn't.
>>
>>>
>>> The only reason that the halting problem cannot be solved is that the
>>> halting question is phrased incorrectly. The way that the halting
>>> problem is phrased allows inputs that contradict every Boolean return
>>> value from a set of specific deciders.
>>
>> Nope, it is phrased exactly as needed. Your alterations allow the
>> decider to give false answer and still be considered "correct" by your
>> faulty logic.
>>
>>>
>>> Each of the halting problems instances is exactly isomorphic to
>>> requiring a correct answer to this question:
>>> Is this sentence true or false: "This sentence is not true".
>>>
>>
>> Nope.
>>
>> How is "Does the machine represented by the input to the decider halt?"
>> isomorphic to your statement?
>>
>
> The halting problem instances that ask:
> "Does this input halt"
>
> are isomorphic to asking Jack this question:
> "Will Jack's answer to this question be no?"

Nope, because Jack is a volitional being, so we CAN'T know the correct
answer to the question until after Jack answers the question, thus Jack,
in trying to be correct, hits a contradiction.

The correct answer to the Halting Problem question was available as soon
as the machine being asked about was defined, so the decider doesn't hit
a contradiction in logic; it is just wrong, because it CAN'T "try" to
give the other answer, because it just does as it was programmed.

All your logic is in designing the machine, and there the contradiction
just points out that you can't make a correct machine, which is an
acceptable answer. Not all problems are computable, so we can't always
make a machine give the answer.

>
> Which are both isomorphic to asking if this expression
> is true or false: "This sentence is not true"

Nope. Show how they CAN be.

The Halting Problem question ALWAYS has a valid yes or no answer, since
the machine it is being asked about must be defined to ask it, and thus
its behavior is FIXED by its code.

You just don't seem to understand what a program is, so I guess you
faked it when you were working as a programmer.

>
> That you are unwilling to validate my work merely means that
> someone else will get the credit for validating my work.
>
>

I can't "Validate" your work, as it is just incorrect.

You think two things of different kinds are the same, which is
impossible, so your statements are just incorrect.

You don't seem to understand that computations don't have volition, so
you basically don't understand what a computation is at all, and nothing
you have done regarding them has any hope of having a factual basis.

You also clearly don't understand how logic works.

Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<u70dch$36h09$1@dont-email.me>


https://www.novabbs.com/computers/article-flat.php?id=11433&group=comp.ai.philosophy#11433

Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polco...@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
Subject: Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the
halting problem input can be construed as an incorrect question
Date: Wed, 21 Jun 2023 21:58:25 -0500
Organization: A noiseless patient Spider
Lines: 95
Message-ID: <u70dch$36h09$1@dont-email.me>
References: <u6jhqq$1570m$1@dont-email.me> <u6vhv2$2urnr$1@dont-email.me>
<u6vkro$30a76$1@dont-email.me> <QiLkM.629$sW_c.553@fx07.iad>
<u705ae$323du$1@dont-email.me> <dDOkM.10063$jlQ4.3709@fx12.iad>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Thu, 22 Jun 2023 02:58:26 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="2545df0e4b7cf57e65f9ecaa42564d8d";
logging-data="3359753"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX18ytzJ6uabVwwwS99f0nBb6"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
Thunderbird/102.12.0
Cancel-Lock: sha1:ZmjuCU04eKeTiqkivVq3uaipkPY=
In-Reply-To: <dDOkM.10063$jlQ4.3709@fx12.iad>
Content-Language: en-US
 by: olcott - Thu, 22 Jun 2023 02:58 UTC

On 6/21/2023 9:47 PM, Richard Damon wrote:
> On 6/21/23 8:40 PM, olcott wrote:
>> On 6/21/2023 6:01 PM, Richard Damon wrote:
>>> On 6/21/23 3:59 PM, olcott wrote:
>>>> On 6/21/2023 2:10 PM, vallor wrote:
>>>>> On Sat, 17 Jun 2023 00:54:32 -0500, olcott wrote:
>>>>>
>>>>>> ChatGPT:
>>>>>>      “Therefore, based on the understanding that self-contradictory
>>>>>>      questions lack a correct answer and are deemed incorrect, one
>>>>>> could
>>>>>>      argue that the halting problem's pathological input D can be
>>>>>>      categorized as an incorrect question when posed to the halting
>>>>>>      decider H.”
>>>>>>
>>>>>> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a It did
>>>>>> not leap to this conclusion it took a lot of convincing.
>>>>>
>>>>> Chatbots are highly unreliable at reasoning.  They are designed
>>>>> to give you the illusion that they know what they're talking about,
>>>>> but they are the world's best BS artists.
>>>>>
>>>>> (Try playing a game of chess with ChatGPT, you'll see what I mean.)
>>>>>
>>>>
>>>> I already know that and much worse than that they simply make up facts
>>>> on the fly citing purely fictional textbooks that have photos and back
>>>> stories for the purely fictional authors. The fake textbooks themselves
>>>> are complete and convincing.
>>>>
>>>> In my case ChatGPT was able to be convinced by clearly correct
>>>> reasoning.
>>>>
>>>
>>> So, you admit that they will lie and tell you what you want to hear,
>>> you think the fact that it agrees with you means something?
>>>
>>>> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
>>>> It did not leap to this conclusion it took a lot of convincing.
>>>
>>> Which is a good sign that it was learning what you wanted it to say so
>>> it finally said it.
>>>
>>>>
>>>> People are not convinced by this same reasoning only because they spend
>>>> 99.9% of their attention on rebuttal thus there is not enough attention
>>>> left over for comprehension.
>>>
>>> No, people can apply REAL "Correct Reasoning" and see the error in
>>> what you call "Correct Reasoning". Your problem is that your idea of
>>> correct isn't.
>>>
>>>>
>>>> The only reason that the halting problem cannot be solved is that the
>>>> halting question is phrased incorrectly. The way that the halting
>>>> problem is phrased allows inputs that contradict every Boolean return
>>>> value from a set of specific deciders.
>>>
>>> Nope, it is phrased exactly as needed. Your alterations allow the
>>> decider to give false answer and still be considered "correct" by
>>> your faulty logic.
>>>
>>>>
>>>> Each of the halting problems instances is exactly isomorphic to
>>>> requiring a correct answer to this question:
>>>> Is this sentence true or false: "This sentence is not true".
>>>>
>>>
>>> Nope.
>>>
>>> How is "Does the machine represented by the input to the decider halt?"
>>> isomorphic to your statement?
>>>
>>
>> The halting problem instances that ask:
>> "Does this input halt"
>>
>> are isomorphic to asking Jack this question:
>> "Will Jack's answer to this question be no?"
>
> Nope, because Jack is a volitional being, so we CAN'T know the correct
> answer to the question until after Jack answers the question, thus Jack,
> in trying to be correct, hits a contradiction.
>

We can know that the correct answer from Jack and the correct return
value from H cannot possibly exist, now and forever.
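A sketch of why no answer from Jack can be correct (using a hypothetical formalization of "correct" that is my own, not part of the original exchange): an answer counts as correct exactly when claiming "yes" matches the fact that the answer actually given was "no".

```python
def jack_answer_is_correct(answer):
    # "Will Jack's answer to this question be no?"
    # The claim "yes" is correct iff the answer actually given is "no",
    # so correctness requires (answer == "yes") == (answer == "no").
    return (answer == "yes") == (answer == "no")

# No possible answer satisfies the question:
assert [a for a in ("yes", "no") if jack_answer_is_correct(a)] == []
```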

You are just playing head games.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<ZdWkM.10575$zG0d.977@fx04.iad>


https://www.novabbs.com/computers/article-flat.php?id=11434&group=comp.ai.philosophy#11434

Path: i2pn2.org!i2pn.org!weretis.net!feeder6.news.weretis.net!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer03.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx04.iad.POSTED!not-for-mail
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
Gecko/20100101 Thunderbird/102.12.0
Subject: Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the
halting problem input can be construed as an incorrect question
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
References: <u6jhqq$1570m$1@dont-email.me> <u6vhv2$2urnr$1@dont-email.me>
<u6vkro$30a76$1@dont-email.me> <QiLkM.629$sW_c.553@fx07.iad>
<u705ae$323du$1@dont-email.me> <dDOkM.10063$jlQ4.3709@fx12.iad>
<u70dch$36h09$1@dont-email.me>
From: Rich...@Damon-Family.org (Richard Damon)
Content-Language: en-US
In-Reply-To: <u70dch$36h09$1@dont-email.me>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Lines: 112
Message-ID: <ZdWkM.10575$zG0d.977@fx04.iad>
X-Complaints-To: abuse@easynews.com
Organization: Forte - www.forteinc.com
X-Complaints-Info: Please be sure to forward a copy of ALL headers otherwise we will be unable to process your complaint properly.
Date: Thu, 22 Jun 2023 07:26:48 -0400
X-Received-Bytes: 5665
 by: Richard Damon - Thu, 22 Jun 2023 11:26 UTC

On 6/21/23 10:58 PM, olcott wrote:
> On 6/21/2023 9:47 PM, Richard Damon wrote:
>> On 6/21/23 8:40 PM, olcott wrote:
>>> On 6/21/2023 6:01 PM, Richard Damon wrote:
>>>> On 6/21/23 3:59 PM, olcott wrote:
>>>>> On 6/21/2023 2:10 PM, vallor wrote:
>>>>>> On Sat, 17 Jun 2023 00:54:32 -0500, olcott wrote:
>>>>>>
>>>>>>> ChatGPT:
>>>>>>>      “Therefore, based on the understanding that self-contradictory
>>>>>>>      questions lack a correct answer and are deemed incorrect,
>>>>>>> one could
>>>>>>>      argue that the halting problem's pathological input D can be
>>>>>>>      categorized as an incorrect question when posed to the halting
>>>>>>>      decider H.”
>>>>>>>
>>>>>>> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a It
>>>>>>> did
>>>>>>> not leap to this conclusion it took a lot of convincing.
>>>>>>
>>>>>> Chatbots are highly unreliable at reasoning.  They are designed
>>>>>> to give you the illusion that they know what they're talking about,
>>>>>> but they are the world's best BS artists.
>>>>>>
>>>>>> (Try playing a game of chess with ChatGPT, you'll see what I mean.)
>>>>>>
>>>>>
>>>>> I already know that and much worse than that they simply make up facts
>>>>> on the fly citing purely fictional textbooks that have photos and back
>>>>> stories for the purely fictional authors. The fake textbooks
>>>>> themselves
>>>>> are complete and convincing.
>>>>>
>>>>> In my case ChatGPT was able to be convinced by clearly correct
>>>>> reasoning.
>>>>>
>>>>
>>>> So, you admit that they will lie and tell you what you want to hear,
>>>> you think the fact that it agrees with you means something?
>>>>
>>>>> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
>>>>> It did not leap to this conclusion it took a lot of convincing.
>>>>
>>>> Which is a good sign that it was learning what you wanted it to say
>>>> so it finally said it.
>>>>
>>>>>
>>>>> People are not convinced by this same reasoning only because they
>>>>> spend
>>>>> 99.9% of their attention on rebuttal thus there is not enough
>>>>> attention
>>>>> left over for comprehension.
>>>>
>>>> No, people can apply REAL "Correct Reasoning" and see the error in
>>>> what you call "Correct Reasoning". Your problem is that your idea of
>>>> correct isn't.
>>>>
>>>>>
>>>>> The only reason that the halting problem cannot be solved is that the
>>>>> halting question is phrased incorrectly. The way that the halting
>>>>> problem is phrased allows inputs that contradict every Boolean return
>>>>> value from a set of specific deciders.
>>>>
>>>> Nope, it is phrased exactly as needed. Your alterations allow the
>>>> decider to give false answer and still be considered "correct" by
>>>> your faulty logic.
>>>>
>>>>>
>>>>> Each of the halting problems instances is exactly isomorphic to
>>>>> requiring a correct answer to this question:
>>>>> Is this sentence true or false: "This sentence is not true".
>>>>>
>>>>
>>>> Nope.
>>>>
>>>> How is "Does the machine represented by the input to the decider halt?"
>>>> isomorphic to your statement?
>>>>
>>>
>>> The halting problem instances that ask:
>>> "Does this input halt"
>>>
>>> are isomorphic to asking Jack this question:
>>> "Will Jack's answer to this question be no?"
>>
>> Nope, because Jack is a volitional being, so we CAN'T know the correct
>> answer to the question until after Jack answers the question, thus
>> Jack, in trying to be correct, hits a contradiction.
>>
>
> We can know that the correct answer from Jack and the correct return
> value from H cannot possibly exist, now and forever.
>
> You are just playing head games.
>
>

But the question isn't what H can return to be correct, since the only
possible answer that H can return is what it does return by its
programming, which will either BE correct or not. (In this case NOT).

Therefore, the correct answer that H SHOULD HAVE returned (to be
correct) has an answer, so the question actually HAS a correct answer.

You clearly don't understand the difference between a volitional being
and a deterministic machine. This shows your stupidity and ignorance.
Maybe you have lost your free will and ability to think because of the
evil in your life, and are condemned to keep repeating the same error
over and over, proving your insanity and stupidity.

I guess you are now shown to be a Hypocritical Ignorant Pathological
Lying insane idiot.

Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<u71l85$3aqv2$1@dont-email.me>


https://www.novabbs.com/computers/article-flat.php?id=11435&group=comp.ai.philosophy#11435

Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polco...@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
Subject: Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the
halting problem input can be construed as an incorrect question
Date: Thu, 22 Jun 2023 09:18:45 -0500
Organization: A noiseless patient Spider
Lines: 107
Message-ID: <u71l85$3aqv2$1@dont-email.me>
References: <u6jhqq$1570m$1@dont-email.me> <u6vhv2$2urnr$1@dont-email.me>
<u6vkro$30a76$1@dont-email.me> <QiLkM.629$sW_c.553@fx07.iad>
<u705ae$323du$1@dont-email.me> <dDOkM.10063$jlQ4.3709@fx12.iad>
<u70dch$36h09$1@dont-email.me> <ZdWkM.10575$zG0d.977@fx04.iad>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Thu, 22 Jun 2023 14:18:45 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="2545df0e4b7cf57e65f9ecaa42564d8d";
logging-data="3501026"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX19LqvW11YMbAsFzzt6BsQel"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
Thunderbird/102.12.0
Cancel-Lock: sha1:uNVeuowEk2yPIsRpAIeJlrgwmzU=
In-Reply-To: <ZdWkM.10575$zG0d.977@fx04.iad>
Content-Language: en-US
 by: olcott - Thu, 22 Jun 2023 14:18 UTC

On 6/22/2023 6:26 AM, Richard Damon wrote:
> On 6/21/23 10:58 PM, olcott wrote:
>> On 6/21/2023 9:47 PM, Richard Damon wrote:
>>> On 6/21/23 8:40 PM, olcott wrote:
>>>> On 6/21/2023 6:01 PM, Richard Damon wrote:
>>>>> On 6/21/23 3:59 PM, olcott wrote:
>>>>>> On 6/21/2023 2:10 PM, vallor wrote:
>>>>>>> On Sat, 17 Jun 2023 00:54:32 -0500, olcott wrote:
>>>>>>>
>>>>>>>> ChatGPT:
>>>>>>>>      “Therefore, based on the understanding that self-contradictory
>>>>>>>>      questions lack a correct answer and are deemed incorrect,
>>>>>>>> one could
>>>>>>>>      argue that the halting problem's pathological input D can be
>>>>>>>>      categorized as an incorrect question when posed to the halting
>>>>>>>>      decider H.”
>>>>>>>>
>>>>>>>> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
>>>>>>>> It did
>>>>>>>> not leap to this conclusion it took a lot of convincing.
>>>>>>>
>>>>>>> Chatbots are highly unreliable at reasoning.  They are designed
>>>>>>> to give you the illusion that they know what they're talking about,
>>>>>>> but they are the world's best BS artists.
>>>>>>>
>>>>>>> (Try playing a game of chess with ChatGPT, you'll see what I mean.)
>>>>>>>
>>>>>>
>>>>>> I already know that and much worse than that they simply make up
>>>>>> facts
>>>>>> on the fly citing purely fictional textbooks that have photos and
>>>>>> back
>>>>>> stories for the purely fictional authors. The fake textbooks
>>>>>> themselves
>>>>>> are complete and convincing.
>>>>>>
>>>>>> In my case ChatGPT was able to be convinced by clearly correct
>>>>>> reasoning.
>>>>>>
>>>>>
>>>>> So, you admit that they will lie and tell you what you want to
>>>>> hear, you think the fact that it agrees with you means something?
>>>>>
>>>>>> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
>>>>>> It did not leap to this conclusion it took a lot of convincing.
>>>>>
>>>>> Which is a good sign that it was learning what you wanted it to say
>>>>> so it finally said it.
>>>>>
>>>>>>
>>>>>> People are not convinced by this same reasoning only because they
>>>>>> spend
>>>>>> 99.9% of their attention on rebuttal thus there is not enough
>>>>>> attention
>>>>>> left over for comprehension.
>>>>>
>>>>> No, people can apply REAL "Correct Reasoning" and see the error in
>>>>> what you call "Correct Reasoning". Your problem is that your idea
>>>>> of correct isn't.
>>>>>
>>>>>>
>>>>>> The only reason that the halting problem cannot be solved is that the
>>>>>> halting question is phrased incorrectly. The way that the halting
>>>>>> problem is phrased allows inputs that contradict every Boolean return
>>>>>> value from a set of specific deciders.
>>>>>
>>>>> Nope, it is phrased exactly as needed. Your alterations allow the
>>>>> decider to give false answer and still be considered "correct" by
>>>>> your faulty logic.
>>>>>
>>>>>>
>>>>>> Each of the halting problems instances is exactly isomorphic to
>>>>>> requiring a correct answer to this question:
>>>>>> Is this sentence true or false: "This sentence is not true".
>>>>>>
>>>>>
>>>>> Nope.
>>>>>
>>>>> How is "Does the machine represented by the input to the decider halt?"
>>>>> isomorphic to your statement?
>>>>>
>>>>
>>>> The halting problem instances that ask:
>>>> "Does this input halt"
>>>>
>>>> are isomorphic to asking Jack this question:
>>>> "Will Jack's answer to this question be no?"
>>>
>>> Nope, because Jack is a volitional being, so we CAN'T know the
>>> correct answer to the question until after Jack answers the question,
>>> thus Jack, in trying to be correct, hits a contradiction.
>>>
>>
>> We can know that the correct answer from Jack and the correct return
>> value from H cannot possibly exist, now and forever.
>>
>> You are just playing head games.
>>
>>
>
> But the question isn't what H can return to be correct,
Yes it is, and you just keep playing head games.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<ce6lM.38368$7915.14256@fx10.iad>


https://www.novabbs.com/computers/article-flat.php?id=11439&group=comp.ai.philosophy#11439

Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer02.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx10.iad.POSTED!not-for-mail
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
Gecko/20100101 Thunderbird/102.12.0
Subject: Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the
halting problem input can be construed as an incorrect question
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
References: <u6jhqq$1570m$1@dont-email.me> <u6vhv2$2urnr$1@dont-email.me>
<u6vkro$30a76$1@dont-email.me> <QiLkM.629$sW_c.553@fx07.iad>
<u705ae$323du$1@dont-email.me> <dDOkM.10063$jlQ4.3709@fx12.iad>
<u70dch$36h09$1@dont-email.me> <ZdWkM.10575$zG0d.977@fx04.iad>
<u71l85$3aqv2$1@dont-email.me>
From: Rich...@Damon-Family.org (Richard Damon)
Content-Language: en-US
In-Reply-To: <u71l85$3aqv2$1@dont-email.me>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Lines: 134
Message-ID: <ce6lM.38368$7915.14256@fx10.iad>
X-Complaints-To: abuse@easynews.com
Organization: Forte - www.forteinc.com
X-Complaints-Info: Please be sure to forward a copy of ALL headers otherwise we will be unable to process your complaint properly.
Date: Thu, 22 Jun 2023 21:06:16 -0400
X-Received-Bytes: 6635
 by: Richard Damon - Fri, 23 Jun 2023 01:06 UTC

On 6/22/23 10:18 AM, olcott wrote:
> On 6/22/2023 6:26 AM, Richard Damon wrote:
>> On 6/21/23 10:58 PM, olcott wrote:
>>> On 6/21/2023 9:47 PM, Richard Damon wrote:
>>>> On 6/21/23 8:40 PM, olcott wrote:
>>>>> On 6/21/2023 6:01 PM, Richard Damon wrote:
>>>>>> On 6/21/23 3:59 PM, olcott wrote:
>>>>>>> On 6/21/2023 2:10 PM, vallor wrote:
>>>>>>>> On Sat, 17 Jun 2023 00:54:32 -0500, olcott wrote:
>>>>>>>>
>>>>>>>>> ChatGPT:
>>>>>>>>>      “Therefore, based on the understanding that
>>>>>>>>> self-contradictory
>>>>>>>>>      questions lack a correct answer and are deemed incorrect,
>>>>>>>>> one could
>>>>>>>>>      argue that the halting problem's pathological input D can be
>>>>>>>>>      categorized as an incorrect question when posed to the
>>>>>>>>> halting
>>>>>>>>>      decider H.”
>>>>>>>>>
>>>>>>>>> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
>>>>>>>>> It did
>>>>>>>>> not leap to this conclusion it took a lot of convincing.
>>>>>>>>
>>>>>>>> Chatbots are highly unreliable at reasoning.  They are designed
>>>>>>>> to give you the illusion that they know what they're talking about,
>>>>>>>> but they are the world's best BS artists.
>>>>>>>>
>>>>>>>> (Try playing a game of chess with ChatGPT, you'll see what I mean.)
>>>>>>>>
>>>>>>>
>>>>>>> I already know that and much worse than that they simply make up
>>>>>>> facts
>>>>>>> on the fly citing purely fictional textbooks that have photos and
>>>>>>> back
>>>>>>> stories for the purely fictional authors. The fake textbooks
>>>>>>> themselves
>>>>>>> are complete and convincing.
>>>>>>>
>>>>>>> In my case ChatGPT was able to be convinced by clearly correct
>>>>>>> reasoning.
>>>>>>>
>>>>>>
>>>>>> So, you admit that they will lie and tell you what you want to
>>>>>> hear, yet you think the fact that it agrees with you means something?
>>>>>>
>>>>>>> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
>>>>>>> It did not leap to this conclusion it took a lot of convincing.
>>>>>>
>>>>>> Which is a good sign that it was learning what you wanted it to say,
>>>>>> so it finally said it.
>>>>>>
>>>>>>>
>>>>>>> People are not convinced by this same reasoning only because they
>>>>>>> spend
>>>>>>> 99.9% of their attention on rebuttal thus there is not enough
>>>>>>> attention
>>>>>>> left over for comprehension.
>>>>>>
>>>>>> No, people can apply REAL "Correct Reasoning" and see the error in
>>>>>> what you call "Correct Reasoning". Your problem is that your idea
>>>>>> of correct isn't.
>>>>>>
>>>>>>>
>>>>>>> The only reason that the halting problem cannot be solved is that
>>>>>>> the
>>>>>>> halting question is phrased incorrectly. The way that the halting
>>>>>>> problem is phrased allows inputs that contradict every Boolean
>>>>>>> return
>>>>>>> value from a set of specific deciders.
>>>>>>
>>>>>> Nope, it is phrased exactly as needed. Your alterations allow the
>>>>>> decider to give false answer and still be considered "correct" by
>>>>>> your faulty logic.
>>>>>>
>>>>>>>
>>>>>>> Each of the halting problems instances is exactly isomorphic to
>>>>>>> requiring a correct answer to this question:
>>>>>>> Is this sentence true or false: "This sentence is not true".
>>>>>>>
>>>>>>
>>>>>> Nope.
>>>>>>
>>>>>> How is "Does the Machine represented by the input to the decider
>>>>>> halt?" isomorphic to your statement?
>>>>>>
>>>>>
>>>>> The halting problem instances that ask:
>>>>> "Does this input halt"
>>>>>
>>>>> are isomorphic to asking Jack this question:
>>>>> "Will Jack's answer to this question be no?"
>>>>
>>>> Nope, because Jack is a volitional being, so we CAN'T know the
>>>> correct answer to the question until after Jack answers the
>>>> question, thus Jack, in trying to be correct, hits a contradiction.
>>>>
>>>
>>> We can know that the correct answer from Jack and the correct return
>>> value from H cannot possibly exist, now and forever.
>>>
>>> You are just playing head games.
>>>
>>>
>>
>> But the question isn't what H can return to be correct,
> Yes it is and you just keep playing head games.
>

So, you aren't talking about the Halting Problem, and your definition of
"head games" must be correcting your mistakes.

The question of the Halting Problem is: does the Machine that the input
describes Halt? It makes no reference to H itself. H, to be correct,
needs to get the right answer, but the question isn't what it needs to
return to be correct, since once you define H, its answer is fixed, so
the only answer it CAN give is what it DOES give.
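That fixed-answer point can be sketched in Python. Nothing below is from the thread itself: `make_D` and the stub deciders are made-up names, and the string "loops" stands in for an actual infinite loop so the sketch is runnable. The classic "pathological" program D consults a candidate decider H about itself and then does the opposite, so whatever fixed Boolean H encodes, D's actual behavior refutes it.

```python
def make_D(H):
    """Build the classic 'pathological' program D from a candidate
    halting decider H. D asks H about itself, then does the opposite
    of H's prediction. (A real D would enter an infinite loop; here
    the tag "loops" stands in for that so the sketch can execute.)"""
    def D():
        if H(D):              # H predicts: D halts ...
            return "loops"    # ... so D "loops" instead
        return "halts"        # H predicts: D loops, so D halts
    return D

# Whatever fixed Boolean a given H returns for D, D contradicts it.
for verdict in (True, False):
    H = lambda prog, v=verdict: v   # a decider whose answer is baked in
    D = make_D(H)
    halts = (D() == "halts")
    print("H predicts halts =", verdict, "| D actually halts =", halts)
```

This is only an illustration of the standard diagonal construction, not a claim about any particular decider discussed in the thread.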

You seem to not understand that programs are deterministic entities and
have no option of "choice", so we can't ask what they could do to be
correct, because they will only do what they do.
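The "no choice" point is just determinism: once a program is written, it is a fixed mapping from inputs to outputs, so asking what it could return to be correct is really asking about some other program. A minimal illustration (`H_stub` is a hypothetical name, not anything from the thread):

```python
def H_stub(prog):
    # Once written, the verdict is part of the program's definition;
    # there is no run where it "chooses" to answer differently.
    return True

# Repeated calls can only ever produce the one answer the code encodes.
answers = {H_stub(None) for _ in range(1000)}
print(answers)  # a single fixed value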

Your head games seem to consist of assuming things might do what they
don't actually do, and thus reasoning about pure fantasy.

You also seem to not understand the difference between a volitional
being and a deterministic process. Maybe because you have lost your own
determinism and gave it to your insanity, and now you are stuck forever
trying to do what you incorrectly thought of.

Clearly you have lost the intelligence that comes out of volition, as you
show yourself to be too stupid and ignorant to understand the basics
presented to you.
