Rocksolid Light


devel / comp.theory / Halting Problem solved (AI reprise)

Subject  Author
* Halting Problem solved (AI reprise)  Mr Flibble
`* Halting Problem solved (AI reprise)  Richard Damon
 `* Halting Problem solved (AI reprise)  Mr Flibble
  `* Halting Problem solved (AI reprise)  Richard Damon
   +* Halting Problem solved (AI reprise)  olcott
   |`* Halting Problem solved (AI reprise)  Richard Damon
   | `* Halting Problem solved (AI reprise)  olcott
   |  `* Halting Problem solved (AI reprise)  Richard Damon
   |   `* Halting Problem solved (AI reprise)  olcott
   |    `* Halting Problem solved (AI reprise)  Richard Damon
   |     `* Halting Problem solved (AI reprise)  olcott
   |      `- Halting Problem solved (AI reprise)  Richard Damon
   `* Halting Problem solved (AI reprise)  Mr Flibble
    `* Halting Problem solved (AI reprise)  Richard Damon
     `- Halting Problem solved (AI reprise)  Mr Flibble

Halting Problem solved (AI reprise)

<173e92d71d359ef8$3667$380591$7aa12caf@news.newsdemon.com>

https://www.novabbs.com/devel/article-flat.php?id=43310&group=comp.theory#43310

From: flib...@reddwarf.jmc.corp (Mr Flibble)
Subject: Halting Problem solved (AI reprise)
Newsgroups: comp.theory
Date: Sat, 28 Jan 2023 20:26:04 +0000
 by: Mr Flibble - Sat, 28 Jan 2023 20:26 UTC

Hi!

I am happy to announce that ChatGPT agrees with my thesis that there is a
third outcome for a halt decider: INVALID (I.E. CANNOT DECIDE DUE TO SELF
REFERENCE CATEGORY ERROR):

https://twitter.com/i42Software/status/1609626194273525760

My halt decider:

https://github.com/i42output/halting-problem#readme

/Flibble
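[Editorial note: the linked README is not reproduced in the post, so the following is a purely hypothetical sketch of the "third outcome" idea; the names, the closed set of toy programs, and the `arg is prog` detection rule are all assumptions, not Flibble's actual code.]

```python
from enum import Enum

class Verdict(Enum):
    HALTS = 1
    LOOPS = 2
    INVALID = 3  # the proposed third outcome: self-reference category error

def spin(x):
    while True:   # a program that never halts
        pass

def done(x):
    return x      # a program that always halts

def halt_decider(prog, arg):
    # Toy decider over a closed set of known programs; "arg is prog"
    # is a crude stand-in for detecting the self-referential case.
    if arg is prog:
        return Verdict.INVALID
    if prog is spin:
        return Verdict.LOOPS
    return Verdict.HALTS

def confound(p):
    # The classic diagonal program built from the decider itself.
    if halt_decider(p, p) == Verdict.HALTS:
        spin(p)
    return 0
```

Under this scheme the diagonal call halt_decider(confound, confound) is answered INVALID rather than HALTS or LOOPS, which is the move the post is claiming.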

Re: Halting Problem solved (AI reprise)

<SPfBL.137319$5CY7.25969@fx46.iad>

https://www.novabbs.com/devel/article-flat.php?id=43312&group=comp.theory#43312

From: Rich...@Damon-Family.org (Richard Damon)
Subject: Re: Halting Problem solved (AI reprise)
Newsgroups: comp.theory
Date: Sat, 28 Jan 2023 15:45:38 -0500
 by: Richard Damon - Sat, 28 Jan 2023 20:45 UTC

On 1/28/23 3:26 PM, Mr Flibble wrote:
> Hi!
>
> I am happy to announce that ChatGPT agrees with my thesis that there is a
> third outcome for a halt decider: INVALID (I.E. CANNOT DECIDE DUE TO SELF
> REFERENCE CATEGORY ERROR):
>
> https://twitter.com/i42Software/status/1609626194273525760
>
> My halt decider:
>
> https://github.com/i42output/halting-problem#readme
>
> /Flibble

You do understand that ChatGPT doesn't actually understand how your
function "program_halts" is supposed to work, since it only has a comment
to go on, and it seems to just assume that the function makes a direct call.

Thus, its answer isn't actually meaningful.
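[Editorial note: Damon's objection can be illustrated with a toy, hypothetical demo, not code from the linked repo: a "decider" that simply runs its input can only ever confirm halting; on a non-halting input it would never return. The step budget below exists solely so the demo itself terminates.]

```python
import itertools

def halting_prog():
    for i in range(3):   # terminates after three steps
        yield i

def looping_prog():
    while True:          # never terminates
        yield 0

def naive_decider(prog_steps, budget=1000):
    # "Decide" by direct execution: this only ever proves halting.
    consumed = list(itertools.islice(prog_steps, budget))
    # True means the program stopped on its own; False means the budget
    # ran out, i.e. the decider produced no real answer at all.
    return len(consumed) < budget
```

Without the artificial budget, the call on looping_prog() would simply hang, which is why direct invocation cannot serve as a halt decider.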

Re: Halting Problem solved (AI reprise)

<173ef09fffb109a4$193$2862885$7aa12caf@news.newsdemon.com>

https://www.novabbs.com/devel/article-flat.php?id=43323&group=comp.theory#43323

From: flib...@reddwarf.jmc.corp (Mr Flibble)
Subject: Re: Halting Problem solved (AI reprise)
Newsgroups: comp.theory
Date: Mon, 30 Jan 2023 01:04:41 +0000
 by: Mr Flibble - Mon, 30 Jan 2023 01:04 UTC

On Sat, 28 Jan 2023 15:45:38 -0500, Richard Damon wrote:

> On 1/28/23 3:26 PM, Mr Flibble wrote:
>> Hi!
>>
>> I am happy to announce that ChatGPT agrees with my thesis that there is
>> a third outcome for a halt decider: INVALID (I.E. CANNOT DECIDE DUE TO
>> SELF REFERENCE CATEGORY ERROR):
>>
>> https://twitter.com/i42Software/status/1609626194273525760
>>
>> My halt decider:
>>
>> https://github.com/i42output/halting-problem#readme
>>
>> /Flibble
>
>
> You do understand that ChatGPT doesn't actually understand how your
> function "program_halts" is supposed to work since it only has a comment
> to decide, and seems to just assume that it makes a direct call.
>
> Thus, its answer isn't actually meaningful.

You are assuming that ChatGPT is ignoring the comment: on what are you
basing such an asinine assumption?

/Flibble

Re: Halting Problem solved (AI reprise)

<%REBL.56569$Lfzc.9977@fx36.iad>

https://www.novabbs.com/devel/article-flat.php?id=43325&group=comp.theory#43325

From: Rich...@Damon-Family.org (Richard Damon)
Subject: Re: Halting Problem solved (AI reprise)
Newsgroups: comp.theory
Date: Sun, 29 Jan 2023 20:14:35 -0500
 by: Richard Damon - Mon, 30 Jan 2023 01:14 UTC

On 1/29/23 8:04 PM, Mr Flibble wrote:
> On Sat, 28 Jan 2023 15:45:38 -0500, Richard Damon wrote:
>
>> On 1/28/23 3:26 PM, Mr Flibble wrote:
>>> [...]
>>
>>
>> You do understand that ChatGPT doesn't actually understand how your
>> function "program_halts" is supposed to work since it only has a comment
>> to decide, and seems to just assume that it makes a direct call.
>>
>> Thus, its answer isn't actually meaningful.
>
> You are assuming that ChatGPT is ignoring the comment: on what are you
> basing such an asinine assumption?
>
> /Flibble

Read what ChatGPT said about the program.

It says that the halt decider calls the program provided.

That is NOT how a halt decider could work, as then it cannot answer if
the program does not halt.

ChatGPT is NOT a proven source of truth; in fact, when fact-checked on
actual, somewhat complicated logical questions, it has a very low
accuracy rate.

This is a case of the blind being led by the blind.

Re: Halting Problem solved (AI reprise)

<tr75um$30htb$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=43326&group=comp.theory#43326

From: polco...@gmail.com (olcott)
Subject: Re: Halting Problem solved (AI reprise)
Newsgroups: comp.theory
Date: Sun, 29 Jan 2023 19:19:17 -0600
 by: olcott - Mon, 30 Jan 2023 01:19 UTC

On 1/29/2023 7:14 PM, Richard Damon wrote:
> On 1/29/23 8:04 PM, Mr Flibble wrote:
>> On Sat, 28 Jan 2023 15:45:38 -0500, Richard Damon wrote:
>>
>>> On 1/28/23 3:26 PM, Mr Flibble wrote:
>>>> [...]
>>>
>>>
>>> You do understand that ChatGPT doesn't actually understand how your
>>> function "program_halts" is supposed to work since it only has a comment
>>> to decide, and seems to just assume that it makes a direct call.
>>>
>>> Thus, its answer isn't actually meaningful.
>>
>> You are assuming that ChatGPT is ignoring the comment: on what are you
>> basing such an asinine assumption?
>>
>> /Flibble
>
> Read what ChatGPT said about the program.
>
> It says that the halt decider call the program provided.
>
> That is NOT how a halt decider could work, as then it can not answer if
> the program does not halt.
>
>
> ChatGPT is NOT a proven source of truth, in fact, when fact checked, on
> actual somewhat compleicated logical question, it has a very low
> accuracy rate.
>
> This is the case of the Blind being lead by the Blind.

When I explained the key aspect of my algorithm to ChatGPT, it understood
that the algorithm is correct.

olcott
This first sentence is an established fact: When H(D,D) correctly
simulates its input the execution trace of this simulated input proves
that it would never stop running because D continues to call H in
recursive simulation. Then H aborts its simulation of D and returns 0 to
main, indicating that D would never stop running unless aborted. Is H
correct?

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer
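[Editorial note: olcott's two sentences can be rendered as a toy sketch. The names H and D follow the post; representing "simulation" as a direct call guarded by an explicit in-progress stack is an assumption. In this sketch H(D, D) returns 0 exactly as described, and a subsequent direct call D(D) then halts with 1, which is the very point the rest of the thread disputes.]

```python
class Abort(Exception):
    """Raised when the simulated input re-enters its own simulation."""

_stack = []  # (prog, arg) pairs currently being "simulated"

def H(prog, arg):
    top_level = not _stack
    if (prog, arg) in _stack:
        raise Abort   # D has called H on its own input again: abort
    _stack.append((prog, arg))
    try:
        prog(arg)
        return 1      # the simulated input halted on its own
    except Abort:
        if top_level:
            return 0  # H's verdict: "would never stop running unless aborted"
        raise         # unwind any nested simulations
    finally:
        _stack.pop()

def D(arg):
    if H(arg, arg) == 0:
        return 1      # D halts precisely when H said it wouldn't
    while True:
        pass
```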

Re: Halting Problem solved (AI reprise)

<173ef210c8763fd0$4896$354281$3aa16cab@news.newsdemon.com>

https://www.novabbs.com/devel/article-flat.php?id=43329&group=comp.theory#43329

From: flib...@reddwarf.jmc.corp (Mr Flibble)
Subject: Re: Halting Problem solved (AI reprise)
Newsgroups: comp.theory
Date: Mon, 30 Jan 2023 01:31:05 +0000
 by: Mr Flibble - Mon, 30 Jan 2023 01:31 UTC

On Sun, 29 Jan 2023 20:14:35 -0500, Richard Damon wrote:

> On 1/29/23 8:04 PM, Mr Flibble wrote:
>> On Sat, 28 Jan 2023 15:45:38 -0500, Richard Damon wrote:
>>
>>> On 1/28/23 3:26 PM, Mr Flibble wrote:
>>>> [...]
>>>
>>>
>>> You do understand that ChatGPT doesn't actually understand how your
>>> function "program_halts" is supposed to work since it only has a
>>> comment to decide, and seems to just assume that it makes a direct
>>> call.
>>>
>>> Thus, its answer isn't actually meaningful.
>>
>> You are assuming that ChatGPT is ignoring the comment: on what are you
>> basing such an asinine assumption?
>>
>> /Flibble
>
> Read what ChatGPT said about the program.
>
> It says that the halt decider call the program provided.
>
> That is NOT how a halt decider could work, as then it can not answer if
> the program does not halt.
>
>
> ChatGPT is NOT a proven source of truth, in fact, when fact checked, on
> actual somewhat compleicated logical question, it has a very low
> accuracy rate.
>
> This is the case of the Blind being lead by the Blind.

It seems you totally ignored my reply. I will try again:

You are assuming that ChatGPT is ignoring the comment: on what are you
basing such an asinine assumption?

/Flibble

Re: Halting Problem solved (AI reprise)

<QhFBL.56572$Lfzc.44550@fx36.iad>

https://www.novabbs.com/devel/article-flat.php?id=43333&group=comp.theory#43333

From: Rich...@Damon-Family.org (Richard Damon)
Subject: Re: Halting Problem solved (AI reprise)
Newsgroups: comp.theory
Date: Sun, 29 Jan 2023 20:44:17 -0500
 by: Richard Damon - Mon, 30 Jan 2023 01:44 UTC

On 1/29/23 8:19 PM, olcott wrote:
> On 1/29/2023 7:14 PM, Richard Damon wrote:
>> On 1/29/23 8:04 PM, Mr Flibble wrote:
>>> On Sat, 28 Jan 2023 15:45:38 -0500, Richard Damon wrote:
>>>
>>>> On 1/28/23 3:26 PM, Mr Flibble wrote:
>>>>> Hi!
>>>>>
>>>>> I am happy to announce that ChatGPT agrees with my thesis that
>>>>> there is
>>>>> a third outcome for a halt decider: INVALID (I.E. CANNOT DECIDE DUE TO
>>>>> SELF REFERENCE CATEGORY ERROR):
>>>>>
>>>>> https://twitter.com/i42Software/status/1609626194273525760
>>>>>
>>>>> My halt decider:
>>>>>
>>>>> https://github.com/i42output/halting-problem#readme
>>>>>
>>>>> /Flibble
>>>>
>>>>
>>>> You do understand that ChatGPT doesn't actually understand how your
>>>> function "program_halts" is supposed to work since it only has a
>>>> comment
>>>> to decide, and seems to just assume that it makes a direct call.
>>>>
>>>> Thus, its answer isn't actually meaningful.
>>>
>>> You are assuming that ChatGPT is ignoring the comment: on what are you
>>> basing such an asinine assumption?
>>>
>>> /Flibble
>>
>> Read what ChatGPT said about the program.
>>
>> It says that the halt decider call the program provided.
>>
>> That is NOT how a halt decider could work, as then it can not answer
>> if the program does not halt.
>>
>>
>> ChatGPT is NOT a proven source of truth, in fact, when fact checked,
>> on actual somewhat compleicated logical question, it has a very low
>> accuracy rate.
>>
>> This is the case of the Blind being lead by the Blind.
>
> When I explained the key aspect of my algorithm to ChatGPT it understood
> that this algorithm is correct.

Nope, it parroted back what you told it in an attempt to come close to
passing the Turing Test.

>
> olcott
> This first sentence is an established fact: When H(D,D) correctly
> simulates its input the execution trace of this simulated input proves
> that it would never stop running because D continues to call H in
> recursive simulation. Then H aborts its simulation of D and returns 0 to
> main, indicating that D would never stop running unless aborted. Is H
> correct?
>
>
You know ChatGPT has been shown to give many incorrect answers?

Re: Halting Problem solved (AI reprise)

<zjFBL.56573$Lfzc.14891@fx36.iad>

  copy mid

https://www.novabbs.com/devel/article-flat.php?id=43334&group=comp.theory#43334

  copy link   Newsgroups: comp.theory
Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!feed1.usenet.blueworldhosting.com!peer03.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx36.iad.POSTED!not-for-mail
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
Gecko/20100101 Thunderbird/102.6.1
Subject: Re: Halting Problem solved (AI reprise)
Content-Language: en-US
Newsgroups: comp.theory
References: <173e92d71d359ef8$3667$380591$7aa12caf@news.newsdemon.com>
<SPfBL.137319$5CY7.25969@fx46.iad>
<173ef09fffb109a4$193$2862885$7aa12caf@news.newsdemon.com>
<%REBL.56569$Lfzc.9977@fx36.iad>
<173ef210c8763fd0$4896$354281$3aa16cab@news.newsdemon.com>
From: Rich...@Damon-Family.org (Richard Damon)
In-Reply-To: <173ef210c8763fd0$4896$354281$3aa16cab@news.newsdemon.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Lines: 63
Message-ID: <zjFBL.56573$Lfzc.14891@fx36.iad>
X-Complaints-To: abuse@easynews.com
Organization: Forte - www.forteinc.com
X-Complaints-Info: Please be sure to forward a copy of ALL headers otherwise we will be unable to process your complaint properly.
Date: Sun, 29 Jan 2023 20:46:07 -0500
X-Received-Bytes: 3167
 by: Richard Damon - Mon, 30 Jan 2023 01:46 UTC

On 1/29/23 8:31 PM, Mr Flibble wrote:
> On Sun, 29 Jan 2023 20:14:35 -0500, Richard Damon wrote:
>
>> On 1/29/23 8:04 PM, Mr Flibble wrote:
>>> On Sat, 28 Jan 2023 15:45:38 -0500, Richard Damon wrote:
>>>
>>>> On 1/28/23 3:26 PM, Mr Flibble wrote:
>>>>> [...]
>>>>
>>>>
>>>> You do understand that ChatGPT doesn't actually understand how your
>>>> function "program_halts" is supposed to work since it only has a
>>>> comment to decide, and seems to just assume that it makes a direct
>>>> call.
>>>>
>>>> Thus, its answer isn't actually meaningful.
>>>
>>> You are assuming that ChatGPT is ignoring the comment: on what are you
>>> basing such an asinine assumption?
>>>
>>> /Flibble
>>
>> Read what ChatGPT said about the program.
>>
>> It says that the halt decider call the program provided.
>>
>> That is NOT how a halt decider could work, as then it can not answer if
>> the program does not halt.
>>
>>
>> ChatGPT is NOT a proven source of truth, in fact, when fact checked, on
>> actual somewhat compleicated logical question, it has a very low
>> accuracy rate.
>>
>> This is the case of the Blind being lead by the Blind.
>
> It seems you totally ignored my reply. I will try again:
>
> You are assuming that ChatGPT is ignoring the comment: on what are you
> basing such an asinine assumption?
>
> /Flibble

I am not "assuming"; I am reading what it says: that the halt
detector just calls the function given to it.

That means the "halt detector" fails to be a halt detector if given a
non-halting input.

Note, it didn't say "simulate until it can determine it will not halt";
it says it calls, and that forms an infinite recursion loop.

Re: Halting Problem solved (AI reprise)

<tr79i2$314l3$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=43338&group=comp.theory#43338

From: polco...@gmail.com (olcott)
Subject: Re: Halting Problem solved (AI reprise)
Newsgroups: comp.theory
Date: Sun, 29 Jan 2023 20:20:48 -0600
 by: olcott - Mon, 30 Jan 2023 02:20 UTC

On 1/29/2023 7:44 PM, Richard Damon wrote:
> On 1/29/23 8:19 PM, olcott wrote:
>> On 1/29/2023 7:14 PM, Richard Damon wrote:
>>> On 1/29/23 8:04 PM, Mr Flibble wrote:
>>>> On Sat, 28 Jan 2023 15:45:38 -0500, Richard Damon wrote:
>>>>
>>>>> On 1/28/23 3:26 PM, Mr Flibble wrote:
>>>>>> [...]
>>>>>
>>>>>
>>>>> You do understand that ChatGPT doesn't actually understand how your
>>>>> function "program_halts" is supposed to work since it only has a
>>>>> comment
>>>>> to decide, and seems to just assume that it makes a direct call.
>>>>>
>>>>> Thus, its answer isn't actually meaningful.
>>>>
>>>> You are assuming that ChatGPT is ignoring the comment: on what are you
>>>> basing such an asinine assumption?
>>>>
>>>> /Flibble
>>>
>>> Read what ChatGPT said about the program.
>>>
>>> It says that the halt decider call the program provided.
>>>
>>> That is NOT how a halt decider could work, as then it can not answer
>>> if the program does not halt.
>>>
>>>
>>> ChatGPT is NOT a proven source of truth, in fact, when fact checked,
>>> on actual somewhat compleicated logical question, it has a very low
>>> accuracy rate.
>>>
>>> This is the case of the Blind being lead by the Blind.
>>
>> When I explained the key aspect of my algorithm to ChatGPT it
>> understood that this algorithm is correct.
>
> Nope, it parroted back what you told it in an attempt to come close to
> passing the Turing Test.
>
>>
>> olcott
>> This first sentence is an established fact: When H(D,D) correctly
>> simulates its input the execution trace of this simulated input proves
>> that it would never stop running because D continues to call H in
>> recursive simulation. Then H aborts its simulation of D and returns 0
>> to main, indicating that D would never stop running unless aborted. Is
>> H correct?
>>
>>
> You know ChatGPT has been shown to give many incorrect answers?

Within the context of the conversation the error can be explained to
ChatGPT using simple English, and it doesn't make the same mistake again
within that same conversation.

I tested this on one of the reported errors and was able to get it to
understand and correct its mistake with a single sentence of correction.

olcott:
When I was 6 my sister was half my age. Now I'm 70; how old is my sister?
(wrong answer, followed by this correction)

olcott:
sister's age now = my age now - (my age at 6 years old / 2)
Now ChatGPT gets the correct answer.
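[Editorial note: the arithmetic in the correction checks out directly; the one-line formula only gives the right answer because half of 6 happens to equal the fixed 3-year age gap.]

```python
my_age_then = 6
sister_age_then = my_age_then // 2        # sister was 3
age_gap = my_age_then - sister_age_then   # the gap stays 3 for life
my_age_now = 70
sister_age_now = my_age_now - age_gap     # 67
```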

Unlike my human reviewers, who remain stuck in rebuttal mode, I have
always been able to achieve mutual agreement with ChatGPT.

ChatGPT (and Professor Sipser) were able to correctly determine that my
second sentence is a necessary consequence of my first sentence.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Halting Problem solved (AI reprise)

<xkGBL.379133$MVg8.152264@fx12.iad>

https://www.novabbs.com/devel/article-flat.php?id=43343&group=comp.theory#43343

From: Rich...@Damon-Family.org (Richard Damon)
Subject: Re: Halting Problem solved (AI reprise)
Newsgroups: comp.theory
Date: Sun, 29 Jan 2023 21:55:26 -0500
 by: Richard Damon - Mon, 30 Jan 2023 02:55 UTC

On 1/29/23 9:20 PM, olcott wrote:
> On 1/29/2023 7:44 PM, Richard Damon wrote:
>> On 1/29/23 8:19 PM, olcott wrote:
>>> On 1/29/2023 7:14 PM, Richard Damon wrote:
>>>> On 1/29/23 8:04 PM, Mr Flibble wrote:
>>>>> On Sat, 28 Jan 2023 15:45:38 -0500, Richard Damon wrote:
>>>>>
>>>>>> On 1/28/23 3:26 PM, Mr Flibble wrote:
>>>>>>> [...]
>>>>>>
>>>>>>
>>>>>> You do understand that ChatGPT doesn't actually understand how your
>>>>>> function "program_halts" is supposed to work since it only has a
>>>>>> comment
>>>>>> to decide, and seems to just assume that it makes a direct call.
>>>>>>
>>>>>> Thus, its answer isn't actually meaningful.
>>>>>
>>>>> You are assuming that ChatGPT is ignoring the comment: on what are you
>>>>> basing such an asinine assumption?
>>>>>
>>>>> /Flibble
>>>>
>>>> Read what ChatGPT said about the program.
>>>>
>>>> It says that the halt decider call the program provided.
>>>>
>>>> That is NOT how a halt decider could work, as then it can not answer
>>>> if the program does not halt.
>>>>
>>>>
>>>> ChatGPT is NOT a proven source of truth, in fact, when fact checked,
>>>> on actual somewhat compleicated logical question, it has a very low
>>>> accuracy rate.
>>>>
>>>> This is the case of the Blind being lead by the Blind.
>>>
>>> When I explained the key aspect of my algorithm to ChatGPT it
>>> understood that this algorithm is correct.
>>
>> Nope, it parroted back what you told it in an attempt to come close to
>> passing the Turing Test.
>>
>>>
>>> olcott
>>> This first sentence is an established fact: When H(D,D) correctly
>>> simulates its input the execution trace of this simulated input
>>> proves that it would never stop running because D continues to call H
>>> in recursive simulation. Then H aborts its simulation of D and
>>> returns 0 to main, indicating that D would never stop running unless
>>> aborted. Is H correct?
>>>
>>>
>> You know ChatGPT has been shown to give many incorrect answers?
>
> Within the context of the conversation the error can be explained to
> ChatGPT using simple English and it doesn't make the same mistake again
> within this same conversation.
>
> I tested this on one of the reported errors and was able to get it to
> understand and correct its mistake with a single sentence of correction.
>
> olcott:
> When I was 6 my sister was half my age. Now I'm 70 how old is my sister?
> wrong answer followed by this correction:
>
> olcott:
> sisters age now = my age now - (my age at 6 years old / 2)
> Now ChatGPT gets the correct answer.
>
> Unlike with my human reviewers that remain stuck in rebuttal mode I have
> always been able to achieve mutual agreement with ChatGPT.
>
> ChatGPT (and Professor Sipser) were able to correctly determine that my
> second sentence is a necessary consequence of my first sentence.
>

Fallacy of Proof by Example.

Fallacy of Appeal to Authority (to someone who isn't even an authority).

ChatGPT has been shown to give incorrect answers when asked moderately
complicated factual questions.

If you think your question was moderately complicated, that shows your
level of intelligence (very low).

Re: Halting Problem solved (AI reprise)

<tr7itg$35ano$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=43344&group=comp.theory#43344

Newsgroups: comp.theory
Path: i2pn2.org!i2pn.org!eternal-september.org!reader01.eternal-september.org!.POSTED!not-for-mail
From: polco...@gmail.com (olcott)
Newsgroups: comp.theory
Subject: Re: Halting Problem solved (AI reprise)
Date: Sun, 29 Jan 2023 23:00:31 -0600
Organization: A noiseless patient Spider
Lines: 114
Message-ID: <tr7itg$35ano$1@dont-email.me>
References: <173e92d71d359ef8$3667$380591$7aa12caf@news.newsdemon.com>
<SPfBL.137319$5CY7.25969@fx46.iad>
<173ef09fffb109a4$193$2862885$7aa12caf@news.newsdemon.com>
<%REBL.56569$Lfzc.9977@fx36.iad> <tr75um$30htb$1@dont-email.me>
<QhFBL.56572$Lfzc.44550@fx36.iad> <tr79i2$314l3$1@dont-email.me>
<xkGBL.379133$MVg8.152264@fx12.iad>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Mon, 30 Jan 2023 05:00:32 -0000 (UTC)
Injection-Info: reader01.eternal-september.org; posting-host="305b6a6f28f0c6a0eec54ac1656281c5";
logging-data="3320568"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1+/L2kmiJy2Oy84rW3evc4g"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
Thunderbird/102.6.1
Cancel-Lock: sha1:WYmmrjOK5ob4p3d4b8Ch6r6wFIs=
Content-Language: en-US
In-Reply-To: <xkGBL.379133$MVg8.152264@fx12.iad>
 by: olcott - Mon, 30 Jan 2023 05:00 UTC

On 1/29/2023 8:55 PM, Richard Damon wrote:
> On 1/29/23 9:20 PM, olcott wrote:
>> On 1/29/2023 7:44 PM, Richard Damon wrote:
>>> On 1/29/23 8:19 PM, olcott wrote:
>>>> On 1/29/2023 7:14 PM, Richard Damon wrote:
>>>>> On 1/29/23 8:04 PM, Mr Flibble wrote:
>>>>>> On Sat, 28 Jan 2023 15:45:38 -0500, Richard Damon wrote:
>>>>>>
>>>>>>> On 1/28/23 3:26 PM, Mr Flibble wrote:
>>>>>>>> Hi!
>>>>>>>>
>>>>>>>> I am happy to announce that ChatGPT agrees with my thesis that
>>>>>>>> there is
>>>>>>>> a third outcome for a halt decider: INVALID (I.E. CANNOT DECIDE
>>>>>>>> DUE TO
>>>>>>>> SELF REFERENCE CATEGORY ERROR):
>>>>>>>>
>>>>>>>> https://twitter.com/i42Software/status/1609626194273525760
>>>>>>>>
>>>>>>>> My halt decider:
>>>>>>>>
>>>>>>>> https://github.com/i42output/halting-problem#readme
>>>>>>>>
>>>>>>>> /Flibble
>>>>>>>
>>>>>>>
>>>>>>> You do understand that ChatGPT doesn't actually understand how your
>>>>>>> function "program_halts" is supposed to work since it only has a
>>>>>>> comment
>>>>>>> to decide, and seems to just assume that it makes a direct call.
>>>>>>>
>>>>>>> Thus, its answer isn't actually meaningful.
>>>>>>
>>>>>> You are assuming that ChatGPT is ignoring the comment: on what are
>>>>>> you
>>>>>> basing such an asinine assumption?
>>>>>>
>>>>>> /Flibble
>>>>>
>>>>> Read what ChatGPT said about the program.
>>>>>
>>>>> It says that the halt decider calls the program provided.
>>>>>
>>>>> That is NOT how a halt decider could work, as then it cannot
>>>>> answer if the program does not halt.
>>>>>
>>>>>
>>>>> ChatGPT is NOT a proven source of truth; in fact, when fact
>>>>> checked on actual, somewhat complicated logical questions, it has a
>>>>> very low accuracy rate.
>>>>>
>>>>> This is the case of the Blind being led by the Blind.
>>>>
>>>> When I explained the key aspect of my algorithm to ChatGPT it
>>>> understood that this algorithm is correct.
>>>
>>> Nope, it parroted back what you told it in an attempt to come close
>>> to passing the Turing Test.
>>>
>>>>
>>>> olcott
>>>> This first sentence is an established fact: When H(D,D) correctly
>>>> simulates its input the execution trace of this simulated input
>>>> proves that it would never stop running because D continues to call
>>>> H in recursive simulation.

This second sentence follows from the prior sentence by logical necessity.

>>>> Then H aborts its simulation of D and
>>>> returns 0 to main, indicating that D would never stop running unless
>>>> aborted. Is H correct?
>>>>
>>>>
>>> You know ChatGPT has been shown to give many incorrect answers?
>>
>> Within the context of the conversation the error can be explained to
>> ChatGPT using simple English and it doesn't make the same mistake again
>> within this same conversation.
>>
>> I tested this on one of the reported errors and was able to get it to
>> understand and correct its mistake with a single sentence of correction.
>>
>> olcott:
>> When I was 6 my sister was half my age. Now I'm 70 how old is my sister?
>> wrong answer followed by this correction:
>>
>> olcott:
>> sister's age now = my age now - (my age at 6 years old / 2)
>> Now ChatGPT gets the correct answer.
>>
>> Unlike with my human reviewers that remain stuck in rebuttal mode I have
>> always been able to achieve mutual agreement with ChatGPT.
>>
>> ChatGPT (and Professor Sipser) were able to correctly determine that my
>> second sentence is a necessary consequence of my first sentence.
>>
>
> Fallacy of Proof by Example.
>
> Fallacy of Appeal to Authority (to someone who isn't even an Authority).
>
> ChatGPT has been shown to give incorrect answers when asked moderately
> complicated factual questions.
>
> If you think your question was moderately complicated, that shows your
> level of intelligence (very low).

That you do not acknowledge that my first sentence logically entails my
second sentence proves that you are a liar.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer
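The age arithmetic quoted in this post can be checked directly; a trivial sketch re-deriving the quoted correction:

```python
# Re-derive the sister-age correction quoted above.
my_age_then = 6
sister_age_then = my_age_then / 2          # "half my age": 3
age_gap = my_age_then - sister_age_then    # the gap never changes: 3
my_age_now = 70
sister_age_now = my_age_now - age_gap      # the quoted formula
print(sister_age_now)  # 67.0
```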

Re: Halting Problem solved (AI reprise)

<LxOBL.80632$0dpc.49909@fx33.iad>

https://www.novabbs.com/devel/article-flat.php?id=43352&group=comp.theory#43352

Newsgroups: comp.theory
Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!feed1.usenet.blueworldhosting.com!peer03.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx33.iad.POSTED!not-for-mail
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
Gecko/20100101 Thunderbird/102.6.1
Subject: Re: Halting Problem solved (AI reprise)
Content-Language: en-US
Newsgroups: comp.theory
References: <173e92d71d359ef8$3667$380591$7aa12caf@news.newsdemon.com>
<SPfBL.137319$5CY7.25969@fx46.iad>
<173ef09fffb109a4$193$2862885$7aa12caf@news.newsdemon.com>
<%REBL.56569$Lfzc.9977@fx36.iad> <tr75um$30htb$1@dont-email.me>
<QhFBL.56572$Lfzc.44550@fx36.iad> <tr79i2$314l3$1@dont-email.me>
<xkGBL.379133$MVg8.152264@fx12.iad> <tr7itg$35ano$1@dont-email.me>
From: Rich...@Damon-Family.org (Richard Damon)
In-Reply-To: <tr7itg$35ano$1@dont-email.me>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Lines: 132
Message-ID: <LxOBL.80632$0dpc.49909@fx33.iad>
X-Complaints-To: abuse@easynews.com
Organization: Forte - www.forteinc.com
X-Complaints-Info: Please be sure to forward a copy of ALL headers otherwise we will be unable to process your complaint properly.
Date: Mon, 30 Jan 2023 07:15:40 -0500
X-Received-Bytes: 6060
 by: Richard Damon - Mon, 30 Jan 2023 12:15 UTC

On 1/30/23 12:00 AM, olcott wrote:
> On 1/29/2023 8:55 PM, Richard Damon wrote:
>> On 1/29/23 9:20 PM, olcott wrote:
>>> On 1/29/2023 7:44 PM, Richard Damon wrote:
>>>> On 1/29/23 8:19 PM, olcott wrote:
>>>>> On 1/29/2023 7:14 PM, Richard Damon wrote:
>>>>>> On 1/29/23 8:04 PM, Mr Flibble wrote:
>>>>>>> On Sat, 28 Jan 2023 15:45:38 -0500, Richard Damon wrote:
>>>>>>>
>>>>>>>> On 1/28/23 3:26 PM, Mr Flibble wrote:
>>>>>>>>> Hi!
>>>>>>>>>
>>>>>>>>> I am happy to announce that ChatGPT agrees with my thesis that
>>>>>>>>> there is
>>>>>>>>> a third outcome for a halt decider: INVALID (I.E. CANNOT DECIDE
>>>>>>>>> DUE TO
>>>>>>>>> SELF REFERENCE CATEGORY ERROR):
>>>>>>>>>
>>>>>>>>> https://twitter.com/i42Software/status/1609626194273525760
>>>>>>>>>
>>>>>>>>> My halt decider:
>>>>>>>>>
>>>>>>>>> https://github.com/i42output/halting-problem#readme
>>>>>>>>>
>>>>>>>>> /Flibble
>>>>>>>>
>>>>>>>>
>>>>>>>> You do understand that ChatGPT doesn't actually understand how your
>>>>>>>> function "program_halts" is supposed to work since it only has a
>>>>>>>> comment
>>>>>>>> to decide, and seems to just assume that it makes a direct call.
>>>>>>>>
>>>>>>>> Thus, its answer isn't actually meaningful.
>>>>>>>
>>>>>>> You are assuming that ChatGPT is ignoring the comment: on what
>>>>>>> are you
>>>>>>> basing such an asinine assumption?
>>>>>>>
>>>>>>> /Flibble
>>>>>>
>>>>>> Read what ChatGPT said about the program.
>>>>>>
>>>>>> It says that the halt decider calls the program provided.
>>>>>>
>>>>>> That is NOT how a halt decider could work, as then it cannot
>>>>>> answer if the program does not halt.
>>>>>>
>>>>>>
>>>>>> ChatGPT is NOT a proven source of truth; in fact, when fact
>>>>>> checked on actual, somewhat complicated logical questions, it has
>>>>>> a very low accuracy rate.
>>>>>>
>>>>>> This is the case of the Blind being led by the Blind.
>>>>>
>>>>> When I explained the key aspect of my algorithm to ChatGPT it
>>>>> understood that this algorithm is correct.
>>>>
>>>> Nope, it parroted back what you told it in an attempt to come close
>>>> to passing the Turing Test.
>>>>
>>>>>
>>>>> olcott
>>>>> This first sentence is an established fact: When H(D,D) correctly
>>>>> simulates its input the execution trace of this simulated input
>>>>> proves that it would never stop running because D continues to call
>>>>> H in recursive simulation.
>
> This second sentence follows from the prior sentence by logical necessity.

Nope, it presumes the fact that H(D,D) does correctly simulate its input.

"When" never occurs for THIS H. Remember, H is a fixed defined machine,
NOT some "class" of machines. You are just showing that you don't
understand the basics.

>
>>>>> Then H aborts its simulation of D and returns 0 to main, indicating
>>>>> that D would never stop running unless aborted. Is H correct?
>>>>>
>>>>>
>>>> You know ChatGPT has been shown to give many incorrect answers?
>>>
>>> Within the context of the conversation the error can be explained to
>>> ChatGPT using simple English and it doesn't make the same mistake again
>>> within this same conversation.
>>>
>>> I tested this on one of the reported errors and was able to get it to
>>> understand and correct its mistake with a single sentence of correction.
>>>
>>> olcott:
>>> When I was 6 my sister was half my age. Now I'm 70 how old is my sister?
>>> wrong answer followed by this correction:
>>>
>>> olcott:
>>> sister's age now = my age now - (my age at 6 years old / 2)
>>> Now ChatGPT gets the correct answer.
>>>
>>> Unlike with my human reviewers that remain stuck in rebuttal mode I have
>>> always been able to achieve mutual agreement with ChatGPT.
>>>
>>> ChatGPT (and Professor Sipser) were able to correctly determine that my
>>> second sentence is a necessary consequence of my first sentence.
>>>
>>
>> Fallacy of Proof by Example.
>>
>> Fallacy of Appeal to Authority (to someone who isn't even an Authority).
>>
>> ChatGPT has been shown to give incorrect answers when asked moderately
>> complicated factual questions.
>>
>> If you think your question was moderately complicated, that shows your
>> level of intelligence (very low).
>
> That you do not acknowledge that my first sentence logically entails my
> second sentence proves that you are a liar.
>

Nope. The sentence:

If this sentence is True, then Peter Olcott is a Hypocritical
Pathological Lying Ignorant Idiot.

can be shown to be true by the same method by which you are claiming your
first sentence is true, and thus its conclusion is "proven" to be a fact.

You are just showing that you don't understand the basics of Logic
Theory and are just repeating all the mistakes that have been made in the
past.

Your self-imposed ignorance of the theory has made you into an idiot.

Re: Halting Problem solved (AI reprise)

<tr8mad$3b0eb$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=43353&group=comp.theory#43353

Newsgroups: comp.theory
Path: i2pn2.org!i2pn.org!eternal-september.org!reader01.eternal-september.org!.POSTED!not-for-mail
From: polco...@gmail.com (olcott)
Newsgroups: comp.theory
Subject: Re: Halting Problem solved (AI reprise)
Date: Mon, 30 Jan 2023 09:04:44 -0600
Organization: A noiseless patient Spider
Lines: 99
Message-ID: <tr8mad$3b0eb$1@dont-email.me>
References: <173e92d71d359ef8$3667$380591$7aa12caf@news.newsdemon.com>
<SPfBL.137319$5CY7.25969@fx46.iad>
<173ef09fffb109a4$193$2862885$7aa12caf@news.newsdemon.com>
<%REBL.56569$Lfzc.9977@fx36.iad> <tr75um$30htb$1@dont-email.me>
<QhFBL.56572$Lfzc.44550@fx36.iad> <tr79i2$314l3$1@dont-email.me>
<xkGBL.379133$MVg8.152264@fx12.iad> <tr7itg$35ano$1@dont-email.me>
<LxOBL.80632$0dpc.49909@fx33.iad>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Mon, 30 Jan 2023 15:04:45 -0000 (UTC)
Injection-Info: reader01.eternal-september.org; posting-host="305b6a6f28f0c6a0eec54ac1656281c5";
logging-data="3506635"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1/oCjZIAo64G+2b3DhHuzXB"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
Thunderbird/102.6.1
Cancel-Lock: sha1:rWfIjZxM0oS6ZEDPHHlyXfmBew0=
Content-Language: en-US
In-Reply-To: <LxOBL.80632$0dpc.49909@fx33.iad>
 by: olcott - Mon, 30 Jan 2023 15:04 UTC

On 1/30/2023 6:15 AM, Richard Damon wrote:
> On 1/30/23 12:00 AM, olcott wrote:
>> On 1/29/2023 8:55 PM, Richard Damon wrote:
>>> On 1/29/23 9:20 PM, olcott wrote:
>>>> On 1/29/2023 7:44 PM, Richard Damon wrote:
>>>>> On 1/29/23 8:19 PM, olcott wrote:
>>>>>> On 1/29/2023 7:14 PM, Richard Damon wrote:
>>>>>>> On 1/29/23 8:04 PM, Mr Flibble wrote:
>>>>>>>> On Sat, 28 Jan 2023 15:45:38 -0500, Richard Damon wrote:
>>>>>>>>
>>>>>>>>> On 1/28/23 3:26 PM, Mr Flibble wrote:
>>>>>>>>>> Hi!
>>>>>>>>>>
>>>>>>>>>> I am happy to announce that ChatGPT agrees with my thesis that
>>>>>>>>>> there is
>>>>>>>>>> a third outcome for a halt decider: INVALID (I.E. CANNOT
>>>>>>>>>> DECIDE DUE TO
>>>>>>>>>> SELF REFERENCE CATEGORY ERROR):
>>>>>>>>>>
>>>>>>>>>> https://twitter.com/i42Software/status/1609626194273525760
>>>>>>>>>>
>>>>>>>>>> My halt decider:
>>>>>>>>>>
>>>>>>>>>> https://github.com/i42output/halting-problem#readme
>>>>>>>>>>
>>>>>>>>>> /Flibble
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> You do understand that ChatGPT doesn't actually understand how
>>>>>>>>> your
>>>>>>>>> function "program_halts" is supposed to work since it only has
>>>>>>>>> a comment
>>>>>>>>> to decide, and seems to just assume that it makes a direct call.
>>>>>>>>>
>>>>>>>>> Thus, its answer isn't actually meaningful.
>>>>>>>>
>>>>>>>> You are assuming that ChatGPT is ignoring the comment: on what
>>>>>>>> are you
>>>>>>>> basing such an asinine assumption?
>>>>>>>>
>>>>>>>> /Flibble
>>>>>>>
>>>>>>> Read what ChatGPT said about the program.
>>>>>>>
>>>>>>> It says that the halt decider calls the program provided.
>>>>>>>
>>>>>>> That is NOT how a halt decider could work, as then it cannot
>>>>>>> answer if the program does not halt.
>>>>>>>
>>>>>>>
>>>>>>> ChatGPT is NOT a proven source of truth; in fact, when fact
>>>>>>> checked on actual, somewhat complicated logical questions, it has
>>>>>>> a very low accuracy rate.
>>>>>>>
>>>>>>> This is the case of the Blind being led by the Blind.
>>>>>>
>>>>>> When I explained the key aspect of my algorithm to ChatGPT it
>>>>>> understood that this algorithm is correct.
>>>>>
>>>>> Nope, it parroted back what you told it in an attempt to come close
>>>>> to passing the Turing Test.
>>>>>
>>>>>>
>>>>>> olcott
>>>>>> This first sentence is an established fact: When H(D,D) correctly
>>>>>> simulates its input the execution trace of this simulated input
>>>>>> proves that it would never stop running because D continues to
>>>>>> call H in recursive simulation.
>>
>> This second sentence follows from the prior sentence by logical necessity.
>
> Nope, it presumes the fact that H(D,D) does correctly simulate its input.
>

You don't even understand how logical deduction works, do you?
That there are millions of tons of green cheese on the Moon follows by
logical necessity from the premise that the Moon is entirely made of
green cheese.

Thus the second sentence follows from the prior sentence by logical necessity
even if the first sentence is false. That you do not even understand
elementary validity proves that your rebuttal has no basis.
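The validity point made above (a conclusion can follow from a premise even when the premise is false) can be illustrated with material implication; this is a hypothetical toy, not code from the thread:

```python
# Material implication: "P implies Q" is true whenever P is false,
# so a valid inference from a false premise establishes nothing
# about whether Q itself is true.
def implies(p, q):
    return (not p) or q

moon_is_green_cheese = False
tons_of_cheese_on_moon = moon_is_green_cheese  # would follow if premise held

# The conditional is (vacuously) true...
print(implies(moon_is_green_cheese, tons_of_cheese_on_moon))  # True
# ...but the conclusion itself has not been established:
print(tons_of_cheese_on_moon)  # False
```

This is the distinction between validity (the inference is correct) and soundness (the premise is also true), which is where the two posters are talking past each other.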

> "When" never occurs for THIS H. Remember, H is a fixed defined machine,
> NOT some "class" of machines. You are just showing that you don't
> understand the basics.
>

It is a verified fact that H does correctly simulate D until H correctly
determines that the simulated D would never stop running unless aborted.

Two people with master's degrees in computer science, and Ben, agree with
this.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Halting Problem solved (AI reprise)

<173f27c2a939cc95$3145$404183$faa1aca7@news.newsdemon.com>

https://www.novabbs.com/devel/article-flat.php?id=43359&group=comp.theory#43359

Newsgroups: comp.theory
From: flib...@reddwarf.jmc.corp (Mr Flibble)
Subject: Re: Halting Problem solved (AI reprise)
Newsgroups: comp.theory
References: <173e92d71d359ef8$3667$380591$7aa12caf@news.newsdemon.com> <SPfBL.137319$5CY7.25969@fx46.iad> <173ef09fffb109a4$193$2862885$7aa12caf@news.newsdemon.com> <%REBL.56569$Lfzc.9977@fx36.iad> <173ef210c8763fd0$4896$354281$3aa16cab@news.newsdemon.com> <zjFBL.56573$Lfzc.14891@fx36.iad>
User-Agent: Pan/0.146 (Hic habitat felicitas; d7a48b4 gitlab.gnome.org/GNOME/pan.git)
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Lines: 73
Path: i2pn2.org!i2pn.org!weretis.net!feeder8.news.weretis.net!newsreader4.netcologne.de!news.netcologne.de!peer03.ams1!peer.ams1.xlned.com!news.xlned.com!peer02.ams4!peer.am4.highwinds-media.com!news.highwinds-media.com!tr3.eu1.usenetexpress.com!feeder.usenetexpress.com!tr1.iad1.usenetexpress.com!news.newsdemon.com!not-for-mail
Date: Mon, 30 Jan 2023 17:55:03 +0000
Nntp-Posting-Date: Mon, 30 Jan 2023 17:55:03 +0000
Organization: NewsDemon - www.newsdemon.com
X-Complaints-To: abuse@newsdemon.com
Message-Id: <173f27c2a939cc95$3145$404183$faa1aca7@news.newsdemon.com>
X-Received-Bytes: 3672
 by: Mr Flibble - Mon, 30 Jan 2023 17:55 UTC

On Sun, 29 Jan 2023 20:46:07 -0500, Richard Damon wrote:

> On 1/29/23 8:31 PM, Mr Flibble wrote:
>> On Sun, 29 Jan 2023 20:14:35 -0500, Richard Damon wrote:
>>
>>> On 1/29/23 8:04 PM, Mr Flibble wrote:
>>>> On Sat, 28 Jan 2023 15:45:38 -0500, Richard Damon wrote:
>>>>
>>>>> On 1/28/23 3:26 PM, Mr Flibble wrote:
>>>>>> Hi!
>>>>>>
>>>>>> I am happy to announce that ChatGPT agrees with my thesis that
>>>>>> there is a third outcome for a halt decider: INVALID (I.E. CANNOT
>>>>>> DECIDE DUE TO SELF REFERENCE CATEGORY ERROR):
>>>>>>
>>>>>> https://twitter.com/i42Software/status/1609626194273525760
>>>>>>
>>>>>> My halt decider:
>>>>>>
>>>>>> https://github.com/i42output/halting-problem#readme
>>>>>>
>>>>>> /Flibble
>>>>>
>>>>>
>>>>> You do understand that ChatGPT doesn't actually understand how your
>>>>> function "program_halts" is supposed to work since it only has a
>>>>> comment to decide, and seems to just assume that it makes a direct
>>>>> call.
>>>>>
>>>>> Thus, its answer isn't actually meaningful.
>>>>
>>>> You are assuming that ChatGPT is ignoring the comment: on what are
>>>> you basing such an asinine assumption?
>>>>
>>>> /Flibble
>>>
>>> Read what ChatGPT said about the program.
>>>
>>> It says that the halt decider calls the program provided.
>>>
>>> That is NOT how a halt decider could work, as then it cannot answer
>>> if the program does not halt.
>>>
>>>
>>> ChatGPT is NOT a proven source of truth; in fact, when fact checked
>>> on actual, somewhat complicated logical questions, it has a very low
>>> accuracy rate.
>>>
>>> This is the case of the Blind being led by the Blind.
>>
>> It seems you totally ignored my reply. I will try again:
>>
>> You are assuming that ChatGPT is ignoring the comment: on what are you
>> basing such an asinine assumption?
>>
>> /Flibble
>
> I am not "Assuming"; I am reading what it says: it says that the halt
> detector just calls the function given to it.
>
> That means the "Halt Detector" Fails to be a Halt Detector if given a
> non-halting input.
>
> Note, it didn't say "Simulate until it can determine it will not halt",
> it says it calls it, and that forms an infinite recursion loop.

Yes, it says there is a pathological self-reference, thus confirming my
assertion that the ONLY method to determine if a program halts is through
simulation; it also confirms my thesis: "it is not possible to determine
whether it will halt or not" BECAUSE of the circular reference, i.e. it
exhibits the category error that I have uniquely identified.

/Flibble
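The simulation-only claim above can be sketched as a step-bounded simulator. This is a hypothetical Python toy (Flibble's actual decider is the C++ project linked at the top of the thread); it illustrates that bounded simulation can confirm halting but can never, by itself, confirm non-halting:

```python
# Hypothetical sketch of deciding by bounded simulation: run the
# program step-by-step and give up after a budget, since simulation
# alone can never confirm non-halting.
from enum import Enum

class Verdict(Enum):
    HALTS = 1
    UNKNOWN = 2   # budget exhausted: could be non-halting or just slow

def bounded_simulate(program, budget=1000):
    """program is a generator function that yields once per simulated step."""
    steps = program()
    for _ in range(budget):
        try:
            next(steps)
        except StopIteration:
            return Verdict.HALTS
    return Verdict.UNKNOWN

def halts_quickly():
    yield from range(3)       # halts after 3 steps

def loops_forever():
    while True:
        yield                 # never halts

print(bounded_simulate(halts_quickly))   # Verdict.HALTS
print(bounded_simulate(loops_forever))   # Verdict.UNKNOWN
```

A third verdict like the INVALID proposed in this thread would require recognizing the self-referential pattern some other way; the simulator alone can only ever report HALTS or UNKNOWN.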

Re: Halting Problem solved (AI reprise)

<dCYBL.93448$rKDc.65120@fx34.iad>

https://www.novabbs.com/devel/article-flat.php?id=43368&group=comp.theory#43368

Newsgroups: comp.theory
Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!feed1.usenet.blueworldhosting.com!peer03.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx34.iad.POSTED!not-for-mail
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
Gecko/20100101 Thunderbird/102.6.1
Subject: Re: Halting Problem solved (AI reprise)
Content-Language: en-US
Newsgroups: comp.theory
References: <173e92d71d359ef8$3667$380591$7aa12caf@news.newsdemon.com>
<SPfBL.137319$5CY7.25969@fx46.iad>
<173ef09fffb109a4$193$2862885$7aa12caf@news.newsdemon.com>
<%REBL.56569$Lfzc.9977@fx36.iad> <tr75um$30htb$1@dont-email.me>
<QhFBL.56572$Lfzc.44550@fx36.iad> <tr79i2$314l3$1@dont-email.me>
<xkGBL.379133$MVg8.152264@fx12.iad> <tr7itg$35ano$1@dont-email.me>
<LxOBL.80632$0dpc.49909@fx33.iad> <tr8mad$3b0eb$1@dont-email.me>
From: Rich...@Damon-Family.org (Richard Damon)
In-Reply-To: <tr8mad$3b0eb$1@dont-email.me>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Lines: 122
Message-ID: <dCYBL.93448$rKDc.65120@fx34.iad>
X-Complaints-To: abuse@easynews.com
Organization: Forte - www.forteinc.com
X-Complaints-Info: Please be sure to forward a copy of ALL headers otherwise we will be unable to process your complaint properly.
Date: Mon, 30 Jan 2023 18:43:05 -0500
X-Received-Bytes: 5847
 by: Richard Damon - Mon, 30 Jan 2023 23:43 UTC

On 1/30/23 10:04 AM, olcott wrote:
> On 1/30/2023 6:15 AM, Richard Damon wrote:
>> On 1/30/23 12:00 AM, olcott wrote:
>>> On 1/29/2023 8:55 PM, Richard Damon wrote:
>>>> On 1/29/23 9:20 PM, olcott wrote:
>>>>> On 1/29/2023 7:44 PM, Richard Damon wrote:
>>>>>> On 1/29/23 8:19 PM, olcott wrote:
>>>>>>> On 1/29/2023 7:14 PM, Richard Damon wrote:
>>>>>>>> On 1/29/23 8:04 PM, Mr Flibble wrote:
>>>>>>>>> On Sat, 28 Jan 2023 15:45:38 -0500, Richard Damon wrote:
>>>>>>>>>
>>>>>>>>>> On 1/28/23 3:26 PM, Mr Flibble wrote:
>>>>>>>>>>> Hi!
>>>>>>>>>>>
>>>>>>>>>>> I am happy to announce that ChatGPT agrees with my thesis
>>>>>>>>>>> that there is
>>>>>>>>>>> a third outcome for a halt decider: INVALID (I.E. CANNOT
>>>>>>>>>>> DECIDE DUE TO
>>>>>>>>>>> SELF REFERENCE CATEGORY ERROR):
>>>>>>>>>>>
>>>>>>>>>>> https://twitter.com/i42Software/status/1609626194273525760
>>>>>>>>>>>
>>>>>>>>>>> My halt decider:
>>>>>>>>>>>
>>>>>>>>>>> https://github.com/i42output/halting-problem#readme
>>>>>>>>>>>
>>>>>>>>>>> /Flibble
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> You do understand that ChatGPT doesn't actually understand how
>>>>>>>>>> your
>>>>>>>>>> function "program_halts" is supposed to work since it only has
>>>>>>>>>> a comment
>>>>>>>>>> to decide, and seems to just assume that it makes a direct call.
>>>>>>>>>>
>>>>>>>>>> Thus, its answer isn't actually meaningful.
>>>>>>>>>
>>>>>>>>> You are assuming that ChatGPT is ignoring the comment: on what
>>>>>>>>> are you
>>>>>>>>> basing such an asinine assumption?
>>>>>>>>>
>>>>>>>>> /Flibble
>>>>>>>>
>>>>>>>> Read what ChatGPT said about the program.
>>>>>>>>
>>>>>>>> It says that the halt decider calls the program provided.
>>>>>>>>
>>>>>>>> That is NOT how a halt decider could work, as then it cannot
>>>>>>>> answer if the program does not halt.
>>>>>>>>
>>>>>>>>
>>>>>>>> ChatGPT is NOT a proven source of truth; in fact, when fact
>>>>>>>> checked on actual, somewhat complicated logical questions, it
>>>>>>>> has a very low accuracy rate.
>>>>>>>>
>>>>>>>> This is the case of the Blind being led by the Blind.
>>>>>>>
>>>>>>> When I explained the key aspect of my algorithm to ChatGPT it
>>>>>>> understood that this algorithm is correct.
>>>>>>
>>>>>> Nope, it parroted back what you told it in an attempt to come
>>>>>> close to passing the Turing Test.
>>>>>>
>>>>>>>
>>>>>>> olcott
>>>>>>> This first sentence is an established fact: When H(D,D) correctly
>>>>>>> simulates its input the execution trace of this simulated input
>>>>>>> proves that it would never stop running because D continues to
>>>>>>> call H in recursive simulation.
>>>
>>> This second sentence follows from the prior sentence by logical necessity.
>>
>> Nope, it presumes the fact that H(D,D) does correctly simulate its input.
>>
>
> You don't even understand how logical deduction works, do you?
> That there are millions of tons of green cheese on the Moon follows by
> logical necessity from the premise that the Moon is entirely made of
> green cheese.

Right, but you can't use that to establish the truth of the conclusion.

>
> Thus the second sentence follows from the prior sentence by logical necessity
> even if the first sentence is false. That you do not even understand
> elementary validity proves that your rebuttal has no basis.

But you haven't established that the first sentence actually occurs.

To get the second, you need to PROVE the first, without using it.

That is the result of the Curry Paradox.

Since H DOESN'T do a correct simulation by the definition that you need
to use to replace the simulation with the direct execution (which requires
a complete simulation), you can't use the statement to prove that the
answer is correct.

>
>
>> "When" never occurs for THIS H. Remember, H is a fixed defined
>> machine, NOT some "class" of machines. You are just showing that you
>> don't understand the basics.
>>
>
> It is a verified fact that H does correctly simulate D until H correctly
> determines that the simulated D would never stop running unless aborted.

Nope, an aborted simulation is NEVER correct by the definition of a
UTM, and without that definition, you can't substitute the simulation for
the original.

>
> Two people with master's degrees in computer science, and Ben, agree with
> this.
>

No, they have said that *IF* H correctly simulates far enough to
*CORRECTLY* and *LOGICALLY* conclude that a correct simulation of this
input will never halt, it can abort and return non-halting.

Since this H will ALWAYS abort its simulation, that conclusion is NEVER
correct.
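The objection in this post can be made concrete with a toy: if a simulator always aborts at a fixed budget and then reports non-halting, it is wrong about any program that halts just past that budget. This is a hypothetical Python sketch, not the actual H being debated:

```python
# Toy model of an H that always aborts at a fixed step budget and then
# *claims* non-halting. It is refuted by a program that halts one step
# past the budget. (Hypothetical; not anyone's actual decider.)
BUDGET = 100

def H_always_aborts(program):
    """Simulate up to BUDGET steps; claim non-halting if no halt seen."""
    steps = program()
    for _ in range(BUDGET):
        try:
            next(steps)
        except StopIteration:
            return True          # observed a halt: verdict "halts"
    return False                 # aborted: verdict "never halts"

def halts_at_101():
    yield from range(BUDGET + 1) # halts, but only after BUDGET + 1 steps

print(H_always_aborts(halts_at_101))  # False, yet the program does halt
```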
