Rocksolid Light



devel / comp.theory / Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?

Subject  (Author)
* Why does H1(D,D) actually get a different result than H(D,D) ???  (olcott)
+* Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (Richard Damon)
|+* Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (olcott)
||+* Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (Richard Damon)
|||`* Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (olcott)
||| +* Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (Richard Damon)
||| |`* Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (olcott)
||| | +* Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (Richard Damon)
||| | |`* Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (olcott)
||| | | +* Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (Mikko)
||| | | |`* Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (olcott)
||| | | | +- Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (Richard Damon)
||| | | | `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (Mikko)
||| | | |  `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (olcott)
||| | | |   +* Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (Richard Damon)
||| | | |   |`* Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (olcott)
||| | | |   | `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (Richard Damon)
||| | | |   |  `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (olcott)
||| | | |   |   `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (Richard Damon)
||| | | |   |    `* How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (olcott)
||| | | |   |     +* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (Richard Damon)
||| | | |   |     |`* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (olcott)
||| | | |   |     | +* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (Richard Damon)
||| | | |   |     | |`* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (olcott)
||| | | |   |     | | `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (Richard Damon)
||| | | |   |     | |  `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (olcott)
||| | | |   |     | |   `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (Richard Damon)
||| | | |   |     | |    `- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (olcott)
||| | | |   |     | +* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (Mikko)
||| | | |   |     | |`* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (olcott)
||| | | |   |     | | +* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (Richard Damon)
||| | | |   |     | | |`* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (olcott)
||| | | |   |     | | | `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (Richard Damon)
||| | | |   |     | | |  `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (olcott)
||| | | |   |     | | |   `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (Richard Damon)
||| | | |   |     | | |    `- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (Richard Damon)
||| | | |   |     | | `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (Mikko)
||| | | |   |     | |  `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (olcott)
||| | | |   |     | |   +* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (Richard Damon)
||| | | |   |     | |   |`* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (olcott)
||| | | |   |     | |   | `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (Richard Damon)
||| | | |   |     | |   |  `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (olcott)
||| | | |   |     | |   |   `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (Richard Damon)
||| | | |   |     | |   |    `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (olcott)
||| | | |   |     | |   |     `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (Richard Damon)
||| | | |   |     | |   |      `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (olcott)
||| | | |   |     | |   |       `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (Richard Damon)
||| | | |   |     | |   |        `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (olcott)
||| | | |   |     | |   |         `- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (Richard Damon)
||| | | |   |     | |   `- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (immibis)
||| | | |   |     | `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (Richard Damon)
||| | | |   |     |  `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (olcott)
||| | | |   |     |   `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (Richard Damon)
||| | | |   |     |    +* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (olcott)
||| | | |   |     |    |+- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (Richard Damon)
||| | | |   |     |    |`* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (Mikko)
||| | | |   |     |    | `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (olcott)
||| | | |   |     |    |  +- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (immibis)
||| | | |   |     |    |  `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (Mikko)
||| | | |   |     |    |   `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (olcott)
||| | | |   |     |    |    +* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (immibis)
||| | | |   |     |    |    |+* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (olcott)
||| | | |   |     |    |    ||+- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (Richard Damon)
||| | | |   |     |    |    ||`* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (immibis)
||| | | |   |     |    |    || `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (olcott)
||| | | |   |     |    |    ||  +- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (immibis)
||| | | |   |     |    |    ||  `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (Richard Damon)
||| | | |   |     |    |    ||   `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (olcott)
||| | | |   |     |    |    ||    `- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (Richard Damon)
||| | | |   |     |    |    |`- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (Richard Damon)
||| | | |   |     |    |    `- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (Richard Damon)
||| | | |   |     |    `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (Mikko)
||| | | |   |     |     `- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (olcott)
||| | | |   |     `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (immibis)
||| | | |   |      `- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?  (olcott)
||| | | |   `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (Mikko)
||| | | |    `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (olcott)
||| | | |     +* Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (Richard Damon)
||| | | |     |`* Correcting the foundation of analytic truth and Linz H ⟨Ĥ⟩ ⟨Ĥ⟩  (olcott)
||| | | |     | `* Re: Correcting the foundation of analytic truth and Linz H ⟨Ĥ⟩ ⟨Ĥ⟩  (Richard Damon)
||| | | |     |  `* Re: Correcting the foundation of analytic truth and Linz H ⟨Ĥ⟩ ⟨Ĥ⟩  (olcott)
||| | | |     |   `- Re: Correcting the foundation of analytic truth and Linz H ⟨Ĥ⟩ ⟨Ĥ⟩  (Richard Damon)
||| | | |     `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (Mikko)
||| | | |      `* Why do H ⟨Ĥ⟩ ⟨Ĥ⟩ and Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ derive different results ?  (olcott)
||| | | |       +- Re: Why do H ⟨Ĥ⟩ ⟨Ĥ⟩ and Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ derive different results ?  (immibis)
||| | | |       `- Re: Why do H ⟨Ĥ⟩ ⟨Ĥ⟩ and Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ derive different results ?  (Richard Damon)
||| | | `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (Richard Damon)
||| | |  `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (olcott)
||| | |   `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (Richard Damon)
||| | |    `* Actual limits of computations != actual limits of computers with unlimited memory ?  (olcott)
||| | |     `* Re: Actual limits of computations != actual limits of computers with unlimited memory ?  (Richard Damon)
||| | |      `* Re: Actual limits of computations != actual limits of computers with unlimited memory ?  (olcott)
||| | |       `* Re: Actual limits of computations != actual limits of computers with unlimited memory ?  (Richard Damon)
||| | |        `* Re: Actual limits of computations != actual limits of computers with unlimited memory ?  (olcott)
||| | |         +* Re: Actual limits of computations != actual limits of computers with unlimited memory ?  (Richard Damon)
||| | |         |`* Limits of computations != actual limits of computers [ Church Turing ]  (olcott)
||| | |         | +* Re: Limits of computations != actual limits of computers [ Church Turing ]  (Richard Damon)
||| | |         | |`* Re: Limits of computations != actual limits of computers [ Church Turing ]  (olcott)
||| | |         | | `* Re: Limits of computations != actual limits of computers [ Church Turing ]  (Richard Damon)
||| | |         | |  +* Re: Limits of computations != actual limits of computers [ Church Turing ]  (olcott)
||| | |         | |  |+* Re: Limits of computations != actual limits of computers [ Church Turing ]  (immibis)
||| | |         | |  |`* Re: Limits of computations != actual limits of computers [ Church Turing ]  (Richard Damon)
||| | |         | |  `* Re: Limits of computations != actual limits of computers [ Church Turing ]  (olcott)
||| | |         | `- Re: Finlayson [ Church Turing ]  (Ross Finlayson)
||| | |         `* Re: Actual limits of computations != actual limits of computers with unlimited memory ?  (Mikko)
||| | `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (immibis)
||| `- Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (immibis)
||`- Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (Mikko)
|+- Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (olcott)
|+* Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (olcott)
|`* Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (Tristan Wibberley)
+* Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (immibis)
`* Re: Why does H1(D,D) actually get a different result than H(D,D) ???  (Mikko)

Re: Why does H1(D,D) actually get a different result than H(D,D) ???

<us6osg$3morm$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=53988&group=comp.theory#53988
Path: i2pn2.org!i2pn.org!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: mikko.le...@iki.fi (Mikko)
Newsgroups: comp.theory
Subject: Re: Why does H1(D,D) actually get a different result than H(D,D) ???
Date: Tue, 5 Mar 2024 11:33:04 +0200
Organization: -
Lines: 49
Message-ID: <us6osg$3morm$1@dont-email.me>
References: <us096s$23k0u$2@dont-email.me> <us0ao0$fjqv$19@i2pn2.org> <us0b51$23tf5$1@dont-email.me> <us0hbq$fjqv$20@i2pn2.org> <us0ii7$25emo$1@dont-email.me> <us0jrl$fjqu$9@i2pn2.org> <us0ka2$25m8f$1@dont-email.me> <us0kp0$fjqu$12@i2pn2.org> <us0m4i$25m8f$4@dont-email.me> <us1kti$2f46h$1@dont-email.me> <us23p2$2i101$1@dont-email.me> <us2d4s$2k65l$1@dont-email.me> <us2gk1$2ksv3$2@dont-email.me> <us43rr$32mqs$1@dont-email.me> <us58r3$3aoj4$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Info: dont-email.me; posting-host="8f92a93ab87016805ff6bd0ad153beb0";
logging-data="3892086"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1+jkEw9rkQLPKETzUIHSvSS"
User-Agent: Unison/2.2
Cancel-Lock: sha1:Jx1pSdV9L/xfuRi7iT4C8wWuN3I=
 by: Mikko - Tue, 5 Mar 2024 09:33 UTC

On 2024-03-04 19:53:05 +0000, olcott said:

> On 3/4/2024 3:22 AM, Mikko wrote:
>> On 2024-03-03 18:47:29 +0000, olcott said:
>>
>>> On 3/3/2024 11:48 AM, Mikko wrote:
>>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>>
>>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>>
>>>>>>> None-the-less actual computers do actually demonstrate
>>>>>>> actual very deep understanding of these things.
>>>>>>
>>>>>> Not very deep, just deeper than you can achieve.
>>>>>>
>>>>>
>>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>>
>>>> That does not demonstrate any understanding, even shallow.
>>>>
>>>
>>> The first thing that it does is agree that Hehner's
>>> "Carol's question" (augmented by Richard's critique)
>>> is an example of the Liar Paradox.
>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>
>>> It ends up concluding that myself, professor Hehner and
>>> professor Stoddart are all correct in that there is
>>> something wrong with the halting problem.
>>
>> None of that demonstrates any understanding.
>>
>>> My persistent focus on these ideas gives me an increasingly
>>> deeper understanding thus my latest position is that the
>>> halting problem proofs do not actually show that halting
>>> is not computable.
>>
>> Your understanding is still defective and shallow.
>>
>
> If it really was shallow then a gap in my reasoning
> could be pointed out.

Gaps in your reasoning are pointed out every day.

--
Mikko

Re: Actual limits of computations != actual limits of computers with unlimited memory ?

<us6q59$3n0k2$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=53989&group=comp.theory#53989
Path: i2pn2.org!i2pn.org!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: mikko.le...@iki.fi (Mikko)
Newsgroups: comp.theory
Subject: Re: Actual limits of computations != actual limits of computers with unlimited memory ?
Date: Tue, 5 Mar 2024 11:54:49 +0200
Organization: -
Lines: 14
Message-ID: <us6q59$3n0k2$1@dont-email.me>
References: <us096s$23k0u$2@dont-email.me> <us0ao0$fjqv$19@i2pn2.org> <us0b51$23tf5$1@dont-email.me> <us0hbq$fjqv$20@i2pn2.org> <us0ii7$25emo$1@dont-email.me> <us0jrl$fjqu$9@i2pn2.org> <us0ka2$25m8f$1@dont-email.me> <us0kp0$fjqu$12@i2pn2.org> <us0m4i$25m8f$4@dont-email.me> <us1pkc$fjqv$25@i2pn2.org> <us27ut$2iua6$1@dont-email.me> <us2n8e$lq4d$3@i2pn2.org> <us3c9f$2qj3n$1@dont-email.me> <us3iev$lq4c$10@i2pn2.org> <us3kd5$2vo1i$1@dont-email.me> <us4ffh$o3cj$1@i2pn2.org> <us57it$3ag4o$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Info: dont-email.me; posting-host="b6f1a97c48189032596c727e3ff11ca5";
logging-data="3900034"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1+00/nJO4ttBrOtbyyXy3Gl"
User-Agent: Unison/2.2
Cancel-Lock: sha1:xyXx8vX1SFsW9Cih6h/+8SeeaAo=
 by: Mikko - Tue, 5 Mar 2024 09:54 UTC

On 2024-03-04 19:31:40 +0000, olcott said:

> If there is a physical machine that can solve problems that a Turing
> machine cannot solve then we are only pretending that the limits of
> computation are the limits of computers.

However, we have no idea how such a machine could be constructed.
Although that machine might solve the halting problem of Turing
machines, it would also create a new halting problem that it
cannot solve.

--
Mikko
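Mikko's closing point, that a machine able to decide halting for Turing machines would inherit a new halting problem of its own, is the standard diagonal argument relativized to the stronger machine. The Python sketch below is purely illustrative; the names `make_D` and `H_never` are invented for this example. It shows how any claimed halting decider H is defeated by a program constructed from it:

```python
def make_D(H):
    """Build the 'pathological' program D from a claimed halting decider H.

    H(p, i) is supposed to return True iff program p halts on input i.
    D does the opposite of whatever H predicts about D applied to itself.
    """
    def D(p):
        if H(p, p):        # H predicts that p(p) halts...
            while True:    # ...so D loops forever instead,
                pass
        # otherwise H predicted non-halting, so D halts immediately.
        return "halted"
    return D

# A decider that always answers "does not halt" is refuted directly:
def H_never(p, i):
    return False

D = make_D(H_never)
result = D(D)   # D halts, contradicting H_never's verdict about D(D)
```

The same construction applies unchanged to any hypothetical hyper-machine: build D from the hyper-decider and the hyper-decider fails on that D, which is why solving the Turing halting problem would only relocate the problem rather than remove it.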

Re: Limits of computations != actual limits of computers [ Church Turing ]

<us6vut$re8s$4@i2pn2.org>

https://www.novabbs.com/devel/article-flat.php?id=53993&group=comp.theory#53993
Path: i2pn2.org!.POSTED!not-for-mail
From: rich...@damon-family.org (Richard Damon)
Newsgroups: comp.theory,sci.logic
Subject: Re: Limits of computations != actual limits of computers [ Church
Turing ]
Date: Tue, 5 Mar 2024 06:33:48 -0500
Organization: i2pn2 (i2pn.org)
Message-ID: <us6vut$re8s$4@i2pn2.org>
References: <us096s$23k0u$2@dont-email.me> <us0ao0$fjqv$19@i2pn2.org>
<us0b51$23tf5$1@dont-email.me> <us0hbq$fjqv$20@i2pn2.org>
<us0ii7$25emo$1@dont-email.me> <us0jrl$fjqu$9@i2pn2.org>
<us0ka2$25m8f$1@dont-email.me> <us0kp0$fjqu$12@i2pn2.org>
<us0m4i$25m8f$4@dont-email.me> <us1pkc$fjqv$25@i2pn2.org>
<us27ut$2iua6$1@dont-email.me> <us2n8e$lq4d$3@i2pn2.org>
<us3c9f$2qj3n$1@dont-email.me> <us3iev$lq4c$10@i2pn2.org>
<us3kd5$2vo1i$1@dont-email.me> <us4ffh$o3cj$1@i2pn2.org>
<us57it$3ag4o$1@dont-email.me> <us5sks$psb9$1@i2pn2.org>
<us5vvi$3ii6o$1@dont-email.me> <us6209$psb9$6@i2pn2.org>
<us6548$3jd6k$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Tue, 5 Mar 2024 11:33:49 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="899356"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
X-Spam-Checker-Version: SpamAssassin 4.0.0
In-Reply-To: <us6548$3jd6k$1@dont-email.me>
Content-Language: en-US
 by: Richard Damon - Tue, 5 Mar 2024 11:33 UTC

On 3/4/24 10:55 PM, olcott wrote:
> On 3/4/2024 9:02 PM, Richard Damon wrote:
>> On 3/4/24 9:28 PM, olcott wrote:
>>> On 3/4/2024 7:31 PM, Richard Damon wrote:
>>>> On 3/4/24 2:31 PM, olcott wrote:
>>>>> On 3/4/2024 6:40 AM, Richard Damon wrote:
>>>>>> On 3/3/24 11:58 PM, olcott wrote:
>>>>>>> On 3/3/2024 10:25 PM, Richard Damon wrote:
>>>>>>>> On 3/3/24 9:39 PM, olcott wrote:
>>>>>>>>> On 3/3/2024 2:40 PM, Richard Damon wrote:
>>>>>>>>>> On 3/3/24 11:19 AM, olcott wrote:
>>>>>>>>>>> On 3/3/2024 6:15 AM, Richard Damon wrote:
>>>>>>>>>>>> On 3/2/24 9:09 PM, olcott wrote:
>>>>>>>>>>>>> On 3/2/2024 7:46 PM, Richard Damon wrote:
>>>>>>>>>>>>>> On 3/2/24 8:38 PM, olcott wrote:
>>>>>>>>>>>>>>> On 3/2/2024 7:30 PM, Richard Damon wrote:
>>>>>>>>>>>>>>>> On 3/2/24 8:08 PM, olcott wrote:
>>>>>>>>>>>>>>>>> On 3/2/2024 6:47 PM, Richard Damon wrote:
>>>>>>>>>>>>>>>>>> On 3/2/24 6:01 PM, olcott wrote:
>>>>>>>>>>>>>>>>>>> On 3/2/2024 4:54 PM, Richard Damon wrote:
>>>>>>>>>>>>>>>>>>>> On 3/2/24 5:28 PM, olcott wrote:
>>>>>>>>>>>>>>>>>>>>> The reason that people assume that H1(D,D) must get
>>>>>>>>>>>>>>>>>>>>> the same result as H(D,D) is that they make sure
>>>>>>>>>>>>>>>>>>>>> to ignore the reason why they get a different result.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Namely that you are lying that H and H1 are actually
>>>>>>>>>>>>>>>>>>>> the same computation.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> It turns out that the only reason that H1(D,D)
>>>>>>>>>>>>>>>>>>>>> derives a
>>>>>>>>>>>>>>>>>>>>> different result than H(D,D) is that H is at a
>>>>>>>>>>>>>>>>>>>>> different
>>>>>>>>>>>>>>>>>>>>> physical machine address than H1.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Which means that H and H1 are not computations, and
>>>>>>>>>>>>>>>>>>>> you have been just an ignorant pathological liar all
>>>>>>>>>>>>>>>>>>>> this time.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> D calls 00001522 // machine address of H
>>>>>>>>>>>>>>>>>>>>> thus D does call 00001422 // machine address of H1
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Since this <is> the empirically determined deciding
>>>>>>>>>>>>>>>>>>>>> factor I don't think that it can be correctly ignored.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Right, but since the algorithm for H/H1 uses the
>>>>>>>>>>>>>>>>>>>> address of the decider, which isn't defined as an
>>>>>>>>>>>>>>>>>>>> "input" to it, we see that you have been lying that
>>>>>>>>>>>>>>>>>>>> this code is a computation. Likely because you have
>>>>>>>>>>>>>>>>>>>> made yourself ignorant of what a computation
>>>>>>>>>>>>>>>>>>>> actually is,
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Thus you have made yourself into an Ignorant
>>>>>>>>>>>>>>>>>>>> Pathological Lying Idiot.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Any physically implemented Turing machine must exist
>>>>>>>>>>>>>>>>>>>>> at some physical memory location.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Nope.
>>>>>>>>>>>>>>>>>>> Any *physically implemented Turing machine*
>>>>>>>>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>>>>>>>>> must exist at some physical memory location.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Try and explain the details of how it can be otherwise.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Any description of a Turing Machine (or a Computation)
>>>>>>>>>>>>>>>>>> that needs to reference attributes of Modern Electronic
>>>>>>>>>>>>>>>>>> Computers is just WRONG, as they predate the
>>>>>>>>>>>>>>>>>> development of such a thing.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Virtual Machines that are exactly Turing Machines
>>>>>>>>>>>>>>>>> except for unlimited memory can and do exist.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> They necessarily must be implemented in physical memory
>>>>>>>>>>>>>>>>> and cannot possibly be implemented any other way.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> So?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Doesn't let a "Computation" change its answer based on
>>>>>>>>>>>>>>>> its memory address.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩
>>>>>>>>>>>>>>> halts
>>>>>>>>>>>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩
>>>>>>>>>>>>>>> does not halt
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> We ourselves can see that ⟨Ĥ⟩ ⟨Ĥ⟩ correctly simulated by
>>>>>>>>>>>>>>> Ĥ.H cannot possibly terminate unless Ĥ.H aborts its
>>>>>>>>>>>>>>> simulation.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> We ourselves can also see that when Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩
>>>>>>>>>>>>>>> does abort its simulation then Ĥ will halt.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> That no computer will ever achieve this degree of
>>>>>>>>>>>>>>> understanding is directly contradicted by this:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> ChatGPT 4.0 dialogue.
>>>>>>>>>>>>>>> https://www.liarparadox.org/ChatGPT_HP.pdf
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> No COMPUTATION can solve it, because it has been proved
>>>>>>>>>>>>>> impossible.
>>>>>>>>>>>>>
>>>>>>>>>>>>> None-the-less actual computers do actually demonstrate
>>>>>>>>>>>>> actual very deep understanding of these things.
>>>>>>>>>>>>
>>>>>>>>>>>> Do computers actually UNDERSTAND?
>>>>>>>>>>>
>>>>>>>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>>>>>>> Demonstrates the functional equivalent of deep understanding.
>>>>>>>>>>> The first thing that it does is categorize Carol's question
>>>>>>>>>>> as equivalent to the Liar Paradox.
>>>>>>>>>>
>>>>>>>>>> Nope, doesn't show what you claim, just that it has been
>>>>>>>>>> taught by "rote memorization" that the answer to a question
>>>>>>>>>> put the way you did is the answer it gave.
>>>>>>>>>>
>>>>>>>>>> You are just showing that YOU don't understand what the word
>>>>>>>>>> UNDERSTAND actually means.
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> This proves that the artifice of the human notion of
>>>>>>>>>>>>> computation is more limiting than actual real computers.
>>>>>>>>>>>>
>>>>>>>>>>>> In other words, you reject the use of definitions to define
>>>>>>>>>>>> words.
>>>>>>>>>>>>
>>>>>>>>>>>> I guess to you, nothing means what others have said it means,
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> I have found that it is the case that some definitions of
>>>>>>>>>>> technical terms sometimes box people into misconceptions
>>>>>>>>>>> such that alternative views are inexpressible within the
>>>>>>>>>>> technical language.
>>>>>>>>>>> https://en.wikipedia.org/wiki/Linguistic_relativity
>>>>>>>>>>
>>>>>>>>>> In other words, you are admitting that when you claim to be
>>>>>>>>>> working in a technical field and using the words as that field
>>>>>>>>>> means, you are just being an out and out LIAR.
>>>>>>>>>
>>>>>>>>> Not at all. When working with any technical definition I never
>>>>>>>>> simply assume that it is coherent. I always assume that it is
>>>>>>>>> possibly incoherent until proven otherwise.
>>>>>>>>
>>>>>>>> In other words, you ADMIT that you ignore technical definitions
>>>>>>>> and thus your comments about working in the field are just an
>>>>>>>> ignorant pathological lie.
>>>>>>>>
>>>>>>>>>
>>>>>>>>> If there are physically existing machines that can answer
>>>>>>>>> questions
>>>>>>>>> that are not Turing computable only because these machine can
>>>>>>>>> access
>>>>>>>>> their own machine address then these machines would be strictly
>>>>>>>>> more
>>>>>>>>> powerful than Turing Machines on these questions.
>>>>>>>>
>>>>>>>> Nope.
>>>>>>>
>>>>>>>
>>>>>>> If machine M can solve problems that machine N
>>>>>>> cannot solve then for these problems M is more
>>>>>>> powerful than N.
>>>>>>
>>>>>> But your H1 doesn't actually SOLVE the problem, as it fails on the
>>>>>> input (H1^) (H1^)
>>>>>>
>>>>>
>>>>> I am not even talking about that.
>>>>> In this new thread I am only talking about the generic case of:
>>>>> *Actual limits of computations != actual limits of computers*
>>>>> *with unlimited memory*
>>>>>
>>>>>> Note, I realise I misspoke a bit. Any "Non-computation"
>>>>>> sub-program can be turned into a Computation, just by being honest
>>>>>> and declaring as inputs the "Hidden Data" that it is using.
>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> But you just admitted you are too ignorant of the actual meaning
>>>>>>>> to make a reasoned statement and too dishonest to concede that,
>>>>>>>> even after admitting it,
>>>>>>>>
>>>>>>>>>
>>>>>>>>> If computability only means can't be done in a certain
>>>>>>>>> artificially
>>>>>>>>> limited way and not any actual limit on what computers can
>>>>>>>>> actually
>>>>>>>>> do then computability would seem to be nonsense.
>>>>>>>>>
>>>>>>>
>>>>>>> Try and explain how this would not be nonsense.
>>>>>>
>>>>>> First, it ISN'T "Artificial", it is a natural outcome of the sorts
>>>>>> of problems we actually want to solve.
>>>>>>
>>>>>
>>>>> If there is a physical machine that can solve problems that a Turing
>>>>> machine cannot solve then we are only pretending that the limits of
>>>>> computation are the limits of computers.
>>>>
>>>> But there isn't.
>>>>
>>>> At least not problems that can be phrased as a computation
>>>
>>> If (hypothetically) there are physical computers that can
>>> solve decision problems that Turing machines cannot solve
>>> then the notion of computability is not any actual real
>>> limit it is merely a fake limit.
>>
>> Except that it has been shown that there isn't such a thing, so your
>> hypothetical is just a trip into fantasy land.
>>
>
> This <is> such a virtual machine: it knows its own machine
> address.
>
> u32 H(ptr P, ptr I)
> {
>   u32 Address_of_H = (u32)H;
>
> Either Turing machines can accomplish the equivalent
> of this or they cannot.
>
> If they cannot and this prevents Turing machines
> from knowing that they have been called in recursive
> simulation and that prevents them from solving even
> a single instance of machine / input then that makes
> Turing machines less powerful on this machine/input pair.

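Richard's remark earlier in this thread, that a "non-computation" sub-program becomes a computation once its hidden data is declared as an input, can be sketched in a few lines. This is an illustrative Python analogue rather than the x86 code under discussion; the names `H_hidden` and `H_explicit` and the even-address test are invented for the example:

```python
# NOT a computation of (P, I): the result also depends on hidden data,
# namely the memory address of the function object itself. Two
# byte-identical copies loaded at different addresses may disagree
# on the very same inputs.
def H_hidden(P, I):
    return id(H_hidden) % 2 == 0

# The same logic turned into a computation by declaring the hidden
# datum as an explicit input: identical arguments now always yield
# identical results, wherever the code happens to live in memory.
def H_explicit(P, I, own_address):
    return own_address % 2 == 0
```

On this view, H(D,D) and H1(D,D) returning different values is itself the evidence that H and H1 are not the same computation: their output varies with something other than their declared inputs.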

Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?

<us6vv2$re8s$6@i2pn2.org>

https://www.novabbs.com/devel/article-flat.php?id=53995&group=comp.theory#53995
Path: i2pn2.org!.POSTED!not-for-mail
From: rich...@damon-family.org (Richard Damon)
Newsgroups: comp.theory,sci.logic
Subject: Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?
Date: Tue, 5 Mar 2024 06:33:54 -0500
Organization: i2pn2 (i2pn.org)
Message-ID: <us6vv2$re8s$6@i2pn2.org>
References: <us096s$23k0u$2@dont-email.me> <us0ao0$fjqv$19@i2pn2.org>
<us0b51$23tf5$1@dont-email.me> <us0hbq$fjqv$20@i2pn2.org>
<us0ii7$25emo$1@dont-email.me> <us0jrl$fjqu$9@i2pn2.org>
<us0ka2$25m8f$1@dont-email.me> <us0kp0$fjqu$12@i2pn2.org>
<us0m4i$25m8f$4@dont-email.me> <us1kti$2f46h$1@dont-email.me>
<us23p2$2i101$1@dont-email.me> <us2d4s$2k65l$1@dont-email.me>
<us2gk1$2ksv3$2@dont-email.me> <us2n8c$lq4d$2@i2pn2.org>
<us3ao5$2q7v4$1@dont-email.me> <us3bcg$lq4d$12@i2pn2.org>
<us3fc1$2uo74$1@dont-email.me> <us3if9$lq4c$11@i2pn2.org>
<us3j5o$2vhd5$1@dont-email.me> <us4eva$o3ci$2@i2pn2.org>
<us5a3c$3b05o$1@dont-email.me> <us5oi1$psb8$3@i2pn2.org>
<us5q58$3drq0$2@dont-email.me> <us5vc9$psb9$4@i2pn2.org>
<us68fc$3jtk1$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Tue, 5 Mar 2024 11:33:54 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="899356"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
Content-Language: en-US
In-Reply-To: <us68fc$3jtk1$1@dont-email.me>
X-Spam-Checker-Version: SpamAssassin 4.0.0
 by: Richard Damon - Tue, 5 Mar 2024 11:33 UTC

On 3/4/24 11:52 PM, olcott wrote:
> On 3/4/2024 8:17 PM, Richard Damon wrote:
>> On 3/4/24 7:48 PM, olcott wrote:
>>> On 3/4/2024 6:21 PM, Richard Damon wrote:
>>>> On 3/4/24 3:14 PM, olcott wrote:
>>>>> On 3/4/2024 6:31 AM, Richard Damon wrote:
>>>>>> On 3/3/24 11:37 PM, olcott wrote:
>>>>>>> On 3/3/2024 10:25 PM, Richard Damon wrote:
>>>>>>>> On 3/3/24 10:32 PM, olcott wrote:
>>>>>>>>> On 3/3/2024 8:24 PM, Richard Damon wrote:
>>>>>>>>>> On 3/3/24 9:13 PM, olcott wrote:
>>>>>>>>>>> On 3/3/2024 2:40 PM, Richard Damon wrote:
>>>>>>>>>>>> On 3/3/24 1:47 PM, olcott wrote:
>>>>>>>>>>>>> On 3/3/2024 11:48 AM, Mikko wrote:
>>>>>>>>>>>>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>>>>>>>>>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> None-the-less actual computers do actually demonstrate
>>>>>>>>>>>>>>>>> actual very deep understanding of these things.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Not very deep, just deeper than you can achieve.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> That does not demonstrate any understanding, even shallow.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> The first thing that it does is agree that Hehner's
>>>>>>>>>>>>> "Carol's question" (augmented by Richard's critique)
>>>>>>>>>>>>> is an example of the Liar Paradox.
>>>>>>>>>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>>>>>>>>>
>>>>>>>>>>>>> It ends up concluding that myself, professor Hehner and
>>>>>>>>>>>>> professor Stoddart are all correct in that there is
>>>>>>>>>>>>> something wrong with the halting problem.
>>>>>>>>>>>>
>>>>>>>>>>>> Which since it is proven that Chat GPT doesn't actually know
>>>>>>>>>>>> what is a fact, and has been proven to lie,
>>>>>>>>>>>
>>>>>>>>>>> The first thing that it figured out on its own is that
>>>>>>>>>>> Carol's question is isomorphic to the Liar Paradox.
>>>>>>>>>>>
>>>>>>>>>>> It eventually agreed with the same conclusion that
>>>>>>>>>>> myself and professors Hehner and Stoddart agreed to.
>>>>>>>>>>> It took 34 pages of dialog to understand this. I
>>>>>>>>>>> finally have a good PDF of this.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> It didn't "Figure it out". it pattern matched it to previous
>>>>>>>>>> input it has been given.
>>>>>>>>>>
>>>>>>>>>> If it took 34 pages to agree with your conclusion, then it
>>>>>>>>>> really didn't agree with you initially, but you finally
>>>>>>>>>> trained it to your version of reality.
>>>>>>>>>
>>>>>>>>> *HERE IS ITS AGREEMENT*
>>>>>>>>> When an input, such as the halting problem's pathological input
>>>>>>>>> D, is
>>>>>>>>> designed to contradict every value that the halting decider H
>>>>>>>>> returns,
>>>>>>>>> it creates a self-referential paradox that prevents H from
>>>>>>>>> providing a
>>>>>>>>> consistent and correct response. In this context, D can be seen as
>>>>>>>>> posing an incorrect question to H, as its contradictory nature
>>>>>>>>> undermines the possibility of a meaningful and accurate answer.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>> Which means NOTHING as LLM will tell non-truths if feed
>>>>>>>> misleading information.
>>>>>>>
>>>>>>> The above paragraph is proven to be completely true entirely
>>>>>>> on the basis of the meaning of its words as these words were
>>>>>>> defined in the dialogue that precedes them.
>>>>>>>
>>>>>>
>>>>>> Nope, the problem is you gave it incorrect implications about the
>>>>>> meaning of the words.
>>>>>
>>>>> *ChatGPT 4.0 final analysis of everything that preceded it*
>>>>> When an input, such as the halting problem's pathological input D, is
>>>>> designed to contradict every value that the halting decider H returns,
>>>>> it creates a self-referential paradox that prevents H from providing a
>>>>> consistent and correct response. In this context, D can be seen as
>>>>> posing an incorrect question to H, as its contradictory nature
>>>>> undermines the possibility of a meaningful and accurate answer.
>>>>>
>>>>> Within my definitions of my terms the above paragraph written by
>>>>> Chat GPT 4.0 necessarily follows. This proves that the Chat GPT
>>>>> 4.0 analysis is sound.
>>>>>
>>>>> *People are not free to disagree with stipulative definitions*
>>>>>
>>>>> A stipulative definition is a type of definition in which a new or
>>>>> currently existing term is given a new specific meaning for the
>>>>> purposes
>>>>> of argument or discussion in a given context. When the term already
>>>>> exists, this definition may, but does not necessarily, contradict the
>>>>> dictionary (lexical) definition of the term.
>>>>> https://en.wikipedia.org/wiki/Stipulative_definition
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> Right, and by that EXACT SAME RULE, when you "stipulate" a
>>>> definition different from that stipulated by a field, you place
>>>> yourself outside that field, and if you still claim to be working in
>>>> it, you are just admitting to being a bald-faced LIAR.
>>>>
>>>
>>> Not exactly. When I stipulate a definition that shows the
>>> incoherence of the conventional definitions then I am working
>>> at the foundational level above this field.
>>
>> Nope, if you change the definition of the field, you are in a new field.
>>
>> Just like ZFC Set Theory isn't Naive Set Theory, if you change the
>> basis, you are in a new field.
>
> Yes that is exactly what I am doing, good call !
>
> ZFC corrected the error of Naive Set Theory. I am correcting
> the error of the conventional foundation of computability.

So, just admit that you aren't doing "Computation Theory", and thus
can't say you are refuting Linz or anyone else who WAS doing Computation
Theory.

Just admit you are doing POOP theory.
>
>>>
>>>> And, no, CHAT GPT's analysis is NOT "Sound", at least not in the
>>>> field you claim to be working, as that has definition that must be
>>>> followed, which you don't.
>>>>
>>>
>>> It is perfectly sound within my stipulated definitions.
>>> Or we could say that it is perfectly valid when one takes
>>> my definitions as its starting premises.
>>
>> And not when you look at the field you claim to be in.
>>
>
> OK then the field that I am in is the field of the
> *correction to the foundational notions of computability*
> Just like ZFC corrected Naive Set Theory.


Re: Correcting the foundation of analytic truth and Linz H ⟨Ĥ⟩ ⟨Ĥ⟩

https://www.novabbs.com/devel/article-flat.php?id=53996&group=comp.theory#53996

From: rich...@damon-family.org (Richard Damon)
Newsgroups: comp.theory,sci.logic
Date: Tue, 5 Mar 2024 06:33:56 -0500
Message-ID: <us6vv4$re8s$7@i2pn2.org>

On 3/5/24 12:54 AM, olcott wrote:
> On 3/4/2024 8:23 PM, Richard Damon wrote:
>> On 3/4/24 8:03 PM, olcott wrote:
>>> On 3/4/2024 6:22 PM, Richard Damon wrote:
>>>> On 3/4/24 2:53 PM, olcott wrote:
>>>>> On 3/4/2024 3:22 AM, Mikko wrote:
>>>>>> On 2024-03-03 18:47:29 +0000, olcott said:
>>>>>>
>>>>>>> On 3/3/2024 11:48 AM, Mikko wrote:
>>>>>>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>>>>>>
>>>>>>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>>>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>>>>>>
>>>>>>>>>>> None-the-less actual computers do actually demonstrate
>>>>>>>>>>> actual very deep understanding of these things.
>>>>>>>>>>
>>>>>>>>>> Not very deep, just deeper than you can achieve.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>>>>>>
>>>>>>>> That does not demonstrate any understanding, even shallow.
>>>>>>>>
>>>>>>>
>>>>>>> The first thing that it does is agree that Hehner's
>>>>>>> "Carol's question" (augmented by Richard's critique)
>>>>>>> is an example of the Liar Paradox.
>>>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>>>
>>>>>>> It ends up concluding that myself, professor Hehner and
>>>>>>> professor Stoddart are all correct in that there is
>>>>>>> something wrong with the halting problem.
>>>>>>
>>>>>> None of that demonstrates any understanding.
>>>>>>
>>>>>>> My persistent focus on these ideas gives me an increasingly
>>>>>>> deeper understanding thus my latest position is that the
>>>>>>> halting problem proofs do not actually show that halting
>>>>>>> is not computable.
>>>>>>
>>>>>> Your understanding is still defective and shallow.
>>>>>>
>>>>>
>>>>> If it really was shallow then a gap in my reasoning
>>>>> could be pointed out. The actual case is that because
>>>>> I have focused on the same problem based on the Linz
>>>>> proof for such a long time I noticed things that no
>>>>> one ever noticed before. *Post from 2004*
>>>>
>>>> It has been.
>>>>
>>>> You are just too stupid to understand.
>>>>
>>>> You can't fix intentional stupid.
>>>>
>>>
>>> You can't see outside of the box of the incorrect foundation
>>> of the notion of analytic truth.
>>>
>>> Because hardly anyone knows that the Liar Paradox is not a truth bearer,
>>> even fewer people understand that epistemological antinomies are not
>>> truth bearers.
>>>
>>> All the people that do not understand that epistemological antinomies
>>> are not truth bearers cannot understand that asking a question about
>>> the truth value of an epistemological antinomy is a mistake.
>>>
>>> These people lack the basis to understand that decision problem/input
>>> pairs asking about the truth value of an epistemological antinomy
>>> are a mistake.
>>
>> If the foundation is so bad, why are you still in it?
>
> When I explain what is wrong with the foundation I am
> not within this same foundation that I am rebuking.

You can't rebuke a statement that isn't in the system you are working in!

You are just admitting you don't understand how logic works and are
nothing other than an ignorant pathological lying idiot.

>
> There are some aspects of the notion of analytical
> truth that are incorrect and others that are not.

So, you can build your new system on what you think is still good, but
you need to build it up from the ground, or you are just admitting that
you are still using and accepting what you claim to be wrong, and thus
just confirming that you are nothing but a liar.

>
>> You are effectively proving you can't do better by staying.
>>
>>>
>>> People lacking these prerequisite understandings simply write off what
>>> they do not understand as nonsense.
>>>
>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn     // Ĥ applied to ⟨Ĥ⟩ does not halt
>>>
>>> Simulating termination analyzer Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ must transition
>>> to Ĥ.Hqn or fail to halt.
>>
>> And thus you LIE that you are working on the Halting Problem, by using
>> the wrong criteria.
>>
>
> When the "right" criteria cause Ĥ.H to never halt
> then these "right" criteria are wrong.

Not in real computation theory, so until you actually develop your
alternate theory, you are just admitting to lying.

>
>>>
>>> When its sole criterion measure is to always say NO to every input
>>> that would prevent it from halting then it must say NO to ⟨Ĥ⟩ ⟨Ĥ⟩.
>>>
>>
>> Which makes it a LIE to claim you are working on the Halting Problem
>>
>
> If the only way that Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ can correctly determine that
> it must abort its simulation requires Ĥ to know its own machine
> address then I expanded the scope of the halting problem to
> include RASP machines where every P knows its own address.

In other words, you are admitting to using bad logic.

You are just admitting you are a stupid ignorant pathological lying idiot.

>
>>> When H ⟨Ĥ⟩ ⟨Ĥ⟩ correctly uses this exact same criterion measure
>>> then the "abort simulation" criteria <is not> met thus providing
>>> the correct basis for a different answer.
>>>
>>
>>
>> Which just proves you are a PATHOLOGICAL LIAR since you keep on
>> insisting that you are working on the Halting problem when you are
>> using the wrong criteria.
>>
>
> As long as Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ can correctly determine that it must abort its
> simulation then I am merely showing that the conventional halting
> problem does not prove that a halt decider does not exist.

Nope. You are proving that you believe that Strawman arguments are
valid, and thus are just a stupid ignorant pathological lying idiot.

>
> That you admitted that Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ must transition to Ĥ.Hqn
> to prevent its own infinite execution and you admitted
> that this makes Ĥ ⟨Ĥ⟩ halt then that proves that when
> H ⟨Ĥ⟩ ⟨Ĥ⟩ transitions to H.qy *THIS IS THE CORRECT ANSWER*

Except that having H (H^) (H^) go to qy while H^.H (H^) (H^) goes to qn
is just admitting that you have lied about H and H^.H, thus proving you
are a categorically stupid and ignorant pathological liar.
>
> You already know that all of that is correct, you
> *simply don't believe that H can figure out how to do that*

No, I know you have proven yourself unable to deal with the truth.

>
>> The right answer to the wrong question is not a right answer, since it
>> is the question that matters, and to say otherwise is just a LIE.
>

Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?

https://www.novabbs.com/devel/article-flat.php?id=54001&group=comp.theory#54001

From: polco...@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic
Date: Tue, 5 Mar 2024 10:37:57 -0600
Message-ID: <us7hp5$3rfoj$4@dont-email.me>

On 3/5/2024 3:28 AM, Mikko wrote:
> On 2024-03-04 20:14:34 +0000, olcott said:
>
>> *People are not free to disagree with stipulative definitions*
>
> People are free to disgree whether your stipulative definitions
> are useful or sensible.
Yes.

> They are not free to disagree with any
> correct inferences from those definitions
Yes.

> nor to agree with any incorrect inferences.
I don't think that there are any of these.

The huge advantage of the dialogue with ChatGPT is that
its algorithm seemed to be able to spot any and all gaps
in reasoning. This enabled ChatGPT to provide feedback
so that I could make my definitions airtight.

> They may disagree about the relevance of
> any conclusions based on such definitions.

When a subset of undecidable decision problem/input
pairs is objectively determined to simply be wrong,
this conclusion seems to have no correct rebuttal.

--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Why do H ⟨Ĥ⟩ ⟨Ĥ⟩ and Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ derive different results ?

https://www.novabbs.com/devel/article-flat.php?id=54002&group=comp.theory#54002

From: polco...@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic
Date: Tue, 5 Mar 2024 10:48:53 -0600
Message-ID: <us7idm$3rtsq$1@dont-email.me>

On 3/5/2024 3:33 AM, Mikko wrote:
> On 2024-03-04 19:53:05 +0000, olcott said:
>
>> On 3/4/2024 3:22 AM, Mikko wrote:
>>> On 2024-03-03 18:47:29 +0000, olcott said:
>>>
>>>> On 3/3/2024 11:48 AM, Mikko wrote:
>>>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>>>
>>>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>>>
>>>>>>>> None-the-less actual computers do actually demonstrate
>>>>>>>> actual very deep understanding of these things.
>>>>>>>
>>>>>>> Not very deep, just deeper than you can achieve.
>>>>>>>
>>>>>>
>>>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>>>
>>>>> That does not demonstrate any understanding, even shallow.
>>>>>
>>>>
>>>> The first thing that it does is agree that Hehner's
>>>> "Carol's question" (augmented by Richard's critique)
>>>> is an example of the Liar Paradox.
>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>
>>>> It ends up concluding that myself, professor Hehner and
>>>> professor Stoddart are all correct in that there is
>>>> something wrong with the halting problem.
>>>
>>> None of that demonstrates any understanding.
>>>
>>>> My persistent focus on these ideas gives me an increasingly
>>>> deeper understanding thus my latest position is that the
>>>> halting problem proofs do not actually show that halting
>>>> is not computable.
>>>
>>> Your understanding is still defective and shallow.
>>>
>>
>> If it really was shallow then a gap in my reasoning
>> could be pointed out.
>
> Gaps in your reasons are pointed out every day.
>

There are no actual gaps in my reasoning. The closest that
anyone has come to showing an actual gap in my reasoning
was merely their presumption that there are gaps.

Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn // Ĥ applied to ⟨Ĥ⟩ does not halt

It is true that Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ must transition to Ĥ.Hqn to prevent
its own infinite execution. It is true that this makes Ĥ ⟨Ĥ⟩ halt.
This entails that H ⟨Ĥ⟩ ⟨Ĥ⟩ would be correct to transition to H.qy.

No one can point to any gaps in the reasoning. The best
that Richard can do is simply disbelieve that H is
smart enough to do that when both H ⟨Ĥ⟩ ⟨Ĥ⟩ and Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩
apply this same criterion measure:

Both H and Ĥ.H transition to their NO state when a correct and
complete simulation of their input would cause their own infinite
execution and otherwise transition to their YES state.

We can also see that if both H ⟨Ĥ⟩ ⟨Ĥ⟩ and Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ do correctly
apply the above criterion measure that they would have the behavior
that I specified.

--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Why does H1(D,D) actually get a different result than H(D,D) ???

https://www.novabbs.com/devel/article-flat.php?id=54004&group=comp.theory#54004

From: new...@immibis.com (immibis)
Newsgroups: comp.theory,sci.logic
Date: Tue, 5 Mar 2024 17:57:18 +0100
Message-ID: <us7ite$3rsaf$1@dont-email.me>

On 3/03/24 02:29, olcott wrote:
> On 3/2/2024 6:53 PM, Richard Damon wrote:
>> Note, Computers, as generally viewed, especially for "Computation
>> Theory" have the limitation of being deterministic, which DOES make
>> them less powerful than the human mind, which has free will.
>
> LLMs have contradicted that. They are inherently stochastic.

Incorrect. That they use a random number generator as part of their
algorithm does not make them special. You can use a true random number
generator, which must be counted as an input to the computation, or you
can use a pseudo-random number generator which is deterministic. This is
no different from a computerized game of Blackjack.

Re: Actual limits of computations != actual limits of computers with unlimited memory ?

https://www.novabbs.com/devel/article-flat.php?id=54005&group=comp.theory#54005

From: polco...@gmail.com (olcott)
Newsgroups: comp.theory
Date: Tue, 5 Mar 2024 10:58:24 -0600
Message-ID: <us7ivg$3rttj$2@dont-email.me>

On 3/5/2024 3:54 AM, Mikko wrote:
> On 2024-03-04 19:31:40 +0000, olcott said:
>
>> If there is a physical machine that can solve problems that a Turing
>> machine cannot solve then we are only pretending that the limits of
>> computation are the limits of computers.
>
> However, we have no idea how such machine could be constructed.

The x86 language is already sufficiently isomorphic.

> Although that machine might solve the halting problem of Turing
> machines it would also create a new halting problem that it
> cannot solve.

The halting problem is already specified in the x86
isomorphism to a RASP machine where every P knows
its own machine address.

u32 H(ptr P, ptr I)
{ u32 Address_of_H = (u32)H;

--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Chat GPT 4.0 affirms that Professors Hehner, Stoddart and I are correct

https://www.novabbs.com/devel/article-flat.php?id=54006&group=comp.theory#54006

From: polco...@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic
Date: Tue, 5 Mar 2024 11:10:33 -0600
Message-ID: <us7jma$3s73b$1@dont-email.me>

On 3/5/2024 10:57 AM, immibis wrote:
> On 3/03/24 02:29, olcott wrote:
>> On 3/2/2024 6:53 PM, Richard Damon wrote:
>>> Note, Computers, as generally viewed, especially for "Computation
>>> Theory" have the limitation of being deterministic, which DOES make
>>> them less powerful than the human mind, which has free will.
>>
>> LLMs have contradicted that. They are inherently stochastic.
>
> Incorrect. That they use a random number generator as part of their
> algorithm does not make them special. You can use a true random number
> generator, which must be counted as an input to the computation, or you
> can use a pseudo-random number generator which is deterministic. This is
> no different from a computerized game of Blackjack.

Paradoxical Yes/No Dilemma June 17, 2023
My copyright notice is at the bottom showing that this is my dialogue
https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b

*This was written by ChatGPT summing up its complete agreement*
When an input, such as the halting problem's pathological input D, is
designed to contradict every value that the halting decider H returns,
it creates a self-referential paradox that prevents H from providing a
consistent and correct response. In this context, D can be seen as
posing an incorrect question to H, as its contradictory nature
undermines the possibility of a meaningful and accurate answer.

The most important aspect of this is that this conclusion
is perfectly and correctly semantically entailed by the definitions
of the meaning of these terms in the prior dialogue. I had to
progressively refine these definitions in this 34-page dialogue.

[1] E C R Hehner. Objective and Subjective Specifications
WST Workshop on Termination, Oxford. 2018 July 18.
See https://www.cs.toronto.edu/~hehner/OSS.pdf

[2] Bill Stoddart. The Halting Paradox
20 December 2017
https://arxiv.org/abs/1906.05340
arXiv:1906.05340 [cs.LO]

[3] E C R Hehner. Problems with the Halting Problem, COMPUTING2011
Symposium on 75 years of Turing Machine and Lambda-Calculus, Karlsruhe
Germany, invited, 2011 October 20-21; Advances in Computer Science and
Engineering v.10 n.1 p.31-60, 2013
https://www.cs.toronto.edu/~hehner/PHP.pdf

--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Limits of computations != actual limits of computers [ Church Turing ]

https://www.novabbs.com/devel/article-flat.php?id=54007&group=comp.theory#54007

From: polco...@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic
Date: Tue, 5 Mar 2024 11:21:45 -0600
Message-ID: <us7kb9$3s73b$2@dont-email.me>

On 3/5/2024 5:33 AM, Richard Damon wrote:
> On 3/4/24 10:55 PM, olcott wrote:
>> On 3/4/2024 9:02 PM, Richard Damon wrote:
>>> On 3/4/24 9:28 PM, olcott wrote:
>>>> On 3/4/2024 7:31 PM, Richard Damon wrote:
>>>>> On 3/4/24 2:31 PM, olcott wrote:
>>>>>> On 3/4/2024 6:40 AM, Richard Damon wrote:
>>>>>>> On 3/3/24 11:58 PM, olcott wrote:
>>>>>>>> On 3/3/2024 10:25 PM, Richard Damon wrote:
>>>>>>>>> On 3/3/24 9:39 PM, olcott wrote:
>>>>>>>>>> On 3/3/2024 2:40 PM, Richard Damon wrote:
>>>>>>>>>>> On 3/3/24 11:19 AM, olcott wrote:
>>>>>>>>>>>> On 3/3/2024 6:15 AM, Richard Damon wrote:
>>>>>>>>>>>>> On 3/2/24 9:09 PM, olcott wrote:
>>>>>>>>>>>>>> On 3/2/2024 7:46 PM, Richard Damon wrote:
>>>>>>>>>>>>>>> On 3/2/24 8:38 PM, olcott wrote:
>>>>>>>>>>>>>>>> On 3/2/2024 7:30 PM, Richard Damon wrote:
>>>>>>>>>>>>>>>>> On 3/2/24 8:08 PM, olcott wrote:
>>>>>>>>>>>>>>>>>> On 3/2/2024 6:47 PM, Richard Damon wrote:
>>>>>>>>>>>>>>>>>>> On 3/2/24 6:01 PM, olcott wrote:
>>>>>>>>>>>>>>>>>>>> On 3/2/2024 4:54 PM, Richard Damon wrote:
>>>>>>>>>>>>>>>>>>>>> On 3/2/24 5:28 PM, olcott wrote:
>>>>>>>>>>>>>>>>>>>>>> The reason that people assume that H1(D,D) must get
>>>>>>>>>>>>>>>>>>>>>> the same result as H(D,D) is that they make sure
>>>>>>>>>>>>>>>>>>>>>> to ignore the reason why they get a different result.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Namely that you are lying that H and H1 are
>>>>>>>>>>>>>>>>>>>>> actually the same computation.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> It turns out that the only reason that H1(D,D)
>>>>>>>>>>>>>>>>>>>>>> derives a
>>>>>>>>>>>>>>>>>>>>>> different result than H(D,D) is that H is at a
>>>>>>>>>>>>>>>>>>>>>> different
>>>>>>>>>>>>>>>>>>>>>> physical machine address than H1.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Which means that H and H1 are not computations, and
>>>>>>>>>>>>>>>>>>>>> you have been just an ignorant pathological liar
>>>>>>>>>>>>>>>>>>>>> all this time.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> D calls 00001522 // machine address of H
>>>>>>>>>>>>>>>>>>>>>> thus D does call 00001422 // machine address of H1
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Since this <is> the empirically determined deciding
>>>>>>>>>>>>>>>>>>>>>> factor I don't think that it can be correctly
>>>>>>>>>>>>>>>>>>>>>> ignored.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Right, but since the algorithm for H/H1 uses the
>>>>>>>>>>>>>>>>>>>>> address of the decider, which isn't defined as an
>>>>>>>>>>>>>>>>>>>>> "input" to it, we see that you have been lying that
>>>>>>>>>>>>>>>>>>>>> this code is a computation. Likely because you have
>>>>>>>>>>>>>>>>>>>>> made yourself ignorant of what a computation
>>>>>>>>>>>>>>>>>>>>> actually is,
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Thus you have made yourself into an Ignorant
>>>>>>>>>>>>>>>>>>>>> Pathological Lying Idiot.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Any physically implemented Turing machine must exist
>>>>>>>>>>>>>>>>>>>>>> at some physical memory location.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Nope.
>>>>>>>>>>>>>>>>>>>> Any *physically implemented Turing machine*
>>>>>>>>>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>>>>>>>>>> must exist at some physical memory location.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Try and explain the details of how it can be otherwise.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Any description of a Turing Machine (or a
>>>>>>>>>>>>>>>>>>> Computation) that needs to reference attributes of
>>>>>>>>>>>>>>>>>>> Modern Electronic Computers is just WRONG, as Turing
>>>>>>>>>>>>>>>>>>> Machines predate the development of such computers.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Virtual Machines that are exactly Turing Machines
>>>>>>>>>>>>>>>>>> except for unlimited memory can and do exist.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> They necessarily must be implemented in physical memory
>>>>>>>>>>>>>>>>>> and cannot possibly be implemented any other way.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> So?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Doesn't let a "Computation" change its answer based on
>>>>>>>>>>>>>>>>> its memory address.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩
>>>>>>>>>>>>>>>> halts
>>>>>>>>>>>>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩
>>>>>>>>>>>>>>>> does not halt
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> We ourselves can see that ⟨Ĥ⟩ ⟨Ĥ⟩ correctly simulated by
>>>>>>>>>>>>>>>> Ĥ.H cannot possibly terminate unless Ĥ.H aborts its
>>>>>>>>>>>>>>>> simulation.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> We ourselves can also see that when Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩
>>>>>>>>>>>>>>>> does abort its simulation then Ĥ will halt.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> That no computer will ever achieve this degree of
>>>>>>>>>>>>>>>> understanding is directly contradicted by this:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> ChatGPT 4.0 dialogue.
>>>>>>>>>>>>>>>> https://www.liarparadox.org/ChatGPT_HP.pdf
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> No COMPUTATION can solve it, because it has been proved
>>>>>>>>>>>>>>> impossible.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Nonetheless, actual computers do demonstrate
>>>>>>>>>>>>>> very deep understanding of these things.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Do computers actually UNDERSTAND?
>>>>>>>>>>>>
>>>>>>>>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>>>>>>>> Demonstrates the functional equivalent of deep understanding.
>>>>>>>>>>>> The first thing that it does is categorize Carol's question
>>>>>>>>>>>> as equivalent to the Liar Paradox.
>>>>>>>>>>>
>>>>>>>>>>> Nope, doesn't show what you claim, just that it has been
>>>>>>>>>>> taught by "rote memorization" that the answer to a question
>>>>>>>>>>> put the way you did is the answer it gave.
>>>>>>>>>>>
>>>>>>>>>>> You are just showing that YOU don't understand what the word
>>>>>>>>>>> UNDERSTAND actually means.
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> This proves that the artifice of the human notion of
>>>>>>>>>>>>>> computation is more limiting than actual real computers.
>>>>>>>>>>>>>
>>>>>>>>>>>>> In other words, you reject the use of definitions to define
>>>>>>>>>>>>> words.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I guess to you, nothing means what others have said it means,
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> I have found that some definitions of technical
>>>>>>>>>>>> terms box people into misconceptions such that
>>>>>>>>>>>> alternative views become inexpressible within the
>>>>>>>>>>>> technical language.
>>>>>>>>>>>> https://en.wikipedia.org/wiki/Linguistic_relativity
>>>>>>>>>>>
>>>>>>>>>>> In other words, you are admitting that when you claim
>>>>>>>>>>> to be working in a technical field and using the words
>>>>>>>>>>> as that field defines them, you are just being an
>>>>>>>>>>> out-and-out LIAR.
>>>>>>>>>>
>>>>>>>>>> Not at all. When working with any technical definition
>>>>>>>>>> I never simply assume that it is coherent. I always
>>>>>>>>>> assume that it is possibly incoherent until proven
>>>>>>>>>> otherwise.
>>>>>>>>>
>>>>>>>>> In other words, you ADMIT that you ignore technical
>>>>>>>>> definitions, and thus your comments about working in the
>>>>>>>>> field are just an ignorant pathological lie.
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> If there are physically existing machines that can
>>>>>>>>>> answer questions that are not Turing computable only
>>>>>>>>>> because these machines can access their own machine
>>>>>>>>>> address, then these machines would be strictly more
>>>>>>>>>> powerful than Turing Machines on these questions.
>>>>>>>>>
>>>>>>>>> Nope.
>>>>>>>>
>>>>>>>>
>>>>>>>> If machine M can solve problems that machine N
>>>>>>>> cannot solve then for these problems M is more
>>>>>>>> powerful than N.
>>>>>>>
>>>>>>> But your H1 doesn't actually SOLVE the problem, as it fails on
>>>>>>> the input (H1^) (H1^)
>>>>>>>
>>>>>>
>>>>>> I am not even talking about that.
>>>>>> In this new thread I am only talking about the generic case of:
>>>>>> *Actual limits of computations != actual limits of computers*
>>>>>> *with unlimited memory*
>>>>>>
>>>>>>> Note, I realise I misspoke a bit. Any "Non-computation"
>>>>>>> sub-program can be turned into a Computation, just by being
>>>>>>> honest and declaring as inputs the "Hidden Data" that it is
>>>>>>> using.
>>>>>>>
>>>>>>>>
>>>>>>>>>
>>>>>>>>> But you just admitted you are too ignorant of the actual
>>>>>>>>> meaning to make a reasoned statement and too dishonest to
>>>>>>>>> concede that, even after admitting it.
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> If computability only means that something can't be done
>>>>>>>>>> in a certain artificially limited way, and not any actual
>>>>>>>>>> limit on what computers can actually do, then
>>>>>>>>>> computability would seem to be nonsense.
>>>>>>>>>>
>>>>>>>>
>>>>>>>> Try and explain how this would not be nonsense.
>>>>>>>
>>>>>>> First, it ISN'T "Artificial"; it is a natural outcome of the
>>>>>>> sorts of problems we actually want to solve.
>>>>>>>
>>>>>>
>>>>>> If there is a physical machine that can solve problems that a Turing
>>>>>> machine cannot solve then we are only pretending that the limits of
>>>>>> computation are the limits of computers.
>>>>>
>>>>> But there isn't.
>>>>>
>>>>> At least not problems that can be phrased as a computation
>>>>
>>>> If (hypothetically) there are physical computers that can
>>>> solve decision problems that Turing machines cannot solve
>>>> then the notion of computability is not any actual real
>>>> limit it is merely a fake limit.
>>>
>>> Except that it has been shown that there isn't such a thing, so
>>> your hypothetical is just a trip into fantasy land.
>>>
>>
>> This <is> such a machine: a virtual machine that knows
>> its own machine address.
>>
>> u32 H(ptr P, ptr I)
>> {
>>    u32 Address_of_H = (u32)H;
>>
>> Either Turing machines can accomplish the equivalent
>> of this or they cannot.
>>
>> If they cannot and this prevents Turing machines
>> from knowing that they have been called in recursive
>> simulation and that prevents them from solving even
>> a single instance of machine / input then that makes
>> Turing machines less powerful on this machine/input pair.
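The disputed fragment above (`u32 Address_of_H = (u32)H;`) and Damon's "hidden data" objection can both be illustrated with a small sketch. All names here are invented for illustration, not olcott's actual code: the load address is modeled as an explicit parameter, showing that a routine which branches on where its own code lives gives different answers for identical declared inputs, and becomes a single computation only once that address is declared as an input.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: a routine that consults its own load address.
 * Two copies of this code placed at different addresses return
 * different results for the same declared input, so they are not the
 * same computation; declaring the address as an input restores a
 * fixed mapping from inputs to outputs. */
int decide_at(uintptr_t my_address, uintptr_t address_of_H, int input)
{
    /* Mimics "u32 Address_of_H = (u32)H;" followed by a comparison. */
    return (my_address == address_of_H) ? !input : input;
}
```

With the address made explicit, `decide_at(0x1522, 0x1522, 1)` and `decide_at(0x1422, 0x1522, 1)` differ only because their declared inputs differ, which is exactly the point that declaring the hidden data turns the code back into a computation.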
>
> In other words, you are admitting you don't understand what a
> computation is or a mathematical function.


Re: Limits of computations != actual limits of computers [ Church Turing ]

<us7n92$3suf6$1@dont-email.me>


https://www.novabbs.com/devel/article-flat.php?id=54008&group=comp.theory#54008

  Newsgroups: comp.theory sci.logic
Path: i2pn2.org!i2pn.org!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polco...@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic
Subject: Re: Limits of computations != actual limits of computers [ Church
Turing ]
Date: Tue, 5 Mar 2024 12:11:45 -0600
Organization: A noiseless patient Spider
Lines: 655
Message-ID: <us7n92$3suf6$1@dont-email.me>
References: <us096s$23k0u$2@dont-email.me> <us0ao0$fjqv$19@i2pn2.org>
<us0b51$23tf5$1@dont-email.me> <us0hbq$fjqv$20@i2pn2.org>
<us0ii7$25emo$1@dont-email.me> <us0jrl$fjqu$9@i2pn2.org>
<us0ka2$25m8f$1@dont-email.me> <us0kp0$fjqu$12@i2pn2.org>
<us0m4i$25m8f$4@dont-email.me> <us1pkc$fjqv$25@i2pn2.org>
<us27ut$2iua6$1@dont-email.me> <us2n8e$lq4d$3@i2pn2.org>
<us3c9f$2qj3n$1@dont-email.me> <us3iev$lq4c$10@i2pn2.org>
<us3kd5$2vo1i$1@dont-email.me> <us4ffh$o3cj$1@i2pn2.org>
<us57it$3ag4o$1@dont-email.me> <us5sks$psb9$1@i2pn2.org>
<us5vvi$3ii6o$1@dont-email.me> <us6209$psb9$6@i2pn2.org>
<us6548$3jd6k$1@dont-email.me> <us6vut$re8s$4@i2pn2.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Tue, 5 Mar 2024 18:11:47 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="273c7008c0bbd7cbe8e4eb86f2d7214a";
logging-data="4094438"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1+4z7YOB+9YSwtcBtWBXcjh"
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:dqajENu2g+WZC2qAaSmFnaRB4ak=
In-Reply-To: <us6vut$re8s$4@i2pn2.org>
Content-Language: en-US
 by: olcott - Tue, 5 Mar 2024 18:11 UTC

On 3/5/2024 5:33 AM, Richard Damon wrote:
> On 3/4/24 10:55 PM, olcott wrote:
>> On 3/4/2024 9:02 PM, Richard Damon wrote:
>>> On 3/4/24 9:28 PM, olcott wrote:
>>>> On 3/4/2024 7:31 PM, Richard Damon wrote:
[...]
> In other words, you are admitting you don't understand what a
> computation is or a mathematical function.
>
> And, you are just a stupid ignorant pathological lying idiot.
>
>>
>>> So, maybe in your mythological worlds with Computers that exceed the
>>> computability of current machines (something like being able to make
>>> an infinite number of decisions in a finite period of time) could do
>>> that, but since they don't exist, THAT is a "Fake Ability".
>>>
>>>>
>>>>>>
>>>>>>> Second, as I just mentioned, you can turn a "non-computation"
>>>>>>> into a computation by just being honest and declaring the
>>>>>>> "hidden" input as an actual input.
>>>>>>
>>>>>> The specific case is machines that can correctly determine their
>>>>>> own machine address relative to machines that cannot do this.
>>>>>> An x86 based virtual machine can determine its own machine address.
>>>>>>
>>>>>> u32 H(ptr P, ptr I)
>>>>>> {
>>>>>>    u32 Address_of_H = (u32)H;
>>>>>
>>>>> But what COMPUTATION are you trying to do.
>>>>
>>>> When a halt decider can easily tell that it is calling
>>>> itself with its same inputs then it has a very simple
>>>> "abort simulation" criterion.
>>>
>>> Except that the thing being decided on is supposed to be a
>>> separate program running in its own memory space, so that
>>> "address match" trick doesn't work.
>>>
>>
>> Sure it does.
>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn // Ĥ applied to ⟨Ĥ⟩ does not halt
>>
>> When Ĥ.H knows the machine address of Ĥ then it can see
>> that it is simulating an identical copy of its own machine
>> with an identical copy of its own input. It can determine
>> this by ordinary string comparison.
>
> To What?
>
> Turing machines don't have a unique description, so you don't know what
> to compare to.
>
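The objection above can be made concrete with a hedged sketch (the encodings are invented for illustration): the same one-state Turing machine admits many textual descriptions, so an "ordinary string comparison" cannot reliably detect that a simulated machine is a copy of the simulator.

```c
#include <string.h>

/* Two hypothetical encodings of the same one-state machine that
 * writes 1 and moves right; only the state name differs. A byte-wise
 * comparison reports them as different even though they describe
 * identical behavior. */
const char *encoding_a = "q0,1,q0,1,R"; /* state named q0 */
const char *encoding_b = "s0,1,s0,1,R"; /* same machine, state renamed s0 */

/* The naive test being criticized: equality of descriptions. */
int same_as_strings(const char *a, const char *b)
{
    return strcmp(a, b) == 0;
}
```

Deciding whether two arbitrary descriptions compute the same function is itself undecidable in general (a consequence of Rice's theorem), so the string comparison cannot be rescued by normalization alone.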


Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?

<us867v$5gh$1@dont-email.me>


https://www.novabbs.com/devel/article-flat.php?id=54013&group=comp.theory#54013

From: polco...@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic
Subject: Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?
Date: Tue, 5 Mar 2024 16:27:09 -0600
 by: olcott - Tue, 5 Mar 2024 22:27 UTC

On 3/5/2024 5:33 AM, Richard Damon wrote:
> On 3/4/24 11:52 PM, olcott wrote:
>> On 3/4/2024 8:17 PM, Richard Damon wrote:
>>> On 3/4/24 7:48 PM, olcott wrote:
>>>> On 3/4/2024 6:21 PM, Richard Damon wrote:
>>>>> On 3/4/24 3:14 PM, olcott wrote:
>>>>>> On 3/4/2024 6:31 AM, Richard Damon wrote:
>>>>>>> On 3/3/24 11:37 PM, olcott wrote:
>>>>>>>> On 3/3/2024 10:25 PM, Richard Damon wrote:
>>>>>>>>> On 3/3/24 10:32 PM, olcott wrote:
>>>>>>>>>> On 3/3/2024 8:24 PM, Richard Damon wrote:
>>>>>>>>>>> On 3/3/24 9:13 PM, olcott wrote:
>>>>>>>>>>>> On 3/3/2024 2:40 PM, Richard Damon wrote:
>>>>>>>>>>>>> On 3/3/24 1:47 PM, olcott wrote:
>>>>>>>>>>>>>> On 3/3/2024 11:48 AM, Mikko wrote:
>>>>>>>>>>>>>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>>>>>>>>>>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Nonetheless, actual computers do demonstrate
>>>>>>>>>>>>>>>>>> very deep understanding of these things.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Not very deep, just deeper that you can achieve.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> That does not demonstrate any understanding, even shallow.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The first thing that it does is agree that Hehner's
>>>>>>>>>>>>>> "Carol's question" (augmented by Richards critique)
>>>>>>>>>>>>>> is an example of the Liar Paradox.
>>>>>>>>>>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> It ends up concluding that myself, professor Hehner and
>>>>>>>>>>>>>> professor Stoddart are all correct in that there is
>>>>>>>>>>>>>> something wrong with the halting problem.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Which since it is proven that Chat GPT doesn't actually
>>>>>>>>>>>>> know what is a fact, and has been proven to lie,
>>>>>>>>>>>>
>>>>>>>>>>>> The first thing that it figured out on its own is that
>>>>>>>>>>>> Carol's question is isomorphic to the Liar Paradox.
>>>>>>>>>>>>
>>>>>>>>>>>> It eventually agreed with the same conclusion that
>>>>>>>>>>>> myself and professors Hehner and Stoddart agreed to.
>>>>>>>>>>>> It took 34 pages of dialog to understand this. I
>>>>>>>>>>>> finally have a good PDF of this.
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> It didn't "Figure it out"; it pattern-matched to
>>>>>>>>>>> previous input it has been given.
>>>>>>>>>>>
>>>>>>>>>>> If it took 34 pages to agree with your conclusion, then
>>>>>>>>>>> it really didn't agree with you initially; you finally
>>>>>>>>>>> trained it to your version of reality.
>>>>>>>>>>
>>>>>>>>>> *HERE IS ITS AGREEMENT*
>>>>>>>>>> When an input, such as the halting problem's pathological
>>>>>>>>>> input D, is
>>>>>>>>>> designed to contradict every value that the halting decider H
>>>>>>>>>> returns,
>>>>>>>>>> it creates a self-referential paradox that prevents H from
>>>>>>>>>> providing a
>>>>>>>>>> consistent and correct response. In this context, D can be
>>>>>>>>>> seen as
>>>>>>>>>> posing an incorrect question to H, as its contradictory nature
>>>>>>>>>> undermines the possibility of a meaningful and accurate answer.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Which means NOTHING as LLM will tell non-truths if feed
>>>>>>>>> misleading information.
>>>>>>>>
>>>>>>>> The above paragraph is proven to be completely true entirely
>>>>>>>> on the basis of the meaning of its words as these words were
>>>>>>>> defined in the dialogue that precedes them.
>>>>>>>>
>>>>>>>
>>>>>>> Nope, the problem is you gave it incorrect implications about
>>>>>>> the meaning of the words.
>>>>>>
>>>>>> *ChatGPT 4.0 final analysis of everything that preceded it*
>>>>>> When an input, such as the halting problem's pathological input D, is
>>>>>> designed to contradict every value that the halting decider H
>>>>>> returns,
>>>>>> it creates a self-referential paradox that prevents H from
>>>>>> providing a
>>>>>> consistent and correct response. In this context, D can be seen as
>>>>>> posing an incorrect question to H, as its contradictory nature
>>>>>> undermines the possibility of a meaningful and accurate answer.
>>>>>>
>>>>>> Within my definitions of my terms the above paragraph written by
>>>>>> Chat GPT 4.0 necessarily follows. This proves that the Chat GPT
>>>>>> 4.0 analysis is sound.
>>>>>>
>>>>>> *People are not free to disagree with stipulative definitions*
>>>>>>
>>>>>> A stipulative definition is a type of definition in which a new or
>>>>>> currently existing term is given a new specific meaning for the
>>>>>> purposes
>>>>>> of argument or discussion in a given context. When the term already
>>>>>> exists, this definition may, but does not necessarily, contradict the
>>>>>> dictionary (lexical) definition of the term.
>>>>>> https://en.wikipedia.org/wiki/Stipulative_definition
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> Right, and by that EXACT SAME RULE, when you "stipulate" a
>>>>> definition different then that stipulated by a field, you place
>>>>> yourself outside that field, and if you still claim to be working
>>>>> in it, you are just admitting to being a bald-face LIAR.
>>>>>
>>>>
>>>> Not exactly. When I stipulate a definition that shows the
>>>> incoherence of the conventional definitions then I am working
>>>> at the foundational level above this field.
>>>
>>> Nope, if you change the definition of the field, you are in a new field.
>>>
>>> Just like ZFC Set Theory isn't Naive Set Theory, if you change the
>>> basis, you are in a new field.
>>
>> Yes that is exactly what I am doing, good call !
>>
>> ZFC corrected the error of Naive Set Theory. I am correcting
>> the error of the conventional foundation of computability.
>
> So, just admit that you aren't doing "Computation Theory", and thus
> can't say you refute Linz or anyone else who WAS doing Computation
> Theory.
>
> Just admit you are doing POOP theory.


Re: Why does H1(D,D) actually get a different result than H(D,D) ???

<us8i2e$24go$1@dont-email.me>


https://www.novabbs.com/devel/article-flat.php?id=54015&group=comp.theory#54015

From: new...@immibis.com (immibis)
Newsgroups: comp.theory,sci.logic
Subject: Re: Why does H1(D,D) actually get a different result than H(D,D) ???
Date: Wed, 6 Mar 2024 02:49:02 +0100
 by: immibis - Wed, 6 Mar 2024 01:49 UTC

On 3/03/24 02:08, olcott wrote:
> Virtual Machines that are exactly Turing Machines
> except for unlimited memory can and do exist.
>
> They necessarily must be implemented in physical memory
> and cannot possibly be implemented any other way.
>
> TM, The Turing Machine Interpreter
> David S. Woodruff
> http://www2.lns.mit.edu/~dsw/turing/turing.html
>
> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does not halt
>
> The states of a Turing machine <are> essentially
> memory locations.
>
> They have a perfect analogue in finite state machines

Good! Finite state machines cannot give different results just because
you store them in different places, either.
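The point above can be sketched in C (a hypothetical parity machine, not code from the thread): a finite state machine's result depends only on its transition table and its input, so byte-identical copies of the table at different heap addresses necessarily agree.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical 2-state parity FSM: state 0 = even number of 1s seen,
   state 1 = odd.  next[state][bit] is the successor state. */
typedef struct { int next[2][2]; } Fsm;

static int run(const Fsm *m, const int *bits, int n) {
    int s = 0;
    for (int i = 0; i < n; i++)
        s = m->next[s][bits[i]];
    return s;
}

/* Run byte-identical copies of the machine from two different
   addresses; the results cannot differ. */
int fsm_demo(void) {
    Fsm proto = { .next = { {0, 1}, {1, 0} } };
    Fsm *a = malloc(sizeof proto);
    Fsm *b = malloc(sizeof proto);
    memcpy(a, &proto, sizeof proto);
    memcpy(b, &proto, sizeof proto);
    int input[] = { 1, 0, 1, 1 };   /* odd number of 1s */
    int ra = run(a, input, 4);
    int rb = run(b, input, 4);
    free(a);
    free(b);
    return (ra == rb) ? ra : -1;    /* -1 would mean the copies disagreed */
}
```

Nothing in `run` ever reads the address of the table it was handed, which is exactly why storage location cannot influence the result.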

Re: Why does H1(D,D) actually get a different result than H(D,D) ???

<us8i41$24go$2@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=54016&group=comp.theory#54016

 by: immibis - Wed, 6 Mar 2024 01:49 UTC

On 3/03/24 02:38, olcott wrote:
> ChatGPT 4.0 dialogue.
> https://www.liarparadox.org/ChatGPT_HP.pdf

You must understand that ChatGPT is a professional liar.

Re: Why does H1(D,D) actually get a different result than H(D,D) ???

<us8i6h$24go$3@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=54017&group=comp.theory#54017

 by: immibis - Wed, 6 Mar 2024 01:51 UTC

On 3/03/24 00:25, olcott wrote:
> On 3/2/2024 5:15 PM, immibis wrote:
>> On 2/03/24 23:28, olcott wrote:
>>> The reason that people assume that H1(D,D) must get
>>> the same result as H(D,D) is that they make sure
>>> to ignore the reason why they get a different result.
>>>
>>> It turns out that the only reason that H1(D,D) derives a
>>> different result than H(D,D) is that H is at a different
>>> physical machine address than H1.
>>
>> Incorrect - the reason that H1(D,D) derives a different result is that
>> H *looks for* a different physical machine address than H1.
>
> _D()
> [00001cf2] 55         push ebp
> [00001cf3] 8bec       mov ebp,esp
> [00001cf5] 51         push ecx
> [00001cf6] 8b4508     mov eax,[ebp+08]
> [00001cf9] 50         push eax
> [00001cfa] 8b4d08     mov ecx,[ebp+08]
> [00001cfd] 51         push ecx
> [00001cfe] e81ff8ffff call 00001522 ; call to H
> ...
>
> *That is factually incorrect*
> H and H1 simulate the actual code of D which actually
> calls H at machine address 00001522 and does not call
> H1 at machine address 00001422.

H stores a memory that if the executed program calls machine address
00001522 with the same parameters, it should abort the simulation.

H1 stores a memory that if the executed program calls machine address
00001422 with the same parameters, it should abort the simulation.
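That description can be sketched as follows (an illustration only, using the hypothetical addresses from the trace posted earlier in the thread, not the actual halt-decider code): each decider aborts only on a call to its *own* address, so the decider's location acts as a hidden extra input.

```c
#include <assert.h>
#include <stdint.h>

/* Illustration only: each simulating "decider" reports an abort
   (returns 1) iff the simulated program calls the decider's own
   address.  Because the self-address acts as a hidden input, two
   otherwise identical copies can disagree on the same argument. */
static int aborts_simulation(uintptr_t self_addr, uintptr_t callee_addr) {
    return callee_addr == self_addr;
}

int copies_disagree(void) {
    const uintptr_t H       = 0x1522;  /* address of H in the posted trace  */
    const uintptr_t H1      = 0x1422;  /* address of H1 in the posted trace */
    const uintptr_t D_calls = 0x1522;  /* D always calls H, never H1        */
    /* H sees a call to itself; H1 does not. */
    return aborts_simulation(H, D_calls) != aborts_simulation(H1, D_calls);
}
```

The disagreement comes entirely from the `self_addr` parameter, which a pure function of the input alone would not have.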

Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?

<us8ia2$24go$5@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=54019&group=comp.theory#54019

 by: immibis - Wed, 6 Mar 2024 01:53 UTC

On 4/03/24 05:37, olcott wrote:
> On 3/3/2024 10:25 PM, Richard Damon wrote:
>> On 3/3/24 10:32 PM, olcott wrote:
>>> On 3/3/2024 8:24 PM, Richard Damon wrote:
>>>> On 3/3/24 9:13 PM, olcott wrote:
>>>>> On 3/3/2024 2:40 PM, Richard Damon wrote:
>>>>>> On 3/3/24 1:47 PM, olcott wrote:
>>>>>>> On 3/3/2024 11:48 AM, Mikko wrote:
>>>>>>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>>>>>>
>>>>>>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>>>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>>>>>>
>>>>>>>>>>> None-the-less actual computers do actually demonstrate
>>>>>>>>>>> actual very deep understanding of these things.
>>>>>>>>>>
>>>>>>>>>> Not very deep, just deeper than you can achieve.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>>>>>>
>>>>>>>> That does not demonstrate any understanding, even shallow.
>>>>>>>>
>>>>>>>
>>>>>>> The first thing that it does is agree that Hehner's
>>>>>>> "Carol's question" (augmented by Richard's critique)
>>>>>>> is an example of the Liar Paradox.
>>>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>>>
>>>>>>> It ends up concluding that myself, professor Hehner and
>>>>>>> professor Stoddart are all correct in that there is
>>>>>>> something wrong with the halting problem.
>>>>>>
>>>>>> Which since it is proven that Chat GPT doesn't actually know what
>>>>>> is a fact, and has been proven to lie,
>>>>>
>>>>> The first thing that it figured out on its own is that
>>>>> Carol's question is isomorphic to the Liar Paradox.
>>>>>
>>>>> It eventually agreed with the same conclusion that
>>>>> myself and professors Hehner and Stoddart agreed to.
>>>>> It took 34 pages of dialog to understand this. I
>>>>> finally have a good PDF of this.
>>>>>
>>>>
>>>> It didn't "figure it out"; it pattern-matched it to previous input
>>>> it has been given.
>>>>
>>>> If it took 34 pages to agree with your conclusion, then it really
>>>> didn't agree with you initially, but you finally trained it to your
>>>> version of reality.
>>>
>>> *HERE IS ITS AGREEMENT*
>>> When an input, such as the halting problem's pathological input D, is
>>> designed to contradict every value that the halting decider H returns,
>>> it creates a self-referential paradox that prevents H from providing a
>>> consistent and correct response. In this context, D can be seen as
>>> posing an incorrect question to H, as its contradictory nature
>>> undermines the possibility of a meaningful and accurate answer.
>>>
>>>
>>
>> Which means NOTHING as an LLM will tell non-truths if fed misleading
>> information.
>
> The above paragraph is proven to be completely true entirely
> on the basis of the meaning of its words as these words were
> defined in the dialogue that precedes them.
>

no it isn't.

Re: Why do H ⟨Ĥ⟩ ⟨Ĥ⟩ and Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ derive different results ?

<us8ici$24go$6@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=54020&group=comp.theory#54020

 by: immibis - Wed, 6 Mar 2024 01:54 UTC

On 5/03/24 17:48, olcott wrote:
> On 3/5/2024 3:33 AM, Mikko wrote:
>> On 2024-03-04 19:53:05 +0000, olcott said:
>>
>>> On 3/4/2024 3:22 AM, Mikko wrote:
>>>> On 2024-03-03 18:47:29 +0000, olcott said:
>>>>
>>>>> On 3/3/2024 11:48 AM, Mikko wrote:
>>>>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>>>>
>>>>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>>>>
>>>>>>>>> None-the-less actual computers do actually demonstrate
>>>>>>>>> actual very deep understanding of these things.
>>>>>>>>
>>>>>>>> Not very deep, just deeper than you can achieve.
>>>>>>>>
>>>>>>>
>>>>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>>>>
>>>>>> That does not demonstrate any understanding, even shallow.
>>>>>>
>>>>>
>>>>> The first thing that it does is agree that Hehner's
>>>>> "Carol's question" (augmented by Richard's critique)
>>>>> is an example of the Liar Paradox.
>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>
>>>>> It ends up concluding that myself, professor Hehner and
>>>>> professor Stoddart are all correct in that there is
>>>>> something wrong with the halting problem.
>>>>
>>>> None of that demonstrates any understanding.
>>>>
>>>>> My persistent focus on these ideas gives me an increasingly
>>>>> deeper understanding thus my latest position is that the
>>>>> halting problem proofs do not actually show that halting
>>>>> is not computable.
>>>>
>>>> Your understanding is still defective and shallow.
>>>>
>>>
>>> If it really was shallow then a gap in my reasoning
>>> could be pointed out.
>>
>> Gaps in your reasons are pointed out every day.
>>
>
> There are no actual gaps in my reasoning.

Gap in your reasoning: when you think that a copy of a Turing machine
can possibly give any different result from the original.

Re: Limits of computations != actual limits of computers [ Church Turing ]

<us8if5$24go$7@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=54021&group=comp.theory#54021

 by: immibis - Wed, 6 Mar 2024 01:55 UTC

On 5/03/24 18:21, olcott wrote:
> Not at all. The key thing that I do not know is whether
> a Turing Machine can somehow accomplish the same function
> result so that Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ can correctly determine that
> itself would never halt unless it transitions to Ĥ.Hqn.
>
> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does not halt
>
> I know that a RASP machine where every P knows its own address can
> easily do this. I am still trying to work out how a TM can do this.

It cannot. The only way it can know is if you tell it. And if you have
to tell it, then you can just lie to it. Moreover, that is extra input not
specified in the halting problem, so it doesn't solve the halting problem.

Re: Actual limits of computations != actual limits of computers with unlimited memory ?

<us8iku$24go$8@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=54022&group=comp.theory#54022

 by: immibis - Wed, 6 Mar 2024 01:58 UTC

On 5/03/24 17:58, olcott wrote:
> On 3/5/2024 3:54 AM, Mikko wrote:
>> On 2024-03-04 19:31:40 +0000, olcott said:
>>
>>> If there is a physical machine that can solve problems that a Turing
>>> machine cannot solve then we are only pretending that the limits of
>>> computation are the limits of computers.
>>
>> However, we have no idea how such machine could be constructed.
>
> The x86 language is already sufficiently isomorphic.

As long as the Turing machine doesn't use more than 2^32 bytes of memory
it is possible to translate a Turing machine into x86. However, most x86
instructions aren't equivalent to Turing machine instructions, so you
have to be careful when going the other way. For example, you could use
a relative call instruction to allow an x86 program to discover its own
address. Turing machines don't have relative call instructions, or
addresses. Anything that uses addresses might not be isomorphic to a
Turing machine.
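The relative-call point can be approximated in C (a sketch, not actual x86 assembly): a routine on a von Neumann machine can observe its own entry address, and anything it does with that value has no Turing machine analogue.

```c
#include <assert.h>
#include <stdint.h>

/* Sketch: a routine that can observe its own location in memory.
   A Turing machine's transition table has no notion of "address",
   so no TM state can condition its behavior on such a value. */
static uintptr_t where_am_i(void) {
    return (uintptr_t)&where_am_i;   /* the routine's own entry address */
}

int knows_own_address(void) {
    /* The observed value is the loader-assigned address, so behavior
       *could* be made to depend on placement, unlike a TM. */
    return where_am_i() == (uintptr_t)&where_am_i;
}
```

A program whose output depends on `where_am_i()` is the kind of thing that breaks the isomorphism claimed above.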

>
>> Although that machine might solve the halting problem of Turing
>> machines it would also create a new halting problem that it
>> cannot solve.
>
> The halting problem is already specified in the x86
> isomorphism to a RASP machine where every P knows
> its own machine address.
>
> u32 H(ptr P, ptr I)
> {
>   u32 Address_of_H = (u32)H;
>

If you show me a RASP function, is it possible for me to create another
function that returns the same thing the first function returns, even if
my function isn't at the same address?
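The question above can be sketched directly (a hypothetical pure function, not the actual H): if a function's result depends only on its arguments, a second copy of the same code at any other address returns the same value.

```c
#include <assert.h>

/* Two copies of the same pure computation.  f2 stands in for a copy of
   f relocated to a different address; since neither inspects its own
   location, they must agree on every argument. */
static int f(int x)  { return x * x + 1; }
static int f2(int x) { return x * x + 1; }

int same_result_everywhere(int x) {
    return f(x) == f2(x);
}
```

Only a function that, like H above, reads its own address can fail this property.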

Re: Chat GPT 4.0 affirms that Professors Hehner, Stoddart and I are correct

<us8imj$24go$9@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=54023&group=comp.theory#54023

 by: immibis - Wed, 6 Mar 2024 01:59 UTC

On 5/03/24 18:10, olcott wrote:
> On 3/5/2024 10:57 AM, immibis wrote:
>> On 3/03/24 02:29, olcott wrote:
>>> On 3/2/2024 6:53 PM, Richard Damon wrote:
>>>> Note, Computers, as generally viewed, especially for "Computation
>>>> Theory", have the limitation of being deterministic, which DOES make
>>>> them less powerful than the human mind, which has free will.
>>>
>>> LLMs have contradicted that. They are inherently stochastic.
>>
>> Incorrect. That they use a random number generator as part of their
>> algorithm does not make them special. You can use a true random number
>> generator, which must be counted as an input to the computation, or
>> you can use a pseudo-random number generator which is deterministic.
>> This is no different from a computerized game of Blackjack.
>
> Paradoxical Yes/No Dilemma  June 17, 2023
> My copyright notice is at the bottom showing that this is my dialogue
> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>
> *This was written by ChatGPT summing up its complete agreement*

ChatGPT tells you what you want to hear. It predicts the most probable
next word.
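The determinism point from earlier in this subthread can be sketched with a minimal linear congruential generator (constants from the common Numerical Recipes parameterization; any fixed-seed PRNG would do): "random" sampling driven by a seeded PRNG is a deterministic function of the seed.

```c
#include <assert.h>
#include <stdint.h>

/* Minimal linear congruential generator.  Fixing the seed fixes the
   entire "random" stream, so any sampling procedure driven by it is a
   deterministic computation of (input, seed). */
static uint32_t lcg_next(uint32_t *state) {
    *state = *state * 1664525u + 1013904223u;
    return *state;
}

int streams_identical(void) {
    uint32_t a = 42, b = 42;            /* identical seeds */
    for (int i = 0; i < 1000; i++)
        if (lcg_next(&a) != lcg_next(&b))
            return 0;
    return 1;                            /* the streams never diverge */
}
```

This is why a pseudo-randomly sampled LLM is no less deterministic than a computerized game of Blackjack: the randomness is an input, not a new computational power.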

Re: Why does H1(D,D) actually get a different result than H(D,D) ???

<us8mi2$6d79$2@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=54030&group=comp.theory#54030

 by: olcott - Wed, 6 Mar 2024 03:05 UTC

On 3/5/2024 7:49 PM, immibis wrote:
> On 3/03/24 02:38, olcott wrote:
>> ChatGPT 4.0 dialogue.
>> https://www.liarparadox.org/ChatGPT_HP.pdf
>
> You must understand that ChatGPT is a professional liar.
>

This occurs because ChatGPT has no basis to distinguish fact from fiction:
https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)

When we are only analyzing the validity of ChatGPT's reasoning, it
scored perfectly on this paragraph:

When an input, such as the halting problem's pathological input D, is
designed to contradict every value that the halting decider H returns,
it creates a self-referential paradox that prevents H from providing a
consistent and correct response. In this context, D can be seen as
posing an incorrect question to H, as its contradictory nature
undermines the possibility of a meaningful and accurate answer.

The above paragraph follows semantically from its preceding (34-page)
dialogue that progressively refines the meaning of my terms.

--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Why does H1(D,D) actually get a different result than H(D,D) ???

<us8mln$6ifs$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=54031&group=comp.theory#54031

 by: olcott - Wed, 6 Mar 2024 03:07 UTC

On 3/5/2024 7:51 PM, immibis wrote:
> On 3/03/24 00:25, olcott wrote:
>> On 3/2/2024 5:15 PM, immibis wrote:
>>> On 2/03/24 23:28, olcott wrote:
>>>> The reason that people assume that H1(D,D) must get
>>>> the same result as H(D,D) is that they make sure
>>>> to ignore the reason why they get a different result.
>>>>
>>>> It turns out that the only reason that H1(D,D) derives a
>>>> different result than H(D,D) is that H is at a different
>>>> physical machine address than H1.
>>>
>>> Incorrect - the reason that H1(D,D) derives a different result is
>>> that H *looks for* a different physical machine address than H1.
>>
>> _D()
>> [00001cf2] 55         push ebp
>> [00001cf3] 8bec       mov ebp,esp
>> [00001cf5] 51         push ecx
>> [00001cf6] 8b4508     mov eax,[ebp+08]
>> [00001cf9] 50         push eax
>> [00001cfa] 8b4d08     mov ecx,[ebp+08]
>> [00001cfd] 51         push ecx
>> [00001cfe] e81ff8ffff call 00001522 ; call to H
>> ...
>>
>> *That is factually incorrect*
>> H and H1 simulate the actual code of D which actually
>> calls H at machine address 00001522 and does not call
>> H1 at machine address 00001422.
>
> H stores a memory that if the executed program calls machine address
> 00001522 with the same parameters, it should abort the simulation.
>
> H1 stores a memory that if the executed program calls machine address
> 00001422 with the same parameters, it should abort the simulation.
>

*Good job! Richard could never get this*

--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?

<us8nbp$6ifs$2@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=54032&group=comp.theory#54032

 by: olcott - Wed, 6 Mar 2024 03:19 UTC

On 3/5/2024 7:53 PM, immibis wrote:
> On 4/03/24 05:37, olcott wrote:
>> On 3/3/2024 10:25 PM, Richard Damon wrote:
>>> On 3/3/24 10:32 PM, olcott wrote:
>>>> On 3/3/2024 8:24 PM, Richard Damon wrote:
>>>>> On 3/3/24 9:13 PM, olcott wrote:
>>>>>> On 3/3/2024 2:40 PM, Richard Damon wrote:
>>>>>>> On 3/3/24 1:47 PM, olcott wrote:
>>>>>>>> On 3/3/2024 11:48 AM, Mikko wrote:
>>>>>>>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>>>>>>>
>>>>>>>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>>>>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>>>>>>>
>>>>>>>>>>>> None-the-less actual computers do actually demonstrate
>>>>>>>>>>>> actual very deep understanding of these things.
>>>>>>>>>>>
>>>>>>>>>>> Not very deep, just deeper than you can achieve.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>>>>>>>
>>>>>>>>> That does not demonstrate any understanding, even shallow.
>>>>>>>>>
>>>>>>>>
>>>>>>>> The first thing that it does is agree that Hehner's
>>>>>>>> "Carol's question" (augmented by Richard's critique)
>>>>>>>> is an example of the Liar Paradox.
>>>>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>>>>
>>>>>>>> It ends up concluding that myself, professor Hehner and
>>>>>>>> professor Stoddart are all correct in that there is
>>>>>>>> something wrong with the halting problem.
>>>>>>>
>>>>>>> Which since it is proven that Chat GPT doesn't actually know what
>>>>>>> is a fact, and has been proven to lie,
>>>>>>
>>>>>> The first thing that it figured out on its own is that
>>>>>> Carol's question is isomorphic to the Liar Paradox.
>>>>>>
>>>>>> It eventually agreed with the same conclusion that
>>>>>> myself and professors Hehner and Stoddart agreed to.
>>>>>> It took 34 pages of dialog to understand this. I
>>>>>> finally have a good PDF of this.
>>>>>>
>>>>>
>>>>> It didn't "figure it out"; it pattern-matched it to previous input
>>>>> it has been given.
>>>>>
>>>>> If it took 34 pages to agree with your conclusion, then it really
>>>>> didn't agree with you initially, but you finally trained it to your
>>>>> version of reality.
>>>>
>>>> *HERE IS ITS AGREEMENT*
>>>> When an input, such as the halting problem's pathological input D, is
>>>> designed to contradict every value that the halting decider H returns,
>>>> it creates a self-referential paradox that prevents H from providing a
>>>> consistent and correct response. In this context, D can be seen as
>>>> posing an incorrect question to H, as its contradictory nature
>>>> undermines the possibility of a meaningful and accurate answer.
>>>>
>>>>
>>>
>>> Which means NOTHING as an LLM will tell non-truths if fed misleading
>>> information.
>>
>> The above paragraph is proven to be completely true entirely
>> on the basis of the meaning of its words as these words were
>> defined in the dialogue that precedes them.
>>
>
> no it isn't.

I am not using the meanings that you think the words have.
I am using the meanings of these words that I progressively refined
within a 34-page dialogue.

When we take my meanings as the premises then the ChatGPT paragraph
is perfectly valid on the basis of these meanings.

ChatGPT was able to build something like a knowledge ontology of
these meanings and thus close all ambiguity gaps. People get too
overwhelmed by this degree of detail.

--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Limits of computations != actual limits of computers [ Church Turing ]

Message-ID: <us8ni5$6ifs$3@dont-email.me>
https://www.novabbs.com/devel/article-flat.php?id=54033&group=comp.theory#54033

From: polco...@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic
Subject: Re: Limits of computations != actual limits of computers [ Church
Turing ]
Date: Tue, 5 Mar 2024 21:22:45 -0600
In-Reply-To: <us8if5$24go$7@dont-email.me>
 by: olcott - Wed, 6 Mar 2024 03:22 UTC

On 3/5/2024 7:55 PM, immibis wrote:
> On 5/03/24 18:21, olcott wrote:
>> Not at all. The key thing that I do not know is whether
>> a Turing Machine can somehow accomplish the same functional
>> result so that Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ can correctly determine that it
>> itself would never halt unless it transitions to Ĥ.Hqn.
>>
>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does not halt
>>
>> I know that a RASP machine where every P knows its own address can
>> easily do this. I am still trying to work out how a TM can do this.
>
> It cannot. The only way it can know is if you tell it. And if you have
> to tell it, then you can just lie to it. Also, that is extra input not
> specified in the halting problem, so it doesn't solve the halting problem.

A RASP machine P only needs to ask its interpreter what its
own machine address is.

Unlike the UTM, the RASP model has two sets of instructions – the state
machine table of instructions (the "interpreter") and the "program" in
the holes.
https://en.wikipedia.org/wiki/Random-access_stored-program_machine

--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

