

devel / comp.theory / Re: Why does H1(D,D) actually get a different result than H(D,D) ???

Subject / Author
* Why does H1(D,D) actually get a different result than H(D,D) ???olcott
+* Re: Why does H1(D,D) actually get a different result than H(D,D) ???Richard Damon
|+* Re: Why does H1(D,D) actually get a different result than H(D,D) ???olcott
||+* Re: Why does H1(D,D) actually get a different result than H(D,D) ???Richard Damon
|||`* Re: Why does H1(D,D) actually get a different result than H(D,D) ???olcott
||| +* Re: Why does H1(D,D) actually get a different result than H(D,D) ???Richard Damon
||| |`* Re: Why does H1(D,D) actually get a different result than H(D,D) ???olcott
||| | +* Re: Why does H1(D,D) actually get a different result than H(D,D) ???Richard Damon
||| | |`* Re: Why does H1(D,D) actually get a different result than H(D,D) ???olcott
||| | | +* Re: Why does H1(D,D) actually get a different result than H(D,D) ???Mikko
||| | | |`* Re: Why does H1(D,D) actually get a different result than H(D,D) ???olcott
||| | | | +- Re: Why does H1(D,D) actually get a different result than H(D,D) ???Richard Damon
||| | | | `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???Mikko
||| | | |  `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???olcott
||| | | |   +* Re: Why does H1(D,D) actually get a different result than H(D,D) ???Richard Damon
||| | | |   |`* Re: Why does H1(D,D) actually get a different result than H(D,D) ???olcott
||| | | |   | `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???Richard Damon
||| | | |   |  `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???olcott
||| | | |   |   `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???Richard Damon
||| | | |   |    `* How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||| | | |   |     +* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||| | | |   |     |`* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||| | | |   |     | +* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||| | | |   |     | |`* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||| | | |   |     | | `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||| | | |   |     | |  `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||| | | |   |     | |   `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||| | | |   |     | |    `- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||| | | |   |     | +* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Mikko
||| | | |   |     | |`* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||| | | |   |     | | +* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||| | | |   |     | | |`* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||| | | |   |     | | | `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||| | | |   |     | | |  `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||| | | |   |     | | |   `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||| | | |   |     | | |    `- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||| | | |   |     | | `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Mikko
||| | | |   |     | |  `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||| | | |   |     | |   +* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||| | | |   |     | |   |`* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||| | | |   |     | |   | `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||| | | |   |     | |   |  `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||| | | |   |     | |   |   `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||| | | |   |     | |   |    `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||| | | |   |     | |   |     `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||| | | |   |     | |   |      `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||| | | |   |     | |   |       `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||| | | |   |     | |   |        `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||| | | |   |     | |   |         `- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||| | | |   |     | |   `- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?immibis
||| | | |   |     | `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||| | | |   |     |  `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||| | | |   |     |   `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||| | | |   |     |    +* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||| | | |   |     |    |+- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||| | | |   |     |    |`* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Mikko
||| | | |   |     |    | `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||| | | |   |     |    |  +- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?immibis
||| | | |   |     |    |  `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Mikko
||| | | |   |     |    |   `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||| | | |   |     |    |    +* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?immibis
||| | | |   |     |    |    |+* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||| | | |   |     |    |    ||+- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||| | | |   |     |    |    ||`* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?immibis
||| | | |   |     |    |    || `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||| | | |   |     |    |    ||  +- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?immibis
||| | | |   |     |    |    ||  `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||| | | |   |     |    |    ||   `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||| | | |   |     |    |    ||    `- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||| | | |   |     |    |    |`- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||| | | |   |     |    |    `- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||| | | |   |     |    `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Mikko
||| | | |   |     |     `- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||| | | |   |     `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?immibis
||| | | |   |      `- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||| | | |   `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???Mikko
||| | | |    `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???olcott
||| | | |     +* Re: Why does H1(D,D) actually get a different result than H(D,D) ???Richard Damon
||| | | |     |`* Correcting the foundation of analytic truth and Linz H ⟨Ĥ⟩ ⟨Ĥ⟩olcott
||| | | |     | `* Re: Correcting the foundation of analytic truth and Linz H ⟨Ĥ⟩ ⟨Ĥ⟩Richard Damon
||| | | |     |  `* Re: Correcting the foundation of analytic truth and Linz H ⟨Ĥ⟩ ⟨Ĥ⟩olcott
||| | | |     |   `- Re: Correcting the foundation of analytic truth and Linz H ⟨Ĥ⟩ ⟨Ĥ⟩Richard Damon
||| | | |     `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???Mikko
||| | | |      `* Why do H ⟨Ĥ⟩ ⟨Ĥ⟩ and Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ derive different results ?olcott
||| | | |       +- Re: Why do H ⟨Ĥ⟩ ⟨Ĥ⟩ and Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ derive different results ?immibis
||| | | |       `- Re: Why do H ⟨Ĥ⟩ ⟨Ĥ⟩ and Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ derive different results ?Richard Damon
||| | | `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???Richard Damon
||| | |  `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???olcott
||| | |   `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???Richard Damon
||| | |    `* Actual limits of computations != actual limits of computers with unlimited memorolcott
||| | |     `* Re: Actual limits of computations != actual limits of computers with unlimited mRichard Damon
||| | |      `* Re: Actual limits of computations != actual limits of computers with unlimited molcott
||| | |       `* Re: Actual limits of computations != actual limits of computers with unlimited mRichard Damon
||| | |        `* Re: Actual limits of computations != actual limits of computers with unlimited molcott
||| | |         +* Re: Actual limits of computations != actual limits of computers with unlimited mRichard Damon
||| | |         |`* Limits of computations != actual limits of computers [ Church Turing ]olcott
||| | |         | +* Re: Limits of computations != actual limits of computers [ Church Turing ]Richard Damon
||| | |         | |`* Re: Limits of computations != actual limits of computers [ Church Turing ]olcott
||| | |         | | `* Re: Limits of computations != actual limits of computers [ Church Turing ]Richard Damon
||| | |         | |  +* Re: Limits of computations != actual limits of computers [ Church Turing ]olcott
||| | |         | |  |+* Re: Limits of computations != actual limits of computers [ Church Turing ]immibis
||| | |         | |  |`* Re: Limits of computations != actual limits of computers [ Church Turing ]Richard Damon
||| | |         | |  `* Re: Limits of computations != actual limits of computers [ Church Turing ]olcott
||| | |         | `- Re: Finlayson [ Church Turing ]Ross Finlayson
||| | |         `* Re: Actual limits of computations != actual limits of computers with unlimited mMikko
||| | `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???immibis
||| `- Re: Why does H1(D,D) actually get a different result than H(D,D) ???immibis
||`- Re: Why does H1(D,D) actually get a different result than H(D,D) ???Mikko
|+- Re: Why does H1(D,D) actually get a different result than H(D,D) ???olcott
|+* Re: Why does H1(D,D) actually get a different result than H(D,D) ???olcott
|`* Re: Why does H1(D,D) actually get a different result than H(D,D) ???Tristan Wibberley
+* Re: Why does H1(D,D) actually get a different result than H(D,D) ???immibis
`* Re: Why does H1(D,D) actually get a different result than H(D,D) ???Mikko

Re: Why does H1(D,D) actually get a different result than H(D,D) ???

<us1kti$2f46h$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=53879&group=comp.theory#53879

Path: i2pn2.org!i2pn.org!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED.176-93-237-4.bb.dnainternet.fi!not-for-mail
From: mikko.le...@iki.fi (Mikko)
Newsgroups: comp.theory
Subject: Re: Why does H1(D,D) actually get a different result than H(D,D) ???
Date: Sun, 3 Mar 2024 12:54:42 +0200
Organization: -
Lines: 10
Message-ID: <us1kti$2f46h$1@dont-email.me>
References: <us096s$23k0u$2@dont-email.me> <us0ao0$fjqv$19@i2pn2.org> <us0b51$23tf5$1@dont-email.me> <us0hbq$fjqv$20@i2pn2.org> <us0ii7$25emo$1@dont-email.me> <us0jrl$fjqu$9@i2pn2.org> <us0ka2$25m8f$1@dont-email.me> <us0kp0$fjqu$12@i2pn2.org> <us0m4i$25m8f$4@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Info: dont-email.me; posting-host="176-93-237-4.bb.dnainternet.fi:176.93.237.4";
logging-data="2592977"; mail-complaints-to="abuse@eternal-september.org"
User-Agent: Unison/2.2
 by: Mikko - Sun, 3 Mar 2024 10:54 UTC

On 2024-03-03 02:09:22 +0000, olcott said:

> None-the-less actual computers do actually demonstrate
> actual very deep understanding of these things.

Not very deep, just deeper than you can achieve.

--
Mikko

Re: Why does H1(D,D) actually get a different result than H(D,D) ???

<us1pk9$fjqv$24@i2pn2.org>

https://www.novabbs.com/devel/article-flat.php?id=53880&group=comp.theory#53880

Path: i2pn2.org!.POSTED!not-for-mail
From: rich...@damon-family.org (Richard Damon)
Newsgroups: comp.theory,sci.logic
Subject: Re: Why does H1(D,D) actually get a different result than H(D,D) ???
Date: Sun, 3 Mar 2024 07:15:05 -0500
Organization: i2pn2 (i2pn.org)
Message-ID: <us1pk9$fjqv$24@i2pn2.org>
References: <us096s$23k0u$2@dont-email.me> <us0buc$2490j$1@dont-email.me>
<us0chc$24a9q$1@dont-email.me> <us0hqq$fjqv$22@i2pn2.org>
<us0jp9$25l2k$1@dont-email.me> <us0kkb$fjqu$11@i2pn2.org>
<us0lga$25m8f$3@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 3 Mar 2024 12:15:05 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="511839"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
In-Reply-To: <us0lga$25m8f$3@dont-email.me>
X-Spam-Checker-Version: SpamAssassin 4.0.0
Content-Language: en-US
 by: Richard Damon - Sun, 3 Mar 2024 12:15 UTC

On 3/2/24 8:58 PM, olcott wrote:
> On 3/2/2024 7:43 PM, Richard Damon wrote:
>> On 3/2/24 8:29 PM, olcott wrote:
>>> On 3/2/2024 6:55 PM, Richard Damon wrote:
>>>> On 3/2/24 6:25 PM, olcott wrote:
>>>>> On 3/2/2024 5:15 PM, immibis wrote:
>>>>>> On 2/03/24 23:28, olcott wrote:
>>>>>>> The reason that people assume that H1(D,D) must get
>>>>>>> the same result as H(D,D) is that they make sure
>>>>>>> to ignore the reason why they get a different result.
>>>>>>>
>>>>>>> It turns out that the only reason that H1(D,D) derives a
>>>>>>> different result than H(D,D) is that H is at a different
>>>>>>> physical machine address than H1.
>>>>>>
>>>>>> Incorrect - the reason that H1(D,D) derives a different result is
>>>>>> that H *looks for* a different physical machine address than H1.
>>>>>
>>>>> _D()
>>>>> [00001cf2] 55         push ebp
>>>>> [00001cf3] 8bec       mov ebp,esp
>>>>> [00001cf5] 51         push ecx
>>>>> [00001cf6] 8b4508     mov eax,[ebp+08]
>>>>> [00001cf9] 50         push eax
>>>>> [00001cfa] 8b4d08     mov ecx,[ebp+08]
>>>>> [00001cfd] 51         push ecx
>>>>> [00001cfe] e81ff8ffff call 00001522 ; call to H
>>>>> ...
>>>>>
>>>>> *That is factually incorrect*
>>>>> H and H1 simulate the actual code of D which actually
>>>>> calls H at machine address 00001522 and does not call
>>>>> H1 at machine address 00001422.
>>>>
>>>> Which proves that H and H1 are not the "Same Computation" (if they
>>>> are computations at all)
>>>>
>>>
>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does not halt
>>>
>>> Yet since we ourselves can see that ⟨Ĥ⟩ ⟨Ĥ⟩ correctly
>>> simulated by Ĥ.H cannot possibly terminate unless
>>> this simulation is aborted.
>>>
>>> We ourselves can also see that Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩
>>> does abort its simulation then Ĥ will halt.
>>>
>>> I can't imagine that this will be forever too difficult
>>> for a computer especially when ChatGPT 4.0 eventually
>>> understood my critique the halting problem.
>>> https://www.liarparadox.org/ChatGPT_HP.pdf
>>>
>>>
>>
>> Because WE are not computations.
>>
>> You are just proving that you are just a pathetic ignorant
>> pathological lying idiot.
>
> Even though no LLM has any actual understanding
> it did demonstrate the functional equivalent of
> very deep understanding much more than any human
> has ever demonstrated on the reasoning behind my
> inferences.

But that is WORTHLESS for solving problems that need an actual solution.

An AI model like an LLM will NEVER be able to definitively say that the
input is non-halting.

In fact, even YOU have agreed that no computation can get the right
answer for all Halting Problems, so that holds even if the computation
is powered by AI.
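
(For reference, the construction that claim rests on can be sketched in a few
lines of C. This is only a schematic illustration of the textbook argument,
not the actual H and D being argued about in this thread; H here is the
hypothetical decider and is deliberately left undefined.)

typedef int (*ptr)();          /* pointer-to-function type used only for this sketch */

int H(ptr p, ptr i);           /* hypothetical decider: 1 = p(i) halts, 0 = it loops  */

int D(ptr p)
{
    if (H(p, p))               /* if H claims that p(p) halts ... */
        for (;;) ;             /* ... then D loops forever        */
    return 0;                  /* if H claims it loops, D halts   */
}

int main(void)
{
    /* No value H can return for H(D,D) matches the behavior of D(D). */
    return H((ptr)D, (ptr)D);
}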

You are just showing how little you understand about what you speak.

>
> I found that the entire original dialogue is still
> there. It is named Paradoxical Yes_No Dilemma.
> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>
> No PhD professors specializing in the these things
> ever demonstrated the same depth of understanding
> in any of their papers.
>

Maybe that says something about how you express your statements.

Re: Why does H1(D,D) actually get a different result than H(D,D) ???

<us1pkc$fjqv$25@i2pn2.org>

https://www.novabbs.com/devel/article-flat.php?id=53881&group=comp.theory#53881

Path: i2pn2.org!.POSTED!not-for-mail
From: rich...@damon-family.org (Richard Damon)
Newsgroups: comp.theory,sci.logic
Subject: Re: Why does H1(D,D) actually get a different result than H(D,D) ???
Date: Sun, 3 Mar 2024 07:15:08 -0500
Organization: i2pn2 (i2pn.org)
Message-ID: <us1pkc$fjqv$25@i2pn2.org>
References: <us096s$23k0u$2@dont-email.me> <us0ao0$fjqv$19@i2pn2.org>
<us0b51$23tf5$1@dont-email.me> <us0hbq$fjqv$20@i2pn2.org>
<us0ii7$25emo$1@dont-email.me> <us0jrl$fjqu$9@i2pn2.org>
<us0ka2$25m8f$1@dont-email.me> <us0kp0$fjqu$12@i2pn2.org>
<us0m4i$25m8f$4@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 3 Mar 2024 12:15:08 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="511839"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
In-Reply-To: <us0m4i$25m8f$4@dont-email.me>
Content-Language: en-US
X-Spam-Checker-Version: SpamAssassin 4.0.0
 by: Richard Damon - Sun, 3 Mar 2024 12:15 UTC

On 3/2/24 9:09 PM, olcott wrote:
> On 3/2/2024 7:46 PM, Richard Damon wrote:
>> On 3/2/24 8:38 PM, olcott wrote:
>>> On 3/2/2024 7:30 PM, Richard Damon wrote:
>>>> On 3/2/24 8:08 PM, olcott wrote:
>>>>> On 3/2/2024 6:47 PM, Richard Damon wrote:
>>>>>> On 3/2/24 6:01 PM, olcott wrote:
>>>>>>> On 3/2/2024 4:54 PM, Richard Damon wrote:
>>>>>>>> On 3/2/24 5:28 PM, olcott wrote:
>>>>>>>>> The reason that people assume that H1(D,D) must get
>>>>>>>>> the same result as H(D,D) is that they make sure
>>>>>>>>> to ignore the reason why they get a different result.
>>>>>>>>
>>>>>>>> Namely that you are lying that H and H1 are actually the same
>>>>>>>> computation.
>>>>>>>>
>>>>>>>>>
>>>>>>>>> It turns out that the only reason that H1(D,D) derives a
>>>>>>>>> different result than H(D,D) is that H is at a different
>>>>>>>>> physical machine address than H1.
>>>>>>>>
>>>>>>>> Which means that H and H1 are not computations, and you have
>>>>>>>> been just an ignorant pathological liar all this time.
>>>>>>>>
>>>>>>>>>
>>>>>>>>> D calls 00001522 // machine address of H
>>>>>>>>> thus D does call 00001422 // machine address of H1
>>>>>>>>>
>>>>>>>>> Since this <is> the empirically determined deciding
>>>>>>>>> factor I don't think that it can be correctly ignored.
>>>>>>>>
>>>>>>>> Right, but since the algorithm for H/H1 uses the address of the
>>>>>>>> decider, which isn't defined as an "input" to it, we see that
>>>>>>>> you have been lying that this code is a computation. Likely
>>>>>>>> because you have made yourself ignorant of what a computation
>>>>>>>> actually is,
>>>>>>>>
>>>>>>>> Thus you have made yourself into an Ignorant Pathological Lying
>>>>>>>> Idiot.
>>>>>>>>
>>>>>>>>>
>>>>>>>>> Any physically implemented Turing machine must exist
>>>>>>>>> at some physical memory location.
>>>>>>>>
>>>>>>>> Nope.
>>>>>>> Any *physically implemented Turing machine*
>>>>>>> *physically implemented Turing machine*
>>>>>>> *physically implemented Turing machine*
>>>>>>> *physically implemented Turing machine*
>>>>>>> must exist at some physical memory location.
>>>>>>>
>>>>>>> Try and explain the details of how it can be otherwise.
>>>>>>>
>>>>>>
>>>>>> And description of a Turing Machine (or a Computation) that needs
>>>>>> to reference atributes of Modern Electronic Computers is just
>>>>>> WRONG as they predate the development of such a thing.
>>>>>>
>>>>>
>>>>> Virtual Machines that are exactly Turing Machines
>>>>> except for unlimited memory can and do exist.
>>>>>
>>>>> They necessarily must be implemented in physical memory
>>>>> and cannot possibly be implemented any other way.
>>>>
>>>> So?
>>>>
>>>> Doesn't let a "Computation" change its answer based on its memory
>>>> address.
>>>>
>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does not halt
>>>
>>> We ourselves can see that ⟨Ĥ⟩ ⟨Ĥ⟩ correctly simulated by
>>> Ĥ.H cannot possibly terminate unless Ĥ.H aborts its
>>> simulation.
>>>
>>> We ourselves can also see that when Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩
>>> does abort its simulation then Ĥ will halt.
>>>
>>> That no computer will ever achieve this degree of
>>> understanding is directly contradicted by this:
>>>
>>> ChatGPT 4.0 dialogue.
>>> https://www.liarparadox.org/ChatGPT_HP.pdf
>>>
>>>
>>
>> No COMPUTATION can solve it, because it has been proved impossible.
>
> None-the-less actual computers do actually demonstrate
> actual very deep understanding of these things.

Do computers actually UNDERSTAND?

>
> This proves that the artifice of the human notion of
> computation is more limiting than actual real computers.

In other words, you reject the use of definitions to define words.

I guess to you, nothing means what others have said it means.

>
> It pretends to show the limits of what computers can do
> yet is simply wrong about this.

Nope, it shows YOUR stupidity.

You don't even know what you are talking about.

>
>> Note, ChatGPT has been proven to LIE, so you can't trust what it says.
> That is why AI needs my formalization of the notion of
> Boolean True(language L, Expression x).

Which, as proven, CAN'T EXIST.

Of course, you are too stupid to understand the proof.

>
> The Cyc researchers proposed a solution similar to mine.
> LLMs become anchored in knowledge ontologies so that they
> can show the inference steps of their conclusion.
>
> Getting from Generative AI to Trustworthy AI:
> What LLMs might learn from Cyc
> https://arxiv.org/abs/2308.04445
>

Yes, the concept of a "Proof" can be converted into a program, and a
program can search and TRY to find a proof for a statement.

The problem is that even if a proof does exist, it might take too long
to find it, and some truths are just not provable, so even the fastest
computer won't be able to solve the problem.

You are still missing the fact that "Computation Theory" is not really
about "Computers", as it predates the modern electronic computer by
decades, but is about logic, and what can be done in a finite number of
finite steps.
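
(As a rough sketch of that "search and TRY" idea: enumerate candidate
derivations shortest-first and hand each one to a verifier. Everything below
is illustrative; check_proof is an assumed placeholder rather than a real
verifier, and the length bound exists only so the demo terminates.)

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Assumed placeholder: a real verifier would check a candidate derivation
   against the axioms and inference rules of the formal system.            */
static bool check_proof(const char *candidate, const char *statement)
{
    (void)statement;
    return strcmp(candidate, "ba") == 0;   /* toy answer so the demo halts */
}

int main(void)
{
    const char alphabet[] = "ab";          /* toy symbol set               */
    const int  nsym = 2;
    enum { MAX_LEN = 16 };                 /* bound only so the demo stops */
    char buf[MAX_LEN + 1];

    for (int len = 1; len <= MAX_LEN; ++len) {
        int idx[MAX_LEN] = {0};            /* odometer over symbol indices */
        for (;;) {
            for (int i = 0; i < len; ++i) buf[i] = alphabet[idx[i]];
            buf[len] = '\0';
            if (check_proof(buf, "some statement")) {
                printf("proof found: %s\n", buf);
                return 0;
            }
            int i = len - 1;               /* advance to the next candidate    */
            while (i >= 0 && ++idx[i] == nsym) idx[i--] = 0;
            if (i < 0) break;              /* all strings of this length tried */
        }
    }
    /* Without MAX_LEN this search runs forever whenever no proof exists,
       which is exactly the limitation described above.                    */
    printf("no proof up to length %d\n", MAX_LEN);
    return 1;
}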

Re: Why does H1(D,D) actually get a different result than H(D,D) ???

<us1pke$fjqv$26@i2pn2.org>

https://www.novabbs.com/devel/article-flat.php?id=53882&group=comp.theory#53882

Path: i2pn2.org!.POSTED!not-for-mail
From: rich...@damon-family.org (Richard Damon)
Newsgroups: comp.theory,sci.logic
Subject: Re: Why does H1(D,D) actually get a different result than H(D,D) ???
Date: Sun, 3 Mar 2024 07:15:10 -0500
Organization: i2pn2 (i2pn.org)
Message-ID: <us1pke$fjqv$26@i2pn2.org>
References: <us096s$23k0u$2@dont-email.me> <us0ao0$fjqv$19@i2pn2.org>
<us0g30$251f5$1@dont-email.me> <us0hln$fjqv$21@i2pn2.org>
<us0j45$25en4$1@dont-email.me> <us0kgi$fjqu$10@i2pn2.org>
<us0l3h$25m8f$2@dont-email.me> <us0lf7$fjqv$23@i2pn2.org>
<us0mji$25m8f$5@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Sun, 3 Mar 2024 12:15:10 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="511839"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
X-Spam-Checker-Version: SpamAssassin 4.0.0
Content-Language: en-US
In-Reply-To: <us0mji$25m8f$5@dont-email.me>
 by: Richard Damon - Sun, 3 Mar 2024 12:15 UTC

On 3/2/24 9:17 PM, olcott wrote:
> On 3/2/2024 7:57 PM, Richard Damon wrote:
>> On 3/2/24 8:51 PM, olcott wrote:
>>> On 3/2/2024 7:41 PM, Richard Damon wrote:
>>
>>>> As I said, they don't actually "Know" anything, just that some words
>>>> work togther well to make what sounds like a reasonable answer.
>>>
>>> An LLM determines what the most probable next text will be.
>>>
>>
>> Which has NOTHING to do with "KNOWLEDGE"
>>
>> The most likely word sequence is not based on what is "True" but based
>> on the learning samples, done with ZERO understanding of the semantics
>> of the data, just what seems the most likely sequence.
>
> None-the-less it does demonstrate the functional equivalent of
> deep understanding that has baffled its original developers.
>

Yes, we can build programs that handle bigger models than we can
handle in our minds.

I am not sure what you mean by "functional equivalent" (I suspect just
another of your lies) but in no way do these models demonstrate
"Understanding".

>
>>
>> You are just proving that you are an ignorant pathological liar.
>

Re: Why does H1(D,D) actually get a different result than H(D,D) ???

<us23p2$2i101$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=53887&group=comp.theory#53887

Path: i2pn2.org!i2pn.org!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED.97.119.201.57!not-for-mail
From: polco...@gmail.com (olcott)
Newsgroups: comp.theory
Subject: Re: Why does H1(D,D) actually get a different result than H(D,D) ???
Date: Sun, 3 Mar 2024 09:08:17 -0600
Organization: A noiseless patient Spider
Lines: 17
Message-ID: <us23p2$2i101$1@dont-email.me>
References: <us096s$23k0u$2@dont-email.me> <us0ao0$fjqv$19@i2pn2.org>
<us0b51$23tf5$1@dont-email.me> <us0hbq$fjqv$20@i2pn2.org>
<us0ii7$25emo$1@dont-email.me> <us0jrl$fjqu$9@i2pn2.org>
<us0ka2$25m8f$1@dont-email.me> <us0kp0$fjqu$12@i2pn2.org>
<us0m4i$25m8f$4@dont-email.me> <us1kti$2f46h$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Sun, 3 Mar 2024 15:08:18 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="97.119.201.57";
logging-data="2688001"; mail-complaints-to="abuse@eternal-september.org"
User-Agent: Mozilla Thunderbird
Content-Language: en-US
In-Reply-To: <us1kti$2f46h$1@dont-email.me>
 by: olcott - Sun, 3 Mar 2024 15:08 UTC

On 3/3/2024 4:54 AM, Mikko wrote:
> On 2024-03-03 02:09:22 +0000, olcott said:
>
>> None-the-less actual computers do actually demonstrate
>> actual very deep understanding of these things.
>
> Not very deep, just deeper that you can achieve.
>

Chat GPT 4.0 agreed that my reasoning is sound.
My copyright notice is at the bottom.
https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b

--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Why does H1(D,D) actually get a different result than H(D,D) ???

<us2449$fjqv$31@i2pn2.org>

https://www.novabbs.com/devel/article-flat.php?id=53888&group=comp.theory#53888

Path: i2pn2.org!.POSTED!not-for-mail
From: rich...@damon-family.org (Richard Damon)
Newsgroups: comp.theory
Subject: Re: Why does H1(D,D) actually get a different result than H(D,D) ???
Date: Sun, 3 Mar 2024 10:14:17 -0500
Organization: i2pn2 (i2pn.org)
Message-ID: <us2449$fjqv$31@i2pn2.org>
References: <us096s$23k0u$2@dont-email.me> <us0ao0$fjqv$19@i2pn2.org>
<us0b51$23tf5$1@dont-email.me> <us0hbq$fjqv$20@i2pn2.org>
<us0ii7$25emo$1@dont-email.me> <us0jrl$fjqu$9@i2pn2.org>
<us0ka2$25m8f$1@dont-email.me> <us0kp0$fjqu$12@i2pn2.org>
<us0m4i$25m8f$4@dont-email.me> <us1kti$2f46h$1@dont-email.me>
<us23p2$2i101$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Sun, 3 Mar 2024 15:14:17 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="511839"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
Content-Language: en-US
X-Spam-Checker-Version: SpamAssassin 4.0.0
In-Reply-To: <us23p2$2i101$1@dont-email.me>
 by: Richard Damon - Sun, 3 Mar 2024 15:14 UTC

On 3/3/24 10:08 AM, olcott wrote:
> On 3/3/2024 4:54 AM, Mikko wrote:
>> On 2024-03-03 02:09:22 +0000, olcott said:
>>
>>> None-the-less actual computers do actually demonstrate
>>> actual very deep understanding of these things.
>>
>> Not very deep, just deeper that you can achieve.
>>
>
> Chat GPT 4.0 agreed that my reasoning is sound.
> My copyright notice it as the bottom.
> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>

So?

The Programming of Chat GPT is to try to agree with the user.

You just got a "Yes Man" to agree with you.

You just don't understand how AI works, or intelligence in general.

Re: Why does H1(D,D) actually get a different result than H(D,D) ???

<us26ba$2ijf2$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=53890&group=comp.theory#53890

Path: i2pn2.org!i2pn.org!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED.97.119.201.57!not-for-mail
From: polco...@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic
Subject: Re: Why does H1(D,D) actually get a different result than H(D,D) ???
Date: Sun, 3 Mar 2024 09:52:08 -0600
Organization: A noiseless patient Spider
Lines: 120
Message-ID: <us26ba$2ijf2$1@dont-email.me>
References: <us096s$23k0u$2@dont-email.me> <us0buc$2490j$1@dont-email.me>
<us0chc$24a9q$1@dont-email.me> <us0hqq$fjqv$22@i2pn2.org>
<us0jp9$25l2k$1@dont-email.me> <us0kkb$fjqu$11@i2pn2.org>
<us0lga$25m8f$3@dont-email.me> <us1pk9$fjqv$24@i2pn2.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 3 Mar 2024 15:52:10 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="97.119.201.57";
logging-data="2706914"; mail-complaints-to="abuse@eternal-september.org"
User-Agent: Mozilla Thunderbird
Content-Language: en-US
In-Reply-To: <us1pk9$fjqv$24@i2pn2.org>
 by: olcott - Sun, 3 Mar 2024 15:52 UTC

On 3/3/2024 6:15 AM, Richard Damon wrote:
> On 3/2/24 8:58 PM, olcott wrote:
>> On 3/2/2024 7:43 PM, Richard Damon wrote:
>>> On 3/2/24 8:29 PM, olcott wrote:
>>>> On 3/2/2024 6:55 PM, Richard Damon wrote:
>>>>> On 3/2/24 6:25 PM, olcott wrote:
>>>>>> On 3/2/2024 5:15 PM, immibis wrote:
>>>>>>> On 2/03/24 23:28, olcott wrote:
>>>>>>>> The reason that people assume that H1(D,D) must get
>>>>>>>> the same result as H(D,D) is that they make sure
>>>>>>>> to ignore the reason why they get a different result.
>>>>>>>>
>>>>>>>> It turns out that the only reason that H1(D,D) derives a
>>>>>>>> different result than H(D,D) is that H is at a different
>>>>>>>> physical machine address than H1.
>>>>>>>
>>>>>>> Incorrect - the reason that H1(D,D) derives a different result is
>>>>>>> that H *looks for* a different physical machine address than H1.
>>>>>>
>>>>>> _D()
>>>>>> [00001cf2] 55         push ebp
>>>>>> [00001cf3] 8bec       mov ebp,esp
>>>>>> [00001cf5] 51         push ecx
>>>>>> [00001cf6] 8b4508     mov eax,[ebp+08]
>>>>>> [00001cf9] 50         push eax
>>>>>> [00001cfa] 8b4d08     mov ecx,[ebp+08]
>>>>>> [00001cfd] 51         push ecx
>>>>>> [00001cfe] e81ff8ffff call 00001522 ; call to H
>>>>>> ...
>>>>>>
>>>>>> *That is factually incorrect*
>>>>>> H and H1 simulate the actual code of D which actually
>>>>>> calls H at machine address 00001522 and does not call
>>>>>> H1 at machine address 00001422.
>>>>>
>>>>> Which proves that H and H1 are not the "Same Computation" (if they
>>>>> are computations at all)
>>>>>
>>>>
>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does not halt
>>>>
>>>> Yet since we ourselves can see that ⟨Ĥ⟩ ⟨Ĥ⟩ correctly
>>>> simulated by Ĥ.H cannot possibly terminate unless
>>>> this simulation is aborted.
>>>>
>>>> We ourselves can also see that Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩
>>>> does abort its simulation then Ĥ will halt.
>>>>
>>>> I can't imagine that this will be forever too difficult
>>>> for a computer especially when ChatGPT 4.0 eventually
>>>> understood my critique the halting problem.
>>>> https://www.liarparadox.org/ChatGPT_HP.pdf
>>>>
>>>>
>>>
>>> Because WE are not computations.
>>>
>>> You are just proving that you are just a pathetic ignorant
>>> pathological lying idiot.
>>
>> Even though no LLM has any actual understanding
>> it did demonstrate the functional equivalent of
>> very deep understanding much more than any human
>> has ever demonstrated on the reasoning behind my
>> inferences.
>
> But that is WORTHLESS for solving problems that need an actual solution.
>
> An AI model like LLM will NEVER be able to definitively say the input is
> non-halting.
>
> In fact, even YOU have agreed that no computation can get the right
> answer for all Halting Problems,

I use the blind variation and selective retention (BVSR) process,
with its superfluity and backtracking aspects, as outlined by
Dean Keith Simonton:
https://www.scientificamerican.com/article/the-science-of-genius/

I constantly test slight variations of the same ideas until
I find one that works. When I read his article in Mind Magazine
12 years ago I instantly recognized my own process.

I also use first principles "to reverse-engineer complicated problems,"
making sure to totally ignore all assumptions that anyone
else had ever made in the problem domain. Thus I derive
the actual foundational first principles that do not
depend on any assumptions.
https://fs.blog/first-principles/

The idea that
"no computation can get the right answer for all Halting Problems"
has been revised with new information. The Linz Ĥ can only fool
its own internal Ĥ.H and cannot fool the actual Linz H.

> so that holds even if the computation
> is powered by AI.
>
> You are just showing how little you understand about what you speak.
>
>>
>> I found that the entire original dialogue is still
>> there. It is named Paradoxical Yes_No Dilemma.
>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>
>> No PhD professors specializing in the these things
>> ever demonstrated the same depth of understanding
>> in any of their papers.
>>
>
> Maybe that says something about how you express your statements.

I think that the issue is that people are very biased against
new ideas and a machine has no such bias.

--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Why does H1(D,D) actually get a different result than H(D,D) (Linz version)

<us2795$2ipl5$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=53891&group=comp.theory#53891

Path: i2pn2.org!i2pn.org!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED.97.119.201.57!not-for-mail
From: polco...@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic
Subject: Re: Why does H1(D,D) actually get a different result than H(D,D)
(Linz version)
Date: Sun, 3 Mar 2024 10:08:04 -0600
Organization: A noiseless patient Spider
Lines: 41
Message-ID: <us2795$2ipl5$1@dont-email.me>
References: <us096s$23k0u$2@dont-email.me> <us1kfq$2f1an$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 3 Mar 2024 16:08:05 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="97.119.201.57";
logging-data="2713253"; mail-complaints-to="abuse@eternal-september.org"
User-Agent: Mozilla Thunderbird
In-Reply-To: <us1kfq$2f1an$1@dont-email.me>
Content-Language: en-US
 by: olcott - Sun, 3 Mar 2024 16:08 UTC

On 3/3/2024 4:47 AM, Mikko wrote:
> On 2024-03-02 22:28:44 +0000, olcott said:
>
>> The reason that people assume that H1(D,D) must get
>> the same result as H(D,D) is that they make sure
>> to ignore the reason why they get a different result.
>
> It is quite obvious why H1 gets a different result from H.
> It simply is that your "simulator" does not simulate
> correctly. In spite of that, your H gets H(D,D) wrong,
> so it is not a counter example to Linz' proof.
>

H/D are equivalent to Ĥ and in reality that is the only
way that H/D can be defined in an actual Turing Machine.
This makes H1 equivalent to Linz H.

Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn // Ĥ applied to ⟨Ĥ⟩ does not halt

Using an adaptation of Mike's idea combined with an earlier
idea of mine: Ĥ.H simulates Ĥ applied to ⟨Ĥ⟩ until it sees
an exact copy of the same machine description trying to simulate
itself again with identical copies of the same input.

Then the outermost Ĥ.H transitions to Ĥ.Hqn indicating that it
must abort its simulation of Ĥ applied to ⟨Ĥ⟩ to prevent its own
infinite execution.
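
(A rough sketch of that abort rule, in the C-like style used elsewhere in the
thread. The names are hypothetical and the actual stepping of the simulated
machine is elided; the point is only the rule: record each machine/input pair
as its simulation starts, and report non-halting the moment an exact copy of
an already-running simulation starts again.)

#include <stdbool.h>
#include <string.h>

#define MAX_NESTED 64

/* Hypothetical record of simulations already in progress. */
static struct { const char *machine; const char *input; } seen[MAX_NESTED];
static int nseen = 0;

static bool already_simulating(const char *machine, const char *input)
{
    for (int i = 0; i < nseen; ++i)
        if (strcmp(seen[i].machine, machine) == 0 &&
            strcmp(seen[i].input, input) == 0)
            return true;
    return false;
}

/* Returns 0 ("would not halt") as soon as it catches an exact copy of the
   same machine description being simulated on identical input again;
   this corresponds to the transition to qn described above.              */
int simulating_decider(const char *machine, const char *input)
{
    if (already_simulating(machine, input))
        return 0;                       /* abort rather than recurse forever */
    if (nseen == MAX_NESTED)
        return 0;                       /* depth guard for this sketch       */

    seen[nseen].machine = machine;      /* remember this configuration       */
    seen[nseen].input   = input;
    nseen++;

    /* ... here the decider would single-step the described machine,
       re-entering simulating_decider() whenever the simulated machine
       applies its own copy of the decider to a description/input pair,
       and fall through when the simulation reaches a final state ...      */

    nseen--;                            /* this simulation level finished    */
    return 1;                           /* simulated machine halted          */
}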

Although this halt status does not correspond to the actual
behavior of Ĥ applied to ⟨Ĥ⟩ it does cause Ĥ to halt. When
H is applied to ⟨Ĥ⟩ ⟨Ĥ⟩ it can see that Ĥ applied to ⟨Ĥ⟩ halts.
*Thus Linz Ĥ can fool itself yet cannot fool Linz H*

The reason that this works is that Ĥ contradicts its own
internal copy of H yet cannot contradict the actual Linz H.

--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Why does H1(D,D) actually get a different result than H(D,D) ???

<us27ut$2iua6$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=53892&group=comp.theory#53892

Path: i2pn2.org!i2pn.org!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED.97.119.201.57!not-for-mail
From: polco...@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic
Subject: Re: Why does H1(D,D) actually get a different result than H(D,D) ???
Date: Sun, 3 Mar 2024 10:19:40 -0600
Organization: A noiseless patient Spider
Lines: 166
Message-ID: <us27ut$2iua6$1@dont-email.me>
References: <us096s$23k0u$2@dont-email.me> <us0ao0$fjqv$19@i2pn2.org>
<us0b51$23tf5$1@dont-email.me> <us0hbq$fjqv$20@i2pn2.org>
<us0ii7$25emo$1@dont-email.me> <us0jrl$fjqu$9@i2pn2.org>
<us0ka2$25m8f$1@dont-email.me> <us0kp0$fjqu$12@i2pn2.org>
<us0m4i$25m8f$4@dont-email.me> <us1pkc$fjqv$25@i2pn2.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 3 Mar 2024 16:19:41 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="97.119.201.57";
logging-data="2718022"; mail-complaints-to="abuse@eternal-september.org"
User-Agent: Mozilla Thunderbird
In-Reply-To: <us1pkc$fjqv$25@i2pn2.org>
Content-Language: en-US
 by: olcott - Sun, 3 Mar 2024 16:19 UTC

On 3/3/2024 6:15 AM, Richard Damon wrote:
> On 3/2/24 9:09 PM, olcott wrote:
>> On 3/2/2024 7:46 PM, Richard Damon wrote:
>>> On 3/2/24 8:38 PM, olcott wrote:
>>>> On 3/2/2024 7:30 PM, Richard Damon wrote:
>>>>> On 3/2/24 8:08 PM, olcott wrote:
>>>>>> On 3/2/2024 6:47 PM, Richard Damon wrote:
>>>>>>> On 3/2/24 6:01 PM, olcott wrote:
>>>>>>>> On 3/2/2024 4:54 PM, Richard Damon wrote:
>>>>>>>>> On 3/2/24 5:28 PM, olcott wrote:
>>>>>>>>>> The reason that people assume that H1(D,D) must get
>>>>>>>>>> the same result as H(D,D) is that they make sure
>>>>>>>>>> to ignore the reason why they get a different result.
>>>>>>>>>
>>>>>>>>> Namely that you are lying that H and H1 are actually the same
>>>>>>>>> computation.
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> It turns out that the only reason that H1(D,D) derives a
>>>>>>>>>> different result than H(D,D) is that H is at a different
>>>>>>>>>> physical machine address than H1.
>>>>>>>>>
>>>>>>>>> Which means that H and H1 are not computations, and you have
>>>>>>>>> been just an ignorant pathological liar all this time.
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> D calls 00001522 // machine address of H
>>>>>>>>>> thus D does call 00001422 // machine address of H1
>>>>>>>>>>
>>>>>>>>>> Since this <is> the empirically determined deciding
>>>>>>>>>> factor I don't think that it can be correctly ignored.
>>>>>>>>>
>>>>>>>>> Right, but since the algorithm for H/H1 uses the address of the
>>>>>>>>> decider, which isn't defined as an "input" to it, we see that
>>>>>>>>> you have been lying that this code is a computation. Likely
>>>>>>>>> because you have made yourself ignorant of what a computation
>>>>>>>>> actually is,
>>>>>>>>>
>>>>>>>>> Thus you have made yourself into an Ignorant Pathological Lying
>>>>>>>>> Idiot.
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Any physically implemented Turing machine must exist
>>>>>>>>>> at some physical memory location.
>>>>>>>>>
>>>>>>>>> Nope.
>>>>>>>> Any *physically implemented Turing machine*
>>>>>>>> *physically implemented Turing machine*
>>>>>>>> *physically implemented Turing machine*
>>>>>>>> *physically implemented Turing machine*
>>>>>>>> must exist at some physical memory location.
>>>>>>>>
>>>>>>>> Try and explain the details of how it can be otherwise.
>>>>>>>>
>>>>>>>
>>>>>>> And description of a Turing Machine (or a Computation) that needs
>>>>>>> to reference atributes of Modern Electronic Computers is just
>>>>>>> WRONG as they predate the development of such a thing.
>>>>>>>
>>>>>>
>>>>>> Virtual Machines that are exactly Turing Machines
>>>>>> except for unlimited memory can and do exist.
>>>>>>
>>>>>> They necessarily must be implemented in physical memory
>>>>>> and cannot possibly be implemented any other way.
>>>>>
>>>>> So?
>>>>>
>>>>> Doesn't let a "Computation" change its answer based on its memory
>>>>> address.
>>>>>
>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does not halt
>>>>
>>>> We ourselves can see that ⟨Ĥ⟩ ⟨Ĥ⟩ correctly simulated by
>>>> Ĥ.H cannot possibly terminate unless Ĥ.H aborts its
>>>> simulation.
>>>>
>>>> We ourselves can also see that when Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩
>>>> does abort its simulation then Ĥ will halt.
>>>>
>>>> That no computer will ever achieve this degree of
>>>> understanding is directly contradicted by this:
>>>>
>>>> ChatGPT 4.0 dialogue.
>>>> https://www.liarparadox.org/ChatGPT_HP.pdf
>>>>
>>>>
>>>
>>> No COMPUTATION can solve it, because it has been proved impossible.
>>
>> None-the-less actual computers do actually demonstrate
>> actual very deep understanding of these things.
>
> Do computers actually UNDERSTAND?

https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
It demonstrates the functional equivalent of deep understanding.
The first thing that it does is categorize Carol's question
as equivalent to the Liar Paradox.

>>
>> This proves that the artifice of the human notion of
>> computation is more limiting than actual real computers.
>
> In other words, you reject the use of definitions to define words.
>
> I guess to you, nothing means what others have said it means,
>

I have found that some definitions of technical terms
sometimes box people into misconceptions such that alternative
views are inexpressible within the technical language.
https://en.wikipedia.org/wiki/Linguistic_relativity

>>
>> It pretends to show the limits of what computers can do
>> yet is simply wrong about this.
>
> Nope, it shows YOUR stupidity.
>
> You don't even know what you are talkinga bout.
>
>
>>
>>> Note, ChatGPT has been proven to LIE, so you can't trust what it says.
>> That is why AI needs my formalization of the notion of
>> Boolean True(language L, Expression x).
>
> Which, as proven, CANT EXIST.
>
> Of course, you are too stupid to understand the proof.
>
>>
>> The Cyc researchers proposed a solution similar to mine.
>> LLMs become anchored in knowledge ontologies so that they
>> can show the inference steps of their conclusion.
>>
>> Getting from Generative AI to Trustworthy AI:
>> What LLMs might learn from Cyc
>> https://arxiv.org/abs/2308.04445
>>
>
> Yes, The concept of a "Proof" can be converted into a program, and a
> program can serach and TRY to find a proof for a statement.
>
> The problem is that even if a proof does exist, it might take too long
> to find it, and some truths are just not provable, so even the fastest
> computer won't be able to solve the problem.
>
> You are still missing the fact that "Computation Theory" is not really
> about "Computers", as it predates the modern electronic computer by
> decades, but is about logic, and what can be done in a finite number of
> finite steps.

Thus you acknowledge that the limits of this antiquated notion
of computations may not actually entail corresponding limits on
what actual computers can actually do.

Computations may indeed be an arbitrarily narrow specification
that does not actually apply to real computers.

--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Why does H1(D,D) actually get a different result than H(D,D) ???

<us2857$2iua6$2@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=53893&group=comp.theory#53893

Path: i2pn2.org!i2pn.org!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED.97.119.201.57!not-for-mail
From: polco...@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic
Subject: Re: Why does H1(D,D) actually get a different result than H(D,D) ???
Date: Sun, 3 Mar 2024 10:23:03 -0600
Organization: A noiseless patient Spider
Message-ID: <us2857$2iua6$2@dont-email.me>
References: <us096s$23k0u$2@dont-email.me> <us0ao0$fjqv$19@i2pn2.org>
<us0g30$251f5$1@dont-email.me> <us0hln$fjqv$21@i2pn2.org>
<us0j45$25en4$1@dont-email.me> <us0kgi$fjqu$10@i2pn2.org>
<us0l3h$25m8f$2@dont-email.me> <us0lf7$fjqv$23@i2pn2.org>
<us0mji$25m8f$5@dont-email.me> <us1pke$fjqv$26@i2pn2.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Sun, 3 Mar 2024 16:23:03 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="97.119.201.57";
logging-data="2718022"; mail-complaints-to="abuse@eternal-september.org"
User-Agent: Mozilla Thunderbird
Content-Language: en-US
In-Reply-To: <us1pke$fjqv$26@i2pn2.org>
 by: olcott - Sun, 3 Mar 2024 16:23 UTC

On 3/3/2024 6:15 AM, Richard Damon wrote:
> On 3/2/24 9:17 PM, olcott wrote:
>> On 3/2/2024 7:57 PM, Richard Damon wrote:
>>> On 3/2/24 8:51 PM, olcott wrote:
>>>> On 3/2/2024 7:41 PM, Richard Damon wrote:
>>>
>>>>> As I said, they don't actually "Know" anything, just that some
>>>>> words work togther well to make what sounds like a reasonable answer.
>>>>
>>>> An LLM determines what the most probable next text will be.
>>>>
>>>
>>> Which has NOTHING to do with "KNOWLEDGE"
>>>
>>> The most likely word sequence is not based on what is "True" but based
>>> on the learning samples, done with ZERO understanding of the
>>> semantics of the data, just what seems the most likely sequence.
>>
>> None-the-less it does demonstrate the functional equivalent of
>> deep understanding that has baffled its original developers.
>>
>
> Yes, we can build programs that are handling bigger models than we can
> handle in our mind.
>
> I am not sure what you mean by "functional equivalent"

To the extent that a machine can accomplish what a mind
can accomplish, that machine is equivalent to a mind.

> (I suspect just
> another of your lies) but in no way do these models demonstrate
> "Understanding".
>
>>
>>>
>>> You are just proving that you are an ignorant pathological liar.
>>
>

--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Why does H1(D,D) actually get a different result than H(D,D) ???

<us2d4s$2k65l$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=53897&group=comp.theory#53897

Path: i2pn2.org!i2pn.org!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: mikko.le...@iki.fi (Mikko)
Newsgroups: comp.theory
Subject: Re: Why does H1(D,D) actually get a different result than H(D,D) ???
Date: Sun, 3 Mar 2024 19:48:12 +0200
Organization: -
Lines: 18
Message-ID: <us2d4s$2k65l$1@dont-email.me>
References: <us096s$23k0u$2@dont-email.me> <us0ao0$fjqv$19@i2pn2.org> <us0b51$23tf5$1@dont-email.me> <us0hbq$fjqv$20@i2pn2.org> <us0ii7$25emo$1@dont-email.me> <us0jrl$fjqu$9@i2pn2.org> <us0ka2$25m8f$1@dont-email.me> <us0kp0$fjqu$12@i2pn2.org> <us0m4i$25m8f$4@dont-email.me> <us1kti$2f46h$1@dont-email.me> <us23p2$2i101$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Info: dont-email.me; posting-host="3e4d10965c1b0db3906335269acfd256";
logging-data="2758837"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1+rcA9+qqdBgTdGxyToJceK"
User-Agent: Unison/2.2
Cancel-Lock: sha1:hAGdsIqYSL1nbzHAELZRZed9m1w=
 by: Mikko - Sun, 3 Mar 2024 17:48 UTC

On 2024-03-03 15:08:17 +0000, olcott said:

> On 3/3/2024 4:54 AM, Mikko wrote:
>> On 2024-03-03 02:09:22 +0000, olcott said:
>>
>>> None-the-less actual computers do actually demonstrate
>>> actual very deep understanding of these things.
>>
>> Not very deep, just deeper that you can achieve.
>>
>
> Chat GPT 4.0 agreed that my reasoning is sound.

That does not demonstrate any understanding, even shallow.

--
Mikko

Re: Why does H1(D,D) actually get a different result than H(D,D) ???

<us2gk1$2ksv3$2@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=53899&group=comp.theory#53899

Path: i2pn2.org!i2pn.org!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polco...@gmail.com (olcott)
Newsgroups: comp.theory
Subject: Re: Why does H1(D,D) actually get a different result than H(D,D) ???
Date: Sun, 3 Mar 2024 12:47:29 -0600
Organization: A noiseless patient Spider
Lines: 41
Message-ID: <us2gk1$2ksv3$2@dont-email.me>
References: <us096s$23k0u$2@dont-email.me> <us0ao0$fjqv$19@i2pn2.org>
<us0b51$23tf5$1@dont-email.me> <us0hbq$fjqv$20@i2pn2.org>
<us0ii7$25emo$1@dont-email.me> <us0jrl$fjqu$9@i2pn2.org>
<us0ka2$25m8f$1@dont-email.me> <us0kp0$fjqu$12@i2pn2.org>
<us0m4i$25m8f$4@dont-email.me> <us1kti$2f46h$1@dont-email.me>
<us23p2$2i101$1@dont-email.me> <us2d4s$2k65l$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Sun, 3 Mar 2024 18:47:30 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="783935dd73b238665700216b234eba83";
logging-data="2782179"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1+Qy77nDeFjv217DWQCdUui"
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:JuQGa68Sr7MHlE4qkpMtcVnHnRI=
In-Reply-To: <us2d4s$2k65l$1@dont-email.me>
Content-Language: en-US
 by: olcott - Sun, 3 Mar 2024 18:47 UTC

On 3/3/2024 11:48 AM, Mikko wrote:
> On 2024-03-03 15:08:17 +0000, olcott said:
>
>> On 3/3/2024 4:54 AM, Mikko wrote:
>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>
>>>> None-the-less actual computers do actually demonstrate
>>>> actual very deep understanding of these things.
>>>
>>> Not very deep, just deeper that you can achieve.
>>>
>>
>> Chat GPT 4.0 agreed that my reasoning is sound.
>
> That does not demonstrate any understanding, even shallow.
>

The first thing that it does is agree that Hehner's
"Carol's question" (augmented by Richards critique)
is an example of the Liar Paradox.
https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b

It ends up concluding that I, Professor Hehner, and
Professor Stoddart are all correct in that there is
something wrong with the halting problem.

My persistent focus on these ideas gives me an increasingly
deeper understanding; thus my latest position is that the
halting problem proofs do not actually show that halting
is not computable.

The reason that this view has changed is that an actual
Turing Machine description of the counter-example input
must have its hypothetical halt decider embedded within
itself, and thus can only contradict itself and not an external
decider.

--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Why does H1(D,D) actually get a different result than H(D,D) (Linz version)

<us2n89$lq4d$1@i2pn2.org>

https://www.novabbs.com/devel/article-flat.php?id=53901&group=comp.theory#53901

Path: i2pn2.org!.POSTED!not-for-mail
From: rich...@damon-family.org (Richard Damon)
Newsgroups: comp.theory,sci.logic
Subject: Re: Why does H1(D,D) actually get a different result than H(D,D)
(Linz version)
Date: Sun, 3 Mar 2024 15:40:40 -0500
Organization: i2pn2 (i2pn.org)
Message-ID: <us2n89$lq4d$1@i2pn2.org>
References: <us096s$23k0u$2@dont-email.me> <us1kfq$2f1an$1@dont-email.me>
<us2795$2ipl5$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 3 Mar 2024 20:40:41 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="714893"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
Content-Language: en-US
In-Reply-To: <us2795$2ipl5$1@dont-email.me>
X-Spam-Checker-Version: SpamAssassin 4.0.0
 by: Richard Damon - Sun, 3 Mar 2024 20:40 UTC

On 3/3/24 11:08 AM, olcott wrote:
> On 3/3/2024 4:47 AM, Mikko wrote:
>> On 2024-03-02 22:28:44 +0000, olcott said:
>>
>>> The reason that people assume that H1(D,D) must get
>>> the same result as H(D,D) is that they make sure
>>> to ignore the reason why they get a different result.
>>
>> It is quite obvious why H1 gets a different result from H.
>> It simply is that your "simulator" does not simulate
>> correctly. In spite of that, your H gets H(D,D) wrong,
>> so it is not a counter example to Linz' proof.
>>
>
> H/D are equivalent to Ĥ and in reality that is the only
> way that H/D can be defined in an actual Turing Machine.
> This makes H1 equivalent to Linz H.

Only if H1 is the exact same computation as H, meaning it gives the
exact same answer as H for the same input. At that point, we don't
really need two different names as far as Turing Machines are concerned.

Otherwise, you are just LYING that you built Linz Ĥ properly, as it
must be built on a computationally exact copy of Linz H.

>
> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does not halt
>
> Using an adaptation of Mike's idea combined with an earlier
> idea of mine: Ĥ.H simulates Ĥ applied to ⟨Ĥ⟩ until it sees
> an exact copy of the same machine description try to simulate
> itself again with identical copies of its same input.

Except that there isn't a unique description for a given Turing Machine,
so H can't know what its "Description" is.

>
> Then the outermost Ĥ.H transitions to Ĥ.Hqn indicating that it
> must abort its simulation of Ĥ applied to ⟨Ĥ⟩ to prevent its own
> infinite execution.
>
> Although this halt status does not correspond to the actual
> behavior of Ĥ applied to ⟨Ĥ⟩ it does cause Ĥ to halt. When
> H is applied to ⟨Ĥ⟩ ⟨Ĥ⟩ it can see that Ĥ applied to ⟨Ĥ⟩ halts.
> *Thus Linz Ĥ can fool itself yet cannot fool Linz H*
>
> The reason that this works is that Ĥ contradicts its own
> internal copy of H yet cannot contradict the actual Linz H.
>

But that implies that H and H1 are NOT the same computation (otherwise,
why did they act differently?), and thus your argument is proved to be a LIE.

Re: Why does H1(D,D) actually get a different result than H(D,D) ???

<us2n8c$lq4d$2@i2pn2.org>

https://www.novabbs.com/devel/article-flat.php?id=53902&group=comp.theory#53902
Path: i2pn2.org!.POSTED!not-for-mail
From: rich...@damon-family.org (Richard Damon)
Newsgroups: comp.theory
Subject: Re: Why does H1(D,D) actually get a different result than H(D,D) ???
Date: Sun, 3 Mar 2024 15:40:43 -0500
Organization: i2pn2 (i2pn.org)
Message-ID: <us2n8c$lq4d$2@i2pn2.org>
References: <us096s$23k0u$2@dont-email.me> <us0ao0$fjqv$19@i2pn2.org>
<us0b51$23tf5$1@dont-email.me> <us0hbq$fjqv$20@i2pn2.org>
<us0ii7$25emo$1@dont-email.me> <us0jrl$fjqu$9@i2pn2.org>
<us0ka2$25m8f$1@dont-email.me> <us0kp0$fjqu$12@i2pn2.org>
<us0m4i$25m8f$4@dont-email.me> <us1kti$2f46h$1@dont-email.me>
<us23p2$2i101$1@dont-email.me> <us2d4s$2k65l$1@dont-email.me>
<us2gk1$2ksv3$2@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Sun, 3 Mar 2024 20:40:44 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="714893"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
X-Spam-Checker-Version: SpamAssassin 4.0.0
In-Reply-To: <us2gk1$2ksv3$2@dont-email.me>
Content-Language: en-US
 by: Richard Damon - Sun, 3 Mar 2024 20:40 UTC

On 3/3/24 1:47 PM, olcott wrote:
> On 3/3/2024 11:48 AM, Mikko wrote:
>> On 2024-03-03 15:08:17 +0000, olcott said:
>>
>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>
>>>>> None-the-less actual computers do actually demonstrate
>>>>> actual very deep understanding of these things.
>>>>
>>>> Not very deep, just deeper that you can achieve.
>>>>
>>>
>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>
>> That does not demonstrate any understanding, even shallow.
>>
>
> The first thing that it does is agree that Hehner's
> "Carol's question" (augmented by Richards critique)
> is an example of the Liar Paradox.
> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>
> It ends up concluding that myself, professor Hehner and
> professor Stoddart are all correct in that there is
> something wrong with the halting problem.

Which since it is proven that Chat GPT doesn't actually know what is a
fact, and has been proven to lie, just shows that you have fallen into
the Argument by Authority fallacy, and are even using a non-authority,
proving your stupidity.

>
> My persistent focus on these ideas gives me an increasingly
> deeper understanding thus my latest position is that the
> halting problem proofs do not actually show that halting
> is not computable.

No, your totally incorrect comment on this proves you don't understand
even the basics of the topic.

You have proven, and admitted, that you don't know the actual meaning of
the fundamental definitions of the field, so you are just showing that
you are an ignorant pathological lying idiot who has shown he can't
learn the material of the field.

>
> The reason that this view has changed is that an actual
> Turing Machine description of the counter-example input
> must have its hypothetical halt decider embedded within
> itself thus can only contradict itself and not an external
> decider.
>

Which just proves you don't know jack shit about what a Turing Machine is,
and that all your statements are just the ignorant words of a
pathological lying idiot.

Re: Why does H1(D,D) actually get a different result than H(D,D) ???

<us2n8e$lq4d$3@i2pn2.org>

https://www.novabbs.com/devel/article-flat.php?id=53903&group=comp.theory#53903
Path: i2pn2.org!.POSTED!not-for-mail
From: rich...@damon-family.org (Richard Damon)
Newsgroups: comp.theory,sci.logic
Subject: Re: Why does H1(D,D) actually get a different result than H(D,D) ???
Date: Sun, 3 Mar 2024 15:40:45 -0500
Organization: i2pn2 (i2pn.org)
Message-ID: <us2n8e$lq4d$3@i2pn2.org>
References: <us096s$23k0u$2@dont-email.me> <us0ao0$fjqv$19@i2pn2.org>
<us0b51$23tf5$1@dont-email.me> <us0hbq$fjqv$20@i2pn2.org>
<us0ii7$25emo$1@dont-email.me> <us0jrl$fjqu$9@i2pn2.org>
<us0ka2$25m8f$1@dont-email.me> <us0kp0$fjqu$12@i2pn2.org>
<us0m4i$25m8f$4@dont-email.me> <us1pkc$fjqv$25@i2pn2.org>
<us27ut$2iua6$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 3 Mar 2024 20:40:46 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="714893"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
In-Reply-To: <us27ut$2iua6$1@dont-email.me>
Content-Language: en-US
X-Spam-Checker-Version: SpamAssassin 4.0.0
 by: Richard Damon - Sun, 3 Mar 2024 20:40 UTC

On 3/3/24 11:19 AM, olcott wrote:
> On 3/3/2024 6:15 AM, Richard Damon wrote:
>> On 3/2/24 9:09 PM, olcott wrote:
>>> On 3/2/2024 7:46 PM, Richard Damon wrote:
>>>> On 3/2/24 8:38 PM, olcott wrote:
>>>>> On 3/2/2024 7:30 PM, Richard Damon wrote:
>>>>>> On 3/2/24 8:08 PM, olcott wrote:
>>>>>>> On 3/2/2024 6:47 PM, Richard Damon wrote:
>>>>>>>> On 3/2/24 6:01 PM, olcott wrote:
>>>>>>>>> On 3/2/2024 4:54 PM, Richard Damon wrote:
>>>>>>>>>> On 3/2/24 5:28 PM, olcott wrote:
>>>>>>>>>>> The reason that people assume that H1(D,D) must get
>>>>>>>>>>> the same result as H(D,D) is that they make sure
>>>>>>>>>>> to ignore the reason why they get a different result.
>>>>>>>>>>
>>>>>>>>>> Namely that you are lying that H and H1 are actually the same
>>>>>>>>>> computation.
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> It turns out that the only reason that H1(D,D) derives a
>>>>>>>>>>> different result than H(D,D) is that H is at a different
>>>>>>>>>>> physical machine address than H1.
>>>>>>>>>>
>>>>>>>>>> Which means that H and H1 are not computations, and you have
>>>>>>>>>> been just an ignorant pathological liar all this time.
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> D calls 00001522 // machine address of H
>>>>>>>>>>> thus D does call 00001422 // machine address of H1
>>>>>>>>>>>
>>>>>>>>>>> Since this <is> the empirically determined deciding
>>>>>>>>>>> factor I don't think that it can be correctly ignored.
>>>>>>>>>>
>>>>>>>>>> Right, but since the algorithm for H/H1 uses the address of
>>>>>>>>>> the decider, which isn't defined as an "input" to it, we see
>>>>>>>>>> that you have been lying that this code is a computation.
>>>>>>>>>> Likely because you have made yourself ignorant of what a
>>>>>>>>>> computation actually is,
>>>>>>>>>>
>>>>>>>>>> Thus you have made yourself into an Ignorant Pathological
>>>>>>>>>> Lying Idiot.
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Any physically implemented Turing machine must exist
>>>>>>>>>>> at some physical memory location.
>>>>>>>>>>
>>>>>>>>>> Nope.
>>>>>>>>> Any *physically implemented Turing machine*
>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>> must exist at some physical memory location.
>>>>>>>>>
>>>>>>>>> Try and explain the details of how it can be otherwise.
>>>>>>>>>
>>>>>>>>
>>>>>>>> And description of a Turing Machine (or a Computation) that
>>>>>>>> needs to reference atributes of Modern Electronic Computers is
>>>>>>>> just WRONG as they predate the development of such a thing.
>>>>>>>>
>>>>>>>
>>>>>>> Virtual Machines that are exactly Turing Machines
>>>>>>> except for unlimited memory can and do exist.
>>>>>>>
>>>>>>> They necessarily must be implemented in physical memory
>>>>>>> and cannot possibly be implemented any other way.
>>>>>>
>>>>>> So?
>>>>>>
>>>>>> Doesn't let a "Computation" change its answer based on its memory
>>>>>> address.
>>>>>>
>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does not halt
>>>>>
>>>>> We ourselves can see that ⟨Ĥ⟩ ⟨Ĥ⟩ correctly simulated by
>>>>> Ĥ.H cannot possibly terminate unless Ĥ.H aborts its
>>>>> simulation.
>>>>>
>>>>> We ourselves can also see that when Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩
>>>>> does abort its simulation then Ĥ will halt.
>>>>>
>>>>> That no computer will ever achieve this degree of
>>>>> understanding is directly contradicted by this:
>>>>>
>>>>> ChatGPT 4.0 dialogue.
>>>>> https://www.liarparadox.org/ChatGPT_HP.pdf
>>>>>
>>>>>
>>>>
>>>> No COMPUTATION can solve it, because it has been proved impossible.
>>>
>>> None-the-less actual computers do actually demonstrate
>>> actual very deep understanding of these things.
>>
>> Do computers actually UNDERSTAND?
>
> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
> Demonstrates the functional equivalent of deep understanding.
> The first thing that it does is categorize Carol's question
> as equivalent to the Liar Paradox.

Nope, doesn't show what you claim, just that it has been taught by "rote
memorization" that the answer to a question put the way you did is the
answer it gave.

You are just showing that YOU don't understand what the word UNDERSTAND
actually means.

>
>>>
>>> This proves that the artifice of the human notion of
>>> computation is more limiting than actual real computers.
>>
>> In other words, you reject the use of definitions to define words.
>>
>> I guess to you, nothing means what others have said it means,
>>
>
> I have found that it is the case that some definitions of
> technical terms sometimes boxes people into misconceptions
> such that alternative views are inexpressible within the
> technical language.  https://en.wikipedia.org/wiki/Linguistic_relativity

In other words, you are admitting that when you claim to be working in a
technical field and using the words as that field defines them, you are
just being an out-and-out LIAR.

>
>>>
>>> It pretends to show the limits of what computers can do
>>> yet is simply wrong about this.
>>
>> Nope, it shows YOUR stupidity.
>>
>> You don't even know what you are talkinga bout.
>>
>>
>>>
>>>> Note, ChatGPT has been proven to LIE, so you can't trust what it says.
>>> That is why AI needs my formalization of the notion of
>>> Boolean True(language L, Expression x).
>>
>> Which, as proven, CANT EXIST.
>>
>> Of course, you are too stupid to understand the proof.
>>
>>>
>>> The Cyc researchers proposed a solution similar to mine.
>>> LLMs become anchored in knowledge ontologies so that they
>>> can show the inference steps of their conclusion.
>>>
>>> Getting from Generative AI to Trustworthy AI:
>>> What LLMs might learn from Cyc
>>> https://arxiv.org/abs/2308.04445
>>>
>>
>> Yes, The concept of a "Proof" can be converted into a program, and a
>> program can serach and TRY to find a proof for a statement.
>>
>> The problem is that even if a proof does exist, it might take too long
>> to find it, and some truths are just not provable, so even the fastest
>> computer won't be able to solve the problem.
>>
>> You are still missing the fact that "Computation Theory" is not really
>> about "Computers", as it predates the modern electronic computer by
>> decades, but is about logic, and what can be done in a finite number
>> of finite steps.
>
> Thus you acknowledge that the limits of this antiquated notion
> of computations may not actually derive corresponding limits of
> what actual computers can actually do.
>
> Computations may indeed be an arbitrarily narrow specification
> that does not actually apply to real computers.
>


Re: Why does H1(D,D) actually get a different result than H(D,D) ???

<us2n8g$lq4d$4@i2pn2.org>

https://www.novabbs.com/devel/article-flat.php?id=53904&group=comp.theory#53904
Path: i2pn2.org!.POSTED!not-for-mail
From: rich...@damon-family.org (Richard Damon)
Newsgroups: comp.theory,sci.logic
Subject: Re: Why does H1(D,D) actually get a different result than H(D,D) ???
Date: Sun, 3 Mar 2024 15:40:48 -0500
Organization: i2pn2 (i2pn.org)
Message-ID: <us2n8g$lq4d$4@i2pn2.org>
References: <us096s$23k0u$2@dont-email.me> <us0ao0$fjqv$19@i2pn2.org>
<us0g30$251f5$1@dont-email.me> <us0hln$fjqv$21@i2pn2.org>
<us0j45$25en4$1@dont-email.me> <us0kgi$fjqu$10@i2pn2.org>
<us0l3h$25m8f$2@dont-email.me> <us0lf7$fjqv$23@i2pn2.org>
<us0mji$25m8f$5@dont-email.me> <us1pke$fjqv$26@i2pn2.org>
<us2857$2iua6$2@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Sun, 3 Mar 2024 20:40:48 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="714893"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
X-Spam-Checker-Version: SpamAssassin 4.0.0
Content-Language: en-US
In-Reply-To: <us2857$2iua6$2@dont-email.me>
 by: Richard Damon - Sun, 3 Mar 2024 20:40 UTC

On 3/3/24 11:23 AM, olcott wrote:
> On 3/3/2024 6:15 AM, Richard Damon wrote:
>> On 3/2/24 9:17 PM, olcott wrote:
>>> On 3/2/2024 7:57 PM, Richard Damon wrote:
>>>> On 3/2/24 8:51 PM, olcott wrote:
>>>>> On 3/2/2024 7:41 PM, Richard Damon wrote:
>>>>
>>>>>> As I said, they don't actually "Know" anything, just that some
>>>>>> words work togther well to make what sounds like a reasonable answer.
>>>>>
>>>>> An LLM determines what the most probable next text will be.
>>>>>
>>>>
>>>> Which has NOTHING to do with "KNOWLEDGE"
>>>>
>>>> The most likely word sequence is not base on what is "True" but base
>>>> on the learning samples, done with ZERO understanding of the
>>>> semantics of the data, what seemt the most likely sequence.
>>>
>>> None-the-less it does demonstrate the functional equivalent of
>>> deep understanding that has baffled its original developers.
>>>
>>
>> Yes, we can build programs that are handling bigger models than we can
>> handle in our mind.
>>
>> I am not sure what you mean by "functional equivalent"
>
> To the extend that a machine can accomplish what a mind
> can accomplish this machine is equivalent to a mind.

And to the extent that it can't, it doesn't.

You seem to think that a little bit alike is equivalent, which is just a
LIE.

That seems to be a common problem for you.

LLM do NOT have a "deep understanding" of the material they process.

They come up with some unexpected correlations that might not have been
anticipated, but those can as often as not be MISUNDERSTANDINGS of the
problem because the learning set wasn't controlled well enough.

>
>> (I suspect just another of your lies) but in no way do these models
>> demonstrate "Understanding".
>>
>>>
>>>>
>>>> You are just proving that you are an ignorant pathological liar.
>>>
>>
>

Re: Why does H1(D,D) actually get a different result than H(D,D) ???

<us2n8i$lq4d$5@i2pn2.org>

https://www.novabbs.com/devel/article-flat.php?id=53905&group=comp.theory#53905
Path: i2pn2.org!.POSTED!not-for-mail
From: rich...@damon-family.org (Richard Damon)
Newsgroups: comp.theory,sci.logic
Subject: Re: Why does H1(D,D) actually get a different result than H(D,D) ???
Date: Sun, 3 Mar 2024 15:40:50 -0500
Organization: i2pn2 (i2pn.org)
Message-ID: <us2n8i$lq4d$5@i2pn2.org>
References: <us096s$23k0u$2@dont-email.me> <us0buc$2490j$1@dont-email.me>
<us0chc$24a9q$1@dont-email.me> <us0hqq$fjqv$22@i2pn2.org>
<us0jp9$25l2k$1@dont-email.me> <us0kkb$fjqu$11@i2pn2.org>
<us0lga$25m8f$3@dont-email.me> <us1pk9$fjqv$24@i2pn2.org>
<us26ba$2ijf2$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 3 Mar 2024 20:40:50 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="714893"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
Content-Language: en-US
In-Reply-To: <us26ba$2ijf2$1@dont-email.me>
X-Spam-Checker-Version: SpamAssassin 4.0.0
 by: Richard Damon - Sun, 3 Mar 2024 20:40 UTC

On 3/3/24 10:52 AM, olcott wrote:
> On 3/3/2024 6:15 AM, Richard Damon wrote:
>> On 3/2/24 8:58 PM, olcott wrote:
>>> On 3/2/2024 7:43 PM, Richard Damon wrote:
>>>> On 3/2/24 8:29 PM, olcott wrote:
>>>>> On 3/2/2024 6:55 PM, Richard Damon wrote:
>>>>>> On 3/2/24 6:25 PM, olcott wrote:
>>>>>>> On 3/2/2024 5:15 PM, immibis wrote:
>>>>>>>> On 2/03/24 23:28, olcott wrote:
>>>>>>>>> The reason that people assume that H1(D,D) must get
>>>>>>>>> the same result as H(D,D) is that they make sure
>>>>>>>>> to ignore the reason why they get a different result.
>>>>>>>>>
>>>>>>>>> It turns out that the only reason that H1(D,D) derives a
>>>>>>>>> different result than H(D,D) is that H is at a different
>>>>>>>>> physical machine address than H1.
>>>>>>>>
>>>>>>>> Incorrect - the reason that H1(D,D) derives a different result
>>>>>>>> is that H *looks for* a different physical machine address than H1.
>>>>>>>
>>>>>>> _D()
>>>>>>> [00001cf2] 55         push ebp
>>>>>>> [00001cf3] 8bec       mov ebp,esp
>>>>>>> [00001cf5] 51         push ecx
>>>>>>> [00001cf6] 8b4508     mov eax,[ebp+08]
>>>>>>> [00001cf9] 50         push eax
>>>>>>> [00001cfa] 8b4d08     mov ecx,[ebp+08]
>>>>>>> [00001cfd] 51         push ecx
>>>>>>> [00001cfe] e81ff8ffff call 00001522 ; call to H
>>>>>>> ...
>>>>>>>
>>>>>>> *That is factually incorrect*
>>>>>>> H and H1 simulate the actual code of D which actually
>>>>>>> calls H at machine address 00001522 and does not call
>>>>>>> H1 at machine address 00001422.
>>>>>>
>>>>>> Which proves that H and H1 are not the "Same Computation" (if they
>>>>>> are computations at all)
>>>>>>
>>>>>
>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does not halt
>>>>>
>>>>> Yet since we ourselves can see that ⟨Ĥ⟩ ⟨Ĥ⟩ correctly
>>>>> simulated by Ĥ.H cannot possibly terminate unless
>>>>> this simulation is aborted.
>>>>>
>>>>> We ourselves can also see that Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩
>>>>> does abort its simulation then Ĥ will halt.
>>>>>
>>>>> I can't imagine that this will be forever too difficult
>>>>> for a computer especially when ChatGPT 4.0 eventually
>>>>> understood my critique the halting problem.
>>>>> https://www.liarparadox.org/ChatGPT_HP.pdf
>>>>>
>>>>>
>>>>
>>>> Because WE are not computations.
>>>>
>>>> You are just proving that you are just a pathetic ignorant
>>>> pathological lying idiot.
>>>
>>> Even though no LLM has any actual understanding
>>> it did demonstrate the functional equivalent of
>>> very deep understanding much more than any human
>>> has ever demonstrated on the reasoning behind my
>>> inferences.
>>
>> But that is WORTHLESS for solving problems that need an actual solution.
>>
>> An AI model like LLM will NEVER be able to definitively say the input
>> is non-halting.
>>
>> In fact, even YOU have agreed that no computation can get the right
>> answer for all Halting Problems,
>
> By using the blind variation and selective retention (BVSR)
> with its superfluity and backtracking aspects outlined by
> BY DEAN KEITH SIMONTON
> https://www.scientificamerican.com/article/the-science-of-genius/
>
> I constantly test slight variations of the same ideas until
> I find one that works. When I read his article in Mind Magazine
> 12 years ago I instantly recognized my own process.
>
> I also use to "to reverse-engineer complicated problems"
> making sure to totally ignore all assumptions that anyone
> else had ever made in the problem domain. Thus I derive
> the actual foundational first principles that do no
> depend on any assumptions.
> https://fs.blog/first-principles/

But since you clearly (and admittedly) don't actually know the core
principles of the fields you are trying to work in, you are just
gaslighting yourself into a prime Dunning-Kruger case.

>
> The idea that
> "no computation can get the right answer for all Halting Problems"
> has been revised with new information. The Linz Ĥ can only fool
> its own internal Ĥ.H and cannot fool the actual Linz H.

Not understanding that both, BY DEFINITION, must do the same thing.

You think it must be possible to make them different, but you don't have
the tools in your toolbox to even attempt to try it and see that you have
postulated an impossibility.

Intentionally not knowing the ground rules of the fields you talk in has
made you into the village idiot of those fields, and you have convinced
yourself that the fields must be wrong, so you have locked yourself
into your idiocy.

>
>> so that holds even if the computation is powered by AI.
>>
>> You are just showing how little you understand about what you speak.
>>
>>>
>>> I found that the entire original dialogue is still
>>> there. It is named Paradoxical Yes_No Dilemma.
>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>
>>> No PhD professors specializing in the these things
>>> ever demonstrated the same depth of understanding
>>> in any of their papers.
>>>
>>
>> Maybe that says something about how you express your statements.
>
> I think that the issue is the people are very biased against
> new ideas and a machine has no such bias.
>

Nope.

Re: Why does H1(D,D) actually get a different result than H(D,D) (Linz version)

<us36a1$2pf6s$2@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=53916&group=comp.theory#53916
Path: i2pn2.org!i2pn.org!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polco...@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic
Subject: Re: Why does H1(D,D) actually get a different result than H(D,D)
(Linz version)
Date: Sun, 3 Mar 2024 18:57:37 -0600
Organization: A noiseless patient Spider
Lines: 75
Message-ID: <us36a1$2pf6s$2@dont-email.me>
References: <us096s$23k0u$2@dont-email.me> <us1kfq$2f1an$1@dont-email.me>
<us2795$2ipl5$1@dont-email.me> <us2n89$lq4d$1@i2pn2.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Mon, 4 Mar 2024 00:57:37 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="ce657901b2f63b080d23b72945a142c8";
logging-data="2931932"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX19TNHJQLu8cx51h+S8BqiO6"
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:Uj7kosucRK2RngYYyWRLfR+qTzI=
Content-Language: en-US
In-Reply-To: <us2n89$lq4d$1@i2pn2.org>
 by: olcott - Mon, 4 Mar 2024 00:57 UTC

On 3/3/2024 2:40 PM, Richard Damon wrote:
> On 3/3/24 11:08 AM, olcott wrote:
>> On 3/3/2024 4:47 AM, Mikko wrote:
>>> On 2024-03-02 22:28:44 +0000, olcott said:
>>>
>>>> The reason that people assume that H1(D,D) must get
>>>> the same result as H(D,D) is that they make sure
>>>> to ignore the reason why they get a different result.
>>>
>>> It is quite obvious why H1 gets a different result from H.
>>> It simply is that your "simulator" does not simulate
>>> correctly. In spite of that, your H gets H(D,D) wrong,
>>> so it is not a counter example to Linz' proof.
>>>
>>
>> H/D are equivalent to Ĥ and in reality that is the only
>> way that H/D can be defined in an actual Turing Machine.
>> This makes H1 equivalent to Linz H.
>
> Only if H1 is the exact same computation as H, meaning it gives the
> exact same answer as H for the same input. At this point, we don't
> really need two different names as far as Turing Machines are concerned.
>
> Otherwise, you are just LYING that you built Linz H^ properly, as it
> must be built on a computationally exact copy of Linz H.
>
>>
>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does not halt
>>
>> Using an adaptation of Mike's idea combined with an earlier
>> idea of mine: Ĥ.H simulates Ĥ applied to ⟨Ĥ⟩ until it sees
>> an exact copy of the same machine description try to simulate
>> itself again with identical copies of its same input.
>
> Except that there isn't a unique description for a given Turing Machine,
> so H can't know what is its "Description".
>
>>
>> Then the outermost Ĥ.H transitions to Ĥ.Hqn indicating that it
>> must abort its simulation of Ĥ applied to ⟨Ĥ⟩ to prevent its own
>> infinite execution.
>>
>> Although this halt status does not correspond to the actual
>> behavior of Ĥ applied to ⟨Ĥ⟩ it does cause Ĥ to halt. When
>> H is applied to ⟨Ĥ⟩ ⟨Ĥ⟩ it can see that Ĥ applied to ⟨Ĥ⟩ halts.
>> *Thus Linz Ĥ can fool itself yet cannot fool Linz H*
>>
>> The reason that this works is that Ĥ contradicts its own
>> internal copy of H yet cannot contradict the actual Linz H.
>>
>
> But that implies that H and H1 are NOT the same computation, otherwise
> why did they act differently, and thus your argument is proved to be a LIE.

Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn // Ĥ applied to ⟨Ĥ⟩ does not halt

Execution trace of Ĥ applied to ⟨Ĥ⟩
(a) Ĥ.q0 The input ⟨Ĥ⟩ is copied then transitions to Ĥ.H
(b) Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ (input and copy) simulates ⟨Ĥ⟩ applied to ⟨Ĥ⟩
(c) which begins at its own simulated ⟨Ĥ.q0⟩ to repeat the process

Both H and Ĥ.H transition to their NO state when a correct and
complete simulation of their input would cause their own infinite
execution and otherwise transition to their YES state.

Humans can see that this criterion derives different answers
for Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ than for H applied to ⟨Ĥ⟩ ⟨Ĥ⟩.
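
A minimal C sketch of the criterion just described (this only models the
claim made above, not a real Turing machine; the two deciders are told
apart here by nothing but an identity tag, which is exactly the point in
dispute in this thread, and every name in it is an illustrative
assumption):

#include <stdio.h>

#define NO  0
#define YES 1

enum decider_id { OUTER_H, EMBEDDED_H };   /* H versus H_hat.H */

/* Decide, by simulation, whether H_hat applied to <H_hat> halts.
   'self' is the identity of the copy doing the simulating; the
   decider reached inside the simulated H_hat is always EMBEDDED_H. */
static int simulating_decider(enum decider_id self)
{
    enum decider_id nested = EMBEDDED_H;

    /* Abort rule stated above: seeing an exact copy of yourself about
       to simulate the same input means answer NO (would not halt).   */
    if (nested == self)
        return NO;

    /* Otherwise keep simulating: the nested copy aborts with NO, so the
       simulated H_hat takes its "does not halt" branch and halts.      */
    int nested_verdict = simulating_decider(nested);   /* == NO */
    return (nested_verdict == NO) ? YES : NO;
}

int main(void)
{
    printf("H_hat.H <H_hat> <H_hat> -> %s\n",
           simulating_decider(EMBEDDED_H) == YES ? "YES" : "NO");
    printf("H       <H_hat> <H_hat> -> %s\n",
           simulating_decider(OUTER_H)   == YES ? "YES" : "NO");
    return 0;
}

Compiled and run, the sketch prints NO for the embedded copy and YES for
the outer H, which is the divergence asserted above; whether two such
copies may legitimately be distinguished at all is what the replies below
contest.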

--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Why does H1(D,D) actually get a different result than H(D,D) (Linz version)

<us3763$lq4c$8@i2pn2.org>

https://www.novabbs.com/devel/article-flat.php?id=53921&group=comp.theory#53921
Path: i2pn2.org!.POSTED!not-for-mail
From: rich...@damon-family.org (Richard Damon)
Newsgroups: comp.theory,sci.logic
Subject: Re: Why does H1(D,D) actually get a different result than H(D,D)
(Linz version)
Date: Sun, 3 Mar 2024 20:12:35 -0500
Organization: i2pn2 (i2pn.org)
Message-ID: <us3763$lq4c$8@i2pn2.org>
References: <us096s$23k0u$2@dont-email.me> <us1kfq$2f1an$1@dont-email.me>
<us2795$2ipl5$1@dont-email.me> <us2n89$lq4d$1@i2pn2.org>
<us36a1$2pf6s$2@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Mon, 4 Mar 2024 01:12:35 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="714892"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
X-Spam-Checker-Version: SpamAssassin 4.0.0
In-Reply-To: <us36a1$2pf6s$2@dont-email.me>
Content-Language: en-US
 by: Richard Damon - Mon, 4 Mar 2024 01:12 UTC

On 3/3/24 7:57 PM, olcott wrote:
> On 3/3/2024 2:40 PM, Richard Damon wrote:
>> On 3/3/24 11:08 AM, olcott wrote:
>>> On 3/3/2024 4:47 AM, Mikko wrote:
>>>> On 2024-03-02 22:28:44 +0000, olcott said:
>>>>
>>>>> The reason that people assume that H1(D,D) must get
>>>>> the same result as H(D,D) is that they make sure
>>>>> to ignore the reason why they get a different result.
>>>>
>>>> It is quite obvious why H1 gets a different result from H.
>>>> It simply is that your "simulator" does not simulate
>>>> correctly. In spite of that, your H gets H(D,D) wrong,
>>>> so it is not a counter example to Linz' proof.
>>>>
>>>
>>> H/D are equivalent to Ĥ and in reality that is the only
>>> way that H/D can be defined in an actual Turing Machine.
>>> This makes H1 equivalent to Linz H.
>>
>> Only if H1 is the exact same computation as H, meaning it gives the
>> exact same answer as H for the same input. At this point, we don't
>> really need two different names as far as Turing Machines are concerned.
>>
>> Otherwise, you are just LYING that you built Linz H^ properly, as it
>> must be built on a computationally exact copy of Linz H.
>>
>>>
>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does not halt
>>>
>>> Using an adaptation of Mike's idea combined with an earlier
>>> idea of mine: Ĥ.H simulates Ĥ applied to ⟨Ĥ⟩ until it sees
>>> an exact copy of the same machine description try to simulate
>>> itself again with identical copies of its same input.
>>
>> Except that there isn't a unique description for a given Turing
>> Machine, so H can't know what is its "Description".
>>
>>>
>>> Then the outermost Ĥ.H transitions to Ĥ.Hqn indicating that it
>>> must abort its simulation of Ĥ applied to ⟨Ĥ⟩ to prevent its own
>>> infinite execution.
>>>
>>> Although this halt status does not correspond to the actual
>>> behavior of Ĥ applied to ⟨Ĥ⟩ it does cause Ĥ to halt. When
>>> H is applied to ⟨Ĥ⟩ ⟨Ĥ⟩ it can see that Ĥ applied to ⟨Ĥ⟩ halts.
>>> *Thus Linz Ĥ can fool itself yet cannot fool Linz H*
>>>
>>> The reason that this works is that Ĥ contradicts its own
>>> internal copy of H yet cannot contradict the actual Linz H.
>>>
>>
>> But that implies that H and H1 are NOT the same computation,
>> otherwise why did they act differently, and thus your argument is
>> proved to be a LIE.
>
> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn // Ĥ applied to ⟨Ĥ⟩ does not halt
>
> Execution trace of Ĥ applied to ⟨Ĥ⟩
> (a) Ĥ.q0 The input ⟨Ĥ⟩ is copied then transitions to Ĥ.H
> (b) Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ (input and copy) simulates ⟨Ĥ⟩ applied to ⟨Ĥ⟩
> (c) which begins at its own simulated ⟨Ĥ.q0⟩ to repeat the process
>
> Both H and Ĥ.H transition to their NO state when a correct and
> complete simulation of their input would cause their own infinite
> execution and otherwise transition to their YES state.
>
> Humans can see that this criterion derives different answers
> for Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ than for H applied to ⟨Ĥ⟩ ⟨Ĥ⟩.
>
>

Yes, Humans can see it, and in fact, many halt deciders that are not H
can see it. The key is that H just can't return the right answer to this
form of input, thus we can show that no correct halt decider can exist.

IF H1 acts differently than H for the exact same input, that just proves
that H1 is NOT an equivalent copy of H, so it can't be the Linz-H for your
H^, and thus you LIE when you say it refutes the proof.

And, we can just make an H1^ to show that H1 isn't a correct halt decider.
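
The construction alluded to here is the standard Linz/Turing one, applied
to H1 instead of H. A hedged C sketch (H1 below is only a stand-in stub
for whatever decider is proposed; the point is that D1 is built from it
the same way D was built from H):

#include <stdio.h>

typedef int (*prog)(void);

static int H1(prog p);       /* the claimed halt decider (stub below) */

/* D1 does the opposite of whatever H1 predicts about D1. */
static int D1(void)
{
    if (H1(D1))              /* H1 says "D1 halts" ...            */
        for (;;) { }         /* ... so D1 runs forever instead    */
    return 0;                /* H1 says "does not halt", so halt  */
}

/* Whatever fixed verdict H1 returns for D1, D1 makes it wrong. */
static int H1(prog p)
{
    (void)p;
    return 0;                /* stub verdict: "D1 does not halt" */
}

int main(void)
{
    printf("H1(D1) = %d (0 = does not halt)\n", H1(D1));
    printf("but D1() returned %d, i.e. D1 halted\n", D1());
    return 0;
}

With the stub's 0 verdict, D1 halts, contradicting H1; flip the stub to
return 1 and D1() loops forever, contradicting it the other way. Either
way the verdict is wrong for the input built against that decider.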

You are just shown to be the ignorant pathological lying idiot again.

Re: Why does H1(D,D) actually get a different result than H(D,D) ???

<us3ao5$2q7v4$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=53926&group=comp.theory#53926
Path: i2pn2.org!i2pn.org!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polco...@gmail.com (olcott)
Newsgroups: comp.theory
Subject: Re: Why does H1(D,D) actually get a different result than H(D,D) ???
Date: Sun, 3 Mar 2024 20:13:25 -0600
Organization: A noiseless patient Spider
Lines: 43
Message-ID: <us3ao5$2q7v4$1@dont-email.me>
References: <us096s$23k0u$2@dont-email.me> <us0ao0$fjqv$19@i2pn2.org>
<us0b51$23tf5$1@dont-email.me> <us0hbq$fjqv$20@i2pn2.org>
<us0ii7$25emo$1@dont-email.me> <us0jrl$fjqu$9@i2pn2.org>
<us0ka2$25m8f$1@dont-email.me> <us0kp0$fjqu$12@i2pn2.org>
<us0m4i$25m8f$4@dont-email.me> <us1kti$2f46h$1@dont-email.me>
<us23p2$2i101$1@dont-email.me> <us2d4s$2k65l$1@dont-email.me>
<us2gk1$2ksv3$2@dont-email.me> <us2n8c$lq4d$2@i2pn2.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Mon, 4 Mar 2024 02:13:25 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="ce657901b2f63b080d23b72945a142c8";
logging-data="2957284"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX19GnK9jzHLwuuZrROBgUtIZ"
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:p1lWOEV5Fc8xS9eS31ORhZ3sL3s=
Content-Language: en-US
In-Reply-To: <us2n8c$lq4d$2@i2pn2.org>
 by: olcott - Mon, 4 Mar 2024 02:13 UTC

On 3/3/2024 2:40 PM, Richard Damon wrote:
> On 3/3/24 1:47 PM, olcott wrote:
>> On 3/3/2024 11:48 AM, Mikko wrote:
>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>
>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>
>>>>>> None-the-less actual computers do actually demonstrate
>>>>>> actual very deep understanding of these things.
>>>>>
>>>>> Not very deep, just deeper that you can achieve.
>>>>>
>>>>
>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>
>>> That does not demonstrate any understanding, even shallow.
>>>
>>
>> The first thing that it does is agree that Hehner's
>> "Carol's question" (augmented by Richards critique)
>> is an example of the Liar Paradox.
>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>
>> It ends up concluding that myself, professor Hehner and
>> professor Stoddart are all correct in that there is
>> something wrong with the halting problem.
>
> Which since it is proven that Chat GPT doesn't actually know what is a
> fact, and has been proven to lie,

The first thing that it figured out on its own is that
Carol's question is isomorphic to the Liar Paradox.

It eventually agreed with the same conclusion that
myself and professors Hehner and Stoddart agreed to.
It took 34 pages of dialog to understand this. I
finally have a good PDF of this.

--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Why does H1(D,D) actually get a different result than H(D,D) ???

<us3bcg$lq4d$12@i2pn2.org>

https://www.novabbs.com/devel/article-flat.php?id=53927&group=comp.theory#53927
Path: i2pn2.org!.POSTED!not-for-mail
From: rich...@damon-family.org (Richard Damon)
Newsgroups: comp.theory
Subject: Re: Why does H1(D,D) actually get a different result than H(D,D) ???
Date: Sun, 3 Mar 2024 21:24:16 -0500
Organization: i2pn2 (i2pn.org)
Message-ID: <us3bcg$lq4d$12@i2pn2.org>
References: <us096s$23k0u$2@dont-email.me> <us0ao0$fjqv$19@i2pn2.org>
<us0b51$23tf5$1@dont-email.me> <us0hbq$fjqv$20@i2pn2.org>
<us0ii7$25emo$1@dont-email.me> <us0jrl$fjqu$9@i2pn2.org>
<us0ka2$25m8f$1@dont-email.me> <us0kp0$fjqu$12@i2pn2.org>
<us0m4i$25m8f$4@dont-email.me> <us1kti$2f46h$1@dont-email.me>
<us23p2$2i101$1@dont-email.me> <us2d4s$2k65l$1@dont-email.me>
<us2gk1$2ksv3$2@dont-email.me> <us2n8c$lq4d$2@i2pn2.org>
<us3ao5$2q7v4$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Mon, 4 Mar 2024 02:24:16 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="714893"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
Content-Language: en-US
In-Reply-To: <us3ao5$2q7v4$1@dont-email.me>
X-Spam-Checker-Version: SpamAssassin 4.0.0
 by: Richard Damon - Mon, 4 Mar 2024 02:24 UTC

On 3/3/24 9:13 PM, olcott wrote:
> On 3/3/2024 2:40 PM, Richard Damon wrote:
>> On 3/3/24 1:47 PM, olcott wrote:
>>> On 3/3/2024 11:48 AM, Mikko wrote:
>>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>>
>>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>>
>>>>>>> None-the-less actual computers do actually demonstrate
>>>>>>> actual very deep understanding of these things.
>>>>>>
>>>>>> Not very deep, just deeper that you can achieve.
>>>>>>
>>>>>
>>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>>
>>>> That does not demonstrate any understanding, even shallow.
>>>>
>>>
>>> The first thing that it does is agree that Hehner's
>>> "Carol's question" (augmented by Richards critique)
>>> is an example of the Liar Paradox.
>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>
>>> It ends up concluding that myself, professor Hehner and
>>> professor Stoddart are all correct in that there is
>>> something wrong with the halting problem.
>>
>> Which since it is proven that Chat GPT doesn't actually know what is a
>> fact, and has been proven to lie,
>
> The first thing that it figured out on its own is that
> Carol's question is isomorphic to the Liar Paradox.
>
> It eventually agreed with the same conclusion that
> myself and professors Hehner and Stoddart agreed to.
> It took 34 pages of dialog to understand this. I
> finally have a good PDF of this.
>

It didn't "Figure it out". It pattern-matched it to previous input it
has been given.

If it took 34 pages to agree with your conclusion, then it really didn't
agree with you initially, but you finally trained it to your version of
reality.

The programming for it is designed to try to figure out how to agree
with what the questioner is describing.

Actual limits of computations != actual limits of computers with unlimited memory ?

<us3c9f$2qj3n$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=53929&group=comp.theory#53929
Path: i2pn2.org!i2pn.org!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polco...@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic
Subject: Actual limits of computations != actual limits of computers with
unlimited memory ?
Date: Sun, 3 Mar 2024 20:39:41 -0600
Organization: A noiseless patient Spider
Lines: 150
Message-ID: <us3c9f$2qj3n$1@dont-email.me>
References: <us096s$23k0u$2@dont-email.me> <us0ao0$fjqv$19@i2pn2.org>
<us0b51$23tf5$1@dont-email.me> <us0hbq$fjqv$20@i2pn2.org>
<us0ii7$25emo$1@dont-email.me> <us0jrl$fjqu$9@i2pn2.org>
<us0ka2$25m8f$1@dont-email.me> <us0kp0$fjqu$12@i2pn2.org>
<us0m4i$25m8f$4@dont-email.me> <us1pkc$fjqv$25@i2pn2.org>
<us27ut$2iua6$1@dont-email.me> <us2n8e$lq4d$3@i2pn2.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Mon, 4 Mar 2024 02:39:43 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="ce657901b2f63b080d23b72945a142c8";
logging-data="2968695"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX18B39nF8v4N0txpcK1W1nrJ"
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:0Q7nNnmr2TUj8vSbey+x3A+G1u8=
In-Reply-To: <us2n8e$lq4d$3@i2pn2.org>
Content-Language: en-US
 by: olcott - Mon, 4 Mar 2024 02:39 UTC

On 3/3/2024 2:40 PM, Richard Damon wrote:
> On 3/3/24 11:19 AM, olcott wrote:
>> On 3/3/2024 6:15 AM, Richard Damon wrote:
>>> On 3/2/24 9:09 PM, olcott wrote:
>>>> On 3/2/2024 7:46 PM, Richard Damon wrote:
>>>>> On 3/2/24 8:38 PM, olcott wrote:
>>>>>> On 3/2/2024 7:30 PM, Richard Damon wrote:
>>>>>>> On 3/2/24 8:08 PM, olcott wrote:
>>>>>>>> On 3/2/2024 6:47 PM, Richard Damon wrote:
>>>>>>>>> On 3/2/24 6:01 PM, olcott wrote:
>>>>>>>>>> On 3/2/2024 4:54 PM, Richard Damon wrote:
>>>>>>>>>>> On 3/2/24 5:28 PM, olcott wrote:
>>>>>>>>>>>> The reason that people assume that H1(D,D) must get
>>>>>>>>>>>> the same result as H(D,D) is that they make sure
>>>>>>>>>>>> to ignore the reason why they get a different result.
>>>>>>>>>>>
>>>>>>>>>>> Namely that you are lying that H and H1 are actually the same
>>>>>>>>>>> computation.
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> It turns out that the only reason that H1(D,D) derives a
>>>>>>>>>>>> different result than H(D,D) is that H is at a different
>>>>>>>>>>>> physical machine address than H1.
>>>>>>>>>>>
>>>>>>>>>>> Which means that H and H1 are not computations, and you have
>>>>>>>>>>> been just an ignorant pathological liar all this time.
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> D calls 00001522 // machine address of H
>>>>>>>>>>>> thus D does call 00001422 // machine address of H1
>>>>>>>>>>>>
>>>>>>>>>>>> Since this <is> the empirically determined deciding
>>>>>>>>>>>> factor I don't think that it can be correctly ignored.
>>>>>>>>>>>
>>>>>>>>>>> Right, but since the algorithm for H/H1 uses the address of
>>>>>>>>>>> the decider, which isn't defined as an "input" to it, we see
>>>>>>>>>>> that you have been lying that this code is a computation.
>>>>>>>>>>> Likely because you have made yourself ignorant of what a
>>>>>>>>>>> computation actually is,
>>>>>>>>>>>
>>>>>>>>>>> Thus you have made yourself into an Ignorant Pathological
>>>>>>>>>>> Lying Idiot.
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Any physically implemented Turing machine must exist
>>>>>>>>>>>> at some physical memory location.
>>>>>>>>>>>
>>>>>>>>>>> Nope.
>>>>>>>>>> Any *physically implemented Turing machine*
>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>> must exist at some physical memory location.
>>>>>>>>>>
>>>>>>>>>> Try and explain the details of how it can be otherwise.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> And description of a Turing Machine (or a Computation) that
>>>>>>>>> needs to reference atributes of Modern Electronic Computers is
>>>>>>>>> just WRONG as they predate the development of such a thing.
>>>>>>>>>
>>>>>>>>
>>>>>>>> Virtual Machines that are exactly Turing Machines
>>>>>>>> except for unlimited memory can and do exist.
>>>>>>>>
>>>>>>>> They necessarily must be implemented in physical memory
>>>>>>>> and cannot possibly be implemented any other way.
>>>>>>>
>>>>>>> So?
>>>>>>>
>>>>>>> Doesn't let a "Computation" change its answer based on its memory
>>>>>>> address.
>>>>>>>
>>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
>>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does not
>>>>>> halt
>>>>>>
>>>>>> We ourselves can see that ⟨Ĥ⟩ ⟨Ĥ⟩ correctly simulated by
>>>>>> Ĥ.H cannot possibly terminate unless Ĥ.H aborts its
>>>>>> simulation.
>>>>>>
>>>>>> We ourselves can also see that when Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩
>>>>>> does abort its simulation then Ĥ will halt.
>>>>>>
>>>>>> That no computer will ever achieve this degree of
>>>>>> understanding is directly contradicted by this:
>>>>>>
>>>>>> ChatGPT 4.0 dialogue.
>>>>>> https://www.liarparadox.org/ChatGPT_HP.pdf
>>>>>>
>>>>>>
>>>>>
>>>>> No COMPUTATION can solve it, because it has been proved impossible.
>>>>
>>>> None-the-less actual computers do actually demonstrate
>>>> actual very deep understanding of these things.
>>>
>>> Do computers actually UNDERSTAND?
>>
>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>> Demonstrates the functional equivalent of deep understanding.
>> The first thing that it does is categorize Carol's question
>> as equivalent to the Liar Paradox.
>
> Nope, doesn't show what you claim, just that it has been taught by "rote
> memorization" that the answer to a question put the way you did is the
> answer it gave.
>
> You are just showing that YOU don't understand what the word UNDERSTAND
> actually means.
>
>>
>>>>
>>>> This proves that the artifice of the human notion of
>>>> computation is more limiting than actual real computers.
>>>
>>> In other words, you reject the use of definitions to define words.
>>>
>>> I guess to you, nothing means what others have said it means,
>>>
>>
>> I have found that it is the case that some definitions of
>> technical terms sometimes boxes people into misconceptions
>> such that alternative views are inexpressible within the
>> technical language.  https://en.wikipedia.org/wiki/Linguistic_relativity
>
> In other words, you are admitting that when you claim to be working in a
> technical field and using the words as that field defines them, you are
> just being an out-and-out LIAR.

Not at all. When working with any technical definition I never
simply assume that it is coherent. I always assume that it is
possibly incoherent until proven otherwise.

If there are physically existing machines that can answer questions
that are not Turing computable, only because these machines can access
their own machine address, then these machines would be strictly more
powerful than Turing Machines on these questions.

If computability only means that something can't be done in a certain
artificially limited way, and not any actual limit on what computers
can actually do, then computability would seem to be nonsense.

*Alternatively Turing Machines can somehow solve the same set*
*of problems as machines that know their own machine address*
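
A short C sketch of the mechanism being argued about here (purely an
illustration under assumed names; the addresses merely echo the 00001522 /
00001422 trace quoted earlier in the thread, and nothing in it is anyone's
actual H or H1):

#include <stdio.h>

#define ADDR_H   0x1522   /* the address that D actually calls  */
#define ADDR_H1  0x1422   /* the address of the second copy     */

/* The one shared "algorithm": report 0 ("input would not halt") when the
   input calls the very copy that is running, and 1 otherwise.           */
static int decider_core(int self_addr, int addr_called_by_input)
{
    return (addr_called_by_input == self_addr) ? 0 : 1;
}

static int H (int addr_called_by_input)
{
    return decider_core(ADDR_H,  addr_called_by_input);
}

static int H1(int addr_called_by_input)
{
    return decider_core(ADDR_H1, addr_called_by_input);
}

int main(void)
{
    int d_calls = ADDR_H;                     /* D is built to call H */
    printf("H (D,D) = %d\n", H (d_calls));    /* prints 0 */
    printf("H1(D,D) = %d\n", H1(d_calls));    /* prints 1 */
    return 0;
}

The two copies are byte-for-byte the same rule, yet they map the same
input to different outputs because the rule consults the copy's own
address; whether that makes such machines "more powerful" or simply means
they are not computations of their inputs is the question the replies
dispute.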

--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Why does H1(D,D) actually get a different result than H(D,D) ???

<us3fc1$2uo74$1@dont-email.me>

https://www.novabbs.com/devel/article-flat.php?id=53931&group=comp.theory#53931
Path: i2pn2.org!i2pn.org!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polco...@gmail.com (olcott)
Newsgroups: comp.theory
Subject: Re: Why does H1(D,D) actually get a different result than H(D,D) ???
Date: Sun, 3 Mar 2024 21:32:17 -0600
Organization: A noiseless patient Spider
Lines: 68
Message-ID: <us3fc1$2uo74$1@dont-email.me>
References: <us096s$23k0u$2@dont-email.me> <us0ao0$fjqv$19@i2pn2.org>
<us0b51$23tf5$1@dont-email.me> <us0hbq$fjqv$20@i2pn2.org>
<us0ii7$25emo$1@dont-email.me> <us0jrl$fjqu$9@i2pn2.org>
<us0ka2$25m8f$1@dont-email.me> <us0kp0$fjqu$12@i2pn2.org>
<us0m4i$25m8f$4@dont-email.me> <us1kti$2f46h$1@dont-email.me>
<us23p2$2i101$1@dont-email.me> <us2d4s$2k65l$1@dont-email.me>
<us2gk1$2ksv3$2@dont-email.me> <us2n8c$lq4d$2@i2pn2.org>
<us3ao5$2q7v4$1@dont-email.me> <us3bcg$lq4d$12@i2pn2.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Mon, 4 Mar 2024 03:32:17 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="ce657901b2f63b080d23b72945a142c8";
logging-data="3104996"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1/lzeswtg6OG9UZpvjbIxHM"
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:dVG9uzmHs690FOOXmPYgtx5tPWw=
Content-Language: en-US
In-Reply-To: <us3bcg$lq4d$12@i2pn2.org>
 by: olcott - Mon, 4 Mar 2024 03:32 UTC

On 3/3/2024 8:24 PM, Richard Damon wrote:
> On 3/3/24 9:13 PM, olcott wrote:
>> On 3/3/2024 2:40 PM, Richard Damon wrote:
>>> On 3/3/24 1:47 PM, olcott wrote:
>>>> On 3/3/2024 11:48 AM, Mikko wrote:
>>>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>>>
>>>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>>>
>>>>>>>> None-the-less actual computers do actually demonstrate
>>>>>>>> actual very deep understanding of these things.
>>>>>>>
>>>>>>> Not very deep, just deeper that you can achieve.
>>>>>>>
>>>>>>
>>>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>>>
>>>>> That does not demonstrate any understanding, even shallow.
>>>>>
>>>>
>>>> The first thing that it does is agree that Hehner's
>>>> "Carol's question" (augmented by Richards critique)
>>>> is an example of the Liar Paradox.
>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>
>>>> It ends up concluding that myself, professor Hehner and
>>>> professor Stoddart are all correct in that there is
>>>> something wrong with the halting problem.
>>>
>>> Which since it is proven that Chat GPT doesn't actually know what is
>>> a fact, and has been proven to lie,
>>
>> The first thing that it figured out on its own is that
>> Carol's question is isomorphic to the Liar Paradox.
>>
>> It eventually agreed with the same conclusion that
>> myself and professors Hehner and Stoddart agreed to.
>> It took 34 pages of dialog to understand this. I
>> finally have a good PDF of this.
>>
>
> It didn't "Figure it out". It pattern-matched it to previous input it
> has been given.
>
> If it took 34 pages to agree with your conclusion, then it really didn't
> agree with you initially, but you finally trained it to your version of
> reality.

*HERE IS ITS AGREEMENT*
When an input, such as the halting problem's pathological input D, is
designed to contradict every value that the halting decider H returns,
it creates a self-referential paradox that prevents H from providing a
consistent and correct response. In this context, D can be seen as
posing an incorrect question to H, as its contradictory nature
undermines the possibility of a meaningful and accurate answer.

>
> The programming for it is designed to try to figure out how to agree
> with what the questioner is describing.

--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Actual limits of computations != actual limits of computers with unlimited memory ?

<us3iev$lq4c$10@i2pn2.org>

https://www.novabbs.com/devel/article-flat.php?id=53934&group=comp.theory#53934
Path: i2pn2.org!.POSTED!not-for-mail
From: rich...@damon-family.org (Richard Damon)
Newsgroups: comp.theory,sci.logic
Subject: Re: Actual limits of computations != actual limits of computers with
unlimited memory ?
Date: Sun, 3 Mar 2024 23:25:03 -0500
Organization: i2pn2 (i2pn.org)
Message-ID: <us3iev$lq4c$10@i2pn2.org>
References: <us096s$23k0u$2@dont-email.me> <us0ao0$fjqv$19@i2pn2.org>
<us0b51$23tf5$1@dont-email.me> <us0hbq$fjqv$20@i2pn2.org>
<us0ii7$25emo$1@dont-email.me> <us0jrl$fjqu$9@i2pn2.org>
<us0ka2$25m8f$1@dont-email.me> <us0kp0$fjqu$12@i2pn2.org>
<us0m4i$25m8f$4@dont-email.me> <us1pkc$fjqv$25@i2pn2.org>
<us27ut$2iua6$1@dont-email.me> <us2n8e$lq4d$3@i2pn2.org>
<us3c9f$2qj3n$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Mon, 4 Mar 2024 04:25:03 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="714892"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
In-Reply-To: <us3c9f$2qj3n$1@dont-email.me>
X-Spam-Checker-Version: SpamAssassin 4.0.0
Content-Language: en-US
 by: Richard Damon - Mon, 4 Mar 2024 04:25 UTC

On 3/3/24 9:39 PM, olcott wrote:
> On 3/3/2024 2:40 PM, Richard Damon wrote:
>> On 3/3/24 11:19 AM, olcott wrote:
>>> On 3/3/2024 6:15 AM, Richard Damon wrote:
>>>> On 3/2/24 9:09 PM, olcott wrote:
>>>>> On 3/2/2024 7:46 PM, Richard Damon wrote:
>>>>>> On 3/2/24 8:38 PM, olcott wrote:
>>>>>>> On 3/2/2024 7:30 PM, Richard Damon wrote:
>>>>>>>> On 3/2/24 8:08 PM, olcott wrote:
>>>>>>>>> On 3/2/2024 6:47 PM, Richard Damon wrote:
>>>>>>>>>> On 3/2/24 6:01 PM, olcott wrote:
>>>>>>>>>>> On 3/2/2024 4:54 PM, Richard Damon wrote:
>>>>>>>>>>>> On 3/2/24 5:28 PM, olcott wrote:
>>>>>>>>>>>>> The reason that people assume that H1(D,D) must get
>>>>>>>>>>>>> the same result as H(D,D) is that they make sure
>>>>>>>>>>>>> to ignore the reason why they get a different result.
>>>>>>>>>>>>
>>>>>>>>>>>> Namely that you are lying that H and H1 are actually the
>>>>>>>>>>>> same computation.
>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> It turns out that the only reason that H1(D,D) derives a
>>>>>>>>>>>>> different result than H(D,D) is that H is at a different
>>>>>>>>>>>>> physical machine address than H1.
>>>>>>>>>>>>
>>>>>>>>>>>> Which means that H and H1 are not computations, and you have
>>>>>>>>>>>> been just an ignorant pathological liar all this time.
>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> D calls 00001522 // machine address of H
>>>>>>>>>>>>> thus D does call 00001422 // machine address of H1
>>>>>>>>>>>>>
>>>>>>>>>>>>> Since this <is> the empirically determined deciding
>>>>>>>>>>>>> factor I don't think that it can be correctly ignored.
>>>>>>>>>>>>
>>>>>>>>>>>> Right, but since the algorithm for H/H1 uses the address of
>>>>>>>>>>>> the decider, which isn't defined as an "input" to it, we see
>>>>>>>>>>>> that you have been lying that this code is a computation.
>>>>>>>>>>>> Likely because you have made yourself ignorant of what a
>>>>>>>>>>>> computation actually is,
>>>>>>>>>>>>
>>>>>>>>>>>> Thus you have made yourself into an Ignorant Pathological
>>>>>>>>>>>> Lying Idiot.
>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> Any physically implemented Turing machine must exist
>>>>>>>>>>>>> at some physical memory location.
>>>>>>>>>>>>
>>>>>>>>>>>> Nope.
>>>>>>>>>>> Any *physically implemented Turing machine*
>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>> must exist at some physical memory location.
>>>>>>>>>>>
>>>>>>>>>>> Try and explain the details of how it can be otherwise.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> And description of a Turing Machine (or a Computation) that
>>>>>>>>>> needs to reference atributes of Modern Electronic Computers is
>>>>>>>>>> just WRONG as they predate the development of such a thing.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Virtual Machines that are exactly Turing Machines
>>>>>>>>> except for unlimited memory can and do exist.
>>>>>>>>>
>>>>>>>>> They necessarily must be implemented in physical memory
>>>>>>>>> and cannot possibly be implemented any other way.
>>>>>>>>
>>>>>>>> So?
>>>>>>>>
>>>>>>>> Doesn't let a "Computation" change its answer based on its
>>>>>>>> memory address.
>>>>>>>>
>>>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
>>>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does not
>>>>>>> halt
>>>>>>>
>>>>>>> We ourselves can see that ⟨Ĥ⟩ ⟨Ĥ⟩ correctly simulated by
>>>>>>> Ĥ.H cannot possibly terminate unless Ĥ.H aborts its
>>>>>>> simulation.
>>>>>>>
>>>>>>> We ourselves can also see that when Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩
>>>>>>> does abort its simulation then Ĥ will halt.
>>>>>>>
>>>>>>> That no computer will ever achieve this degree of
>>>>>>> understanding is directly contradicted by this:
>>>>>>>
>>>>>>> ChatGPT 4.0 dialogue.
>>>>>>> https://www.liarparadox.org/ChatGPT_HP.pdf
>>>>>>>
>>>>>>>
>>>>>>
>>>>>> No COMPUTATION can solve it, because it has been proved impossible.
>>>>>
>>>>> None-the-less actual computers do actually demonstrate
>>>>> actual very deep understanding of these things.
>>>>
>>>> Do computers actually UNDERSTAND?
>>>
>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>> Demonstrates the functional equivalent of deep understanding.
>>> The first thing that it does is categorize Carol's question
>>> as equivalent to the Liar Paradox.
>>
>> Nope, doesn't show what you claim, just that it has been taught by
>> "rote memorization" that the answer to a question put the way you did
>> is the answer it gave.
>>
>> You are just showing that YOU don't understand what the word
>> UNDERSTAND actually means.
>>
>>>
>>>>>
>>>>> This proves that the artifice of the human notion of
>>>>> computation is more limiting than actual real computers.
>>>>
>>>> In other words, you reject the use of definitions to define words.
>>>>
>>>> I guess to you, nothing means what others have said it means,
>>>>
>>>
>>> I have found that it is the case that some definitions of
>>> technical terms sometimes boxes people into misconceptions
>>> such that alternative views are inexpressible within the
>>> technical language.  https://en.wikipedia.org/wiki/Linguistic_relativity
>>
>> In other words, you are admitting that when you claim to be working in
>> a technical field and using the words as that field defines them, you
>> are just being an out-and-out LIAR.
>
> Not at all. When working with any technical definition I never
> simply assume that it is coherent. I always assume that it is
> possibly incoherent until proven otherwise.

In other words, you ADMIT that you ignore technical definitions, and thus
your comments about working in the field are just an ignorant pathological
lie.

>
> If there are physically existing machines that can answer questions
> that are not Turing computable, only because these machines can access
> their own machine address, then these machines would be strictly more
> powerful than Turing Machines on these questions.

Nope.

But you just admitted you are too ignorant of the actual meaning to make
a reasoned statement and too dishonest to concede that, even after
admitting it.


Re: Why does H1(D,D) actually get a different result than H(D,D) ???

<us3if9$lq4c$11@i2pn2.org>

https://www.novabbs.com/devel/article-flat.php?id=53935&group=comp.theory#53935
Path: i2pn2.org!.POSTED!not-for-mail
From: rich...@damon-family.org (Richard Damon)
Newsgroups: comp.theory
Subject: Re: Why does H1(D,D) actually get a different result than H(D,D) ???
Date: Sun, 3 Mar 2024 23:25:13 -0500
Organization: i2pn2 (i2pn.org)
Message-ID: <us3if9$lq4c$11@i2pn2.org>
References: <us096s$23k0u$2@dont-email.me> <us0ao0$fjqv$19@i2pn2.org>
<us0b51$23tf5$1@dont-email.me> <us0hbq$fjqv$20@i2pn2.org>
<us0ii7$25emo$1@dont-email.me> <us0jrl$fjqu$9@i2pn2.org>
<us0ka2$25m8f$1@dont-email.me> <us0kp0$fjqu$12@i2pn2.org>
<us0m4i$25m8f$4@dont-email.me> <us1kti$2f46h$1@dont-email.me>
<us23p2$2i101$1@dont-email.me> <us2d4s$2k65l$1@dont-email.me>
<us2gk1$2ksv3$2@dont-email.me> <us2n8c$lq4d$2@i2pn2.org>
<us3ao5$2q7v4$1@dont-email.me> <us3bcg$lq4d$12@i2pn2.org>
<us3fc1$2uo74$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Mon, 4 Mar 2024 04:25:13 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="714892"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
X-Spam-Checker-Version: SpamAssassin 4.0.0
In-Reply-To: <us3fc1$2uo74$1@dont-email.me>
Content-Language: en-US
 by: Richard Damon - Mon, 4 Mar 2024 04:25 UTC

On 3/3/24 10:32 PM, olcott wrote:
> On 3/3/2024 8:24 PM, Richard Damon wrote:
>> On 3/3/24 9:13 PM, olcott wrote:
>>> On 3/3/2024 2:40 PM, Richard Damon wrote:
>>>> On 3/3/24 1:47 PM, olcott wrote:
>>>>> On 3/3/2024 11:48 AM, Mikko wrote:
>>>>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>>>>
>>>>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>>>>
>>>>>>>>> Nonetheless, actual computers do actually demonstrate
>>>>>>>>> very deep understanding of these things.
>>>>>>>>
>>>>>>>> Not very deep, just deeper than you can achieve.
>>>>>>>>
>>>>>>>
>>>>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>>>>
>>>>>> That does not demonstrate any understanding, even shallow.
>>>>>>
>>>>>
>>>>> The first thing that it does is agree that Hehner's
>>>>> "Carol's question" (augmented by Richards critique)
>>>>> is an example of the Liar Paradox.
>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>
>>>>> It ends up concluding that professor Hehner, professor
>>>>> Stoddart, and I are all correct in that there is
>>>>> something wrong with the halting problem.
>>>>
>>>> Which, since it is proven that Chat GPT doesn't actually know what a
>>>> fact is, and has been proven to lie,
>>>
>>> The first thing that it figured out on its own is that
>>> Carol's question is isomorphic to the Liar Paradox.
>>>
>>> It eventually agreed with the same conclusion that
>>> professors Hehner and Stoddart and I agreed to.
>>> It took 34 pages of dialog to understand this. I
>>> finally have a good PDF of this.
>>>
>>
>> It didn't "Figure it out". it pattern matched it to previous input it
>> has been given.
>>
>> If it took 34 pages to agree with your conclusion, then it really
>> didn't agree with you initially, but you finally trained it to your
>> version of reality.
>
> *HERE IS ITS AGREEMENT*
> When an input, such as the halting problem's pathological input D, is
> designed to contradict every value that the halting decider H returns,
> it creates a self-referential paradox that prevents H from providing a
> consistent and correct response. In this context, D can be seen as
> posing an incorrect question to H, as its contradictory nature
> undermines the possibility of a meaningful and accurate answer.
>
>

Which means NOTHING, as an LLM will tell non-truths if fed misleading
information.

You are just proving you don't understand how they work.

You are just proving you don't understand how LOGIC works.

You are just proving you are a TOTAL IDIOT.

>>
>> The programming for it is designed to try to figure out how to agree
>> with what the questioner is describing.
>
>
>
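
Whatever the quoted "agreement" is worth, the property it ascribes to D,
that D is defined to contradict every value H returns, is easy to state
mechanically: a hypothetical halt decider can only answer 0 ("does not
halt") or 1 ("halts") for D(D), and the D template does the opposite in
each case. The loop below merely enumerates the two cases; it does not run
any decider.

#include <stdio.h>

int main(void)
{
    /* The two possible verdicts a hypothetical H(D,D) could return, and
       what the D template (loop if "halts", halt if "does not halt")
       would then do. */
    for (int verdict = 0; verdict <= 1; verdict++) {
        const char *h_says = verdict ? "halts"         : "does not halt";
        const char *d_does = verdict ? "loops forever" : "halts";
        printf("if H(D,D) returns %d (%s), then D(D) %s\n",
               verdict, h_says, d_does);
    }
    return 0;
}

Whether that makes D "an incorrect question" or simply shows that no such H
exists is precisely what the two posters disagree about.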

