tech / sci.logic / Re: Chat GPT 4.0 affirms that Professors Hehner, Stoddart and I are correct

Subject / Author
* Why does H1(D,D) actually get a different result than H(D,D) ???olcott
+* Re: Why does H1(D,D) actually get a different result than H(D,D) ???Richard Damon
|+* Re: Why does H1(D,D) actually get a different result than H(D,D) ???olcott
||`* Re: Why does H1(D,D) actually get a different result than H(D,D) ???Richard Damon
|| `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???olcott
||  +* Re: Why does H1(D,D) actually get a different result than H(D,D) ???Richard Damon
||  |`* Re: Why does H1(D,D) actually get a different result than H(D,D) ???olcott
||  | +* Re: Why does H1(D,D) actually get a different result than H(D,D) ???Richard Damon
||  | |`* Re: Why does H1(D,D) actually get a different result than H(D,D) ???olcott
||  | | +* Re: Why does H1(D,D) actually get a different result than H(D,D) ???Richard Damon
||  | | |`* Re: Why does H1(D,D) actually get a different result than H(D,D) ???olcott
||  | | | `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???Richard Damon
||  | | |  `* Actual limits of computations != actual limits of computers with unlimited memorolcott
||  | | |   `* Re: Actual limits of computations != actual limits of computers with unlimited mRichard Damon
||  | | |    `* Re: Actual limits of computations != actual limits of computers with unlimited molcott
||  | | |     `* Re: Actual limits of computations != actual limits of computers with unlimited mRichard Damon
||  | | |      `* Re: Actual limits of computations != actual limits of computers with unlimited molcott
||  | | |       `* Re: Actual limits of computations != actual limits of computers with unlimited mRichard Damon
||  | | |        `* Limits of computations != actual limits of computers [ Church Turing ]olcott
||  | | |         +* Re: Limits of computations != actual limits of computers [ Church Turing ]Richard Damon
||  | | |         |`* Re: Limits of computations != actual limits of computers [ Church Turing ]olcott
||  | | |         | `* Re: Limits of computations != actual limits of computers [ Church Turing ]Richard Damon
||  | | |         |  +* Re: Limits of computations != actual limits of computers [ Church Turing ]olcott
||  | | |         |  |+* Re: Limits of computations != actual limits of computers [ Church Turing ]immibis
||  | | |         |  ||`* Re: Limits of computations != actual limits of computers [ Church Turing ]olcott
||  | | |         |  || `* Re: Limits of computations != actual limits of computers [ Church Turing ]Richard Damon
||  | | |         |  ||  `* Re: Limits of computations != actual limits of computers [ Church Turing ]olcott
||  | | |         |  ||   `* Re: Limits of computations != actual limits of computers [ Church Turing ]Richard Damon
||  | | |         |  ||    `* Re: Limits of computations != actual limits of computers [ Church Turing ]olcott
||  | | |         |  ||     `- Re: Limits of computations != actual limits of computers [ Church Turing ]Richard Damon
||  | | |         |  |`* Re: Limits of computations != actual limits of computers [ Church Turing ]Richard Damon
||  | | |         |  | `* Re: Limits of computations != actual limits of computers [ Church Turing ]olcott
||  | | |         |  |  `* Re: Limits of computations != actual limits of computers [ Church Turing ]Richard Damon
||  | | |         |  |   `* Re: Limits of computations != actual limits of computers [ Church Turing ]olcott
||  | | |         |  |    `- Re: Limits of computations != actual limits of computers [ Church Turing ]Richard Damon
||  | | |         |  `* Re: Limits of computations != actual limits of computers [ Church Turing ]olcott
||  | | |         |   `* Re: Limits of computations != actual limits of computers [ Church Turing ]Richard Damon
||  | | |         |    `* Re: Limits of computations != actual limits of computers [ Church Turing ]olcott
||  | | |         |     `* Re: Limits of computations != actual limits of computers [ Church Turing ]Richard Damon
||  | | |         |      `* Re: Limits of computations != actual limits of computers [ Church Turing ]olcott
||  | | |         |       `- Re: Limits of computations != actual limits of computers [ Church Turing ]Richard Damon
||  | | |         `- Re: Finlayson [ Church Turing ]Ross Finlayson
||  | | +* How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||  | | |+* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||  | | ||`* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||  | | || +* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||  | | || |`* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||  | | || | `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||  | | || |  `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||  | | || |   `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||  | | || |    `- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||  | | || +* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||  | | || |+* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||  | | || ||`* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||  | | || || `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||  | | || ||  `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||  | | || ||   `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||  | | || ||    `- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||  | | || |`* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||  | | || | `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||  | | || |  `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||  | | || |   `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||  | | || |    `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||  | | || |     `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||  | | || |      `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||  | | || |       `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||  | | || |        `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||  | | || |         `- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||  | | || `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||  | | ||  `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||  | | ||   `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||  | | ||    `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||  | | ||     +- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||  | | ||     `* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||  | | ||      `- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?Richard Damon
||  | | |`* Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?immibis
||  | | | `- Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?olcott
||  | | +* Correcting the foundation of analytic truth and Linz H ⟨Ĥ⟩ ⟨Ĥ⟩olcott
||  | | |`* Re: Correcting the foundation of analytic truth and Linz H ⟨Ĥ⟩ ⟨Ĥ⟩Richard Damon
||  | | | `* Re: Correcting the foundation of analytic truth and Linz H ⟨Ĥ⟩ ⟨Ĥ⟩olcott
||  | | |  `- Re: Correcting the foundation of analytic truth and Linz H ⟨Ĥ⟩ ⟨Ĥ⟩Richard Damon
||  | | `* Why do H ⟨Ĥ⟩ ⟨Ĥ⟩ and Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ derive different results ?olcott
||  | |  +- Re: Why do H ⟨Ĥ⟩ ⟨Ĥ⟩ and Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ derive different results ?immibis
||  | |  `- Re: Why do H ⟨Ĥ⟩ ⟨Ĥ⟩ and Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ derive different results ?Richard Damon
||  | `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???immibis
||  |  `- Re: Why does H1(D,D) actually get a different result than H(D,D) ???olcott
||  `- Re: Why does H1(D,D) actually get a different result than H(D,D) ???immibis
|+- Re: Why does H1(D,D) actually get a different result than H(D,D) ???olcott
|+* Re: Why does H1(D,D) actually get a different result than H(D,D) ???olcott
||`* Re: Why does H1(D,D) actually get a different result than H(D,D) ???Richard Damon
|| `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???olcott
||  +* Re: Why does H1(D,D) actually get a different result than H(D,D) ???Richard Damon
||  |`* Re: Why does H1(D,D) actually get a different result than H(D,D) ???olcott
||  | `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???Richard Damon
||  |  `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???olcott
||  |   `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???Richard Damon
||  |    `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???olcott
||  |     `- Re: Why does H1(D,D) actually get a different result than H(D,D) ???Richard Damon
||  `* Re: Why does H1(D,D) actually get a different result than H(D,D) ???immibis
||   `* Chat GPT 4.0 affirms that Professors Hehner, Stoddart and I are correctolcott
||    `* Re: Chat GPT 4.0 affirms that Professors Hehner, Stoddart and I are correctimmibis
|`* Re: Why does H1(D,D) actually get a different result than H(D,D) ???Tristan Wibberley
+* Re: Why does H1(D,D) actually get a different result than H(D,D) ???immibis
`* Re: Why does H1(D,D) actually get a different result than H(D,D) (Linz version)olcott

Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?

<us5vc9$psb9$4@i2pn2.org>

https://www.novabbs.com/tech/article-flat.php?id=9045&group=sci.logic#9045

Newsgroups: comp.theory, sci.logic
 by: Richard Damon - Tue, 5 Mar 2024 02:17 UTC

On 3/4/24 7:48 PM, olcott wrote:
> On 3/4/2024 6:21 PM, Richard Damon wrote:
>> On 3/4/24 3:14 PM, olcott wrote:
>>> On 3/4/2024 6:31 AM, Richard Damon wrote:
>>>> On 3/3/24 11:37 PM, olcott wrote:
>>>>> On 3/3/2024 10:25 PM, Richard Damon wrote:
>>>>>> On 3/3/24 10:32 PM, olcott wrote:
>>>>>>> On 3/3/2024 8:24 PM, Richard Damon wrote:
>>>>>>>> On 3/3/24 9:13 PM, olcott wrote:
>>>>>>>>> On 3/3/2024 2:40 PM, Richard Damon wrote:
>>>>>>>>>> On 3/3/24 1:47 PM, olcott wrote:
>>>>>>>>>>> On 3/3/2024 11:48 AM, Mikko wrote:
>>>>>>>>>>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>>>>>>>>>>
>>>>>>>>>>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>>>>>>>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> None-the-less actual computers do actually demonstrate
>>>>>>>>>>>>>>> actual very deep understanding of these things.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Not very deep, just deeper than you can achieve.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>>>>>>>>>>
>>>>>>>>>>>> That does not demonstrate any understanding, even shallow.
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> The first thing that it does is agree that Hehner's
>>>>>>>>>>> "Carol's question" (augmented by Richards critique)
>>>>>>>>>>> is an example of the Liar Paradox.
>>>>>>>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>>>>>>>
>>>>>>>>>>> It ends up concluding that myself, professor Hehner and
>>>>>>>>>>> professor Stoddart are all correct in that there is
>>>>>>>>>>> something wrong with the halting problem.
>>>>>>>>>>
>>>>>>>>>> Which since it is proven that Chat GPT doesn't actually know
>>>>>>>>>> what is a fact, and has been proven to lie,
>>>>>>>>>
>>>>>>>>> The first thing that it figured out on its own is that
>>>>>>>>> Carol's question is isomorphic to the Liar Paradox.
>>>>>>>>>
>>>>>>>>> It eventually agreed with the same conclusion that
>>>>>>>>> myself and professors Hehner and Stoddart agreed to.
>>>>>>>>> It took 34 pages of dialog to understand this. I
>>>>>>>>> finally have a good PDF of this.
>>>>>>>>>
>>>>>>>>
>>>>>>>> It didn't "Figure it out". It pattern-matched it to previous
>>>>>>>> input it has been given.
>>>>>>>>
>>>>>>>> If it took 34 pages to agree with your conclusion, then it
>>>>>>>> really didn't agree with you initially, but you finally trained
>>>>>>>> it to your version of reality.
>>>>>>>
>>>>>>> *HERE IS ITS AGREEMENT*
>>>>>>> When an input, such as the halting problem's pathological input
>>>>>>> D, is
>>>>>>> designed to contradict every value that the halting decider H
>>>>>>> returns,
>>>>>>> it creates a self-referential paradox that prevents H from
>>>>>>> providing a
>>>>>>> consistent and correct response. In this context, D can be seen as
>>>>>>> posing an incorrect question to H, as its contradictory nature
>>>>>>> undermines the possibility of a meaningful and accurate answer.
>>>>>>>
>>>>>>>
>>>>>>
>>>>>> Which means NOTHING as an LLM will tell non-truths if fed misleading
>>>>>> information.
>>>>>
>>>>> The above paragraph is proven to be completely true entirely
>>>>> on the basis of the meaning of its words as these words were
>>>>> defined in the dialogue that precedes them.
>>>>>
>>>>
>>>> Nope, the problem is you gave it incorrect implications on the
>>>> meaning of the words.
>>>
>>> *ChatGPT 4.0 final analysis of everything that preceded it*
>>> When an input, such as the halting problem's pathological input D, is
>>> designed to contradict every value that the halting decider H returns,
>>> it creates a self-referential paradox that prevents H from providing a
>>> consistent and correct response. In this context, D can be seen as
>>> posing an incorrect question to H, as its contradictory nature
>>> undermines the possibility of a meaningful and accurate answer.
>>>
>>> Within my definitions of my terms the above paragraph written by
>>> Chat GPT 4.0 necessarily follows. This proves that the Chat GPT
>>> 4.0 analysis is sound.
>>>
>>> *People are not free to disagree with stipulative definitions*
>>>
>>> A stipulative definition is a type of definition in which a new or
>>> currently existing term is given a new specific meaning for the purposes
>>> of argument or discussion in a given context. When the term already
>>> exists, this definition may, but does not necessarily, contradict the
>>> dictionary (lexical) definition of the term.
>>> https://en.wikipedia.org/wiki/Stipulative_definition
>>>
>>>
>>>
>>
>>
>> Right, and by that EXACT SAME RULE, when you "stipulate" a definition
>> different from that stipulated by a field, you place yourself outside
>> that field, and if you still claim to be working in it, you are just
>> admitting to being a bald-faced LIAR.
>>
>
> Not exactly. When I stipulate a definition that shows the
> incoherence of the conventional definitions then I am working
> at the foundational level above this field.

Nope, if you change the definition of the field, you are in a new field.

Just like ZFC Set Theory isn't Naive Set Theory, if you change the
basis, you are in a new field.

>
>> And, no, CHAT GPT's analysis is NOT "Sound", at least not in the field
>> you claim to be working in, as that has definitions that must be followed,
>> which you don't.
>>
>
> It is perfectly sound within my stipulated definitions.
> Or we could say that it is perfectly valid when one takes
> my definitions as its starting premises.

And not when you look at the field you claim to be in.

That just makes you a LIAR.

>
>> So, maybe it is sound POOP logic, but not sound Computation logic.
>>
>> And you are just admitting you have been lying all these years.
>
> My stipulative definitions show that the conventional
> ones are established within an incoherent foundation.
>
> Your reviews continue to be very helpful for me to
> make my points increasingly more clearly.
>

Then build your new foundations and stop lying that you are working in
the ones you say are broken.

Only an idiot works in a field they think is broken; maybe that is why
you are doing it.

Of course, this means you need to learn enough of foundation building to
build your new system. That is a fairly rigorous task.

Re: Correcting the foundation of analytic truth and Linz H ⟨Ĥ⟩ ⟨Ĥ⟩

<us5vm7$psb9$5@i2pn2.org>

https://www.novabbs.com/tech/article-flat.php?id=9046&group=sci.logic#9046

Newsgroups: comp.theory, sci.logic
 by: Richard Damon - Tue, 5 Mar 2024 02:23 UTC

On 3/4/24 8:03 PM, olcott wrote:
> On 3/4/2024 6:22 PM, Richard Damon wrote:
>> On 3/4/24 2:53 PM, olcott wrote:
>>> On 3/4/2024 3:22 AM, Mikko wrote:
>>>> On 2024-03-03 18:47:29 +0000, olcott said:
>>>>
>>>>> On 3/3/2024 11:48 AM, Mikko wrote:
>>>>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>>>>
>>>>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>>>>
>>>>>>>>> None-the-less actual computers do actually demonstrate
>>>>>>>>> actual very deep understanding of these things.
>>>>>>>>
>>>>>>>> Not very deep, just deeper than you can achieve.
>>>>>>>>
>>>>>>>
>>>>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>>>>
>>>>>> That does not demonstrate any understanding, even shallow.
>>>>>>
>>>>>
>>>>> The first thing that it does is agree that Hehner's
>>>>> "Carol's question" (augmented by Richards critique)
>>>>> is an example of the Liar Paradox.
>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>
>>>>> It ends up concluding that myself, professor Hehner and
>>>>> professor Stoddart are all correct in that there is
>>>>> something wrong with the halting problem.
>>>>
>>>> None of that demonstrates any understanding.
>>>>
>>>>> My persistent focus on these ideas gives me an increasingly
>>>>> deeper understanding; thus my latest position is that the
>>>>> halting problem proofs do not actually show that halting
>>>>> is not computable.
>>>>
>>>> Your understanding is still defective and shallow.
>>>>
>>>
>>> If it really was shallow then a gap in my reasoning
>>> could be pointed out. The actual case is that because
>>> I have focused on the same problem based on the Linz
>>> proof for such a long time I noticed things that no
>>> one ever noticed before. *Post from 2004*
>>
>> It has been.
>>
>> You are just too stupid to understand.
>>
>> You can't fix intentional stupid.
>>
>
> You can't see outside of the box of the incorrect foundation
> of the notion of analytic truth.
>
> Because hardly anyone knows that the Liar Paradox is not a truth bearer,
> even fewer people understand that epistemological antinomies are not
> truth bearers.
>
> All the people that do not understand that epistemological antinomies
> are not truth bearers cannot understand that asking a question about
> the truth value of an epistemological antinomy is a mistake.
>
> These people lack the basis to understand that decision problem/input
> pairs asking about the truth value of an epistemological antinomy
> are a mistake.

If the foundation is so bad, why are you still in it?

You are effectively proving you can't do better by staying.

>
> People lacking these prerequisite understandings simply write off what
> they do not understand as nonsense.
>
> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn     // Ĥ applied to ⟨Ĥ⟩ does not halt
>
> Simulating termination analyzer Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ must transition
> to Ĥ.Hqn or fail to halt.

And thus you LIE that you are working on the Halting Problem, by using
the wrong criteria.

>
> When its sole criterion measure is to always say NO to every input
> that would prevent it from halting then it must say NO to ⟨Ĥ⟩ ⟨Ĥ⟩.
>

Which makes it a LIE to claim you are working on the Halting Problem

> When H ⟨Ĥ⟩ ⟨Ĥ⟩ correctly uses this exact same criterion measure
> then the "abort simulation" criteria <is not> met thus providing
> the correct basis for a different answer.
>

Which just proves you are a PATHOLOGICAL LIAR since you keep on
insisting that you are working on the Halting problem when you are using
the wrong criteria.

The right answer to the wrong question is not a right answer, since it
is the question that matters, and to say otherwise is just a LIE.
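
To make the construction under dispute concrete: below is a minimal C
sketch of the pathological input, with the names H and D taken from the
thread. The body of H is only a stub assumption so that the example
runs; by the theorem being debated, no correct halt decider can
actually be implemented.

#include <stdio.h>

typedef int (*program)(void *);

/* Hypothetical halt decider: returns 1 if p(x) halts, 0 otherwise.
   No correct implementation can exist; this stub simply guesses
   "does not halt" so that the demonstration below can run. */
static int H(program p, void *x)
{
    (void)p;
    (void)x;
    return 0;
}

/* The pathological input D: do the opposite of whatever H predicts.
   (Casting a function pointer through void * is a common extension,
   used here only to let D receive itself as data.) */
static int D(void *p)
{
    if (H((program)p, p)) /* H predicts D(D) halts ...  */
        for (;;)          /* ... so D loops forever,    */
            ;
    return 0;             /* ... else D halts at once.  */
}

int main(void)
{
    /* The stub answered 0 ("does not halt"), yet D(D) returns, so H
       was wrong; had the stub answered 1, D would loop and H would be
       wrong again. That exhaustive failure is the standard proof. */
    printf("H(D,D) = %d, yet D(D) returned %d\n",
           H(D, (void *)D), D((void *)D));
    return 0;
}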

Limits of computations != actual limits of computers [ Church Turing ]

<us5vvi$3ii6o$1@dont-email.me>

https://www.novabbs.com/tech/article-flat.php?id=9047&group=sci.logic#9047

Newsgroups: comp.theory, sci.logic
 by: olcott - Tue, 5 Mar 2024 02:28 UTC

On 3/4/2024 7:31 PM, Richard Damon wrote:
> On 3/4/24 2:31 PM, olcott wrote:
>> On 3/4/2024 6:40 AM, Richard Damon wrote:
>>> On 3/3/24 11:58 PM, olcott wrote:
>>>> On 3/3/2024 10:25 PM, Richard Damon wrote:
>>>>> On 3/3/24 9:39 PM, olcott wrote:
>>>>>> On 3/3/2024 2:40 PM, Richard Damon wrote:
>>>>>>> On 3/3/24 11:19 AM, olcott wrote:
>>>>>>>> On 3/3/2024 6:15 AM, Richard Damon wrote:
>>>>>>>>> On 3/2/24 9:09 PM, olcott wrote:
>>>>>>>>>> On 3/2/2024 7:46 PM, Richard Damon wrote:
>>>>>>>>>>> On 3/2/24 8:38 PM, olcott wrote:
>>>>>>>>>>>> On 3/2/2024 7:30 PM, Richard Damon wrote:
>>>>>>>>>>>>> On 3/2/24 8:08 PM, olcott wrote:
>>>>>>>>>>>>>> On 3/2/2024 6:47 PM, Richard Damon wrote:
>>>>>>>>>>>>>>> On 3/2/24 6:01 PM, olcott wrote:
>>>>>>>>>>>>>>>> On 3/2/2024 4:54 PM, Richard Damon wrote:
>>>>>>>>>>>>>>>>> On 3/2/24 5:28 PM, olcott wrote:
>>>>>>>>>>>>>>>>>> The reason that people assume that H1(D,D) must get
>>>>>>>>>>>>>>>>>> the same result as H(D,D) is that they make sure
>>>>>>>>>>>>>>>>>> to ignore the reason why they get a different result.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Namely that you are lying that H and H1 are actually
>>>>>>>>>>>>>>>>> the same computation.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> It turns out that the only reason that H1(D,D) derives a
>>>>>>>>>>>>>>>>>> different result than H(D,D) is that H is at a different
>>>>>>>>>>>>>>>>>> physical machine address than H1.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Which means that H and H1 are not computations, and you
>>>>>>>>>>>>>>>>> have been just an ignorant pathological liar all this
>>>>>>>>>>>>>>>>> time.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> D calls 00001522 // machine address of H
>>>>>>>>>>>>>>>>>> thus D does not call 00001422 // machine address of H1
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Since this <is> the empirically determined deciding
>>>>>>>>>>>>>>>>>> factor I don't think that it can be correctly ignored.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Right, but since the algorithm for H/H1 uses the
>>>>>>>>>>>>>>>>> address of the decider, which isn't defined as an
>>>>>>>>>>>>>>>>> "input" to it, we see that you have been lying that
>>>>>>>>>>>>>>>>> this code is a computation. Likely because you have
>>>>>>>>>>>>>>>>> made yourself ignorant of what a computation actually is,
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Thus you have made yourself into an Ignorant
>>>>>>>>>>>>>>>>> Pathological Lying Idiot.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Any physically implemented Turing machine must exist
>>>>>>>>>>>>>>>>>> at some physical memory location.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Nope.
>>>>>>>>>>>>>>>> Any *physically implemented Turing machine*
>>>>>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>>>>>> must exist at some physical memory location.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Try and explain the details of how it can be otherwise.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Any description of a Turing Machine (or a Computation)
>>>>>>>>>>>>>>> that needs to reference attributes of Modern Electronic
>>>>>>>>>>>>>>> Computers is just WRONG as they predate the development
>>>>>>>>>>>>>>> of such a thing.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Virtual Machines that are exactly Turing Machines
>>>>>>>>>>>>>> except for unlimited memory can and do exist.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> They necessarily must be implemented in physical memory
>>>>>>>>>>>>>> and cannot possibly be implemented any other way.
>>>>>>>>>>>>>
>>>>>>>>>>>>> So?
>>>>>>>>>>>>>
>>>>>>>>>>>>> Doesn't let a "Computation" change its answer based on its
>>>>>>>>>>>>> memory address.
>>>>>>>>>>>>>
>>>>>>>>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
>>>>>>>>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩
>>>>>>>>>>>> does not halt
>>>>>>>>>>>>
>>>>>>>>>>>> We ourselves can see that ⟨Ĥ⟩ ⟨Ĥ⟩ correctly simulated by
>>>>>>>>>>>> Ĥ.H cannot possibly terminate unless Ĥ.H aborts its
>>>>>>>>>>>> simulation.
>>>>>>>>>>>>
>>>>>>>>>>>> We ourselves can also see that when Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩
>>>>>>>>>>>> does abort its simulation then Ĥ will halt.
>>>>>>>>>>>>
>>>>>>>>>>>> That no computer will ever achieve this degree of
>>>>>>>>>>>> understanding is directly contradicted by this:
>>>>>>>>>>>>
>>>>>>>>>>>> ChatGPT 4.0 dialogue.
>>>>>>>>>>>> https://www.liarparadox.org/ChatGPT_HP.pdf
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> No COMPUTATION can solve it, because it has been proved
>>>>>>>>>>> impossible.
>>>>>>>>>>
>>>>>>>>>> None-the-less actual computers do actually demonstrate
>>>>>>>>>> actual very deep understanding of these things.
>>>>>>>>>
>>>>>>>>> Do computers actually UNDERSTAND?
>>>>>>>>
>>>>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>>>> Demonstrates the functional equivalent of deep understanding.
>>>>>>>> The first thing that it does is categorize Carol's question
>>>>>>>> as equivalent to the Liar Paradox.
>>>>>>>
>>>>>>> Nope, doesn't show what you claim, just that it has been taught
>>>>>>> by "rote memorization" that the answer to a question put the way
>>>>>>> you did is the answer it gave.
>>>>>>>
>>>>>>> You are just showing that YOU don't understand what the word
>>>>>>> UNDERSTAND actually means.
>>>>>>>
>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> This proves that the artifice of the human notion of
>>>>>>>>>> computation is more limiting than actual real computers.
>>>>>>>>>
>>>>>>>>> In other words, you reject the use of definitions to define words.
>>>>>>>>>
>>>>>>>>> I guess to you, nothing means what others have said it means,
>>>>>>>>>
>>>>>>>>
>>>>>>>> I have found that it is the case that some definitions of
>>>>>>>> technical terms sometimes box people into misconceptions
>>>>>>>> such that alternative views are inexpressible within the
>>>>>>>> technical language.
>>>>>>>> https://en.wikipedia.org/wiki/Linguistic_relativity
>>>>>>>
>>>>>>> In other words, you are admitting that when you claim to be
>>>>>>> working in a technical field and using the words as that field
>>>>>>> means, you are just being an out-and-out LIAR.
>>>>>>
>>>>>> Not at all. When working with any technical definition I never
>>>>>> simply assume that it is coherent. I always assume that it is
>>>>>> possibly incoherent until proven otherwise.
>>>>>
>>>>> In other words, you ADMIT that you ignore technical definitions and
>>>>> thus your comments about working in the field are just an ignorant
>>>>> pathological lie.
>>>>>
>>>>>>
>>>>>> If there are physically existing machines that can answer questions
>>>>>> that are not Turing computable only because these machines can access
>>>>>> their own machine address then these machines would be strictly more
>>>>>> powerful than Turing Machines on these questions.
>>>>>
>>>>> Nope.
>>>>
>>>>
>>>> If machine M can solve problems that machine N
>>>> cannot solve then for these problems M is more
>>>> powerful than N.
>>>
>>> But your H1 doesn't actually SOLVE the problem, as it fails on the
>>> input (H1^) (H1^)
>>>
>>
>> I am not even talking about that.
>> In this new thread I am only talking about the generic case of:
>> *Actual limits of computations != actual limits of computers*
>> *with unlimited memory*
>>
>>> Note, I realise I misspoke a bit. Any "Non-computation" sub-program
>>> can be turned into a Computation, just by being honest and declaring
>>> as inputs the "Hidden Data" that it is using
>>>
>>>>
>>>>>
>>>>> But you just admitted you are too ignorant of the actual meaning to
>>>>> make a reasoned statement and too dishonest to concede that, even
>>>>> after admitting it,
>>>>>
>>>>>>
>>>>>> If computability only means can't be done in a certain artificially
>>>>>> limited way and not any actual limit on what computers can actually
>>>>>> do then computability would seem to be nonsense.
>>>>>>
>>>>
>>>> Try and explain how this would not be nonsense.
>>>
>>> First, it ISN'T "Artificial", it is a natural outcome of the sorts of
>>> problems we actually want to solve.
>>>
>>
>> If there is a physical machine that can solve problems that a Turing
>> machine cannot solve then we are only pretending that the limits of
>> computation are the limits of computers.
>
> But there isn't.
>
> At least not problems that can be phrased as a computation


[...]
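
The dispute above over H(D,D) versus H1(D,D) turns on a routine that
consults its own machine address. Below is a minimal C sketch of that
situation; the names H and H1 are borrowed from the thread, and the
address test itself is an invented illustration, not olcott's actual
code. Two textually identical routines can return different results at
different load addresses, which is why such a routine is not a
computation in the standard sense unless its address is declared as one
of its inputs.

#include <stdio.h>

/* Two textually identical "deciders" whose result depends on their
   own machine address. Equal inputs no longer guarantee equal
   outputs, so neither is a pure function of its declared input. */
static int H(int x)
{
    if ((unsigned long)(void *)&H % 4096 < 2048) /* depends on &H  */
        return x;
    return -x;
}

static int H1(int x)
{
    if ((unsigned long)(void *)&H1 % 4096 < 2048) /* depends on &H1 */
        return x;
    return -x;
}

int main(void)
{
    /* Same input and same source text, yet the two results may
       differ, and may even vary between runs under address-space
       layout randomization. */
    printf("H(5)  = %d\n", H(5));
    printf("H1(5) = %d\n", H1(5));
    return 0;
}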
Re: Limits of computations != actual limits of computers [ Church Turing ]

<us6209$psb9$6@i2pn2.org>

https://www.novabbs.com/tech/article-flat.php?id=9048&group=sci.logic#9048

Newsgroups: comp.theory, sci.logic
 by: Richard Damon - Tue, 5 Mar 2024 03:02 UTC

On 3/4/24 9:28 PM, olcott wrote:
[...]
>>> If there is a physical machine that can solve problems that a Turing
>>> machine cannot solve then we are only pretending that the limits of
>>> computation are the limits of computers.
>>
>> But there isn't.
>>
>> At least not problems that can be phrased as a computation
>
> If (hypothetically) there are physical computers that can
> solve decision problems that Turing machines cannot solve
> then the notion of computability is not any actual real
>> limit; it is merely a fake limit.


[...]
Re: Limits of computations != actual limits of computers [ Church Turing ]

<us6548$3jd6k$1@dont-email.me>

https://www.novabbs.com/tech/article-flat.php?id=9049&group=sci.logic#9049

Newsgroups: comp.theory, sci.logic
 by: olcott - Tue, 5 Mar 2024 03:55 UTC

On 3/4/2024 9:02 PM, Richard Damon wrote:
[...]
>>>> If there is a physical machine that can solve problems that a Turing
>>>> machine cannot solve then we are only pretending that the limits of
>>>> computation are the limits of computers.
>>>
>>> But there isn't.
>>>
>>> At least not problems that can be phrased as a computation
>>
>> If (hypothetically) there are physical computers that can
>> solve decision problems that Turing machines cannot solve
>> then the notion of computability is not any actual real
>> limit; it is merely a fake limit.
>
> Except that it has been shown that there isn't such a thing, so your
> hypothetical is just a trip into fantasy land.
>


[...]
Re: Finlayson [ Church Turing ]

<0aWcnebHVN2hBXv4nZ2dnZfqnPadnZ2d@giganews.com>

https://www.novabbs.com/tech/article-flat.php?id=9050&group=sci.logic#9050

Newsgroups: comp.theory, sci.logic
 by: Ross Finlayson - Tue, 5 Mar 2024 03:58 UTC

On 03/04/2024 06:28 PM, olcott wrote:

[...]

> If there is an actual limit such that a Turing equivalent machine
> can't know its own machine address, then I expect that either this
> limit is fake or that machines that can know their own machine
> address (and have unlimited memory) may be more powerful
> than Turing machines.
>
> In computability theory, the Church–Turing thesis...
> is a thesis about the nature of computable functions. It states
> that a function on the natural numbers can be calculated by an
> effective method if and only if it is computable by a Turing machine.
> https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis
>
> If a machine that can know its own machine address is an
> aspect of the requirements to solve any decision problem
> such as the halting problem and the finite string input
> to such a machine is construed as a natural number and
> halting is construed as a function on natural numbers
> then such a machine would seem to refute Church/Turing.
>
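
For reference, the "function on the natural numbers" at issue can be
written out explicitly (a standard formulation, not quoted from any
post here): fixing an enumeration M_0, M_1, M_2, ... of Turing
machines, the halting function is

    h(e, x) = 1  if M_e applied to x halts
    h(e, x) = 0  otherwise

Turing proved that no Turing machine computes h, and the Church-Turing
thesis adds that no effective method computes it either, so a physical
device that reliably computed h would indeed refute at least the
physical reading of the thesis.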

Some theories of the natural numbers, have that,
there is some "standard model", of the natural numbers.

Some theories of the natural numbers, have that,
there is not a "standard model", of the natural numbers,
only "fragments", and, "extensions".

Most of the time, that there is a, "law of large numbers",
is according to infinite induction wiling away the finite,
to zero, and along the way, linearly.

Some theories have that there are, "law(s) of large numbers",
and it depends on the space, and, the practical, the potential,
the effective, the actual: infinity, of the numbers, with
regards to classical expectations or standard expectations,
and non-classical expectations or non-standard expectations
or the non-linear.

You'll usually find these notions in probability theory,
theories of the non-standard and theories of the non-classical,
the non-linear, to explain why things usually framed in the
Central Limit Theorem, don't pan out, vis-a-vis the long tail
or "error record", and as for, ...,
"functions that converge very sloooww-ly",
and also "functions that belie their finite inputs".

So, any Turing model you've built is not having an infinite
tape, it is having an unbounded tape.

And, there are functions of natural numbers,
"computed", not by it, the unbounded Turing machine,
in these theories with "law(s), plural", of large numbers,
and what's called non-classical, non-standard, or non-linear,
but isn't just called wrong, because it's so.

So, Church-Rice theorem, Rice theorem, and Church-Turing thesis,
with regards to numbers: are incomplete.

One might qualify that by declaring what all's "classical,
standard, and linear", establishing a distinctness result,
instead of a wrong uniqueness result.

Then, this is part of what's called "extra-ordinary" the theory.

Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?

<us68fc$3jtk1$1@dont-email.me>

https://www.novabbs.com/tech/article-flat.php?id=9052&group=sci.logic#9052

 by: olcott - Tue, 5 Mar 2024 04:52 UTC

On 3/4/2024 8:17 PM, Richard Damon wrote:
> On 3/4/24 7:48 PM, olcott wrote:
>> On 3/4/2024 6:21 PM, Richard Damon wrote:
>>> On 3/4/24 3:14 PM, olcott wrote:
>>>> On 3/4/2024 6:31 AM, Richard Damon wrote:
>>>>> On 3/3/24 11:37 PM, olcott wrote:
>>>>>> On 3/3/2024 10:25 PM, Richard Damon wrote:
>>>>>>> On 3/3/24 10:32 PM, olcott wrote:
>>>>>>>> On 3/3/2024 8:24 PM, Richard Damon wrote:
>>>>>>>>> On 3/3/24 9:13 PM, olcott wrote:
>>>>>>>>>> On 3/3/2024 2:40 PM, Richard Damon wrote:
>>>>>>>>>>> On 3/3/24 1:47 PM, olcott wrote:
>>>>>>>>>>>> On 3/3/2024 11:48 AM, Mikko wrote:
>>>>>>>>>>>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>>>>>>>>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Nonetheless actual computers do actually demonstrate
>>>>>>>>>>>>>>>> actual very deep understanding of these things.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Not very deep, just deeper than you can achieve.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>>>>>>>>>>>
>>>>>>>>>>>>> That does not demonstrate any understanding, even shallow.
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> The first thing that it does is agree that Hehner's
>>>>>>>>>>>> "Carol's question" (augmented by Richard's critique)
>>>>>>>>>>>> is an example of the Liar Paradox.
>>>>>>>>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>>>>>>>>
>>>>>>>>>>>> It ends up concluding that myself, professor Hehner and
>>>>>>>>>>>> professor Stoddart are all correct in that there is
>>>>>>>>>>>> something wrong with the halting problem.
>>>>>>>>>>>
>>>>>>>>>>> Which since it is proven that Chat GPT doesn't actually know
>>>>>>>>>>> what is a fact, and has been proven to lie,
>>>>>>>>>>
>>>>>>>>>> The first thing that it figured out on its own is that
>>>>>>>>>> Carol's question is isomorphic to the Liar Paradox.
>>>>>>>>>>
>>>>>>>>>> It eventually agreed with the same conclusion that
>>>>>>>>>> myself and professors Hehner and Stoddart agreed to.
>>>>>>>>>> It took 34 pages of dialog to understand this. I
>>>>>>>>>> finally have a good PDF of this.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> It didn't "Figure it out". It pattern-matched it to previous
>>>>>>>>> input it has been given.
>>>>>>>>>
>>>>>>>>> If it took 34 pages to agree with your conclusion, then it
>>>>>>>>> really didn't agree with you initially, but you finally trained
>>>>>>>>> it to your version of reality.
>>>>>>>>
>>>>>>>> *HERE IS ITS AGREEMENT*
>>>>>>>> When an input, such as the halting problem's pathological input
>>>>>>>> D, is
>>>>>>>> designed to contradict every value that the halting decider H
>>>>>>>> returns,
>>>>>>>> it creates a self-referential paradox that prevents H from
>>>>>>>> providing a
>>>>>>>> consistent and correct response. In this context, D can be seen as
>>>>>>>> posing an incorrect question to H, as its contradictory nature
>>>>>>>> undermines the possibility of a meaningful and accurate answer.
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>> Which means NOTHING as an LLM will tell non-truths if fed
>>>>>>> misleading information.
>>>>>>
>>>>>> The above paragraph is proven to be completely true entirely
>>>>>> on the basis of the meaning of its words as these words were
>>>>>> defined in the dialogue that precedes them.
>>>>>>
>>>>>
>>>>> Nope, the problem is you gave it incorrect implications on the
>>>>> meaning of the words.
>>>>
>>>> *ChatGPT 4.0 final analysis of everything that preceded it*
>>>> When an input, such as the halting problem's pathological input D, is
>>>> designed to contradict every value that the halting decider H returns,
>>>> it creates a self-referential paradox that prevents H from providing a
>>>> consistent and correct response. In this context, D can be seen as
>>>> posing an incorrect question to H, as its contradictory nature
>>>> undermines the possibility of a meaningful and accurate answer.
>>>>
>>>> Within my definitions of my terms the above paragraph written by
>>>> Chat GPT 4.0 necessarily follows. This proves that the Chat GPT
>>>> 4.0 analysis is sound.
>>>>
>>>> *People are not free to disagree with stipulative definitions*
>>>>
>>>> A stipulative definition is a type of definition in which a new or
>>>> currently existing term is given a new specific meaning for the
>>>> purposes
>>>> of argument or discussion in a given context. When the term already
>>>> exists, this definition may, but does not necessarily, contradict the
>>>> dictionary (lexical) definition of the term.
>>>> https://en.wikipedia.org/wiki/Stipulative_definition
>>>>
>>>>
>>>>
>>>
>>>
>>> Right, and by that EXACT SAME RULE, when you "stipulate" a definition
>>> different from that stipulated by a field, you place yourself outside
>>> that field, and if you still claim to be working in it, you are just
>>> admitting to being a bald-faced LIAR.
>>>
>>
>> Not exactly. When I stipulate a definition that shows the
>> incoherence of the conventional definitions then I am working
>> at the foundational level above this field.
>
> Nope, if you change the definition of the field, you are in a new field.
>
> Just like ZFC Set Theory isn't Naive Set Theory, if you change the
> basis, you are in a new field.

Yes, that is exactly what I am doing, good call!

ZFC corrected the error of Naive Set Theory. I am correcting
the error of the conventional foundation of computability.

>>
>>> And, no, CHAT GPT's analysis is NOT "Sound", at least not in the
>>> field you claim to be working in, as that has definitions that must
>>> be followed, which you don't.
>>>
>>
>> It is perfectly sound within my stipulated definitions.
>> Or we could say that it is perfectly valid when one takes
>> my definitions as its starting premises.
>
> And not when you look at the field you claim to be in.
>

OK then the field that I am in is the field of the
*correction to the foundational notions of computability*
Just like ZFC corrected Naive Set Theory.

> That just makes you a LIAR.
>
>>
>>> So, maybe it is sound POOP logic, but not sound Computation logic.
>>>
>>> And you are just admitting you have been lying all these years.
>>
>> My stipulative definitions show that the conventional
>> ones are established within an incoherent foundation.
>>
>> Your reviews continue to be very helpful for me to
>> make my points increasingly more clearly.
>>
>
> Then build your new foundations and stop lying that you are working in
> the ones you say are broken.
>
> Only an idiot works in a field they think is broken, maybe that is why
> you are doing it.
>
> Of course, this means you need to learn enough of foundation building to
> build your new system. That is a fairly rigorous task.


[...]
Re: Correcting the foundation of analytic truth and Linz H ⟨Ĥ⟩ ⟨Ĥ⟩

<us6c3c$3kffi$1@dont-email.me>

https://www.novabbs.com/tech/article-flat.php?id=9056&group=sci.logic#9056

 by: olcott - Tue, 5 Mar 2024 05:54 UTC

On 3/4/2024 8:23 PM, Richard Damon wrote:
> On 3/4/24 8:03 PM, olcott wrote:
>> On 3/4/2024 6:22 PM, Richard Damon wrote:
>>> On 3/4/24 2:53 PM, olcott wrote:
>>>> On 3/4/2024 3:22 AM, Mikko wrote:
>>>>> On 2024-03-03 18:47:29 +0000, olcott said:
>>>>>
>>>>>> On 3/3/2024 11:48 AM, Mikko wrote:
>>>>>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>>>>>
>>>>>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>>>>>
>>>>>>>>>> Nonetheless actual computers do actually demonstrate
>>>>>>>>>> actual very deep understanding of these things.
>>>>>>>>>
>>>>>>>>> Not very deep, just deeper than you can achieve.
>>>>>>>>>
>>>>>>>>
>>>>>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>>>>>
>>>>>>> That does not demonstrate any understanding, even shallow.
>>>>>>>
>>>>>>
>>>>>> The first thing that it does is agree that Hehner's
>>>>>> "Carol's question" (augmented by Richard's critique)
>>>>>> is an example of the Liar Paradox.
>>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>>
>>>>>> It ends up concluding that myself, professor Hehner and
>>>>>> professor Stoddart are all correct in that there is
>>>>>> something wrong with the halting problem.
>>>>>
>>>>> None of that demonstrates any understanding.
>>>>>
>>>>>> My persistent focus on these ideas gives me an increasingly
>>>>>> deeper understanding thus my latest position is that the
>>>>>> halting problem proofs do not actually show that halting
>>>>>> is not computable.
>>>>>
>>>>> Your understanding is still defective and shallow.
>>>>>
>>>>
>>>> If it really was shallow then a gap in my reasoning
>>>> could be pointed out. The actual case is that because
>>>> I have focused on the same problem based on the Linz
>>>> proof for such a long time I noticed things that no
>>>> one ever noticed before. *Post from 2004*
>>>
>>> It has been.
>>>
>>> You are just too stupid to understand.
>>>
>>> You can't fix intentional stupid.
>>>
>>
>> You can't see outside of the box of the incorrect foundation
>> of the notion of analytic truth.
>>
>> Hardly anyone knows that the Liar Paradox is not a truth bearer, and
>> even fewer people understand that epistemological antinomies are not
>> truth bearers.
>>
>> All the people that do not understand that epistemological antinomies
>> are not truth bearers cannot understand that asking a question about the
>> truth value of an epistemological antinomy is a mistake.
>>
>> These people lack the basis to understand that decision problem/input
>> pairs asking about the truth value of an epistemological
>> antinomy are a mistake.
>
> If the foundation is so bad, why are you still in it?

When I explain what is wrong with the foundation I am
not within this same foundation that I am rebuking.

There are some aspects of the notion of analytical
truth that are incorrect and others that are not.

> You are effectively proving you can't do better by staying.
>
>>
>> People lacking these prerequisite understandings simply write off what
>> they do not understand as nonsense.
>>
>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn     // Ĥ applied to ⟨Ĥ⟩ does not halt
>>
>> Simulating termination analyzer Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ must transition
>> to Ĥ.Hqn or fail to halt.
>
> And thus you LIE that you are working on the Halting Problem, by using
> the wrong criteria.
>

When the "right" criteria cause Ĥ.H to never halt
then these "right" criteria are wrong.

>>
>> When its sole criterion measure is to always say NO to every input
>> that would prevent it from halting then it must say NO to ⟨Ĥ⟩ ⟨Ĥ⟩.
>>
>
> Which makes it a LIE to claim you are working on the Halting Problem
>

If the only way that Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ can correctly determine that
it must abort its simulation requires Ĥ to know its own machine
address, then I have expanded the scope of the halting problem to
include RASP machines, where every program P knows its own address.

>> When H ⟨Ĥ⟩ ⟨Ĥ⟩ correctly uses this exact same criterion measure
>> then the "abort simulation" criteria <is not> met thus providing
>> the correct basis for a different answer.
>>
>
>
> Which just proves you are a PATHOLOGICAL LIAR since you keep on
> insisting that you are working on the Halting problem when you are using
> the wrong criteria.
>

As long as Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ can correctly determine that it must abort its
simulation then I am merely showing that the conventional halting
problem does not prove that a halt decider does not exist.

You admitted that Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ must transition to Ĥ.Hqn
to prevent its own infinite execution, and you admitted
that this makes Ĥ ⟨Ĥ⟩ halt; that proves that when
H ⟨Ĥ⟩ ⟨Ĥ⟩ transitions to H.qy *THIS IS THE CORRECT ANSWER*

You already know that all of that is correct, you
*simply don't believe that H can figure out how to do that*

> The right answer to the wrong question is not a right answer, since it
> is the question that matters, and to say otherwise is just a LIE.

--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Limits of computations != actual limits of computers [ Church Turing ]

<us6vut$re8s$4@i2pn2.org>

https://www.novabbs.com/tech/article-flat.php?id=9063&group=sci.logic#9063

 by: Richard Damon - Tue, 5 Mar 2024 11:33 UTC

On 3/4/24 10:55 PM, olcott wrote:
> On 3/4/2024 9:02 PM, Richard Damon wrote:
>> On 3/4/24 9:28 PM, olcott wrote:
>>> On 3/4/2024 7:31 PM, Richard Damon wrote:
>>>> On 3/4/24 2:31 PM, olcott wrote:
>>>>> On 3/4/2024 6:40 AM, Richard Damon wrote:
>>>>>> On 3/3/24 11:58 PM, olcott wrote:
>>>>>>> On 3/3/2024 10:25 PM, Richard Damon wrote:
>>>>>>>> On 3/3/24 9:39 PM, olcott wrote:
>>>>>>>>> On 3/3/2024 2:40 PM, Richard Damon wrote:
>>>>>>>>>> On 3/3/24 11:19 AM, olcott wrote:
>>>>>>>>>>> On 3/3/2024 6:15 AM, Richard Damon wrote:
>>>>>>>>>>>> On 3/2/24 9:09 PM, olcott wrote:
>>>>>>>>>>>>> On 3/2/2024 7:46 PM, Richard Damon wrote:
>>>>>>>>>>>>>> On 3/2/24 8:38 PM, olcott wrote:
>>>>>>>>>>>>>>> On 3/2/2024 7:30 PM, Richard Damon wrote:
>>>>>>>>>>>>>>>> On 3/2/24 8:08 PM, olcott wrote:
>>>>>>>>>>>>>>>>> On 3/2/2024 6:47 PM, Richard Damon wrote:
>>>>>>>>>>>>>>>>>> On 3/2/24 6:01 PM, olcott wrote:
>>>>>>>>>>>>>>>>>>> On 3/2/2024 4:54 PM, Richard Damon wrote:
>>>>>>>>>>>>>>>>>>>> On 3/2/24 5:28 PM, olcott wrote:
>>>>>>>>>>>>>>>>>>>>> The reason that people assume that H1(D,D) must get
>>>>>>>>>>>>>>>>>>>>> the same result as H(D,D) is that they make sure
>>>>>>>>>>>>>>>>>>>>> to ignore the reason why they get a different result.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Namely that you are lying that H and H1 are actually
>>>>>>>>>>>>>>>>>>>> the same computation.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> It turns out that the only reason that H1(D,D)
>>>>>>>>>>>>>>>>>>>>> derives a different result than H(D,D) is that H is
>>>>>>>>>>>>>>>>>>>>> at a different physical machine address than H1.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Which means that H and H1 are not computations, and
>>>>>>>>>>>>>>>>>>>> you have been just an ignorant pathological liar all
>>>>>>>>>>>>>>>>>>>> this time.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> D calls 00001522          // machine address of H
>>>>>>>>>>>>>>>>>>>>> thus D does not call 00001422 // machine address of H1
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Since this <is> the empirically determined deciding
>>>>>>>>>>>>>>>>>>>>> factor I don't think that it can be correctly ignored.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Right, but since the algorithm for H/H1 uses the
>>>>>>>>>>>>>>>>>>>> address of the decider, which isn't defined as an
>>>>>>>>>>>>>>>>>>>> "input" to it, we see that you have been lying that
>>>>>>>>>>>>>>>>>>>> this code is a computation. Likely because you have
>>>>>>>>>>>>>>>>>>>> made yourself ignorant of what a computation
>>>>>>>>>>>>>>>>>>>> actually is,
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Thus you have made yourself into an Ignorant
>>>>>>>>>>>>>>>>>>>> Pathological Lying Idiot.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Any physically implemented Turing machine must exist
>>>>>>>>>>>>>>>>>>>>> at some physical memory location.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Nope.
>>>>>>>>>>>>>>>>>>> Any *physically implemented Turing machine*
>>>>>>>>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>>>>>>>>> must exist at some physical memory location.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Try and explain the details of how it can be otherwise.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Any description of a Turing Machine (or a Computation)
>>>>>>>>>>>>>>>>>> that needs to reference attributes of Modern Electronic
>>>>>>>>>>>>>>>>>> Computers is just WRONG as they predate the
>>>>>>>>>>>>>>>>>> development of such a thing.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Virtual Machines that are exactly Turing Machines
>>>>>>>>>>>>>>>>> except for unlimited memory can and do exist.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> They necessarily must be implemented in physical memory
>>>>>>>>>>>>>>>>> and cannot possibly be implemented any other way.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> So?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Doesn't let a "Computation" change its answer based on
>>>>>>>>>>>>>>>> its memory address.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩
>>>>>>>>>>>>>>> halts
>>>>>>>>>>>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩
>>>>>>>>>>>>>>> does not halt
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> We ourselves can see that ⟨Ĥ⟩ ⟨Ĥ⟩ correctly simulated by
>>>>>>>>>>>>>>> Ĥ.H cannot possibly terminate unless Ĥ.H aborts its
>>>>>>>>>>>>>>> simulation.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> We ourselves can also see that when Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩
>>>>>>>>>>>>>>> does abort its simulation then Ĥ will halt.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> That no computer will ever achieve this degree of
>>>>>>>>>>>>>>> understanding is directly contradicted by this:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> ChatGPT 4.0 dialogue.
>>>>>>>>>>>>>>> https://www.liarparadox.org/ChatGPT_HP.pdf
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> No COMPUTATION can solve it, because it has been proved
>>>>>>>>>>>>>> impossible.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Nonetheless actual computers do actually demonstrate
>>>>>>>>>>>>> actual very deep understanding of these things.
>>>>>>>>>>>>
>>>>>>>>>>>> Do computers actually UNDERSTAND?
>>>>>>>>>>>
>>>>>>>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>>>>>>> Demonstrates the functional equivalent of deep understanding.
>>>>>>>>>>> The first thing that it does is categorize Carol's question
>>>>>>>>>>> as equivalent to the Liar Paradox.
>>>>>>>>>>
>>>>>>>>>> Nope, doesn't show what you claim, just that it has been
>>>>>>>>>> taught by "rote memorization" that the answer to a question
>>>>>>>>>> put the way you did is the answer it gave.
>>>>>>>>>>
>>>>>>>>>> You are just showing that YOU don't understand what the word
>>>>>>>>>> UNDERSTAND actually means.
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> This proves that the artifice of the human notion of
>>>>>>>>>>>>> computation is more limiting than actual real computers.
>>>>>>>>>>>>
>>>>>>>>>>>> In other words, you reject the use of definitions to define
>>>>>>>>>>>> words.
>>>>>>>>>>>>
>>>>>>>>>>>> I guess to you, nothing means what others have said it means,
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> I have found that it is the case that some definitions of
>>>>>>>>>>> technical terms sometimes box people into misconceptions
>>>>>>>>>>> such that alternative views are inexpressible within the
>>>>>>>>>>> technical language.
>>>>>>>>>>> https://en.wikipedia.org/wiki/Linguistic_relativity
>>>>>>>>>>
>>>>>>>>>> In other words, you are admitting that when you claim to be
>>>>>>>>>> working in a technical field and using the words as that field
>>>>>>>>>> means, you are just being an out and out LIAR.
>>>>>>>>>
>>>>>>>>> Not at all. When working with any technical definition I never
>>>>>>>>> simply assume that it is coherent. I always assume that it is
>>>>>>>>> possibly incoherent until proven otherwise.
>>>>>>>>
>>>>>>>> In other words, you ADMIT that you ignore technical definitions
>>>>>>>> and thus your comments about working in the field are just an
>>>>>>>> ignorant pathological lie.
>>>>>>>>
>>>>>>>>>
>>>>>>>>> If there are physically existing machines that can answer
>>>>>>>>> questions that are not Turing computable only because these
>>>>>>>>> machines can access their own machine address then these machines
>>>>>>>>> would be strictly more powerful than Turing Machines on these
>>>>>>>>> questions.
>>>>>>>>
>>>>>>>> Nope.
>>>>>>>
>>>>>>>
>>>>>>> If machine M can solve problems that machine N
>>>>>>> cannot solve then for these problems M is more
>>>>>>> powerful than N.
>>>>>>
>>>>>> But your H1 doesn't actually SOLVE the problem, as it fails on the
>>>>>> input (H1^) (H1^)
>>>>>>
>>>>>
>>>>> I am not even talking about that.
>>>>> In this new thread I am only talking about the generic case of:
>>>>> *Actual limits of computations != actual limits of computers*
>>>>> *with unlimited memory*
>>>>>
>>>>>> Note, I realise I misspoke a bit. Any "Non-computation"
>>>>>> sub-program can be turned into a Computation, just by being honest
>>>>>> and declaring as inputs the "Hidden Data" that it is using.
>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> But you just admitted you are too ignorant of the actual meaning
>>>>>>>> to make a reasoned statement and too dishonest to concede that,
>>>>>>>> even after admitting it,
>>>>>>>>
>>>>>>>>>
>>>>>>>>> If computability only means that something can't be done in a
>>>>>>>>> certain artificially limited way, and not any actual limit on
>>>>>>>>> what computers can actually do, then computability would seem
>>>>>>>>> to be nonsense.
>>>>>>>>>
>>>>>>>
>>>>>>> Try and explain how this would not be nonsense.
>>>>>>
>>>>>> First, it ISN'T "Artificial", it is a natural outcome of the sorts
>>>>>> of problems we actually want to solve.
>>>>>>
>>>>>
>>>>> If there is a physical machine that can solve problems that a Turing
>>>>> machine cannot solve then we are only pretending that the limits of
>>>>> computation are the limits of computers.
>>>>
>>>> But there isn't.
>>>>
>>>> At least not problems that can be phrased as a computation
>>>
>>> If (hypothetically) there are physical computers that can
>>> solve decision problems that Turing machines cannot solve
>>> then the notion of computability is not any actual real
>>> limit; it is merely a fake limit.
>>
>> Except that it has been shown that there isn't such a thing, so your
>> hypothetical is just a trip into fantasy land.
>>
>
> This <is> such a virtual machine that knows its own machine
> address.
>
> u32 H(ptr P, ptr I)
> {
>   u32 Address_of_H = (u32)H;
>
> Either Turing machines can accomplish the equivalent
> of this or they cannot.
>
> If they cannot and this prevents Turing machines
> from knowing that they have been called in recursive
> simulation and that prevents them from solving even
> a single instance of machine / input then that makes
> Turing machines less powerful on this machine/input pair.


[...]
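
The point Richard Damon made above about "hidden data" can be made
concrete. Here is a minimal C sketch (only the names u32, H, P, I and
Address_of_H come from the quoted snippet; everything else is assumed,
not olcott's actual code): a routine that reads its own address is not
a function of its declared inputs alone, and declaring that address as
an explicit input is exactly what turns it back into a Computation.

#include <stdio.h>
#include <stdint.h>

typedef uint32_t u32;

/* Reads its own address, as in the quoted snippet. The return value
   depends on where the code was loaded, not only on P and I, so this
   is the "hidden data" case. (Casting a function pointer to an
   integer is a common extension, assumed here as in the original
   32-bit snippet.) */
u32 H(void *P, void *I)
{
    u32 Address_of_H = (u32)(uintptr_t)&H;  /* hidden input */
    return Address_of_H;
}

/* The honest version: the same datum is declared as an input, so the
   result is a function of the declared inputs alone. */
u32 H_honest(void *P, void *I, u32 Address_of_H)
{
    return Address_of_H;
}

int main(void)
{
    printf("hidden input:   %u\n", H(NULL, NULL));
    printf("declared input: %u\n",
           H_honest(NULL, NULL, (u32)(uintptr_t)&H_honest));
    return 0;
}
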
Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?

<us6vv2$re8s$6@i2pn2.org>

https://www.novabbs.com/tech/article-flat.php?id=9065&group=sci.logic#9065

 by: Richard Damon - Tue, 5 Mar 2024 11:33 UTC

On 3/4/24 11:52 PM, olcott wrote:
> On 3/4/2024 8:17 PM, Richard Damon wrote:
>> On 3/4/24 7:48 PM, olcott wrote:
>>> On 3/4/2024 6:21 PM, Richard Damon wrote:
>>>> On 3/4/24 3:14 PM, olcott wrote:
>>>>> On 3/4/2024 6:31 AM, Richard Damon wrote:
>>>>>> On 3/3/24 11:37 PM, olcott wrote:
>>>>>>> On 3/3/2024 10:25 PM, Richard Damon wrote:
>>>>>>>> On 3/3/24 10:32 PM, olcott wrote:
>>>>>>>>> On 3/3/2024 8:24 PM, Richard Damon wrote:
>>>>>>>>>> On 3/3/24 9:13 PM, olcott wrote:
>>>>>>>>>>> On 3/3/2024 2:40 PM, Richard Damon wrote:
>>>>>>>>>>>> On 3/3/24 1:47 PM, olcott wrote:
>>>>>>>>>>>>> On 3/3/2024 11:48 AM, Mikko wrote:
>>>>>>>>>>>>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>>>>>>>>>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Nonetheless actual computers do actually demonstrate
>>>>>>>>>>>>>>>>> actual very deep understanding of these things.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Not very deep, just deeper than you can achieve.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> That does not demonstrate any understanding, even shallow.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> The first thing that it does is agree that Hehner's
>>>>>>>>>>>>> "Carol's question" (augmented by Richard's critique)
>>>>>>>>>>>>> is an example of the Liar Paradox.
>>>>>>>>>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>>>>>>>>>
>>>>>>>>>>>>> It ends up concluding that myself, professor Hehner and
>>>>>>>>>>>>> professor Stoddart are all correct in that there is
>>>>>>>>>>>>> something wrong with the halting problem.
>>>>>>>>>>>>
>>>>>>>>>>>> Which since it is proven that Chat GPT doesn't actually know
>>>>>>>>>>>> what is a fact, and has been proven to lie,
>>>>>>>>>>>
>>>>>>>>>>> The first thing that it figured out on its own is that
>>>>>>>>>>> Carol's question is isomorphic to the Liar Paradox.
>>>>>>>>>>>
>>>>>>>>>>> It eventually agreed with the same conclusion that
>>>>>>>>>>> myself and professors Hehner and Stoddart agreed to.
>>>>>>>>>>> It took 34 pages of dialog to understand this. I
>>>>>>>>>>> finally have a good PDF of this.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> It didn't "Figure it out". It pattern-matched it to previous
>>>>>>>>>> input it has been given.
>>>>>>>>>>
>>>>>>>>>> If it took 34 pages to agree with your conclusion, then it
>>>>>>>>>> really didn't agree with you initially, but you finally
>>>>>>>>>> trained it to your version of reality.
>>>>>>>>>
>>>>>>>>> *HERE IS ITS AGREEMENT*
>>>>>>>>> When an input, such as the halting problem's pathological input
>>>>>>>>> D, is
>>>>>>>>> designed to contradict every value that the halting decider H
>>>>>>>>> returns,
>>>>>>>>> it creates a self-referential paradox that prevents H from
>>>>>>>>> providing a
>>>>>>>>> consistent and correct response. In this context, D can be seen as
>>>>>>>>> posing an incorrect question to H, as its contradictory nature
>>>>>>>>> undermines the possibility of a meaningful and accurate answer.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>> Which means NOTHING as an LLM will tell non-truths if fed
>>>>>>>> misleading information.
>>>>>>>
>>>>>>> The above paragraph is proven to be completely true entirely
>>>>>>> on the basis of the meaning of its words as these words were
>>>>>>> defined in the dialogue that precedes them.
>>>>>>>
>>>>>>
>>>>>> Nope, the problem is you gave it incorrect implications on the
>>>>>> meaning of the words.
>>>>>
>>>>> *ChatGPT 4.0 final analysis of everything that preceded it*
>>>>> When an input, such as the halting problem's pathological input D, is
>>>>> designed to contradict every value that the halting decider H returns,
>>>>> it creates a self-referential paradox that prevents H from providing a
>>>>> consistent and correct response. In this context, D can be seen as
>>>>> posing an incorrect question to H, as its contradictory nature
>>>>> undermines the possibility of a meaningful and accurate answer.
>>>>>
>>>>> Within my definitions of my terms the above paragraph written by
>>>>> Chat GPT 4.0 necessarily follows. This proves that the Chat GPT
>>>>> 4.0 analysis is sound.
>>>>>
>>>>> *People are not free to disagree with stipulative definitions*
>>>>>
>>>>> A stipulative definition is a type of definition in which a new or
>>>>> currently existing term is given a new specific meaning for the
>>>>> purposes
>>>>> of argument or discussion in a given context. When the term already
>>>>> exists, this definition may, but does not necessarily, contradict the
>>>>> dictionary (lexical) definition of the term.
>>>>> https://en.wikipedia.org/wiki/Stipulative_definition
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> Right, and by that EXACT SAME RULE, when you "stipulate" a
>>>> definition different from that stipulated by a field, you place
>>>> yourself outside that field, and if you still claim to be working in
>>>> it, you are just admitting to being a bald-faced LIAR.
>>>>
>>>
>>> Not exactly. When I stipulate a definition that shows the
>>> incoherence of the conventional definitions then I am working
>>> at the foundational level above this field.
>>
>> Nope, if you change the definition of the field, you are in a new field.
>>
>> Just like ZFC Set Theory isn't Naive Set Theory, if you change the
>> basis, you are in a new field.
>
> Yes, that is exactly what I am doing, good call!
>
> ZFC corrected the error of Naive Set Theory. I am correcting
> the error of the conventional foundation of computability.

So, just admit that you aren't doing "Computation Theory", and thus
can't say you are refuting Linz or anyone else who WAS doing Computation
Theory.

Just admit you are doing POOP theory.
>
>>>
>>>> And, no, CHAT GPT's analysis is NOT "Sound", at least not in the
>>>> field you claim to be working in, as that has definitions that must
>>>> be followed, which you don't.
>>>>
>>>
>>> It is perfectly sound within my stipulated definitions.
>>> Or we could say that it is perfectly valid when one takes
>>> my definitions as its starting premises.
>>
>> And not when you look at the field you claim to be in.
>>
>
> OK then the field that I am in is the field of the
> *correction to the foundational notions of computability*
> Just like ZFC corrected Naive Set Theory.


[...]
Re: Correcting the foundation of analytic truth and Linz H ⟨Ĥ⟩ ⟨Ĥ⟩

<us6vv4$re8s$7@i2pn2.org>

https://www.novabbs.com/tech/article-flat.php?id=9066&group=sci.logic#9066

 by: Richard Damon - Tue, 5 Mar 2024 11:33 UTC

On 3/5/24 12:54 AM, olcott wrote:
> On 3/4/2024 8:23 PM, Richard Damon wrote:
>> On 3/4/24 8:03 PM, olcott wrote:
>>> On 3/4/2024 6:22 PM, Richard Damon wrote:
>>>> On 3/4/24 2:53 PM, olcott wrote:
>>>>> On 3/4/2024 3:22 AM, Mikko wrote:
>>>>>> On 2024-03-03 18:47:29 +0000, olcott said:
>>>>>>
>>>>>>> On 3/3/2024 11:48 AM, Mikko wrote:
>>>>>>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>>>>>>
>>>>>>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>>>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>>>>>>
>>>>>>>>>>> Nonetheless actual computers do actually demonstrate
>>>>>>>>>>> actual very deep understanding of these things.
>>>>>>>>>>
>>>>>>>>>> Not very deep, just deeper than you can achieve.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>>>>>>
>>>>>>>> That does not demonstrate any understanding, even shallow.
>>>>>>>>
>>>>>>>
>>>>>>> The first thing that it does is agree that Hehner's
>>>>>>> "Carol's question" (augmented by Richard's critique)
>>>>>>> is an example of the Liar Paradox.
>>>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>>>
>>>>>>> It ends up concluding that myself, professor Hehner and
>>>>>>> professor Stoddart are all correct in that there is
>>>>>>> something wrong with the halting problem.
>>>>>>
>>>>>> None of that demonstrates any understanding.
>>>>>>
>>>>>>> My persistent focus on these ideas gives me an increasingly
>>>>>>> deeper understanding thus my latest position is that the
>>>>>>> halting problem proofs do not actually show that halting
>>>>>>> is not computable.
>>>>>>
>>>>>> Your understanding is still defective and shallow.
>>>>>>
>>>>>
>>>>> If it really was shallow then a gap in my reasoning
>>>>> could be pointed out. The actual case is that because
>>>>> I have focused on the same problem based on the Linz
>>>>> proof for such a long time I noticed things that no
>>>>> one ever noticed before. *Post from 2004*
>>>>
>>>> It has been.
>>>>
>>>> You are just too stupid to understand.
>>>>
>>>> You can't fix intentional stupid.
>>>>
>>>
>>> You can't see outside of the box of the incorrect foundation
>>> of the notion of analytic truth.
>>>
>>> Hardly anyone knows that the Liar Paradox is not a truth bearer, and
>>> even fewer people understand that epistemological antinomies are not
>>> truth bearers.
>>>
>>> All the people that do not understand that epistemological antinomies
>>> are not truth bearers cannot understand that asking a question about the
>>> truth value of an epistemological antinomy is a mistake.
>>>
>>> These people lack the basis to understand that decision problem/input
>>> pairs asking about the truth value of an epistemological
>>> antinomy are a mistake.
>>
>> If the foundation is so bad, why are you still in it?
>
> When I explain what is wrong with the foundation I am
> not within this same foundation that I am rebuking.

You can't rebuke a statement that isn't in the system you are working in!

You are just admitting you don't understand how logic works and are
nothing other than an ignorant pathological lying idiot.

>
> There are some aspects of the notion of analytical
> truth that are incorrect and others that are not.

So, you can build your new system on what you think is still good, but
you need to build up from the ground, or you are just admitting that you
are still using and accepting what you claim to be wrong, and thus just
confirming that you are nothing but a liar.

>
>> You are effectively proving you can't do better by staying.
>>
>>>
>>> People lacking these prerequisite understandings simply write off what
>>> they do not understand as nonsense.
>>>
>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn     // Ĥ applied to ⟨Ĥ⟩ does not halt
>>>
>>> Simulating termination analyzer Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ must transition
>>> to Ĥ.Hqn or fail to halt.
>>
>> And thus you LIE that you are working on the Halting Problem, by using
>> the wrong criteria.
>>
>
> When the "right" criteria cause Ĥ.H to never halt
> then these "right" criteria are wrong.

Not in the real computation theory, so until you actually develop your
alternate theory, you are just admitting to lying.

>
>>>
>>> When its sole criterion measure is to always say NO to every input
>>> that would prevent it from halting then it must say NO to ⟨Ĥ⟩ ⟨Ĥ⟩.
>>>
>>
>> Which makes it a LIE to claim you are working on the Halting Problem
>>
>
> If the only way that Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ can correctly determine that
> it must abort its simulation requires Ĥ to know its own machine
> address, then I have expanded the scope of the halting problem to
> include RASP machines, where every program P knows its own address.

In other words, you are admitting to using bad logic.

You are just admitting you are a stupid ignorant pathological lying idiot.

>
>>> When H ⟨Ĥ⟩ ⟨Ĥ⟩ correctly uses this exact same criterion measure
>>> then the "abort simulation" criteria <is not> met thus providing
>>> the correct basis for a different answer.
>>>
>>
>>
>> Which just proves you are a PATHOLOGICAL LIAR since you keep on
>> insisting that you are working on the Halting problem when you are
>> using the wrong criteria.
>>
>
> As long as Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ can correctly determine that it must abort its
> simulation then I am merely showing that the conventional halting
> problem does not prove that a halt decider does not exist.

Nope. You are proving that you believe that Strawman arguments are
valid, and thus are just a stupid ignorant pathological lying idiot.

>
> You admitted that Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ must transition to Ĥ.Hqn
> to prevent its own infinite execution, and you admitted
> that this makes Ĥ ⟨Ĥ⟩ halt; that proves that when
> H ⟨Ĥ⟩ ⟨Ĥ⟩ transitions to H.qy *THIS IS THE CORRECT ANSWER*

Except that having H (H^) (H^) go to qy while H^.H (H^) (H^) goes to qn
is just admitting that you have lied about H and H^.H, thus proving you
are a categorically stupid and ignorant pathological liar.
>
> You already know that all of that is correct, you
> *simply don't believe that H can figure out how to do that*

No, I know you have proven yourself unable to deal with the truth.

>
>> The right answer to the wrong question is not a right answer, since it
>> is the question that matters, and to say otherwise is just a LIE.
>

Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?

<us7hp5$3rfoj$4@dont-email.me>

https://www.novabbs.com/tech/article-flat.php?id=9071&group=sci.logic#9071

 by: olcott - Tue, 5 Mar 2024 16:37 UTC

On 3/5/2024 3:28 AM, Mikko wrote:
> On 2024-03-04 20:14:34 +0000, olcott said:
>
>> *People are not free to disagree with stipulative definitions*
>
> People are free to disagree about whether your stipulative definitions
> are useful or sensible.
Yes.

> They are not free to disagree with any
> correct inferences from those definitions
Yes.

> nor to agree with any incorrect inferences.
I don't think that there are any of these.

The huge advantage of the dialogue with ChatGPT is that
its algorithm seemed to be able to spot any and all gaps
in reasoning. This enabled ChatGPT to provide feedback
so that I could make my definitions airtight.

> They may disagree about the relevance of
> any conclusions based on such definitions.

When a subset of the undecidable decision problem/input
pairs is objectively determined to simply be wrong
then this conclusion seems to have no correct rebuttal.

--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Why do H ⟨Ĥ⟩ ⟨Ĥ⟩ and Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ derive different results ?

<us7idm$3rtsq$1@dont-email.me>

https://www.novabbs.com/tech/article-flat.php?id=9072&group=sci.logic#9072

 by: olcott - Tue, 5 Mar 2024 16:48 UTC

On 3/5/2024 3:33 AM, Mikko wrote:
> On 2024-03-04 19:53:05 +0000, olcott said:
>
>> On 3/4/2024 3:22 AM, Mikko wrote:
>>> On 2024-03-03 18:47:29 +0000, olcott said:
>>>
>>>> On 3/3/2024 11:48 AM, Mikko wrote:
>>>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>>>
>>>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>>>
>>>>>>>> Nonetheless actual computers do actually demonstrate
>>>>>>>> actual very deep understanding of these things.
>>>>>>>
>>>>>>> Not very deep, just deeper than you can achieve.
>>>>>>>
>>>>>>
>>>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>>>
>>>>> That does not demonstrate any understanding, even shallow.
>>>>>
>>>>
>>>> The first thing that it does is agree that Hehner's
>>>> "Carol's question" (augmented by Richard's critique)
>>>> is an example of the Liar Paradox.
>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>
>>>> It ends up concluding that myself, professor Hehner and
>>>> professor Stoddart are all correct in that there is
>>>> something wrong with the halting problem.
>>>
>>> None of that demonstrates any understanding.
>>>
>>>> My persistent focus on these ideas gives me an increasingly
>>>> deeper understanding thus my latest position is that the
>>>> halting problem proofs do not actually show that halting
>>>> is not computable.
>>>
>>> Your understanding is still defective and shallow.
>>>
>>
>> If it really was shallow then a gap in my reasoning
>> could be pointed out.
>
> Gaps in your reasoning are pointed out every day.
>

There are no actual gaps in my reasoning. The closest that
anyone has come to showing any actual gaps in my reasoning was merely
their presumption that there are gaps.

Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn // Ĥ applied to ⟨Ĥ⟩ does not halt

It is true that Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ must transition to Ĥ.Hqn to prevent
its own infinite execution. It is true that this makes Ĥ ⟨Ĥ⟩ halt.
This entails that H ⟨Ĥ⟩ ⟨Ĥ⟩ would be correct to transition to H.qy.

No one can point to any gaps in the reasoning. The best
that Richard can do is simply disbelieve that H is
smart enough to do that when both H ⟨Ĥ⟩ ⟨Ĥ⟩ and Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩
apply this same criterion measure:

Both H and Ĥ.H transition to their NO state when a correct and
complete simulation of their input would cause their own infinite
execution and otherwise transition to their YES state.

We can also see that if both H ⟨Ĥ⟩ ⟨Ĥ⟩ and Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ do correctly
apply the above criterion measure then they would have the behavior
that I specified.
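
A minimal sketch of that criterion in C (a toy model, not Linz notation
and not any actual H): the simulated "machine" below has one
instruction pointer and two opcodes, so "a correct and complete
simulation would cause its own infinite execution" can be detected by a
pigeonhole bound on the number of distinct states.

#include <stdio.h>

enum { HALT, JMP0 };  /* JMP0: jump back to instruction 0 */

/* Transitions to YES (1) when the simulated prog halts, and to NO (0)
   when continuing the simulation provably could never end: after
   len + 1 steps without halting, some instruction-pointer value has
   repeated, and the instruction pointer is this machine's whole state. */
int simulating_decider(const int *prog, int len)
{
    int pc = 0;
    for (int steps = 0; steps <= len; steps++) {
        if (prog[pc] == HALT)
            return 1;   /* YES: the simulated input halts on its own */
        pc = 0;         /* execute JMP0 */
    }
    return 0;           /* NO: abort; a complete simulation never ends */
}

int main(void)
{
    int halts[] = { HALT };
    int loops[] = { JMP0 };
    printf("%d %d\n", simulating_decider(halts, 1),
           simulating_decider(loops, 1));   /* prints: 1 0 */
    return 0;
}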

--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Why does H1(D,D) actually get a different result than H(D,D) ???

<us7ite$3rsaf$1@dont-email.me>

https://www.novabbs.com/tech/article-flat.php?id=9073&group=sci.logic#9073

 by: immibis - Tue, 5 Mar 2024 16:57 UTC

On 3/03/24 02:29, olcott wrote:
> On 3/2/2024 6:53 PM, Richard Damon wrote:
>> Note, Computers, as generally viewed, especially for "Computation
>> Theory" have the limitation of being deterministic, which DOES make
>> them less powerful than the human mind, which has free will.
>
> LLMs have contradicted that. They are inherently stochastic.

Incorrect. That they use a random number generator as part of their
algorithm does not make them special. You can use a true random number
generator, which must be counted as an input to the computation, or you
can use a pseudo-random number generator which is deterministic. This is
no different from a computerized game of Blackjack.
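
That point can be shown in a few lines of C (an assumed textbook LCG,
not any actual LLM's sampler): once the seed is declared as an input,
the "random" sampling step is an ordinary deterministic computation,
and the same seed yields the same stream every time.

#include <stdio.h>
#include <stdint.h>

/* Linear congruential generator (Numerical Recipes constants).
   The whole "random" stream is a pure function of the declared seed. */
uint32_t lcg_next(uint32_t *state)
{
    *state = *state * 1664525u + 1013904223u;
    return *state;
}

int main(void)
{
    uint32_t a = 42, b = 42;      /* same declared seed...          */
    for (int i = 0; i < 3; i++)   /* ...identical "random" streams  */
        printf("%u %u\n", lcg_next(&a), lcg_next(&b));
    return 0;
}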

Chat GPT 4.0 affirms that Professors Hehner, Stoddart and I are correct

<us7jma$3s73b$1@dont-email.me>

https://www.novabbs.com/tech/article-flat.php?id=9074&group=sci.logic#9074

 by: olcott - Tue, 5 Mar 2024 17:10 UTC

On 3/5/2024 10:57 AM, immibis wrote:
> On 3/03/24 02:29, olcott wrote:
>> On 3/2/2024 6:53 PM, Richard Damon wrote:
>>> Note, Computers, as generally viewed, especially for "Computation
>>> Theory" have the limitation of being deterministic, which DOES make
>>> them less powerful than the human mind, which has free will.
>>
>> LLMs have contradicted that. They are inherently stochastic.
>
> Incorrect. That they use a random number generator as part of their
> algorithm does not make them special. You can use a true random number
> generator, which must be counted as an input to the computation, or you
> can use a pseudo-random number generator which is deterministic. This is
> no different from a computerized game of Blackjack.

Paradoxical Yes/No Dilemma June 17, 2023
My copyright notice is at the bottom showing that this is my dialogue
https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b

*This was written by ChatGPT summing up its complete agreement*
When an input, such as the halting problem's pathological input D, is
designed to contradict every value that the halting decider H returns,
it creates a self-referential paradox that prevents H from providing a
consistent and correct response. In this context, D can be seen as
posing an incorrect question to H, as its contradictory nature
undermines the possibility of a meaningful and accurate answer.

The most important aspect of this is that this conclusion is
correctly semantically entailed by the definitions of these terms
in the prior dialogue. I had to progressively refine these
definitions over this 34-page dialogue.

[1] E. C. R. Hehner. Objective and Subjective Specifications.
WST Workshop on Termination, Oxford, 2018 July 18.
https://www.cs.toronto.edu/~hehner/OSS.pdf

[2] Bill Stoddart. The Halting Paradox. 20 December 2017.
arXiv:1906.05340 [cs.LO]
https://arxiv.org/abs/1906.05340

[3] E. C. R. Hehner. Problems with the Halting Problem. COMPUTING2011
Symposium on 75 years of Turing Machine and Lambda-Calculus, Karlsruhe,
Germany, invited, 2011 October 20-21; Advances in Computer Science and
Engineering v.10 n.1 p.31-60, 2013.
https://www.cs.toronto.edu/~hehner/PHP.pdf

--
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Limits of computations != actual limits of computers [ Church Turing ]

<us7kb9$3s73b$2@dont-email.me>


https://www.novabbs.com/tech/article-flat.php?id=9075&group=sci.logic#9075

Newsgroups: comp.theory,sci.logic
 by: olcott - Tue, 5 Mar 2024 17:21 UTC

On 3/5/2024 5:33 AM, Richard Damon wrote:
> On 3/4/24 10:55 PM, olcott wrote:
>> On 3/4/2024 9:02 PM, Richard Damon wrote:
>>> On 3/4/24 9:28 PM, olcott wrote:
>>>> On 3/4/2024 7:31 PM, Richard Damon wrote:
>>>>> On 3/4/24 2:31 PM, olcott wrote:
>>>>>> On 3/4/2024 6:40 AM, Richard Damon wrote:
>>>>>>> On 3/3/24 11:58 PM, olcott wrote:
>>>>>>>> On 3/3/2024 10:25 PM, Richard Damon wrote:
>>>>>>>>> On 3/3/24 9:39 PM, olcott wrote:
>>>>>>>>>> On 3/3/2024 2:40 PM, Richard Damon wrote:
>>>>>>>>>>> On 3/3/24 11:19 AM, olcott wrote:
>>>>>>>>>>>> On 3/3/2024 6:15 AM, Richard Damon wrote:
>>>>>>>>>>>>> On 3/2/24 9:09 PM, olcott wrote:
>>>>>>>>>>>>>> On 3/2/2024 7:46 PM, Richard Damon wrote:
>>>>>>>>>>>>>>> On 3/2/24 8:38 PM, olcott wrote:
>>>>>>>>>>>>>>>> On 3/2/2024 7:30 PM, Richard Damon wrote:
>>>>>>>>>>>>>>>>> On 3/2/24 8:08 PM, olcott wrote:
>>>>>>>>>>>>>>>>>> On 3/2/2024 6:47 PM, Richard Damon wrote:
>>>>>>>>>>>>>>>>>>> On 3/2/24 6:01 PM, olcott wrote:
>>>>>>>>>>>>>>>>>>>> On 3/2/2024 4:54 PM, Richard Damon wrote:
>>>>>>>>>>>>>>>>>>>>> On 3/2/24 5:28 PM, olcott wrote:
>>>>>>>>>>>>>>>>>>>>>> The reason that people assume that H1(D,D) must get
>>>>>>>>>>>>>>>>>>>>>> the same result as H(D,D) is that they make sure
>>>>>>>>>>>>>>>>>>>>>> to ignore the reason why they get a different result.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Namely that you are lying that H and H1 are
>>>>>>>>>>>>>>>>>>>>> actually the same computation.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> It turns out that the only reason that H1(D,D)
>>>>>>>>>>>>>>>>>>>>>> derives a
>>>>>>>>>>>>>>>>>>>>>> different result than H(D,D) is that H is at a
>>>>>>>>>>>>>>>>>>>>>> different
>>>>>>>>>>>>>>>>>>>>>> physical machine address than H1.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Which means that H and H1 are not computations, and
>>>>>>>>>>>>>>>>>>>>> you have been just an ignorant pathological liar
>>>>>>>>>>>>>>>>>>>>> all this time.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> D calls 00001522 // machine address of H
>>>>>>>>>>>>>>>>>>>>>> thus D does not call 00001422 // machine address of H1
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Since this <is> the empirically determined deciding
>>>>>>>>>>>>>>>>>>>>>> factor I don't think that it can be correctly
>>>>>>>>>>>>>>>>>>>>>> ignored.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Right, but since the algorithm for H/H1 uses the
>>>>>>>>>>>>>>>>>>>>> address of the decider, which isn't defined as an
>>>>>>>>>>>>>>>>>>>>> "input" to it, we see that you have been lying that
>>>>>>>>>>>>>>>>>>>>> this code is a computation. Likely because you have
>>>>>>>>>>>>>>>>>>>>> made yourself ignorant of what a computation
>>>>>>>>>>>>>>>>>>>>> actually is,
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Thus you have made yourself into an Ignorant
>>>>>>>>>>>>>>>>>>>>> Pathological Lying Idiot.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Any physically implemented Turing machine must exist
>>>>>>>>>>>>>>>>>>>>>> at some physical memory location.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Nope.
>>>>>>>>>>>>>>>>>>>> Any *physically implemented Turing machine*
>>>>>>>>>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>>>>>>>>>> must exist at some physical memory location.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Try and explain the details of how it can be otherwise.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Any description of a Turing Machine (or a
>>>>>>>>>>>>>>>>>>> Computation) that needs to reference attributes of
>>>>>>>>>>>>>>>>>>> Modern Electronic Computers is just WRONG, as they
>>>>>>>>>>>>>>>>>>> predate the development of such a thing.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Virtual Machines that are exactly Turing Machines
>>>>>>>>>>>>>>>>>> except for unlimited memory can and do exist.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> They necessarily must be implemented in physical memory
>>>>>>>>>>>>>>>>>> and cannot possibly be implemented any other way.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> So?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Doesn't let a "Computation" change its answer based on
>>>>>>>>>>>>>>>>> its memory address.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩
>>>>>>>>>>>>>>>> halts
>>>>>>>>>>>>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩
>>>>>>>>>>>>>>>> does not halt
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> We ourselves can see that ⟨Ĥ⟩ ⟨Ĥ⟩ correctly simulated by
>>>>>>>>>>>>>>>> Ĥ.H cannot possibly terminate unless Ĥ.H aborts its
>>>>>>>>>>>>>>>> simulation.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> We ourselves can also see that when Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩
>>>>>>>>>>>>>>>> does abort its simulation then Ĥ will halt.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> That no computer will ever achieve this degree of
>>>>>>>>>>>>>>>> understanding is directly contradicted by this:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> ChatGPT 4.0 dialogue.
>>>>>>>>>>>>>>>> https://www.liarparadox.org/ChatGPT_HP.pdf
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> No COMPUTATION can solve it, because it has been proved
>>>>>>>>>>>>>>> impossible.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> None-the-less actual computers do actually demonstrate
>>>>>>>>>>>>>> actual very deep understanding of these things.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Do computers actually UNDERSTAND?
>>>>>>>>>>>>
>>>>>>>>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>>>>>>>> Demonstrates the functional equivalent of deep understanding.
>>>>>>>>>>>> The first thing that it does is categorize Carol's question
>>>>>>>>>>>> as equivalent to the Liar Paradox.
>>>>>>>>>>>
>>>>>>>>>>> Nope, doesn't show what you claim, just that it has been
>>>>>>>>>>> taught by "rote memorization" that the answer to a question
>>>>>>>>>>> put the way you did is the answer it gave.
>>>>>>>>>>>
>>>>>>>>>>> You are just showing that YOU don't understand what the word
>>>>>>>>>>> UNDERSTAND actually means.
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> This proves that the artifice of the human notion of
>>>>>>>>>>>>>> computation is more limiting than actual real computers.
>>>>>>>>>>>>>
>>>>>>>>>>>>> In other words, you reject the use of definitions to define
>>>>>>>>>>>>> words.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I guess to you, nothing means what others have said it means,
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> I have found that it is the case that some definitions of
>>>>>>>>>>>> technical terms sometimes box people into misconceptions
>>>>>>>>>>>> such that alternative views are inexpressible within the
>>>>>>>>>>>> technical language.
>>>>>>>>>>>> https://en.wikipedia.org/wiki/Linguistic_relativity
>>>>>>>>>>>
>>>>>>>>>>> In other words, you are admitting that when you claim to be
>>>>>>>>>>> working in a technical field and using the words as that
>>>>>>>>>>> field means, you are just being an out and out LIAR.
>>>>>>>>>>
>>>>>>>>>> Not at all. When working with any technical definition I never
>>>>>>>>>> simply assume that it is coherent. I always assume that it is
>>>>>>>>>> possibly incoherent until proven otherwise.
>>>>>>>>>
>>>>>>>>> In other words, you ADMIT that you ignore technical definitions
>>>>>>>>> and thus your comments about working in the field are just an
>>>>>>>>> ignorant pathological lie.
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> If there are physically existing machines that can answer
>>>>>>>>>> questions that are not Turing computable only because these
>>>>>>>>>> machines can access their own machine address then these
>>>>>>>>>> machines would be strictly more powerful than Turing
>>>>>>>>>> Machines on these questions.
>>>>>>>>>
>>>>>>>>> Nope.
>>>>>>>>
>>>>>>>>
>>>>>>>> If machine M can solve problems that machine N
>>>>>>>> cannot solve then for these problems M is more
>>>>>>>> powerful than N.
>>>>>>>
>>>>>>> But your H1 doesn't actually SOLVE the problem, as it fails on
>>>>>>> the input (H1^) (H1^)
>>>>>>>
>>>>>>
>>>>>> I am not even talking about that.
>>>>>> In this new thread I am only talking about the generic case of:
>>>>>> *Actual limits of computations != actual limits of computers*
>>>>>> *with unlimited memory*
>>>>>>
>>>>>>> Note, I realise I misspoke a bit. Any "Non-computation"
>>>>>>> sub-program can be turned into a Computation, just by being
>>>>>>> honest and declaring as inputs the "Hidden Data" that it is using
>>>>>>>
>>>>>>>>
>>>>>>>>>
>>>>>>>>> But you just admitted you are too ignorant of the actual
>>>>>>>>> meaning to make a reasoned statement and too dishonest to
>>>>>>>>> concede that, even after admitting it,
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> If computability only means can't be done in a certain
>>>>>>>>>> artificially limited way and not any actual limit on what
>>>>>>>>>> computers can actually do then computability would seem
>>>>>>>>>> to be nonsense.
>>>>>>>>>>
>>>>>>>>
>>>>>>>> Try and explain how this would not be nonsense.
>>>>>>>
>>>>>>> First, it ISN'T "Artificial", it is a natural outcome of the
>>>>>>> sorts of problems we actually want to solve.
>>>>>>>
>>>>>>
>>>>>> If there is a physical machine that can solve problems that a Turing
>>>>>> machine cannot solve then we are only pretending that the limits of
>>>>>> computation are the limits of computers.
>>>>>
>>>>> But there isn't.
>>>>>
>>>>> At least not problems that can be phrased as a computation
>>>>
>>>> If (hypothetically) there are physical computers that can
>>>> solve decision problems that Turing machines cannot solve
>>>> then the notion of computability is not any actual real
>>>> limit; it is merely a fake limit.
>>>
>>> Except that it has been shown that there isn't such a thing, so your
>>> hypothetical is just a trip into fantasy land.
>>>
>>
>> This <is> such a virtual machine that knows its own
>> machine address.
>>
>> u32 H(ptr P, ptr I)
>> {
>>    u32 Address_of_H = (u32)H;
>>
>> Either Turing machines can accomplish the equivalent
>> of this or they cannot.
>>
>> If they cannot and this prevents Turing machines
>> from knowing that they have been called in recursive
>> simulation and that prevents them from solving even
>> a single instance of machine / input then that makes
>> Turing machines less powerful on this machine/input pair.
>
> In other words, you are admitting you don't understand what a
> computation is or a mathematical function.


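For concreteness, a self-contained C sketch that completes the quoted
u32 H(ptr P, ptr I) fragment (hypothetical code, not the thread's
actual halt decider): a function can read its own address, but that
address then acts as a hidden input that appears nowhere among its
declared parameters.

#include <stdio.h>
#include <stdint.h>

typedef uint32_t u32;
typedef void *ptr;

/* Sketch only: converting function pointers to integers is
   implementation-defined in C. */
u32 H(ptr P, ptr I)
{
    u32 Address_of_H = (u32)(uintptr_t)(ptr)H;

    /* Any branch taken on Address_of_H makes the result depend on
       where the code was loaded, not only on P and I. */
    return (Address_of_H == (u32)(uintptr_t)P) ? 0 : 1;
}

int main(void)
{
    /* Same declared inputs in form, different results, because the
       hidden input (H's own address) differs from P in one case. */
    printf("%u\n", H((ptr)H, NULL));     /* 0: P is H's own address */
    printf("%u\n", H((ptr)main, NULL));  /* 1: P is another address */
    return 0;
}
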
Re: Limits of computations != actual limits of computers [ Church Turing ]

<us7n92$3suf6$1@dont-email.me>


https://www.novabbs.com/tech/article-flat.php?id=9076&group=sci.logic#9076

Newsgroups: comp.theory,sci.logic
 by: olcott - Tue, 5 Mar 2024 18:11 UTC

On 3/5/2024 5:33 AM, Richard Damon wrote:
> On 3/4/24 10:55 PM, olcott wrote:
>> On 3/4/2024 9:02 PM, Richard Damon wrote:
>>> On 3/4/24 9:28 PM, olcott wrote:
>>>> On 3/4/2024 7:31 PM, Richard Damon wrote:
>>>>> On 3/4/24 2:31 PM, olcott wrote:
>>>>>> On 3/4/2024 6:40 AM, Richard Damon wrote:
>>>>>>> On 3/3/24 11:58 PM, olcott wrote:
>>>>>>>> On 3/3/2024 10:25 PM, Richard Damon wrote:
>>>>>>>>> On 3/3/24 9:39 PM, olcott wrote:
>>>>>>>>>> On 3/3/2024 2:40 PM, Richard Damon wrote:
>>>>>>>>>>> On 3/3/24 11:19 AM, olcott wrote:
>>>>>>>>>>>> On 3/3/2024 6:15 AM, Richard Damon wrote:
>>>>>>>>>>>>> On 3/2/24 9:09 PM, olcott wrote:
>>>>>>>>>>>>>> On 3/2/2024 7:46 PM, Richard Damon wrote:
>>>>>>>>>>>>>>> On 3/2/24 8:38 PM, olcott wrote:
>>>>>>>>>>>>>>>> On 3/2/2024 7:30 PM, Richard Damon wrote:
>>>>>>>>>>>>>>>>> On 3/2/24 8:08 PM, olcott wrote:
>>>>>>>>>>>>>>>>>> On 3/2/2024 6:47 PM, Richard Damon wrote:
>>>>>>>>>>>>>>>>>>> On 3/2/24 6:01 PM, olcott wrote:
>>>>>>>>>>>>>>>>>>>> On 3/2/2024 4:54 PM, Richard Damon wrote:
>>>>>>>>>>>>>>>>>>>>> On 3/2/24 5:28 PM, olcott wrote:
>>>>>>>>>>>>>>>>>>>>>> The reason that people assume that H1(D,D) must get
>>>>>>>>>>>>>>>>>>>>>> the same result as H(D,D) is that they make sure
>>>>>>>>>>>>>>>>>>>>>> to ignore the reason why they get a different result.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Namely that you are lying that H and H1 are
>>>>>>>>>>>>>>>>>>>>> actually the same computation.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> It turns out that the only reason that H1(D,D)
>>>>>>>>>>>>>>>>>>>>>> derives a
>>>>>>>>>>>>>>>>>>>>>> different result than H(D,D) is that H is at a
>>>>>>>>>>>>>>>>>>>>>> different
>>>>>>>>>>>>>>>>>>>>>> physical machine address than H1.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Which means that H and H1 are not computations, and
>>>>>>>>>>>>>>>>>>>>> you have been just an ignorant pathological liar
>>>>>>>>>>>>>>>>>>>>> all this time.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> D calls 00001522 // machine address of H
>>>>>>>>>>>>>>>>>>>>>> thus D does not call 00001422 // machine address of H1
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Since this <is> the empirically determined deciding
>>>>>>>>>>>>>>>>>>>>>> factor I don't think that it can be correctly
>>>>>>>>>>>>>>>>>>>>>> ignored.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Right, but since the algorithm for H/H1 uses the
>>>>>>>>>>>>>>>>>>>>> address of the decider, which isn't defined as an
>>>>>>>>>>>>>>>>>>>>> "input" to it, we see that you have been lying that
>>>>>>>>>>>>>>>>>>>>> this code is a computation. Likely because you have
>>>>>>>>>>>>>>>>>>>>> made yourself ignorant of what a computation
>>>>>>>>>>>>>>>>>>>>> actually is,
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Thus you have made yourself into an Ignorant
>>>>>>>>>>>>>>>>>>>>> Pathological Lying Idiot.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Any physically implemented Turing machine must exist
>>>>>>>>>>>>>>>>>>>>>> at some physical memory location.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Nope.
>>>>>>>>>>>>>>>>>>>> Any *physically implemented Turing machine*
>>>>>>>>>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>>>>>>>>>> *physically implemented Turing machine*
>>>>>>>>>>>>>>>>>>>> must exist at some physical memory location.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Try and explain the details of how it can be otherwise.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Any description of a Turing Machine (or a
>>>>>>>>>>>>>>>>>>> Computation) that needs to reference attributes of
>>>>>>>>>>>>>>>>>>> Modern Electronic Computers is just WRONG, as they
>>>>>>>>>>>>>>>>>>> predate the development of such a thing.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Virtual Machines that are exactly Turing Machines
>>>>>>>>>>>>>>>>>> except for unlimited memory can and do exist.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> They necessarily must be implemented in physical memory
>>>>>>>>>>>>>>>>>> and cannot possibly be implemented any other way.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> So?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Doesn't let a "Computation" change its answer based on
>>>>>>>>>>>>>>>>> its memory address.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩
>>>>>>>>>>>>>>>> halts
>>>>>>>>>>>>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩
>>>>>>>>>>>>>>>> does not halt
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> We ourselves can see that ⟨Ĥ⟩ ⟨Ĥ⟩ correctly simulated by
>>>>>>>>>>>>>>>> Ĥ.H cannot possibly terminate unless Ĥ.H aborts its
>>>>>>>>>>>>>>>> simulation.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> We ourselves can also see that when Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩
>>>>>>>>>>>>>>>> does abort its simulation then Ĥ will halt.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> That no computer will ever achieve this degree of
>>>>>>>>>>>>>>>> understanding is directly contradicted by this:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> ChatGPT 4.0 dialogue.
>>>>>>>>>>>>>>>> https://www.liarparadox.org/ChatGPT_HP.pdf
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> No COMPUTATION can solve it, because it has been proved
>>>>>>>>>>>>>>> impossible.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> None-the-less actual computers do actually demonstrate
>>>>>>>>>>>>>> actual very deep understanding of these things.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Do computers actually UNDERSTAND?
>>>>>>>>>>>>
>>>>>>>>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>>>>>>>> Demonstrates the functional equivalent of deep understanding.
>>>>>>>>>>>> The first thing that it does is categorize Carol's question
>>>>>>>>>>>> as equivalent to the Liar Paradox.
>>>>>>>>>>>
>>>>>>>>>>> Nope, doesn't show what you claim, just that it has been
>>>>>>>>>>> taught by "rote memorization" that the answer to a question
>>>>>>>>>>> put the way you did is the answer it gave.
>>>>>>>>>>>
>>>>>>>>>>> You are just showing that YOU don't understand what the word
>>>>>>>>>>> UNDERSTAND actually means.
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> This proves that the artifice of the human notion of
>>>>>>>>>>>>>> computation is more limiting than actual real computers.
>>>>>>>>>>>>>
>>>>>>>>>>>>> In other words, you reject the use of definitions to define
>>>>>>>>>>>>> words.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I guess to you, nothing means what others have said it means,
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> I have found that it is the case that some definitions of
>>>>>>>>>>>> technical terms sometimes box people into misconceptions
>>>>>>>>>>>> such that alternative views are inexpressible within the
>>>>>>>>>>>> technical language.
>>>>>>>>>>>> https://en.wikipedia.org/wiki/Linguistic_relativity
>>>>>>>>>>>
>>>>>>>>>>> In other words, you are admitting that when you claim to be
>>>>>>>>>>> working in a technical field and using the words as that
>>>>>>>>>>> field means, you are just being an out and out LIAR.
>>>>>>>>>>
>>>>>>>>>> Not at all. When working with any technical definition I never
>>>>>>>>>> simply assume that it is coherent. I always assume that it is
>>>>>>>>>> possibly incoherent until proven otherwise.
>>>>>>>>>
>>>>>>>>> In other words, you ADMIT that you ignore technical definitions
>>>>>>>>> and thus your comments about working in the field are just an
>>>>>>>>> ignorant pathological lie.
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> If there are physically existing machines that can answer
>>>>>>>>>> questions that are not Turing computable only because these
>>>>>>>>>> machines can access their own machine address then these
>>>>>>>>>> machines would be strictly more powerful than Turing
>>>>>>>>>> Machines on these questions.
>>>>>>>>>
>>>>>>>>> Nope.
>>>>>>>>
>>>>>>>>
>>>>>>>> If machine M can solve problems that machine N
>>>>>>>> cannot solve then for these problems M is more
>>>>>>>> powerful than N.
>>>>>>>
>>>>>>> But your H1 doesn't actually SOLVE the problem, as it fails on
>>>>>>> the input (H1^) (H1^)
>>>>>>>
>>>>>>
>>>>>> I am not even talking about that.
>>>>>> In this new thread I am only talking about the generic case of:
>>>>>> *Actual limits of computations != actual limits of computers*
>>>>>> *with unlimited memory*
>>>>>>
>>>>>>> Note, I realise I misspoke a bit. Any "Non-computation"
>>>>>>> sub-program can be turned into a Computation, just by being
>>>>>>> honest and declaring as inputs the "Hidden Data" that it is using
>>>>>>>
>>>>>>>>
>>>>>>>>>
>>>>>>>>> But you just admitted you are too ignorant of the actual
>>>>>>>>> meaning to make a reasoned statement and too dishonest to
>>>>>>>>> concede that, even after admitting it,
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> If computability only means can't be done in a certain
>>>>>>>>>> artificially limited way and not any actual limit on what
>>>>>>>>>> computers can actually do then computability would seem
>>>>>>>>>> to be nonsense.
>>>>>>>>>>
>>>>>>>>
>>>>>>>> Try and explain how this would not be nonsense.
>>>>>>>
>>>>>>> First, it ISN'T "Artificial", it is a natural outcome of the
>>>>>>> sorts of problems we actually want to solve.
>>>>>>>
>>>>>>
>>>>>> If there is a physical machine that can solve problems that a Turing
>>>>>> machine cannot solve then we are only pretending that the limits of
>>>>>> computation are the limits of computers.
>>>>>
>>>>> But there isn't.
>>>>>
>>>>> At least not problems that can be phrased as a computation
>>>>
>>>> If (hypothetically) there are physical computers that can
>>>> solve decision problems that Turing machines cannot solve
>>>> then the notion of computability is not any actual real
>>>> limit; it is merely a fake limit.
>>>
>>> Except that it has been shown that there isn't such a thing, so your
>>> hypothetical is just a trip into fantasy land.
>>>
>>
>> This <is> such a virtual machine that knows its own
>> machine address.
>>
>> u32 H(ptr P, ptr I)
>> {
>>    u32 Address_of_H = (u32)H;
>>
>> Either Turing machines can accomplish the equivalent
>> of this or they cannot.
>>
>> If they cannot and this prevents Turing machines
>> from knowing that they have been called in recursive
>> simulation and that prevents them from solving even
>> a single instance of machine / input then that makes
>> Turing machines less powerful on this machine/input pair.
>
> In other words, you are admitting you don't understand what a
> computation is or a mathematical function.
>
> And, you are just a stupid ignorant pathological lying idiot.
>
>>
>>> So, maybe in your mythological worlds with Computers that exceed the
>>> computability of current machines (something like being able to make
>>> an infinite number of decisions in a finite period of time) could do
>>> that, but since they don't exist, THAT is a "Fake Ability".
>>>
>>>>
>>>>>>
>>>>>>> Second, as I just mentioned, you can turn a "non-computation"
>>>>>>> into a computation by just being honest and declaring the
>>>>>>> "hidden" input as an actual input.
>>>>>>
>>>>>> The specific case is machines that can correctly determine their
>>>>>> own machine address relative to machines that cannot do this.
>>>>>> An x86 based virtual machine can determine its own machine address.
>>>>>>
>>>>>> u32 H(ptr P, ptr I)
>>>>>> {
>>>>>>    u32 Address_of_H = (u32)H;
>>>>>
>>>>> But what COMPUTATION are you trying to do.
>>>>
>>>> When a halt decider can easily tell that it is calling
>>>> itself with its same inputs then it has a very simple
>>>> "abort simulation" criterion.
>>>
>>> Except that the thing being decided on is supposed to be a separate
>>> program running in its own memory space, so that "address match"
>>> trick doesn't work.
>>>
>>
>> Sure it does.
>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn // Ĥ applied to ⟨Ĥ⟩ does not halt
>>
>> When Ĥ.H knows the machine address of Ĥ then it can see
>> that it is simulating an identical copy of its own machine
>> with an identical copy of its own input. It can determine
>> this by ordinary string comparison.
>
> To What?
>
> Turing machines don't have a unique description, so you don't know what
> to compare to.
>


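A small C sketch of Richard Damon's closing point (illustrative
encodings, not from the thread): two descriptions of the same machine
can differ only in arbitrary state names, so a byte-for-byte string
comparison is not a reliable test for "an identical copy of its own
machine".

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Two encodings of the same one-state machine that writes 1 and
       halts; only the arbitrary state name differs. */
    const char *desc_a = "q0 0 -> write 1, halt";
    const char *desc_b = "s0 0 -> write 1, halt";

    /* Behaviorally identical machines, yet strcmp reports them as
       different strings; prints "no". */
    printf("equal as strings? %s\n",
           strcmp(desc_a, desc_b) == 0 ? "yes" : "no");
    return 0;
}
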
Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?

<us867v$5gh$1@dont-email.me>


https://www.novabbs.com/tech/article-flat.php?id=9081&group=sci.logic#9081

Newsgroups: comp.theory,sci.logic
 by: olcott - Tue, 5 Mar 2024 22:27 UTC

On 3/5/2024 5:33 AM, Richard Damon wrote:
> On 3/4/24 11:52 PM, olcott wrote:
>> On 3/4/2024 8:17 PM, Richard Damon wrote:
>>> On 3/4/24 7:48 PM, olcott wrote:
>>>> On 3/4/2024 6:21 PM, Richard Damon wrote:
>>>>> On 3/4/24 3:14 PM, olcott wrote:
>>>>>> On 3/4/2024 6:31 AM, Richard Damon wrote:
>>>>>>> On 3/3/24 11:37 PM, olcott wrote:
>>>>>>>> On 3/3/2024 10:25 PM, Richard Damon wrote:
>>>>>>>>> On 3/3/24 10:32 PM, olcott wrote:
>>>>>>>>>> On 3/3/2024 8:24 PM, Richard Damon wrote:
>>>>>>>>>>> On 3/3/24 9:13 PM, olcott wrote:
>>>>>>>>>>>> On 3/3/2024 2:40 PM, Richard Damon wrote:
>>>>>>>>>>>>> On 3/3/24 1:47 PM, olcott wrote:
>>>>>>>>>>>>>> On 3/3/2024 11:48 AM, Mikko wrote:
>>>>>>>>>>>>>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>>>>>>>>>>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> None-the-less actual computers do actually demonstrate
>>>>>>>>>>>>>>>>>> actual very deep understanding of these things.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Not very deep, just deeper than you can achieve.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> That does not demonstrate any understanding, even shallow.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The first thing that it does is agree that Hehner's
>>>>>>>>>>>>>> "Carol's question" (augmented by Richard's critique)
>>>>>>>>>>>>>> is an example of the Liar Paradox.
>>>>>>>>>>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> It ends up concluding that myself, professor Hehner and
>>>>>>>>>>>>>> professor Stoddart are all correct in that there is
>>>>>>>>>>>>>> something wrong with the halting problem.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Which since it is proven that Chat GPT doesn't actually
>>>>>>>>>>>>> know what is a fact, and has been proven to lie,
>>>>>>>>>>>>
>>>>>>>>>>>> The first thing that it figured out on its own is that
>>>>>>>>>>>> Carol's question is isomorphic to the Liar Paradox.
>>>>>>>>>>>>
>>>>>>>>>>>> It eventually agreed with the same conclusion that
>>>>>>>>>>>> myself and professors Hehner and Stoddart agreed to.
>>>>>>>>>>>> It took 34 pages of dialog to understand this. I
>>>>>>>>>>>> finally have a good PDF of this.
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> It didn't "Figure it out". It pattern matched it to previous
>>>>>>>>>>> input it has been given.
>>>>>>>>>>>
>>>>>>>>>>> If it took 34 pages to agree with your conclusion, then it
>>>>>>>>>>> really didn't agree with you initially, but you finally
>>>>>>>>>>> trained it to your version of reality.
>>>>>>>>>>
>>>>>>>>>> *HERE IS ITS AGREEMENT*
>>>>>>>>>> When an input, such as the halting problem's pathological
>>>>>>>>>> input D, is
>>>>>>>>>> designed to contradict every value that the halting decider H
>>>>>>>>>> returns,
>>>>>>>>>> it creates a self-referential paradox that prevents H from
>>>>>>>>>> providing a
>>>>>>>>>> consistent and correct response. In this context, D can be
>>>>>>>>>> seen as
>>>>>>>>>> posing an incorrect question to H, as its contradictory nature
>>>>>>>>>> undermines the possibility of a meaningful and accurate answer.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Which means NOTHING, as an LLM will tell non-truths if fed
>>>>>>>>> misleading information.
>>>>>>>>
>>>>>>>> The above paragraph is proven to be completely true entirely
>>>>>>>> on the basis of the meaning of its words as these words were
>>>>>>>> defined in the dialogue that precedes them.
>>>>>>>>
>>>>>>>
>>>>>>> Nope, the problem is you gave it incorrect implications on the
>>>>>>> meaning of the words.
>>>>>>
>>>>>> *ChatGPT 4.0 final analysis of everything that preceded it*
>>>>>> When an input, such as the halting problem's pathological input D, is
>>>>>> designed to contradict every value that the halting decider H
>>>>>> returns,
>>>>>> it creates a self-referential paradox that prevents H from
>>>>>> providing a
>>>>>> consistent and correct response. In this context, D can be seen as
>>>>>> posing an incorrect question to H, as its contradictory nature
>>>>>> undermines the possibility of a meaningful and accurate answer.
>>>>>>
>>>>>> Within my definitions of my terms the above paragraph written by
>>>>>> Chat GPT 4.0 necessarily follows. This proves that the Chat GPT
>>>>>> 4.0 analysis is sound.
>>>>>>
>>>>>> *People are not free to disagree with stipulative definitions*
>>>>>>
>>>>>> A stipulative definition is a type of definition in which a new or
>>>>>> currently existing term is given a new specific meaning for the
>>>>>> purposes
>>>>>> of argument or discussion in a given context. When the term already
>>>>>> exists, this definition may, but does not necessarily, contradict the
>>>>>> dictionary (lexical) definition of the term.
>>>>>> https://en.wikipedia.org/wiki/Stipulative_definition
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> Right, and by that EXACT SAME RULE, when you "stipulate" a
>>>>> definition different from that stipulated by a field, you place
>>>>> yourself outside that field, and if you still claim to be working
>>>>> in it, you are just admitting to being a bald-face LIAR.
>>>>>
>>>>
>>>> Not exactly. When I stipulate a definition that shows the
>>>> incoherence of the conventional definitions then I am working
>>>> at the foundational level above this field.
>>>
>>> Nope, if you change the definition of the field, you are in a new field.
>>>
>>> Just like ZFC Set Theory isn't Naive Set Theory, if you change the
>>> basis, you are in a new field.
>>
>> Yes that is exactly what I am doing, good call !
>>
>> ZFC corrected the error of Naive Set Theory. I am correcting
>> the error of the conventional foundation of computability.
>
> So, just admit that you aren't doing "Computation Theory", and thus
> can't say you refute Linz or anyone else that WAS doing Computation
> Theory.
>
> Just admit you are doing POOP theory.


Re: Why does H1(D,D) actually get a different result than H(D,D) ???

<us8i2e$24go$1@dont-email.me>


https://www.novabbs.com/tech/article-flat.php?id=9082&group=sci.logic#9082

Newsgroups: comp.theory,sci.logic
 by: immibis - Wed, 6 Mar 2024 01:49 UTC

On 3/03/24 02:08, olcott wrote:
> Virtual Machines that are exactly Turing Machines
> except for unlimited memory can and do exist.
>
> They necessarily must be implemented in physical memory
> and cannot possibly be implemented any other way.
>
> TM, The Turing Machine Interpreter
> David S. Woodruff
> http://www2.lns.mit.edu/~dsw/turing/turing.html
>
> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does not halt
>
> The states of a Turing machine <are> essentially
> memory locations.
>
> They have a perfect analogue in finite state machines

Good! Finite state machines cannot give different results just because
you store them in different places, either.
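
A minimal C sketch of this point (hypothetical two-state machine): a
finite state machine's next state is a function of the transition
table's values, the current state, and the input symbol; the address
at which the table happens to be stored never enters the computation.

#include <stdio.h>

/* next_state[state][input] for a machine that toggles on input 1. */
static const int next_state[2][2] = { {0, 1}, {1, 0} };

static int run(const int (*table)[2], const int *inputs, int n)
{
    int state = 0;
    for (int i = 0; i < n; i++)
        state = table[state][inputs[i]];  /* only table VALUES are used */
    return state;
}

int main(void)
{
    int inputs[] = {1, 0, 1, 1};
    /* A second copy of the same table at a different address. */
    int copy[2][2] = { {0, 1}, {1, 0} };
    printf("%d %d\n",                     /* prints "1 1": same result */
           run(next_state, inputs, 4),
           run((const int (*)[2])copy, inputs, 4));
    return 0;
}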

Re: Why does H1(D,D) actually get a different result than H(D,D) ???

<us8i41$24go$2@dont-email.me>


https://www.novabbs.com/tech/article-flat.php?id=9083&group=sci.logic#9083

Newsgroups: comp.theory,sci.logic
 by: immibis - Wed, 6 Mar 2024 01:49 UTC

On 3/03/24 02:38, olcott wrote:
> ChatGPT 4.0 dialogue.
> https://www.liarparadox.org/ChatGPT_HP.pdf

You must understand that ChatGPT is a professional liar.

Re: Why does H1(D,D) actually get a different result than H(D,D) ???

<us8i6h$24go$3@dont-email.me>


https://www.novabbs.com/tech/article-flat.php?id=9084&group=sci.logic#9084

Newsgroups: comp.theory,sci.logic
 by: immibis - Wed, 6 Mar 2024 01:51 UTC

On 3/03/24 00:25, olcott wrote:
> On 3/2/2024 5:15 PM, immibis wrote:
>> On 2/03/24 23:28, olcott wrote:
>>> The reason that people assume that H1(D,D) must get
>>> the same result as H(D,D) is that they make sure
>>> to ignore the reason why they get a different result.
>>>
>>> It turns out that the only reason that H1(D,D) derives a
>>> different result than H(D,D) is that H is at a different
>>> physical machine address than H1.
>>
>> Incorrect - the reason that H1(D,D) derives a different result is that
>> H *looks for* a different physical machine address than H1.
>
> _D()
> [00001cf2] 55         push ebp
> [00001cf3] 8bec       mov ebp,esp
> [00001cf5] 51         push ecx
> [00001cf6] 8b4508     mov eax,[ebp+08]
> [00001cf9] 50         push eax
> [00001cfa] 8b4d08     mov ecx,[ebp+08]
> [00001cfd] 51         push ecx
> [00001cfe] e81ff8ffff call 00001522 ; call to H
> ...
>
> *That is factually incorrect*
> H and H1 simulate the actual code of D which actually
> calls H at machine address 00001522 and does not call
> H1 at machine address 00001422.

H encodes the rule that if the simulated program calls machine address
00001522 with the same parameters, it should abort the simulation.

H1 encodes the rule that if the simulated program calls machine address
00001422 with the same parameters, it should abort the simulation.
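
A hedged C sketch of that criterion (hypothetical names and
structures, not the thread's actual code): each decider watches the
simulated instruction stream for a call to its own address with its
own arguments. Because H and H1 compare against different addresses,
the same input D can be judged differently by each.

#include <stdint.h>
#include <stdbool.h>

typedef uint32_t u32;

/* Hypothetical record of one simulated x86 call instruction. */
struct sim_call {
    u32 target;       /* callee address decoded from the call */
    u32 arg1, arg2;   /* the two pushed arguments */
};

/* The abort rule as described: stop simulating when the simulated
   program calls THIS decider's own address with the same arguments
   the decider itself was invoked with. my_address would be 00001522
   inside H but 00001422 inside H1, which is why the two deciders
   can judge the same D differently. */
static bool should_abort(const struct sim_call *c,
                         u32 my_address, u32 p, u32 i)
{
    return c->target == my_address && c->arg1 == p && c->arg2 == i;
}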

Re: How do we know that ChatGPT 4.0 correctly evaluated my ideas?

<us8ia2$24go$5@dont-email.me>


https://www.novabbs.com/tech/article-flat.php?id=9086&group=sci.logic#9086

Newsgroups: comp.theory,sci.logic
 by: immibis - Wed, 6 Mar 2024 01:53 UTC

On 4/03/24 05:37, olcott wrote:
> On 3/3/2024 10:25 PM, Richard Damon wrote:
>> On 3/3/24 10:32 PM, olcott wrote:
>>> On 3/3/2024 8:24 PM, Richard Damon wrote:
>>>> On 3/3/24 9:13 PM, olcott wrote:
>>>>> On 3/3/2024 2:40 PM, Richard Damon wrote:
>>>>>> On 3/3/24 1:47 PM, olcott wrote:
>>>>>>> On 3/3/2024 11:48 AM, Mikko wrote:
>>>>>>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>>>>>>
>>>>>>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>>>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>>>>>>
>>>>>>>>>>> None-the-less actual computers do actually demonstrate
>>>>>>>>>>> actual very deep understanding of these things.
>>>>>>>>>>
>>>>>>>>>> Not very deep, just deeper than you can achieve.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>>>>>>
>>>>>>>> That does not demonstrate any understanding, even shallow.
>>>>>>>>
>>>>>>>
>>>>>>> The first thing that it does is agree that Hehner's
>>>>>>> "Carol's question" (augmented by Richard's critique)
>>>>>>> is an example of the Liar Paradox.
>>>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>>>
>>>>>>> It ends up concluding that myself, professor Hehner and
>>>>>>> professor Stoddart are all correct in that there is
>>>>>>> something wrong with the halting problem.
>>>>>>
>>>>>> Which since it is proven that Chat GPT doesn't actually know what
>>>>>> is a fact, and has been proven to lie,
>>>>>
>>>>> The first thing that it figured out on its own is that
>>>>> Carol's question is isomorphic to the Liar Paradox.
>>>>>
>>>>> It eventually agreed with the same conclusion that
>>>>> myself and professors Hehner and Stoddart agreed to.
>>>>> It took 34 pages of dialog to understand this. I
>>>>> finally have a good PDF of this.
>>>>>
>>>>
>>>> It didn't "Figure it out". It pattern matched it to previous input
>>>> it has been given.
>>>>
>>>> If it took 34 pages to agree with your conclusion, then it really
>>>> didn't agree with you initially, but you finally trained it to your
>>>> version of reality.
>>>
>>> *HERE IS ITS AGREEMENT*
>>> When an input, such as the halting problem's pathological input D, is
>>> designed to contradict every value that the halting decider H returns,
>>> it creates a self-referential paradox that prevents H from providing a
>>> consistent and correct response. In this context, D can be seen as
>>> posing an incorrect question to H, as its contradictory nature
>>> undermines the possibility of a meaningful and accurate answer.
>>>
>>>
>>
>> Which means NOTHING, as an LLM will tell non-truths if fed misleading
>> information.
>
> The above paragraph is proven to be completely true entirely
> on the basis of the meaning of its words as these words were
> defined in the dialogue that precedes them.
>

No, it isn't.

Re: Why do H ⟨Ĥ⟩ ⟨Ĥ⟩ and Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ derive different results ?

<us8ici$24go$6@dont-email.me>


https://www.novabbs.com/tech/article-flat.php?id=9087&group=sci.logic#9087

Newsgroups: comp.theory,sci.logic
 by: immibis - Wed, 6 Mar 2024 01:54 UTC

On 5/03/24 17:48, olcott wrote:
> On 3/5/2024 3:33 AM, Mikko wrote:
>> On 2024-03-04 19:53:05 +0000, olcott said:
>>
>>> On 3/4/2024 3:22 AM, Mikko wrote:
>>>> On 2024-03-03 18:47:29 +0000, olcott said:
>>>>
>>>>> On 3/3/2024 11:48 AM, Mikko wrote:
>>>>>> On 2024-03-03 15:08:17 +0000, olcott said:
>>>>>>
>>>>>>> On 3/3/2024 4:54 AM, Mikko wrote:
>>>>>>>> On 2024-03-03 02:09:22 +0000, olcott said:
>>>>>>>>
>>>>>>>>> None-the-less actual computers do actually demonstrate
>>>>>>>>> actual very deep understanding of these things.
>>>>>>>>
>>>>>>>> Not very deep, just deeper than you can achieve.
>>>>>>>>
>>>>>>>
>>>>>>> Chat GPT 4.0 agreed that my reasoning is sound.
>>>>>>
>>>>>> That does not demonstrate any understanding, even shallow.
>>>>>>
>>>>>
>>>>> The first thing that it does is agree that Hehner's
>>>>> "Carol's question" (augmented by Richard's critique)
>>>>> is an example of the Liar Paradox.
>>>>> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>>>>>
>>>>> It ends up concluding that myself, professor Hehner and
>>>>> professor Stoddart are all correct in that there is
>>>>> something wrong with the halting problem.
>>>>
>>>> None of that demonstrates any understanding.
>>>>
>>>>> My persistent focus on these ideas gives me an increasingly
>>>>> deeper understanding; thus my latest position is that the
>>>>> halting problem proofs do not actually show that halting
>>>>> is not computable.
>>>>
>>>> Your understanding is still defective and shallow.
>>>>
>>>
>>> If it really was shallow then a gap in my reasoning
>>> could be pointed out.
>>
>> Gaps in your reasoning are pointed out every day.
>>
>
> There are no actual gaps in my reasoning.

Gap in your reasoning: when you think that a copy of a Turing machine
can possibly give any different result from the original.

Re: Limits of computations != actual limits of computers [ Church Turing ]

<us8if5$24go$7@dont-email.me>


https://www.novabbs.com/tech/article-flat.php?id=9088&group=sci.logic#9088

Newsgroups: comp.theory,sci.logic
 by: immibis - Wed, 6 Mar 2024 01:55 UTC

On 5/03/24 18:21, olcott wrote:
> Not at all. The key thing that I do not know is whether
> a Turing Machine can somehow accomplish the same functional
> result so that Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ can correctly determine that
> itself would never halt unless it transitions to Ĥ.Hqn.
>
> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does not halt
>
> I know that a RASP machine where every P knows its own address can
> easily do this. I am still trying to work out how a TM can do this.

It cannot. The only way it can know is if you tell it. And if you have
to tell it, then you can just lie to it; also, that is extra input not
specified in the halting problem, so it doesn't solve the halting problem.

Re: Chat GPT 4.0 affirms that Professors Hehner, Stoddart and I are correct

<us8imj$24go$9@dont-email.me>


https://www.novabbs.com/tech/article-flat.php?id=9089&group=sci.logic#9089

Newsgroups: comp.theory,sci.logic
 by: immibis - Wed, 6 Mar 2024 01:59 UTC

On 5/03/24 18:10, olcott wrote:
> On 3/5/2024 10:57 AM, immibis wrote:
>> On 3/03/24 02:29, olcott wrote:
>>> On 3/2/2024 6:53 PM, Richard Damon wrote:
>>>> Note, Computers, as generally viewed, especially for "Computation
>>>> Theory" have the limitation of being deterministic, which DOES make
>>>> them less powerful than the human mind, which has free will.
>>>
>>> LLMs have contradicted that. They are inherently stochastic.
>>
>> Incorrect. That they use a random number generator as part of their
>> algorithm does not make them special. You can use a true random number
>> generator, which must be counted as an input to the computation, or
>> you can use a pseudo-random number generator which is deterministic.
>> This is no different from a computerized game of Blackjack.
>
> Paradoxical Yes/No Dilemma  June 17, 2023
> My copyright notice is at the bottom showing that this is my dialogue
> https://chat.openai.com/share/5607ac4d-2973-49f4-a335-31783bae899b
>
> *This was written by ChatGPT summing up its complete agreement*

ChatGPT tells you what you want to hear. It predicts the most probable
next word.

