Rocksolid Light



devel / comp.theory / Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

Subject (Author)
* ChatGPT agrees that the halting problem input can be construed as an (olcott)
+* ChatGPT agrees that the halting problem input can be construed as (Richard Damon)
|+* ChatGPT agrees that the halting problem input can be construed as (olcott)
||`* ChatGPT agrees that the halting problem input can be construed as (Richard Damon)
|| `* ChatGPT agrees that the halting problem input can be construed as (olcott)
||  `- ChatGPT agrees that the halting problem input can be construed as (Richard Damon)
|`* ChatGPT agrees that the halting problem input can be construed as an incorrect q (Ben Bacarisse)
| `* ChatGPT agrees that the halting problem input can be construed as (olcott)
|  +* ChatGPT agrees that the halting problem input can be construed as (Jeff Barnett)
|  |`* ChatGPT agrees that the halting problem input can be construed as (Richard Damon)
|  | `* ChatGPT agrees that the halting problem input can be construed as (olcott)
|  |  `* ChatGPT agrees that the halting problem input can be construed as (Richard Damon)
|  |   `* ChatGPT agrees that the halting problem input can be construed as (olcott)
|  |    `- ChatGPT agrees that the halting problem input can be construed as (Richard Damon)
|  `* ChatGPT agrees that the halting problem input can be construed as (Richard Damon)
|   `* ChatGPT agrees that the halting problem input can be construed as (olcott)
|    `* ChatGPT agrees that the halting problem input can be construed as (Richard Damon)
|     `* ChatGPT agrees that the halting problem input can be construed as (olcott)
|      `* ChatGPT agrees that the halting problem input can be construed as (Richard Damon)
|       `* ChatGPT agrees that the halting problem input can be construed as (olcott)
|        `* ChatGPT agrees that the halting problem input can be construed as (Richard Damon)
|         `* ChatGPT agrees that the halting problem input can be construed as (olcott)
|          `* ChatGPT agrees that the halting problem input can be construed as (Richard Damon)
|           `* ChatGPT agrees that the halting problem input can be construed as (olcott)
|            +* ChatGPT agrees that the halting problem input can be construed as (Richard Damon)
|            |`* ChatGPT agrees that the halting problem input can be construed as (olcott)
|            | `* ChatGPT agrees that the halting problem input can be construed as (Richard Damon)
|            |  `* ChatGPT agrees that the halting problem input can be construed as (olcott)
|            |   `* ChatGPT agrees that the halting problem input can be construed as (Richard Damon)
|            |    +* ChatGPT agrees that the halting problem input can be construed as (olcott)
|            |    |`- ChatGPT agrees that the halting problem input can be construed as (Richard Damon)
|            |    `* ChatGPT agrees that the halting problem input can be construed as (olcott)
|            |     `* ChatGPT agrees that the halting problem input can be construed as (Richard Damon)
|            |      `* ChatGPT agrees that the halting problem input can be construed as (olcott)
|            |       `* ChatGPT agrees that the halting problem input can be construed as (Richard Damon)
|            |        `* ChatGPT agrees that the halting problem input can be construed as (olcott)
|            |         `- ChatGPT agrees that the halting problem input can be construed as (Richard Damon)
|            `* ChatGPT agrees that the halting problem input can be construed as (olcott)
|             `* ChatGPT agrees that the halting problem input can be construed as (Richard Damon)
|              +* ChatGPT agrees that the halting problem input can be construed as (olcott)
|              |`* ChatGPT agrees that the halting problem input can be construed as (Richard Damon)
|              | `* ChatGPT agrees that the halting problem input can be construed as (olcott)
|              |  `* ChatGPT agrees that the halting problem input can be construed as (Richard Damon)
|              |   +* Does input D have semantic property S or is input D [BAD INPUT]? (olcott)
|              |   |`* Does input D have semantic property S or is input D [BAD INPUT]? (Richard Damon)
|              |   | `* Does input D have semantic property S or is input D [BAD INPUT]? (olcott)
|              |   |  `* Does input D have semantic property S or is input D [BAD INPUT]? (Richard Damon)
|              |   |   `* Does input D have semantic property S or is input D [BAD INPUT]? (olcott)
|              |   |    `* Does input D have semantic property S or is input D [BAD INPUT]? (Richard Damon)
|              |   |     `* Does input D have semantic property S or is input D [BAD INPUT]? (olcott)
|              |   |      `- Does input D have semantic property S or is input D [BAD INPUT]? (Richard Damon)
|              |   `* Termination Analyzer H determines the semantic property of .. (olcott)
|              |    `- Termination Analyzer H determines the semantic property of .. (Richard Damon)
|              `* ChatGPT agrees that the halting problem input can be construed as (olcott)
|               `- ChatGPT agrees that the halting problem input can be construed as (Richard Damon)
+* Ben Bacarisse specifically targets my posts to discourage honest (olcott)
|`* Ben Bacarisse specifically targets my posts to discourage honest (Richard Damon)
| `* dishonest subject lines (Ben Bacarisse)
|  `- Ben Bacarisse specifically targets my posts to discourage honest (olcott)
+* Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to (olcott)
|`* Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to (Richard Damon)
| `* Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to (olcott)
|  `* Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to (Richard Damon)
|   `* Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to (olcott)
|    `* Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to (Richard Damon)
|     `* Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to (olcott)
|      `- Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts to (Richard Damon)
+- Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts] (olcott)
+- Refutation of the Ben Bacarisse Rebuttal [Ben targets my posts] (olcott)
+* ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting (vallor)
|+- ChatGPT and stack limits (was: Re: ChatGPT agrees that the (vallor)
|`* ChatGPT and stack limits (was: Re: ChatGPT agrees that the (olcott)
| `* ChatGPT and stack limits (was: Re: ChatGPT agrees that the (Richard Damon)
|  `* ChatGPT and stack limits (was: Re: ChatGPT agrees that the (olcott)
|   `* ChatGPT and stack limits (was: Re: ChatGPT agrees that the (Richard Damon)
|    `* ChatGPT and stack limits (was: Re: ChatGPT agrees that the (olcott)
|     `* ChatGPT and stack limits (was: Re: ChatGPT agrees that the (Richard Damon)
|      `* ChatGPT and stack limits (was: Re: ChatGPT agrees that the (olcott)
|       `- ChatGPT and stack limits (was: Re: ChatGPT agrees that the (Richard Damon)
`- ChatGPT agrees that the halting problem input can be construed as (olcott)

Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<u70dch$36h09$1@dont-email.me>


https://www.novabbs.com/devel/article-flat.php?id=47970&group=comp.theory#47970

Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polco...@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
Subject: Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the
halting problem input can be construed as an incorrect question
Date: Wed, 21 Jun 2023 21:58:25 -0500
Organization: A noiseless patient Spider
Lines: 95
Message-ID: <u70dch$36h09$1@dont-email.me>
References: <u6jhqq$1570m$1@dont-email.me> <u6vhv2$2urnr$1@dont-email.me>
<u6vkro$30a76$1@dont-email.me> <QiLkM.629$sW_c.553@fx07.iad>
<u705ae$323du$1@dont-email.me> <dDOkM.10063$jlQ4.3709@fx12.iad>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Thu, 22 Jun 2023 02:58:26 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="2545df0e4b7cf57e65f9ecaa42564d8d";
logging-data="3359753"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX18ytzJ6uabVwwwS99f0nBb6"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
Thunderbird/102.12.0
Cancel-Lock: sha1:ZmjuCU04eKeTiqkivVq3uaipkPY=
In-Reply-To: <dDOkM.10063$jlQ4.3709@fx12.iad>
Content-Language: en-US
 by: olcott - Thu, 22 Jun 2023 02:58 UTC

On 6/21/2023 9:47 PM, Richard Damon wrote:
> On 6/21/23 8:40 PM, olcott wrote:
>> On 6/21/2023 6:01 PM, Richard Damon wrote:
>>> On 6/21/23 3:59 PM, olcott wrote:
>>>> On 6/21/2023 2:10 PM, vallor wrote:
>>>>> On Sat, 17 Jun 2023 00:54:32 -0500, olcott wrote:
>>>>>
>>>>>> ChatGPT:
>>>>>>      “Therefore, based on the understanding that self-contradictory
>>>>>>      questions lack a correct answer and are deemed incorrect, one
>>>>>> could
>>>>>>      argue that the halting problem's pathological input D can be
>>>>>>      categorized as an incorrect question when posed to the halting
>>>>>>      decider H.”
>>>>>>
>>>>>> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a It did
>>>>>> not leap to this conclusion it took a lot of convincing.
>>>>>
>>>>> Chatbots are highly unreliable at reasoning.  They are designed
>>>>> to give you the illusion that they know what they're talking about,
>>>>> but they are the world's best BS artists.
>>>>>
>>>>> (Try playing a game of chess with ChatGPT, you'll see what I mean.)
>>>>>
>>>>
>>>> I already know that and much worse than that they simply make up facts
>>>> on the fly citing purely fictional textbooks that have photos and back
>>>> stories for the purely fictional authors. The fake textbooks themselves
>>>> are complete and convincing.
>>>>
>>>> In my case ChatGPT was able to be convinced by clearly correct
>>>> reasoning.
>>>>
>>>
>>> So, you admit that they will lie and tell you want you want to hear,
>>> you think the fact that it agrees with you means something?
>>>
>>>> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
>>>> It did not leap to this conclusion it took a lot of convincing.
>>>
>>> Which is a good sign that it was learnig what you wanted it to say so
>>> it finally said it.
>>>
>>>>
>>>> People are not convinced by this same reasoning only because they spend
>>>> 99.9% of their attention on rebuttal thus there is not enough attention
>>>> left over for comprehension.
>>>
>>> No, people can apply REAL "Correct Reasoning" and see the error in
>>> what you call "Correct Reasoning". Your problem is that your idea of
>>> correct isn't.
>>>
>>>>
>>>> The only reason that the halting problem cannot be solved is that the
>>>> halting question is phrased incorrectly. The way that the halting
>>>> problem is phrased allows inputs that contradict every Boolean return
>>>> value from a set of specific deciders.
>>>
>>> Nope, it is phrased exactly as needed. Your alterations allow the
>>> decider to give false answer and still be considered "correct" by
>>> your faulty logic.
>>>
>>>>
>>>> Each of the halting problems instances is exactly isomorphic to
>>>> requiring a correct answer to this question:
>>>> Is this sentence true or false: "This sentence is not true".
>>>>
>>>
>>> Nope.
>>>
>>> How is "Does the Machine represented by the input to the decider?"
>>> isomopric to your statement.
>>>
>>
>> The halting problem instances that ask:
>> "Does this input halt"
>>
>> are isomorphic to asking Jack this question:
>> "Will Jack's answer to this question be no?"
>
> Nope, because Jack is a volitional being, so we CAN'T know the correct
> answer to the question until after Jack answers the question, thus Jack,
> in trying to be correct, hits a contradiction.
>

We can know that the correct answer from Jack and the correct return
value from H cannot possibly exist, now and forever.

You are just playing head games.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer
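[The claim that no correct return value from H can exist is the classic diagonal construction. A minimal Python sketch of that construction; the names `halts`, `D`, and `D_behavior` are illustrative, and `halts` is a hypothetical stub standing in for a total halt decider, not a real one:]

```python
def halts(func, arg):
    """Hypothetical total halt decider: True iff func(arg) halts.
    No such total function exists; this stub only fixes the interface."""
    raise NotImplementedError

def D(p):
    """The 'pathological' input: does the opposite of the verdict on itself."""
    if halts(p, p):
        while True:       # verdict 'halts' -> loop forever
            pass
    else:
        return            # verdict 'does not halt' -> halt immediately

def D_behavior(verdict: bool) -> bool:
    """Whether D(D) halts, given the decider's verdict about D(D)."""
    return not verdict    # D inverts the verdict by construction

# Either fixed verdict is contradicted by D's resulting behavior:
for verdict in (True, False):
    assert D_behavior(verdict) != verdict
```

[Whether this shows the *question* is defective, or merely that no *decider* answers it, is exactly what the two posters dispute.]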

Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<ZdWkM.10575$zG0d.977@fx04.iad>


https://www.novabbs.com/devel/article-flat.php?id=47971&group=comp.theory#47971

Path: i2pn2.org!i2pn.org!weretis.net!feeder6.news.weretis.net!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer03.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx04.iad.POSTED!not-for-mail
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
Gecko/20100101 Thunderbird/102.12.0
Subject: Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the
halting problem input can be construed as an incorrect question
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
References: <u6jhqq$1570m$1@dont-email.me> <u6vhv2$2urnr$1@dont-email.me>
<u6vkro$30a76$1@dont-email.me> <QiLkM.629$sW_c.553@fx07.iad>
<u705ae$323du$1@dont-email.me> <dDOkM.10063$jlQ4.3709@fx12.iad>
<u70dch$36h09$1@dont-email.me>
From: Rich...@Damon-Family.org (Richard Damon)
Content-Language: en-US
In-Reply-To: <u70dch$36h09$1@dont-email.me>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Lines: 112
Message-ID: <ZdWkM.10575$zG0d.977@fx04.iad>
X-Complaints-To: abuse@easynews.com
Organization: Forte - www.forteinc.com
X-Complaints-Info: Please be sure to forward a copy of ALL headers otherwise we will be unable to process your complaint properly.
Date: Thu, 22 Jun 2023 07:26:48 -0400
X-Received-Bytes: 5665
 by: Richard Damon - Thu, 22 Jun 2023 11:26 UTC

On 6/21/23 10:58 PM, olcott wrote:
> On 6/21/2023 9:47 PM, Richard Damon wrote:
>> On 6/21/23 8:40 PM, olcott wrote:
>>> On 6/21/2023 6:01 PM, Richard Damon wrote:
>>>> On 6/21/23 3:59 PM, olcott wrote:
>>>>> On 6/21/2023 2:10 PM, vallor wrote:
>>>>>> On Sat, 17 Jun 2023 00:54:32 -0500, olcott wrote:
>>>>>>
>>>>>>> ChatGPT:
>>>>>>>      “Therefore, based on the understanding that self-contradictory
>>>>>>>      questions lack a correct answer and are deemed incorrect,
>>>>>>> one could
>>>>>>>      argue that the halting problem's pathological input D can be
>>>>>>>      categorized as an incorrect question when posed to the halting
>>>>>>>      decider H.”
>>>>>>>
>>>>>>> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a It
>>>>>>> did
>>>>>>> not leap to this conclusion it took a lot of convincing.
>>>>>>
>>>>>> Chatbots are highly unreliable at reasoning.  They are designed
>>>>>> to give you the illusion that they know what they're talking about,
>>>>>> but they are the world's best BS artists.
>>>>>>
>>>>>> (Try playing a game of chess with ChatGPT, you'll see what I mean.)
>>>>>>
>>>>>
>>>>> I already know that and much worse than that they simply make up facts
>>>>> on the fly citing purely fictional textbooks that have photos and back
>>>>> stories for the purely fictional authors. The fake textbooks
>>>>> themselves
>>>>> are complete and convincing.
>>>>>
>>>>> In my case ChatGPT was able to be convinced by clearly correct
>>>>> reasoning.
>>>>>
>>>>
>>>> So, you admit that they will lie and tell you want you want to hear,
>>>> you think the fact that it agrees with you means something?
>>>>
>>>>> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
>>>>> It did not leap to this conclusion it took a lot of convincing.
>>>>
>>>> Which is a good sign that it was learnig what you wanted it to say
>>>> so it finally said it.
>>>>
>>>>>
>>>>> People are not convinced by this same reasoning only because they
>>>>> spend
>>>>> 99.9% of their attention on rebuttal thus there is not enough
>>>>> attention
>>>>> left over for comprehension.
>>>>
>>>> No, people can apply REAL "Correct Reasoning" and see the error in
>>>> what you call "Correct Reasoning". Your problem is that your idea of
>>>> correct isn't.
>>>>
>>>>>
>>>>> The only reason that the halting problem cannot be solved is that the
>>>>> halting question is phrased incorrectly. The way that the halting
>>>>> problem is phrased allows inputs that contradict every Boolean return
>>>>> value from a set of specific deciders.
>>>>
>>>> Nope, it is phrased exactly as needed. Your alterations allow the
>>>> decider to give false answer and still be considered "correct" by
>>>> your faulty logic.
>>>>
>>>>>
>>>>> Each of the halting problems instances is exactly isomorphic to
>>>>> requiring a correct answer to this question:
>>>>> Is this sentence true or false: "This sentence is not true".
>>>>>
>>>>
>>>> Nope.
>>>>
>>>> How is "Does the Machine represented by the input to the decider?"
>>>> isomopric to your statement.
>>>>
>>>
>>> The halting problem instances that ask:
>>> "Does this input halt"
>>>
>>> are isomorphic to asking Jack this question:
>>> "Will Jack's answer to this question be no?"
>>
>> Nope, because Jack is a volitional being, so we CAN'T know the correct
>> answer to the question until after Jack answers the question, thus
>> Jack, in trying to be correct, hits a contradiction.
>>
>
> We can know that the correct answer from Jack and the correct return
> value from H cannot possibly exist, now and forever.
>
> You are just playing head games.
>
>

But the question isn't what H can return to be correct, since the only
possible answer that H can return is what it does return by its
programming, which will either BE correct or not. (In this case NOT).

Therefore, the question of what H SHOULD HAVE returned (to be correct)
has an answer, so the question actually HAS a correct answer.

You clearly don't understand the difference between a volitional being
and a deterministic machine. This shows your stupidity and ignorance.
Maybe you have lost your free will and ability to think because of the
evil in your life, and are condemned to keep repeating the same error
over and over proving your insanity and stupidity.

I guess you are now shown to be a Hypocritical Ignorant Pathological
Lying insane idiot.

Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<u71l85$3aqv2$1@dont-email.me>


https://www.novabbs.com/devel/article-flat.php?id=47973&group=comp.theory#47973

Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polco...@gmail.com (olcott)
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
Subject: Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the
halting problem input can be construed as an incorrect question
Date: Thu, 22 Jun 2023 09:18:45 -0500
Organization: A noiseless patient Spider
Lines: 107
Message-ID: <u71l85$3aqv2$1@dont-email.me>
References: <u6jhqq$1570m$1@dont-email.me> <u6vhv2$2urnr$1@dont-email.me>
<u6vkro$30a76$1@dont-email.me> <QiLkM.629$sW_c.553@fx07.iad>
<u705ae$323du$1@dont-email.me> <dDOkM.10063$jlQ4.3709@fx12.iad>
<u70dch$36h09$1@dont-email.me> <ZdWkM.10575$zG0d.977@fx04.iad>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Thu, 22 Jun 2023 14:18:45 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="2545df0e4b7cf57e65f9ecaa42564d8d";
logging-data="3501026"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX19LqvW11YMbAsFzzt6BsQel"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
Thunderbird/102.12.0
Cancel-Lock: sha1:uNVeuowEk2yPIsRpAIeJlrgwmzU=
In-Reply-To: <ZdWkM.10575$zG0d.977@fx04.iad>
Content-Language: en-US
 by: olcott - Thu, 22 Jun 2023 14:18 UTC

On 6/22/2023 6:26 AM, Richard Damon wrote:
> On 6/21/23 10:58 PM, olcott wrote:
>> On 6/21/2023 9:47 PM, Richard Damon wrote:
>>> On 6/21/23 8:40 PM, olcott wrote:
>>>> On 6/21/2023 6:01 PM, Richard Damon wrote:
>>>>> On 6/21/23 3:59 PM, olcott wrote:
>>>>>> On 6/21/2023 2:10 PM, vallor wrote:
>>>>>>> On Sat, 17 Jun 2023 00:54:32 -0500, olcott wrote:
>>>>>>>
>>>>>>>> ChatGPT:
>>>>>>>>      “Therefore, based on the understanding that self-contradictory
>>>>>>>>      questions lack a correct answer and are deemed incorrect,
>>>>>>>> one could
>>>>>>>>      argue that the halting problem's pathological input D can be
>>>>>>>>      categorized as an incorrect question when posed to the halting
>>>>>>>>      decider H.”
>>>>>>>>
>>>>>>>> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
>>>>>>>> It did
>>>>>>>> not leap to this conclusion it took a lot of convincing.
>>>>>>>
>>>>>>> Chatbots are highly unreliable at reasoning.  They are designed
>>>>>>> to give you the illusion that they know what they're talking about,
>>>>>>> but they are the world's best BS artists.
>>>>>>>
>>>>>>> (Try playing a game of chess with ChatGPT, you'll see what I mean.)
>>>>>>>
>>>>>>
>>>>>> I already know that and much worse than that they simply make up
>>>>>> facts
>>>>>> on the fly citing purely fictional textbooks that have photos and
>>>>>> back
>>>>>> stories for the purely fictional authors. The fake textbooks
>>>>>> themselves
>>>>>> are complete and convincing.
>>>>>>
>>>>>> In my case ChatGPT was able to be convinced by clearly correct
>>>>>> reasoning.
>>>>>>
>>>>>
>>>>> So, you admit that they will lie and tell you want you want to
>>>>> hear, you think the fact that it agrees with you means something?
>>>>>
>>>>>> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
>>>>>> It did not leap to this conclusion it took a lot of convincing.
>>>>>
>>>>> Which is a good sign that it was learnig what you wanted it to say
>>>>> so it finally said it.
>>>>>
>>>>>>
>>>>>> People are not convinced by this same reasoning only because they
>>>>>> spend
>>>>>> 99.9% of their attention on rebuttal thus there is not enough
>>>>>> attention
>>>>>> left over for comprehension.
>>>>>
>>>>> No, people can apply REAL "Correct Reasoning" and see the error in
>>>>> what you call "Correct Reasoning". Your problem is that your idea
>>>>> of correct isn't.
>>>>>
>>>>>>
>>>>>> The only reason that the halting problem cannot be solved is that the
>>>>>> halting question is phrased incorrectly. The way that the halting
>>>>>> problem is phrased allows inputs that contradict every Boolean return
>>>>>> value from a set of specific deciders.
>>>>>
>>>>> Nope, it is phrased exactly as needed. Your alterations allow the
>>>>> decider to give false answer and still be considered "correct" by
>>>>> your faulty logic.
>>>>>
>>>>>>
>>>>>> Each of the halting problems instances is exactly isomorphic to
>>>>>> requiring a correct answer to this question:
>>>>>> Is this sentence true or false: "This sentence is not true".
>>>>>>
>>>>>
>>>>> Nope.
>>>>>
>>>>> How is "Does the Machine represented by the input to the decider?"
>>>>> isomopric to your statement.
>>>>>
>>>>
>>>> The halting problem instances that ask:
>>>> "Does this input halt"
>>>>
>>>> are isomorphic to asking Jack this question:
>>>> "Will Jack's answer to this question be no?"
>>>
>>> Nope, because Jack is a volitional being, so we CAN'T know the
>>> correct answer to the question until after Jack answers the question,
>>> thus Jack, in trying to be correct, hits a contradiction.
>>>
>>
>> We can know that the correct answer from Jack and the correct return
>> value from H cannot possibly exist, now and forever.
>>
>> You are just playing head games.
>>
>>
>
> But the question isn't what H can return to be correct,
Yes it is and you just keep playing head games.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<ce6lM.38368$7915.14256@fx10.iad>


https://www.novabbs.com/devel/article-flat.php?id=47979&group=comp.theory#47979

Path: i2pn2.org!i2pn.org!usenet.blueworldhosting.com!diablo1.usenet.blueworldhosting.com!peer02.iad!feed-me.highwinds-media.com!news.highwinds-media.com!fx10.iad.POSTED!not-for-mail
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
Gecko/20100101 Thunderbird/102.12.0
Subject: Re: ChatGPT and stack limits (was: Re: ChatGPT agrees that the
halting problem input can be construed as an incorrect question
Newsgroups: comp.theory,sci.logic,comp.ai.philosophy
References: <u6jhqq$1570m$1@dont-email.me> <u6vhv2$2urnr$1@dont-email.me>
<u6vkro$30a76$1@dont-email.me> <QiLkM.629$sW_c.553@fx07.iad>
<u705ae$323du$1@dont-email.me> <dDOkM.10063$jlQ4.3709@fx12.iad>
<u70dch$36h09$1@dont-email.me> <ZdWkM.10575$zG0d.977@fx04.iad>
<u71l85$3aqv2$1@dont-email.me>
From: Rich...@Damon-Family.org (Richard Damon)
Content-Language: en-US
In-Reply-To: <u71l85$3aqv2$1@dont-email.me>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Lines: 134
Message-ID: <ce6lM.38368$7915.14256@fx10.iad>
X-Complaints-To: abuse@easynews.com
Organization: Forte - www.forteinc.com
X-Complaints-Info: Please be sure to forward a copy of ALL headers otherwise we will be unable to process your complaint properly.
Date: Thu, 22 Jun 2023 21:06:16 -0400
X-Received-Bytes: 6635
 by: Richard Damon - Fri, 23 Jun 2023 01:06 UTC

On 6/22/23 10:18 AM, olcott wrote:
> On 6/22/2023 6:26 AM, Richard Damon wrote:
>> On 6/21/23 10:58 PM, olcott wrote:
>>> On 6/21/2023 9:47 PM, Richard Damon wrote:
>>>> On 6/21/23 8:40 PM, olcott wrote:
>>>>> On 6/21/2023 6:01 PM, Richard Damon wrote:
>>>>>> On 6/21/23 3:59 PM, olcott wrote:
>>>>>>> On 6/21/2023 2:10 PM, vallor wrote:
>>>>>>>> On Sat, 17 Jun 2023 00:54:32 -0500, olcott wrote:
>>>>>>>>
>>>>>>>>> ChatGPT:
>>>>>>>>>      “Therefore, based on the understanding that
>>>>>>>>> self-contradictory
>>>>>>>>>      questions lack a correct answer and are deemed incorrect,
>>>>>>>>> one could
>>>>>>>>>      argue that the halting problem's pathological input D can be
>>>>>>>>>      categorized as an incorrect question when posed to the
>>>>>>>>> halting
>>>>>>>>>      decider H.”
>>>>>>>>>
>>>>>>>>> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
>>>>>>>>> It did
>>>>>>>>> not leap to this conclusion it took a lot of convincing.
>>>>>>>>
>>>>>>>> Chatbots are highly unreliable at reasoning.  They are designed
>>>>>>>> to give you the illusion that they know what they're talking about,
>>>>>>>> but they are the world's best BS artists.
>>>>>>>>
>>>>>>>> (Try playing a game of chess with ChatGPT, you'll see what I mean.)
>>>>>>>>
>>>>>>>
>>>>>>> I already know that and much worse than that they simply make up
>>>>>>> facts
>>>>>>> on the fly citing purely fictional textbooks that have photos and
>>>>>>> back
>>>>>>> stories for the purely fictional authors. The fake textbooks
>>>>>>> themselves
>>>>>>> are complete and convincing.
>>>>>>>
>>>>>>> In my case ChatGPT was able to be convinced by clearly correct
>>>>>>> reasoning.
>>>>>>>
>>>>>>
>>>>>> So, you admit that they will lie and tell you want you want to
>>>>>> hear, you think the fact that it agrees with you means something?
>>>>>>
>>>>>>> https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
>>>>>>> It did not leap to this conclusion it took a lot of convincing.
>>>>>>
>>>>>> Which is a good sign that it was learnig what you wanted it to say
>>>>>> so it finally said it.
>>>>>>
>>>>>>>
>>>>>>> People are not convinced by this same reasoning only because they
>>>>>>> spend
>>>>>>> 99.9% of their attention on rebuttal thus there is not enough
>>>>>>> attention
>>>>>>> left over for comprehension.
>>>>>>
>>>>>> No, people can apply REAL "Correct Reasoning" and see the error in
>>>>>> what you call "Correct Reasoning". Your problem is that your idea
>>>>>> of correct isn't.
>>>>>>
>>>>>>>
>>>>>>> The only reason that the halting problem cannot be solved is that
>>>>>>> the
>>>>>>> halting question is phrased incorrectly. The way that the halting
>>>>>>> problem is phrased allows inputs that contradict every Boolean
>>>>>>> return
>>>>>>> value from a set of specific deciders.
>>>>>>
>>>>>> Nope, it is phrased exactly as needed. Your alterations allow the
>>>>>> decider to give false answer and still be considered "correct" by
>>>>>> your faulty logic.
>>>>>>
>>>>>>>
>>>>>>> Each of the halting problems instances is exactly isomorphic to
>>>>>>> requiring a correct answer to this question:
>>>>>>> Is this sentence true or false: "This sentence is not true".
>>>>>>>
>>>>>>
>>>>>> Nope.
>>>>>>
>>>>>> How is "Does the Machine represented by the input to the decider?"
>>>>>> isomopric to your statement.
>>>>>>
>>>>>
>>>>> The halting problem instances that ask:
>>>>> "Does this input halt"
>>>>>
>>>>> are isomorphic to asking Jack this question:
>>>>> "Will Jack's answer to this question be no?"
>>>>
>>>> Nope, because Jack is a volitional being, so we CAN'T know the
>>>> correct answer to the question until after Jack answers the
>>>> question, thus Jack, in trying to be correct, hits a contradiction.
>>>>
>>>
>>> We can know that the correct answer from Jack and the correct return
>>> value from H cannot possibly exist, now and forever.
>>>
>>> You are just playing head games.
>>>
>>>
>>
>> But the question isn't what H can return to be correct,
> Yes it is and you just keep playing heed games.
>

So, you aren't talking about the Halting Problem, and your definition of
"Head Games" must be my correcting your mistakes.

The question of the Halting Problem is: does the Machine that the input
describes Halt? It makes no reference to H itself. H to be correct needs
to get the right answer, but the question isn't what it needs to return
to be correct, since once you define H, its answer is fixed, so the only
answer it CAN give is what it DOES give.

You seem to not understand that programs are deterministic entities and
have no option of "choice", so we can't ask what they can do to be
correct, because they will only do what they do.

Your Head Games seem to be about assuming things might do what they
don't actually do, and thus thinking about lies of pure fantasy.

You also seem to not understand the difference between a volitional
being and a deterministic process. Maybe because you have lost your own
determinism and gave it to your insanity, and now you are stuck forever
trying to do what you incorrectly thought of.

Clearly you have lost the intelligence that comes out of volition, as you
show yourself to be so stupid and ignorant as to not understand the
basics presented to you.

Re: ChatGPT agrees that the halting problem input can be construed as an incorrect question

<u7363l$3kfbd$1@dont-email.me>


https://www.novabbs.com/devel/article-flat.php?id=47986&group=comp.theory#47986

Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polco...@gmail.com (olcott)
Newsgroups: sci.logic,comp.theory
Subject: Re: ChatGPT agrees that the halting problem input can be construed as
an incorrect question
Date: Thu, 22 Jun 2023 23:12:35 -0500
Organization: A noiseless patient Spider
Lines: 60
Message-ID: <u7363l$3kfbd$1@dont-email.me>
References: <u6jhqq$1570m$1@dont-email.me>
<580d711a-afd0-4eae-b2bd-84b0126905d3n@googlegroups.com>
<u6ptvb$23qkq$1@dont-email.me>
<6a7076e8-79b9-49cb-8da9-dc538329cf89n@googlegroups.com>
<87wmzzjjv6.fsf@bsb.me.uk>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Fri, 23 Jun 2023 04:12:37 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="cfa17af7891102818a251cd71fa483f2";
logging-data="3816813"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1/l9mMKUBy9U+vLglgWJBhj"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
Thunderbird/102.12.0
Cancel-Lock: sha1:c5ke72tomaraw/gq4BNm1s6qg5s=
In-Reply-To: <87wmzzjjv6.fsf@bsb.me.uk>
Content-Language: en-US
 by: olcott - Fri, 23 Jun 2023 04:12 UTC

On 6/19/2023 3:08 PM, Ben Bacarisse wrote:
> Fritz Feldhase <franz.fritschee.ff@gmail.com> writes:
>
>> On Monday, June 19, 2023 at 5:58:39 PM UTC+2, olcott wrote:
>>
>>> the full semantics of the question <bla>
>>
>> Look, dumbo, we are asking the simple question: "Does D(D) halt?"
>>
>> Now, D(D) either halts or doesn't halt.
>>
>> Hence the CORRECT yes/no-answer to the question "Does D(D) halt?" is
>> "yes" iff D(D) halts and "no" if D(D) doesn't halt.
>
> Just a reminder that you are arguing with someone who has declared that
> the wrong answer is the right one:
>
> Me: "do you still assert that [...] false is the "correct" answer even
> though P(P) halts?"
>
> PO: Yes that is the correct answer even though P(P) halts.
>
> (Back then, D was called P.)
>

If it were correct in an absolute sense, rather than from the relative
sense of a frame-of-reference, that D(D) simply halts, then H would have
no need to abort its simulation of D.

Because it is easy to see that D simulated by H will never stop running
unless aborted, it is not true in an absolute sense that D(D) halts.

In the relative sense of the frame-of-reference of H(D,D), that its
input does not halt is proven by the fact that it will never stop
running unless aborted.

In the relative sense of the frame-of-reference of the directly executed
D(D) (after H has already aborted its simulation of D) it seems that
D(D) halts.

> This was not a slip of the tongue. He has been quite clear that he is
> talking about something other than what the world calls halting. It's
> about what /would/ happen if the program were slight different, not
> about what actually happens:
>
> PO: "A non-halting computation is every computation that never halts
> unless its simulation is aborted. This maps to every element of the
> conventional halting problem set of non-halting computations and a
> few more."
>
> He has been (eventually) perfectly clear -- PO's "Other Halting" is not
> halting, which is why false can be the correct answer for some halting
> computations. The only mystery is why anyone still wants to talk about
> POOH.
>

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer
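[The frame-of-reference dispute in this post is about aborted simulation versus direct execution. A toy Python sketch of just that distinction; `countdown` and `simulate_with_budget` are illustrative inventions, not olcott's H, and the step-budget model is a deliberate simplification:]

```python
def countdown(n: int) -> str:
    """A program that plainly halts when executed directly."""
    while n > 0:
        n -= 1
    return "halted"

def simulate_with_budget(steps_needed: int, budget: int) -> str:
    """Toy simulator: gives up ('aborted') once its step budget runs out."""
    return "halted" if steps_needed <= budget else "aborted"

assert countdown(10) == "halted"                         # direct execution halts
assert simulate_with_budget(10, budget=3) == "aborted"   # simulator gave up early
assert simulate_with_budget(10, budget=100) == "halted"  # larger budget observes the halt
# An 'aborted' report reflects the simulator's budget, not necessarily the
# simulated program's actual halting behavior -- which is the crux of the
# disagreement between olcott and Ben Bacarisse quoted above.
```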
