

https://www.novabbs.com/interests/article-flat.php?id=13412&group=soc.culture.china#13412

Newsgroups: soc.culture.china
Date: Sun, 18 Jun 2023 17:29:05 -0700 (PDT)
Message-ID: <12a5cbbb-23d7-456d-934e-4826be862046n@googlegroups.com>
Subject: More of my philosophy about my fluid intelligence and about causality
and about the Transformers and about another important limitation of Large
Language Models and about planning and reasoning of Large Language Models and
about the window context an
From: amine...@gmail.com (Amine Moulay Ramdane)
 by: Amine Moulay Ramdane - Mon, 19 Jun 2023 00:29 UTC

Hello,

More of my philosophy about my fluid intelligence and about causality and about the Transformers and about another important limitation of Large Language Models and about planning and reasoning of Large Language Models and about the context window and about GPT-4 memory and about Canada and about agile methodology and about the maximum token limit and about the other limitations of Large Language Models such as GPT-4 and about the future of artificial intelligence and about the multimodality of Large Language Models such as GPT-4 and about my philosophy and about my contributions here...

I am a white Arab from Morocco, and I think I am smart, since I have also
invented many scalable algorithms and other algorithms.

I think I am highly smart, since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ, so I have to explain something important: since I have just discovered an important pattern with my fluid intelligence (you can read about it in my previous thoughts below), I think that Large Language Models such as GPT-4 capture causality indirectly from language, and much more easily in languages like French or English. So in professions like psychology or law or philosophy, which use a language like French or English, it is easy for Large Language Models to capture causality, but in software development it is more difficult, since, for example, on GitHub, where you find the open source software projects, Large Language Models are not able to capture causality correctly: those software projects on GitHub are not written in languages like French and English, so the causality in them is not specified and modeled with a language like French or English. So you now understand better the important pattern that I am discovering with my fluid intelligence below, and it explains a very important thing: it is important to help Large Language Models such as GPT-4 to capture causality by modeling it with a language like French or English, by specifying it enough and clearly. So then you understand this important limitation of Large Language Models such as GPT-4, and I invite you to reread all my previous thoughts so that you understand my views.
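
So here is a small hypothetical Python example, that I am inventing only for illustration, of what I mean by specifying causality enough and clearly with the language:

# Hypothetical example (invented for illustration): the same logic twice.
# In the first version the cause-effect relationship is implicit, so a
# language model sees little more than token co-occurrence; in the second
# version the causality is spelled out in English, in the docstring, so
# the model can capture it from the language itself.

# Version 1: causality left implicit.
def next_delay(delay):
    return min(delay * 2, 60)

# Version 2: causality specified clearly in the language.
def next_retry_delay(current_delay_seconds: float) -> float:
    """Return how long to wait before the next retry.

    The causality, stated explicitly: BECAUSE each failed attempt
    suggests the server is overloaded, the delay is doubled (the
    effect), and BECAUSE an unbounded wait would stall the client,
    the doubled delay is capped at 60 seconds.
    """
    return min(current_delay_seconds * 2.0, 60.0)

print(next_retry_delay(8.0))   # 16.0
print(next_retry_delay(45.0))  # 60.0

So in the second version, the causality is modeled with the English language itself, inside the docstring, and it is this kind of specification that, I think, helps a Large Language Model to capture the cause-effect relationship.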

So notice that I am discovering an important pattern with my fluid intelligence, since I have a high IQ; the pattern that I am discovering is below:

I think I am highly smart, since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ. So the pattern that I am discovering with my fluid intelligence is that Large Language Models such as GPT-4 don't capture causality the way we humans do: to capture causality, they have to capture it "indirectly", through the "language", when it is specified enough and clearly. So I think this is the limitation of Large Language Models such as GPT-4, and it is why you notice that I am smart by talking as I am talking with ChatGPT below, by also making it understand that I am understanding and discovering this pattern with my fluid intelligence. So reread my following previous thoughts so that you understand my views:

I have just forgotten to include other of my important questions and answers from ChatGPT, so I invite you to reread carefully, since it is important; and of course you have to understand that Large Language Models such as GPT-4 are based on Transformers, and that explains my kind of questions to ChatGPT below:

I have just posted the following to show how Large Language Models such as GPT-4 are not good at software development, and why I think that Large Language Models such as GPT-4 will not replace a lot of programmers; so read it carefully, and my further proof of it is also below:

"Perhaps most impressive were the performances of GPT-4 in professional arenas. It scored a 298 out of 400 on the Uniform Bar Exam, which is high enough to be licensed as a lawyer in every state using the UBE system. It also scored a reasonable 75% on the MKSAP 19 medical knowledge test, and an impressive 77% on the Advanced-level Sommelier theory exam.

In programming tests, GPT-4 did worse; the AI struggles with code, it seems. GPT-4 was able to get 31 out of 41 correct solutions in the "easy" Leetcode test, but got just 21/80 on the medium test and only 3 correct questions on the hard test. Meanwhile, its Codeforces rating is a measly 392, placing it below the 5th percentile of users."

Read more here:

https://hothardware.com/news/openai-gpt-4-model

And here is more logical proof of the above so that you understand why:

Here is my other important question to ChatGPT, from my new discussion with ChatGPT, that shows another important limitation of ChatGPT, so I invite you to read it carefully:

So here is my first question to ChatGPT:

"Can you explain more with some good and simple example the following:
so you have said: "Lack of Causality: Transformers inherently lack the notion of causality in their architecture. They treat all positions in the input sequence equally and do not have an explicit mechanism to model cause-effect relationships. This limitation can affect tasks that require reasoning about causality or temporal dependencies.""

And here is what ChatGPT answered:

"Certainly! Let's consider an example to illustrate the limitation of lack of causality in transformers.

Suppose we have a transformer-based language model trained on a large corpus of news articles. We want to use this model to answer questions about events described in the articles. Let's take the following question as an example: "What caused the traffic jam?"

Without a specific mechanism to model causality, the transformer model would process the question and input sequence in a parallel, attention-based manner, treating all words equally. It would not have an inherent understanding of the cause-effect relationship between events described in the articles.

To generate a response, the model might attend to relevant words like "traffic" and "jam" in the input sequence, and possibly identify associated words like "road," "accident," or "construction." However, the model wouldn't explicitly grasp the causal connection between these words and the traffic jam.

In reality, identifying the cause of a traffic jam may require analyzing the context, understanding the chronology of events, and reasoning about the cause-effect relationships between different factors. For instance, the cause could be an accident that happened earlier, road construction work, or heavy rainfall. Without an explicit mechanism to model causality, the transformer model might struggle to provide accurate or detailed answers that consider temporal dependencies and cause-effect relationships.

In contrast, a model specifically designed to handle causality, such as a structured causal model or a temporal reasoning framework, could potentially reason about the events in a more accurate and meaningful way. Such models would take into account the temporal order of events, the dependencies between causes and effects, and potentially incorporate domain-specific knowledge to make better predictions about causality.

It's important to note that researchers are actively exploring methods to address the lack of causality in transformers. Some approaches incorporate temporal encodings, modify attention mechanisms to capture temporal dependencies, or combine transformers with external knowledge graphs to model cause-effect relationships explicitly. These advancements aim to enhance the ability of transformers to reason about causality and temporal dependencies in a more sophisticated manner."
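
So that you see concretely what "treating all positions in the input sequence equally" means, and what modifying the attention mechanism looks like, here is a minimal numpy sketch, that I am giving only as an illustration and not as the actual code of GPT-4:

import numpy as np

# Minimal sketch (invented for illustration, not GPT-4's real code):
# scaled dot-product attention over a toy sequence of 4 token embeddings.
rng = np.random.default_rng(0)
d = 8                          # embedding dimension
x = rng.normal(size=(4, d))    # 4 positions in the input sequence

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention(q, k, v, mask=None):
    scores = q @ k.T / np.sqrt(d)   # every position scored against every other
    if mask is not None:
        scores = np.where(mask, scores, -1e9)
    w = softmax(scores)             # attention weights
    return w, w @ v

# 1) Vanilla attention: the weight matrix is dense, so position 0 attends
#    to position 3 just as freely as the reverse; there is no built-in
#    direction, hence no "A happened, therefore B" in the architecture.
weights, _ = attention(x, x, x)
print(np.round(weights, 2))

# 2) One remedy of the kind mentioned above: a causal (lower-triangular)
#    mask, so that position i can only attend to positions <= i.
causal_mask = np.tril(np.ones((4, 4), dtype=bool))
weights, _ = attention(x, x, x, mask=causal_mask)
print(np.round(weights, 2))    # zeros above the diagonal

So notice in the output that the masked weights are zero above the diagonal: the mask gives the model an ordering of the positions, but, as ChatGPT says above, an ordering alone is still not an explicit model of cause and effect.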

And here is my second question to ChatGPT:

"So can it limit Large Language Models such ChatGPT in programming ?"

And here is what ChatGPT answered:

"The lack of causality in transformers can indeed pose limitations in large language models like ChatGPT when it comes to programming tasks. While ChatGPT is proficient in generating natural language responses, it may struggle with certain programming-related aspects that involve causality and temporal dependencies.

Programming often requires understanding the sequential and cause-effect relationships between different code statements, variables, and actions. Without an explicit mechanism to model causality, a transformer-based language model may have difficulty reasoning about these relationships accurately.

For instance, when programming, it is crucial to understand the order of execution, the impact of one code statement on subsequent statements, and the cause-effect relationships between different variables and functions. Transformers typically treat all positions in the input sequence equally and lack the inherent ability to model these causal relationships.

Additionally, programming tasks may involve complex control flow, such as loops, conditionals, and recursive functions. Understanding the causal dependencies and temporal aspects within these constructs is essential for writing correct and efficient code. Transformers, without explicit mechanisms to handle causality, may struggle to capture and reason about these intricate relationships.

However, it's worth mentioning that transformers can still be useful in programming tasks, especially for tasks that involve natural language processing, code summarization, or generating code documentation. They can assist with generating human-readable explanations, providing code examples, or even aiding in code completion.

To address the limitations of causality in programming, specialized programming language models and frameworks have been developed that explicitly incorporate temporal and causal relationships in their architectures. Some examples include models like Gated Graph Neural Networks (GGNNs) or frameworks like Neural Abstract Syntax Trees (NASTs), which aim to better capture the semantics and causal dependencies in programming languages.
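
And so that you see concretely what ChatGPT means above by the order of execution and the cause-effect relationships between statements, here is a tiny Python example that I am inventing only for illustration: the same statements, in two different orders, give two different results:

# Tiny invented example: the ORDER of the statements is itself the cause
# of the final result, which is exactly the kind of dependency that a
# model without an explicit notion of causality can get wrong.

def discount_then_tax(price):
    price = price - 10.0   # cause: a flat 10.0 discount is applied first,
    price = price * 1.20   # effect: the 20% tax is computed on the
    return price           # discounted price: (100 - 10) * 1.2 = 108.0

def tax_then_discount(price):
    price = price * 1.20   # same two statements, opposite order:
    price = price - 10.0   # 100 * 1.2 - 10 = 110.0
    return price

# The same causal sensitivity appears in loops: each iteration's effect
# becomes the cause (the input) of the next iteration.
def compound(price, years):
    for _ in range(years):
        price = price * 1.05   # year n's result feeds year n+1
    return price

print(discount_then_tax(100.0))      # 108.0
print(tax_then_discount(100.0))      # 110.0
print(round(compound(100.0, 3), 2))  # 115.76

So the first two functions contain exactly the same two statements, and only their order differs, yet the order changes the answer; and in the loop, every iteration depends causally on the previous one, which is the kind of temporal dependency that ChatGPT describes above.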

