Newsgroups: soc.culture.china
Date: Sun, 18 Jun 2023 13:02:54 -0700 (PDT)
Message-ID: <f449a8b5-0eb5-4c19-b519-19877a120bb8n@googlegroups.com>
Subject: More of my philosophy about planning and reasoning of Large Language
 Models and about the window context and about GPT-4 memory and about Canada
 and about agile methodology and about the maximum token limit and about the
 other limitations of Large Language Models
From: amine...@gmail.com (Amine Moulay Ramdane)

Hello,

More of my philosophy about planning and reasoning of Large Language Models and about the window context and about GPT-4 memory and about Canada and about agile methodology and about the maximum token limit and about the other limitations of Large Language Models such as GPT-4 and about the future of artificial intelligence and about the multimodality of Large Language Models such as GPT-4 and about my philosophy and about my contributions here...

I am a white Arab from Morocco, and I think I am smart, since I have also
invented many scalable algorithms and other algorithms..

So here are my new questions that show another important limitation of ChatGPT:

So I have just asked ChatGPT the following question:

"So when you present Large Language Models such as ChatGPT with a question , they are limited by the question and they are not able to plan beyond the question, so they are like supervised by the questions from the prompt"

And ChatGPT has just answered:

"Yes, you are correct. Large language models like ChatGPT are limited by the input they receive, including the prompt or question. They generate responses based on the context provided in the input and the patterns they have learned from the training data.

The model's output is supervised by the input it receives, meaning it tries to generate a response that is relevant and coherent within the context of the given prompt or question. However, the model does not have the ability to actively expand its reasoning or plan beyond the provided input.

The responses generated by these models are influenced by the patterns and associations found in the training data, and they strive to generate outputs that are similar to human-generated text. But their responses are not the result of a conscious thought process or independent planning.

It's important to note that language models are highly effective at processing and generating text, but they lack true understanding, consciousness, and the ability to reason and plan in the same way humans do."

And here is my second question to ChatGPT:

"So then Large Language Models such as ChatGPT are not as smart as humans by not being able to autonomously plan and reason like i am explaining above"

And here is what ChatGPT has just answered:

"Correct, language models like ChatGPT are not as smart as humans in terms of planning and reasoning. While they can generate coherent and contextually relevant text based on patterns learned from training data, they lack the depth of understanding, intuition, and cognitive abilities that humans possess.

Humans have the ability to actively plan, reason, and think through problems, considering various factors, long-term goals, and potential outcomes. Humans can also use prior knowledge and experiences to make informed decisions and navigate complex situations. Large language models, on the other hand, do not possess these higher-level cognitive abilities.

While language models can provide impressive responses and simulate human-like conversation, they are ultimately limited by the data they were trained on and the patterns they learned during that training. They lack true comprehension and consciousness and are unable to engage in autonomous planning or reasoning beyond the immediate input they receive."

And here is my third question to ChatGPT:

"So then we can think Large Language Models such as ChatGPT as needing to be supervised from the prompt like in ChatGPT , so they are not able to plan and reason in a more unsupervised or autonomous way like humans"

And here is what ChatGPT has just answered:

"Yes, that's a reasonable way to think about it. Large language models like ChatGPT are typically supervised by the input they receive, such as a prompt or a question. They generate responses based on the patterns and associations they have learned from their training data.

Unlike humans, who can plan and reason in a more unsupervised way, language models rely on explicit input and guidance to generate responses. They do not possess the same level of autonomy, creativity, or independent thinking that humans have.

While language models can produce impressive outputs and mimic human-like conversation, their responses are ultimately driven by the patterns in the training data and the input they receive. They lack the ability to autonomously plan and reason beyond the limitations of the given prompt or question.

However, it's worth noting that ongoing research aims to explore ways to enhance the capabilities of language models, potentially incorporating more unsupervised learning and reasoning techniques. But as of now, large language models are primarily supervised by the prompt and do not possess the same level of unsupervised planning and reasoning abilities as humans."
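
To make concrete what being "supervised by the prompt" means mechanically, here is a minimal sketch of autoregressive decoding; the function next_token() is a hypothetical stand-in for a trained model, my own toy example rather than ChatGPT's actual code:

# A minimal sketch of autoregressive decoding: generation simply extends
# the prompt one token at a time, with no separate planning stage, so the
# output is always conditioned on -- "supervised by" -- the prompt.

import random

def next_token(context):
    # Hypothetical stand-in for a trained language model: a real model
    # would score the whole vocabulary given the context so far.
    vocabulary = ["planning", "reasoning", "tokens", "context", "."]
    rng = random.Random(len(context))   # deterministic toy behaviour
    return rng.choice(vocabulary)

def generate(prompt, max_new_tokens=5):
    tokens = list(prompt)               # everything starts from the prompt
    for _ in range(max_new_tokens):
        # Each step only sees the prompt plus what was already generated;
        # there is no goal, plan, or state outside this token sequence.
        tokens.append(next_token(tokens))
    return tokens

print(generate(["Large", "language", "models"]))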

And here is ChatGPT's answer to another of my questions, one that shows another limitation of ChatGPT:

"The window context during the training process refers to the number of preceding words or tokens that the language model considers while learning from the training data. A larger window context would allow the model to capture longer-range dependencies and patterns in the data.

While future powerful quantum computers may provide computational advantages for certain tasks, they do not directly address the challenge of enlarging the window context during the training process. The window context is primarily determined by the architectural design and computational constraints of the language model itself, rather than the specific computing substrate.

Increasing the window context in the training process involves managing memory requirements, computational resources, and optimization techniques specific to language models. These considerations are related to the model's architecture, available memory capacity, and training efficiency. It requires innovations in model design, training algorithms, and memory management techniques.

While quantum computers may offer computational benefits for certain aspects of training or inference tasks, such as optimization or sampling, they do not inherently solve the challenges associated with enlarging the window context during the training process. The solution lies in advancements in language model design, training methodologies, and memory optimization, which are independent of the specific computing substrate used."
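
To make the idea of the window context concrete, here is a minimal sketch, under my own simplifying assumptions (this is not GPT-4's real architecture), of how a fixed attention window limits which earlier tokens a position can relate to:

# A toy windowed causal attention mask: position i may only attend to
# earlier positions within the last `window` tokens, so a pattern spanning
# more than `window` tokens can never be related inside one attention step.

import numpy as np

def windowed_causal_mask(seq_len, window):
    # True where attention is allowed: causal (no looking ahead) and
    # within the window (no looking too far back).
    i = np.arange(seq_len)[:, None]   # query positions
    j = np.arange(seq_len)[None, :]   # key positions
    return (j <= i) & (i - j < window)

mask = windowed_causal_mask(seq_len=8, window=3)
print(mask.astype(int))
# Row i shows which tokens position i can "see"; anything farther back
# than 3 positions is invisible, however relevant it may be.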

I invite you to read the following article so that you understand an important limitation of Large Language Models such as GPT-4:

GPT-4 Has the Memory of a Goldfish

https://www.theatlantic.com/technology/archive/2023/03/gpt-4-has-memory-context-window/673426/

I think I am highly smart, since I have passed two certified IQ tests and I have scored above 115 IQ, and I mean that it is "above" 115 IQ, so I think the above article is not explaining it correctly, so I will explain:

So another important problem is how to solve the above problem, since Large Language Models such as GPT-4 can only discover patterns, with the attention mechanisms etc., inside a window of the maximum limit of tokens when they are trained. So the problem remains of how to ensure that the global patterns are discovered, since for example an average book typically contains around 80,000 to 100,000 words, while GPT-4 has a window with a maximum token limit of 32,000 (equivalent to about 25,000 words). So there are local patterns that require a small window, but there are also global patterns that require a large window; for example, discovering the global patterns in the training data can require one book or two books or more. So you are noticing that it is an important limitation of Large Language Models such as GPT-4, since the size of the context window can indeed impact the capacity of a language model to understand nuanced concepts and leverage common-sense knowledge, since a smaller window may limit the model's ability to capture long-range dependencies, complex relationships, and broader contextual information that are also crucial for nuanced understanding. But making the window of the maximum token limit bigger than that of GPT-4, so as to solve the problem efficiently, comes with too many computational limitations and too much increased inference cost (see the small sketch just below). So I invite you to read my below previous thoughts so that you understand my views on it and on artificial intelligence:
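
Here is the small back-of-the-envelope sketch of the arithmetic above; the figure of roughly 0.75 words per token is a common rule of thumb for English text, an assumption of mine rather than an exact property of GPT-4's tokenizer:

# How many window-sized chunks does an average book need?

WORDS_PER_TOKEN = 0.75            # rough rule of thumb for English text
WINDOW_TOKENS = 32_000            # GPT-4's larger context window
window_words = int(WINDOW_TOKENS * WORDS_PER_TOKEN)   # about 24,000 words

for book_words in (80_000, 100_000):
    chunks = -(-book_words // window_words)           # ceiling division
    print(f"A {book_words:,}-word book needs {chunks} chunks of at most "
          f"{window_words:,} words; patterns spanning chunks are never "
          f"seen together in one window.")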

How Canada could become the third or fourth largest economy in the world by the 2040s

Read more here:

https://www.nextbigfuture.com/2023/06/how-canada-could-become-the-third-or-fourth-largest-economy-in-the-world-by-the-2040s.html#more-183725

So as you have just noticed, I talked yesterday about my new thoughts on programming by saying the following:

"So in programming so that to not make the system harder to understand, test, and maintain, you have to implement what you need and you have to minimize at best complexity and you should avoid the duplication of code in your application and you should encapsulate data and behavior in your classes and objects, and take advantage of object-oriented programming (OOP) concepts such as inheritance, composition, and polymorphism to create modular, manageable, and organized code, and of course you have to minimize at best coupling and maximize at best cohesion, and you should well document code so that it be much easier to manage, maintain, and debug and you should run unit tests often, and you have to use meaningful names, and of course you should refactor your code regularly by improving code quality since refactoring makes the code far easier to maintain over time."

But I think I also have to talk about the most important ideas of agile methodology. Of course, agile methodology is used so as to adapt efficiently to a changing environment, that is, so as to adapt efficiently to change, so here are my interesting thoughts about agile methodology, so read them carefully:

