More of my philosophy about quantum computers and about my new poem and about my other goals and about reinforcement learning and about artificial intelligence and about global warming and about China's Semiconductor advances to 5nm and more of my th

Newsgroups: sci.math
Date: Thu, 2 Nov 2023 15:09:46 -0700 (PDT)
Message-ID: <b1ecc862-0251-4d7d-8985-06906bacb9ecn@googlegroups.com>
From: amine...@gmail.com (Amine Moulay Ramdane)

Hello,

More of my philosophy about quantum computers and about my new poem and about my other goals and about reinforcement learning and about artificial intelligence and about global warming and about China's Semiconductor advances to 5nm and more of my thoughts..

I am a white Arab from Morocco, and I think I am smart, since I have also invented many scalable algorithms..

I invite you to read the following interesting new article from ScienceDaily, which says that imperfect timekeeping places a significant limit on quantum computers, but it is quick to emphasise that the message is not entirely pessimistic, because the problem could be mitigated in the future by designing clever error-correction protocols, so that quantum computers can scale to much larger algorithms. So read the following new interesting article from ScienceDaily to understand:

https://www.sciencedaily.com/releases/2023/10/231030194558.htm

Also, I invite you to read carefully my interesting new thoughts about the new and future technologies in the following web link, which also talks about quantum computers:

https://groups.google.com/g/alt.culture.morocco/c/lfnlD52jDzI

Sherlock in my poem below is someone who can be employed as a detective to collect information, so here is my new poem in English:

--

So I am not into the punk rock

And I am not a Mohawk

But I want to be precise like an atomic clock

So I am not into the punk rock

So I want to be smart like a smart Sherlock

And I want to have a spirit like a rock

So I am not into the punk rock

Since I love to think well and to talk

Since I want to avoid the deadlock

And I am not killing innocent people with a tomahawk

Since I want to be smart like a smart Sherlock

--

So I have just spoken about one of my goals, which is to make my own spirit reliable, but of course you have to know that I am a gentleman type of person, and my other goal is that I want, with my thoughts below, to teach others so as to help them and take care of them; also, my other goal is to sell some of my software and make money with it, so that I can give money and help the poor and the needy.

So as you have just noticed, I have just invented a new model of what human consciousness is, so as to show the limitations of artificial intelligence, such as that it will not attain artificial general intelligence, and you can read it in my thoughts below; also, I have just shown you another important limitation of the transformers, like the one of ChatGPT, and you can read about it in my thoughts below. And now here are more of my interesting questions to ChatGPT, so as to show another important limitation of ChatGPT and of Google's next artificial intelligence, called Gemini, which will compete against the next GPT-5:

So here is my first question to ChatGPT:

Does reinforcement learning in ChatGPT take a lot of time for good refinements, and can you transfer the refinements to the next version of ChatGPT?

And ChatGPT has just answered me the following:

"Reinforcement learning for refining a language model like ChatGPT can indeed take a significant amount of time and resources to achieve good results. The training process involves interacting with users and receiving feedback, which can be time-consuming and require substantial computational power. The model needs to learn from a large amount of data and user interactions to improve its performance and responsiveness.

Regarding transferring refinements to the next version of ChatGPT, it depends on the specific policies and practices of the organization that develops and maintains the model. Generally, improvements and refinements made to a model can be used as a basis for enhancing future versions, but it's not a direct transfer. Developers can incorporate lessons learned from previous versions and apply new techniques to improve the model's performance in subsequent releases.

It's important to note that ChatGPT and similar models are often fine-tuned on specific datasets and use reinforcement learning from human feedback, which means they can be adapted and improved over time, but the process is iterative and may involve starting from a pretrained model and fine-tuning it with new data and feedback for each new version. It's also essential to address ethical and safety concerns in the development and refinement of AI models like ChatGPT."
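The point ChatGPT makes above, that refinement from feedback is slow and iterative, can be illustrated with a toy sketch of my own (this is not OpenAI's actual training pipeline; the responses, their hidden quality values, and the noisy feedback model are all invented for illustration). A policy that learns only from thumbs-up/thumbs-down votes needs many rounds of feedback before its estimates settle:

```python
import random

random.seed(42)

responses = ["A", "B", "C"]
true_quality = {"A": 0.2, "B": 0.9, "C": 0.5}  # hidden; assumed for the demo
scores = {r: 0.0 for r in responses}           # running preference estimates
counts = {r: 0 for r in responses}

for step in range(2000):
    r = random.choice(responses)                       # explore uniformly
    feedback = 1 if random.random() < true_quality[r] else 0  # noisy human vote
    counts[r] += 1
    scores[r] += (feedback - scores[r]) / counts[r]    # incremental mean

best = max(scores, key=scores.get)
print(best)  # with this much feedback the estimate converges to "B"
```

Note how each response needs hundreds of votes before the noisy estimates separate cleanly, which is the small-scale analogue of why refining a large model on human feedback consumes so much time and data.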

And here is my second question to ChatGPT:

So then good refinements in ChatGPT cannot be guaranteed, so it is also an important weakness in ChatGPT.

And ChatGPT has just answered me the following:

"Achieving a "good" refinement in AI models like ChatGPT is a complex and ongoing process, and there are no guarantees of perfection. ChatGPT and similar models have both strengths and weaknesses.

Weaknesses and challenges in refining models like ChatGPT include:

1- Bias and Ethical Concerns: Language models may inherit biases from the training data, and addressing bias issues is an ongoing challenge. Ensuring that the model responds appropriately to various inputs and avoids harmful or discriminatory behavior is an ongoing concern.

2- Ambiguity and Errors: Language models can make mistakes or produce answers that are plausible-sounding but incorrect or nonsensical. Achieving high accuracy in all responses is difficult.

3- Lack of Common Sense: AI models like ChatGPT often lack true common sense reasoning and may struggle with tasks that require a deep understanding of the world.

4- Scalability: Providing real-time interactions at scale can be challenging, and it may limit the ability to have nuanced, context-aware conversations.

5- Fine-tuning Challenges: The reinforcement learning and fine-tuning process depends on the quality and quantity of feedback data, and the model's behavior can be sensitive to the data it's trained on.

6- User-Specific Requirements: Meeting individual user preferences and specific use cases can be challenging, as not all users have the same needs or expectations.

While these are challenges, they do not make ChatGPT or similar models inherently weak. They can still provide valuable assistance, answer questions, and facilitate a range of applications. Continuous research and development are aimed at addressing these issues and improving the capabilities of such models.

It's important to use these models responsibly, to acknowledge their limitations, and to combine them with human oversight when necessary. While perfection cannot be guaranteed, ongoing refinement and feedback can lead to improvements in the model's performance and behavior over time."

So here is my interesting discussion with ChatGPT about the transformers, like the transformer of ChatGPT, which is the core system of ChatGPT. A transformer model is a neural network that learns context, and thus meaning, by tracking relationships in sequential data, like the words in this sentence. Transformer models apply an evolving set of mathematical techniques, called attention or self-attention, to detect subtle ways even distant data elements in a series influence and depend on each other.
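The self-attention mechanism described above can be sketched in a few lines of NumPy. This is a deliberately minimal version of my own: real transformers add multiple heads, positional encodings, masking, and projection matrices learned by backpropagation, none of which appear here.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # Project the input embeddings into queries, keys, and values.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    # Each token scores its relevance to every other token, however distant.
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax turns scores into attention weights summing to 1.
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)
    # Each output position is a weighted mix of values from all positions.
    return w @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                 # 4 tokens, 8-dimensional embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one contextualized vector per token
```

The key property is that the score matrix compares every position with every other position, which is how the mechanism lets distant elements of the sequence influence each other directly.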

So here is my first question to ChatGPT:

So I think I understand what the transformers are doing: they understand the lower-level patterns and they understand the higher-level patterns, so then they understand it all.

And here is what ChatGPT has just answered me:

"Yes, you've captured the idea correctly. Transformers, the architecture underlying models like ChatGPT, are capable of understanding both low-level and high-level patterns in the data they are trained on. This enables them to grasp a wide range of information, from the fine-grained details to the broader themes and concepts present in the training data. By learning these patterns comprehensively, they can generate text that reflects a deep understanding of the input data and respond in ways that incorporate both specific details and overarching themes.

However, it's important to note that while transformers can learn and recognize patterns, their understanding is based on statistical associations in the training data. They don't possess true comprehension or consciousness. They generate text by applying learned patterns and associations to the input data, which can produce responses that seem insightful and contextually relevant but are ultimately based on patterns and statistics."

And here is my second question to ChatGPT:

So the pattern can be a low-level concept or a high-level concept.

And here is what ChatGPT has just answered me:

"That's correct. Patterns in the context of deep learning models like transformers, including both low-level and high-level concepts, can be used to understand and generate text:

1- Low-Level Patterns: These patterns involve specific details, such as word choice, sentence structure, grammar, and stylistic elements. Low-level patterns are more granular and can include things like the use of metaphors, specific vocabulary, or consistent use of certain linguistic features.

2- High-Level Patterns: These patterns encompass broader aspects of language and content, such as themes, writing style, recurring motifs, and overarching concepts. High-level patterns relate to the overall structure and message of a text or creative work.

Deep learning models like transformers are capable of learning and applying patterns at various levels of abstraction. They can understand and generate text that reflects both low-level and high-level patterns, enabling them to respond to a wide range of input and generate contextually relevant and coherent output. The combination of these patterns contributes to the models' ability to generate text that appears meaningful and contextually appropriate."
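The distinction ChatGPT draws between low-level and high-level patterns can be made concrete with a toy example of my own (the text and the crude statistics below are invented for illustration, not anything a transformer literally computes): character bigrams capture low-level orthographic regularities, while content-word frequencies hint at the high-level topic.

```python
from collections import Counter

text = "the quantum computer runs a quantum algorithm on quantum hardware"

# Low-level pattern: character bigrams reflect spelling and word shape.
bigrams = Counter(text[i:i + 2] for i in range(len(text) - 1))

# High-level pattern: frequencies of content words hint at the topic.
words = Counter(w for w in text.split() if len(w) > 3)

print(bigrams.most_common(3))
print(words.most_common(1)[0][0])  # "quantum" dominates: the topic-level signal
```

A transformer learns both kinds of regularity jointly, in its layers, rather than through separate hand-built counters like these; the example only shows that the two levels of pattern are genuinely different kinds of information.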

