

   comp.programming      Programming issues that transcend languages and OSs.      57,431 messages   


   Message 56,887 of 57,431   
   Augǝl to Amine Moulay Ramdane   
   Re: More of my philosophy about how to regulate ChatGPT and artificial intelligence   
   02 Jan 23 17:48:21   
   
   From: angel00000100000@mail.ee   
      
   Hi, laMb...   
      
      
      
      
   On Monday, January 2, 2023 at 1:23:38 AM UTC+2, Amine Moulay Ramdane wrote:   
   > Hello,    
   >
   > More of my philosophy about how to regulate ChatGPT and artificial
   > intelligence, and more of my thoughts..
   >
   > I am a white Arab from Morocco, and I think I am smart, since I have
   > also invented many scalable algorithms..
   >
   > I think I am highly smart, since I have passed two certified IQ tests
   > and I have scored "above" 115 IQ, and I mean that it is "above" 115
   > IQ. So I think that OpenAI is not thinking correctly, since ChatGPT
   > takes the patterns from the data of human creativity and uses them,
   > and I don't think that it has the right to do that, since it is
   > "clear" in our legal system that the humans who created those
   > patterns of creativity have not given the right to build from them an
   > artificial intelligence such as ChatGPT that can hurt humans in an
   > unlawful way. The value of a product or service comes from smartness
   > and/or from work or hard work, but artificial intelligence tools such
   > as ChatGPT hurt humans in an unlawful way and unlawfully hurt the
   > value of the creativity of a product or service that comes from the
   > smartness and/or the work or hard work of a human. So I invite you to
   > read my previous thoughts so that you understand my views:
   >    
   >    
   > More of my philosophy about the weakness of the Generative
   > Pre-trained Transformer and more of my thoughts..
   >
   > So I think I am discovering, with my fluid intelligence, the pattern
   > that explains the weakness of a Generative Pre-trained Transformer
   > like ChatGPT: ChatGPT can discover patterns by using the existing
   > patterns in its data or knowledge, so it is like using the smartness
   > of the data, but ChatGPT cannot use the smartness of the human brain,
   > which also comes with the human consciousness that optimizes for more
   > smartness. So it cannot invent highly smart patterns or things the
   > way a highly smart human does from his brain, and I think that
   > ChatGPT will remain incapable of this kind of highly smart
   > creativity, but it still remains really powerful and really useful.
   > So I invite you to read my following previous thoughts so that you
   > understand my views:
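
The "using the smartness of the data" idea in the quoted paragraph can be
sketched with a deliberately tiny stand-in: count which word follows which
in a corpus, then generate text by replaying those counts. The corpus and
the `generate` helper below are made up for illustration; a real GPT learns
such conditional distributions with a transformer, not a count table.

```python
from collections import Counter, defaultdict

# Toy corpus (invented for illustration).
corpus = "the animal crossed the street because the animal was tired".split()

# "Pre-training": tally which word follows which in the data.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, n_tokens):
    """Extend `start` by repeatedly emitting the most frequent next word."""
    out = [start]
    for _ in range(n_tokens):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(generate("the", 3))  # replays a pattern seen in the data
```

The model can only recombine patterns present in its training counts, which
is a crude analogue of the limitation the post argues for.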
   >    
   >    
   > More precision of my philosophy about the mechanisms of attention and
   > self-attention of Transformer AI models and more of my thoughts..
   >
   > I think I am highly smart, since I have passed two certified IQ tests
   > and I have scored "above" 115 IQ, and I mean that it is "above" 115
   > IQ. I think I understand deep learning, but I say that Transformers
   > are deep learning plus self-attention and attention, and this
   > attention and self-attention permit them to grasp "context" and
   > "antecedents". For example, take the following sentence:
   >
   > "The animal didn't cross the street because it was too tired"
   >
   > So we can ask how the artificial intelligence of ChatGPT, which uses
   > a Generative Pre-trained Transformer, will understand that the "it"
   > in the above sentence is not the street but the animal. I say that it
   > is with the self-attention and attention mechanisms, and with
   > training on more and more data, that the transformer can "detect" the
   > pattern that the "it" refers to the "animal" in the above sentence.
   > So the self-attention and attention of the artificial intelligence of
   > ChatGPT, which we call the Generative Pre-trained Transformer, permit
   > it to grasp "context" and "antecedents" too; it is also like
   > logically inferring the patterns, using self-attention and attention,
   > from the context of the many, many sentences in the data. And since
   > the data is growing exponentially, and since the artificial
   > intelligence is also generative, I think this will make the
   > artificial intelligence of the transformer much more powerful. So, as
   > you notice, the data is King, and the "generative" word of the
   > Generative Pre-trained Transformer refers to the model's ability to
   > generate text. Of course, we are now noticing that this makes ChatGPT
   > really useful and powerful, and of course I say that ChatGPT will
   > still improve much more. Read my following previous thoughts so that
   > you understand my views about it:
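
The attention mechanism the quoted paragraph describes can be illustrated
with a minimal single-head, scaled dot-product self-attention in NumPy. The
token vectors below are invented toy values (a real transformer uses learned
embeddings projected through learned query/key/value matrices, omitted
here); the vector for "it" is deliberately placed closer to "animal" than to
"street", so the softmax attention row for "it" concentrates on "animal".

```python
import numpy as np

def self_attention(X):
    """Single-head scaled dot-product self-attention, identity projections."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                    # pairwise token similarity
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)    # softmax over each row
    return weights @ X, weights                      # context-mixed vectors

tokens = ["animal", "street", "it"]
X = np.array([[1.0, 0.1],    # "animal"
              [0.1, 1.0],    # "street"
              [0.9, 0.2]])   # "it" -- nearer to "animal" by construction
_, W = self_attention(X)
print(dict(zip(tokens, W[2].round(2))))  # "animal" gets the largest weight
```

With learned projections and many heads, this same weighting is how a
transformer can route the representation of "it" toward "animal".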
   >    
   >    
   > More of my philosophy about transformers, about the next GPT-4,
   > about ChatGPT, and more of my thoughts..
   >
   > The capabilities of transformer architectures, as in the GPT of
   > ChatGPT, which is called a Generative Pre-trained Transformer, are
   > truly remarkable, as they allow machine learning models to surpass
   > human reading comprehension and cognitive abilities in many ways.
   > These models are trained on massive amounts of text data, including
   > entire corpora such as the English Wikipedia or much of the
   > internet, which enables them to become highly advanced language
   > models (LMs) with a deep understanding of language and the ability
   > to perform complex predictive analytics based on text analysis. The
   > result is a model that is able to approximate human-level text
   > cognition, or reading, to an exceptional degree - not just simple
   > comprehension, but also the ability to make sophisticated
   > connections and interpretations about the text, because the
   > Transformer network pays "attention" to multiple sentences, enabling
   > it to grasp "context" and "antecedents". These transformer models
   > represent a significant advancement in the field of natural language
   > processing and have the potential to revolutionize how we interact
   > with and understand language.
   >    
   > GPT-4 is rumored to be significantly larger and more powerful than
   > GPT-3, with some claims of up to 170 trillion parameters compared to
   > GPT-3's 175 billion parameters (and GPT-3.5 of the new ChatGPT also
   > has 175 billion parameters), though OpenAI has not confirmed its
   > size. This could allow GPT-4 to process and generate text with
   > greater accuracy and fluency, so with feedback from users, a more
   > powerful GPT-4 model coming up, and training on a substantially
   > larger amount of data, a ChatGPT that uses GPT-4 may "significantly"
   > improve in the future. So I think ChatGPT will still become much
   > more powerful. And I invite you to read my previous thoughts about
   > my experience with the new ChatGPT:
   >    
   >    
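   
As a back-of-envelope check on the parameter counts quoted above: a
decoder-only transformer has roughly 12 * n_layers * d_model**2 weights in
its attention and feed-forward blocks (ignoring embeddings and biases).
Plugging in GPT-3's published shape, 96 layers with a model width of 12288,
lands close to its reported 175 billion parameters. No comparable published
shape exists for GPT-4, so the 170-trillion rumor cannot be checked the
same way.

```python
def approx_params(n_layers, d_model):
    """Rough decoder-only transformer size: attention + MLP weights only."""
    return 12 * n_layers * d_model ** 2

# GPT-3's published configuration: 96 layers, model width 12288.
gpt3 = approx_params(96, 12288)
print(f"GPT-3 estimate: {gpt3 / 1e9:.0f}B parameters")  # ~174B, near the reported 175B
```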
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca