
Forums before death by AOL, social media and spammers... "We can't have nice things"

   alt.cyberpunk.tech      Cyberpunks LOVE making shit complicated      1,115 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 714 of 1,115   
   thule to All   
   Re: Giving AGI developers at target to h   
   27 Oct 25 21:16:49   
   
   From: thule@thule.invalid   
      
   On 10/27/25 8:08 PM, an0n wrote:   
   > On 10/28/25 2:56 AM, thule wrote:   
   >> i'm not one of those ai-hating redditors; i host a few small models on   
   >> my home network and use them regularly.  i'm unqualified but i have   
   >> read a few computational neuroscience textbooks and i'm convinced llms   
   >> cannot become agi.  a fundamental limitation is their lack of world   
   >> models, and some models now work around this with chain of thought,   
   >> but it's not a true solution.  while general intelligence is defined   
   >> as whatever humans can do, i doubt there's such a thing as truly
   >> "general" - i.e. opposite of narrow - intelligence; indeed, the brain   
   >> is a collection of specialised regions bootstrapped together, with the   
   >> cortex and its canonical circuit being the only sort of general-   
   >> purpose structure i'm aware of.   
   >   
   > My guess is that OpenAI and others will be trying to create something   
   > that works in a very similar way to the human brain when replicating AGI   
   > behavior.   
   >   
   > A single monolithic LLM is unlikely to solve the problem of AGI, but a
   > network of LLMs - each attached to its own set of long-term memories
   > and working together - might get closer, especially with orders of
   > magnitude more compute. It's the arbitrator "function" that chooses
   > the path (akin to subconscious and conscious decision-making) that
   > complicates things.
   >   
   > Not saying it's easy, or even that close, but I really don't think it's   
   > as unattainable as it might seem. Think Kryten or Nexus 6; start with   
   > rules-based directives and evolve to a more general understanding of
   > reality through lived experience.   
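   a toy sketch of that arbitrator idea, before i disagree with it - the
   specialists are stubbed as plain functions and the routing rule is a
   made-up keyword match, so this is a cartoon of the architecture, not
   anything openai actually does:

   ```python
   # hypothetical sketch: several specialist "models", each with its own
   # long-term memory store, and an arbitrator that routes each query.

   def math_expert(query, memory):
       memory.append(query)            # each expert accumulates its own memory
       return f"math answer to: {query}"

   def code_expert(query, memory):
       memory.append(query)
       return f"code answer to: {query}"

   EXPERTS = {
       "math": (math_expert, []),      # (model, its private memory store)
       "code": (code_expert, []),
   }

   def arbitrate(query):
       """crude keyword router standing in for the 'subconscious' choice
       of which specialist handles a query."""
       key = "math" if any(w in query for w in ("sum", "integral")) else "code"
       model, memory = EXPERTS[key]
       return model(query, memory)

   print(arbitrate("what is the sum of 1..10?"))  # routed to math_expert
   ```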
      
   i subscribe to the idea that whatever the brain is doing is somewhat   
   close to optimal, so when people develop completely different ways of   
   achieving what the brain does, i get sceptical.   
      
   llms do not work at all like the brain.  for example, backpropagation is   
   not biologically plausible, and plausible alternatives - e.g. GeneRec,   
   part of Leabra - are unlikely to mirror what is done by the brain.   
      
   a network of llms beyond what mixture of experts already does could be   
   interesting, but until we have large spiking models with at least   
   2-point neurons - like our pyramidal neurons, which integrate at both
   apical and basal dendrites, allowing context-aware function - i won't be   
   expecting agi soon.   
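   a toy illustration of what i mean by context-aware function - the
   multiplicative apical amplification below is a cartoon of the idea,
   not a real neuron model, and all the weights are invented:

   ```python
   import numpy as np

   def sigmoid(x):
       return 1.0 / (1.0 + np.exp(-x))

   def two_point_unit(basal_in, apical_in, w_basal, w_apical):
       """toy two-compartment unit: the basal (feedforward) drive sets
       the response, and the apical (context) drive amplifies it
       multiplicatively - same input, different output depending on
       context."""
       basal = sigmoid(np.dot(w_basal, basal_in))     # feedforward evidence
       apical = sigmoid(np.dot(w_apical, apical_in))  # contextual signal
       return basal * (1.0 + apical)

   ff  = np.array([0.9, 0.1])   # feedforward input
   ctx = np.array([1.0, 1.0])   # matching context
   wb  = np.array([1.5, -0.5])
   wa  = np.array([0.8, 0.8])

   with_ctx    = two_point_unit(ff, ctx, wb, wa)
   without_ctx = two_point_unit(ff, np.zeros(2), wb, wa)
   print(with_ctx > without_ctx)  # prints True: context boosts the response
   ```

   a standard point neuron sums everything into one scalar, so it can't
   separate "what the input is" from "whether it matters right now".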
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca