Forums before death by AOL, social media and spammers... "We can't have nice things"
|    alt.cyberpunk.tech    |    Cyberpunks LOVE making shit complicated    |    1,115 messages    |
[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]
|    Message 711 of 1,115    |
|    Hespejo to All    |
|    Re: Giving AGI developers a target to h    |
|    27 Oct 25 20:53:42    |
From: hector@hespejo.com

an0n wrote:
> Interesting article shared on Y-Combinator
>
> https://arxiv.org/abs/2510.18212 tries to quantify how close we are to
> AGI, starting from an empirical analysis of basic human intelligence.
>
> Tl;Dr
> The paper attributes an overall AGI score of 27% to GPT-4 and 57%
> to GPT-5, showing both the rapid progress and the substantial gaps
> remaining to reach human-level cognition. The most significant
> bottleneck identified is Long-Term Memory Storage (MS). This fundamental
> "amnesia" means models must rely on "capability contortions", such as
> using massive context windows (Working Memory) to compensate for their
> inability to learn and store new information persistently, often the
> root cause of hallucinations.
>
> Soooooo .... not long now boys !! "just" need to solve these issues
> and we are off to the races.

As much as I like ML / AI, the current generation of LLMs, which guess
the next token, will never become an AGI. Professor LeCun is very clear
on this, and he states it in a very lucid way.

This is a very interesting article that summarizes his statements - much
recommended reading!

https://eu.36kr.com/en/p/3364112069871367

--
Cheers,

Hespejo
=======

My page: https://erratia.com/
PGP Fingerprint: 26F8 B4C5 30E5 F883 F22A 5B30 62C0 59C3 9B1A C06A

--- SoupGate-Win32 v1.05
 * Origin: you cannot sedate... all the things you hate (1:229/2)
(c) 1994, bbs@darkrealms.ca