
Forums before death by AOL, social media and spammers... "We can't have nice things"

   comp.ai.philosophy      Perhaps we should ask SkyNet about this      59,235 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 58,845 of 59,235   
   Tristan Wibberley to olcott   
   Re: By what process can we trust the ana   
   27 Dec 25 17:24:13   
   
   From: tristan.wibberley+netnews2@alumni.manchester.ac.uk   
      
   On 27/12/2025 04:59, olcott wrote:   
   > On 12/26/2025 10:50 PM, Tristan Wibberley wrote:   
   >> On 27/12/2025 03:19, olcott wrote:   
   >>> Whenever it can be verified that correct semantic   
   >>> entailment is applied to the semantic meaning of   
   >>> expressions of language then what-so-ever conclusion   
   >>> is derived is a necessary consequence of this   
   >>> expression of language.   
   >>   
   >> "the" applied to a continuum. How do you trust a system that does such a   
   >> verification? It's related to LLMs so closely itself.   
   >>   
   >   
   > It is not how you trust such a system that does   
   > such a verification. You yourself verify that   
   > the semantic entailment is correct.   
   >   
   > That it can show every tiny step and paraphrase   
   > its understanding of these steps shows that it   
   > has the actual equivalent of human understanding.   
   >   
      
   No. It shows that it has statistics on a population of utterances.
      
   A human student who understands infers new utterances not covered by
   the measurements. They rely on mental synergy with the professor (using
   doctrine as a reference, and the professor's capability to understand,
   to communicate about variations). They gamble their wealth, health, and
   life on the knowledge and thrive (don't fail) by its topical
   effect -- but not by its market or political effect, except when
   they're the topic.
      
   If you give a human student so many utterances that they can just pick
   out new paths likely to be accepted by the professor, you say they're a
   Chinese room, not an understander. You rely on the human inability to
   do that well in order to detect that they're doing that instead of
   using a model of the system, of the professor, and of the language and
   its nuances as they pertain to the professor's possible internal models
   of the system.
      
   A more difficult corner is when the topic is market effects and the
   politics of "nudging", because that's really all they can do and thrive
   by. However, there's a big problem (which I'd like to know more about,
   academically): does an LLM act and claim congruent knowledge by way of
   its Chinese room, or does it derive its acts from the knowledge? Humans
   who don't understand do the former, and they also thrive and explain;
   we're trained to do it as children.
      
   An additional wrinkle is that humans who don't understand forget, or
   else they make mistakes under questioning even when motivated not to
   act like an LLM. Small models derived from large ones turn out to have
   forgotten, and then they make mistakes under questioning. Questioning
   can trigger sycophancy, a strategy to avoid mistake detection wherein
   the questioner's misunderstanding is mirrored.
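   For context, the "small models derived from large ones" above are
   typically produced by knowledge distillation: the student is trained to
   reproduce the teacher's output distribution -- its statistics on
   utterances -- rather than whatever the teacher "knew". A minimal sketch
   (function names and the temperature value are illustrative, not from
   any particular system):

```python
import math

def softmax(logits, temperature=1.0):
    # Convert raw logits into a probability distribution; a higher
    # temperature flattens the distribution, exposing the teacher's
    # "dark knowledge" about less-likely tokens.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence from the teacher's softened distribution to the
    # student's. Training minimises this, so the student learns to
    # mimic the teacher's statistics, not the teacher's reasons.
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's attempt
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

   A student that exactly matches the teacher has loss zero; anything it
   has "forgotten" shows up as residual divergence, which questioning can
   then expose.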
      
   I've seen LLMs appear to understand topics as I'm learning them,
   without being sycophantic, but when I pushed into my own inferences in
   the topic I felt they didn't understand and were merely repeating worn
   paths. I think that was due to a lack of mental synergy, and instead to
   training to emulate large corpora taken from (a) those who didn't
   really understand and (b) those who understood but exerted
   inappropriate control over the population around them so as to be
   perceived as advantaging their position.
      
   An emulator doesn't understand; it's just a model of a physical
   phenomenon. A more interesting question is whether the population of
   LLM creators, with their body of compute resources, understands as a
   single entity. That, perhaps, does; but when it doesn't seem to
   demonstrate that it does, is it merely understanding the population of
   humans and using that understanding on us?
      
   --   
   Tristan Wibberley   
      
   The message body is Copyright (C) 2025 Tristan Wibberley except   
   citations and quotations noted. All Rights Reserved except that you may,   
   of course, cite it academically giving credit to me, distribute it   
   verbatim as part of a usenet system or its archives, and use it to   
   promote my greatness and general superiority without misrepresentation   
   of my opinions other than my opinion of my greatness and general   
   superiority which you _may_ misrepresent. You definitely MAY NOT train   
   any production AI system with it but you may train experimental AI that   
   will only be used for evaluation of the AI methods it implements.   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca