[ home | bbs | files | messages ]

Forums before death by AOL, social media and spammers... "We can't have nice things"

   comp.ai.philosophy      Perhaps we should ask SkyNet about this      59,252 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 59,251 of 59,252   
   olcott to Tristan Wibberley   
   Re: The proper way to use LLMs to aid pr   
   06 Mar 26 21:53:55   
   
   XPost: sci.logic, comp.theory, sci.math   
   From: polcott333@gmail.com   
      
   On 3/6/2026 9:24 PM, Tristan Wibberley wrote:   
   > On 06/03/2026 14:36, olcott wrote:   
   >> On 3/6/2026 3:06 AM, Mikko wrote:   
   >   
    >>> Typical LLMs don't have deep knowledge. They can handle large amounts
    >>> of knowledge, but only superficially.
    >>>
    >>> LLMs are worthless as validators. An automatic proof checker is good
    >>> for validation, but only if it is itself sufficiently validated.
   >>>   
   >>   
    >> You have been empirically proven incorrect, at least as far
    >> as the philosophical foundations of math, computer science,
    >> logic, and linguistics go. Three years ago all of these systems
    >> were quite stupid. After 300 conversations averaging 50 pages
    >> each, I can attest that they have vastly improved. If we think
    >> of them as search engines for ideas, that is their best use.
   >   
   >   
    > It might be that they have been fitted to your conversations. You cannot
    > infer that they reason more based on increased user satisfaction. It is
    > difficult to draw such a conclusion even from multiple user evaluations,
    > because users share common cultural factors: they will make similar
    > conversation, and the LLM may learn from one to fool the other.
   >   
      
    I have conversed with them twelve hours a day, every day, for three
    months. I have mostly talked only about things that can be verified
    entirely on the basis of the meaning of the words. The biggest
    difference is that the size of the context window has vastly increased.
      
    In the last three months I have been able to anchor my 28 years of
    primary research in a few peer-reviewed papers. This one is the most
    important:
   https://link.springer.com/article/10.1007/s11245-011-9107-6   
      
   I went all the way to my University to get the full paper.   
    My 28 years of primary research augments the notions of the
    above paper and proof-theoretic semantics in ways that seem
    to be their obvious next steps.
      
   > GPT-5 mini performed poorly for me and did not demonstrate reasoning. It   
   > demonstrated saying things that people say after someone else says   
   > things he says. I assume GPT-5 mini has the reasoning mechanism of other   
   > GPT-5 variants but is smaller and therefore less fit to me. Frankly it   
    > felt like a 1990s game but with more data and something like texture
    > transferral but for language (creation of poetry from a conversation,
   > for example).   
   >   
      
   Copilot Think Deeper,   
   Claude Opus 4.6 Extended,   
   Gemini Pro,   
   Grok Expert,   
   Google NotebookLM   
      
    All of them demonstrate deep understanding of the
    technical subjects of math, computation, logic, and
    linguistics, as well as all of their alternative
    philosophical foundations. They conclusively prove
    that these understandings are correct by anchoring
    them in foundational peer-reviewed papers.
      
   > It is similar to this   
   > this usenetsplit segment   
   > but   
   > at a different scale.   
   >   
   > I wonder if it would spontaneously form a new poetic format like that   
   > and how it would feel to read.   
   >   
   > I hope a government with capable weapons never uses it. cf. Ross   
   > Finlayson's recently stated logical principles.   
   >   
      
      
   --   
   Copyright 2026 Olcott

              My 28-year goal has been to make
       "true on the basis of meaning expressed in language"
       reliably computable for the entire body of knowledge.

              This required establishing a new foundation
              --- SoupGate-Win32 v1.05        * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca