
Forums before their death by AOL, social media and spammers... "We can't have nice things"

   sci.logic      Logic -- math, philosophy & computational      262,912 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 262,085 of 262,912   
   olcott to Python   
   Re: D correctly simulated by H proved fo   
   21 Dec 25 20:57:30   
   
   XPost: sci.math, comp.theory, comp.ai.philosophy   
   From: polcott333@gmail.com   
      
   On 12/21/2025 8:25 PM, Python wrote:   
   > On 22/12/2025 at 03:08, olcott wrote:   
   >> On 12/21/2025 7:35 PM, Mr Flibble wrote:   
   >>> On Sun, 21 Dec 2025 17:19:50 -0600, olcott wrote:   
   >>>   
   >>>> On 6/12/2024 11:50 AM, olcott wrote:   
   >>>>>   
   >>>>> When we compute the mapping from the input to H(D,D), this must   
   >>>>> apply a set of finite string transformation rules (specified by the   
   >>>>> semantics of the x86 language) to this input.   
   >>>>>   
   >>>>>   
   >>>> The above is my first use of this term as applied to a halt decider.   
   >>>>   
   >>>> My first documented use of the term "finite string transformation   
   >>>> rules"   
   >>>> https://groups.google.com/g/comp.theory/c/TFXhleKnHmY/m/lqhDVnvUBgAJ   
   >>>>   
   >>>> *This is the basis for my unique definition of a generic decider*   
   >>>>   
   >>>> Deciders: Transform finite string inputs by finite string   
   >>>> transformation rules into {Accept, Reject} values.   
   >>>   
   >>> D halts, H is not a halt decider.   
   >>>   
   >>> /Flibble   
   >>>   
   >>   
   >> Turing machine deciders: Transform finite string   
   >> inputs by finite string transformation rules into   
   >> {Accept, Reject} values.   
   >>   
   >> It turns out that, on the basis of the above   
   >> definition and other standard definitions, I have   
   >> proved that the halting problem has always been   
   >> fundamentally incorrect.   
   >>   
   >> So far only ChatGPT, Claude AI and Grok totally agree   
   >> that I have completely proved my point on that basis.   
   >   
   > LLMs are a deadly poison for cranks of your kind. Even OpenAI recognized   
   > this.   
   >   
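
For concreteness, a decider in the sense quoted above (a total function that applies finite string transformation rules to any input string and always yields one of {Accept, Reject}) can be illustrated with a minimal sketch; parity of '1' bits is an arbitrary example property chosen here, not anything claimed in the thread:

```python
def parity_decider(s: str) -> str:
    """A total decider: maps every finite input string to Accept or Reject.
    The decided property (even number of '1' bits) is an arbitrary example."""
    if not set(s) <= {"0", "1"}:
        return "Reject"          # not a well-formed binary string
    ones = s.count("1")
    return "Accept" if ones % 2 == 0 else "Reject"

print(parity_decider("1010"))    # Accept: two '1' bits
print(parity_decider("111"))     # Reject: odd number of '1' bits
```

Unlike this trivial property, halting is the property for which no such total decider exists.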
      
   *Not in this specific case*   
   Whatever conclusion is derived through correct   
   semantic entailment from definitions is a necessary   
   consequence of these definitions.   
      
   The only issues that I have had with them are:   
   (a) They have to be reminded to pay closer attention   
   to exactly what was said.   
      
   (b) They have to be reminded to form only conclusions   
   that are semantically entailed from definitions.   
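
As an aside for readers: Flibble's objection above ("D halts, H is not a halt decider") is the classic diagonal argument, which can be sketched in a few lines of Python. The `H` below is a toy stand-in (no total halt decider exists); the names merely follow the thread, and the construction shown is the standard textbook one, not a reconstruction of olcott's H:

```python
def make_D(H):
    """Given a claimed halt decider H(program, input) -> bool,
    build the diagonal program D that H must answer wrongly on (D, D)."""
    def D(x):
        if H(D, x):          # H predicts "D(x) halts" ...
            while True:      # ... so D loops forever instead
                pass
        return None          # H predicts "D(x) loops", so D halts
    return D

def H(program, arg):
    # Toy stand-in: claims every program halts on every input.
    # Any total H must be wrong somewhere; this one is wrong on (D, D).
    return True

D = make_D(H)
print(H(D, D))   # True -- yet by construction D(D) would loop forever
```

Whatever answer a total H gives for (D, D), D does the opposite, which is why no H can decide halting for all inputs.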
      
   --   
   Copyright 2025 Olcott

              My 28-year goal has been to make
       "true on the basis of meaning expressed in language"
       reliably computable.

              This required establishing a new foundation
              --- SoupGate-Win32 v1.05        * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca