XPost: comp.theory, sci.logic, sci.math   
   From: polcott333@gmail.com   
      
   On 10/14/2025 10:34 PM, Kaz Kylheku wrote:   
   > On 2025-10-15, olcott wrote:   
   >> On 10/14/2025 9:46 PM, Kaz Kylheku wrote:   
   >>> On 2025-10-15, olcott wrote:   
   >>>> 5. In short   
   >>>>   
   >>>> The halting problem as usually formalized is syntactically consistent   
   >>>> only because it pretends that U(p) is well-defined for every p.   
   >>>>   
   >>>> If you interpret the definitions semantically — as saying that   
   >>>> U(p) should simulate the behavior   
   >>>   
   >>> ... then you're making a grievous mistake. The halting function doesn't   
   >>> stipulate simulation.   
   >>>   
   >>   
   >> Nonetheless, it is a definitely reliable way to
   >> discern the actual behavior that the actual input
   >> actually specifies.
   >   
   > No, it isn't. When the input specifies halting behavior   
   > then we know that simulation will terminate in a finite number   
   > of steps. In that case we discern that the input has terminated.   
   >   
      
   When the semantics of the language specify
   that, when DD calls HHH(DD), HHH must
   simulate an instance of itself simulating
   DD, ChatGPT knows that this cannot simply
   be ignored.
      
   This is the thing that all five LLM systems   
   immediately figured out on their own.   
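   The diagonal shape being argued about can be sketched in C. This is a
   hedged illustration only: the HHH below is a hypothetical stub that
   always answers "does not halt," standing in for a simulating decider,
   so that the self-referential structure of DD is visible and runnable.

   ```c
   #include <stdio.h>

   /* Hypothetical decider stub for illustration: it simply guesses 0
      ("does not halt"). In the construction under discussion, HHH would
      instead simulate its input. */
   int HHH(int (*p)(void)) {
       (void)p;
       return 0;
   }

   /* The diagonal program: do the opposite of whatever HHH predicts. */
   int DD(void) {
       if (HHH(DD))      /* HHH says DD halts ...        */
           for (;;) ;    /* ... so loop forever          */
       return 0;         /* HHH says DD loops, so halt   */
   }

   int main(void) {
       /* With this stub, HHH(DD) == 0, so DD() returns at once:
          the stub's "does not halt" verdict is thereby refuted. */
       printf("HHH(DD) = %d\n", HHH(DD));
       printf("DD() returned %d\n", DD());
       return 0;
   }
   ```

   Whatever fixed verdict HHH is built to give, DD is constructed to do
   the opposite, which is the standard undecidability argument both
   posters are circling.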
      
   > When the input does not terminate, simulation does not inform   
   > about this.   
   >   
   > No matter how many steps of the simulation have occurred,   
   > there are always more steps, and we have no idea whether   
   > termination is coming.   
   >   
   > In other words, simulation is not a halting decision algorithm.   
   >   
   > Exhaustive simulation is what we must desperately avoid   
   > if we are to discern the halting behavior that   
   > the actual input specifies.   
   >   
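   The asymmetry described above can be shown with a toy bounded
   simulator. Everything here is invented for illustration: the "machine"
   is just an integer state that decrements each step and halts when it
   reaches zero, so a negative start never halts. The simulator can
   confirm halting within its step budget, but can never confirm
   non-halting.

   ```c
   #include <stdio.h>

   enum verdict { HALTS, UNKNOWN };

   /* Toy machine (hypothetical): one step decrements the state; the
      machine halts when the state hits exactly zero. A negative start
      therefore runs forever, sinking ever lower. */
   static enum verdict bounded_sim(long start, long max_steps) {
       long s = start;
       for (long i = 0; i < max_steps; i++) {
           if (s == 0)
               return HALTS;   /* simulation observed termination */
           s = s - 1;
       }
       return UNKNOWN;         /* budget spent: no verdict either way */
   }

   int main(void) {
       /* A halting input is confirmed once it terminates in budget. */
       printf("start=5:  %s\n",
              bounded_sim(5, 1000) == HALTS ? "HALTS" : "UNKNOWN");
       /* A non-halting input only ever yields UNKNOWN, however large
          the budget is made. */
       printf("start=-1: %s\n",
              bounded_sim(-1, 1000) == HALTS ? "HALTS" : "UNKNOWN");
       return 0;
   }
   ```

   Raising max_steps never turns the second answer into "does not halt";
   it only moves the point at which the simulator gives up.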
   > You are really not versed in the undergraduate rudiments
   > of this problem, are you?
   >   
   >> The system that the halting problem assumes is   
   >> logically incoherent when ...   
   >   
   > when it is assumed that halting can be decided; but that inconsistency is
   > resolved by concluding that halting is not decidable.
   >   
   > ... when you're a crazy crank on comp.theory, otherwise all good.   
   >   
   >> "You’re making a sharper claim now — that even   
   >> as mathematics, the halting problem’s assumed   
   >> system collapses when you take its own definitions   
   >> seriously, without ignoring what they imply."   
   >>   
   >   
   > I don't know who is supposed to be saying this, or to whom
   > (maybe one of your inner voices to the other? or an AI?).
   >   
   > Whoever is making this "sharper claim" is an absolute dullard.   
   >   
   > The halting problem's assumed system does positively /not/   
   > collapse when you take its definitions seriously,   
   > and without ignoring what they imply.   
   >   
   > (But when have you ever done that, come to think of it.)   
      
      
   --   
   Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius   
   hits a target no one else can see." Arthur Schopenhauer   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   