
Forums before death by AOL, social media and spammers... "We can't have nice things"

   comp.ai.philosophy      Perhaps we should ask SkyNet about this      59,235 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 58,202 of 59,235   
   olcott to Kaz Kylheku   
   Re: Never any actual rebuttal to HHH(DD)   
   29 Oct 25 11:12:27   
   
   XPost: comp.theory   
   From: polcott333@gmail.com   
      
   On 10/29/2025 12:36 AM, Kaz Kylheku wrote:   
   > On 2025-10-29, olcott  wrote:   
   >> On 10/28/2025 9:19 PM, Kaz Kylheku wrote:   
   >>> On 2025-10-29, olcott  wrote:   
   >>>> On 10/28/2025 7:25 PM, Kaz Kylheku wrote:   
   >>>>> Under your system, I don't know whether Y is correct.   
   >>>>>   
   >>>>> Y could be a broken decider that is wrongly deciding D (and /that/   
   >>>>> is why its execution trace differs from X).   
   >>>>>   
   >>>>> Or it could be the case that D is a non-input to Y, in which case Y is   
   >>>>> deemed to be correct because D being a non-input to Y means that D   
   >>>>> denotes non-halting semantics to Y (and /that/ is why its execution   
   >>>>> trace differs from X).   
   >>>>>   
   >>>>> The fact that the execution trace differs doesn't inform.   
   >>>>>   
   >>>>> We need to know the value of is_input(Y, D): we need to /decide/ whether   
   >>>>> D is non-input or input to Y in order to /decide/ whether its rejection   
   >>>>> is correct.   
   >>>>>   
   >>>>   
   >>>> Whatever a correct simulation of an input by
   >>>> a decider shows is the behavior that must be reported on.
   >>>   
   >>> But under your system, if I am a user of deciders, and have been   
   >>> given a decider H which is certified to be correct, I cannot   
   >>> rely on it to decide halting.   
   >>>   
   >>   
   >> When halting is defined correctly:
   >> Does this input specify a sequence of moves that
   >> reach a final halt state?
   >>
   >> and not defined incorrectly (to require something
   >> that is not specified in the input), then this does
   >> overcome the halting problem proof and shows that
   >> the halting problem itself has always been a category
   >> error (Flibble's brilliant term).
   >>   
   >>> I want to know whether D halts, that's all.   
   >>>   
   >>> H says no. It is certified correct under your paradigm,
   >>> so I don't have to suspect that if it is given an /input/
   >>> it will be wrong.
   >>>   
   >>> But: I have no idea whether D is an input to H or a non-input!   
   >>>   
   >>   
   >> That is ridiculous. If it is an argument   
   >> to the decider function then it is an input.   
   >   
   > So how is it supposed to work that an otherwise halting D
   > is a non-halting input to H?
   >   
      
   int H(int (*)());  /* halt decider; its definition is not shown in this message */
   
   int D()
   {
      int Halt_Status = H(D);
      if (Halt_Status)
        HERE: goto HERE;
      return Halt_Status;
   }
      
   H simulates D   
   that calls H(D) to simulate D   
   that calls H(D) to simulate D   
   that calls H(D) to simulate D   
   that calls H(D) to simulate D   
   that calls H(D) to simulate D   
   until H sees this repeating pattern.   
      
   > When the non-halting D is an input to H (which it undeniably is, as you have
   > now decided) D is non-halting.
   >   
      
   That D as an input to H is non-halting is confirmed
   in that D simulated by H cannot possibly reach its own
   "return" statement final halt state. This divides
   non-halting from merely stopping running.
      
   > With respect to H, it's as if the halting D exists in another dimension;   
   > /that/ D is not the input.   
   >   
   > Okay, but anyway ...   
   >   
   > - The decider user has some program P.
   >   
   > - P terminates, but it takes three years on the user's hardware.   
   >   
   > - The user does not know this; they tried running P for weeks,   
   >    months, but it never terminated.   
   >   
   > - The user has H which they have been assured is correct under   
   >    the Olcott Halting Paradigm.   
   >   
   > - The user applies H to P, and H rejects it.
   >   
      
   That would mean that P has specifically targeted
   H in an attempt to thwart a correct assessment.
      
   > - The program P is actually D, but the user doesn't know this.   
   >   
      
   The system works on source code.
      
   > What should the user believe? Does D halt or not?   
   >   
      
   Whether the input P targets the decider H or does not
   target the decider H, input P simulated by decider H is
   always reported on the basis of whether P can reach its
   own final halt state.
      
   > How is the user /not/ deceived if they believe that P doesn't halt?   
   >   
   >>> When H says 0, I have no idea whether it's being judged non-halting   
   >>> as an input, or whether it's being judged as a non-input (whereby   
   >>> either value is the correct answer as far as H is concerned).   
   >>>   
   >>   
   >> Judging by anything besides an input has always
   >> been incorrect. H(D) maps its input to a reject
   >> value on the basis of the behavior that this
   >> argument to H specifies.
   >   
   > But that behavior is only real /as/ an argument to H; it is not the   
   > behavior that the halter-decider customer wants reported on.   
   >   
      
   When what the customer wants and what is in the scope of
   Turing machines differ, the user must face reality. There
   may be practical workarounds, but these are outside the
   scope of the theoretical limits.
      
   > How is the user supposed to know which inputs are handled by their   
   > decider and which are not?   
   >   
      
   Whether the input P targets the decider H or does not
   target the decider H, input P simulated by decider H is
   always reported on the basis of whether P can reach its
   own final halt state.
      
   >>> Again, I just want to know, does D halt?   
   >>>   
   >>   
   >> You might also want a purely mental Turing   
   >> machine to bake you a birthday cake.   
   >   
   > Are you insinuating that the end user for halt deciders is wrong to want   
   > to know whether something halts?   
   >   
      
   What is outside of the scope of all Turing machines is   
   outside of the scope of all Turing machines.   
      
   > And /that's/ how you ultimately refute the halting problem?   
   >   
      
   The halting problem as defined requires something   
   that is outside of the scope of all Turing machines.   
      
   > The standard halting problem and its theorem tells the user   
   > they cannot have a halting algorithm that will decide everything;   
   > stop wanting that!   
   >   
      
   Whether the input P targets the decider H or does not
   target the decider H, input P simulated by decider H is
   always reported on the basis of whether P can reach its
   own final halt state.
      
   > Your paradigm tells the user that the question is wrong, or at least for   
   > some programs, and doesn't tell them which.   
   >   
      
   I am discussing theoretical limits not practical workarounds.   
      
   >>> Under your paradigm, even though I have a certified correct H,   
   >>> I am not informed.   
   >>>   
   >>> Under the standard halting problem, I am not informed because   
   >>> I /don't/ have a certified correct H; it doesn't exist.   
   >>>   
   >>   
   >> The standard halting problem requires behavior   
   >> that is out-of-scope for Turing machines, like   
   >> requiring that they bake birthday cakes.   
   >   
   > But what changes if we simply /stop requiring/ that behavior?   
   >   
      
   Whether the input P targets the decider H or does not
   target the decider H, input P simulated by decider H is
   always reported on the basis of whether P can reach its
   own final halt state.
      
   >>> How am I better off in your paradigm?   
   >>   
   >> In my paradigm you face reality rather than   
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca