
   comp.ai.philosophy      Perhaps we should ask SkyNet about this      59,235 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 58,198 of 59,235   
   Kaz Kylheku to olcott   
   Re: Never any actual rebuttal to HHH(DD)   
   29 Oct 25 05:36:13   
   
   XPost: comp.theory   
   From: 643-408-1753@kylheku.com   
      
   On 2025-10-29, olcott  wrote:   
   > On 10/28/2025 9:19 PM, Kaz Kylheku wrote:   
   >> On 2025-10-29, olcott  wrote:   
   >>> On 10/28/2025 7:25 PM, Kaz Kylheku wrote:   
   >>>> Under your system, I don't know whether Y is correct.   
   >>>>   
   >>>> Y could be a broken decider that is wrongly deciding D (and /that/   
   >>>> is why its execution trace differs from X).   
   >>>>   
   >>>> Or it could be the case that D is a non-input to Y, in which case Y is   
   >>>> deemed to be correct because D being a non-input to Y means that D   
   >>>> denotes non-halting semantics to Y (and /that/ is why its execution   
   >>>> trace differs from X).   
   >>>>   
   >>>> The fact that the execution trace differs doesn't inform.   
   >>>>   
   >>>> We need to know the value of is_input(Y, D): we need to /decide/ whether   
   >>>> D is non-input or input to Y in order to /decide/ whether its rejection   
   >>>> is correct.   
   >>>>   
   >>>   
   >>> Whatever is a correct simulation of an input by   
   >>> a decider is the behavior that must be reported on.   
   >>   
   >> But under your system, if I am a user of deciders, and have been   
   >> given a decider H which is certified to be correct, I cannot   
   >> rely on it to decide halting.   
   >>   
   >   
   > When halting is defined correctly:   
   > Does this input specify a sequence of moves that   
   > reach a final halt state?   
   >   
   > and not defined incorrectly: to require something   
   > that is not specified in the input then this does   
   > overcome the halting problem proof and shows that   
   > the halting problem itself has always been a category   
   > error. (Flibble's brilliant term).   
   >   
   >> I want to know whether D halts, that's all.   
   >>   
   >> H says no. It is certified correct under your paradigm,
   >> so I don't have to suspect that if it is given an /input/
   >> it will be wrong.   
   >>   
   >> But: I have no idea whether D is an input to H or a non-input!   
   >>   
   >   
   > That is ridiculous. If it is an argument   
   > to the decider function then it is an input.   
      
   So this is how it's supposed to work: an otherwise halting D
   is a non-halting input to H.
      
   When the non-halting D is an input to H (which it undeniably is,
   as you have now decided) D is non-halting.
      
   With respect to H, it's as if the halting D exists in another dimension;   
   /that/ D is not the input.   
      
   Okay, but anyway ...   
      
   - The decider user has some program P.
      
   - P terminates, but it takes three years on the user's hardware.   
      
   - The user does not know this; they tried running P for weeks,   
     months, but it never terminated.   
      
   - The user has H which they have been assured is correct under   
     the Olcott Halting Paradigm.   
      
   - The user applies H to P, and H rejects it.
      
   - The program P is actually D, but the user doesn't know this.   
      
   What should the user believe? Does D halt or not?   
      
   How is the user /not/ deceived if they believe that P doesn't halt?   
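
   (For concreteness, the D in this scenario is the usual diagonal
   construction. A minimal Python sketch; the H here is a stand-in
   stub with a fixed answer, not anyone's actual decider:)

```python
# Stand-in for a halt decider: H(p, i) claims to report whether
# calling p(i) halts (1) or not (0). This stub is hypothetical;
# it just fixes an answer so the diagonal effect is visible.
def H(p, i):
    return 0  # claims "does not halt"

# The diagonal case: D does the opposite of whatever H predicts
# about D applied to itself.
def D(p):
    if H(p, p):
        while True:  # H said "halts" -> loop forever
            pass
    # H said "does not halt" -> return (halt) immediately

# H rejects D(D) as non-halting...
print(H(D, D))       # -> 0
# ...yet D(D) plainly halts when executed:
D(D)
print("D(D) halted")
```

   Whatever fixed answer the stub gives, executing D(D) does the
   opposite -- which is exactly the predicament of the user holding P.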
      
   >> When H says 0, I have no idea whether it's being judged non-halting   
   >> as an input, or whether it's being judged as a non-input (whereby   
   >> either value is the correct answer as far as H is concerned).   
   >>   
   >   
   > Judging by anything besides an input has always
   > been incorrect. H(D) maps its input to a reject   
   > value on the basis of the behavior that this   
   > argument to H specifies.   
      
   But that behavior is only real /as/ an argument to H; it is not the   
   behavior that the halter-decider customer wants reported on.   
      
   How is the user supposed to know which inputs are handled by their   
   decider and which are not?   
      
   >> Again, I just want to know, does D halt?   
   >>   
   >   
   > You might also want a purely mental Turing   
   > machine to bake you a birthday cake.   
      
   Are you insinuating that the end user for halt deciders is wrong to want   
   to know whether something halts?   
      
   And /that's/ how you ultimately refute the halting problem?   
      
   The standard halting problem and its theorem tell the user
   they cannot have a halting algorithm that will decide everything;
   stop wanting that!
      
   Your paradigm tells the user that the question itself is wrong,
   at least for some programs, and doesn't tell them which ones.
      
   >> Under your paradigm, even though I have a certified correct H,   
   >> I am not informed.   
   >>   
   >> Under the standard halting problem, I am not informed because   
   >> I /don't/ have a certified correct H; it doesn't exist.   
   >>   
   >   
   > The standard halting problem requires behavior   
   > that is out-of-scope for Turing machines, like   
   > requiring that they bake birthday cakes.   
      
   But what changes if we simply /stop requiring/ that behavior?   
      
   >> How am I better off in your paradigm?   
   >   
   > In my paradigm you face reality rather than   
   > ignoring it.   
      
   So does that reality provide an algorithm to decide the   
   halting of any machine, or not?   
      
   >> Do I use 10 different certified deciders, and take a majority vote?   
   >>   
   >   
   > sum(3,4) computes the sum of 3+4 even if   
   > the sum of 5+6 is required from sum(3,4).   
   >   
   > Whatever behavior is measured by the decider's   
   > simulation of its input *is* the behavior that   
   > it must report on.   
      
   That's the internally focused discussion. How are you
   solving the end user's demand for a halting decision?
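
   (A minimal Python sketch of the 10-decider majority vote mentioned
   above, with hypothetical stand-in deciders. The point is that the
   combined function is itself just one more decider, so a diagonal
   test case can target it too:)

```python
# Combining several (hypothetical) certified deciders by majority
# vote yields one new decider -- so the diagonal construction
# applies to the combination just as it does to any single decider.
def make_majority(deciders):
    def majority(p, i):
        votes = sum(d(p, i) for d in deciders)
        return 1 if votes * 2 > len(deciders) else 0
    return majority

# Toy deciders with fixed answers (stand-ins, not real deciders):
d_yes = lambda p, i: 1
d_no = lambda p, i: 0
H10 = make_majority([d_yes, d_yes, d_no])

# The diagonal case targeting the combined decider:
def D(p):
    if H10(p, p):
        while True:  # H10 said "halts" -> loop forever
            pass

print(H10(D, D))  # -> 1, yet D(D) would then loop forever
```

   The majority function answers 1 for D, so executing D(D) would
   loop forever: the vote inherits the same diagonal vulnerability.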
      
   >   
   >> But the function which combines 10 deciders into a majority vote   
   >> is itself a decider! And that 10-majority-decider function can be   
   >> targeted by a diagonal test case ... and such a test case is now   
   >> a non-input.  See?   
   >>   
   >>>> You are not looking at it from the perspective of a /consumer/ of a   
   >>>> /decider product/ actually trying to use deciders and trust their   
   >>>> answer.   
   >>>   
   >>> Whatever is a correct simulation of an input by   
   >>> a decider is the behavior that must be reported on.   
   >>   
   >> But how does the user interpret that result?   
   >   
   > The input to this decider specifies a sequence
   > that cannot possibly reach its final halt state.   
      
   But you have inputs for which that is reported, which   
   readily halt when they are executed.   
      
   Don't you think the user wants to know /that/, and not what happens   
   under the decider (if that is different)?   
      
   >> The user just wants to know, does this thing halt or not?   
   >   
   > The user may equally want a purely imaginary   
   > Turing machine to bake a birthday cake.   
   >   
   >> How does it answer the user's question?   
   >   
   > As far as theoretical limitations go I have addressed   
   > them.   
      
   By address, do you mean remove?   
      
   > Practical workarounds can be addressed after I   
   > am published and my work is accepted.   
      
   Workarounds for what? You've left something unsolved in halting; what is   
   that?   
      
   --   
   TXR Programming Language: http://nongnu.org/txr   
   Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal   
   Mastodon: @Kazinator@mstdn.ca   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca