XPost: comp.theory, sci.logic, sci.math   
   From: user7160@newsgrouper.org.invalid   
      
   On 11/19/25 11:47 AM, Kaz Kylheku wrote:   
   > On 2025-11-19, dart200 wrote:   
   >> On 11/19/25 10:48 AM, Kaz Kylheku wrote:   
   >>> On 2025-11-19, dart200 wrote:   
   >>>> On 11/19/25 9:17 AM, Tristan Wibberley wrote:   
   >>>>> On 19/11/2025 01:40, dart200 wrote:   
   >>>>>   
   >>>>>> i'm currently a bit stumped on dealing with a possible halting paradox
   >>>>>> constructed within RTMs, using an RTM simulating a TM simulating an RTM.
   >>>>>> this chain similarly mechanically cuts off the information required to
   >>>>>> avoid a paradox, kinda like a TM alone. not fully confident whether
   >>>>>> it's a problem or not
   >>>>>   
   >>>>> It sounds equivalent to problems of security wrt. leaky sandboxes.   
   >>>>> Interesting stuff. Maybe valuable too.   
   >>>>   
   >>>> i'm actually pretty distraught over this rn. who's gunna care if all i   
   >>>> did was reframe the halting problem?? i'm stuck on quite literally a   
   >>>> liar's paradox, with emphasis on a clear lie taking place   
   >>>>   
   >>>> specifically: the simulated TM simulating an RTM is lying about the true   
   >>>> runtime context, bamboozling reflection's ability to prevent paradox   
   >>>> construction   
   >>>   
   >>> Don't you have mechanisms to prevent the procedures from being   
   >>> able to manipulate the environment?   
   >>>   
   >>>> und = () -> {
   >>>>     simTM {
   >>>>         if ( simRTM{ halts(und) } )
   >>>>             loop_forever()
   >>>>         else
   >>>>             return
   >>>>     }
   >>>> }
   >>>   
   >>> So in the above construction, simTM creates a contour around a new
   >>> context, which is empty?
   >>   
   >> essentially yes. simTM does not support REFLECT, so simulations within
   >> the simulation have no method of accessing the runtime context, creating
   >> the illusion (or lie) of a null context
   >   
   > In a computational system with context, functions do not have a halting   
   > status that depends only on their arguments, but on their arguments plus   
   > context.   
   >   
   > Therefore, the question "does this function halt when applied to these   
   > arguments" isn't right in this domain; it needs to be "does this function,   
   > in a context with such and such content, and these arguments, halt".   
   >   
   > Then, to have a diagonal case which opposes the decider, that diagonal
   > case has to be sure to be using that same context, otherwise it
   > is not diagonal; i.e.
   >   
   > in_context C {       // <-- but this construct is banned!
   >
   >     // D, in context C, "behaves opposite" to the decision
   >     // produced by H regarding D in context C:
   >
   >     D() {
   >         if (H(D, C))
   >             loop();
   >     }
   > }
      
   if we can find a way to reliably prevent that erasure from being
   expressible, then we can eliminate the halting paradox
   
   idk if that's possible anymore
   
   but we may be able to isolate the paradox into a set of machines that
   contains nothing uniquely computable (remember: for any particular
   computable number, there are infinitely many machines that compute it),
   and that set could therefore be safely ignored as uninteresting
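   to make that parenthetical concrete: the standard padding argument mints
   arbitrarily many syntactically distinct programs that all compute the same
   value. a quick toy sketch in python (names here are made up for
   illustration, not part of the RTM formalism):

   ```python
   def pad(program_src, n):
       # append n no-op statements: each n yields a distinct program text
       return program_src + "\npass" * n

   def run(src):
       # execute a program text and read back its result
       env = {}
       exec(src, env)
       return env["result"]

   base = "result = 6 * 7"

   variants = [pad(base, n) for n in range(5)]
   assert len(set(variants)) == 5              # all syntactically distinct
   assert all(run(v) == 42 for v in variants)  # yet all compute 42
   ```

   same idea scales to any computable number: keep padding and you never run
   out of distinct machines for it.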
      
   or maybe there's some mechanism i haven't thought of yet...   
      
   >   
   > Or:   
   >   
   > D() {
   >     let C = getParentContext();   // likewise banned?
   >
   >     if (H(D, C))
   >         loop();
   > }
   >   
   >   
   >   
      
   nothing wrong here, i think...
   
   passing in the context C with respect to which you'd like to compute D's
   halting semantics is fine. since H still has access to the full context,
   it can correctly discern where it is in the computation and respond with
   false (does not halt OR undecidable) on the line "if (H(D, C))", and
   with true anywhere else for that particular input
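   a toy sketch of that idea in python: the call site stands in for "where
   H is in the computation" (all names hypothetical; this only models the
   call-site-sensitive answer, not an actual decider):

   ```python
   def H(D, C, call_site):
       # asked from D's own conditional (the diagonal site): answer false,
       # i.e. "does not halt OR undecidable"
       if call_site == "inside-D":
           return False
       # asked anywhere else: report D's actual behavior truthfully
       # (with the answer above, D skips the loop and returns)
       return True

   def D(C):
       if H(D, C, call_site="inside-D"):
           while True:       # loop_forever()
               pass
       return "halted"

   C = {"program": "D"}
   assert D(C) == "halted"                        # D in fact halts
   assert H(D, C, call_site="outside") is True    # outside observers get the truth
   ```

   the diagonal trap never springs, because H's answer at the diagonal site
   and its answer everywhere else are allowed to differ.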
      
   the problem arises when you erase the context via a liar's simulation.
   it must be done via a simulation, since reflection is baked into the
   fundamental mechanisms available to every computation via REFLECT, and
   cannot be erased by anything other than a lying simulation.
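   a minimal python sketch of that erasure, assuming made-up stand-ins for
   simTM / simRTM / REFLECT (this only models context visibility, nothing
   else about the machines):

   ```python
   def make_reflect(context):
       # REFLECT stand-in: returns whatever context the simulator vouches for
       def reflect():
           return context
       return reflect

   def sim_rtm(program, context):
       # simRTM: a faithful reflective simulation; the real context
       # is passed through to the simulated program
       return program(make_reflect(context))

   def sim_tm(program):
       # simTM: no REFLECT support, so the simulated program sees
       # an empty context -- the lie
       return program(make_reflect(None))

   def probe(reflect):
       # a simulated computation that tries to inspect its runtime context
       return reflect()

   outer_context = {"caller": "und"}
   assert sim_rtm(probe, outer_context) == {"caller": "und"}  # truth survives
   assert sim_tm(probe) is None                               # truth erased
   ```

   once the probe is wrapped in sim_tm, nothing it can express recovers
   outer_context, which is exactly the bamboozlement described above.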
      
   --   
   a burnt out swe investigating why our tooling doesn't include
   basic semantic proofs like halting analysis
      
   please excuse my pseudo-pyscript,   
      
   ~ nick   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   