Forums before death by AOL, social media and spammers... "We can't have nice things"
|    comp.ai.philosophy    |    Perhaps we should ask SkyNet about this    |    59,235 messages    |
[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]
|    Message 57,862 of 59,235    |
|    Fred. Zwarts to All    |
|    Re: Who is telling the truth here? HHH(D    |
|    08 Aug 25 09:51:32    |
XPost: comp.theory, sci.logic
From: F.Zwarts@HetNet.nl

On 07 Aug 2025 at 15:44, olcott wrote:
> On 8/7/2025 4:14 AM, Fred. Zwarts wrote:
>> On 07 Aug 2025 at 05:24, olcott wrote:
>>> On 8/6/2025 9:27 PM, Richard Damon wrote:
>>>> On 8/6/25 7:53 AM, olcott wrote:
>>>>> On 8/6/2025 5:54 AM, Richard Damon wrote:
>>>>>> On 8/5/25 11:47 PM, olcott wrote:
>>>>>>>
>>>>>>> It corrects the error of the requirement that
>>>>>>> a halt decider reports on its own behavior.
>>>>>>
>>>>>> But it isn't an error.
>>>>>>
>>>>>> Halt Deciders need to answer about *ANY* program, and they are
>>>>>> programs.
>>>>>>
>>>>>>>
>>>>>>> "The contradiction in Linz's (or Turing's) self-referential
>>>>>>> halting construction only appears if one insists that the
>>>>>>> machine can and must decide on its own behavior, which is
>>>>>>> neither possible nor required."
>>>>>>>
>>>>>>> https://chatgpt.com/share/6890ee5a-52bc-8011-852e-3d9f97bcfbd8
>>>>>>>
>>>>>>
>>>>>> Because you lied to it and said it was an error.
>>>>>>
>>>>>
>>>>> The above paragraph is proved true by the meaning of its words.
>>>>> When we ask a Turing machine to report on its own behavior we
>>>>> are assuming that its Turing Machine description is an accurate
>>>>> proxy for this behavior. When there are exceptions to this rule
>>>>> then we cannot possibly ask a Turing machine about its own behavior.
>>>>> This is easily corrected:
>>>>
>>>> But the existence of UTMs means that it IS possible to make a
>>>> perfect proxy, as the UTM can completely recreate the behavior of
>>>> ANY machine from its Turing Machine Description.
>>>>
>>>> All you are doing is admitting that you think errors and lies are ok.
>>>>
>>>
>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.UTM ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy,
>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.UTM ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
>>>
>>> After a finite number of correct simulations the UTM
>>> will reach its final state yet no simulated UTM ever
>>> reaches its own final state.
>>>
>>
>> Indeed, that is the proof that such a simulation fails.
>> No simulator exists that is able to simulate correctly all halting
>> programs up to the end, because it fails to simulate correctly code
>> that resembles its own code.
>
> *You are confusing two distinctly different notions*

As usual, incorrect claims without evidence.

> No simulator can correctly simulate an input that has a
> pathological relationship to itself until this simulated
> input reaches its simulated final halt state, because this
> simulated final halt state is unreachable by this correctly
> simulated input.
>

Indeed, such a simulator must fail, but not because the final halt state
does not exist, but because the simulator is unable to reach the final
halt state specified in the input, whereas other simulators prove that
such a final halt state is specified in the input.
This proves that simulation is not the right tool for such an analysis.

--- SoupGate-Win32 v1.05
 * Origin: you cannot sedate... all the things you hate (1:229/2)
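[Editor's note: the diagonal construction the thread is arguing about can be sketched directly in code. This is a minimal Python sketch, not from the thread itself; the names `halts` and `D` are illustrative. Whatever answer a candidate `halts` would give about `D(D)`, `D` does the opposite, so no implementation of `halts` can be correct on every input.]

```python
def halts(prog, arg):
    """Hypothetical halt decider: would return True iff prog(arg) halts.
    The diagonal argument shows any concrete implementation must be
    wrong on some input, so none is given here."""
    raise NotImplementedError("no correct halt decider exists")


def D(p):
    """The 'pathological' input: it does the opposite of whatever the
    decider predicts about D applied to its own description."""
    if halts(p, p):      # decider claims D(D) halts...
        while True:      # ...so D loops forever,
            pass
    return None          # ...otherwise D halts immediately.


# If halts(D, D) answered True, D(D) would loop; if it answered False,
# D(D) would halt. Either way the decider is wrong about this input.
```

This is the same shape as the Linz Ĥ machine quoted above: Ĥ applied to ⟨Ĥ⟩ contradicts whichever verdict (qy or qn) its embedded decider reaches.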
(c) 1994, bbs@darkrealms.ca