Forums before death by AOL, social media and spammers... "We can't have nice things"
|    comp.ai.philosophy    |    Perhaps we should ask SkyNet about this    |    59,235 messages    |
[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]
|    Message 57,535 of 59,235    |
|    olcott to joes    |
|    Re: The halting problem as defined is a     |
|    19 Jul 25 09:42:22    |
XPost: comp.theory, sci.logic
From: polcott333@gmail.com

On 7/18/2025 3:49 AM, joes wrote:

> That is wrong. It is, as you say, very obvious that HHH cannot simulate
> DDD past the call to HHH. You just draw the wrong conclusion from it.
> (Aside: what "seems" to you will convince no one. You can just call
> everybody dishonest. Also, they are not "your reviewers".)

For the purposes of this discussion this is the
100% complete definition of HHH. It is the exact
same one that I give to all the chat bots.

Termination Analyzer HHH simulates its input until
it detects a non-terminating behavior pattern. When
HHH detects such a pattern it aborts its simulation
and returns 0.

I have always proved that HHH does simulate itself simulating DDD
https://liarparadox.org/HHH(DDD)_Full_Trace.pdf

I have also always proved that DDD correctly simulated by
HHH cannot possibly reach its own "return" statement final
halt state.

*The following analysis cannot be correctly refuted*
https://chatgpt.com/share/687aa4c2-b814-8011-9e7d-b85c03b291eb

--
Copyright 2025 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

--- SoupGate-Win32 v1.05
 * Origin: you cannot sedate... all the things you hate (1:229/2)
(c) 1994, bbs@darkrealms.ca