Forums before death by AOL, social media and spammers... "We can't have nice things"
|    comp.ai.philosophy    |    Perhaps we should ask SkyNet about this    |    59,235 messages    |
[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]
|    Message 58,831 of 59,235    |
|    olcott to Richard Damon    |
|    Re: Proof that the halting problem is in    |
|    26 Dec 25 21:22:59    |
      [continued from previous message]

>> I am now down to 15 pages on each system.
>>
>> It is not that these LLM systems are terribly faulty.
>> It is that conventional wisdom about undecidability
>> across computer science, math and logic is a foundational
>> error.
>
> Shows how hard you had to work for them to remember your lies.
>
> All you are doing is proving you are just a liar.
>
>>
>>> The problem is you forget to define what it means to be a Halt
>>> Decider, or any form of XXXX Decider. Your problem is "Halting" is
>>> defined as a property of the actual machine being talked about, which
>>> can be expressed in terms of a UTM processing the string
>>> representation of it.
>>>
>>> You then get this crazy idea (which is just a lie) that you can just
>>> ignore the behavior of the CORRECT simulation of that input, as shown
>>> by what the UTM does, and try to define its incorrect simulation
>>> (since it just stops short based on its own error) as being correct.
>>>
>>> And then, you show your problem by just refusing to even try to
>>> answer with a justification for why your idea is correct.
>>>
>>> How can your H have "Correctly Simulated" an input that "Correctly
>>> Specifies" the behavior of the machine P, and get a different
>>> result from that machine, or from the machine defined to do the
>>> correct simulation, that is, the UTM?
>>>
>>> Remember, if UTM([X]) doesn't match the behavior of machine X, then
>>> it just isn't a UTM.
>>>
>>> If your problem is that your encoding method can't produce a string
>>> that allows for a UTM to exist, then your encoding method is just
>>> insufficient, and you doomed yourself from the start, as the criteria
>>> for semantic properties ALWAYS go back to the original machine.
>>>
>>> All you are doing is proving you don't understand how "requirements"
>>> work, as you just try to sweep them under the carpet with your lies.
>>>
>>> Sorry, all you are doing is proving your stupidity.
>>
>>
>

--
Copyright 2025 Olcott
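[Archivist's note: the dispute above turns on the standard diagonal argument that no total halt decider exists. A minimal sketch of that argument, as my own illustration rather than code from either poster; the names `halts`, `make_troll`, and `troll` are hypothetical:]

```python
# Hedged sketch of the classic diagonal argument. Assume some total
# decider `halts(program, argument)` existed, returning True iff the
# program halts on that argument. Then the program built below does
# the opposite of whatever the decider predicts about it.

def make_troll(halts):
    """Given a claimed halt decider, build a program that defeats it
    by inverting the decider's own prediction about this program."""
    def troll(_arg=None):
        if halts(troll, None):   # decider predicts "troll halts"...
            while True:          # ...so loop forever instead
                pass
        return "halted"          # decider predicts "loops" -> halt
    return troll

# Whichever answer `halts` gives about `troll`, the actual behavior
# is the opposite, so no `halts` can be correct on every input.
```

For example, if a purported decider claims `troll` loops, then `troll` immediately returns, contradicting the claim; if it claims `troll` halts, `troll` loops forever. This is the requirement the quoted poster is appealing to: the correct answer is fixed by the behavior of the actual machine, not by what any particular simulator does.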
(c) 1994, bbs@darkrealms.ca