Forums before death by AOL, social media and spammers... "We can't have nice things"
|    comp.ai.philosophy    |    Perhaps we should ask SkyNet about this    |    59,235 messages    |
[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]
|    Message 57,531 of 59,235    |
|    Richard Damon to olcott    |
|    Re: The halting problem as defined is a     |
|    19 Jul 25 08:50:54    |
XPost: comp.theory, sci.logic
From: richard@damon-family.org

On 7/18/25 11:39 PM, olcott wrote:
> On 7/18/2025 9:25 PM, Richard Damon wrote:
>> On 7/18/25 6:11 PM, Mr Flibble wrote:
>>> On Thu, 17 Jul 2025 13:01:31 -0500, olcott wrote:
>>>
>>>> Claude.ai agrees that the halting problem as defined is a category
>>>> error.
>>>>
>>>> https://claude.ai/share/0b784d2a-447e-441f-b3f0-a204fa17135a
>>>>
>>>> This can only be directly seen within my notion of a simulating halt
>>>> decider. I used the Linz proof as my basis.
>>>>
>>>> Sorrowfully, Peter Linz passed away two days less than one year ago,
>>>> on my Mom's birthday, July 19, 2024.
>>>
>>> I was the first to state that the halting problem as defined is a
>>> category error, and I stated it in this forum.
>>>
>>> /Flibble
>>
>> But you can't define the categories in a way that is actually
>> meaningful.
>>
>> There is no way to tell, by looking at a piece of code, which
>> category it belongs to.
>>
>> The category error comes from Olcott's ignoring the actual
>> requirements of the problem and trying to get away with non-programs.
>
> It does turn out to be the case that the actual requirements
> are anchored in a fundamentally false assumption, and this
> is key to the error of the proofs. I finally articulated my
> position on this so that it could be understood to be correct.

But the requirements *ARE* the requirements.

All you are doing here is ADMITTING that you are lying by working with
some other set of requirements, and not the requirements of the actual
problem.

This says you are admitting to the LIE of a strawman argument.
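[Archive note: the "Linz proof" the thread keeps referencing is the standard diagonal argument. A minimal sketch in Python, where `claimed_decider` and `make_confounder` are illustrative names, not anything from the thread: whatever total function is plugged in as the decider, the program built from it defeats it.]

```python
def make_confounder(claimed_decider):
    """Linz-style diagonal construction: build a program D that does the
    opposite of whatever claimed_decider(prog, arg) predicts about D run
    on its own description."""
    def d(prog):
        if claimed_decider(prog, prog):  # decider says "d(d) halts"...
            while True:                  # ...so loop forever
                pass
        return "halted"                  # decider says "d(d) loops", so halt
    return d

# Case 1: a decider that answers False ("never halts") about everything.
# The confounder built from it halts immediately, refuting that answer:
says_loops = lambda prog, arg: False
d = make_confounder(says_loops)
assert d(d) == "halted"

# Case 2 (do not actually run it): a decider answering True makes d(d)
# loop forever, again refuting the answer. No total decider escapes both.
```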
And the problem is there isn't a "fundamentally false assumption" in the
requirements of the problem, just in your understanding of them, because
you just don't understand what the words mean.

The fact that you have persisted in repeating that error for so long
says that either you have the pathological moral defect of not caring if
you are lying, or the pathological mental defect of not being able to
learn these basics, or quite likely BOTH.

A Turing machine can, in fact, be asked about the behavior of the direct
execution of another machine, because that machine CAN be fully
described to it in a way that fully defines that behavior. The existence
of Universal Turing Machines, which can be given such a description and
fully reproduce the behavior, shows that.

Your LIE that the partial simulation by the decider must be able to be a
stand-in is just that, a LIE, born of your failure to understand what
you are talking about.

Sorry, all you have done is prove that you are just an idiotic
pathological liar.

--- SoupGate-Win32 v1.05
 * Origin: you cannot sedate... all the things you hate (1:229/2)    |
(c) 1994, bbs@darkrealms.ca