Forums before death by AOL, social media and spammers... "We can't have nice things"
|    comp.ai.philosophy    |    Perhaps we should ask SkyNet about this    |    59,235 messages    |
[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]
|    Message 58,572 of 59,235    |
|    olcott to Richard Damon    |
|    Re: Very simple first principles showing    |
|    10 Dec 25 21:33:33    |
XPost: comp.theory, sci.logic, sci.math
From: polcott333@gmail.com

On 12/10/2025 9:21 PM, Richard Damon wrote:
> On 12/10/25 9:19 PM, olcott wrote:
>> On 12/10/2025 8:13 PM, Richard Damon wrote:
>>> On 12/10/25 9:00 PM, olcott wrote:
>>>> *It has taken me 21 years to boil it down to this*
>>>>
>>>> When the halting problem requires a halt decider
>>>> to report on the behavior of a Turing machine this
>>>> is always a category error.
>>>>
>>>> The corrected halting problem requires a Turing
>>>> machine decider to report on the behavior that
>>>> its finite string input specifies.
>>>>
>>>
>>> And since the input specifies the behavior of the Turing Machine it
>>> represents when run,
>>
>> Counter-factual, but then you have only ever been
>> a somewhat smart bot stuck in rebuttal mode.
>>
>
> Why do you say that?
> What grounds do you have for that claim?
> Do you even know what you are saying?
>

That is the behavior pattern that you have been
consistently showing with every post for years.

You cannot possibly go through any reasoning to
show that I am incorrect. You only have a predefined
boiler-plate set of replies. So you just flunked
the Turing test.

--
Copyright 2025 Olcott
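[Archivist's note: for readers following the dispute, the classic halting-problem construction the thread is arguing about can be sketched as below. The names `halts` and `D` are illustrative, not taken from the post; `halts` is the hypothetical total decider, left as a stub because the diagonal argument shows no correct implementation can exist.]

```python
def halts(program, arg):
    """Hypothetical total halt decider: would return True iff
    program(arg) halts. Stubbed, since the diagonal case below
    shows any total implementation must answer wrongly on (D, D)."""
    raise NotImplementedError("no total halt decider can exist")

def D(program):
    # The diagonal case: D does the opposite of whatever
    # halts predicts about program applied to itself.
    if halts(program, program):
        while True:      # halts said "halts" -> loop forever
            pass
    return               # halts said "loops" -> halt immediately

# If halts(D, D) returned True,  D(D) loops forever -- halts was wrong.
# If halts(D, D) returned False, D(D) halts         -- halts was wrong.
```

Either answer the decider gives about `D(D)` is contradicted by `D`'s actual behavior, which is the standard undecidability argument the correspondents are disputing.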
(c) 1994, bbs@darkrealms.ca