Forums before death by AOL, social media and spammers... "We can't have nice things"
|    comp.ai.philosophy    |    Perhaps we should ask SkyNet about this    |    59,235 messages    |
|    Message 59,004 of 59,235    |
|    dart200 to Richard Damon    |
|    Re: is the ct-thesis cooked?    |
|    12 Jan 26 14:09:07    |
XPost: comp.theory, comp.software-eng
From: user7160@newsgrouper.org.invalid

On 1/12/26 4:06 AM, Richard Damon wrote:
> On 1/6/26 10:03 PM, dart200 wrote:
>> On 1/6/26 5:26 PM, olcott wrote:
>>> On 1/6/2026 1:47 AM, dart200 wrote:
>>>> On 1/5/26 4:24 PM, Oleksiy Gapotchenko wrote:
>>>>> Just an external observation:
>>>>>
>>>>> A lot of tech innovations in the software optimization area get
>>>>> discarded from the very beginning because the people who work on
>>>>> them perceive the halting problem as dogma. As a result, certain
>>>>> practical things (in code analysis) are not even tried because it's
>>>>> assumed that they are bound by the halting problem.
>>>>>
>>>>> In practice, however, the halting problem is rarely a limitation.
>>>>> And even when one hits it, they can safely discard a particular
>>>>> analysis branch by marking it as inconclusive.
>>>>>
>>>>> The halting problem can surely be framed better, so it doesn't
>>>>> sound like dogma. In practice, algorithmic inconclusiveness has a
>>>>> 0.001 probability, not the 100% guarantee many engineers perceive.
>>>>
>>>> god it's been such a mind-fuck to unpack the halting problem,
>>>>
>>>> but the halting problem does not mean that no algorithm exists for
>>>> any given machine, just that a "general" decider does not exist for
>>>> all machines ...
>>>>
>>>> heck, it must be certain that for any given machine there must exist
>>>> a partial decider that can decide on it ... because otherwise a
>>>> paradox would have to address all possible partial deciders in a
>>>> computable fashion, and that runs up against its own limit to
>>>> classical computing. therefore some true decider must exist for any
>>>> given machine that exists ...
>>>> we just can't funnel the knowledge
>>>> thru a general interface.
>>>>
>>>
>>> For every H there is a D such that D does the opposite
>>> of whatever H reports. In this case use H1 on this D.
>>
>> yes, the inability to correctly resolve halting thru a singular
>> interface is a flaw of TM computing, not an inherent algorithmic limit
>>
>
> Nope, because the proof doesn't actually need to talk about HOW the
> decider made its decision, and thus is not limited to Turing machines.
>
> All it needs is that the decider be limited by the rules of a
> computation.
>
> All the arguments against the proof seem to begin with the error that
> the decider can be changed after the fact and that such a change
> changes the input to match, but that breaks the fundamental property
> of computations: that they are fixed algorithms.
>
> The proof shows that the SPECIFIC decider that the input was made from
> will get the wrong answer, and we can make such an input for ANY
> specific decider, and thus no decider can get all answers correct.
>
> That the input HAS a correct answer (just the opposite of what that
> specific decider gives) shows that there IS a correct answer, so there
> is nothing wrong with the question of its halting, and thus a
> non-answer like "its behavior is contrary" is invalid.
>
> Everyone trying to make these arguments just shows they don't
> understand the basics of what a computation is.

missed ya dick!

given that deciders are inherently part of the execution path they are
deciding on ... ofc deciders can modify their behavior based on an input
which they are included within, just as they can modify their behavior
based on any other properties of the input.
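The H-vs-D construction quoted above can be sketched in a few lines of Python. Everything here (`make_d`, `h_optimist`) is my own toy naming, not code from the thread, and actual non-termination is modeled by returning a marker value so the sketch stays runnable:

```python
# A minimal sketch of the diagonal construction, assuming we model
# "looping forever" with the marker string "LOOP" instead of looping.
def make_d(h):
    """Build the adversarial input D from a specific claimed decider h."""
    def d(prog):
        if h(prog, prog):       # h predicts prog(prog) halts ...
            return "LOOP"       # ... so d does the opposite and "loops"
        else:
            return "HALT"       # h predicts looping, so d halts at once
    return d

def h_optimist(prog, inp):
    """A toy 'decider' that claims every program halts on every input."""
    return True

d = make_d(h_optimist)
# h_optimist predicts d(d) halts, but d(d) takes the looping branch,
# so this specific decider is wrong on the input built from it:
print(h_optimist(d, d), d(d))  # -> True LOOP
```

The same recipe applies to any other concrete `h` plugged in; the construction never needs to know how `h` decides, only that `h` is a fixed function.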
this is how partial deciders can intelligently block on responding to
input that cannot be answered thru their particular interface.

i'm not aware of any method that can prove a partial decider can't be
more efficient than brute force, because again, they can block when
encountering a paradox specific to their interface.

furthermore this doesn't disprove a general algorithm backing the
partial deciders: all the general algorithm needs is a "self" input
which identifies the particular interface it's computing for. this
general algo for partial deciders will have three outputs: HALTS,
LOOPS, and PARADOX. when partial deciders receive PARADOX back from
their algo run they will just loop forever and never respond.

yes, i'm aware "interfaces" are complete descriptions of a partial
decider, and that's what i mean by passing in a self: the partial
decider must have a quine that allows it to recognize itself, and it
passes this into the general algo.

"but i can loop over all partial deciders to produce a paradox" ... uhh
no you can't? traditional computing cannot iterate over all
functionally equivalent machines, so it certainly can't iterate over
all almost functionally equivalent machines, so you cannot claim to
produce a general paradox for the general algo, as such a computation
is outside the scope of classical computing's limits.

i await seeing how you purposefully misunderstand this

--
arising us out of the computing dark ages,
please excuse my pseudo-pyscript,
~ nick

--- SoupGate-Win32 v1.05
 * Origin: you cannot sedate... all the things you hate (1:229/2)
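The three-output interface described in the post can be pictured as a bounded analyzer. This is my own toy construction, not code from the thread: programs are encoded as Python generators, `Verdict.PARADOX` stands in for "inconclusive thru this interface", and the names (`Verdict`, `analyze`, the example programs) are all hypothetical:

```python
from enum import Enum

class Verdict(Enum):
    HALTS = "HALTS"
    LOOPS = "LOOPS"
    PARADOX = "PARADOX"    # a real partial decider would block on this

def analyze(gen_fn, arg, budget=1000):
    """Toy stand-in for the 'general algo': step a generator-encoded
    program.  An exact repeat of a yielded state proves a loop only
    when the state captures the whole machine configuration."""
    seen = set()
    for steps, state in enumerate(gen_fn(arg)):
        if state in seen:
            return Verdict.LOOPS      # revisited a configuration
        seen.add(state)
        if steps >= budget:
            return Verdict.PARADOX    # inconclusive at this budget
    return Verdict.HALTS              # the generator finished

def countdown(n):          # halts after n steps
    while n > 0:
        yield n
        n -= 1

def flipflop(_):           # bounces between two states forever
    while True:
        yield 0
        yield 1

def ascend(n):             # never repeats a state, never finishes
    while True:
        yield n
        n += 1

print(analyze(countdown, 5).value,    # -> HALTS
      analyze(flipflop, 0).value,     # -> LOOPS
      analyze(ascend, 0, 50).value)   # -> PARADOX
```

Note the asymmetry the post relies on: HALTS and LOOPS are definite answers, while PARADOX only says this interface, at this budget, could not decide, which is why the wrapper "blocks" rather than guessing.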
(c) 1994, bbs@darkrealms.ca