
Forums before death by AOL, social media, and spammers... "We can't have nice things"

   comp.ai.philosophy      Perhaps we should ask SkyNet about this      59,235 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 59,078 of 59,235   
   dart200 to Richard Damon   
   Re: is the ct-thesis cooked? (1/2)   
   16 Jan 26 16:43:27   
   
   XPost: comp.theory, comp.software-eng   
   From: user7160@newsgrouper.org.invalid   
      
   On 1/16/26 3:21 PM, Richard Damon wrote:   
   > On 1/16/26 5:21 PM, dart200 wrote:   
   >> On 1/16/26 8:46 AM, Richard Damon wrote:   
   >>> On 1/16/26 4:08 AM, dart200 wrote:   
   >>>> On 1/15/26 7:28 PM, Richard Damon wrote:   
   >>>>> On 1/15/26 7:23 AM, dart200 wrote:   
   >>>>>   
   >>>>>> bro stick a giant dildo up ur asshole u hypocritical fuckface...   
   >>>>>>   
   >>>>>> when i tried to suggest improvements to the computational model,   
   >>>>>> like RTMs, u then told me i *can't* do that because muh ct-thesis,   
   >>>>>> and here u are crying about how no superior method has been found   
   >>>>>> as if u'd ever even tried to look past the ct-thesis...   
   >>>>>   
   >>>>> No, you didn't suggest improvements to the model, you just showed   
   >>>>> you don't know what that means.   
   >>>>>   
   >>>>> You don't get to change what a "computation" is, that isn't part of   
   >>>>> the "model".   
   >>>>   
   >>>> you honestly could have just said that cause the rest of this is   
   >>>> just u repeating urself as if that makes it more correct   
   >>>>   
   >>>   
   >>> But I HAVE said it that simply, and you rejected it as you think you   
   >>> get to,   
   >>   
   >> but repeating urself doesn't make it more true   
   >   
   > And your ignoring it doesn't make it false.   
   >   
   >>   
   >>>   
   >>>   
   >>>>>   
   >>>>> The model would be the format of the machine, and while your RTM   
   >>>>> might be a type of machine that could be thought of, they don't do   
   >>>>> COMPUTATIONS, as it violates the basic rules of what a computation IS.   
   >>>>>   
   >>>>> Computations are specific algorithms acting on just the input data.   
   >>>>>   
   >>>>> A fundamental property needed to reach at least Turing Complete   
   >>>>> ability, is the ability to cascade algorithms.   
   >>>>>   
   >>>>> Your RTMs break that capability, and thus become less than Turing   
   >>>>> Complete.   
   >>>>   
   >>>> i'm sorry, RTMs are literally just TMs with one added instruction   
   >>>> that dumps static meta-data + copies tape ... how have they *lost*   
   >>>> power with that??? clearly they can express anything that TMs can ...   
   >>>   
   >>> Which means you don't understand how "TM"s work, as they don't have   
   >>> that sort of "instructions".   
   >>   
   >> fuck dude sorry "operation" is the term turing used, i added to the   
   >> list of possible operations with RTMs, my god dude...   
   >   
   > But the only "operations" that a turing machine does is write a   
   > specified value to the tape, move the tape, and change state.   
      
   yes RTMs are an extension of TMs, please do pay attention   
      
   >   
   >>   
   >> see how fucking unhelpful u are???   
   >   
   > So, how is your "operation" of the same class as what they do?   
      
   cause it's just as mechanically feasible. mechanical feasibility is   
   self-evident, just like with the other rules of turing machines.   
      
   >   
   > Try to specify the tuple that your "operation" is.   
      
   idk what you mean by this, REFLECT is just another operation like   
   HEAD_LEFT, HEAD_RIGHT, or WRITE_. the transition table has a   
   list of transition functions:   
   
   cur_state, head_symbol -> action, nxt_state   
   
   and REFLECT goes into the action slot, specifying the action that should   
   be taken to transition the tape to the next step.   
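   
   for concreteness, here's a rough python sketch of how one *might*   
   interpret a single RTM step under that tuple format. everything here   
   (the step function, the blank symbol, the exact REFLECT behavior of   
   dumping a marker + current state + tape copy) is my own illustration   
   of the idea, not any standard formulation:   

```python
BLANK = "_"

def step(table, state, tape, head):
    # one transition: (cur_state, head_symbol) -> (action, nxt_state)
    action, nxt_state = table[(state, tape[head])]
    if action == "HEAD_LEFT":
        head = max(head - 1, 0)
    elif action == "HEAD_RIGHT":
        head += 1
        if head == len(tape):
            tape.append(BLANK)        # grow the blank tape on demand
    elif action == "REFLECT":
        # hypothetical added operation: dump a marker plus the current
        # state (the "static meta-data"), then a copy of the tape as it
        # was before the dump
        tape += ["#", state] + tape[:]
    else:
        tape[head] = action           # a plain write of a symbol
    return nxt_state, tape, head
```

   e.g. step({("q0", "1"): ("REFLECT", "q1")}, "q0", ["1", "0"], 0)   
   leaves the tape as ["1", "0", "#", "q0", "1", "0"] -- all the   
   ordinary TM operations are untouched, REFLECT is just one more   
   action the table can name.   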
      
   >   
   >>   
   >>>   
   >>>>   
   >>>>>   
   >>>>> And, any algorithm that actually USES their capability to detect if   
   >>>>> they have been nested will become incorrect as a decider, as a   
   >>>>> decider is a machine that computes a specific mapping of its input   
   >>>>> to its output, and if that result changes in the submachine, only   
   >>>>> one of the answers it gives (as a stand-alone, or as the sub-   
   >>>>> machine) can be right, so you just show that it gave a wrong answer.   
   >>>>   
   >>>> u have no proof that doesn't work, yet you keep asserting this is   
   >>>> the "one true way". seems like u just enjoy shooting urself in the   
   >>>> foot, with ur only actual rationale being it's just the "one true way"   
   >>>   
   >>> IT IS THE DEFINITION. Something you don't seem to understand.   
   >>>   
   >>> "Computation" is NOT defined by what some machine does, that is   
   >>> algorithms and results. "Computation" is the mapping generated by it,   
   >>> which MUST be a specific mapping of input to output.   
   >>   
   >> no one has defined "computation" well enough to prove that turing   
   >> machines can compute them all,   
   >>   
   >> that's why it's the ct-thesis dude, not ct-law,   
   >>   
   >> ur just affirming the consequent without proof.   
   >   
   > No, the DEFINITION of a computation defines what it can be irrespective   
   > of the actual machinery used to perform it.   
   >   
   > It is, by definition, an algorithm computing a given mapping.   
   >   
   > Said maps are, BY DEFINITION, mappings from the "input" to the "output".   
   >   
   > If the machine can produce two different outputs from the same input,   
   > the machine can not be a computation.   
      
   a context-dependent computation is computing a mapping that isn't   
   directly specified by the formal input params. it's computing a mapping of:   
      
   (context, input) -> output   
      
   or more generally just   
      
   context -> output   
      
   since the formal input is just a specific part of the context. and the   
   reason we got stuck on the halting problem for a fucking century is   
   ignoring that context matters.   
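   
   one way to picture that mapping, as a tiny python sketch. the field   
   names here are made up; this only illustrates the   
   (context, input) -> output shape where the formal input is one field   
   of a larger context record, so the whole thing collapses to   
   context -> output:   

```python
def run(context):
    caller = context["caller"]   # context beyond the formal input
    x = context["input"]         # the formal input itself
    # the result depends on who invoked it, not only on x
    return x + 1 if caller == "top-level" else x - 1

print(run({"caller": "top-level", "input": 3}))  # 4
print(run({"caller": "simulator", "input": 3}))  # 2
```

   same formal input, different outputs -- which is exactly the mapping   
   being argued over: well-defined over the full context, but not a   
   function of the formal input alone.   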
      
   >   
   >>   
   >> add that to the growing list of fallacies i've pointed out in ur   
   >> recent arguments, which i'm sure ur not actually tracking, as that   
   >> would be far more honesty than u are capable of putting out.   
   >   
   > So, what is the fallacy?   
      
   AFFIRMING THE CONSEQUENT   
      
   >   
   > It seems you just assume you are allowed to change the definition,   
   > perhaps because you never bothered to learn it.   
   >   
   >>   
   >>>   
   >>>>   
   >>>>>   
   >>>>> This is sort of like the problem with a RASP machine architecture:   
   >>>>> sub-machines on such a platform are not necessarily computations,   
   >>>>> if they use the machine's capability to pass information not allowed   
   >>>>> by the rules of a computation. Your RTMs similarly break that property.   
   >>>>>   
   >>>>> Remember, Computations are NOT just what some model of processing   
   >>>>> produces, but specifically are defined based on producing a specific   
   >>>>> mapping of input to output, so if (even as a sub-machine) a   
   >>>>> specific input might produce different output, your architecture is   
   >>>>> NOT doing a computation.   
   >>>>>   
   >>>>> And without that property, using what the machine could do becomes   
   >>>>> a pretty worthless criterion, as you can't actually talk much about it.   
   >>>>   
   >>>> the output is still well-defined and deterministic at runtime,   
   >>>   
   >>> Not from the "input" to the piece of algorithm, as it includes   
   >>> "hidden" state from outside that input stored elsewhere in the machine.   
   >>>   
   >>>>   
   >>>> context-dependent computations are still computations. the fact TMs   
   >>>> don't capture them is an indication that the ct-thesis may be false   
   >>>>   
   >>>   
   >>> Nope. Not unless the "context" is made part of the "input", and if   
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca