Forums before death by AOL, social media and spammers... "We can't have nice things"
|    comp.ai.philosophy    |    Perhaps we should ask SkyNet about this    |    59,235 messages    |
[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]
|    Message 59,178 of 59,235    |
|    olcott to Richard Damon    |
|    Re: a subset of Turing machines can stil    |
|    24 Jan 26 08:21:09    |
XPost: comp.theory, sci.logic, sci.math
From: polcott333@gmail.com

On 1/24/2026 6:16 AM, Richard Damon wrote:
> On 1/23/26 10:05 PM, olcott wrote:
>> On 1/23/2026 7:56 PM, Richard Damon wrote:
>>> On 1/23/26 8:51 PM, olcott wrote:
>>>> On 1/23/2026 7:30 PM, Richard Damon wrote:
>>>>> On 1/23/26 6:50 PM, olcott wrote:
>>>>>> On 1/23/2026 5:01 PM, Richard Damon wrote:
>>>>>>> On 1/23/26 12:24 PM, olcott wrote:
>>>>>>>> On 1/23/2026 10:29 AM, dart200 wrote:
>>>>>>>>> On 1/23/26 2:19 AM, olcott wrote:
>>>>>>>>>> On 1/22/2026 11:21 PM, dart200 wrote:
>>>>>>>>>>> On 1/22/26 3:58 PM, olcott wrote:
>>>>>>>>>>>> It is self-evident that a subset of Turing machines
>>>>>>>>>>>> can be Turing complete entirely on the basis of the
>>>>>>>>>>>> meaning of the words.
>>>>>>>>>>>>
>>>>>>>>>>>> Every machine that performs the same set of
>>>>>>>>>>>> finite string transformations on the same inputs
>>>>>>>>>>>> and produces the same finite string outputs from
>>>>>>>>>>>> these inputs is equivalent by definition and thus
>>>>>>>>>>>> redundant in the set of Turing complete computations.
>>>>>>>>>>>>
>>>>>>>>>>>> Can we change the subject now?
>>>>>>>>>>>
>>>>>>>>>>> no, because perhaps isolating out non-paradoxical machines
>>>>>>>>>>> may prove a Turing-complete subset of machines with no
>>>>>>>>>>> decision paradoxes, removing a core pillar in the
>>>>>>>>>>> undecidability arguments.
>>>>>>>>>>
>>>>>>>>>> FYI, five LLMs have all agreed that I have conquered that.
>>>>>>>>>
>>>>>>>>> but no humans have, and that's what actually counts
>>>>>>>>
>>>>>>>> *It really does seem to me that I am a human*
>>>>>>>>
>>>>>>>> Also, HHH(DD) really does correctly detect the
>>>>>>>> non-well-founded cyclic dependency in the
>>>>>>>> evaluation graph.
>>>>>>>
>>>>>>> Since DD isn't doing a proof or making a declaration of truth,
>>>>>>> "non-well-founded" is a meaningless term in this context.
>>>>>>
>>>>>> Only while you make sure to have no idea what
>>>>>> this term means:
>>>>>> "non-well-founded in proof theoretic semantics"
>>>>>
>>>>> Since proof theoretic semantics insists that only things that can
>>>>> be proven can be asserted, it needs to be able to PROVE that the
>>>>> statement is not provable or refutable for it to assert that the
>>>>> input is non-well-founded.
>>>>>
>>>>> Or, are you admitting that your proof theoretic semantics is
>>>>> really just truth-conditional semantics with a downgrading of
>>>>> Truth to provability? (Which isn't what others consider it to be.)
>>>>
>>>> I didn't think this stuff up on my own. I had at
>>>> least 100 dialogues with five different LLM systems,
>>>> and after much push-back they all agreed that I am
>>>> correct after 60 pages of dialogue each. I have
>>>> been working on this every waking moment for weeks.
>>>
>>> In other words, you are working with the counsel of admitted liars,
>>> whose terms of use include that you acknowledge their results may
>>> not be accurate.
>>>
>>>> It was Copilot that recognized that my system was
>>>> Proof Theoretic Semantics (PTS) that resolves to
>>>> provable / refutable / non-well-founded.
>>>>
>>>> Every system also agrees that HHH(DD) does
>>>> correctly reject DD as non-well-founded.
>>>> I just can't get them to do that concisely yet.
>>>>
>>>> Once I can get them to actually do the simulation,
>>>> they immediately see from their own simulation
>>>> trace that HHH correctly rejects DD as non-well-founded
>>>> within proof theoretic semantics.
>>>>
>>>> non-well-founded literally means that the proof
>>>> itself is stuck in a loop.
>>>
>>> Nope, non-well-founded means that there is no proof.
>>>
>>> "Proofs" don't get stuck in a loop, as it isn't a proof until it
>>> is complete.
>>
>> You simply don't know enough about logic programming.
>> Logic programming routinely proves that an input does
>> not have a well-founded proof.
>
> Just because you can handle SOME problems doesn't mean you can find
> an answer for ALL problems.
>
>> When I explain the details in terms of cycles in
>> directed graphs you don't have a clue. This has
>> always been anchored in well-founded proof theoretic
>> semantics.
>
> But there isn't always a cycle in the graph; sometimes the graph is
> just infinitely deep.
>
> But, I guess thinking about infinity is something your brain can't
> handle.
>
>> Get AI to explain well-founded proof theoretic
>> semantics to you and ask it for references that
>> you can verify.
>
> Why should I ask an AI liar, when I can get it from a lying human?

We will not be able to have a productive
conversation until you learn more about
proof theory. I will look for some good
references.

>>> The fact that ONE attempted method of proving doesn't result in
>>> getting to the answer doesn't mean that some other method doesn't
>>> work.
>>>
>>> All you are doing is showing that you are just ignorant of what you
>>> are talking about, and you admit that you are trusting programs
>>> known to lie and give false results, and are even admittedly
>>> programmed to try to give good-sounding results over factually
>>> accurate results.

--
Copyright 2026 Olcott
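The dispute above reduces to two mechanical claims: olcott's, that a goal can be rejected as "non-well-founded" by finding a cycle in its dependency graph (the loop check that tabled logic programming systems such as XSB perform under the well-founded semantics), and Damon's, that cycle-finding cannot settle every case. Below is a minimal Python sketch of the first claim; the graph encoding, node names, and classify function are illustrative assumptions, not the thread's actual HHH/DD code.

    # A goal is well-founded when its dependencies bottom out in axioms,
    # and non-well-founded when its evaluation reaches itself again.
    # Standard three-color depth-first search finds such back edges.

    def classify(graph, goal):
        """Return 'non-well-founded' if `goal` depends on itself through
        a cycle, else 'well-founded'. `graph` maps nodes to dependencies."""
        WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / in progress / finished
        color = {}

        def dfs(node):
            color[node] = GRAY
            for dep in graph.get(node, ()):
                state = color.get(dep, WHITE)
                if state == GRAY:               # back edge: evaluation loop
                    return True
                if state == WHITE and dfs(dep):
                    return True
            color[node] = BLACK
            return False

        return "non-well-founded" if dfs(goal) else "well-founded"

    # The shape olcott attributes to DD: deciding D requires deciding D.
    cyclic = {"D": ["decide(D)"], "decide(D)": ["D"]}
    print(classify(cyclic, "D"))    # non-well-founded

    # An ordinary goal whose dependencies bottom out in axioms.
    acyclic = {"G": ["A", "B"], "A": [], "B": ["A"]}
    print(classify(acyclic, "G"))   # well-founded

On a finite graph this check always terminates, which is the sense in which logic programming "routinely proves that an input does not have a well-founded proof": a tabling engine that re-encounters a goal already in progress stops and assigns it a non-true status instead of looping forever.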
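Damon's rejoinder is the standard limitation of that check: a dependency graph can be acyclic yet infinitely deep, in which case no node ever repeats and the cycle detector has nothing to catch. Here is a sketch under the hypothetical assumption that each goal spawns one fresh, strictly deeper subgoal.

    # Counter-sketch: infinite descent without a cycle. The visited-set
    # test never fires because every subgoal is new; evaluation only
    # stops because of an artificial depth bound.

    def succ(node):
        """Hypothetical dependency generator: one never-repeating subgoal."""
        return node + 1

    def evaluate(goal, max_depth=10_000):
        visited = set()
        node, depth = goal, 0
        while True:
            if node in visited:
                return f"cycle found at depth {depth}"
            if depth >= max_depth:
                return f"no cycle after {max_depth} steps; still descending"
            visited.add(node)
            node, depth = succ(node), depth + 1

    print(evaluate(0))   # no cycle after 10000 steps; still descending

So a cycle check certifies non-well-foundedness when it fires, but its silence proves nothing: the infinitely deep case is exactly where a decider must answer without ever seeing a repeated state.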
[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]
(c) 1994, bbs@darkrealms.ca