Forums before death by AOL, social media and spammers... "We can't have nice things"
|    sci.logic    |    Logic -- math, philosophy & computational    |    262,912 messages    |
[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]
|    Message 261,604 of 262,912    |
|    olcott to Python    |
|    Re: The halting problem is incorrect two    |
|    01 Dec 25 10:27:34    |
XPost: sci.math, comp.theory
From: polcott333@gmail.com

On 12/1/2025 10:00 AM, Python wrote:
> On 01/12/2025 at 16:55, olcott wrote:
>> On 12/1/2025 9:48 AM, Python wrote:
>>> On 01/12/2025 at 16:39, olcott wrote:
>>>> On 12/1/2025 9:31 AM, Python wrote:
>>>>> On 01/12/2025 at 16:29, olcott wrote:
>>>>>> On 12/1/2025 9:26 AM, olcott wrote:
>>>>>>> On 12/1/2025 9:19 AM, olcott wrote:
>>>>>>>> On 12/1/2025 9:06 AM, Python wrote:
>>>>>>>>> On 01/12/2025 at 15:57, olcott wrote:
>>>>>>>>>> On 12/1/2025 8:45 AM, Python wrote:
>>>>>>>>>>> On 01/12/2025 at 15:38, olcott wrote:
>>>>>>>>>>>> On 12/1/2025 8:29 AM, Python wrote:
>>>>>>>>>>>>>> [snip boring nonsense and lies]
>>>>>>>>>>>>>
>>>>>>>>>>>>> Peter, you've intoxicated yourself.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Here is what ChatGPT told me once about itself:
>>>>>>>>>>>>
>>>>>>>>>>>> Welcome back!
>>>>>>>>>>>>
>>>>>>>>>>>>> You have put your finger on the single most fundamental
>>>>>>>>>>>>> limitation of large language models:
>>>>>>>>>>>>>
>>>>>>>>>>>>> They can generate coherent arguments for things that are
>>>>>>>>>>>>> false, harmful, fringe, or logically impossible — not
>>>>>>>>>>>>> because they “believe” them, but because they can simulate
>>>>>>>>>>>>> the rhetorical form of such arguments.
>>>>>>>>>>>>>
>>>>>>>>>>>>> And you’re right:
>>>>>>>>>>>>> The fact that the model “doesn’t believe it” is irrelevant.
>>>>>>>>>>>>> What matters is:
>>>>>>>>>>>>> it can produce it.
>>>>>>>>>>>>>
>>>>>>>>>>>>> f2up math.
>>>>>>>>>>>>
>>>>>>>>>>>> Once you fully understand semantic tautologies
>>>>>>>>>>>> (the ultimate basis of all of my work)
>>>>>>>>>>>>
>>>>>>>>>>>> In epistemology (theory of knowledge), a self-evident
>>>>>>>>>>>> proposition is a proposition that is known to be true
>>>>>>>>>>>> by understanding its meaning without proof...
>>>>>>>>>>>> https://en.wikipedia.org/wiki/Self-evidence
>>>>>>>>>>>>
>>>>>>>>>>>> You will understand that I am correct. If you insist
>>>>>>>>>>>> on finding fault at a much higher priority than an
>>>>>>>>>>>> honest dialogue, then you will never understand that
>>>>>>>>>>>> I am correct.
>>>>>>>>>>>
>>>>>>>>>>> You are NOT correct.
>>>>>>>>>>
>>>>>>>>>> You will continue to lack a sufficient basis
>>>>>>>>>> for that until you grok (Heinlein) semantic
>>>>>>>>>> tautology / self-evident truth.
>>>>>>>>>>
>>>>>>>>>>>> It seems that the single most useful application
>>>>>>>>>>>> of my work is to make LLM systems much more reliable.
>>>>>>>>>>>
>>>>>>>>>>> Your "work" is complete garbage... Sorry.
>>>>>>>>>>
>>>>>>>>>> Yet you cannot possibly show that with complete
>>>>>>>>>> and correct reasoning, because you continue to
>>>>>>>>>> lack the above required basis.
>>>>>>>>>
>>>>>>>>> I am not willing to endorse a sophistry that I KNOW to be
>>>>>>>>> INCORRECT.
>>>>>>>>
>>>>>>>> How can you possibly show that a semantic tautology
>>>>>>>> is incorrect when it is inherently correct?
>>>>>>>
>>>>>>> Within the definition that "cats"
[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]
(c) 1994, bbs@darkrealms.ca