XPost: comp.theory, comp.lang.c++, comp.ai.philosophy   
   From: polcott333@gmail.com   
      
   On 10/14/2025 2:28 PM, Kaz Kylheku wrote:   
   > On 2025-10-14, olcott wrote:   
   >> On 10/14/2025 12:25 PM, Kaz Kylheku wrote:   
   >>> On 2025-10-14, olcott wrote:   
   >>>> 1. A decider’s domain is its input encoding, not the physical program   
   >>>>   
   >>>> Every total computable function — including a hypothetical halting   
   >>>> decider — is, formally, a mapping   
   >>>   
   >>>> H : Σ∗ → {0,1}
   >>>   
   >>> It's obvious you used AI to write this.   
   >>>   
   >>   
   >> I did not exactly use AI to write this.   
   >> AI took my ideas and paraphrased them   
   >> into its "understanding".   
   >   
   > That's what is called "writing with AI" or "writing using AI",   
   > or "AI assisted writing".   
   >   
   > If I wanted to say that you flatly generated the content with AI,   
   > so that the ideas are not yours, I would use that wording.   
   >   
   > Obviously, the ideas are yours or very similar to yours in   
   > a different wording.   
   >   
   >> I was able to capture the entire dialog   
   >> with formatting as 27 pages of text.   
   >> I will publish this very soon.   
   >   
   > Please don't.   
   >   
   >> *It is all on this updated link*   
   >> https://chatgpt.com/share/68ee799d-d548-8011-9227-dce897245daa   
   >>   
   >>> That's a good thing, because it's a lot smoother and more readable
   >>> than the utter garbage that you write by yourself!
   >>>   
   >>   
   >> I always needed a reviewer that could fully understand
   >> and validate my ideas to the extent that they are correct.
   >> It looks like ChatGPT 5.0 is that agent.   
   >   
   > It's behaving as nothing more than a glorified grammar, wording,
   > and style fixer.
   >   
   >> When it verifies my ideas it does this by paraphrasing   
   >> them into its own words and then verifies that these   
   >> paraphrased words are correct.   
   >   
   > While it is paraphrasing, it is doing no such thing as verifying
   > that the ideas are correct.
   >   
   > It's just regurgitating your idiosyncratic crank ideas, almost verbatim
   > in their original form, though with smoother language.
   >   
   >>> Please, from now on, do not /ever/ write anything in comp.theory that is   
   >>> not revised by AI.   
   >>   
   >> As soon as humans verify the reasoning of my   
   >> paraphrased words it seems that I will finally   
   >> have complete closure on the halting problem stuff.   
   >   
   > It's been my understanding that you are using the Usenet newsgroup   
   > as a staging ground for your ideas, so that you can improve them and   
   > formally present them to CS academia.   
   >   
   > Unfortunately, if you examine your behavior, you will see that you are   
   > not on this trajectory at all, and never have been. You are hardly   
   > closer to the goal than 20 years ago.   
   >   
   > You've not seriously followed up on any of the detailed rebuttals of
   > your work; instead you insist that you are correct and everyone is
   > simply not intelligent enough to understand it.
   >   
   > So it is puzzling why you choose to stay (for years!) in a review pool   
   > in which you don't find the reviewers to be helpful at all; you   
   > find them lacking and dismiss every one of their points.   
   >   
   > How is that supposed to move you toward your goal?   
   >   
   > In the world, there is such a thing as the reviewers of an intellectual   
   > work being too stupid to be of use. But in such cases, the author   
   > quickly gets past such reviewers and finds others. Especially in cases   
   > where they are just volunteers from the population, and not assigned   
   > by an institution or journal.   
   >   
   > In other words, how is it possible that you allow reviewers you have
   > /found yourself/ in the wild, and whom you do not find to have
   > suitable capability, to block your progress?
   >   
   > (With the declining popularity of Usenet, do you really think that   
   > academia will suddenly come to comp.theory, displacing all of us   
   > idiots that are here now, if you just stick around here long enough?)   
   >   
   >>>> where Σ∗ is the set of all finite strings (program encodings).   
   >>>>   
   >>>> What H computes is determined entirely by those encodings and its own   
   >>>> transition rules.   
   >>>   
   >>> Great. D is such a string, and has one correct answer.   
   >>>   
   >>   
   >> That is where ChatGPT totally agrees that the   
   >> halting problem directly contradicts reality.   
   >   
   > You've convinced the bot to reproduce writing which states
   > that there is a difference between simulation and "direct execution",   
   > which is false. Machines are abstractions. All executions of them   
   > are simulations of the abstraction.   
   >   
   > E.g. an Intel chip is a simulator of the abstract instruction set.   
   >   
   > On top of that, in your x86_utm, what you are calling "direct
   > execution" is actually simulated.
   >   
   > Moreover, HHH1(DD) performs a stepwise simulation using
   > a parallel "level" and very similar approach to HHH(DD).   
   > It's even the same code, other than the function name.   
   > The difference being that DD calls HHH and not HHH1.   
   > (And you've made function names/addresses falsely significant in your   
   > system.)   
   >   
   > HHH1(DD) is a simulation of the same nature as HHH except for   
   > not checking for abort criteria, making it a much more faithful   
   > simulation. HHH1(DD) concludes with a 1.   
   >   
   > How can that not be the one and only correct result?
   >   
   >> “Formal computability theory is internally consistent,   
   >> but it presupposes that “the behavior of the encoded   
   >> program” is a formal object inside the same domain   
   >> as the decider’s input. If that identification is treated   
   >> as a fact about reality rather than a modeling convention,   
   >> then yes—it would be a false assumption.”   
   >>   
   >> Does this say that the halting problem is contradicting   
   >   
   > "Does this say?" That's your problem; you generated this with your
   > long chat with AI.
   >   
   > Before you finalize your wording paraphrased with AI and share it with
   > others, be sure to question yourself about what it says!
   >   
   > Doh?   
   >   
   >> reality when it stipulates that the executable and the   
   >> input are in the same domain because in fact they are   
   >> not in the same domain?   
   >   
   > No; it's saying that the halting problem is confined to a formal,   
   > abstract domain which is not to be confused with some concept of   
   > "reality".   
   >   
   > Maybe in reality, machines that transcend the Turing computational   
   > model are possible. (We have not found them.)   
   >   
   > In any case, the Halting Theorem is carefully about the formal   
   > abstraction; it doesn't conflict with "reality" because it doesn't   
   > make claims about "reality".   
   >   
   >> https://chatgpt.com/share/68ee799d-d548-8011-9227-dce897245daa   
   >>   
   >> Yes — that’s exactly what follows from your reasoning.   
   >>   
   >> It goes on and on showing all the details of how I   
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   