Forums before death by AOL, social media and spammers... "We can't have nice things"
|    comp.ai.philosophy    |    Perhaps we should ask SkyNet about this    |    59,235 messages    |
[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]
|    Message 57,870 of 59,235    |
|    dbush to olcott    |
|    Re: Some decision problems are only "und    |
|    13 Aug 25 22:56:40    |
XPost: comp.theory, sci.logic
From: dbush.mobile@gmail.com

On 8/13/2025 10:29 PM, olcott wrote:
> On 8/13/2025 9:02 PM, dbush wrote:
>> On 8/13/2025 9:56 PM, olcott wrote:
>>> On 8/13/2025 8:45 PM, dbush wrote:
>>>> On 8/13/2025 9:40 PM, olcott wrote:
>>>>> How many tests that are black in color are entirely
>>>>> white in color and the answer must be a positive
>>>>> integer and must come with proof that it is correct.
>>>>
>>>> Error: Assumes that something can be entirely black and entirely white
>>>>
>>>>>
>>>>> What time is it (yes or no) ?
>>>>
>>>> Error: Assumes that the answer can be yes or no
>>>>
>>>>>
>>>>> Is this sentence true or false: "This sentence is not true" ?
>>>>> The above is the basis for the Tarski undefinability theorem.
>>>>
>>>> Error: Assumes that the sentence can have a truth value
>>>>
>>>
>>> Yes and by saying that you have proven that you
>>> understand the Liar Paradox much better than every
>>> expert on the philosophy of logic in the world.
>>> The very best expert in the sub-field of truthmaker
>>> maximalism said that the Liar Paradox might not
>>> have a truth value.
>>
>> They all understand that.
>>
>> What you don't understand is that if you assume that a truth predicate
>> exists, then by performing a set series of truth preserving operations
>> we reach the conclusion that the liar paradox does have a truth value.
>>
>
> I have the actual Tarski proof and it does not go
> that way at all.
>
> https://liarparadox.org/Tarski_275_276.pdf

That's exactly how it goes. You just don't understand it, just like you
don't understand the halting problem proof.

>
>> Therefore no truth predicate exists.
>>
>> Once again, you're proving you don't understand proof by contradiction.
>>
>
> When the halting problem shows that there is an
> input that does the opposite of whatever the halt
> decider decides

So you start with the assumption that a halt decider exists, i.e. you
have an H that meets these requirements:

Given any algorithm (i.e. a fixed immutable sequence of instructions) X
described as
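[Archivist's sketch] The Tarski argument dbush appeals to above can be reconstructed in standard notation. This is a sketch of the textbook undefinability proof, not text from the thread: assume a truth predicate T is definable; the diagonal lemma produces a liar-style sentence L, and chaining L's fixed-point property with the T-schema yields a contradiction.

```latex
% Assume an arithmetically definable truth predicate $T$.
% The diagonal lemma gives a sentence $L$ such that:
\[
  L \;\leftrightarrow\; \neg T(\ulcorner L \urcorner)
\]
% Any truth predicate must satisfy the T-schema for every sentence,
% in particular for $L$:
\[
  T(\ulcorner L \urcorner) \;\leftrightarrow\; L
\]
% Chaining the two biconditionals:
\[
  T(\ulcorner L \urcorner) \;\leftrightarrow\; \neg T(\ulcorner L \urcorner)
\]
% Contradiction. Hence no such $T$ is definable (Tarski undefinability).
```

This is the "assume it exists, derive a contradiction, conclude it doesn't" shape that the post describes; the liar sentence is the witness, not the conclusion.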
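[Archivist's sketch] The "input that does the opposite of whatever the halt decider decides" construction in the message above can be put in a few lines of Python. This is an illustrative sketch only; the names `halts`, `make_pathological`, and `D` are the editor's, not anything defined in the thread, and `halts` is the hypothetical decider assumed to exist.

```python
def make_pathological(halts):
    """Build the diagonal program D that does the opposite of
    whatever the hypothetical decider `halts(prog, arg)` reports."""
    def D(prog):
        if halts(prog, prog):   # decider claims prog halts on itself...
            while True:         # ...so D loops forever instead
                pass
        else:
            return None         # ...decider claims it loops, so D halts
    return D

# If `halts` were total and correct, D(D) would halt iff it doesn't:
#   halts(D, D) == True   ->  D(D) loops forever  ->  halts was wrong
#   halts(D, D) == False  ->  D(D) halts          ->  halts was wrong
# Either way the assumed decider gives the wrong answer on D, so no
# such decider can exist: proof by contradiction, as dbush says.
```

Running `make_pathological` against any candidate decider exhibits the contradiction on that candidate's own answer; the construction never needs to know *how* the decider works.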
(c) 1994, bbs@darkrealms.ca