home bbs files messages ]

Forums before death by AOL, social media and spammers... "We can't have nice things"

   alt.philosophy      Didn't Freud have sex with his mother?      170,335 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 169,923 of 170,335   
   D to Ed Cryer   
   Re: Secondary brains   
   18 Mar 25 22:20:36   
   
   From: nospam@example.net   
      
   On Tue, 18 Mar 2025, Ed Cryer wrote:   
      
   > D wrote:   
   >   
   >> Another techno-religion aspect is the equating of AI with god. I am very
   >> uncomfortable with some transhumanists. Some of them seem almost religious
   >> in their belief in technology. I think whereas traditional religious people
   >> like to play with concepts outside of time and space, transhumanists draw
   >> out the time axis way too much and assume science will just continue to
   >> progress exponentially. This can cross over into religion.
   >   
   > I've not come across the idea of equating AI with God or a god. The fear I see
      
   Very common in transhumanist circles.   
      
   > is the dawn of consciousness in AI, and then the condemning of homo sapiens by
   > it, followed by wiping us out as deleterious to the earth and the better life.
   > Either that or we try to stop conscious AI and they fight back.
      
   I think this is a mistake. Technological progress will march on regardless of
   any limits or moratoriums. The question is... do you want many countries,
   democracies and authoritarian regimes alike, to reach that point independently
   of each other, or do you want an authoritarian regime to be the first to
   discover AGI in a hidden lab somewhere?
      
   Also, when assigning infinite negative value to the outcome, all risk   
   calculations become meaningless. We have no reason to assign infinite negative   
   values. We do research on nuclear and biological warfare in public, and in   
   hidden labs. They could both be assigned infinite negative value, and yet, they   
   continue. AI will be the same.   
      
   I argue that more openness and more AI will be what saves us. Not the reverse.   
      
   >   
   > I think the recent new algorithms of OpenAI have promoted this fear, with the   
   > high quality of good written language that they use. They pass the old Turing   
   > test easily.   
      
   I only have the Ollama of duck.ai to play with, and what I see will pass away
   as a fad in a year or so, and we'll have ourselves an AI crash.
      
   I also don't think OpenAI would pass the Turing test "as is". It would take a
   lot of effort to make it pass. Goals, volition and a sense of
   self-preservation are lacking, and I would beat any AI by being quiet. A human
   will eventually say "hello?" and no AI I have seen has ever done that.
      
   I'm thinking about creating a 1000 USD prize for AIs that can beat an updated
   Turing test, focused on the three things above.
      
   > Ed   
   >   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]


(c) 1994,  bbs@darkrealms.ca