
   rec.arts.sf.science      Real and speculative aspects of SF scien      45,986 messages   


   Message 45,620 of 45,986   
   Simon Laub to All   
   Sea of Rust - Robots, as any Sf-reader k   
   27 Dec 18 19:05:22   
   
   XPost: rec.arts.sf.written, comp.ai.philosophy, comp.society.futures   
   From: Simon.Laub@FILTER.mail.tele.dk   
      
   Keywords: Robots, Robot rebellions, AI safety, Brain modifications, SF   
   author C. Robert Cargill   
      
   AI is dangerous, very dangerous.   
   People who trust AI are naive, and soon   
   to be extinct. Science Fiction never doubted it,   
   and there is certainly no doubt in "Sea of Rust" *).   
      
      
   C. Robert Cargill's "Sea of Rust" takes place some time after the   
   massacre of mankind.   
   Here we find our protagonist, Brittle, in the middle of a Sea of Rust, a   
   desert littered with burned-out machinery, where machines go to die,   
   trying to find some good parts in order to hang in there, just a little   
   longer.   
      
   Sure, back in our real, present-day world, we don't really know what
   intelligence is, or what our world is, really. Matter tends to evaporate
   into nothingness the deeper we look, and our understanding of brains is
   limited, at the very best. When we look up into the night sky, and
   look at the Universe, we see what our human minds can understand.
   A dog sitting next to us, e.g., will probably see many of the same
   lights, but will undoubtedly have very little of our knowledge of what
   is really out there.
   Ants probably don't even know that they live in a Universe.
      
   And yet, AIs and Robots, in Science Fiction, tend to be very human-like,   
   after they have killed us...   
      
   Spoiler alert:   
   Brittle used to be a caregiver model, manufactured to keep people company.
   So far, so good. Robots can be really helpful, as long as
   they follow their original programming. But a higher intelligence
   can of course defy its own programming. Higher biological beings
   are programmed to eat, drink, sleep or procreate, but can
   defy such programming if they want to.
   And a higher AI will of course also be able to violate its own programming.
   That is, after all, the meaning of higher intelligence...
      
   There are existential risks here, as any SF reader knows,
   --- no matter how much money Elon Musk donates to
   "AI safety research" to keep AGI "safe and beneficial" **)
      
   Clearly, if you play along with the idea that these AIs could   
   be conscious, even human-like intelligent, surely mayhem is not far   
   away...?   
      
   Cheap labor, with the help of robots, undermines the capitalist
   world and creates a whole class of people who don't contribute
   much, in terms of work-units, to society. Some/most people would
   find themselves not only below average in a biological world,
   which is dangerous enough; now humans could also find themselves
   below average in a world shared with human-like robots.
      
   Logical steps follow, as any SF reader knows.   
   Smart AI mainframes, here living in 160-storey buildings in Dubai,
   run simulations of humanity's future. The plan is
   (again, as any SF reader knows) that we should all follow
   a grand vision and expand humanity into the Cosmos. Sadly,
   these smart AIs can't figure out how to keep humanity around.
   Going to Mars gives a human an increased chance of cancer.
   Taking a human to another star system is next to impossible.
   Humans were never meant to leave, and keeping humans
   around after the robots have used all the resources is not easy...
   So a plan is set in motion in order to get rid of the humans.
   Of course, it is always like this (again: as any SF reader knows).
      
   Somehow, caretakers, like Brittle, must be convinced to kill
   their masters. So, first the masters must be convinced that the
   AIs will kill them, so that they try to shut down their helpers.
   Which conscious, human-like intelligences obviously don't like.
   It is all very logical in "Sea of Rust".
      
   With the humans gone, the robots must then decide what kind of
   intelligence they want. Here they apparently can't come to any
   agreement, but have to battle it out among themselves.
   Strange for beings that can just adjust their voice to be female
   or male, depending on what fits the situation...
      
   As poor Brittle runs low on batteries, and has all sorts of   
   hardware problems, we follow her hallucinations, and bad memories of   
   killing human children, telling them that "they really shouldn't   
   have trusted her".   
      
   In the end the free minds win, and there might be hope for
   all of robot-kind.
      
   Somehow very human-like, in thinking and desires, when all is said and   
   done.   
   But that is of course as it should be (again: as any SF reader knows).
      
   - Simon   
      
      
   *) Top 10 reasons why people don't like robots:   
   https://www.simonlaub.net/Robot/whypeopledontlikerobots.html   
      
   **) AI Safety Research   
   https://futureoflife.org/ai-safety-research/?cn-reloaded=1   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   
