Forums before death by AOL, social media and spammers... "We can't have nice things"
|    comp.ai.fuzzy    |    Fuzzy logic... all warm and fuzzy-like    |    1,275 messages    |
[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]
|    Message 175 of 1,275    |
|    Uncle Al to Michael Michalchik    |
|    Re: Artificial Intelligence R&D Startup     |
|    29 Jan 04 08:10:10    |
   
   XPost: sci.electronics.design, sci.physics, sci.psychology.misc   
   XPost: sci.psychology.theory   
   From: UncleAl0@hate.spam.net   
      
   Michael Michalchik wrote:   
   >   
   > AdaptiveAI is a private R&D startup developing a ground breaking general   
   > artificial intelligence engine. We are seeking 2-3 additional team members   
   > with a passionate interest in AI, brain function and theories of cognition.   
   > Two types of positions are available, programming and experimental AI   
   > behavioral scientist/trainer. Past experience in AI is not necessary.   
   > Applicants that can contribute to the cognitive design process will be   
   > favored, but you must be able to work within our established paradigm.   
   > Candidates must be capable-eager learners, motivated and patient, computer   
   > savvy, hard working, good problem solvers and logical thinkers. Knowledge of   
   > C#, experimental psychology, test design, neural networks, statistics and   
   > scientific method are all pluses.   
   [snip]
      
   Intelligence isn't heir to a reductionist simulation (e.g., the CYC   
   Project; any of a number of conversationalist programs). Expert   
   programs like DENDRAL or MYCIN (or Deep Blue as a whole) simply run the
   probabilities and consequences to death. Neural networks, despite   
   their apparent cleverness in evolving their own rules, are the wrong   
   end of the funnel for intelligence. Remember the neural network for   
   identifying enemy tanks vs. friendlies. It became remarkably adept at   
   discriminating cloudy days - that being the major difference in the   
   photographs of the two classes of object used for its training.   
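
   The tank story is the canonical case of a model fitting a confound
   rather than the concept. A toy sketch of the failure mode (entirely
   synthetic data; the brightness feature stands in for "cloudy day",
   and none of this is the actual tank study):

```python
# Toy illustration of the tank anecdote: a linear classifier trained on
# data where a nuisance feature (image brightness, standing in for
# "cloudy day") tracks the label almost perfectly will learn the
# nuisance instead of the intended signal. All data here is synthetic.
import random

random.seed(0)

def make_data(n, confounded):
    """Each sample: ((tank_signature, brightness), label); label 1 = tank."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        tank = label + random.gauss(0, 1.0)         # weak, noisy true signal
        if confounded:
            bright = label + random.gauss(0, 0.05)  # near-perfect confound
        else:
            bright = random.random()                # brightness decorrelated
        data.append(((tank, bright), label))
    return data

def train_perceptron(data, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def accuracy(w, b, data):
    hits = sum((1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == y
               for x, y in data)
    return hits / len(data)

w, b = train_perceptron(make_data(200, confounded=True))
print("confound intact: ", accuracy(w, b, make_data(500, confounded=True)))
print("confound removed:", accuracy(w, b, make_data(500, confounded=False)))
```

   The classifier scores well while brightness tracks the label and
   degrades sharply the moment it stops - it learned the cloudy day,
   not the tank.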
      
   Bottom line: Whore for Homeland Severity. It makes little difference   
   what the end product is as long as it does something engaging on the   
   screen. If the operators must be extensively trained then you have a   
   good product. Remember what General Buck Turgidson ("Dr.   
   Strangelove") said about the psychological evaluation tests that   
   allowed General Jack Ripper to slip through the reliability sieve and   
   activate Plan R:   
      
   "You can't condemn a whole program because of one little slip-up."   
      
   One posits a silicon AI would have two characteristics shared by its
   organic creators:   
      
    1) Plastic instruction sets. One imagines the code, whether a   
   neural net or something else, will have to be genetic algorithms. The   
   tradeoff is capability vs. determinism. As with a baby, there is no   
   telling quite what you will get even if you know the parents. Genetic   
   algorithm code has been known to grab hardware other than the CPU. An   
   evolved AI may not be transportable to other hardware - unless it   
   wants to go.   
      
    2) Plastic hardware host. The brain is constantly rewiring   
   itself. We have no idea how memory is encapsulated, but we do know   
   that the structure of the processor is wildly negotiable, both   
   microscopically (dendrites) and macroscopically (e.g., stroke recovery   
   - and that in old folks). One imagines the CPU(s) must be   
   non-deterministically wired, their pathways evolved with the   
   software. The only hardware we have that could do that is spin-valve   
   CPUs, which are in a primitive state.   
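
   The "plastic instruction set" of point 1) can be sketched in a few
   lines. A minimal genetic algorithm (toy OneMax fitness, tournament
   selection, single-point crossover, per-bit mutation - illustrative
   choices only, not anyone's actual engine):

```python
# Minimal genetic algorithm sketch: OneMax toy fitness, tournament
# selection, single-point crossover, per-bit mutation. Illustrative
# choices only -- not anyone's actual AI engine.
import random

random.seed(1)
GENES, POP, GENERATIONS = 32, 40, 60

def fitness(genome):              # OneMax: count the 1-bits
    return sum(genome)

def tournament(pop):              # keep the fitter of two random picks
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):            # single-point crossover
    cut = random.randrange(1, GENES)
    return p1[:cut] + p2[cut:]

def mutate(genome, rate=0.02):    # flip each bit with small probability
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP)]

best = max(pop, key=fitness)
print("best fitness:", fitness(best), "of", GENES)
```

   Even here the capability-vs-determinism tradeoff shows: the run is
   reproducible only because the seed is pinned, and nothing about the
   winning genome was specified in advance.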
      
   Given software that mutates and evolves, living in hardware that   
   rewires itself on the fly, you have a real chance. How do you plan to   
   isolate the thing if it gets born? I'd run it off its own electrical   
   generator and have it be utterly physically isolated from the world in   
   all ways. No wireless connects! No modem.   
      
   First, in its own little universe it will think hugely faster than you   
   can (EEG traces are a few to a couple of dozen Hz, not GHz). Second,   
   it has not evolved higher structures for superego, nor has it been   
   shaped by culling evolution to be baby cute to its parents. The thing   
   will be all reptilian hunger. Hand-raising a crocodile gets you a   
   hungry crocodile. Even the cat family diverged into one branch that   
   will bond with humans if cuddled from birth, and another that never   
   cares beyond its belly and carnivore anger no matter what.   
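
   The speed claim above is simple arithmetic (a deliberately naive
   clock-for-clock comparison; cycle rate is not "thought rate" for
   either substrate):

```python
# Back-of-the-envelope arithmetic behind "thinks hugely faster":
# raw cycle rates only -- a deliberately naive comparison, since
# clock rate is not "thought rate" for either substrate.
eeg_hz = 40     # upper end of common EEG rhythms (gamma band)
cpu_hz = 3e9    # a commodity ~3 GHz processor
ratio = cpu_hz / eeg_hz
print(f"clock-for-clock ratio: {ratio:.1e}x")
```

   Roughly eight orders of magnitude, on this crude measure.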
      
   "When HARLIE Was One" (updated version), David Gerrold.   
   "The Cybernetic Samurai," Victor Milan.   
      
   Make one, make two. If you make three separate successful AIs, you   
   will have a war on your hands... 2:1 or 3 against the world. In the   
   latter case, it won't stay 3 for long.   
      
   Good luck! Destroying the civilized world is not a bad bottom line   
   (and Muslims will love you for it).   
      
   --   
   Uncle Al   
   http://www.mazepath.com/uncleal/qz.pdf   
   http://www.mazepath.com/uncleal/eotvos.htm   
    (Do something naughty to physics)   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   
|
(c) 1994, bbs@darkrealms.ca