home bbs files messages ]

Forums before death by AOL, social media and spammers... "We can't have nice things"

   talk.philosophy.humanism      Humanism in the modern world      22,193 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 20,330 of 22,193   
   quibbler to All   
   Re: On Ray Kurzweil (1/2)   
   19 Mar 06 18:11:31   
   
   XPost: alt.philosophy, alt.atheism   
   From: quibbler247@yahoo.com   
      
   In article <0U7SVbGAObHEFwkJ@eddlewood.demon.co.uk>,   
   ralph@eddlewood.demon.co.uk says...   
   > In message , quibbler   
   > Two questions. How do you measure progress in quantitative terms? OK,   
   > cpu power is easy, but the others?   
      
   Well, people are trying to operationalize some of those more   
   specifically.  In the case of nano-devices it will be how small and how   
   accurately we can make things, as compared to one's desired design.  In the   
   case of biology it will be the number of base pairs one can sequence per   
   second in a genome, the total number of species one can classify, the   
   amount of time we can extend the life of a rat beyond the expectancy of   
   controls, etc.  In electronics it will be how small and fast and   
   efficient we can make things like transistors as well as how parallel and   
   three-dimensionally interconnected we can make them.  In terms of   
   information, it will be the number of bytes and nodes and keys we have in   
   our schemas or databases, as well as how quickly we can retrieve data.   
      
      
   > And do we have any evidence that   
   > progress in these fields will follow what we consider to be "normal"   
   > growth curves?   
      
   Well, the technology used for miniaturization of transistors is   
   driving nano-technology.  The fast electronics that we have today are   
   also making faster gene identification and sequencing possible.   
      
   Things like life expectancy have, likewise, been highly influenced by   
   digital technologies like CAT scanners and MRIs.  It may be too soon to   
   say whether the rate of life extension is growing exponentially, because   
   we may be further back on the gradual slope of the initial curve.   
   Technically speaking, while we have increased *average life expectancy*, we   
   haven't necessarily increased maximum life expectancy by that much.  The   
   current estimate is that life expectancy is increasing by 0.1 years per   
   chronological year.  Obviously, if we could accelerate it by an order of   
   magnitude then things would get really interesting.   
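   The arithmetic behind that can be sketched out.  This is a back-of-the-envelope   
   simulation, not real actuarial data: the starting age of 40 and base expectancy   
   of 80 are made-up inputs; only the 0.1 years-per-year rate and the hypothetical   
   10x acceleration come from the discussion above.   

```python
def years_remaining(age, base_expectancy=80.0, gain_per_year=0.1):
    """Years of life left for someone of `age`, assuming total life
    expectancy rises linearly by `gain_per_year` each calendar year."""
    expectancy = base_expectancy
    years = 0
    while age < expectancy:
        age += 1                      # one year of life passes...
        expectancy += gain_per_year   # ...while expectancy creeps upward
        years += 1
        if years > 10_000:            # at >= 1.0 yr/yr the gap never closes
            return float('inf')
    return years

print(years_remaining(40, gain_per_year=0.0))  # -> 40 (no progress at all)
print(years_remaining(40, gain_per_year=0.1))  # -> 45 (today's rate: ~5 bonus years)
print(years_remaining(40, gain_per_year=1.0))  # -> inf (order-of-magnitude jump:
                                               #    expectancy outruns aging)
```

   At 0.1 years per year the trend only buys a 40-year-old about five extra years,   
   but at 1.0 years per year expectancy recedes as fast as you age, which is why an   
   order-of-magnitude acceleration would make things "really interesting."   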
      
      
   >   
   > > As I'm sure you know,   
   > >that is the singular point and it could be the equivalent of centuries of   
   > >technological progress by previous standards.   
   > >   
   > You could say we'd had the equivalent of centuries of change in the last   
   > decade, depending where you start.   
      
   Definitely.  Of course, our human capacities to absorb this change   
   haven't been enhanced much.  We have caffeine and ritalin and various   
   more promising neurostimulants like ampakines.  But we're really just in   
   the infancy of enhancing human performance.  To an extent our digital   
   electronics can make us seem smarter.  Small wireless devices allow us to   
   look up data which we would formerly have to go research in a library.   
   Software does all the hard parts of various complicated kinds of design   
   processes and can allow someone with only minimal clue to produce   
   elaborate finished products.   
   I think it will get much better.  I think we'll have cpus wired into our   
   corpus callosums, so that we can think about a question and have data   
   instantly appear from an encyclopedia.  I think we'll have tiny cameras   
   implanted in the blind spots of our eyes that will allow us to record   
   every event that we see and then search back through it as needed.  I   
   think that we will interact with a host of expert systems housed in the   
   ventricles of our brains and also wirelessly with large numbers of other   
   like-minded people.  Products that used to take a team of engineers a   
   year to roll out, might be produced by single individuals in months or   
   weeks or days.   
   Obviously this technology isn't ready to go tomorrow, but we are   
   approaching some of these things.  If these technologies, by enabling   
   individuals to respond to information much more rapidly, really made   
   people smarter, then we could achieve some amazing things.  If we could   
   artificially create some Richard Feynmans and some James Clerk Maxwells,   
   then it might enable formerly average people to make breakthroughs.   
      
      
      
      
   > >   
   > This is, I believe, the really difficult area. That computers will be   
   > more intelligent than we are within twenty years raises very important   
   > questions. If we make rules to prevent the making of machines which   
   > might take us over, we shall be called Luddites.   
      
      
   Not to worry, because no such rules are possible.  A sufficiently smart   
   computer will eventually figure out a way around any clumsy type of rules   
   we can come up with.  IA and giant brains in vats might keep up with the   
   AIs for a while, or perhaps we could create little matrix worlds, where   
   we used one AI to keep an eye on another one.  However, ultimately, we   
   can't rely upon any system to prevent such things.  Of course, even when   
   such entities firmly gained the upper hand, that wouldn't mean, contrary   
   to many speculations, that the computers would seek to destroy us.  They   
   might not even consider us worth wasting their time upon.  Or they might   
   decide that it would be best to keep us all as happy pets, satisfying our   
   every desire, which would be easy for them, so that we would have no   
   incentive to even try to challenge them.  The problem, of course, is that   
   one can infinitely speculate about different such scenarios.   
      
      
      
   > More importantly, some   
   > will probably not comply with such rules.   
      
      
   Right.  Asimov's Laws of Robotics were supposed to be logically iron-clad   
   and wired into the brains of all robots, so that they could not even think   
   of breaking them if they wanted to.  But it's not convincing that one could   
   really set down a set of rules specific enough to cover every situation,   
   or your robot would have to think for 10 minutes before it took one step.   
      
      
   >   
   > But it will also be enormously difficult to ensure that such advances   
   > benefit mankind as a whole, rather than merely increasing our present   
   > divisions.   
      
   Disruptive, asymmetric technologies can empower people lower down in the   
   food chain.  The hope would probably be that the technology would be   
   sufficient so that we don't have to be quite so selfish and petty to each   
   other any more.  But I'm not sure if that will be the reality.   
      
      
      
   > Many of these are due to a lack of political will, rather   
   > than a lack of intelligence.   
      
   True, but if we all had giant photosynthetic ears then it would be harder   
   to intentionally starve large masses of people... without going around   
   and cutting off all their ears.  And if they could cut off their ears,   
   they could slit their throats.  But ideally, alleviating hunger will go a   
   long way toward alleviating the need to fight so much.   
      
      
   > If we have reached an irreversible point in   
   > global warming, the only use of enhanced intelligence may be to take a   
   > few humans to somewhere else in the solar system.   
      
   We will definitely want space colonies, probably built with lunar   
   regolith, a la Gerard O'Neill.  By getting some humans off the earth we   
   increase the odds of mankind surviving even if catastrophic events occur.   
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]


(c) 1994,  bbs@darkrealms.ca