

   talk.philosophy.humanism      Humanism in the modern world      22,193 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 20,327 of 22,193   
   ralph to quibbler247@yahoo.com   
   Re: On Ray Kurzweil (1/2)   
   19 Mar 06 19:39:44   
   
   XPost: alt.philosophy, alt.atheism   
   From: ralph@eddlewood.demon.co.uk   
      
   In message , quibbler writes
      
   ... some interesting quibbles.   
      
   >In article <1142765537.070980.141460@e56g2000cwe.googlegroups.com>,   
   >joseph@humanisation.org says...   
   >> Now, I've had a little think about Mr. Kurzweil's prognostications and
   >> I'm not as impressed as I was initially.
   >   
   >I'm not sure how impressed you were initially, so that doesn't   
   >necessarily help.   
   >   
   >   
   >   
   >> What's he saying?   
   >   
   >   
   >He's saying a lot of things actually, and it's difficult to condense that   
   >down to a sentence or two.   
   >   
   >   
   >   
   >> That advancing   
   >> technology and artificial-intelligence will quite soon, and quite   
   >> suddenly, push us into a post-human phase.   
   >   
   >Well, it's not that soon by human standards.  It's still decades away   
   >according to even his estimates.  Furthermore, the post-human possibility   
   >is just one of many.  One can't really predict what such rapidly   
   >advancing technology will yield.  However, the rate of progress in nano,   
   >electro, bio and info industries appears to be modelled logistically.   
   >The inflection point of this curve will eventually be reached and that   
   >will represent the period of most rapid change.   
      
   Two questions. How do you measure progress in quantitative terms? OK,   
   CPU power is easy, but the others? And do we have any evidence that
   progress in these fields will follow what we consider to be "normal"   
   growth curves?   
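   The logistic model being debated above is easy to make concrete. A minimal
   numerical sketch (my own illustration, not anything from Kurzweil; the
   ceiling L, steepness k and midpoint t0 are arbitrary parameters) shows why
   the inflection point is the period of most rapid change:

```python
import math

def logistic(t, L=1.0, k=1.0, t0=0.0):
    """Logistic curve: slow start, rapid middle, saturation at ceiling L."""
    return L / (1.0 + math.exp(-k * (t - t0)))

def growth_rate(t, L=1.0, k=1.0, t0=0.0):
    """First derivative, k * P * (1 - P/L); it peaks at the inflection point."""
    p = logistic(t, L, k, t0)
    return k * p * (1.0 - p / L)

# Scan a range of times: the fastest change falls at t = t0, where P = L/2.
ts = [i / 10.0 for i in range(-100, 101)]
fastest = max(ts, key=growth_rate)
print(fastest)            # 0.0 -- the inflection point t0
print(logistic(fastest))  # 0.5 -- half the ceiling L
```

   Note that whether progress in nano, bio or info industries actually follows
   such a curve is exactly the open question raised here; the sketch only
   shows what the claim means, not that it is true.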
      
   > As I'm sure you know,   
   >that is the singular point and it could be the equivalent of centuries of   
   >technological progress by previous standards.   
   >   
   You could say we've had the equivalent of centuries of change in the
   last decade, depending on where you start.
   >   
   >> Okay, it has an aura of   
   >> plausibility.  It's an advance on the kind of thing Alvin   
   >> Toffler was saying twenty-five years ago
   >   
   >Toffler's predictions were about like predicting that the tides would   
   >come in.  He didn't really understand the dynamics.  He just compiled   
   >databases of buzzwords from newspapers, counted them up and then   
   >blabbered about trends.  He wrote a number of books with tons of vague   
   >predictions.   
   >   
   >   
   >> when he spoke of the Knowledge   
   >> Revolution.   
   >   
   >   
   >He was hardly the first person to observe this and many other people have   
   >provided much greater insight into the phenomena.   
   >   
   >   
   >   
   >> Now, he was right, ol' Alvin, and I never thought he would
   >> be.   
   >   
   >Why didn't you think that?  He was only stating what was obvious to most
   >people in any high-tech industry at the time.
   >   
   >   
   >> We here on this Usenet are proof of how right he was.   
   >   
   >Not really.  Usenet clearly preceded Toffler.   
   >   
   Only up to a point. I don't remember anyone predicting that a company   
   with a search engine product would be (however briefly) capitalised   
   ahead of GM.   
   >   
   >   
   >> But he was   
   >> also grievously wrong - because he underestimated greatly the ongoing
   >> complexity of human existence.   
   >   
   >Toffler purposely didn't worry about that, because he was just talking   
   >about general trends.  He didn't really explore the underlying mechanics   
   >of "why" or even "how".  He was trying to figure out the "what" side of   
   >the equation.  As to the issue of human existence, the idea of   
   >transhumanism is precisely to deal with the complex set of limitations   
   >which make it difficult for humans to incorporate new technology.  IA or   
   >intelligence amplification, for example, is an integral part of the   
   >agenda, because, without it, humans might easily be overwhelmed by   
   >advancing technology.   
   >   
   This is, I believe, the really difficult area. That computers will be   
   more intelligent than we are within twenty years raises very important   
   questions. If we make rules to prevent the making of machines which   
   might take us over, we shall be called Luddites. More importantly, some   
   will probably not comply with such rules.   
      
   But it will also be enormously difficult to ensure that such advances   
   benefit mankind as a whole, rather than merely increasing our present   
   divisions. Many of these are due to a lack of political will, rather   
   than a lack of intelligence. If we have reached an irreversible point in   
   global warming, the only use of enhanced intelligence may be to take a   
   few humans to somewhere else in the solar system.   
      
   As I've said before, it would be more helpful to have more wisdom rather   
   than more intelligence, but I'm not aware of anyone working on that.   
   >   
   >> Life can be "transformed" endlessly -   
   >> but we still have to get on with the business of living with each   
   >> other.   
   >   
   >That's already accounted for extensively by many extropian analyses.   
   >Maybe you should do some research on it.   
   >   
   Any names to offer?   
   >   
   >   
   >> I suspect Mr. Kurzweil has performed the same kind of finesse. By
   >   
   >Kurzweil and other transhumanists don't ignore complex sociological,   
   >psychological and political factors, though clearly much remains to be   
   >done.   
   >   
   >   
   >> focusing on change we diminish continuity. A more profound analysis   
   >   
   >   
   >Everyone claims a "more profound" analysis than all the work that they   
   >haven't even read by Kurzweil and the rest of the transhumanist   
   >community.   
   >   
   >   
   >   
   >> of   
   >> our human future must take into account not merely change but also our   
   >> ongoing and, indeed, enduring needs.   
   >   
   >But our needs and therefore our values may not be anything like what they   
   >are today when we approach singularity.  Technology may address all our   
   >most pressing problems today.  In any event, we may no longer be human,
   >in the sense that an unaugmented human may then seem to us the way a
   >chimpanzee seems to us now.
      
   That doesn't encourage me much. I would suggest that, if things go the   
   way that you hint that they might, we'll have at least 5.5bn   
   "unaugmented humans"! This general drift sounds like going back to   
   (Aldous) Huxley rather than forwards.   
   >   
   >   
   >> It suggests, in fact, that what is holding us back   
   >> is the lack of such a vision as Humanisation,   
   >   
   >   
   >Clearly we will need many visions if we are to survive and function in a   
   >high tech future.  I agree that there are many primitive institutions and   
   >misunderstandings that hold humans back today.  However, as the humans   
   >augment their intellects, most will finally have the capacity to cast off   
   >memes like religion and pursue greater cooperation.   
      
   I'm afraid that the correlation between religion and stupidity is   
   nowhere near as high as I (and probably you) would wish. Certainly   
   greater intelligence does not produce more cooperation. Here not only   
   wisdom, but some humility is required, again not a common correlate with   
   intelligence.   
      
   >  If we could hand out   
   >smart pills then the majority of the human population could become   
   >secular overnight.
   >   
   If this were true it would prove that the average EU citizen was smarter   
   than the average citizen of the U.S. Even though I fall into the former
   category, I do not believe this to be the case. Joseph is right to point
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca