XPost: comp.robotics.misc, comp.ai.philosophy   
   From: lesterDELzick@worldnet.att.net   
      
   On 27 Feb 2005 17:27:46 -0800, "dave.harper"    
   in comp.ai.philosophy wrote:   
      
   >   
   >Lester Zick wrote:   
   >> On 26 Feb 2005 15:44:40 -0800, "dave.harper"    
   >> in comp.ai.philosophy wrote:   
   >>   
   >> >   
   >> >Lester Zick wrote:   
   >>   
>> >> >> On 24 Feb 2005 07:13:34 -0800, "dave.harper"   
>> >> >> in comp.ai.philosophy wrote:   
   >>   
>> >You missed my point. If religious reasons are one of the motivations   
>> >behind research, then the "end product" can be skewed to favor those   
>> >motivations. For instance, if research produced a result that   
>> >supported evolution, then many scientists that want continued funding   
>> >from religion-oriented sources might re-word or omit some of their   
>> >findings to appease those that provided the means.   
   >>   
>> And you missed my point that if motivations are subjective, every   
>> motivation would skew results, because subjective considerations   
>> cannot be objectively identified and compensated for.   
   >   
>In order to apply your argument to reality, you'd have to believe that   
>all motivations skewed results equally. I find that assumption flawed.   
>If that were true, then third-party and independent studies wouldn't be   
>any more valid in the public's opinion than others.   
      
   Not at all. I don't have any reason to suppose they're skewed equally.   
   They're just all skewed.   
      
>> >How about research that benefits humanity, and isn't skewed by   
>> >biased sources?   
   >>   
>> What exactly benefits humanity according to what biases? I think   
>> you'll find the only answers are utilitarian measures or your own   
>> biases and subjective ideas of relative value.   
   >   
   >And I contend that all biases are not equal.   
      
And I never contended they were equal. You're the one who claimed, for   
some incomprehensible reason, that I contended they're equal.   
      
>How can you prove an AI's programmed biases would be worse than a   
>human government's?   
      
I can't. What I can prove is that an ai's programmed biases would not   
be human and not those of the governed, and that a government's biases   
may not be those of the governed either, but they are much closer to   
them than the programmed biases of a non-human.   
      
>As I mentioned before, an open-source AI wouldn't be able to hide   
>biases as easily as a human government.   
      
   And you know this how?   
      
   >> >> Motivations are what drive research and motivations are   
   >> >> subjective. You don't like religious motivations and I agree. It   
   >> >> doesn't, however, speak to the significance of the science that   
   >> >> results from religious motivations as opposed, say, to academic   
   >> >> career advancement motivations.   
   >> >   
   >> >If the results are skewed towards the motivations, as stated above,   
   >> >then the science is distorted.   
   >>   
   >> The ends science aims at are always distorted by the subjective   
   >> motivations of those ponying up the bucks. Let me know when you   
   >> find a science that isn't distorted by the motivations of those doing   
   >> or paying for the science.   
   >   
>And again, all motivations aren't equal-magnitude "skewers".   
>Religious-backed research, which is likely to overlook results pointing   
>to evolution or the like, is less likely to produce valid results in   
>some fields. For example, say a researcher is motivated to cure a type   
>of cancer. If he finds a drug that works via a biological mechanism   
>that lends credence to evolution, do you really think a religious   
>funder would be just as likely to pursue the drug as a non-religious   
>funder? And on the other hand, if the drug lent proof of God, do you   
>think the non-religious funder would be as unlikely to fund it as the   
>religious backer would be in the first scenario?   
      
Well, this is all very ad hoc secular mind-bending. I don't hold any   
special brief for religion. But all you're doing is making a general   
attack on religious motivation without so much as a by-your-leave.   
      
>> >> >As opposed to leaving political decisions to the tender mercies   
>> >> >of career politicians, some of whom are corrupt and have hidden   
>> >> >agendas, as well as being hampered by special interest groups   
>> >> >that eliminate many options that an AI wouldn't have to   
>> >> >eliminate...   
   >> >>   
   >> >> I'd rather leave political decisions to the tender mercies of me.   
   >> >   
>> >I'm sure many people back in the '50s would have said "I'd rather   
>> >leave piloting to the tender mercies of me, not some computer". If   
>> >you've flown in the past decade, chances are your life was in the   
>> >hands of a computer for some of the flight. Having said that,   
>> >current computers are far from being able to make political   
>> >decisions... just keep in mind that perspectives about technology   
>> >change as technology's capacities increase and are proven.   
   >>   
>> There's a huge difference between computers and technology as tools   
>> and computers and technology as substitutes, especially for   
>> utilitarian measures of subjective values for people.   
   >   
>Currently, yes... But how do you know what potential new technology   
>and discovery holds for the future? That statement may join the ranks   
>of other quotes from the past.   
      
New technology for subjective mechanics? By all means. That will just   
make ai artifacts subjective, and we can all stop pretending they're   
gods who'll do for us what we can't do for ourselves. If you want to   
believe in god, I suggest there are undoubtedly less expensive   
alternatives.   
      
   >That statement reminds me of a few other quotes:   
   >   
   >"Radio has no future."   
   >- Lord Kelvin (1824-1907), British mathematician and physicist, ca.   
   >1897.   
   >   
   >"What can be more palpably absurd than the prospect held out of   
   >locomotives traveling twice as fast as stagecoaches?"   
   >- The Quarterly Review, England (March 1825)   
   >   
   >"This `telephone' has too many shortcomings to be seriously considered   
   >as a practical form of communication. The device is inherently of no   
   >value to us."   
   >- Western Union internal memo, 1878   
   >   
   >> I have no idea why   
   >> you think your subjective and utilitarian motivations are pure and   
   >> those of others are not or that those of an ai artifact would not be.   
   >   
>They're not, and I never said they were "pure" on a "perfect value"   
>yardstick. What I did say was that an AI governing tool/government,   
>which may or may not uphold my values, might be LESS biased (via   
>transparency) and more efficient than a human-based government.   
      
   Well it would sure as hell be a more efficient, transparent, and less   
   biased upholder of the programmer's values. That was never the issue.   
   The issue is and always has been whether it would be a more efficient,   
   transparent, and less biased upholder of the governed's values. Quite   
   often the real motivation of people advancing such arguments is to   
      
   [continued in next message]   
      