XPost: comp.robotics.misc, comp.ai.philosophy   
   From: lesterDELzick@worldnet.att.net   
      
   On 26 Feb 2005 15:44:40 -0800, "dave.harper"    
   in comp.ai.philosophy wrote:   
      
   >   
   >Lester Zick wrote:   
   >> On 26 Feb 2005 07:52:35 -0800, "dave.harper"    
   >> in comp.ai.philosophy wrote:   
   >>   
   >> >   
   >> >Lester Zick wrote:   
   >> >> On 24 Feb 2005 07:13:34 -0800, "dave.harper"
   >> >> in comp.ai.philosophy wrote:
   >> >1. It just means that the researchers are more likely to be biased   
   >> >towards results that don't conflict with any religious beliefs.   
   >>   
   >> Science is pursued for a variety of subjective reasons not open to   
   >> scrutiny. What's important is the end product and not the motivation.   
   >   
   >You missed my point. If religious reasons are one of the motivations   
   >behind research, then the "end product" can be skewed to favor those   
   >motivations. For instance, if research produced a result that   
   >supported evolution, then many scientists that want continued funding   
   >from religion-oriented sources might re-word or omit some of their   
   >findings to appease those that provided the means.   
      
   And you missed my point that if motivations are subjective every   
   motivation would skew results because subjective considerations   
   cannot be objectively identified and compensated for.   
      
   >> >2. It means that some other valid research isn't pursued based on   
   >> >religious reasons.   
   >>   
   >> What's the criterion for valid research as opposed to invalid   
   >> research?   
   >   
   >How about research that benefits humanity, and isn't skewed by biased   
   >sources?   
      
   What exactly benefits humanity, and according to whose biases? I think
   you'll find the only answers are utilitarian measures or your own
   biases and subjective ideas of relative value.
      
   >> Motivations are what drive research and motivations are   
   >> subjective. You don't like religious motivations and I agree. It   
   >> doesn't, however, speak to the significance of the science that   
   >> results from religious motivations as opposed, say, to academic   
   >> career advancement motivations.   
   >   
   >If the results are skewed towards the motivations, as stated above,   
   >then the science is distorted.   
      
   The ends science aims at are always distorted by the subjective   
   motivations of those ponying up the bucks. Let me know when you   
   find a science that isn't distorted by the motivations of those doing   
   or paying for the science.   
      
   >> >As opposed to leaving political decisions to the tender mercies of
   >> >career politicians, some of whom are corrupt and have hidden agendas,
   >> >as well as being hampered by special interest groups that eliminate
   >> >many options that an AI wouldn't have to eliminate...
   >>   
   >> I'd rather leave political decisions to the tender mercies of me.   
   >   
   >I'm sure many people back in the 50's would have said "I'd rather leave   
   >piloting to the tender mercies of me, not some computer". If you've   
   >flown in the past decade, chances are your life was in the hands of a   
   >computer for some of the flight. Having said that, current computers   
   >are far from being able to make political decisions... just keep in   
   >mind that perspectives about technology change as technology's   
   >capacities increase and are proven.   
      
   There's a huge difference between computers and technology as tools
   and computers and technology as substitutes, especially as substitutes
   for utilitarian measures of people's subjective values.
      
   >> >This was a thought exercise, and I'm not saying AI could or will ever
   >> >become a factor in government. However, one thing that would make AI
   >> >less susceptible to corruption is that the programming could be open
   >> >source. Politicians can always spin things, lie, etc. It would be a
   >> >lot harder to hide corruption in open source code.
   >>   
   >> We can open source the code for human intelligence.   
   >   
   >Pardon? Please do... if you can, you'll win more academic awards
   >than anyone in history. A "human source code" would have billions of
   >different variations, and considering things like mood, it would be
   >impossibly complex. Besides, source code changes with nurturing and
   >environment, so unless you can download the "source code" from a person
   >directly (not just determined via DNA), then you're going to have
   >problems providing a source code.
      
   Well, opening the source code for human intelligence is perhaps an
   overstatement. What I meant specifically was the source code for
   consciousness and conscious beings. We can never get at subjective
   circumstances except by executing the source code in individuals. So
   having open source merely regresses the problem; it doesn't solve it.
      
   >> That isn't going   
   >> to make the results of that code any less subjective in terms of the   
   >> results of its mechanization. Show me a political robot and I'll show   
   >> you a subjective and corrupt robot with hidden agendas.   
   >   
   >So you're saying anything and everything that is programmed to make a
   >political decision is corrupt? Let's say a computer was programmed to
   >determine the best way to distribute food in order to feed the most
   >people. You're saying it's impossible to program an uncorrupt AI to
   >make that decision?
      
   I'm saying that that objective is a corruption by definition, since it
   assumes a subjective value judgment not evident in the fact of AI.
   Basically all you're suggesting is that AI artifacts should employ
   your own corruptions instead of someone else's. I have no idea why
   you think your subjective and utilitarian motivations are pure and
   those of others are not, or that those of an AI artifact would not be.
   Subjective motivations are endemic to the problem of utility. We only
   get from one utilitarian standard to another through corruption of the
   first in morphing from it to the second.
      
   Regards - Lester   
      