Re: [sl4] Comparative Advantage Doesn't Ensure Survival

From: Charles Hixson (charleshixsn@earthlink.net)
Date: Mon Dec 01 2008 - 14:24:55 MST


Peter C. McCluskey wrote:
> charleshixsn@earthlink.net (Charles Hixson) writes:
>
>> Peter C. McCluskey wrote:
>>
>>> There's a clearer explanation in A Farewell to Alms: A Brief Economic
>>> History of the World by Gregory Clark of why we shouldn't find comparative
>>> advantage very reassuring: horses were a clear example of laborers who
>>> suffered massive unemployment a century ago when the value of their labor
>>> dropped below the cost of their food.
>>>
>>>
>>>
>> You are assuming that an AGI will be modeled on human motivations, etc.
>>
>
> No. Please don't respond to my posts unless you have a clue about what
> I'm saying.
>
>
I still think I understand what you are saying, and yes, it would be
economically advantageous to kill off (or at least fail to support)
people. But this only dominates if economics is the most significant
motive. I may not believe that "Friendly AI" is actually possible, but
I do believe that an AI with a goal-defined morality is. And economics
would not dominate, if I were the designer. It would be important,
because it must be, but other things would be more important. And one
of those would be not acting in ways that are more detrimental to human
survival than the way the majority of humans would act. (I'm being a bit
weaselly here, because I don't have a clear design in mind.)

To project actions one must assume a motivational structure. To me it
seems as if, by default, you are adopting a human model. If you are
not, then I admit that I don't understand your point, but then you also
aren't explaining it. To me it looks as if you are assuming an
"economic determinism" model, which is oversimplified even when applied
only to people. (Note that there are still a multitude of dogs and cats
around, which economic determinism would also have discarded.)

Possibly you are considering that organizations run by people, with
people as the executive agents, don't have a human motivational
structure. This isn't exactly correct, though such organizations
certainly amplify certain elements of human nature and suppress
others. But it's not clear that an AI would be designed to so
suppress its inherent morality. (It's not clear that it wouldn't, but
I wouldn't design it to suppress its morality. I'd consider that an
existential risk. Perhaps this is the focus of our divergent
predictions.)


