Re: On the dangers of AI

From: Richard Loosemore (rpwl@lightlink.com)
Date: Wed Aug 17 2005 - 02:07:48 MDT


Justin,

A quick reply:

justin corwin wrote:
> On 8/16/05, Richard Loosemore <rpwl@lightlink.com> wrote:
>
>>All I can say is that you are not talking about the issue that I raised:
>> what happens when a cognitive system is designed with a thinking part
>>on top, and, driving it from underneath, a motivation part?
>>
>>You've taken aim from the vantage point of pure philosophy, ignoring the
>>cognitive systems perspective that I tried to introduce ... and my goal
>>was to get us out of the philosophy so we could start talking practical
>>details.
>
>
> Rather the opposite. I too have little patience with philosophy. I
> spoke from a practical perspective. If, as you say, a sentience
> deserving of the title will always choose to become more moral, but
> never less moral, why do humans occasionally move from moral actions
> to immoral ones? This happens even to very intelligent people. It
> is a fact.

My first take is that you are asking about humans here, and humans are
built, by evolution, with some pretty dangerous motivations. Even
intelligent people are not always immune to their ravages.

But the real cause of the problem is that these people lack two things:

1) A deep understanding of what a motivation system is, and of how many
of our inclinations are governed by simple mechanisms that do not have
to be there.

2) The ability to switch these pesky motivation modules off.

I believe we can account for all of the inclinations toward destructive
behavior by appealing to these two factors. Conversely, if people had
the ability to understand, and then cut out or dampen, some of their
modules, their behavior would be very different.
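
To make the two-layer picture concrete, here is a toy sketch in Python
(the module names, urges, and numbers are invented purely for
illustration, not a claim about real drives): a thinking layer on top
picks actions, but its choices are driven from underneath by a
motivation layer of simple modules, each with a gain that can be
dampened and a switch that can turn it off.

from dataclasses import dataclass

# Hypothetical hard-wired urges standing in for evolved drives.
URGES = {
    ("status_seeking", "dominate"):  0.9,
    ("status_seeking", "cooperate"): 0.2,
    ("empathy",        "dominate"): -0.8,
    ("empathy",        "cooperate"): 0.7,
}

@dataclass
class MotivationModule:
    name: str
    gain: float = 1.0     # how strongly this module drives behavior
    enabled: bool = True  # point (2): the module can be switched off

    def urge(self, action):
        return URGES.get((self.name, action), 0.0)

@dataclass
class MotivationSystem:
    modules: list

    def score(self, action):
        # Point (1): an "inclination" toward an action is just a
        # weighted sum over simple, separable modules.
        return sum(m.gain * m.urge(action)
                   for m in self.modules if m.enabled)

def think(motives, actions):
    # The thinking part on top: deliberation reduced, for the sake of
    # this sketch, to picking whichever action the motivation layer
    # underneath scores highest.
    return max(actions, key=motives.score)

motives = MotivationSystem([
    MotivationModule("status_seeking", gain=3.0),  # a dangerous drive
    MotivationModule("empathy"),
])
actions = ["dominate", "cooperate"]

print(think(motives, actions))      # -> "dominate": the drive wins

motives.modules[0].gain = 0.5       # dampen the module...
print(think(motives, actions))      # -> "cooperate"

motives.modules[0].enabled = False  # ...or switch it off entirely
print(think(motives, actions))      # -> "cooperate"

Note that the behavior flips not because the thinking layer got any
smarter, but because the module driving it was dampened: that is the
sense in which the destructive inclinations live in the motivation
layer, not in the intelligence.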

Richard


