RE: Revising a Friendly AI

From: Ben Goertzel (ben@intelligenesis.net)
Date: Wed Dec 13 2000 - 12:10:30 MST


Hi,

> Under routine circumstances, however, the verbal thoughts are in immediate
> control. (Note that I do not say the conscious mind is in control, since
> emotions can also exert major influence over verbal thoughts.) With
> respect to long-term goals, verbal thoughts are in effectively complete
> control.

I just don't believe you. This is not how my mind feels like it works, nor
does it explain how preverbal children or chimpanzees can do complicated
things....

> > But I was noting one plus: it integrates relatively useful goal
> > systems all through our minds in subtle & complex ways.
>
> The return on integration is scarcely greater than the investment in
> instinct... maybe less.

I don't know how to make this kind of judgment. But my practical work
building an AI has given me a lot of respect for what evolution has
achieved in terms of integrating system-level goals throughout a complex
self-organizing control structure.

> > If AI systems don't evolve, they'll have to get this some other way,
> > that's all. It's far from impossible.
>
> What AIs need is the correct decision, the Friendliness. Why do they need
> the tangle to get it? What's wrong with supergoal and subgoal?

Friendliness is not in reality a goal that can be expressed by a simple
mathematical formula. Like all real concepts, it is in principle a big,
complicated mess which can only be mastered through experience. Gathering
this experience, given finite computational resources, requires a complex
mind in which various goals guide a network of processes.
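
To make the contrast concrete, here's a rough Python sketch. All the
class names, goals, and weights are illustrative assumptions of mine,
not anything from a real system, yours or ours. The first picture
derives each subgoal's importance from a single root; the second lets
several goals jointly bias shared processes:

    # Minimal, illustrative sketch only -- names and numbers are
    # assumptions, not drawn from any actual AI design.

    class Goal:
        def __init__(self, name, weight=1.0, parent=None):
            self.name = name
            self.weight = weight
            self.parent = parent  # strict hierarchy: one supergoal per subgoal

    # Supergoal/subgoal picture: a subgoal matters only as a means to
    # its parent, so its value can be read off the tree.
    friendliness = Goal("friendliness")
    honesty = Goal("honesty", weight=0.5, parent=friendliness)

    # Network picture: several goals jointly bias shared cognitive
    # processes, so no single formula fixes any process's priority.
    processes = {"perceive": 0.0, "plan": 0.0, "act": 0.0}
    influences = [               # (goal, process, strength) -- made-up numbers
        (friendliness, "plan", 0.7),
        (honesty, "act", 0.4),
        (Goal("curiosity"), "perceive", 0.9),
    ]
    for goal, proc, strength in influences:
        processes[proc] += goal.weight * strength

    print(processes)  # each activation reflects a tangle of goals, not one root

In the hierarchical picture you can read the system's ethics off the
root node; in the network picture they exist only in the aggregate
weighting of many processes, which is exactly why I say experience
rather than a formula is needed to get them right.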

As we all know from real-world life, ethical behavior isn't about
mastering a rule or holding a simply declared belief; it's an attitude
that has to permeate our being if it's to be really useful in guiding
real-world behavior.

ben


