RE: SIAI's flawed friendliness analysis

From: Ben Goertzel (ben@goertzel.org)
Date: Fri May 30 2003 - 07:34:46 MDT


> > As a parent, I would interfere with my child's self-determination if
> > they were going to do something sufficiently destructive. I'd also try
> > to change them so that they no longer *wanted* to do the destructive
> > thing.
>
> Angels and ministers of grace preserve us, Ben, I hope you were talking
> about an AI and not a human child! Just reach into a human mind and
> tamper like that? Thank Belldandy my own parents didn't have that
> capability or I'd be a nice, normal, Orthodox Jew right now.

Well, if one of my children were suicidal (far from the case, thankfully!!),
I would try to help them stop being suicidal, via medication, psychotherapy,
etc. What is this but "reaching into a human mind and tampering"? If they
were confused and not sure they wanted my help, damn right I'd try to push
my help on them. And after my intervention had succeeded, in all
probability they'd thank me.

I have not had that kind of experience with my kids -- they're rather weird
(surprise, surprise!) but generally in a happy, well-behaved way. However,
I HAVE seen other parents deal with similar situations -- for instance, the
parents of a kid who was taking way too many drugs and was involved in a
lot of senseless vandalism, violence, and other destructive behavior. They
sent him to a posh "reform school" -- against his protests -- and it
actually did reform him; I believe he's now genuinely glad they did.

You mention religion -- now, I'm fairly ardently non-religious, but one of
my kids is very interested in the Bible and shows signs of potentially
becoming a religious person. Am I gonna try to force him not to be that
way? Of course not. Am I gonna tell him really clearly and repeatedly and
calmly what I think of religious dogma, and am I gonna give him stuff to
read reflecting a scientific view of religion? For sure.

> > Because we have values for our kids that go beyond the value of
> > volition/self-determination...
>
> What about the kids' values for themselves? Parents don't own children.

Well, we may not own our children -- but when you're stopping your
2-year-old from running across the busy street even though their volitional
self-determination tells them to do so, there's some kind of cousin to
"ownership" going on...

Compared to most parents I see, I tend very much toward "children's rights"
... for instance, I have given my children the choice of whether to attend
school or not, and for a while they were home-schooled. However, even I, as
a lenient and child's-rights-respecting parent, can envision many situations
where I would unreservedly try to "tamper" with my children's motivational
structure (like the examples described above, of suicidal or extremely
self-destructive behavior).

There is a radical philosophy of child-raising ("Taking Children Seriously")
which holds that parents should never try to coerce their children at all.
See http://www.eeng.dcu.ie/~tcs/ . One individual involved with this
philosophy is the physicist David Deutsch, a pioneer of theoretical quantum
computing. Before having kids myself I might have agreed with this, but in
the thick of raising three kids I have to say it seems unproductively
extreme.

> > But we presumably don't want a Friendly AI to take on this kind of
> > parental role to humans -- because it's simply too dangerous??
>
> Because I think it's wrong. Direct nonconsensual mind modification seems
> unambiguously wrong. I'm not as sure about gentle friendly advice given
> in the service of humane goals not shared by the individual; that strikes
> me as an ethically ambiguous case of "What would humanity want to happen
> here?"

Well, there is a big middle ground between "direct nonconsensual mind
modification" and "gentle friendly advice."

The kinds of things normal parents can and will do to affect their
children's motivations and attitudes lie in this middle ground.

Similarly, the interesting moral issues pertaining to future AIs
interfering with humans also lie in this middle ground.

After all, if a superintelligent AI is going to interact with us, it's
certainly going to influence us -- and if it's really clever, it may be
able to achieve an amazing amount of "coercion" by indirect means. It may
be able to lead us along just as easily as we can lead a dog into a room
it's afraid of, by tempting it there with doggie treats.

Free will itself is a can of worms, now isn't it? Can we tell a future AI
not to mess with our free will, when we don't even know what free will is
-- and when we in fact know from cognitive neuroscience (e.g. Gazzaniga's
work) that much of our subjective experience of freedom and conscious
decision-making is illusory, a matter of the conscious mind making up
decision-stories for things the unconscious mind has already decided? If
the future AI understands human psychology at all, it will understand the
complexity and fuzziness of the distinction between influencing a human's
unconscious and controlling their conscious decision-making processes.
Actually, "volition" is just as tricky a concept as "happiness" -- maybe
more so.

-- Ben G


