From: Durant Schoon (durant@ilm.com)
Date: Mon Apr 02 2001 - 13:08:24 MDT
PREFACE:
(I stopped reading FAI when I got to the references to Greg Egan's Diaspora.
Why? I don't want to have the book spoiled for me (yes, I'm that silly).
I finished "The Origins of Virtue" on Sunday (BIG thumbs up!) and
started Permutation City (also BIG thumbs up!). So if my understanding of
FAI is shallow, you'll know why).
> From: "Eliezer S. Yudkowsky" <sentience@pobox.com>
>
> How do you implement "fragile feelings", anyway? An AI's evaluation of a
> strong Bayesian binding between others' images of verself and the AI's
> actual self? Peculiar pictures held by others resulting in sharp
> self-reevaluations in ways that have strong qualitative effects on
> behavior? I just don't see that happening, and certainly not in ways that
> are understandable or predictable using human emotional intuitions. The
> beliefs that have large effects on behavior are not *always* the ones that
> are most confirmed by immediate experience (and hence too confirmed to be
> flipped around by others' opinions) - but as a general rule, yes. And
> those *non*-experientially-confirmed beliefs that affect immediate
> behavior were almost certainly put there by programmer affirmation,
> meaning that it would take sensory information at least as strong as a
> deliberate programmer affirmation to trash them.
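(To make that evidence-strength point concrete to myself, here's a toy
sketch in Python. Everything in it -- the Belief class, the two weights --
is my own invention, not anything from FAI; it just treats a programmer
affirmation as one very heavy piece of evidence in a log-odds accumulator,
so flipping the belief takes counter-evidence of at least the same total
weight:

import math

# Toy model: a belief accumulates evidence as log-odds. A deliberate
# programmer affirmation is one very heavy datum; an outside opinion
# is a light one. (Both weights are assumptions for illustration.)
AFFIRMATION_WEIGHT = 10.0
OPINION_WEIGHT = 0.5

class Belief:
    def __init__(self, prior_prob=0.5):
        self.log_odds = math.log(prior_prob / (1.0 - prior_prob))

    def update(self, supports, weight):
        # One piece of evidence for (True) or against (False) the belief.
        self.log_odds += weight if supports else -weight

    def probability(self):
        return 1.0 / (1.0 + math.exp(-self.log_odds))

b = Belief()
b.update(True, AFFIRMATION_WEIGHT)   # programmer affirms the belief
for _ in range(5):                   # five dissenting opinions arrive
    b.update(False, OPINION_WEIGHT)
print(round(b.probability(), 4))     # still ~0.9994: not "fragile"

Five light opinions barely dent one heavy affirmation, which is how I read
"sensory information at least as strong as a deliberate programmer
affirmation".)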
So let me ask, will the nascent FAI have a "sense of self", which can have
a positive or a negative emotional attachment? When the AI is young and still
working on self optimization, it is conceivable to me that one parameter to
tweak is "propagation of self worth". When ver human teacher/programmer/role
model tells ver that something ve did was wrong, that lesson should be
incorporated and used to affect future behavior. Is there a middle step,
though? Is there an internal sense of failure, of having done something
wrong before the attempt to correct the problem? Humans have this. Do AIs
need it?
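(For what it's worth, here's the kind of hypothetical "middle step" I'm
picturing, again in toy Python -- self_worth and the decay knob are made-up
parameters, my guess at what a "propagation of self worth" dial might look
like, not anything proposed in FAI:

class Learner:
    def __init__(self, self_worth_decay=0.2, base_rate=0.1):
        self.self_worth = 1.0            # internal sense of doing well
        self.self_worth_decay = self_worth_decay
        self.base_rate = base_rate

    def receive_correction(self):
        # The middle step: register the failure internally *before*
        # any behavioral fix is applied.
        self.self_worth *= (1.0 - self.self_worth_decay)
        # Low self-worth amplifies how hard the next correction lands;
        # crank self_worth_decay up and you get a "fragile" learner.
        return self.base_rate / max(self.self_worth, 0.05)

learner = Learner()
for i in range(5):
    rate = learner.receive_correction()
    print(i, round(learner.self_worth, 3), round(rate, 3))

The questions in the next paragraph are then just questions about how those
knobs get set.)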
At that point we can ask about the speed of the change and the depth of the
effect of negative self worth in a "fragile" AI. Will AIs have feelings
to hurt? (This is a very basic question that's probably answered somewhere
in FAI, so you can send me a link that I'll read after FAI, after Diaspora,
after Permutation City :-)
-- Durant x2789