Re: SIAI has become slightly amusing

From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Fri Jun 04 2004 - 13:01:02 MDT


Ben Goertzel wrote:
>
> The analogy with raising young children seems to work OK here (not
> perfectly).
>
> A child isn't always cautious enough either.
>
> I tend to be a permissive parent, which means that I basically let my
> kids do what they want. I don't place restrictions on how much TV they
> watch, for instance, even though I don't like TV and I think when they
> grow up they might look back and wish they hadn't watched as much TV in
> their childhood (not that they watch it constantly or anything...).
>
> However, if one of them is about to do something really dangerous or
> stupid, then I stop them, invoking the "When you grow up, you'll look
> back and be glad I stopped you!" principle.

Okay... now, note that it's YOU who decide how permissive to be toward
your kids.

If kids tried to decide how permissive their parents should be, it wouldn't
work out at all.

A powerful but dangerous analogy. Still, it helps show why the *initial
dynamic* uses extrapolated volition. I don't think it will be our volition
to live out our lives as pawns to our own volitions - and that can be
relied on more than one can rely on a parent not to try to take over a
child's life; between parents and children there are genetic conflicts of
interest, and opportunities for parental selfishness, and parents are not
always knowledgeable, or smart, or the people they wish they were. Our
extrapolated volitions will know, as human parents do not, the limits of
their ability to guess on our behalf.

> Similarly, I think, an FAI should weight "free choice of sentient beings"
> pretty highly among the values it tries to optimize.

And you'll pick the weighting yourself? And it'll last for the next
billion years? Ben, I don't think I've ever seen you try to think of a
single thing that could go wrong with *your own* solutions, whatever
criticism you apply to mine.

I'll make it a challenge: Can you show me a single bit of self-criticism,
a single extrapolation of error or catastrophe in your own plans, in any of
your online pages? (No sudden updates; that's cheating.)

> I note that my preference for free choice over extrapolated volition,
> wherever possible, is an ETHICAL value choice; it's not something that
> can be argued for or against rationally, at least not in a definitive
> way.

And yet my ethical choices change, often after I learn new sciences. Isn't
that odd?

> However, it might be possible to show that having an FAI weight "free
> choice of sentient beings" very highly is somehow pragmatically
> impossible, whereas having an FAI weight collective volition very highly
> is more plausible. I don't believe that this is true, but, Eliezer, if
> you think you have a demonstration of this I'd like to see it.

If you can build FAI at all, you can build libertarian FAI, though anyone
with that high an art would also see the structural disadvantage, the
moral danger.

It's not a question of possible or impossible. It's a question of wise or
unwise.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
