Re: Friendly Existential Wager

From: Samantha Atkins (samantha@objectent.com)
Date: Fri Jun 28 2002 - 22:28:07 MDT


Eliezer S. Yudkowsky wrote:
> Mark Walker wrote:
> >
> > E.Y. thinks Friendliness first, B. G. thinks AGI first. Who is right?
> > Suppose we don't know. How should we act? Well either attempting to
> > design for Friendliness before AGI will be effective in raising the
> > probability of a good singularity or it will not.
>
> Actually, my philosophy differs from Ben's in that I think that you need
> substantially more advance knowledge, in general, to bring any kind of
> AI characteristic into existence, including Friendliness. From Ben's
> perspective, this makes me arrogant; from my perspective, Ben's reliance
> on emergence is wishful thinking. I do think that understanding of
> Friendly AI follows from understanding of AI, rather than the other way
> around. You can't have Friendly AI without AI; you can't build moral
> thoughts unless you know what thoughts are and how they work.
>

How will you demonstrate that you know what thoughts are and how
they work? By demonstrating it in an implementation that thinks,
no? It can't be demonstrated that you really know simply in
words. Philosophers have been having a go at that for quite
some time now, with and without the recent findings of cognitive
science.

If you are to build moral thoughts, then you require not only an
understanding of thought but of morality. I am not sure exactly
what your current take on morality is, after recent exchanges
about it being OK to send young FAIs into combat and mixed
signals regarding whether or not the human race is expendable.

As emergence is, in large part, what our intelligence grew out
of in the first place, I don't find it so unreasonable to set up
that from which intelligence may likely emerge. Although that
doesn't seem a fair characterization of what Ben is doing
either.

- samantha
