FAI and objective morality (was: Re: An essay I just wrote on the Singularity.)

From: Nick Hay (nickjhay@hotmail.com)
Date: Tue Jan 06 2004 - 03:03:17 MST


On 05/01/04 17:29:32, Perry E. Metzger wrote:
>> it's experiential rather than empirical. I could explain these
>> experiences in detail, but it would be a lot of work, and I don't
>> think it would be very useful to anyone in this forum.
>
> On the contrary, it is the heart of our discussion. Can you produce
> an objective and absolute morality to feed in to the Friendly AI? If
> not, the project seems to have a bit of rot right at the core.

Friendly AI is not about a bunch of humans writing up an "objective and
absolute morality" and creating a mind with that morality. It's not
about creating a mind with any given morality at all, although you will
give it some starting ideas. If you go through the writings of Eliezer
(the local FAI expert) on the matter, you'll find 1) he doesn't claim to
have an objective and absolute morality and 2) he doesn't suggest you
need one.

FAI is more about creating an AI that has the ability to reason about
right and wrong, about what its goals should be, as you can but other
animals and current AIs cannot. It involves transferring something
equivalent to the complex functional adaptations underlying human
morality: the cognitive mechanisms humans implicitly use but never
argue about (unless they're studying how brains work). It involves
sharing altruism. You want to give the FAI the ability to understand
human moralities, in all their divergent glory, and to figure out ways
to help, to improve things, where that's possible.

The second paragraph is not a good characterisation of what Friendly AI
is (it's a difficult problem, one which may not have a solution, and
about which I still know little), but the first paragraph is a good
description of what it isn't.

On a different note, here's a falsifiable definition of objective
morality. As humans grow up, their morality (their sense of what is
right and wrong, what kinds of things they think they should do, what
kinds of things they want to do, etc) changes. Unfortunately, due to
hardware limitations, humans don't get a chance to really grow up. Not
only do we die before we get the chance, but there are hardware limits
on the kinds of growing up we can do (we can't even modify or
understand the architecture of our own minds!). Suppose that all
humans, when they grow up for sufficiently long and become sufficiently
smart, converge on the same morality, the same sense of what is right
and desirable. This morality could reasonably be called "objective" or
perhaps "convergent".

I don't suppose this actually happens, although it could.

- Nick Hay
