Re: Imposing ideas, eg: morality

From: m.l.vere@durham.ac.uk
Date: Tue May 16 2006 - 08:19:06 MDT


> If morality is purely relative, then by definition we cannot instill an AGI
> with our own morality (otherwise it would, from the perspective of the AGI
> at least, be objective -- a given)

I don't think this is so. Whilst, IMO, the balance of evidence is overwhelmingly
against the existence of an objective morality, loads of people believe their
relative moralities to be objective. I am certain it is theoretically possible
to give an AI any set of goals we like, and to have the AI follow them as if
they were an objective morality.

> As you can see, this problem boils down to two things:
> (a) Debates about morality
> (b) Understanding AGI
>
> I suggest that (a) is not SL4 or easily solvable, and that (b) is what we
> should concern ourself with?

So, we work on how to build an AGI and how to make it follow a 'morality',
then let someone else decide what that morality should be. Sounds clever.


