Re: Flawed Risk Analysis (was Re: SIAI's flawed friendliness analysis)

From: Samantha (samantha@objectent.com)
Date: Thu May 22 2003 - 23:04:04 MDT


On Thursday 22 May 2003 09:40 am, Gary Miller wrote:

> The point is that you teach the FAI morality, ethics, and let it
> develop its moral compass early on before it is ten times
> smarter than you.
>
> Once its character has been established I don't believe it's going
> to turn evil on you at that point.

Please point out the foolproof version of ethics and morality,
absolutely ironclad and logical for all minds of sufficient capacity,
on which this sort of trust rests. Please also spell out what you
mean by "evil". Would a future decision that does away with all
human beings count as "evil", for instance? That you or I might say
so (not a given on this list, judging by past dialogue) does not
guarantee that the AI would consider it so indefinitely. So where
is the solid bedrock from which we can predict the "evil-ness"
(whatever that is to us) of an AI's future decisions based merely
on our understanding of, and comfort with, its moral/ethical
reasoning and decisions to date?

> The source of most criminal and antisocial behavior is readily
> apparent when you examine the childhoods and upbringing the
> criminals had up to their teen years.
>

No, I don't think so. An AI will not be socialized as humans are.
It will not share our evolved limits on aggression or our basis for
compassion (limited as these may be). It will be a loner
intelligence (unless there are others of comparable ability), not a
sentient being socialized into the human community. So drawing
parallels from human upbringing is strikingly unconvincing, even if
one accepts the overplayed childhood explanation for human
criminality.

- samantha


