RE: SIAI's flawed friendliness analysis

From: Philip Sutton (Philip.Sutton@green-innovations.asn.au)
Date: Thu May 29 2003 - 19:38:24 MDT


Ben,

> Ben: if that is really Bill's definition of happiness, then of course
> a superintelligent AI that is rigorously goal-driven and is given this
> as a goal will create something like euphoride (cf "The Humanoids") or
> millions of micromachined mannequins.

> Bill said earlier: "Happiness in human facial expressions, voices and
> body language, as trained by human behavior experts".

I don't think that's Bill's definition of happiness - it's just one way that
AGIs might tune in to what people are feeling, the same way we use these
body-language clues. If you smile, I think that at least fleetingly you are
happy. But why you are happy, and what your happiness means for you, the
smile is not going to tell me (or anyone).

I know that Bill expects AGIs to model the world around them, including
the other sentient beings they observe. To do this model building, the
AGI would have to have a sophisticated understanding of people and
their circumstances.

Cheers, Philip



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:42 MDT