From: Stuart Armstrong (dragondreaming@googlemail.com)
Date: Sat May 03 2008 - 13:40:08 MDT
> > I wouldn't underestimate the possibility for AI goals to drift because
> > of inconsistency. We'd want the AI to care somewhat about the
> > happiness, survival and freedom of humanity; I doubt we will be able
> > to phrase these in a very consistent way.
>
> That's a valid point. Transparent ratings and weightings can make the
> whole consistent, in terms of an abstracted goal incorporating the
> different dimensions (though doing it "properly" is certainly
> difficult). Contrast this with the "oh, I'm on a diet, I'll just have
> one or twelve" approach of hoo-mans, where there is no clear hierarchy
> and tradeoffs are made on a whim.
And we're trying to translate those human urges into AI-comprehensible
terms; hence the challenge...
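For what it's worth, here is a minimal sketch of what "transparent ratings and weightings" over those dimensions might look like. The dimension names, the 0..1 scale, and the weights are all illustrative assumptions rather than a proposal; the only point is that the tradeoffs are written down explicitly instead of being made on a whim.

    # Minimal sketch (Python). All names, scales and weights are assumptions.
    from dataclasses import dataclass

    @dataclass
    class GoalState:
        happiness: float  # each dimension scored on a common 0..1 scale (assumed)
        survival: float
        freedom: float

    # Explicit, inspectable weights -- the "transparent ratings and weightings".
    WEIGHTS = {"happiness": 0.3, "survival": 0.5, "freedom": 0.2}

    def abstracted_goal(state: GoalState) -> float:
        """Collapse the separate dimensions into one consistent scalar goal."""
        return (WEIGHTS["happiness"] * state.happiness
                + WEIGHTS["survival"] * state.survival
                + WEIGHTS["freedom"] * state.freedom)

    # The same weights apply to every comparison, so the tradeoffs stay consistent:
    print(abstracted_goal(GoalState(happiness=0.9, survival=0.6, freedom=0.7)))
    print(abstracted_goal(GoalState(happiness=0.5, survival=0.9, freedom=0.7)))

The hard part, of course, is exactly what the thread is about: choosing the dimensions and weights so that they actually capture the human urges in question.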