Re: Maximizing vs proving friendliness

From: Stuart Armstrong (dragondreaming@googlemail.com)
Date: Fri May 02 2008 - 06:13:03 MDT


> >You cannot have any help from AI because that would mean the AI is
> >helping to reprogram its own goals.
>
> You're splitting things into two pieces when you don't need to, and
> then arguing that each piece must precede the other so it can't be
> done. The two pieces are the AI writing code and the AI determining
> what its goals are. It is possible to solve both problems at once by
> putting the code generation into the utility function.
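
To make the quoted point concrete, here is a toy sketch. It is purely
illustrative: the "goal", the expression language, and every name in it
are invented for the example. A single utility function scores candidate
programs directly, so choosing what code to write and determining how
well it serves the goal become one optimization problem rather than two
stages, each of which must precede the other:

    import random

    GOAL_INPUT, GOAL_OUTPUT = 7, 21  # toy "goal": map 7 to 21

    def utility(program):
        # One function scores the code itself, so "what the goals are"
        # and "what code to write" are not separate, ordered stages.
        try:
            return -abs(eval(program, {"x": GOAL_INPUT}) - GOAL_OUTPUT)
        except Exception:
            return float("-inf")

    def search(tries=10000):
        # Random search over tiny arithmetic programs, ranked by utility.
        ops, best = ["+", "-", "*"], "x"
        for _ in range(tries):
            candidate = "(x %s %d)" % (random.choice(ops), random.randint(1, 9))
            if utility(candidate) > utility(best):
                best = candidate
        return best

    print(search())  # typically prints "(x * 3)"

Nothing here resembles a real seed AI, of course; the point is only that
the two "pieces" collapse into a single argmax over code.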

Also, I see no problem in having an AI dedicated to doing nothing but
working internally on a particular task, printing out the results two
days later, and then turning itself off. We can use limited AIs to help
design general AIs.
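
As a toy sketch of that pattern: a stand-in hill-climbing task, with the
two-day budget shortened to seconds so the example actually terminates,
and every name invented for illustration. The structure is the point:
bounded internal work, one printed output, then a guaranteed halt.

    import random
    import sys
    import time

    TIME_BUDGET = 2.0  # seconds here; "two days" in the text above

    def score(x):
        # Toy objective standing in for the particular task.
        return -(x - 3.14) ** 2

    def work_until(deadline):
        # Anytime hill-climbing: improve the answer until time runs out.
        best = random.uniform(-10.0, 10.0)
        while time.time() < deadline:
            candidate = best + random.gauss(0.0, 0.1)
            if score(candidate) > score(best):
                best = candidate
        return best

    if __name__ == "__main__":
        result = work_until(time.time() + TIME_BUDGET)
        print("result:", result)  # print out the results...
        sys.exit(0)               # ...then turn itself off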

Stuart


