Re: AGI w/ NO goals

From: Eric Rauch (erauch@gmail.com)
Date: Tue Dec 13 2005 - 14:43:34 MST


Alright, what if we give it a limited goal (e.g. self-improve until you are
smarter than every living human combined)? Would an intelligence of this
magnitude just stop operating because it had reached the goal that had been
set for it?

On 12/13/05, Kaj Sotala <xuenay@sci.fi> wrote:
>
> > it would be programmed to take data as an input and perform analysis on
> > the data; it would also be given the ability to self-improve, and let's
> > assume we also succeed in making it self-aware. Would it just sit there
> > idle because no goals had been specified?
>
> Yes.
>
> Also, giving it the ability to self-improve wouldn't do anyone any good
> if it didn't have a goal to improve itself towards. (Maybe it could
> start to perform self-modification at random, if it was specifically
> designed to do so - but random changes would most likely just break
> things, and even "conduct random changes in your own code" is a goal.)
>
>
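The quoted point can be made concrete with a small toy sketch (illustrative
only, not from the original thread): an agent that scores candidate actions
against an explicit goal function has no basis to act when that function is
absent, and even "random self-modification" only happens once randomness is
supplied as the scoring rule. The names (agent_step, tweak_randomly, no_op)
are made up for this example.

    import random

    # Toy agent: pick the action whose outcome scores best against `goal`.
    def agent_step(state, actions, goal=None):
        """Apply whichever action best serves `goal`; with no goal, stay put."""
        if goal is None:
            return state  # nothing to optimize for, so no basis to act
        return max((action(state) for action in actions), key=goal)

    def tweak_randomly(state):
        return state + random.choice([-1, 1])  # "random self-modification"

    def no_op(state):
        return state

    print(agent_step(0, [tweak_randomly, no_op]))            # 0: idles, no goal given
    print(agent_step(0, [tweak_randomly, no_op], goal=abs))  # +/-1: acts once a goal exists
    # Even "change yourself at random" must be supplied as a goal, e.g.
    # goal=lambda s: random.random(), before the agent will do anything at all.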


