RE: Hard takeoff [WAS Re: JOIN: Joshua Fox]

From: H C (lphege@hotmail.com)
Date: Thu Feb 09 2006 - 10:05:02 MST


>Nanotech is not necessary for awakening. Existing nanotech is also not
>necessary for expanding computational resources, but will make a huge
>impact on how things turn out a few "steps" of change beyond awakening.

Who knows, maybe nanotech IS necessary for awakening.

>(Takeoff hardness) = derivative of {(initial
>efficiency)x(goals)+(additional resources)}^(computational resources).
>
>That formula is probably very wrong - my math sucketh - but hopefully you
>get my drift.
>
>
>>>4) AI goals (how much it wants to improve)
>>
>>The only conceivable case in which an AI's goals would limit its
>>self-improvement would be some programmer enforced boxing, which is a bad
>>idea in the first place.
>
>How it goes about self-improvement is a limiting factor.
>
>Converting all nearby matter to computronium-of-the-moment is the most
>rapid way to self-improve in the short term.
>
>Sitting back (self-improving, without assimilating more resources) gives
>an expansive AI time to think about which resources it needs to assimilate
>now, which resources should be left till later, and which "resources" have
>merit in existing untouched.
>
>>Self-improvement is good for any goal in general.
>
>Yes, but acquiring resources for improvement, although fast, is not
>necessarily the best.

I see your point here, although I imagine that if the rate of thought is
super-human, assimilating the resources probably takes much longer than
working out how to go about assimilating them. Meaning, it would probably
have time to think about optimizing its actions while those actions are
still underway. I imagine it would be enough "ahead of the game" to be in
a constant push for more resources.

>
>>In summary, if you have an intelligent system, hard take-off is both
>>desirable and probable.
>
>I refute that. Firstly, you haven't said "Friendly".

Because Friendliness is not necessary to say here.

>Secondly, "hard take-off" encapsulates a number of scenarios that, even if
>the AI is friendly to sentients, are otherwise undesirable.

I meant desirable from the intelligent entity's POV.

>>where the necessary and sufficient factor is computational resources.
>
>See above.

I think your function can be simplified a little. You might as well
abstract the "goals" out of the equation entirely. Essentially, when you
"take off", you take off in a direction, and the direction is what your
"goals" define. What you are "moving" in that direction is the Universe. To
count as an intelligent entity at all, you must already have the means to
interact with the Universe in a way that can increase the complexity with
which you are capable of interacting with it; so that part is a given. The
only remaining variable is cognitive capacity, which is the interface
between your goal (your directional faculty) and your means of activity:
the means by which the Universe is steered.
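
As a minimal sketch of that simplification (in Python, with an invented
growth law and gain constant), goals only fix the direction, and cognitive
capacity is the single quantity that actually grows:

    # Minimal sketch: goals fix a direction; only cognitive capacity evolves.
    # The growth law and the gain constant are assumptions for illustration.
    def takeoff(capacity, steps, gain=0.1):
        trajectory = [capacity]
        for _ in range(steps):
            capacity += gain * capacity  # improvement scales with current capacity
            trajectory.append(capacity)
        return trajectory

    print(takeoff(capacity=1.0, steps=10))  # roughly exponential under this assumption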

Given a certain depth of cognitive analysis for Universe-steering, I imagine
the most desirable point toward which to steer the Universe is the point at
which you gain the greatest additional depth of cognitive analysis for
steering it. Obviously this isn't a strict rule - as an ultimate end it
would be rather pointless - but intuitively it seems to be one of the most
influential rules a seed AI would follow, as the guiding point toward its
supergoal.
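
If you wanted to caricature that rule in code, it would look like a one-step
greedy lookahead; the candidate actions and their payoff multipliers below
are entirely made up:

    # Caricature of "steer toward the greatest gain in analytic depth":
    # pick the action whose assumed capacity multiplier is largest.
    def choose_steering_point(capacity, actions):
        return max(actions, key=lambda a: capacity * actions[a])

    candidates = {                          # hypothetical options, made-up payoffs
        "assimilate nearby resources": 1.3,
        "refine own algorithms": 1.8,
        "sit back and model the environment": 1.1,
    }
    print(choose_steering_point(capacity=1.0, actions=candidates))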

I could be wrong, but I'm not.

-hegem0n


