RE: Hard takeoff [WAS Re: JOIN: Joshua Fox]

From: H C (lphege@hotmail.com)
Date: Wed Feb 08 2006 - 13:35:54 MST


>but you can't use that fact to predict that it will escape to create a hard
>take-off.
>

It sounds like you think hard take-off is bad or undesirable. The only
situation where hard is less desirable than soft is when you are doing a
crappy job of ensuring Friendliness, in which case you are probably screwed
anyway.

>Also,
>
>Computational resources are not the only limiting factor.
>
>Factors that influence how hard the takeoff "knee" is include:
>
>1) Computational resources

really!?

>2) Other resources - particularly nanotech.
> - it doesn't have to be replicators. Tunnelling electron
>microscope-level nanotools etc will make it much easier for a "runaway AI"
>to create replicators

Why would nanotech be a necessary resource for hard take-off, other than
possibly as a route to more computational resources? It wouldn't be.

>3) "first instance efficiency" - I know there's a better term, but I can't
>remember it. If the first code only just gets over the line, and is slow
>and clunky --> slower takeoff

i.e., it needs more computational resources.

>4) AI goals (how much it wants to improve)

The only conceivable case in which an AI's goals would limit its
self-improvement would be some programmer-enforced boxing, which is a bad
idea in the first place. Self-improvement is instrumentally useful for
virtually any goal.
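
To see why, here is a toy expected-utility sketch (my own framing, not
anything from this thread; the success curve and the boxing_penalty knob
are illustrative assumptions): if the probability of achieving a fixed
goal is nondecreasing in capability, a rational maximizer never prefers
to stay dumb unless the programmers bolt on an explicit penalty, i.e.
boxing.

  # Toy sketch: self-improvement helps almost any goal.
  # Assumptions (mine): success probability rises with capability;
  # boxing_penalty models a programmer-enforced cost on capability.
  def expected_utility(capability, goal_value=1.0, boxing_penalty=0.0):
      p_success = capability / (capability + 1.0)  # any nondecreasing curve works
      return goal_value * p_success - boxing_penalty * capability

  for c in (1.0, 10.0, 100.0):
      print(c, expected_utility(c))  # expected utility rises with capability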

In summary, if you have an intelligent system, hard take-off is both
desirable and probable; the necessary and sufficient factor is
computational resources. Furthermore, the amount of computing power
necessary for hard take-off cannot be known except by direct reference to
the specifications of the actual intelligent system.
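
To make the compute-as-bottleneck claim concrete, here is a toy takeoff
model (my own sketch; the growth law and the numbers are illustrative
assumptions, not derived from any actual system): intelligence improves
itself at a rate proportional to itself, but effective intelligence is
capped by available hardware. Abundant compute gives a hard knee; scarce
compute gives a soft plateau.

  # Toy model: recursive self-improvement capped by hardware H.
  def takeoff(I0, H, efficiency=0.1, steps=100):
      I = I0
      history = [I]
      for _ in range(steps):
          I = min(I * (1 + efficiency), H)  # self-improve, hardware-capped
          history.append(I)
      return history

  hard = takeoff(I0=1.0, H=1e9)  # compute far above the seed AI's needs
  soft = takeoff(I0=1.0, H=2.0)  # compute binds almost immediately
  print(hard[50], soft[50])      # ~117 vs. 2.0 after 50 steps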

-hegem0n


