From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Jan 02 2005 - 23:26:42 MST
Russell Wallace wrote:
>
> Well, I originally had the impression you believed it would be
> possible to create a seed AI which:
>
> - Would provably undergo hard takeoff (running on a supercomputer in a basement)
> - Or else, would provably have e.g. a 99% probability of doing so
Good heavens, no! What I was aiming to create was a system that would
provably remain <Friendly> (according to some well-specified target) *if*
it underwent hard takeoff. Proving in advance that a system will undergo
hard takeoff might be possible, but it isn't nearly so *important*.
> I'm confident both of these are self-evidently wrong; the things we're
> dealing with here are simply not in the domain of formal proof.
An informal argument is just formal probabilistic reasoning you don't
know how to formalize.
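
(To make that concrete with a toy sketch, which is not from the actual
discussion: an informal "this evidence supports that hypothesis"
corresponds, formally, to an odds update by a likelihood ratio. The
numbers below are arbitrary placeholders.)

    # Informal "E supports H", read as a formal odds update.
    # All numbers are arbitrary placeholders, not estimates of anything.
    prior_odds = 0.2 / 0.8                 # P(H) = 0.2
    likelihood_ratio = 3.0                 # P(E|H) / P(E|not-H)
    posterior_odds = prior_odds * likelihood_ratio
    posterior = posterior_odds / (1 + posterior_odds)
    print(posterior)                       # ~0.43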
> Do I now understand correctly that your position is a slightly weaker
> one: it would be possible to create a seed AI which:
>
> - In fact has a 99% chance of undergoing a hard takeoff, even though
> we can't mathematically prove it has?
Or 80%, whatever, so long as it has a guarantee of staying-on-target
<Friendly> if it does undergo hard takeoff.
> If so, then I'm still inclined to think this is incorrect, but I'm not
> as confident. My intuition says each step might have a 99% chance of
> being successfully taken, but the overall process of hard takeoff
> would be .99^N; I gather your intuition says otherwise.
Correct. If one path doesn't work out, take another.
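
(To make the arithmetic in question explicit, a toy sketch with purely
illustrative numbers: N independent steps in series give p^N, but if a
failed step can be retried along alternative paths, the per-step failure
probability collapses.)

    # Illustrative numbers only; these are not anyone's estimates.
    p, N = 0.99, 100

    # No second chances: N independent steps in series.
    serial = p ** N                    # ~0.37

    # Each step may be attempted along k alternative paths and needs
    # only one of them to work.
    k = 3
    per_step = 1 - (1 - p) ** k        # ~0.999999
    with_retries = per_step ** N       # ~0.9999

    print(serial, with_retries)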
>>My studies so
>>far indicate that humans do these things very poorly
>
> Compared to what standard?
When a car has three flat tires and a broken windshield and is leaking
oil all over the pavement, you don't need to see a new car to know this
one is broken. But since you ask: A Bayesian standard, of course. Why
do you think cognitive psychologists talk about Bayes? It's so that
they have a standard by which to say humans perform poorly.
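
(One concrete benchmark of the sort cognitive psychologists use: the
standard mammography base-rate problem, where most subjects answer
somewhere around 70-80% and Bayes gives roughly 7.8%. A quick
calculation with the textbook numbers:)

    # Standard textbook numbers: 1% prevalence, 80% hit rate,
    # 9.6% false-positive rate.  Typical human answer: ~70-80%.
    prior = 0.01
    hit_rate = 0.80
    false_alarm = 0.096

    posterior = (prior * hit_rate) / (
        prior * hit_rate + (1 - prior) * false_alarm)
    print(posterior)                   # ~0.078, the Bayesian standard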
>>yet because we can
>>try, there must be some component of our effort that works, that
>>reflects Bayes-structure or logic-structure or *something*. At the
>>least it should be possible to obtain huge performance increases over
>>humans.
>
> Bear in mind that for any evidence we have to the contrary, human
> ability at strongly recursive self-improvement is zero.
Which is why I pointed out that your argument would equally prove the
impossibility of writing code, a conclusion for which we do possess
evidence to the contrary.
>>Why should a system that works probabilistically, not be refinable to
>>yield very low failure probabilities? Or at least I may hope.
>
> I hope so too, but the refining has to be done by something other than
> the system itself.
This sounds like the old fallacy that a modular system cannot copy a
module into static form, refine it, and execute a controlled swap.
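
(Reduced to a toy sketch, with every name here purely illustrative:
keep running the current version of a module, test a refined copy
against it, and swap only after the copy verifies.)

    # Toy sketch only; none of these names refer to any real system.
    current_sort = sorted                  # module currently in use

    def refined_sort(xs):                  # a candidate refinement
        return sorted(xs)                  # (trivially the same here)

    def verifies(candidate):
        # Compare the candidate against the running module on test cases.
        tests = [[3, 1, 2], [], [5, 5, 1]]
        return all(candidate(t) == current_sort(t) for t in tests)

    if verifies(refined_sort):
        current_sort = refined_sort        # controlled swap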
>>But at least a volition-extrapolating FAI would refract through humans
>>on the way to deciding which options our world will offer us, unlike
>>natural selection or the uncaring universe.
>
> There may be something to be said for that idea, if it can actually be
> made to work.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence