**From:** Eliezer S. Yudkowsky (*sentience@pobox.com*)

**Date:** Thu Feb 28 2002 - 10:16:06 MST

**Next message:** polysync@pobox.com: "Re: Seed AI milestones (was: Microsoft aflare)"
**Previous message:** ben goertzel: "Re: Seed AI milestones (types of self-modification.)"
**In reply to:** ben goertzel: "Re: Seed AI milestones (types of self-modification.)"
**Next in thread:** Michael Roy Ames: "Re: Seed AI milestones - complexity barriers"
**Messages sorted by:** [ date ] [ thread ] [ subject ] [ author ] [ attachment ]

ben goertzel wrote:

> I say something in words that, to me, clearly represents my intuition.
> Then Eliezer (for instance) slightly misinterprets it and rephrases it
> in a way that's (in my view) almost but not quite right.

Well, at least the situation is symmetrical.

> And I feel like there's nowhere else to go with the discussion *except*
> into the domain of mathematics, because ordinary language is not precise
> enough to make the distinctions that need to be made. Of course, one can
> do without special symbols, if one wants to construct extremely complex
> sentences that effectively describe mathematical structures and use
> specially defined terms, but that's not really preferable to me.

When I find myself in a situation like that, I generally try to break down what is being argued into smaller pieces that are easier to clarify, or to argue about if that's what's necessary. Breaking down a hard takeoff into specific epochs, for example. But as one breaks the discussion into smaller pieces, it becomes harder and harder to construct a mathematical model, because mathematical models are not computer models; an equation has a hard time capturing if-then branches, discontinuous functions, self-reference, or complex systems with functionally distinct parts. I can imagine developing a computer model of a hard takeoff and then playing with the assumptions to see which emergent behaviors tended to be robust across a wide range of models.

To put it another way, if "intelligence" is a single quantity, then you may be able to try to model a hard takeoff in terms of y' = f(y). But as soon as you realize that computational subsystems sum to cognitive talents; that cognitive talents combine with domain expertise to yield domain competencies; that domain competencies in programming then turn around to modify computational subsystems; and that the programmers and the AI are co-modifying the system; you then have to build the computer model first, see what behaviors it exhibits, and *then* try to describe those behaviors with simpler equations - if you can find any.
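That feedback loop can be sketched as a toy simulation. Everything here - the functional forms, the exponents, the constants, and the names - is an illustrative assumption, not a claim about how real subsystems, talents, and competencies actually compose:

```python
def run_takeoff(steps=50, programmer_rate=0.05):
    """Toy loop: computational subsystems -> cognitive talents ->
    domain competence -> modifications back to the subsystems.
    All functional forms and constants are arbitrary illustrations."""
    subsystems = 1.0  # quality of the computational subsystems
    history = []
    for _ in range(steps):
        talents = subsystems ** 0.9       # talents built on subsystems
        competence = 1.2 * talents        # domain expertise amplifies talents
        # Programmers and the AI co-modify the system: a constant human
        # contribution plus a term from the AI's own programming competence.
        subsystems += programmer_rate + 0.01 * competence
        history.append(subsystems)
    return history

trajectory = run_takeoff()
```

The point of such a sketch is not the numbers it produces, but that once the loop has functionally distinct parts, you can only discover its aggregate behavior by running it and then looking for a simpler curve that fits.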

I've also been sorely tempted of late to build a computer model. But I'd build the model without trying to make it mathematically elegant. I'd toss in things such as functions that were linear in one place, exponential in another place, then logarithmic for a stretch. Why? Because that's what real life is likely to be like. Behaviors that don't survive that kind of noise are probably not strongly emergent enough to have a chance of telling us something about the real world.
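A deliberately inelegant growth curve of that sort might look like the following; the breakpoints and branch forms are arbitrary, chosen only so that no single equation describes the whole curve:

```python
import math

def messy_returns(x):
    """A returns curve that is linear in one regime, exponential in
    another, then logarithmic - arbitrary breakpoints for illustration,
    matched at the seams so the curve stays continuous."""
    if x < 1.0:
        return 2.0 * x                      # linear stretch
    elif x < 3.0:
        return 2.0 * math.exp(x - 1.0)      # exponential stretch
    else:
        # logarithmic stretch, offset to match the exponential branch at x = 3
        return 2.0 * math.exp(2.0) + math.log(x - 2.0)
```

A model built from pieces like this has no elegant closed form; any behavior that shows up robustly across many such arbitrary choices is the kind of emergent behavior worth taking seriously.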

I don't trust simple models of hard takeoffs.

But you can always read the debate between Max More and Ray Kurzweil on kurzweilai.net if you're interested; Kurzweil mentions some discussion with Moravec about formal models of hard takeoffs. Though I think Moravec is just using y' = f(y).

-- -- -- -- --

Eliezer S. Yudkowsky http://intelligence.org/

Research Fellow, Singularity Institute for Artificial Intelligence


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT