RE: Loosemore's Proposal [Was: Re: Agi motivations]

From: Michael Vassar
Date: Tue Oct 25 2005 - 13:27:26 MDT

>1) "Prove" that an AGI will be friendly?  Proofs are for mathematicians. 

Computer scientists *are* mathematicians.

>Do not simply assert that proof is possible, give some reason why we should
>believe it to be so.

Do you have a reason for believing it not to be so? It doesn't appear to me
that you do. It seems to me that you credibly assert that some systems are
complex, and that some intelligent systems are complex (all known examples,
in fact), but you offer no evidence that all possible intelligent systems
are complex.

>Specifically, I think that we (the early AI researchers) started from the
>observation of certain *high-level* reasoning mechanisms that are
>observable in the human mind, and generalized to the idea that these
>mechanisms could be the foundational mechanisms of a thinking system. The
>problem is that when we (as practitioners of philosophical logic) get into
>discussions about the amazing way in which "All Men Are Mortal" can be
>combined with "Socrates is a Man" to yield the conclusion "Socrates is
>Mortal", we are completely oblivious to the fact that a huge piece of
>cognitive apparatus is sitting there, under the surface, allowing us to
>relate words like "all" and "mortal" and "Socrates" and "men" to things in
>the world, and to one another, and we are also missing the fact that there
>are vast numbers of other conclusions that this cognitive apparatus
>arrives at, on a moment by moment basis, that are extremely difficult to
>squeeze into the shape of a syllogism.

Richard, I am among the most polite, and far from the most knowledgeable,
members of this community, but reading something like this can only elicit a
groan from me. Of course what you say is true. What a cliché. One could
learn this from watching the Discovery Channel. Next I expect you to point
out that an AGI doesn't need god to give it an immortal soul and to support
your claim by challenging biblical infallibility.

>In other words, you have this enormous cognitive mechanism, coming to
>conclusions about the world all the time, and then it occasionally comes
>to conclusions using just *one*, particularly clean, little subcomponent
>of its array of available mechanisms, and we naively seize upon this
>subcomponent and think that *that* is how the whole thing operates.

Funny, "Intelligence Doesn't Fit on a T-Shirt" is how Eliezer would have
summarized the above ten or eleven years ago.
Can you seriously imagine that Ben Goertzel would have spent the amount of
time discussing AI with Eli that he has if Eli didn't even know these sorts
of basics?

>Humans are intellectual systems with aggressive M/E systems tacked on
>underneath. They don't need the aggression (it was just useful during
>evolution), and without it they become immensely stable.

Huh? Humans are stable without aggression? Ask a neuroscientist. Anyway,
UFAI doesn't mean aggression. The archives make it very clear what UFAI
means.

>I think that we could also understand the nature of the "attachment"
>mechanisms that make human beings have irrational fondness for one
>another, and for a species as a whole, and incorporate that in a design.
>I think we could study the effects of that mechanism, and come to be sure
>of its stability. And, at the end of the day, I think we will come to
>understand the nature of M/E systems so well that we will be able to say
>with a fair degree of certainty that the more knowledge an AGI has, the
>more it tends to understand the need for cooperation. I think we might
>(just might) discover that we could trust such systems.

Yes, we could trust them. But could we trust their creations X degrees
removed? The problem is that such AIs could be turned into seed AIs, and we
couldn't trust the derived seed AIs. The above two paragraphs are very
sensible if you assume that a singularity is impossible, or if you plan to
use non-seed AIs to prevent a singularity. Otherwise it's suicide.

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:52 MDT