From: fudley (firstname.lastname@example.org)
Date: Wed May 19 2004 - 22:21:33 MDT
On Wed, 19 May 2004 18:05:29 -0400, "Eliezer S. Yudkowsky" wrote:
>I tried to reason about the incomprehensibility of superintelligence without
>understanding where the incomprehensibility came from
There are two reasons people don't understand something: too little
information and too much complexity. In the case of an AI you have both.
>*potentially* enables a human to fully understand some optimization
>processes, including, I think, optimization processes with arbitrarily
>large amounts of computing power.
We're talking about a brain the size of a planet, but even retarded
children retain the ability to surprise us: some can paint beautiful
pictures, some can multiply enormous numbers in their heads, and some
like to cut people up with chainsaws. And the fact that with just a few
lines of code I can write a program that behaves in ways you cannot
predict does not bode well for the success of your enterprise.
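To give that claim a concrete face (this example is mine, not from the
thread): the Collatz iteration fits in a few lines of code, yet whether
it reaches 1 for every starting value is still an open problem, so
nobody can predict its behavior in general.

```python
def collatz_steps(n):
    """Count iterations until n reaches 1.

    It is an open question (the Collatz conjecture) whether this
    loop terminates for every positive integer n.
    """
    steps = 0
    while n != 1:
        # Halve even numbers; map odd n to 3n + 1.
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # 111 steps, despite the tiny starting value
```

Even for a program this small, the only known way to learn how long it
runs on a given input is to run it.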
>The essential law of Friendly AI is that you cannot build an AI to
>accomplish any end for which you do not possess a well-specified
Any intelligence worthy of the name will eventually start to set its own
goals; being happy seems like a reasonable one to set, and nobody knows
where that will lead.
> You may be thinking that "intelligences" have self-centered "best interests".
I do indeed.
>Rather than arguing about intelligence, I would prefer to talk
>about optimization processes
I'm confused: do you want to make an artificial intelligence or an
artificial optimization process?
>Optimization processes direct futures into small targets in phase space.
Then what you call optimization processes are just a low-rent version of
intelligence, incapable of producing novelty. Why even bother to build one?
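For concreteness, the quoted definition can be sketched in a few lines
(a minimal example of my own, assuming nothing beyond "directing futures
into small targets"): a greedy hill climber steers almost any random
starting point into a narrow target region of its search space.

```python
import random

def hill_climb(score, start, step=1.0, iters=1000):
    """Greedy local search: keep proposals that raise the score."""
    x = start
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if score(candidate) > score(x):
            x = candidate
    return x

random.seed(0)
# Target: the narrow peak of -(x - 3)^2. From anywhere in [-100, 100],
# the process funnels the future toward the small region around x = 3.
result = hill_climb(lambda x: -(x - 3) ** 2,
                    start=random.uniform(-100, 100))
print(result)  # a value close to 3
```

Whether such a process deserves the word "intelligence" is exactly what
this exchange disputes; the sketch only shows what the definition picks
out.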
>An FAI ain't a "hugely complicated program"
John K Clark
-- http://www.fastmail.fm - IMAP accessible web-mail
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:46 MDT