From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed May 19 2004 - 23:25:07 MDT
fudley wrote:
> On Wed, 19 May 2004 18:05:29 -0400, "Eliezer S. Yudkowsky"
> <sentience@pobox.com> said:
>
>>I tried to reason about the incomprehensibility of superintelligence without
>>understanding where the incomprehensibility came from
>
> There are two reasons people don’t understand something: too little
> information and too much complexity. In the case of an AI you have both
> problems.
Too little information about something I build? And the abstract
invariant may generate huge amounts of "unpredictable" complexity, but it
will be complexity guaranteed not to violate the invariant.
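A toy sketch of what I mean, in Python (a deliberately trivial illustration
of mine, nothing like an actual AI): the trajectory below depends on a random
source and is unpredictable in its particulars, yet the invariant
0 <= state <= 1 is enforced by construction on every step, so no run can
violate it.

    # Toy example only, not part of any FAI design.
    import random

    def step(state):
        # Arbitrary, "unpredictable" perturbation...
        proposal = state + random.uniform(-0.3, 0.3)
        # ...but the invariant is enforced before the new state is accepted.
        return min(1.0, max(0.0, proposal))

    state = 0.5
    for _ in range(1000):
        state = step(state)
        assert 0.0 <= state <= 1.0   # holds on every run, whatever the noise does

The details of the walk are anyone's guess; the invariant is not.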
>>*potentially* enables a human to fully understand some optimization
>>processes, including, I think, optimization processes with arbitrarily
>>large amounts of computing power.
>
> We’re talking about a brain the size of a planet, but even retarded
> children retain the ability to surprise us: some can paint beautiful
> pictures, some can multiply enormous numbers in their heads, and some like
> to cut people up with chainsaws. And the fact that with just a few lines
> of code I can write a program that behaves in ways you cannot predict
> does not bode well for the success of your enterprise.
Why? I don't need to predict an arbitrary program you wrote. I need to
choose a dynamic process that predictably flows within certain invariants.
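For concreteness, here is the sort of few-line program that behaves in ways I
cannot predict, a stock example of my own choosing rather than anything you
posted: whether this loop halts for every positive starting n is the open
Collatz conjecture. I concede I cannot predict it; that is not the task.

    # Toy example only; its unpredictability is exactly the point being conceded.
    def collatz_steps(n):
        # A few lines whose long-run behavior nobody knows how to predict in general.
        steps = 0
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            steps += 1
        return steps

    print(collatz_steps(27))   # 111 steps; no known shortcut predicts this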
>>The essential law of Friendly AI is that you cannot build an AI to
>>accomplish any end for which you do not possess a well-specified
>>*abstract* description.
>
> Any intelligence worthy of the name will eventually start to set its own
> goals; being happy seems like a reasonable one to set, and nobody knows
> where that will lead.
Hence the talk of "optimization processes". You are making all sorts of
statements about "intelligences" "worthy of the name" that are as
orthogonal to Friendly AI as they are to natural selection.
>>You may be thinking that "intelligences" have self-centered "best interests".
>
> I do indeed.
If we are talking about things likely to pop up in the real world, rather
than the space of things that are worthy of names, then you are making a
rather large error. These are the fruits of anthropomorphism: using human
empathy to model things that aren't human.
Didn't you just get through saying to me that you didn't understand
"intelligences"? How are you making all these wonderful predictions about
them? By putting yourself in their shoes, and expecting them to behave
like other things you know. That trick flat-out doesn't work, period.
>>Rather than arguing about intelligence, I would prefer to talk
>>about optimization processes
>
> I’m confused: do you want to make an artificial intelligence or an
> artificial optimization process?
I want to embody a kind of dynamic called a Friendly AI. An FAI is
definitely an optimization process. I have no idea what you think
"intelligence" is, except that you claim not to be able to understand it
when you add computing power to it, and that you think it will have a
self-centered goal system.
>>Optimization processes direct futures into small targets in phase space.
>
> Then what you call optimization processes are just a low-rent version of
> intelligence, incapable of producing novelty. Why even bother to build one?
Not true. An optimization process can have complicated ends, and can find
novel means to those ends. What I would guarantee is that they will be
good ends, and that the novel means will not stomp on those ends. This I
think I can do within an optimization process, whatever you believe about
"intelligence".
>>An FAI ain't a "hugely complicated program"
>
> Huh?!
An FAI doesn't share the characteristics of "hugely complicated programs"
as you know them. It may be a complex dynamic, but it's not a computer
program as you know it.
--
Eliezer S. Yudkowsky                                http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence