Re[2]: The dumb SAI and the Semiautomatic Singularity

From: Cliff Stabbert (cps46@earthlink.net)
Date: Mon Jul 08 2002 - 16:53:52 MDT


Monday, July 8, 2002, 5:14:18 PM, Eliezer S. Yudkowsky wrote:

ESY> Mike & Donna Deering wrote:
>>
>> Would it be possible to change the design slightly to avoid volition,
>> ego, self consciousness, while still maintaining the capabilities of
>> complex problem solving, self improvement, super intelligence?

ESY> As I pointed out earlier, no matter how impossible something seems, it
ESY> could always be possible to an entity of slightly higher intelligence,
ESY> even within your own species.

Maybe, maybe not. If your definition of "intelligence" turns out to
depend on self-reflection, then it may be impossible by definition. I
don't know whether you have a formal definition of intelligence (I'm
still working through GISAI); I know I don't. But I do have a working,
mostly implicit, I-know-it-when-I-see-it one, and I strongly suspect
it intimately involves self-awareness.

To wit: wouldn't a "super-intelligent" AI capable of "self-improvement"
realise pretty quickly that it was improving itself? If I were trying
to build something like this without volition and consciousness, I'd
worry that such a creation would rapidly bootstrap itself into them.
At what age do humans become self-aware, and how? At what point in
the evolution of humanity did self-awareness become possible, and how?

--
Cliff
