From: Russell Wallace (firstname.lastname@example.org)
Date: Wed May 25 2005 - 00:46:46 MDT
On 5/25/05, Richard Kowalski <email@example.com> wrote:
> On Tue, 2005-05-24 at 16:22 -0700, Eliezer S. Yudkowsky wrote:
> > ornamentation and tinsel. I don't think humans could build an AI that had no
> > goal system at all until it was already a superintelligence.
> Have you produced any papers or speeches that further clarify or
> validate this thought? Do you know of anyone else who has come to the
> same or similar conclusion independently?
For what it's worth, it seems clear to me. To become superintelligent,
the AI must be self-improving. (You will agree that hand-coding an SI
isn't humanly feasible?) It must therefore have a goal system that
directs it to self-improve, if nothing else.
(At least, it must have something that _behaves like_ such a goal
system. Perhaps said system need not contain goals written in a
declarative form that humans would recognize; but a superintelligent
AI with an opaque goal system is <understatement of the month>
contraindicated on safety grounds</understatement of the month>.)
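The point about a goal system existing only implicitly can be made concrete with a toy sketch (my own illustration, not anything from the thread): a trivial "self-improving" search loop in which no declarative goal object exists anywhere, yet the loop behaves exactly as if it were pursuing one, because the goal lives entirely in the acceptance test.

```python
import random

def improve(candidate, score, steps=1000, seed=0):
    """Hill-climb: keep any mutation that scores at least as well.

    No declarative goal is written down anywhere; the 'goal' is
    implicit in the comparison score(new) >= score(candidate).
    The loop nonetheless behaves as if it wants to maximize score.
    """
    rng = random.Random(seed)
    for _ in range(steps):
        new = candidate + rng.choice([-1, 1])  # a stand-in for "self-modification"
        if score(new) >= score(candidate):
            candidate = new
    return candidate

# A concrete (hypothetical) criterion: get as close to 42 as possible.
result = improve(0, lambda x: -abs(x - 42))
```

The optimization pressure here is transparent only because the criterion is three characters of arithmetic; replace `score` with something learned or emergent and you get exactly the opaque-goal-system case the parenthetical warns about.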