From: Samantha Atkins (sjatkins@gmail.com)
Date: Mon Jul 19 2004 - 10:54:42 MDT
I must be missing something. I don't see why having the AI's
decisions "fit the spirit of this abstract invariant as humans would
judge it" is actually necessary. The very few humans who can even
grasp the problem are radically limited in their ability to judge
possible solutions objectively and at workable depth.
Assuming for a moment that this is needed, I don't see why it would
require scans of human brains, or at least any further scans than
have already been done by the relevant branches of science. I believe
there is a strong expectation that the AI will have a deep
understanding of human psychology. Why would this not be enough?
-s
On Sat, 17 Jul 2004 21:19:05 -0400, Emil Gilliam <emil@emilgilliam.com> wrote:
> Again, quoting the Collective Volition page:
>
> > Friendly AI requires:
> >
> > 1. Solving the technical problems required to maintain a well-specified
> >    abstract invariant in a self-modifying goal system. (Interestingly,
> >    this problem is relatively straightforward from a theoretical standpoint.)
> >
> > 2. Choosing something nice to do with the AI. This is about midway in
> >    theoretical hairiness between problems 1 and 3.
> >
> > 3. Designing a framework for an abstract invariant that doesn't automatically
> >    wipe out the human species. This is the hard part.
>
> It seems that in order to understand a Friendly abstract invariant at
> the depth required by (2), and to understand what does or does not fit
> the spirit of this abstract invariant as humans would judge it, a seed AI
> would have to know an immense number of details about human brains. If
> this is so, then there may be no practical way for the seed AI to know
> all these details without scanning actual humans -- but, as SIAI's
> strategy currently goes, we don't want it to have any capability of
> this sort until takeoff time, and by that time the job of the Seed AI
> programmers should be *done*.
>
> Is "finding a way out of this deadlock" a useful way of characterizing
> any part of (3)'s complexity?
>
> - Emil
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:48 MDT