Re: SIAI's direction

From: Michael Roy Ames (michaelroyames@yahoo.com)
Date: Sat Oct 23 2004 - 21:30:26 MDT


"Wei Dai" wrote:

> I think SIAI's greatest accomplishment so far is to
> illustrate how hard it would be to build a safe AI, and
> how dangerous an unsafe AI would be.

Whilst agreeing that SIAI's illumination of these two aspects of AI has been
a high point, I cannot agree that it is the greatest accomplishment. Both
the difficulty of building safe AI and the danger of unsafe AI have been
pointed out numerous times by numerous people in the past.

The various documents Eliezer has put together, including CFAI, LOGI, and
more recently the CV postings to the Wiki, are innovative and contain new
insights. Surely these qualify as greater accomplishments than the
aforementioned illustrations.

[snip]
> Furthermore, it seems there is a conflict between
> safety and other desirable qualities, such as open
> mindedness and philosophical curiosity.

What conflict are you referring to? Please be specific.

> Do we really want to live under the control of an
> AI with a rigid set of goals, even if those goals
> somehow represent the average of all humanity?

Straw man. No one at SIAI has suggested this.

> An AI that may be incapable of considering the
> relative merits of intuitionist vs classical
> mathematics, because we don't know how to program
> such capabilities into the AI, or considers this
> activity a waste of time, because we don't know how
> to embed such pursuits into its goal structure?

More phantoms... where are you getting this stuff?

> Many of us are interested in the Singularity partly
> in the hope of one day being able to solve or at
> least explore with greater intelligence long
> standing moral and philosophical problems.

These goals I share. SIAI's plans do not disallow this outcome; on the
contrary, they enable it.

[snip]
> Since AI is only a means, and not an end, even to
> the SIAI (despite "AI" in its name), I wonder if
> it's time to reevaluate its basic direction.

I have seen no reasonable arguments for a change of direction. Greater
intelligence that is friendly to humans will give us the very tool -
intelligence - needed to avoid the existential disasters that loom larger
every day. What is your alternative?

> Perhaps it can do more good by putting more resources
> into highlighting the dangers of unsafe AI, and to
> explore other approaches to the Singularity, for
> example studying human cognition and planning how to
> do IA (intelligence amplification) once the requisite
> technologies become available.

There are others already doing this. See Kurzweil et al.

> Of course IA has its own dangers, but we would be
> starting with more of a known quantity.

We do not currently know all of the details of how humans work, and within
what we do know we have already identified many areas and occasions where
human cognition is irrational. I for one do not wish to place my fate in the
hands - mind, rather - of irrational beings. To the extent that this already
happens (in modern political systems) I am unnerved, to put it mildly.

> Even if things go bad, we end up with something that
> is at least partly human and unlikely to want to
> fill the universe with paper clips.

If 'things go bad', it doesn't matter whether the UFAI is partly human or
not: you will be just as dead.

Let us face the problem (of UFAI) squarely, and solve it (with FAI). Just
because it is a big problem that we don't currently have a clear answer to
doesn't mean we should flinch from tackling it.

Michael Roy Ames
