Re: Designing human participation in the AI ascent

From: Jack Richardson
Date: Sat Jun 30 2001 - 12:13:52 MDT


Thanks for your response to my message.

The idea I'm presenting for consideration is the importance of human
participation in the ascent towards the Singularity. I'm suggesting that
there may be an alternative to building the AI while we remain just as we
are.

One issue here is whether the ascent can be sufficiently controlled so that
we can follow, as best we can, what is going on. One way to do this is to
build into the AI's architecture not just a report of the changes it is
making to its code, but a human-understandable explanation of what it is
doing and what it expects the next steps to be. One control might be that
it would have to get a positive response from its programmers, confirming
that they have sufficient understanding, before it could proceed to the
next iteration of its self-improvement.

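To make the control idea above concrete, here is a minimal sketch of such an
approval gate. This is purely illustrative: the loop structure, the
`ProposedChange` type, and all function names are my own hypothetical
framing, not part of any actual architecture discussed in this thread.

```python
# Illustrative sketch only -- the types and function names below are
# hypothetical, not an actual AI architecture.
from dataclasses import dataclass

@dataclass
class ProposedChange:
    diff: str          # the code modification itself
    explanation: str   # human-understandable account of what is being done
    expected_next: str # what the AI expects the following steps to be

def ascent_loop(propose_change, apply_change, ask_programmers):
    """Apply self-modifications only after explicit programmer sign-off."""
    while True:
        change = propose_change()
        if change is None:
            return  # nothing further proposed
        # The control gate: programmers must confirm they sufficiently
        # understand the explanation before the AI may proceed.
        if not ask_programmers(change.explanation, change.expected_next):
            return  # halt the ascent rather than proceed unexplained
        apply_change(change.diff)
```

The key design point is that the gate fails closed: if the programmers do
not signal sufficient understanding, the ascent halts rather than continuing
unexplained.
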
Another area for consideration is whether the AI in its advanced but still
pre-singularity mode could make recommendations as to what enhancements to
human intellectual capacity would work best to allow us to follow it further
on its path. The advanced pre-singularity phase is a key point where a lot
of useful things might be done, and I'm suggesting that the architecture of
the AI be designed in a way that we could all get the benefit of this phase.

It is certainly true that after the Singularity occurs all bets are off.
This is why controlling the advanced phase is crucial. If we don't have
near-total confidence in the friendliness of the AI, we should never let it
get beyond this point. Being able to interact with it extensively during
this phase, even with the risk that someone else might create one first, is
necessary to establish that confidence.



----- Original Message -----
From: Eliezer S. Yudkowsky <>
To: <>
Sent: Thursday, June 28, 2001 11:40 PM
Subject: Re: Designing human participation in the AI ascent

> Jack Richardson wrote:
> >
> > In this way, humans could choose to be an
> > active participant in the transition to transhuman experience. A
> > friendly AI could be guided to include this goal as a key outcome of its
> > primary activity.
>
> That's not how Friendly AI works. If two-way interaction with a human is
> necessary to grow up, then a Friendly AI might do so; if not, no amount of
> nagging will make it happen. A mature Friendly AI is an independent
> altruist operating within the human frame of reference, not a chattel.
> So, for example, you can't put an "Easter Egg" in the "goal suggestions"
> that say, "Just before the Singularity, broadcast the voice of John Cleese
> saying 'And now for something completely different.'" Trying to do this
> has exactly the same effect as a programmer, or a random human fresh off
> the street, saying "I think it'd be really funny if, just before the
> Singularity, we hear the calm and assured voice of John Cleese saying 'And
> now for something completely different.'" If the John Cleese thing is
> something that people want, it will happen; if not, not; making the
> specific suggestion probably doesn't make much of a difference. Likewise
> for an attempt to do a Merged Ascent or Synchronized Singularity.
> -- -- -- -- --
> Eliezer S. Yudkowsky
> Research Fellow, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT