Re: Basement Education

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Jan 24 2001 - 20:01:45 MST


Samantha Atkins wrote:
>
> "Eliezer S. Yudkowsky" wrote:
> >
> > What kind of experience? Experience about grain futures, or experience
> > about how to design a seed AI?
>
> In order for the SI to run much of anything other than its own
> development it will need both.

An SI may need both; an SI can get both, very easily, through a few
nondestructive brain scans. The question is how much experience is
required during the pre-nanotechnology stage.

> > > I'm sure you'd agree giving an inexperienced newborn AI access to nanotech
> > > is a bad idea.
> >
> > If it's an inexperienced newborn superintelligent AI, then I don't have
> > much of a choice. If not, then it seems to me that the operative form of
> > experience, for this app, is experience in Friendliness.
>
> You certainly do have a choice. If you do not hook the system up in such
> a way that it controls hardware manufacturing at all levels until it is
> a bit more seasoned, that would be a quite prudent step.

Prudent, maybe; effective, almost certainly not. A superintelligence has
access to *me*. Ve has access to external reality... would ve really
notice all that much of a difference whether the particular quark-swirls
ve contacts are labeled "hardware manufacturing" or "Eliezer Yudkowsky"?

If you're gonna win, win *before* you have a hostile superintelligence on
your hands. That's common sense.

> > Where does experience in Friendliness come from? Probably
> > question-and-answer sessions with the programmers, plus examination of
> > online social material and technical literature to fill in references to
> > underlying causes.
>
> That would not be enough to develop common sense by itself. Too much is
> assumed of the underlying presumed human context in the literature.

I think you're wrong about this.

> I think you are guessing wrong unless quite a bit of the detailed common
> sense is developed or entered before the young AI goes off examining
> papers and running simulations. Knowing the architecture of human minds
> is not sufficient for having working knowledge of how to deal with human
> beings.

Well, I disagree. In my own experience, the amount of real-world
experience needed decreases pretty sharply as a function of the ability to
theorize about the causation of the observed experiences you already have.

> > Incidentally, I should note that, as I visualize this "gradual growth"
> > process, it shouldn't take very long. From the moment the AI realizes a
> > hard takeoff lies ahead to the moment the human-timescale phase terminates,
> > should be... oh... twelve hours or so. Because the instant that the AI
> > says it's ready for a hard takeoff, you are operating on Singularity time
> > - in other words, six thousand people are dying for every hour delayed.
> > Ideally we'd see that the AI was getting all the Friendliness decisions
> > more or less right during the controlled ascent, in which case we could
> > push ahead as fast as humanly possible.
>
> How can human programmers answer a sufficient number of the AI's
> questions in a mere 12 hours?

If the human programmers need to provide serious new Friendship content
rather than just providing feedback on the AI's own actions, then one may
be justified in going a little slower. If the AI is getting everything
right and the humans are just watching, then zip along as fast as
possible.

> AI time is not the gating factor in this
> phase. And there is no reason to rush it. So many people dying per
> hour is irrelevant and emotionalizes the conversation unnecessarily.
> Letting the AI loose too early can easily terminate all 6 billion+ of
> us.

Yes, that is the only reason why it makes sense to take the precaution at
all. I do not believe that so many people dying per hour is
"irrelevant". I think that, day in, day out, one hundred and fifty
thousand people die - people with experiences and memories and lives every
bit as valuable as my own. Every minute that I ask an AI to deliberately
delay takeoff puts another hundred deaths on *my* *personal*
responsibility as a Friendship programmer. In introducing an artificial
delay, I would be gambling with human lives - gambling that the
probability of error is great enough to warrant deliberate slowness,
gambling on the possibility that the AI wouldn't just zip off to
superintelligence and Friendliness. With six billion lives on the line, a
little delay may be justified, but it has to be the absolute minimum
delay. Unless major problems turn up, a one-week delay would be entering
Hitler/Stalin territory.
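
To spell out the arithmetic behind those figures, here is a minimal
back-of-the-envelope check, sketched in Python purely for concreteness;
the round numbers are simply the ones cited in this thread:

    # Rough rates as cited above, not independent data.
    deaths_per_day = 150_000
    deaths_per_hour = deaths_per_day / 24     # ~6,250: "six thousand people ... for every hour"
    deaths_per_minute = deaths_per_hour / 60  # ~104: "another hundred deaths" per minute of delay
    print(deaths_per_hour, deaths_per_minute)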

> > If the AI was Friendliness-savvy enough during the prehuman training
> > phase, we might want to eliminate the gradual phase entirely, thus
> > removing what I frankly regard as a dangerous added step.
>
> How does it become dependably Friendliness-savvy without the feedback?
> Or do I misunderstand what gradual phase you want to eliminate?

I think so - the scenario I was postulating was that the AI became
Friendliness-savvy during the pre-hard-takeoff phase, so that you're
already pretty confident by the time the AI reaches the hard-takeoff
level. This doesn't require perfection; it just requires that the AI
display the minimal "seed Friendliness" needed to not take any precipitate
actions until ve can fill in the blanks by examining a nondestructive
brain scan.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
