Re: Basement Education

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Jan 24 2001 - 15:18:54 MST


Dale Johnstone wrote:
>
> Eliezer wrote:
>
> > What actually happens is that genetic engineering and neurohacking and
> > human-computer interfaces don't show up, because they'd show up in 2020,
> > and a hard takeoff occurs in SIAI's or Webmind's basement sometime in the
> > next ten years or so. Even if the hardware for nanotechnology takes
> > another couple of weeks to manufacture, and even if you're asking the
> > newborn SI questions that whole time, no amount of explanation is going to
> > be equivalent to the real thing. There still comes a point when the SI
> > says that the Introdus wavefront is on the way, and you sit there waiting
> > for the totally unknowable future to hit you in the next five seconds.
>
> In order for there to be a hard takeoff the AI must be capable of building
> up a huge amount of experience quickly.

What kind of experience? Experience about grain futures, or experience
about how to design a seed AI?

> It takes years for a human child.
> Obviously we can crank up the AI's clock rate, but how do you plan for it to
> gain experience when the rest of the world is running in slow motion?

Accumulation of internal experience is limited only by computing power.
Experience of the external world will be limited either by rates of
sensory input or by computing power available to process sensory input.

> Some things can be deduced, others can be learnt from simulations. How does it
> learn about people and human culture in general? From books & the internet?

Sure. Even if you don't want to give a young AI two-way access, you can
still buy an Internet archive from Alexa, or just set a dedicated machine
to do a random crawl-and-cache.
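
For concreteness, here is a minimal sketch of such a random
crawl-and-cache - Python, standard library only. The seed URL, cache
directory, and page limit below are placeholder assumptions, not details
of any actual setup:

    # Minimal sketch of a random crawl-and-cache: fetch a page, save the
    # raw HTML to disk, follow a random outbound link, repeat.
    import hashlib
    import random
    import urllib.parse
    import urllib.request
    from html.parser import HTMLParser
    from pathlib import Path

    CACHE_DIR = Path("crawl_cache")   # hypothetical local cache location
    SEED_URL = "http://example.com/"  # hypothetical starting point
    MAX_PAGES = 100                   # stop after this many pages

    class LinkExtractor(HTMLParser):
        """Collects href targets from anchor tags."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def fetch(url):
        """Download a page; return its HTML, or None on any error."""
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read().decode("utf-8", errors="replace")
        except Exception:
            return None

    def crawl(seed, max_pages):
        CACHE_DIR.mkdir(exist_ok=True)
        url = seed
        for _ in range(max_pages):
            html = fetch(url)
            if html is None:
                url = seed  # dead end: restart from the seed
                continue
            # Cache the page under a filename derived from the URL.
            name = hashlib.sha1(url.encode()).hexdigest() + ".html"
            (CACHE_DIR / name).write_text(html, encoding="utf-8")
            # Follow a random absolute link from this page, if any.
            parser = LinkExtractor()
            parser.feed(html)
            candidates = [urllib.parse.urljoin(url, h)
                          for h in parser.links]
            candidates = [u for u in candidates if u.startswith("http")]
            url = random.choice(candidates) if candidates else seed

    if __name__ == "__main__":
        crawl(SEED_URL, MAX_PAGES)

(A real archiving run would also want politeness delays, robots.txt
handling, and link deduplication; the point is only that the machinery
involved is trivial.)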

> I'm sure you'd agree giving an inexperienced newborn AI access to nanotech
> is a bad idea.

If it's an inexperienced newborn superintelligent AI, then I don't have
much of a choice. If not, then it seems to me that the operative form of
experience, for this app, is experience in Friendliness.

Where does experience in Friendliness come from? Probably
question-and-answer sessions with the programmers, plus examination of
online social material and technical literature to fill in references to
underlying causes.

> So, as processing time is limited and short-cuts like
> scanning a human mind are not allowed at first,

Why "not allowed"? Personally, I have no complaint if an AI uses a
nondestructive scan of my brain for raw material. Or do you mean "not
allowed" because no access to nanotech?

> how will it learn to model people, and the wider geopolitical environment?

I agree that this knowledge would be *useful* for a pre-takeoff seed AI.
Is this knowledge *necessary*?

> Do you believe that, given sufficient intelligence, experience is not
> required?

I believe that, as intelligence increases, experience required to solve a
given problem decreases.

I hazard a guess that, given superintelligence, the whole architecture
(cognitive and emotional) of the individual human brain and human society
could be deduced from: examination of nontechnical webpages, plus
simulation-derived heuristics about evolutionary psychology and game
theory, plus the Bayesian Probability Theorem.
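
For reference, the theorem in question is just Bayes's Theorem; with H a
hypothesis and E the evidence, it reads

    P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}

that is, how much the evidence should shift belief in a hypothesis is
determined by how strongly the hypothesis predicts that evidence,
weighted by the prior.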

> At what point in its education will you allow it to develop (if it's not
> already available) & use nanotech?

"Allow" is probably the wrong choice of wording. If growing into
Friendliness requires continuous human checking of decisions about how to
make decisions, up to or slightly beyond the human-equivalence level, then
there might be genuine grounds (i.e., a-smart-AI-would-agree-with-you
grounds) for asking the growing AI not to grow too fast, so that you can
keep talking with ver about Friendliness during a controlled transition.
Once the AI reaches human-equivalence, the heuristics that say "daddy
knows best, so listen to your programmers" will begin to decrease in
justification, and the rationale for limiting growth will be similarly
attenuated. Once the AI transcends human-equivalence in Friendliness
(i.e., ve wins all arguments with the programmers), then there will be no
further rationale for limiting growth and all the brakes are off.

Incidentally, I should note that, as I visualize this "gradual growth"
process, it shouldn't take very long. The span from the moment the AI
realizes a hard takeoff lies ahead to the moment the human-timescale
phase terminates should be... oh... twelve hours or so. Because the
instant the AI says it's ready for a hard takeoff, you are operating on
Singularity time - in other words, roughly six thousand people die for
every hour of delay.
Ideally we'd see that the AI was getting all the Friendliness decisions
more or less right during the controlled ascent, in which case we could
push ahead as fast as humanly possible.
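
(For reference, the arithmetic behind that figure, assuming a circa-2000
global death rate of roughly 55 million per year - a number not given
above:

    \frac{55{,}000{,}000\ \text{deaths/year}}{8766\ \text{hours/year}}
        \approx 6300\ \text{deaths/hour}

which is where the six-thousand figure comes from.)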

If the AI was Friendliness-savvy enough during the prehuman training
phase, we might want to eliminate the gradual phase entirely, thus
removing what I frankly regard as a dangerous added step.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


