From: Samantha Atkins (samantha@objectent.com)
Date: Wed Jan 24 2001 - 19:34:11 MST
"Eliezer S. Yudkowsky" wrote:
>
> Dale Johnstone wrote:
> >
> > Eliezer wrote:
> >
> > > What actually happens is that genetic engineering and neurohacking and
> > > human-computer interfaces don't show up, because they'd show up in 2020,
> > > and a hard takeoff occurs in SIAI's or Webmind's basement sometime in the
> > > next ten years or so. Even if the hardware for nanotechnology takes
> > > another couple of weeks to manufacture, and even if you're asking the
> > > newborn SI questions that whole time, no amount of explanation is going to
> > > be equivalent to the real thing. There still comes a point when the SI
> > > says that the Introdus wavefront is on the way, and you sit there waiting
> > > for the totally unknowable future to hit you in the next five seconds.
> >
> > In order for there to be a hard takeoff the AI must be capable of building
> > up a huge amount of experience quickly.
>
> What kind of experience? Experience about grain futures, or experience
> about how to design a seed AI?
In order for the SI to run much of anything other than its own
development, it will need both.
>
> > I'm sure you'd agree giving an inexperienced newborn AI access to nanotech
> > is a bad idea.
>
> If it's an inexperienced newborn superintelligent AI, then I don't have
> much of a choice. If not, then it seems to me that the operative form of
> experience, for this app, is experience in Friendliness.
>
You certainly do have a choice. Declining to hook the system up in such
a way that it controls hardware manufacturing at all levels, until it is
a bit more seasoned, would be a quite prudent step. Friendliness
is not enough. Forgive me if I forgot, but where are you getting common
sense? From something like Cyc?
> Where does experience in Friendliness come from? Probably
> question-and-answer sessions with the programmers, plus examination of
> online social material and technical literature to fill in references to
> underlying causes.
>
That would not be enough to develop common sense by itself. The
literature assumes too much of the underlying human context.
> > So, as processing time is limited and short-cuts like
> > scanning a human mind are not allowed at first,
>
> Why "not allowed"? Personally, I have no complaint if an AI uses a
> nondestructive scan of my brain for raw material. Or do you mean "not
> allowed" because no access to nanotech?
>
> > how will it learn to model people, and the wider geopolitical environment?
>
> I agree that this knowledge would be *useful* for a pre-takeoff seed AI.
> Is this knowledge *necessary*?
>
Before it runs anything real-world I would say it is pretty necessary.
> > Do you believe that given sufficient intelligence, experience is not
> > required?
>
> I believe that, as intelligence increases, experience required to solve a
> given problem decreases.
>
> I hazard a guess that, given superintelligence, the whole architecture
> (cognitive and emotional) of the individual human brain and human society
> could be deduced from: examination of nontechnical webpages, plus
> simulation-derived heuristics about evolutionary psychology and game
> theory, plus the Bayesian Probability Theorem.
>
I think you are guessing wrong unless quite a bit of the detailed common
sense is developed or entered before the young AI goes off examining
papers and running simulations. Knowing the architecture of human minds
is not sufficient for having working knowledge of how to deal with human
beings.
There is no way that understanding the fundamentals of human beings
would give understanding of the actual geopolitical situation as it
exists. And extracting it from news sources and broadcasts again
requires a lot of common-sense knowledge.
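(For concreteness: the "Bayesian Probability Theorem" invoked above is just Bayes' rule. A minimal sketch of a single belief update, with made-up numbers; the function name and figures are illustrative, not from either correspondent:)

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return the posterior P(H|E) for hypothesis H given evidence E."""
    # Total probability of the evidence under both branches.
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# A weak prior (0.1) combined with evidence ten times likelier under H
# than under not-H lifts the posterior past even odds.
posterior = bayes_update(0.1, 0.5, 0.05)
print(round(posterior, 3))  # → 0.526
```

The open question in the exchange is not whether the rule works, but whether the priors and likelihoods can be filled in without common-sense knowledge.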
> > At what point in its education will you allow it to develop (if it's not
> > already available) & use nanotech?
>
> "Allow" is probably the wrong choice of wording. If growing into
> Friendliness requires continuous human checking of decisions about how to
> make decisions, up to or slightly beyond the human-equivalence level, then
> there might be genuine grounds (i.e., a-smart-AI-would-agree-with-you
> grounds) for asking the growing AI not to grow too fast, so that you can
> keep talking with ver about Friendliness during a controlled transition.
> Once the AI reaches human-equivalence, the heuristics that say "daddy
> knows best, so listen to your programmers" will begin to decrease in
> justification, and the rationale for limiting growth will be similarly
> attenuated. Once the AI transcends human-equivalence in Friendliness
> (i.e., ve wins all arguments with the programmers), then there will be no
> further rationale for limiting growth and all the brakes are off.
>
> Incidentally, I should note that, as I visualize this "gradual growth"
> process, it shouldn't take very long. From the moment the AI realizes a
> hard takeoff lies ahead to the moment human-timescale phase terminates,
> should be... oh... twelve hours or so. Because the instant that the AI
> says it's ready for a hard takeoff, you are operating on Singularity time
> - in other words, six thousand people are dying for every hour delayed.
> Ideally we'd see that the AI was getting all the Friendliness decisions
> more or less right during the controlled ascent, in which case we could
> push ahead as fast as humanly possible.
>
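(An aside on the figure quoted above: "six thousand people dying for every hour delayed" is consistent with a world mortality rate of roughly 55 million deaths per year, which is an assumption of this sketch, not a number from the email:)

```python
# Sanity check on "six thousand people are dying for every hour delayed".
# Assumes roughly 55 million deaths per year worldwide (circa 2001).
deaths_per_year = 55_000_000
hours_per_year = 365 * 24
deaths_per_hour = deaths_per_year / hours_per_year
print(round(deaths_per_hour))  # → 6279
```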
How can human programmers answer a sufficient number of the AI's
questions in a mere 12 hours? AI time is not the gating factor in this
phase. And there is no reason to rush it. So many people dying per
hour is irrelevant and emotionalizes the conversation unnecessarily.
Letting the AI loose too early can easily terminate all 6 billion+ of
us.
> If the AI was Friendliness-savvy enough during the prehuman training
> phase, we might want to eliminate the gradual phase entirely, thus
> removing what I frankly regard as a dangerous added step.
>
How does it become dependably Friendliness-savvy without the feedback?
Or do I misunderstand what gradual phase you want to eliminate?
- samantha
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:35 MDT