From: Dale Johnstone (DaleJohnstone@email.com)
Date: Wed Jan 24 2001 - 14:41:45 MST
Eliezer wrote:
> What actually happens is that genetic engineering and neurohacking and
> human-computer interfaces don't show up, because they'd show up in 2020,
> and a hard takeoff occurs in SIAI's or Webmind's basement sometime in the
> next ten years or so. Even if the hardware for nanotechnology takes
> another couple of weeks to manufacture, and even if you're asking the
> newborn SI questions that whole time, no amount of explanation is going to
> be equivalent to the real thing. There still comes a point when the SI
> says that the Introdus wavefront is on the way, and you sit there waiting
> for the totally unknowable future to hit you in the next five seconds.
In order for there to be a hard takeoff, the AI must be capable of building
up a huge amount of experience quickly. It takes years for a human child.
Obviously we can crank up the AI's clock rate, but how do you plan for it to
gain experience when the rest of the world is running in slow motion? Some
things can be deduced, others can be learnt from simulations. How does it
learn about people and human culture in general? From books & the internet?
I'm sure you'd agree that giving an inexperienced newborn AI access to nanotech
is a bad idea. So, given that processing time is limited and short-cuts like
scanning a human mind are not allowed at first, how will it learn to model
people and the wider geopolitical environment?
Do you believe that given sufficient intelligence, experience is not
required?
At what point in its education will you allow it to develop (if it's not
already available) & use nanotech?
Cheers,
Dale.