From: Dale Johnstone (DaleJohnstone@email.com)
Date: Wed Jan 24 2001 - 20:11:01 MST
> To me the big problem with the hard takeoff is that even if the increase
> in intelligence through progressive self-modification is exponential, the
> exponent may be 1.0003.
> I.e., learning about one's own brain may continue to be hard for quite a
> while.
Yeah, I don't doubt a human-equivalent mind is possible with enough of
today's hardware and, crucially, the right architecture. What I'm unconvinced
of is that learning will be fast enough for a hard takeoff as claimed.
With nanotech however, all bets are off.
> The problem you cite, Dale, seems not a big problem. If AI's are learning
> about the physical world or each other, then learning may happen at
> much-faster-than-human time-scales. The problem you cite is only relevant
> to experiential learning about human interactions.
> But AI's may lose interest in humans soon.
My point entirely.
Considering humans currently have lots of nukes pointed at each other, it's
in the AI's best interests to understand these easily frightened animals.
Do you believe your AI may lose interest in humans?
If you truly believe you can build a real AI, then you should really think
through the consequences.
Sorry to keep throwing buckets of cold water over everyone, but if you're
worth your salt, you're really talking about the end of the human race here.
It's often treated as casually as an episode of Star Trek in transhumanist
circles. Yes, this is the SL4 list, but that doesn't mean we should take it
any less seriously.
The reason I'm questioning this is not that I'm being antagonistic, or that
I'm particularly interested in the answers per se, but rather in the
reasoning behind them. People are too eager to believe in magic.
Could someone else pick up the devil's advocate line now? I'm not enjoying
it.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:35 MDT