From: James Higgins (email@example.com)
Date: Sat Jun 22 2002 - 17:05:59 MDT
At 01:51 PM 6/22/2002 -0400, you wrote:
>Ben Goertzel wrote:
>>Eliezer, while we're all free to our own differing intuitions, it seems
>>wrong to me to feel "dead certain" about something we've never seen
>>before, that depends on technologies we don't yet substantially understand.
>If I can figure out how to solve a problem myself, I usually feel
>comfortable calling it "dead certain" that a superintelligence can solve
>it. Motivations might be different, although in this case it is very
>difficult to see why they would be, but if I can see at least one easy way
>to go from superintelligence to nanotechnology in a matter of days or
>weeks, there are probably others.
So, are you "dead certain" that the AI won't purposely stop evolving its
intelligence for a period of time? Possibly to focus on something else
(something we may not even know exists)? I think the only thing we can be
"dead certain" about is that we can't be certain about anything regarding
the SI (at least at this point)...
>>I think the period of transition from human-level AI to superhuman-level AI
>>will be a matter of months to years, not decades.
>I suppose I could see a month, but anything longer than that is pretty
>hard to imagine unless the human-level AI is operating at a subjective
>slowdown of hundreds to one relative to human thought.
A month? Are we talking about going from human-equivalent to SI, or merely
significantly beyond human level? I could, possibly, see getting into the
lower stages of super-human in a month, but getting to SI would take much
longer, I would imagine. Especially since it may take considerable time to
design & produce the next level of hardware at the beginning. My guess is
that it would take numerous hardware iterations to get to super-human
level, which could easily take months to years. And the jump to nano-tech
or other extremely advanced production methods will take super-human level
to invent/implement. Thus the early stages will likely be very slow, but
will exponentially increase in speed. But, then again, virtually anything
is possible. We won't really know until we get there.
Personally, I'd like to see someone like Ben be at the helm of the first
successful SI. I'd feel much more comfortable about the future of humanity.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT