From: William Pearson (email@example.com)
Date: Sun Feb 21 2010 - 05:10:45 MST
On 21 February 2010 00:58, Thomas McCabe <firstname.lastname@example.org> wrote:
> This Singularity FAQ was co-authored by myself and Kaj Sotala. It is
> not intended to be an introduction to the Singularity; rather, it is
> intended to answer the questions of those who have already heard about
> the Singularity, but still have questions about some of the issues.
> Please feel free to send me comments if you think there are bits that
> could be improved.
I think the following needs some justification.
>Q2). Shouldn't the "exponential spiral" of an intelligence explosion quickly wind down, because each AI is more complex, and thus harder to reprogram, than the previous AI?
>A2). This is, indeed, one possible scenario. However, prudence demands that we also consider the worst case, not just the best case. An intelligence explosion might well peter out quickly, but how would we know that ahead of time? The only way to test it would be to actually go ahead and create an intelligence explosion.
Why is that the only way to test it? Most facts about the world have
echoes, so we can judge their likelihood without experiencing them
directly. For example, we can judge whether faster-than-light travel
is possible by observing gravitational lensing and similar phenomena.
There should be some possible worlds where an intelligence explosion
is possible and some where it is not, and there should be logical
reasons, based on the properties of each world, that classify them as
such. If we can develop a theory of intelligence under real-world
conditions, we should be able to identify those properties and judge
which type of world we are in.
This, I think, deserves more brain power than it gets.
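The kind of indirect inference argued for above can be sketched as a toy Bayesian update: instead of running the experiment, we ask how likely each observable "echo" is under each hypothesized type of world, and update accordingly. All the numbers below are hypothetical illustrations, not real estimates of anything.

```python
def posterior(prior_explosion, likelihoods_explosion, likelihoods_no_explosion):
    """Update the probability that we live in an 'explosion-possible' world,
    given how likely each indirect observation is under each hypothesis.
    Purely illustrative: the inputs are made-up numbers."""
    p = prior_explosion          # odds mass for "explosion possible"
    q = 1.0 - prior_explosion    # odds mass for "explosion peters out"
    for le, ln in zip(likelihoods_explosion, likelihoods_no_explosion):
        p *= le                  # evidence likelihood under hypothesis 1
        q *= ln                  # evidence likelihood under hypothesis 2
    return p / (p + q)           # normalize back to a probability

# Start agnostic (prior 0.5); suppose two indirect observations are each
# twice as likely in explosion-possible worlds (0.8 vs 0.4).
print(posterior(0.5, [0.8, 0.8], [0.4, 0.4]))  # 0.8
```

The point of the sketch is just that credence can move well away from 0.5 without ever performing the decisive experiment, provided a theory of intelligence tells us which observations discriminate between the two types of world.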