From: Phillip Huggan (email@example.com)
Date: Wed Aug 03 2005 - 14:51:16 MDT
Daniel Radetsky <firstname.lastname@example.org> wrote:
On Tue, 2 Aug 2005 20:03:43 -0400
Randall Randall wrote:
>> Given this, safeguards must be built into the superhuman intelligence
>> directly. This, of course, is the Party position, the common wisdom, on this
>>list. This is why (I believe) people on this list keep insisting that some
>> exploit may exist: assuming that they do not predetermines failure in the
>> case that they do, while the reverse is not true.
>I have less charitable reasons for why people insist on the existence of
>exploits, but even if you are right, that doesn't mean that acting as though
>exploits exist is automatically the rational thing to do. For example, suppose
>we thought there was a pretty low probability that exploits existed, but that
>if they did exist we would be screwed unless we accounted for them, and that we
>probably could account for them by spending an extra 20 years before the debut
>of AI. In this case, given the very low probability of the existence of
>exploits and the astronomical waste of 20 years without AI, it seems like the
>decision to ignore the possibility of exploits has more expected value,
>although I may just be reacting to my bias to avoid sure losses.
The waste of 20 years is unlikely to be important in the grand scheme of things, with heaven and hell still in doubt, unless that twenty-year period represents the closing of a cosmological window in which we could have done something like alter the topology of the universe from an open to a desired closed state. The debut of AGI is properly weighed against the debut of Molecular Manufacturing, any new technologies which hasten either or both of these, and all the other hundreds of low-probability risks which would impact society enough to disrupt AGI/MM. The chance of such a disruption is 1%-10% per decade and rising over time.
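The tradeoff in this exchange can be made concrete with a toy expected-value calculation. Every number below is an illustrative assumption, not a figure from the thread: a low probability that exploits exist, normalized utilities, and a per-decade disruption rate picked from the 1%-10% range mentioned above.

```python
# Toy expected-value comparison: debut AI now vs. delay 20 years to
# account for possible "exploits". All numbers are illustrative assumptions.

P_EXPLOIT = 0.05              # assumed low probability that exploits exist
U_SUCCESS = 1.0               # normalized utility of a successful AI debut
U_FAILURE = -1.0              # utility if exploits exist and were ignored
DISRUPTION_PER_DECADE = 0.05  # assumed disruption risk (1%-10% range above)

def ev_launch_now():
    # If exploits exist and we ignored them, we are "screwed";
    # otherwise the debut succeeds.
    return P_EXPLOIT * U_FAILURE + (1 - P_EXPLOIT) * U_SUCCESS

def ev_delay_20_years():
    # Delaying lets us account for exploits, but exposes us to two
    # decades of disruption risk that could prevent the debut entirely
    # (utility 0 in that branch).
    p_survive = (1 - DISRUPTION_PER_DECADE) ** 2
    return p_survive * U_SUCCESS

print(f"EV(launch now):  {ev_launch_now():.4f}")   # 0.9000
print(f"EV(delay 20 yr): {ev_delay_20_years():.4f}")  # 0.9025
```

With these particular numbers the two options come out nearly even, which is the crux of the disagreement: the verdict flips depending on how low the exploit probability really is, how badly an unaccounted-for exploit hurts, and how large the per-decade disruption risk is taken to be.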
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT