From: Nick Bostrom (firstname.lastname@example.org)
Date: Wed Dec 14 2005 - 07:39:20 MST
Michael Vassar wrote:
>At any rate, I think that the chance of "us" making a difference directly
>is FAR greater than that of "us" making a difference through our influence.
I think my intuition would be the other way around, that it is somewhat (I
wouldn't say "far" let alone "FAR") more likely that the ideas we come up
with will have some appreciable influence on whoever creates the first AGI
than that we will be the ones doing it (taking "we" here to mean the people
currently on this list).
>>The cases are different. Opting to go for an Oracle first need not
>>mean giving up a powerful technology that adversaries would then acquire
>>instead, nor does it require global coordination.
>It requires both. Global coordination so no-one else does a harder
>takeoff seed AI, and the risk of someone else doing a harder takeoff seed
>AI first, since by going for an Oracle you are slowing down.
I was assuming the case where going for the Oracle would not slow one down
much. If it would be a great slow-down, then it might - depending on the
circumstances, e.g. what others were doing - be optimal to pursue a
riskier strategy, just as one might go for a crude definition of CEV if
refining it would take lots of philosophical work and if one knew that some
other group were about to launch a fatal AI that didn't feature any sort
of friendliness at all.
Director, Future of Humanity Institute
Faculty of Philosophy, Oxford University
10 Merton Str., OX1 4JJ, Oxford +44 (0)7789 74 42 42
Homepage: http://www.nickbostrom.com FHI: http://www.fhi.ox.ac.uk
For administrative matters, please contact:
Miriam Wood (Projects Officer and PA)
+44(0)1865 27 69 34 email@example.com