From: Aleksei Riikonen (aleksei@iki.fi)
Date: Mon Jan 12 2009 - 19:13:25 MST
On Mon, Jan 12, 2009 at 11:38 PM, Vladimir Nesov <robotact@gmail.com> wrote:
> On Mon, Jan 12, 2009 at 5:26 PM, Aleksei Riikonen <aleksei@iki.fi> wrote:
>
>> Suppose that we don't really learn how to build FAI before we learn
>> e.g. to scan human brains and construct simulated humans in simulated
>> universes that we can run at huge subjective speedups.
>>
>> What would be the safest (realistic) thing to do?
>
> Use hierarchies of communicating simulated Friendly AI programmers to
> optimize the reliability of the FAI design they create, peer-reviewing
> theoretical and early experimental work across groups composed of
> different combinations of selected individuals running for
> considerable subjective time. Bootstrap the implementation of FAI from
> this system, including nested simulated worlds in which the AI is to
> start growing.
I like that answer. Seems strictly superior to what I came up with.
Anyone have criticism of this answer?
Could we say that this answer gives us a solution for FAI: a
successful strategy that can be implemented if only we can buy enough
time by preventing anyone from launching non-FAIs before such a
project can complete?
On Mon, Jan 12, 2009 at 11:36 PM, Charles Hixson
<charleshixsn@earthlink.net> wrote:
>
> That does seem like a good reason to run the simulations, but you are
> assuming that the person in charge is a good guy who can be trusted with
> that kind of power, in which case why not use him as your first upload?
> Given human political organization, the only things that keep the
> power-hungry psychopaths from scrambling to be an upload are:
> 1) They wouldn't see the upload as themselves, and neither would the
> upload see itself as them.
> 2) They don't believe it's possible anyway.
> 3) They don't understand the amount of power that a computer running the
> country (under nominal external direction) would have.
>
> If such a thing happens, and we're very lucky, the first upload will be
> someone like Craig Venter. He's an egomaniac, but not a psychopath.
>
> Simulations such as you propose would be a good, rational solution. I just
> don't see them as the kind of thing our political systems are good at
> achieving.
I think smart, power-hungry psychopaths might very well refrain from
scrambling to be the first upload, realizing that if such a scramble
took place, they would be competing with a substantial number of
roughly equally capable individuals and would therefore be unlikely to
win. Cooperating to prevent the scramble in the first place seems like
the more rational choice, and the one the smartest of them would make
(whether psychopathic or not).
Their coalition could then prevent the less smart psychopaths from
acquiring too much power.
-- Aleksei Riikonen - http://www.iki.fi/aleksei