Re: Building a friendly AI from a "just do what I tell you" AI

From: Matt Mahoney (matmahoney@yahoo.com)
Date: Thu Nov 22 2007 - 16:03:55 MST


--- Panu Horsmalahti <nawitus@gmail.com> wrote:

> Here's a FAI design:
>
> First, there is a swarm of AGIs at about the human intelligence level,
> with average human knowledge and the task of creating a Friendly AI. The
> AGIs are living in a virtual simulation without their knowledge, and they
> are very slowly rewriting themselves to increase their intelligence. If
> humans can code an FAI, then it must be possible for these AGIs too.

Why does it matter that the AGIs don't know they are in a simulation? What if
we knew the universe was simulated?

Also, each level of simulation decreases both the available computing power
and the complexity of the world it can host. Where does the increased
intelligence come from?

> There is another computer, which the AGIs don't know about, that
> sometimes reads through the data from the AGI virtual world and creates a
> virtual simulation (with the help of supercomputers) to test that FAI
> design. The FAI has a very low probability of predicting that it is
> inside a virtual world simulation, so the probability of its escaping is
> low. The computer runs tasks to verify whether the FAI design works.

How do you build an FAI tester with a better test for friendliness than what
humans could do manually?

> All this is run in another virtual layer, with a computer watching for
> possible escapes from that layer, and this layering can be repeated 100
> times if necessary.

How much computing power will this require, and where will you get it?
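
A back-of-the-envelope sketch (the per-layer overhead factor here is my
assumption, not anything from your proposal): suppose each simulation layer
can pass at best half of its compute down to the layer it hosts, which is
wildly generous for full-physics emulation. In Python:

    # Hedged sketch: effective compute reaching the innermost layer.
    # 'overhead' is an assumed fraction passed down per layer, not a
    # measured figure.
    def innermost_flops(host_flops, layers, overhead=0.5):
        return host_flops * overhead ** layers

    print(innermost_flops(1e9, 100))  # ~7.9e-22 FLOPS -- effectively zero

The cost grows exponentially in the number of layers, so at 100 layers the
innermost AGIs get essentially no usable compute, no matter what hardware
you start with.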

> One more note: the whole simulation-inside-simulations setup is located
> on an island booby-trapped with nuclear bombs. If an unexpected
> electronic signal is detected, the whole thing blows up.

So now you are letting a complex, poorly understood process control nuclear
bombs, where the most dangerous thing it could do is not use them?

> Not safe enough. That *island* is actually
> a simulation run on another, real island. Actually, that second island is
> a simulation too. I would like to see a UFAI escape that.

1. The computer says "I have successfully achieved godlike intelligence
guaranteed to be friendly. You may now safely connect me to the internet".

2. Somebody else, with a lot less computing power than your system requires,
builds an intelligent computer worm, or a self-replicating robot or
bacterium with swarm intelligence, just to see if it works, and worries
about how to program it later. Which is more likely to happen first?

-- Matt Mahoney, matmahoney@yahoo.com


