From: Wei Dai (firstname.lastname@example.org)
Date: Thu Nov 22 2007 - 15:03:12 MST
Panu Horsmalahti wrote:
> First, there is a swarm of AGIs of about the human intelligence level, with
> average human knowledge and the task of creating a Friendly AI. The AGIs are
> living in a virtual simulation without their knowledge, and the AGIs are
> very slowly rewriting themselves to increase their intelligence. If we
> can code a FAI, then it must be possible for these AGIs too.
Interesting idea. Have you considered that we ourselves are a swarm of AGIs
of about the human intelligence level, with the task of creating a Friendly
AI? Perhaps we are actually living in a simulation now for the purpose of
creating a Friendly AI for someone else? But if we can realize that this is
a possibility, so can any potential FAI that we create. So how does whoever
is running the simulation tell whether an apparently Friendly AI is really
Friendly, or just biding its time until it gets released?
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:00 MDT