From: Mike Dougherty (firstname.lastname@example.org)
Date: Fri Feb 24 2006 - 19:35:29 MST
I wonder if this would be like having two containers of water (let's call
them ionically charged positive and negative, as indicators of their
"upbringing" as friendly or unfriendly). Once they are released into a larger
containing vessel, they will ultimately mix together and dilute each other,
possibly creating ambivalence. (Have we discussed Apathetic AI, or Completely
Ignorant AI?) I think the point of growing "Friendly" AI is to make sure it
is the first sibling, and that it continues to grow in such a way that the
resulting mixture of FAI and potential future UFAI leaves our world always
containing a little more "Friendly."
I agree that the definition of Friendly is somewhat subjective, and that it
would be good evolutionary design to begin with several strategies providing
correlated checks and balances. Who wouldn't?
On 2/22/06, turin <email@example.com> wrote:
> Maybe we shouldn't make just one SI, maybe we should do things
> "biologically" and make a population of SI. I don't know if thinking about
> the difference between 1 or several SI helps solve the problem either, but
> we talk a lot about the first SI and never talk about what happens when we
> have more than one.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT