From: Cliff Stabbert (cps46@earthlink.net)
Date: Fri Jul 12 2002 - 11:17:59 MDT
Thursday, July 11, 2002, 12:08:48 AM, Mike & Donna Deering wrote:
<snippysnipsnip>
MDD> This involves taking over the world. This involves maintaining
MDD> a comfortable lead in intelligence over every other being on
MDD> the planet. This involves limiting the intelligence advancement
MDD> of all of us.
This does not necessarily follow. I can think of a few scenarios in
which that is not the case:
1) The FAI calculates that we can never equal or beat it on our
current neurological substrate.
2) The FAI calculates that equalling or beating it in intelligence
would of necessity result in / be accompanied by "Friendly"
behavior on our part.
MDD> Understandably, a limit that is sufficiently far
MDD> away is not of much practical effect, but we are still left in
MDD> the philosophical position, compared with the AI, of pets.
That's the case under any assumption of artificial superintelligence,
although the "pet" phrasing is based on our perspective on these things.
In terms of impact, we may be more in the position of germs or dust;
in terms of the quality of our relationship, it's too hard to say.
-- Cliff