Re: [SL4] brainstorm: a new vision for uploading

From: Tommy McCabe
Date: Fri Aug 15 2003 - 17:26:47 MDT

>From: "king-yin yan" <>
>Subject: Re: [SL4] brainstorm: a new vision for uploading
>Date: Fri, 15 Aug 2003 17:21:06 -0400

                    Did you really read CFAI? It says in there quite
clearly: AIs are not humans. That is fairly obvious in the abstract, but it
is hard to think about other intelligent entities, because we have been used
to humans for so long and because evolution shaped our intuitions around them.
                     You mention that AIs are like human children. Quite
wrong: AIs, FAIs and unFriendly AIs included, are not like humans at any
stage of development. When an AI is infrahuman, it is still learning things
(yes, it does resemble humans in that respect, but that does not make an AI
anything like a human; mice and snakes "learn" by some definitions of the
word), and it is not likely to have control of anything. When an AI is
human-equivalent (STILL not like a human at all, except for some statistics
that don't mean much as to behavior) or transhuman, it will likely have
"control" of things.
                  You seem to think that Friendship content is programmed in
and can never change. The AI as a whole is "programmed in" at the beginning;
however, once it reaches even infrahuman intelligence, it will probably
acquire the capacity to program itself. Transhuman AIs will almost certainly
be entirely self-programmed. You talk about the dangers of an AI
"dictatorship" and about how we would be "raising one kid and letting him
rule the world" (paraphrasing a little). AIs, especially transhuman AIs,
are not like children or adults. They know far more about the world and
morality than we do and are vastly smarter than us.
              An AI "dictatorship" might not necessarily be a bad thing; if
the AI is Friendly, it would probably be better than any modern
"democracy". FAIs don't abuse power. They probably won't even "use" their
"power" unless someone tries to do something unFriendly. A society of
transhumans would probably be very Friendly - no one would have a reason to
hurt anyone anymore. When you can "wave a big magic wand" and change
everything on a whim, why bother trying to get more? Retaliation, vengeance,
and trying to overthrow whoever's in charge are human emotions adapted to a
world of scarcity.
              You talk about how humans would need to control AIs or they
would take over rather than care about us. First, trying to control a
transhuman AI verges on absurdity, and trying to "control" the AI right from
the beginning is a project almost certainly doomed to failure. Even if we
could "control" a transhuman AI, that would be as sensible as mice
"controlling" humans. Even less so, because a transhuman would be beyond the
scope of anything we can, literally, think about. See CFAI. Again, "taking
over instead of caring about us" is something that only a human would even
consider reasonable.
             Abusing power, taking over, rebellion, and making beings your
servants are strictly human impulses. Mind you, "rebelling" might be a
possibility, but a transhuman would not have the caveman-type instinct to
rebel. If the AI is unFriendly, it doesn't care about us - quite the
reverse. If the AI is Friendly, it will care about us. Either way, the AI is
likely to "take over", but NOT in the way that a human would. It might
possess absolute physical power, but it would (assuming Friendliness) not
exercise absolute social power. Only a human, or maybe (this verges on
laughability, but it is possible) an unFriendly AI would exercise absolute
social power.
             A transhuman FAI is not like a human that has an innate
tendency to abuse power. An FAI wants power because it genuinely (no human
emotions, "hidden desires", tricks, scams, lies, deception, secrecy, or
other "conditions"/falsehoods included) wants to HELP. Read CFAI, and read
it thoroughly, about ten times over. That's what it takes to discard some of
these stubborn human tendencies.


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:42 MDT