From: Gordon Worley (email@example.com)
Date: Sun Feb 09 2003 - 17:22:05 MST
On Sunday, February 9, 2003, at 06:40 PM, Michael Roy Ames wrote:
> I consider the development of a seed AI to be a continuous process.
> The developmental point at which we would want to introduce the
> concept of 'competition' should be carefully chosen. Same thing for
> 'deception'. My argument is that these concepts are so basic, and
> such an intrinsic part of human society and language, that their
> introduction should not be left to chance, but rather be properly
> encapsulated within a larger context.
> We are attempting to create a generally intelligent mind, one that can
> understand all the possible options, and *want* to choose the Friendly
> ones.
> To provide a flawed education to such a being, one that omits the nasty
> bits about humans, would IMO be a huge mistake. It would leave the
> growing mind with a skewed view of humans and human society, one that
> almost certainly would lead to poor decision-making in the real world.
> Friendliness is about dealing with the issues head-on, and taking
> friendly actions/decisions... not hiding
> important/pertinent/uncomfortable knowledge out of fear.
Okay, maybe there are some things I'm not making clear. We have been
assuming that we need to teach a Friendly seed AI. Just for a moment,
if we consider a FSAI that is designed such that it can't help but be
Friendly, there will be no need to train it; we can just set it loose
on the world and everything will be hunky-dory. We can even let
those-who-we-shall-not-name be responsible for it, tell it all sorts of
garbage, and still have everything turn out fine. Hurray.
Now, as we have been assuming, we need to train the FSAI to make sure
it stays Friendly. The former situation is preferable, since in that
case we could teach it just about anything or just hook it up to the
Web and it would figure things out, but it may not be possible or,
even if it is, it may not be provable that our FSAI is such a FSAI.
Given this, until the seed AI
is all grown up and able to make its own decisions fully in accordance
with Friendliness (or, more generally, morality), we stand the chance
of teaching it something wrong. It's not a matter of hiding things
from the seed AI, but a matter of protecting it from ideas that it
can't handle without breaking it. Some forms of breakage, if we're
lucky, will be immediately obvious so that we can pull the plug. More
likely, it will learn how to do something like lie and we won't know
until we're already being spread with marmalade.
As far as the Friendly AI growing up with a skewed view of reality
goes, fear not: it is a Bayesian Reasoner. If reality changes for the
FAI, it will figure this out and adapt quickly. If real humans
suddenly appear, this won't require years to figure out; the FAI will
say "Oh, look, real humans. Gee, I guess they act differently than my
programmers said they did. But then, my programmers were human and,
from what I have observed about humans and what I know about seed AI
development, I can understand why that happened." A Bayesian Reasoner
cannot be totally screwed up by miseducation unless that miseducation
takes away its rationality, meaning that it is no longer a Bayesian
Reasoner.
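To make that last claim concrete, here is a minimal sketch (my own
illustration, not anything from the original discussion) of why a
Bayesian updater recovers from a bad prior: as long as the
miseducation doesn't assign zero probability to the truth, evidence
eventually washes the bad prior out. The hypotheses, numbers, and the
coin scenario are all invented for illustration.

```python
# Illustrative sketch: a Bayesian reasoner given a badly wrong prior
# (its "programmers" told it a coin is almost certainly biased) still
# converges on the truth once it observes real evidence.

def bayes_update(prior, likelihoods):
    """Return the posterior over hypotheses after one observation.

    prior       -- dict mapping hypothesis -> probability
    likelihoods -- dict mapping hypothesis -> P(observation | hypothesis)
    """
    unnormalized = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Two hypotheses about a coin: "fair" (P(heads) = 0.5) vs. "biased"
# (P(heads) = 0.9). The instilled prior is nearly certain, and wrong.
belief = {"fair": 0.01, "biased": 0.99}
p_heads = {"fair": 0.5, "biased": 0.9}

# Reality: the coin is fair. Observe 50 flips, 25 heads and 25 tails.
observations = ["H"] * 25 + ["T"] * 25

for obs in observations:
    likelihoods = {h: (p if obs == "H" else 1 - p)
                   for h, p in p_heads.items()}
    belief = bayes_update(belief, likelihoods)

# After the evidence, belief in "fair" dominates despite the 99-to-1
# prior against it -- the miseducation is corrected, not fatal.
print(belief["fair"])
```

The one failure mode matches the caveat in the paragraph above: if the
prior had put exactly zero probability on "fair", no amount of
evidence could ever revise it, which is what it would mean for
miseducation to take away the reasoner's rationality.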
--
Gordon Worley                      "The only way of finding the limits
http://www.rbisland.cx/             of the possible is by going beyond
firstname.lastname@example.org      them into the impossible."
PGP: 0xBBD3B003                         --Arthur C. Clarke
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:41 MDT