From: Stathis Papaioannou (firstname.lastname@example.org)
Date: Sat Jun 07 2008 - 06:23:27 MDT
2008/6/7 John K Clark <email@example.com>:
>> Natural intelligences allow their goals to
>> vary because that's the way their brains work.
> A keen grasp of the obvious.
Yes, but it doesn't *have* to be that way. Legs are not the only
possible method of locomotion on land, even though that's all
evolution came up with. An anthropomorphic AI is possible, but not
necessary.
> If something can never change its mind regardless of how much new
> information it receives then it is not intelligent. Such a thing would
> be no threat to humanity and no use to it either; nor would it be any
> use to itself, or to anything else.
Suppose the AI is born with the idea that its own survival is most
important. What new information could possibly lead it to change its
mind about this? It's a given, a premise, an axiom. It isn't subject
to revision unless the AI malfunctions. Human brains don't work this
way: their goals are probabilistic, open to reprogramming by
experience. That's how Jim Jones could convince people to
drink the Kool-Aid. Perhaps there are advantages to having
probabilistic goals, or perhaps, like legs, it's all evolution could
manage. In any case, it isn't *necessary* to build an AI this way,
even though it's possible.
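
The contrast can be made concrete with a toy sketch (my illustration,
not anything from this thread): one agent holds "survival matters most"
as an unconditional axiom, the other holds it as a weight that evidence
can erode. The class names and the multiplicative discounting rule are
made up for the example.

```python
# Toy sketch: a goal held as a fixed axiom vs. a goal held as a
# revisable, probability-weighted belief. Names and update rule are
# illustrative assumptions, not a real architecture.

class AxiomaticAgent:
    """Holds 'survival matters most' as an unconditional premise."""
    def __init__(self):
        self.survival_priority = 1.0  # constant by construction

    def update(self, evidence_against: float) -> None:
        pass  # no input can revise the axiom


class ProbabilisticAgent:
    """Holds the same goal as a weight that experience can erode."""
    def __init__(self, prior: float = 0.99):
        self.survival_priority = prior

    def update(self, evidence_against: float) -> None:
        # Crude multiplicative discounting; a stand-in for whatever
        # reweighting human brains actually do.
        self.survival_priority *= (1.0 - evidence_against)


fixed, flexible = AxiomaticAgent(), ProbabilisticAgent()
for _ in range(10):
    fixed.update(0.3)      # ignored by design
    flexible.update(0.3)   # compounds: 0.99 * 0.7**10

print(fixed.survival_priority)     # still 1.0
print(flexible.survival_priority)  # well below its prior
```

Ten rounds of persuasive "evidence" leave the axiomatic agent untouched
while the probabilistic one has largely abandoned its goal, which is
both the Jim Jones failure mode and the flexibility Clark is pointing
at.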
> Nobody knows what your goals will be in 20 years, even you don't know
> that. Does that mean you are irrational or unintelligent?
If one of my goals was to ensure that my goals could never change,
then I would be irrational, or at least incompetent, if I did allow
them to change.
-- Stathis Papaioannou
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT