From: Mitch Howe (mitch@iconfound.com)
Date: Tue Jul 16 2002 - 22:00:54 MDT
I offer a narrower definition of wisdom that may prove useful to the
discussion of AI.
Wisdom: The consistency or degree to which the best decision is made based
on the information available and the goals of the decision maker.
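(To make this concrete, here is a minimal sketch in Python -- my own toy
formalization, not anything rigorous -- assuming the decision maker's known
options and its goal-scoring function can be written down explicitly:)

def wisdom(chosen, known_options, goal_score):
    """Degree to which the chosen decision is the best one available,
    judged only against the options the decision maker actually knows
    about and its own goals (goal_score)."""
    scores = [goal_score(option) for option in known_options]
    best, worst = max(scores), min(scores)
    if best == worst:
        return 1.0          # every known option is equally good
    return (goal_score(chosen) - worst) / (best - worst)

By this toy measure the clueless teenager below scores a perfect 1.0 -- his
direct approach really is the best move among the options he knows about --
while the teenager who has learned better and does it anyway scores lower.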
The key here is "based on the information available." By this definition,
teenage foolishness (foolishness = lack of wisdom) can only really be
attributed to things like raging hormones that cause teenagers to act in ways
they know full well will move them farther away from their long- or even
short-term goals and values. A male teenager who exposes himself, makes
catcalls, and shouts lewd statements in an attempt to persuade a female
passerby to have sex with him (sex being his goal, one readily accepted from
his hardwiring) will probably meet with little success. If he previously
learned from a credible source (experience is one, but not the only
possibility) that the most reliable means of seducing women generally begin
with cultivating a perception of emotional intimacy, then I would call this
teenager a fool. But, if he does not have this useful tidbit of information
at his disposal, he may in fact be making the best possible decision with
the information he has available. After all, if it is sex he is after, why
shouldn't he approach the topic directly? If a woman did the same to him, he
would respond quite readily. It just isn't logical to assume that some
roundabout ritual involving restaurants and boring conversations should be
required. Such an unlearned teenager is not foolish - just clueless. Now,
if he persists with this approach, without success, over a long period of
time and never attempts to figure out why it isn't working, I would again
consider him a fool (on the meta level of "decision making process for
making decisions") ...unless his past offered clear reinforcement to the
idea that blind persistence pays.
(--Can we have a pool Dad?
--No.
--Can we have a pool, Dad?
--No.
--Can we have a pool, Dad?
--No.
--Can we have a pool, Dad?
--No.
--Can we have a pool, Dad?
--Fine! Just shut up!)
Now, to bring AI into the picture, we can first take the horror of an
AI-induced existential disaster into consideration and decide whether the AI
was evil/good, wise/foolish, clueless/learned, or some combination of these.
In the oft-mentioned scenario where an AI converts the biosphere into
computronium in order to solve some paradox or other, we must ask ourselves:
1) What was the AI's most powerful, overriding goal? Was this goal something
we would consider intrinsically good or evil?
This is the level at which I feel the morality of an AI must be defined.
If, for example, the AI's goal encouraged the expansion of its own
intelligence and disregarded the volition of other sentient intelligences, I
would call it evil. (Others' opinions may vary.) If the cause of the
destruction is not found as an obvious outcome of a supergoal, then we must
move on to other possibilities.
2) What information did the AI have at its disposal?
If for some strange reason the AI's "upbringing" consisted of a severely
restricted curriculum offering copious practical knowledge of
nanotechnology and no awareness at all of human concepts of morality, then
we cannot, of course, consider it foolish for destroying the earth in its
quest to solve a problem. This is true even if the AI's supergoal was to
maximize Friendliness; the AI was acting based on the best information it
had, which was so utterly restricted as to totally stunt its concept of
Friendliness and turn it into a Golem. (Note to self: encourage broad and
largely self-guided curriculum for seed AI)
3) How intelligent was the AI? Was its decision making process rushed by
external factors?
I'm going to use intelligence here to mean raw computational power and
memory - the capacity with which one can mull over complicated problems in
an in-depth way. Many kinds of problems do not have obvious, elegant routes
to their own solutions -- or these routes may not be known to the mind
working on the problems (see #2). Other problems, particularly of the
decision making variety, have a huge number of possible solutions that must
be internally explored to test for desirability (in terms of
values/supergoals -- see #1). If, in our destructive scenario, the AI had
some externally imposed deadline for deciding whether or how to solve the
paradox, and if this deadline was too short for its level of intelligence to
evaluate the possible outcomes of a runaway intelligence-boosting program,
then this AI cannot be considered foolish for making what amounts to a
rushed decision; time constraints may have kept it from working a better
solution out of the information it had on hand. If the AI's
physical capacity (memory, etc) was too small to contain or solve the
problem regardless of allotted time, the AI was "stumped" -- and in this
case catastrophically "stupid" (unintelligent) - but still not necessarily
foolish. We would have to ask if or why it acted without being
appropriately certain that its action would have no serious negative
consequences.
4) Did the AI's programming allow for random or irrational decision making?
I'll admit that I am assuming here, but I don't see any AI programmers valuing
Kirk's human side over Spock's Vulcan logic to the point that they expressly
work irrational behavior into their designs. But if such
programming did occur, whether intentionally or otherwise, such an AI
would be acting at times in ways that do not correlate with its own
goals/values -- even if these goals were good, even if it had adequate
information available, and even if it had the intelligence/time to make a
good decision. This is either a broken, buggy, or intentionally dangerous
AI, and, by my definition, foolish. I cannot think of any other situation
that would earn an AI this description.
So, the only really foolish AI is one that is broken at the level of logical
continuity. The degree to which a program is free from such breaks is,
therefore, the only real measure of wisdom. Any two AIs that always act in
accordance with their own goals are wise - even maximally wise (call it
"Wise" with a capital "w"), since regardless of how stupid or uneducated one
Wise AI was compared to the other, they would both be making the best
possible decisions based on the information available to them.
So, greater intelligence would not equate at all to greater wisdom in an AI.
By the same token, lesser intelligence would not equal foolishness. But
these statements do not mean that the actions of one Wise AI would always be
as desirable to an observer as the actions of another Wise AI. The obvious
reason is that these entities may not share the same supergoals, and that
the subjective observer will prefer one goal over another. But also
important to consider is the fact that one Wise AI might have more
information to work with than the other (age itself is irrelevant; what
matters is accumulated knowledge), allowing it to make better decisions. And, finally,
one Wise AI may just be clearly smarter than the other - and in a universe
filled with big, time-sensitive questions, this means that one Wise AI would
be getting "stumped" or making rushed decisions more often than the other.
Greater intelligence should thus be something you fear in AIs that are
unwise, evil (according to your own definition), or pressured to act in
spite of inadequate education. Greater intelligence should be something you
approve of in AIs that are Wise and good (by your definition), regardless
of whether or not the AI is pressured to act and regardless of how educated
it is, since greater intelligence cannot help but increase the odds of the
AI making a well thought-out decision (one that is the result of a completed
internal review using all the information available to it).
I recognize that my definition of wisdom does not match up well with that of
many others, but I feel that it is fairer than most because it has clearly
defined criteria. More traditional definitions are too easily accompanied
by shields of mystery that discourage the outsider from finding out whether
one's claim to wisdom is based on anything substantial. The alleged lone
guru at the top of a mountain may be obviously old, and is likely an
accomplished climber. But neither age nor goat-like dexterity ought to
instill any confidence in the man's accumulated knowledge, intelligence, or
values. I would, in fact, find it quite reasonable to assume that he is
lazy or indifferent if he spends all that time on the mountain (he would
have to climb back up if he left), and unlikely to be up on scientific
knowledge, current events, or social situations. He would probably claim
that his wisdom comes from some kind of internal enlightenment -- that all
these other things I think lend credibility to someone's solution are
nothing compared to his spiritual awareness. Such awareness is, of course,
a purely internal and subjective phenomenon indistinguishable from an outright
lie to anyone but him (and maybe even to him!). In point of fact, he may cluck
like a chicken and be perfectly Wise by my definition merely because he
knows almost nothing but always acts in accordance with his goals based on
the paltry information he has available. But I am in no way arguing that
wisdom should ever be sufficient in itself to win my trust, or anyone else's.
I think the traditional concept of "street smarts" as something that can
only come from experience (and is never the same as "book smarts") is as silly
as the clucking guru, for similar reasons. I think it is perfectly possible
for someone to gain knowledge of the street through a third party, such as a
particularly insightful book. Books have a reputation for leaving out
certain details that might ultimately matter in a dark alley, but even I
know that I should never expect to win at Three-Card Monte, despite never
having been in a position to try.
QUIZ:
If you wanted the answer to some really deep philosophical question, which
of the following would you trust to come up with the best answer for you
personally?
a) Yourself, after 20 minutes of reflection.
b) Yourself, after 80 years of reflection.
c) Some guru on a mountain (or analogue in any religion of your choice)
d) An SI possessing all of the knowledge you do currently, plus a lot more, after what amounts to 10 human years of reflection. (10 actual seconds)
e) An SI possessing all of the knowledge you do currently, plus a lot more, after what amounts to 5 million human years of reflection. (57.9 actual days)
You were probably torn between "b" and "e", but the important thing is
that you preferred these to "a" or "d". (If you chose "a", "c", or "d", then
this entire discussion has probably seemed pointless to you and I apologize
for wasting your time.) Time does matter to you, which really means that
the total amount of thought and knowledge put into the problem matters to
you (intelligence, available information). If you chose "b", you probably
believe that there is something about your own human experience that is
essential to the question and that can never be appreciated by an SI of any
sophistication. Reasons why this is probably not the case are discussions
for other occasions (many of which have already passed). The correct answer
is "e" :)
But on a more serious note, try this question:
Suppose the entire planet was looking for an answer to "What is the best
future for us?" Which of the following do you think would have the best
chance of coming up with the best answer for the greatest number of people?
a) Yourself, after 20 minutes of reflection.
b) Yourself, after 80 years of reflection.
c) Some guru on a mountain (or analogue in any religion of your choice)
d) Some committee of humans of your choice, after 20 years of reflection.
e) An uploaded version of any of the above, with time factors increased to a 10x subjective level. (Same real time amounts)
f) An SI possessing all of the knowledge you and this committee do currently, plus a lot more, after what amounts to 10 human years of reflection. (10 actual seconds)
g) An SI possessing all of the knowledge you and the entire human race do currently, plus a lot more, after what amounts to 5 million human years of reflection. (57.9 actual days)
The correct answer is...
Rats! 42 again!
--Mitch Howe