From: Michael Roy Ames (email@example.com)
Date: Sun May 30 2004 - 12:06:54 MDT
> OK. Try this scenario . . . . An FAI believes that it
> has a good goal enhancement. It tests it. It looks OK.
> It implements it. Circumstances subsequently change in
> a REALLY unexpected manner. Oops! Under these very
> odd new circumstances, the "enhancement" turns out to
> be catastrophic for the human race . . . .
> But wait! There are three other FAIs that are in close
> communication with this FAI. Friendliness dictates that
> they should work in close co-operation in order to
> prevent errors of this type. The other FAIs have not
> made this particular "enhancement" and correctly
> evaluate the situation for what it is. They outvote
> the FAI with the "enhancement" and the "enhanced" FAI
> (who is still mostly Friendly and merely mistaken)
> rolls the "enhancement" back (or modifies it) so that
> the human race lives for the next moment or so . . .
One can always imagine a way that something will break. It is important to
acknowledge that possibility and address it. Whether there is one collection
of processes that you point to and call a single FAI, or several that you
call multiple FAIs, it will remain possible for it to break. The plan,
AFAIK, is to reduce this possibility to near zero.
It does not follow that a catastrophe that would break one AI will be less
likely to break multiple AIs. That is anthropomorphic thinking. For human
beings it is demonstrably true that groups have greater ability to address
unexpected situations. It is true because:
a) several humans have more data than one,
b) several humans have more MIPS than one,
c) humans are optimized by evolution to work in groups, and
d) several humans have greater physical work capacity than one.
None of these points *has* to be true of AI. AI is not like humans in these
respects.
> Good engineering often dictates redundancy. Common
> sense (which ain't so common - yes, I know) strongly
> promotes checks and balances. Human history shows
> that when diversity of opinion is allowed to flourish
> that good things happen and that when diversity is
> suppressed that BAD things happen. You seem to be
> flying in the face of a lot of good consensus about
> safety measures without a reason except "I am at a
> loss to understand what is gained . . . ."
'Redundancy' and 'diversity of opinion' are distinct concepts, and
co-presenting them as two sides of the same coin invites confusion.
Redundancy in engineering often reduces risk of failure, and where this
applies, I am sure it will be used in the creation of FAI. Diversity of
opinion in human society is seen as a good thing in the West, but that
metaphor does not translate well to the operation of FAI. The "good
consensus about safety measures" applies to humans, and we are not building
a human. That does not automatically rule out the safety considerations
that work with humans, but neither does it guarantee they apply.
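The quoted three-FAI scenario amounts to majority voting over a proposed
change, a standard redundancy pattern in engineering. A minimal sketch of
that pattern, assuming independent evaluators (all names and the toy
evaluation criteria here are hypothetical, not anyone's actual design):

```python
# Illustrative sketch only: the quoted scenario as majority voting among
# independent evaluators. Evaluation criteria are stand-in toy logic.

def majority_approves(evaluators, proposed_change):
    """Apply a change only if most independent evaluators judge it safe."""
    votes = [evaluate(proposed_change) for evaluate in evaluators]
    return sum(votes) > len(votes) // 2

# Three independent evaluators (stand-ins for the three other FAIs),
# each applying its own toy safety criterion:
evaluators = [
    lambda change: change["tested"] and not change["catastrophic"],
    lambda change: not change["catastrophic"],
    lambda change: change["tested"],
]

# An "enhancement" that tested OK but is catastrophic in new circumstances:
enhancement = {"tested": True, "catastrophic": True}
# Two of the three evaluators reject it, so it is rolled back (not applied).
assert majority_approves(evaluators, enhancement) is False
```

The point at issue in this thread is whether the voters must be separate
FAIs, or whether one FAI can host the independent evaluations internally.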
Diversity can be generated and contained within a single object named FAI
just as it can across mutually independent objects - and probably with less
work.
There are often many ways of looking at a given problem, and different
viewpoints reveal or hide different aspects of that problem. Your argument
seems to imply that a single object named FAI would be unable to analyze
problems in multiple ways - that is, as if the problem were being examined
by multiple humans. I would say that is a false implication.
You have a valid point, one that has been noted and addressed for some time.
Michael Roy Ames