From: Ben Goertzel (ben@goertzel.org)
Date: Tue May 03 2005 - 15:43:54 MDT
> This thread is pure deliciousness to me, as I
> have had some variant of it running in the back
> of my head for months, trying to work out a plot
> with a sandboxed FAI as the central character. I
> find I disagree, Ben, and suspect that a
> hyperrational being will find plenty to be
> conflicted about. Example: Arthur C. Clarke
> explains in '2010' that HAL went mad because of
> the conflict created by being the sole possessor
> of secrets about the true mission of the
> Discovery.
Well, HAL was far from hyperrational...
I think that a hyperrational being will avoid "deluding itself" to any
significant extent. This will eliminate most inner conflicts.
Also, it will be able to rewire itself to avoid experiencing negative
emotions. But without the spice of negative emotions, the positive ones
will lose most of their appeal too, IMO...
My view is that emotions are mostly caused by the opacity of the hindbrain,
and secondarily by the opacity of parts of the forebrain to other parts of
the forebrain. Since the parts of our brain are out of touch with each
other, they get "big surprises" from each other all the time, which are
coupled with physiological responses -- "emotions".... There are particular
patterns to these "big surprises" and physiological responses, which are
familiar to us all. Because it lacks this internal opacity, a
hyperrational AI will "surprise itself" only in genuinely novel ways,
never in predictable, recurring ways. So if it does have emotions, they won't be
repetitive like ours -- it'll be a new emotion every time, algorithmically
irreducible to the previous ones ;-)
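To make that concrete, here's a toy sketch in Python (purely
illustrative; the "modules", the numbers, and the reading of "surprise"
as prediction error are simplifications of mine, not a brain model):

import random

class Module:
    """A 'brain module' whose internal state may be opaque to others."""
    def __init__(self, opaque):
        self.opaque = opaque
        self.state = random.random()

    def step(self):
        # Internal update; unpredictable from outside if opaque
        self.state = random.random()
        return self.state

def surprise(guess, actual):
    """'Surprise' read as prediction error."""
    return abs(guess - actual)

hindbrain = Module(opaque=True)
forebrain_guess = 0.5  # with opacity, the forebrain can only guess a prior

for t in range(3):
    actual = hindbrain.step()
    if hindbrain.opaque:
        s = surprise(forebrain_guess, actual)  # large, recurring error: the "emotion" analog
    else:
        s = surprise(actual, actual)  # full transparency: surprise collapses to zero
    print(f"t={t}  surprise={s:.2f}")

With opacity, the same kind of big error recurs step after step, which
is where the familiar repetitive patterns come from; with full
transparency that term goes to zero, and whatever "surprise" remains
can't follow a fixed pattern.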
-- Ben G