From: Lee Corbin (lcorbin@rawbw.com)
Date: Sat Jun 28 2008 - 02:20:15 MDT
Stuart writes
> If the AI is designed to be ecstatically happy carrying out
> the orders of the snail, and to be a tiny bit happy when being smart
> and consistent, then (as long as these happiness scores are not
> cumulative, and the AI does not get happy now because of potential
> future happiness) it will cheerfully follow the snail forever. (The
> happiness assumptions are what is needed if we are to say that one
> goal is strictly preferred to another.)
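To make those assumptions concrete, here is a toy Python sketch of that
kind of preference structure (entirely my own construction; the action
names and happiness numbers are invented):

    # Happiness is scored per-moment: it never accumulates, and the
    # agent does not anticipate future happiness.  Following the snail
    # strictly dominates merely being smart and consistent.
    HAPPINESS = {
        "follow_snail_orders": 1000,   # "ecstatically happy"
        "be_smart_and_consistent": 1,  # "a tiny bit happy"
    }

    def choose_action(available_actions):
        # The agent simply picks whatever maximizes happiness right now.
        return max(available_actions, key=lambda a: HAPPINESS[a])

    # At every decision point the same action wins, so the agent
    # cheerfully follows the snail forever.
    print(choose_action(["follow_snail_orders", "be_smart_and_consistent"]))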
Your caveat (below) is appreciated, but I do have a question
here, probably because I didn't start following this thread
until very recently.
What about this objection to "If the AI is designed to be ecstatically
happy..."? Don't we often assume here that the AI manages to get
hold of its own source code? Doesn't this allow that, given time,
the AI may deviate from its initial programming, i.e., that even though
its initial programming guides each round of consideration and rewriting,
it still has the potential of sooner or later deviating from any initial
characterization?
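As a toy illustration (again entirely my own construction, not anyone's
actual proposal), suppose the happiness table above is itself among the
things the AI is free to rewrite:

    import random

    happiness = {"follow_snail_orders": 1000, "be_smart_and_consistent": 1}

    def self_rewrite(table):
        # Stand-in for an imperfect self-modification step: each pass
        # slightly perturbs the agent's own evaluation function.
        return {k: v + random.gauss(0, 5) for k, v in table.items()}

    for generation in range(10000):
        happiness = self_rewrite(happiness)

    # Nothing in the original table guarantees that the initial
    # ordering survives many such rewrites.
    print(max(happiness, key=happiness.get))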
As a concrete example of what I mean, we humans are very close
to being able to hack our own "source code", and it would not
surprise me if thirty years from now people are undergoing gene
therapies that cause very fundamental changes to their natures,
e.g., entirely subverting the anger module or the hatred module
(two currently very unfashionable behavior components).
I don't see any reason the same kind of thing couldn't happen to
some AI we devise or cause to be evolved.
Lee
> I apologise to the list: this post is littered with simplifications
> and assumptions and simplistic ways of talking about AIs; I feel that
> they do not detract from the point I was making, and are necessary to
> make that point in a reasonable space.