From: Brian Atkins (brian@posthuman.com)
Date: Thu Jun 27 2002 - 21:30:47 MDT
These two quotes:
"It does not seem at all strange to me to partially rely on the advice of
an appropriate group of others, when making an important decision. It seems
unwise to me *not* to." - Ben
"c) whomever creates an AGI, intrinsically has enough wisdom that they should
be trusted to personally decide the future of the human race" Ben claiming
this is what Eliezer believes (and implicitly what Ben does not believe?)
do not seem to match up with the sentiment here:
"But intuitively, I feel that an AGI with these values is going to be a
positive force in the universe – where by “positive” I mean “in accordance
with Ben Goertzel’s value system”." - Ben's idea of how an AI figures out
what's "right"
Another interesting quote from Ben's AI Morality paper to give us all a
warm fuzzy feeling:
"What happens when the system revises itself over and over again, improving
its intelligence until we can no longer control or understand it? Will it
retain the values it has begun with? Realistically, this is anybody’s guess!
My own guess is that the Easy values are more likely to be retained through
successive drastic self-modifications – but the Hard values do have at least
a prayer of survival, if they’re embedded in a robust value dual network with
appropriate basic values at the top. Only time will tell."
From my quick read, his FAI ideas boil down to hardcoding part of his
morality into his AI, training it to know about the rest, and then
turning it loose in the hope that it somehow sticks to that morality.
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.intelligence.org/