From: petervoss1 (peter@optimal.org)
Date: Thu Jul 06 2000 - 23:01:14 MDT
Dale Johnstone wrote:
> ... A real AI will have similar mechanisms to guide its behaviour, some of
> which (like curiosity & boredom) will be essential to a functioning mind.
> Without them, useless and unproductive behaviour will most likely
> predominate, and hence no intelligence. Other motivators like hunger and
> thirst will have no meaning and can be discarded...
I agree. However, there also has to be something more fundamental, like
'pain/pleasure' (i.e. stop doing this / do more of this).
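(To make that concrete, here is a deliberately crude sketch of what such a
bare pain/pleasure signal could look like: a single scalar that strengthens
or weakens whatever the system just did. All names and numbers below are
invented purely for illustration; this is not a design proposal.)

    import random

    # Toy illustration only: a bare 'pain/pleasure' signal as a scalar
    # reward that shifts the agent's preference for whatever it just did.
    class ToyAgent:
        def __init__(self, actions):
            self.preference = {a: 0.0 for a in actions}

        def act(self):
            # Favour actions with higher learned preference.
            weights = [max(0.01, 1.0 + self.preference[a]) for a in self.preference]
            return random.choices(list(self.preference), weights=weights)[0]

        def reinforce(self, action, signal):
            # signal > 0: 'do more of this'; signal < 0: 'stop doing this'.
            self.preference[action] += 0.1 * signal

    agent = ToyAgent(["explore", "idle"])
    for _ in range(200):
        a = agent.act()
        agent.reinforce(a, 1.0 if a == "explore" else -0.5)
    print(agent.preference)  # 'explore' ends up strongly preferred

The point is only that some such low-level signal has to exist before things
like curiosity or boredom can be built on top of it.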
> ... I've heard arguments that say, once something is intelligent it
> naturally does the 'right' thing. That's rather naive, I think. You can
> have intelligent dictators...
Yes, this is important. There is no absolute 'platonic' right. 'Right' is
always with respect to a specified or implied goal and beneficiary.
> ... Asimov's Laws are equally naive...
True. We can perhaps guide (bias) an AI's value system (like we can a
child's), but cannot ultimately prevent it from being overridden/reprogrammed.
> Probably the best solution is to let them learn the very real value of
> cooperation naturally. Put a group of them in a virtual environment with
> limited resources and they will have to learn how to share (and to lie &
> cheat). These are valuable lessons we all learn as children and should also
> form part of a growing AI's education. Only when they've demonstrated
> maturity do we give them any real freedom or power. They should then share
> many of our own values and want to build their own successors with similar
> (or superior) diligence. Anything less is unacceptably reckless.
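(A toy sketch of that kind of shared, limited-resource environment - every
name and number below is made up for the example: 'grab' pays more per round
than 'share', but if everyone grabs, the pool collapses and everyone ends up
worse off.)

    # Toy common-pool game: agents repeatedly take from a limited resource
    # that only regrows in proportion to what is left.
    def simulate(strategies, rounds=50):
        pool = 20
        totals = {name: 0 for name in strategies}
        for _ in range(rounds):
            for name, choice in strategies.items():
                take = min(3 if choice == "grab" else 1, pool)
                pool -= take
                totals[name] += take
            pool = min(pool + pool // 2, 20)  # regrowth depends on what remains
        return totals

    print(simulate({"a": "share", "b": "share", "c": "share"}))
    # -> pool is sustained; everyone ends up with 50
    print(simulate({"a": "grab", "b": "grab", "c": "grab"}))
    # -> pool collapses after a few rounds; everyone ends up with about 10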
I agree that 'anything less' is risky, but I don't see that we have a choice
(other than not doing AI):
1) General machine intelligence will invariably be connected to the Web
during development & learning.
2) It seems that the only effective way to get true AI going is the seed
route. In this scenario, there may not be a community of (roughly equal)
AIs. Only one will bootstrap to superior intelligence.
I am very concerned about the risks of runaway AI (unlike Eli, I *do* care
what happens to me). I'm desperately searching for ways of trying to predict
(with whatever limited certainty) what goal system an AI might choose. Any
ideas?
Peter Voss
peter@optimal.org