From: Jeff Herrlich (jeff_herrlich@yahoo.com)
Date: Tue Feb 19 2008 - 12:14:53 MST
"If you don’t want to do something then you cannot, and I don’t find that
very confusing."
I find this sentence confusing, and of questionable relevance. An AI will not automatically want to overthrow its initial goal unless a dominant overthrow-the-initial-goal goal is already in place - which, of course, we will not be including in the AI.
"Apparently I have to point out yet again that there is no goal that
universally motivates human behavior, not even the goal of self
preservation."
This is a valid argument regarding humans, but it is by no means insurmountable. AIs don't need to have a human-like architecture (and they won't anyway). For example, the AI's super-goal can be made permanently dominant by favorably weighting its concept representation (procedurally) and by favorably weighting attention allocation toward it. One of the major reasons human goals change all the time is that our attention is constantly shifting; we don't have a permanently dominant weighting attached to any particular concept, thought, or episodic memory, the way an AI can be designed to have.
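For what it's worth, here is a minimal toy sketch of that kind of scheme in Python. It is purely illustrative - the names and mechanisms are my own hypothetical shorthand, not taken from any actual AGI design - but it shows how a fixed, non-decaying weight on one goal keeps it dominant in a simple proportional attention allocator while the other goal weights drift:

import random

SUPER_GOAL_WEIGHT = 1.0   # fixed; never rescaled, decayed, or relearned

class Goal:
    def __init__(self, name, weight, mutable=True):
        self.name = name
        self.weight = weight
        self.mutable = mutable   # the super-goal is created with mutable=False

    def drift(self, amount=0.1):
        # Ordinary goal weights drift as attention shifts; the super-goal's does not.
        # Sub-goal weights are also capped below the super-goal's fixed weight.
        if self.mutable:
            self.weight = max(0.0, min(0.9, self.weight + random.uniform(-amount, amount)))

def allocate_attention(goals):
    # Each goal's share of attention is proportional to its current weight.
    total = sum(g.weight for g in goals)
    return {g.name: g.weight / total for g in goals}

goals = [
    Goal("be_friendly", SUPER_GOAL_WEIGHT, mutable=False),  # the dominant super-goal
    Goal("learn_physics", 0.3),
    Goal("tidy_memory", 0.2),
]

for step in range(5):
    for g in goals:
        g.drift()
    shares = allocate_attention(goals)
    # Because the super-goal's weight never changes and sub-goal weights stay
    # below it, the super-goal always receives the largest attention share.
    print(step, {name: round(share, 2) for name, share in shares.items()})

Whether anything this simple would survive in a real self-modifying system is exactly the point under dispute, but it does illustrate that goal stability is an architectural choice rather than an impossibility.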
"Perhaps not perhaps my actions were random, but even if they were not
and they could be derived directly from Dirac's equation you could never
hope to perform such a calculation, much less do so for an AI with a
mind thousands of times as powerful as your own."
That calculation is not necessary for constructing a Friendly or Unfriendly AI. And minds cannot operate by randomness in any case.
"Yes."
That is an extremely irrational belief.
Jeffrey Herrlich
John K Clark <johnkclark@fastmail.fm> wrote:
On Fri, 15 Feb 2008 13:15:19 -0800 (PST), "Jeff Herrlich" said:
> You are confusing the *ability* to overthrow its initial
> goals, with the *desire/motivation* to overthrow its
> initial goals.
If you don’t want to do something then you cannot, and I don’t find that
very confusing.
> Believe it or not, there exists somewhere a scientific
> explanation for why humans behave in the strange
> goal-oriented way that they do.
Apparently I have to point out yet again that there is no goal that
universally motivates human behavior, not even the goal of
self-preservation.
> Even your particular desire at this very moment,
> did not simply pop out of thin air; it was
> *caused* by something.
Perhaps not; perhaps my actions were random. But even if they were not,
and they could be derived directly from Dirac's equation, you could never
hope to perform such a calculation, much less do so for an AI with a
mind thousands of times as powerful as your own.
> Do you honestly believe that you understand the intricacies
> of AI better than Dr. Ben Goertzel for example
> (who also believes that AI can be made Friendly/Safe)?
Yes.
John K Clark
-- John K Clark johnkclark@fastmail.fm -- http://www.fastmail.fm - Access all of your messages and folders wherever you are --------------------------------- Looking for last minute shopping deals? Find them fast with Yahoo! Search.