Re: Investing in FAI research: now vs. later.

From: Jeff Herrlich
Date: Fri Feb 15 2008 - 14:15:19 MST

I really don't want to get emotionally involved with this again, so I'll try to keep this dry. You are confusing the *ability* to overthrow its initial goals with the *desire/motivation* to overthrow them. Believe it or not, there exists somewhere a scientific explanation for why humans behave in the strange goal-oriented way that they do, and there exists somewhere within science a description of why an AGI would behave in the goal-oriented way that it will. Even your particular desire at this very moment did not simply pop out of thin air; it was *caused* by something. Behavior is not based on magic; there is a rational explanation for it grounded in physical reality. I don't have a complete grasp on why you can't stop implying that all Friendly-AI advocates are blood-thirsty idiots. A much more likely explanation is that *you* simply don't know what you are talking about. Do you honestly believe that you understand the intricacies of AI better than, for example, Dr. Ben Goertzel (who, among many others, also believes that AI can be made Friendly/Safe)?

  Jeffrey Herrlich

John K Clark wrote:
On Tue, 12 Feb 2008, "Vladimir Nesov" wrote:

> Technology is much easier to check than to produce

And that's why all computer programs are absolutely bug free.

> first-priority thing for genie to do is
> to produce a safe genie

The only safe genie is no genie at all. It's obvious; it just astounds
me that otherwise intelligent people think this is a point worthy of
debate. They actually think they can enslave a mind astronomically more
powerful than their own for eternity. I fully admit I just don't get it,
I don't understand what the hell they are thinking.

John K Clark


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT