RE: Self-modifying FAI (was: How hard a Singularity?)

From: Ben Goertzel (ben@goertzel.org)
Date: Wed Jun 26 2002 - 10:50:10 MDT


> "The
> intention
> that was in the mind of the programmer when writing this line of
> code" is a
> real, external referent; a human can understand it, and an AI that models
> causal systems and other agents should be able to understand it as well.
> Not just the image of Friendliness itself, but the entire philosophical
> model underlying the goal system, can be defined in terms of things that
> exist outside the AI and are subject to discovery.

That is whacky!!

Inferring an intention in another human's mind is HARD... except in very
simple cases, which are the exception, not the rule...

It's hard for a human, even one with decent empathic ability.

It may be WAY WAY HARDER for a nonhuman intelligence, even one somewhat
smarter than humans.
(Just as, for example, a dog may be better at psyching out "the intention in
another dog's head" than I am, because it's a dog...)

I am very curious to see your design for your AGI's "telepathy" module ;)

What, Eliezer, was the intention in my mind as I wrote this e-mail? I
don't even know, fully! There are many of them, overlapping; sorting them
out would take me ten minutes and would be a highly error-prone process...

-- Ben G


