From: Ben Goertzel (ben@goertzel.org)
Date: Sun Feb 20 2005 - 18:08:12 MST
> FAI is a higher bar than AGI; it requires the
> same AI knowledge and raw intelligence, but it also rules out all
> the easy ways out (i.e. probabilistic self-modification, brute force
> GAs) and requires the right attitude to existential risk. Actually
> developing FAI theory from scratch is even worse; I'm not aware of
> anyone other than Eliezer who has made significant progress with it.
...
> Ditto. I can also think of a few folks already working on AGI who
> would probably be good contributors to an FAI project, IF they chose
> to change their approach to the problem.
Ah, I see ... so in your list of qualifications you're including "shares the
SIAI belief system" ;-)
> > CV is an interesting philosophical theory that so far as I can
> > tell has very little practical value.
>
> It's true that there aren't any implementation details provided,
> but wouldn't you agree that it is a clear statement of intent?
Not really. I don't think the concept of "collective volition" has been
clearly defined at all.
In his essay on the topic, Eliezer wrote:
"
In poetic terms, our collective volition is our wish if we knew more,
thought faster, were more the people we wished we were, had grown up farther
together; where the extrapolation converges rather than diverges, where our
wishes cohere rather than interfere; extrapolated as we wish that
extrapolated, interpreted as we wish that interpreted.
"
I haven't read any convincing argument that this convergence and coherence
can be made to exist...
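To make the worry concrete, here's a toy formalization of my own (nothing
like this appears in the essay): treat each person's volition as a single
number and "extrapolation" as an update rule. Whether the population
converges or flies apart depends entirely on the rule you pick, and that
choice is exactly what hasn't been specified or argued for:

# Toy model only -- my own framing, not anything from Eliezer's essay.
# "Volitions" are just numbers; "extrapolation" is an update rule.

def extrapolate_toward_consensus(volitions, pull=0.5, steps=100):
    """Each step pulls every volition toward the group mean -> convergence."""
    v = list(volitions)
    for _ in range(steps):
        mean = sum(v) / len(v)
        v = [x + pull * (mean - x) for x in v]
    return v

def extrapolate_away_from_consensus(volitions, push=0.1, steps=100):
    """Each step pushes every volition away from the group mean -> divergence."""
    v = list(volitions)
    for _ in range(steps):
        mean = sum(v) / len(v)
        v = [x + push * (x - mean) for x in v]
    return v

print(extrapolate_toward_consensus([0.0, 1.0, 10.0]))     # collapses to ~3.67
print(extrapolate_away_from_consensus([0.0, 1.0, 10.0]))  # blows up

The toy is silly, of course, but it shows the point: "where the
extrapolation converges rather than diverges" is a property of the
extrapolation dynamics, and no argument has been given that real human
volitions sit in the convergent regime.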
Talking in detail about collective volition reminds me of a case I knew of
in the '80s, where a bunch of topologists spent a couple of years proving
theorems about a certain obscure type of topological space, which was later
shown not to exist at all: all their papers had in fact been about the empty
set. Oops!
> If you
> converted your 'joyous growth' philosophy into a provably stable
> goal system I'd personally consider it a valid FAI theory,
It's the "provable" part that's tough! One can develop it into an
"intuitively seems like it should be stable" goal system.
FYI, it's "choice, growth and joy" not just "joyous growth", though. The
choice part is important where issues like the survival of humans are
concerned...
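Just to sketch the gap I mean (my own toy framing, not an SIAI definition):
checking that a proposed self-rewrite preserves "choice, growth and joy" on
the cases you can actually test is easy, and that gets you "intuitively
seems stable". "Provably stable" would need the same guarantee over every
state and every reachable successor, and nobody knows how to discharge that:

# Toy sketch, my own framing: the testable standard vs. the provable one.

def preserves_goal_empirically(old_goal, new_goal, sample_states):
    """Evidence of stability: the rewritten goal agrees with the old one
    on a finite sample of states. This is the 'intuitively stable' bar."""
    return all(old_goal(s) == new_goal(s) for s in sample_states)

# e.g. with a stand-in "choice, growth and joy" scorer:
goal = lambda s: s["choice"] + s["growth"] + s["joy"]
states = [{"choice": 1, "growth": 0, "joy": 2},
          {"choice": 0, "growth": 3, "joy": 1}]
print(preserves_goal_empirically(goal, goal, states))  # True -- but it says
                                                       # nothing about states
                                                       # never sampled

# The "provable" bar quantifies over *all* states and *all* self-modifications
# the system could ever reach; sampling can't get you there, only a proof
# about the rewrite mechanism itself can.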
> though
> I'd still prefer CV because I don't trust any one human to come
> up with universal moral principles.
But do you
a) trust a human (Eli) to come up with an algorithm for generating moral
principles (CV)
b) trust the SIAI inner circle to perform a "last judgment" on whether the
results of CV make sense or not before letting an FAI impose them on the
universe...
???
-- Ben