From: Ben Goertzel (ben@goertzel.org)
Date: Wed Dec 31 2003 - 16:09:41 MST
> > In fact, I believe my AI design is a lot more fully worked out
> > than Eliezer's is. So far as I know (my last data point being
> > when I had a long chat with him over dinner a couple months ago),
> > Eliezer really hasn't attempted to formulate a design for a
> > general AI in detail, although he has done a bunch of interesting
> > cog. sci. work in this direction.
>
> http://www.intelligence.org/LOGI/ doesn't count?
That is not a detailed design; it's a conceptual sketch.
A detailed software design is something you can give to a competent team of
programmers and have them implement it.
A detailed "AI design" is something you can give to a team of computer
scientists and software architects and have them create a detailed software
design.
Novamente has a detailed AI design (not given on the Web, contained only in
privately held documents). We are creating a detailed software design from
it incrementally, as we proceed in building and working with the system.
LOGI does not constitute a detailed AI design; rather, it constitutes what
I'd call a "conceptual AI design" --- a set of ideas about how to go about
creating a detailed AI design.
> > > I have no attachment to Eliezer himself, but I honestly don't
> > > know anyone else that's doing what he's doing.
> >
> > Eliezer is one of a few individuals who are devoting a lot of
> > their time to exploring the implications of the Singularity and
> > the issue of AI friendliness. He's not the only one -- Bill
> > Hibbard is another, for example.
> >
> > As for me, I'm spending only a little of my time on such issues,
> > and more of my time on nitty-gritty stuff related to general AI
> > design, as well as on related issues such as using AI to
> > understand the human organism and how to repair its flaws.
>
> See, that worries me a bit, because if I read you correctly you just
> said you're working on an AGI but not really on Friendliness. That is
> really, really frightening to me.
My belief is that it's not possible to understand Friendliness very well
through theory. I plan to understand Friendliness better through
experimenting with AGIs whose general intelligence is roughly in the
dog-to-chimp range. But I'm not even there yet.
> I'd actually very much like to see a joint paper between you and
> Eliezer, or even a good e-mail thread, describing the differences
> between your approaches and why you aren't working together.
Those email threads occurred on SL4 in 2001 and 2002. I guess you can look
them up ;-)
In essence, we have
-- differences of technical intuition regarding how to go about building an
AGI
-- differing opinions on how possible it is to meaningfully understand AI
friendliness on a theoretical basis, prior to having moderately powerful
AGIs to experiment with
-- Ben G