RE: The problem of cognitive closure

From: Ben Goertzel (ben@webmind.com)
Date: Fri Mar 16 2001 - 04:47:29 MST


> The philosophical thing that I see Ben and Eliezer
> (the two people on this list who are visibly trying
> to create AI) both doing is a sort of phenomenology
> of cognition - they think about thinking, on the basis
> of personal experience. From this they derive their
> ideas of how to implement thought in code. The question
> that bothers me is, is this enough?

The sources of ideas underlying the Webmind AI design are

1) introspection, as you describe
2) knowledge about the human brain
3) knowledge gained from practical computer science experience & algorithm theory

All three types of knowledge must be integrated to have a prayer of
creating real AI at this stage, because none of them is quite informative
enough on its own.

I'm sure Eliezer realizes this too.

> Pragmatic consequence: Seed AIs need to be
> philosophically sophisticated *by design*.
>

I don't think that philosophical sophistication can be
embedded in an AI system by design. I think that the capability for
philosophical sophistication is there in any highly intelligent
system, and that the practical manifestation of this capability in
an AI system has to be encouraged by the system's ~education~.

Ben
