From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Fri Jun 09 2006 - 12:41:29 MDT
On Fri, Jun 09, 2006 at 02:22:15PM -0400, Martin Striz wrote:
> On 6/7/06, Robin Lee Powell <rlpowell@digitalkingdom.org> wrote:
>
> >> BTW, they will also probably be the most complex thing that
> >> they have to deal with and learn about. It is logically
> >> impossible to contain a complete internal model of yourself.
> >
> >Why do people keep saying that?
> >
> >http://en.wikipedia.org/wiki/Quine
>
> That's an interesting gimmick, but a quine has no internal model
> of itself. My point was that, just as we often can't predict our
> own future actions because we are oblivious to the substrate-level
> activity of our minds, an AI won't be able to simultaneously model
> its own substrate-level activity, so there will be some loss of
> information, and some error.
The fact that *humans* can't do it proves nothing. We have very
small, fragile, and imperfect memories, for one thing.
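To be concrete about what that link shows: a quine is a program whose
output is exactly its own source code. A minimal sketch in Python (one
common construction among many):

    s = 's = %r\nprint(s %% s)'
    print(s % s)

Run it and it prints exactly those two lines. A finite program can
carry a complete description of its own source, so "logically
impossible" is too strong; the open question is whether a mind can
model its own *runtime* activity, not its static description.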
-Robin
--
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/
Reason #237 To Learn Lojban: "Homonyms: Their Grate!"
Proud Supporter of the Singularity Institute - http://intelligence.org/