Re: Mathematical Model of GLUTs and Lookups

From: Stuart Armstrong (dragondreaming@googlemail.com)
Date: Sat Apr 19 2008 - 14:07:05 MDT


On 18/04/2008, Lee Corbin <lcorbin@rawbw.com> wrote:
> I had written
>
>
> > > I don't think that the domain of the [your] function is [should be] S,
> > > however. The domain of the function G is the GLUT itself; G takes
> > > entries of the GLUT to other entries.
> >
>
> and Stuart writes
>
>
> > Still some confusions (I see G as a function on S - it takes one state
> > of the system, and gives you another (subsequent) state, so is a
> > function)! But the main ideas are there.
> >
>
> Perhaps we resolve this by noting that the first GLUT (involving no
> hashing) has entries that *are* polynomials, or which *are* states
> of a computer or brain, or which *are* generations of the Life Board?
> Maybe we are saying the same thing? It's probably a minor point,
> but I like math, and precision feels good :-)

A function f on S is something which, given an element s of S,
gives another element f(s) of S. A GLUT is something which, given
an s of S, returns the "next state", which is also an element of S.
Hence GLUTs are functions.
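To make the point concrete, here is a toy sketch in Python - the
states and names are my own invention, not anything from the earlier
posts - treating the GLUT as a literal table from S to S; applying it
is then just function application:

    # Toy state space S: the states of a tiny four-state counter.
    S = [0, 1, 2, 3]

    # The GLUT: an explicit lookup table taking each state of S to
    # the "next state", which is again an element of S.
    GLUT = {0: 1, 1: 2, 2: 3, 3: 0}

    def G(s):
        """The function G on S defined by the GLUT: s -> next state."""
        return GLUT[s]

    print(G(2))      # 3
    print(G(G(2)))   # 0 -- iterating the table steps the system forward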

> > This is especially true if you consider "partial truths": a partial truth
> > on a GLUT is just some sort of sub-GLUT, a partial truth of a
> > theory of consciousness can be considerably simpler (even relative
> > to the theory of consciousness).
> >
>
> Very interesting. Sort of parallel to partial function. Might examples be
>
> (A) suppose that the huge GLUTh or GLUTa passes not only
> a Turing Test, but convinces several people, each given
> three hours, that it is conscious as well as intelligent. (B) a subset
> of (A) consisting of the first hour for each interrogator
> (with the GLUT becoming suddenly silent afterwards)
> (C) a subset of (A) but with gaps, so that (I believe) the GLUT
> just ignores many questions, though somehow seems aware
> in many cases that it had failed to answer, or aware that in its
> opinion the interrogator had been off-line
>
> If these don't work, then I'm not following your last paragraph.

My last paragraph is the idea that started the whole thing. Let us
imagine that a GLUTh can recognise friends' faces on a computer
screen, and write down their names. This GLUTh could also pass a full
Turing test (and thus qualifies as conscious).

Here I have briefly described a condition (recognising people's
faces) which is an easily described sub-category of being a conscious
human. However, if we were to define the sub-GLUTh needed to perform
that task, that definition would not be much simpler than the GLUTh
itself. Hence translating the task "recognising friends' faces" into
GLUT terms is punishing, even if we know that "recognising friends'
faces" is a subset of what we generally accept to be human
consciousness.

So some sub-properties of consciousness are easy to describe, even if
their hash equivalents are nearly as hard as consciousness itself.
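A rough sketch of what I mean, again with invented states: a sub-GLUT
is just the full table restricted to the states of the sub-task, but
in GLUT terms the sub-task is only specified by enumerating those
states one by one, so the restriction is barely simpler than the
whole table, even though the English description of the task is short:

    # Pretend full GLUT over a handful of abstract states (stand-ins
    # for brain or computer states).
    FULL_GLUT = {f"s{i}": f"s{(i + 1) % 8}" for i in range(8)}

    # Which states occur while "recognising friends' faces" -- easy
    # to say in English, but in GLUT terms it is just an enumeration,
    # nearly as long as the table itself.
    FACE_TASK_STATES = {"s1", "s2", "s3", "s5", "s6"}

    # The sub-GLUT: the full table restricted to those states.
    SUB_GLUT = {s: t for s, t in FULL_GLUT.items()
                if s in FACE_TASK_STATES}

    print(len(FULL_GLUT), len(SUB_GLUT))  # the sub-table is not much smaller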

> Julian Barbour wrote, as you probably know, a book with that
> title, and there are other theories which attempt to reduce time
> to something else (e.g. configuration space or mathematical
> structures in Platonia). I still want to avoid having to believe in those,
> for several reasons, most notably because it *could*
> (in my eyes, at least) reduce moral decisions to gibberish.

Moral decisions have their portion of gibberish - the basic axioms of
ethics are arbitrary. As we explore deeper concepts, we have to keep
in mind that if we throw away things that don't make perfect sense, we
end up throwing away everything.

Stuart


