Sentience

From: Ben Goertzel (ben@goertzel.org)
Date: Wed Jun 02 2004 - 08:10:56 MDT


Eliezer:

You say that you don't have any idea what I mean by "sentience."

If you check dictionary.com or any other dictionary you will get the
basic idea. I'm not being flippant. As any dictionary indicates, the
concept wraps up two different ideas, which I feel belong together but
which you may feel do not.

One core meaning of sentience is:

1) "Being an intelligent system that contains components devoted to
modeling and understanding itself and its relationship to its
environment, and determining its own overall actions."

Another core meaning of sentience is:

2) "Possessing conscious awareness"

Note that 2 may be made empirical by changing it into:

2') "Reports experiencing conscious awareness"

I separate 2 and 2' in order to dissociate the issue of the coherence of
the notion of sentience from the issue of the "reality" of consciousness.
If you are not comfortable with the notion of consciousness, you should
still be comfortable with 2', the notion of reported consciousness.

According to my own philosophy of mind, I think any system that is
"sentient" in sense 1 is almost surely going to be sentient in senses 2
and 2'.

On the other hand, if you believe that there are probably going to be
systems that are intelligent, self-modeling, self-understanding and
self-determining yet do NOT report having the experience of
consciousness, then naturally you're going to find the concept of
sentience to be ill-defined, because you don't consider the two meanings
1 and 2/2' to naturally cohere.

I think that, although you may disagree with it, you should have no
problem understanding the hypothesis that

1 ==> 2

in the above definitions. Of course, this is a conceptual claim rather
than a fully scientifically or mathematically specified statement.
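
To make the hypothesis slightly more explicit, it could be written as
something like

   For any system X:   Sentient_1(X)   ==>   Sentient_2(X)

(and likewise with 2' in place of 2), where Sentient_1(X) means "X is
sentient in sense 1" and Sentient_2(X) means "X is sentient in sense 2".
The predicate names are just illustrative labels here, of course, not
anything formally defined.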

-- Ben

> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Eliezer Yudkowsky
> Sent: Wednesday, June 02, 2004 9:26 AM
> To: sl4@sl4.org
> Subject: Re: FAI: Collective Volition
>
>
> Ben Goertzel wrote:
>
> > Eliezer,
> >
> > About this idea of creating a non-sentient optimization process, to
> >
> > A) predict possible futures for the universe
> > B) analyze the global human psyche and figure out the "collective
> > volition" of humanity
> >
> > instead of creating a superhuman mind....
> >
> > I can't say it's impossible that this would work. It goes against my
> > scientific intuition, which says that sentience of some sort would
> > almost surely be needed to achieve these things, but my scientific
> > intuition could be wrong. Also, my notion of "sentience of some sort"
> > may grow and become more flexible as more AGI and AGI-ish systems
> > become available for interaction!
>
> I have not the vaguest idea of what you mean by "sentience". I am still
> proposing reflectivity - superhuman reflectivity, in fact. Why are you
> so horrified at my proposing to omit something if you do not know what
> you mean by the term, let alone what I mean by it?
>
> > However, it does seem to me that either problem A or B above is
> > significantly more difficult than creating a self-modifying AGI
> > system. Again, I could be wrong on this, but ... Sheesh.
>
> Yes, saving the world is significantly more difficult than blowing it
> up. I would rise to the challenge, raise the level of my game
> sufficiently to change the default destiny of a seed AI researcher,
> rather than walking into the whirling razor blades of which I was once
> ignorant. I understand if you decide that the challenge is beyond you,
> for yes, it is difficult. But it is harder to understand why you are
> still on the playing field, endangering yourself and others.
>
> > To create a self-modifying AGI system, at very worst one has to
> > understand the way the human brain works, and then emulate something
> > like it in a more mutably self-modifiable medium such as computer
> > software. This is NOT the approach I'm taking with Novamente; I'm
> > just pointing it out to place a bound on the difficulty of creating a
> > self-modifying AGI system. The biggest "in principle" obstacle here
> > is that it could conceivably require insanely much computational
> > power -- or quantum computing, quantum gravity computing, etc. -- to
> > get AGI to work at the human level (for example, if the microtubule
> > hypothesis is right). Even so, then we just have the engineering
> > problem of creating a more mutable substrate than human brain tissue,
> > and reimplementing human brain algorithms within it.
> >
> > On the other hand, the task of creating a non-sentient optimization
> > process of the sort you describe is a lot more nebulous (due to the
> > lack of even partially relevant examples to work from). Yeah, in
> > principle it's easy to create optimization processes of arbitrary
> > power -- so long as one isn't concerned about memory or processor
> > usage. But contemporary science tells us basically NOTHING about how
> > to make uber-optimization-processes like the one you're envisioning.
> > The ONLY guidance it gives us in this direction pertains to "how to
> > build a sentience that can act as a very powerful optimization
> > process."
>
> Again, I've got no clue what you mean by 'non-sentient'. I was still
> planning on using recursive self-improvement, self-modification, total
> self-access or "autopotence" in Nick Bostrom's phrase, full
> reflectivity, et cetera.
>
> > So, it seems to me that you're banking on creating a whole new branch
> > of science basically from nothing,
>
> Not from nothing. Plenty of precedents, even if they are not widely known.
>
> > whereas to create AGI one MAY not need
> > to do that, one may only need to "fix" the existing sciences of
> > "cognitive science" and AI.
>
> Does that mean that you'll create something without understanding how
> it works? Whirling razor blades, here we come.
>
> > It seems to me that, even if what you're suggesting is possible
> > (which I really doubt), you're almost certain to be beaten in the
> > race by someone working to build a sentient AGI.
>
> By "sentient AGI" you mean recursive self-improvement thrown
> together at
> random, which of course will be sentient whatever that means, because
> humans are sentient therefore so must be an AGI? Or is
> sentience just this
> holy and mysterious power that you don't know how it works,
> but you think
> it is important, so I'm committing blasphemy by suggesting that I not
> include it, whatever it is?
>
> Seriously, I don't see how anyone can make this huge fuss over
> sentience in AGI when you don't know how it works and you can't give me
> a walkthrough of how it produces useful outputs. I have a few small
> ideas about half-understood architectural quirks that give humans the
> belief they are conscious, architectural quirks to which I applied the
> term "sentience". Evidently this was a huge mistake. I hereby announce
> my intent to build non-floogly AI.
>
> > Therefore, to succeed with this new plan, you'll probably need to
> > create some kind of fascist state in which working on AGI is illegal
> > and punishable by death, imprisonment or lobotomy.
>
> Maybe that's what you'd do in my shoes. My brilliant new notion is to
> understand what I am doing, rather than randomly guessing, and see if
> that lets me finish my work before the meddling dabblers blow up the
> world by accident. Though in your case it does begin to border on being
> on purpose.
>
>
> "Sentient", "non-sentient", this is phlogiston, Greek
> philosophy. You
> cannot argue about something you do not understand, mystical
> substances and
> mystical properties of mystical systems. I intend to unravel
> the sacred
> mystery of flooginess, which I have already come far enough
> to declare as a
> non-mysterious target. Then, having unravelled it, I will
> either know how
> to build a non-floogly optimization process, or I will know
> in detail why
> floogling is necessary. I am only announcing my moral
> preference in favor
> of non-floogly optimization processes. I won't definitely say it's
> possible or impossible until I know in more detail where the present
> confusion comes from, and I certainly don't intend to make up
> campfire
> stories about the mystical powers of floogly systems until I can do a
> walkthrough of how they work. Do a walkthrough, not tell
> stories about
> them. As far as I can guess, the whole thing will turn out to be a
> confusion, probably a very interesting confusion, but a
> confusion nonetheless.
>
> --
> Eliezer S. Yudkowsky http://intelligence.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
>


