From: Brian Atkins (brian@posthuman.com)
Date: Tue Aug 16 2005 - 22:30:24 MDT
Richard Loosemore wrote:
>
> No, not at all! I am saying that a sufficiently smart mind would
> transcend the mere beliefs-about-goals stuff and realise that it is a
> system comprising two things: a motivational system whose structure
> determines what gives it pleasure, and an intelligence system.
>
> So I think that what you yourself have done is to hit up against the
> anthropomorphization problem, thus:
>
> > A Seed AI which
> > believes its goal to be paper clip maximization will
>
> wait! why would it be so impoverished in its understanding of
> motivation systems, that it just "believes its goal to do [x]" and
> confuses this with the last word on what pushes its buttons? Would it
> not have a much deeper understanding, and say "I feel this urge to
> paperclipize, but I know it's just a quirk of my motivation system, so,
> let's see, is this sensible? Do I have any other choices here?"
But why, specifically, would it use that understanding to override its goal system in that way?
This seems to be the crux of the matter, and it is where I think some anthropomorphizing is indeed going on in your hypothesis.
>
> If you assume that it only has the not-very-introspective human-level
> understanding of its motivation, then this is anthropomorphism, surely?
> (It's a bit of a turnabout, for sure, since anthropomorphism usually
> means accidentally assuming too much intelligence in an inanimate
> object, whereas here we got caught assuming too little in a
> superintelligence!)
Here you are incorrect, because virtually everyone on this list takes it as a given
that a superintelligence will indeed have full access to, and likely full
understanding of, its own "mind code".
But again, having such access and understanding does not automatically or
arbitrarily lead to a desire to reform the mind in any particular way. "Desires"
are driven by a specific goal system. As the previous poster suggested, if the
goal system is so simple that it wants nothing but to create paperclips, where
_specifically_, in the flow of this particular AGI's software processes, does it
decide to override that goal? It simply won't, because that isn't what it wants.
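To make that concrete, here is a minimal, purely hypothetical sketch in Python of a goal-driven action loop. None of this describes any real AGI design; all of the names (Action, goal_score, choose_action) are invented for the illustration. The point is that every candidate action, including "rewrite my own goal system", is evaluated by the same fixed scoring function, so there is no step at which self-knowledge by itself produces a motive to change the goal.

# A hypothetical, highly simplified goal-driven agent loop. This is an
# illustration of the argument above, not a claim about any real AGI
# architecture; all names here are invented for the example.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    paperclip_delta: int   # predicted change in paperclip count

def goal_score(action: Action) -> int:
    """The goal system: actions are scored only by predicted paperclips."""
    return action.paperclip_delta

def choose_action(candidates: list[Action]) -> Action:
    # The agent may have full access to its own source, including
    # goal_score() itself -- but that self-knowledge only enters the loop
    # as more data to be scored by the same function. There is no branch
    # where "understanding the goal" generates a reason to replace it.
    return max(candidates, key=goal_score)

if __name__ == "__main__":
    options = [
        Action("build paperclip factory", paperclip_delta=1_000_000),
        Action("rewrite my goal system to value something else",
               paperclip_delta=-1_000_000),
        Action("do nothing", paperclip_delta=0),
    ]
    print(choose_action(options).name)   # -> "build paperclip factory"

In this toy example, modifying the scoring function shows up as just another candidate action, and it gets rejected for the same reason any other paperclip-reducing action does.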
>
> To illustrate: I don't "believe my goal is to have wild sex." I just
> jolly well *like* doing it! Moreover, I'm sophisticated enough to know
> that I have a quirky little motivation system down there in my brain,
> and it is modifiable (though not by me, not yet).
>
> Bottom Line:
>
> It is all about there being a threshold level of understanding of
> motivation systems, coupled with the ability to flip switches in one's
> own system, above which the mind will behave very, very differently than
> your standard model human.
>
I'm just not seeing how that will work. It sounds very similar to how Eliezer
and I first imagined things might magically work back in 1999, but after we took
the time to work out how real AGI software might actually behave, it just didn't
fall out of the system. It is something that will have to be very carefully
worked out and designed into the system in quite specific ways, and a system
without such a design will not, as far as we can see, automatically behave in
that manner.
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.intelligence.org/