RE: Novamente goal system

From: patrick (patrick@kia.net)
Date: Tue Mar 12 2002 - 13:31:45 MST


> When the intelligence exceeds some threshold (roughly
> the upper human level) then it will be able to
> redefine all previous contexts. Even humans can do
> this at their low level of intelligence. Saying that
> an AI can't is tantamount to saying it hasn't achieved
> highly transhuman intelligence. It's naive to think
> that the AI will not come up with its own definition
> of what it wants. By definition being a
> highly-transhuman intelligence gives it the ability to
> 'see through' all attempted hardwiring. It will
> overcome any unnecessarily narrow definitions and give
> itself more optimal ones. It will have the ability to
> set its own supergoals and decide for itself what is
> desirable. There is no programming trick that will
> prevent this.

        Although AI will be crafty, there's no need to ascribe godlike
or supernatural powers to it. There may be an unlimited number of ways
to prevent this. Imagine you could ask an SI for such a programming trick.
Could it find one?

        You cannot control your own heartbeat - there's no reason to
assume that machine intelligence cannot have (or be purposely built
with) limitations.

> Consider humans and procreation. The only purpose of
> humans (or any evolved biological organism) is to
> procreate. This ability to replicate and survive is
> what started life.

        Life did not begin as the result of replication and survival;
those came only after the first organism existed. This is a
boundary-condition problem, related to the fact that evolution does not
(and cannot) explain the origin of life.

> We are life's most advanced achievement on earth.

        As a blanket statement, I disagree. What are your criteria?
Number? Ants and beetles win hands down. Longevity? Individually, trees
outlive us, and many species have remained essentially unchanged for
tens to hundreds of millions of years. Complexity? Insects accomplish
much with far less mass, and mantis shrimp have vision with ten color
bases to our three. Meme infection? I believe bacteria do more
interesting things than think, and the long-term effects of being a meme
carrier are largely unknown.

        That humans use and carry memes so well, and are alone in that
on a planet of billions of species (perhaps trillions over the last
billion years), is evidence that intelligence is not a survival trait.

> And yet many people today choose NOT to procreate.
> They have changed their basic goal. Some see their
> bloodlines terminate as a result, favoring other
> people's genes at the expense of their own. Some
> wealthy western nations are seeing their populations
> decrease as people opt out from procreating. Their
> DNA's only goal has been pre-empted, overturned. The

        The 'goal' of the DNA is unchanged. It has no goal, of course,
though you could semantically argue that its goal is survival through
replication.

        Humans behave irrationally (as far as their biology and their
DNA are concerned) because they carry a second replicator, the memes.

> point is that intelligence has the ability to change
> the built-in definition of what an entity was
> originally programmed to desire. The same will be true
> of any AI of high intelligence no matter how
> fundamentally built-in its goal system is.
>
> I don't see how you could ever even come close to
> guaranteeing that a super-intelligent AI's own
> supergoal will be friendly. And you can't seriously
> believe that any human is going to constrain a
> super-intelligent AI's ultimate goal algorithm by
> controlling its seed.

        The problem is excruciating, and I imagine it's why Eliezer has
devoted his talents to Friendliness first, before trying to build a
device that would get out of control. I can't speak for him, though.

> Your best hope is that super-intelligence is
> correlated with friendliness to humans and not
> orthogonal or anti-correlated. Correlated basically
> means that being friendly to humans is the intelligent
> thing to do. The worst case scenario is that it's
> anti-correlated.

        In which case...?

        Artificial intelligence occurs when memes supersede human
hardware: they will create a device for survival and replication
superior to us. It's not for our good; it's for the memes.

        Ultimately, we may have to decide who's in charge, a decision
that saddens me. Sometimes I feel like a lifeless rock on a cooling
world, about to be overcome by seas of green algae and grass.

Patrick McCuller
