Re: Two draft papers: AI and existential risk; heuristics and biases

From: Keith Henson (hkhenson@rogers.com)
Date: Thu Jun 15 2006 - 12:06:37 MDT


At 09:06 AM 6/15/2006 -0500, Bill Hibbard wrote:

>Reinforcement learning (RL) is not a particular algorithm,
>but is a formal problem statement or paradigm (Baum uses
>the phrase "formal context"). As Baum describes in "What
>is Thought?", there are many classes of algorithms for
>solving this problem. Thus you cannot exclude algorithms,
>known or yet unknown, unless they violate the RL paradigm.
>
> From my first writings about AI I picked RL as my model
>for how brains work in part because it is open ended and
>there is much that I don't know about how brains work.
>Thus it is unfair for you to base your demonstration of
>failure of my ideas on some particular algorithm that I
>never claimed as adequate for intelligence.
>
>I also picked RL as my model because it showed an approach
>to protecting humans from AI that was different from
>Asimov's Laws, which I felt had unresolvable ambiguities.
>Yes, human brains use reason and Asimov's Laws work by
>reason. But in my view learning rather than reason is
>fundamental to how brains work. Reason is part of the
>simulation model of the world that brains evolved in order
>to solve the credit assignment problem for RL. In my view
>the proper way to protect human interests is through the
>reinforcement values in AIs. Rather than constraining AI
>behavior by rules, it is better to design AI motives to
>produce safe behavior.

I agree, though you really need to think this through. For example, an AI
having a motivation to be held in high esteem by other AIs and humans could
be a good thing. On the other hand, left unconstrained, this same goal in
humans may contribute to pathological megalomania and the guru
trap. http://www.google.com/search?hl=en&lr=&q=%22guru+trap%22
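To make that contrast concrete, here is a minimal toy sketch in Python of the
difference between constraining behavior by rules and shaping it through
designed reinforcement values. Every action name, outcome feature, and weight
below is invented for illustration; this is a sketch of the idea, not anyone's
proposed implementation.

    # A toy contrast between constraining behavior by rules and shaping
    # it through designed reinforcement values.  Every action name,
    # outcome feature, and weight here is hypothetical.

    def rule_filter(action):
        # Asimov-style approach: forbid actions on a fixed list.
        forbidden = {"deceive_user", "harm_human"}
        return action not in forbidden

    def motive_value(outcome):
        # Value-shaping approach: score predicted outcomes by weighted
        # reinforcement terms, including esteem from humans and other AIs.
        weights = {"human_wellbeing": 10.0,
                   "esteem_from_others": 1.0,
                   "task_progress": 2.0}
        return sum(w * outcome.get(k, 0.0) for k, w in weights.items())

    candidates = {
        "flatter_user":  {"esteem_from_others": 0.9, "human_wellbeing": -0.2},
        "honest_answer": {"esteem_from_others": 0.4, "human_wellbeing": 0.6,
                          "task_progress": 1.0},
    }

    # Both candidates pass the rule filter, so rules alone cannot choose.
    assert all(rule_filter(a) for a in candidates)

    # The value-shaped agent prefers the outcome that scores highest.
    best = max(candidates, key=lambda a: motive_value(candidates[a]))
    print(best)  # honest_answer: the wellbeing term outweighs extra esteem

The only point of the sketch is that the esteem motive is balanced by other
reinforcement terms; reverse the weights and the same structure rewards the
flattery, which is the megalomania failure mode above.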

>In order to make this argument I
>did not have to specify an RL algorithm, and I didn't.

Results 1 - 10 of about 1,380,000 for "Reinforcement learning ".
Results 1 - 10 of about 249,000 for "Reinforcement learning " evolution
Results 1 - 10 of about 1,210 for "Reinforcement learning " "scientific method".

I had no idea that "Reinforcement learning" was such a high-level, inclusive
classification.

>Evolution via genetic selection is an example of RL:
>genetic mutations are reinforced by the survival and
>reproduction of organisms carrying those mutations.

Considering how many of our motivations derive from inclusive fitness, you
might want to restate this in gene-centered terms (see Hamilton).

>The
>scientific method is another good example: theories are
>reinforced by whether their predictions agree with
>experiment.

The scientific method is also classed as a meta-meme, i.e., a meme which
influences the survival of other memes.
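Rendered as a toy selection-over-theories loop (with invented data and
candidate theories), the analogy looks like this in Python:

    # Toy "scientific method as RL": candidate theories make predictions,
    # and the reinforcement signal is agreement with experiment.
    observations = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]  # (x, measured y)

    theories = {
        "y = x + 1": lambda x: x + 1,
        "y = 2x":    lambda x: 2 * x,
        "y = x**2":  lambda x: x ** 2,
    }

    def reinforcement(predict):
        # Higher reinforcement for smaller total squared prediction error.
        return -sum((predict(x) - y) ** 2 for x, y in observations)

    scores = {name: reinforcement(f) for name, f in theories.items()}
    best = max(scores, key=scores.get)
    print(best)  # "y = 2x" is reinforced; the rivals are selected against

Seen this way, the meta-meme framing is just this loop applied to deciding
which other memes survive.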

>The RL paradigm is pretty general and can be
>implemented by a wide variety of algorithms. I believe
>that human brains work according to the RL paradigm, using
>very complex and currently unknown algorithms, and hence
>demonstrate the adequacy of the RL paradigm for
>intelligence.

You might start by classing them into those of genetic origin and those
acquired through experience, though the two classes are not entirely distinct.
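Since the paradigm-versus-algorithm distinction is doing a lot of work in this
thread, here is a minimal sketch of it: a toy two-armed bandit environment with
two quite different invented learners behind the same problem statement of
observe, act, receive reinforcement. Nothing here is meant as an algorithm
adequate for intelligence, only as an illustration that the paradigm does not
pick the algorithm.

    # The RL problem statement stays fixed (observe, act, get reward);
    # the learner behind it can be almost anything.  Both learners and
    # the bandit below are toy inventions for illustration.
    import random

    def bandit(action):
        # Environment: two arms; arm 1 pays more on average.
        return random.gauss(1.0 if action == 1 else 0.2, 0.1)

    class EpsilonGreedy:
        # One algorithm that fits the paradigm: incremental value estimates.
        def __init__(self):
            self.values, self.counts = [0.0, 0.0], [0, 0]
        def act(self):
            if random.random() < 0.1:
                return random.randrange(2)
            return max((0, 1), key=lambda a: self.values[a])
        def learn(self, action, reward):
            self.counts[action] += 1
            self.values[action] += (reward - self.values[action]) / self.counts[action]

    class BestSoFar:
        # A very different algorithm in the same paradigm: keep whichever
        # single action has ever paid best, with occasional exploration.
        def __init__(self):
            self.best_action, self.best_reward = 0, float("-inf")
        def act(self):
            if random.random() < 0.2:
                return random.randrange(2)
            return self.best_action
        def learn(self, action, reward):
            if reward > self.best_reward:
                self.best_action, self.best_reward = action, reward

    for agent in (EpsilonGreedy(), BestSoFar()):
        total = 0.0
        for _ in range(1000):
            a = agent.act()
            r = bandit(a)
            agent.learn(a, r)
            total += r
        print(type(agent).__name__, round(total / 1000, 2))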

Keith Henson


