Re: Transcript. please? (Re: AI-Box Experiment 3)

From: Phil Goetz (philgoetz@yahoo.com)
Date: Mon Aug 22 2005 - 13:49:30 MDT


--- Michael Wilson <mwdestinystar@yahoo.co.uk> wrote:

> > 1a. I am familiar with many independent and
> > well-substantiated cases of appearances of the Virgin
> > Mary. I give them small probabilities of truth, because
> > I gave them small priors. I'd need details of each case
> > to overcome these low priors. Likewise with the AI-box.
>
> What's the competing hypothesis for AI-box, and what
> support gives it such a high probability? What other
> reasons do you have for doubting the 'humans are easily
> convinced of things they don't expect to be convinced of'
> hypothesis?

The competing hypothesis is that there is some trivial
reason that Eliezer is winning these experiments.

> > 1b. Someone once said that a scientist is someone who,
> > instead of believing something because he sees it, is
> > someone who believes something because he has a theory
> > for it. I can't construct a theory for the AI-box
> > results without more details of what happened.
>
> You can't construct a specific theory of what happened.
> But the aim is not to make a specific prediction. The aim
> is simply to predict if a prepared, intelligent person
> can be convinced.

And I won't "believe" that the outcome of the experiment
means that a prepared, intelligent person can be convinced,
unless I can construct a theory as to how that would
happen.

> > 3. The outcome is meant to suggest that, if Eliezer can
> > convince an arbitrary human to do something they are
> > dead-set against doing, then so can an AI. But if
> > Eliezer had such powers, why can't he consistently
> > convince people on SL4 to change their views?
>
> Again, this isn't the aim of the experiment, and this
> much more general prediction would be harder to make or
> justify. The question is whether the very specific action
> of letting an AGI of unknown ethics out of a box is
> something that people can be convinced to do.

The fact that Eliezer can't persuade an arbitrary person
even of things he himself believes is evidence against the
hypothesis that he can persuade them to let the AI out of
the box, and in favor of the hypothesis that there's a
trivial trick to it.
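
To put that in odds form (a minimal sketch; the numbers are
purely illustrative assumptions of mine, not anything
measured in the experiments):

  prior odds, general persuasion ability : trivial trick = 1 : 1  (assumed)
  P(fails to sway SL4 readers | general ability)  = 0.2           (assumed)
  P(fails to sway SL4 readers | trivial trick)    = 0.8           (assumed)
  Bayes factor   = 0.2 / 0.8 = 1/4
  posterior odds = 1 : 4 in favor of the trivial-trick hypothesis

Different assumed numbers change the size of the shift but
not its direction, so long as failing to sway SL4 readers
is less likely under the general-persuasion-ability
hypothesis than under the trivial-trick one.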

- Phil

