Re: Transcript, please? (Re: AI-Box Experiment 3)

From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Mon Aug 22 2005 - 13:14:08 MDT


> 1a. I am familiar with many independent and
> well-substantiated cases of appearances of the Virgin Mary.
> I give them small probabilities of truth, because I gave
> them small priors. I'd need details of each case to
> overcome these low priors. Likewise with the AI-box.

The prior is low because the hypotheses 'it is a fake' and
'it's random chance', the first well supported by experience
and the second well supported by statistics, are much more
likely than 'there is a supernatural entity contradicting
all known physics that decided to create pattern X in location
Y'. What's the competing hypothesis for the AI-box results,
and what evidence gives it such a high probability? What
other reasons do you have for doubting the 'humans are easily
convinced of things they don't expect to be convinced of'
hypothesis?
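To make the point concrete, here is a minimal Python sketch of
the Bayesian bookkeeping behind that argument. All the numbers
are hypothetical, chosen only to show how a very low prior
dominates even a favourable likelihood:

    # Posterior = prior * likelihood, normalised over the
    # competing hypotheses.
    def posterior(priors, likelihoods):
        joint = {h: priors[h] * likelihoods[h] for h in priors}
        total = sum(joint.values())
        return {h: joint[h] / total for h in joint}

    # Competing explanations for a reported apparition; the
    # numbers are illustrative, not measured.
    priors      = {'fake': 0.600, 'chance': 0.399, 'supernatural': 0.001}
    likelihoods = {'fake': 0.5,   'chance': 0.1,   'supernatural': 0.9}

    print(posterior(priors, likelihoods))
    # 'supernatural' ends up near 0.003: even a high likelihood
    # barely moves it, because its prior is so small.

The same arithmetic applies to the AI-box: absent a competing
hypothesis with a respectable prior, the reported outcomes
should mostly raise your probability that people can be talked
out of positions they were sure they would hold.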

> 1b. Someone once said that a scientist is someone who,
> instead of believing something because he sees it, is
> someone who believes something because he has a theory for
> it. I can't construct a theory for the AI-box results
> without more details of what happened.

You can't construct a specific theory of what happened. But
the aim is not to make a specific prediction; the aim is
simply to predict whether a prepared, intelligent person can
be convinced. Given the likely difference between the
specifics here and the specifics of any real encounter, the
extra predictive power gained by building a more detailed
model of these encounters is probably marginal.

> 2. I want to know whether Eliezer is using the same
> "trick" each time.

Eliezer, are you using the same trick each time?
 
> 3. The outcome is meant to suggest that, if Eliezer can
> convince an arbitrary human to do something they are
> dead-set against doing, then so can an AI. But if Eliezer
> had such powers, why can't he consistently convince people
> on SL4 to change their views?

Again, this isn't the aim of the experiment, and this
much more general prediction would be harder to make or
justify. The question is whether the very specific action
of letting an AGI of unknown ethics out of a box is something
that people can be convinced to do.

> This suggests a solution to boxing an AI: Create a list
> advocating the position that AIs will be friendly. Wait
> until that list's Marc Geddes-equivalent appears. Have him
> be the gatekeeper.

Go for it.

> 4. I'm interested in the transcripts for what they say
> about human psychology.

Ditto, but satisfying my curiosity is a minor consideration
compared to convincing people that AI-boxing is an incredibly
risky strategy: probably futile, and even a small risk of
blowing up the world is an insane risk under most conditions.
 
 * Michael Wilson

                