Re: Is there evidence for, "Humans are going to create an AI," to be a probable hypothesis?

From: Thomas McCabe (pphysics141@gmail.com)
Date: Tue Oct 23 2007 - 19:05:53 MDT


On 10/23/07, William Pearson <wil.pearson@gmail.com> wrote:
> I can't currently get around the problem that we haven't had any
> instances of this happening. In a way we have had negative instances
> of some hypotheses involving AI, e.g. each planck time we don't create
> a realistic intelligence could be counted as evidence that we won't
> create one in the next planck second (and this hypothesis is very
> reliable, to date). And by induction it is not probable to create one
> in any planck time. And until we do create one, we shouldn't have a
> reason for increasing the probability of one being created, and we
> should be forever decreasing it.

If you look at the 21st century as yet another random hundred-year
interval, yes, the prior probability is very low due to Laplace's Law
of Succession. However, the prior probability does not equal the
posterior probability; there is a great deal of strong evidence for
the "AGI during 21st century" hypothesis, such as the presence of a
pre-existing general intelligence (humans), and the tools to create
AGI (computers).
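
Here's a rough Python sketch of that arithmetic (the 50-century trial
count and the likelihood ratio of 20 are made-up numbers for
illustration, not estimates of the real strength of the evidence):

    # Laplace's rule of succession: with s successes in n trials, the
    # estimated chance of success on the next trial is (s + 1) / (n + 2).
    def rule_of_succession(successes, trials):
        return (successes + 1.0) / (trials + 2.0)

    # Odds-form Bayes update: posterior odds = prior odds * likelihood ratio.
    def update(prior_prob, likelihood_ratio):
        prior_odds = prior_prob / (1.0 - prior_prob)
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1.0 + posterior_odds)

    # Treat the last 50 centuries as 50 trials with zero AGIs created.
    prior = rule_of_succession(0, 50)       # ~0.019
    # Summarize the evidence (existing general intelligence, computers)
    # as a single likelihood ratio of 20 (an illustrative assumption).
    posterior = update(prior, 20.0)         # ~0.28
    print(prior, posterior)

The point is only that a low succession-rule prior is not the end of
the calculation; the evidence term can dominate it.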

> Now you could argue that the probability of creating an AI in any
> given time period is independent of one another.

Each additional time period without AGI does count as evidence against
AGI's feasibility. But there is a huge amount of other evidence to
consider, a lot of which has already been documented and posted
online. See http://www.intelligence.org/AIRisk.pdf for some basics.

> We have no evidence
> for this meta-hypothesis either, due to not having created an AI for
> us to analyse the distributions of how they are created. Although we
> have a fair amount of evidence that is consistent with the hypothesis
> that the probability of creating an AI not independent of time, and
> just very low.

Suppose I have a black box, inside which is a colored block. The block
is either red or blue, but the box only has a hole wide enough for
individual photons to escape, one at a time. I want to show the
physics community that the block is red, so I turn on a light bulb,
and count the colors of the photons coming out. At the end of the day,
ten million are blue, and one million are red. I then publish data on
the red photons in a journal, and surely everyone must now agree that
the block is red; after all, we have a million published pieces of
evidence that it is so.
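
If you want to see exactly how badly cherry-picking breaks the math,
here's a quick Python sketch; the 0.9/0.1 photon-color probabilities
are made up for illustration:

    import math

    # Log odds of "the block is red" given photon counts, with a flat
    # prior. Assume a red block emits red photons 90% of the time and a
    # blue block 10% of the time (illustrative numbers).
    def log_odds_red(n_red, n_blue):
        ll_red  = n_red * math.log(0.9) + n_blue * math.log(0.1)
        ll_blue = n_red * math.log(0.1) + n_blue * math.log(0.9)
        return ll_red - ll_blue

    print(log_odds_red(1000000, 10000000))  # hugely negative: the block is blue
    print(log_odds_red(1000000, 0))         # count only the red photons: hugely positive

Same million red photons in both cases; only the second calculation,
the one that throws away the blue photons, concludes the block is red.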

> Possibly you could look at the number of people that have put there
> mind to creating something new, and see how many actually achieved
> there goal.

It's "their", please use proper grammar. And please try to understand
probability theory better- if ten million people dream of creating
AGI, and one succeeds, this does not mean that the probability of the
human race creating AGI is one in ten million!
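
A toy calculation shows why; the per-attempt probability and the
number of serious attempts below are made-up illustrative figures:

    # If each of n serious, roughly independent attempts succeeds with
    # probability p, the chance that at least one succeeds is:
    def p_at_least_one(p, n):
        return 1.0 - (1.0 - p) ** n

    # Even with tiny per-attempt odds, the aggregate is large:
    print(p_at_least_one(1e-4, 10000))   # ~0.63

The species-level question is about the aggregate, not about the
success rate of individual dreamers.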

> How to get a good delineation of what to include as
> evidence would be problematic in this case (e.g. should the alchemists
> and there philosopher's stone be counted), and it is likely that we
> will have far more evidence of people being successful compared to the
> number of unknown failures.

The unknown failures are unknown for a reason: they have little effect
on history. Thomas Edison tried two thousand different filaments for
the incandescent light bulb. What were they? I have no idea, because
in the grand historical calculus the total failures don't count.
Partial failures (non-Friendly AGIs) may have huge negative impacts,
though.

> Or the kurzweil way, which I will paraphrase as: Defining AI as part
> of the type of computer system with a high resource usage and showing
> that the hypothesis that we have been increasing the resources
> available of computer systems by a certain rate over time has a lot of
> evidence.

This is not Ray Kurzweil's hypothesis.

> Now I don't like this one much, because while we have
> evidence we will increase resources available to computers, there is
> no evidence we will create the right computer system for intelligence
> given sufficient resources.

We *know* that neurons and silicon are equivalent (in computability-theory
terms). For a proof-of-concept AGI, you could simply carry out the
QED calculations for the wave function of all 10^25-odd atoms in a
human brain: it would take longer than the age of the universe with
current hardware, but the result would be fully human-equivalent.
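
Here's a very rough back-of-the-envelope sketch of why "longer than
the age of the universe" is an understatement; every number below is
an order-of-magnitude guess, and an exact wave-function calculation
would scale far worse than this naive linear estimate:

    # All numbers are order-of-magnitude guesses, not measurements.
    atoms             = 1e25   # atoms in a human brain
    ops_per_atom_step = 1e6    # ops to advance one atom by one timestep (guess)
    steps_per_second  = 1e15   # femtosecond timesteps (guess)
    hardware_flops    = 1e15   # roughly a 2007-era top supercomputer (guess)

    ops_for_one_second = atoms * ops_per_atom_step * steps_per_second
    runtime_seconds    = ops_for_one_second / hardware_flops
    age_of_universe    = 4.3e17   # seconds
    print(runtime_seconds / age_of_universe)   # ~2e13 universe-ages per simulated second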

> Is there any principled way of deciding which way of calculating the
> probability of humans creating AI is the better to base decisions off?
> PTL?

Yes, there is: Bayesian probability theory. See
http://www.yudkowsky.net/bayes/bayes.html.
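
For a flavor of it, here's the core theorem in a few lines of Python,
using the usual illustrative medical-test numbers (1% prior, 80% true
positive rate, 9.6% false positive rate):

    # P(H | E) from P(H), P(E | H), and P(E | ~H).
    def bayes(prior, p_e_given_h, p_e_given_not_h):
        p_e = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
        return p_e_given_h * prior / p_e

    print(bayes(0.01, 0.80, 0.096))   # ~0.078; a positive test is weaker than it looks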

> Now my knowledge of bayesian decision theory is rusty, so it may well
> be that I am missing something or my analyses are faulty. Any pointers
> to things already written? And note I am looking for a body of data I
> could feed to a Bayesian classifier, so no general human type
> arguments for AI.

You must always feed all the data you have to a Bayesian classifier,
or you get nonsense like the red block example above. If you know some
of the data is faulty, the knowledge "Data XYZ is faulty" is itself
data, and should be fed in along with the original data. If you're
trying to decide what "set" of data to feed into the classifier,
you're going about it wrong. See
http://omega.albany.edu:8008/JaynesBook.html.
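
For instance, "this sensor may be broken" gets folded into the
likelihood instead of being used as an excuse to throw readings away.
A toy Python sketch, with made-up numbers:

    # P(one observation | block color), marginalizing over sensor health.
    def likelihood(obs_red, block_red, p_faulty):
        p_if_working = 0.9 if obs_red == block_red else 0.1
        p_if_broken  = 0.5   # a broken sensor reports colors at random
        return (1.0 - p_faulty) * p_if_working + p_faulty * p_if_broken

    # P(block is red | all observations), starting from a 50/50 prior.
    def posterior_red(observations, p_faulty):
        num, den = 0.5, 0.5
        for obs_red in observations:
            num *= likelihood(obs_red, True, p_faulty)
            den *= likelihood(obs_red, False, p_faulty)
        return num / (num + den)

    obs = [True, True, False, True]           # mostly-red readings
    print(posterior_red(obs, p_faulty=0.0))   # trusted sensor: ~0.99
    print(posterior_red(obs, p_faulty=0.8))   # probably-broken sensor: ~0.66

The suspect data still goes in; the suspicion just weakens how much it
moves the posterior.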

> Will Pearson
>

 - Tom


