Is there evidence for "Humans are going to create an AI" being a probable hypothesis?

From: William Pearson (wil.pearson@gmail.com)
Date: Tue Oct 23 2007 - 17:58:27 MDT


I can't currently get around the problem that we haven't had any
instances of this happening. In a way we have had negative instances
of some hypotheses involving AI, e.g. each Planck time in which we
don't create an AI could be counted as evidence that we won't create
one in the next Planck time (and this hypothesis has been very
reliable to date). By induction, then, it is improbable that we will
create one in any given Planck time. And until we actually do create
one, we have no reason to increase the probability of one being
created; we should be forever decreasing it.
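
To make the induction concrete, here is a minimal Python sketch using
Laplace's rule of succession, where the estimate after n failures and
no successes is 1/(n + 2); the trial counts are made up for
illustration, not actual Planck-time tallies:

    # Laplace's rule of succession: with s successes in n trials, the
    # estimated probability of success on the next trial is
    # (s + 1) / (n + 2). With s = 0 it only ever shrinks as n grows.
    def rule_of_succession(successes, trials):
        return (successes + 1) / (trials + 2)

    for n in [10, 1000, 1000000]:
        print(n, rule_of_succession(0, n))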

Now you could argue that the probabilities of creating an AI in
different time periods are independent of one another. We have no
evidence for this meta-hypothesis either, since we have never created
an AI whose origins we could analyse for such a distribution. We do,
however, have a fair amount of evidence consistent with the rival
hypothesis that the probability of creating an AI is not independent
of time, and is just very low.
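
As a rough sketch of why that rival hypothesis fits the data: under
independence, a constant per-period probability p assigns likelihood
(1 - p)^n to n straight failures, which stays near 1 whenever p is
much smaller than 1/n. The rates and period count below are invented
for illustration:

    # Likelihood of an unbroken run of n failures under a constant,
    # independent per-period success probability p.
    def likelihood_of_all_failures(p, n):
        return (1 - p) ** n

    n = 10000  # hypothetical number of observed periods, all failures
    for p in [1e-2, 1e-4, 1e-6]:
        print(p, likelihood_of_all_failures(p, n))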

Possibly you could look at the number of people who have put their
minds to creating something new, and see how many actually achieved
their goal. Getting a good delineation of what to include as evidence
would be problematic in this case (e.g. should the alchemists and
their philosopher's stone be counted?), and the record is likely
skewed: we will have far more evidence of people succeeding than of
the unknown number of failures that went unrecorded.
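
A toy illustration of that survivorship problem, with every number
invented: if only some fraction of failures ever gets recorded, the
naive success rate computed from surviving records overstates the
true one:

    successes = 50             # hypothetical recorded successes
    failures_recorded = 100    # hypothetical recorded failures
    failure_record_rate = 0.2  # assumed fraction of failures recorded

    naive = successes / (successes + failures_recorded)
    true_failures = failures_recorded / failure_record_rate
    corrected = successes / (successes + true_failures)
    print(naive, corrected)    # the naive estimate is much too high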

Or there is the Kurzweil way, which I will paraphrase as: define AI
as a type of computer system with high resource usage, and show that
the hypothesis that the resources available to computer systems have
been growing at a certain rate over time has a lot of evidence behind
it. I don't like this one much, because while we have evidence that
we will keep increasing the resources available to computers, there
is no evidence that we will create the right computer system for
intelligence given sufficient resources.
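
The extrapolation step the argument does support is easy to state.
Here is a minimal sketch, with an invented doubling time and an
invented resource threshold, that computes when resources cross the
threshold; note that it deliberately says nothing about whether the
resulting system is intelligent:

    import math

    resources_now = 1.0   # arbitrary units
    doubling_years = 1.5  # assumed doubling time
    threshold = 1e6       # hypothetical "enough for AI" resource level

    years = doubling_years * math.log2(threshold / resources_now)
    print("threshold crossed in about %.1f years" % years)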

Is there any principled way of deciding which way of calculating the
probability of humans creating AI is the better one to base decisions
on? PTL?
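
One idealized answer would be Bayesian model comparison: score each
hypothesis by the probability it assigns to the data we have actually
seen (an unbroken run of failures) and compare the scores. A sketch
with two made-up constant-rate models:

    n = 10000  # hypothetical observed periods, all failures

    p_low, p_high = 1e-6, 1e-3    # two made-up per-period rates
    lik_low = (1 - p_low) ** n    # likelihood under the low-rate model
    lik_high = (1 - p_high) ** n  # likelihood under the high-rate model

    print(lik_low / lik_high)     # Bayes factor; > 1 favours the low rate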

Now my knowledge of Bayesian decision theory is rusty, so it may well
be that I am missing something or that my analyses are faulty. Any
pointers to things already written? Note that I am looking for a body
of data I could feed to a Bayesian classifier, so no general
human-style arguments for AI.

 Will Pearson


