[sl4] [Essay brainstorm] Title: The most important question - what will the impact of AI on humanity be?

From: William Pearson (wil.pearson@gmail.com)
Date: Fri Oct 16 2009 - 06:47:37 MDT


Hi all,

I'm slowly starting to write a long overview paper about the debate
around the speed and type of AI improvement. Although I am biased
towards one side, there are still lots of open questions which need a
broader range of participants to solve, so I am aiming to be as
impartial as I can be. These are my notes so far; does anyone have any
comments or disputes I should address? This is most definitely a side
project, so the time-scale is probably a year or two. Co-authors are welcome.

 Will

Intro

Cheap intelligence will change the economy and the pace of scientific
research and technological development, and will enable cheap space
exploration. It would be the biggest game-changing development ever.

1) What will AI want to do?

The very first de novo AIs will have arbitrary goals, as long as they
do not destroy themselves while following those goals. If there are
multiple AIs and we are not careful about the way that AIs are
created/copied, evolutionary forces will take control (a singleton is
one method of being careful). Empathy will need to be built in. Show
examples of current computing systems with arbitrary goals (a toy
sketch follows below). Whole brain emulations will want what humans
want, if we do the emulation right.
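As one possible example for this section: a minimal sketch (my own
illustration, not from the notes above) of how today's optimisation
machinery is indifferent to which goal it is handed. The hill_climb
function and the two toy objectives are hypothetical names:

# A trivial hill-climber that pursues whatever objective it is given.
# Nothing in the optimisation machinery cares what the goal is; the goal
# is an arbitrary parameter, as with today's optimisers and planners.
import random

def hill_climb(objective, length=20, steps=1000):
    """Maximise an arbitrary objective over bit strings by local search."""
    state = [random.randint(0, 1) for _ in range(length)]
    for _ in range(steps):
        i = random.randrange(length)
        candidate = state[:]
        candidate[i] ^= 1                      # flip one bit
        if objective(candidate) >= objective(state):
            state = candidate                  # keep the change if it helps
    return state

# Two "arbitrary goals": the machinery is identical either way.
all_ones = lambda s: sum(s)                                   # goal: as many 1s as possible
alternate = lambda s: sum(a != b for a, b in zip(s, s[1:]))   # goal: alternating bits

print(hill_climb(all_ones))
print(hill_climb(alternate))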

2) What is intelligence a function of?

Assume intelligence is a mathematical function outputting some value,
with a greater value meaning the system is more capable of achieving
its goal. What is the function's domain, and how much does it depend
upon each part of that domain? E.g. if intelligence is highly
dependent upon the state of the world, and the world keeps on
changing, then it does not make sense to talk of the intelligence of a
system by itself (a rough sketch of this dependence is below).
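One way this point could be made concrete (a sketch under my own
assumptions; the performance and intelligence functions here are
illustrative stand-ins, not an established measure):

# The same agent scores differently under different world states, so a
# single number for "the agent's intelligence" only makes sense relative
# to a fixed domain of environments.

def performance(agent, world_state):
    """Score = whether the agent's chosen action matches the world."""
    action = agent(world_state)
    return 1.0 if action == world_state["correct_action"] else 0.0

def intelligence(agent, world_states):
    """Average goal-achievement over a given set of world states."""
    return sum(performance(agent, w) for w in world_states) / len(world_states)

# An agent tuned for one regime of the world...
agent = lambda w: "harvest" if w["season"] == "autumn" else "plant"

old_world = [{"season": "autumn", "correct_action": "harvest"}] * 10
new_world = [{"season": "autumn", "correct_action": "plant"}] * 10   # world changed

print(intelligence(agent, old_world))   # 1.0 in the old world
print(intelligence(agent, new_world))   # 0.0 once the world changes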

3) What limits intelligence: computing power or useful information
about the world?

Discuss the "million Einsteins for a million years" thought experiment.
Could they whittle down the many hypotheses they have about the
external world to get to the right one? Discuss AIXI, and how
different UTMs give different measures (a toy illustration is below).
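On the UTM-dependence point, a rough illustration (my own; the program
lengths below are made-up numbers standing in for the shortest programs
on two hypothetical reference machines U1 and U2):

# AIXI-style measures weight a hypothesis h by 2**(-K_U(h)), where K_U is
# the length of the shortest program for h on reference machine U.  Swap
# the machine and the ranking of hypotheses can change.

hypotheses = ["h1", "h2"]

# Hypothetical shortest-program lengths on two different reference machines.
K_U1 = {"h1": 10, "h2": 14}   # machine U1 finds h1 simpler
K_U2 = {"h1": 20, "h2": 12}   # machine U2 finds h2 simpler

def prior(K):
    weights = {h: 2 ** -K[h] for h in hypotheses}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

print(prior(K_U1))   # favours h1
print(prior(K_U2))   # favours h2: same hypotheses, different measure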

4) Discussion of theoretical programmatic self-change while
maintaining predicates (friendliness/intelligence)

Brief discussion of input-less self-change and Rice's Theorem; that
obstacle does not apply directly to programs that take information
from the environment. Discussion of how you could show that the
environment gives useful information about self-change (a toy sketch
follows).
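A toy sketch of the kind of self-change loop this section might
discuss (my own construction: the test suite stands in for a checkable
predicate, since Rice's Theorem rules out deciding arbitrary semantic
properties of arbitrary input-less programs):

# A program only replaces its current policy with a candidate successor if
# a checkable predicate holds.  In practice the predicate has to be
# something weaker than the full semantic property, e.g. a finite test
# suite built from information gathered in the environment.

def predicate_holds(candidate, test_cases):
    """Stand-in for the friendliness/intelligence predicate: pass all tests."""
    return all(candidate(x) == expected for x, expected in test_cases)

def self_change(current, candidates, test_cases):
    """Adopt a candidate only if the predicate is preserved."""
    for candidate in candidates:
        if predicate_holds(candidate, test_cases):
            current = candidate            # accept the self-modification
    return current

# Environment-supplied information about what counts as correct behaviour.
tests = [(0, 0), (1, 2), (2, 4)]

current = lambda x: 0                      # initial, not very capable program
candidates = [lambda x: x, lambda x: 2 * x]

improved = self_change(current, candidates, tests)
print([improved(x) for x, _ in tests])     # [0, 2, 4]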

Conclusion

5) Challenges if Eliezer is right?

Friendliness etc

6) Challenges if Eliezer is wrong?

How do we avoid ending up as Charlie Stross's Economy 2.0 by
accident? Can we really expect to be able to control it?


