Re: Questions for Eliezer, SIAI and consciousness

From: Eliezer Yudkowsky
Date: Tue Oct 26 2004 - 14:58:37 MDT

sam kayley wrote:
> What probability do you give for consciousness involving strange physics?

I'm now as confident that consciousness does not involve strange physics as
I was once confident that it did. I would say "No" with great force, and
expect to be wrong on at least one in three questions of equal difficulty.

> An AI, even one designed to self-improve has some basic assumptions which it
> does not make sense to change, such as its goal, and probably several more
> subtle ontological assumptions. Is it a design goal for SIAI not to include
> assumptions likely to result in nonconsensual uploading of humans even if
> evidence is found that physical effects occur in brains that do not occur
> elsewhere in the observable universe?

Back when I thought consciousness was strange physics, yeah, this was one
of the mental tests I threw at FAI designs. But this is an example of a
behavior that should arise naturally from a good design, *not* something
that should need explicit patching - explicit patching indicates a poor
design. I threw that challenge at potential FAI designs to see if they
solved the problem naturally. In this case, if human brains involve
strange physics and human consciousness intrinsically requires strange
physics, then until you duplicate the strange physics and find some way to
transfer state from one strange-physics phenomenon to another, any attempt
to upload a human, or even a laboratory sample of living brain tissue, will
fail visibly: no non-strange-physics device can be formulated that
predicts the neurons' behaviors. An FAI's direct physical examination of
neurons and of human brain designs should yield, via abstract reasoning,
the same conclusion: that strange physics is involved and faithful
uploading to a normal computing device is not possible.

Incidentally, Roger Penrose does not deny that it is possible to create
artificial consciousness; he only denies that you can do it on Turing
machines. Penrose explicitly accepts that if you fathom the postulated
strange physics of neurobiology, you should be able to create artificial
consciousness.

Eliezer S. Yudkowsky
Research Fellow, Singularity Institute for Artificial Intelligence