Re: Infeasibility of AI-boxing revisited

From: Alfio Puglisi (puglisi@arcetri.astro.it)
Date: Tue Sep 03 2002 - 07:31:10 MDT


On Tue, 3 Sep 2002, Eliezer S. Yudkowsky wrote:

>http://www.newscientist.com/news/news.jsp?id=ns99992732
>
>A self-organizing electronic circuit surprisingly turned itself into a
>radio receiver.

I would not count this as evidence; the article puts too much
sensationalism into it.
The researchers were trying to "breed" an oscillator by generating
random hardware and then selecting, breeding, and mutating the best
performers. They were not looking at the product itself, only at its
output. As usual, the evolved designs found a shortcut in the ruleset
and solved the problem (in this case, generating a certain waveform) by
a method nobody had thought of in advance: they stumbled upon a nearby
RF transmission. For all we know, they could just as well have exploited
unfiltered interference in the power supply, a constant micro-vibration
of the room caused by the building's UPS, or something else you would
never think of as an oscillator.

Something similar happened in Sweden when researchers tried to evolve a
flying robot bird
(http://www.reuters.com/news_article.jhtml?type=search&StoryID=1329646).
They were selecting on a "maximum altitude" variable, or something
similar. The birds discovered that by standing on their wings they could
increase their altitude, and very efficiently too, so the control
software selected that kind of behaviour...

In both cases, the problem was insufficiently detailed selection rules
(the oscillator should have to work in 1,000 different locations and
temperatures, the birds should actually have to move when flying, etc.)

I would take this more as input for future AI-breeding programs - the
ones that will have to choose the most intelligent AIs from each new
release pack. Can we find a definition of "intelligence" good enough to
discriminate the bright AIs from the cheating ones in a practical way?

Alfio



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:40 MDT