Re: Changing the value system of FAI

From: Dani Eder
Date: Wed May 10 2006 - 05:26:12 MDT

 "Eliezer S. Yudkowsky" wrote:
> build a bridge that stays up, it is not necessary
> to be able to calculate of any possible bridge
> whether it stays up. It is enough to know at least
> one bridge design such that you can calculate that
> it supports at least 30 tons.

To give a real world example, the wooden bridge
where the road I live on crosses the stream on my
property was rated at 13 tons. A fully loaded logging
truck (approx 40 tons) tried to cross the bridge a
few months ago. It broke the bridge and fell in the
stream. So it is not just how you design a system
that matters, but how you operate it, too.

If your goal is a safe or friendly AI, designing
the AI software alone is not sufficient. Working
out the theory behind the algorithms is a necessary
first step. When it comes to designing a real
system, however, you will need to consider
facilities, hardware, personnel, and operating
procedures as well as software.

If you don't want the AI's code and data to get
inadvertently corrupted, you might specify the
type of file system for the hard drives, RAID level,
backup power supply, and data backup plan.
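
Beyond hardware measures like RAID, one software-level safeguard is
periodic integrity checking. The sketch below (file names and layout are
hypothetical, not from the original post) records SHA-256 digests of
critical files in a manifest and later flags any file whose contents have
changed or gone missing:

```python
# Minimal sketch: detect inadvertent corruption of critical code/data
# files by checking them against a stored SHA-256 manifest. This
# complements, not replaces, RAID and off-machine backups.
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def build_manifest(paths):
    """Record a known-good digest for each file."""
    return {str(p): sha256_of(p) for p in paths}


def verify(manifest):
    """Return the files whose current digest differs or that are missing."""
    corrupted = []
    for name, digest in manifest.items():
        p = Path(name)
        if not p.exists() or sha256_of(p) != digest:
            corrupted.append(name)
    return corrupted
```

The manifest itself would of course need to live on separate,
write-protected storage, or a corruption could silently rewrite both the
data and its recorded digest.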

If you don't want the AI's goals to change too much
too fast, you might specify they are hard coded
in a ROM chip, and require extensive human
analysis before updating them. You might also
monitor the AI's goal stack, code size, processor
temperature, and network traffic and signal an
alert if anything changes unexpectedly.
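
The monitoring idea above can be sketched very simply: record a baseline
for each watched quantity and raise an alert whenever a reading departs
from it. The metric names and thresholds below are illustrative
assumptions, not part of the original proposal:

```python
# Minimal sketch: compare a few operational metrics against a recorded
# baseline and report anything that changes unexpectedly.
from dataclasses import dataclass


@dataclass
class Baseline:
    goal_hash: str      # digest of the hard-coded goal representation
    code_size: int      # bytes of executable code
    max_temp_c: float   # highest acceptable processor temperature
    max_net_bytes: int  # highest acceptable network traffic per interval


def check(baseline, goal_hash, code_size, temp_c, net_bytes):
    """Return a list of alert strings; an empty list means all clear."""
    alerts = []
    if goal_hash != baseline.goal_hash:
        alerts.append("goal representation changed")
    if code_size != baseline.code_size:
        alerts.append("code size changed")
    if temp_c > baseline.max_temp_c:
        alerts.append("processor temperature above limit")
    if net_bytes > baseline.max_net_bytes:
        alerts.append("network traffic above limit")
    return alerts
```

Exact-match checks suit quantities that should never drift (goal hash,
code size), while threshold checks suit quantities that vary normally
within a range (temperature, traffic).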

Right now we are at a stage equivalent to where
nuclear physics was when researchers were working
out how fission works and whether a chain reaction
was possible. But it is a long way from working
out the theory to a working nuclear power plant.



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT