Re: Building a friendly AI from a "just do what I tell you" AI

From: sl4.20.pris@spamgourmet.com
Date: Tue Nov 20 2007 - 18:28:05 MST


A lot of people got sidetracked by questioning the basic assumptions,
so I have to state them more explicitly and in more detail:

* The OAI is not able to do, and does not want to do, anything besides
answering questions in a restricted way: it will only output media
(text/pictures/video/audio). Think of it as a glorified calculator AI
(GCAI). This is so simply because this is the way it was designed.
* It will not go into an infinite loop or decide that it needs to turn
the whole earth/universe into computronium when faced with a question
beyond its capabilities. If you press the "pi" key on your desk
calculator, it doesn't start an infinite loop trying to calculate all
the digits of pi; it just outputs pi to some number of digits. If you
ask the GCAI:
- "Calculate pi" it would ask back:
- "How many digits do you want?"
- "I want them all!"
- "Sorry, I cannot do that."
- "Ok, give me the first 3^^^3 digits."
- "Sorry, but the universe will be dead before I can finish this task.
If you want, I can still start calculating; just interrupt me when
you want something else."
etc...
* It CANNOT modify itself, nor does it want to modify itself.
* When outputting media the GCAI won't do anything unexpected, like
embedding subliminal messages in order to manipulate humans. It also
won't output an infinite or impractically large number of pages beyond
our reading capacity, unless specifically asked to do so.

I hope you get the idea. All this rests on the assumption that it is
easier to build a GCAI according to the specifications above than an
FAI. Why? Because it is very hard to define what "friendly" is
supposed to mean or do. On the other hand, we all know how a
calculator is supposed to behave.

Roland



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:00 MDT