Re: Building a friendly AI from a "just do what I tell you" AI

From: Stathis Papaioannou
Date: Sun Nov 18 2007 - 18:07:18 MST

On 19/11/2007, Thomas McCabe wrote:

> The OAI analyzes your instruction, concludes that it would require a
> lot of computing power to design an FAI, and then turns the planet
> Earth into computronium before spitting out the design plans.

But it wouldn't do that: it would explain, just as you have, that the
planet would have to be converted to computronium, and if you're not
sure, you can ask for clarification about what the consequences of
this would be. It's then up to the humans to decide whether or not to
follow the advice: the OAI has no agenda of its own beyond answering
questions and doing as it is asked. The humans might still decide to
do something stupid, but that has always been the case anyway, and one
would hope that at the very least they would be more aware of the
risks of their actions.

Stathis Papaioannou

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:00 MDT