Re: Building a friendly AI from a "just do what I tell you" AI

Date: Wed Nov 21 2007 - 08:50:44 MST

On Nov 21, 2007 9:24 AM, Tim Freeman wrote:

> Saying "the idiot user shouldn't have implemented a plan he didn't
> understand" doesn't work. Humans can't tell with any reliability
> whether they accurately understand something. There is unavoidable

True; that's why I think it would be easier to build a FAI via the
intermediate step of building an OAI first. Nothing can protect us
from our own stupidity except perhaps a FAI, and we are not there yet.


Hard: Humans -> FAI
Easier: Humans -> OAI -> FAI


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:00 MDT