Re: Why friendly AI (FAI) won't work

From: Harry Chesley (chesley@acm.org)
Date: Wed Nov 28 2007 - 18:20:46 MST


Byrne Hobart wrote:
>
>> First, to be useful, FAI needs to be bullet-proof, with no way for
>> the AI to circumvent it. This equates to writing a bug-free program,
>> which we all know is next to impossible. In fact, to create FAI, you
>> probably need to prove the program correct. So it's unlikely to be
>> possible to implement FAI even if you figure out how to do it in
>> theory.
>
> Does it need to be perfect, rather than 1) better than previous
> versions, 2) able to recognize errors, and 3) highly redundant? For
> example, your AI could be motivated to ensure Kaldor-Hicks efficient
> transfers of wealth, /and/ to ensure maximally beneficial transfers
> of wealth -- and if it finds that Goal #2 is interfering with Goal
> #1, it would drop the second goal until it could come up with a
> better way to fulfill it without hurting Goal #1. I mean, if you're
> designing a system for, say, routing trains, you don't need a perfect
> Get Things There Fast routine, as long as you have a very
> high-priority, low-error Don't Crash Things Into Each Other routine.

I would have thought that 1 and 2 were as likely to be buggy as anything
else. Redundancy, though, has some possibilities. Sort of like putting
something in multiple bags, so that if some leak, it's still OK. But
that could equally well be applied to other approaches.
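The priority scheme Byrne describes can be sketched in a few lines. This is purely illustrative (the goal functions and action format are invented for the example): scoring actions as tuples ordered by goal priority means a lower-priority goal can only break ties, never override a higher-priority one.

```python
# Hypothetical sketch of a strict priority ordering over goals: a
# lower-priority goal only matters where higher-priority goals are
# indifferent, so it can never "interfere" with them.

def choose_action(actions, goals):
    """Pick the lexicographically best action.

    `goals` is ordered highest priority first; each goal is a function
    scoring an action (higher is better). Python tuple comparison
    enforces the priority ordering.
    """
    return max(actions, key=lambda a: tuple(g(a) for g in goals))

# Toy train-routing case: Don't Crash (goal #1), then Get There Fast (#2).
dont_crash = lambda a: 0 if a["collision_risk"] else 1
be_fast = lambda a: -a["minutes"]

actions = [
    {"name": "shortcut", "collision_risk": True, "minutes": 10},
    {"name": "safe_slow", "collision_risk": False, "minutes": 30},
    {"name": "safe_fast", "collision_risk": False, "minutes": 20},
]
best = choose_action(actions, [dont_crash, be_fast])
# The shortcut is fastest but loses outright on the safety goal,
# so the planner picks the fastest safe route instead.
```

Of course, this only pushes the bug risk into the goal functions themselves, which is the point made above about 1 and 2 being as likely to be buggy as anything else.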

>> Second, I believe there are other ways to achieve the same goal,
>> rendering FAI an unnecessary and onerous burden. These include
>> separating input from output, and separating intellect from
>> motivation. In the former, you just don't supply any output channels
>> except ones that can be monitored and edited.
>
> Monitored and edited by whom? This dragon-on-a-leash theory
> presupposes that we can pick the right leash-holder, and ensure that
> the leash stays where we want it. That's very nearly incompatible
> with the notion that an AI is valuable enough to be worth creating
> and powerful enough to make a difference.

Monitored by us or by programs we create, which can be much simpler than
an AI and hence easier to get right. The best current example I can
think of is programmed trading, where we already have AIs (simple ones)
picking stocks and automatically buying or selling them. But there are
safeguard programs that shut the whole system down if certain parameters
are exceeded.
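The safeguard idea amounts to a circuit breaker: a small, separately written monitor that the AI's actions must pass through. A minimal sketch, with all names and limits invented for illustration:

```python
# A hypothetical circuit breaker: the (untrusted, possibly buggy)
# trading AI proposes orders, but this much simpler program decides
# whether they reach the market at all, and halts everything once
# fixed parameters are exceeded.

class CircuitBreaker:
    def __init__(self, max_order_size, max_daily_loss):
        self.max_order_size = max_order_size
        self.max_daily_loss = max_daily_loss
        self.daily_loss = 0.0
        self.halted = False

    def check_order(self, order_size):
        """Return True if the order may proceed; trip the breaker otherwise."""
        if self.halted or abs(order_size) > self.max_order_size:
            self.halted = True
        return not self.halted

    def record_pnl(self, pnl):
        """Accumulate losses; shut the whole system down past the limit."""
        if pnl < 0:
            self.daily_loss += -pnl
        if self.daily_loss > self.max_daily_loss:
            self.halted = True

breaker = CircuitBreaker(max_order_size=1000, max_daily_loss=50_000)
breaker.check_order(500)      # a normal order passes
breaker.record_pnl(-60_000)   # loss limit exceeded: breaker trips
breaker.check_order(500)      # now refused, as is everything after it
```

The breaker is only a few dozen lines with no intelligence in it, which is what makes it plausible to get right even when the AI it leashes is not.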

As to it being worthwhile, I'd say you need to evaluate that on a
case-by-case basis. But you're right, it certainly won't be as powerful
as a completely unleashed AI.



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:01 MDT