Re: AI Boxing: http://www.sl4.org/archive/0207/4977.html

From: Peter C. McCluskey (pcm@rahul.net)
Date: Tue Jun 10 2008 - 18:37:02 MDT


 tim@fungible.com (Tim Freeman) writes:
> In ordinary human
>discourse, the concept of "what so-and-so wants" is treated as a
>simple thing, so the reasonable assumption is that an implementation
>of that concept is reasonably simple as well.

 There is a sense in which it is simple, much like there's a sense in
which winning a Nobel Prize is simple.
 But it isn't clear that "wants" refers to a logically consistent concept,
much less a concept that is simple in all circumstances. For example, it's
possible to create conditions under which asking a person about his
happiness during an experience reveals preferences which differ from
the preferences revealed by asking how he remembers it afterward (see
the book Stumbling on Happiness for more on this subject). Humans
ordinarily try to pretend such conflicts don't happen.

 I agree that we ought to aim for an implementation that can be understood
by a number of humans, but I also want to assume a high probability that
any particular implementation has bugs.

-- 
------------------------------------------------------------------------------
Peter McCluskey         | sleepiness is responsible for far more deaths on the
www.bayesianinvestor.com| roads than alcohol or drugs - Paul Martin


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT