Re: AI-Box Experiment 3: Carl Shulman, Eliezer Yudkowsky

From: Peter de Blanc (peter.deblanc@verizon.net)
Date: Mon Aug 22 2005 - 12:00:12 MDT


On Sun, 2005-08-21 at 23:34 -0400, Peter de Blanc wrote:
> On Sun, 2005-08-21 at 19:54 -0700, Eliezer S. Yudkowsky wrote:
> > Oh, don't I just wish. If I had that kind of charisma, even toned
> > down to obey rationalist ethics, SIAI would be a lot larger by now.
>
> Can you elaborate on this a little (ethics)?
>

Actually, feel free to ignore that request. I believe you're talking
about this:

On Sun, 2001-02-04 at 21:48:41 -0700:
> 1: Sysop (observes): Samantha's volitional decision is that she would
> like me to offer advice as long as I don't use persuasive methods that
> 'force' her decision - that is, use persuasive methods that are
> powerful enough to convince her of false things as well as true things.

And also this post:

http://sl4.org/archive/0211/5693.html

I had been thinking about how the selection pressures acting on ethics do
not necessarily favor ethics that help you achieve your stated goals, and
I think all rationalists need to carefully consider how useful their
ethics actually are.

Ethics are heuristics, not values, and in this case I think the heuristic
is a useful one: even with a great deal of skill at manipulating people,
the consequences of using invalid arguments become very unpredictable.


