Re: Two draft papers: AI and existential risk; heuristics and biases

From: David Picon Alvarez (eleuteri@myrealbox.com)
Date: Thu Jun 08 2006 - 13:09:18 MDT


From: "Bill Hibbard" <test@demedici.ssec.wisc.edu>
> Here's a way to think about it. From your post you clearly
> would not do anything to permanently turn on human pleasure
> centers. This is based on your recognition of human
> expressions of happiness and your internal model of human
> mental processes and what makes them happy. Given that the
> SI will have as accurate recognition of expressions of
> happiness as you (my point 1) and as good an internal model
> of what makes humans happy as you (my points 5 and 6), then
> why would the SI do something to humans that you can clearly
> see they would not want?

I think this is making some unwarranted assumptions.

I'm sure humans would be happy with their pleasure centres turned on; I just
don't think happiness particularly matters. Also, even if the pleasure
centre trick wouldn't apply, perhaps the simplest way to make humans happy
is to modify them so that they are always maximally happy.

Happiness is a means, as far as I can see. Making it an end misses the
point.

Note that when I say I don't think happiness particularly matters, I'm not
saying it is a matter of complete indifference. Obviously suffering is
ceteris paribus bad, and happiness is ceteris paribus good. It's just that
I don't think it matters to the point of optimizing for it.

--David.
