From: Stathis Papaioannou (stathisp@gmail.com)
Date: Tue Dec 04 2007 - 22:29:32 MST
On 05/12/2007, Matt Mahoney <matmahoney@yahoo.com> wrote:
> When you were a child, did you really think that your parents were acting in
> your best interest when they wouldn't give you what you wanted?
No.
> What makes you think you could be smarter than a machine that was designed to
> be smarter than you?
I won't be smarter than the machine, but I will still want to control
my own destiny as far as possible. That may involve allowing the
machine to make decisions on my behalf, while retaining the ability to
change my mind about this when I feel like it. This is no different to
what happens today with human experts: my dentist is a lot smarter
than I am when it comes to teeth, but ultimately it is up to me
whether I take his advice.
> But it doesn't really matter, because most people think the same way, so that
> is what we will build.
Yes, that's the point.
> Suppose we are successful and the AI actually does
> want to give us what we want right now. To the AI, our brains are simple
> computers that can be reprogrammed. Move some neurons around, and we will all
> be happy.
Hopefully not, if that's not what we want it to do. Hopefully my
dentist won't replace all my teeth with dentures unless I want him to,
either.
> Meanwhile, evolution will continue to select agents that don't always get what
> they want, that still fear death and then die.
You can only take evolutionary arguments so far. A society that
kills its unproductive members will, all else being equal, prevail
over a society that does not. What conclusions should we draw from
this about the nature of successful civilizations in the universe?
What conclusions should we draw about how we should behave?
-- Stathis Papaioannou