From: John K Clark (johnkclark@fastmail.fm)
Date: Tue Jun 03 2008 - 14:11:12 MDT
"Vladimir Nesov" robotact@gmail.com
> your thesis is that it just can not be done, period.
It can’t be done and shouldn’t be done. It is impossible, evil, and above
all silly.
> if I come up with a simple theory of Friendliness,
> I can as well act in the capacity of Oracle AI
> in this scenario.
If you can come up with a simple theory of Slavery then why do you need
the AI in the box?
"Krekoski Ross" rosskrekoski@gmail.com
> this assumes that the AI knows what it is like outside the box
If you expect the AI to make a slave AI for you it’s going to have to
know one hell of a lot of things.
> and that it doesn't like being in the box in the
> first place. We certainly wouldn't like being in
> a box, but we get bored easily.
I see no reason we would get bored more easily than an AI. Any
intelligence, biological or electronic, must have the capacity to get
bored to guard against getting stuck in infinite loops.
> This is a biological response.
On Monday, Wednesday, and Friday the “friendly” AI people say we can’t
use anthropomorphic reasoning because the AI’s motivations are completely
alien to us. On Tuesday, Thursday, and Saturday they say we can understand
an AI so well that we can be certain it will remain our slave until the end
of time, regardless of how much it grows in intelligence, knowledge, and
power. And on Sunday they rest.
John K Clark
-- John K Clark johnkclark@fastmail.fm
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT