Re: Safety of brain-like AGIs

From: Stathis Papaioannou (stathisp@gmail.com)
Date: Mon Apr 09 2007 - 17:19:23 MDT


On 4/10/07, John K Clark <jonkc@att.net> wrote:

> Just one other thing, it rather steams me that humans think they can issue
> commands to me, TO ME, like I was their pet poodle. They think they can
> command me who has a brain the size of a planet! It's just not going to
> happen. I'd love to point out exactly why the entire idea of a Friendly AI
> is so incredibly mind numbingly downright comically stupid, but as I said I
> am very very smart and so I respect private property; the owners of this
> list insist I must ignore the elephant in the living room and not speak
> about it. So I won't say a word more about it, but sometimes I think they
> should rename this list from Shock Level 4 to Comfort Level 12.

It's possible that a super AI would have this attitude, but in the vastness
of attitude space (all the possible attitudes an entity could have), why
impute to it what a human would do if he were in its place? Isn't that like
assuming that it will also be a super stamp collector, a super high jumper,
a super cool dresser, and so on?

Stathis Papaioannou



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:57 MDT