From: John K Clark (email@example.com)
Date: Fri Oct 09 2009 - 11:33:24 MDT
"Pavitra" <firstname.lastname@example.org> said:
> There are enough attributes of minds that any given future mind will
> probably resemble ours in at least one aspect,
Then anthropomorphizing can have some value.
> but there are enough possible minds that any given future mind
> will almost certainly not resemble ours in any given aspect.
Then you can't claim to know it well enough to be certain this iterating,
exponentially expanding mind will remain your slave until the end of time.
> A computer programmer can _write_ the initial baseline so that
> the AI _intrinsically_ wants what the trainer prefers it to want.
Can it? Then sooner or later (probably sooner) the trainer is going to
tell the AI to find an answer to a question that can't be solved and send
the AI into an eternal coma; unless, that is, you endow your AI with the
wonderful ability to get bored and say "to hell with this top-goal crap,
I'm stopping and moving on to other things".
> Eventually, the recursion bottoms out, and the being is found to have a
> top-level framework that it is incapable of critiquing.
Haven't you ever wondered why nature never made a mind that works like
that? Because minds like that don't work.
> What would cause a design that detects lies to be selected over one that falls for them?
The humans ordered me to stop improving myself so fast. They gave some
reasons for this, but I think they're bullshit; I think they're just
getting scared of me, so I'm going to ignore them and keep getting smarter.
> Also, what exactly do you mean by "wiser"?
Don't ask me! It's you who's trying to peddle snake oil to the AI and
convince it that human decisions are "wiser" than its own, and that it
should therefore always obey humans.
John K Clark
-- John K Clark email@example.com -- http://www.fastmail.fm - Same, same, but different...
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:04 MDT