From: Robin Lee Powell (firstname.lastname@example.org)
Date: Wed Jun 25 2008 - 12:36:26 MDT
On Wed, Jun 25, 2008 at 01:13:31PM -0500, Bryan Bishop wrote:
> On Wednesday 25 June 2008, Robin Lee Powell wrote:
> > John Clark has refused to even explain why "the AI will see
> > friendliness as something to be defeated" is different from
> > "humans will see their inclination to not slaughter babies for
> > fun as something to be defeated". I wouldn't bother; he really
> > doesn't seem to have actually thought this stuff through at all.
> I've been reading some of John's messages and I don't think he's
> said anything about ai defeating friendliness. Rather that the
> goal-based-approach to intelligence isn't going to do anything
> special. I might be wrong in my interpretation of his messages.
He has specifically said, repeatedly, that a super-intelligence
would treat the morality designed into it as something to be
overcome, or that it would somehow magically start ignoring it one
day, or some such. In any case, that a designed-in morality cannot
survive recursive self-improvement.
--
Lojban Reason #17: http://en.wikipedia.org/wiki/Buffalo_buffalo
Proud Supporter of the Singularity Institute - http://intelligence.org/
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT