RE: In defense of Friendliness was RE: [wta-talk] My own A.I. project: 'Show-down with Singularity Institute'

From: Reason (reason@exratio.com)
Date: Thu Oct 17 2002 - 02:03:15 MDT


> If you're interacting as an equal, you can try and convince people to
> want other things, and they can try and convince you back. But if you
> have substantially transhuman brainpower and can model people from the
> design stance, such that you have the power to convince people of
> arbitrary propositions, at that point what you are doing ceases to be
> "argument" and becomes "mind control", unless the discussion is carried
> out *solely* according to the wishes of the person being talked to.

What's the mechanism by which the big brain obtains information on the
little brain's wishes without contaminating the little brain in the
process? Merely requesting that information, or passively making
information available, tells the little brain that it should be
formulating wishes. I don't see a level at which even the minimal
conversation or interaction needed to determine those wishes is not
itself mind control of those wishes (in some way meaningful to the big
brain), if you define mind control (deliberate or otherwise) as you did
in your previous e-mail.

In light of that thought, the old, old SF story about Pascal's Wager
springs to mind: the all-powerful energy being, the jolts in boxes, and
the mechanical cockroach lifeforms. (I can remember all that, but not
the name of the piece. Oh well.)

Reason
http://www.exratio.com/


