From: Gordon Worley (firstname.lastname@example.org)
Date: Sat Jul 27 2002 - 10:15:34 MDT
On Saturday, July 27, 2002, at 11:14 AM, James Higgins wrote:
> Besides, the sample size for this test is too small to draw any
> conclusions from. And I don't think the tests done by Justin Corwin
> have any significance, since the AI researchers are not going to grab
> some random person off the street and give them the power to release
> the AI.
The random person on the street has the same kind of brain an AI
researcher has: a human one. When it comes to an SI, it won't be a
matter of just convincing (though that will be one likely mode of
attack), but of using various techniques to attack the human brain:
hypnosis, sensory confusion, magic words (you never know), and more
ways that we haven't thought up yet. The only case in which the AI
researcher has an advantage is when the AI is infrahuman or
human-level and the researcher is someone like Eliezer, who knows what
an unFriendly AI at that level might try in order to escape, and who
can watch for those attempts and stop the AI if he sees what it is
doing. Unfortunately, this advantage is very slight and only really
applies to mind attacks, not brain attacks (hence the advantage
existing only against infrahuman and human-level AIs).
All humans look the same to SIs, even the ones that created ver.
--
Gordon Worley
http://www.rbisland.cx/
email@example.com
PGP: 0xBBD3B003

`When I use a word,' Humpty Dumpty said, `it means just what I choose
it to mean--neither more nor less.'
--Lewis Carroll
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:40 MDT