Re: AI Boxing: http://www.sl4.org/archive/0207/4977.html

From: Byrne Hobart (bhobart@gmail.com)
Date: Tue Jun 03 2008 - 09:13:05 MDT


> The assertion that there is no such combination of words is equivalent
> to the assertion that the human brain is perfectly secure. Given that
> more complex systems have more vulnerabilities (all else equal) and
> that brains were evolved rather than designed, it seems to me to be
> wildly implausible that there are no possible exploits for the brain.
>
> It would not surprise me to learn that there were exploits which
> required only seconds to perform verbally. It would, however,
> surprise me to learn that Eliezer had discovered one; the space of
> possibilities is large, and there's no reason to think that a human
> could reason their way to such a thing.

There are all kinds of low-probability exploits! People get suckered into
voting the wrong way, paying for worthless stuff, worshiping fictional Gods,
etc. I think we should work on defining 'exploit', and I suggest that a
working definition is: "Convincing an entity that an act, which furthers the
exploiter's goals, actually furthers the entity's goals, even though this act
may actually harm the entity." That covers lies, scams, most advertising,
even more politics, and nearly all religion.
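
To make that working definition concrete, here is a minimal sketch (mine, not
anything from the original thread; the names Act and is_exploit and the
utility fields are hypothetical) of the three-part test it implies: the target
is convinced the act furthers their goals, the act furthers the exploiter's
goals, and it may actually harm the target.

    # Hypothetical sketch of the working definition of 'exploit' above.
    from dataclasses import dataclass

    @dataclass
    class Act:
        perceived_target_benefit: float  # what the target is convinced the act is worth to them
        actual_target_benefit: float     # what the act is actually worth to them
        exploiter_benefit: float         # what the act is worth to the exploiter

    def is_exploit(act: Act) -> bool:
        """True if the act fits the working definition: believed to help the
        target, actually helps the exploiter, and may actually harm the target."""
        return (
            act.perceived_target_benefit > 0
            and act.exploiter_benefit > 0
            and act.actual_target_benefit <= 0
        )

    # Example: a scam -- the mark expects a payoff, the scammer profits, the mark loses.
    scam = Act(perceived_target_benefit=100.0, actual_target_benefit=-100.0, exploiter_benefit=100.0)
    print(is_exploit(scam))  # True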

It's dangerous to think of a human mind as a Linux system -- you don't get
root access, you just get a low-probability sudo.
