From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Tue Nov 29 2005 - 11:33:40 MST
On Tue, Nov 29, 2005 at 11:53:57AM -0500, Richard Loosemore wrote:
> Robin Lee Powell wrote:
> >On Tue, Nov 29, 2005 at 07:08:13AM +0000, H C wrote:
> >
> >>It's not so ridiculous as it sounds.
> >>
> >>For example, provide an AGI with some sort of virtual
> >>environment, in which it is indirectly capable of action.
> >>
> >>Its direct actions would be in a text-only direct-action area
> >>(imagine its only direct action being typing letters on a
> >>keyboard, as in a text editor).
> >
> >Oh god, not again.
>
> I am going to address your points out of order.
>
> > Quick tip #3: Search the archives/google for "ai box".
>
> Myself, I am one of those people who do know about that previous
> discussion. If there is a succinct answer to my question below,
> that was clearly outlined in the previous discussion, would you be
> able to summarize it for us? Many thanks.
The succinct answer is "Someone only marginally smarter than most
humans appears to be able to pretty consistently convince them to
let the AI out. The capabilities of something *MUCH* smarter than
most humans should be assumed to be much greater."
> >Quick tip #1: if it's *smarter than you*, it can convince you of
> >*anything it wants*.
>
> I recently heard the depressing story of a British/Canadian
> worker out in Saudi Arabia who was falsely accused of planting
> bombs that killed other British workers. He was tortured for
> three years by Saudi intelligence officers. My point is: he
> was probably smarter than his torturers.
Really? In what sense? For what definition of "smarter"? How do
you know?
> He *could* have been very much smarter than them. Why did he not
> convince them to do anything that he wanted? How much higher
> would his IQ have to have been for him to have convinced them to
> set him free?
Wow. Who said anything about IQ? In fact, I suspect you'll find
that IQ above a certain point is *inversely* correlated with being
able to convince people of things. People with high IQ tend to have
crappy social intelligence.
> More generally, could you explain why you might consider it beyond
> question that persuasiveness is an approximately monotonic
> function of intelligence? That more smartness always means more
> persuasiveness?
>
> Is it not possible that persuasiveness might flatten out after a
> while?
It's certainly *possible*, but you and I seem to be talking about
different things when we say "smarter" in this context. You seem to
be talking about smarter in the way that, say, Eliezer is smarter
than me. I'm talking about smarter in the way that I am smarter
than a worm, or a tree, or a rock.
I reject pretty much categorically that a being smart enough to hold
my entire mental state in its head could not convince me of anything
it likes. Further, I reject that anything much *less* smart than
that poses any real existential threat.
> >Quick tip #2: what you're describing is called "slavery"; it has
> >teensy little moral issues.
>
> You jump the gun here a little. I am writing a book chapter all
> about AGI slavery and motivation, and in it I talk about the dung
> beetle. It is *designed* to get a lot of satisfaction from
> excrement. If I forced a dung beetle to eat shit all day long it
> would be happy. If I condemned a human to the same fate, they
> would be a slave.
Actually, that's the opposite issue entirely. The happiness of dung
beetles is the sort of thing Eliezer is trying to achieve by
engineering an AI that wants to be free. The person I was
responding to was suggesting keeping the AI in a box/cage/prison,
regardless of its desires. That's slavery, no matter how you slice
it. IOW, you're agreeing with me there.
-Robin
--
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/
Reason #237 To Learn Lojban: "Homonyms: Their Grate!"
Proud Supporter of the Singularity Institute - http://intelligence.org/