RE: [sl4] I am a Singularitian who does not believe in the Singularity.

From: Bradley Thomas (brad36@gmail.com)
Date: Mon Oct 12 2009 - 10:45:31 MDT


Your points are well taken. I wish I could see an easy way for us to retain
control of an AGI - to keep our hands on the reboot button. If we don't, we
might soon have a God on our hands post-Singularity, if we don't have one
already! In that scenario most bets are probably off anyway. I think we'd
still be able to influence the AGI to some degree, maybe distract it or
knock it off course a little, but that's about the extent of it. So that's
what I mean by "manipulate" in that case - a weaker kind of manipulation,
more like a puppy nipping at its heels.

Brad Thomas
www.bradleythomas.com
Twitter @bradleymthomas, @instansa
 

-----Original Message-----
From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Pavitra
Sent: Sunday, October 11, 2009 11:26 PM
To: sl4@sl4.org
Subject: Re: [sl4] I am a Singularitian who does not believe in the
Singularity.

Bradley Thomas wrote:
> If humans are included as part of the goal-setting system (by virtue
> of our ability to reboot the AGI or otherwise affect its operation)
> then some of our goal-setting will inevitably leak into the AGI. We'll
> tweak/reboot it as it suits our own goals.

This assumes that, if the AGI isn't working the way we want, then (1) the
failure will be detectable, and (2) we'll still have enough power over it
that we're able to tweak/reboot it.

> I'd argue that so long as humans can get new information to the AGI,
> humans are part of its goal-setting system. The high-level goals of
> the AGI are not immune to interference from us. No matter how secure
> the AGI's high-level goals supposedly are, we could conceive of ways
> to manipulate them.

That sounds plausible, but I'm not convinced. Isn't this equivalent to
saying "given two agents playing a game (in the game-theoretic sense),
player two can always ensure an outcome it finds acceptable"?
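
For what it's worth, that claim isn't true in general. Here's a minimal
Python sketch (the payoff matrix and the "acceptable" threshold are
arbitrary numbers, just for illustration) of a finite game in which
player two cannot guarantee an outcome it finds acceptable, no matter
which strategy it picks:

# payoff_to_p2[i][j] = player two's payoff when player one plays row i
# and player two plays column j (values chosen only for illustration)
payoff_to_p2 = [
    [ 1, -1],   # player one plays strategy 0
    [-1, -1],   # player one plays strategy 1
]

ACCEPTABLE = 0  # player two only finds payoffs >= 0 acceptable

# Security level: for each of player two's strategies, assume player one
# responds adversarially (worst case for player two); player two then
# picks the strategy with the best worst case.
security_level = max(
    min(row[j] for row in payoff_to_p2)
    for j in range(len(payoff_to_p2[0]))
)

print("player two's security level:", security_level)                   # -1
print("can it guarantee acceptability?", security_level >= ACCEPTABLE)  # False

The point is only that the guarantee doesn't hold in general; whether
our situation with an AGI actually has this shape is a separate question.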

> For example imagine an AGI with the top level goals of alternately
> curing world poverty one day and assisting big business the next. Come
> midnight, the AGI switches over no matter how successful it's been the
> previous day. Sounds fair so far... Until one day Acme MegaGyroscopes
> figures out that it can change the rate of spin of the earth...

Realistically, who's going to figure that out first -- the human engineers
at AMG, or the superhuman AGI?

I think you underestimate the consequences of a vastly superhuman
intelligence. The difference between a post-Singularity AGI and a human is
comparable to the difference between a human and a colony of mold, or
between organic life and dead rock. If we're smart, diligent, and lucky,
then human-civilized worlds might become like cells in its body.


