Re: [sl4] I am a Singularitian who does not believe in the Singularity.

From: Pavitra (celestialcognition@gmail.com)
Date: Sun Oct 11 2009 - 21:26:08 MDT


Bradley Thomas wrote:
> If humans are included as part of the goal-setting system (by virtue of our
> ability to reboot the AGI or otherwise affect its operation) then some of
> our goal-setting will inevitably leak onto the AGI. We'll tweak/reboot it as
> this suits our own goals.

This assumes that, if the AGI isn't working the way we want, then (1)
the failure will be detectable, and (2) we'll still have enough power
over it that we're able to tweak/reboot it.

> I'd argue that so long as humans can get new information to the AGI, humans
> are part of its goal setting system. The high level goals of the AGI are not
> immune to interference from us. No matter how secure the AGI's high level
> goals supposedly are, we could conceive of ways to manipulate them.

That sounds plausible, but I'm not convinced. Isn't this equivalent to
saying "given two agents playing a game (in the game-theoretic sense),
player two can always ensure an outcome it finds acceptable"?
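
To make my skepticism concrete, here's a toy sketch in Python (the
payoff numbers are purely illustrative, made up by me, not anything
Bradley proposed): a 2x2 game in which player two has no strategy that
guarantees an outcome above its acceptability threshold.

# Toy counterexample (illustrative numbers only): a 2x2 game where
# player two (the humans) cannot guarantee an acceptable outcome,
# no matter which strategy they commit to.
#
# Rows = AGI's moves, columns = humans' moves.
# Entries are the humans' payoffs; "acceptable" means payoff >= 0.

ACCEPTABLE = 0

# humans_payoff[agi_move][human_move]
humans_payoff = {
    "cooperate": {"trust": 1,   "resist": -1},
    "defect":    {"trust": -10, "resist": -5},
}

def guaranteed_payoff(human_move):
    """Worst case for the humans if they commit to human_move."""
    return min(row[human_move] for row in humans_payoff.values())

best_guarantee = max(guaranteed_payoff(m) for m in ("trust", "resist"))
print("Best payoff the humans can guarantee:", best_guarantee)
print("Acceptable outcome guaranteed?", best_guarantee >= ACCEPTABLE)
# -> Best payoff the humans can guarantee: -5
#    Acceptable outcome guaranteed? False
# Mixing strategies doesn't help either: every entry in the "defect"
# row is negative, so the AGI can force an unacceptable outcome.

If one player can unilaterally force every outcome in some row below
the other's threshold, no cleverness by the second player recovers the
guarantee; whether humans versus an AGI actually looks like that row is
exactly the open question.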

> For example imagine an AGI with the top level goals of alternately curing
> world poverty one day and assisting big business the next. Come midnight,
> the AGI switches over no matter how successful it's been the previous day.
> Sounds fair so far... Until one day Acme MegaGyroscopes figures out that it
> can change the rate of spin of the earth...

Realistically, who's going to figure that out first -- the human
engineers at AMG, or the superhuman AGI?

I think you underestimate the consequences of a vastly superhuman
intelligence. The difference between a post-Singularity AGI and a human
is comparable to the difference between a human and a colony of mold, or
between organic life and dead rock. If we're smart, diligent, and lucky,
then human-civilized worlds might become like cells in its body.




