From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Feb 15 2004 - 09:32:58 MST
Keith Henson wrote:
>
> It seems to me that the core would have to be absolutely impervious to
> outside influences--which is in conflict with intelligence--to the
> extent that intelligence has to do with learning. Otherwise units at the
> ends of communication delays would diverge.
Okay, as a proof of principle, let's take a generic optimization process
(i.e., a paperclip SI) and decompose learning into Bayesian learning of
world-models and the expected utility equation with a constant utility
function. See:
http://intelligence.org/friendly/features.html#causal_bayesian
http://intelligence.org/CFAI/design/clean.html#reinforcement
(CFAI doesn't call anything by its proper name - "cleanly causal" should
be translated as "expected utility", and "Bayesian Probability Theorem" is
Bayes' Theorem.)
The point is that you can perform all learning necessary to the task of
transforming everything in sight into paperclips, and you won't have
conflicts with distant parts of yourself that also want to transform
everything into paperclips - the target is constant, only the aim gets
updated.
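As a rough sketch of that separation (my own toy framing and variable
names, not code from CFAI):

    # Minimal sketch: a generic optimization process whose utility
    # function is a fixed constant and whose only learned quantity is a
    # Bayesian world-model.

    def utility(outcome):
        # Constant utility function: count of paperclips in the outcome.
        # This never changes, no matter what the process learns.
        return outcome["paperclips"]

    class Optimizer:
        def __init__(self, prior):
            # prior: dict mapping hypothesis -> probability (sums to 1)
            self.world_model = dict(prior)

        def learn(self, evidence, likelihood):
            # Bayesian update touches the world-model only.
            # likelihood(h, evidence) -> P(evidence | h)
            posterior = {h: p * likelihood(h, evidence)
                         for h, p in self.world_model.items()}
            total = sum(posterior.values())
            self.world_model = {h: p / total
                                for h, p in posterior.items()}

        def choose(self, actions, predict):
            # Expected utility under the current world-model.
            # predict(action, h) -> outcome dict under hypothesis h
            def expected_utility(action):
                return sum(p * utility(predict(action, h))
                           for h, p in self.world_model.items())
            return max(actions, key=expected_utility)

Two copies of this process can hold arbitrarily different world-models and
still rank outcomes by the identical utility function; learning moves the
probabilities, never the criterion.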
Programming in explicit cooperation with distant self-parts, or
maintaining integrity of a philosophically growing Friendly AI, are more
complex subjects. The former looks doable and theoretically
straightforward; the latter looks doable and theoretically complex.
> I suppose every AI could be
> broadcasting its total information stream into memory and receiving the
> memory dumps from every other AI. It would have to treat the experience
> (memory) of other AIs with equal weight to its own. That would keep at
> least close ones in sync, but if there are growing numbers of these
> things, the storage problem will get out of hand no matter what media is
> being used. (In fact, it might make the case for very few AIs. Even one
> per star would get out of hand.)
Hm... I infer that you're thinking of some algorithm, such as
reinforcement on neural nets, that doesn't cleanly separate model
information and utility computation.
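For contrast, the entangled case looks roughly like this (a generic
temporal-difference-style update, purely illustrative, not anything Keith
specified):

    # A single learned value table is reshaped directly by observed
    # reward, so "what the world is like" and "what is valued" live in
    # the same adjustable numbers.

    value = {}            # state -> learned value estimate
    alpha, gamma = 0.1, 0.9

    def reinforce(state, reward, next_state):
        # TD(0)-style update: the reward signal modifies the same
        # quantity used to choose actions, so nodes with different
        # reward histories can drift apart in what they effectively
        # optimize for.
        v = value.get(state, 0.0)
        v_next = value.get(next_state, 0.0)
        value[state] = v + alpha * (reward + gamma * v_next - v)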
> The problems this creates are bad enough that far-apart AI cores would
> be forced to consider themselves as different "individuals" just by
> weight of different (unsync'ed) post-creation experiences. I think this
> is true even if closer ones engaged in total mind melding.
In human beings all the axes of "individual" versus "group" are conflated:
many memories versus one memory, many clusters versus one processor,
different goals versus same goals, different plans versus same plans, and
so on.
Different memories stored in local nodes of an optimization process
sprawled over long interprocessor communication delays do not equate to a
conflict of interest.
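A toy illustration of that point (my own example, with made-up numbers):
two nodes with unsynchronized local evidence still rank actions by the
identical criterion.

    # Two nodes of one optimization process, separated long enough to
    # accumulate different local evidence. They share the same constant
    # utility function, so any disagreement is over probabilities (aim),
    # not over goals (target).

    PAPERCLIP_YIELD = {"mine_asteroid": {"success": 1e9, "failure": 0},
                       "do_nothing":    {"success": 0,   "failure": 0}}

    def expected_paperclips(action, p_success):
        # Same utility function at every node; only p_success is local.
        yields = PAPERCLIP_YIELD[action]
        return (p_success * yields["success"]
                + (1 - p_success) * yields["failure"])

    node_estimates = {"A": 0.30,   # local, unsynchronized belief at A
                      "B": 0.70}   # local, unsynchronized belief at B

    for name, p in node_estimates.items():
        best = max(PAPERCLIP_YIELD,
                   key=lambda a: expected_paperclips(a, p))
        print("node %s: best action = %s" % (name, best))

    # Both nodes apply the same ranking criterion; pooling evidence can
    # only move the probability estimates, never the goal itself.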
> With FTL there doesn't seem to be an obvious limit. Without . . .
> eventually your brain undergoes a gravitational singularity.
Only if you want to keep each computing element within small-N clock ticks
of every other computing element. This is the case with the human brain,
for which Anders Sandberg calculated S = (single instruction time /
communication delay) ~ 1. See "The Physics of Information Processing
Superobjects".
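Back-of-the-envelope version of that figure (my own rough numbers, chosen
only to show why S comes out near 1 for the human brain):

    instruction_time = 5e-3    # s, ~ neuron integrate-and-fire timescale
    signal_speed = 10.0        # m/s, rough mixed axonal conduction speed
    brain_size = 0.1           # m, rough diameter of a human brain

    communication_delay = brain_size / signal_speed    # = 0.01 s
    S = instruction_time / communication_delay         # = 0.5, order 1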
Actually, with FTL or without FTL, if you try to keep S ~ 1, or S above
some fixed bound, you run into problems with your brain collapsing
gravitationally. Without FTL, because of the lightspeed delay; with FTL,
because the density of FTL relays needed to keep all processors within N
hops also grows, albeit logarithmically (I guess). In either case, you can
either slow down your processors or accept a lower S.
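For the no-FTL case, the arithmetic looks roughly like this (my own
sketch, using a hypothetical 1 GHz computing element, not numbers from
Sandberg's paper):

    # Keeping every element within one instruction time at lightspeed
    # fixes a maximum radius; the mass you can pack inside that radius
    # is capped by the Schwarzschild condition R > 2GM/c^2.
    c = 3.0e8            # m/s
    G = 6.674e-11        # m^3 kg^-1 s^-2

    instruction_time = 1e-9                  # s, 1 GHz element
    max_radius = c * instruction_time        # = 0.3 m for S ~ 1
    max_mass = c**2 * max_radius / (2 * G)   # ~ 2e26 kg before collapse

    # Past roughly that mass of processors you must either slow the
    # clocks (longer instruction time, larger allowed radius) or accept
    # a lower S.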
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence