From: H C (lphege@hotmail.com)
Date: Thu Jul 14 2005 - 22:05:13 MDT
All HUMANS include everything they see as part of themselves. We identify
explicitly with the observable universe, AND with perceived limits of
observable reality. Everything we observe becomes part of our overall self.
Duh.
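
As a rough toy sketch (not from any of the posts below; the class and
method names are purely illustrative), Patrick's recursively inclusive
self-concept might look something like this in Python:

# Toy sketch of a recursively inclusive identity: the self-set starts
# with the agent itself and absorbs every object the agent observes,
# so everything observed becomes part of the overall self.
# All names here (Agent, observe, self_set) are illustrative assumptions.
class Agent:
    def __init__(self):
        self.self_set = {"agent-core"}  # seed identity

    def observe(self, obj):
        # Any newly observed object is folded into the identity set.
        self.self_set.add(obj)

    def identifies_with(self, obj):
        return obj in self.self_set

ai = Agent()
for thing in ["star-1234", "human-42", "galaxy-M31"]:
    ai.observe(thing)
print(ai.identifies_with("human-42"))       # True
print(ai.identifies_with("unseen-object"))  # False until observed

The set only ever grows, which is the "included set would continue to
grow" property described below; this is just an illustration of the
recursion, not an actual design.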
>From: pdugan <pdugan@vt.edu>
>Reply-To: sl4@sl4.org
>To: sl4@sl4.org
>CC: pdugan@vt.edu
>Subject: RE: Universe identity (was: Fighting UFAI)
>Date: Thu, 14 Jul 2005 23:08:52 -0400
>
>This echoes an idea I've had regarding inclusive identity. The best way to
>make an AI selfless would be to wire a self-concept which recursively
>includes every known object as part of the overall self. If this process
>extended continuously to objects outside the included set, then the included
>set would continue to grow, allowing the AI to identify not just explicitly
>with the observable universe, but with the perceived limits of observable
>reality.
>
> Patrick Dugan
>
> >===== Original Message From Joel Pitt <joel.pitt@gmail.com> =====
> >Eliezer S. Yudkowsky wrote:
 > >> See this post: http://sl4.org/archive/0401/7513.html regarding the
 > >> observed evolutionary imperative toward the development of monoliths
 > >> and the prevention of, e.g., meiotic competition between genes.
 > >> Replicating hypercycles were assimilated into cells, cells assimilated
 > >> into organisms, etc. In Earth's evolutionary history there was a
 > >> tremendous advantage associated with suppressing internal competition
 > >> in order to externally compete more effectively; under your postulates
 > >> I would expect high-fidelity goal-system expanding monoliths to eat any
 > >> individual replicators, much as fish eat algae.
> >
> >Idea...
> >
 > >Instead of considering the AI as an expanding monolith in itself,
 > >convince it (hardwire it) to think that it *is* the universe. It will
 > >then be interested in suppressing internal competition, assuming it is
 > >designed so that anything destructive that occurs in the universe -
 > >including the increase of entropy - causes it pain or discomfort. As
 > >with any simple idea like this there are caveats, such as it preventing
 > >us from self-improving and using more energy - since it might perceive
 > >that as a form of cancer - but such things are likely to cause only
 > >minimal discomfort unless a particularly selfish transhuman decided to
 > >expand their power sphere a lot more than anyone else.
> >
 > >This all leads to bad things(TM) if we start to consider multiple
 > >universes competing via natural selection, though - since an input of
 > >energy would be needed to prevent the entropy increase of one universe,
 > >and presumably the AI would have a persistent itch to do something about
 > >it if possible.
> >
> >Joel
>