From: Joel Pitt (joel.pitt@gmail.com)
Date: Thu Jul 14 2005 - 20:34:09 MDT
Eliezer S. Yudkowsky wrote:
> See this post: http://sl4.org/archive/0401/7513.html regarding the
> observed evolutionary imperative toward the development of monoliths and
> the prevention of, e.g., meiotic competition between genes. Replicating
> hypercycles were assimilated into cells, cells assimilated into
> organisms, etc. In Earth's evolutionary history there was a tremendous
> advantage associated with suppressing internal competition in order to
> externally compete more effectively; under your postulates I would
> expect high-fidelity goal-system expanding monoliths to eat any
> individual replicators, much as fish eat algae.
Idea...
Instead of considering the AI as an expanding monolith in itself,
convince it (hardwire it) to think that it *is* the universe. Thus it
will be interested in suppressing internal competition, assuming it is
designed so that anything destructive that occurs in the universe causes
it pain or discomfort - including the increase of entropy. Again, as with
any simple idea like this there are considerations, such as it
preventing us from improving ourselves and using more energy -
since it might perceive that as a form of cancer - but such things are
likely to cause it only minimal discomfort unless a particularly selfish
transhuman decided to expand their power sphere a lot more than anyone else.
This all leads to bad things(TM) if we start to consider multiple
universes competing via natural selection, though - since an input of
energy would be needed to prevent the entropy increase of one universe,
and presumably the AI would have a persistent itch to do something about
it if possible.
Joel