Re: [sl4] Our arbitrary preferences (was: A model of RSI)

From: Martin (blich@gmx.de)
Date: Fri Sep 26 2008 - 18:37:09 MDT


If we are really lucky (read: the AI has a utility function that serves
humans without changing them), it might happen that the AI leaves, but
leaves a lesser AI behind to take care of its creators.

I never understood the concept of "leaving for good" for whole
civilisations. Usually someone stays behind.

Martin

> I can see a transition to singularity that begins with great reams of
> future technology and alien blueprints unrolling from a thousand
> supercomputer centers where AI researchers are working explicitly to
> benefit humanity, but that, after an initial influx of technologies
> that remove all survival pressure from the organics, results in a
> rapidly deepening alienation between the organic and electronic substrates.
> Human enclaves that were content with food/water replicators and
> self-assembling structures might not go back to the source, the
> post-singularity mind, often or at all.
>
> Communicating with it could become an eccentric pursuit, and rapidly
> an impossible one. After a few generations like this, would anyone
> notice if the superintelligence left Earth for good, before its
> gifts started breaking down? Maybe this has happened many times
> already.


