Re: [sl4] Our arbitrary preferences (was: A model of RSI)

From: Nick Tarleton (nickptar@gmail.com)
Date: Fri Sep 26 2008 - 09:16:21 MDT


On Fri, Sep 26, 2008 at 10:28 AM, Matt Mahoney <matmahoney@yahoo.com> wrote:

> --- On Fri, 9/26/08, Stuart Armstrong <dragondreaming@googlemail.com>
> wrote:
> > > Could someone remind me again, what are we trying to
> > > achieve with a singularity?
> >
> > The survival of some version of humanity. Beyond that, the usual
> > eternal meaningful happiness and immortality stuff. Beyond that, we all
> > disagree.
>
> We don't agree about the first part either. A singularity could take many
> forms that result in human extinction, extinction of all DNA-based life, or
> extinction of all life (the latter being the only stable attractor in an
> evolutionary process).

Well, the entire point is to see that this doesn't happen.

> At best it will result in godlike intelligence that (by definition) bears
> little resemblance to humanity, and which will be unobservable to any humans
> who are still present.

You've completely lost me; why couldn't we observe a superintelligence?

> Furthermore, our quests for happiness and immortality serve to increase our
> evolutionary fitness, but only if they cannot be obtained.

And?

