Re: My attempt at a general technical definition of 'Friendliness'

From: Thomas Buckner (tcbevolver@yahoo.com)
Date: Sat Jan 22 2005 - 00:31:55 MST


--- Marc Geddes <marc_geddes@yahoo.co.nz> wrote:

> --- Thomas Buckner <tcbevolver@yahoo.com> wrote:
> > I agree with just about everything Harvey
> > Newstrom says, and am especially impressed with
> > the definition scheme he presents; but I do have
> > some disagreement as follows:
>
> I (Marc) was the one trying to come up with a
> definition scheme. Harvey was giving a critique.
>
I was referring (sloppily) to the nested precise
engineering definitions he used as an example. As
to your definition scheme, I think most of the
critique is valid; but I know I couldn't do any
better, and wouldn't try. I think a precise
definition of Friendliness is impossible (well,
problematic) until we fully grasp our own needs
and wants, now *and* in the future! Suppose our
well-placed limitations on the FAI succeed
perfectly in preventing it from going against our
will, and in the process prevent it from doing
something unforeseeable that we need?

When I think of the maze of possible exigencies
we might need to define for FAI, of all the
individual decisions we might need to make on
whether to permit or forbid the FAI to act, I
fear ending up with a nested precise definition
tree that is itself too complex for human
comprehension! I hope I'm wildly wrong about
that.
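
To make that worry concrete, here is a toy
sketch (purely illustrative Python; every name
in it is my invention, not drawn from any actual
FAI design). Each contingency either settles
permit/deny outright or defers to finer-grained
sub-cases, so the number of leaf decisions
multiplies at every level:

    # Illustrative only: a nested permit/deny definition tree.
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class DefinitionNode:
        """One contingency: decide permit/deny, or defer to sub-cases."""
        condition: str
        permit: Optional[bool] = None  # None => decided by children
        children: list["DefinitionNode"] = field(default_factory=list)

    def leaf_count(node: DefinitionNode) -> int:
        """Count the individual permit/deny decisions humans must make."""
        if not node.children:
            return 1
        return sum(leaf_count(c) for c in node.children)

    # A tiny example tree: three leaf decisions already.
    root = DefinitionNode("FAI proposes an action", children=[
        DefinitionNode("affects a human directly?", children=[
            DefinitionNode("with informed consent?", permit=True),
            DefinitionNode("without consent?", permit=False),
        ]),
        DefinitionNode("irreversible side effects?", permit=False),
    ])
    print(leaf_count(root))  # -> 3

With b branches per contingency and d levels of
nesting, a complete tree needs on the order of
b**d separate human decisions; ten branches and
six levels of nesting is already a million.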

Friendliness is a brutally complex issue. Think
how simple it was to define success with the
Apollo moon landing. Go. Get rocks and photos.
Come back alive. Hard to do, but easy to define.
With FAI, on the other hand, one might say
defining success *is* the hard part. The
implementation cannot even properly begin with
ill-defined goals.

Be of good cheer: if it were easy, everybody
would be doing it.

> I don't think the general concept of an Omega
> Point necessarily implies Tipler's particular
> scheme. As you point out, it appears that the
> specific model Tipler gave in his book has been
> falsified by empirical observations.
>
> The general condition for an Omega Point is
> simply that the average rate of information
> processing per unit space approaches infinity
> in bounded time. There could be many different
> ways that this could come about.
>
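
Putting Marc's condition into symbols as I read
it (the notation is mine, not his): let $r(t)$ be
the average information-processing rate per unit
volume at time $t$. An Omega Point in this
general sense then requires some finite horizon
$t_f$ with

    \lim_{t \to t_f^{-}} r(t) = \infty,
    \qquad
    \int_{t_0}^{t_f} r(t)\, dt = \infty,

i.e. infinitely many operations completed before
a bounded time, by whatever physical mechanism.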

I sure hope so. Another possibility I might
suggest: some other universes do have the proper
collapsing properties to enable infinite
processing under Tipler's scheme, and could
recreate copies of us.
experiences all the goodies in that Omega Point,
is it really 'me'? Who knows? I would have no
control over that, being in this universe. Might
as well pray. Unfortunately, it's lack of control
that makes the concept shade into religion. Even
if one did live in that collapsing universe, the
collapse would be aeons in the future. I regard
Omega Point as an umbrella term for any such
infinite free lunch, and worry more about
immediate crises (i.e., this century, not x^x
years from now). After all, what if we achieve
FAI but find that although it can give us a
galactic civilization of dazzling splendor, even
it cannot get us to the Omega Point because we
were born in the wrong universe with an
inescapable heat death? (It's 2 degrees
Fahrenheit outside right now, and that's probably
affecting my thinking).

Tom Buckner
