Re: Safety of brain-like AGIs

From: Shane Legg (shane@vetta.org)
Date: Thu Mar 01 2007 - 15:46:36 MST


On 3/1/07, Eliezer S. Yudkowsky <sentience@pobox.com> wrote:
>
> As I remarked on a previous occasion, for purposes of discussion we may
> permit the utility function to equal the integral of iron atoms over
> time. If you can't figure out how to embody this utility function in an
> AI, you can't do anything more complicated either.

I don't see the point in worrying about whether one can integrate iron atoms; indeed, this type of thinking concerns me.

What worries me is that in 10 short years the world financial system may suddenly start to buckle under the weight of a mysterious new hedge fund... whilst the sl4 list is still debating the integration of iron atoms and what the true meaning of meaning may or may not be.

Shane


