RE: Fighting UFAI

From: Ben Goertzel (ben@goertzel.org)
Date: Thu Jul 14 2005 - 13:32:21 MDT


Hi,

> Your suggested AI design strategy B strikes me as a hideous mistake
> under the guise of motherhood and apple pie, for reasons we have
> already discussed.

I'm not entirely happy with it either, but it at least seems *possibly*
workable, unlike Collective Volition ;-)

For sure, I wouldn't release a highly clever AGI with that goal system (or
any other one) today; it's obvious that a lot more research is needed...

> Aside from that, I accept your correction. All utility functions that
> do not contain explicit specific Friendly complexity that attaches
> intrinsic utility to e.g. the life of humans, for whatever reason,
> are equally dangerous. Whether the utility function reads "humans" or
> "sentient beings" is a separate issue; I will concede that humans are
> a special case of sentient beings. Basically I meant to say that I
> don't give a damn whether our future light cone ends up as paperclips
> or staples.

OK, in that case I think I basically agree with you...

ben
