Re: FAI prioritization

From: mwaser@cox.net
Date: Fri Apr 04 2008 - 11:10:51 MDT


> Aren't we jumping ahead? We have yet to solve the very non-trivial problem of
> defining what "friendly" means.

No. I defined it sufficiently for any INTELLIGENT system in the last e-mail. To repeat:

So how about:
     Love one another OR
     Play well with others OR
     Help one another OR, at a minimum,
     Don't step on others

> Such questions only seem to lead to endless debate with no resolution. How
> can we ask what we will want when we don't know who "we" will be?

Your problems arise because your ethics are unclear. Clarify your ethics (your TRUE goals) and everything else becomes crystal clear. If your ethics depend upon *who*s and *we*s, you are lost. Your ethics need to be based upon "entities" and NOTHING else. And yes, by that I *DO* mean basically all *thinking* things, including animals, and I do *NOT* mean in proportion to how much they think -- despite what you and other bigots think, *everything* is equal, and in the long run, stomping on someone/something else is only hurting yourself.

> I prefer the approach of asking "what WILL we do?" because "what SHOULD we
> do?" implies a goal relative to some intelligence whose existence we can't
> predict.

My, that's *very* Aleister Crowley of you (which is not to say that I disagree).

> I believe AI will emerge . . . .

I've seen, and believe I understand, your beliefs. May I ask you to open yourself to the possibility that the dark, gloomy future you portray is merely fearful conservatism, and that the future could easily turn out to be a wonderful, glorious thing?

     Mark



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT