From: Michael Roy Ames (firstname.lastname@example.org)
Date: Sun Nov 24 2002 - 15:29:43 MST
In response to Gordon Worley, you wrote:
> What I'm really after is a superhumanly intelligent AI that
> a) values humans & sentient life generally
> b) is highly aware of ethical issues and their difficulty
> and importance
> I.e., an "Ethically Aware, Human-Friendly AI."
I consider this goal to be a good one (and to have a high level of
> But I think you mean the statement a different way -- you, like
> Michael Roy Ames, seem to believe that there is some True and
> Universal Moral Standard, which an FAI will find....
> Well, maybe it will. I'm not confident either way....
Neither am I confident of this outcome, but it's worth a shot, don't
you think? And as for my belief (or lack of it): I have none. The
definition of Rightness is just a definition. If it is useful, then
great! If not, scratch it and try again.
> My work toward an Ethically Aware, Human-Friendly, superhumanly
> intelligent AI is independent of the outcome of philosophical
> debates about the existence or otherwise of universal morality.
As it should be. Indeed, if universal morality turns out to be a
useless concept, I will also drop it like a hot coal. However, if
there is a way to 'ask the universe' a question like "Which of
these 38 options is the most Right?", then wouldn't that clear up
a lot of guessing? (This question is asked only half-rhetorically.)
Michael Roy Ames
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:41 MDT