From: Samantha Atkins (samantha@objectent.com)
Date: Sun Jan 11 2004 - 14:01:45 MST
On Sun, 11 Jan 2004 09:25:27 -0500
"Ben Goertzel" <ben@goertzel.org> wrote:
>
> Good morning fellow SL4-oids...
>
> I think Metaqualia is raising an important perspective, and I find the
> reactions to his posts a bit severe overall.
>
> We need to guard against this list becoming a philosophical orthodoxy!
And we need to examine perspectives to see if they are coherent and meaningful.
>
> On this list, it tends to be assumed that "Friendliness to humans" is an
> immensely important value to be transmitted to the superintelligent AI's we
> create...
>
> Metaqualia is questioning this "orthodoxy" -- which should be permitted,
> right? -- and proposing that, perhaps, we humans aren't so all-important
> after all ... that, perhaps, we should seek to inculcate a more *abstract*
> form of morality in our AGI's, and then let them, with their deep abstract
> morality and powerful intelligence, make their own judgment about the
> particular configurations of matter known as "humans"...
>
Questioning this is not a problem. The ideas presented as reasonable in that questioning are where the problem arose. Actually, for us folks, humans had better be really, really important. Yes, a general morality should treat the importance of humans as a particular case of the importance of sentient beings in general. But what Metaqualia posted goes far beyond this question. He speaks blithely of destroying reality itself. That is far beyond general questions of morality or the importance of humans. I am surprised you latch on to just this one interpretation of one part of the exchange.
> I note that, in my own AGI work, I intend to basically follow the SL4
> orthodoxy and inculcate "Friendliness to humans and other sentients" as a
> core value in my own AGI systems (once they get advanced enough for this to
> be meaningful).
>
Perhaps the question is backwards. Perhaps the question should be: what types of morality, what kinds of ethical principles and considerations, are likely to lead to the "best" outcomes? Then we face the thorny question of what "best" looks like, and we iterate yet again. Some put forth that the "best" outcome is the one with the least suffering. But a dead universe obviously guarantees that "best". Some put forth that the most pleasure (or positive qualia) is the "best". But a universe of maxed-out eternal druggies would then qualify.
> However, I also intend to remain open to the questioning of all values, even
> those that seem extremely basic and solid to me -- even the SL4
> orthodoxy....
>
Speaking of an SL4 "orthodoxy" is a bit of a slur. If we are interested in the creation of greater-than-human intelligence, then we need to answer what it is we are interested in it for. We will also be held accountable to our fellow human beings for what we propose. If we cannot justify the basic goodness and sanity of our choice, then we should not proceed. If we proceed anyway, then we should not whine much when those who are concerned with the outcome vis-à-vis human beings, or more general levels of sentient being, decide to stop us.
> One problem I have with Metaqualia's perspective is the slipperiness of this
> hypothesized abstract morality. Friendliness to humans is slippery enough.
> His proposed abstract morality ---- about the balance between positive and
> negative qualia ---- is orders of magnitude slipperier, since it relies on
> "qualia" which we don't really know how to quantify ... nor do we know if
> qualia can be reliably categorized as positive vs. negative, etc.
>
> Even if IN PRINCIPLE it makes sense to create AGI's with the right abstract
> morality rather than a concrete Friendly-to-humans-and-sentients morality,
> this seems in practice very hard because of the difficulty of formalizing
> and "pinning down" abstract morality....
>
Friendliness to sentients, defined and buttressed very well, could probably be key to an optimal abstract morality. There may well be no deeper abstraction that does not include the very entities with which morality is formed to specify the "best" interactions. To abstract it away further may be to talk about something which is no longer "morality" at all.
> I also note that the gap between Metaqualia and the SL4 orthodoxy may not be
> so big as it appears.
>
> If you replace "Friendly to humans" with "Friendly to humans and sentients"
> in the SL4 orthodox goal system, then you have something a bit closer to
> Metaqualia's "increase positive qualia" -- IF you introduce the hypothesis
> that sentients have more qualia or more intense qualia than anything else.
> Right?
>
I think the "qualia" argument is fundamentally a hedonistic fallacy, but I certainly agree that "all sentients" broadens and deepens the basis of morality.
> And when you try to quantify what "Friendly to X" means, you have two
> choices
>
> -- encouraging positive qualia on the part of X
> -- obeying what X's volition requests, insofar as possible
>
I don't at all see that these are the only choices. There are also choices such as:
-- do what you believe is optimal for X's benefit, where this is possible;
-- interact peaceably with X for mutual benefit where possible, and otherwise leave X to its own devices;
-- ensure, if possible, that X doesn't do itself in before it learns.
> But these need to be balanced in any case, because human volition is famous
> for requesting more than is possible. In choosing which of the
> mutually-contradictory requests of human volition to fulfill, our
> hypothetical superhuman AI must make judgments based on some criterion other
> than volition, e.g. based on which of a human's contradictory volitions will
> lead to more positive qualia in that human or in the cosmos...
>
I don't see that Friendliness has anything to do with fulfilling human requests per se.
-s