Re[2]: Complexity, Ethics, Esthetics (was re: Defining Right and Wrong)

From: Cliff Stabbert (cps46@earthlink.net)
Date: Wed Dec 04 2002 - 11:19:52 MST


Wednesday, December 4, 2002, 5:23:46 AM, Samantha Atkins wrote:

SA> Cliff Stabbert wrote:
>> Tuesday, December 3, 2002, 1:46:23 PM, Samantha Atkins wrote:

>> SA> All the above said though, I have no right to choose for anyone
>> SA> else. If they want the equivalent to being a wirehead then they
>> SA> must have room to choose that although not to bind others to
>> SA> supporting their decision directly.
>>
>> But here, we get into the subtle details (where Mr. S. is known to
>> hang out) of how one determines what an entity wants. If you're a
>> parent, you know that ultimately your child's happiness is better
>> served by a healthy diet than always giving in to the child's
>> *proclaimed* desire -- for McDonald's and candy, say (I am
>> conveniently sidestepping media saturation influence here, which does
>> play a big role).
>>
SA> I was not speaking of children and I don't think a metaphor of
SA> human adults as children relative to a FAI is at all
SA> appropriate. A FAI worth its salt will know that Friendliness
SA> relative to humans requires persuasion in non-coercive human
SA> activities.

Alright, I think we're talking past each other here. Of course I
don't have the right to deny others the choice to be a wirehead.
I was raising the issue of whether an _FAI_ would offer that option
and, if it did, to what extent it should try to persuade those
choosing it that there are better things.

I was not trying to reduce the FAI-human relationship to a simple
parent-child one, but there are analogous elements if:
  - self-actualization of human potential is "best" for humans
  - many humans will choose quick and shallow satisfaction over
    that deeper one
  - the FAI "knows better"

Here in the US, it's certainly not just children who eat too much
McDonald's and candy...

I don't think an FAI should force anybody to do anything. But the
question of where "persuasion" crosses that line is a bit tricky with
a superintelligence.

>> ======
>> A tangentially related issue:
>>
>> SA> Not to mention that the above is massively boring. You would
>> SA> have to remove part of human intelligence to have many people
>> SA> "happy" with simply continuous pleasure. Pleasure is also quite
>> SA> relative for us. Too much of a "good thing" results in the
>> SA> devaluation of that pleasure and even eventual repugnance.
>>
>> What if you could devise an "optimal path" -- the best rhythm of
>> alternating ups and downs, challenges and rewards -- is that something
>> a superintelligence should guide us along, or would that be _less than
>> optimally rewarding_ because we hadn't chosen that path completely
>> independently?

SA> What if we stop thinking up rather low grade "solutions" and
SA> think about higher level ones and how those may be made most
SA> likely? Human actualization is not about getting the most
SA> pleasure jollies.

Yes, that's my point: that in any given instance there may be an
optimal path towards actualization, consisting of the right sequence
of challenges and rewards. My question is whether we would feel
cheated out of "real" challenge if offered such a path (should it
exist).

Should an FAI offer such paths? Or should it restrict itself to
giving people freedom, i.e., disallowing the initiation of force?

If it does more, then what is that more and where are the lines it
shouldn't cross?

>> Except maybe to point out that the notion of "objective ethics" is at
>> least as difficult as the notion of "objective aesthetics".

SA> That is not a meaningful observation in this context.

Perhaps it is, for those who claim objective ethics are possible yet
would agree, if asked, that beauty is in the eye of the beholder, or
is determined by (cultural, historical, personal) context. If
aesthetics is context-dependent, surely ethics is too.

>> Somehow we
>> have to reconcile the notion that "it's all subjective" with the
>> notion that it's not _all_ _just_ subjective, that some things _are_,
>> dammit, better/more artful than others.

SA> It is impossible to reconcile opposites. It is not all just
SA> subjective so why should I reconcile what is to that spurious idea?

Because humans hold contradictory ideas -- which is why I used "the
notion that" rather than "the fact that". If we're going to build
superintelligences, we need to get beyond that and other paradoxes.

>> To tie this in with your
>> earlier statement, perhaps the ethical as well as the aesthetical is
>> that which increases your intelligence and / or the opportunities for
>> actualizing its potential...words such as "uplifting" are often
>> applied in such contexts.
>>

SA> Perhaps a shorter statement would be that the Good is that which
SA> actualizes the life/existence of the sentient beings involved.
SA> The "Good" applies both to judging/providing a partial basis for
SA> Ethics and Aesthetics.

I can agree with that statement, and I'm curious what role you feel
an FAI should play in regard to it.

--
Cliff

