From: Christopher Healey (CHealey@unicom-inc.com)
Date: Mon May 17 2004 - 09:07:07 MDT
Also along this line of attack, what do you think the results would be as that SAI increased both the scope and timeframe of its predictive horizon?
Using any type of goal system that hedges between an individual's good and the "greater" good, it seems a strong possibility that the sheer magnitude of the greater good's weighting might quickly outstrip any individual's weighting, especially when that predictive horizon envelops the cusp of a critical systemic decision (endgame scenario, etc.). We'd consider that behavior degenerate, of course, but if it's a structural possibility of the architecture, then it's just working within the (poor) design parameters.
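To make that concrete, here's a toy sketch (my own invention, not anyone's actual architecture) of a simple additive goal system where the collective term is summed over everyone inside the predictive horizon; the function names and numbers are made up purely for illustration:

# Toy sketch only: an additive goal system in which the "greater good"
# term is summed over everyone inside the predictive horizon, while one
# individual's welfare enters exactly once.

def aggregate_utility(individual_welfare: float,
                      per_person_benefit: float,
                      people_in_horizon: int,
                      individual_weight: float = 1.0,
                      collective_weight: float = 1.0) -> float:
    """Weighted sum of one person's welfare and the summed collective gain."""
    greater_good = per_person_benefit * people_in_horizon
    return (individual_weight * individual_welfare
            + collective_weight * greater_good)

# As the horizon widens (more people, longer timeframe), the collective
# term grows without bound while the individual term stays fixed, so an
# option that trades the individual for a tiny per-person gain eventually wins.
for n in (10, 10_000, 10_000_000):
    sacrifice = aggregate_utility(individual_welfare=-100.0,
                                  per_person_benefit=0.01,
                                  people_in_horizon=n)
    spare = aggregate_utility(individual_welfare=0.0,
                              per_person_benefit=0.0,
                              people_in_horizon=n)
    print(n, "sacrifice" if sacrifice > spare else "spare")

Widen the horizon far enough and the fixed individual term is just noise; the crossover is purely a matter of scale, not of anything the individual did.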
Is it possible to create an AI that holds actor-volitional Friendliness as a singly-rooted goal, and have this generate as a sub-goal the appropriate higher-level organizational structures supporting the greater good? The more I've been following these threads, and reading outside this forum, the more I think the answer is yes. Drawing a distinction between possibility and probability, however, I'm more convinced it's possible but less convinced it's probable.
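One way to picture the singly-rooted alternative (again just my own toy sketch, not a description of any real design) is a goal tree in which the organizational sub-goals only ever borrow their weight from the root, so there is no second top-level term for them to outgrow:

# Illustrative sketch only: every sub-goal (including collective or
# organizational structures) is valid only insofar as it traces back to
# the single root goal. Class and field names are invented.

from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class Goal:
    name: str
    parent: Optional["Goal"] = None     # None only for the single root
    children: List["Goal"] = field(default_factory=list)

    def add_subgoal(self, name: str) -> "Goal":
        child = Goal(name, parent=self)
        self.children.append(child)
        return child

    def is_rooted_in(self, root: "Goal") -> bool:
        node = self
        while node.parent is not None:
            node = node.parent
        return node is root

root = Goal("actor-volitional Friendliness")
institutions = root.add_subgoal("organizational structures serving the greater good")
assert institutions.is_rooted_in(root)   # derives its weight from the root...

orphan = Goal("greater good as an independent top-level goal")
assert not orphan.is_rooted_in(root)     # ...unlike a second, competing root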
The question is most certainly wide, wide open.
-Christopher Healey
________________________________
From: owner-sl4@sl4.org on behalf of Thomas Buckner
Sent: Sun 5/16/2004 3:14 AM
To: sl4@sl4.org
Subject: Re: Volitional Morality and Action Judgement
> > Suppose I know I will be probably ruined if I
> > continue gambling, but I decide to do it anyway.
> > I'm then doing what is not in my best interest to
> > do. I'm then acting irrationally. Eliezer's maxim,
> > then, becomes inapplicable. To assess the agent's
> > behavior we must look for an alternative rationale.
>
> Either you are using the term "best interest" for
> something I would
> not use that term for, or you are making the mistake
> of assuming
> that a single objective "best interest" exists which
> can be determined
> by an outside observer.
>
> In order to determine a person's best interest, you
> would have to
> weigh their options against their goal system (not
> yours!) and
> choose the best option which is consistent with that
> goal system.
>
> Unless you are intelligent enough to closely
> simulate that person,
> however (and no human currently is), you are
> unlikely to be able
> to make such a determination, so you must accept the
> person's own
> decisions as the closest approximation to their
> "best interest"
> that you can find.
>
> --
> Randall Randall
> <randall@randallsquared.com> (706) 536-2674
> Remote administration -- web applications --
> consulting
>
I can see a partial line of attack on that problem, as
follows:
You offer the example that an excessive gambler may be
'ruined', but the ramifications are not laid out in
the detail an AI might want. The ethical outcomes
vary, and thus the ethical imperatives.
Will the gambler ruin several (or many) other lives in
hir excess? Then the gambler should be impeded for the
good of other sentient beings.
Will the gambler physically die? Then, again, an AI
may choose to intervene (there's a word for this
process: it's called an intervention!) and take
various means of persuasion or coercion to keep a
fellow sentient from self-destructing out of sheer
thoughtlessness. This is like your relatives hiding
your cigarettes: perhaps you 'know' they're bad for
your body but have an addiction to that nicotine (and
a 'tomorrow never gets here' sort of rationalizing
attitude; but it does get here, you know...)This is
the sort of unconscious suicide we see all around us,
almost daily if your town's big enough.
Does the gambler know that death may follow, and
choose this risk (or certainty) while offering a good
rational ethical defense of hir choice? Then the AI
ought to at least consider staying out of the way, as
should we.
Your example implies strongly that you may have a
hidden supergoal which IS served by self-destruction.
If you cannot articulate it, it's unlikely that you
can defend it rationally. You may only assert that you
have an intuition, that your destruction will somehow
serve a goal larger than your continued conscious
existence. I am not saying your sincerity might not
somehow convince an SAI, but my belief is that ve will
tend to want to keep you around.
The alternative form of being 'ruined' is that you do
not die, nor create misery or death for others, but
simply have some bad experiences of your own. Perhaps
they are very, very bad experiences, but the
possibility will still remain that you have survived
and learned something from your travails. An SAI might
find that very acceptable, as long as it gets to
watch.
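Laying out the branching above as a toy decision sketch (the predicate names are my own placeholders, and actually predicting those outcomes is the hard part an SAI would have to solve):

# Toy sketch of the branching described above; the inputs are invented
# placeholders for whatever outcome prediction the SAI actually does.

def should_intervene(ruins_others: bool,
                     likely_fatal: bool,
                     informed_rational_consent: bool) -> str:
    if ruins_others:
        return "impede, for the good of other sentient beings"
    if likely_fatal and not informed_rational_consent:
        return "stage an intervention: persuade, or coerce if needed"
    if likely_fatal and informed_rational_consent:
        return "consider staying out of the way"
    return "let the gambler take the bad experiences, and keep watching"

print(should_intervene(ruins_others=False,
                       likely_fatal=True,
                       informed_rational_consent=False))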
To continue the smoking metaphor, it might insist on
keeping you alive while everything but your brain
shrivels, if you're that stubborn, and then restoring
you when you turn over that new leaf.
OR...
It might actively interfere with the debt process. You
might (post-FAI) find yourself in a reality tunnel
where you never lose at cards, or wake up rich even
after losing; where you never get a hangover or run
out of whiskey; where your lungs feel fine after a
night of smoking railway ties, and you can eat lead
out of stained glass windows like it was beef jerky,
and not get poisoned; a dream world, an anthroparium.
Hmmm, that'd work. Remember, an SAI that could emulate
you could keep you like a museum display under glass,
and you'd never notice.