From: Michael Roy Ames (email@example.com)
Date: Sat Oct 26 2002 - 22:38:40 MDT
A recent post to extropians got me thinking about freedom and Friendly AI. In
the past I have opined that what FAI is _about_ is maximizing our options
(freedom) while minimizing dangers to our existence. On the occasions when I
have said/written this, a little alarm bell would ring in my mind - "Rrrring!
You need to figure out what freedom is!" So, when I read Charles' post, it was
like a wake-up call. Charles gave me permission to repost here...
> Date: Tue, 22 Oct 2002 09:37:14 -0700
> From: Charles Hixson <firstname.lastname@example.org>
> Subject: Re: why "anarcho-capitalism" is an oxymoron
> Dehede011@aol.com wrote:
> >In a message dated 10/22/2002 10:25:31 AM Central Standard Time,
> >email@example.com writes: A capitalist society, as normally
> >understood, is no more a free society than is a socialist society.
> > Please define a free person.
> >Ron h.
> Ouch! I've been working on that for years, and don't have a good clue.
> I know internally what it feels like (i.e., it feels non-coerced, it
> feels like choices are possible, but not forced). I don't have many
> examples of what it looks like, because it's a bit difficult to go up to
> someone and say "Excuse me. Two minutes ago, when you did X, were you
> acting at free choice among several possibilities, or did you feel that
> only one choice was plausible?"
> Still, as a first shot, and clearly understanding that it needs refinement:
> 1) A person is free when that person is making a choice among several
> perceived plausible choices, and also feels that not choosing would be a
> valid choice.
> 2) There are degrees of freedom. It's not an either / or state. And
> there are no infinities and no zeros.
> 3) If only one of the choices perceived as available is also perceived
> as desirable, the amount of freedom present is minor.
> 4) If none of the choices presented as available are also perceived as
> desirable, the person has less freedom than under condition 3.
> 5) If no difference in the desirability of the choices available is
> discernible, but the choices are perceived as essentially different,
> then the freedom approaches the maximum possible, and increases in
> direct proportion to the number of perceived choices.
> N.B.: While state 5 is maximally free, this does not equate with
> maximally desirable. If you don't feel that your choice makes a
> difference, or that there is no difference in desirability between
> the choices, you may be quite free, but you won't feel empowered.
> Unfortunately, all of these criteria appear to me to be based on
> unobservables. Possibly one could, in principle, detect the differences
> using PET or MRI, but with currently envisionable technologies, this
> would always be in a state of restraint, and therefore the object of
> investigation would be missing.
> - -- Charles Hixson
> Gnu software that is free,
> The best is yet to be.
Charles felt that these ideas were a little 'half baked'... I understand this
feeling. I would like to continue baking them now.
It has been brought up before that the 'Friendliness content' will be a
significant and troublesome part of any implementation of Friendliness.
Although we may find some comfort in the hope that Friendly architecture (as
outlined in http://www.intelligence.org/CFAI/index.html) will correct whatever
content mistakes we make, it is undoubtedly of high importance that we try to
give the AI a good set of baseline data. This will require us to have a very
good idea about what we mean when we say "freedom", "responsibility",
"volition", etc. so that we will have a reasonable chance of explaining it to
the AI. Providing examples, and feedback to an AI, is of course essential...
but the human programmer-educators (us) should think through the issues, and lay
them out in advance. This will help avoid mistakes due to shallow thinking,
answers adversely affected by EP (evolutionary psychology), and knee-jerk
answers.
One of the things about Charles' list that got my attention was that I could
imagine coding an analogy of them in a microdomain. As microdomains are
currently Eliezer's preferred avenue for seed education, this would seem
entirely 'on point' for discussion.
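To make the microdomain idea concrete, here is a minimal, hypothetical sketch of Charles' criteria as a scoring function. The function name, the 0-to-1 desirability scale, and every numeric weight are my own illustrative assumptions, not a proposed design; the point is only that criteria 1 through 5 are crisp enough to code an analogy of.

```python
# Hypothetical microdomain sketch of Charles' criteria 1-5.
# Each perceived choice is tagged with a perceived desirability in [0, 1];
# a choice counts as "desirable" above 0.5. All thresholds and weights
# below are arbitrary assumptions chosen only to order the cases correctly.

def freedom_score(desirabilities, epsilon=0.05):
    """Return a rough freedom score strictly between 0 and 1.

    Criterion 2: degrees of freedom, never exactly 0 or 1.
    Criterion 3: a single desirable choice yields only minor freedom.
    Criterion 4: no desirable choice yields even less than criterion 3.
    Criterion 5: indiscernible desirabilities approach the maximum,
    rising with the number of perceived choices.
    """
    n = len(desirabilities)
    if n == 0:
        return 0.01                      # no perceived choices: near zero, never zero
    desirable = [d for d in desirabilities if d > 0.5]
    spread = max(desirabilities) - min(desirabilities)
    if len(desirable) == 0:
        base = 0.05                      # criterion 4
    elif len(desirable) == 1:
        base = 0.10                      # criterion 3
    elif spread < epsilon:
        base = 0.90                      # criterion 5: differences indiscernible
    else:
        base = 0.50                      # mixed case, between the extremes
    # More perceived choices push the score up, asymptotically below 1.
    return base + (1.0 - base) * (1.0 - 1.0 / n) * 0.5
```

Note that a seed educated on this toy would only be learning our labels for the cases, not the concept itself; the feedback loop the architecture provides would still have to do the real work.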
To kick off the discussion, I will now take issue with item 5. I rewrite it as:
5) If the difference in the desirability of the choices available is discernible
and of wide distribution, from very desirable to very undesirable with lots of
points in the middle, and the choices are perceived as essentially different,
then the freedom approaches the maximum possible, and increases in direct
proportion to the number of perceived choices and the width of the distribution.
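My revised item 5 can be sketched in the same microdomain style. Again, the formula is an illustrative assumption of mine: freedom grows with both the number of perceived choices and the width (max minus min) of their perceived desirabilities, with a small floor so the score is never exactly zero.

```python
# Hypothetical sketch of the revised item 5: freedom rises with the
# number of perceived choices AND the spread of their perceived
# desirabilities (each in [0, 1]). The formula is an assumption for
# illustration, not a fixed proposal.

def freedom_score_revised(desirabilities):
    """Score in (0, 1): choice count contributes asymptotically,
    and the desirability spread scales that contribution linearly."""
    n = len(desirabilities)
    if n < 2:
        return 0.01 if n == 0 else 0.05  # a lone choice offers little freedom
    spread = max(desirabilities) - min(desirabilities)  # width in [0, 1]
    # Small floor keeps the score above zero even when all choices
    # look equally desirable (no infinities and no zeros).
    return max(0.01, (1.0 - 1.0 / n) * spread)
```

Under this version, two choices spanning very desirable to very undesirable score higher than two nearly identical choices, and adding intermediate points raises the score further.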
Also I would like to address:
> Unfortunately, all of these criteria appear to me
> to be based on unobservables.
This is currently true. We cannot yet delve into the brain and figure out how
it judges one choice better than another. We can currently only reason about
the choices on the higher level of self-analysis. However, this will become a
false statement as soon as we can perform detailed real-time analysis &
simulation of a human brain. At that time the 'criteria', the models used by
the brain to compare and contrast choices, will be observable.
---
SL4 challenge: Improve and clarify the 'states of freedom' list so we can
teach them to a seed.
Michael Roy Ames
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:41 MDT