Collective Volition: Wanting vs Doing.

From: Michael Roy Ames (michaelroyames@yahoo.com)
Date: Sat Jun 12 2004 - 15:29:25 MDT


"Eliezer Yudkowsky" wrote:
>
> People are speculating about what a collective volition
> might do (sigh), but this has nothing to do with the
> initial dynamic SIAI would write, or what a collective
> might *actually* do.
>

I detect the smoke of confusion from some readers of
http://sl4.org/bin/wiki.pl?CollectiveVolition and suspect it is caused by a
part of Eliezer's article that I also found confusing. Let me lay it out
for you...

Defining collective volition (CV) as "our wish" or "a theory about what
people want" gives a clear definition that is relatively easy to grasp as an
idea. Eliezer's article discusses how we might think about what humans want,
both individually and as a group. It proposes measures (e.g. distance,
muddle, spread, coherence) that might assist us in this endeavour. These
ideas seem quite useful in enabling clearer thought about people's desires
for the future.
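
To keep the "CV as measurement" framing concrete, here is a minimal sketch
of what one record in such a readout might look like as plain data. The
field names echo the article's measures; the numeric types and the record
layout are my own illustrative assumptions, not anything the article
specifies.

    from dataclasses import dataclass

    @dataclass
    class ExtrapolatedWish:
        """One modelled wish inside a collective-volition readout.

        Purely illustrative: the measure names come from Eliezer's article,
        but treating them as plain numbers on a record is my own assumption.
        """
        person_id: str     # whose extrapolated volition this entry models
        description: str   # the wish, as modelled
        distance: float    # how far the extrapolation strays from the person today
        muddle: float      # how confused or unstable the modelled wish is
        spread: float      # how much the prediction varies across plausible extrapolations
        coherence: float   # how strongly it agrees with other people's modelled wishes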

However, the article doesn't stick with this definition of CV; it attempts
to broaden it, starting in the section titled "Collective volition is an
initial dynamic." This broadening is confusing: it muddies the definition as
initially presented and conflates the measurement aspect of CV with the
actions that might be taken toward reaching some of the goals our CV
identifies.

AFAIK the 'initial dynamic' portrayed in the article is SIAI's first working
version of Friendly AI: call it FAIv1. One of FAIv1's first jobs (as
presented in the article) will be to find out what we want - to obtain
first-hand data. Once obtained, this data would be a set of models defined
as our CV at that point in time. When Eliezer writes in the article:

"If there is some chain of moral logic so self-evident that any human who
hears with open mind must follow, then that is what the initial dynamic of
collective volition would produce on the next round."

This is a bit confusing... so in my mind, I translate it to read:

"If there is some chain of moral logic so self-evident that any human who
hears with open mind must follow, then that is what ** FAIv1, having
obtained our collective volition readout, would incorporate into its moral
logic ** on the next round."

It is confusing to talk about CV *doing* something. CV is data used by the
FAI to help figure out what humans *want* to do. The 'optimization process'
then has to figure out just how to get from the existing situation to a
desired future situation, without 'breaking any eggs'.

Michael Roy Ames


