From: David K Duke (davidisaduke@juno.com)
Date: Tue Jun 15 2004 - 14:01:44 MDT
Eliezer wrote:
> This is *not* what I want. See my answer to Samantha about the hard
> part of the problem as I see it. I want a transparent optimization
> process to return a decision that you can think of as satisficing the
> superposition of probable future humanities, but that actually
> satisfices the superposition of extrapolated upgraded
> superposed-spread present-day humankind.
So will the individual have a conscious, present choice in whether to
take part in this "optimization"?
Let's say SingInst becomes hugely popular, and a reporter asks you about
the AI: "So what's this bibbed computer do anyway?"
You: "Well, it's going to [insert esoteric choice of words here
describing volition]"
Reporter: "Doesn't that mean it's gonna do stuff without our presently
conscious approval"
You: "Yesssssssssss!"
Reporter "Run for your lives and insurance company!"
******
Basically, what I'm asking is: will it do this without my current
conscious will? Yes or no?
> --
> Eliezer S. Yudkowsky http://intelligence.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
DtheD