From: Marc Geddes (marc_geddes@yahoo.co.nz)
Date: Sat May 22 2004 - 03:22:06 MDT
--- Eliezer Yudkowsky <sentience@pobox.com> wrote:
>
> Yes, I understand the danger here. But Samantha,
> I'm not sure I'm ready
> to be a father. I think I know how to redirect
> futures, deploy huge
> amounts of what I would consider to be intelligence
> and what I would
> cautiously call "optimization pressures" for the
> sake of avoiding
> conversational ambiguity. But I'm still fathoming
> the reasons why
> humans think they have conscious experiences, and
> the foundations of
> fun, and the answers to the moral questions implicit
> in myself. I feel
> myself lacking in the knowledge, and the surety of
> knowledge, needed to
> create a new sentient species. And I wistfully wish
> that all humankind
> should have a voice in such a decision, the creation
> of humanity's first
> child. And I wonder if it is a thing we would
> regard as a loss of
> destiny, to be rescued from our present crisis by a
> true sentient mind
> vastly superior to ourselves in both intelligence
> and morality, rather
> than a powerful optimization process bound to the
> collective volition of
> humankind. There's a difference between manifesting
> the superposed extrapolation of the decisions
> humankind would prefer given sufficient
> intelligence, and being rescued by an actual parent.
Definitely sounds like a major change in your
strategy. Not losing your nerve, are you? ;) Balls,
man. Keep your balls.
Well, the Sysop idea was always pretty dubious to my
mind. I fear that the resentment of "being rescued by
an actual parent" would be huge. Look at the U.S. in
Iraq, trying to "rescue" Iraqis from themselves.
Costs outweigh benefits? Quite likely.
Even with an FAI as an actual 'God-like' being,
though, there are other options. For instance, the
FAI could remove itself to an asteroid and only set
up small local Sysops, helping on an individual
basis: I'll help you out if you agree to respect the
rules of my local Sysop; that sort of thing.
I continue to be puzzled by this talk of "superposed
extrapolation of the decisions humankind would prefer
given sufficient intelligence". I'm not at all sure
it's coherent. Most humans aren't even aware of
Transhumanism, don't want a bar of it, and don't even
think that AI is possible. I'm really skeptical that
you could avoid some input from the personal level
when building a transhuman mind.
> --
> Eliezer S. Yudkowsky
> http://intelligence.org/
> Research Fellow, Singularity Institute for
> Artificial Intelligence
=====
"Live Free or Die, Death is not the Worst of Evils."
- Gen. John Stark
"The Universe...or nothing!"
- H.G. Wells
Please visit my web-sites.
Science-Fiction and Fantasy: http://www.prometheuscrack.com
Science, A.I., Maths: http://www.riemannai.org