From: Tommy McCabe (email@example.com)
Date: Fri Jan 02 2004 - 18:58:18 MST
--- "Perry E. Metzger" <firstname.lastname@example.org> wrote:
> Tommy McCabe <email@example.com> writes:
> >> So one can have AI. I don't dispute that. What
> >> we're talking about is
> >> "Friendly" AI.
> > Humans can - obviously not must, but can -
> > exhibit the human equivalent of Friendliness. And
> > if one can do it in DNA, one can do it in code.
> Can they? I am not sure that they can, in the sense
> of leaving behind
> large numbers of copies of their genes if they
> behave in a
> consistently altruistic manner.
> BTW, I am not referring to aiding siblings or such
> -- Dawkins has
> excellent arguments for why that isn't really
> altruism -- and I'm not
> talking about casual giving to charity or such.
> What I'm talking about is consistently aiding
> strangers in preference
> to your own children, or other such activities. I'm
> far from sure that
> such behavior isn't powerfully selected against, and
> one sees very
> little of it in our society, so I'm not sure that it
> hasn't in fact
> been evolved out.
For the ten thousandth time, Darwinian evolution and
its associated complex functional adaptations do not
apply to transhumans. A Friendly transhuman would be
concerned both about vis offspring and sentients ve's
never heard of. And are you telling me Gandhi isn't
altruistic? Never mind whether altruism is an accident
- I'm not a professor of evolution - but it is
possible.
> >> > And if you have Friendly human-equivalent AI,
> >> You've taken a leap. Step back. Just because we
> >> can build AI
> >> doesn't mean we know we can build "Friendly" AI.
> > Again, there are humans who are very friendly, and
> > humans weren't built with friendliness in mind at all.
> I don't think there are many humans who are friendly
> the way "Friendly
> AI" has to be. A "Friendly AI" has to favor the
> preservation of other
> creatures who are not members of its line (humans
> and their
> descendents) over its own.
Read CFAI extremely thoroughly (literally), then take
a hammer and pound CFAI into your head
(metaphorically, please!). The concept of a goal
system that centers around the observer is an
evolutionary complex functional adaptation, and those
don't appear in AIs. See MistakesOfClassicalAI on the
SL4 Wiki. The behavior of favoring yourself at all,
and sentients you are related to, is anthropomorphic -
not because it isn't possible in an AI, but because
it's built into our heads and it's entirely optional
in an AI. To an AI that hasn't been built with an
observer-centered goal system, the self/not-self
distinction is exactly the same type of thing as the
memory/hard-drive distinction.
> >> >> There are several problems here, including the
> fact that there
> >> >> is no absolute morality (and thus no way to
> >> >> determine "the good"),
> >> >
> >> > This is the position of subjective morality,
> which is far from
> >> > proven. It's not a 'fact', it is a possibility.
> >> It is unproven, for the exact same reason that
> the non-existence of
> >> God is unproven and indeed unprovable -- I can
> come up with all
> >> sorts of non-falsifiable scenarios in which a God
> >> could exist. However, an absolute morality requires a
> bunch of
> >> assumptions -- again, non-falsifiable
> assumptions. As a good
> >> Popperian, depending on things that are
> non-falsifiable rings alarm
> >> bells in my head.
> > Perhaps I should clarify that: subjective morality
> > is not only unproven, it is nowhere near certain. Nor
> > is objective morality. The matter is still up for
> > debate. A good Friendliness design should be
> > compatible with either.
> 1) I argue quite strongly that there is no objective
> morality. You
> cannot find answers to all "moral questions" by
> making inquiry to
> some sort of moral oracle algorithm. Indeed, the
> very notion of
> "morality" is disputed -- you will find plenty of
> people who don't
> think they have any moral obligations at all!
And you can find people who think that Zeta aliens are
communicating with us about the giant planet that is
going to come careening through the solar system
stopping the Earth's rotation in May 2003 (yes, May
2003, the month that has already passed).
> Taking a step past that, though, it is trivially
> seen that the bulk of
> the population does not share a common moral code,
> and that even those
> portions which they claim to hold to they don't
> generally follow. Even
> the people here on this mailing list won't
> substantially agree on major
> points of "morality".
> There is, on top of that, no known way to establish
> what is 'morally
> correct'. You and I can easily ascertain the
> "correct" speed of light
> in a vacuum (to within a small error bound) with
> scientific tools. There is, however, no experiment
> we can conduct to
> determine the "correct" behavior in the face of a
> moral dilemma.
> By the way, one shudders at what would happen if one
> could actually
> build superhuman entities to try to *enforce*
> morality. See Greg
> Bear's "Strength of Stones" for one scenario about
> where that
> foolishness might lead.
I don't have the book. Please elaborate on that.
> >> >> that it is not clear that a construct like
> this would be able to
> >> >> battle it out effectively against other
> constructs from
> >> >> societies that do not construct Friendly AIs
> (or indeed that the
> >> >> winner in the universe won't be the societies
> that produce the
> >> >> meanest, baddest-assed intelligences rather
> than the friendliest
> >> >> -- see evolution on earth), etc.
> >> >
> >> > Battle it out? The 'winner'? The 'winner' in
> this case is the AI
> >> > who makes it to superintelligence first.
> >> How do we know that this hasn't already happened
> elsewhere in the
> >> universe? We don't. We just assume (probably
> correctly) that it
> >> hasn't happened on our planet -- but there are
> all sorts of other
> >> planets out there. The Universe is Big. You don't
> want to build
> >> something that will have trouble with Outside
> Context Problems (as
> >> Iain Banks dubbed them).
> > Another rephrasing: the 'winner' is the first
> > superintelligence that knows about us.
> It doesn't matter who knows about us first. What
> matters is what
> happens when the other hyperintelligence from the
> other side of the
> galaxy sends over its probes and it turns out that
> it isn't nearly as
> "friendly" and has far more resources. Such an
> intelligence may be the
> *last* thing we encounter in our history.
I don't know what happens when superintelligences come
into conflict. Really. I can pretty much guarantee,
however, that the first superintelligence will be the
last one before the Singularity (in the solar system).
> >> >> Anyway, I find it interesting to speculate on
> possible constructs
> >> >> like The Friendly AI, but not safe to assume
> that they're going to
> >> >> be in one's future.
> >> >
> >> > Of course you can't assume that there will be a
> >> > Singularity caused by a Friendly AI, but I'm
> >> > darn sure I want it to happen!
> >> I want roses to grow unbidden from the wood of my
> >> writing desk.
> >> Don't speak of desire. Speak of realistic
> >> possibility.
> > I consider that a realistic possibility. And the
> > probability of that happening can be influenced by
> I'm well aware that you consider the possibility
> realistic. I
> don't. Chacun à son goût. However, I'm happy to
> continue explaining
> why I think it would be difficult to guarantee that
> an AI would be
> >> >> The prudent transhumanist considers survival
> in wide variety of
> >> >> scenarios.
=== message truncated ===
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:43 MDT