Re: An essay I just wrote on the Singularity.

From: Tommy McCabe (rocketjet314@yahoo.com)
Date: Thu Jan 01 2004 - 18:46:11 MST


--- "Perry E. Metzger" <perry@piermont.com> wrote:
> Tommy McCabe <rocketjet314@yahoo.com> writes:
>
> > --- "Perry E. Metzger" <perry@piermont.com> wrote:
> >>
> >> Tommy McCabe <rocketjet314@yahoo.com> writes:
> >> > True, but no disproof exists.
> >>
> >> Operating on the assumption that that something
> >> which may or may not
> >> be possible will happen seems imprudent.
> >
> > It seems like a very reasonable idea that what
> > can be done, by dumb evolution, in a few
> > gigabytes of DNA can be done in programming code
> > by humans.
>
> So one can have AI. I don't dispute that. What I'm
> talking about is
> "Friendly" AI.

Humans can be altruistic (obviously they need not be,
but they can), and altruism is the human equivalent of
Friendliness. If it can be done in DNA, it can be done
in code.

> > And if you have Friendly human-equivalent AI,
>
> You've taken a leap. Step back. Just because we know
> we can build AI
> doesn't mean we know we can build "Friendly" AI.

Again, there are humans who are very friendly, and
humans weren't built with friendliness in mind at all.

> >> > If anyone thinks they have one, I would be
> >> > very interested. And there's currently no good
> >> > reason I can see why Friendly AI shouldn't be
> >> > possible.
> >>
> >> I can -- or at least, why it wouldn't be stable.
> >
> > Then please, by all means, show me the proof.
> >
> >> There are several problems here, including the
> >> fact that there is no absolute morality (and
> >> thus no way to universally determine "the good"),
> >
> > This is the position of subjective morality,
> > which is far from proven. It's not a 'fact', it
> > is a possibility.
>
> It is unproven, for the exact same reason that the
> non-existence of
> God is unproven and indeed unprovable -- I can come
> up with all sorts
> of non-falsifiable scenarios in which a God could
> exist. However, an
> absolute morality requires a bunch of assumptions --
> again,
> non-falsifiable assumptions. As a good Popperian,
> depending on things
> that are non-falsifiable rings alarm bells in my
> head.

Perhaps I should clarify: subjective morality is not
only unproven, it is nowhere near certain. Neither is
objective morality. The matter is still up for debate.
A good Friendliness design should be compatible with
either.

> >> that it is not obvious that one could construct
> >> something far more intelligent than yourself
> >
> > Perhaps we truly can't construct something
> > vastly more
>
> Stop. You cut my sentence. I don't doubt that dumb
> forces can build
> intelligences -- we're an example of that after all.
> I said:
>
> >> that it is not obvious that one could construct
> >> something far more intelligent than yourself and
> >> still manage to constrain its behavior
> >> effectively,
>
> and don't edit my words that way again if you want
> me to reply.
>
> >> and still manage to constrain its behavior
> >> effectively,
> >
> > You can't 'constrain' a transhuman.
>
> And so, I don't believe we can guarantee that you
> can create a
> Friendly AI in the process of creating a superhuman
> intelligence.

This rests on the assumption that all workable
Friendliness theories need to involve constraining the
AI. If that is really the case, we should abandon AI
ASAP.

> >> that it is not clear that a construct like this
> >> would be able to battle it out effectively
> >> against other constructs from societies that do
> >> not construct Friendly AIs (or indeed that the
> >> winner in the universe won't be the societies
> >> that produce the meanest, baddest-assed
> >> intelligences rather than the friendliest -- see
> >> evolution on earth), etc.
> >
> > Battle it out? The 'winner'? The 'winner' in
> > this case is the AI who makes it to
> > superintelligence first.
>
> How do we know that this hasn't already happened
> elsewhere in the
> universe? We don't. We just assume (probably
> correctly) that it hasn't
> happened on our planet -- but there are all sorts of
> other planets out
> there. The Universe is Big. You don't want to build
> something that
> will have trouble with Outside Context Problems (as
> Iain Banks dubbed
> them).

Another rephrasing: the first superintelligence that
knows about us.

> >> Anyway, I find it interesting to speculate on
> >> possible constructs like The Friendly AI, but
> >> not safe to assume that they're going to be in
> >> one's future.
> >
> > Of course you can't assume that there will be a
> > Singularity caused by a Friendly AI, but I'm
> > pretty darn sure I want it to happen!
>
> I want roses to grow unbidden from the wood of my
> writing desk.
>
> Don't speak of desire. Speak of realistic
> possibility.
 
I consider that a realistic possibility. And the
probability of it happening is something we can
influence.

> >> The prudent transhumanist considers survival in
> >> a wide variety of scenarios.
> >
> > Survival? If the first transhuman is Friendly,
> > survival is a given,
>
> No, it is not, because it isn't even clear that
> there will be any way
> to define "Friendly" well enough. See "no absolute
> morality", above.

That is the problem of defining Friendliness, which
Eli understands far better than I do. A hard problem,
I admit.



