Re: Donate Today and Tomorrow

From: Ralph Cerchione (figment@boone.net)
Date: Thu Oct 28 2004 - 14:43:49 MDT


Slawomir, everybody, an alternative viewpoint...

"Marc Geddes" <marc_geddes@yahoo.co.nz> wrote...
> --- Slawomir Paliwoda <velvethum@hotmail.com> wrote:
> > Not when you start thinking about the consequences
> > of the cause you are supporting. SIAI failing to build safe and humane
> > SI is not the worst thing that can happen. The worst thing that can
> > happen is, actually, SIAI succeeding at making SI that would later turn
> > Unfriendly. It makes sense to support the cause when it is shown how
> > the project won't lead to UFAI in a way potential donors can
> > understand. In absence of comprehension, the only thing left is trust.
>
Okay, I'm not utterly convinced that creating fully sentient AI is nearly as
simple or inevitable as some enthusiasts like to insist. In fact, I think
it's telling that people closely involved in this research tend to make a
point of moderating their predictive statements after a while, even if they
remain optimistic. Personally, I suspect that given our present levels of
technology and intelligence (or close to them), creating AI could take many
decades.

The key point, however, is "given our present levels...(or close to them)."
I don't think we're going to be stuck in our present cognitive range for
several more decades. We're making too many advances in areas such as
genetics, nootropics, non-invasive enhancement methods, accelerated
learning, etc. Not to mention that many technological advances apply
directly to areas that can enhance the research above (the Human Genome
Project, new computers systematically engaged in basic independent
scientific research, improvements in scanning human brain function, etc.).

To be quite frank, I think we're apt to have transhumans running around
before too much longer. And if we have superhuman intelligence in actual
human researchers, that makes many technological possibilities that much
more likely.

In short, we're apt to have some form of AI available in the not-too-distant
future anyway. My own perspective is: If Friendly Artificial Intelligence
research doesn't cost that much, why not lay some groundwork for it now?
Even if we don't get an actual FAI in the next couple of decades, if we have
intensively pursued several different routes to that goal for the next
twenty years, we'll have laid down a beaten path for future researchers.
Which means that a newly emerging transhuman or technological savant is more
apt to follow an effective route to FAI than to UFAI, all things being
equal.

From my perspective, then, Friendly Artificial Intelligence research is a
bit of an insurance policy. It doesn't have to be an existential
all-or-nothing gamble. I'll settle for hedging my bets, personally. =)

> That's another point possibly deterring potential
> donors. If Sing Inst is capable of actually 'saving
> the world', the flip side is that they are also
> capable of actually destroying it if their approach
> fails.
>
Actually, this is true. I take comfort in the belief that even if AI emerges
not through incremental stages of progress -- an improved assembly line 'bot
here, a masterful expert system there, a simple "AI" scientific researcher
there -- but in a sudden, blinding burst, we'll still be seeing such
incremental progress in other areas of computer research as well as in the
development of potential transhumans. Meaning: I suspect we'll have a lot
more intelligence and resources to apply to this problem before we actually
solve it.

Incidentally, the human race technically already faces a number of
existential threats. Nuclear weapons, various potential environmental or
natural catastrophes, a potential super-plague or nanowar... we're quite
capable of getting wiped out without AIs.

> Someone donating to Sing Inst may actually be
> *accelerating the destruction of the world* if Sing
> Inst creates a UFAI.
>
Duly noted. =) But seriously, people are going to continue computer
research, and they're going to continue AI research. Is an Institute
dedicated to creating a "warmer, fuzzier AI" that terrifying an option?
Given the alternative of an AI created by people who've never considered the
problem, or who assume that transcendent intelligence will make their
machine not only omniscient and omnipotent, but omnibenevolent as well?

> You can understand the dilemma of someone who agrees
> with basic Singularitarian concepts, but has serious
> doubts about the specific approach of Sing Inst...
>
> For instance take my stubborn claims:
>
> *No practical (real-time) general intelligence without
> sentience is possible
>
This may be correct, but as I've pointed out before, we already have
computers conducting basic research in biotech/pharmaceuticals without
having come close to full sentience. Which means we may be able to
manufacture and commit vast amounts of effective brainpower to these
problems very soon indeed. Failing that, we're already applying that
intellect to problems related to creating full-fledged transhumans.

> *No completely selfless AI is possible
>
Ack. Depending on the level of intelligence... Hmm, I'm really not sure how
firm this point is. I'll hear out your arguments, of course. But subsentient
AI may not even be sufficiently self-aware to be self-centered. And higher
intelligences... I think anything we say about some hypothetical being with
a basic intellect that's a mere thousand times superior to our own, even
before counting its vastly swifter processing speeds, is probably rather
speculative at this point. =)

> *Collective Volition is impossible for Singleton AI to
> calculate and can't be imposed from the top-down
>
> What should I do?
>
> (a) Trust that Eli is right and I'm simply mistaken?
>
He is the Great Leader.

> (b) Stick to my guns and doubt Eli?
>
Do you question the _Great_Leader_?! =)

> (c) Snap and start scribbling incomprehensible
> diagrams and spouting gibberish?
>
This would be my option. =)

A very interesting post, Slawomir. I won't tell you what to do, actually. In
fact, my (still limited) resources are presently far more focused on
developing transhumans, because that means research into self-enhancement,
which has proven rather beneficial to my efforts to improve my financial
status and thus to acquire the significant resources to address these other
issues.

Choose the strategy that makes the most sense to you. Obviously. =)

Ralph
