Re: Volitional Morality and Action Judgement

From: Marc Geddes (marc_geddes@yahoo.co.nz)
Date: Wed May 26 2004 - 22:41:57 MDT


 --- "J. Andrew Rogers" <andrew@ceruleansystems.com>
wrote: > Marc Geddes wrote:
> > More likely any general intelligence necessarily
> has
> > have: a 'self', consciousness, some degree of
> > observer centeredness, some non-altrustic aspects
> to
> > its morality, some input from the 'personal' level
> > into its morality, and helping the world would
> only be
> > a *secondary* consequence of it's main goals.
>
> I'm not really an SIAI fanboy, but it is apparent
> even from my own theoretical perspective that your
> assertions are almost certainly incorrect. These
> things aren't "necessary" in many reasonable
> theoretical models. Some of the things you mention
> will be exhibited with high probability in
> evolutionary systems like biology, but there is
> nothing *requiring* their expression, and
> expression can be suppressed in the design if
> desired. AGI presumably will have its
> characteristics engineered rather than evolved.
>
> On what basis are you asserting that suppressing
> the expression of these characteristics in an AGI
> is "impossible"? I cannot find a good theoretical
> basis to make such assertions.
>
>
> j. andrew rogers
>

Well, I do have some reasons for my claims, but
it's mainly just intuition at this point ;)

There's too much cockiness from Eliezer. I'm tired
of it. In fact, there are any number of people who
are far too sure of themselves for my taste.

Just recently Eliezer agreed that the Singularity
should not be defined as a prediction horizon - that
there's an abstract humanly understandable invariant
behind Friendliness. I could have told him that 2
years ago. I actually said as much on wta-talk
several times.

Now he's worried about what constitutes a 'Person'.
Again, he should have been thinking about that from
the start.

You should be aware that many times in the past,
when my intuition has come up against the
hypotheses of ultra-smart geniuses (which I have
taken to calling 'aca-dummies'), those hypotheses
have always, without fail, ended up falsified. In
a one-on-one contest between my intuition and the
thoughts of people like Eliezer and Ben, I bet on
my intuition. I'm confident that my intuition will
crush Eliezer and Ben, just like it did all of the
other aca-dummies that came up against me.

No one has ever seen a general intelligence that
lacks consciousness, lacks a 'self' node, is
totally altruistic, etc. Who is to say that such a
thing is possible at all? My intuition keeps
yelling at me that something is suspect here.
Something is fishy here. Something is 'out of
kilter' here. That's all I'm saying at the moment.

The onus is not on me to prove it's impossible;
the onus is on Sing Inst to establish that it's
possible. Medieval mathematicians wasted their
whole lives trying to do impossible things like
'squaring the circle'. I just hope Eliezer is not
wasting his life trying to do the FAI equivalent
of 'squaring the circle'.

My own goals are rather more modest than saving
the world. I don't think I can 'save the world',
and indeed, I wouldn't be stupid enough to try. I
rather suspect that the best that can be done,
given that we have to respect volition (free
will), is to create the opportunities for each
person to take control of their own destiny. You
can lead a horse to water, but you can't make him
drink, sort of thing. Only people working together
and making the right individual choices can 'save
the world'. Nothing can do it for them, not even a
super-intelligence.

No, I can't save the world, but I can, I think, save
myself ;) Selfish? Not really. All I can do is make
the right individual choices and try to persuade
others to do the same.
  
I'd be happy if I could design an SAI that had the
property of 'friendliness' ('friendliness' with a
little 'f' rather than 'Friendliness' with a capital
'F'). A sentient FAI (but without pain and pleasure
qualia, and without emotions, something which I DO
agree is possible). An FAI with a mixture of
altruistic and observer-centered goals. An FAI that
would make an interesting drinking buddy. An FAI
that's NOT going to try to 'save the world', but one
which might just be able to help me save myself.
  

=====
"Live Free or Die, Death is not the Worst of Evils."
                                      - Gen. John Stark

"The Universe...or nothing!"
                                      - H.G. Wells

Please visit my web-sites.

Science-Fiction and Fantasy: http://www.prometheuscrack.com
Science, A.I, Maths : http://www.riemannai.org



