[SL4] JOIN: James Barton plus Re: Programmed morality
From: James.Barton@sweetandmaxwell.co.uk
Date: Tue Jul 18 2000 - 06:25:46 MDT
Hi all.
Not really sure what's relevant about me, but:
born 21 September '76, living in London.
university background in Physics & Philosophy, but hoping to do an
MSc in Computing.
lifelong neophile, and intellectual omnivore / dilettante.
apologies if I ramble a bit - I haven't assimilated all the
background on this topic yet.
Attitude towards Singularity: sceptical but very hopeful.
> > You may be correct in that only one will reach the singularity.
> > Exponential growth means whoever is in the lead should win. However
> > the AI may decide to make a billion+ copies of itself on the way &
> > coordinate as a society, or group mind. By that time it's already out
> > of our hands. I expect we'll be uploaded into an archive & our atoms
> > used more efficiently.
>
> Um, a couple of disagreements here. One, I don't see why it would make
> copies of itself. Just because you and I grew up in a "society" full of
> vaguely similar people doesn't mean that's the best way to do things.
Two reasons an AI might want to make a copy of itself:
1) It places a high value on itself or its goals, and figures a
backup somewhere is good insurance. And lots of backups make better
insurance.
2) If it places a value on answers sooner rather than later, it might
want to create not merely backups but running copies of itself. These
may be nearly identical, or configured to work on specified
sub-problems (see the toy sketch below).
These courses of action depend on the AI knowing enough about its
substrate, and learning or deducing what's on the other end of the
network connections.
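To make reason 2 concrete, here's a toy Python sketch. Everything in
it - the solve_subproblem function, the sub-problem list, the number
of copies - is made up for illustration; it's not a claim about how a
real AI would work. The idea is simply that an agent which values
earlier answers can spawn near-identical running copies, hand each a
sub-problem, and merge the results.

# Toy illustration only: spawning running copies to attack
# sub-problems in parallel, so answers arrive sooner.
from multiprocessing import Pool

def solve_subproblem(subproblem):
    # Stand-in for "a near-identical copy working on one piece".
    return sum(subproblem)          # pretend this is hard work

def solve_alone(subproblems):
    # One agent works through the pieces sequentially.
    return [solve_subproblem(s) for s in subproblems]

def solve_with_copies(subproblems, copies=4):
    # Each worker process is, in effect, a running copy of the agent
    # configured to work on one specified sub-problem.
    with Pool(copies) as pool:
        return pool.map(solve_subproblem, subproblems)

if __name__ == "__main__":
    work = [list(range(n)) for n in (10, 20, 30, 40)]
    assert solve_alone(work) == solve_with_copies(work)

The same answers come back either way; with genuinely hard
sub-problems the copies simply deliver them sooner, which is the whole
attraction for an AI that values speed.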
Now, re: morality.
Are any of you familiar with the Naturalistic Fallacy? Essentially,
it says that no physical fact leads to a moral fact - e.g. nothing
about a cute defenceless infant makes it wrong to torture it.
Obviously, we all think it is wrong, but that's because of a separate
moral fact, something perhaps like, "Torture is never justified".
To my mind, no convincing system of ethics has been developed, and I
think we're left with Russell's comment when discussing Nietzsche:
we can only decide whether something is right by examining it with
our own internal sense of right and wrong, rather than by comparing it
with some external, verifiable set of rules.
I suggest that an AI will have no internal sense of right and wrong,
as this is a product of our evolution. If we give it one, it may
choose to overwrite it in subsequent redesigns. Perhaps no "bad"
thing.
Eliezer, your demonstration in http://intelligence.org/tmol-faq/logic.html
that there are significant goals, even starting from a tabula rasa, is
interesting.
Can you get point 11 to read "All done: G2.desirability > 0"?
Without knowing that something is positively desirable, how can an AI
make an informed choice to act towards it? I mean, assuming that G1
is negatable, how can the AI decide between G1 and -G1?
And can G2 really be specified with so little information about G1?
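To spell out the question in code (a toy Python sketch; the
choose_action function and its scalar "desirability" argument are my
own invention, not the FAQ's formalism): an informed choice between G1
and -G1 needs the sign of the desirability, not just the fact that
some desirability exists.

def choose_action(g1_desirability):
    # Toy decision rule, purely to illustrate the question above.
    if g1_desirability > 0:
        return "act towards G1"
    if g1_desirability < 0:
        return "act towards -G1"
    return "no informed choice possible"

# If all we know is that the desirability is non-zero, we still cannot
# tell which branch applies; we need to know desirability > 0 before
# acting towards G1 makes sense.
print(choose_action(+1.0))   # act towards G1
print(choose_action(-1.0))   # act towards -G1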
Anyway, just a few opening thoughts.
James