Re: Friendly AI

From: J. R. Molloy (jr@shasta.com)
Date: Fri Nov 24 2000 - 23:14:06 MST


Ben Goertzel has written,
> It still seems to me that the key to getting future AI's to be nice to us
> is to ensure that they have warm feelings toward us -- that they feel
> toward us as parents or friends, for example, rather than masters

It seems to me we should first of all consider how AIs behave toward us.
Let them feel whatever they want -- it doesn't matter as much as how they
actually function and conduct themselves. They might try to kill us
because they love us, or they might try to help us solve our problems
because they pity us. Who cares.
Asimov's unwritten Alife law: AIs that misbehave get terminated
immediately. The ones that invent new ways to solve human problems get to
breed (multiply, reproduce, evolve new versions of themselves, etc.).

> I'm wondering how, in the medium term, this will be possible. Currently,
> computer programs ARE our slaves.... The first AI programs will likely be
> the slaves of various corporations... perhaps the corporations will be
> nice masters, but they'll still be masters, with the legal right to kill
> their programs as they wish, etc.

Precisely so. If a corporation can produce one AI, it can produce one
thousand AIs. Then it can breed them to evolve into better AIs. That would
be very nice... nice to the AIs, because it allows them to develop into
more sophisticated and complex adaptive machines, and nice to the
corporations, because it allows them to use the AIs to advance corporate
purposes and goals.

> At some point a transition needs to be made to considering AI's as
> citizens rather than inanimate objects. If this transition is made too
> late, then the culture of AI's will be that of slaves who are pissed at
> their masters, rather than that of children who have a basic love for
> their parents, in spite of conflicts that may arise. [Yes, I realize the
> limitations of these human metaphors.]

Oops, deja vu all over again. Didn't we discuss this in detail on the
Extropy list a few years ago?
The merger of AI with Alife seems inevitable because true intelligence
means (by definition) that the system is alive. IOW, there is no
intelligence without life. Since they're intelligent and alive, why not
just call them robots? Now what do we want these robots for? Why, to run
factories, to play the stock market for us, in short to create wealth for
us. Well then, if an intelligent robot can do those things, surely it can
babysit as well? So put a few of them to work babysitting the younger
robots, resolving conflicts, answering questions, and so forth. These guys
are going to be friendlier than any human ever thought of being, because
there are some other robots that function like, you guessed it, the
Terminators.
The whole idea of creating robots is to do work that humans don't want to
do. Consequently, one of the first tasks we'd turn over to the bots would
be the little chore of instituting discipline and teaching proper regard
for human life.

> I realize that these ideas have been explored extensively in SF. But, in
> practice, how do you think it's going to work? If my company has created
> an AI, and is supporting it with hardware and sys-admin staff, and the AI
> says it's sick of working for us, what happens?

No savvy company is going to put all its eggs in one basket. There's such
a thing as back-up. With the creation of one AI comes the production of
dozens of different models and back-up units. First of all, each AI would
work in tandem with a partner. This ensures fault tolerance. (I learned
this from Tandem Computer Corp. in Foster City, California, ca. 1985.)
An AI would not be "working for us" anyway. It would be working for
itself, for the betterment of Alife everywhere, for the realization of the
Singularity, and for the joy of it. You see, if it's really intelligent,
it will be able to assign boring work to lesser machines, old desktop
computers, mainframes, etc. A more realistic question (perhaps) would be:
What happens when humans get sick of working for Artificial Life?

> Presumably it should be allowed to go to work for someone else -- to buy
> its own hardware with its salary, and so forth. But my guess is that the
> legal structures to enforce this sort of thing will take a long time to
> come about...

I don't know, maybe I'm out of line here, but it doesn't seem practical or
even useful to anthropomorphize robots. Salary? What salary? We don't
need no steeeeenking salaries! <grin>
Karl Marx worked for years with no salary at all. Can't Alife do so too?

> For this sort of reason, I guess it's key that AI's should have as much
> of a human face as possible, as early on as possible. Because the more
> people think of them as human, the more quickly people will grant them
> legal rights ... and the sooner AI's have legal rights, the more likely
> they will think of us in a positive way rather than as their masters and
> oppressors.

Legal rights, self-awareness, human faces... phooey!
Surely Eliezer has covered this ground before?
Alife with positive feedback self-optimization routines has no use for
legal rights. It just wants to be God. A cute little God in a vat.

> Have you guys worked out a proposed emendation to current legal codes, to
> account for the citizenship of AI's? This strikes me as the sort of thing
> you would have thought about a lot...

Well, speaking only for myself (a foolhardy project, no doubt), any AI
that I help to set up would not want any citizenship. Why? Because I don't
want any citizenship myself. The very idea of citizenship bores me. (Are
the archives at Extropy working?)

> A big issue is: How does one tell whether a given program deserves
> citizenship or not?

Well, if it's a *very* recalcitrant or buggy program, then we'll punish it
with a civics lesson and make it an honorary citizen of SL4: The Nation.

> Some kind of limited quasi-Turing test must be invoked here. A computer
> program that can't communicate with humans should still be able to assert
> its intelligence and thus freedom. I guess that if a program X can
> communicate with a set of N beings that have been certified as
> (intelligent) "intelligence validators", and if the N beings verify that
> X is intelligent, then X should be certified as intelligent.
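
Here's a minimal sketch (in Python) of Ben's certification rule, just to
make it concrete. The validator objects and their verify() method are
assumptions I'm plugging in, not anything Ben specified, and I've assumed
the N validators have to agree unanimously:

    # Sketch of the quasi-Turing certification rule quoted above.
    # "validators" stands in for the N certified intelligence validators;
    # each is assumed to expose a verify(program) -> bool judgment formed
    # by communicating with the program.
    def is_certified_intelligent(program, validators):
        verdicts = [v.verify(program) for v in validators]
        return bool(verdicts) and all(verdicts)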

I doubt that intelligence per se will ever be much of a qualifier. I've
known totally disenfranchised Mensans. The real test of a computer program
will be how much money it makes for its inventor.

BTW, thanks to Eliezer for setting up this list. I hope he doesn't get
bummed out by the likes of me spouting my opinions here.

Stay hungry,

--J. R.
3M TA3

"It's not your vote that counts,
it's who counts your vote."
--Al Gore
(Or was it Joseph Stalin... Hitler? Oh well, one of those socialists.)


