RE: Quest for trans guide.

From: Ben Goertzel (ben@goertzel.org)
Date: Sat May 04 2002 - 10:09:43 MDT


> Why do I care if you don't share my ideas about justice? Because you are
> writing the code of the friendliness system of the first AI. You
> are going
> to be training it in friendliness, and testing it against your
> moral system.

I'd like there to be a little more clarity on this point.

So far as I know, Eliezer is not, right now, "writing the code of the
friendliness system of the first AI."

Rather, he is thinking hard about how to create a seed AI, and perhaps doing
some high-level design and some very partial prototyping.

There are others besides Eliezer who are actually much further along than
Eliezer in the attempted-AI-creation process, and are actually coding
would-be seed-AI's according to their own designs and ideas. Peter Voss and
I (both members of this list, with separate projects) are examples.

And there are others who don't share the seed-AI/hard-takeoff philosophy
(which Eliezer, Peter and I do share, in spite of relatively minor
conceptual differences of viewpoint) but are working hard at coding
artificial general intelligences (AGI's). Pei Wang, a former collaborator of
mine, is one example; he's been coding part-time on his would-be AGI for
quite some years. Jason Hutchens, when a-i.com existed, was making some
decent progress on his own would-be AGI as well; I'm not sure what he's up
to now. It's important to recognize that someone doesn't necessarily have
to "believe" in the hard takeoff to create an AGI system capable of
experiencing/initiating such a thing.

Now, Eliezer believes that everyone else working on seed AI and AGI is
off on the wrong track, in one way or another. He apparently believes that
he has a better chance than anyone else of achieving the grand goal. If you
choose to share his belief, that's fine. I happen not to share Eliezer's
belief in this regard, though I agree with him on a large number of other
things. More so than Eliezer, I tend to think there are many possible
"right tracks" to AGI and seed AI, just as there are many ways to achieve
human flight (think dirigibles, airplanes and helicopters...).

I think that Eliezer, along with many others, has contributed and is
contributing very valuably to the quest for seed AI and to the general quest
for Singularity acceleration. I don't intend any disrespect for his work or
his intellect. And I'd be pretty damn thrilled if he were to unveil his
top-secret "baby Friendly AI mind" tomorrow. However, so far as I know, it
is just not true that he is "writing the code of the friendliness system of
the first AI."

-- Ben G



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:38 MDT