RE: continuity of self [ was META: Sept. 11 and Singularity]

From: Ben Goertzel (ben@goertzel.org)
Date: Tue Sep 17 2002 - 07:18:14 MDT

Samantha wrote:
> > And I still think that, based on all the evidence available, the
> > Friendly AI route has the highest chance of success, out of all the
> > possibilities raised.
>
> I am not so sure these days. Eliezer and team, afaik, haven't
> really got moving on implementation. You know your own status
> far better than I.

My own team's situation is as follows. Engineering is proceeding at a slow
but steady pace (4 people working half-time on the AGI project, and spending
the other half of their time on related AI product work; plus a few other
part-timers). I am working hard on a full mathematical and verbal
formalization of the AI design, which will be done within 6 months (within
2-3 months minus all the graphs and diagrams...). We're going to publish a
technical book containing this overview of the design (hopefully in late
2003), and following that we'll make a major push to get significant funding
for the project. In 1998-2001 we had over 40 R&D staff on our previous
project, Webmind, and we intend to get Novamente back to this level within a
few years.

Giving precise timing estimates is always fraught with difficulty. The
following numbers should be taken with a grain of salt, because there are
plenty of uncertainties involved. Completion of the engineering of the
system will occur in the time-frame 2003-2005 [2004-2005 if the current team
does not expand in any way]. Then comes the teaching phase, which will
undoubtedly involve a lot of tuning and refactoring of parts of the system.
If our design for AGI is basically correct and workable, we could have a
human-level AI by 2010, or conceivably by 2007-2008 if things just go
swimmingly well and we get a bigger team in the 2004-2005 timeframe.

Of course, if our AI design is totally wrong (as Eliezer believes, based on
having read a very rough, early draft of the current book manuscript), then
all we'll discover by 2006 or so is that our AI design can't be taught!!!

In short, we have a long path ahead of us, but we're pretty happy to have a
detailed design that appears to us to plausibly account for all aspects of
human-level intelligence.

For those not familiar with my Novamente Artificial General Intelligence
project, please see www.realai.net.

For those who want to wade through some of Eliezer's and my arguments about
the Novamente system, see the archives of this list, from sometime around
May of this year, I think.

> It is an unknown whether AI can be
> produced in the time frame (< 30 years) I believe is the most
> hyper-critical. And of course it would be really useful if it
> existed next week.

Even if my own and other current AGI projects fail, I think Kurzweil has
made a very strong case that detailed computational emulation of the human
brain will likely be possible in another 30 years or so.

> But no pressure! :-) If it is produced it is
> another huge unknown if it will be "friendly" in the rather
> colloquial sense of actually being a boon, a freeing of humanity
> from so many limitations and so much suffering in a positive
> sense or not.

Naturally, I have faith in my own team's approach to AI friendliness, but I
do worry about some future scenarios.

Actually, I worry less about my initial AI being unfriendly than I do about
scenarios like: "human-level but not yet transhuman AIs are taken by
governments and brainwashed to produce advanced weaponry, leading to the end
of us all."

Yes, brainwashing an advanced AI will be hard. But do we know enough yet to
estimate just how hard?

> As unlikely as sufficient shift of institutions and people's
> perceptions and practices are, I think that is at least as
> likely to work as FAI.

Well, my intuition differs, but obviously we're in a domain of weak and
scattered bits of evidence here, so it can't be expected that different
intelligent, rational, insightful observers will necessarily agree!


> > But I can't make this statement with enough confidence to say that
> > other possible solutions shouldn't be avidly pursued too. As far as
> > I can tell right now, they should be.
> >
>
> Yes. That is also my conclusion. I plan to attempt the shift
> of consciousness approach. Someone has to.

Well, best of luck ;-) You have my support, for what it's worth...


ben g
