From: Ben Goertzel (ben@goertzel.org)
Date: Sun Jan 23 2005 - 12:24:12 MST
> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org]On Behalf Of Brian
> Atkins
> Sent: Sunday, January 23, 2005 1:35 PM
> To: sl4@sl4.org
> Subject: Re: When does it pay to play (lottery)?
>
>
> David Clark wrote:
> > The chance that *any* implementation of AI will
> > *take off* in the near future is absolutely zero. We haven't the
> > foggiest clue exactly what it will take to make a human level AI, let
> > alone an AI capable of doing serious harm to the human race.
>
> Your two statements of certainty are in direct conflict with each other,
> so I don't see how you can hold both at the same time.

I don't see that his attitude is self-contradictory.

I would say "The chance that anyone will build a ladder to Andromeda
tomorrow is effectively zero. We don't really have any clear idea what
technology it would take to build such a ladder." There is no contradiction
there: I don't know how to build such a ladder, but I do know that none of
our current methods come close to sufficing.

Contrary to David Clark, I do NOT think that is the case with AGI. IMO, the
chance of an AI "taking off" in the near future is low, but not nearly so
low as the chance of a human building a ladder to Andromeda.

And of course I think that I do have a very clear idea of what it will take
to make a human-level AI... ;-)

By the way, in a prior post David Clark said that he didn't agree with my
"agents-based" approach to AGI. In fact, Novamente is not really
agents-based under any sensible interpretation of that term (I realize that
the word "agent" has sometimes been stretched beyond all sensible meaning
in the computer science community). Webmind, the AI system I worked on in
the late 1990s and in 2000, was agents-based, but Novamente is not (though
Novamente and Webmind share many other commonalities).

-- Ben G