From: Brian Atkins (brian@posthuman.com)
Date: Mon Jan 24 2005 - 10:39:21 MST
Ben Goertzel wrote:
> Hi,
>
>
>>It's interesting that you are using the same flawed class of analogy
>>that Ben used in his response. The answer is the same one I gave him:
>>your analogy is incorrect because, using existing physics, we know in
>>great detail whether a given car design will actually result in a
>>working car. With a given AGI design we do not know at all (you may
>>think you know, but if I ask you to prove it, you will be unable to
>>provide anything near the level of concrete proof of design-worthiness
>>that a car designer could provide using a physics model).
>
>
> Hmmm.... Brian, IMO, this is not quite correct.
>
> We know, in great detail, that Cyc, SOAR, and GP (to name three AI
> systems/frameworks) will not result in an AI system capable of a hard
> takeoff.
>
> And, we know this with MORE certainty than we know that no one now knows how
> to build a ladder to Andromeda.
>
> IMO, it's more likely that next year some genius scientist will come up
> with a funky self-organizing nano-compound creating a self-growing
> ladder to Andromeda than that one of these simplistic so-called AI
> systems will self-organize into a hard takeoff. Not all scientists
> would agree with me on this, but I guess most would.
>
Disregarding issues I still have with your analogies, I think you are
sidestepping David's original statement, which concerned *any* AGI
implementation in the *near future*. I interpret that to include things
like Novamente within the remainder of this decade, but perhaps that's
not what he meant?

Here's the quote again:
"The chance that *any* implementation of AI will
*take off* in the near future is absolutely zero. We haven't the foggiest
clue exactly what it will take to make a human level AI, let alone an AI
capable of doing serious harm to the human race."

That, combined with his casual "Oh, and even if it does take off, it
will be constrained by current-day hardware," kind of set me off. These
aren't the kinds of attitudes you would expect or hope to hear from
people doing this kind of work.

--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.intelligence.org/