From: Ben Goertzel (ben@goertzel.org)
Date: Wed Jun 26 2002 - 15:35:19 MDT
About Eugen's suggestion that FAI and SIAI are dangerous...
It is certainly worth seriously considering such a suggestion.
But it doesn't take much thought to come to the opposite conclusion.
In my view, the arguments for FAI and SIAI being dangerous are still highly
unclear.
I can understand an argument that *AGI development itself* is dangerous to
the human race. It is.
This was part of Bill Joy's famous Luddite argument. There are three basic
counterarguments to it:
1) the benefits to humans outweigh the risks, probabilistically
2) the human perspective is narrow, so AGI should be pursued even if it's
bad for humans
3) there's no practical way to stop AGI development, so why bother yakking
about a non-real option
But consider what happens if Friendly AI thinking and organizing ceases.
Would this decrease the chance of AGI happening? Hardly. Nearly none
of the AI work going on today is in any way directly connected to Friendly
AI. Although I have little respect for the vast bulk of mainstream AI work,
I can see that the CS, cognitive science, philosophy of mind,
and neuroscience communities are gradually moving toward a better and better
understanding of how to build AI. I don't see the Friendly AI meme as
contributing substantially to AI development at this moment. Eliezer is the
only AI researcher I know of for whom Friendly AI is truly central to his
approach.
Rather, if FAI were to disappear, the inevitable march toward AGI would
continue... and the odds of getting an UNfriendly AI would be greater
still....
So even if you think AGI is a big danger and should be stopped, ask
yourself: if AGI *isn't* stopped, isn't FAI a good thing to have around, to
at least minimize the odds of disaster?
I hasten to add that, personally, I think AGI should be pursued avidly
[though not recklessly], in spite of the dangers....
-- Ben G