From: Brian Atkins (brian@posthuman.com)
Date: Mon Jan 24 2005 - 18:38:41 MST
David Clark wrote:
>
> If you have all this
> proof and evidence that AI is so scary or so imminent please share your
> sources.
>
I know you want to drop this topic, but I think you're still missing the
point I've been trying to make all along, which is that neither I nor you
nor anyone else yet has enough proof to take strong stances regarding AGI
safety. I have not made any definite claims, other than pointing out
that uncertainties exist. You, on the other hand, have made several strong
assertions in the last few posts which, if they turn out to be wrong,
could be extremely dangerous.
If you were simply being modest when you said that no one knows enough
about AGI to create one yet, then by all means let us know the details
of the knowledge that backs up your AGI assertions. If not, then you must
agree with me that there is no basis for such strong assertions, and
that for safety's sake we should proceed with AGI development as if those
assertions will turn out to be wrong.
Maybe you haven't yet imagined and internalized the feeling of
accidentally igniting the atmosphere. If you had, I can't see why
you would make any unproven assertions about practices and/or
assumptions that would reduce AGI development safety. Put yourself in
Oppenheimer's shoes and spend some time there, because if you are
planning to actually attempt coding an AGI (even, in my opinion, a
prototype), you are effectively placing yourself in such a position.
I think what Eliezer just posted fits here too:
"But I do not know how to calculate the space of AGI programs that go
FOOM. (It's an odd inability, I know, since so many others seem to be
blessed with this knowledge.) I don't know Novamente's source code, and
couldn't calculate its probability of going FOOM even if I had it. I
just know the first rule of gun safety, which is that a gun is always
loaded. Even if I had a mathematical proof that the gun wasn't loaded
(which I don't) I would treat the gun as if it were loaded anyway, to
avoid forming bad gun handling habits.
This is kindergarten stuff. I know this. Harvey knows this. Anyone else
who deals professionally with a risk to life and limb knows this. People
who don't know this win Darwin Awards. The only problem with the field of
AI is that one project that doesn't understand the kindergarten-level rules
of safety can potentially take the planet down with them. Otherwise
natural selection would take care of the problem, just like it takes care
of kids who think that guns aren't loaded."
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.intelligence.org/