From: Brian Atkins (brian@posthuman.com)
Date: Tue Jan 25 2005 - 12:53:37 MST
Ben Goertzel wrote:
>
>>I think what Eliezer just posted fits here too:
>>
>>"But I do not know how to calculate the space of AGI programs that go
>>FOOM. (It's an odd inability, I know, since so many others seem to be
>>blessed with this knowledge.) I don't know Novamente's source code, and
>>couldn't calculate its probability of going FOOM even if I had it. I
>>just know the first rule of gun safety, which is that a gun is always
>>loaded."
>
>
> IMO, this is foolish.
>
> It is obvious that a GP-based optimizer running on fewer than a million PCs
> (to be conservative) is not gonna take off, transcend, become self-aware,
> etc.
>
> It's obvious that Cyc, SOAR and EURISKO are not going to do so.
>
> True, we can't prove this rigorously -- but we also can't prove rigorously
> that any given virus in a bio lab isn't going to mutate into a horrible
> disease that's going to kill us all next week. On these and many other
> matters, we rely on scientific intuition bolstered by much relevant
> knowledge.
>
I'm sorry, but I again think your analogy only appears to prove the
point, and it does so by sweeping aside AGI-specific qualities.
Although I am not a bio expert, it is my understanding that bio labs
have various levels of safety protocols, developed to deal safely with
unknown organisms suspected of being dangerous. But there are at least
two problems with the analogy:
1. Usually the lab begins using strong protocols _after_ the organism
has already shown it is dangerous - but with an AGI, waiting that long
before applying the strongest safety protocols may be too late. Bio
labs have the luxury of fooling around with strange, totally unknown
things under low safety protocols initially, because if something does
go wrong it won't be the end of the world. With an AGI, it's probably
not as simple as quarantining a lab or a small geographic area when
something bad happens.
2. The bio protocols have themselves been developed from the extremely
strong knowledge we have about how to protect against bio dangers - but
for AGIs we do not yet have such widely accepted protocols, or enough
knowledge to create real safety with high confidence (IMO).
Clearly, AGI-specific safety protocols need to be developed and refined
as we gain more knowledge, and they will almost certainly always have
to be stricter in some ways than those of bio or other areas of science
if we are going to pay more than lip service to the issue. Which makes
sense, because you are attempting to form last-resort protections
against something that is a heck of a lot smarter than a virus.
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.intelligence.org/