From: Ben Goertzel (ben@goertzel.org)
Date: Fri Jun 04 2004 - 11:55:13 MDT
Eliezer,
> You speak of my overconfidence, I who propose failsafes and backup plans,
> and write papers that describe possible classes of error in my systems,
> and do not try to guess myself what morality humanity will want in five
> thousand years. You show no sign of this *in-practice* humility. You
> speak of my arrogance and overconfidence, and you show not one sign of
> taking safety precautions far in advance, or considering the errors you
> might have made when you make your proposals.
This particular sub-thread (how you think I'm not "safe" enough in my
approach to AGI) has been repeated too many times; I'm sure it's very,
very boring to the members of this list by now.
As I've said before, I don't think any of us knows enough yet to know
what sorts of precautions will need to be taken to mitigate the risk of
AGIs going bad. I think we will be able to discover this through
(non-mad) scientific experimentation with infrahuman AGIs.
I think that, at this point, when we understand so little about AGI
itself, a lot of speculation about safeguards and abstract Friendliness
theory is largely a waste of time.
-- Ben G