From: Ben Goertzel (ben@goertzel.org)
Date: Wed Jun 26 2002 - 20:13:39 MDT
> Ben Goertzel wrote:
> >
> > I think that, if the posited organization is going to exist, we need
> > someone to "head" it, and this shouldn't be me or Eliezer or
> anyone else
> > with their own AGI design and their own agenda to push.
>
> I do not endorse the existence of such an organization. However, if it
> existed I would expect Nick Bostrom to chair it - I honestly
> can't think of
> any other candidate for the position. (With respect to John
> Smart, I can't
> see him in such a position any more than I or Ben.)
>
> That said: This is a fucking stupid suicidal idea.
>
> Sincerely,
> Eliezer.
Eliezer,
I think that the creation of such an organization right now would be *a bit
premature*, which is why I said 2-3 years in the future might make sense
(and that may be overoptimistic, depending on how much progress anyone's AGI
project makes during that time frame).
However, I really think it's silly to call the idea "stupid" or "suicidal."
At worst, in my view, it would be a useless sideshow; and at best it could
serve to infuse a very big decision with some additional wisdom.
Remember -- in my proposal, this is an *advisory* group anyway, so it
wouldn't have real power over what an AGI's owner does with it... How then
could it be suicidal?
What you propose instead seems to be: "The world should trust that whoever
first creates a seed AI is probably wise enough to make all decisions
regarding the advent of the Singularity." I think that proposal is pretty
darn dubious, because it rests on the psychological theory that only a person
of supreme wisdom could possibly create a seed AI -- and that theory strikes
me as just as dubious.
I don't intend to spend my time right now forming committees; I intend to
keep spending it on designing, engineering, testing and (later)
teaching an AGI. But if/when Novamente gets to near-human intelligence, I'm
going to be wise enough not to trust my own wisdom, and I'm going to do what
I've suggested: assemble a committee of Singularity wizards to help me
monitor Novababy's progress, help me teach the thing, and help make the deep
decisions the thing will lead to...
And I hope very much that, if *your* AGI design/engineering efforts bear
fruit and produce a near-human-level AGI, you *at that point* will have seen
the error of assuming that your AGI-creation prowess necessarily implies
your immense personal wisdom... and I hope that you will, at that point,
follow some methodology similar to what I've described.
-- Ben G