RE: FAI: Collective Volition

From: Ben Goertzel (ben@goertzel.org)
Date: Wed Jun 02 2004 - 06:29:33 MDT


Eliezer,

About this idea of creating a non-sentient optimization process to:

A) predict possible futures for the universe
B) analyze the global human psyche and figure out the "collective
volition" of humanity

instead of creating a superhuman mind....

I can't say it's impossible that this would work. It goes against my
scientific intuition, which says that sentience of some sort would
almost surely be needed to achieve these things, but my scientific
intuition could be wrong. Also, my notion of "sentience of some sort"
may grow and become more flexible as more AGI and AGI-ish systems become
available for interaction!

However, it does seem to me that each of problems A and B above is
significantly more difficult than creating a self-modifying AGI system.
Again, I could be wrong on this, but ... Sheesh.

To create a self-modifying AGI system, at the very worst one has to
understand how the human brain works, and then emulate something like
it in a more readily self-modifiable medium such as computer software.
This is NOT the approach I'm taking with Novamente; I'm just pointing
it out to place an upper bound on the difficulty of creating a
self-modifying AGI system. The biggest "in principle" obstacle here is
that it could conceivably require an insane amount of computational
power -- or quantum computing, quantum gravity computing, etc. -- to
get AGI to work at the human level (for example, if the microtubule
hypothesis is right). Even so, we would then just have the engineering
problem of creating a more mutable substrate than human brain tissue,
and reimplementing the human brain's algorithms within it.

On the other hand, the task of creating a non-sentient optimization
process of the sort you describe is a lot more nebulous (due to the lack
of even partially relevant examples to work from). Yeah, in principle
it's easy to create optimization processes of arbitrary power -- so long
as one isn't concerned about memory or processor usage. But
contemporary science tells us basically NOTHING about how to make
uber-optimization-processes like the one you're envisioning. The ONLY
guidance it gives us in this direction pertains to "how to build a
sentience that can act as a very powerful optimization process."
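
To illustrate what I mean by "easy in principle": here is a minimal
exhaustive-search sketch in Python. It is purely illustrative -- the
function name and the objective are made up for this example -- and it
assumes a finite, enumerable candidate space and a computable
objective, which is exactly where the "in principle" caveat bites:

    # Toy "arbitrarily powerful" optimizer: exhaustive search.
    # Given a computable objective over a finite candidate space, it
    # finds a true global optimum -- but its running time scales with
    # the size of the space, which is the whole catch.
    def brute_force_optimize(candidates, objective):
        best, best_score = None, float("-inf")
        for c in candidates:
            score = objective(c)
            if score > best_score:
                best, best_score = c, score
        return best

    # Example: maximize the number of 1-bits over all 20-bit integers.
    # 2**20 candidates is tractable; interesting search spaces are
    # astronomically larger, so "easy in principle" buys us nothing.
    best = brute_force_optimize(range(2 ** 20),
                                lambda x: bin(x).count("1"))

The point of the sketch is just that "optimization power" without
resource bounds is trivial; all the real difficulty lives in doing it
efficiently.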

So, it seems to me that you're banking on creating a whole new branch
of science basically from nothing, whereas to create AGI one MAY not
need to do that; one may only need to "fix" the existing fields of
cognitive science and AI.

It seems to me that, even if what you're suggesting is possible (which I
really doubt), you're almost certain to be beaten in the race by someone
working to build a sentient AGI.

Therefore, to succeed with this new plan, you'll probably need to create
some kind of fascist state in which working on AGI is illegal and
punishable by death, imprisonment, or lobotomy.

But I'd suggest you hold off on taking power just yet, as you may
radically change your theoretical perspective again next year ;-)

-- Ben G


