Re: FAI (aka 'Reality Hacking') - A list of all my proposed guesses (aka 'hacks')

From: Russell Wallace (russell.wallace@gmail.com)
Date: Fri Jan 28 2005 - 10:23:15 MST


On Fri, 28 Jan 2005 10:27:35 -0500, Ben Goertzel <ben@goertzel.org> wrote:
>
> It's really an open question what
>
> L(P, nanotech supercomputer) = ???
>
> Probably it will have aspects of AIXI and aspects of brain-ish architecture,
> and some entirely different aspects as well.

Hmm... I doubt a nanotech supercomputer will have more than five or
ten orders of magnitude advantage over the brain in terms of raw
performance (after all, the brain is already a nanotech computer!),
which I wouldn't expect to make a lot of difference to the optimal
software architecture (I'd expect to need thousands to millions of
orders of magnitude performance difference for that).

I agree L(P, nanotech supercomputer), or practical AI approximations
thereof, will probably not closely resemble the brain, but for
different reasons:

- An AI needs to be programmable; the brain needs to be evolvable;
different constraints.
- Not only do we need to be able to understand an AI well enough to
build it [1], we also need to be able to understand it well enough to
be confident of its Friendliness (not just right now, but projected
over however many generations of whatever learning/self-improvement
it'll be doing). The brain isn't designed for that.
- A von Neumann computer is much more flexible than biological neurons
(you can "rewire" things by storing pointers, on a timescale of
microseconds, which is much faster than neurons forming new
connections). (Not that supercomputers are von Neumann machines,
strictly speaking, but they resemble them more closely than the brain
does.)
- Tradeoffs between serial speed and parallelism.
etc.
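
The pointer-rewiring point above can be made concrete with a small
sketch (purely illustrative, not from the original post): on a von
Neumann machine, redirecting a connection is a single reference
assignment, whereas a neuron must physically grow a new synapse.

```python
# Hypothetical sketch: connections as stored pointers.
# Names (Node, targets) are invented for illustration.

class Node:
    """A processing unit whose outgoing connections are plain references."""
    def __init__(self, name):
        self.name = name
        self.targets = []  # connections are just stored pointers

a, b, c = Node("a"), Node("b"), Node("c")
a.targets.append(b)   # "wire" a -> b

# Rewiring is one pointer update: an O(1) store taking nanoseconds to
# microseconds, versus hours or days for a neuron to form a new connection.
a.targets[0] = c      # now a -> c
```

The same flexibility is what lets software architectures be revised
wholesale in ways that evolved neural wiring cannot.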

I'm curious: what aspects of AIXI do you see as being potentially
useful for practical AI?

> The mixture of references to Chaitin's omega number and Tipler's Omega Point
> is a bit confusing, no?

In truth I didn't find it so, but that may be because I didn't do more
than glance at the Omega Point references; in the context of Marc's
hypothesis 2, which was the only one I was responding to, "omega"
referred only to Chaitin's work.

[1] In a sense this is not strictly true; I can think of a potential
way to get at least semi-hard takeoff going: build the most powerful
AI system you can, put it in control of self-replicating nanomachinery
that can make use of naturally occurring raw materials, and let
evolution (which at this stage would be at least partially directed
rather than relying purely on random mutation) take over from there.
(Kids, don't do this experiment at home [2] unless supervised by an
adult [3].)

[2] i.e. in your Hubble volume.

[3] i.e. a Transcendent Power.

- Russell



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT