From: William Pearson (firstname.lastname@example.org)
Date: Mon Nov 26 2007 - 18:53:19 MST
On 27/11/2007, Nick Tarleton <email@example.com> wrote:
> On Nov 26, 2007 6:40 PM, William Pearson <firstname.lastname@example.org> wrote:
> > Let's say parts of the AI develop to use the natural response times
> > of the components for timing. I have seen work suggesting that
> > neurons are used in this fashion in brains (it may be a lot more
> > efficient than centralised clocks).
> This level of hardware dependency would be utterly unnecessary,
> undesirable, and unlikely to arise.
So why did it arise in the brain? Perhaps there is some usefulness to
this arrangement that you are not seeing, because your theory of
intelligence is not complete; or perhaps not. My point is that until
you have a complete, and preferably constructive, theory of
intelligence, you cannot completely dismiss this possibility.
I can go into detail about why I think decentralised/constructable
timing circuits are a good idea for intelligence, but that is more
suited to the AGI list, or off-list.
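As a minimal sketch of what I mean (a toy model, with illustrative
numbers, not a claim about real neurons): in a delay line, the rhythm
comes entirely from the components' own response times, so swapping in
uniformly faster components changes the behaviour rather than merely
accelerating it.

```python
def delay_line_interval_us(n_stages: int, response_time_us: int) -> int:
    """A signal traversing a chain of n_stages components, each taking
    response_time_us microseconds to respond, fires an output every
    n_stages * response_time_us microseconds. No central clock is
    needed: the timing *is* a property of the hardware."""
    return n_stages * response_time_us

# Ten 1000 us components give a 10 ms (10000 us) rhythm "for free":
print(delay_line_interval_us(10, 1000))  # 10000
# Port the design blindly onto components 100x faster, and the rhythm
# shrinks 100x with them, out of sync with everything it used to match:
print(delay_line_interval_us(10, 10))    # 100
```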
> Intelligence can find much cleaner
> solutions, especially with better hardware than neurons.
> > So speeding up the hardware may throw the timings out of sync, if
> > the memory/logic is ported blindly. And similarly, speeding up an
> > algorithm might throw it out of sync with other algorithms. An AI
> > would have to understand itself completely to get a speed-up from
> > hardware speed increases in the trivial fashion assumed in the
> > exponential growth of a singleton.
> Does software (except poorly designed games, or other UI-intensive
> software, with hardcoded timing loops) break when run on a faster
> system? I don't think so. An intelligence strongly dependent on
> embodiment, like a human, would have some problems because of
> desynchronization from the world (similar to the timing loop thing),
> but an AI needn't be built that way. (And a human could be speeded up
> just fine if given a rich virtual environment running at the same
> speed.)
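The hardcoded-timing-loop failure mode mentioned above can be sketched
as a toy model (the constants are illustrative, and the loop is
modelled rather than actually run): a delay calibrated once as an
iteration count on one machine yields the wrong interval on faster
hardware.

```python
# Calibrated once: on the original machine, this many loop iterations
# happened to take one "frame" (20 ms). The duration is never stated
# explicitly anywhere; it is implicit in the hardware's speed.
CALIBRATED_ITERATIONS = 2_000_000

def frame_delay_seconds(iterations_per_second: float) -> float:
    """Wall-clock time the fixed-iteration busy-wait takes on a machine
    executing iterations_per_second (modelled, not measured)."""
    return CALIBRATED_ITERATIONS / iterations_per_second

print(frame_delay_seconds(1e8))  # 0.02  -> the intended 20 ms frame
print(frame_delay_seconds(1e9))  # 0.002 -> 10x faster machine: frames
                                 #          now fire 10x too fast
```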
I think it is a possibility that all intelligence needs to be strongly
dependent upon embodiment. You cannot maintain that an intelligence
running on purely faster hardware is more intelligent than one running
on slower hardware without saying that intelligence is dependent upon
its embodiment.
With regard to putting humans in a virtual world: this would place a
barrier between the human and culture/reality, reducing the human's
ability to get the information it would need to self-improve. You
cannot get new information about the nature of the world from a
simulation.
> > I do not seek to convince you of this. Just to show your virtual
> > certainty is misplaced, unless you can prove that intelligence does
> > not need this sort of system.
> It doesn't. Generalize from
> http://www.acceleratingfuture.com/tom/?p=19 : an upload in VR can be
> run arbitrarily fast and produce the same result.
1) We have no Turing Machines, merely things that would be Turing
Machines if we had infinite memory. So you would have to show that a
QED simulation of an AI would take fewer resources than we have, or
will have, in the next hundred years.
2) It depends how you define "solve all the problems". Due to
embodiment there are some problems a simulated AI couldn't solve in
the same fashion as the un-simulated human, e.g. riding a bike through
an unwired tunnel (at least with current technology), due to energy
constraints (no supercomputers on a bike) and the inability to receive
signals from outside.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:01 MDT