From: Joost Rekveld (lists@lumen.nu)
Date: Fri Feb 08 2008 - 17:02:22 MST
Lucas,
I think your quote is not by John Smart, but a paraphrase of his
ideas that refers to the quote just before it.
It doesn't say that true AI would bypass 'problems of biological
complexity and ethics' in any general way; I think it contrasts a
'wetware' approach to AI (which entails ethical problems and
problems of biological complexity) with a 'true' approach to AI (which
is apparently based on some other 'dry' technology still unknown to us).
That, at least, is how I understand it.
On that page I do find a curious contradiction. On the one hand
there is a view of intelligence as having 'central algorithms', which
I interpret as a kind of computationalist view of intelligence that
seems rather common amongst computer people. On the other hand the
page seems to suggest that ultimately a sufficiently intelligent
entity can only start a recursive, accelerating explosion of
self-improvement by tweaking its own hardware.
Am I the only one to see a glaring contradiction there?
My (not very informed, mind you!) hunch is that once we get into the
realm of 'fully programmable', nanotech-like, totally reconfigurable
hardware, we can only deal with the extra possibilities that gives
us by forgetting about ideal Turing machines with infinite tape and
starting to worry about the complex and reconfigurable bodies that
enable this truly artificial intelligence. Ultimately this means
worrying about the real world, real molecules, real interactions, not
about virtual ones. Otherwise I don't think we'll get much beyond the
idea that you can simulate anything on anything that can calculate.
Why would we then need any reconfigurable hardware when all we need
is more speed and more storage?
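To make that universality point concrete: any fixed rule table runs the same on any substrate that can calculate, which is exactly why speed and storage can look like all that matters. A minimal sketch in Python (my own toy example, not from any post in this thread; the machine, its state names, and its rules are all made up for illustration) simulates a small Turing machine that increments a binary number:

```python
# A toy Turing-machine simulator: the rule table below increments a binary
# number, but the same run_tm() loop executes any rule table you feed it.

# (state, read_symbol) -> (next_state, write_symbol, head_move)
INCREMENT = {
    ("right", "0"): ("right", "0", +1),  # scan right to the end of the number
    ("right", "1"): ("right", "1", +1),
    ("right", "_"): ("carry", "_", -1),  # hit the blank, start adding 1
    ("carry", "1"): ("carry", "0", -1),  # 1 + 1 = 0, keep carrying left
    ("carry", "0"): ("done",  "1", -1),  # absorb the carry
    ("carry", "_"): ("done",  "1", -1),  # carry past the leftmost digit
}

def run_tm(transitions, tape_str, start="right", blank="_", max_steps=10_000):
    """Run a Turing machine until it reaches the 'done' state."""
    tape = {i: c for i, c in enumerate(tape_str)}  # sparse, 'infinite' tape
    head, state = 0, start
    for _ in range(max_steps):
        if state == "done":
            break
        state, write, move = transitions[(state, tape.get(head, blank))]
        tape[head] = write
        head += move
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

print(run_tm(INCREMENT, "1011"))  # 1011 (11) incremented -> 1100 (12)
```

Nothing in the loop cares what hardware it runs on; only the rule table and the tape contents matter, which is the sense in which "anything that can calculate" suffices.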
I am painfully aware that I'm making sweeping suggestions on a list
that is teeming with specialists who know infinitely more about these
issues than I do. My sincere apologies if what I say is complete
gibberish; I am happy to learn, just point me to some books.
with kind regards,
Joost Rekveld.
On 8 Feb, 2008, at 11:17 PM, Lucas Sheehan wrote:
> "True /Artificial Intelligence would bypass problems of biological
> complexity and ethics, growing up on a substrate ideal for initiating
> recursive self-improvement. (fully reprogrammable, ultrafast, the
> /AI's "natural habitat".)" - John Smart
>
> From - http://sl4.org/wiki/SL4Lexicon/Recursive_Self-Improvement
>
> If I am misquoting please let me know; it's a bit unclear from the
> formatting whether it's a Smart quote or not.
>
> Anyway! Does that statement bother anyone else? I am taking issue
> with the requirement to bypass the "biological complexity" and, to a
> lesser degree, "ethics". Granted, it appears to be a logical step in
> thought based on our current tools and ability. We skip the
> "biological complexity" by using our understanding of mathematics and
> computers, or more generally technological systems, as our substrate.
> However it seems self-limiting to state it in such an absolute way,
> i.e. "True". Am I being overly critical or is this reasonable?
>
> Lucas.
-------------------------------------------
Joost Rekveld
----------- http://www.lumen.nu/rekveld
-------------------------------------------
"There are no passengers on spaceship earth.
We are all crew."
(Marshall McLuhan)
-------------------------------------------
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:02 MDT