Re: Donaldson, Tegmark and AGI

From: Russell Wallace (russell.wallace@gmail.com)
Date: Sun Aug 13 2006 - 17:44:13 MDT


On 8/13/06, Eliezer S. Yudkowsky <sentience@pobox.com> wrote:
>
> If I recall your argument correctly, you
>
> 1) Made the very strong assumption that the AI had no sensory access to
> the outside world in your premises, but generalized your conclusion to
> all possible belief in explosive recursive self-improvement;
> 2) Did not demonstrate the ability to calculate exactly how much sensory
> bandwidth would be needed, which of course you can't do, which makes
> your argument "semi-technical" at best according to the classification I
> gave in "A Technical Explanation of Technical Explanation";
> 3) Didn't actually give any argument against it except saying: I don't
> know how much bandwidth is actually required, but doing it with so
> little feels really absurd and ridiculous to me.
>

Okay, semi-technical; the means to numerically prove any of this either way
don't exist yet. IIRC, the most extensive debate we had on the subject was a
few months ago on extropy-chat, but I wasn't the one claiming AI can't have
extensive resources - only that it can't achieve much without them. I was
responding to the claim that an AI could achieve godlike superintelligence
merely by shuffling bits in a box in someone's basement, then popping out
and taking over the world.

Now if it does have the required resources, that's different - and I don't
just mean computing power and network bandwidth; those are necessary but not
sufficient conditions. Imagine an AI embedded in the world, working with a
community of users in various organizations, getting the chance to formulate
_and test_ hypotheses, and getting feedback where it goes wrong.

I think it'll look more like groupware than Deep Thought: partly to achieve
the above, partly because it's easier to make an AI that can assist humans
than a standalone one that can completely replace them, and partly because
most users don't particularly _want_ a machine that takes a problem
description, goes off and cogitates for a while, and comes back with an
opaque "take it or leave it" oracular answer - they want something they can
work with interactively. My take differs somewhat from the typical "IA
instead of AI" crowd, though, in that I think effectively assisting humans
with problems beyond data processing will require domain knowledge and the
intelligence to use it - IA _through_ AI, in other words.

But if we achieve all that, we'll be on a path towards superintelligence,
in that the system as a whole will have problem-solving ability
qualitatively superior to that of any practical-sized group of humans using
mere data processing tools. And that's the sort of capability we'll need to
get Earth-descended life out from under the sentence of house arrest
followed by death that we're currently serving. (As I said, I'm not certain
nanotech alone won't be enough, but I _think_ we'll need both nanotech and
AI.)

So I believe superintelligence is possible - but I don't believe there's an
easy shortcut.


