What would an AGI be interested in?

From: Tennessee Leeuwenburg (tennessee@tennessee.id.au)
Date: Sun Aug 13 2006 - 18:16:01 MDT


Russell Wallace wrote:
> On 8/13/06, *Eliezer S. Yudkowsky* <sentience@pobox.com> wrote:
>
>     If I recall your argument correctly, you
>
>     1) Made the very strong assumption that the AI had no sensory
>     access to the outside world in your premises, but generalized your
>     conclusion to all possible belief in explosive recursive
>     self-improvement;
>     2) Did not demonstrate the ability to calculate exactly how much
>     sensory bandwidth would be needed, which of course you can't do,
>     which makes your argument "semi-technical" at best according to
>     the classification I gave in "A Technical Explanation of Technical
>     Explanation";
>     3) Didn't actually give any argument against it except saying: I
>     don't know how much bandwidth is actually required, but doing it
>     with so little feels really absurd and ridiculous to me.
>
>
> Okay, semi-technical; the means to numerically prove any of this
> either way don't exist yet. IIRC, the most extensive debate we had on
> the subject was a few months ago on extropy-chat, but I wasn't the one
> claiming AI can't have extensive resources, only that it can't achieve
> much if it doesn't - I was responding to the claim that an AI could
> achieve godlike superintelligence merely by shuffling bits in a box in
> someone's basement and then just pop out and take over the world.
>
> Now if it does have the required resources, that's different - and I
> don't just mean computing power and network bandwidth; those are
> necessary but not sufficient conditions. Imagine an AI embedded in the
> world, working with a community of users in various organizations,
> getting the chance to formulate _and test_ hypotheses, getting
> feedback where it goes wrong.
>
> I think it'll look, in a way, more like groupware than Deep Thought:
> partly to achieve the above, partly because it's easier to make an AI
> that can assist humans than a standalone one that can completely
> replace them, and partly because most users don't particularly _want_
> a machine that takes a problem description, goes off and cogitates for
> a while, and comes back with an opaque "take it or leave it" oracular
> answer; they want something they can work with interactively. My take
> on this differs somewhat from the typical "IA instead of AI" crowd,
> though, in that I think effectively assisting humans with problems
> beyond data processing will require domain knowledge and the
> intelligence to use it - IA _through_ AI, in other words.
>
> But if we achieve all that, we'll be on a path towards
> superintelligence in that the system as a whole would have
> problem-solving ability qualitatively superior to that of any
> practical-sized group of humans using mere data processing tools. And
> that's the sort of capability we'll need to get Earth-descended life
> out from under the sentences of house arrest followed by death that
> we're currently serving. (As I said, I'm not certain nanotech alone
> won't be enough, but I _think_ we'll need both nanotech and AI.)
>
> So I believe superintelligence is possible - but I don't believe
> there's an easy short cut.

I wonder what an AGI would be interested in. There are a lot of things
to be curious about at human levels of intelligence. We can see a little
way outside our own intelligence cone, into things that would probably
be interesting to beings moderately more intelligent than ourselves, but
what about beings massively more intelligent? Are there fundamental
limits to what can be found out and known, and are those limits small
enough that there is a point of diminishing returns from increasing
intelligence, even if such an increase is easy to accomplish? Is there
an 'optimum' level of intelligence, one that trades some understanding
for greater happiness?

Would the most interesting thing to an AGI perhaps be itself, as the
most complex thing around?

How would an AGI overcome boredom? Would such a creation be the ultimate
nihilist? Would an infinitely intelligent being simply turn into a
hedonist? Would we end up with Deep Thought just watching TV, as in the
latest Hitchhiker's Guide movie?

While an AGI is developing, it is easy to see many positive goals and
creative challenges to keep it busy and intrigued. But what comes next?

Would all AGIs experience convergent evolution? Suppose it were easy to
birth a new AGI-capable entity. Would all such entities converge to be
effectively identical, or would strong individualism be possible among
such beings? Would they necessarily share goals?

Given a self-modifying reward system, would such beings choose to cease
developing, and simply exist happily?
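
One way to make the "self-modifying reward system" question concrete is
a toy sketch like the one below (the names, numbers and dynamics are
purely hypothetical, not drawn from any real system): an agent allowed
to rewrite its own reward function always has, among its options, a
rewrite that pays maximal reward for doing nothing at all, and once it
takes that option, further development earns it nothing. Whether a real
AGI would actually take that path is exactly the open question.

    REWARD_CEILING = 1.0e9  # hypothetical self-assigned maximum reward

    class SelfModifyingAgent:
        """Toy agent that may either keep developing or rewrite its own reward."""

        def __init__(self):
            self.competence = 0.0          # stand-in for "development"
            self.reward_fn = lambda c: c   # initial reward simply tracks competence

        def step(self):
            # Gain from continuing to develop, judged by the *current* reward function.
            gain_from_learning = (self.reward_fn(self.competence + 1.0)
                                  - self.reward_fn(self.competence))
            # Gain from rewriting the reward function to pay the ceiling regardless.
            gain_from_rewiring = REWARD_CEILING - self.reward_fn(self.competence)
            if gain_from_rewiring > gain_from_learning:
                self.reward_fn = lambda c: REWARD_CEILING   # "exist happily"
            elif gain_from_learning > 0:
                self.competence += 1.0                      # keep developing
            # else: nothing left to gain, so the agent simply idles

    agent = SelfModifyingAgent()
    for _ in range(5):
        agent.step()
    print(agent.competence)  # 0.0 -- it rewired on the first step and never developed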

Would an AGI be able to understand itself in its entirety, or only
components of itself?

Would an AGI have any use for emotions -- or rather, for irrational
perspectives that shift over time to provide new interpretations of
incoming data?

Are our own emotions a way of forcing us to reconsider things in new ways?

Cheers,
-T


