Re: Some considerations about AGI

From: Mike Dougherty
Date: Mon Jan 23 2006 - 22:30:47 MST

I am curious to know whether the author of "Becoming a Seed AI" would
consider an actual test for what it takes to be a friendly AI developer.
Anyone who passed the test would be admitted to the resource pool and given
responsibility and involvement according to demonstrated talent and
understanding. This same test would then be used to test the
evolved/created AGI. When the AGI is able to pass the test to become a
friendly AI developer, I would argue that it should be given the same
status as any other "new guy" on the team. If this doesn't prove
consciousness, self-awareness, intelligence, or whatever else anyone wants
to measure, then I doubt any test could be sufficient to serve as 'proof.'

I don't know that a definitive test is intrinsically valuable anyway. Few
of our coworkers would pass any rigorously scientific measurement for
'intelligence' - and probably fewer of our neighbors. By "Shock Level
Analysis," an AGI worth testing would probably be SL3 at inception, placing
it in a population of fewer than one hundred thousand - what group is
qualified to test that level of mental adaptation and preparedness for the
impending Singularity? If we live among and work with non-human AI to the
mutual advantage of both parties, what purpose is served by an ego-driven
test of superiority?

On 1/23/06, Eliezer S. Yudkowsky wrote:
> Richard Loosemore wrote:
> >
> > 1) Give an introduction to Heim's theory of quantum gravity, in
> > sufficient detail to allow a Physics graduate to understand it.
> Good heavens. For a nonhuman paired with a human physics graduate, this
> is a superintelligence test, not an AGI test.
> RGE Corp. made some audacious claims, but this isn't fair even to them.
> Making some allowance for hype, I think that a fair challenge to RGE, or
> any other commercial AGI company, is handing them a task sufficiently
> far beyond state-of-the-art that they could beat up Google if they
> succeeded. Say, scoring above 1000 on the SAT - though maybe that's
> still much too difficult.
> Dan Clemmensen wrote on 2002.03.01:
> > Arthur T. Murray wrote:
> >
> >> Now that Technological Singularity has arrived in the form of
> >> -- Robot Seed AI --
> >> you all deserve this big Thank_You for your successful work.
> >
> > Sorry, Arthur, but I'd guess that there is an implicit rule
> > about announcement of an AI-driven singularity: the announcement
> > must come from the AI, not the programmer. Now if you claim to
> > be a composite human/AI based SI, the rules are different:
> > I personally would expect the announcement in some unmistakable form
> > such as e.g. a message in letters of fire written on the face
> > of the moon.
> --
> Eliezer S. Yudkowsky
> Research Fellow, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:55 MDT