Re: AGI motivations

From: Michael Vassar (michaelvassar@hotmail.com)
Date: Sun Oct 23 2005 - 17:33:40 MDT


Sorry, I left off the first half of this e-mail.

Michael Wilson said
>Yes, uploads occupy a small region of cognitive architecture space within a
>larger region
>of 'human-like AGI designs'. However we can actually hit the narrower
>region semi-reliably if we can develop an accurate brain simulation
>and copy an actual human's brain structure into it. We cannot hit the
>larger safer region reliably by creating an AGI from scratch, at least not
>without an understanding of how the human brain works and how to
>build AGIs considerably in advance of what is needed for uploading
>and FAI respectively.

That assertion appears plausible but unsubstantiated to me. The
understanding of human brain function required to build a relatively safe
human-like AGI might be only trivially greater than that required to create
an upload, while the scanning resolution required might be much lower. It
may be much simpler to make a mind that will reliably not attempt to
transcend than to build one that can transcend safely. One way to make such
a mind is to upload the right person. It may also be that building a large
number of moderately different neuromorphic AIs (possibly based on medium-res
scans of particular brains, scans inadequate for uploading, followed by
repair via software clean-up), keeping them in AI boxes, and testing them
under a limited range of conditions similar to what they will actually face
in the world is easier than uploading a particular person.

> > Surely there are regions within the broader category which are
> > safer than the particular region containing human uploads.
>
>Almost certainly. We don't know where they are or how to reliably
>hit them with an implementation, and unlike say uploading or FAI
>there is no obvious research path to gaining this capability.

We know some ways of reliably hitting them, such as "don't implement
transhuman capabilities".

>If you know how to reliably build a safe, human-like AGI, feel free
>to say how.

Upload an ordinary person with no desire to become a god. That's one way.
Another may be to build an AGI that is such a person. How do you know what
they want? Ask them. Reliably detecting that a simulation is lying, and
detecting its emotions, should not be difficult.

>Indeed, if someone did manage this, my existing model of AI
>development would be shown to be seriously broken.

I suspect that it may be. Since you haven't shared your model I have no way
to evaluate it, but a priori, given that most models of AI development
including most models held by certified geniuses, are broken, I assume yours
is too. I'd be happy to work with you on improving it, and that seems to me
to be the sort of thing this site is for, but destiny-star may be more
urgent.
It's best to predict well enough to create, then stop predicting and create.
Trouble is, it's hard to know when your predictions are good enough.

> >> To work out from first principles (i.e. reliably) whether
> >> you should trust a somewhat-human-equivalent AGI you'd need nearly as
> >> much theory, if not more, than you'd need to just build an FAI in the
> >> first place.
> >
> > That depends on what you know, and on what its cognitive capacities are.
> > There should be ways of confidently predicting that a given machine does
> > not have any transhuman capabilities other than a small set of specified
> > ones which are not sufficient for transhuman persuasion or transhuman
> > engineering.
>
>Why 'should' there be an easy way to do this? In my experience predicting
>what capabilities a usefully general design will actually have is pretty
>hard, whether you're trying to prove positives or negatives.

We do it all the time with actual humans. For a chunk of AI design space
larger than "uploads", AGIs are just humans. Whatever advantages they have
will only be those you have given them, probably including speed and
moderately fine-grained self-awareness (or you having moderately
fine-grained awareness of them). Predicting approximately what new
capabilities a human will have when you make a small change to their
neurological hardware can be difficult or easy, depending on how well you
understand what you are doing. But small changes, that is, changes of
magnitude comparable to the range of variation among the human baseline
population, will never create large and novel transhuman abilities, while
lots of time and mere savant abilities may still be extremely useful.


