From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Sun Oct 23 2005 - 13:36:38 MDT
Michael Vassar wrote:
> Trying to build a human-derived AI is the general case of which
> building a human upload is a special case... I'm skeptical about the
> differences being unavoidable. Surely true given current computers,
> surely false given uploads. There is a substantial space of possible
> AGI design around "uploads".
This is true in principle, but not in practice. Yes, uploads occupy
a small region of cognitive architecture space within a larger region
of 'human-like AGI designs'. However, we can actually hit the narrower
region semi-reliably if we can develop an accurate brain simulation
and copy an actual human's brain structure into it. We cannot hit the
larger region reliably by creating an AGI from scratch, at least not
without an understanding of how the human brain works and how to
build AGIs considerably in advance of what is needed for uploading
and FAI respectively.
> Surely there are regions within the broader category which are
> safer than the particular region containing human uploads.
Almost certainly. We don't know where they are or how to reliably
hit them with an implementation, and unlike, say, uploading or FAI,
there is no obvious research path to gaining this capability. If
you know how to reliably build a safe, human-like AGI, feel free
to say how.
> but if someone else were to develop human-derived AIs and put them
> on the market, for instance, it would be foolish not to use them
> to assist your work, which would at that point be incredibly urgent
> and require the use of all possible tools that could be used to
> accelerate it.
I agree; I just don't think anyone is going to do it, and if they
did, someone would turn one into an Unfriendly seed AI within days.
Indeed, if someone did manage this, my existing model of AI
development would be shown to be seriously broken.
>> To work out from first principles (i.e. reliably) whether
>> you should trust a somewhat-human-equivalent AGI you'd need nearly as
>> much theory, if not more, than you'd need to just build an FAI in the
>> first place.
>
> That depends on what you know, and on what its cognitive capacities are.
> There should be ways of confidently predicting that a given machine does
> not have any transhuman capabilities other than a small set of specified
> ones which are not sufficient for transhuman persuasion or transhuman
> engineering.
Why 'should' there be an easy way to do this? In my experience, predicting
what capabilities a usefully general design will actually have is pretty
hard, whether you're trying to prove positives or negatives.
> It should also be possible to ensure a human-like enough goal system
> that you can understand its motivation prior to recursive
> self-improvement.
Where is this 'should' coming from? If you know how to do this, tell
the rest of us and claim your renown as the researcher who cracked a
good fraction of the FAI problem.
>> If you have the technology to do the latter, you might as well just
>> upload people; it's less risky than trying to build a human-like AGI
>> (though probably more risky than building an FAI).
>
> I assume you mean the former, not the latter.
Yes, sorry.
> Uploading a particular person might be (probably is) more difficult
> than the more general task of producing an (not necessarily perfectly)
> accurate simulation of a generic human brain which predictably displays
> no transhuman capabilities and only human capabilities that are rather
> easily observed (especially with the partial transparency that comes
> from being an AI).
Unless you're simulating the brain at the neuron level (including keeping
the propagation speed down to human levels) /and/ closely copying human
brain organisation, you simply can't generalise from 'humans behave like
this' to 'the AGI will behave like this'. Given this constraint on
structure, your options for getting bootstrap /content/ into the AGI are
(a) upload a human, (b) replicate the biological brain growth and human
learning process (which requires even more research, adds implementation
complexity and technical difficulty, and takes a lot of time), or (c) use
other algorithms
unrelated to the way humans work to generate the seed complexity. The
last option again discards any ability to make simple generalisations
from human behaviour to the behaviour of your human-like AGI. The second
option introduces even more potential for things to go wrong (due to more
design complexity) and even if it works perfectly it will produce an
arbitrary (and probably pretty damn strange) human-like personality, with
no special guarantees of benevolence. In fact, I find it highly unlikely
that any research group trying this would go all out on replicating the
brain accurately without being tempted to meddle and 'improve' the
structure and/or content, but I digress. Thus the first option, uploading,
looks like the best of the three to me if you're going to insist on
building a human-like AGI.
>> Yes, noting of course that it's extremely difficult to build a
>> 'human-like AI' that isn't already an Unfriendly seed AI.
>
> I strongly disagree with the above, except in so far as it's extremely
> difficult to build any "human-like AI". An AI that doesn't have
> access to the computer it is running on is just a person.
Trying to prevent an AGI from self-modifying is the classic 'adversarial
swamp' situation that CFAI correctly characterises as hopeless. Any
single point of failure in your technical isolation or human factors
(e.g. the AGI convinces a programmer to do something that allows it to
write arbitrary code) will probably lead to seed AI. The task is more
or less impossible even given perfect understanding, and perfect
understanding is pretty unlikely to be present.
> An AI which others can see the non-transparent thoughts of, and which
> others can manipulate the preferences and desires of, is in some
> respects safer than a human, so long as those who control it are safe.
This looks like a contradiction in terms. A 'non-transparent', i.e.
'opaque', AGI is one whose thoughts you /can't/ see; at best you get
high-level and fakeable abstractions. The problem of understanding what
a (realistic) 'human-like AGI' is thinking is equivalent to the problem
of understanding what a human is thinking given a complete real-time
brain scan, a challenge considerably harder than merely simulating the
brain.
> Finally, any AI that doesn't know it is an AI or doesn't know about
> programming, formal logic, neurology, etc. is safe until it learns
> those things.
I'll grant you that, but how is it going to be useful for FAI design
if it doesn't know about these things? How do you propose to stop
anyone from teaching a 'commercially available human-like AGI' these
skills?
> A small fraction of them would try to mess with their minds and
> break.
Not necessarily a problem given indefinite fine-grained backup-restore,
assuming you're not bothered by the moral implications.
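For what it's worth, the backup-restore point is just ordinary state
checkpointing. Here is a minimal, purely illustrative sketch in Python
(all names hypothetical, assuming the simulated mind's entire state can
be serialised and that 'fine-grained' just means cheap, frequent
snapshots):

    import copy

    class SimulatedAgent:
        """Stand-in for a human-like AGI whose full state is serialisable."""
        def __init__(self, state):
            self.state = state

        def snapshot(self):
            # Deep copy so later changes to the live state can't touch the backup.
            return copy.deepcopy(self.state)

        def restore(self, saved):
            self.state = copy.deepcopy(saved)

    agent = SimulatedAgent({"memories": [], "goals": ["baseline"]})
    checkpoint = agent.snapshot()             # taken before a risky experiment
    agent.state["goals"].append("corrupted")  # the mind-meddling goes wrong
    agent.restore(checkpoint)                 # roll back to the known-good state
    assert agent.state["goals"] == ["baseline"]

Whether doing that to something person-like is acceptable is, as noted,
a separate moral question.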
> However, it is unfortunately the case that AIs that could be seed AIs
> themselves are much more valuable to an AGI program than AIs which
> could not be seed AIs themselves.
Exactly.
* Michael Wilson