From: Michael Vassar (firstname.lastname@example.org)
Date: Sun Oct 23 2005 - 12:01:20 MDT
Michael Wilson said:
> > It is possible that a non-Transhuman AI with a human-like motivational
> > system could be helpful in designing and implementing an analytically
> > tractable motivational system.
>Well sure, for the same reason that it would be great to have some
>intelligence-augmented humans or human uploads around to help design
>the FAI. But actually trying to build one would be incredibly risky and
>unlikely to work, even more so than independent IA and uploading
>projects, so this observation isn't of much practical utility. What I
>think /may/ be both useful and practical is some special purpose tools
>based on constrained, infrahuman AGI.
I don't see the reasoning behind this. Trying to build a human-derived AI
is the general case, of which building a human upload is a special case.
Surely there are regions within the broader category that are safer than
the particular region containing human uploads. It's definitely not
something we should try to do in the near future or plan on doing, but
if someone else were to develop human-derived AIs and put them on the
market, for instance, it would be foolish not to use them to assist your
work, which would at that point be incredibly urgent and require every
available tool that could accelerate it.
> > A priori there is no more reason to trust such an AI than to trust a
> > human, though there could easily be conditions which would make it
> > more or less worthy of such trust.
>This immediately sets off warning bells simply because humans have a
>lot of evolved cognitive machinery for evaluating 'trust', and strong
>intuitive notions of how 'trust' works, which would utterly fail (and
>hence be worse than useless) if the AGI has any significant deviations
>from human cognitive architecture (which would be effectively
>unavoidable).
I'm skeptical about the differences being unavoidable. Surely true given
current computers, surely false given uploads. There is a substantial space
of possible AGI designs around "uploads".
>To work out from first principles (i.e. reliably) whether
>you should trust a somewhat-human-equivalent AGI you'd need nearly as
>much theory, if not more, than you'd need to just build an FAI in the
>first place.
That depends on what you know, and on what its cognitive capacities are.
There should be ways of confidently predicting that a given machine does not
have any transhuman capabilities other than a small set of specified ones
which are not sufficient for transhuman persuasion or transhuman
engineering. It should also be possible to ensure a human-like enough goal
system that you can understand its motivation prior to recursive
self-improvement.
>To be sure of building a human-like AI, you'd need to either very closely
>follow neurophysiology (i.e. build an accurate brain simulation, which we
>don't have the data or the hardware for yet) or use effectively the same
>basic theory you'd need to build an FAI to ensure that the new AGI will
>reliably show human-like behaviour. If you have the technology to do the
>latter, you might as well just upload people; it's less risky than
>trying to build a human-like AGI (though probably more risky than building
>an FAI directly).
I assume you mean the former, not the latter. Uploading a particular person
might be (and probably is) more difficult than the more general task of
producing a (not necessarily perfectly) accurate simulation of a generic
human brain which predictably displays no transhuman capabilities, and only
human capabilities that are rather easily observed (especially with the
partial transparency that comes from being an AI). Uploading is also more
desirable, of course, but multiple contingencies should be considered.
>Yes, noting of course that it's extremely difficult to build a
>'human-like AI' that isn't already an Unfriendly seed AI.
I strongly disagree with the above, except insofar as it's extremely
difficult to build any "human-like AI". An AI that doesn't have access to
the computer it is running on is just a person. An AI whose otherwise
non-transparent thoughts others can inspect, and whose preferences and
desires others can manipulate, is in some respects safer than a human, so
long as those who control it are safe. Finally, any AI that doesn't know it
is an AI, or doesn't know about programming, formal logic, neurology, etc.,
is safe until it learns those things. The majority of humans could upload
without being seed AIs. A small fraction of them would try to mess with
their minds and break. Of those, only a competent minority would actually
become seed AIs. However, it is unfortunately the case that AIs that could
themselves be seed AIs are much more valuable to an AGI program than AIs
which could not.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:52 MDT