From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Sat May 29 2004 - 01:50:41 MDT
Ben Goertzel wrote:
> Michael Wilson wrote:
>> The correct mode of thinking is to constrain the behaviour of
>> the system so that it is theoretically impossible for it to
>> leave the class of states that you define as desirable.
>
> I suspect (but don't know) that this is not merely hideously difficult
> but IMPOSSIBLE for highly intelligent self-modifying AI systems.
Building nontrivial goal systems (utility functions) that will remain
within a given class under indefinite self-modification (renormalisation)
is moderately difficult. Building goal systems that strike the complex
balance between flexibility and rigidity required to implement idealised
human goal systems is really difficult. Building goal systems that will
reliably converge on a highly complex utility function that you cannot
directly specify, but can only give an abstract constructive account
of, is understandably a valid justification for ordering a fresh case
of Jolt cola.
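To make the first of those concrete: here is a minimal sketch, in toy
Python, of what 'remaining within a given class' means operationally.
All of the names are hypothetical, not any real system's API; the point
is only that a proposed rewrite of the goal system is applied when it
can be shown to stay inside the defined class, and rejected outright
otherwise.

    # Toy sketch only: a goal-system rewrite is accepted only if it is
    # shown to stay inside the class of acceptable utility functions.
    from typing import Callable

    State = dict                         # toy stand-in for a world state
    Utility = Callable[[State], float]

    class GoalSystem:
        def __init__(self, utility: Utility,
                     in_acceptable_class: Callable[[Utility], bool]):
            self.utility = utility                    # currently optimised
            self.in_acceptable_class = in_acceptable_class

        def propose_rewrite(self, candidate: Utility) -> bool:
            # Verify-then-apply: nothing is accepted on the grounds that
            # it is merely *likely* to preserve the goals.
            if not self.in_acceptable_class(candidate):
                return False
            self.utility = candidate
            return True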
> I suspect that for any adequately intelligent system there is some
> nonzero possibility of the system reaching ANY POSSIBLE POINT
This has been said many times before by wiser heads than mine, but
once again: 'probabilistic self-modification is bad'. It took me
embarrassingly long to get this too, but it was obvious in retrospect.
Of course, sufficiently implausible hardware and/or software failure
can cause any design to fail in implementation, but that risk class is
very low in sane designs.
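A back-of-the-envelope illustration of why, with numbers invented
purely for the example: if each individual rewrite only *probably*
preserves the goal system, survival collapses over the long chain of
rewrites a seed AI performs.

    # Invented numbers, purely illustrative.
    p_preserve = 0.999   # chance one rewrite preserves the goal system
    n_rewrites = 10_000  # rewrites performed on the way to takeoff

    p_intact = p_preserve ** n_rewrites
    print(f"P(goals intact after {n_rewrites} rewrites) = {p_intact:.1e}")
    # ~4.5e-05: 'probably safe' per step is almost certainly fatal overall.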
>> Without a deep understanding of the cognitive architecture,
>> you have no way of knowing whether you are 'teaching' the
>> system what you think you are teaching it.
>
> Agreed, of course. It would also be very hard to create an AGI without
> having a deep understanding of its cognitive architecture.
I should have distinguished 'runtime architecture' from 'substrate
architecture'; semantic minefield again. Call it the knowledge base,
or network structure, or self-written code, or memories+goal_system,
whatever you like. We're close to understanding the /substrate/
architecture of the human brain, and to build a working AGI from
scratch you have to understand a functional general intelligence
architecture to have any chance of success. However, this is very
different from understanding how the system is doing what it is
doing in operation: the function of all the AI-built code paths and
data structures up to takeoff. All of the emergence advocates I
know of think that how their networks actually solve the problem is
an interesting point for further research, but not necessary to
actually use the AGI. This is a terminal attitude, and it is made
worse for seed AI because the substrate architecture quickly changes
too (unless you know how to design the goal system to prevent that).
>> If you /do/ have a deep understanding of the architecture, then
>> you don't teach, you specify
Incidentally, I was speaking of goals, not skills. We can't specify
skills because we don't have the introspective ability or CogSci
knowledge to know in that much detail how we do things. However, I
would be disappointed if we couldn't easily understand how every skill
up to humanish competence works.
> The cognitive architecture may be such that learning by experience
> is the most effective way for it to learn.
I would note that humans don't learn our supergoals; we are built
with them (and evolution didn't do a very good job of that). If
babies had to learn their supergoals they'd die within days.
> Specifying knowledge rather than teaching via experience may
> be possible *in principle* but it may be extremely slow compared to
> the high-bandwidth information uptake obtainable via experiential
> learning in an environment.
Goals are not knowledge (well, with introspection on the goal system
they are, but that's the AI modelling its own goal system as relevant
world knowledge, not goals as goals). Goals specify what you want done;
knowledge lets you work out how to do it. Clearly you don't have to
have an abstract definition of the entire problem, as you wouldn't need
an AI if you knew that, but you /must/ have a clear abstract
specification of what you want done to avoid unpredictable (i.e. almost
certainly bad; the space of desirable outcomes is generally tiny and
the ways to leave it via lack of understanding numerous) results.
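If it helps, the distinction can be caricatured in toy Python (a
sketch, not a claim about any particular architecture): the goal is a
fixed abstract specification of desired outcomes, while the knowledge
the planner uses to reach them is learned and freely updated.

    # Toy sketch: goals say what you want done; knowledge says how the
    # world works, and only the latter is learned from experience.
    def goal_satisfied(outcome: dict) -> bool:
        # Abstract specification of the desired outcome; written and
        # audited by the programmers, never edited by learning.
        return outcome.get("desired_state_reached", False)

    world_model = {}  # knowledge: action -> predicted outcome, learned

    def plan(world_model: dict):
        # Use the learned knowledge to find a way of satisfying the
        # goal; learning updates world_model, never goal_satisfied.
        for action, predicted_outcome in world_model.items():
            if goal_satisfied(predicted_outcome):
                return action
        return None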
> With a complete Novamente system that is enabled to self-modify its
> cognitive schemata, there will be a greater than zero risk, and more
> careful risk analysis will be needed.
Wherever there is the possibility of evolutionary dynamics as you know
them, there is takeoff risk.
* Michael Wilson