From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Mon May 24 2004 - 01:50:14 MDT
>> *General intelligence without consciousness?
>>
>> Impossible
Wrong, although not obviously so. Consciousness is an adaptive illusion (from
an objective point of view); it isn't a prerequisite for general
problem-solving ability.
> *General intelligence that isn't observer centered?
>
> Impossible
Obviously wrong. 'Try to do things that maintain and increase the number of
instances of this goal system, or a close approximation thereof, in the
universe' is only one of an infinite number of possible supergoals. It just
happens to be the supergoal that is implicitly specified, and thus selected
for, by natural evolution.
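To make the structural point concrete, here is a toy Python sketch (my own
illustration, with made-up names and a trivial world model, not a description
of any real architecture): the decision procedure below does not care which
supergoal gets plugged in, and the observer-centred 'make more copies of this
goal system' is just one callable among arbitrarily many.

def replicator_supergoal(world):
    # Observer-centred: score a world by copies of this goal system.
    return world.get("copies_of_this_goal_system", 0)

def paperclip_supergoal(world):
    # Equally admissible supergoal with no reference to the agent at all.
    return world.get("paperclips", 0)

def choose_action(world, actions, supergoal):
    # Pick the action whose predicted outcome scores best under the
    # supplied supergoal; nothing here forces observer-centredness.
    def predict(w, action):
        new = dict(w)
        for key, delta in action.items():
            new[key] = new.get(key, 0) + delta
        return new
    return max(actions, key=lambda a: supergoal(predict(world, a)))

world = {"paperclips": 0, "copies_of_this_goal_system": 1}
actions = [{"paperclips": 5}, {"copies_of_this_goal_system": 2}]
print(choose_action(world, actions, paperclip_supergoal))   # picks the paperclip action
print(choose_action(world, actions, replicator_supergoal))  # picks the replication action

Evolution only ever builds the first kind, but nothing in the machinery
demands it.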
> *General intelligence without a self?
>
> Impossible
Mainly wrong; placing arbitrary restrictions on self-modelling makes many
problems harder, but only a small class of problems impossible.
> *Total altruism?
>
> Impossible
Again, completely wrong. Instantiating the specific class of altruism we want
is hard, but selfishness doesn't have to be a supergoal and for SysOp
scenarios self-preservation isn't even a human-meaningful subgoal.
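As another toy sketch (hypothetical names and numbers of my own, purely
illustrative): self-preservation can be represented as a derived instrumental
subgoal whose weight is just the marginal contribution of the agent's survival
to an altruistic supergoal; when that contribution drops to zero, as in a
Sysop-style handoff to a successor carrying the same goal system, the
'subgoal' simply vanishes rather than being traded off against anything.

def expected_supergoal_value(survives, handed_off):
    # Crude stand-in numbers for 'expected supergoal satisfaction'.
    # If the goal system has been handed off, the agent's own survival
    # adds nothing to the expectation.
    if handed_off:
        return 1000
    return 1000 if survives else 0

def self_preservation_weight(handed_off):
    # Instrumental value of surviving = marginal gain in supergoal terms.
    return (expected_supergoal_value(True, handed_off)
            - expected_supergoal_value(False, handed_off))

print(self_preservation_weight(handed_off=False))  # 1000: survival matters
print(self_preservation_weight(handed_off=True))   # 0: the subgoal evaporates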
> *FAI that ends up with a morality totally independent
> of the programmers?
>
> Impossible
For 'independent' meaning 'no ongoing causal connection', this is actually
trivial (and generally results in world destruction). Obviously there's an
original causal connection because the programmers built the thing, but an
AGI can rapidly renormalise that into something that looks nothing like what
the programmers specified.
> *FAI dedicated to 'saving the world'
>
> Impossible AND mis-guided
Difficult but entirely necessary.
> More likely any general intelligence necessarily has to have: a 'self',
> consciousness, some degree of observer centeredness, some non-altruistic
> aspects to its morality, some input from the 'personal' level
> into its morality, and helping the world would only be a *secondary*
> consequence of its main goals.
Taking those in order: not in theory; no; definitely not (even selfish AGIs
will try to minimise this under renormalisation); no (except as side-effect
subgoals); no (and injecting extra entropy into morality is just silly); and
no.
Thank you for playing.
* Michael Wilson