Re: [sl4] A model of RSI

From: Eric Burton (brilanon@gmail.com)
Date: Fri Sep 26 2008 - 12:35:56 MDT


The fitness function for an AI that ruled us in some capacity wouldn't be
as simple as accruing power and influence. A consummate corporate mind
would have to be concerned with the health of the market and the economy,
and a global dictator would have to be concerned with fair-mindedness and
public perception if it isn't going to become paranoid about insurrection.

The notion of an advanced AI 'running amok' with a goal from which it
can't be swayed is, I think, an attractive oversimplification,
encouraged by the distrust that machines have engendered in us through
their classical lack of empathy: they grind on and on with no
consideration for whatever is in their way. That stereotyped behaviour is
the very thing that an intelligent machine would distinguish itself by
-not- exhibiting...

I've seen it written that an ethical AI would have the faculties to be
more ethical than any organism, or collective of them. In an
environment with super-ethical intelligences about, an unethical one
wouldn't be allowed to thrive... and if the first god-like AI is
strongly unethical, I'm not convinced it could do any job very well.
Unless, of course, that job was "kill all humans"... and even then, it
wouldn't work well in teams.

On 9/26/08, Nick Tarleton <nickptar@gmail.com> wrote:
> On Thu, Sep 25, 2008 at 4:27 PM, Matt Mahoney <matmahoney@yahoo.com> wrote:
>
>> Belief in consciousness is universal, as is the desire to preserve it.
>> Therefore you will make a copy of your mind, technology permitting.
>> Whether
>> that copy actually contains your consciousness or just makes that claim is
>> irrelevant to any future observable events.
>>
>> (Also, how do the above articles relate to this position?)
>
>
> "Relevant" or not, I prefer that my consciousness persist. (The articles
> make the point that my preferences may involve non-ontologically
> primitive or non-natural categories, including ones I don't yet fully know
> how to define, like "contains my consciousness".)
>
>
>> Bostrom does not seem to offer any good alternatives.
>
>
> Sections 6-11?
>
>
>> In any case, he implicitly assumes that certain forms of intelligence,
>> what
>> he calls eudaemonic (with human-like motivations and "conscious") are
>> preferable to other types.
>
>
> Bostrom prefers eudaemonic agents, as do I, whether or not they're
> preferable in some universal sense.
>


