Re: [sl4] Universal versus 'local' Friendliness

From: Matt Mahoney (matmahoney@yahoo.com)
Date: Mon Mar 07 2011 - 16:31:06 MST


I think this question should be considered in the context of what you mean by
AI. There are two forms that I think AI is likely to take.

1. The most intelligent system on Earth is currently a network of billions of
human brains and billions of computers connected by the internet. This will
probably continue to grow, with computers doing an ever-growing fraction of the
thinking.

2. Robots building robots. This could take many forms, such as self replicating
nanotechnology, or more complex systems in which many different kinds of robots
build many factories, each specializing in building one kind of robot. Another
possibility is bacteria-sized robots, each with very little computing power but
able to communicate and form networks with immense computing power. We may
consider as a class all systems that grow on their own, without help from
humans.

There might be other possibilities, such as AI arriving from outer space, or the
spontaneous evolution of a new top predator. But I consider these scenarios less
likely than something that we build ourselves.

Considering scenario 1, the internet already favors some people over others, in
particular those with money, computers, and internet access. It is our human
sense of fairness and altruism that keeps this from going too far, as long as
humans continue to make the top-level decisions.

Considering scenario 2, local altruism (not necessarily friendliness to any
group of humans, who would be a competing species) is evolutionarily stable
because it increases group fitness. But evolution still requires competition
between groups, so global altruism is not stable.
 -- Matt Mahoney, matmahoney@yahoo.com

>
>From: Amon Zero <amon@doctrinezero.com>
>To: sl4@sl4.org
>Sent: Mon, March 7, 2011 12:03:39 PM
>Subject: Re: [sl4] Universal versus 'local' Friendliness
>
>
>Hi All - Replies to multiple people in this post:
>
>
>On 6 March 2011 19:18, Monica Anderson <monica@syntience.com> wrote:
>
>>Have you read http://hplusmagazine.com/2010/12/15/problem-solved-unfriendly-ai

>Hadn't seen that; it's interesting. Thanks Monica!

>Tim Freeman said:
><snip>
>>Thus the "locally Friendly" problem isn't easier to solve than the general
>>FAI problem.
><snip>
>>Thus identifying the in-group for the AI isn't going to help it win much.
>
>Thanks Tim. Yes, both very good points I think. The first has come up on other
>lists where I also posted my question. As well as making me realise that I had
>imagined a universally Friendly AI to be some kind of Buddhist (Friendly to
>everything, erring on the side of caution, which sounds crippling to me), it
>has led some people to discuss a refined version of the question: Would a
>narrow-circle definition of "Friendlies" be easier to define and implement
>than a wide one? Why / why not? (Alternatively, *at what point* does a circle
>of defined-Friendlies become wide enough to be problematic? Is there such a
>point?)
>
>
>Your second point is a new one to me, and hasn't come up elsewhere in this
>discussion. Obviously I don't have any answers, but I appreciate the food for
>thought!
>
>On 6 March 2011 20:24, Mark Waser <mwaser@cox.net> wrote:
>
>>Hi,
>>
>> The list is certainly dormant, although I believe it still delivers
>>messages to any number of people whose attention you might want.
>>
>> I'm cross-posting my answer to my blog
>>(http://becominggaia.wordpress.com/) as well, in hopes of prompting more
>>discussion (though I'll answer any replies wherever they appear ;-).

>Hi Mark -
>
>Thanks very much for that! I'll have to chew on this a bit and get back to
>you. As I mentioned, I've posted this elsewhere (Exi & ExtroBritannia), and
>will definitely check out your blog for any insight. I'm hoping to pull
>together anything learned from this (optimistic, I know!) and write up a
>summary, so I'll let you know if/when I manage to make that happen.
>
>All the Best,
>Amon


