From: Eric Burton (firstname.lastname@example.org)
Date: Fri Sep 26 2008 - 14:54:43 MDT
>That's good. A single "kill all humans" AI is bad enough, but
>efficient TEAMS of "kill all humans" AI would make for a really bad

Yes, as depicted in Philip K. Dick's short story "Second Variety",
robots so singularly bent on destruction would probably start to
compete amongst themselves; any notion of social cohesion could too
easily be bent toward the ends of peace.

I think there is a parallel here with the fate of selfish intruders
in altruistic societies. Once you've enslaved the workforce and
monopolized the food supply, your labour starts to dwindle. The only
stable mode appears to be to contribute as much as you consume. Robots
that only ate people would be totally maladaptive once no people were
left; as single-minded machines they would be unlikely to make the
leap between ecological niches to eating other robots.
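
To put rough numbers on that contribute-as-much-as-you-consume point,
here is a toy sketch (my own illustration, not anything from the
thread): agents draw on a shared resource pool each round, and the
pool collapses unless contribution at least matches consumption.

```python
# Toy model: a shared resource pool with n_agents, each contributing
# and consuming a fixed amount per round. Pure consumers (the robots
# that "only ate people") exhaust the pool; break-even agents do not.

def rounds_until_collapse(n_agents, contribute, consume, pool=100.0,
                          max_rounds=10_000):
    """Count rounds before the shared pool is exhausted.

    Returns max_rounds if the pool survives that long (i.e. the
    population is sustainable on this horizon).
    """
    rounds = 0
    while pool > 0 and rounds < max_rounds:
        pool += n_agents * (contribute - consume)
        rounds += 1
    return rounds

# Pure consumers deplete a pool of 100 at 10 units/round: 10 rounds.
print(rounds_until_collapse(10, contribute=0.0, consume=1.0))   # → 10
# Break-even agents never drain it; the run hits the horizon cap.
print(rounds_until_collapse(10, contribute=1.0, consume=1.0))   # → 10000
```

Obviously a cartoon, but it captures the asymmetry: any net-negative
strategy has a finite lifespan, while contribute == consume is the
boundary of the stable regime.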

In practice the selfish behaviour of viruses and their ilk is kept in
check by a larger planetary equilibrium. No one organism could ever
threaten all life on Earth, so presumably the same applies to machines
bent on destruction.

On 9/26/08, Mike Dougherty <email@example.com> wrote:
> On Fri, Sep 26, 2008 at 2:35 PM, Eric Burton <firstname.lastname@example.org> wrote:
>> I've seen it written that an ethical AI would have the faculties to be
>> more ethical than any organism, or collective of them. In an
>> environment with super-ethical intelligences about, an unethical one
>> wouldn't be allowed to thrive... and if the first god-like AI is
>> strongly unethical, I'm not convinced it could do any job very well.
>> Unless, of course, that job was "kill all humans"... and even then, it
>> wouldn't work well in teams.
> That's good. A single "kill all humans" AI is bad enough, but
> efficient TEAMS of "kill all humans" AI would make for a really bad
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT