From: Randall Randall (randall@randallsquared.com)
Date: Tue Jun 07 2005 - 21:25:11 MDT
On Jun 7, 2005, at 9:45 PM, Russell Wallace wrote:
> On 6/7/05, Chris Capel <pdf23ds@gmail.com> wrote:
>> So if we bring a superintelligence into the picture, I think that the
>> least I'd want from vim would be some sort of impetus for the
>> world-wide installation of a truly effective system of human
>> self-government. And possibilities like that are only what I can
>> imagine. An AI could probably do much better.
>
> I think you have correctly identified an existential risk.
>
>> Hmm. Is that very off-topic?
>
> Well, I think existential risks are on topic.
Nevertheless, I don't think there's much point in
working out strategy for the case in which plurality
is unstable, which is the case implied by asking an
SI to set up "government". Intuitively, it seems to
me that this would only matter if physical law
favors neither a plurality of SIs nor a singular SI
over the other. If this has been discussed, it's been
a while, but do people here really think there's a
broad middle ground there?
--
Randall Randall <randall@randallsquared.com>
"Lisp will give you a kazillion ways to solve a problem.
But (1- kazillion) are wrong." - Kenny Tilton