From: Stuart Armstrong (firstname.lastname@example.org)
Date: Mon Jul 14 2008 - 05:48:41 MDT
Sorry, didn't want to get into another libertarian debate :-)
My point was that trying to imagine what a post-singularity world
would be like is hard, and that we need to try to identify true
cultural universals that would still exist after a singularity,
assuming approximately human beings are still around.
One of those universals was the threat of violence, and the related
need for government.
Then I went out on a limb and took a guess at what the smartest
post-singularity government would be like (in the practical sense, not
the philosophically ideal sense), assuming humans are allowed some say
in it. The main systems mentioned were AI dictatorship and minarchies.
The point I then made (but only implicitly) is that the choice of a
government philosophy is a very complicated one, full of unexpected
consequences (communism, to cite but one example; you can argue that
the excesses of communism were implicit from the beginning, but many
other things were also implicit in communism from the beginning, and
never happened). Knowing how a system of government would work in
practice is very hard, based only on the theory.
So the best guess we can make as to what the smartest government would
be like is to take the smartest governments we have now (liberal
democracies with large governments) and extrapolate SLIGHTLY. It won't
be a very good guess, but it'll probably be the best one we have.
Hence my comments on creating a functioning minarchy in a large
developed country; if such an event comes to pass, that would be
strong evidence that a minarchy is a practical system of government,
rivaling liberal democracies, and thus an equal candidate for a
post-singularity government. But until then, we can't assume that a
minarchy would work; it is too far from systems that have worked for
us to have any confidence in it (as evidence of that distance, I point
to your own comment on US governmental GDP; and, if anything,
governmental interference is higher elsewhere in the developed world).
> THEN, if we have any say in the matter right now (which
> we probably won't have), would it be better or worse for
> that Intelligence to allow complete freedom of resource usage
> (up to, but, realistically, not including threats to itself)?
I'd say that it probably is worse, simply because if there is an ideal
system of government (say a minarchy), then the AI should be imposing
it (at least initially), rather than letting humans struggle to figure
out what it is. (A model for this: set down an amendable constitution.)
> For example, ought that Intelligence permit one to create
> veridical historical reenactments?
Ah! A call for a value judgement! :-) I'd be totally in favour, as
long as permanent death or severe torture is not a feature of these.
If they are, then it gets more complicated, and depends on the details
of the surrounding society.
This archive was generated by hypermail 2.1.5 : Wed May 22 2013 - 04:01:37 MDT