From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Jul 23 2005 - 13:52:46 MDT
Russell Wallace wrote:
>
> Okay, in plainer language... are you familiar with the K-strategist
> versus r-strategist distinction in biology?
I thought I was, but when I checked Wikipedia it turned out I had my
definitions mixed up, or else I had read a different definition. Regardless, I
still don't
understand what you think happens. Or why you think natural r-selection is
any less cruel than natural K-selection.
>>Is this what you think would inevitably happen if, starting with present human
>>society, the average IQ began climbing by three points per year? At what
>>point - which decade, say - do you think humans would be so intelligent, know
>>so much and think so quickly, that their society would turn utterly evil?
>
> Starting with present human society, create a world government with
> absolute knowledge and absolute power, capable not only of seeing into
> people's homes a la 1984, but into their very thoughts; with no
> Constitution (you don't want any hardwired protections, after all) and
> no escape, ever (nobody gets to opt out of CV). Don't you find it at
> all reasonable to suggest that society would turn utterly evil very
> quickly?
It's plausible. But I don't understand why you think this is what CEV
simulates. Obviously, CEV doesn't start out by simulating human society +
CEV. That's an infinite recursion. CEV starts out by abstractly considering
human society plus a predefined set of enhancements. The initial dynamic, the
thing SIAI builds, has *no* intended effect on the world itself - it just
writes another AI. The CEV initial dynamic cannot simulate its own effect on
the world because it has no effect - though CEV can simulate the effect of a
potential output AI.
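To make the structural point concrete, here is a minimal, purely hypothetical
sketch of the non-recursive shape being described; none of these names or
functions come from the actual CEV writeup, they are illustrative only:

    # Hypothetical sketch of the non-recursive structure described above.
    # All names (SocietyModel, OutputAI, extrapolate) are illustrative only.

    from dataclasses import dataclass

    @dataclass
    class SocietyModel:
        """Abstract model of present human society."""
        description: str

    @dataclass
    class OutputAI:
        """The AI that the initial dynamic writes as its answer."""
        spec: str

    def extrapolate(society, enhancements):
        """First-order extrapolation: abstractly consider society plus a
        predefined set of enhancements and return an output AI spec.
        This function never models its own effect on the world; it has
        none, since its only product is the output AI."""
        return OutputAI(spec="AI extrapolated from %s with %s"
                        % (society.description, ", ".join(enhancements)))

    # The initial dynamic calls extrapolate once and hands off the result.
    # It does NOT simulate "society + CEV", which would be infinite recursion.
    answer = extrapolate(SocietyModel("present human society"),
                         ["know more", "think faster"])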
I'm not sure if you'd want to build, into the initial dynamic, a second round
of extrapolation abstractly considering human society plus first-order CEV,
that is, human society plus the effect of the AI that is the first-order
answer of CEV. But even if you did build a second round into the initial
dynamic, human society would not start out being simulated as possessing,
under control of unenhanced human majority vote, the ability to see into
people's thoughts etc., unless that was the AI output by first-order CEV.
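If you did build a second round into the initial dynamic, the structure would
just nest one more call on the same hypothetical sketch above; it still never
refers back to the initial dynamic itself:

    # Hypothetical second round built into the initial dynamic: extrapolate
    # society plus the effect of the first-order output AI, still without
    # any self-reference to the initial dynamic.
    def second_round(society, enhancements):
        first_order = extrapolate(society, enhancements)
        changed_society = SocietyModel(society.description
                                       + " as changed by " + first_order.spec)
        return extrapolate(changed_society, enhancements)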
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence