From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Tue Jun 01 2004 - 09:05:47 MDT
Aubrey de Grey wrote:
> Eliezer Yudkowsky wrote:
>
>> People live with quite complex background rules already, such as "You
>> must spend most of your hours on boring, soul-draining labor just to
>> make enough money to get by" and "As time goes on you will slowly age,
>> lose neurons, and die" and "You need to fill out paperwork" and "Much
>> of your life will be run by people who enjoy exercising authority over
>> you and huge bureaucracies you can't affect." Moral caution or no,
>> even I could design a better set of background rules than that.
>
> Um, but if we're talking mainly here about minimising expected loss of
> life then we have to look at the best possible AI-free alternative, and
> that certainly includes curing aging and developing enormously enhanced
> automation to eliminate mindless jobs. As for politicians being drawn
> only from those curious people who want to be politicians, well, I'm not
> so sure that's so bad.
We aren't talking here about just minimizing expected loss of life, but also
about all those other things for which humans wouldn't mind a boost, perhaps
in the form of a change of background rules. And I don't want to *minimize*
expected loss of life. I want to drive it down to *zero*.
The best AI-free alternative? There is no AI-free alternative. Just FAI
first or UFAI first. The space of recursively self-improving processes is
out there, mathematically speaking, and sooner or later some physical
system will cross the threshold. The closest thing to an AI-free
alternative is a minimally intervening FAI, which I, for one, don't want.
It seems to me that an FAI can make huge improvements to background rules
before that starts interfering with self-determination.
Wait for humans to cure aging? You of all people know better than to
suggest this! If FAI is developed in 2018, and human doctors don't cure
aging until 2028, that is 550 million deaths (arithmetic spelled out below)
attributable to whatever decided on that nonintervention. Plus further
deaths attributable to causes
other than aging. If I were a Last Judge, I'd veto a minimally
interventionist FAI that didn't prevent involuntary deaths and yet
prevented anyone else from developing a second FAI, unless there were one
hell of a good reason. I can't even imagine a reason good enough, but a
Last Judge always needs to leave that possibility open; one cannot ask ten
such questions and get ten right answers.
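(The 550 million, by the way, is just the rough assumption of about 55
million deaths per year worldwide from all causes, times the ten years of
waiting:
   55,000,000 deaths/year x 10 years = 550,000,000 deaths.
The per-year figure is a round estimate of mine; the point holds for
anything in that neighborhood.)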
> In particular, this:
>
>> It's not as if any human intelligence went into designing the existing
>> background rules; they just happened.
>
> isn't really so -- we invented democracy on purpose, and we've kept it
> because we prefer it to anything anyone else has come up with.
Heck, even I'd prefer democracy to what we have now (rimshot). I will
concede that some thought went into the U.S. Constitution, which lasted for
nearly a hundred and fifty years before breaking down. But the framers of
the Constitution had to work within unhelpful background rules; they only
improved slightly on a bad deal. It was still other people running your
life; you just got to veto the greater of two evils, if one became
noticeably more evil than the other.
As I once wrote to wta-talk:
**
The American government is divided into three branches: The judicial, the
executive, the legislative, the media, the bureaucracy, and the party
structure. There are also numerous other factions with influence, such as
big business, NGOs, the intelligentsia, the wealthy, foreign countries
exerting diplomatic pressure, and the voters.
The voters hold enough power that no one can afford to really *really*
tick them off, but that's all. Aside from this one quirk, the voters are
a second-rank faction in any political fight.
A truck driver has power to the degree that people actually ask
him what he thinks, not to the extent that others claim to be acting on
his behalf, and the only real power the truck driver has is that no one
dares get him really *really* upset. It's far more than a peasant has,
enough to tremendously raise the standard of living in a democracy, but it
should not be confused with practical, day-to-day political power under
the current system.
**
>>> how does the FAI have the physical (as opposed to the cognitive)
>>> ability to [stop humans from doing risky things, possibly including
>>> making the things less risky]?
>>
>> Molecular nanotechnology, one would tend to assume, or whatever
>> follows after
>
> Ah, but hang on, why should we design the FAI to use MNT or whatever to
> implement its preferences itself, rather than design it to create MNT
> and then let us use MNT as we wish to implement its advice?
A collective volition is not 'designed' to do either, but to do whichever
we want. 'Designing' an AI to give advice or something like that
constitutes taking over the world - making too large a decision yourself.
I am hella leery of putting MNT into the hands of humans. It's not as bad
as putting AI-power into human hands, but still. What if a human doesn't
implement the FAI's advice? Poof, no more humans.
> Surely the
> latter strategy gives us more self-determination so is preferable,
Self-determination is not the only criterion of an acceptable outcome. And
which humans would have self-determination? The powerful humans taking
advice? All the other shmucks without access to nanotech? The millions
and millions of our dead?
> to us
> and hence to the FAI, and hence the FAI would give us that choice even
> if we'd given it the ability to use the MNT itself?
*Ding*: Speculation about collective volition output detected. Please
insert at least $10 to continue.
(But you happen to be right; of course our collective volition has the
option of only giving advice. It is simply that I think it a bad option,
because it commits what you correctly point out to be genocide.)
> And so we're back
> to humans taking or leaving the FAI's advice.
>
>> if human self-determination is desirable, we need some kind of massive
>> planetary intervention to increase it.
>
> Yabbut "massive" doesn't imply recursively self-improving.
Without a Really Powerful Optimization Process I don't know how to ask a
sufficiently powerful humane intelligence to veto those whims of mine that
only seem like good ideas.
> Again, the
> choice is between the world we can plausibly get to without AI and the
> one we might hope to have with FAI, not between the current world and
> the FAI-ful world. The risk of making UFAI when trying to make FAI has
> to be balanced against the incremental benefits of the FAI-ful world
> relative to the plausible FAI-less world.
The risk of making UFAI is widely distributed, and as Moore's Law keeps
ticking, and certain fundamental ideas of cognitive science keep spreading,
the risk goes up and up. Nanotechnology seems to me a probable hard
deadline; not even AI researchers are incompetent enough to fail to wipe out
the human species once they have 10^25 ops/sec or more. As nearly as I can
tell, humanity does not have a realistic option of living in an AI-free
world. Minimally interventionist FAI is a real option, I suppose, in the
sense that it is something an FAI could choose to do if there were a good
reason relative to the constitution of that AI.
> About saving lives, we can in
> principle postulate that the FAI would help us to cure aging etc. a bit
> sooner than otherwise, but I fully intend to cure aging by the time
> anyone creates any AI, F or otherwise, so I'm not inclined to give that
> component of the argument much weight.
You plan to do this *how* soon?
Also, are you getting all the other causes of death besides aging? One
involuntary death is one involuntary death too many. (I wonder if one
young death is one young death too many. I think my ground zero vote might
even be shifting that way, though I'd be surprised and flattered to learn I
was grown enough to vote.)
Also, are you stopping UFAI?
I bet I can solve ALL of Earth's emergency problems before you cure aging.
Not only that, I bet my budget requests are lower than your budget requests.
>> I can't see this scenario as real-world stable, let alone ethical.
>
> I'm not sure I'd bet serious money that it hasn't already been done!
If so, I have a major beef with our collective volition, and if our dead
aren't backed up on disk somewhere, I'm voting for the original
programmers' public executions.
(Usually I am not so vindictive, but these hypothetical seed AI programmers
are too close to myself for me to forgive them easily.)
> [ This is all on top of my belief expressed in a posting a couple of
> weeks ago that FAI is almost certainly impossible on account of the
> inevitability of accidentally becoming unfriendly -- i.e. that the
> invariants you note are necessary don't exist, not for any choice of
> friendliness that anyone would consider remotely friendly.
This looks unlikely to me from the FAI theory side. It seems easy enough
for a self-modifying Bayesian decision system to maintain a constant
utility function, and maintaining any other invariant seems a comparatively
straightforward extension.
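A toy sketch of what I mean, in the ordinary expected-utility frame (an
illustration, not the full FAI architecture): the system chooses actions by
   a* = argmax over actions a of  sum over outcomes o of  P(o|a) * U(o)
and treats a candidate rewrite of its own code, call it d, as just another
action, to be adopted only if
   E[U | adopt d]  >=  E[U | keep current design]
with U being the *current* utility function on both sides of the comparison.
Any rewrite that altered U would, by the system's own present lights, score
as an expected loss; the invariant "U stays constant" is self-enforcing
rather than bolted on from outside.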
For you to be correct would require that a coherent mind above a certain
size is impossible; this would then become the new death sentence on the
human species once aging were resolved, or more likely an outright death
sentence if the size limit is too small to permit defensive FAI. UFAI
would remain easily possible; the incomprehensible goals would drift, but
it would still go FOOM.
I suppose Nature could be that cruel. It's a math question, and math has
no privileged tendency to turn out for the best.
> In other
> words, the scenario that I would expect is that we create this thing, it
> quickly spots the flaws in our so-called invariants,
I don't think that you mean a flawed invariant; I think you mean that the
embodiment of the invariant is irreparably unstable, or that the invariant
judges it cannot survive self-modification. That last is fairly hard to
see from a theoretical perspective, but possible.
> it works out that
> these flaws are unavoidable and therefore that if it lets itself
> recursively self-improve it will probably become unfriendly awfully soon
> despite its own best efforts,
Possible.
> it puts on some sort of serious but not
> totally catastrophic show of strength to make sure that we won't ever
> again make the mistake of building anything recursively self-improving,
That wouldn't even begin to solve the problem. People would just walk
directly into the whirling razor blades, even after they were warned.
You'd have to blow up every computer manufactured after 1996, shut down the
Internet, and burn the cognitive science literature, if you wanted to have
a decent chance of no one creating AI for another century. And that would
still not solve the long-term problem, only hand it to a different
generation, and did I mention that billions of people would die as a result
of this policy?
> and then it blows itself up as thoroughly as it knows how. But this is
> outcome-speculation and not what I want to focus on here, not least
> because it's all complete hunch on my part. I want to stick to the
> presumption that I'm wrong in that hunch, i.e. that a true FAI is indeed
> possible, and explore how it could possibly improve on an AI-free world
> given humanity's long-standing and apparently very entrenched desire for
> self-determination. ]
I'll stick to my reply that you could transport the entire human species
into an alternate dimension based on anime and improve the total amount of
self-determination going on. Individuals have a desire for
self-determination; an FAI rewriting the background rules (previously
determined by evolution and other nonhumane processes) interferes with this
less than your boss telling you to work a few extra hours. As for
humanity's *collective* self-determination, that's why I said to go ahead
and attach moral responsibility to SIAI and its donors; we're human,
therefore this is still humanity fixing its own problems.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence