From: Eliezer S. Yudkowsky (email@example.com)
Date: Sun Sep 17 2000 - 20:11:03 MDT
Alex Future Bokov wrote:
> PPS: Oh, one more thing. The 'forgotten' thing I wanted to ask was "How
> much intelligence is sufficient to meet the objectives? You
> demonstrated that an >AI would be smarter than an EarthWeb,
(Background for non-attendees: My opinion was that the Earthweb is a
transhuman, but not a superintelligence. It can form sequences of thoughts
that are beyond human capability, but any individual thought still has to fit
inside a single human mind. The Earthweb can be faster than an individual
human, can have vastly more knowledge, and can even think in genuinely
transhuman sequences - in those cases where multiple, intersecting experts can
build upon each other's ideas. An individual human can have one flash of
genius, or a few flashes of genius; the Earthweb, in theory, can pile
thousands of flashes of genius one on top of the other, to form sequences
qualitatively different from those that any single human has ever come up with
during Earth's previous history. But any individual genius-flash still has to
come from a single human. Thus the ideal Earthweb is a transhuman but not a
superintelligence.)
> but would
> an EarthWeb do just fine for the purposes, and with less likelihood (by
> definition) of having priorities that conflict with those of humanity,
> and requiring very little new technology? How does one even go about
> estimating the level of intelligence needed to safeguard the world from
> nanodisaster and bring about sentient matter?"
It seems to me that the problem is one of cooperation (or "enforcement")
rather than raw intelligence. The transhumanists show up in the 1980s and
have all these great ideas about how to defend the world from grey goo, then
the Singularitarians show up in the 1990s and have this great idea about
bypassing the whole problem via superintelligence. Both of these are
instances of the exercise of smartness contributing to the safeguarding of the
world - but the problem is not having the bright idea; the problem is putting
enough backbone behind it. The Earthweb is great for coming up with ideas,
but says nothing about backing, or enforcement.
In other words, it looks to me like the Earthweb would say: "Hey, let's go
build an AI!" Or perhaps the Earthweb is brighter than I am, and would see an
even simpler and faster way to do it - though I have difficulty imagining what
one would be.
The neat part about superintelligence isn't just that an SI is really smart;
it's that an SI can very rapidly build new technologies and use them according
to a unified set of motives. Before an Earthweb could even begin to replace
superintelligence as a guardian, it would have to (a) come up with a smart
plan, (b) invent the technology to implement it, and (c) ensure that the
technology was used to implement (a). It looks to me like (c) would be the
major problem, since the Earthweb by its nature is public and distributed.
But perhaps the Earthweb could come up with a clever solution even to this...
My feeling, though, is that even if the Earthweb does solve all these problems
and come up with a clever way to build a better world using only the limited
intelligence of the Earthweb, just going out and building a seed AI will still
look like an even better solution.
In summary, though, the power of the Earthweb, as with any transhuman, would lie
primarily in its smartness, the quality of its ideas, not in brute
intelligence. The Earthweb can act
as guardian if and only if the Earthweb itself comes up with some really
clever way to act as guardian.
A practical problem is that all the Earthweb techniques I've seen, including
the idea-futures-on-steroids of Marc Stiegler's _Earthweb_, are still subject
to distortion by majority prejudice. If a majority of the betting money is in
the hands of folk with a blind prejudice against AIs, then only a very stable
culture - only a culture that has had the Earthweb for years or even decades -
will have the systemic structure whereby informed discussion can overcome
prejudices. In other words, the Earthweb proposals I've seen have no method
to distinguish between genius and stupidity except by using the minds of other
humans, and it will take a while before the system builds up enough internal
complexity to have a systemic method of distinguishing - independently of the
human components - the subtle structural differences between a human making a
good judgement and a human making a bad judgement.
Idea futures are only the beginning of such a method. Idea futures mean
betting money on the judgements, so that the winners of each round have
greater weight in successive rounds. Idea futures are better than blind
majority votes, but they aren't perfect. How do you use idea futures to
resolve an issue like Friendly AI, even leaving out the payoff problem? It
would take several iterated rounds on issues of the same order, with similar
stakes and similar content and similar required knowledge and *resolvable
predictions*, before the capitalist efficiencies came into play and the bad
bettors started dropping out.
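To make the dynamic concrete, here is a toy sketch of my own (not part of anyone's actual proposal; the accuracy numbers, stake fraction, and pro-rata payout rule are all invented assumptions): bettors stake money on resolvable binary questions, and since winners' bankrolls grow, their bets carry more weight in later rounds while the bad bettors shrink toward irrelevance.

```python
import random

random.seed(0)

class Bettor:
    """A participant in a toy idea-futures market (hypothetical example)."""
    def __init__(self, name, accuracy, bankroll=100.0):
        self.name = name
        self.accuracy = accuracy    # chance of backing the true outcome
        self.bankroll = bankroll

def run_round(bettors, stake_fraction=0.1):
    """One resolvable question: each bettor stakes a fraction of bankroll,
    and the pot is split pro-rata among those who predicted correctly."""
    truth = True                    # the question resolves True; only guesses vary
    stakes = {b: (random.random() < b.accuracy, b.bankroll * stake_fraction)
              for b in bettors}
    pot = sum(s for _, s in stakes.values())
    winning_stake = sum(s for p, s in stakes.values() if p == truth)
    if winning_stake == 0:          # nobody was right: refund (toy simplification)
        return
    for b, (p, s) in stakes.items():
        b.bankroll -= s
        if p == truth:
            b.bankroll += pot * (s / winning_stake)

bettors = [Bettor("informed", accuracy=0.8), Bettor("prejudiced", accuracy=0.4)]
for _ in range(50):                 # many similar, resolvable rounds
    run_round(bettors)
for b in bettors:
    print(b.name, round(b.bankroll, 1))
```

After fifty rounds the informed bettor holds most of the money, which is exactly the "capitalist efficiency" the paragraph above describes; the catch, as noted, is that it only works when the rounds are iterated and actually resolvable.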
For the Earthweb to resolve a problem like that, it would need a systemic
structure that systematically resolved each idea into sub-issues, identifying
each assumption and deduction and sequitur, and discussing and betting on
these subcomponents separately. In other words, the Earthweb would have to
actually change the structure of thoughts and thinking and decision-making
processes, after which it's plausible that the outcome of the decision would
be better than the sum of the betting humans - if nothing else, because the
humans who made bets on the final outcome would have the granular resolution
of the discussion available for examination.
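A minimal sketch, again my own illustration with invented claims and odds rather than anything proposed in the message, of what betting on subcomponents separately might look like: each assumption gets its own market price, and the top-level claim's implied odds are composed from its sub-claims, giving bettors on the final outcome that granular resolution to examine.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A bettable claim resolved into separately-priced sub-issues."""
    text: str
    market_odds: float = 0.5            # current betting-market probability
    sub_claims: list = field(default_factory=list)

    def implied_odds(self):
        """If a claim rests on independent assumptions, its implied odds are
        the product of theirs; a leaf claim just reports its market price."""
        if not self.sub_claims:
            return self.market_odds
        odds = 1.0
        for sub in self.sub_claims:
            odds *= sub.implied_odds()
        return odds

# Hypothetical decomposition; the claims and prices are made up.
seed_ai = Claim("Friendly seed AI is the fastest safeguard", sub_claims=[
    Claim("Seed AI is buildable this decade", market_odds=0.6),
    Claim("Friendliness is stable under self-improvement", market_odds=0.7),
])
print(round(seed_ai.implied_odds(), 2))  # → 0.42
```

The independence assumption in `implied_odds` is of course the weak point; a real system would need to price the correlations between sub-issues too, which is part of the internal complexity the paragraph above says would take a while to build up.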
Wanna take this to SL4? ( http://sysopmind.com/sing/SL4.html )
If not, can I forward this message there? And to Extropians, Robin Hanson,
and Marc Stiegler - I think they'd all be interested.
> PPPS: Anybody who might be wondering, what I mean by EarthWeb is a sort
> of world-wide 'Slash meets eCommerce meets eBay meets email on
> steroids' that evolves into an emergent entity in its own right. As for
> >AI, see SingInst's pages.
And what I mean by "Earthweb" is the entity visualized by Marc Stiegler in the
novel _Earthweb_:
http://www.baen.com/chapters/eweb_1.htm (chapters 1 through 6 available)
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:35 MDT