From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Feb 06 2002 - 10:04:57 MST
Ben Goertzel wrote:
>
> Us trying to understand whether hacking the Sysop will be possible is much
> like a very smart dog, one that has intuited a little of what human language
> is like, trying to make projections about the complex machinations of a
> dispute in intellectual property law.
>
> ben g
Yes, Ben. That's why my actual answer to the question was "I don't
know." Look at most of the challenges I get, and my responses: I'm
usually defending the Sysop Scenario against someone who claims it is
provably wrong because of some definite statement about SIs, call it X.
In response I generally content myself with showing that ~X is at least
as likely as X; that is, there are at least as many arguments for ~X as
for X. (And when someone slips up and makes a definite statement in
favor of the Sysop Scenario, I take that apart too.)
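
(To spell out the logic, on the rough assumption that balanced arguments
translate into balanced probabilities - which is a gloss, not a claim
about SIs:

    P(~X) >= P(X),  and  P(X) + P(~X) = 1,  so  P(X) <= 1/2.

If X is no better than even odds on the available arguments, then X
cannot carry the weight of showing the Sysop Scenario is *provably*
wrong.)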
> From: "Eliezer S. Yudkowsky" <sentience@pobox.com>
> >
> > We have some grounds to think that a superintelligence might be able to
> > get low-level control over all local material reality, because we can
> > visualize this as the result of nanotechnological competence by a
> > singleton SI. When we say "make the rules", what we really mean is
> > "control reality on a low level". We are now asking the question "Does
> > the ability to control reality on a low level suffice for immunity to
> > perversion attacks?"
--
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence