From: Jeff Bone (jbone@jump.net)
Date: Sat Dec 08 2001 - 14:43:01 MST
Brian Atkins wrote:
> Nothing. Superintelligences being unable to stop or predict earthquakes
> is very unlikely. Which is what I said.
Okay, got it --- why is *that* unlikely? (Forget earthquakes, think larger scale. The
point is that unless the Power controls all of spacetime, there are risks that cannot
be avoided, only predicted --- and prediction at the top end of the scale requires very
fine-grained simulation of reality, which has natural limits: the Bekenstein Bound, etc.)
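(For reference --- I'm quoting the bound from memory, so treat it as approximate --- the
Bekenstein Bound caps the information content of a region of radius R containing total
energy E at roughly

    I <= 2*pi*R*E / (hbar*c*ln 2)  bits,

which is why any finite-size, finite-energy simulator hits a hard ceiling on how
fine-grained its model of the rest of the universe can be.)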
> But the Sysop Scenario
> comes from the idea that at least for a while we are going to be stuck
> running on computronium or worse. We will not "prefer" that, it simply
> is expected (barring magic physics) to be what we end up with. So I have
> to shoot that remark of yours down.
???
How does this "shoot down" any remark I've made? I have absolutely no quarrel with
your comment --- I think we'll be running on less-than-computronium for a very long
time. I don't, however, think that means we can't extrapolate, and I don't think it
means we *shouldn't* extrapolate --- particularly when balancing risk / reward
equations.
> I will also shoot down your death
> vs. copies remark since no matter how many copies you have floating
> around the solar system, if all the atoms get taken over by a Blight you
> will reach a state of "complete" death.
???
Again, I'm not sure how this "shoots down" anything, as we're in complete agreement
here.
> There are things that even SIs
> probably have to worry about. If you can accept that then you can accept
> that some form of Sysop /may/ be needed even in that future time.
Hold on, pard. I'm not arguing that superintelligence isn't necessary. IMO, it's an
ABSOLUTE necessity in order to manage some of the larger-scale risks. But that DOES
NOT logically lead to the conclusion that a "Sysop" --- i.e., potentially coercive
external intelligence --- is needed.
> What is much more important is /getting there/ in the first place. So
> I have to agree with Gordon you seem to be stuck on something that has
> little importance to pre-Singularity goings-on.
I disagree --- if you're building a moral machine that has a different notion of
morals, then you had best hope that those morals are consistent with at least your
longest-term goals.
> Friendliness is
> not about expecting any kind of certain outcome other than the one that
> is logically and rationally best for everyone, based upon what they want.
And there is the crux of the debate, such as it is: I absolutely do not believe that
any rational being free from sentiment and cultural programming can believe in any
single outcome that is "logically and rationally best for everyone, based upon what
they want." IMO, that notion is helplessly naive, prima facie inconsistent with the
observable world (and not just human society, so don't go off on the "anthropomorphic"
tangent.) Given that you guys are transcending the bounds of traditional computer
science and AI and dabbling in what may be an incredibly powerful kind of "applied
philosophy" I think that the discussion is at a minimum worth having.
> I don't see what you're saying. In the case of earthquakes for instance
> the Sysop would already know they exist. So it likely would immediately
> upon noticing that "earthquakes exist, and I have no way to predict or stop
> them" begin notifying the people and providing any alternatives it had
> to them. Like I said, if they decide to stay around it's their own fault
> when something bad happens.
"The System wishes to inform you that the collapse of the metastable vacuum state will
occur at an undetermined time in the next year with a probability of 1 in 10^60.
Such an event will result in certain annihilation of your consciousness and all
available backups. I have no alternatives to offer you at this time." (Or substitute
inescapable supernova event, etc.)
> The only other class
Wow, you sound really certain of that. I've already named a number of them, and I
believe Nick has a kind of prospective taxonomy for different existential risks.
Running up against these kinds of things --- statements of denial in the face of
provided information --- is why I wonder from time to time how thick those
rose-coloured glasses you're wearing actually are. ;-)
> of no-wins are surprise situations like say a near-
> light-speed black hole comes zooming into the solar system. Well as soon
> as the Sysop's external sensors pick it up it would let everyone know to
> clear out of the area.
What if simple physics (or other concerns, like logistics) makes the catastrophe
unavoidable? "E.g., the Sun will go nova in the next 10 years with 100% certainty, and
the blast wave of radiation will proceed outward at the speed of light, eventually
catching and destroying every sublight vehicle we have. Evacuation is futile."
> This kind of thing would of course not be perfect safety, it is simply
> the best possible under physical limits. Still, can't beat that.
Sure, I agree. But I think it's clear --- if you look at this stuff --- that a Sysop
is going to be very concerned about gathering all available information about the
environment its constituents have to live in, and will be required to make predictions
and forecasts from detailed models.
> That latter part of my statement appears tautological, but the idea that
> an AI can be designed such that it will stay Friendly is not.
I'm not so sure. Maybe second- or third-order tautological. ;-)
> I don't think we've heard any criticism from you yet regarding either
> CFAI or GISAI. If you have comments about the feasibility of either then
> by all means let's drop the Sysop thread and get to the meat.
I've made some comments about those topics in the past... I'm trying to take a "wait
and see" approach, here.
> Right, well like I said, I trust a Friendly SI to be able to figure out
> pretty easily whether it is practical or not.
Right, but here's the tautology: "I trust a Friendly SI to figure this stuff out,
because a Friendly SI is by definition trustworthy in such matters."
> > The crux of my issue is this: "most perfect universe" is underdefined, and
> > indeed perhaps undefinable in any universally mutually agreeable fashion.
>
> It's on a person by person basis with the Sysop breaking ties :-) That's
> my story, and I'm sticking to it :-)
And I'm okay with that, I'm just *deeply* concerned with how a Sysop might break such
ties, and on what basis.
> I think your claim that there's always a tradeoff is wrong.
Okay, fine --- there's *almost* always a tradeoff between liberty and safety.
> Friendliness also BTW is not necessarily about making the world a safe
> place. As I said, it is a completely different topic and aim from the
> Sysop discussion. Friendliness is strictly about how do you build an AI
> that will be "nice".
"Nice" and "safety" are hopelessly entangled. An SI that says "how do you do?" and
"excuse me" and "looking sharp today, Brian" and so forth while turning your
neighborhood into a nanogoo breeding tank can't really be called "nice," can it?
> You read this, right? http://www.intelligence.org/CFAI/info/indexfaq.html#q_1
Yup, right when it hit the bitstream a while back.
jb