Re: [sl4] Re: Property rights

From: Bryan Bishop
Date: Wed Jul 09 2008 - 22:06:50 MDT

On Wednesday 09 July 2008, Lee Corbin wrote:
> > Specifically we were talking about 'property rights' and I
> > was wondering how it is that you want an ai to behave in
> > some characterized manner with respect to them (hint:
> > the characterization is actually incomplete but nobody
> > admits this methinks).
> What's wrong with me *wanting* or finding desirable an
> AI that does respect property rights? Or at least---this is

Nothing is wrong with wanting that. I was talking about the actual task
of figuring out what intelligence is and building that; property rights
and such are secondary, they are not properties of being intelligent as
far as we know.

> > All of these layers -- ai domination, property rights,
> > governments, these are just layers on top of the individual
> > agents, and these layers are by no means 'magical', they can
> > be created, recreated, destroyed, and most importantly
> > redesigned, and they tend to show up on their own.
> The way we always talk past each other is like this: you
> argue on one "level of practicality" and I argue on another.
> You don't seem to see traditions and societal inertia as
> a real force to be reckoned with and against which only
> *so much really is possible*! You always say, well,
> "we could do this" or "we could do that" or "what's to
> keep us from DIO" (the plural, I guess of DIY).
> I always argue on a different level, namely, that there will
> *definitely* be no overnight change in people's attitudes,
> nor in those who are in charge (say in America, for instance).
> I see that as a constraint, and you never do. So we talk
> past each other.

Yes, you illustrate this talking-past phenomenon when you more or less
pit DIO against "overnight change". One does not need overnight change
for DIO. Who else is going to do it? Santa? :-)

> > Look at all the random micronations, perhaps. Anyway, these layers
> > can be likened to onion layers, which are wrapped around the actual
> > thing that we are all interested in here -- intelligence, i.e. that
> > person walking around with a skull and all of those silly social
> > dynamics issues that we don't really care about. We're here for
> > whatever the hell that intelligence is. All of the hypotheticals
> > about implementation of an ai dominated society are ridiculous --
> > it doesn't actually have anything to do with intelligence.
> Since this has to be about the Nth time around here, give
> me a hand. Why do we keep talking past each other?
> Why do you dismiss entirely *out of hand* the *possibility*
> of an adverse AI takeover? In other words, when I ask again,
> as I always do, WHY IS IT RIDICULOUS, for some reason,
> your answer just goes right by me. Look deeper. What am I
> failing to see? What about my position are you failing to see?

No, it's not that it's impossible; I'm disagreeing that it's such a big
issue. Yes, I am aware of the arguments for computronium conversion,
malicious ai, and so on, and I've mentioned various solutions to the
problems we are worrying about, like people dying. What's so ridiculous
is the idea that we have to settle those things before building
intelligence. Yes, I am aware of 'byproducts', and that's certainly a
worthwhile consideration, but we can solve those problems anyway, so
why is everyone still debating ai within the context of dominator
scenarios when we know that there are solutions swimming about in our
minds? So instead of focusing on those issues, let's get to work, eh?

> > Whether or not people happen to not know how
> > to live isn't the issue (in truth, nobody does anyway); if you're
> > afraid people will die, fine, let's go engineer solutions to those
> > problems, but let's not mix it up with the task of ai engineering
> > and design. :-)
> There you go again. "Let's go engineer solutions." When, this
> afternoon? Later this evening? Actually, my next two weekends

Next few seconds. What's your number again? Never mind, I have it.
I'll call you.

> are booked, so I'll have to put off supplying myself and everyone
> else an immortality drug until late August. What the devil do you
> think De Grey and many, many others like him---far, far more
> capable than I am of "let's go engineer solutions to aging"---are
> doing? They *are* trying. And I cannot believe that you think
> that "we"---whoever that is---can do it for them.

Yes, I've read all of Aubrey's papers*, all the ones that I could find
at least, and I completely support his efforts, but that's not
everything we could be doing. If you'll remember, one of his visions
is to eventually have these immortality take-off velocity farms, where
we keep animals that are always slightly older than ourselves, to make
sure we're testing the next generation of don't-die-tech on them :-).
That's an interesting idea, yes, but there are also other options that
can help out when we're exploring the chemical possibility space --
such as getting biotechnology equipment distributed to whoever wants
to try it out themselves and maybe manage a few projects of their own.
That makes for a giant filter as more people start checking out what
they might be able to do (it also has other important benefits, like
the potential for homebrew (prescription!) drugs (insulin? :-)).
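The take-off velocity idea can be caricatured in a few lines: as long
as ongoing research adds more than one year of remaining life
expectancy per calendar year elapsed, the projected death date recedes
faster than time advances. A toy sketch (the function name and every
number here are illustrative assumptions, not real demography or
anything from Aubrey's papers):

```python
# Toy model of take-off (escape) velocity for lifespan research.
# Assumption: research running since some start age adds a fixed
# number of years of life expectancy per year elapsed.

def years_until_death(age, base_expectancy=80.0,
                      research_start=30.0, gain_per_year=1.5):
    """Remaining years for a person at `age`, given research that has
    been running since `research_start` and adds `gain_per_year`
    years of expectancy per elapsed year. All numbers illustrative."""
    research_years = max(0.0, age - research_start)
    return (base_expectancy - age) + gain_per_year * research_years

# With gain_per_year > 1, remaining lifespan *grows* as you age:
print(years_until_death(40))  # 55.0
print(years_until_death(60))  # 65.0
```

The whole argument turns on whether `gain_per_year` stays above 1.0;
below that threshold, the death date still arrives, just later.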

* I keep notes on Aubrey's papers.

> doing? They *are* trying. And I cannot believe that you think
> that "we"---whoever that is---can do it for them.

Do you know Aubrey's story? He started off as a computer science guy,
somewhere in Silicon Valley. To my knowledge, the story goes that he
became dissatisfied with the prospect of death and decided to look
into it, so now he has his map and engineering plans. He's a person
too; he's not magical (OR IS HE?).

> >> I'm surprised that by now you aren't starting to be just a bit
> >> frustrated by how your own impeccably logical ideas don't seem to
> >> be affecting thousands, let alone millions, of fellow Americans.
> >> They just look at things differently. Evolution takes a long, long
> >> time.
> >
> > I am not confused.
> Sorry. I really wasn't being satirical, even though it probably
> sounded like it. No, I mean that if there were 10^5 people just like
> you on Mars, then indeed you all would very shortly bring to fruition
> your various good ideas and schemes. But you're living in a country
> with 300 million rather relatively conservative folk who just aren't
> going to let you do whatever you want, and who aren't going to listen
> to "good sense" (as defined by Bryan and Lee).

I am not counting on 300 million people listening to "good sense". I
still don't get what you are saying though.

- Bryan

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT