Re: [sl4] Re: Property rights

From: Lee Corbin
Date: Sat Jul 12 2008 - 19:21:01 MDT

Bryan writes

>> I always argue on a different level, namely, that there will
>> *definitely* be no overnight change in people's attitudes,
>> nor in those who are in charge (say in America, for instance).
>> I see that as a constraint, and you never do. So we talk
>> past each other.
> Yes, you illustrate this talking-past phenomenon when you're more or less
> pairing DOI against "overnight change". One does not need overnight
> change for DOI. Who else is going to do it? Santa? :-)

It will be done by well-funded groups, by corporations, with, I
agree, some contributions from the merely enthusiastic.

>> > whatever the hell that intelligence is. All of the hypotheticals
>> > about implementation of an ai dominated society are ridiculous --
>> > it doesn't actually have anything to do with intelligence.
>> Since this has to be about the Nth time around here, give
>> me a hand. Why do we keep talking past each other?
>> Why do you dismiss entirely *out of hand* the *possibility*
>> of an adverse AI takeover? In other words, when I ask again,
>> as I always do, WHY IS IT RIDICULOUS, for some reason,
>> your answer just goes right by me. Look deeper. What am I
>> failing to see? What about my position are you failing to see?
> No, it's not that it's impossible, I'm disagreeing that it's such a big
> issue. Yes, I am aware of the arguments for computronium conversion,
> malicious ai, and so on, and I've mentioned various solutions to the
> problems that we are worrying about, like people dying.

Sorry for interrupting, but what is your solution for dying again?
Have you checked it out with people who've been studying
anti-aging and age-reversal for decades? I'll bet your solutions
are all hand-wavy. Yes, you can post twenty (or two thousand)
links to helpful information that "we" could indeed use
to "solve" the problem of aging, but you haven't solved it. And
no one else has. Or will for a long time.

> So, what's so ridiculous is that we have to consider those things when
> building intelligence. Yes, I am aware of 'byproducts', and that's
> certainly a worthwhile consideration, but we can solve those problems
> anyway, so why is everyone still debating ai within the context of
> dominator scenarios when we know that there are solutions swimming
> about in our minds? So instead of focusing on those issues, let's get
> to work, eh?
> (I just added this one today), and
> (the classic).

That last link is bad. The reason my response is so late is that I read
the first two links. They do outline a lot of work. Unfortunately, I'm
already fully employed, and can't expect to do serious programming
each time I see one of your links, or each time I see a new model that
could really contribute to progress in brain mapping; and besides, how
would this be integrated into the total world researcher dataset anyway?
Any system administrator of a brain, or anyone managing these
transformations, might be interested in managing I/O, phenotypic
feedback, biofeedback, neurofeedback, acoustic feedback, or whatever.
Our human cortex gene expression data set (just what we already have
and what has been linked to in our discussions here) could, I suppose,
be displayed over the Google Maps API or something very similar to it,
and then this could indeed be used for neurotagging and the correlation
of information to specific regions of the map, while also giving users
an interface for managing that information, selection views... but what
would this lead to? Search me! I have no idea, but it does sound like
it could lead to something.
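(For what it's worth, that neurotagging idea could be mocked up in a few
lines. Here's a minimal, purely hypothetical sketch: the region names,
coordinates, and expression values are all invented for illustration, and
no real atlas or actual Maps API is involved. It just turns atlas-style
entries into GeoJSON-like overlay features, of the sort a map layer could
render, and lets a user attach a tag to whichever region is nearest a
clicked point.)

```python
# Hypothetical sketch of "neurotagging" over a map-style overlay.
# ATLAS is a toy stand-in for a cortical gene-expression data set:
# region -> 2D map coordinate and an (invented) expression level.
ATLAS = {
    "prefrontal": {"coord": (10.0, 40.0), "expression": 0.82},
    "motor":      {"coord": (25.0, 55.0), "expression": 0.41},
    "visual":     {"coord": (70.0, 20.0), "expression": 0.67},
}

def to_overlay_features(atlas):
    """Convert atlas entries into GeoJSON-like point features
    that a map layer (Google Maps or similar) could render."""
    return [
        {
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": list(info["coord"])},
            "properties": {"region": name,
                           "expression": info["expression"],
                           "tags": []},
        }
        for name, info in atlas.items()
    ]

def neurotag(features, point, tag):
    """Attach a user tag to the feature nearest the clicked map point;
    return the name of the tagged region."""
    def dist2(f):
        x, y = f["geometry"]["coordinates"]
        return (x - point[0]) ** 2 + (y - point[1]) ** 2
    nearest = min(features, key=dist2)
    nearest["properties"]["tags"].append(tag)
    return nearest["properties"]["region"]

features = to_overlay_features(ATLAS)
region = neurotag(features, (12.0, 38.0), "high expression here?")
print(region)  # prints "prefrontal" (the region nearest the click)
```

Obviously a real system would need the actual expression data, real atlas
coordinates, and a tile server; this only shows the shape of the idea.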

But as to the specific question you raise: "so why is everyone still
debating ai within the context of dominator scenarios when we
know that there are solutions swimming about in our minds?"
Entertainment? Because some of us lack confidence that we can
churn out lab-tested solutions before total senescence takes over?
Or maybe people here debate those ideas not just for entertainment,
but also because humankind's relations with future AI hinge on
every single word we say in a forum like this! Yeah? or Meh?

>> > Whether or not people happen to not know how
>> > to live isn't the issue (in truth, nobody does anyway); if you're
>> > afraid people will die, fine, let's go engineer solutions to those
>> > problems, but let's not mix it up with the task of ai engineering
>> > and design. :-)
>> There you go again. "Let's go engineer solutions." When, this
>> afternoon? Later this evening? Actually, my next two weekends
> Next few seconds. What's your number again? Nevermind, I have it. I'll
> call you.


That is not an answer! You're deep in trouble, friend, unless
someone (Christ, King of the Jews?) saves you. How can someone
in your state be so cool about our fate?

>> your various good ideas and schemes. But you're living in a country
>> with 300 million rather relatively conservative folk who just aren't
>> going to let you do whatever you want, and who aren't going to listen
>> to "good sense" (as defined by Bryan and Lee).
> I am not counting on 300 million people listening to "good sense". I
> still don't get what you are saying though.

I'm saying that many people here consider it interesting or
fruitful to consider the logical possibilities inherent in a sudden
AI takeoff. Okay, so you don't. Then maybe you might consider
not replying to posts like that, imploring (the way you do)
that the subject be changed.


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT