[sl4] Shock level confidence (was: Property rights)

From: Bryan Bishop (kanzure@gmail.com)
Date: Sat Jul 12 2008 - 22:22:02 MDT

On Saturday 12 July 2008, Lee Corbin wrote:
> > > I always argue on a different level, namely, that there will
> > > *definitely* be no overnight change in people's attitudes,
> > > nor in those who are in charge (say in America, for instance).
> > > I see that as a constraint, and you never do. So we talk
> > > past each other.
> >
> > Yes, you illustrate this talking-past phenomenon when you're more or
> > less pairing DOI against "overnight change". One does not need
> > overnight change for DOI. Who else is going to do it? Santa? :-)
> It will be done by well-funded groups, by corporations, with, I
> agree, some contributions from the merely enthusiastic.

Is this what your magic eight ball says?

> >> > whatever the hell that intelligence is. All of the hypotheticals
> >> > about implementation of an ai dominated society are ridiculous
> >> > -- it doesn't actually have anything to do with intelligence.
> >>
> >> Since this has to be about the Nth time around here, give
> >> me a hand. Why do we keep talking past each other?
> >> Why do you dismiss entirely *out of hand* the *possibility*
> >> of an adverse AI takeover? In other words, when I ask again,
> >> as I always do, WHY IS IT RIDICULOUS, for some reason,
> >> your answer just goes right by me. Look deeper. What am I
> >> failing to see? What about my position are you failing to see?
> >
> > No, it's not that it's impossible, I'm disagreeing that it's such a
> > big issue. Yes, I am aware of the arguments for computronium
> > conversion, malicious ai, and so on, and I've mentioned various
> > solutions to the problems that we are worrying about, like people
> > and them dying.
> Sorry for interrupting, but what is your solution for dying again?
> Have you checked it out with people who've been studying
> anti-aging and age-reversal for decades? I'll bet your solutions
> are all hand-wavy -- and you can post twenty (or two thousand)
> links to helpful information that "we" could indeed use
> to "solve" the problem of aging, but you haven't solved it. And
> no one else has. Or will for a long time.

So, let's say it's not "my" solution for not dying, and let's just call
it Aubrey's. I was only using it as a general reference to the fact
that people are working on it. People are -- not magical unicorns. So
if you really care about people not dying, go pursue that work, set up
giant archives and impressive life support systems, deploy better
ambulatory capacities, whatever it might take. But please don't
confuse that with the design, construction, and feasibility issues of
intelligence. You respond to this same line below ...

> > So, what's so ridiculous is that we have to consider those things
> > when building intelligence. Yes, I am aware of 'byproducts', and
> > that's certainly a worthwhile consideration, but we can solve those
> > problems anyway, so why is everyone still debating ai within the
> > context of dominator scenarios when we know that there are
> > solutions swimming about in our minds? So instead of focusing on
> > those issues, let's get to work, eh?
> > http://heybryan.org/buildingbrains.html
> > http://heybryan.org/humancortex.html (I just added this one today),
> > and http://heybryan.org/recursion (the classic).
> That last link is bad. The reason my response is so late is that I

The last link should have been http://heybryan.org/recursion.html .

> read the first two links. They do outline a lot of work.
> Unfortunately, I'm already fully employed, and don't expect to
> achieve much in the way of doing some serious programming each time I
> see one of your links, or each time I see a new model that could
> really contribute towards progress in brain mapping; and besides,
> how would this be integrated into the total world researcher dataset

I'm not sure what you mean by "integrated" here, or really by "the
world researcher dataset" -- do you mean the users of the system that
was mentioned from the brain-mapping perspective of things? Like
adding collected data to stuff that is internet-accessible? That's
usually just a simple httpd daemon, or some other protocol service, if
that's what you mean.
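To make that concrete, here's a minimal sketch of what "a simple httpd
daemon" for collected data amounts to -- the filename, directory, and
file contents below are all made up for illustration:

```python
# Minimal sketch: write one "collected data" file and serve its
# directory over HTTP. Filename, contents, and column names are
# hypothetical illustrations, not real datasets.
import http.server
import socketserver
import threading
import urllib.request
import os
import tempfile
from functools import partial

workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, "cortex.csv"), "w") as f:
    f.write("region,expression\nV1,0.42\n")

# Serve `workdir` on an OS-assigned free port.
handler = partial(http.server.SimpleHTTPRequestHandler, directory=workdir)
httpd = socketserver.TCPServer(("127.0.0.1", 0), handler)
port = httpd.server_address[1]
threading.Thread(target=httpd.serve_forever, daemon=True).start()

# Any client (a collaborator's browser, a script) can now fetch the data.
url = f"http://127.0.0.1:{port}/cortex.csv"
data = urllib.request.urlopen(url).read().decode()
httpd.shutdown()
```

That's the whole trick -- once the data sits behind any HTTP endpoint,
"integration" is just other researchers pointing their tools at the URL.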

> anyway? Any system administrator of a brain or even of managing these
> transformations might be interested in managing I/O, phenotypic
> feedback, biofeedback, neurofeedback, acoustic feedback, or whatever.
> Our human cortex gene expression data set -- just what we already have
> here and what has been linked to in our discussions here -- could, I
> suppose, be displayed over the Google Maps API or something very
> similar to it, and then this could indeed be used for neurotagging
> and the correlation of information to specific regions of the map,
> while also giving users an interface for managing that information,
> selection views... but what would this lead to? Search me! I have no
> idea, but it does sound like it could lead to something, no?

Just as a network administrator manages a cluster of boxes, you can
manage the brain -- even in your day-to-day interactions, like opting
to drink a cup of coffee. Analogies to intensive care units would be
appropriate, since that's what medical professionals are doing
anyway -- it's just that this level of detail is hardly spent on
situations not involving a large possibility of death. So if we had a
computational system for this monitoring as well as augmentation, we
could work, in a systematic, quantifiable manner, on all (well,
most -- I wouldn't recommend self-trepanation) of these (personalized)
transhuman technologies we're interested in. [This doesn't get us the
replicators, for instance.]
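The neurotagging idea you quoted back at me is not exotic either: it's
a mapping from named map regions to user-supplied annotations, the same
way a maps API attaches markers to coordinates. A sketch, where every
region name, note, and source below is a made-up illustration:

```python
# Hypothetical sketch: correlate annotations ("neurotags") with named
# regions of a cortex map. All region names and notes are invented
# examples, not real data.
from collections import defaultdict

neurotags = defaultdict(list)

def tag(region, note, source):
    """Attach an annotation to a named map region."""
    neurotags[region].append({"note": note, "source": source})

def tags_for(region):
    """Return all annotations correlated with a region."""
    return neurotags[region]

tag("V1", "high expression of some gene of interest", "dataset-A")
tag("V1", "responds to oriented edges", "textbook")
tag("M1", "primary motor cortex", "textbook")
```

Put a viewer like the Google Maps API in front of a structure like that
and you have exactly the "users managing information, selection views"
interface described above -- what it leads to is up to whoever tags.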

> But as to the specific question you raise "so why is everyone still
> debating ai within the context of dominator scenarios when we
> know that there are solutions swimming about in our minds?".
> Entertainment? Because some of us lack confidence that we can
> churn out lab-tested solutions before total senescence takes over?

But that's just the point -- if you isolate the basis of intelligence,
you can exploit it as it exists now to develop those technologies, if
you're so unconfident of just one brain doing it. How many brains
until you feel confident? One? Two? Three? Thousands?

> >> > Whether or not people happen to not know how
> >> > to live isn't the issue (in truth, nobody does anyway); if
> >> > you're afraid people will die, fine, let's go engineer solutions
> >> > to those problems, but let's not mix it up with the task of ai
> >> > engineering and design. :-)
> >>
> >> There you go again. "Let's go engineer solutions." When, this
> >> afternoon? Later this evening? Actually, my next two weekends
> >
> > Next few seconds. What's your number again? Nevermind, I have it.
> > I'll call you.
> That is not an answer! You're deep in trouble friend, someone christ
> - king of the jews. How can someone in your state
> be so cool about our fate?

You asked when ... [and the reference is lost on me. I did my
searching, still can't get it. Are you Jesus?]

> I'm saying that many people here consider it interesting or
> fruitful to consider the logical possibilities inherent in a sudden
> AI takeoff. Okay, so you don't. Then maybe you might
> consider not replying to posts like that, imploring, (the way
> you do) that the subject be changed.

Maybe I'm digging in a dry well.

- Bryan
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap
"Genius is the ability to escape the human condition;
Humanity is the need to escape." -- Q. Uim

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT