RE: Rationality and altered states of consciousness

From: Ben Goertzel (ben@goertzel.org)
Date: Sat Sep 21 2002 - 08:06:33 MDT


> > So, getting back to the real world, I think that BOTH
> >
> > a) singularity-oriented R&D
> >
> > b) genuine efforts to raise the consciousness of human beings
> >
> > are under-resourced in our society, compared to what would be optimal.
> > Which is more under-resourced depends on interpretation of borderline
> > cases. (Which research counts as singularity-oriented --- do we count
> > work on faster microprocessors? Which work counts as
> > consciousness-raising -- do we count the yoga studio on the corner?)
>
> Okay. You can refuse to judge if you like. I think that we should look
> at all ongoing efforts, judge the probabilistic payoff to sentient life of
> joining each one, and join the best.

Well, your sentence as explicitly worded omits the possibility of creating a
new sort of effort, but I assume that's just a slip of verbal formulation.
I guess what you really mean is

"I think that we should look
 at all ongoing efforts and all newly proposed efforts, judge the
probabilistic payoff to sentient life of
 joining each one, and join the best. "

I have done something closely resembling that, and it's evident that my
choice is pretty close to your own... though we have both newly proposed our
own projects rather than joined existing ones.

It's not that I refuse to make my own judgment. It's that my judgments come
with multiple truth values, most simply a "strength" value and a "weight of
evidence" value. If we normalize them both into the interval [0,1], I'd say
the statement

"Real AI work is the most promising direction for sentient life"

has strength ~= 0.99 and weight_of_evidence ~= 0.6.

Ordinary language doesn't provide for even this level of multidimensional
truth-value specification, which causes a lot of confusion. (Ordinary
uncertain-logic formalisms don't either, amusingly enough.)

It's because my weight_of_evidence estimate is not so high that I am not so
judgmental about others who have a different opinion. I consider it
reasonably likely that others might draw conclusions based on evidence sets
not overlapping with my own; but based on the evidence set I've been able to
consider, I've drawn the strength conclusion that I have.

You're always talking about BPT. As you know, I feel that standard
formulations of probabilistic inference are flawed because they don't take
weight of evidence into account. However, it can be taken into account
within the probabilistic framework, as in Novamente's probabilistic term
logic. I think that good human inference does this also, though not in
exactly the same way as Novamente does.
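For instance, one very simple way to fold evidence counts into probabilistic
revision (again just an illustrative sketch, not the actual Novamente rule)
is to weight each strength by its share of the total evidence when two
estimates of the same statement are merged:

    # Illustrative sketch: combine two (strength, evidence-count) estimates
    # of one statement, assuming their evidence sets don't overlap. The
    # count-weighted average is an assumption for the example, not the
    # actual Novamente revision formula.
    def revise(s1: float, n1: float, s2: float, n2: float) -> tuple[float, float]:
        n = n1 + n2
        return (s1 * n1 + s2 * n2) / n, n

    # A well-evidenced 0.5 pulls a thinly-evidenced 0.99 most of the way back:
    print(revise(0.99, 15.0, 0.5, 100.0))  # -> (~0.564, 115.0)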

> This is a Categorical-Imperative-OK
> strategy - it works if "everyone does it" - because you aren't
> automatically joining the project that seems "best", but rather joining
> the project that combines the greatest degree of promise with the greatest
> effect you can have on it.

This is a very good formulation of one of the things I was trying to say.
You worded it better.

The only slant I was adding to this was that, humans being what they are,
the project on which one will have the greatest effect is often the project
for which one has the most passion.

Samantha has so much obvious passion for advancing human consciousness,
which leads me to believe she may have a much larger effect in that domain
than in others.

Or maybe after a couple years she'll realize current-model humans are
largely hopeless and join the Novamente team or SIAI's programming group ;>

> If one project grows too large and threatens
> to suck up all the resources, individuals following this strategy will
> tend to join small struggling projects because they expect to have a
> greater impact here.
>
> Of course, because the current distribution of efforts is so wildly
> unbalanced, the issue doesn't really arise; Singularity research is the
> least funded *and* most promising approach. I mention the above simply to
> argue that this kind of thinking works even if everyone does it.

I understand what you mean about Singularity research being very poorly
funded.

However, it's not unreasonable to believe, as Kurzweil and others do, that
explicit Singularity research is largely irrelevant at this stage, because
general hi-tech development is what's gonna bring us there. (He has
developed this perspective in a lot more detail.)

I know, though, you think he doesn't really understand the Singularity,
which brings a different dimension to the discussion.

My strong feeling is: It's incontrovertible, concrete, observable technology
results that are going to bring significant funding our way, not conceptual
explanations.

Either fabulous narrow-AI results obtained along the way to general AI
(which I realize is not a strong possibility in your own AI approach), or
(much nicer) animal-level general intelligence results obtained along the way
to general AI ... these will bring significant funding to
Singularity-oriented AGI research.

Whereas, I believe, no amount of talk about the Singularity will do so.
Because most people -- including nearly all people controlling funding --
will look at the same world we're looking at, and listen to our carefully
constructed trains of logical reasoning, and conclude that we're building
wishful-thinking-driven inferential card-houses, drawing wildly ambitious
conclusions by speculating from scant evidence.

There's always the possibility of an extremely wealthy or powerful
"statistical outlier" investor/philanthropist coming along who really sees
things roughly the way we do. But I see this as a low-probability
occurrence. Incremental engineering results are going to be the key to
getting the substantial funding needed to accelerate our progress along the
AGI route to the Singularity.

-- Ben G


