From: Ben Goertzel (ben@webmind.com)
Date: Thu Nov 23 2000 - 05:58:21 MST
>
> Okay, let me get this straight. If by enlightenment we mean (very roughly)
> understanding of self, and by intelligence we mean (again, very roughly)
> understanding of reality apart from self, then this seems to make sense.
> Since one cannot observe all parts of oneself all the time (the observed
> and the observer must be separate), one may always be able to achieve
> greater self-understanding by devoting more computational resources to
> observing oneself (though intuitively there's probably a point of
> diminishing returns looming on the horizon). In other words, there's
> always an "unseen self" that cannot be eliminated; furthermore, this
> unseen self could potentially be a black hole into which the individual
> devotes more and more computational resources, attempting to observe
> verself more completely.
This is a reasonable way of putting it, sure...
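Just to make the resource-allocation picture concrete, here's a toy numerical
sketch. It's purely illustrative -- the fixed budget, the log-shaped returns
curve, and the 1/(1+c) "unseen self" term are all my own arbitrary assumptions,
not anything principled:

import math

TOTAL_COMPUTE = 1000.0  # arbitrary units; an assumed parameter

def self_understanding(self_obs_compute: float) -> float:
    """Diminishing returns: each doubling of introspective compute adds
    roughly a constant increment, so understanding keeps growing, ever
    more slowly."""
    return math.log1p(self_obs_compute)

def unseen_self(self_obs_compute: float) -> float:
    """Fraction of internal state still unobserved; positive for any
    finite budget -- a stand-in for the observer/observed separation."""
    return 1.0 / (1.0 + self_obs_compute)

for fraction in (0.01, 0.1, 0.5, 0.9, 0.99):
    c = fraction * TOTAL_COMPUTE
    print(f"{fraction:>5.0%} introspection -> "
          f"understanding {self_understanding(c):5.2f}, "
          f"unseen self {unseen_self(c):.4f}, "
          f"external compute left {TOTAL_COMPUTE - c:7.1f}")

Note the "black hole" behavior: pushing introspection from 90% to 99% of the
budget barely shrinks the unseen self, while gutting the compute left over
for external problems.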
The hypothesis of Zen and some other "wisdom traditions" is that we typically
don't spend nearly enough time/space "remembering ourselves" -- not analyzing
ourselves, but simply being acutely aware of the things we're doing ....
> But this sounds like another way of saying that omniscience is
> impossible ... which, on this list, is surely not in dispute.
>
Omniscience is impossible given our current assumptions about the physical
universe (e.g. the speed of light is the maximum speed...).
"Impossible" is a stronger term than I like to use, because given my own
finite information base, it's not possible for me to draw conclusions with
probability 1 ... I.e., the assumptions about the physical universe aren't
100% certain.
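To illustrate that last point with made-up numbers (the 50/50 prior and the
9:1 likelihood ratio below are arbitrary assumptions of mine, not anything
from physics): a reasoner doing Bayesian updates on a finite evidence stream,
where every observation is at least faintly compatible with both hypotheses,
can push its confidence as close to 1 as it likes but never actually reach it.

from fractions import Fraction

def posterior(prior: Fraction, lik_true: Fraction,
              lik_false: Fraction, n: int) -> Fraction:
    """Repeated Bayes updates on n identical confirming observations,
    in exact rational arithmetic so no rounding hides the gap below 1."""
    p = prior
    for _ in range(n):
        num = lik_true * p
        p = num / (num + lik_false * (1 - p))
    return p

for n in (1, 10, 100):
    p = posterior(Fraction(1, 2), Fraction(9, 10), Fraction(1, 10), n)
    # float(p) may display as 1.0 for large n, but the exact fraction
    # stays strictly below 1 for any finite n.
    print(n, float(p), "strictly < 1:", p < 1)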
The interesting point this leads up to, then, is: OK, so omniscience is
impossible, but a system with a LOT MORE resources is going to be able to
have a LOT MORE self-awareness/mindfulness than we have, together with a LOT
MORE external-problem-focused intelligence...
It will still be subject to the limitations I've described -- BUT, there can
be little doubt that its "internal landscape," its assortment of states of
consciousness, will be rather different from ours...
We have a few states of consciousness that we're in most of the time:
ordinary waking consciousness, dreaming, hypnagogic/hypnopompic states ...
then there are various drug states, fugues, etc. Presumably a machine with
an order of magnitude greater processing power will get into yet other
states of consciousness...
Anyone have any concrete intuitions about what they might be?
To me this ties directly into the notion of Friendly AI. I think there are
states of consciousness where compassion and ethics mean NOTHING... or
almost nothing.... I can think of a couple:
1) rage (definitely an altered state of awareness) ... the state people get
in when they kill other people out of anger. Anyone who's been in a serious
physical fight has probably felt the fringes of this state coming on...
unfortunately, I think I have...
2) nihilistic indifference ... like Kirillov in "The Possessed" (also called
"The Devils") by Dostoevsky ... this guy didn't kill anyone in the novel,
but he might as well have -- he just didn't believe the world existed, so he
just didn't care whether any other person happened to be alive or not...
Now, I'd hypothesize that an AI in the future, with a different inner
landscape of consciousness-states, may well have a whole new assemblage of
BAD (unFriendly) states of consciousness, as well as a whole new assemblage
of good ones.... Unless we can anticipate these, how can we design in
advance a system that will lead to a post-Singularity Friendly AI system?
Ben