Re: [SL4] washington post article

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Apr 02 2000 - 17:51:04 MDT


Greg A wrote:
>
> The key element that I think you're missing, Eliezer, (and which I don't
> blame you for, since it's intentionally withheld in the preliminary
> announcement) is the (relatively) novel concept (at 30 hours old) of the
> semantic network. This only makes sense if you read this:

FYI: "Semantic network" is a term going back for forty or fifty years
of classical AI. It has a long and hated history. If you mean
something different, I *strongly* advise you, from a PR perspective, to
pick a different name.

> http://www.intelligententerprise.com/9811/online2.shtml
>
> particularly the mathematical concept of "projection." Dr. Codd noticed it
> in 1969. I noticed it in 1999. I understood it about 10 days ago, and I came
> across the Codd paper (while searching for the name of SQL's inventor and the
> date of origin) on Friday morning. That killed the last of my doubt.

Well, either I have not understood your proposed use of "projections",
or I have failed to notice projections entirely.
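
For anyone on the list who hasn't met the term: in Codd's relational
algebra, "projection" just means keeping a chosen subset of a relation's
columns and dropping the duplicate rows that result. A toy Python sketch,
purely to pin the word down (the helper name and the data are mine, not
Greg's or Codd's):

# A toy relation: a list of rows, each row a dict from attribute to value.
def project(relation, attributes):
    """Codd-style projection: keep only the named attributes and
    drop any duplicate rows that result."""
    seen = set()
    out = []
    for row in relation:
        reduced = tuple((a, row[a]) for a in attributes)
        if reduced not in seen:
            seen.add(reduced)
            out.append(dict(reduced))
    return out

people = [
    {"name": "Codd", "field": "databases", "year": 1969},
    {"name": "Greg", "field": "databases", "year": 1999},
]
print(project(people, ["field"]))   # -> [{'field': 'databases'}]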

> My non-linguistic math sucks. I'm sure you guys can see much bigger
> implications after you see the link I'm talking about.
>
> The essence of my idea (quickly maturing into a theory) is that the Codd
> concept of projection is/(can be) equivalent to semantic meaning in
> fact-based computation involving human beings.

What kind of meaning? I use an RNUI-like scale to describe what I would
call "representational bindings" or "semantic bindings", i.e. the degree
to which a model represents reality.

1) Sensory binding. This occurs when elements of the model covary with
elements of the external world (from a programming perspective, when
they covary with incoming data from a sensory device).

2) Predictive binding. This occurs when you can use the model to
predict what the incoming data will be.

3) Decisive binding. This occurs when you can use the model to
influence external reality; that is, when you can select, of three
possible actions, the one which will result in the most desired result
(as incoming data).

4) Specifiable binding. (Or "manipulative" binding.) This occurs when
you can start with a desired external result, and notice/invent an
action or actions which will lead to that result ("external result" ==
incoming data from a sensory device). The three sublevels of
specifiable binding are qualitative, when the desired outcome is
selected from the members of a finite set (more or less the same as a
decisive binding); quantitative, when the desired outcome is an integer
or real number (thus, blind search through the space of possible actions
could never suffice to reach the desired outcome); and structural - that
is, the specified result may consist of multiple interconnected subelements.

If an intelligence has a representation with a structural specifiable
binding, fully integrated with the goal system, and hierarchically
integrated so that sub-elements of a specifiable system can be treated
as subproblems in another specifiable representation, then this is what
we call "intelligent design". That is, this is how we get from the
problem of "high-speed travel" to visualizing a bicycle to designing
wheels and gears and all the little pieces to machining the parts.
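
If it helps to see the four levels as a programming interface, here is a
toy sketch. Every name in it is mine and purely illustrative; the levels
themselves are defined only informally above.

class BoundModel:
    def observe(self, sensory_data):
        """Level 1, sensory binding: make model elements covary with
        incoming data from a sensory device."""
        raise NotImplementedError

    def predict(self):
        """Level 2, predictive binding: return the expected incoming data."""
        raise NotImplementedError

    def predicted_result(self, action):
        """Helper for levels 3 and 4: the incoming data expected if
        `action` is taken."""
        raise NotImplementedError

    def decide(self, possible_actions, desirability):
        """Level 3, decisive binding: of the possible actions, select
        the one whose predicted result is most desired."""
        return max(possible_actions,
                   key=lambda a: desirability(self.predicted_result(a)))

    def specify(self, desired_result):
        """Level 4, specifiable binding: start from a desired external
        result and construct the action(s) leading to it (qualitative,
        quantitative, or structural, per the sublevels above)."""
        raise NotImplementedError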

> [NOTE: There are lots of interesting fallout implications here, but I'm
> trying to excite the world about the business implications first before they
> figure these out. SL4 is a suitably rarefied atmosphere for exploration of
> the deeper meaning in an attention-secure arena.]

By which I assume you mean that none of your business rivals are paying
attention, although they easily could be.

> What THAT means is that a team of linked humans sharing fact-based data can
> perform fact-based computations much better than a pure machine will for the
> foreseeable future (i.e. the next 30 days).

Translate, translate... "If you can use humans as primitive operations,
you can build a 'computer program' that does cool stuff."
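
Or, as a toy sketch (the function names and the input() channel are
stand-ins of my own invention; the real plumbing would be whatever
actually links the humans together):

def human_op(question):
    """A human answer treated as a primitive operation in the program;
    input() stands in for the channel linking the humans."""
    return input(question + " ")

def filter_true_facts(statements):
    # Ordinary machine control flow, with a human doing the one step
    # machines still do badly: judging whether a statement is true.
    return [s for s in statements
            if human_op(s + " -- true or false?").strip().lower().startswith("t")]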

> Welcome to the New World of Computing, everyone. Check what I'm saying.
> Follow the links. Decide whether I'm crazy or genius or both, and let me
> know.

I followed the link. You've still lost me.

> "Essential humanity" means what's left over as uniquely human here in the
> New World. I'm hoping it will largely be the good stuff, since the good guys
> have built the first cybersocial weapon with potentially planetary scale
> effect. I think we can stay ahead of the bad guys for long enough to make it
> not matter, mainly because the bad guys won't WANT to understand what we're
> talking about.

Bill Joy wants to understand. It might work for a year, but not a
decade. And you'd be amazed at what people are willing to understand if
it'll make them a buck.

What's a "cybersocial weapon"? Is that like a collaborative filtering
mechanism designed to track movements in opinion space and steer
participants to desired foci? (I.e., is that like a massive Web-rating
system designed to track which pages are effective at converting
Democrats to Republicans and steer Democrats to those pages?) I tend to
view that sort of thing as morally unacceptable except as a special
consequence of a morally acceptable system (i.e. a web-rating system
which steers people to opinion-changing pages, regardless of creed).
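
To pin the distinction down, here's a toy sketch of the version I'd call
acceptable: rate pages by how often readers change their minds after
reading, in either direction, and steer readers toward the high scorers.
All the names and the rating rule are my guesses, not anything Greg has
actually described:

from collections import defaultdict

# page -> {"reads": n, "changes": m}; every name here is illustrative.
stats = defaultdict(lambda: {"reads": 0, "changes": 0})

def record_visit(page, opinion_before, opinion_after):
    """Log one reading and whether the reader's opinion moved at all
    (direction deliberately ignored -- that's the creed-neutral part)."""
    stats[page]["reads"] += 1
    if opinion_after != opinion_before:
        stats[page]["changes"] += 1

def most_persuasive(pages, min_reads=10):
    """Rank pages by observed opinion-change rate; a steering layer
    would then recommend the top of this list to new readers."""
    rated = [p for p in pages if stats[p]["reads"] >= min_reads]
    return sorted(rated,
                  key=lambda p: stats[p]["changes"] / stats[p]["reads"],
                  reverse=True)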

> If that's not good news to this group (particularly in light of Mr. Bill
> Joy's recent Wired article), I don't know what would be.
>
> REQUEST FOR INACTION: Although this idea is revolutionary, I would very much
> appreciate it if nobody took the perceived connection outside this list for
> now. I am declaring this idea humanity's first un-patent (i.e. uncontrolled
> idea), in the tradition of copyleft, and Fact Technologies doesn't need to
> get involved in expensive legal games at this hypercritical early stage of
> the process. I'm counting on you guys for support in effecting this social
> explosion, so please prove to me that my decision to trust you was valid.

Um, please be aware that SL4 is an uncontrolled list. Unlike the
Singularitarian list, I do not approve subscriptions here. Anyone could be listening.

-- 
       sentience@pobox.com      Eliezer S. Yudkowsky
          http://pobox.com/~sentience/beyond.html
                 Member, Extropy Institute
           Senior Associate, Foresight Institute

