META: Molloy (was: Friendly AI)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Nov 25 2000 - 01:53:59 MST


"J. R. Molloy" wrote:
>
> BTW, thanks to Eliezer for setting up this list. I hope he doesn't get
> bummed out by the likes of me spouting my opinions here.

Well, to be honest - and I did ask for honesty on this list - I am a bit
bummed out by the fact that you started posting, since I think that you
make the most intellectually sterile posts of anyone I've ever known.
I've been internally debating with myself what to do about that for the
last couple of days. It's made harder by the fact that your posts, taken
individually, always manage to sound like the sort of thing that *should*
be interesting. You know, the kind of posts where people who pride
themselves on their open-mindedness say to themselves: "This is something
that should be interesting" or "Other people might be interested in this"
or "This is something that deserves to be debated", but nobody's actually
*personally* interested and the top posters are bored sick by the thought
of writing a response. That sort of thing is poison to list quality and
it's exactly the sort of thing I want to avoid for SL4.

I'm not sure SL4's readers would understand if I pre-emptively banned you,
and writing to you offline and asking you to stop posting would interfere
with the openness of the moderation process. Since you raise the subject,
though, I think that simply stating my personal opinion on the topic may
turn out to be everything needed, in terms of maintaining list quality. If,
however, your posts continue to generate responses and the responses
themselves do not cover wholly new-to-the-planet-Earth intellectual
territory, then I may ask you to stop posting.

For the record, my current solution is as follows:

"Nobody should feel obligated to respond to J.R. Molloy's posts."

==

Incidentally, Spudboy, you are guilty of the same offense as Molloy -
although to a lesser degree. Our futuristic reasoning is causal, not
teleological. It's hard to explain, in words, the difference between the
clarity of extrapolation and just making up things that sound nice as you
go along, but I get the strong impression that you are doing the latter.
And you CONSISTENTLY quote the entire bodies of messages in your responses,
which is technically a violation of list etiquette that only the most
valued posters are allowed to get away with.

==

Molloy and Spudboy are totally free to flame me for saying this, of
course. Anyone who disagrees with my methods of moderation is free to say
so, especially Samantha Atkins (who does maintain high post quality, but
who disagrees with me about the ethics of moderation). It seems obvious
to me that META posts, especially those critical of me, should be
moderated much less rigorously than discussion of futurism proper - unless
the META posts start to take over the list. I will continue to do my best
to ensure quality of posts on futurism proper.

==

"J. R. Molloy" wrote:
>
> Multi-AI seems to me the most promising approach to creating AI and Alife.
> Some will argue that it's too dangerous because a multi-AI experiment can
> more readily get out of control than a single AI, but I'd counter that a
> hundred AIs can respond to evolutionary tactics better than a single AI
> can, and that evolution (more than top-down coding) will most likely
> result in genuine AI systems bred to interact with humans.

"J. R. Molloy" wrote:
>
> Evolution does not stop being evolution when guided by un-natural
> selection.
>
> Check out:
> http://www.canoe.ca/CNEWSScience0008/30_robot.html
> A computer programmed to follow the rules of evolution has for the first
> time designed and manufactured simple robots with minimal help from
> people.

"J. R. Molloy" wrote:
>
> The perfectly infinite multiverse presents unlimited existential awareness
> to any intelligence (human or SI) that can grok it. "Does SI have Buddha
> nature?" asked the sanyasi.
> "Mu" replied the master.

"J. R. Molloy" wrote:
>
> Is technological singularity a "powerful notion" or is it an event that
> requires the attention of all sane human beings (ethical, intelligent,
> individual, worldly, and mindful folks everywhere)?

"J. R. Molloy" wrote:
>
> It seems to me we should first of all consider how AIs behave toward us.
> Let them feel whatever they want -- it doesn't matter as much as how they
> actually function and conduct themselves. They might try to kill us
> because they love us, or they might try to help us solve our problems
> because they pity us. Who cares.
> Asimov's unwritten Alife law: AIs that misbehave get terminated
> immediately. The ones that invent new ways to solve human problems get to
> breed (multiply, reproduce, evolve new versions of themselves, etc.).

==

Spudboy100@aol.com wrote:
>
> This is a question more for the future. But if somebody gets uploaded, and
> stays inside their fantasy world, will not insanity result? Is this why there
> have been no repeated "signals" SETI-wise? That civilizations join the Land
> of the Lotus Eaters and forget their primate origins? If one spends their
> time as an artilectual rhomboid inside cybernetic dimension -7, and forgets
> what it's like to go to a bookstore and have coffee with a friend, won't
> this just serve to set us up for a bad end?

Spudboy100@aol.com wrote:
>
> I am not sure that Ben's shock level is not more profound at SL2 than it is
> at SL3. If a culture has already experienced SL2, then SL3 is merely an
> extension. If the Galaxy is discovered to be mostly non-biotic and
> non-intelligent, then what impact does star-travel have? Intelligent aliens
> created out of petri dishes seem much more dramatic to me, because they
> become the human species' Mind Children, to quote Moravec and Minsky.
> Opinions?

Spudboy100@aol.com wrote:
>
> Can there be an Ultimate SuperMind? How would we define such a mind? Would
> it encompass the universe, or could it transcend the visible cosmos and pass
> to other cosms? I am not trying to get to religion in a dishonest way on
> this list; but I like dealing with the endpoint of things, and seeing
> what real creativity and power might achieve.

Spudboy100@aol.com wrote:
>
> Have you considered scenarios for when the first of these ultratechnologies
> will occur or come to full flower? What do you think has to come together,
> scientifically, culturally, and economically to have this takeoff? Any
> putative timelines people need to be aware of?

(Answer: NO. Everything on the ultratech list, except possibly
nanotechnology, is post-Singularity tech unless something totally
unexpected happens (see "Moving Mars" by Greg Bear), probably with
disastrous results.)

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


