From: Durant Schoon (firstname.lastname@example.org)
Date: Mon Feb 12 2001 - 14:14:46 MST
I'd just like to make a few comments (to Gordon mostly):
0) I wanted to note that my posts to the "Beyond evolution" thread should
have been labeled "[HUMOR] Beyond evolution" - Ha...I had to get
this correction in there somehow :-)
1) Gordon: Given your example, think about how incredibly important it
   is to create a Friendly SI FIRST, before anyone else creates a
   Less-Than-Friendly SI, and why it is also important that this SI
   have absolute control of everything so that we can all play fair.
   (My hunch, shared by many others, is that the first one will
   dominate anyway, because its head start will keep it way ahead of
   the competition.) I believe that Eliezer's book will do a good job
   of defining something we will all agree is fair (as difficult as
   that task sounds). We just have to wait for it, and this list might
   be a good place for him to debug some of his ideas before seeking
   widespread feedback.
2) Thinking about Good and Evil can be very tricky, especially if you do
it wrong. Philosophers have burned countless cycles on the subject.
But this raises a good point. Will this book on the "Friendly AI"
have a primer on this subject to bring everyone from ground zero
all onto the same page? My quest for answers started in junior high,
when one of my father's friends gave me a copy of Will Durant's
"The Story of Philosophy". Good questions, mostly. Wrong answers,
though. It wasn't until I got a computational view of the universe
that I finally thought I was getting somewhere. I would hope that
the "Friendly AI" book would help my former 13-year-old self (or
perhaps a more precocious version of my 13-year-old self), if I'd
had the chance to read it that young.
Perhaps people on this list can suggest some good authors on the
subject...um, Cosmides and Tooby? Steven Pinker's How the Mind
Works...theories on the evolution of cooperation and competition
(Axelrod? Iterated Prisoner's Dilemma type stuff). Richard
Dawkins...Christopher Langton...which might answer the question:
"Why do I label some things Good and others Evil?"
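For anyone new to the Axelrod-style work mentioned above, here's a
minimal sketch of the Iterated Prisoner's Dilemma using the standard
T=5, R=3, P=1, S=0 payoffs. The strategy and function names are my own
illustration, not anything from Axelrod's actual tournament code:

```python
# Minimal Iterated Prisoner's Dilemma sketch (illustrative, not Axelrod's code).
# 'C' = cooperate, 'D' = defect; standard payoffs T=5, R=3, P=1, S=0.
PAYOFF = {  # (my_move, their_move) -> my score
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    """Defect no matter what."""
    return 'D'

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b = [], []  # moves each player has seen from the other
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_a)  # a chooses based on b's past moves
        b = strat_b(hist_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # TFT is exploited once, then both defect
print(play(tit_for_tat, tit_for_tat))    # mutual cooperation every round
```

The interesting point for this thread is that a "nice" strategy like
tit-for-tat can thrive in a population without ever beating any single
opponent head-to-head, which is one computational angle on why
cooperation (and maybe "Good") can be evolutionarily stable.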
The rest is figuring out what we (or Eliezer) means exactly by
"Friendly". I hope you'll agree that this step is a crucial one
for our transhuman evolution.
Please don't be turned off by the end of this thread. No one would
want you to get pissed off and "show us all" by building a
Non-Friendly AI first just so you can say "See, I told you so",
though that scenario might make a good sci-fi novel :-)
I have searched long and hard for the answers to my questions, so you
have landed in like company. By making it to this list while still in
high school, though, you are WAY ahead of me.
Good luck, and let me know if you find anything cool I might have missed.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:35 MDT