SL1, SL2, Seed AI and Singularity

From: Joaquim Almgren Gandara (joaal98@ite.mh.se)
Date: Wed Sep 13 2000 - 08:29:17 MDT


I'm a student at the Mid-Sweden University, and I'm currently attending an
introductory course in AI. It's a very conventional course (surprise!),
and the lecturer is unfortunately not very good. However, in an attempt at
being pedagogical - or maybe she's just being lazy - she has announced
that each student chooses a subset of AI that interests him/her and gives a
20-minute presentation about said subset. Being a member of this mailing
list, what else could I possibly choose but seed AI? Luckily, she found
the theory interesting, and my choice was approved.

I held the presentation today. Since I felt that the class was at maybe
SL0 to SL2, I started off by talking about EURISKO as a kind of background
and to put things in perspective. Then, after a brief description of seed
AI and the Singularity, I gave a sort of classical AI vs seed AI
comparison. Then I asked how many students thought that seed AI is a
plausible concept. I had managed to convince about half the class.

The three brightest students actually had some things to say. One thing
that was mildly interesting was that a seed AI might decide that emotions
(if indeed it is "born" with emotions) are simply in the way of
intelligence and thus rids itself of them. Although the argument of the
student in question was that a being without emotions can't understand a
being that has emotions and thus isn't more intelligent (which is crap),
it is rather interesting to note that instincts (and therefore emotions)
are the basis of human intelligence. The following is probably old news,
but isn't it possible that this can be a problem? Can intelligence exist
without emotions? If it can, will seeds have emotions? If not, will they
"care" about humans? If not, will they just get rid of us? These are typical
SL0-2 questions, but I think they're rather relevant, and I couldn't find
an answer anywhere (tell me if I didn't look hard enough).

Another interesting question is "Where do we begin?". I got this question
from my brother, actually, who is an experienced programmer who would love
to see a seed AI reach human equivalence and beyond. It's probably a bit
early to answer that question, or at least that's what I think, and that's
what I told the class. I did tell them that a seed would probably involve
or at least build upon lots of different subsets of classical AI, such as
perhaps a neural network to emulate a retina, et cetera.

I was a bit surprised that so many accepted the idea of transhumanism and
a "Moore's Law to the power of Y" (as I called the steep intelligence
trajectory of seed AIs), but in retrospect, I'm not sure if the people
who raised their hands could actually tell the difference between EURISKO
and a seed. They probably thought that it's almost been done already,
which on the other hand may be true in a way...

Well, just thought I'd let you know.

  Joaquim Gandara <claw@lords.com>
  http://www.ite.mh.se/~joaal98/

-----

  "'Good' is the thing that you favour,
   'Evil' is your sour flavour."

  - Marilyn Manson, "Dogma"



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:35 MDT