From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Apr 22 2002 - 10:17:32 MDT
Hey, Will. No offense or anything, but I don't think you're caught up on
the SL4 background material just yet. Despite various continuing
disagreements, there are certain terms that we try to use in a precise way.
Similarly, there are constraints on which futures can be envisioned under
certain background assumptions; it isn't all just magic. ("Magic" tends to
occupy a certain balance between anthropomorphic characteristics and minor
departures; a Singularity envisioned as magic will be too anthropomorphic.)
Will Pearson wrote:
>
> Imagine your seed AIs, but with one goal: please the user.
>
> The seed AI is inside a wearable computer that has all the usual links, etc. It communicates with the user via sound and a small semi-transparent LCD display. All the tech is available today. It would also have a variety of senses that we don't have.
A "seed AI" is an AI designed for self-modification, self-understanding, and
recursive self-improvement with the intent of growth to beyond human
intelligence levels. You don't use a seed AI as a wearable computer,
although perhaps someday you might use a non-self-improving fragment split
off by a seed AI in a wearable computer, or wearable computers might be
wired into a global network that is collectively a seed AI.
There are powerful design constraints that rule out just putting together a
seed AI with some random goal system. You need to read up on Friendly AI.
> The IAS would warn you visually if something bad was about to happen that it noticed with its superintelligence.
You don't have wearable computers and superintelligence in the same world.
That's a worse anachronism than using chipped flint tools to repair a
personal computer. "Superintelligence" is conventionally (on SL4) defined
as an intelligence with at least hundreds or thousands of times the
effective brainpower of a human. If it doesn't have nanotechnology yet, it
will real soon. If you're using a wearable computer at this point it's
because you've decided to join a protected enclave instead of joining the
Singularity.
> If you still think that self-motivated (friendly or otherwise) AI will be first or better for whatever reason, think of this as a backup plan if programming the goals is too hard :).
Doesn't work as a backup plan. A computer program that conforms to the
immediate subgoals of the user, rather than independently originating
subgoals in the service of long-term goals, has insufficient planning
ability to support seed AI. If programming the goals is too hard,
programming a really simple, unworkable system won't help.
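The distinction can be made concrete with a toy sketch (the class names, the decomposition table, and the whole structure here are my own illustration, not anything from the post): one agent merely echoes the user's immediate subgoals, while the other originates its own subgoals by decomposing a standing long-term goal.

```python
# Hypothetical illustration of the planning distinction above.
# All names and the decomposition table are invented for this sketch.

class ObedientAssistant:
    """Executes whatever subgoal the user hands it; plans nothing itself."""
    def next_action(self, user_request):
        return user_request  # no decomposition, no lookahead

class PlanningAgent:
    """Originates subgoals by decomposing a standing long-term goal."""
    def __init__(self, long_term_goal, decompose):
        self.long_term_goal = long_term_goal
        self.decompose = decompose  # maps a goal to an ordered list of subgoals

    def plan(self, depth=2):
        # Repeatedly expand the goal frontier to the requested depth.
        frontier = [self.long_term_goal]
        for _ in range(depth):
            frontier = [sub for goal in frontier for sub in self.decompose(goal)]
        return frontier

# Toy decomposition table standing in for real planning machinery.
table = {
    "improve own code": ["understand own code", "rewrite bottleneck"],
    "understand own code": ["parse source"],
    "rewrite bottleneck": ["profile", "patch"],
}
agent = PlanningAgent("improve own code", lambda g: table.get(g, [g]))
print(agent.plan())  # → ['parse source', 'profile', 'patch']
```

The point of the sketch: the `ObedientAssistant` never builds a goal hierarchy at all, so there is nothing in it for recursive self-improvement to operate on; the `PlanningAgent`, however toy, at least has the subgoal-origination structure the argument says a seed AI requires.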
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence