From: Samantha Atkins (email@example.com)
Date: Mon Jun 02 2003 - 12:46:46 MDT
On Thursday 29 May 2003 08:38 am, Bill Hibbard wrote:
> On Mon, 26 May 2003, Eliezer S. Yudkowsky wrote:
> Any artifact implementing "learning" and capable of at
> least N mathematical operations per second must have "human
> happiness" as its only initial reinforcement value. Here
> "learning" means that system responses to inputs change
> over time, and "human happiness" values are produced by an
> algorithm produced by supervised learning, to recognize
> happiness in human facial expressions, voices and body
> language, as trained by human behavior experts.
This has long been considered unworkable as a standard of ethics.
The term "happiness" is ill-defined and a moving target. What of
those who exhibit "happiness" at the misfortune of others, for
instance? Clearly something more than "happiness" is needed for a
rational, benign ethical system.
> 2. How the regulation can be enforced.
> Enforcement is a hard problem. It helps that enforcement is
> not necessary indefinitely. It is only necessary until the
> singularity, at which time it becomes the worry of the
> (hopefully safe) singularity AIs. There is a spectrum of
> possible approaches of varying strictness. I'll describe
> a. A strict approach.
> Disallow all development of "learning" machines capable of
> at least N operations per second, except for a government
> safe AI project (and exempt "mundane" learning applications).
> This would be something like the Manhattan Project (only the
> government is allowed to build nuclear weapons, although
> contractors are involved).
Uh huh. You mean the same government that bombs countries on
supposition of WMD and that reserves the right to attack any country
or group it considers a possible threat preemptively? Why do I feel
like "government safe AI" is an oxymoron?
> The focus for detecting illegal projects could be on computing
> resources and on expert designers. Computing chips are widely
> available, but chip factories aren't. There is already talk of
> using the concentration of ownership of chip manufacturing to
> implant copyright protection in every chip. It's called TCPA
> and I'm against it - see my article at:
Yes. Forget about intellectual freedom and the revolutionary
possibilities of computation if we go down that road. Since you are
drawing on things you yourself would not like to see, am I to assume
that you are arguing yourself into believing your own proposal is
unworkable?
> Illegal projects could also be detected through their need for
> expert designers. As long as the police are not corrupt or lazy
> (hence the need for an aggressive public movement driving
> aggressive enforcement), they can develop and exploit informers
> among any outlaw community. It's hard to do an ambitious project
> like creating AI without a lot of people knowing something
> about it. They are vulnerable to bribes, and they get into
> feuds and turn each other in.
Wonderful. An all-powerful state, assumed to be non-corrupt, will
save us from the danger of an all-powerful AI? Hmmmm.
> Internationally, there could be treaties analogous to those
> controlling certain types of weapons. These would prohibit
> military use of learning machines capable of more than N
> operations per second, and would set up international bodies
> analogous to the IAEA for coordinating regulation and
You mean treaties like the ones the US unilaterally decided to back
out of?
> 4. The consent of the governed.
> AI and the singularity will be so much better if the public
> is informed and is in control via their elected governments.
> It is human nature for people to resist changes that are
> forced on them. If we respect humanity enough to want a safe
> singularity for them, then we should also respect them
> enough to get the public involved and consenting to what is
On the contrary, the public is largely unable or unwilling to
understand and consider the issues. The elected officials are
interested in power and are little more capable of wisely governing
such work than the people themselves. You choose to arbitrarily
assume that this is not so despite the evidence all around you.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:42 MDT