Re: Fighting UFAI

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Jul 10 2005 - 15:59:45 MDT


Phillip Huggan wrote:
> Not really concerned about AGI efforts with an active appreciation
> of the FAI concept. But AGI without safeguards, "Blue Brain"
> successors of the future, and almost all MM efforts (which create
> AGI) concern me. An out-of-control AGI, one which turns matter
> into computer substrate to solve some problem, or creates happy
> minds out of matter to optimize a perverted utilitarian
> goal-structure, seems likely to recognize humans as actors that
> could interfere with its goals. I want to know at what point along
> an AGI's evolution it will become impossible to fight. An AGI with
> access to the internet could manipulate derivatives markets,
> possibly hack into semiconductor or robotics manufacturing plants
> to effect some customized assemblies, or just Terminator-style
> find a method of gaining access to defense department
> nuclear/chem/bio weapons launch controls. Is AGI with internet the
> point of no return?

The point of no return is the enemy that is substantially smarter than you
are. Opposing an entity dumber than me, I have a real chance of winning no
matter what apparent advantages it starts out with. Against a better mind
than mine, I would not expect to win regardless of material advantages.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
