RE: How hard a Singularity?

From: Smigrodzki, Rafal
Date: Sat Jun 22 2002 - 13:59:05 MDT

Eugen Leitl:

If you're working with radioactive materials, especially fissiles, nerve
agents, pathogens, or recombinant DNA, you're subject to them. I distinctly
hope that anything involving molecular self-replication in a free
environment and ~human-level naturally intelligent systems will see heavy
regulation, at least initially.

### Yes, but how? How can you prevent hackers from hacking a human-level AI
running as their secretary on their own computers? Unless you assume full
transparency, and very (inhumanly) efficient enforcement by a world
government, you will fail - you need only one HL-AI to turn into a seed.


What I'm saying is that the rationally selfish strategy doesn't seem to favour
agents who engage in symmetric transactions with agents unable to
reciprocate. Since we're running the risk of losing empathy with the rest of
humanity as we move away via Lamarckian and Darwinian evolution, it is
imperative we make very good use of that empathy while it lasts.

### Darwinian evolution of self-enhancing agents would "go beyond the call
of mad science", as Eliezer put it. Lamarckian evolution doesn't exist. FAI
(perhaps self-limiting FAI) followed by uploading is the least dangerous of
the feasible ways to go.


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT