RE: How hard a Singularity?

From: Eugen Leitl (eugen@leitl.org)
Date: Sat Jun 22 2002 - 14:23:28 MDT


On Sat, 22 Jun 2002, Smigrodzki, Rafal wrote:

> ### Yes, but how? How can you inhibit hackers from hacking a
> human-level AI running as their secretary on their own computers?

Neither the skills nor the hardware base is there. A successful AI is a
Manhattan-scale project. It does get easier with time, but only in terms
of hardware resources.

> Unless you assume full transparency, and a very (inhumanly) efficient
> enforcement by a world government, you will fail - you need only one
> HL-AI to turn into a seed.

I'm not happy about transparency and efficient enforcement, let me tell
you. The only field requiring that right now is engineered pathogens for
warfare. Both AI and nanotechnology are decades away from dangerous
terrain.

> ### Darwinian evolution of self-enhancing agents would "go beyond the
> call of mad science", as Eliezer put it. Lamarckian evolution doesn't

I wouldn't call it mad science but business as usual, and quite inevitable
in the long run. Our task is to make sure this isn't going to be the end
of humanity in a major extinction event, that's all.

Lamarckian evolution is directed self-modification and self-engineering by
smart beings. We see the first beginnings of it in gene therapy and
implants.

> exist. FAI (perhaps self-limiting FAI) followed by uploading is the
> least dangerous of the feasible ways to go.

That's possible, but because the risks are so high, this needs complete
openness and regulation.


