From: Phillip Huggan (email@example.com)
Date: Tue Feb 07 2006 - 20:22:37 MST
The human tech level at the time of AGI creation is very relevant too. It is possible we won't have AGI until after we have achieved a mature MNT plateau of societal stability. Bringing an AGI of uncertain friendliness into such a world would be foolish. Also, after our robotics and sensor technologies mature, it is doubtful an AGI could escape detection early on... we could kick vis ass! The flip side is that an AGI before MNT might save us from a nano-dictator or WWIII.
The best benchmark estimate I can come up with is that every decade we have a 2-4% chance of rendering ourselves extinct, or of reverting to a pre-industrial tech level from which civilization wouldn't survive the next ice age. Some tech/social developments may exacerbate this risk; some may provide solutions. Decide whether or not to turn on your AGIs according to this estimate. Tallying the lives lost to diseases an AGI might cure, or tallying the value of post-singularity utilitarianism, is irrelevant and dangerous; that future will always be there for the taking. The key is for us and/or our kids to survive to and through it.
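A quick sketch of the arithmetic behind that estimate. Assuming the 2-4% per-decade figure from the paragraph above and treating each decade's risk as independent (the independence assumption is mine, not the original post's), the cumulative risk compounds like this:

```python
def survival_probability(per_decade_risk, decades):
    """Probability of surviving `decades` consecutive decades,
    given an independent per-decade risk of collapse/extinction."""
    return (1.0 - per_decade_risk) ** decades

# The post's 2-4% per-decade range, compounded over a century:
for risk in (0.02, 0.04):
    century_risk = 1.0 - survival_probability(risk, 10)
    print(f"{risk:.0%} per decade -> {century_risk:.1%} cumulative risk per century")

# Output:
# 2% per decade -> 18.3% cumulative risk per century
# 4% per decade -> 33.5% cumulative risk per century
```

So even the low end of the range implies a roughly one-in-five chance of not making it through the next hundred years, which is the scale of risk the decision about turning on an AGI would be weighed against.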
Olie L <firstname.lastname@example.org> wrote:
Factors that influence how hard the takeoff "knee" is include:
1) Computational resources
2) Other resources - particularly nanotech.
- it doesn't have to be replicators. Tunnelling electron microscope-level nanotools etc. will make it much easier for a "runaway AI" to create replicators
3) "First instance efficiency" - I know there's a better term, but I can't remember it. If the first code only just gets over the line, and is slow and clunky --> slower takeoff
4) AI goals (how much it wants to improve)
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:55 MDT