From: Thomas Buckner (tcbevolver@yahoo.com)
Date: Thu Nov 04 2004 - 05:13:40 MST
--- Marc Geddes <marc_geddes@yahoo.co.nz> wrote:
> It gets worse. The existential threats may start popping up
> earlier than I thought. I was reading some stuff from the Center
> for Responsible Nanotechnology (Mike Treder's think tank). I
> originally thought nano wouldn't be a threat until 2030 and
> beyond. But now Treder and others are pretty sure nano's gonna be
> hitting in force as early as 2020. So the risks are going to
> start going up after 2020. This date is just after the shortest
> time I think it's possible to get to FAI ;)
>
> Eliezer's best years might soon be behind him, as he said. The AI
> problems are popping up faster than he can solve them, and he
> could soon start slowing down with age. There's no funding. FAI
> theory keeps getting more complicated. Formalizing conceptual
> principles takes 15-20 years. And existential risks from nano
> could be as little as 15 years away. It's not looking good at
> all.
>
> So: only a small band of dedicated Singularitarians know what's
> going on, long odds, certain death if we fail, fate of the
> universe at stake. I love it :D
Not only that, but a majority of US voters just
opted for more war and end-times religious lunacy
(a government that actually prefers existential
risks to constructive projects!) The Apollo
program was cut short because the money was
'needed' in Vietnam. Missile-defense boondoggles
piss away more money on janitorial supplies than
your whole budget. And as for me, well, I'm two
months behind on the mortgage. This is the wrong
kind of excitement.
For what it's worth, though, I just read that a
new car racing sim had 'Bayesian AI' in it. Maybe
there are breakthroughs waiting to be made in
unexpected places.
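For the curious: "Bayesian AI" in a game context most likely just means updating probability estimates from in-game observations via Bayes' rule. A toy sketch (the scenario and every number here are mine, purely illustrative, not from any actual sim):

```python
# Illustrative only: the kind of Bayesian update a racing sim's AI
# might use to estimate whether an opponent tends to brake early.
# All probabilities below are invented for the example.

def bayes_update(prior, likelihood, evidence_prob):
    """Posterior P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob

# Prior belief that this opponent brakes early: 30%.
prior = 0.30
# P(observed slowdown | brakes early) = 0.9; P(slowdown at all) = 0.45.
posterior = bayes_update(prior, 0.9, 0.45)
print(posterior)  # 0.6 -- one slowdown doubles our confidence
```

Nothing exotic, but repeated over many observations it lets the AI driver adapt to each opponent's habits.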
Tom Buckner
This archive was generated by hypermail 2.1.5 : Tue Feb 21 2006 - 04:22:48 MST