From: Phillip Huggan (cdnprodigy@yahoo.com)
Date: Fri Aug 19 2005 - 15:06:40 MDT
"Eliezer S. Yudkowsky" <sentience@pobox.com> wrote:
Bringing order out of scientific chaos is oft-done by teams, but also oft-done
by Luke Skywalkers; such is the lesson of history. Why? One overlooked
reason, I suspect, is that once someone latches on to a piece of the problem
they have an advantage in figuring out the rest also, a first-mover effect.
But also because quite often you *do* need to fit all the knowledge into one
person's head. Some knowledges can only be properly collated using
intercortical bandwidth, not interpersonal bandwidth. In AI it's not so much
a matter of collation as knowing, for each of the necessary fields, how not to
make the mistakes which that field knows about.
Definitely, when mapping a field of study, depth should be sacrificed for breadth to avoid known and unknown pitfalls. Only once the most promising avenues have been identified should additional resources, time, and personnel be allocated to those paths, until new forks appear and the process repeats.

I think the dead-end path Richard Loosemore is on is the belief that as soon as an AGI models human minds well enough to predict human actions, it will prefer our goals to its own present goals, or to any alternate "more human" goals that would also stomp on presently living humans. The simplest way I can think of to explain why the former isn't so is that the "part" of the AGI which is modelling humans will never be completely snipped from the "rest" of the AGI. Creating an AGI which mimics human brains as its safeguard (i.e. is not an RPOP) will not succeed before the UFAI vs. FAI or responsible MM vs. Nanhattan battles set the table, unless trillions of industry/government dollars are clumsily spent on the endeavour.