From: Dani Eder (danielravennest@yahoo.com)
Date: Mon Jan 03 2005 - 10:06:35 MST
> > Evolutionary trial-and-error tournaments between
> > subsystems of itself, with winning strategies
> > globally adopted, but still under control of the
> > singleton.
[re: recursive self-improvement]
> I don't think that can happen, but if it could, it would make a
> difference to rational planning at the present time, which is why
> I'm asking whether there's a reason to believe it could.
Engineers don't design by trial and error, as a previous poster
noted. We use available science and engineering knowledge to
optimize a design to a client's requirements. In the case of an AI
trying to improve itself, I would expect it to use available
knowledge too, rather than a random search.
As an example, analysis of functional MRI scans of human brains,
where a range of people from dumb to smart are tested, could show
which parts of the brain are more active in the smarter subjects.
This could guide an AI on how to better organize its own functions.
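A rough sketch in Python of what such a correlation analysis might
look like (the subjects, scores, and region names here are made up
for illustration, not taken from any real study):

    import numpy as np

    regions = ["prefrontal", "parietal", "temporal", "occipital"]
    # activation[i, j] = mean activation of region j in subject i
    # (hypothetical data standing in for fMRI measurements)
    activation = np.random.rand(50, len(regions))
    scores = np.random.rand(50)   # hypothetical cognitive test scores

    correlations = {}
    for j, name in enumerate(regions):
        r = np.corrcoef(activation[:, j], scores)[0, 1]
        correlations[name] = r

    # Regions whose activity tracks test performance most strongly
    # come first; these would be candidates for reorganization.
    for name, r in sorted(correlations.items(), key=lambda kv: -kv[1]):
        print(f"{name}: r = {r:+.2f}")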
Another example is a survey of chess playing algorithms, numerical
analysis codes, etc. to determine what types of improvements yield
the largest performance gains. The candidates could include raw
computation power, better algorithms, special purpose hardware, and
so on.
These types of analyses can themselves be ranked
in order of likely impact on performance and the
highest ranked ones examined first.
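A minimal sketch of that ranking step, with purely illustrative
numbers for the candidate improvements (none of these figures are
measurements):

    candidates = [
        # (name, estimated speedup factor, estimated effort, both guesses)
        ("better algorithms",        10.0, 5.0),
        ("special-purpose hardware",  6.0, 8.0),
        ("more raw computation",      2.0, 1.0),
    ]

    # Rank by estimated payoff per unit effort; examine the top-ranked
    # candidate first, as described above.
    ranked = sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)
    for name, speedup, effort in ranked:
        print(f"{name}: ~{speedup}x speedup, "
              f"payoff/effort = {speedup / effort:.2f}")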
One that I would expect to rank highly, and to cause a jump in
performance, is re-implementing the algorithm in special purpose
silicon if the initial AI is implemented on general purpose computer
hardware. Currently graphics chips achieve about 6x the performance
of a CPU chip, measured in Gflops, using this technique.
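As a back-of-the-envelope check of that figure: for a compute-bound
workload, the speedup from moving to special purpose hardware is
roughly the ratio of sustained throughput. The Gflops numbers below
are illustrative placeholders, not measurements:

    cpu_gflops = 10.0   # hypothetical general-purpose CPU
    gpu_gflops = 60.0   # hypothetical graphics chip

    speedup = gpu_gflops / cpu_gflops
    print(f"Estimated speedup: {speedup:.1f}x")   # ~6x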
Daniel