From: James Higgins (jameshiggins@earthlink.net)
Date: Tue Jul 31 2001 - 23:58:06 MDT
At 02:06 AM 8/1/2001 +0200, you wrote:
> > Very true. But if we can get an AI to at least a 1.0 level, then
> > give it sufficient processing power so that it is much, much faster
> > than a human, it will progress on its own. Because it will make tiny
> > advances over time (on its time scale) which would lead to 1.1, 1.2,
> > 1.5, 2.0, etc. If 1.5 takes more processing power, it could slow
> > itself down some but improve the quality of its thought.
>
> I don't quite understand this. Why place the threshold at 1.0? What
> evidence is there indicating that the average human (plus AI
> advantages such as codic cortex) is smart enough to progress to
> higher levels of intelligence? Why not someone half as smart (0.5)
> or 1.5 or 7.0 or any other arbitrary number? I think you're being
> anthropocentric.
Pick any number you like, but others on this list have argued, quite
convincingly, that it would at least have to be intelligent enough to
understand what it was doing. It is very unlikely that something with half
the intelligence of an average human could comprehend AI software. And, as
far as I've heard, no one here is building a "Codic Cortex" into their
software; I believe that is something expected to develop eventually. I
think you're being picky.
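
For what it's worth, the quoted compounding claim can be made concrete
with a toy Python sketch. Everything below is an arbitrary assumption
for illustration (the 1000x speedup, the 0.1% gain per subjective year,
and the helper name level_after), not anything proposed in the thread:

    # Toy model of the compounding argument quoted above. All numbers
    # (speedup, gain per insight) are illustrative assumptions, not
    # figures from this thread.

    def level_after(speedup, gain_per_insight, human_years):
        """Intelligence level reached, starting at 1.0 (average human),
        assuming one tiny multiplicative insight per subjective year."""
        level = 1.0
        subjective_years = int(speedup * human_years)
        for _ in range(subjective_years):
            level *= 1.0 + gain_per_insight
        return level

    # e.g. 1000x human speed and a 0.1% gain per subjective year:
    for years in (1, 2, 5):
        print(years, "human year(s) ->",
              round(level_after(1000, 0.001, years), 1))

The point is just that the loop, not the starting level, does the work:
any small positive gain per step compounds once the machine gets enough
subjective time.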