From: James Higgins (firstname.lastname@example.org)
Date: Sun Jun 23 2002 - 13:50:54 MDT
At 12:18 PM 6/23/2002 -0600, you wrote:
>As Eli has correctly pointed out in some of his past writings, a human-level
>AGI may well be a *lot* more amenable to self-modification than an uploaded
>human. A human-level Novamente certainly would be.
>The big difference is that, being an engineered rather than evolved system,
>an AGI is likely to have a more elegant and comprehensible (though still
>somewhat messy and difficult, of course) design than an uploaded human.
>This doesn't make self-modifying AI a panacea, but, it suggests that an AGI
>*may* have an easier time usefully self-modifying than an uploaded human...
>-- ben g
Conceded. I didn't actually think the AI would have as difficult a time as
an uploaded human, but it would still have a very difficult time. Given a
positive, exponential feedback loop, even a small initial advantage could
make a major long-term difference. But in the short term it may still take
either system years to progress substantially beyond human-level
intelligence. My point was that the problem should not be trivialized (as
I believe some on this list are doing).
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT