From: Joaquim Almgren Gândara (firstname.lastname@example.org)
Date: Wed Aug 01 2001 - 06:02:00 MDT
> Pick any number you like, but others on this list have
> argued, quite convincingly, that it would at least have
> to be intelligent enough to understand what it was
> doing. It is very unlikely that something with half
> the intelligence of an average human could comprehend
> AI software. And, so far as I've heard, no one on
> here is building a "Codic Cortex" into their software.
> I believe that is something that is expected to develop
> eventually. I think your being picky.
That's "~you're~ being picky". ;)
Seriously speaking, you didn't address the other possibility. What if
it needs to be seven times as smart as a human in order to improve its
own code? Let's assume that there is no codic cortex. Let us also
assume that Ben or Eli manages to create a human-level AI. What if it
looks at its own code, just goes "Oh, wow, you've done some really cool
stuff here" and then ~can't~ improve the code? If it takes two or more
~intelligent~ people to create an AI equivalent to the ~average~
human, who's to say that the AI can create a ~trans-human~ AI? Isn't
that a leap of faith?
> I'm willing to bet that, given enough time, Ben
> could keep making improvements, though as
> time goes on it will be harder. Of course, for
> an AI this isn't a problem since it's dynamic
As time goes on, it will be harder, I agree. So hard, in fact, that it
might prove to be impossible for humans to create an intelligence
seven times smarter than themselves. Of course, if we can get the
positive feedback loop started, there's no telling how intelligent an
AI can get. But how do we start it if the AI takes one look at its own
code and just gives up?
I realise that if I'm right, humanity is doomed, which is why I want
someone to very clearly state why I'm wrong.
- Joaquim Gândara
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT