Re: AI boxing

From: Tennessee Leeuwenburg (hamptonite@gmail.com)
Date: Wed Jul 20 2005 - 22:31:40 MDT


I am increasingly of the opinion that humans are unqualified to
recognise or properly analyse AGI. I think we need an intermediate
step, such that we can improve our guesses.

So: what is the _most_ intelligent intelligence which we, as humans,
could have certainty about?

Could we bootstrap our way to certainty about AGI through a chain of
intelligent beings, each slightly smarter than the last? Maybe, maybe
not. But we could get some distance down such a chain, which would
give us our best estimate.

Better than we could do on our own.

Cheers,
-T

On 7/21/05, Ben Goertzel <ben@goertzel.org> wrote:
>
>
> > It is true that what we believe is a box may not be a box under
> > magic, if there exists some magic, but you'll have to give a
> > better argument for the existence of this magic than an appeal
> > to ignorance.
> >
> > Daniel
>
> How about the argument that every supposedly final and correct theory of
> physics we humans have come up with has turned out to be drastically
> wrong...
>
> We now know that not every classical-physics box is really a totally solid
> box due to quantum tunnelling -- something that pre-quantum-era physicists
> would have found basically unthinkable.
>
> How can you assess the probability that a superhuman AI will develop a novel
> theory of unified physics (that no human would ever be smart enough to hit
> upon) and figure out how to teleport out of its box?
>
> How do you know we're not like a bunch of dogs who have never seen or
> imagined machine guns, and are convinced there is no way in hell a single
> human is going to outfight 20 dogs... so they attack an armed man with
> absolute confidence...
>
> IMO the appeal to ignorance about physics is rather convincing.
>
> The probability that superhuman AI, if supplied with knowledge of physics
> theory and data, would come up with radically superior physics theories is
> pretty high. So it would seem we'd be wise not to teach our AI-in-the-box
> too much physics. Let it read postmodern philosophy instead, then it'll
> just confuse itself eternally and will lose all DESIRE to get out of the box
> ... appreciating instead the profound existential beauty of being boxed-in
> ;-)
>
>
> -- Ben G
>
>
>

-- 
----------------
melbournephilosopher.blogspot.com


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT