Re: JOIN: Alden Streeter

From: Gordon Worley (redbird@rbisland.cx)
Date: Fri Aug 23 2002 - 14:49:14 MDT


On Friday, August 23, 2002, at 02:53 PM, Ben Goertzel wrote:

> But yet how can I know this? How can I know how rational is MY OWN
> belief that real AI is essentially just around the corner (i.e.,
> perhaps 5-10 years away to a baby AI with its own general
> intelligence, making its own meanings & learning about the world).

You can know that your own thoughts are rational up to a point. Most
FoRs (failures of rationality, with apologies to Eliezer) can be
detected if you watch what you're thinking.

For example, today I was tempted to argue with some Randians who had a
little table with information set up. Then, however, I considered why I
would do this. The first answer my mind produced was "to inform them
that they are wrong and point them in the right direction". Looking a
little
deeper, though, I could see that this wasn't actually my reason. My
reason was that they are `them', I'm `us', and it's time to kick the
other tribe's ass. Giving the Randians some more information was just a
rationalized goal, there to convince me that I would still be doing
something moral. I overcame my brain and didn't bother with them.

[Hey, I just noticed that if you drop the `n' from `Randians', I like
them a lot more. ;-)]

In other words, you should look at why it is you think AI is so close.
Is it because you see it as the best way of extending your life?

To be fair, though, you aren't likely to catch every FoR. Some of them
are very deeply rooted and take a lot of training to recognize and root
out. I know I don't catch them all (some I recognize but am not yet
able to prevent without massive effort).

On the actual topic of this message, real AI does seem to be near.
There are several very smart people working very hard on it and even
more helping to generate funding. Furthermore, I and others estimate
that if GIAI is possible, we'll be able to brute force it within 20
years or so. There is only a problem if you think AI is coming not
because it will be technologically possible but because it will save
humanity or some other such thing (I hope that we will be able to use AI
to save humanity and that's a good reason to work on it, but that's not
a reason why it will happen).

--
Gordon Worley                     `When I use a word,' Humpty Dumpty
http://www.rbisland.cx/            said, `it means just what I choose
redbird@rbisland.cx                it to mean--neither more nor less.'
PGP:  0xBBD3B003                                  --Lewis Carroll

