From: David Clark (firstname.lastname@example.org)
Date: Mon Jan 24 2005 - 16:17:32 MST
I said I wouldn't say more but I will just add this clarification.
----- Original Message -----
From: "Brian Atkins" <email@example.com>
Sent: Monday, January 24, 2005 10:39 AM
Subject: Re: When does it pay to play (lottery)?
> Disregarding issues I still have with your analogies, I think you are
> sidestepping the original statement from David, which was regarding
> *any* AGI implementations in the *near future*. I interpret that to
> include things like Novamente within the remainder of this decade, but
> perhaps that's not what he meant?
Near future to me is 2-3 years, not 10. I also believe that advanced, if not
super-human, intelligence is doable within the next 15-20 years. I would be
very surprised if Ben's project isn't eventually successful, but I don't think
it will look like what he has today. Just a guess; I certainly could be
wrong. The reason I made such a statement is that no one can claim that they
are close to human-level intelligence or that anything like that is imminent.
That would mean to me that no *take off* could occur for at least the next
few years. Looking further into the future is only guesswork and isn't
based on what exists today.
Some people believe that coming close to human-level intelligence and then
AI going super-intelligent will both occur within a short time frame (1-2 years).
Even if we are capable of creating something like human intelligence, I
think our limited brain capabilities will limit how fast the AI goes super-
intelligent *unless* it does that by itself. I have great skepticism about
any program's ability to recursively enhance its intelligence or
computational efficiency past a small percentage like 30%. Humans certainly
have no ability to increase their intelligence, or efficiency for that matter,
without the help of other humans. If you have evidence that shows that this
recursive brain boosting can occur, please reference it so I can study it. I
am aware of a company that says its optimizing compiler can optimize some
brands of programs, but I think you will see that the increased speed is *one
time only* and the percentage increase is only significant on the worst-coded
programs.
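[Editor's note: the distinction drawn above between a one-time gain and a truly recursive one can be sketched numerically. This is a hypothetical illustration added for clarity, not anything from the original thread; the 30% figure and the number of rounds are taken from the email's own example and an arbitrary choice, respectively.]

```python
# One-time optimization vs. hypothetical recursive self-improvement.
# A 30% gain applied once plateaus; the same gain applied by each
# improved version to itself would compound geometrically.

def one_time(speed, gain=0.30):
    """A single optimization pass: speed improves once, then plateaus."""
    return speed * (1 + gain)

def recursive(speed, gain=0.30, rounds=10):
    """If each improved version could re-apply the same gain to itself,
    the speedup would compound instead of plateauing."""
    for _ in range(rounds):
        speed *= (1 + gain)
    return speed

print(one_time(1.0))   # the "one time only" case: 1.3x
print(recursive(1.0))  # ~13.8x after 10 compounding rounds
```

The point of contention is whether the compounding case is physically realizable; the email argues that observed speedups look like the first function, not the second.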
> Here's the quote again:
> "The chance that *any* implementation of AI will
> *take off* in the near future is absolutely zero. We haven't the foggiest
> clue exactly what it will take to make a human level AI, let alone an AI
> capable of doing serious harm to the human race."
> That combined with his casual "Oh and even if it does take off it will
> be constrained by current day hardware" kind of set me off. These aren't
> the kinds of attitudes that you would expect or hope to hear from people
> doing this kind of work.
Your condescending attitude definitely *set me off*. If you have all this
proof and evidence that AI is so scary or so imminent, please share it.
-- David Clark
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT