From: Jef Allbright (jef@jefallbright.net)
Date: Thu Aug 10 2006 - 20:26:31 MDT
On 8/10/06, Michael Anissimov <michaelanissimov@gmail.com> wrote:
<skipping several paragraphs that could be picked for nits, but to
little effect.>
> We aren't grabbing towards a higher intelligence to solve our problems
> because we want our daddies. We are phrasing these discussions in
> terms of superintelligence because we see the abrupt emergence of
> superintelligence as *inevitable*. Again, if you don't, this whole
> discussion is close to pointless. Sure, if you don't believe that
> superintelligence is around the corner, you will put everything
> towards working with what we have today - humans.
Truly, this is the crux of our disagreement. I see the benefits of
working with what we have now as outweighing what I see as a very
small probability that a galactic paper-clip-generating AI is right
around the corner.
>
> > It's time for humanity to grow up and begin taking full responsibility
> > for ourselves and our way forward. We can and will do that when we
> > are ready to implement a framework for #1 and #2 above. It will
> > likely begin as a platform for social decision-making optimizing
> > objective outcomes based on subjective values in the entertainment
> > domain, and then when people begin to recognize its effectiveness, it
> > may extend to address what we currently think of as political issues.
>
> We can do all this, and then someone builds a superintelligence that
> maximizes for paperclips, and we all die. Looks like we should've
> been working on goal content for that first seed AI, huh?
I like that Eliezer is exploring this area, but I'm far from agreeing
that this is the overwhelming priority that it's portrayed to be by a
certain elite set of people.
> > You were right to refer to a superintelligence, but that
> > superintelligence will not be one separate from humanity. It will be
> > a superintelligence made up of humanity.
>
> Yes, Global Brain, metaman, etc. This is all well and good, but a
> community of chimps networked together with the best computing
> technology and decision systems does not make a human. We are talking
> about building something godlike, so it doesn't make sense to refer to
> it in the same way we refer to humans, any more than it makes sense to
> talk about chimps in the same way we talk about humans.
See my response to Eliezer for why I think this is naive due to its
excessively narrow focus.
> Singularitarians believe in the technological feasibility of something
> - building a recursively self-improving optimization process that
> starts from a buildable software program. Our arguments for the speed
> and intensity of the self-improvement process come from cognitive
> science and comparisons of the relative advantages of humans and AIs.
> (http://www.acceleratingfuture.com/articles/relativeadvantages.htm)
> You obviously don't believe that the self-improvement process in AIs
> will play out at this speed, otherwise we would be on the same page.
Correct.
> We are talking superior intelligence capable of developing
> nanotechnology and whatever comes after that, with the resources to
> rip apart this planet in seconds, minutes, hours, whatever.
I'm familiar with the arguments for planet-destroying
"superintelligence", and I grant that it's conceivable in a narrow
sense, but do you see the inherent contradiction in the concept of an
"intelligence" with such a silly goal?
Well, I'm not trying to win an argument here, so I'll let it rest, but
I do occasionally wish to contribute what I see as a worthwhile view
even if it doesn't support this particular version of Pascal's Wager.
- Jef