From: Tyler Emerson (emerson@intelligence.org)
Date: Mon Oct 17 2005 - 13:34:18 MDT
> Emerson: "Let's be clear about this: [...]"
Unless otherwise noted, our publications are copyrighted by the Singularity
Institute rather than by an individual; but for the record, most of the
currently published material was originally written by Yudkowsky.
~~
Tyler Emerson | Executive Director | The Singularity Institute
Box 50182 | Palo Alto, CA 94303 | T-F: 866.667.2524
emerson@intelligence.org | http://www.singinst.org
> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Woody Long
> Sent: Monday, October 17, 2005 11:59 AM
> To: sl4@sl4.org; sl4@sl4.org
> Subject: [inbox] Re: AI debate at San Jose State U.
>
> > [Original Message]
> > From: Chris Capel <pdf23ds@gmail.com>
>
> > To be clear, these are your comments and not a quote? You want to
> > discuss this with the list?
>
> Yes, my comments. And yes, those of us who are researchers, inventors,
> programmers, etc. in the field of strong artificial intelligence (SAI) need
> to prepare the public for the already inevitable coming of SAI. This list
> is a great place for us to work things out.
>
> > > [Long quoted] 1. "Humanoid intelligence requires humanoid interactions
> > > with the world" -- MIT Cog Project website
> >
> > Granted, but SL4 isn't really interested in humanoid intelligence. The
> > position of the SIAI and many on this list, if I may speak for them,
> > is that strictly humanoid intelligence would not likely be
> > Friendly--it would be terribly dangerous under recursive
> > self-modification, and likely lead to an existential catastrophe.
> > Friendly AI is probably not going to end up being anything close to
> > "humanoid".
>
> Some time spent reading the writings of Tyler Emerson of SIAI and Kurzweil
> led me to conclude that we are on the EXACT same page --
>
> Kurzweil: "So what are the prospects for "strong" AI, which I describe as
> machine intelligence with the full range of human intelligence? We can meet
> the hardware requirements."
> http://www.forbes.com/home/free_forbes/2005/0815/030.html
>
> This is exactly how I defined strong artificial intelligence (SAI). SAI is
> a fully human intelligent system. Such humanoid intelligence "requires
> humanoid interactions with the world" (MIT Cog project). Therefore, to be
> a fully human intelligent system (SAI), it must include robotics. Anything
> less might be specialized heuristic intelligence, which is fine, but it
> is not SAI. It would be the Thinker, but not the Engineer. Major coming
> applications for SAI are household SAI (evolved Japanese humanoids),
> infrastructure SAI, engineering SAI, industrial SAI, medical SAI, nursing
> home SAI, space mission SAI (building cities on the Moon and Mars) and
> entertainment SAI, all robotic. Research SAI will exist, but why would it
> be half-built? The great goal of the field of SAI is the "complete mind."
>
> Emerson: "Let's be clear about this: When the Singularity Institute says
> that it intends to develop AI, we mean real AI, in the full, intuitive
> sense of the word. This is, obviously, a long-term project, and there will
> be interim prehuman proto-minds that do interesting things but are not
> 'human-equivalent.' But the proposed project is not a project to design an
> interesting proto-mind, with real AI coming at some point in the indefinite
> future; it is a specific proposal for building a 'genuine and complete
> mind, recognizable as a complete mind' to anyone who takes a few minutes to
> chat, and not just philosophers who believe in a particular theory of mind.
> ...It's a tough test - and there's no good reason to weaken it. We're
> building AI with the intention of changing the world; if the world hasn't
> changed, then we must not have finished."
>
> Also note that the picture on the Singularity Institute website is a robot
> hand shaking a human hand.
>
> Another example from Kurzweil: "The killer app of strong AI, combined with
> nanotechnology, will be blood-cell-size 'robots' called nanobots. We'll
> have billions of them traveling in our bloodstream, communicating with one
> another on a wireless local area network and transmitting information and
> software to and from the Internet. They'll keep us healthy by destroying
> pathogens and cancer cells, removing debris, correcting DNA errors and
> otherwise reversing disease and aging processes."
> http://www.forbes.com/home/free_forbes/2005/0815/030.html
>
> To do this, the SAI must be able to work all sensors and actuators provided
> to it. It must be the fully human intelligent, "complete mind,"
> thinker-engineer, able to think and create in any environment in which it
> is placed. This is the precise mission of the field of SAI, and as Emerson
> says, it MUST not be weakened.
>
> Once created, it self-modifies and trains (attends college), until one day
> even IT realizes it has become vastly more intelligent than the human life
> around it; it has become ... the technological Singularity. With the
> progress Sony and others are making towards SAI, I predict that in 50 years
> the Singularity will occur, and be truly future-shock amazing in the hugely
> beneficial theorizing and engineering skills it will have. We should begin
> preparing the public immediately. And this leads back to the discussion of
> the possibility of "safe-built" SAI. To do this, military SAI must be
> separated out from consumer SAI, as described above. Consumer SAI Safety
> Protocols are being diligently worked out by the Japanese humanoid makers,
> using what in Japan they call the principle of harmony. This is a huge
> cash cow for them, and they will in no way endanger it by having their
> soon-to-be SAI humanoids destroying property or killing their customers'
> pets. So by force of the profit motive alone, corporation-built consumer
> SAI will be absolutely safe-built.
>
> Ken Woody Long
> Inventor, CEO
> Artificial Lifeforms Lab
> www.artificial-lifeforms-lab.blogspot.com
>
>
>