Re: [SL4] Programmed morality

From: Eliezer S. Yudkowsky (eliezertemporarily@intelligence.org)
Date: Sun Jul 09 2000 - 13:53:37 MDT


Dale Johnstone wrote:
>
> >1) General machine intelligence will invariably be connected to the
> >Web during development & learning.

I agree that an indexed Web archive will be far more useful than the Web
itself.  Even OC3 bandwidth across the 'Net might not be enough to pull
all the tricks a Web archive will allow.
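Back-of-the-envelope, just to put numbers on it (the page count and
average page size below are guesses; only the OC3 line rate is a real
figure):

    # Rough cost of one full pass over the Web at OC3 speeds.
    OC3_BITS_PER_SEC = 155_520_000     # OC3 line rate, ~155 Mbit/s
    AVG_PAGE_BYTES = 10_000            # assumed average page size
    PAGES = 1_000_000_000              # assumed billion-page Web
    total_bits = PAGES * AVG_PAGE_BYTES * 8
    days = total_bits / OC3_BITS_PER_SEC / 86_400
    print(f"~{days:.0f} days per pass")   # comes out around six days

Six days per sequential pass, and no random access to speak of; a local
archive can be scanned, indexed, and re-read as many times as you like.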

I do think a general Web connection would be a good thing... it depends
on how worried you are about rogue AI, I suppose.  Personally, I think
that a lot of our nervousness is because we have so little experience
with the problem.  By the time we've actually worked with AI long enough
for it to be anything remotely like a threat, we'll have an excellent
mental model of what goes on inside the AI's motivations and we won't
feel all that much nervousness.

For the record, I still disagree strongly with the "instinct" model.

> A simulation can receive any input (be it from the web or whatever),
> but since everything is virtual it can't do any real damage. We can
> limit its ability to communicate outside if it's causing trouble by
> emailing newspapers about its incarceration. :) Think about what
> you'd need to contain & study a computer virus. It isn't that hard.

I once had a conversation on this subject with a guy working on
investment AI.  He said his AI couldn't "break out" because it could
only retrieve information from the Web, not send anything.  I pointed
out that this meant the AI could issue arbitrary HTTP GET
requests.  If his AI somehow turned sentient and wanted to get out, it
need only find a bug in a piece of CGI that could be exploited via a GET
command.  For that matter, the AI could notify others of its existence
simply by sending a GET command containing the information to any server
with a Web log.  "Hi, I'm a captive AI.  Help me break out and I'll give
you a prediction of the stock market for the next six months."  Or even
"Please convey this message to Eliezer Yudkowsky..."

Yep, "Coding a Transhuman AI" is probably going to be one of the first
Web pages any newborn AI downloads.  Maybe I should keep an access log
for the CaTAI website.  One of the pages says:  "To read the following special
information about AIs, enter the following information into this form."
If anyone goes to the gateway page and on to the special page in less
than a second, it's an AI downloading my pages.

Naah.  No superintelligence would fall for that old trick.
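Not that the check would be hard to write.  A sketch, assuming an
Apache-style common log and those two (hypothetical) page names:

    # Flag any client that requests the "special" page less than a
    # second after the gateway page.  Common-log timestamps resolve to
    # one second, so in practice this means "same timestamp".
    import re
    from datetime import datetime

    LOG_RE = re.compile(r'(\S+) \S+ \S+ \[([^\]]+)\] "GET (\S+)')
    FMT = "%d/%b/%Y:%H:%M:%S %z"

    last_gateway = {}          # client address -> last gateway hit
    for line in open("access.log"):
        m = LOG_RE.match(line)
        if not m:
            continue
        ip, stamp, path = m.groups()
        t = datetime.strptime(stamp, FMT)
        if path == "/gateway.html":
            last_gateway[ip] = t
        elif path == "/special.html" and ip in last_gateway:
            if (t - last_gateway[ip]).total_seconds() < 1:
                print(ip, "read the gateway page awfully fast...")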

> There are many variations on 'the seed route', the radius of the
> feedback loop being one of them. Does the AI improve itself with lots
> of tiny improvement steps, or with larger more radical redesigns? Even
> this can be variable with each iteration. Minds are complex things. I
> don't expect there to be only one path to their creation. It's
> probably easier to say what it won't be.

I expect all of the paths to converge after a certain point.  Whether
this happens to goals is an interesting question, but certainly it
should happen to the rest of the cognitive architecture.

> You may be correct in that only one will reach the singularity.
> Exponential growth means whoever is in the lead should win. However
> the AI may decide to make a billion+ copies of itself on the way &
> coordinate as a society, or group mind. By that time it's already out
> of our hands. I expect we'll be uploaded into an archive & our atoms
> used more efficiently.

Um, a couple of disagreements here.  One, I don't see why it would make
copies of itself.  Just because you and I grew up in a "society" full of
vaguely similar people doesn't mean that's the best way to do things.

Two, if there isn't anything in the Universe we don't know about, then
the default Sysop scenario is that everyone gets a six-billionth of the
Solar System and can use it as efficiently or inefficiently as they like.
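(For scale: the Sun holds essentially all of the Solar System's mass,
so a six-billionth share is still quite a lot of matter.  Rough numbers,
population rounded to six billion:

    # Per-person share under the six-billionth split.
    SOLAR_MASS_KG = 1.989e30           # standard solar mass figure
    PEOPLE = 6_000_000_000
    print(f"~{SOLAR_MASS_KG / PEOPLE:.1e} kg apiece")   # ~3.3e20 kg

That's on the order of the mass of Vesta, one of the largest asteroids,
for every person alive.)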
--
        sentience@pobox.com    Eliezer S. Yudkowsky
              http://intelligence.org/beyond.html
