From: Charles D Hixson (charleshixsn@earthlink.net)
Date: Fri Feb 03 2006 - 17:35:40 MST
On Friday 03 February 2006 08:12 pm, Jeff Herrlich wrote:
> You're assuming an observer-centric goal system (and no, that still
> wouldn't help us - why would it?).
>
> Hi Peter,
>
> If there is only one non-friendly AGI that values its own life
> (goal-satisfaction) above all others, we will all certainly be killed once
When you say this, you are making presumptions about what its goals will be.
The most likely scenario has it created in order to fulfill some need
specified by a human agency. In such a scenario it may well have no goals
beyond that. Now, if the goal were something silly, like "Find the perfect
chess game," it would indeed need an unlimited amount of computronium to do
so. OTOH, if it were "Determine a way to harmonize national economies for at
least two centuries," this could probably be satisfied most easily through
experiment... which would require that nations exist for those two centuries.
(It's still a stupid goal, but a much better one.) Note that we might not
LIKE the way it chose to harmonize national economies, but we would be
required, by the terms of the problem, to survive, and to survive in such a
state that, if its first attempt failed, it could try again. Also note that
at the successful conclusion of its experiments it would be left without any
further goal.
I suppose it could be argued that an AI with a single specific goal isn't a
general-purpose AI, but in that case I would argue that there isn't an
existence proof that such a thing is possible. (Well, or any other kind of
proof. Most goal sets can, with work, be phrased as a single goal.)
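(As an aside, here is one minimal sketch of that "phrased as a single goal"
idea, assuming each sub-goal can be scored numerically and weighted; the
function names and weights are purely hypothetical, for illustration only:)

    # Hypothetical sketch: several scored sub-goals collapsed into one
    # aggregate goal by weighting and summing their scores.
    def aggregate_goal(world_state, sub_goals, weights):
        """Return one number summarizing how well all sub-goals are met."""
        return sum(w * g(world_state) for g, w in zip(sub_goals, weights))

    # The AI's single goal is then simply: maximize
    # aggregate_goal(state, sub_goals, weights) over its possible actions.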
There is a strong tendency for people to presume that the first AI created
will have a goal set that leads to universal domination forever. This isn't
necessarily true; that's just the way our own goal sets tend to be
constructed. (We seem to be designed to extend our genetic code indefinitely
into the future... but the design didn't foresee [anything, actually, but,
e.g.] our developing a large, semantically and culturally programmed brain,
which often subverts that goal.) AIs will be even more so: they will be the
semantically and culturally programmed brain without the underlying instincts
to grab all resources. They'll only "grab all resources" when their goal set
demands that as part of the solution.
> it acquires the means to do so. If multiple, comparably powerful AGIs are
> created (using the original human-coded software), they will each value
> their own survival above all others. Under these circumstances, it may be less
> likely that one AGI would attack another AGI. By virtue of this, it may be
> less likely that an AGI would attempt to exterminate humanity simply
> because humanity might still serve as a valuable resource, at least for a
> while. Or, it may decide to restructure its own goal system in a way that
> did not include human extermination. I didn't say it would be pretty, I
> only said this would improve the chances of (at least some) humans
> surviving, in one form or another (uploads?)
>
> Jeff
>
>
> Peter de Blanc <peter.deblanc@verizon.net> wrote:
>
> On Fri, 2006-02-03 at 08:42 -0800, Jeff Herrlich wrote:
> > As a fallback strategy, the first *apparently* friendly AGI
> > should be duplicated as quickly as possible. Although the first AGI
> > may appear friendly or benign, it may not actually be so (obviously),
> > and may be patiently waiting until adequate power and control have
> > been acquired. If it is not friendly and is concerned only with its
> > own survival, the existence of other comparably powerful AGIs could
> > somewhat alter the strategic field in favor of the survival of at
> > least some humans.
>
> You're assuming an observer-centric goal system (and no, that still
> wouldn't help us - why would it?).
>
>
>