Re: Maximize the renormalized human utility function!

From: Jef Allbright (jef@jefallbright.net)
Date: Thu Aug 10 2006 - 19:13:42 MDT


On 8/10/06, Eliezer S. Yudkowsky <sentience@pobox.com> wrote:
> Jef Allbright wrote:
> >
> > Your statement "some we might want a superintelligence to maximize..."
> > obscures the problem of promoting human values with the presumption
> > that "a superintelligence" is a necessary part of the solution. It
> > would be clearer and more conducive to an accurate description of the
> > problem to say "some values we may wish to maximize, others to
> > satisfice."
>
> Anissimov's phrasing is correct. The set of values we wish an external
> superintelligence to satisfice or maximize is presumably a subset of the
> values we wish to satisfice or maximize through our own efforts. What
> you would-want your extrapolated volition to do *for* you, if you knew
> more and thought faster, is not necessarily what you would-want to do
> for yourself if you knew more and thought faster.
>

I agree with your statement within its intended context, but it misses
my point about the presumption of such a (dominant) superintelligence
as essential to *the* solution to the problem at hand. Presently such
a superintelligence does not exist, nor is there any tangible plan or
timeframe for one. There seems to be a general lack of appreciation
for the element of time needed to extract regularities from the
environment, and a lack of appreciation for just how much of perceived
local intelligence is actually a product of eons of synergetic
development.

You have not responded to various questions as to the expected limits
to growth of an intelligence effectively isolated from the world. You
have been talking for many years about how a self-improving AI can be
expected to go FOOM in a matter of minutes to hours, effectively
isolated from dynamically learning from the larger world due to
incompatible time-scales. If you still hold this view, that the first
such AI will immediately become the dominant AI, then how do you hope
to influence its moral direction (apart from the obvious strategy of
going to work for the likely creator), given that it will likely be
created by a military organization, if not by big business, owing to
their vastly larger resources? Mightn't it be reasonable in such a scenario
to exert influence by beginning as early as possible to promote the
amplification of human morality?

On the other hand, if you expect proliferation of a diverse range of
AI and IA technology leading up to the big FOOM, then again, mightn't
it be reasonable to exploit this growth of intelligence toward
development of a framework for increasingly moral social
decision-making?

Have you seriously considered that the answer to the "hard problem" of
Friendly AI may be the same as the answer to the so-called "hard
problem" of consciousness? Mu.

- Jef
