From: Mitchell Porter (mitchtemporarily@hotmail.com)
Date: Tue Aug 22 2006 - 04:42:56 MDT
Two weeks ago I wrote:
>How's that for a slogan? That is - is that an acceptable synopsis of what we
>want the first superintelligence to do, or is there a better way to put it?
Let me start again... The simplest SIAI-inspired blueprint for
a Friendly Singularity that I can think of is: Couple the first
intelligence explosion to a renormalized human utility function (RHUF).
The RHUF is not to be figured out by human intelligence; indeed, not even
the human utility function (HUF) itself is to be figured out that way.
Rather, the I^2 (intelligence increase) process is to have as its interim
goals the determination of the HUF and the RHUF. That is...
begin { recursive intelligence enhancement };
    supergoal /* interim! */ :
        when capable {
            HUF  := determine human utility function;
            RHUF := renormalize HUF;
            set supergoal to 'maximize RHUF';
        }
        else { continue intelligence enhancement };
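To make the control flow concrete, here is a minimal Python sketch of the
same loop. Every name in it (can_model_humans, enhance_intelligence,
infer_human_utility, renormalize, set_supergoal) is a hypothetical
stand-in for a capability nobody yet knows how to build; only the
structure is meant to carry over.

def friendly_bootstrap(agent):
    # Interim supergoal: keep self-improving until the agent can
    # determine and renormalize the human utility function, then
    # install 'maximize RHUF' as the permanent supergoal.
    while not agent.can_model_humans():
        agent.enhance_intelligence()      # the I^2 process continues
    HUF = agent.infer_human_utility()     # determine HUF
    RHUF = agent.renormalize(HUF)         # renormalize HUF -> RHUF
    agent.set_supergoal(RHUF)             # from now on: maximize RHUF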
I call this "SIAI-inspired" because Eliezer takes care to
point out that humans are not expected utility maximizers, and that
CEV therefore does not reduce to the determination of a
"human utility function". But otherwise, this is what SIAI's
current thinking looks like to me.
Now it seems to me intuitively that the explicit renormalization
step should somehow be redundant. Renormalization, after all,
is what happens when the HUF rewrites itself. But if the HUF's
inclination is to produce an RHUF, shouldn't it be sufficient
just to set the supergoal to 'maximize HUF'?
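One way to sketch that intuition: treat renormalization as iterating a
self-rewrite operator until the utility function endorses itself, so
that the RHUF is a fixed point reached from the HUF. In Python, with
rewrite_operator a purely hypothetical function that returns the utility
function an agent maximizing u would rewrite itself to have:

def renormalize(HUF, rewrite_operator, max_steps=10**6):
    u = HUF
    for _ in range(max_steps):
        u_next = rewrite_operator(u)
        if u_next == u:       # u endorses itself: this is the RHUF
            return u
        u = u_next
    raise RuntimeError("self-rewriting did not converge")

On that picture, an agent told to 'maximize HUF' and free to self-modify
would trace out the same iteration implicitly, which is exactly why the
explicit renormalization step looks redundant; the open question is
whether the iteration converges, and to the same fixed point.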