From: Jef Allbright (jef@jefallbright.net)
Date: Tue Jun 13 2006 - 11:57:11 MDT
On 6/13/06, Mikko Särelä <msarela@cc.hut.fi> wrote:
> On Tue, 13 Jun 2006, Jef Allbright wrote:
> > On 6/12/06, Eliezer S. Yudkowsky <sentience@pobox.com> wrote:
> > > (4) I'm not sure whether AIs of different motives would be willing to
> > > cooperate, even among the very rare Friendly AIs. If it is *possible*
> > > to proceed strictly by internal self-improvement, there is a
> > > *tremendous* expected utility bonus to doing so, if it avoids having
> > > to share power later.
> >
> > Eliezer, most would agree that there are huge efficiencies to be gained
> > over the evolved biological substrate, but I continue to have a problem
> > with your idea that a process can recursively self-improve in isolation.
> > Doesn't your recent emphasis on perception being the perception of
> > difference (which I strongly agree with) highlight the contradiction and
> > the enormity of the "if" in "if it is *possible* to proceed strictly by
> > internal self-improvement"?
>
> Internal workings of a system are also part of the perceived reality. One
> can test out another algorithm for indexing data and notice that it works
> better. Completely internally. And still perceive the difference. Or one
> could prove that a certain algorithm for searching data is more efficient
> than another. And self-improve. The software and hardware are part of
> reality.
>
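As a rough sketch of the kind of purely internal comparison described above, consider a system timing two of its own search routines on the same data and adopting the faster one. The particular algorithms, data sizes, and timing harness below are illustrative assumptions only, not anything specified in the thread:

    # Internal self-benchmark: compare two search algorithms on the same
    # data and "perceive the difference" without any external input.
    import bisect
    import random
    import timeit

    def linear_search(data, target):
        # O(n) scan; the "current" algorithm.
        for i, value in enumerate(data):
            if value == target:
                return i
        return -1

    def binary_search(data, target):
        # O(log n) search on sorted data; the "candidate" algorithm.
        i = bisect.bisect_left(data, target)
        return i if i < len(data) and data[i] == target else -1

    data = sorted(random.sample(range(1_000_000), 100_000))
    targets = random.sample(data, 100)

    # Same inputs, same machine; the difference is observed entirely
    # within the system's own workings.
    t_linear = timeit.timeit(lambda: [linear_search(data, t) for t in targets], number=3)
    t_binary = timeit.timeit(lambda: [binary_search(data, t) for t in targets], number=3)

    better = "binary_search" if t_binary < t_linear else "linear_search"
    print(f"linear: {t_linear:.3f}s  binary: {t_binary:.3f}s  -> adopt {better}")
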
The problem lies in the concept of "works better". Where does the
knowledge that defines "better" (necessarily more refined than the
system's present internal knowledge) come from, if not from some form
of competition with what is external to the present system?
- Jef