From: Nick Tarleton (nickptar@gmail.com)
Date: Fri Jun 06 2008 - 18:16:01 MDT
On a related note, this just came to mind: extrapolated volition is an
instance of Oracle AI. It's supposed to produce one answer (the AI
that we would want to write if we knew more, etc.), without optimizing
the world, and then shut itself off. So the problem of defining,
before you have a full metaethic, what side effects shouldn't happen
has to be solved anyway. It might be useful to apply this solution to
easier-to-specify problems before the full definition of EV is
complete, either to help write said definition or to stave off
existential risk.
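To make the contract concrete, here's a toy sketch in Python (purely
illustrative, and the names - solve, run_oracle - are made up, not
anyone's actual proposal): the system computes a single answer as a
pure function of its input, emits it through one channel, and halts,
with no other way to affect the world.

    # Toy sketch of the Oracle contract: one answer, no other
    # side effects, then shutdown. 'solve' is a stand-in for
    # whatever computation actually produces the answer.

    def solve(question: str) -> str:
        # Pure function of the input: the answer itself is the
        # only influence the oracle is allowed to have.
        return "answer to: " + question

    def run_oracle(question: str) -> None:
        answer = solve(question)
        print(answer)      # the single permitted output channel
        raise SystemExit   # shut itself off once the answer is out

    if __name__ == "__main__":
        run_oracle("What AI would we want to write if we knew more?")

Of course, the hard part is exactly what the sketch hides: guaranteeing
that the computation inside solve() really has no other side effects.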
(Actually, the side-effects problem for CEV looks harder than for
Oracles in general, because the data collection about humans that CEV
needs would likely require substantial interaction with the world.)