From: Mike Dougherty (msd001@gmail.com)
Date: Wed May 07 2008 - 18:58:24 MDT
On Wed, May 7, 2008 at 7:36 PM, Krekoski Ross <rosskrekoski@gmail.com> wrote:
> not notice a difference if some of the quantum interactions get
> 'approximated'. Of course, under such a scenario, science in a simulation
> would hit a complexity wall at a specific scale of observation.
At what scale would it really matter? A sufficiently skilled
charlatan can exert enough force of will to make a living bending
people to his will, people who are not much less intelligent (though
maybe more trusting) than the charlatan himself. Do you think this requires godlike
approximation of every neuron in your brain to accomplish? No, it
takes a few general rules and command of the situation. Any
sufficiently advanced AI will have the upper hand over an
average/unaugmented human mind. (And "sufficiently" could probably be
replaced by "slightly" without changing the meaning of that last
sentence.) I think that's the point that is lost in many of these
discussions: either the toolset we use to interact with the AI will be
directly uplifting humans, or humans will need some out-of-band means
of relating to the increasingly superintelligent AI. The control systems
of a fighter plane are an example - the machine can tolerate stresses
the human pilot cannot; the interfaces predict actions and the user
selects a course based on the supplied information. How difficult
would it be to misrepresent the information humans are using to assess
"reality" such that the logical course would be in favor of the AI?
We could find that we are always in agreement with our AI overlord's
decision for us because it only presents options that lead to a
foregone conclusion. The main problem with attempting to solve this
problem is the reliance on further technology to uplift human
cognition (in either rate or capacity).