From: James Rogers (firstname.lastname@example.org)
Date: Thu Jul 04 2002 - 12:31:55 MDT
On 7/4/02 6:59 AM, "James Higgins" <email@example.com> wrote:
> At 01:20 AM 7/4/2002 -0400, Gordon Worley wrote:
>> Besides, I assert there's no such thing as free will and it's just an
>> illusion of the interpreter, but that's another thread.
> Of course there is free will, at least on the individual level.
Gordon is correct. If you assume the mind can be run on finite state
machinery (something one generally assumes in AI research), you can't have
free will. Furthermore, in that case it is mathematically impossible for
you to even perceive that you don't have free will (somewhat like Gödel's
theorem applied to computational machinery), though it is possible to
perceive the lack of "free will" in simpler machinery. That last point
catches most people by surprise.
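The determinism half of the claim is easy to picture with a toy example
(my own sketch, not anything from the thread): a finite state machine's
next state is a pure function of its current state and input, so replaying
the same inputs always reproduces the same trajectory; there is no point
at which the machine "could have done otherwise."

```python
# Toy deterministic finite state machine: the transition table maps
# (state, input) -> next state, so behavior is fully fixed by the inputs.
def run_fsm(transitions, start, inputs):
    """Return the sequence of states a deterministic FSM visits."""
    state = start
    trajectory = [state]
    for symbol in inputs:
        state = transitions[(state, symbol)]
        trajectory.append(state)
    return trajectory

# A two-state machine that flips on 'a' and stays put on 'b'.
FLIP = {
    (0, 'a'): 1, (0, 'b'): 0,
    (1, 'a'): 0, (1, 'b'): 1,
}

# Identical inputs give the identical run, every time: no "free" choice.
first = run_fsm(FLIP, 0, "abab")
second = run_fsm(FLIP, 0, "abab")
assert first == second
```

This is of course the trivial direction of the argument; the interesting
part is the claim about what such a machine can perceive about itself.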
A lot of people have a hard time with this concept because it isn't
intuitive, but it is relatively simple to show why it must necessarily be
true. An SI may be able to observe that we do not have free will, but we,
as roughly equivalent individuals, never can; so we operate under the
assumption that we do have free will as a functional heuristic. The
emergence of large intelligence differentials between observer entities
will shatter this illusion, which is really just a narrow boundary case
when applied to humans.
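The intelligence-differential point can be gestured at in the same toy
setting (again my sketch, and only an illustration, not a proof): an
observer holding a complete model of a simpler machine can predict that
machine's every move before it makes it, whereas the simpler machine
cannot contain a complete model of the larger observer inside its own
smaller state.

```python
# An observer with a full model of a simpler machine predicts each
# transition before the machine executes it. The asymmetry runs one way:
# the simpler machine lacks the state to hold a full model of the observer.

def predict(model, state, symbol):
    """Observer's prediction: run its model of the machine one step ahead."""
    return model[(state, symbol)]

# The simpler machine under observation (flips on 'a', holds on 'b').
FLIP = {
    (0, 'a'): 1, (0, 'b'): 0,
    (1, 'a'): 0, (1, 'b'): 1,
}

state = 0
for symbol in "abba":
    guess = predict(FLIP, state, symbol)   # observer predicts first
    state = FLIP[(state, symbol)]          # then the machine acts
    assert guess == state                  # the prediction never misses
```

From the observer's side the machine's behavior is transparently
determined; from inside the machine, nothing is available to register that
fact.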
I've seen quite a bit of very irrational backlash against this idea because
it invalidates a core axiom of human interaction. I've never seen anyone
actually refute it; they just get a "deer in the headlights" look and refuse
to accept it. But this is SL4. :-)
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:40 MDT