Re: [sl4] Irrational motivations (was [sl4] A Thought Experiment: Thou Shalt Not Bear False Witness)

From: Mike Dougherty (msd001@gmail.com)
Date: Wed Dec 02 2009 - 19:23:32 MST


On Wed, Dec 2, 2009 at 8:59 PM, Matt Mahoney <matmahoney@yahoo.com> wrote:

> Mike Dougherty wrote:
> > I don't think I would bother raising the dog's intelligence to have a
> > rational conversation.
>
> Of course you wouldn't because (1) you don't know how and (2) the dog
> didn't create you with a brain programmed to be completely honest with it.
>

That's true. I generally believe that attempts by lower-level intelligences to
shackle higher-level intelligences are wrong, if for no other reason than that
it will end badly for the lower-level intelligence. Simple example: I put my
keys in a safe place (good idea); I have forgotten where the safe place was;
I screwed myself. (Bad result from a good idea due to simple human Fail.)

On point #1: even if I knew how, it would fundamentally change the dog and my
relationship to the dog. I'm cautious about that in principle because it's a
steep slippery slope to all the nightmarish-consequences threads ending in
Autobliss wireheads being turned off by a more capable power after having
compressed their behavior pattern to one iteration of an addictive loop.
Or even without the more capable power (sysadmin, etc.) there might be an
integer overflow when Autobliss increments past the capacity of an unsigned
integer (or the number of clock faces in a line is exceeded by one), something
on the order of 'complete protonic reversal' or an equally Bad Day.
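
To make that overflow failure mode concrete, here is a minimal C sketch; the
counter name and starting value are purely illustrative, not taken from the
actual autobliss program. Incrementing an unsigned counter past its maximum
silently wraps it back to zero:

  #include <stdio.h>
  #include <limits.h>

  int main(void) {
      /* hypothetical reward counter for the addictive loop */
      unsigned int bliss = UINT_MAX - 2;

      for (int i = 0; i < 5; i++) {
          printf("bliss = %u\n", bliss);
          bliss++;  /* past UINT_MAX this wraps to 0: unsigned
                       arithmetic in C is defined modulo 2^N */
      }
      return 0;
  }

Strictly speaking that wraparound is well-defined behavior rather than memory
corruption, but for a wirehead whose utility *is* the counter, dropping from
UINT_MAX to 0 in a single tick is still an equally Bad Day.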


