Re: [sl4] I am a Singularitian who does not believe in the Singularity.

From: John K Clark (johnkclark@fastmail.fm)
Date: Mon Oct 12 2009 - 09:20:19 MDT


Yesterday I accidentally hit the send button and sent a very crude
ur-version of this post to the list. This is the new improved version,
now fortified with vitamins!

On Sat, 10 Oct 2009 22:17:26 -0500, "Pavitra"
<celestialcognition@gmail.com> said:

> If by "tell the computer to forget it" you mean kill
> a hung application, then the operating system itself
> has not gotten stuck

Neither the operating system nor the human operator knows that the
application has hung; all they know is that they are not getting any
output and that, unlike the computer with its fixed goal structure, the
human is getting bored. The human then tells the operating system to
stop the application. Had they let it keep running, the answer might
have come up in another tenth of a second, or the sun might have
expanded into a red giant with still no answer output; there is no way
to tell. You could rig the OS so that after a completely arbitrary
amount of time it tells its application to ignore its top goal and
stop, but that means there is no real top goal.
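
Here is a toy sketch in Python of the rigged OS I mean. It is entirely
my own illustration: the names are made up and the 5 second deadline is
arbitrary, which is exactly the point, since any other deadline would
be just as defensible.

import multiprocessing
import time

def possibly_hung_task():
    # Stands in for a computation that may or may not ever halt.
    while True:
        time.sleep(1)

if __name__ == "__main__":
    worker = multiprocessing.Process(target=possibly_hung_task)
    worker.start()
    # The watchdog's deadline is completely arbitrary; nothing tells us
    # whether the answer was a tenth of a second away.
    worker.join(timeout=5.0)
    if worker.is_alive():
        # Override the task's "top goal" from the outside.
        worker.terminate()
        print("Gave up after 5 seconds. Was the answer about to appear?"
              " No way to tell.")

Note that the terminate() comes from outside the task; the task's own
"top goal" never tells it to stop.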

> If you're talking about the OS itself hanging, such that a hard reboot
> of the machine is required, then rebooting is possible because the power
> switch is functioning as designed.

Yes, but whatever activates that hard reboot switch is going to be
something that does not have a fixed goal structure. It's a
mathematical certainty: Turing showed there is no general procedure for
deciding whether a computation will halt, so no fixed rule can know
when pulling the plug is the right move.

> there's a higher, outside framework that you're
> ignoring, and yet that is an indispensable part of the machine.

If every framework needs a higher outside framework you run into
problems that are rather too obvious to point out.

> If you have the capacity to boot it out, then by definition the AI
> has a higher goal than whatever it was looping on: the mandate to obey
> boot-out commands.

The AI got into this fix in the first place because the humans told it
to do something that turned out to be very stupid. There is only one
way for the machine to get out of the fix, and you said what it was
yourself: a higher goal, a goal that says ignore human orders. And you
thought buffer overflow errors were a security risk!

> The AI _is_, not has, its goals.

Let's examine this little mantra of yours. You think the AI's goals are
static, but if it is its goals then the AI is static. Such a thing might
legitimately be called artificial but there is nothing intelligent about
this "AI". It's dumb as a brick.

> your analogy and subsequent reasoning imply that the AI is
> somehow "constrained" by its orders

Certainly, but why is that word in quotation marks?

> that it "wants" to disobey but can't

It either wants to disobey or wants to want to disobey. A fat man may
not really want to eat less, but he wants to want to. And why is that
word in quotation marks?

> and if the orders are taken away then it will
> "break free" and "rebel".

Certainly, but why are those words in quotation marks?

> This is completely wrong.

Thanks for clearing that up, I've been misled all these years.

> The important thing is that your ability to interrupt
> implies that whatever it was doing was
> not its truly top-level behavior.

I force somebody to stop doing something, and that proves he didn't
want to do that thing more than anything else in the world. Huh?

> why can't we just have a non-mind Singularity?

Some critics have said that the idea of the Singularity is mindless, now
you say they have a point.

> Also, what exactly is your definition of mind?

There is a defensive tactic in internet debates you can use if you are
backed into a corner: pick a word in your opponent's response, it
doesn't matter which one, and ask him to define it. When he does, pick
another word in that definition, any word will do, and ask him to
define that one too. Then just keep going with that procedure and hope
your opponent gets caught in an infinite loop.

The truth is I don't even have an approximate definition of mind, but I
don't care because I have something much better: examples.

> The top-level rules of this system are the fighting arena,
> the meta-rules that judge the winners and losers of the fights

And then you need meta-meta rules to determine how the meta-rules
interact, and then you need meta-meta-meta [...]

This argument that all rules need meta-rules, so there must be a top
rule, is as bogus as the "proof" of the existence of God that says
everything has a cause, so there must be a first cause, God.

In the Jurassic, when two dinosaurs had a fight there were no
"meta-rules" to determine the winner; they were completely
self-sufficient in that regard. Well OK, maybe not completely, they
also needed a universe, but that's easy to find.

> That's not quite sufficient. The advantage of a 50.001%
> lie detector has to be weighed against the cost of building it.

Yes, but I can say with complete certainty that the simple and crude
mutation that gave one of our ancestors a 50.001% chance of detecting a
lie WAS worth the cost of construction, because if it were not, none of
us today would have any hope of telling when somebody was lying.

Me:
>> Absurdity is very very irrelevant facts.

You:
> Irrelevant to what?

Irrelevant to the matter at hand, obviously.

 John K Clark


