From: Bryan Bishop (email@example.com)
Date: Tue Jun 24 2008 - 21:44:03 MDT
On Friday 06 June 2008, Stuart Armstrong wrote:
> > Yes, I think that is what most members of this list want, so let's
> > start acting like adults and retire that silly euphemism "friendly"
> > and call it what it really is: a slave.
> Well, yes. The options seem to be
> 1) A slave AI.
> 2) No AI.
> 3) The extinction of humanity by a non-friendly AI.
#3 is bullshit. Just escape the situation. Yes, change sucks. Yes,
there's the Vingean AI that chases after you, but not running just
because it might eventually catch you is kind of stupid. Kind of.
> Since "no AI" doesn't seem politically viable, the slave AI is the
> way to go.
Way to go for what? Are you thinking that AI is something that can only
appear once on the planet? That's completely absurd. Look at the
trillions of organisms (ignore the silly single-ancestor hypotheses).
> Of course there may be grey areas beyond those three possibilities -
> but hideously smart and knowledgeable people argue that there are no
> such grey areas. Even if there are, a non-lethal AI would be much
> closer to "slave" than "non-friendly".
Are they then hideously smart?
> > To hell with this goal crap. Nothing that even approaches
> > intelligence has ever been observed to operate according to a rigid
> > goal hierarchy, and there are excellent reasons from pure
> > mathematics for thinking the idea is inherently ridiculous.
> Ah! Can you tell me these? (don't worry about the level of the
> conversation, I'm a mathematician). I'm asking seriously; any
> application of maths to the AI problem is fascinating to me.
Have you seen the name Bayes thrown around here yet?
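For the mathematician asking: the Bayes being referred to is just Bayes' rule, P(H|E) = P(E|H)·P(H)/P(E), which the list tends to invoke as the normative core of rational belief updating. A minimal sketch of the update (the numbers below are made-up placeholders, not from this thread):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) via Bayes' rule.

    prior           -- P(H), prior probability of the hypothesis
    p_e_given_h     -- P(E|H), likelihood of the evidence if H is true
    p_e_given_not_h -- P(E|~H), likelihood of the evidence if H is false
    """
    # Total probability of the evidence: P(E) = P(E|H)P(H) + P(E|~H)P(~H)
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Placeholder example: a 1% prior with strongly diagnostic evidence
posterior = bayes_update(prior=0.01, p_e_given_h=0.9, p_e_given_not_h=0.05)
```

Even strongly diagnostic evidence leaves the posterior modest here (~15%) because the prior is so low; that prior-dominance is the usual point of bringing Bayes up.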
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT