From: Mark Waser (mwaser@cox.net)
Date: Wed Mar 12 2008 - 20:59:16 MDT
> Performing unethical acts is usually in the self-interest of, not only
> AIs, but most humans. Billionaire drug-barons and third world
> dictators make themselves huge piles of money off horrible and
> unethical actions.
Only in a short-sighted view, in a society with inadequate enforcement. This
is *much* more the argument that I was expecting to have. I will address
this point at greater length shortly. Thank you for bringing it up.
> Show us examples of such derivations.
Coming shortly (it's getting late). Again, an excellent question!
> Error, reference not found. There's no such thing as a computer "with
> the intelligence of a human", because computers will have vastly
> different skillsets than humans do (see
> http://www.intelligence.org/upload/LOGI/seedAI.html).
:-) You're being pedantic and difficult. I'm arguing a general equivalence
here, not a specific skill set.
> The people on this list already have a great deal of human-universal
> architecture, which AIs won't have. See
> http://www.intelligence.org/upload/CFAI//anthro.html,
> http://www.intelligence.org/Biases.pdf,
> http://www.overcomingbias.com/2007/11/evolutionary-ps.html.
Yes, but I don't see why my argument depends on whether or not the AGIs have
human-universal architecture (except that it is a good argument that my
testing on humans is insufficient as proof of behavior in AGIs).
> Any AI intelligent enough to actually understand all this will be more
> than intelligent enough to rewrite itself and start a recursive
> self-improvement loop. See http://www.acceleratingfuture.com/tom/?p=7.
Possibly true, but it is probably not smart enough to get around the blocks
that humans will have placed in its way (and the fact that humans will have
given it the goal that it is UnFriendly to attempt to do so until the humans
declare that it is ready).