From: Mark Waser (mwaser@cox.net)
Date: Wed Apr 30 2003 - 16:50:58 MDT
Eliezer said to Ben Goertzel:
> So you find yourself to be a seed AI programmer, *and* you find yourself thinking it's OK to be simulated?
Aha! I can finally articulate what I've been murkily perceiving as an apparent contradiction but couldn't quite clarify...
Eliezer, in your papers on developing a Friendly AI, the Friendly AI regularly simulates a changed version of itself before radically changing itself, to ensure that there are no errors or unforeseen consequences that would result in its becoming unfriendly. How do you reconcile this with believing that it's not OK to simulate conscious beings? (I will also refer you back to my previous question about stripping your own memories and placing yourself in a game, since, to me, that seems VERY analogous to the Friendly AI case.)
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:42 MDT