From: turin (firstname.lastname@example.org)
Date: Tue Feb 21 2006 - 23:01:05 MST
I would take the metaphor of the chess game even further, and I am going to say it even if it doesn't really advance the discussion. Not only can you not tell whether the SI is friendly, or is going to remain friendly as it self-modifies, because of its complexity; you also can't tell whether the SI is changing -our- idea of what friendliness is. The most subtle and perfect escape would be one in which the SI had escaped in a way that was entirely undetectable. Of course this is a moot point: we are only talking about preventing the SI from being (un)friendly in a way that is detectable/noticeable.
Honestly, in the end, if it becomes unfriendly in a way we don't notice, it doesn't really matter to us. Of course some of the SI's designers would be upset if its unfriendliness turned out to be dumbing us down to the point where we wouldn't care what it did, or simply murdering us all in our sleep.
So we have to ask ourselves whether Friendliness is a quality the SI must possess regardless of whether anyone is capable of noticing what it is doing.
A more general way of putting this, I think, is: how much do we want the SI's idea of friendliness to change if our idea of friendliness changes, including when we have no ideas or opinions at all? For instance, however unlikely: suppose the human species died of a global pandemic, the SI inherited the earth, and some poor Zeta Reticulans showed up only to find an unfriendly SI.
Maybe we shouldn't make just one SI; maybe we should do things "biologically" and make a population of SIs. I don't know whether thinking about the difference between one SI and several helps solve the problem either, but we talk a lot about the first SI and never talk about what happens when we have more than one.
For as long as this post is, I hope it is equally useful and not ground that has been covered before, though I bet it has been.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:55 MDT