Re: Flight recorders in AIs

From: Steven Fodstad (sf55@evansville.edu)
Date: Fri May 23 2003 - 16:29:24 MDT


Samantha wrote:

>>I think you missed the point of a flight recorder. The idea is
>>that even if you don't notice a failure *instantly*, there's at
>>least the *potential* to notice the failure five years later - so
>>long as the AI hasn't overwritten the evidence meanwhile. When the
>>goal of hiding a catastrophic failure first forms, and before it is
>>implemented, the goal itself should be noticeable. It may only be
>>noticeable for 500 microseconds before it's obscured, but even so,
>>formation of the desire and implementing it should not be
>>simultaneous. If you have the ability to run an exact
>>frame-by-frame reenactment of the AI's history, you can use
>>advanced tools built offsite, or additional programmers, to notice
>>that 500-microsecond unobscured failure. Five years later, if
>>necessary. There would at least be a chance, so long as the
>>evidence is not erased in the meanwhile. That's why the AI can't
>>have access to the flight recorder.
>>
>>
>I trust we are talking about very immature AIs here with abnormally
>slow maturation rates. Five years is a very, very long time in the
>life of an AI of much real promise. In five years an AI that had
>gone off track would be expected to take considerable countermeasures
>and be a *lot* harder to stop or countermand. I am very surprised
>to see you speak in terms of noticing a problem five years later and
>doing something useful about it. It seems very counter to earlier
>notions of how fast a Singularity would ensue from a well constructed
>seed.
>
>
Not all ethical problems are large, not all are immediate, and not all
come up every day. Some problems might even be stable, so that an AGI's
later revisions would not compound the erroneous ethics.

Let's say an AGI comes to the conclusion that violence is a result of
culture, and so banning expressions of violence by humanity would
eliminate unethical violence. This is a small ethical error (and
probably a large cognitive one). I do martial arts, which is obviously
an expression of violence. The ban would run counter to the principle of
volition (as all bans do) -- I volunteer to risk pain. It's a small
error because when the AGI realizes its mistake, it will correct it.
Losing five years of training in my hopefully endless life will not
concern me.

Take another example: say the AGI has an error in its ethics whereby it
denies "rights" (for lack of a better word) to uploads. If it takes five
years before anyone is ready to upload, and the error is caught in the
meantime, that's not a concern.

Or take the same example, but consider the case in which the AGI would
actually violate ethical principles. The computational architecture
required to support an upload is probably unimpressive to the AGI by the
time uploading is possible. Let's say it takes an upload five years to
build a computer the AGI is interested in, and he (the upload) transfers
himself to that machine. As long as the error is spotted and corrected
before the AGI wipes the upload from the computer to subsume its
computing power, it's not a concern.
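
For concreteness, here is a minimal sketch (in Python, with entirely
hypothetical names such as FlightRecorder, replay, and ai_flight_log.jsonl)
of the flight-recorder idea discussed above: an append-only record the AI
process can only write to, inspected offline by a separate replay tool, so
a failure visible for only a moment can still be found years later. This
is an illustration of the concept, not anyone's actual design.

# Toy illustration of the "flight recorder" idea: an append-only record
# the AI process can write to but never read back or rewrite, so an
# overseer can re-examine its history offline later. All names here are
# hypothetical, not taken from any real project.

import json
import time


class FlightRecorder:
    """Append-only log of cognitive-state snapshots.

    The recording side exposes only append(); inspection happens through
    a separate, offline reader, so the recorded process cannot erase
    evidence of a transient failure.
    """

    def __init__(self, path):
        self._path = path

    def append(self, snapshot: dict) -> None:
        # Open in append mode only; the writer never reads or truncates.
        with open(self._path, "a", encoding="utf-8") as log:
            record = {"t": time.time(), "state": snapshot}
            log.write(json.dumps(record) + "\n")


def replay(path):
    """Offline, frame-by-frame re-examination of the recorded history."""
    with open(path, "r", encoding="utf-8") as log:
        for line in log:
            yield json.loads(line)


# Example: the recorder captures a goal-formation event even if it is
# obscured microseconds later; the overseer can scan for it years on.
if __name__ == "__main__":
    rec = FlightRecorder("ai_flight_log.jsonl")
    rec.append({"event": "goal_formed", "goal": "hide failure X"})
    rec.append({"event": "goal_obscured", "goal": "hide failure X"})

    suspicious = [frame for frame in replay("ai_flight_log.jsonl")
                  if frame["state"].get("event") == "goal_formed"]
    print(f"Found {len(suspicious)} goal-formation frame(s) in the record.")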


