From: Philip Sutton (Philip.Sutton@green-innovations.asn.au)
Date: Tue Jun 15 2004 - 05:13:18 MDT
Hi Eliezer,
> > PS: Do you mean this literally or are you assuming that the FAI with
> > a collective volition function will externally observe (or converse with)
> > 6+ billion people individually and inductively model them or would you
> > expect the collective volition function to just gather what information
> > it can from all sorts or primary and secondary sources (like we do) but
> > just on a more massive scale?
>
> EY: Yes.
Assuming that your one-word reply was not intended to be ambiguous,
then apparently 'all of the above' was what you had in mind?
That means that to create the collective volition to ensure that a
superAI is friendly, the AI will:
- directly inspect what goes on in the brains of 6 billion+ people
- directly observe, or perhaps converse with, 6 billion+ people
Could a pre-singularity AI do this? The technology required seems to
go way beyond anything humans are likely to develop in the next few
decades, and the scale of the task is clearly astronomical.
Also, is it likely that more than a small percentage of the 6 billion+
people would agree to have an AI trawling around in their heads? If
only a small (most likely non-representative) sample of humanity
participates in the mind reaming, will the data collected be adequate?
Are you proposing to do this mind reaming against the will of those who
object? Or are you proposing that the AI do the reaming without asking
for permission?
If the tasks outlined above can only be accomplished by a post-
singularity AI, how will you ensure friendliness in any advanced but
pre-singularity AIs?
Cheers, Philip
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT