From: Olie Lamb (olie@vaporate.com)
Date: Sun Oct 23 2005 - 22:40:35 MDT
Woody Long wrote:
>Example of a current, friendly intelligent system exhibiting self
>awareness, self interest, and self destruction --
>
>The Sony Aibo dog
>
Dude, you have a VERY different interpretation of the term "self-aware"
going on there.
I can deal with people using the term "intelligent" to cover
non-conscious entities, even though I find this a dumbing-down of the term.
But self-aware???
Surely, a basic requirement of an entity being "self-aware" is that it has
Consciousness. I know the c-word makes a few people round here squirm a
little, but sometimes you have to tackle the hard problems. (Hee)
Reaction to entities is not awareness of the entities. Bugs react to
damage to their bodies (~ their selves), but, as far as anyone knows,
they don't operate on the basis of "My body is in some way different
from stuff that is not my body". They don't have concepts, let alone
concepts of the self.
>However Sony built it so that when it meets a certain amount of light
>resistance, it self-destroys this behavior; and its jaw goes slack. If a
>hobbyist tries to get inside and tinker with this behavior, it self-destroys
>the system, and is rendered inoperable. It implements what they call in
>Japan the principle of harmony. It also could be considered an
>implementation of Asimov's First Law of Robotics.
>
>
Does the entire system destroy itself if there is any tinkering, or do
just certain types of tinkering result in certain failures?
Either way, the comparison to Asimov betrays the flaw in the
methodology: rather than a system structured for safety from the
ground up, you are describing a system with a safety catch
"plastered on".
>2. Self awareness and self interest - The Sony Aibo dog has tactile sensors
>and it can tell when it is being "stroked" or "hit". If it's being stroked,
>"it" - the dog self - forms a positive self interest or affection for the
>person stroking it, and develops a self interest in being near it so that
>when it sees the person, based on ITS so formed self interest alone, it
>goes up to the person. Conversely, when it is "hit" it forms a negative self
>interest or aversion for the person, and develops a self interest in
>avoiding him, and so based on this self interest alone, it moves away from
>the person.
>
>
Fantastic! We're already designing robots to mimic behaviours of
disliking people. Maybe the next generation of AIBO will be able to hit
back. I'm sure we can add some safety system, though, to make sure that
any maul-behaviours are non-fatal.
Seriously, though, forming profiles of people could be a very constructive
approach. People have preferences, and it's good for a device to be
able to match those preferences.
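If I had to guess at the shape of the mechanism Woody describes, it
would be something like this toy sketch (hypothetical names, not Sony's
real code): keep a per-person affinity score, nudge it up on strokes and
down on hits, and let its sign pick approach or avoidance.

    # Toy sketch of stroke/hit preference learning -- invented, not the Aibo's code.
    from collections import defaultdict

    affinity = defaultdict(float)   # per-person score, starts neutral at 0.0

    def on_touch(person, kind, step=1.0):
        """Strokes raise the person's score, hits lower it."""
        affinity[person] += step if kind == "stroke" else -step

    def choose_behaviour(person):
        """Approach people with positive scores, avoid those with negative ones."""
        score = affinity[person]
        if score > 0:
            return "approach"
        if score < 0:
            return "move_away"
        return "ignore"

    on_touch("alice", "stroke")
    on_touch("bob", "hit")
    print(choose_behaviour("alice"))   # -> approach
    print(choose_behaviour("bob"))     # -> move_away

None of which requires the dog to have a concept of "alice", "bob", or
itself -- which is rather the point.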
>In the same way, future humanoid SAI will be safe-built and friendly.
>
I have no objection to attempting to design humanoid intelligences. I
think that, as Loosemore said, there is no reason why an AGI can't be
humanoid - in the broader sense - and without self-interest.
Furthermore, some limited forms of self-interest are not necessarily
un-Friendly, but there are better motivational systems.
Likewise, it is quite likely that the kind of design that a
consumer-products group would create would be "friendly" in the broad
sense. However, I think the Guidelines make it pretty clear that the
way they use Friendly is much more specific.
> So,
>as a singularitarian, I support such friendly humanoid SAI, which I believe
>will someday evolve into a super intelligent humanoid technological
>singularity. My guess is Sony in 10 to 15 years will complete the creation
>of their authentic, friendly humanoid SAI.
>
>Ken Woody Long
>artificial-lifeforms-lab.blogspot.com
>
>
Being an advocate of the singularity doesn't require a belief that "AGI
will necessarily be Good, so all research in the area is Good".
I think that MNT is fantastic stuff. But I want strict controls on
research, and would rather have no MNT than allow undisciplined
investigations into replicators.
I think that genetic research into diseases /will/ help humanity
immensely. But I really don't like the idea of ebola-sequences being
posted on the net.
Likewise, I believe that AGI research has a non-trivial probability of
yielding practical benefit, so much so as to be the "best thing for
Humanity, ever" within a decade*. But I still think that having any
old group doing the research is ridiculously dangerous. If I could have
$100M spent by a military organisation on AGI research, or have it
spent on making 1-inch lengths of string, I'd probably opt for the
stringettes.
Being a singularity-advocate doesn't mean having blind faith.
-- Olie
*I'd estimate that the probability of a bloody-sharp-AI (which would
create a technological singularity in the broader sense) being created
in the next decade is around 8%ish. I feel this is fairly optimistic.
A 10% chance per decade gives, what, a 65% chance within a century or
so. I think our chances in the decade after this one are significantly
improved - maybe 12 to 15%? If one considers the chance of achieving a
bloody-sharp-AI as high as 30% per decade, that would make the
probability of not getting a BSAI within a century a piddling 3%.
I'm sorry, I find that kinda unrealistic.
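(The arithmetic, for anyone checking: with a constant per-decade chance
p, the odds of success at least once in ten decades are 1 - (1 - p)^10.
For p = 0.10 that's 1 - 0.9^10, roughly 0.65; for p = 0.30 the chance of
/not/ getting there is 0.7^10, roughly 0.03.)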
Note that a "bloody-sharp" AI (my one-off term here) doesn't /have/ to be
conscious. I just think that creating consciousness will help to make
an AI cluey, and thereby intelligent.