From: Harry Chesley (chesley@acm.org)
Date: Sun Nov 25 2007 - 18:43:49 MST
Thomas McCabe wrote:
> I have nothing against you posting, but please *read* before you
> post. If you disagree with everything you read, and then post about
> it, at least we can have a useful discussion.
I would be curious to hear what you consider the prerequisite reading
material to be. (I don't mean that at all facetiously. I really would
like to know.)
But note that not everyone on this list has the same goal or background
as you do. My own interest is in building AIs. Theoretical speculation
on things like the fundamental limits of intelligence and provably
friendly AIs is interesting, but often pretty irrelevant to that
practical goal. It's sort of as if I
were trying to build a crystal wireless set and you're talking about
Feynman diagrams. They may be entertaining, and they're certainly
related in some way, but they don't help me build the radio. Similarly,
this list has been an occasionally interesting diversion, but I don't
want to spend inordinate amounts of time reading tangential material.
> We are so used to interacting with a certain type of intelligence
> (Homo sapiens sapiens) that we would be shocked by the alienness of
> a generally intelligent AI. Look at how shocked we are by *each
> other* when we violate cultural norms. And we're all 99.9% identical;
> we all share the same brain architecture. See
> http://www.depaul.edu/~mfiddler/hyphen/humunivers.htm for a list of
> things that we have in common and the vast majority of AIs do *not*.
Very true. And that alienness is one reason we may intentionally build
anthropomorphic AIs.
> How is this going to happen? Magic? Osmosis? None of our other
> computer programs just wake up one day and start displaying parts of
> a human personality; why would an AGI?
It'll happen by design, of course. You don't think we can program a
human-like personality into an AI?
For example, some companies are building companion robots for the
elderly that very intentionally have personality, and that encourage the
formation of long-term emotional relationships with their owners.
> We can name a long list of things that are definitely
> anthropomorphic, because they only arise out of specific selection
> pressures. Love and mating for one thing. Tribal political structures
> for another.
I don't share your confidence about knowing what is and isn't
inherently anthropomorphic. For example, I'm not sure that a group of
interacting GAIs wouldn't logically arrive at a system very much like
tribal politics. (Agoric systems come to mind.) Or even love. As I
understand it, love evolved because it lets two parties trust each
other beyond the initial exchange, which makes possible mutually
beneficial contracts that would otherwise be unworkable. A similar
arrangement might make sense in a GAI. If you feel that's impossible
because machines can't feel, then we have another area of disagreement,
as I don't see any reason they couldn't if we can.
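
To make the trust point concrete, here's a toy sketch in Python (my
own illustration, with made-up payoff numbers) of a one-shot exchange
where a rational trustee defects, and the same exchange where a posted
bond, standing in for love or any other commitment device, makes
honoring the deal the rational choice:

# A toy model of the trust argument above. The payoff numbers are
# made-up assumptions; the point is only the structural change a
# commitment device causes.

INVEST_COST = 5    # what the investor risks up front
JOINT_GAIN = 20    # value created if the trustee honors the deal
BOND = 12          # what a committed trustee forfeits by defecting

def trustee_payoff(honors: bool, bonded: bool) -> int:
    """A defecting trustee keeps the whole gain, minus any bond."""
    if honors:
        return JOINT_GAIN // 2             # split the created value
    return JOINT_GAIN - (BOND if bonded else 0)

def exchange(bonded: bool) -> str:
    # A rational trustee honors the deal only if honoring pays more.
    honors = trustee_payoff(True, bonded) >= trustee_payoff(False, bonded)
    # The investor anticipates that, and invests only when honoring is rational.
    if not honors:
        return "no deal: defection anticipated, so the value is never created"
    investor_net = JOINT_GAIN // 2 - INVEST_COST
    return f"deal: investor nets {investor_net}, trustee nets {trustee_payoff(True, bonded)}"

print("without commitment:", exchange(bonded=False))
print("with commitment:   ", exchange(bonded=True))

Run it and the unbonded exchange never happens, while the bonded one
leaves both parties better off. That's all the argument needs: the
commitment mechanism, not any particular feeling, is what makes the
contract workable.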
> Brain simulations and uploads are another thing, I'm talking about
> built-from-scratch, human-designed AGIs.
You may have been talking only about built-from-scratch GAIs, but we
weren't. Dare I tell you to read the previous posts before replying?