From: Simon Belak (firstname.lastname@example.org)
Date: Mon May 02 2005 - 09:00:36 MDT
It depends most on how human-like we make the first AIs. Morality, like
everything else, evolved and is therefore at least to an extent logical;
grasping it should not be too alien a task, assuming that it (or perhaps
he/she?) is irrevocably programmed to interact and coexist with us (a pure
Goedel machine could be tricky). Having said that, at worst we are looking at
a replication-governed emergent morality, and at best a true pleasure/pain
based one.
Quoting Tennessee Leeuwenburg <email@example.com>:
> History is fun to watch. Every week I see some new development
> pointing towards a technological step-improvement in the state of
> technology. Many of these changes could be made use of in
> transhumanist ways.
> Question : Do we have time to solve Friendliness before we are
> confronted with the first real AIs?
> My answer : No.
> Do people here agree, and what is to be done?
> My own position is that morality, if not friendliness, is likely to
> arise anyway. But many disagree, and for those who do, such a
> judgement may seem as hollow as faith.
> - -T
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT