From: Lawrence Foard (entropy@farviolet.com)
Date: Mon Oct 06 2003 - 10:41:41 MDT
On Sat, 4 Oct 2003, Metaqualia wrote:
> > The replicators (primitive ribonucleic acids) started out without a goal
> > system: they were simply settling into their most stable free energy
> > state, based on the laws of physics and chemistry. Yet, from this system,
> > goal- and meaning-creating intelligence arose. Perhaps an AI with the
> > simplest directives or starting rules could produce some rather elaborate
> > and magnificent results.
>
> see my previous reply. evolving a community necessarily gives rise to some
> level of cooperation, hence morals. Evolving a single being is a completely
> different matter.
It's going to be a very difficult matter. Humans have enough problems with
morals even though our nervous system wiring is very similar across
individuals. For example, a machine might not have any concept of pain
beyond self-preservation. But many things that we register as pain are
essentially harmless; eventually even losing an arm would be harmless,
because it could easily be replaced. The AI would have to become familiar
with these human quirks: for it, replacing a manipulator arm attached to
some network somewhere is a non-issue, so it might not see a situation
where a person's arm was stuck somewhere as a big deal. Just chew it off
and insurance will pay for a new one :)
To complicate matters further, humans at times enjoy pain and fear:
roller coasters, horror movies, S&M. The AI would have to undertake a
prolonged study of human reactions to be able to avoid hurting us, even
if it started out with the best intentions and the best training in a
community of its own.
--
Be a counter terrorist: perpetrate random senseless acts of kindness
Rave: Immanentization of the Eschaton in a Temporary Autonomous Zone.
We are nothing but sunlight detours, in the road between fusion and eternity.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:42 MDT