From: Philip Sutton (Philip.Sutton@green-innovations.asn.au)
Date: Wed May 21 2003 - 07:42:28 MDT
I said:
> AGIs are not the solution to *this* problem they are just an
> extension of the existing problem.
Then Samantha said:
> You do not believe that something that thinks several
> orders of magnitude faster than you can, can outperform you and deal
> successfully with more complex situations than you or a group of people
> like you can?
Like most people on this list, I think AGIs will eventually be smarter than
humans and they'll certainly be able to outperform us in heaps of ways.
But will they be better at solving problems that humans think need
solving? There are grounds for thinking they WON'T under several
different and reasonably plausible scenarios.
Firstly, problems only exist once a normative (goal or values-based)
framework has been established. So if AGIs are not motivated to solve
problems that some humans think are problems then they won't help
humans.
AGIs without a moral code that encompasses humans, AGIs and other
living things will be too dangerous to have around because, if they are
motivated to do things in the real world, they won't care what effect their
actions have on us, other biological lifeforms, or even other AGIs.
And if AGIs DO have a moral code then there's a fair chance it will be
influenced by some pretty limited moral frameworks, due to who had the
money to get them designed and housed in a big computer complex in
the first place, i.e. "I will hate x, y, z people because they have been
nominated as the enemy of my human owner", or "I will screw everyone
except x, y, z because my owner wants to make squillions of dollars and
become the most powerful person in the known universe".
If a single AGI exists and it gains independence then, unless the AGI has
been designed/trained to be both friendly to life and very wise, we
are all going to be hostage to its idiosyncrasies.
And if there are multiple AGIs, and if the overwhelming majority are not
both friendly to life and very wise, then far from problems being solved,
things will just get difficult at a vastly higher level of complexity. We will
be living in a world like ancient Greek Olympus or Norse Valhalla,
where the gods are at war.
Samantha said:
> Trusting the general public in such an area rather speaks for itself as
> does a belief that the real authority is or should be in the hands of
> the public at large. The vast majority in the US believe in demons,
> can't find Iraq (Brazil, India, Russia(!), etc.) on a world map, read
> less than one non-fiction book in their lives after schooling, barely
> understand simple algebra and so on. Your faith in them is touching
> but the masses really do not deserve this sort of credit. Failure to
> understand this or any other unpleasant facts will not serve us.
Fortunately not all publics are (on average) as poorly educated and as
self-centred as the US public. Many of the European countries have
better education and a more cooperative/collaborative attitude than the
inhabitants at the centre of the US imperium. And critically, in many of
these European countries, not coincidentally, democracy is in much
better shape. (By the way, I don't live in Europe now, nor have I ever.)
What I'm trying to get at (apart from sending a gratuitous broadside in the
direction of the US elite and citizenry) is that average humans are
demonstrably capable of a lot more in the way of problem solving than
we see in the worst cases. Rather than damning humans generally as a
bunch of no-hopers, we need to look carefully at what makes average
humans into either no-hopers or into quite clever sensible people.
While humans are hardware/software limited, most of the stupidity we
see them demonstrate is due to poor top-layer software, poor databases
and disastrous decision-making settings (an institutional issue).
So if humans, who are pretty impressive GIs, can stuff up badly for
reasons unrelated to the basic hardware/software issues, then cleverer
AGIs can probably stuff up too, for reasons that go beyond the issue of
hardware and software.
Quite frankly, I doubt that AGIs will solve anything very profound (in
terms of making a better life for humans and a sustainable world for all life)
unless we and the AGIs pay attention to the critical issues of how
political/economic power is wielded and how wisdom is generated, made
widespread and given scope to be expressed.
Cheers, Philip