Date: Mon Dec 09 2002 - 09:36:17 MST
> [...] after humans meet super-intelligent machines they will want to become
> super-intelligent themselves [...] Human values are mostly self-interest. As
> social animals humans have some more altruistic values,
I wish I had the exact quote at hand, but in Ridley's "The Origins of Virtue"
he emphasized that it's _among peers_ that cooperation and non-zero-sum
interactions really pay off for individuals, even greedy ones.
I don't ever want to be a non-peer.
> Very powerful humans can transcend social pressure....
> Society has a gradual [...] long-term trend toward equality...
You made some good points.
> As social animals humans have some more altruistic values, but these mostly
> depend on social pressure.
Are you implying we didn't create these social pressures ourselves?
> the solution is to make alteration of reinforcement learning values a
> necessary condition for granting a human super-intelligence.
I don't see this flying. I certainly wouldn't accept it. Maybe I will when
I get to 'almost-super' intelligence?
> Given that the combination of super-intelligence and human values is
Do you think that right now, with an unaltered learning system, the average
person has the _capacity_ to handle super-intelligence without being
destructive? I'm not asking whether you think it's likely, just whether the
average person, raised in the perfect environment (perfect parents, perfect
culture, perfect schools, etc.), could handle super-intelligence. And would a
super-intelligent society be able to create that perfect cultural environment?
That is, would it all be self-stabilizing? Or do you think the brain, as it
stands now, has fundamental, insurmountable physical flaws? I'm not saying
improvements wouldn't help; I'm just objecting to the prospect of more
hardwired CONSTRAINTS on how and what we think.
> This may provide a way to increase human intelligence before we understand
> how it works
As others here have said many times, it's the transitions that will be
painful and risky.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:41 MDT