From: ben goertzel (ben@goertzel.org)
Date: Wed Feb 06 2002 - 10:25:42 MST
> > Us trying to understand whether hacking the Sysop will be possible, is
> > much like a very smart dog, who has intuited a little bit of what human
> > language is like, trying to make projections about the complex
> > machinations of a dispute in intellectual property law.
> >
> > ben g
>
> Yes, Ben. That's why my actual answer to the question was "I don't
> know."
Eliezer, I wasn't trying to dispute anything you had said; I was just
trying to make, more colorfully and succinctly, the same point you were
making in your reply.
Actually it seems to me that you and I agree on almost everything to do with
the Singularity.
As far as I can tell our most serious disagreements are:
a) specific design strategies for seed AI [we both agree that detailed
brain simulation is not necessary, though, unlike Kurzweil]
b) the probable speed of the hard takeoff (exactly how hard is hard? you
think it'll be faster than I do)
c) our estimates of the conditional probability difference
   Prob(friendly post-Singularity AI | friendly seed AI) -
   Prob(friendly post-Singularity AI | neutral seed AI)
   (a toy numeric illustration follows below)
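(To make (c) concrete, here is a toy illustration with invented numbers
that are nobody's actual estimates. If one of us guessed

    Prob(friendly post-Singularity AI | friendly seed AI) = 0.9
    Prob(friendly post-Singularity AI | neutral seed AI)  = 0.2

the difference would be 0.9 - 0.2 = 0.7, while guesses of 0.5 and 0.3
would give a difference of only 0.2. The disagreement in (c) is over the
size of that gap, i.e. how much building Friendliness into the seed AI
actually changes the odds of a Friendly outcome.)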
And I consider these basically to be differences of intuition, not
resolvable through empirical evidence or logical argumentation at the
moment.
-- Ben