From: Brian Atkins (brian@posthuman.com)
Date: Sat Jul 28 2001 - 02:32:46 MDT
I'm just going to go through all three messages quoted here and
respond...
Jack Richardson wrote:
>
> Evan,
>
> Having written the original post advocating a consideration of human
> augmentation as a possible better alternative, I'm pleased to see the point
> being made that today's concentrated activity to learn how to augment
> humans, primarily for medical reasons, is preparing the way for human
> participation in the onset of the Singularity.
>
> No doubt, as Ben has pointed out, there are many technical problems to be
Do not forget the political/societal problems when it comes to what you
are describing here. Our current government gets the willies just trying
to think about stem cells. Do you think it will adapt quickly enough to
come to grips with radical biotech, and later with other bio-enhancements
such as inloading via nanotech, which would most likely be required to
achieve a Singularity by strictly (trans)human actors?
> solved along the way, but the starting point is way ahead of where a seed AI
> approach is starting. Progress in the area of human augmentation will take
> many forms and likely will involve thousands of steps. But it will be very
Each of which must get the OK of the FDA and the government "ethicists".
> measurable and we will know, from year to year, just how much progress is
> being made. I'm not so sure the same will be true of the seed AI approach.
At the current rate, I'm sorry to tell you, progress in biotech is not
keeping up with Moore's Law (sorry Higgins :-). Sure, we can see a few
bits and pieces of the puzzle falling into place, but I don't see a doubling
in human-modifying abilities yearly. Heck, it would probably take more
than a few YEARS just to get the FDA to approve a radical new therapy.
Meanwhile, we have a pretty well-guaranteed timeline of computing power
showing up that will deliver us human-level computational power by 2010 at
the latest. You'll be lucky if they can cure most cancers and get the cures
approved by the FDA by then; don't even think about any form of intelligence
enhancement, except maybe, just maybe, some experiments on a few anonymous
rich babies who won't grow up in time to affect an AI-based Singularity.
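To show the arithmetic behind that hardware claim, here is a
back-of-the-envelope sketch (illustrative only, not a forecast; the
18-month doubling time, the ~1e13 ops/sec figure for top 2001 hardware,
and Moravec's ~1e14 ops/sec estimate for the brain are all assumptions
you can quibble with):

  # Back-of-the-envelope Moore's Law extrapolation (illustrative only).
  # Assumptions: throughput doubles every 18 months, top 2001 hardware
  # delivers ~1e13 ops/sec, and ~1e14 ops/sec is human-brain-equivalent
  # (Moravec's estimate; higher brain estimates push the date out).
  DOUBLING_YEARS = 1.5
  BRAIN_OPS = 1e14
  year, ops = 2001, 1e13
  while ops < BRAIN_OPS:
      year += DOUBLING_YEARS
      ops *= 2
  print("human-level hardware around %.0f (%.1e ops/sec)" % (year, ops))
  # -> around 2007 with these numbers; slower doubling or a bigger
  #    brain estimate lands you nearer 2010.

Pick more pessimistic numbers if you like; the point is that the hardware
curve compounds on a known schedule, while FDA approvals do not.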
>
> Furthermore, as a group of humans most interested in this development, I
> would argue that most of us would want to ensure that we were all active
> participants in the experiences the Singularity would make available to us.
> It may be true that the Singularity might leave us all behind anyway, but I
> would like to think that we could construct it in such a way that augmenting
> humans would be its primary task. If we were fairly far along at the time of
> onset, that would make its task that much easier.
I know you all (and even those who don't know about the Singularity yet)
want to have a say in how it turns out, and to guarantee if possible a
positive result for yourself. Me too. However, consider a couple of points:
1. If you believe that the Singularity has great potential to be a very
good thing for humanity, then you should logically be doing your best to
get it here as quickly (and yet safely) as possible. Not only for yourself,
but also because it doesn't hurt to save the roughly 150,000 people who die
every day.
2. If you believe that science is already relatively well-funded in the
biological research realm, then it is unlikely you personally could have
any effect on accelerating a biologically-oriented Singularity. You also
have to somehow address the "Singularity gap" between the two approaches:
most people, Kurzweil for instance, don't believe we can have the
technology to seriously enhance human intelligence until after advanced
nanotech arrives (the 2030 era), and yet the computing hardware to support
superintelligent AI is arriving in the 2005-2010 era. That is a 20+
year gap between the competing approaches. All the AI approach needs is
the software side (sounds so easy :-).
If you have resources (money, time, advice, whatever) to commit towards
accelerating the Singularity, does it really make sense to put them
towards the biological path (which is already well-resourced), or towards
the AI path? You will get more bang for your buck with the AI path, since
there are so few supporters in that area right now, and you could
potentially shave as much as 20 years off the arrival of a Singularity.
And if you agree that you don't have the power to significantly alter a
human-based Singularity timeline (since it involves so many bits and
pieces of interdependent technology, plus the fact that your added
resources would be a drop in the ocean), then what have you got to lose by
throwing your weight behind the AI approach? You very well might be
shaving a lot of time off of a Singularity, or at worst blowing your
resources on a failed project that will at least produce some valuable
work on Friendliness, among other things, that will be very useful to the
people who finally do manage to create a working AI, even if that happens
intra- or post-Singularity.
So I think I can make a case for AI potentially driving a Singularity much
sooner than a purely human-based one could. That does not address your
question of making sure it turns out well for us all. That is a tough
question, and one which no one can answer definitively. What I can say
is that if we do get an AI up and running, it will be extensively tested
to make as sure as possible that it is Friendly and wants to help us as
much as it can. That alone is important to me; I'd much prefer that we get
the first real self-enhancing AI up and running, rather than someone like
Hugo de Garis, who just scares me (from a scientific point of view too).
But past that, let me ask you this: do you think a purely human-led
Singularity would be safer or less safe than a Singularity driven by
one superintelligence? Personally I think it would be less safe, from
the point of view that a) you are going to have many more less intelligent
(compared to an SI) minds mucking about with these advanced technologies,
and b) this "mucking about" stage of the Singularity runup will last quite
a bit of time compared to what an AI could do. More minds + more time =
more chance of some kind of disaster, IMO. What do you think?
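To put a toy formula behind that hunch (my numbers here are invented, and
real risks are certainly not independent): if each actor carries some
small independent per-year chance p of triggering a catastrophe, the
cumulative risk over n actors and T years is 1 - (1-p)^(n*T), which
compounds quickly in both n and T.

  # Toy model of "more minds + more time = more risk" (illustrative only).
  # Assumes every actor has an independent per-year chance p of causing
  # a catastrophe -- a gross simplification, but it shows the shape.
  def cumulative_risk(p, actors, years):
      return 1 - (1 - p) ** (actors * years)

  # One SI overseeing a short 2-year runup vs. thousands of human actors
  # over a 20-year runup, with the same tiny p = 1e-4 for each:
  print(cumulative_risk(1e-4, 1, 2))       # ~0.0002
  print(cumulative_risk(1e-4, 5000, 20))   # ~0.99995, near certain

Quibble with p all you want; it's the shape of the curve that matters.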
>
> Best wishes on joining the group,
>
> Jack
>
> ----- Original Message -----
> From: Evan Reese <mentat@telocity.com>
> To: <sl4@sysopmind.com>
> Sent: Friday, July 27, 2001 9:08 AM
> Subject: Re: augmenting humans is difficult and slow...
>
> >
> > ----- Original Message -----
> > From: "Ben Houston" <ben@exocortex.org>
> > To: <sl4@sysopmind.com>; "'Michael Korns'" <mkorns@korns.com>
> > Sent: Saturday, July 07, 2001 6:07 AM
> > Subject: augmenting humans is difficult and slow...
> >
> > > Hi...
> > >
> > > Just did a talk on augmenting humans through direct brain interfaces --
> > > my degree is cognitive science / neuroscience so I have a little of the
> > > requisite knowledge in this area. It seems very likely that we can do a
> > > lot by making little additions or regulatory changes, but it will not be
> > > that easy.
> > >
> > > The technology to read from individual neurons within chronic
> > > implantations is here. I have not yet read of any major successes in
> > > long-term artificial stimulation of individual neurons -- but that's
> > > just an engineering problem; give it time. This stuff doesn't
> > > really require esoteric nanotechnology or magical quantum interfaces,
> > > just electrical current readings of the relevant neurons. In other
> > > words, the technology for making the bidirectional connections is not
> > > the major limiting factor.
No, the corporate willpower and political roadblocks are. Which company
do you think will blow the millions needed to commercialize some
form of neural interface for normal adults? I just don't see it happening
as long as this technology involves macroscopic implants. Whose insurance
or employer is going to cover the cost of having the implants put in and
maintained, even if such technology should come to exist? And would the
government even allow it?
I'm sorry, but the cyberpunk future still seems rather far-fetched to me.
Wearable computing, I definitely agree, will be very big though.
> > >
> > > The real problem is figuring out exactly what will make us smarter
> > > and how to integrate that into our existing brain architecture. It's
> > > not as simple as adding more memory -- there are tons of different types
> > > of memory in the brain, and they are highly distributed and tightly
> > > connected with the computations being performed. Also there are a lot of
> > > calibration problems that have to be overcome if we would like to be
> > > able to recognize meaningful patterns in the brain.
Exactly; it may well be impossible to come up with a one-size-fits-all
technology for something as uniquely individual as the brain. And what
company will take the risks to commercialize it if they know that for many
people it won't work, or that they even risk getting sued? We live in a
country where Dow Chemical got sued by women who got breast implants.
Will companies really expose themselves to the kinds of risks involved
with neural hacking?
> >
> > Of course, if you can interface one human, then you can do it to a
> > thousand or a billion. You don't need detailed models of the brain for
> > this kind of thing - at least to start. You can begin with a "what do
> > you feel when I do this?" kind of thing, and once crude DNIs are
> > working, things can take off.
And so, theoretically, if you can "interface" a human, what does that get
you? Slightly quicker output for typing or controlling machines, maybe
slightly quicker input than you could get by reading? Expanded access
to memory, maybe, but I bet that would be very hard to do. But where does
the massive intelligence increase we want come from?
> >
> > It is certainly a hell of a lot more interesting than this uninspired
> > fear-based seed AI thing. The really neat part about the evolutionary
> > approach - and why it will nullify the seed AI approach - is that you
> > don't have to ask for resources to fund it, or try to recruit people to
> > work on it. It's happening all by itself; most people do not know - and
> > need never know, and probably wouldn't care if they did - that they are
> > contributing to the singularity. The resources of the evolutionary
> > singularity are truly vast and rapidly getting vaster. And as others
> > have pointed out, the evolutionary path begins with many of what are
> > generally considered the "hard problems" solved, whereas the AI people
> > have to start from square one.
The hard problem of the Singularity is how to create greater-than-human
intelligence. So no, that problem is not already solved by taking the
"evolutionary" approach. And I really do not see any real scientific
(or, more importantly, commercial) work being done on increasing human
intelligence. The Internet does it, but we are already halfway done
with using up what it can give us. What comes next? I hold that you
are incorrect that you do not need to seek resources for the biotech path
to succeed. If the corporations and their researchers do not see any
money-making possibilities in intelligence enhancement research, then
there will be no work done in that area. This is already the case with
much more mundane things like a potential malaria vaccine: the companies
don't see a lot of money to be made there, so there is practically no work
being done on it.
> >
> > There's a lot more to be said on this subject, but I'm busy with moving
> > currently - to Pasadena; perhaps I'll meet some of you in Southern
> > Calif. - so I'll close for now. But I'll be more talkative when I get
> > established there. (I haven't even written a "join" post yet.) It needs
> > to be emphasized that there are other paths - more inspired and
> > inspiring ones - to
I'd like to see your definition of "inspired" relative to all this.
> > the singularity than the imho cringing one proposed by the Institute of
> > building an AI and - if everything works out as hoped - maybe humans
> > will be permitted to scale the heights; what I would call the
> > "singularity by proxy" path. I, for one, intend to participate DIRECTLY
> > in the singularity. I hope there are at least a few others here as well.
> >
In order to participate directly in a transhuman-based Singularity you
would have to be one of the first humans enhanced into transhumanity. How
do you plan to achieve that? Even if you do, the vast majority of humanity
will just be riding your coattails, no matter which path occurs first.
Secondly, without an AI to guide things, what prevents individuals intra-
or post-Singularity from using nanotech or other ultratechnologies in
destructive, anarchic ways? I'd like to hear a brief but coherent
timeline/description of how you think this would play out. Our argument is
that while it all probably would turn out OK, it would generally be safer
to get a Friendly AI in place first.
--
Brian Atkins
Director, Singularity Institute for Artificial Intelligence
http://www.intelligence.org/