From: James Higgins (jameshiggins@earthlink.net)
Date: Sun Jul 29 2001 - 02:20:48 MDT
At 08:15 PM 7/28/2001 -0400, Brian Atkins wrote:
> > >Now you are the one making claims.. for all you know Webmind may very well
> > >be remotely similar to real AI. In fact you have Ben here making that
> > >claim. I do not see anyone around claiming to be near to finishing a
> > >Real Neural Interface. RNIs seem to be around the stage of development
> > >that AI was back when computers were using vacuum tubes.
> > >
> > >A different way to look at it is this: with the computing power of the
> > >near future, AI is at the stage now where we can do real scientific
> > >experimentation. That (being able to really experiment) almost always
> > >leads to breakthroughs. RNIs are not there yet. I think you will agree
> > >with me that the AI path /definitely/ seems to be much farther along from
> > >these two perspectives.
> >
> > No, I'm specifically NOT making claims. I'm taking a show-me attitude.
>
>You did make a claim regarding the existence of something near real AI.
>And you are using that claim to simultaneously claim that the AI path can
>not be shown to be more likely to succeed first. Have you examined the
>Webmind design and code in detail and determined that it is not remotely
>similar to a real AI? Do you care to address my two points showing how
>much more advanced AI research already is compared to research into real
>neural interfaces?
I don't have to examine the Webmind design or code because no one can in
fact define exactly what Real AI is. You can't compare something against a
thing that has no definition! That's my point. We don't have any idea of
what Real AI looks like, and as such we can't make any accurate statements
about how to create one, much less how long that will take.
I know that researchers have been able to interface devices with the
nervous system. There is a vision system that takes a camera image and
directly stimulates the optic nerve to produce rudimentary vision in some
blind people, giving them real-time vision (albeit at very low resolution
currently). So we can say that there has been success in neural
interfaces. On the other hand, as far as I know, no one has ever created a
working general AI of any order. Webmind/Biomind may be the most advanced
in that field, but if I remember correctly they have never run the system
as a whole. So I'd be tempted to say that, at the moment, research into
neural interfaces seems further along than research into AI.
> > Certainly the hardware will be available before then (there is a strong
> > track record to predict that on). But there is no track record to predict
> > Real AI. In order to make predictions on how long it will take, we would
> > need to understand it. We don't understand Real AI yet. I could just as easily
> > say we will break the barrier for traveling faster than the speed of light
> > by 2050, but that also requires knowledge that we don't yet have and thus
> > we can not predict this.
> >
> > So everyone agrees that 2030 is the outside estimate, but that is just a
> > hopeful guess (that I also share, BTW).
>
>If we put 2010 and 2030 as the outside dates then the most probable time
>is 2020? Do you think we'll have RNIs by then?
I'm arguing that we don't have any clue if we'll have RNIs OR AI by
then! All of these dates are just dreams at present, compelling dreams,
but still dreams.
> > You're probably correct, but they both require steps. And no one knows for
> > certain, maybe an advanced biological implant would allow for reprogramming
> > & expandability, which could put both on similar terms.
>
>An AI should only be eventually constrained by how much computing power
>it has available. The same will hold for your RNI. How can a RNI that
>is internal to your skull, or at most wearable, possibly match the
>computing power available to an AI? No matter what computing substrate
>you use the physical space constraint on the RNI will limit it. The only
>way a human could compete with an AI would be for the human to upload.
Impossible to answer for certain. The implant could be connected to
massive computing power via broadband wireless. This *may* impose a slight
delay compared to that of an AI, but then again this solution includes the
human mind with all of its assets and capabilities. At the far extreme of
AI (getting near the Singularity) you're probably right about the
upload. But an augmented human would be able to communicate with, plan
for, and participate in the final stages of an AI Singularity a hell of a
lot better than we can currently.
> > But it will most likely take many steps to get to that point, especially
> > based on Eli's Seed AI.
>
>Ok, but you will agree that in a SI vs. somewhat augmented humans match,
>the SI can get the Singularity done quicker, probably extremely quickly
>by whipping up some very advanced replicating nanotech hardware. The
>only real question is how long it takes to achieve SIness.
Given a fully functioning General AI, the availability of strong nanotech,
and that this AI has access to nanotech (I find this VERY unlikely), then
yes. But what fool is going to give a seed AI access to nanotech? And
this still requires a functioning General AI, which we still have no idea
how to build.
> > development, a computer that runs 10 times faster has almost no effect on
> > the speed of the developer. Neural implants could, on the other hand, have
> > incredible impact on the speed of developers as maybe they could think code
> > instead of typing it. I'm not saying that this is going to be necessary,
> > but it would definitely be helpful and may be necessary in order to keep
> > the proposed time line.
>
>Actually with stuff like Flare we are beginning to see how computing
>power can help developers out. Just like how software helps Intel
>engineers create chip designs, software will eventually help software
>people create code. Actually it already does, but it is a pretty limited
>effect.
Don't get me started on Flare. That is going to take a long time and a
huge amount of effort. Great idea, but I doubt it will get the necessary
resources.
>I don't buy the argument that there is a major difference between the
>speed we think and type. I know when I'm coding I spend MOST of the
>time simply thinking about what to type next. And I was always the
>fastest/most productive coder wherever I worked... if it was the case
>that typing speeds were what was holding software creation back, you
>could simply throw more developers at a project and it would get done
>faster. Or hire professional typists and let the programmers talk really
>fast :-)
More developers = longer development cycle. It is a myth that adding
developers speeds up development! A professional typist would get in the
way; try using voice recognition (even a good one) for coding some time.
Personally, I can code at least as fast as I can type. If I could type
faster (85 wpm last time I checked), I'm fairly certain I could code
faster. But I am admittedly an oddity. I once personally produced some
350,000+ lines of working, clean & (mostly) reusable code in a single year
(while gathering requirements, doing architectural design & technical
management). Maybe this is why *I* think a neural interface would be so
darn helpful (because I want one now)!
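The claim above that adding developers lengthens the development cycle is
essentially Brooks's law; a toy model (not from the original post, and with
a purely illustrative per-channel overhead constant) sketches why pairwise
coordination overhead can swamp the extra hands:

```python
# Toy model of Brooks's law: each pair of developers adds a communication
# channel, so coordination overhead grows quadratically while raw effort
# grows only linearly. The overhead constant 0.1 is purely illustrative.

def communication_channels(n: int) -> int:
    """Number of pairwise communication paths in a team of n people."""
    return n * (n - 1) // 2

def effective_output(n: int, per_dev: float = 1.0,
                     overhead: float = 0.1) -> float:
    """Linear effort minus a per-channel coordination cost."""
    return n * per_dev - overhead * communication_channels(n)

for team in (2, 5, 10, 20):
    print(team, communication_channels(team),
          round(effective_output(team), 2))
```

Under these made-up constants, output peaks at a moderate team size and a
team of 20 actually produces less than a team of 10, which is the shape of
the effect being argued for.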
> > >Care to answer my questions?
> >
> > Which questions?
> >
> > The participate directly one? Don't care. I would *like* to participate
> > but if it happens without me and goes smoothly I'll happily ride someone's
> > coattails.
> >
> > Destruction / Anarchic? First, I have nothing against anarchy. Actually,
> > an anarchy where the individuals treat each other respectfully would be my
> > preference. As for destructive technologies, nothing. My personal belief
> > is that either super intelligence will promote friendliness or we're
> doomed.
>
>Sure that'd be great if everyone treated everyone perfectly. But in my
>experience you are smoking crack if you think that is realistic. In a
>world without a Sysop, how can that possibly last? If you're worried
>that one very well tested AI can go wrong, I'm worried that 6 billion
>uploaded humans just might have a few bad apples whom access to IQ-
>enhancing technologies just might enable to do very bad things with
>very advanced technologies that quite possibly are much more difficult
>to defend against than to use offensively. How do you prevent that from
>happening?
I don't worry about it. If/when I ever do upload, do you really think I
plan to stick around here? I'm building myself a fleet of ships, cloning
myself a few dozen times, and heading off in all directions to see the
galaxy! Maybe I'll even run into myself again down the road; that would be
interesting...
>Finally, if you think SI (whether AI or human-based) is all we need,
>then why the bias of wanting human-based ones instead of AI first?
I'm not biased! I'm simply trying to state that we don't have any real,
honest estimation of when we'll get either neural implants or general
AI. I might personally slightly prefer the human path, due to my end goals
and my concern over the Sysop. But any path that truly works and ends with
myself + wife + friends uploading into a reasonable, non-restrictive
environment is fine by me!
James Higgins
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT